“DevOps is not a tool. DevOps involves the human element. It’s about efficient collaboration between the dev and ops teams. DevOps is a process. DevOps is a culture.” Haven’t you heard enough of this already? I know … right! In this post, however, let’s view DevOps from a new angle: what’s in it for your customers, and which challenges hinder the efficient delivery of your services? No matter what product or service you are delivering, the customer is king. And, of course, the goal of any business is to reach the right group of people with the right products or services. The outline below summarizes the challenges, scenarios and needs that any organization faces in setting up a DevOps infrastructure.

Align Your Team’s Needs With Your Customers’

The first and foremost goal of any organization is to satisfy its customers. This is even more significant for businesses that deal with varying surges in customer demand. The question to ask is: “How well does your system scale?” The challenges in meeting customer needs start with your organization’s internal teams. Yes … you are responsible. If that statement doesn’t convince you, it only means you haven’t yet started thinking about DevOps. Some of the challenges your internal teams deal with include:

- Developer Onboarding: Organizations are bound to change due to new technologies and new hires. This change is inevitable, but it affects internal teams in many ways: ramping up a new hire on the organization’s existing processes involves varying levels of difficulty.
- Continuous Integration: A constant, quick feedback framework is a necessity.
It’s kitchen time again. Put that Chef hat on and let’s check out some more cool stuff. In our last post, “Part 1: Understanding Chef Basics with 3 ‘Wh’ Q’s,” I bundled a whole lot of Chef ingredients. (That’s “features” for you lay Chefs.) To quickly refresh the list, and your memory, we discussed Chef components and elements using three “Wh” Q’s:

What are the elements of Chef and their functionalities?
- Chef Nodes: Any machine type that is managed by a Chef-client.
- Chef Servers: The hub of the organization.
- Workstation: A user-run computer for configuration-related tasks.

What are the types of Chef elements, if any?
- Chef Nodes: Physical, cloud, network, virtual machine.
- Chef Servers: Enterprise Chef, Hosted Enterprise Chef, Open Source Chef.

What are the components of Chef elements and their functionalities?
- Chef Nodes: Chef-client, Ohai.
- Chef Servers: Search, Manage.
- Workstation: Knife, Chef-repo.

Again, you can read about Chef components and elements in more detail in “Part 1: Understanding Chef Basics with 3 ‘Wh’ Q’s.” In this post, I’ll discuss a few more major concepts and dive right into setting up Chef for your use.

Node Objects

As mentioned above, Chef-nodes are nothing more than machines managed by a Chef-client. Each node has node objects that constitute the important aspects of the Chef-client. There are two primary node objects:
- Attributes
- Run-list

Attributes

Attributes are essentially profiles for each node: they hold any and all details about the node.

What do they say about a node? Attributes describe three things: the node’s current state, its state at the end of the previous Chef-client run, and its expected state at the end of the current Chef-client run.

How do they define a node? Attributes are defined by the node itself, as well as by cookbooks, roles and environments, based on the current state of the node.
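To make this concrete, here is a rough sketch of what a node object looks like when serialized as JSON; the node name, attribute values and run-list entries are hypothetical:

```json
{
  "name": "web01.example.com",
  "chef_environment": "production",
  "normal": {
    "nginx": {
      "worker_processes": 4
    }
  },
  "automatic": {
    "platform": "ubuntu",
    "ipaddress": "10.0.0.12"
  },
  "run_list": [
    "recipe[nginx]",
    "role[webserver]"
  ]
}
```

The “automatic” attributes are collected by Ohai on each Chef-client run, while the run-list records which recipes and roles to apply to the node.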
We’re getting ready to kick off a series of interactive webinars focused on addressing issues we’ve uncovered during our recent IT assessments in regard to DevOps and the cloud. So, we thought now is a good time to share an article, featuring Flux7 CEO Aater Suleman, that sifts through how to successfully move to a DevOps culture. The article, posted at DevOps.com, is entitled “Q&A with Aater Suleman: Successfully Moving to DevOps,” and it answers these questions:

- What do teams need to do in order to move forward with a good workflow?
- What are some of the common lessons learned in moving to DevOps?
- What are some of the ways enterprises often fail?
- What are some easy wins enterprises can achieve when it comes to being able to innovate more quickly?

Creating an optimized software development workflow is a key area we focus on during our IT assessments, and we strongly believe it should be part of any IT infrastructure audit. It’s essential for web-based businesses that use their sites to deliver services and transact business to shift their thinking about developing and delivering apps in the cloud. Web development workflow is a critical business factor. And creating a stable, secure site, supported with a local development environment that streamlines developer workflow and requires minimal maintenance time, can help get solutions to market faster. As Aater (@futurechips) stated in the DevOps.com article: “A very strong focus on automation and continuous improvement is required. And to succeed there, you need to monitor developer workflows very cautiously.” We all know that measuring is the first step to improvement. But who is really doing this? We’d love to hear your thoughts and comments. Just send them to us at firstname.lastname@example.org.
It’s time for a change. Undoubtedly, for the good. With the help of movers, migration officers, technicians, and a lot more good souls, Flux7’s blog site has now successfully moved to blog.flux7.com. Our previous home was at flux7.com/blogs. Okay, so I know we’re overstating things a bit. But, it’s official. Now, you need to make a note of the change! You may ask: What’s different? Then let me tell you. At first glance, you will see that the design and layout haven’t changed much. However, here’s what you can look forward to seeing that’s new:

- Exciting offers! That includes links to free webinars, white papers, and a lot more DevOps and Cloud resources.
- A complete compilation of industry updates about DevOps and the Cloud — all in one place!
- The mission of helping you understand the significance of DevOps and the Cloud.

And there’s a whole lot more of what we previously offered:

- Tutorials
- Best Practices
- Benchmarking
- And more!

In addition, we know you should be appreciated for your loyalty to our blog.
Throughout the history of our blog, we have shared many posts about benchmarking, such as explaining how to set up and use sysbench for MySQL benchmarking. Just search for “benchmarking” in the upper right-hand corner to find all of them. Today, we are continuing to add to the library! In this post, we share our experiences using sysbench for MySQL benchmarking. To start, let’s look at the setup we used for benchmarking.

Setup

- Machine: AWS m3.large instance (64-bit, paravirtual)
- Storage: 32 GB SSD instance store
- OS: Ubuntu 14.04 LTS (3.13.0-24-generic)
- MySQL Version: 5.5.35
- Sysbench Version: 0.4.12

We used four different table sizes for our benchmarking, ranging from 50,000 to 50,000,000 rows, with each table 10 times larger than the previous one. Initially, the benchmark was run without applying any optimization, using the default “my.cnf.” We then applied several optimizations to MySQL based on best practices recommended by the MySQL documentation, and ran the benchmark again.

Optimizations

We applied the following optimizations to the MySQL configuration file “my.cnf” (/etc/mysql/my.cnf). A short description of each system variable is given below.

Caches and Limits

- max_heap_table_size → The maximum size for in-memory temporary tables is the minimum of the tmp_table_size and max_heap_table_size values. The default value for max_heap_table_size is 16M; we set it to 32M so that it equals tmp_table_size.
- query_cache_size → We increased query_cache_size so that results are cached to some extent.
- thread_cache_size → For benchmarking purposes we do not strictly need this variable, as there will be only one connection; we included it just to make sure that setting it does not affect performance.
- open_files_limit → Increase the open-files limit.
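For illustration, the “Caches and Limits” section of my.cnf might look like the snippet below. The values shown are a sketch based on the descriptions above; apart from max_heap_table_size, the exact numbers are illustrative rather than the precise settings from our benchmark:

```ini
[mysqld]
# Caches and Limits
tmp_table_size      = 32M
max_heap_table_size = 32M    # raised from the 16M default to match tmp_table_size
query_cache_size    = 64M    # illustrative; caches SELECT results to some extent
thread_cache_size   = 8      # reuse server threads across connections
open_files_limit    = 65535  # raise the open-files limit
```

After editing my.cnf, restart the MySQL server so the new values take effect.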
Figuring out [MAIN CHALLENGE] isn’t easy. But once you figure out the basics, you’ve opened the doors to tremendous opportunities for growth and learning. That’s why we’ve covered the fundamentals in [LINK TO EBOOK]. This blog post offers a quick overview by examining five basic ways to get started.

1. First Way to Get Better
Start by embracing the lowest-hanging fruit.

2. Second Way to Get Better
Then, find a way to move up a notch.

3. Third Way to Get Better
Ask your peers and managers for advice.

4. Fourth Way to Get Better
Do research using Google Trends and other helpful tools to reveal patterns in behavior.

5. Fifth Way to Get Better
Now, tie this back to a life lesson that everyone takes for granted but doesn’t consider much in everyday life.
A couple of weeks ago, we attended DockerCon, the inaugural Docker-centric conference for developers, and anyone else with an interest in the open platform for building, shipping, and running distributed applications, whether on laptops, data center VMs, or in the cloud. We were there not only as a founding System Integration partner, but also as a presenter.
In part one of our Docker Tutorial Series, we covered the basics of Docker: how it works and how to install it. In this post, let’s learn 15 essential Docker commands and get some hands-on experience with what they do and how they are used.
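As a taste of what’s ahead, a first session with the Docker CLI might look something like this; the ubuntu image is just an example, and `<container-id>` stands in for the ID printed by `docker ps`:

```shell
# Download an image from the Docker Hub registry
docker pull ubuntu

# List the images available locally
docker images

# Start a container from the image and run a command inside it
docker run ubuntu echo "hello from a container"

# List containers; -a includes stopped ones
docker ps -a

# Inspect low-level details of a container or image
docker inspect <container-id>
```

We’ll walk through each of these, and the rest of the 15 commands, in detail below.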
Docker, the trending new containerization technology, is winning hearts with its lightweight, portable, “build once, configure once and run anywhere” functionality. This is part one of Flux7’s Docker tutorial series. As we move forward together, we will learn and evaluate how Docker makes a difference and how it can be put to the best use. Let’s learn Docker and nail it in six to seven weeks.
Current trends in technology indicate that more than 60% of businesses use cloud computing for their IT operations. Among the various cloud service providers, Amazon Web Services (AWS) is a pioneer and continues to be a leader in the cloud market.
We published a post a few months ago titled “Must-know Facts About AWS ELB” in which we explored some of the peculiarities of Amazon’s Elastic Load Balancer. We thought we’d go a bit deeper into the details of what the ELB is to better understand its limitations and appreciate the engineering behind it. So, what are the requirements for the ELB?
Read our latest blog comparing RabbitMQ for AWS with AWS SQS FIFO queues.

In the world of modern technology, high availability has become a key requirement at every layer of the stack, and message broker software has become a significant component of most stacks. In this article, we present a RabbitMQ tutorial: how to create highly available message queues using RabbitMQ. RabbitMQ is open-source message broker software (also called message-oriented middleware) that implements the Advanced Message Queuing Protocol (AMQP). The RabbitMQ server is written in the Erlang programming language.

The RabbitMQ Cluster

Clustering connects multiple nodes to form a single logical broker. Virtual hosts, exchanges, users and permissions are mirrored across all nodes in a cluster, and a client connecting to any node can see all the queues in the cluster. Clustering enables high availability of queues and increases throughput.

A node can be a disc node or a RAM node. A RAM node keeps the message state in memory, with the exception of queue contents, which can reside on disk if the queue is persistent or too big to fit into memory. RAM nodes perform better than disc nodes because they don’t have to write to disk as much. However, it is always recommended to have disc nodes for persistent queues. We’ll discuss how to create and convert RAM and disc nodes later in the post.

Prerequisites:

- The network connection between nodes must be reliable.
- All nodes must run the same version of Erlang and RabbitMQ.
- All TCP ports should be open between nodes.

We used CentOS for the demo. Installation steps may vary for Ubuntu and OpenSUSE.
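As a preview of the clustering steps covered later, joining a second node to a cluster is done with rabbitmqctl. The node name rabbit@node1 below is illustrative; substitute your own hostnames:

```shell
# On node2: stop the RabbitMQ application (the Erlang VM keeps running)
rabbitmqctl stop_app

# Join the cluster formed by node1; add --ram to join as a RAM node
rabbitmqctl join_cluster rabbit@node1

# Restart the application on node2
rabbitmqctl start_app

# Verify that both nodes are listed in the cluster
rabbitmqctl cluster_status
```

We’ll go through these commands, including converting between RAM and disc nodes, step by step in the sections that follow.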