Docker recently unveiled version 1.10 of its popular container technology. Security was a major focus of the release, with several features designed to strengthen the security of Docker containers. According to the Docker blog, “All the big features you’ve been asking for are now available to use: user namespacing for isolating system users, seccomp profiles for filtering syscalls, and an authorization plugin system for restricting access to Engine features. Another big security enhancement is that image IDs now represent the content that is inside an image, in a similar way to how Git commits represent the content inside commits.”

We are excited about these additions because as adoption of Docker grows, so does the number of questions we get about container security. The new features provide real value in that they build in a number of security controls, allowing organizations to spend more time driving strategic value. Let’s look a little more deeply at why that is the case.

User namespacing is the most anticipated of these new features because it directly addresses a common concern: processes in a container run as root on the host, leaving every instance equally vulnerable to the damage a breach could cause. Docker now supports user namespaces, and with them advanced OS functions can be used inside a container without affecting every container running on the same server. Said another way, Docker 1.10 separates the processes in the container from the processes of the host, and each process can have its own set of user and group IDs. While all containers on a given server still share the same kernel, additional security controls, such as those recommended by the CIS Docker 1.6 Benchmark, remain helpful and advisable.
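To make this concrete: user namespace remapping is enabled on the Docker daemon rather than per container. A minimal sketch of the daemon configuration follows; the `default` value asks Docker to create and use an unprivileged `dockremap` user, and the file path mentioned below is the daemon's standard location, not something specific to this release announcement.

```json
{
  "userns-remap": "default"
}
```

Placed in /etc/docker/daemon.json (followed by a daemon restart), this maps container root to an unprivileged user on the host. Seccomp profiles, by contrast, are applied per container, for example with `docker run --security-opt seccomp=/path/to/profile.json`.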
On March 29, 2016, Amazon released Change Sets for AWS CloudFormation, an important new update with far-reaching benefits. Anyone using CloudFormation templates or pursuing an infrastructure-as-code strategy on AWS should pay attention. AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, allowing them to provision and update those resources in an orderly and predictable fashion.

The truth is, updates can fail, leaving the stack in an inconsistent state. The result is unwanted cost, wasted resources, delay and downtime. As AWS partners, we push multiple AWS stack updates every week to help our customers bring new solutions to market quickly. Every update carries a risk to business continuity; when changes have been made manually to a stack created by one template, a new update can spell failure for services built on the newly modified template. Even when an AWS consultant could spend hours carefully reviewing updates, there was no way to know for certain what effects an update would have. Engineers thus had to resort to the “push and pray” method, often keeping additional resources on hand in case the update failed partway through a stack or brought down key applications or services. This constant fear factor discourages engineers from innovating and trying new changes, and it undermines the company’s ability to be creative, agile and efficient.

As an interim solution, AWS launched the “Continue Rollback Update” feature, detailed in the AWS DevOps blog post Continue Rolling Back an Update for AWS CloudFormation stacks in the UPDATE_ROLLBACK_FAILED state. This feature allows engineers to quickly revert to the last known good state.
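Change Sets fit naturally into a CLI workflow: create a change set against the running stack, inspect the proposed changes, and only then execute. The command sequence below is a sketch; the stack name, change-set name, and template file are illustrative placeholders.

```shell
# Preview what an update would do before touching the stack
aws cloudformation create-change-set \
    --stack-name my-app-stack \
    --change-set-name preview-update \
    --template-body file://updated-template.json

# Inspect the proposed resource changes (Add / Modify / Remove)
aws cloudformation describe-change-set \
    --stack-name my-app-stack \
    --change-set-name preview-update

# Apply the update only once the changes look safe
aws cloudformation execute-change-set \
    --stack-name my-app-stack \
    --change-set-name preview-update
```

The key design point is that `describe-change-set` shows exactly which resources will be added, modified, or replaced, removing the guesswork that made stack updates a "push and pray" exercise.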
Flux7 Helps HomeAway Save Christmas in the Nick of Time

As the world’s leading online marketplace for the vacation rental industry, HomeAway aims to help families and friends find the perfect vacation rental and create unforgettable travel experiences together.
A Fortune 1000 Retailer Transforms IT with DevOps in the Cloud

Increases Global Agility, Availability and Market Competitiveness while Maintaining PCI Compliance

This leading retailer decided that the creation of a new portal was just the proof of concept it needed for a larger initiative to transform its IT function, addressing the weaknesses in its traditional on-premises infrastructure and its lengthy, manual IT processes. The IT team’s goal of helping the business deliver to market more quickly in a secure, highly available, agile fashion fell in lockstep with the DevOps approach, and as a result the team quickly set a path to use Amazon Web Services (AWS) as the platform for launching both the new portal and the DevOps initiative. However, with a deadline looming, IT quickly realized that it lacked the in-house expertise to ensure delivery of a highly available, secure, PCI-compliant platform on AWS.

Solution

Flux7 advised the retailer to approach the project with a DevOps-in-the-cloud strategy. The recommendation entailed a move to AWS, for which Flux7’s experienced consultants provided education, guidance, and comparisons of AWS with other vendors to ease the decision. We then partnered with the organization’s technology leaders to address the three main criteria of its new cloud-based, streamlined infrastructure: security, high availability, and a high degree of automation to increase agility.

Prior to transitioning to a more modern infrastructure, this organization’s IT was defined by lengthy deployment cycles and numerous manual steps for processes such as provisioning new servers or preparing OS and server images. In moving to a more elastic environment in AWS, Flux7 AWS solutions experts helped design and build repeatable, automated processes for provisioning infrastructure and Amazon Machine Images (AMIs) using Docker, CloudFormation, and Jenkins.
In our last blog post, we discussed how Ansible’s configuration management tools can benefit Amazon Web Services (AWS) environments, especially for DevOps-focused organizations. Today we’d like to share how to realize those benefits with Ansible Playbooks.

Playbooks are Ansible’s configuration, deployment, and orchestration language. In keeping with Ansible’s focus on simplicity without sacrificing security and reliability, Playbooks purposefully have a minimum of syntax because they aren’t meant to be a programming language or script, but rather a model of a configuration or a process. To give you a quick rundown: each Playbook is composed of one or more plays. A play maps a group of hosts to well-defined roles represented by tasks. A task is a call to an Ansible module, and Ansible has hundreds of modules that can run from an Ansible Playbook command. Ansible suggests thinking of modules as the tools in your workshop and Playbooks as your design plans.

Provisioning with Ansible Playbooks

Composing a playbook of multiple plays allows you to orchestrate multi-machine deployments in an easy-to-learn yet repeatable manner. In this way, Ansible makes it easy to provision, and apply secure configurations to, instances, networks, and more across your Amazon Web Services. With the simplicity of Ansible’s format, even the most complicated AWS environments can be described in Ansible Playbooks. As a result, once your AWS-based application environments are described with Ansible Playbooks, you can deploy them again and again with predictable, repeatable results.

Manage AWS with Playbooks

For AWS specifically, Ansible has over 50 modules that support 20 different capabilities such as Virtual Private Cloud (VPC) and Identity and Access Management (IAM).
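The playbook-play-task-module structure described above can be sketched in a minimal playbook. The `package` and `service` module names are real Ansible modules; the host group and package name are illustrative.

```yaml
# One playbook containing one play. The play maps the
# "webservers" host group to a list of tasks; each task
# is a call to a single Ansible module.
- name: Configure web servers
  hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present

    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```

Saved as `site.yml`, this would be run with `ansible-playbook site.yml`, and because every module is idempotent, re-running it produces the same end state.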
AWS Case Studies: DevOps

A Fortune 500 manufacturer was using Hadoop, internal data centers, Rackspace and CenturyLink to deliver services that connected its customers with data insights using an Internet of Things model. The overarching goal: to facilitate continuous data-driven improvement within its customers’ operations. To help achieve this goal and overcome its Hadoop scaling issues, the company engaged Flux7, a DevOps consulting group and AWS partner. Additionally, the manufacturer sought a global solution that would comply with EU data privacy laws.

Solution

Understanding the organization’s scaling challenges and need for EU data privacy compliance, Flux7 recommended that the organization approach the project with DevOps best practices, executed on Amazon Web Services (AWS) infrastructure. The Flux7 recommendation meant the company needed to migrate to AWS, for which Flux7’s experienced consultants provided education, guidance and comparisons of AWS with other solutions to ease the decision. Under the guidance of Flux7’s AWS cloud architecture experts and certified consultants, the manufacturer’s internal teams quickly came to understand how to enable rapid setup of AWS IoT infrastructure. Moving forward, Flux7 and the customer brainstormed and co-invented a DevOps workflow that is agile and leverages AWS and Ansible, while maintaining tight AWS security controls that simultaneously meet EU data privacy laws. Flux7 used this opportunity to add features to the company’s internal toolchain and to support legacy application modernization by migrating applications onto AWS. Specifically, Flux7’s unique approach to dynamically assigning DNS names to machines in an AWS Auto Scaling Group (ASG) enabled the migration of legacy applications that required hard-coded DNS names.
While the company was already an agile organization, Flux7’s deep experience with DevOps approaches provided an additional layer of IT process automation that further accelerated the company’s time to delivery.
One of the key benefits of cloud computing is the opportunity to replace up-front capital infrastructure expenses with low variable costs that scale with your business. And while it is easy to spin up hundreds or thousands of new servers in minutes with Amazon Web Services (AWS), it is much more difficult to ensure that those new machines are configured appropriately. Enter the marriage of configuration management tools and AWS.

While most of you are likely familiar with first- and second-generation configuration management tools, you may be missing out on Ansible. A radically simple, agentless configuration management and orchestration engine, Ansible makes apps and systems effortless to deploy. Why? Because Ansible focuses on simplicity and a low learning curve, without sacrificing security and reliability. Let’s look at a few key ways the Ansible approach helps achieve multiple goals in AWS:

DevOps: Ansible’s focus on simplicity, flexibility and reusability allows both developers and IT staff to master it quickly. The foundation of DevOps is the ability of Dev and Ops to collaborate effectively to address issues without invoking a series of tickets and other time-consuming processes that burden the business. Ansible directly supports DevOps by giving both teams an easy-to-use tool with which they can collaborate in a natural fashion, streamlining the DevOps promise to bring solutions to market faster and giving the business the ability to focus less on execution and more on strategy.

Security: While great for school kids, snowflakes are an unwelcome guest in your AWS environment.
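As a sketch of what provisioning AWS machines with Ansible looks like in practice, the play below launches an EC2 instance using Ansible's `ec2` module, run from the control machine itself. The region, AMI ID, key pair, and instance type are placeholders, and the module requires AWS credentials and the boto library to be available.

```yaml
# A play run locally on the control machine: instead of
# configuring remote hosts, it calls the AWS API to
# provision a new EC2 instance.
- name: Provision an AWS instance
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Launch an EC2 instance
      ec2:
        region: us-east-1
        image: ami-0123456789abcdef0   # placeholder AMI ID
        instance_type: t2.micro
        key_name: my-keypair           # placeholder key pair
        wait: yes
```

The same playbook could then add plays that configure the new instance, so provisioning and configuration live in one repeatable document.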
As DevOps adoption grows, the demand for DevOps engineers grows with it, as does the field of highly diverse applicants. Here at Flux7, we are frequently asked for best practices and tips for sourcing and hiring a DevOps engineer. While we highly recommend “breeding” DevOps experts in-house, when our customers do feel the need to hire a DevOps resource, the following are our top five recommendations:

DevOps is a concept, not a technology. You are not building a new team; you are building a center of excellence (COE) to instill change, and the two are very different. A COE must stay small and strive to instill its knowledge in the company. The new hire must be clear on this concept: they are not a doer of day-to-day work but rather an agent of change whose job is to improve things via process improvements and automation. Success is defined by others’ comfort with using and managing the process.

The individual must have the following non-technical skills:

The ability to learn both the big picture and the technical details. Technologists who jump in to “work” without learning about the company first are more harmful than helpful.

The ability to explain their past technical decisions with apt business and technical reasoning. The decisions they make will shape not only future budgets, scaling and the HR needs of your company, but also your go-to-market strategies and ability to compete in the market; hence strong decision-making capabilities are critical. We suggest that, at the interview, you ask them to walk you through a design decision they made in the last 90 days.
Many developers born in the world of agile startups view continuous integration (CI) and continuous delivery (CD) as accepted standard requirements for software development. Yet many companies, particularly large enterprises with traditional infrastructure, still struggle to make this approach part of their development process.
According to Gartner, DevOps tools, in general, will be a $2.3 billion market in 2015, yet current DevOps solutions remain mired in complexity. That’s because simplifying and standardizing the process of deploying infrastructure is a complex, time-consuming task. Luckily, that can be greatly improved by using configuration management tools like Ansible.
Part 2: How to Make AWS Config Work for You

One of the biggest fears that CIOs of the digital age have is not just a server crash, but the inability to recover the system to its last known state. This is particularly painful in compliance-heavy industries that are subject to external audits verifying that everything is being performed to industry standards and within federal compliance. AWS Config is a service that provides a detailed account of what happens with your AWS configuration, giving you the critical ability to go back in time and verify the state your AWS resources were in at a given point in time. In Part 2 of our account of fictional CIO Ashok Kumar, whose company ABC Media Solutions has just suffered an irrecoverable server crash, we dig deep into the technical side of AWS Config to explore how it works and, more importantly, how it can work for you.

When and How?

At a broad level, AWS Config can be used for one or more of the following purposes:

Security analysis (safety and security considerations for the resource and environment)
Audit compliance (HIPAA, PCI DSS, etc.)
Change management (the effect of a change in one resource on another)
Troubleshooting configuration changes
Discovery (resource discovery)

The simplest way to activate AWS Config is through the AWS Management Console. During activation, choose an S3 bucket and an SNS topic from the console to enable the service.

Diving Deep

AWS Config deals with three basic parameters when storing AWS configuration information.

AWS Resource – e.g., Amazon EC2 instances, VPC, Elastic IP, etc.
Relationship – relationships between different AWS resources, such as a particular Amazon EC2 instance’s relation to an RDS instance.
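Beyond the console, activation can also be scripted with the AWS CLI. The sketch below uses the real `aws configservice` subcommands, but the account ID, IAM role, and bucket name are placeholders you would substitute with your own.

```shell
# Create a configuration recorder tied to an IAM role
# that AWS Config assumes to read your resources
aws configservice put-configuration-recorder \
    --configuration-recorder name=default,roleARN=arn:aws:iam::123456789012:role/config-role

# Point the delivery channel at the S3 bucket that will
# receive configuration snapshots and history files
aws configservice put-delivery-channel \
    --delivery-channel name=default,s3BucketName=my-config-bucket

# Start recording configuration changes
aws configservice start-configuration-recorder \
    --configuration-recorder-name default
```

Once the recorder is running, every supported resource change is captured, which is what makes the "go back in time" audit scenario possible.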
Part 1: Why AWS Config Serves as a Backbone to Your Existing AWS Architecture: What keeps CIOs in compliance-heavy industries up at night? Audits. AWS Config is helping them sleep better by providing an easier way to confirm and return to the last known state. We show you how it works in practice in this fictional example.