Containers offer a wide variety of benefits. They are immutable, portable, lightweight, and efficient. At the TechStrong Conference, our very own Lead Solution Architect, Matt Buchner, shared the story of two organizations’ journeys to cloud containerization. Through a narrative of our projects with them, he shared several lessons learned from their container migrations, including the importance of standardization and a culture of continuous evolution.
A Tale of Two Container Migrations and the Lessons Learned
The Journey of a Legacy IT Shop
This organization was new to AWS, containers, and DevOps when we first engaged with it. While the development team had used Agile processes, neither the security nor the infrastructure team had worked with the methodology. As we educated them about the mindsets and technologies to be successful, we conducted an assessment to understand the current state of their environment. As you may expect, they relied heavily on traditional on-premises technologies like Dell, Oracle, Red Hat, VMware, and Cisco. Of these, they carried Red Hat forward in their AWS migration.
Starting the cloud journey, the company did not have a strong AWS relationship. As a result, there was a definite desire to remain cloud-agnostic; the goal was to start the cloud journey with AWS and, in the future, run the same application in other clouds. This goal drove the decision to run the company’s containers on Red Hat OpenShift, Red Hat’s Kubernetes distribution. In this way, the company could run OpenShift, and the application on top of it, in AWS, and later in Azure or GCP if it so desired.
Starting With a Sound Foundation
However, as the firm was new to AWS, before we could begin containerization we needed to create an AWS account architecture: accounts for billing, production, non-production, and so on, along with network connectivity and more. Moreover, several AWS services establish the foundation for workloads before an application is even deployed. AWS Control Tower, AWS Organizations, AWS IAM, AWS SSO, AWS CloudTrail, AWS Security Hub, AWS Config, and Amazon GuardDuty work together to form a security baseline. While you may assume that these AWS security services are enabled when you start an AWS account, it’s important to note that they are not.
Lesson Learned: Many people starting in AWS assume that they will only need one or two accounts. They quickly realize they need many more; it’s not uncommon to end up with 10, 50, or more accounts. Therefore, it’s very important to find a way to create AWS accounts with a consistent security baseline, and to do so in an automated fashion.
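To make the automation point concrete, here is a minimal sketch of scripted account creation. It is illustrative only, not the customer’s actual tooling: the account layout, email addresses, and the `create_accounts` helper are hypothetical, and it assumes the caller supplies a boto3 AWS Organizations client with permission to create accounts.

```python
"""Sketch: creating AWS accounts in an automated, repeatable way."""

# Illustrative account layout; real environments often grow to 10, 50,
# or more accounts.
ACCOUNTS = [
    {"name": "billing", "email": "aws+billing@example.com"},
    {"name": "production", "email": "aws+prod@example.com"},
    {"name": "non-production", "email": "aws+nonprod@example.com"},
]

def create_accounts(org_client, accounts):
    """Request creation of each account; return the async request IDs.

    AWS Organizations' CreateAccount call is asynchronous, so a real
    pipeline would poll DescribeCreateAccountStatus before applying
    the security baseline (CloudTrail, GuardDuty, etc.) to each new
    account.
    """
    request_ids = []
    for acct in accounts:
        resp = org_client.create_account(
            AccountName=acct["name"], Email=acct["email"]
        )
        request_ids.append(resp["CreateAccountStatus"]["Id"])
    return request_ids

if __name__ == "__main__":
    import boto3  # only needed when actually talking to AWS
    print(create_accounts(boto3.client("organizations"), ACCOUNTS))
```

Driving account creation from a list like this, rather than from the console, is what makes the security baseline consistent: every account goes through the same code path.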
Running the Application
With a solid foundation in place, we were ready to deploy the environment to run the application. The company had an API-based microservice application set up such that customers and partners come in through Amazon API Gateway, which routes requests to the application on the OpenShift cluster. The OpenShift components have their software installed and configured with Ansible. Replicating this environment many times, across multiple Availability Zones (AZs) for resiliency, multiple VPCs for each environment, and multiple AWS accounts to separate non-production, production, and PCI workloads, allowed us to create a consistent, resilient infrastructure and application.
Lesson Learned: The customer requested that this work be done quickly, and we were able to help the company deploy in record time using the AWS Quick Start for OpenShift. Before embarking on a new undertaking, check first to see if AWS has a Quick Start for the technology you want to use. While Quick Starts have certain limitations, they are often a good starting point, and knowing their limitations ahead of time lets you proactively address them and reach your desired final state faster.
The company became so confident in its AWS relationship that it decided to standardize on AWS. As a result, we migrated the company from OpenShift to Amazon EKS, the Kubernetes managed service by AWS. Now in production, the company is seeing many business impacts, including 100% uptime, a goal it had previously struggled to achieve with its on-premises applications.
The Journey of a Mature Development Team
In contrast to the legacy customer, this team was quite mature in its work with AWS. As we took responsibility for operating the application, we quickly learned that the company’s AWS accounts were overextended. Its VPC was running out of IP addresses, which can be devastating for applications that need to scale. In talking to different development groups, we learned that the Center of Excellence (CoE) had a standardized AWS landing zone they could use. While the team wanted to transition from its legacy AWS environment to the one provided by the CoE, we first needed to help change the application’s scalability and security.
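To see how quickly a VPC’s address space disappears, it helps to remember that AWS reserves five addresses in every subnet, so usable capacity is always a bit less than the CIDR math suggests. A short sketch (the CIDR blocks here are examples, not the customer’s actual ranges):

```python
import ipaddress

def usable_ips(cidr):
    """Usable addresses in an AWS subnet with the given CIDR.

    AWS reserves five addresses per subnet: the network address, the
    broadcast address, and three for internal services such as the
    VPC router and DNS.
    """
    return ipaddress.ip_network(cidr).num_addresses - 5

# A few small subnets sound like plenty until autoscaling groups,
# load balancers, and per-container IP allocation start consuming them.
print(usable_ips("10.0.0.0/24"))  # 251 usable addresses
print(usable_ips("10.0.0.0/28"))  # 11 usable addresses
```

Containerized workloads can consume an IP per task or pod, which is why a VPC carved into small subnets runs dry far sooner than the raw address count implies.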
In embarking on this process, we also learned that each development team was given the freedom to choose any technologies it wanted. While Team One chose AWS CloudFormation, Amazon ECS, and GoCD by ThoughtWorks, Team Two opted for HashiCorp Terraform and Kubernetes. Without a CI/CD tool, Team Two deployed from the command line.
Lesson Learned: This situation does not scale well. When an application is deployed to production and sees frequent use, it will inevitably run into a production issue. Each team solves and learns from its own issues, but learnings from Team One aren’t applicable to Team Two (and vice versa) because the two use totally different technology stacks. To scale learning and continuously improve, standardization is important.
Standardizing Technology Stacks
Based on this key observation, the decision was made to standardize on AWS CloudFormation, Amazon ECS, and GoCD. Note that while neither set of tools is better than the other, standardization had to take precedence over any one team’s preferred set of tools.
With the decision made, we migrated Team Two’s application to the new standard technology stack. Then we helped Team One migrate its application to the CoE-provided AWS landing zone, which gave us two applications running on a standardized landing zone and technology stack. At this point, we were introduced to Team Three, which was deploying a brand-new product. It reused the designed and approved tech stack, saving a great deal of time.
Lesson Learned: To build consistency across multiple applications, an application-agnostic pipeline/process is critical. We built just that for this customer, including QA, staging, and production environments, saving them from rebuilding it for each application.
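As a sketch of what “application-agnostic” means in practice: every application flows through the same QA, staging, and production sequence, parameterized only by application name and environment, so the pipeline itself carries no per-application logic. The function and names below are hypothetical (the customer’s actual pipeline ran on GoCD), but they illustrate the idea.

```python
"""Sketch of an application-agnostic deploy step (hypothetical names)."""

# Every application moves through the same environment sequence.
ENVIRONMENTS = ("qa", "staging", "production")

def deploy_command(app, env, template="template.yml"):
    """Return the CloudFormation deploy command the pipeline would run.

    Only the app name and environment vary; the command shape is
    identical for every application, which is what lets a new team
    reuse the pipeline without rebuilding it.
    """
    if env not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {env}")
    stack = f"{app}-{env}"
    return [
        "aws", "cloudformation", "deploy",
        "--stack-name", stack,
        "--template-file", template,
    ]
```

For example, `deploy_command("orders", "qa")` and `deploy_command("billing", "production")` differ only in the stack name, so onboarding a new application is a matter of supplying parameters, not writing a new pipeline.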
In addition to the benefits of containerization, these two cloud container journeys illustrate the importance of standardization and knowledge sharing to ensure consistency and continuous improvement. And they highlight the importance of inquiry-driven agility. While ten years ago organizations may have looked at buying a five-year license to get the best price, the ability to ask questions and evolve quickly based on the answers is a new business imperative.
Interested in an experienced sherpa for your cloud container journey? Reach out to our consulting team today.
Written by Flux7 Labs
Flux7, an NTT DATA Company, is the only Sherpa on the DevOps journey that assesses, designs, and teaches while implementing a holistic solution for its enterprise customers, thus giving its clients the skills needed to manage and expand on the technology moving forward. Not a reseller or an MSP, Flux7 recommendations are 100% focused on customer requirements and creating the most efficient infrastructure possible that automates operations, streamlines and enhances development, and supports specific business goals.