AWS Service Discovery: Facilitating Connections in a Dynamic World
Service discovery is not new. The idea of a tool that can discover how processes and services talk to each other and help facilitate connections has been around for some time. However, with the rise of increasingly dynamic environments, the role service discovery plays continues to grow. Indeed, since the beginning of the year at Flux7 we have seen a surge of customers looking to adopt container-based microservices architectures, whose dynamic nature highlights the need for service discovery.
Microservices offer a great deal of agility and resiliency, and when coupled with container technology they bring immense portability. However, this container-based microservices architecture presents a challenge that is an ideal example of why Amazon’s reference architecture for AWS service discovery is so helpful: keeping track of the information needed to communicate with each service.
As mentioned above, Amazon Web Services has published a reference architecture, which Amazon describes as:
a reference architecture to demonstrate a DNS- and load balancer-based solution to service discovery on Amazon EC2 Container Service (Amazon ECS) that relies on some of our higher level services without the need to provision extra resources.
The service uses Amazon CloudWatch Events to invoke an AWS Lambda function, which automatically creates Amazon Route 53 entries for newly created services.
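The CloudWatch-to-Lambda-to-Route 53 flow can be sketched in a few lines of Python. This is a simplified illustration, not the actual reference implementation: the event fields (`lastStatus`, `group`, `elbDnsName`), the hosted zone ID, and the `internal.example.com` domain are all assumptions made for the example.

```python
# Illustrative sketch of the Lambda piece of the reference architecture.
# Event shape, zone ID, and domain are hypothetical placeholders.

def is_service_event(event):
    """Filter: only ECS events that change a task's run state matter here."""
    detail = event.get("detail", {})
    return (event.get("source") == "aws.ecs"
            and detail.get("lastStatus") in ("RUNNING", "STOPPED"))

def build_change_batch(service_name, elb_dns_name, domain="internal.example.com"):
    """Build the Route 53 UPSERT pointing <service>.<domain> at the service's ELB."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": f"{service_name}.{domain}.",
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [{"Value": elb_dns_name}],
            },
        }]
    }

def handler(event, context):
    """Lambda entry point: ignore irrelevant events, otherwise update the directory."""
    if not is_service_event(event):
        return "ignored"
    import boto3  # imported here so the module loads without the AWS SDK installed
    route53 = boto3.client("route53")
    detail = event["detail"]
    route53.change_resource_record_sets(
        HostedZoneId="Z_EXAMPLE",  # hypothetical private hosted zone
        ChangeBatch=build_change_batch(
            detail["group"].replace("service:", ""),
            detail["elbDnsName"],  # assumed field for the example
        ),
    )
    return "updated"
```

The key design point is the filter step: most ECS events do not require a directory change, so the function decides first and only then writes to Route 53.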
Why the Microservice Surge
Microservices are a way of breaking a single large monolithic application into smaller composable services. These services offer APIs that other services or outside parties can use to accomplish certain tasks. Containers are a natural fit for microservices: they allow any application or language to be used; you can test and deploy the same artifact; and they solve the challenge of running distributed applications on increasingly heterogeneous infrastructure. AWS customers typically choose Amazon ECS, the native container orchestration engine in AWS.
Key tenets of a microservices approach are that it must be easy to:
- update a service,
- add a new service,
- and remove an existing service.
Further, all of these actions require that the updated state and location of a service be made known to the other services and users looking to communicate with it. This step is called service discovery. Any service discovery framework must, by definition, have the following fundamental components:
- A central directory which maintains the single source of truth regarding services including the hostname/IP and port where the service can be reached.
- A distributed update mechanism for revising the directory as a service’s hostname/IP or port changes, whether because a new service is created, an old one is deleted, new code is deployed, or a scale-up/down event occurs.
- A mechanism for the consumers of the service to have the latest information about the service.
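To make the three components concrete, here is a toy, in-memory sketch of a directory with an update path and a consumer-facing lookup. It is purely illustrative; real systems (Route 53, Consul) replicate and distribute this state, which a single Python dict obviously does not.

```python
# Toy illustration of the three service-discovery components:
# a directory (single source of truth), an update mechanism,
# and a query path for consumers. Names/addresses are made up.

class ServiceDirectory:
    def __init__(self):
        # Central directory: service name -> (host, port)
        self._entries = {}

    def register(self, name, host, port):
        """Update mechanism: called when a service starts, moves, or scales."""
        self._entries[name] = (host, port)

    def deregister(self, name):
        """Update mechanism: called when a service is removed."""
        self._entries.pop(name, None)

    def lookup(self, name):
        """Consumer query, analogous to resolving a DNS name."""
        if name not in self._entries:
            raise KeyError(f"no such service: {name}")
        return self._entries[name]
```

Usage follows the lifecycle described above: `register("orders", "10.0.1.12", 8080)` when a container starts, `lookup("orders")` from a consumer, `deregister("orders")` when the service is removed.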
Other than Amazon’s reference architecture for AWS service discovery, a very commonly used solution is Consul from HashiCorp. Consul itself is a highly available, scalable, and consistent key-value store. It can be used for service discovery as follows:
- Store directory in Consul
- Use registrator to listen to Docker events and update Consul automatically
- Consumers can query Consul as a DNS server
Combined, these three elements provide a very comprehensive solution to service discovery for ECS-backed architectures.
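Under the hood, registrator registers each container with the local Consul agent via Consul’s service-registration endpoint (`PUT /v1/agent/service/register`). The sketch below hand-builds that payload to show what ends up in the directory; the service name, ID, and address are illustrative. Once registered, consumers can resolve the service as `orders.service.consul` through Consul’s DNS interface.

```python
import json

def consul_registration(service_name, service_id, address, port):
    """Build the JSON body for Consul's agent service-registration endpoint.
    Field names (Name, ID, Address, Port) follow Consul's HTTP API."""
    return {
        "Name": service_name,  # directory key; also becomes <name>.service.consul in DNS
        "ID": service_id,      # unique per container instance
        "Address": address,
        "Port": port,
    }

# Hypothetical example: one "orders" container on a cluster host
payload = consul_registration("orders", "orders-1", "10.0.1.12", 8080)
body = json.dumps(payload)
```

In a live setup, registrator derives these values from Docker events automatically, so no service has to register itself.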
The reference architecture AWS proposes takes a slightly different approach:
- AWS Route 53, Amazon’s DNS solution, is used as a directory.
- The directory is updated via CloudWatch events, which are generated for every ECS action regarding service updates. Each event is sent to an AWS Lambda function, which decides whether the event mandates an update to the service directory (Route 53) and, if so, updates the directory accordingly.
- Being a DNS server, querying AWS Route 53 is quite natural and easy.
This solution has the following benefit over the more typical Consul-based solution: it doesn’t require creating and maintaining a Consul cluster. While running Consul is not a major overhead, our customers generally prefer a managed service, especially for stateful components like the directory.
However, this benefit comes at a cost: the solution requires that every service be accessed via an AWS Elastic Load Balancer (ELB), which limits it to services that are load balancer-friendly. For a service to be load balancer-friendly, all of its nodes must be identical, so that the ELB can forward incoming traffic to any node without loss of functionality. Not all services follow this design rule, which can lead to complications.
For example, our AWS experts recently worked with a customer wanting to use NSQ. NSQ requires that all consumers of the service first connect to nsqlookupd, a service which redirects them to one of the many nsqd nodes. Thus nsqd containers cannot be put behind a load balancer, as node selection requires more smarts than a load balancer offers. In this case, we had to use the Consul-based solution, which provided the IP and port of the nsqlookupd container to all services.
While we have used both solutions interchangeably, each has its own strengths. So much so that at Flux7 we have used each solution to complement the other. For example, DevOps consultants have used AWS Route 53 to provide the location of the Consul service itself.
The Ideal User
In our experience, anyone looking to build a microservices architecture, particularly on AWS using ECS, would benefit greatly from Amazon’s reference architecture.
Indeed, we have seen requests for just such setups come from small two-person startups all the way up to large enterprises with numerous teams. Inquiries have come from across industries and brands — from Fortune 500 retailers to a large camera manufacturer, from online gaming and SaaS providers, to hospitality and healthcare. They all are seeing the vision of AWS microservices and are looking to create their own container-based microservices architecture. If you are interested in the benefits of a microservices architecture and would like to apply our proven best practices, give us a call today.
To read how a financial services leader paired Amazon Web Services — which allowed the organization to quickly scale to demand without expensive hardware purchases — and Docker containers, please see this page.