CAP Theorem: Its importance in distributed systems

By Flux7 Labs
February 6, 2014

CAP Theorem

Five years ago, Amazon found that every 100ms of latency cost them 1% of sales. Google discovered that a half-second increase in search latency dropped traffic by 20%.

The need to scale up, down, and out keeps growing, and so do the challenges of operating large distributed systems. So, when designing such applications, it’s important to keep in mind the three core requirements described by Brewer’s CAP theorem:

1. Consistency
2. Availability
3. Partition-Tolerance

The CAP theorem was first proposed by Eric Brewer of the University of California, Berkeley, in 2000, and then proven by Seth Gilbert and Nancy Lynch of MIT in 2002.

Defining The Three Core Requirements

Consistency (C) requires that every read initiated after a successful write return that same, most recent value.

Availability (A) requires that every node not in a failed state always executes queries. Let’s say we have “n” servers serving our application. To improve availability, we would add “x” additional servers.

Partition Tolerance (P) requires that a system continue operating when there are temporary breaks or failures in the network, re-routing communication as needed. The goal is to maintain synchronization among the involved nodes.
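The tension between these definitions can be shown with a toy sketch (all class and method names here are illustrative, not from any real database): a two-replica key-value store where, during a partition, a read from the backup must either return possibly stale data (favoring availability) or refuse to answer (favoring consistency).

```python
# Toy two-replica store illustrating the consistency-vs-availability
# choice during a partition. Hypothetical names, not a real database API.

class Replica:
    def __init__(self):
        self.data = {}

class TinyCluster:
    def __init__(self, prefer_consistency):
        self.primary = Replica()
        self.backup = Replica()
        self.partitioned = False          # True = replicas cannot talk
        self.prefer_consistency = prefer_consistency

    def write(self, key, value):
        self.primary.data[key] = value
        if not self.partitioned:          # replicate only while the link is up
            self.backup.data[key] = value

    def read_from_backup(self, key):
        if self.partitioned and self.prefer_consistency:
            # CP-style behavior: refuse to answer rather than risk staleness
            raise RuntimeError("unavailable during partition")
        # AP-style behavior: answer with whatever the backup has, possibly stale
        return self.backup.data.get(key)

cluster = TinyCluster(prefer_consistency=False)
cluster.write("x", 1)
cluster.partitioned = True
cluster.write("x", 2)                     # the backup misses this update
print(cluster.read_from_backup("x"))      # prints the stale value: 1
```

With `prefer_consistency=True`, the same read raises an error instead: the node is up but not available, exactly the distinction the Availability definition draws.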

Brewer’s theorem states that a distributed system cannot guarantee all three requirements simultaneously: when a network partition occurs, at least one of them must be compromised.

Tradeoffs Between Requirements

1. Available and Partition-Tolerant: Say you have two nodes and the link between the two is severed. Since both nodes are up, you can design the system to accept requests on each of the nodes, which will make the system available despite the network being partitioned. However, each node will issue its own results, so by providing high availability and partition tolerance you’ll compromise consistency.

2. Consistent and Partition-Tolerant: Say you have three nodes and one node loses its link with the other two. You can create a rule that a result will be returned only when a majority of nodes agree. So, despite having a partition, the system will return a consistent result. However, since the separated node won’t be able to reach consensus it won’t be available even though it’s up.

3. Finally, a system can be both consistent and available, but it may have to block on a partition.
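The majority rule in scenario 2 can be sketched as a simplified quorum check (an illustration of the idea, not any particular database’s implementation):

```python
# Simplified majority-quorum read: a value is returned only when a majority
# of all replicas can be reached and agree; a minority partition gets no answer.

def quorum_read(replica_values, total_replicas):
    """replica_values: the values reported by the replicas we can reach."""
    majority = total_replicas // 2 + 1
    if len(replica_values) < majority:
        # We are on the minority side of a partition: stay consistent
        # by refusing to answer (availability is sacrificed).
        return None
    # Return the value that a majority of replicas agree on, if any.
    for v in set(replica_values):
        if replica_values.count(v) >= majority:
            return v
    return None

# Three replicas; one node has lost its link to the other two.
print(quorum_read([42, 42], total_replicas=3))   # majority side -> 42
print(quorum_read([41], total_replicas=3))       # minority side -> None
```

The separated node is healthy, yet it cannot assemble a majority, so it returns nothing: consistent and partition-tolerant, but not available.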

Selecting Two Requirements at a Time


CA (Consistent and Available): All relational DBs are CA. Single-site clusters ensure that all nodes are always in contact, e.g., 2 PCs. When a partition occurs, the system blocks.

CP (Consistent and Partition-Tolerant): Some data may be inaccessible (availability is sacrificed), but the rest is still consistent/accurate, e.g., a sharded database.

AP (Available and Partition-Tolerant): The system is still available under partitioning, but some returned data may be inaccurate, e.g., DNS, caches, Master/Slave replication. Needs a conflict resolution strategy.
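One common conflict resolution strategy for AP systems is last-write-wins, sketched below with hypothetical timestamped entries. (Real Dynamo-style stores often use vector clocks or similar mechanisms, which handle concurrency more robustly than plain timestamps.)

```python
# Last-write-wins merge: after a partition heals, keep the version with the
# newest timestamp for each key. Timestamps here are plain integers.

def merge_lww(replica_a, replica_b):
    """Each replica maps key -> (timestamp, value); returns the merged state."""
    merged = dict(replica_a)
    for key, (ts, value) in replica_b.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

# Two nodes accepted conflicting writes while partitioned:
a = {"cart": (10, ["book"])}
b = {"cart": (12, ["book", "pen"])}
print(merge_lww(a, b))   # prints {'cart': (12, ['book', 'pen'])}
```

The older write is silently discarded, which is exactly the “some returned data may be inaccurate” tradeoff described above.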

Brewer’s Theorem 14 Years Later

Brewer’s theorem was proposed in 2000 and has been applied effectively ever since. However, the way the theorem is used has evolved since its original statement. Designers still have to choose two of the three requirements, but a CAP goal today is to maximize both consistency and availability, which requires having strategies in place for operating during partitions and for recovering afterward. (Brewer discusses this in his 2012 follow-up, “CAP Twelve Years Later: How the ‘Rules’ Have Changed”.)

Databases and CAP Theorem

The following diagram depicts where each database falls and which two of the three properties it selects.

Read more on NoSQL database types here.

Get Started with AWS

Receive AWS tips, DevOps best practices, news analysis, commentary and more. Sign up for our IT Modernization blog here and set your topic and frequency preferences. Or, download our guide on getting started with AWS, establishing a secure AWS enterprise architecture with Flux7 Landing Zones.

