Tech · 6 min read
CAP Theorem and PACELC: What Distributed Systems Force You to Choose
CAP is the most-cited and most-misunderstood distributed systems concept. The real meaning, why PACELC is more accurate, and what each trade-off actually feels like in production.
By Jarviix Engineering · Apr 19, 2026
The CAP theorem is the most-cited and most-misunderstood concept in distributed systems. Repeated in every interview, written on every architecture whiteboard, and rarely explained correctly. The result: engineers make architectural decisions based on a flawed mental model.
This post explains what CAP actually says (and doesn't say), why PACELC is the more useful framework, and what each trade-off feels like when running real production systems.
CAP: the actual statement
Eric Brewer's CAP theorem states: a distributed system cannot guarantee all three of the following properties at once. When a network partition occurs, it must give up one of the first two:
- Consistency (C): every read returns the most recent write
- Availability (A): every request to a non-failing node receives a meaningful response, not an error or a timeout
- Partition tolerance (P): system continues operating despite network failures between nodes
The crucial subtlety: this is about behavior during partitions. In normal operation, you can have all three.
Since network partitions are inevitable in any distributed system spanning multiple physical locations, you must choose between C and A when partitions occur. You cannot opt out of partition tolerance — it's not a design choice, it's a physical reality.
What people get wrong
The common misinterpretation: "PostgreSQL is CA, MongoDB is AP, etc."
Wrong. A single-node PostgreSQL instance isn't a distributed system at all, so CAP doesn't apply. A clustered or replicated PostgreSQL setup must choose between C and A during partitions like any other distributed system.
The reality: CAP is a runtime property, not a design label. The same system can behave as CP for some operations and AP for others, often configurable per-query.
CP systems: choose Consistency over Availability
When a partition occurs, CP systems refuse to serve requests on the partitioned side. They guarantee that any successful response reflects the latest committed state.
Examples:
- Google Spanner
- ZooKeeper
- etcd
- HBase
- Most relational databases in clustered mode
What it feels like in production:
- During network issues, requests fail or block
- Consistent data, predictable behavior
- Higher latency (must coordinate across replicas)
- Lower availability during partitions
When to choose CP:
- Financial transactions
- Inventory systems with strict consistency requirements
- Distributed locks and coordination
- Configuration management
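The CP behavior above can be sketched as a quorum write: the system acknowledges a write only once a majority of replicas accept it, and a node that can reach only a minority refuses the request rather than risk divergence. This is an illustrative sketch, not any particular database's implementation.

```python
# Sketch of CP behavior during a partition: a write commits only if a
# majority of replicas acknowledge it; otherwise the request fails
# (availability sacrificed) rather than accept a divergent write.

class QuorumWriteError(Exception):
    pass

def quorum_write(reachable_replicas: int, total_replicas: int) -> str:
    majority = total_replicas // 2 + 1
    if reachable_replicas < majority:
        # CP choice: fail clearly instead of serving/accepting stale state.
        raise QuorumWriteError(
            f"only {reachable_replicas}/{total_replicas} replicas reachable; "
            f"need {majority}"
        )
    return "committed"

# Normal operation: all 5 replicas reachable, the write commits.
print(quorum_write(5, 5))
# Partition: this node sees only 2 of 5 replicas, the request fails.
try:
    quorum_write(2, 5)
except QuorumWriteError as e:
    print("rejected:", e)
```

This is why CP systems like etcd and ZooKeeper go read-only or unavailable on the minority side of a partition: the majority side can still make progress, and the two sides can never both commit.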
AP systems: choose Availability over Consistency
When a partition occurs, AP systems continue serving requests on both sides — at the cost of potentially returning stale or conflicting data.
Examples:
- Cassandra (with default consistency)
- DynamoDB (in default mode)
- Riak
- CouchDB
What it feels like in production:
- Always responsive, even during network issues
- Stale data possible
- Conflict resolution required
- Lower latency (no coordination needed)
When to choose AP:
- Social media feeds (slight staleness OK)
- Shopping carts (eventual consistency acceptable)
- Analytics and counters
- User session data
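The conflict-resolution cost of AP can be sketched with last-write-wins, one common (but lossy) strategy. This is a toy model, not how any specific store implements it; real systems vary (Cassandra uses per-cell timestamps, Riak can surface siblings to the application).

```python
# Sketch of AP behavior: both sides of a partition accept writes, each
# tagged with a timestamp. When the partition heals, replicas converge
# via last-write-wins: the highest timestamp for each key wins.

def lww_merge(a: dict, b: dict) -> dict:
    """Merge two replica states of the form {key: (timestamp, value)}."""
    merged = dict(a)
    for key, (ts, val) in b.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, val)
    return merged

# During a partition, each side accepts a write for the same cart:
side_a = {"cart:42": (100, ["book"])}
side_b = {"cart:42": (105, ["book", "pen"])}

# After the partition heals, both replicas converge to the later write.
healed = lww_merge(side_a, side_b)
print(healed["cart:42"])
```

Note that last-write-wins silently drops the losing concurrent write, which is exactly why Dynamo-style shopping carts merge by set union instead of picking a winner.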
PACELC: the more honest framework
Daniel Abadi proposed PACELC to address what CAP ignores: trade-offs during normal operation.
Partition? A or C? Else, L or C?
In other words:
- During Partitions (P), choose between Availability (A) and Consistency (C)
- Else (E), normally, choose between Latency (L) and Consistency (C)
Most distributed systems sacrifice some consistency for lower latency even without partitions. Synchronous replication across regions is too slow; many systems use asynchronous replication, accepting eventual consistency.
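The EL/EC trade-off is mostly arithmetic on round trips. A toy latency model, with made-up RTT numbers purely for illustration:

```python
# Toy model of the PACELC "else" branch: synchronous replication must
# wait for the slowest required remote ack before answering the client;
# asynchronous replication acknowledges after the local write and lets
# replicas catch up in the background.

LOCAL_WRITE_MS = 1
REPLICA_RTT_MS = {"us-east": 2, "eu-west": 80, "ap-south": 180}  # illustrative

def sync_commit_latency() -> int:
    # EC: every replica must acknowledge before the client sees success.
    return LOCAL_WRITE_MS + max(REPLICA_RTT_MS.values())

def async_commit_latency() -> int:
    # EL: acknowledge immediately; reads elsewhere may briefly be stale.
    return LOCAL_WRITE_MS

print(sync_commit_latency(), "ms for synchronous cross-region commit")
print(async_commit_latency(), "ms for asynchronous local commit")
```

With these numbers the synchronous commit costs 181 ms against 1 ms asynchronously, which is why cross-region EC systems feel slow and why so many systems default to EL.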
PACELC system classifications
| System | During Partition | Normally | Notes |
|---|---|---|---|
| Spanner | PC | EC | Strong consistency at all times |
| DynamoDB | PA | EL | Highly available, low latency |
| Cassandra | PA | EL | Tunable per query |
| MongoDB | PA | EC (default) / EL (read preferences) | Tunable |
| MySQL Group Replication | PC | EC | Strong consistency cluster |
PACELC captures real-world trade-offs better than CAP alone.
Tunable consistency
Modern distributed databases don't force a single CAP choice. They offer per-query consistency levels:
Cassandra read/write consistency
- ONE: respond as soon as 1 node confirms (high availability, low latency, weak consistency)
- QUORUM: majority must confirm (balanced)
- ALL: all replicas must confirm (strong consistency, low availability)
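These levels reduce to simple arithmetic over the replication factor n: a read is guaranteed to overlap the latest write whenever the read set and write set intersect, i.e. r + w > n. A sketch of the math only, not the driver API:

```python
# Quorum arithmetic behind tunable consistency: with replication factor n,
# reads contacting r replicas and writes waiting on w acks are guaranteed
# to see the latest write whenever r + w > n (the sets must overlap).

def quorum(n: int) -> int:
    return n // 2 + 1

def is_strongly_consistent(r: int, w: int, n: int) -> bool:
    return r + w > n

n = 3
# QUORUM reads + QUORUM writes: 2 + 2 > 3, reads see the latest write.
print(is_strongly_consistent(quorum(n), quorum(n), n))
# ONE reads + ONE writes: 1 + 1 > 3 is false, reads may be stale.
print(is_strongly_consistent(1, 1, n))
# ONE reads + ALL writes: 1 + 3 > 3, consistent, but writes lose availability.
print(is_strongly_consistent(1, n, n))
```

This is the knob behind "tunable": shifting weight between r and w trades read latency, write latency, and availability while keeping (or giving up) the overlap guarantee.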
DynamoDB
- Eventually consistent reads (default): low latency, may return stale data
- Strongly consistent reads: latest data, double the cost, slightly higher latency
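The "double the cost" above follows DynamoDB's documented capacity model: an eventually consistent read of up to 4 KB consumes half a read capacity unit, a strongly consistent read a full unit. A small sketch of that arithmetic (the pricing constants are from DynamoDB's published model; verify against current docs):

```python
import math

# DynamoDB read-cost sketch: items are billed in 4 KB chunks; an
# eventually consistent read costs 0.5 RCU per chunk, a strongly
# consistent read costs 1.0 RCU per chunk.

def read_capacity_units(item_size_bytes: int, strongly_consistent: bool) -> float:
    four_kb_chunks = math.ceil(item_size_bytes / 4096)
    return four_kb_chunks * (1.0 if strongly_consistent else 0.5)

print(read_capacity_units(3000, strongly_consistent=False))  # half a unit
print(read_capacity_units(3000, strongly_consistent=True))   # a full unit
print(read_capacity_units(9000, strongly_consistent=True))   # three 4 KB chunks
```

The choice is made per call: in boto3, for example, `get_item` accepts a `ConsistentRead=True` flag, which is exactly the per-query tunability the next paragraph describes.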
This per-query tunability means the same database can serve as CP or AP depending on the operation.
Real-world consistency models
Beyond strict CAP, several consistency models exist:
Strong (linearizable) consistency
Every read returns the most recent write. Behaves like a single machine. Hardest to achieve in distributed systems.
Sequential consistency
Operations appear in some sequential order consistent across all nodes. Slightly weaker than linearizable.
Causal consistency
Operations that have a causal relationship are seen in the same order by all nodes. Operations without causality may appear in any order.
Eventual consistency
All replicas converge to the same value if no new updates happen. No guarantees on when convergence occurs.
Read-your-writes consistency
Once you write, all your subsequent reads see the write. Weaker than global consistency but provides good UX.
Monotonic read consistency
Once you read a value, subsequent reads never return older values.
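Both session guarantees can be enforced client-side with a version token: the session remembers the highest version it has written or read and rejects any replica answer older than that. A minimal sketch under that assumption (the `Session` class and versioning scheme are illustrative, not a real client library):

```python
# Client-side session guarantees via a version token: the session tracks
# the highest version it has written (read-your-writes) or read
# (monotonic reads) and rejects replica answers older than that.

class StaleReadError(Exception):
    pass

class Session:
    def __init__(self):
        self.min_version = 0  # highest version this session has seen

    def record_write(self, version: int):
        # Read-your-writes: future reads must be at least this fresh.
        self.min_version = max(self.min_version, version)

    def read(self, replica_version: int, value):
        if replica_version < self.min_version:
            # Lagging replica: the caller should retry elsewhere or wait.
            raise StaleReadError(f"replica at v{replica_version}, "
                                 f"session needs >= v{self.min_version}")
        self.min_version = replica_version  # monotonic reads: never go back
        return value

s = Session()
s.record_write(7)             # this session wrote version 7
try:
    s.read(5, "old value")    # lagging replica: rejected
except StaleReadError as e:
    print(e)
print(s.read(8, "fresh"))     # caught-up replica: accepted
```

Sticky routing (pinning a session to one replica) achieves the same guarantees without tokens, at the cost of load-balancing flexibility.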
Different applications need different consistency models. "Eventual consistency" is too weak for banking; "linearizable" is too expensive for social media feeds.
Practical decision framework
When designing a distributed system, ask:
1. Can the application tolerate stale data?
- No (banking, inventory): need strong consistency, accept higher latency / lower availability
- Yes (feeds, analytics): use eventual consistency for speed and availability
2. What's the cost of unavailability?
- High (e-commerce checkout): bias toward AP
- Medium (configuration management): bias toward CP — better to fail clearly than serve wrong data
- Low (background analytics): either works
3. What's the cost of stale data?
- High (financial transactions): demand strong consistency
- Low (recommendation engines): eventual consistency fine
4. Are there per-operation differences?
- Often yes: critical operations need strong consistency; bulk operations accept eventual consistency
Use systems that allow per-query consistency tuning rather than picking a single CAP-classified system.
Common mistakes
- Picking systems based on CAP labels: "we need CP, so we'll use Spanner" — without considering operational complexity, cost, latency
- Assuming "eventually consistent" means a few seconds: in reality, can be minutes or hours during partitions
- Treating CAP as a design choice: P is forced by physics; you only choose A vs C during partitions
- Ignoring normal-operation trade-offs: PACELC matters daily; CAP only during rare partition events
- Single consistency level for all queries: missing the opportunity to use different consistency for different operations
- Confusing replication mode with CAP: synchronous vs asynchronous replication is one factor; CAP is broader
What to read next
- Eventual consistency — deeper dive on the AP side.
- Database isolation levels — single-node consistency models.
- Distributed locks — coordination in CP systems.
- System design basics — broader context.
CAP is not a useless concept — it's a foundation for thinking about distributed trade-offs. But treating it as a design choice rather than a runtime constraint, or ignoring the more practical PACELC framework, leads to systems that disappoint in production. Real distributed systems thinking starts with: "what does my application actually need under partition?" — and equally importantly, "what trade-offs am I making in normal operation?"
Frequently asked questions
Is CAP theorem actually true?
Technically yes, but commonly misinterpreted. CAP is a statement about behavior during network partitions: when a partition occurs, you can have either Consistency or Availability, not both. It's NOT a statement about how to design your system in normal conditions. Most modern distributed databases (Cassandra, DynamoDB, MongoDB) provide tuning knobs to choose CP or AP behavior per query, blurring the strict either/or. Don't pick a database based on CAP letters alone.
What's PACELC and why does it matter more?
PACELC extends CAP to address normal operation. During Partitions (P), you choose between Availability (A) and Consistency (C) — the standard CAP. Else (E), during normal operation, you choose between Latency (L) and Consistency (C). Most systems sacrifice some consistency for lower latency even when no partitions occur. PACELC captures this critical second trade-off. Examples: DynamoDB is PA/EL (high availability + low latency, eventual consistency); Spanner is PC/EC (strong consistency + slower).
Can I get strong consistency AND high availability AND low latency?
Within a single datacenter, mostly yes. Across geographies, no. Physics imposes hard constraints: speed of light makes cross-region synchronous replication slow (50-200ms latency); network partitions are inevitable across continental distances. Google Spanner gets close with TrueTime atomic clocks but still has trade-offs. Most production systems accept tunable consistency or per-query consistency choices rather than trying to solve the impossible.