CAP Theorem in Practice: PACELC and What Real Systems Actually Pick
What CAP and PACELC really say, why 'CP vs AP' is a useful but lossy summary, and how to map the theory to the database choices you'll actually make.
By Jarviix Engineering · Apr 19, 2026
CAP theorem is one of those topics everyone has heard of and few people actually use correctly. The "pick two out of three" framing is catchy but misleading; the real-world version is a richer trade-off space called PACELC.
This post is a practical walk through what CAP and PACELC actually claim, where the famous summary breaks down, and how to map the theory to the database choices you'll make.
CAP, restated honestly
CAP theorem says: in the presence of a network partition, a distributed system must choose between Consistency and Availability.
The bumper-sticker version — "Consistency, Availability, Partition tolerance: pick two" — is misleading because P (partition tolerance) isn't a choice. Networks partition. Wires fail, switches fail, DNS lies. If your system is distributed at all, you don't get to opt out of partitions; you can only choose how to behave when they happen.
So the real choice CAP forces is binary, and only during a partition:
- CP: When partitioned, refuse some requests rather than return inconsistent data. The system stops being available to part of the network.
- AP: When partitioned, keep serving requests even if different partitions return different answers. The system stops being consistent.
That's it. That's the entire theorem.
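The binary choice can be made concrete. Below is a minimal sketch (all names are mine, not any real database's API) of the same replicated read handled both ways when this node is cut off from a quorum of its peers:

```python
class PartitionError(Exception):
    """Raised by the CP handler instead of returning possibly-stale data."""

def read_cp(local_value, quorum_reachable):
    # CP: if we can't confirm against a quorum, refuse the request.
    if not quorum_reachable:
        raise PartitionError("cannot confirm latest value; retry elsewhere")
    return local_value

def read_ap(local_value, quorum_reachable):
    # AP: always answer, flagging that the value may be stale.
    return {"value": local_value, "possibly_stale": not quorum_reachable}
```

During a partition, `read_cp` raises (unavailable but never wrong) while `read_ap` serves (available but possibly stale); with the network healthy, both return the same answer.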
Why "two out of three" is wrong
Three reasons the casual framing misleads:
- No partition, no problem. Most of the time, your network is fine. CAP says nothing about how your system behaves then.
- It's per-operation, not per-system. A database can be CP for some operations and AP for others (DynamoDB, Cassandra, Cosmos DB all let you tune per-query).
- "Consistency" in CAP means linearizability: every operation appears to take effect atomically at a single point in real time, and all nodes agree on that order. Weaker models (sequential, causal, eventual) live on a spectrum CAP doesn't address.
A more useful framing is PACELC.
PACELC: what most articles skip
Daniel Abadi's PACELC formulation:
- In case of Partition (P): trade Availability (A) vs Consistency (C). (Same as CAP.)
- Else (E), in normal operation: trade Latency (L) vs Consistency (C).
Why it matters: even when nothing is broken, strong consistency across replicas costs latency (replicas must coordinate; quorums must be reached). Most production systems care about latency on every request, not just during the rare partition.
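The latency cost of coordination is easy to see with arithmetic. In the sketch below (function name is mine), a coordinator writing to N replicas can acknowledge after the first replica responds (fast, weaker) or after a quorum (slower, stronger); the quorum ack time is the k-th smallest replica round-trip, so one slow replica drags strong writes but not weak ones:

```python
def ack_latency_ms(replica_rtts_ms, acks_needed):
    """Time until the coordinator has the required number of acks:
    the acks_needed-th fastest replica round-trip."""
    return sorted(replica_rtts_ms)[acks_needed - 1]

rtts = [2, 3, 40]                 # e.g. two local replicas, one cross-region
fast = ack_latency_ms(rtts, 1)    # L-leaning: ack after first replica -> 2 ms
quorum = ack_latency_ms(rtts, 2)  # C-leaning: majority of 3 -> 3 ms
everyone = ack_latency_ms(rtts, 3)  # strictest: wait for all -> 40 ms
```

Cross-region replicas make the gap dramatic: the stricter the ack requirement, the more you pay for your slowest replica, partition or no partition.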
So real systems sit somewhere on a 2D map:
| System | During partition | Normal operation |
|---|---|---|
| Spanner / CockroachDB | C (refuse some requests) | C (pay latency) |
| Postgres (single node) | N/A | C |
| DynamoDB (default) | A (serve stale) | L (low latency) |
| DynamoDB (strong reads) | C (reads may be refused) | C (pay latency) |
| Cassandra | A (serve stale) | L (low latency) |
| MongoDB | C (refuse writes) | L (tunable via read/write concerns) |
That table is far more useful than "CP vs AP" because it tells you what you're paying both during incidents and during normal happy-path operation.
Mapping theory to choices
When picking a database or data architecture, ask the PACELC questions:
What happens during a partition?
If a portion of your system is cut off from the rest, what do you want?
- Online retailer at Black Friday: absolutely keep accepting orders; accept that some inventory counts will be stale and reconcile afterward. AP.
- Bank ledger: absolutely refuse writes that we can't confirm consistent. The cost of double-spending is enormous. CP.
- Multiplayer game state: AP — keep playing, reconcile state when network heals.
- Air traffic control: CP — no compromise on consistency, even if some controllers go offline temporarily.
What's the latency budget in normal operation?
- Public-facing read paths (feeds, recommendations, search results): every millisecond matters. L over C.
- Money movement: correctness over speed; an extra 100ms is fine. C over L.
- Analytics dashboards: can tolerate seconds of staleness. L over C.
Can different parts of the same system make different choices?
Yes — and most do.
- Stripe-style: payment writes are CP (Spanner-class consistency); analytics is AP/L.
- Social network: posting a status is CP (don't lose it); the timeline read path is AP/L.
- E-commerce: inventory decrement is CP (don't oversell); recommendations and reviews are AP.
Polyglot persistence isn't avoidance of the trade-off; it's applying the trade-off correctly per workload.
Real-world systems: where they sit
A non-exhaustive map:
- Postgres, MySQL (single primary): CP at the level of the primary; replicas are eventually consistent. Network partition between primary and replica means stale reads from replicas, which is usually fine.
- Spanner, CockroachDB, YugabyteDB: CP. Strongly consistent. Pay latency in the form of quorum writes; pay availability in the form of refusing writes during catastrophic partitions.
- DynamoDB, Cassandra, ScyllaDB: AP by default. Quorum reads/writes available per-query for stronger guarantees, at latency cost.
- MongoDB: CP for the primary; replicas are eventually consistent. Configurable read/write concerns per operation.
- Etcd, Zookeeper, Consul: CP. Used as coordination/locking infrastructure where consistency matters more than availability.
- Redis (single primary): CP at the primary, but replication is asynchronous, so a failover (with Sentinel or Cluster) can lose acknowledged writes. Treat replicated Redis as best-effort rather than strictly CP.
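The per-query tuning mentioned for the Dynamo-style stores above comes down to one inequality: with N replicas, a write acknowledged by W of them and a read that consults R of them are guaranteed to overlap (the read sees the write) when R + W > N. A minimal sketch (function name is mine, not any driver's API):

```python
def read_sees_latest_write(n, r, w):
    """Quorum-overlap rule for Dynamo-style tunable consistency:
    a read quorum of r and a write quorum of w out of n replicas
    must intersect for reads to be guaranteed fresh."""
    return r + w > n

N = 3
read_sees_latest_write(N, r=1, w=1)  # False: fastest, eventual consistency
read_sees_latest_write(N, r=2, w=2)  # True: quorum reads and writes
read_sees_latest_write(N, r=1, w=3)  # True: write-all, read-one
```

Per-query consistency levels just move R and W along this line, trading latency (smaller quorums) against freshness guarantees (overlapping quorums).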
What you actually trade away
The honest list of what each end of the spectrum costs:
Choosing C (CP):
- Higher latency (quorum coordination).
- Lower availability during partitions (some requests refused).
- Higher operational cost (more careful failure handling, more sophisticated infrastructure).
Choosing A (AP):
- Stale reads possible.
- Conflict resolution required (last-writer-wins, CRDTs, application-level merging).
- "Eventually correct" — you ship features knowing some users will see stale data for some window.
There's no free lunch. Either users sometimes wait (CP) or users sometimes see disagreement (AP). Pick whichever your product can tolerate; design around the rest.
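Of the conflict-resolution strategies listed above, last-writer-wins is the simplest to sketch. Here each write carries a (timestamp, node_id) pair, and merging two divergent copies keeps the value with the later timestamp, breaking ties by node id (names and tuple layout are mine, for illustration):

```python
def lww_merge(a, b):
    """a and b are (timestamp, node_id, value) tuples.
    Python's tuple comparison orders by timestamp first,
    then node_id, so max() picks the 'last' writer."""
    return max(a, b)

left = (1_700_000_005, "node-a", "shipped")   # written later
right = (1_700_000_002, "node-b", "pending")  # written earlier
merged = lww_merge(left, right)  # "shipped" wins: newer timestamp
```

The simplicity is also the caveat: LWW silently discards the losing write, which is exactly why CRDTs or application-level merging exist for data where both sides' updates must survive.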
Three rules for using CAP/PACELC well
- Apply per workload, not per system. Your auth needs CP. Your feed doesn't. Don't pick one for the whole company.
- Talk to product about the trade-off. "We can ship this faster if we accept ~5 seconds of staleness on this feature" is a real product conversation. Don't make the call alone.
- Test partition behavior, not just the happy path. Use Toxiproxy, Chaos Mesh, or Jepsen-style failure injection. Most teams discover their CAP behavior in production incidents; you can find it in staging.
What to read next
CAP/PACELC is the theory; eventual consistency is the practice once you've chosen AP. SQL vs NoSQL is where these choices first show up in database selection. The distributed cache HLD writeup is a concrete walk-through of where on the PACELC map a real system actually lands. And system design basics ties them into the broader picture.
Frequently asked questions
Is CAP theorem outdated?
No, but it's incomplete. PACELC extends it to cover the more common case (no partition) where you still trade latency vs consistency. Both are useful framings, neither is the final word.
Can a database really be CA?
Not in a distributed sense. Single-node systems are 'CA' in a trivial way (no partitions to tolerate). Anything spread across machines must pick CP or AP when partitioned.
Where does Spanner fit?
Spanner is famously CP: strongly consistent, while keeping latency manageable through extremely well-engineered clocks (TrueTime) and dedicated global infrastructure most teams can't replicate. It pays for consistency mostly with infrastructure cost and commit-wait latency, not unavailability.