Database Isolation Levels: From Read Committed to Serializable, Without the Confusion
What isolation levels actually do, the anomalies each one prevents, and which level your real-world workload should use — explained without the textbook fog.
By Jarviix Engineering · Apr 19, 2026
Isolation levels are the single most-skimmed topic in backend engineering. Most developers know "use a transaction" but couldn't tell you what READ COMMITTED actually prevents. That's how subtle data-corruption bugs ship.
This post walks through what each level prevents, what bugs slip through, and what to actually use.
The setup
When two transactions touch the same data at the same time, four classic anomalies can occur:
- Dirty read. Transaction B reads data that A wrote but hasn't committed. A then rolls back; B used data that never existed.
- Non-repeatable read. B reads row X. A updates X and commits. B reads X again — different value.
- Phantom read. B runs SELECT * WHERE status='pending' and sees 5 rows. A inserts a 6th matching row. B re-runs the query and now sees 6.
- Serialization anomaly. Two transactions each individually look correct, but their combined effect produces a state that no serial ordering of them could produce.
Isolation levels are how the database promises to prevent some subset of these.
The four standard levels
The SQL standard defines four:
| Level | Dirty read | Non-repeatable read | Phantom read | Serialization anomaly |
|---|---|---|---|---|
| READ UNCOMMITTED | Possible | Possible | Possible | Possible |
| READ COMMITTED | Prevented | Possible | Possible | Possible |
| REPEATABLE READ | Prevented | Prevented | Possible (per spec) | Possible |
| SERIALIZABLE | Prevented | Prevented | Prevented | Prevented |
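The spec table above can be encoded as a small lookup, handy for sanity checks in tests. A sketch — the level and anomaly names come from the table, not from any library:

```python
# Anomalies still POSSIBLE at each SQL-standard isolation level,
# straight from the spec table above.
ALLOWED_ANOMALIES = {
    "READ UNCOMMITTED": {"dirty_read", "non_repeatable_read",
                         "phantom_read", "serialization_anomaly"},
    "READ COMMITTED":   {"non_repeatable_read", "phantom_read",
                         "serialization_anomaly"},
    "REPEATABLE READ":  {"phantom_read", "serialization_anomaly"},
    "SERIALIZABLE":     set(),
}

def prevents(level: str, anomaly: str) -> bool:
    """True if the given isolation level prevents the anomaly, per spec."""
    return anomaly not in ALLOWED_ANOMALIES[level]
```

Remember this is the spec's view; as the next section shows, real engines deviate from it.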
Higher levels prevent more anomalies, at the cost of more locking, more retries, or both.
What each level actually means in practice
The spec is one thing; what your engine does is another.
READ UNCOMMITTED
In theory: you can see writes that haven't committed yet.
In practice: Postgres doesn't even implement this — READ UNCOMMITTED is silently treated as READ COMMITTED. SQL Server does implement it (WITH (NOLOCK)); the rare correct use case is "I'm okay reading approximate data for a dashboard". For business logic, never.
READ COMMITTED
In theory: you only see committed data, but each query within your transaction sees a fresh snapshot.
In practice: this is the Postgres default. It prevents dirty reads but allows everything else. The classic bug it allows:
-- Transaction A:
SELECT balance FROM accounts WHERE id=1; -- returns 1000
-- ...some logic decides we can withdraw 800...
UPDATE accounts SET balance = balance - 800 WHERE id=1;
COMMIT;
-- Meanwhile, Transaction B did the same thing concurrently.
-- Both saw 1000, both withdrew 800. Final balance: 200 (or -600).
This is the classic "lost update" bug: two transactions read the same value, and one silently overwrites the other's write. Together with its cousin, write skew, it accounts for the most common transaction bugs in real systems.
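The interleaving is easy to reproduce in a toy model. A minimal sketch in plain Python (no real database — the variables stand in for rows and snapshots):

```python
# Toy model of the lost-update interleaving under READ COMMITTED:
# both "transactions" read the committed balance, then both write
# a value computed from their stale read.
balance = 1000

# Step 1: both transactions read the committed value.
seen_by_a = balance   # A sees 1000
seen_by_b = balance   # B sees 1000

# Step 2: both decide the withdrawal is safe (800 <= 1000) and write.
balance = seen_by_a - 800   # A commits: balance is now 200
balance = seen_by_b - 800   # B commits, clobbering A: still 200

# 1600 was handed out, but the balance only dropped by 800.
# A's update was lost.
```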
REPEATABLE READ (a.k.a. Snapshot Isolation in Postgres)
In theory: every read in the transaction sees the same snapshot, taken at the start.
In practice: Postgres's REPEATABLE READ gives you snapshot isolation — strong, but not quite the same as SQL spec REPEATABLE READ. It prevents dirty reads, non-repeatable reads, and (in Postgres) phantoms too. It even catches the same-row lost update: if two snapshot transactions update the same row, the second to commit aborts with a serialization error. What it does not prevent is write skew — decisions based on reads of rows the other transaction didn't touch:
-- Transaction A reads schedule, sees no doctor on call.
-- Transaction B reads schedule, sees no doctor on call.
-- A inserts "Alice on call".
-- B inserts "Bob on call".
-- Both commit. Two doctors on call when the rule says one.
Both transactions individually look correct. The combined effect violates a constraint that crosses rows. Snapshot isolation lets this through.
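A toy version of the on-call example: each transaction checks the constraint against its own snapshot taken at start, so neither sees the other's insert. A sketch, not real snapshot isolation:

```python
# Toy model of write skew under snapshot isolation: each "transaction"
# validates the invariant against a private snapshot, then writes.
on_call = []                  # committed state: nobody on call

snapshot_a = list(on_call)    # A's snapshot at transaction start: empty
snapshot_b = list(on_call)    # B's snapshot at transaction start: empty

# Each transaction checks "at most one doctor on call" against its
# OWN snapshot. Both checks pass, because neither sees the other.
if len(snapshot_a) == 0:
    on_call.append("Alice")   # A commits
if len(snapshot_b) == 0:
    on_call.append("Bob")     # B commits

# Combined effect: two doctors on call, violating the invariant —
# yet each transaction, viewed alone, behaved correctly.
```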
SERIALIZABLE
In theory: the database guarantees the result is as if the concurrent transactions ran one at a time in some order.
In practice:
- Postgres implements this with Serializable Snapshot Isolation (SSI). At commit time, it detects whether your transaction would have been part of an unsafe interleaving, and aborts it with a serialization failure (SQLSTATE 40001). You retry.
- MySQL/InnoDB implements it with locking — selects acquire shared locks, updates acquire exclusive locks. More predictable, more deadlocks.
- SQL Server uses range locks and snapshot isolation modes.
The cost: applications must handle serialization failures by retrying. The win: no more lost-update or write-skew bugs at all, anywhere in your code, without thinking about it.
When to actually use each level
A practical guide:
- READ COMMITTED. Default for most workloads. Use it for read-heavy queries and writes where you've explicitly handled concurrency (with SELECT FOR UPDATE, version columns, etc.).
- REPEATABLE READ / Snapshot. Useful for reports and analytics within a transaction — every query sees the same consistent snapshot. Also a reasonable default in MySQL.
- SERIALIZABLE. Reach for it when correctness matters more than peak throughput, and contention is moderate. Especially for money-moving, scheduling, allocation, and quota systems.
A reasonable rule: if a bug in your transaction logic would cost real money or trust, run that operation at SERIALIZABLE and let the database catch your concurrency mistakes.
Pessimistic vs optimistic concurrency
Even within a single isolation level, you can choose the locking strategy:
Pessimistic (SELECT ... FOR UPDATE)
BEGIN;
SELECT balance FROM accounts WHERE id=1 FOR UPDATE; -- locks the row
UPDATE accounts SET balance = balance - 100 WHERE id=1;
COMMIT;
Other transactions trying to read or update this row block until you commit. Predictable, but holds locks for the duration of the transaction → blocks everyone else.
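The blocking behavior is easy to mimic with a mutex standing in for the row lock. A sketch where `threading.Lock` plays the role of FOR UPDATE:

```python
import threading

balance = 1000
row_lock = threading.Lock()   # stands in for the row lock FOR UPDATE takes

def withdraw(amount: int) -> None:
    global balance
    with row_lock:               # SELECT ... FOR UPDATE: block until free
        seen = balance           # read under the lock, so it can't go stale
        balance = seen - amount  # UPDATE; leaving the block is the COMMIT

# Two concurrent withdrawals — unlike the READ COMMITTED example,
# the lock forces them to run one at a time.
threads = [threading.Thread(target=withdraw, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Both withdrawals serialize: balance is 800, no lost update.
```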
Optimistic (version columns)
SELECT balance, version FROM accounts WHERE id=1; -- version=7
UPDATE accounts SET balance = balance - 100, version = 8
WHERE id=1 AND version = 7;
-- 0 rows updated → someone else changed the row, retry.
No locks held. Read-heavy workloads with rare conflicts thrive on this. The application layer must handle "0 rows updated → retry".
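The same toy account with a version column: the conditional UPDATE becomes a compare-and-set, and the caller retries when zero rows match. A sketch of the pattern, not any particular ORM's API:

```python
# Toy optimistic locking: the row carries a version number; a write
# only succeeds if the version hasn't moved since we read it.
row = {"balance": 1000, "version": 7}

def try_withdraw(amount: int) -> bool:
    """One optimistic attempt: read, compute, conditional write."""
    seen_balance = row["balance"]
    seen_version = row["version"]
    # ... business logic runs without any lock held ...
    # UPDATE ... WHERE version = seen_version — the compare-and-set.
    if row["version"] != seen_version:
        return False                      # 0 rows updated: lost the race
    row["balance"] = seen_balance - amount
    row["version"] = seen_version + 1
    return True

def withdraw(amount: int, max_retries: int = 3) -> None:
    for _ in range(max_retries):
        if try_withdraw(amount):
            return
    raise RuntimeError("too much contention, giving up")
```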
Database-managed (SERIALIZABLE)
The database does the optimistic check for you. You write code as if you were the only transaction, and handle serialization failures with a generic retry wrapper.
A retry pattern that just works
If you use SERIALIZABLE or optimistic locking, every database call needs a retry layer:
import time  # backoff between retries

def with_retries(fn, max_retries=3):
    for attempt in range(max_retries):
        try:
            return fn()
        except SerializationFailure:  # your driver's 40001 error class
            if attempt == max_retries - 1:
                raise  # out of retries: let the failure surface
            time.sleep(0.01 * (2 ** attempt))  # exponential backoff
Wrap the body of every transaction in this. With it, SERIALIZABLE is invisible to application code 99% of the time.
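Here is a self-contained version of that pattern, exercised against a transaction that conflicts twice before succeeding. The `SerializationFailure` class is a hypothetical stand-in for your driver's error type (in Postgres, anything reporting SQLSTATE 40001):

```python
import time

class SerializationFailure(Exception):
    """Stand-in for your driver's serialization-failure (40001) error."""

def with_retries(fn, max_retries=3):
    for attempt in range(max_retries):
        try:
            return fn()
        except SerializationFailure:
            if attempt == max_retries - 1:
                raise                          # out of retries: surface it
            time.sleep(0.01 * (2 ** attempt))  # exponential backoff

attempts = {"n": 0}

def flaky_transaction():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationFailure()  # first two tries hit a conflict
    return "committed"

result = with_retries(flaky_transaction)
# → "committed", on the third attempt
```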
Three rules, regardless of level
- Keep transactions short. Long transactions mean long-held locks, more conflicts, more retries, more replication lag.
- Don't do network I/O inside transactions. Calling a third-party API while holding a row lock is how databases die.
- Never assume the level you set is the level you got. Check it from inside a transaction (SHOW transaction_isolation in Postgres, SELECT @@transaction_isolation in MySQL), read the docs for your specific database version, and verify with a small concurrency test.
What to read next
Isolation levels are the local concurrency story. The distributed version is eventual consistency and the CAP theorem. For the row-level mechanics under all of this, database indexing is the companion read. The Splitwise LLD writeup is a great applied example — shared balances across multiple users is exactly the workload where picking the right isolation level (and writing the right SELECT FOR UPDATE) is the difference between correct and corrupt.
Frequently asked questions
What's the default isolation level in Postgres? In MySQL?
Postgres defaults to Read Committed; MySQL/InnoDB defaults to Repeatable Read. Both implementations are slightly different from the SQL spec — read your engine's docs, don't trust the labels.
Should I just use Serializable everywhere?
It's a defensible choice for correctness-critical apps that don't have extreme contention. The cost is more retries (serialization failures) and slightly more work for the database; the benefit is bugs you'd otherwise never find.
What's optimistic vs pessimistic locking?
Pessimistic: take a lock before reading and hold it until commit. Optimistic: read freely, check at commit time whether anyone else changed the data, retry if so. SERIALIZABLE in Postgres is essentially database-managed optimistic concurrency.