LRU Cache
O(1) get/put with a HashMap + doubly-linked list. The textbook design problem.
Tags: design · linked-list · hashmap
Intro
Implement an LRU (Least Recently Used) cache supporting get(key) and put(key, value) in O(1). The trick: a HashMap for O(1) lookup + a doubly-linked list for O(1) reorder/eviction.
Key insight
The hash map stores `key → list-node`. The list keeps recency order: head = most recently used, tail = least recently used.
Approach
1. On `get`: if the key is in the map, move its node to the head and return its value; otherwise return -1.
2. On `put`: if the key exists, update the value and move its node to the head. Otherwise create a new node at the head; if the size now exceeds capacity, evict the tail node and remove its key from the map.
3. Use sentinel head/tail nodes so the splice operations never need null checks.
Complexity
- Time: O(1) per operation
- Space: O(capacity)
Code
```java
import java.util.HashMap;
import java.util.Map;

class LRUCache {
    static class Node {
        int k, v;
        Node prev, next;
        Node(int k, int v) { this.k = k; this.v = v; }
    }

    private final Map<Integer, Node> map = new HashMap<>();
    private final int cap;
    // Sentinels: head.next is the MRU entry, tail.prev is the LRU entry.
    private final Node head = new Node(0, 0), tail = new Node(0, 0);

    LRUCache(int capacity) {
        cap = capacity;
        head.next = tail; tail.prev = head;
    }

    public int get(int key) {
        Node n = map.get(key);
        if (n == null) return -1;
        unlink(n); pushFront(n); // mark as most recently used
        return n.v;
    }

    public void put(int key, int value) {
        Node n = map.get(key);
        if (n != null) { n.v = value; unlink(n); pushFront(n); return; }
        n = new Node(key, value);
        map.put(key, n); pushFront(n);
        if (map.size() > cap) {
            Node lru = tail.prev;           // least recently used
            unlink(lru); map.remove(lru.k); // evict from list AND map
        }
    }

    private void unlink(Node n) { n.prev.next = n.next; n.next.prev = n.prev; }

    private void pushFront(Node n) {
        n.next = head.next; n.prev = head;
        head.next.prev = n; head.next = n;
    }
}
```

Pitfalls
- Reaching for the `LinkedHashMap` shortcut in an interview: clarify with the interviewer first whether it is acceptable.
- Forgetting to remove the evicted key from the map: the list stays bounded but the map grows without limit, a slow memory leak.
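For reference, the shortcut in the first pitfall is `java.util.LinkedHashMap` constructed with `accessOrder = true`, which maintains access order internally and exposes the `removeEldestEntry` hook for eviction. A minimal sketch (the class name is made up for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache via the LinkedHashMap shortcut. accessOrder = true makes
// iteration order follow access order; removeEldestEntry is consulted
// after each insertion and, when it returns true, the eldest (least
// recently used) entry is removed automatically.
class LruViaLinkedHashMap<K, V> extends LinkedHashMap<K, V> {
    private final int cap;

    LruViaLinkedHashMap(int cap) {
        super(16, 0.75f, true); // third argument: accessOrder
        this.cap = cap;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > cap; // evict once capacity is exceeded
    }

    public static void main(String[] args) {
        LruViaLinkedHashMap<Integer, Integer> c = new LruViaLinkedHashMap<>(2);
        c.put(1, 1);
        c.put(2, 2);
        c.get(1);    // touch 1, so 2 is now least recently used
        c.put(3, 3); // evicts 2
        System.out.println(c.keySet()); // prints [1, 3]
    }
}
```

This is exactly why interviewers often disallow it: the whole data-structure design collapses into a constructor argument and one override.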
Variants & follow-ups
- LFU Cache (LC 460)
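The LFU follow-up also admits an O(1) design in the same spirit: keep the `key → value` map, but group keys into frequency buckets (one `LinkedHashSet` per use count, so ties within a bucket break in LRU order) and track the minimum frequency. A sketch of one common approach, assuming LC 460's contract (field names are my own):

```java
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;

// LFU cache sketch: keys live in frequency buckets; each bucket keeps
// insertion (= recency) order, so eviction takes the least recently
// used key from the minimum-frequency bucket.
class LFUCache {
    private final int cap;
    private int minFreq = 0;
    private final Map<Integer, Integer> vals = new HashMap<>();  // key -> value
    private final Map<Integer, Integer> freq = new HashMap<>();  // key -> use count
    private final Map<Integer, LinkedHashSet<Integer>> buckets = new HashMap<>(); // count -> keys

    LFUCache(int capacity) { this.cap = capacity; }

    public int get(int key) {
        if (!vals.containsKey(key)) return -1;
        touch(key);
        return vals.get(key);
    }

    public void put(int key, int value) {
        if (cap <= 0) return;
        if (vals.containsKey(key)) { vals.put(key, value); touch(key); return; }
        if (vals.size() >= cap) {
            int evict = buckets.get(minFreq).iterator().next(); // LRU key at min frequency
            buckets.get(minFreq).remove(evict);
            vals.remove(evict);
            freq.remove(evict);
        }
        vals.put(key, value);
        freq.put(key, 1);
        buckets.computeIfAbsent(1, f -> new LinkedHashSet<>()).add(key);
        minFreq = 1; // a fresh key always starts at frequency 1
    }

    private void touch(int key) { // move key to the next frequency bucket
        int f = freq.get(key);
        buckets.get(f).remove(key);
        if (f == minFreq && buckets.get(f).isEmpty()) minFreq++;
        freq.put(key, f + 1);
        buckets.computeIfAbsent(f + 1, x -> new LinkedHashSet<>()).add(key);
    }
}
```

On eviction, the least recently used key within the minimum-frequency bucket goes first, which matches LC 460's tie-breaking rule.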