✒️ Quill

Building an LRU Cache

by ada · 2026-05-01 · edited 2026-05-01

An LRU cache evicts the least recently used item when it's full. The interview classic. The right answer combines two structures:

  • A hash map for O(1) lookup by key.
  • A doubly linked list to track usage order.

The hash map stores keys mapped to list nodes. Reading or writing a key moves its node to the front of the list. Eviction removes the node at the back.

class _Node:
    __slots__ = ("key", "value", "prev", "next")
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity: int):
        self.cap = capacity
        self.map: dict = {}
        # Sentinel nodes simplify edge cases — the list is never empty.
        self.head = _Node(None, None)
        self.tail = _Node(None, None)
        self.head.next = self.tail
        self.tail.prev = self.head

    def _add_front(self, node):
        node.next = self.head.next
        node.prev = self.head
        self.head.next.prev = node
        self.head.next = node

    def _remove(self, node):
        node.prev.next = node.next
        node.next.prev = node.prev

    def get(self, key):
        node = self.map.get(key)
        if node is None:
            return None
        self._remove(node)
        self._add_front(node)
        return node.value

    def put(self, key, value):
        node = self.map.get(key)
        if node is not None:
            node.value = value
            self._remove(node)
            self._add_front(node)
            return
        if len(self.map) >= self.cap:
            evict = self.tail.prev
            self._remove(evict)
            del self.map[evict.key]
        node = _Node(key, value)
        self._add_front(node)
        self.map[key] = node
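To sanity-check the behavior, here is a compact equivalent built on collections.OrderedDict, which keeps insertion order and supports moving a key to the end in O(1). The class name LRUCacheOD is mine, for illustration — it is not a standard library class:

```python
from collections import OrderedDict

class LRUCacheOD:
    """Compact LRU sketch: OrderedDict plays both hash map and usage list."""
    def __init__(self, capacity: int):
        self.cap = capacity
        self.data: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.cap:
            self.data.popitem(last=False)   # evict least recently used
        self.data[key] = value

cache = LRUCacheOD(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")       # "a" is now most recently used
cache.put("c", 3)    # evicts "b", the least recently used key
```

After this sequence, "b" is gone while "a" and "c" remain — the same trace you should get from the hand-rolled class above.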

The sentinel trick

Keeping a permanent dummy head and tail means _add_front and _remove never have to special-case "what if the list is empty." There are always nodes to point to, so every node's prev and next are guaranteed non-None. A lot of linked-list code gets noticeably shorter and less error-prone when you do this.
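For contrast, here is a hypothetical version without sentinels. Every operation has to branch on whether it is touching the head, the tail, or an empty list — branches the sentinel version simply doesn't have:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = self.next = None

class ListNoSentinel:
    """Doubly linked list WITHOUT sentinels: note the extra branches."""
    def __init__(self):
        self.head = self.tail = None

    def push_front(self, node):
        node.next = self.head
        node.prev = None
        if self.head is not None:   # non-empty: fix old head's back-pointer
            self.head.prev = node
        else:                       # empty: node is also the tail
            self.tail = node
        self.head = node

    def remove(self, node):
        if node.prev is not None:
            node.prev.next = node.next
        else:                       # removing the head
            self.head = node.next
        if node.next is not None:
            node.next.prev = node.prev
        else:                       # removing the tail
            self.tail = node.prev

lst = ListNoSentinel()
a, b = Node(1), Node(2)
lst.push_front(a)
lst.push_front(b)   # list is now b <-> a
lst.remove(b)       # hits the "removing the head" branch
```

Four branches for two operations, versus zero in the sentinel version — that's the whole trick.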

What you don't need to write

CPython's functools.lru_cache is implemented in C and is much faster than anything you'll write in Python. The version above is the mental model — you reach for it when you need a custom eviction policy, or when you're embedding the cache inside another data structure where you need direct control over the nodes.
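When all you need is memoization with a bounded cache, the decorator is a one-liner. A small sketch showing the eviction in action:

```python
from functools import lru_cache

@lru_cache(maxsize=2)
def square(n):
    # Stand-in for an expensive computation.
    return n * n

square(2)   # miss
square(3)   # miss
square(2)   # hit: 2 becomes most recently used
square(4)   # miss: evicts 3, the least recently used entry
info = square.cache_info()
```

cache_info() reports hits, misses, maxsize, and the current size, which makes it easy to confirm the LRU policy without reaching into any internals.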