Linked lists are over-represented in interviews and under-represented in production code — arrays beat them on cache locality almost everywhere. The exceptions are real: LRU caches, free lists, intrusive lists in kernels, and any structure that needs O(1) splice with held references.
When the linked list is actually the right structure
- LRU / LFU cache. Doubly-linked list + hash map: the hash map finds the node, the list reorders in O(1). The canonical use.
- Free lists / object pools. Memory allocators. Splice and unsplice without copying.
- Intrusive lists. Linux kernel style — the link pointers live inside the data structure, not a wrapping node. Zero allocation per insert.
- Streaming with stable references. When a consumer holds a pointer to “their place,” and you splice without invalidating it. (Rare in app code; common in low-level systems.)
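The intrusive idea translates to Python as a sketch (class and function names here are hypothetical): the prev/next fields live on the payload object itself, so inserting into a list allocates no wrapper node.

```python
class Job:
    """Intrusive style: the link fields live inside the object itself,
    so no separate wrapper node is allocated per insert."""
    def __init__(self, name):
        self.name = name
        self.prev = None   # intrusive link fields
        self.next = None

def splice_after(anchor, node):
    """O(1) insert of node right after anchor."""
    node.prev, node.next = anchor, anchor.next
    if anchor.next:
        anchor.next.prev = node
    anchor.next = node

def unlink(node):
    """O(1) removal; the object keeps its identity and data."""
    if node.prev:
        node.prev.next = node.next
    if node.next:
        node.next.prev = node.prev
    node.prev = node.next = None
```

Because the caller holds the `Job` itself, unlinking and resplicing never invalidates anyone's reference — the "stable references" property above falls out for free.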
For everything else — ArrayList/Vec/list in Python — flat arrays win.
Patterns that recur
Dummy head node. Whenever the answer might modify the head, prepend a dummy and return dummy.next. Removes the special case.
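A minimal illustration (node class and function names are mine, not from any particular library): deleting every node with a given value, where the dummy head means "delete the real head" is not a special case.

```python
class Node:
    def __init__(self, val, next=None):
        self.val, self.next = val, next

def remove_all(head, target):
    """Delete every node whose value equals target.
    The dummy head lets the loop treat the real head like any other node."""
    dummy = Node(None, head)
    curr = dummy
    while curr.next:
        if curr.next.val == target:
            curr.next = curr.next.next   # unlink; don't advance curr
        else:
            curr = curr.next
    return dummy.next
```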
Two pointers, fast and slow. Cycle detection (Floyd’s), find middle, find k-from-end. The discipline: write the invariant (“when slow has moved k, fast has moved 2k”) before coding.
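Both uses fit in a few lines; a sketch with an assumed `Node(val, next)` class, with the invariant stated as a comment before the loop:

```python
class Node:
    def __init__(self, val, next=None):
        self.val, self.next = val, next

def middle(head):
    # Invariant: when slow has taken k steps, fast has taken 2k.
    # When fast runs off the end, slow sits at the middle.
    slow = fast = head
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    return slow

def has_cycle(head):
    # Floyd's: if a cycle exists, fast gains one node on slow
    # per step inside the cycle, so they must meet.
    slow = fast = head
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
        if slow is fast:
            return True
    return False
```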
In-place reverse. Three pointers (prev, curr, next); flip curr.next = prev per step. Memorize the dance once.
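The dance, written out once (again assuming a bare `Node(val, next)` class):

```python
class Node:
    def __init__(self, val, next=None):
        self.val, self.next = val, next

def reverse(head):
    """Iterative in-place reverse: three pointers, one flipped link per step."""
    prev = None
    curr = head
    while curr:
        nxt = curr.next         # save the forward link
        curr.next = prev        # flip
        prev, curr = curr, nxt  # advance both pointers
    return prev                 # prev is the new head
```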
Merge two sorted lists. Dummy head, advance the smaller, splice. Generalizes to k-way merge with a min-heap.
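A sketch of the two-way merge under the same assumed `Node` class:

```python
class Node:
    def __init__(self, val, next=None):
        self.val, self.next = val, next

def merge(a, b):
    """Dummy head; repeatedly splice the smaller front node onto the tail."""
    dummy = tail = Node(None)
    while a and b:
        if a.val <= b.val:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b  # splice whichever list still has nodes
    return dummy.next
```

For the k-way version, push the head of each list into a min-heap keyed by value and pop/refill; in Python, `heapq.merge` does this for arbitrary sorted iterables.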
The doubly-linked list + hashmap pattern (LRU)
The interview question that’s actually a real production pattern. The tricky part is keeping invariants when manipulating both structures:
- Map: key → node.
- List: head = least recently used, tail = most recently used (or vice versa; pick once and stay consistent).

get: hash map lookup → unlink node → splice at MRU end.
put: if the key exists, unlink the old node; create a node; splice at the MRU end; if size exceeds capacity, unlink the LRU node and del map[key].
unlink and splice are the only mutating operations; everything composes from those two.
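A compact sketch of that composition (my own class layout, not a reference implementation): sentinel head/tail nodes mean unlink and splice never special-case the ends, and get/put are built from nothing but those two operations.

```python
class LRUCache:
    """Doubly-linked list + dict. Head side = LRU, tail side = MRU.
    Sentinel nodes at both ends keep _unlink/_splice_mru branch-free."""

    class _Node:
        __slots__ = ("key", "val", "prev", "next")
        def __init__(self, key=None, val=None):
            self.key, self.val = key, val
            self.prev = self.next = None

    def __init__(self, capacity):
        self.cap = capacity
        self.map = {}                     # key -> node
        self.head = self._Node()          # LRU sentinel
        self.tail = self._Node()          # MRU sentinel
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _splice_mru(self, node):
        last = self.tail.prev
        last.next = self.tail.prev = node
        node.prev, node.next = last, self.tail

    def get(self, key):
        if key not in self.map:
            return None
        node = self.map[key]
        self._unlink(node)            # reorder: this key is now MRU
        self._splice_mru(node)
        return node.val

    def put(self, key, val):
        if key in self.map:
            self._unlink(self.map[key])
        node = self._Node(key, val)
        self.map[key] = node
        self._splice_mru(node)
        if len(self.map) > self.cap:  # evict from the LRU end
            lru = self.head.next
            self._unlink(lru)
            del self.map[lru.key]
```

In production Python you would more likely reach for `collections.OrderedDict` (`move_to_end` + `popitem(last=False)`) or `functools.lru_cache`, which implement the same structure.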
What linked lists don’t do
- Random access — O(n).
- Cache-friendly iteration — every step is a pointer chase to potentially cold memory.
- Predictable memory — every node is a heap allocation.
If you “need” a linked list for performance reasons, benchmark against a flat array first. You’ll usually be wrong.
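The benchmark is a few lines with `timeit` (a sketch; exact numbers depend on your machine and runtime, but the ordering rarely surprises):

```python
import timeit

class Node:
    def __init__(self, val, next=None):
        self.val, self.next = val, next

def build(n):
    """Build a linked list 0..n-1."""
    head = None
    for v in range(n - 1, -1, -1):
        head = Node(v, head)
    return head

def sum_linked(head):
    """Same work as sum(list), but one pointer chase per element."""
    total = 0
    while head:
        total += head.val
        head = head.next
    return total

N = 100_000
head, arr = build(N), list(range(N))
assert sum_linked(head) == sum(arr)  # identical result, different layout

t_linked = timeit.timeit(lambda: sum_linked(head), number=20)
t_array = timeit.timeit(lambda: sum(arr), number=20)
print(f"linked: {t_linked:.3f}s  array: {t_array:.3f}s")
```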