A quick lookup table for the operations that actually matter when you’re sizing a data structure to a workload. The interesting question is rarely “what’s the average case?” — it’s “what does the worst case look like under adversarial input, and does amortization actually buy anything for your access pattern?”
## Containers
| Structure | Access | Search | Insert | Delete | Notes |
|---|---|---|---|---|---|
| Dynamic array | O(1) | O(n) | O(1) amortized, O(n) worst | O(n) | Worst-case insert is the resize. Pin capacity if latency-sensitive. |
| Linked list | O(n) | O(n) | O(1) at known node | O(1) at known node | Cache-hostile. Almost always loses to arrays in practice. |
| Hash table | — | O(1) avg, O(n) worst | O(1) avg, O(n) worst | O(1) avg, O(n) worst | Worst case is a collision storm or rehash. Adversarial keys → DoS. |
| Balanced BST | O(log n) | O(log n) | O(log n) | O(log n) | Use when ordered iteration matters. |
| Heap | O(1) peek | — | O(log n) | O(log n) pop | heapify is O(n). kth largest is O(k log n). |
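The heap row can be exercised directly with the standard-library `heapq` module. A minimal sketch (function name `k_largest` is illustrative) showing the O(n) heapify, the O(1) peek, and a k-largest pass built from O(log n) pops:

```python
import heapq

data = [9, 4, 7, 1, 8, 3, 6]

# heapify is O(n): builds a min-heap in place
heap = data.copy()
heapq.heapify(heap)
assert heap[0] == min(data)  # O(1) peek at the root

def k_largest(items, k):
    # Negate to simulate a max-heap on top of heapq's min-heap:
    # O(n) heapify + O(k log n) for the k pops.
    neg = [-x for x in items]
    heapq.heapify(neg)
    return [-heapq.heappop(neg) for _ in range(k)]

print(k_largest(data, 3))  # [9, 8, 7]
```

In practice `heapq.nlargest(k, items)` packages the same idea; the sketch just makes the cost breakdown visible.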
## Sorting
| Algorithm | Best | Avg | Worst | Space | Stable |
|---|---|---|---|---|---|
| Quicksort | O(n log n) | O(n log n) | O(n²) | O(log n) | No |
| Mergesort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes |
| Heapsort | O(n log n) | O(n log n) | O(n log n) | O(1) | No |
| Timsort | O(n) | O(n log n) | O(n log n) | O(n) | Yes |
| Insertion | O(n) | O(n²) | O(n²) | O(1) | Yes |
Python’s `list.sort` is Timsort — best on partially-ordered data, which is most real data. Quicksort’s O(n²) worst case shows up on already-sorted input with a naive pivot choice; production implementations randomize the pivot or use median-of-three.
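Stability — the “Yes” column for Timsort — is easy to check directly: records that compare equal under the sort key keep their original relative order. A small sketch with illustrative data:

```python
# Tied keys (1 and 2 each appear twice); stable sort preserves
# the original order within each tie group.
records = [("b", 2), ("a", 1), ("c", 2), ("d", 1)]
by_value = sorted(records, key=lambda r: r[1])
print(by_value)  # [('a', 1), ('d', 1), ('b', 2), ('c', 2)]
```

This is why you can sort by a secondary key first and a primary key second to get a multi-key sort.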
## Things that bite
- Hash table worst case is real. Python `dict` has been hardened, but custom hash functions or attacker-controlled keys can still degrade to `O(n)`. Treat unbounded user input as a sizing risk.
- Amortized ≠ predictable. A dynamic array is `O(1)` amortized insert, but the resize happens on *some* insert. If p99 matters, pre-size.
- `O(log n)` collapses for small `n`. Below ~50 elements a linear scan beats a tree because of cache locality and lower constants. “Use the right structure” depends on `n`.
- `O(n)` ≠ `O(n)`. Two algorithms with the same big-O can differ by 10–100× in wall time. Constants and memory access patterns dominate inside a single complexity class.
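The “resize happens on some insert” point is directly observable in CPython via `sys.getsizeof` (a sketch; the exact growth steps are an implementation detail, not a language guarantee):

```python
import sys

xs = []
sizes = []
for _ in range(1000):
    xs.append(None)
    sizes.append(sys.getsizeof(xs))

# Most appends leave the allocation alone; only the occasional
# geometric resize grows it -- that gap is the amortization.
resizes = sum(1 for a, b in zip(sizes, sizes[1:]) if b > a)
print(resizes, "resizes over", len(xs), "appends")
```

If a particular insert must never pay the resize, allocate up front (e.g. `xs = [None] * n` and assign by index) instead of appending.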