A quick lookup table for the operations that actually matter when you’re sizing a data structure to a workload. The interesting question is rarely the average case: it’s what worst-case looks like under adversarial input, and whether amortization actually buys you anything in your access pattern.

Containers

| Structure | Access | Search | Insert | Delete | Notes |
| --- | --- | --- | --- | --- | --- |
| Dynamic array | O(1) | O(n) | O(1) amortized, O(n) worst | O(n) | Worst-case insert is the resize. Pin capacity if latency-sensitive. |
| Linked list | O(n) | O(n) | O(1) at known node | O(1) at known node | Cache-hostile. Almost always loses to arrays in practice. |
| Hash table | O(1) avg, O(n) worst | same | same | same | Worst case is collision storm or rehash. Adversarial keys → DoS. |
| Balanced BST | O(log n) | O(log n) | O(log n) | O(log n) | Use when ordered iteration matters. |
| Heap | O(1) peek | O(n) | O(log n) | O(log n) pop | heapify is O(n). kth largest is O(k log n). |
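The heap row is worth making concrete: `heapq.heapify` builds the heap in O(n), and kth largest falls out as heapify plus k pops. A minimal sketch (the `kth_largest` helper and its inputs are illustrative, not from the original):

```python
import heapq

data = [9, 4, 7, 1, 8, 3, 6]
heapq.heapify(data)      # O(n), builds a min-heap in place
assert data[0] == 1      # O(1) peek at the minimum

def kth_largest(xs, k):
    """kth largest in O(n + k log n): heapify once, pop k-1 times.

    heapq is a min-heap, so negate values to simulate a max-heap.
    """
    neg = [-x for x in xs]
    heapq.heapify(neg)           # O(n)
    for _ in range(k - 1):       # k-1 pops, O(log n) each
        heapq.heappop(neg)
    return -neg[0]

print(kth_largest([9, 4, 7, 1, 8, 3, 6], 3))  # → 7
```

Note there is no fast membership test: searching a heap is a linear scan, which is why the Search column reads O(n).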

Sorting

| Algorithm | Best | Avg | Worst | Space | Stable |
| --- | --- | --- | --- | --- | --- |
| Quicksort | O(n log n) | O(n log n) | O(n²) | O(log n) | No |
| Mergesort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes |
| Heapsort | O(n log n) | O(n log n) | O(n log n) | O(1) | No |
| Timsort | O(n) | O(n log n) | O(n log n) | O(n) | Yes |
| Insertion | O(n) | O(n²) | O(n²) | O(1) | Yes |

Python’s list.sort is Timsort — best on partially-ordered data, which is most real data. Quicksort’s O(n²) worst case shows up on already-sorted input with naive pivot choice; production implementations randomize the pivot or use median-of-three.
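The naive-pivot failure mode is easy to see in a toy quicksort. This is a sketch for illustration, not a production sort:

```python
import random

def naive_quicksort(xs):
    """First-element pivot: O(n log n) on random input, but on
    already-sorted input every partition peels off one element,
    giving n recursion levels and O(n^2) total work."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return naive_quicksort(left) + [pivot] + naive_quicksort(right)

xs = list(range(100))
random.shuffle(xs)
assert naive_quicksort(xs) == sorted(xs)
```

Randomizing the pivot (or taking the median of first/middle/last) makes the sorted-input case look like the average case again.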

Things that bite

  • Hash table worst case is real. Python dict has been hardened, but custom hash functions or attacker-controlled keys can still degrade to O(n). Treat unbounded user input as a sizing risk.
  • Amortized ≠ predictable. A dynamic array is O(1) amortized insert, but the resize happens on some insert. If p99 matters, pre-size.
  • O(log n) collapses for small n. Below ~50 elements a linear scan beats a tree because of cache locality and lower constants. “Use the right structure” depends on n.
  • O(n) ≠ O(n). Two algorithms with the same big-O can differ by 10–100× in wall time. Constants and memory access patterns dominate inside a single complexity class.
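The "amortized ≠ predictable" point is observable directly: CPython over-allocates list storage, so sys.getsizeof stays flat between resizes and jumps on the append that pays the O(n) copy. A sketch, assuming CPython (exact sizes and resize points are an implementation detail that varies by version and platform):

```python
import sys

lst = []
last = sys.getsizeof(lst)
resizes = []                    # indices of appends that triggered a realloc
for i in range(64):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != last:            # capacity jumped: this append paid O(n)
        resizes.append(i)
        last = size

print(resizes)                  # a handful of the 64 appends did the copying

# If p99 latency matters, pre-size so no append ever resizes:
pre = [None] * 64               # one allocation up front
```

Most appends are cheap; a few are not, and they are not the ones you control the timing of.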