Time Complexity Big O Chart
Master the speed of algorithms. Visualize the difference between O(n) and O(n²) with our interactive "Algorithm Race".
Big O Visualizer
See how algorithms slow down as data grows.
Complexity Reference
O(1) - Instant access, no matter how much data.
O(log n) - Cuts the problem in half each step (Binary Search).
O(n) - Checks every item once (a simple loop).
O(n log n) - Efficient sorting (Merge Sort).
O(n²) - Nested loops; slows down fast.
O(2^n) - Doubles every step; unusable for large n.
Why Scale Matters
If you are sorting 5 items, it doesn't matter how you do it. You could throw them on the floor and pick them up. But if you are Google sorting 5 billion items, the difference between O(n) and O(n²) is the difference between seconds and millennia.
Big O Notation is the language we use to predict the future. It tells us: "If my userbase doubles, will my server fees double (Linear)? Or will they quadruple (Quadratic)?"
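To make that concrete, here is a tiny Python sketch (the operation counts are idealized, not measured) showing how doubling n affects linear versus quadratic work:

# Doubling n doubles the work for O(n) but quadruples it for O(n^2).
for n in (1_000, 2_000, 4_000):
    linear_ops = n          # one unit of work per item
    quadratic_ops = n * n   # every item compared against every other item
    print(f"n={n:>5}  linear={linear_ops:>7,}  quadratic={quadratic_ops:>12,}")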
The O(1) Holy Grail
Constant Time means the algorithm is "magic". It takes the same time to find an item in a list of 10 as it does in a list of 10 billion.
Examples:
- Accessing an array by index: arr[5]
- Looking up a key in a Hash Map
- Pushing to a Stack
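A minimal Python sketch of those three operations (the data here is invented purely for illustration):

arr = list(range(1_000_000))
prices = {"apple": 1.50, "banana": 0.25}
stack = []

item = arr[5]             # array access by index: O(1)
price = prices["apple"]   # hash map (dict) lookup: O(1) on average
stack.append("task")      # push onto a stack: amortized O(1)
print(item, price, stack)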
The O(n!) Nightmare
Factorial Time is the "universe heat death" complexity. It happens when you try every possible combination of inputs.
The Math:
- 5! = 120
- 10! = 3.6 Million
- 20! = 2.4 Quintillion (Too big for any CPU)
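As a rough illustration (a toy sketch, not a real Traveling Salesman solver), you can print that growth directly:

import math
from itertools import permutations

# n! matches the figures above and grows explosively.
for n in (5, 10, 20):
    print(f"{n}! = {math.factorial(n):,}")

# Trying every ordering of even a tiny set is already O(n!).
cities = ["A", "B", "C", "D", "E"]
print(sum(1 for _ in permutations(cities)), "possible orderings for 5 cities")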
Common Complexity Classes
O(log n) - Logarithmic
Divide and Conquer. Every time the algorithm takes a step, it ignores half the remaining data. This is why binary search can find a value in a sorted list of a billion items in about 30 steps.
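Here is a minimal binary search sketch (the function name and test data are illustrative; it assumes the input list is already sorted):

def binary_search(sorted_items, target):
    # Return the index of target, or -1 if absent. Each pass halves the search space: O(log n).
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1   # discard the left half
        else:
            hi = mid - 1   # discard the right half
    return -1

print(binary_search(list(range(0, 1_000_000, 2)), 424_242))  # found in ~20 halvings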
O(n) - Linear
Brute Force. You have to look at every single item. Reading a book page-by-page is O(n).
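For contrast, a linear scan sketch that may have to touch every single element:

def linear_search(items, target):
    # Check each item once until target is found: O(n).
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

print(linear_search(["ant", "bee", "cat", "dog"], "dog"))  # worst case: the last item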
O(n log n) - Linearithmic
The gold standard for sorting. It's O(n) work done O(log n) times (or vice versa). Merge Sort and Heap Sort live here.
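A compact merge sort sketch: the list is halved about log n times, and each level does O(n) merging work:

def merge_sort(items):
    # Split the list in half, sort each half, then merge: O(n log n) overall.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves: O(n) work per level of recursion.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]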
Premature Optimization
Don't obsess over O(1) vs O(n) for small scripts. Readable code is better than clever code. Optimize only when you have measured a bottleneck. Remember: "Premature optimization is the root of all evil" - Donald Knuth.
Frequently Asked Questions
What is Big O Notation?
Big O notation is the mathematical notation computer scientists use to describe the performance or complexity of an algorithm. It describes the worst-case scenario, or upper bound, of how the runtime grows as the input size (n) increases.
Why is O(1) called "Constant Time"?
Because the runtime does not change regardless of the input size. Accessing an array index is O(1) because fetching the 1st element takes the exact same time as fetching the 1,000,000th element.
Is O(n log n) good?
Yes! O(n log n) is the best possible time complexity for comparison-based sorting algorithms (like Merge Sort or Quick Sort). It is significantly faster than O(n^2) but slightly slower than O(n).
What is the "n" in Big O?
"n" represents the size of the input. For a list of 10 items, n=10. For a user database of 1 billion, n=1,000,000,000. Big O analysis cares about how the algorithm slows down as n gets extremely large.
Difference between Time and Space Complexity?
Time Complexity measures the computational speed (CPU cycles). Space Complexity measures the amount of memory (RAM) required. An algorithm might be fast (low time complexity) but use huge amounts of memory (high space complexity), or vice versa.
Why is O(n^2) considered bad?
O(n^2) (Quadratic) means if you double the input, the time quadruples. For small data (n=100), it's fine. But for n=100,000, O(n^2) might take hours or days, whereas O(n) would take milliseconds.
What algorithm has O(n!) complexity?
The "Traveling Salesman Problem" (brute force solution) is O(n!). This is factorial growth. Even with n=20, the number of operations is so large that a supercomputer would take centuries to finish.
Does Big O account for constants?
No. Big O drops constants. O(2n) and O(100n) are both just O(n). We care about the rate of growth, not the exact speed. In practice, constants matter, but theoretically, they are ignored.
What is Average Case vs Worst Case?
Worst Case (Big O) is the guarantee: it will never be slower than this. Average Case is what happens on typical data. Quick Sort is O(n log n) on average but O(n^2) in the worst case (for example, with a naive first-element pivot on an already sorted array).
What is O(log n) used for?
Binary Search is the classic example. If you have a sorted phonebook of 1,000,000 names, you don't read every name (O(n)). You open the middle, see if the name is before or after, and cut the problem in half. It takes only ~20 steps!
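A quick sanity check of that step count (a throwaway calculation, nothing more):

import math

# Each step halves the remaining names, so the step count is roughly log2(n).
print(math.ceil(math.log2(1_000_000)))  # 20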
Is O(1) always faster than O(n)?
For large n, yes. But for small n, an O(n) algorithm can be faster if the O(1) algorithm has a large setup cost (a big constant factor). Big O describes behavior as n grows large, not the exact speed for small inputs.
What is Amortized Analysis?
It describes the average time per operation over a sequence of operations. For example, resizing a Dynamic Array (ArrayList) is O(n), but it happens so rarely that adding an item is considered "Amortized O(1)".
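A rough sketch of that behavior using Python's built-in list, which over-allocates when it resizes (the exact resize points are a CPython implementation detail):

import sys

# Most appends are O(1); the occasional O(n) resize shows up as a size jump.
items = []
last_size = sys.getsizeof(items)
for i in range(100):
    items.append(i)
    size = sys.getsizeof(items)
    if size != last_size:
        print(f"resized after {len(items)} items: {last_size} -> {size} bytes")
        last_size = size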
How do I identify O(n^2) code?
Look for nested loops! If you have a "for loop" inside another "for loop", and both iterate up to n, that is n * n = n^2.
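A typical O(n^2) shape, sketched here as a hypothetical duplicate-pair check:

def has_duplicate_pair(items):
    # Compare every item with every other item: n * n comparisons = O(n^2).
    for i in range(len(items)):          # outer loop: n iterations
        for j in range(len(items)):      # inner loop: n iterations each
            if i != j and items[i] == items[j]:
                return True
    return False

print(has_duplicate_pair([3, 1, 4, 1, 5]))  # True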
What is the complexity of Hash Maps?
Hash Map (Dictionary) lookups are O(1) on average. This makes them incredibly efficient for key-value storage. However, in the worst case (many collisions), they degrade to O(n).
Can we calculate complexity for recursive functions?
Yes, typically using the "Master Theorem" or by drawing a recursion tree. For example, the Fibonacci sequence (naive recursion) is O(2^n) because each call spawns two more calls.
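A minimal sketch of that naive recursion (memoizing the results would cut it down to O(n)):

def fib(n):
    # Each call spawns two more calls, so the call tree grows roughly as O(2^n).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55, computed with ~177 recursive calls; fib(50) this way is hopeless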