Chapter 42: Time Complexity

DSA Time Complexity – let’s understand this topic properly and deeply, as if I’m sitting next to you in the library with a notebook: explaining slowly, drawing tables, giving plenty of real-life examples, comparing the different cases, and, most importantly, helping you feel why time complexity is the single most important concept in Data Structures & Algorithms.

What is Time Complexity really?

Time complexity is a way to measure how the running time of an algorithm grows as the input size (usually called n) becomes very large.

In simple words:

Time complexity tells us: “If my input becomes 10 times bigger, how many times slower (or faster) does my program become?”

We don’t measure exact seconds (because that depends on the computer, the language, etc.). Instead we measure how the number of basic operations grows with input size.

Most Common Time Complexities (you must know these by heart)

| Notation | Name | Growth rate (as n increases) | Real feeling (when n = 1 million) | Example algorithms / situations |
|---|---|---|---|---|
| O(1) | Constant time | Always the same time | Instant | Array access arr[i], HashMap get/put (average) |
| O(log n) | Logarithmic | Very slow growth | ~20 operations | Binary Search, balanced BST operations |
| O(n) | Linear time | Grows directly with n | 1 million operations | Linear search, traversing an array/list |
| O(n log n) | Linearithmic | Fast even for large n | ~20 million operations | Merge Sort, Quick Sort, Heap Sort |
| O(n²) | Quadratic | Gets painful quickly | 1 trillion operations → too slow | Bubble Sort, Selection Sort, nested loops |
| O(n³) | Cubic | Almost unusable for n > 1000 | Extremely slow | Naive matrix multiplication |
| O(2ⁿ) | Exponential | Catastrophic | Impossible (n = 40 → years) | Recursive Fibonacci, subset sum (naive) |
| O(n!) | Factorial | Beyond impossible | Never practical | Traveling Salesman (brute force) |

Very Important Real-life Comparison Table

| Input size n | O(1) | O(log n) | O(n) | O(n log n) | O(n²) | O(2ⁿ) |
|---|---|---|---|---|---|---|
| 10 | 1 | ~3 | 10 | ~30 | 100 | 1,024 |
| 100 | 1 | ~7 | 100 | ~700 | 10,000 | ~10³⁰ |
| 1,000 | 1 | ~10 | 1,000 | ~10,000 | 1 million | huge |
| 10,000 | 1 | ~14 | 10,000 | ~140,000 | 100 million | insane |
| 100,000 | 1 | ~17 | 100,000 | ~1.7M | 10 billion | impossible |
| 1,000,000 | 1 | ~20 | 1 million | ~20 million | 1 trillion | impossible |
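You can reproduce these rough counts yourself. Here is a small sketch (the helper name `op_counts` is made up for illustration) that computes the approximate number of operations for each growth rate:

```python
import math

def op_counts(n):
    """Approximate operation counts for each growth rate at input size n."""
    return {
        "O(1)": 1,
        "O(log n)": round(math.log2(n)),
        "O(n)": n,
        "O(n log n)": round(n * math.log2(n)),
        "O(n^2)": n * n,
        # 2**n explodes fast; skip it for large n
        "O(2^n)": 2 ** n if n <= 64 else float("inf"),
    }

print(op_counts(1_000_000))
```

Running it for n = 1,000,000 gives ~20 for O(log n) and ~20 million for O(n log n), matching the table.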

Rule of thumb most people remember:

  • 10⁸ operations ≈ 1 second (on modern computers, roughly)
  • O(n²) → okay up to n ≈ 10⁴ (10,000)
  • O(n log n) → okay up to n ≈ 10⁷–10⁸
  • O(n) → okay up to n ≈ 10⁸–10⁹
  • O(2ⁿ) → only tiny n (n ≤ 30–40)

How to Find Time Complexity (step-by-step thinking)

  1. Look at loops

    • Single loop → usually O(n)
    • Two nested loops → usually O(n²)
    • Three nested loops → O(n³)
  2. Look inside loops

    • If inside loop you do constant work → multiply by O(1)
    • If inside loop you do binary search → multiply by O(log n)
  3. Drop lower order terms & constants

    • 3n² + 5n + 100 → O(n²)
    • 7n log n + 4n + 200 → O(n log n)
  4. Recursive functions → use recurrence relations

    • Binary search → T(n) = T(n/2) + O(1) → O(log n)
    • Merge Sort → T(n) = 2T(n/2) + O(n) → O(n log n)
    • Naive Fibonacci → T(n) = T(n-1) + T(n-2) + O(1) → O(2ⁿ)
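To make steps 1–3 concrete, here is a sketch (the function name and data are made up for illustration) of a loop that runs a binary search on each iteration — O(n) iterations times O(log m) per iteration gives O(n log m):

```python
import bisect

def count_matches(queries, sorted_data):
    """Count how many queries appear in sorted_data."""
    count = 0
    for q in queries:                              # runs n times -> O(n)
        i = bisect.bisect_left(sorted_data, q)     # binary search -> O(log m)
        if i < len(sorted_data) and sorted_data[i] == q:
            count += 1                             # constant work -> O(1)
    return count                                   # total: O(n log m)

print(count_matches([2, 5, 7], [1, 2, 3, 5, 8]))  # → 2
```

Applying step 3 (drop constants), O(n · log m + n · 1) simplifies to O(n log m).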

Very Detailed Real Examples with Code

1. O(1) – Constant time

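A minimal Python sketch of constant-time operations (the variable names are just for illustration):

```python
def get_first(arr):
    """Index access takes the same time regardless of len(arr) -> O(1)."""
    return arr[0]

ages = {"alice": 30, "bob": 25}

print(get_first([10, 20, 30]))  # → 10
print(ages.get("bob"))          # → 25 (average-case O(1) dict lookup)
```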

Whether the array has 10 elements or 10 million → same time.

2. O(log n) – Binary Search

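A minimal Python sketch of the classic iterative binary search on a sorted list:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 -> O(log n)."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # middle of the current search space
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1                     # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # → 3
```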

Every iteration halves the search space → log₂(n) steps. For n = 1,000,000 that is ~20 comparisons.

3. O(n) – Linear Search

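A minimal Python sketch of linear search:

```python
def linear_search(arr, target):
    """Scan elements left to right until target is found -> O(n) worst case."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

print(linear_search([4, 2, 7, 1], 7))  # → 2
```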

In the worst case it must look at every element once → up to n comparisons.

4. O(n log n) – Merge Sort

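A minimal Python sketch of Merge Sort:

```python
def merge_sort(arr):
    """Split in half (log n levels), merge each level in O(n) -> O(n log n)."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])   # leftover elements are already sorted
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # → [1, 2, 5, 7, 9]
```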
  • Divides into half → log n levels
  • Each level does O(n) merging work
  • Total → n × log n

5. O(n²) – Bubble Sort

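A minimal Python sketch of Bubble Sort:

```python
def bubble_sort(arr):
    """Repeatedly swap adjacent out-of-order pairs -> O(n^2) comparisons."""
    a = list(arr)                   # copy so the input isn't mutated
    n = len(a)
    for i in range(n - 1):          # outer loop: n - 1 passes
        for j in range(n - 1 - i):  # inner loop shrinks each pass
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([5, 1, 4, 2]))  # → [1, 2, 4, 5]
```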

Two nested loops → n × n = n² comparisons/swaps

Common Time Complexity Cheat Sheet (memorize this table)

| Big-O | Name | 10³ (1,000) | 10⁵ (100,000) | 10⁶ (1 million) | 10⁷ (10 million) |
|---|---|---|---|---|---|
| O(1) | Constant | 1 | 1 | 1 | 1 |
| O(log n) | Logarithmic | ~10 | ~17 | ~20 | ~23 |
| O(n) | Linear | 1,000 | 100,000 | 1 million | 10 million |
| O(n log n) | Linearithmic | ~10,000 | ~1.7M | ~20M | ~230M |
| O(n²) | Quadratic | 1 million | 10 billion | 1 trillion | 100 trillion |
| O(n³) | Cubic | 1 billion | 1 quadrillion | 1 quintillion | far beyond practical |
| O(2ⁿ) | Exponential | ~10³⁰¹ | impossible | impossible | impossible |

Quick Rules of Thumb for Interviews

  • 10⁸ ~ 10⁹ operations ≈ acceptable for 1 second (modern judges)
  • O(n²) → safe up to n ≈ 10⁴ (10,000)
  • O(n log n) → safe up to n ≈ 10⁷–10⁸
  • O(n) → safe up to n ≈ 10⁸–10⁹
  • O(2ⁿ) → only for n ≤ 30–35

Summary – Time Complexity in one line each

  • Describes how running time grows with input size n
  • We use Big-O notation → worst case upper bound
  • Most important complexities: O(1), O(log n), O(n), O(n log n), O(n²), O(2ⁿ)
  • O(n log n) is the sweet spot for sorting & many efficient algorithms
  • O(n²) is usually the limit for n ≈ 10⁴–10⁵
  • Understanding time complexity separates beginner from intermediate/advanced coders
  • Almost every interview question will ask you to analyze time complexity

Would you like to go deeper into any part of time complexity?

Very common next topics:

  • How to calculate time complexity step-by-step for any code
  • Amortized analysis (why dynamic array append is O(1) amortized)
  • Space complexity (very similar but for memory)
  • Common time complexity traps interviewers use
  • Master theorem for solving recurrences (Merge Sort, Binary Search, etc.)
  • When O(n²) is acceptable vs when you must optimize

