Chapter 60: Dynamic Programming

Dynamic Programming (DP) is one of the most important and most frequently asked topics in Data Structures & Algorithms — especially in coding interviews.

Many students fear DP at first because it feels “magical” or “too abstract”, but once you really understand the core idea, it becomes one of the most satisfying and logical techniques in programming.

Let's go through it slowly and clearly, like a friend sitting next to you who wants you to truly get it, not just memorize patterns.

1. What problem does Dynamic Programming solve?

Dynamic Programming is used when:

  • A problem can be broken into smaller, overlapping subproblems
  • The same subproblems are solved multiple times if we use naive recursion
  • We can store the result of each subproblem so we solve it only once

In simple words:

DP = recursion + memory (caching the results of subproblems)

Without DP → we waste huge amounts of time recomputing the same subproblems again and again.
With DP → we compute each subproblem only once and reuse the answer.

2. Two classic real-life analogies

Analogy 1 – Fibonacci stairs

Imagine you want to know how many ways you can climb a staircase of n steps (taking 1 or 2 steps at a time).

Naive recursive thinking:

  • To reach step n → you came from step (n-1) or step (n-2)
  • So ways(n) = ways(n-1) + ways(n-2)

But if you keep doing this recursively:

  • ways(5) calls ways(4) and ways(3)
  • ways(4) calls ways(3) and ways(2)
  • ways(3) is called again and again from different branches

This creates an exponential tree → very slow
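That naive recursion can be written directly in a few lines. This is a sketch for illustration (the function name ways is our own choice); it gives correct answers but takes exponential time, because ways(3), ways(2), and so on are recomputed in every branch of the call tree:

```python
def ways(n):
    # Base cases: one way to stand at the ground, one way to reach step 1.
    if n <= 1:
        return 1
    # You arrived either from step n-1 (a 1-step) or from step n-2 (a 2-step).
    # The same subproblems are recomputed in every branch -> roughly O(2^n) calls.
    return ways(n - 1) + ways(n - 2)

print(ways(5))  # 8
```

Try ways(40) and you will feel the slowdown yourself.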

Dynamic Programming solution:

Instead of recomputing ways(3) every time, we remember it after the first time.

We make a small table (or array):

  • ways[0] = 1 (already at ground)
  • ways[1] = 1 (only one step)
  • ways[2] = ways[1] + ways[0] = 2
  • ways[3] = ways[2] + ways[1] = 3
  • ways[4] = ways[3] + ways[2] = 5
  • ways[5] = ways[4] + ways[3] = 8

We filled the table once from bottom to top → very fast
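The table above translates almost line by line into code. A minimal bottom-up sketch (again, the name ways is ours):

```python
def ways(n):
    # dp[i] = number of ways to reach step i, filled from the bottom up.
    dp = [0] * (n + 1)
    dp[0] = 1          # already at the ground
    if n >= 1:
        dp[1] = 1      # only one way: a single 1-step
    for i in range(2, n + 1):
        # Same recurrence as before, but each entry is computed exactly once.
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(ways(5))  # 8, matching the table above
```

This runs in O(n) time instead of exponential time.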

Analogy 2 – Shortest path in a city grid

You want to go from home (top-left) to office (bottom-right) in a city grid. You can only move right or down.

Naive recursion would try every possible path → exponential number of paths.

With DP:

We create a table where each cell says: “What is the shortest path from (0,0) to this cell?”

We fill the table row by row, left to right:

  • First row → only right moves possible
  • First column → only down moves possible
  • Other cells → min(coming from left, coming from above) + cost of current cell

We compute each cell only once → very efficient
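The three filling rules can be sketched like this. The function name min_path_cost and the example grid are made up for illustration, and "shortest" here means the minimum total cost of the cells visited:

```python
def min_path_cost(grid):
    rows, cols = len(grid), len(grid[0])
    dp = [[0] * cols for _ in range(rows)]
    dp[0][0] = grid[0][0]
    # First row: only right moves are possible.
    for c in range(1, cols):
        dp[0][c] = dp[0][c - 1] + grid[0][c]
    # First column: only down moves are possible.
    for r in range(1, rows):
        dp[r][0] = dp[r - 1][0] + grid[r][0]
    # Other cells: cheaper of (coming from above, coming from the left) + own cost.
    for r in range(1, rows):
        for c in range(1, cols):
            dp[r][c] = min(dp[r - 1][c], dp[r][c - 1]) + grid[r][c]
    return dp[rows - 1][cols - 1]

print(min_path_cost([[1, 3, 1],
                     [1, 5, 1],
                     [4, 2, 1]]))  # 7  (path 1 -> 3 -> 1 -> 1 -> 1)
```

Each of the rows × cols cells is computed once, so the whole thing is O(rows × cols) instead of exponential.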

3. The two main styles of Dynamic Programming

There are two major ways to implement DP — both solve the same problems:

Style 1 – Top-down (Memoization)

  • How it works: start from the original problem → recurse → cache each result when it is first computed
  • Code style: recursion + a dictionary/array cache
  • In interviews: very common
  • Pros: natural recursive structure; only computes the subproblems that are actually needed

Style 2 – Bottom-up (Tabulation)

  • How it works: start from the smallest subproblems → build up to the original problem
  • Code style: iterative loops that fill a table
  • In interviews: also very common
  • Pros: no recursion stack; better cache performance; easier space optimization

Both give the same asymptotic time complexity; which one to choose is mostly personal/team preference (and sometimes dictated by problem constraints, such as recursion depth limits).
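Here is the top-down style applied to the staircase problem from Analogy 1: plain recursion plus a cache. This is a sketch with an explicit dictionary cache; in practice functools.lru_cache would do the same job:

```python
def ways(n, memo=None):
    if memo is None:
        memo = {}                     # cache shared by all recursive calls
    if n <= 1:
        return 1                      # base cases: ground level and step 1
    if n not in memo:
        # Compute each subproblem once, then reuse it from the cache.
        memo[n] = ways(n - 1, memo) + ways(n - 2, memo)
    return memo[n]

print(ways(50))  # answers instantly; naive recursion would take ages
```

Note how the recurrence is identical to the naive version; only the caching is new.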

4. Classic example: 0/1 Knapsack (very detailed with table)

Problem:

  • n items
  • Each item has weight wᵢ and value vᵢ
  • Knapsack capacity W
  • Maximize total value without exceeding W
  • You can take each item 0 or 1 time

Items:

  Item   Weight   Value
    1       1        1
    2       2        6
    3       5       18
    4       6       22
    5       7       28

W = 11

Bottom-up Tabulation (most common interview style)

We define dp[i][w] = maximum value achievable using the first i items with weight limit w

Initialize:

dp[0][w] = 0 (no items)
dp[i][0] = 0 (zero capacity)

Recurrence:

For each item i and each capacity w:

dp[i][w] = max(
    dp[i-1][w],                ← don't take item i
    dp[i-1][w - wᵢ] + vᵢ       ← take item i (only if w ≥ wᵢ)
)

Final answer = dp[n][W]

After filling the entire table, dp[5][11] = 40 (items 3 and 4: value 18 + 22 = 40, weight 5 + 6 = 11, which exactly fills the capacity).
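The whole recurrence fits in a few lines of code. A sketch using the item list above (the function name knapsack_01 is our own):

```python
def knapsack_01(weights, values, W):
    n = len(weights)
    # dp[i][w] = best value using the first i items with capacity w.
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]                    # don't take item i
            if w >= weights[i - 1]:                    # take item i, if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][W]

print(knapsack_01([1, 2, 5, 6, 7], [1, 6, 18, 22, 28], 11))  # 40
```

The two loops visit n × (W+1) states once each, giving the O(n × W) time mentioned below.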

5. Summary – Dynamic Programming in one line each

  • DP solves problems with overlapping subproblems and optimal substructure
  • Two main styles: Memoization (top-down, recursive + cache) and Tabulation (bottom-up, iterative table filling)
  • Memoization = recursion + caching
  • Tabulation = loops + table building from small to large
  • Time complexity usually O(number of states × work per state) (most common: O(n × W) for knapsack-like, O(n²) for LCS, etc.)
  • Space complexity usually O(number of states) — often optimizable
  • One of the most important and most asked topics in interviews
  • Master DP → you can solve hundreds of medium/hard problems
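As a small taste of the space optimization mentioned above: the knapsack recurrence only ever reads row i-1, so the 2-D table can shrink to a single 1-D array, provided we iterate capacities from high to low so each item is used at most once. A sketch (the name knapsack_01_1d is our own):

```python
def knapsack_01_1d(weights, values, W):
    # One row instead of an (n+1) x (W+1) table: O(W) space.
    dp = [0] * (W + 1)
    for wt, val in zip(weights, values):
        # Go right-to-left so dp[w - wt] still holds the previous item's row.
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[W]

print(knapsack_01_1d([1, 2, 5, 6, 7], [1, 6, 18, 22, 28], 11))  # 40
```

Same answer, same O(n × W) time, but only O(W) memory.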

6. Where to go next

Common next steps after this introduction:

  • Top-down (memoization) vs bottom-up (tabulation) – pros & cons on the same problem
  • How to design DP states – how to decide what dp[i][j] means
  • Common DP patterns (0/1 Knapsack, unbounded, LCS, edit distance, etc.)
  • Space optimization in DP (very frequent interview question)
  • One full classic problem (like 0/1 Knapsack or Longest Common Subsequence) with complete table

