Chapter 58: Memoization
Memoization is one of the most important and most frequently used techniques in competitive programming, coding interviews, and real-world software development whenever you are dealing with expensive recursive functions or repeated subproblems.
Let me explain it to you very clearly and slowly — like we are sitting together and I’m teaching a friend who is genuinely confused and wants to really understand it deeply.
1. What Memoization actually is (very simple definition)
Memoization = saving the result of an expensive function call so that when the same inputs come again, we don’t recompute it — we just return the already saved result.
The word comes from Latin memorandum → “to be remembered”.
In programming terms:
Whenever a function is called with certain arguments, we store (“memoize”) the result in a cache (usually a hash map or array). Next time the function is called with exactly the same arguments, we immediately return the cached value instead of recalculating everything.
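To make this concrete, here is a minimal, generic memoization helper in Python. Treat it as a sketch of the idea, not a standard library facility; the names memoize and slow_square are my own, chosen only for illustration:

```python
from functools import wraps

def memoize(func):
    """Cache results of func, keyed by its positional arguments."""
    cache = {}                        # argument tuple -> previously computed result

    @wraps(func)
    def wrapper(*args):
        if args not in cache:         # first time we see these arguments: do the real work
            cache[args] = func(*args)
        return cache[args]            # every later call with the same args is a dict lookup

    return wrapper

@memoize
def slow_square(x):
    print(f"computing {x} * {x} ...")
    return x * x

slow_square(12)   # prints the message and returns 144
slow_square(12)   # returns 144 immediately, nothing is printed
```

This simple pattern only works when the arguments are hashable (numbers, strings, tuples), which is almost always the case in the problems we will look at.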
2. Real-life analogy everyone understands
Imagine you are a chef in a restaurant and you often get orders for the same dish — say, Butter Chicken.
Without memoization:
- Every time someone orders Butter Chicken, you start from scratch: chop onions, fry spices, add chicken, simmer for 30 minutes… → takes 45 minutes each time
With memoization:
- First time someone orders Butter Chicken → you cook it normally (45 min), serve it, but you also save one full pot in the warmer.
- Next time someone orders the same dish → you don’t cook again — you just take from the saved pot (2 minutes)
- You only cook fresh when the saved pot is empty or someone wants a different dish
That’s exactly what memoization does: avoid repeating expensive work when the inputs are the same.
3. The classic example – Fibonacci (everyone’s first memoization)
Naive recursive Fibonacci (very slow):
```python
def fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)
```
For fib(40) → it makes ~331 million recursive calls — takes many seconds.
Why so slow?
Because it recomputes the same values millions of times:
fib(5) is calculated again and again in different branches.
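If you want to see this waste for yourself, here is a tiny experiment (the calls counter is my own instrumentation, added only to count the work; it is not part of the algorithm):

```python
calls = 0

def fib(n):
    global calls
    calls += 1                  # count every single invocation
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)

fib(30)
print(calls)   # 2692537 calls just for n = 30; fib(5) alone gets recomputed over 100,000 times
```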
Now — with memoization (top-down DP):
```python
def fib(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]   # ← this is memoization!
    if n <= 1:
        return n
    memo[n] = fib(n-1, memo) + fib(n-2, memo)
    return memo[n]
```
Now fib(40) takes well under a millisecond: each value from 0 to 40 is computed exactly once (41 computations), and every repeated call is answered straight from the cache.
Before memoization: an exponential tree of calls. After memoization: just a straight line down plus cache lookups.
4. Two ways people implement memoization
Style 1: Top-down (recursive + memo cache) – most common in interviews
```python
def fib(n):
    memo = {}

    def helper(k):
        if k in memo:
            return memo[k]
        if k <= 1:
            return k
        memo[k] = helper(k-1) + helper(k-2)
        return memo[k]

    return helper(n)
```
This is called top-down dynamic programming with memoization.
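Side note: in Python you usually don't even have to write the memo dictionary yourself. The standard library decorator functools.lru_cache does exactly this (functools.cache, available from Python 3.9, is an unbounded shorthand). A minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)    # maxsize=None: remember every result, never evict
def fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)

print(fib(40))   # 102334155, returned almost instantly
```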
Style 2: Bottom-up (iterative DP) – often faster and uses less stack space
```python
def fib(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
```
Both give O(n) time and O(n) space. Bottom-up is usually preferred in competitive programming because:
- No recursion stack overhead
- Better cache performance
- Easier to optimize space to O(1) by keeping only the last two values (see the sketch right after this list)
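As an example of that last point, here is a sketch of the O(1)-space version, which keeps only the previous two values instead of the whole dp array:

```python
def fib(n):
    if n <= 1:
        return n
    prev, curr = 0, 1                    # fib(0) and fib(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr   # slide the window of the last two values forward
    return curr
```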
5. When should you use memoization? (very practical guide)
Use memoization when:
- The function is recursive
- It has repeated subproblems (same arguments are computed many times)
- The subproblems are expensive to compute
- The number of different possible argument combinations is not too huge
Classic problems where memoization shines:
- Fibonacci
- 0/1 Knapsack
- Longest Common Subsequence (LCS), sketched in code right after this list
- Matrix Chain Multiplication
- Subset Sum
- Coin Change (minimum coins)
- Word Break
- Palindrome Partitioning
- Edit Distance
- Many tree DP problems
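To show what memoization looks like when a function takes more than one argument, here is a sketch of Longest Common Subsequence (LCS) from the list above. The memo key is the pair (i, j) of positions in the two strings; the helper function and all its names are my own illustrative choices, not a canonical implementation:

```python
def lcs(a, b):
    memo = {}                          # (i, j) -> length of the LCS of a[i:] and b[j:]

    def helper(i, j):
        if i == len(a) or j == len(b): # ran off the end of either string
            return 0
        if (i, j) in memo:
            return memo[(i, j)]
        if a[i] == b[j]:
            result = 1 + helper(i + 1, j + 1)                  # characters match: extend the subsequence
        else:
            result = max(helper(i + 1, j), helper(i, j + 1))   # skip one character from either string
        memo[(i, j)] = result
        return result

    return helper(0, 0)

print(lcs("ABCBDAB", "BDCABA"))   # 4, e.g. "BCAB"
```

The exact same pattern (a dictionary keyed by a tuple of the arguments) carries over to Knapsack, Edit Distance, and most of the other problems in the list.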
6. Summary – Memoization in one line each
- Memoization = caching results of expensive function calls so we never recompute the same inputs
- Saves enormous time when there are overlapping subproblems
- Classic top-down style: use a dictionary / array to store already computed results
- Turns exponential time → polynomial time (usually O(n) or O(n²))
- Most common in dynamic programming problems
- Bottom-up DP is often preferred in practice (iterative version)
- One of the most important techniques to master for interviews and competitive programming
Would you like to go deeper into memoization?
Very common next topics people ask about:
- Top-down vs Bottom-up DP – pros & cons with examples
- How to choose memoization key (when arguments are multiple)
- Memoization on recursive tree problems (very frequent)
- Common mistakes when using memoization (wrong key, mutable objects, etc.)
- Memoization vs Tabulation – which one to choose when
