Dynamic Programming (Theory)
Concept
Basics
Why Dynamic Programming?
Dynamic Programming (DP) solves problems by breaking them into overlapping subproblems and storing results to avoid redundant computations.
Problems with naive recursion:

- Exponential time complexity (e.g., fib(5) recalculates fib(3) multiple times).
- Stack overflow risk due to deep recursion trees.
Classical Examples:

- Fibonacci sequence
- 0/1 Knapsack
- Coin Change
How DP Optimizes:

- Optimal Substructure: an optimal solution can be built from optimal solutions to its subproblems.
- Overlapping Subproblems: the same subproblems recur many times.
- Caching: store each computed result once (via memoization or tabulation) and reuse it.
Recursion Tree for Fibonacci (n = 5):

```
                  fib(5)
                 /      \
            fib(4)      fib(3)
            /    \      /    \
       fib(3) fib(2) fib(2) fib(1)
       /    \
  fib(2) fib(1)
```

Redundant calls: fib(3) is computed twice and fib(2) three times.
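The redundancy in the tree above can be made concrete by counting invocations; a minimal sketch (the class name and `calls` counter are illustrative, not part of any library):

```java
// Naive recursive Fibonacci with a call counter, illustrating
// the redundant work in the recursion tree above.
public class FibonacciNaive {
    static int calls = 0; // incremented on every invocation of fib

    static int fib(int n) {
        calls++;
        if (n <= 1) return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(fib(5));  // 5
        System.out.println(calls);   // 15 invocations for n = 5
    }
}
```

The call count grows exponentially with n, which is exactly what memoization eliminates.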
Types of Dynamic Programming
1. Memoization (Top-Down)
Java Code (Fibonacci):
```java
import java.util.Arrays;

public class FibonacciMemoization {
    public static int fib(int n, int[] memo) {
        if (n <= 1) return n;               // base cases return without caching
        if (memo[n] != -1) return memo[n];  // cache hit
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
        return memo[n];
    }

    public static void main(String[] args) {
        int n = 5;
        int[] memo = new int[n + 1];
        Arrays.fill(memo, -1); // -1 marks "not yet computed"
        System.out.println(fib(n, memo)); // Output: 5
    }
}
```
Dry Run (n = 5):

| Call | Action | memo[] after |
|---|---|---|
| fib(5) | recurse into fib(4) | [-1, -1, -1, -1, -1, -1] |
| ├─ fib(4) | recurse into fib(3) | [-1, -1, -1, -1, -1, -1] |
| │ ├─ fib(3) | recurse into fib(2) | [-1, -1, -1, -1, -1, -1] |
| │ │ ├─ fib(2) | compute & cache memo[2] = 1 | [-1, -1, 1, -1, -1, -1] |
| │ │ └─ fib(1) | base case | [-1, -1, 1, -1, -1, -1] |
| │ ├─ (fib(3) returns) | cache memo[3] = 2 | [-1, -1, 1, 2, -1, -1] |
| │ └─ fib(2) | cache hit | [-1, -1, 1, 2, -1, -1] |
| ├─ (fib(4) returns) | cache memo[4] = 3 | [-1, -1, 1, 2, 3, -1] |
| └─ fib(3) | cache hit | [-1, -1, 1, 2, 3, -1] |
| (fib(5) returns) | cache memo[5] = 5 | [-1, -1, 1, 2, 3, 5] |

Final result: 5. Note that memo[0] and memo[1] stay -1, because the base cases return before the caching line runs.
Complexity:

- Time: O(n) (each subproblem is computed once)
- Space: O(n) (memo array + recursion call stack)
2. Tabulation (Bottom-Up)
Java Code (Fibonacci):
```java
public class FibonacciTabulation {
    public static int fib(int n) {
        if (n <= 1) return n;
        int[] dp = new int[n + 1];
        dp[0] = 0;
        dp[1] = 1;
        for (int i = 2; i <= n; i++) {
            dp[i] = dp[i - 1] + dp[i - 2]; // build up from smaller subproblems
        }
        return dp[n];
    }
}
```
Space-Optimized Version:

```java
public static int fibOptimized(int n) {
    if (n <= 1) return n;
    int prev = 0, curr = 1; // only the last two values are ever needed
    for (int i = 2; i <= n; i++) {
        int next = prev + curr;
        prev = curr;
        curr = next;
    }
    return curr;
}
```
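The two-variable version must produce the same sequence as the table-based one; a quick standalone self-check (the class name and inlined method are for illustration only):

```java
// Sanity check: fibOptimized against the first few known Fibonacci numbers.
public class FibOptimizedCheck {
    static int fibOptimized(int n) {
        if (n <= 1) return n;
        int prev = 0, curr = 1;
        for (int i = 2; i <= n; i++) {
            int next = prev + curr;
            prev = curr;
            curr = next;
        }
        return curr;
    }

    public static void main(String[] args) {
        int[] expected = {0, 1, 1, 2, 3, 5, 8, 13};
        for (int n = 0; n < expected.length; n++) {
            if (fibOptimized(n) != expected[n]) {
                throw new AssertionError("mismatch at n = " + n);
            }
        }
        System.out.println("all values match"); // prints "all values match"
    }
}
```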
DP Table Evolution (n = 5):

| i | dp[0] | dp[1] | dp[2] | dp[3] | dp[4] | dp[5] |
|---|---|---|---|---|---|---|
| init | 0 | 1 | - | - | - | - |
| 2 | 0 | 1 | 1 | - | - | - |
| 3 | 0 | 1 | 1 | 2 | - | - |
| 4 | 0 | 1 | 1 | 2 | 3 | - |
| 5 | 0 | 1 | 1 | 2 | 3 | 5 |
Complexity:

- Time: O(n)
- Space: O(n) (or O(1) with the space-optimized version)
Memoization vs. Tabulation
| Feature | Memoization (Top-Down) | Tabulation (Bottom-Up) |
|---|---|---|
| Approach | Recursive | Iterative |
| Stack Usage | High | None |
| Space | O(n) cache + stack | O(n) array (optimizable) |
| Readability | More intuitive | Slightly less intuitive |
| Speed | May have overhead | Usually faster |
Transition Flow: Recursion → Memoization → Tabulation
Example: Climbing Stairs
Problem: Ways to climb n stairs by taking 1 or 2 steps at a time.
1. Naive Recursion
```java
public static int climbStairsRec(int n) {
    if (n <= 2) return n; // 1 way for 1 stair, 2 ways for 2 stairs
    return climbStairsRec(n - 1) + climbStairsRec(n - 2);
}
```
Recursion Tree (n = 4):

```
           climb(4)
           /      \
      climb(3)  climb(2)
      /     \
climb(2)  climb(1)
```

(With the base case n <= 2, climb(2) and climb(1) return immediately; climb(2) is still evaluated twice, and the redundancy grows exponentially with n.)

Time: O(2ⁿ)
Space: O(n) (recursion depth)
2. Memoization
```java
public static int climbStairsMemo(int n, int[] memo) {
    if (n <= 2) return n;
    if (memo[n] != 0) return memo[n]; // 0 means "not yet computed"
    memo[n] = climbStairsMemo(n - 1, memo) + climbStairsMemo(n - 2, memo);
    return memo[n];
}
```
Dry Run (n = 4):

- climb(4) recurses into climb(3), which computes and caches memo[3] = 3 from the base cases climb(2) and climb(1).
- climb(4) then calls climb(2), a base case, and caches memo[4] = 5.
- For larger n, repeated calls such as climb(3) become cache hits instead of full recomputations.

Time: O(n)
Space: O(n)
3. Tabulation
```java
public static int climbStairsTab(int n) {
    if (n <= 2) return n;
    int[] dp = new int[n + 1];
    dp[1] = 1;
    dp[2] = 2;
    for (int i = 3; i <= n; i++) {
        dp[i] = dp[i - 1] + dp[i - 2];
    }
    return dp[n];
}
```
Space-Optimized Version:
```java
public static int climbStairsOpt(int n) {
    if (n <= 2) return n;
    int prev = 1, curr = 2;
    for (int i = 3; i <= n; i++) {
        int next = prev + curr;
        prev = curr;
        curr = next;
    }
    return curr;
}
```
DP Table (n = 4):

| i | dp[1] | dp[2] | dp[3] | dp[4] |
|---|---|---|---|---|
| 1 | 1 | - | - | - |
| 2 | 1 | 2 | - | - |
| 3 | 1 | 2 | 3 | - |
| 4 | 1 | 2 | 3 | 5 |
Summary of Trade-offs
| Approach | When to Use | Pros | Cons |
|---|---|---|---|
| Naive Recursion | Small inputs or prototyping | Simple and intuitive | Exponential time, stack overflow |
| Memoization | Recursive structure with overlap | Efficient, easy to write | Stack overhead |
| Tabulation | Large inputs, space-sensitive | Faster, no stack usage | Less intuitive |
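Of the classical examples listed earlier, 0/1 Knapsack follows the same tabulation pattern. A minimal sketch of the common 1-D formulation (the class name, weights, values, and capacity below are illustrative):

```java
public class KnapsackTabulation {
    // dp[w] = best value achievable with remaining capacity w.
    // The capacity loop runs downward so each item is used at most once (the 0/1 constraint).
    static int knapsack(int[] weights, int[] values, int capacity) {
        int[] dp = new int[capacity + 1];
        for (int i = 0; i < weights.length; i++) {
            for (int w = capacity; w >= weights[i]; w--) {
                dp[w] = Math.max(dp[w], dp[w - weights[i]] + values[i]);
            }
        }
        return dp[capacity];
    }

    public static void main(String[] args) {
        int[] weights = {1, 3, 4};
        int[] values  = {15, 50, 60};
        // Best choice at capacity 4: items of weight 1 and 3 (value 15 + 50).
        System.out.println(knapsack(weights, values, 4)); // 65
    }
}
```

Iterating capacity upward instead would allow reusing an item, which solves the unbounded knapsack variant.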
Key Takeaways
- DP applies when problems have optimal substructure and overlapping subproblems.
- Always try solving recursively first → then add memoization → convert to tabulation → optimize space.
- Practice common patterns: Fibonacci, Climbing Stairs, Knapsack, Coin Change.
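As a starting point for the Coin Change pattern, here is a bottom-up sketch of the minimum-coins variant (class name and coin values are illustrative):

```java
public class CoinChangeTabulation {
    // dp[a] = fewest coins summing to amount a, or -1 if a cannot be formed.
    static int coinChange(int[] coins, int amount) {
        int[] dp = new int[amount + 1];
        java.util.Arrays.fill(dp, amount + 1); // amount + 1 acts as "infinity"
        dp[0] = 0; // zero coins make amount 0
        for (int a = 1; a <= amount; a++) {
            for (int c : coins) {
                if (c <= a) dp[a] = Math.min(dp[a], dp[a - c] + 1);
            }
        }
        return dp[amount] > amount ? -1 : dp[amount];
    }

    public static void main(String[] args) {
        System.out.println(coinChange(new int[]{1, 2, 5}, 11)); // 3 (5 + 5 + 1)
    }
}
```

The same structure as Fibonacci tabulation applies: fill the table from small amounts up, with each cell depending only on already-computed cells.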
Visual Aid Cheat Sheet
Approach Flow:
Recursion → Memoization (cache) → Tabulation (table) → Space Optimization (variables)