Dynamic Programming is primarily an optimization over plain recursion. Wherever a recursive solution makes repeated calls for the same inputs, we can optimize it with Dynamic Programming: store the results of subproblems so we do not have to recompute them later. This simple optimization typically reduces the time complexity from exponential to polynomial.
For example, consider a simple recursive solution for Fibonacci numbers. It has exponential time complexity; if we optimize it by storing subproblem solutions, the time complexity drops to linear.
int fib(int n) {
    // Base cases: fib(0) = 0, fib(1) = 1
    if (n <= 1) {
        return n;
    }
    // Recomputes the two preceding values on every call
    return fib(n - 2) + fib(n - 1);
}
The running time satisfies the recurrence T(n) = T(n-1) + T(n-2) + O(1), which is upper-bounded by 2T(n-1) + O(1). Solving this upper bound gives O(2^n), exponential time. Notice that small subproblems such as fib(1) and fib(2) are computed many times. So, we can reduce the running time by eliminating these repeated calls. Let's solve this fib function with dynamic programming.
Suppose we want fib(5). Take an array F of length 6 (indices 0 through 5), with -1 initially stored in each index so that we can save each result as it is computed.
Initially, we insert the base cases: F[0] = 0 and F[1] = 1.
Since F[0] = 0 and F[1] = 1, we store F[2] = F[0] + F[1] = 1.
Since F[1] = 1 and F[2] = 1, F[3] = 2.
Since F[2] = 1 and F[3] = 2, F[4] = 3.
Since F[3] = 2 and F[4] = 3, F[5] = 5.
int fib(int n) {
    if (n <= 1) {
        return n;
    }
    int F[n + 1];           // table of subproblem results
    F[0] = 0; F[1] = 1;     // base cases
    for (int i = 2; i <= n; i++) {
        F[i] = F[i - 2] + F[i - 1];  // each value is computed exactly once
    }
    return F[n];
}
This process takes O(n) time, since the loop computes each of the n+1 table entries exactly once.
This reduces the running time from O(2^n) to O(n), that is, from exponential time to linear time.
This dynamic programming method, which fills the table iteratively from the smallest subproblems upward, is called tabulation and follows a bottom-up approach. The other common method, memoization, keeps the recursive structure but caches each result the first time it is computed, following a top-down approach.