Dynamic programming is an algorithm design approach that tackles complex problems by breaking them down into overlapping subproblems and solving each subproblem only once. It is particularly effective for optimization problems: the solutions to subproblems are stored in a table or array and reused when needed, rather than recomputed.
The dynamic programming approach typically involves the following steps:
1. Characterize the Structure: Analyze the problem and identify its optimal substructure. Determine how the problem can be divided into smaller subproblems and how the solutions to these subproblems can be combined to obtain the optimal solution for the original problem.
2. Define the Recurrence Relation: Express the relationship between the optimal solution of the original problem and the solutions of its subproblems. This is typically done through a recurrence relation, which defines the solution in terms of smaller subproblem solutions.
3. Create a Memoization Table: Set up a data structure (often an array or table) to store the solutions of the subproblems. This table serves as a memoization mechanism to avoid redundant computations.
4. Fill the Table Bottom-Up or Top-Down: There are two approaches to solving dynamic programming problems: bottom-up and top-down. In the bottom-up approach, the table is filled iteratively from the smallest subproblems to the largest, using previously computed solutions. In the top-down approach, also known as memoization, the solutions are computed recursively, but each solution is stored in the memoization table so it is never recomputed. Both approaches are illustrated in the sketch after this list.
5. Extract the Optimal Solution: Once the table is filled, the optimal solution to the original problem can be extracted by examining the values stored in the table.
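As a concrete illustration of the five steps above, the sketch below computes Fibonacci numbers (one of the examples discussed later) both top-down and bottom-up. It is a minimal sketch: the function names fib_top_down and fib_bottom_up are illustrative choices, not code from the linked notebooks.

```python
# Recurrence relation (step 2): F(n) = F(n-1) + F(n-2), with F(0) = 0 and F(1) = 1.

def fib_top_down(n, memo=None):
    """Top-down (memoization): recurse, but cache each subproblem result."""
    if memo is None:
        memo = {}                      # step 3: memoization table
    if n in memo:
        return memo[n]                 # reuse a previously computed solution
    if n < 2:
        return n                       # base cases
    memo[n] = fib_top_down(n - 1, memo) + fib_top_down(n - 2, memo)
    return memo[n]

def fib_bottom_up(n):
    """Bottom-up (tabulation): fill the table from the smallest subproblems up."""
    if n < 2:
        return n
    table = [0] * (n + 1)              # step 3: table of subproblem solutions
    table[1] = 1
    for i in range(2, n + 1):          # step 4: iterate from small to large
        table[i] = table[i - 1] + table[i - 2]
    return table[n]                    # step 5: read off the final answer

print(fib_top_down(40))   # 102334155
print(fib_bottom_up(40))  # 102334155
```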
When to Apply: Dynamic programming is especially useful when a problem exhibits overlapping subproblems, meaning the same subproblems are solved multiple times, together with the optimal substructure described in step 1. By storing the solutions of these subproblems, dynamic programming avoids redundant computations and significantly improves the efficiency of the algorithm.
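To make the cost of overlapping subproblems concrete, the short sketch below counts how many calls a naive, non-memoized recursive Fibonacci makes; the counter variable and function name are illustrative.

```python
# Count how many recursive calls the naive (non-memoized) Fibonacci makes,
# to show how often the same subproblems recur.
call_count = 0

def fib_naive(n):
    global call_count
    call_count += 1
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

fib_naive(30)
print(call_count)  # well over a million calls for n = 30

# With memoization, each distinct subproblem is solved once,
# so only n + 1 distinct values are ever computed.
```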
Common examples of problems solved using dynamic programming include the knapsack problem, the longest common subsequence problem, and the Fibonacci sequence. In each case, dynamic programming allows for an efficient and optimal solution by breaking down the problem into smaller subproblems and reusing previously computed solutions.
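As a further sketch, here is a minimal bottom-up solution of the 0/1 knapsack problem; the function name and the sample weights, values, and capacity are illustrative assumptions, not data from the linked notebooks.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack solved bottom-up: dp[i][c] is the best value achievable
    using only the first i items with remaining capacity c."""
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                    # skip item i-1
            if weights[i - 1] <= c:                    # or take it, if it fits
                dp[i][c] = max(dp[i][c],
                               dp[i - 1][c - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

# Illustrative data: the best value is 220 (the items worth 100 and 120).
print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))
```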
The links below open Jupyter Notebook pages on Google Colab (https://colab.research.google.com/) and show how to solve the corresponding computer science problems using the Dynamic Programming algorithmic design approach.
List of computer science problems that can be solved using the Dynamic Programming Approach: