Dynamic Programming

This chapter introduces Dynamic Programming (DP) and the problems it solves, as well as the algorithms and optimizations built on it.

Dynamic programming is a method for solving complex problems by decomposing the original problem into relatively simple sub-problems.

Since dynamic programming is not a specific algorithm but a method for solving a particular class of problems, it appears in combination with a wide variety of data structures, and the problems based on it come in many forms.

In OI, recursive solutions to non-optimization problems such as counting are also informally called DP, so they are covered together in this chapter. Dynamic programming does share a great deal with other kinds of recursion, and it is worth paying attention to their similarities and differences as you learn.

Motivation

Write a program that calculates the highest sum of numbers passed on a route that starts at the top and ends somewhere on the base.

        7 

      3   8 

    8   1   0 

  2   7   4   4 

4   5   2   6   5 

Each step can go either diagonally down to the left or diagonally down to the right.

The number of rows in the triangle is >1 but <=100.

The numbers in the triangle, all integers, are between 0 and 99.

The simplest, most brute-force idea is to try every path. Since a triangle with r rows has O(2^r) paths, this approach is unacceptable.
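For reference, a minimal sketch of this exhaustive search (in C++; the names a and bruteForce are illustrative, not part of the problem statement) recurses into both children of every cell, which is exactly why the running time grows like O(2^r):

    #include <algorithm>
    #include <vector>

    // a[i][j] is the j-th number (0-indexed) of row i of the triangle.
    // From cell (i, j) the route may continue to (i + 1, j) or (i + 1, j + 1);
    // this function tries both and keeps the larger total, so it visits every
    // route from the apex to the base -- roughly 2^(r-1) of them.
    int bruteForce(const std::vector<std::vector<int>>& a, int i = 0, int j = 0) {
        if (i == (int)a.size() - 1) return a[i][j];              // reached the base row
        return a[i][j] + std::max(bruteForce(a, i + 1, j),       // go down-left
                                  bruteForce(a, i + 1, j + 1));  // go down-right
    }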

Note that every prefix of an optimal path is itself optimal: it is the best way to reach the cell where that prefix ends.

Take the optimal path of the example, 7 - 3 - 8 - 7 - 5 (sum 30), and look only at its first four numbers, 7 - 3 - 8 - 7: no path from the top to the second number of row 4 has a greater sum.

At each cell there are only two possible decisions: go diagonally down to the left, or diagonally down to the right (if that cell exists). It therefore suffices to record, for every cell, the maximum weight of any route from the top to that cell; this single value is all the next decision needs, and it is used to update the maximum weights of the cells below.

Another advantage of this view is that it shrinks the problem, breaking it into several smaller problems of the same shape: to find the optimal routes from the top to row r, we only need the optimal routes from the top to row r-1.
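In symbols (the notation f and a is introduced here for convenience; it does not appear in the statement): let f(i, j) be the maximum sum of a route from the top to the j-th number of row i, and a(i, j) the number written there. The observation above then reads

    f(i, j) = a(i, j) + max(f(i-1, j-1), f(i-1, j)),

with f(1, 1) = a(1, 1); a term whose cell does not exist is simply dropped, and the answer is the largest f(r, j) over the base row.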

One problem remains: the sub-problems overlap heavily, so the same sub-problem may be visited many times and the recursion is still inefficient. The fix is to store the solution of each sub-problem the first time it is computed, so that memoization guarantees every sub-problem is solved only once.
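A minimal memoized sketch of the recurrence above might look as follows (again in C++, with 0-indexed rows; the names f, memo and maxRouteSum are assumptions for illustration, not fixed by the problem):

    #include <algorithm>
    #include <vector>

    // memo[i][j] caches f(i, j), the maximum route sum from the apex to the
    // j-th cell of row i (0-indexed); -1 means "not computed yet", which is
    // safe because all numbers in the triangle are non-negative.
    int f(const std::vector<std::vector<int>>& a,
          std::vector<std::vector<int>>& memo, int i, int j) {
        if (i == 0) return a[0][0];                                  // the apex
        if (memo[i][j] != -1) return memo[i][j];                     // already solved
        int best = -1;
        if (j > 0) best = std::max(best, f(a, memo, i - 1, j - 1));  // route came from cell (i-1, j-1)
        if (j < i) best = std::max(best, f(a, memo, i - 1, j));      // route came from cell (i-1, j)
        return memo[i][j] = a[i][j] + best;
    }

    int maxRouteSum(const std::vector<std::vector<int>>& a) {
        int r = (int)a.size();
        std::vector<std::vector<int>> memo(r);
        for (int i = 0; i < r; ++i) memo[i].assign(i + 1, -1);
        int answer = 0;
        for (int j = 0; j < r; ++j)                  // the route may end anywhere on the base
            answer = std::max(answer, f(a, memo, r - 1, j));
        return answer;
    }

Each of the O(r^2) cells is computed at most once, so the total work drops from exponential to O(r^2).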