Chapter 5: Finding the Optimal Route - Minimum Path Sum
We've become adept at navigating grids, counting all possible routes, and avoiding obstacles. Now, we will introduce a new objective: finding the "best" or "optimal" path. Instead of counting how many ways there are to get to the destination, we want to find the single path that costs the least.
LeetCode's "Minimum Path Sum" (Problem #64) is the ideal problem for this transition. It uses the same grid-traversal mechanics but changes the goal from counting to optimization.
The Problem:
Given an m x n grid filled with non-negative numbers, find a path from the top-left corner to the bottom-right corner that minimizes the sum of all numbers along its path.
You can only move either down or right at any point in time.
For example: Given the grid [[1,3,1],[1,5,1],[4,2,1]], the minimum path sum is 7, achieved by the path 1 -> 3 -> 1 -> 1 -> 1.
Approach 1: The Recursive Solution
Let's adapt our recursive thinking. The robot can arrive at any cell (row, col) from either the cell above, (row - 1, col), or the cell to the left, (row, col - 1).
To find the minimum path sum to reach (row, col), the robot must have taken the minimum path to one of its previous cells. Therefore, the minimum sum to reach the current cell is its own value plus the smaller of the two minimum sums from the cells it could have come from.
This gives us our recursive formula: min_sum(row, col) = grid[row][col] + min( min_sum(row-1, col), min_sum(row, col-1) )
The base case is the starting cell (0, 0), where the minimum sum is just its own value.
from typing import List

def minPathSum_recursive(grid: List[List[int]]) -> int:
    m, n = len(grid), len(grid[0])

    def solve(row, col):
        # Base case: at the start, the sum is just the value of the cell.
        if row == 0 and col == 0:
            return grid[0][0]
        # If we are on the first row, we can only come from the left.
        if row == 0:
            return grid[row][col] + solve(row, col - 1)
        # If we are on the first column, we can only come from above.
        if col == 0:
            return grid[row][col] + solve(row - 1, col)
        # Recursive step: current value + the minimum of the paths from above or left.
        from_above = solve(row - 1, col)
        from_left = solve(row, col - 1)
        return grid[row][col] + min(from_above, from_left)

    return solve(m - 1, n - 1)

# Trying it out
# print(minPathSum_recursive([[1,3,1],[1,5,1],[4,2,1]]))  # Output: 7
# print(minPathSum_recursive([[1]*20 for _ in range(20)]))  # Extremely slow
As usual, the pure recursive approach is elegant but computationally expensive due to redundant calculations for the same cells.
Approach 2: Memoization with a Decorator (Top-Down DP)
To eliminate the redundant work, we'll cache the results of our solve(row, col) function. This allows us to calculate the minimum path sum for each cell just once.
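Here is one way that might look. This is a sketch rather than a definitive listing: the function name minPathSum_memo is illustrative, and it assumes Python's built-in functools.lru_cache as the caching decorator.

from functools import lru_cache
from typing import List

def minPathSum_memo(grid: List[List[int]]) -> int:
    m, n = len(grid), len(grid[0])

    @lru_cache(maxsize=None)
    def solve(row, col):
        # Out-of-bounds cells get an infinite cost, so min() never picks them.
        if row < 0 or col < 0:
            return float('inf')
        # Base case: the starting cell contributes only its own value.
        if row == 0 and col == 0:
            return grid[0][0]
        # Each (row, col) result is computed once and then served from the cache.
        return grid[row][col] + min(solve(row - 1, col), solve(row, col - 1))

    return solve(m - 1, n - 1)

# print(minPathSum_memo([[1,3,1],[1,5,1],[4,2,1]]))  # Output: 7
# print(minPathSum_memo([[1]*20 for _ in range(20)]))  # Fast now: 39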
By adding the decorator, the code remains almost identical to our initial recursive thought process, but now it runs efficiently. The use of float('inf') for out-of-bounds cases is a common trick in DP to avoid complex conditional logic.
Approach 3: Iteration (Bottom-Up DP)
For the bottom-up approach, we'll build a dp table of the same dimensions as the grid. Each cell dp[row][col] will store the minimum path sum to reach that cell.
1. Create an m x n dp grid.
2. Initialize the starting cell: dp[0][0] = grid[0][0].
3. Initialize the first row: each cell dp[0][c] can only be reached from the left, so its value is grid[0][c] + dp[0][c-1].
4. Initialize the first column: each cell dp[r][0] can only be reached from above, so its value is grid[r][0] + dp[r-1][0].
5. Iterate through the rest of the grid: for each cell (r, c), apply our formula: dp[r][c] = grid[r][c] + min(dp[r-1][c], dp[r][c-1]).
The answer is the value in the bottom-right corner of our dp table.
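A sketch of this bottom-up version, following the steps listed above (the function name minPathSum_dp is illustrative):

from typing import List

def minPathSum_dp(grid: List[List[int]]) -> int:
    m, n = len(grid), len(grid[0])
    # dp[r][c] stores the minimum path sum to reach cell (r, c).
    dp = [[0] * n for _ in range(m)]
    dp[0][0] = grid[0][0]
    # First row: each cell is only reachable from the left.
    for c in range(1, n):
        dp[0][c] = grid[0][c] + dp[0][c - 1]
    # First column: each cell is only reachable from above.
    for r in range(1, m):
        dp[r][0] = grid[r][0] + dp[r - 1][0]
    # Every other cell: its own value plus the cheaper of the two predecessors.
    for r in range(1, m):
        for c in range(1, n):
            dp[r][c] = grid[r][c] + min(dp[r - 1][c], dp[r][c - 1])
    return dp[m - 1][n - 1]

# print(minPathSum_dp([[1,3,1],[1,5,1],[4,2,1]]))  # Output: 7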
This iterative solution is highly efficient, with a time and space complexity of O(m*n). For some problems, you can even optimize the space by modifying the input grid in-place or by only storing the previous row/column, but using a separate dp table is the clearest approach.
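For illustration, here is a rough sketch of the rolling-row idea mentioned above, which keeps only one row of results at a time and reduces the extra space to O(n). The function name minPathSum_rolling is hypothetical.

from typing import List

def minPathSum_rolling(grid: List[List[int]]) -> int:
    m, n = len(grid), len(grid[0])
    # prev holds the minimum path sums for the row above the one being filled.
    prev = [float('inf')] * n
    for r in range(m):
        curr = [0] * n
        for c in range(n):
            if r == 0 and c == 0:
                curr[c] = grid[0][0]
            else:
                # Out-of-bounds neighbors cost infinity, so min() ignores them.
                from_left = curr[c - 1] if c > 0 else float('inf')
                curr[c] = grid[r][c] + min(prev[c], from_left)
        prev = curr
    return prev[n - 1]

# print(minPathSum_rolling([[1,3,1],[1,5,1],[4,2,1]]))  # Output: 7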
Conclusion
In this chapter, we made a pivotal shift from combinatorics (counting things) to optimization (finding the best thing). We saw that the structure of our DP solution—the recurrence relation and the build-up—remained remarkably similar. All we had to change was the operation at each step, from + for counting paths to min() for finding the optimal sum.
This demonstrates the true power and flexibility of the dynamic programming paradigm. You are now ready to tackle problems that involve a wide range of optimization criteria.