Chapter 6: Making Change - The Knapsack Pattern

Having thoroughly explored paths on a grid, we now turn our attention to a different, yet equally important, category of dynamic programming problems. These problems don't involve a grid but instead ask for the optimal way to achieve a target value using a given set of items. This is often called the "knapsack" or "change-making" pattern.

LeetCode's "Coin Change" (Problem #322) is the classic introduction to this concept. It requires us to figure out the minimum number of items (coins) needed to sum up to a specific amount.

The Problem:

You are given an integer array coins representing coins of different denominations and an integer amount representing a total amount of money.

Return the fewest number of coins that you need to make up that amount. If that amount of money cannot be made up by any combination of the coins, return -1.

You may assume that you have an infinite number of each kind of coin.

For example:

  • coins = [1, 2, 5], amount = 11 -> The answer is 3 (since 11 = 5 + 5 + 1).

  • coins = [2], amount = 3 -> The answer is -1.

Approach 1: The Recursive Solution

Let's think recursively. To find the minimum number of coins for a given amount, we can try using each available coin. If we use a coin, the problem reduces to finding the minimum coins for amount - coin. We then add 1 (for the coin we just used) to that result.

Since we want the minimum number of coins, we should try this for every coin and take the best result.

This gives us our recursive formula: solve(amount) = 1 + min(solve(amount - c)) for every coin c in coins.
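
For example, with coins = [1, 2, 5] and amount = 11, this means solve(11) = 1 + min(solve(10), solve(9), solve(6)), and each of those calls expands in exactly the same way.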

Our base cases are:

  • If amount == 0, we need 0 coins.

  • If amount < 0, this is an impossible path, so we should signal it with an invalid value (like infinity).

from typing import List

def coinChange_recursive(coins: List[int], amount: int) -> int:

    def solve(current_amount):
        # Base case: We've reached the target.
        if current_amount == 0:
            return 0

        # Base case: We've overshot the target.
        if current_amount < 0:
            return float('inf')

        min_coins = float('inf')
        # Try every coin
        for coin in coins:
            result = solve(current_amount - coin)
            if result != float('inf'):
                min_coins = min(min_coins, result + 1)

        return min_coins

    final_result = solve(amount)

    # If final_result is still infinity, no combination of coins worked.
    return final_result if final_result != float('inf') else -1

# Trying it out
# print(coinChange_recursive([1, 2, 5], 11)) # Output: 3
# print(coinChange_recursive([1, 5, 10], 32)) # Very, very slow

This brute-force recursion is correct, but it's incredibly inefficient. It re-calculates the solution for the same sub-amounts over and over again, so the call tree grows exponentially; even a modest amount like 32 is noticeably slow, and larger inputs will time out.

Approach 2: Memoization with a Decorator (Top-Down DP)

The overlapping subproblems are obvious: solve(5) will be calculated countless times on the way to solving for amount = 32. A cache is the perfect solution.

By adding a memoization decorator to solve, we ensure that the result for each unique amount is computed only once.
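
The chapter doesn't pin down which decorator to use; a minimal sketch with Python's built-in functools.lru_cache (the name coinChange_memoized is just for illustration) could look like this:

from functools import lru_cache
from typing import List

def coinChange_memoized(coins: List[int], amount: int) -> int:

    @lru_cache(maxsize=None)  # Cache results so each amount is solved only once.
    def solve(current_amount):
        # Base case: we've reached the target exactly.
        if current_amount == 0:
            return 0

        # Base case: we've overshot the target.
        if current_amount < 0:
            return float('inf')

        # Try every coin and keep the best (fewest-coins) result.
        min_coins = float('inf')
        for coin in coins:
            result = solve(current_amount - coin)
            if result != float('inf'):
                min_coins = min(min_coins, result + 1)

        return min_coins

    final_result = solve(amount)
    return final_result if final_result != float('inf') else -1

# print(coinChange_memoized([1, 5, 10], 32)) # Output: 5, and fast this time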

This is the power of top-down dynamic programming. The code is a direct translation of our recursive logic, but the performance is night and day.

Approach 3: Iteration (Bottom-Up DP)

For the bottom-up approach, we'll create a dp array to store the minimum coins needed for every amount from 0 up to our target amount.

  1. Create a dp array of size amount + 1. Initialize every entry to a placeholder for infinity (e.g., amount + 1, a coin count we can never actually need, since the smallest coin is at least 1).

  2. Set our base case: dp[0] = 0, because 0 coins are needed to make an amount of 0.

  3. Iterate through each amount i from 1 to amount.

  4. For each i, iterate through each coin. If i - coin is a valid amount (i.e., >= 0), it means we can potentially form i by using that coin. We update dp[i] with the minimum of its current value and 1 + dp[i - coin].

The answer will be the value at dp[amount]. If it still holds the infinity placeholder, the amount cannot be made and we return -1.
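
Here is a minimal sketch of those steps (the name coinChange_bottom_up is just for illustration; it reuses the typing import shown earlier):

def coinChange_bottom_up(coins: List[int], amount: int) -> int:
    # amount + 1 acts as "infinity": we can never need that many coins.
    dp = [amount + 1] * (amount + 1)
    dp[0] = 0  # Base case: zero coins are needed to make an amount of 0.

    for i in range(1, amount + 1):
        for coin in coins:
            if i - coin >= 0:
                # Either keep dp[i], or use this coin on top of dp[i - coin].
                dp[i] = min(dp[i], 1 + dp[i - coin])

    # If dp[amount] still holds the placeholder, no combination works.
    return dp[amount] if dp[amount] != amount + 1 else -1

# print(coinChange_bottom_up([1, 2, 5], 11)) # Output: 3
# print(coinChange_bottom_up([2], 3))        # Output: -1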

This bottom-up solution systematically builds the answer from the smallest subproblem (amount=0) all the way to the target. It's very efficient, with a time complexity of O(amount * number_of_coins) and a space complexity of O(amount).
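
As a quick sanity check, for coins = [1, 2, 5] and amount = 11 the filled dp array is [0, 1, 1, 2, 2, 1, 2, 2, 3, 3, 2, 3], so dp[11] = 3, matching the expected answer of 11 = 5 + 5 + 1.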

Conclusion

With this chapter, you've added the powerful knapsack pattern to your toolkit. You've learned how to solve problems where the goal is to "build" a target value optimally. The key insight was shifting our DP state's meaning: dp[i] no longer represented a position in a sequence or grid, but rather the solution for a value i. This abstract leap is crucial for solving a vast new range of dynamic programming challenges.
