Lecture 16
Dynamic Programming
Dynamic Programming
The development of a dynamic programming algorithm can be subdivided into the following steps:
1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution in a bottom-up fashion
4. Construct an optimal solution from computed information
Optimal Substructure
A problem exhibits optimal substructure if and only if an optimal solution to the problem contains within it optimal solutions to subproblems. For example, a shortest path from u to v that passes through a vertex w contains within it a shortest path from u to w and a shortest path from w to v.
Overlapping Subproblems
A second indication that dynamic programming might be applicable is that the space of subproblems must be small, in the sense that a recursive algorithm for the problem solves the same subproblems over and over rather than always generating new subproblems.
Overlapping Subproblems
When a recursive algorithm revisits the same subproblems over and over again, we say that the optimization problem has overlapping subproblems.
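A small Python sketch of the contrast (Fibonacci is an assumed example, not from the slides): the naive recursion revisits the same subproblems repeatedly, while a memoized version solves each distinct subproblem only once.

from functools import lru_cache

calls = 0

def fib_naive(n):
    # Revisits the same subproblems over and over.
    global calls
    calls += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each subproblem is solved once and its value reused.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

fib_naive(20)
print(calls)                         # 21891 recursive calls
print(fib_memo(20))                  # 6765
print(fib_memo.cache_info().misses)  # only 21 distinct subproblems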
Note
If a recursive algorithm solving the problem always creates new subproblems, this is an indication that divide-and-conquer methods, rather than dynamic programming, might apply.
Greedy Algorithms
Greedy Algorithms
The development of a greedy algorithm can be separated into the following steps:
1. Cast the optimization problem as one in which we make a choice and are left with one subproblem to solve.
2. Prove that there is always an optimal solution to the original problem that makes the greedy choice, so that the greedy choice is always safe.
3. Demonstrate that, having made the greedy choice, what remains is a subproblem with the property that if we combine an optimal solution to the subproblem with the greedy choice we have made, we arrive at an optimal solution to the original problem.
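As one standard illustration of these three steps, here is a Python sketch of activity selection (an assumed example, not from the slides): the greedy choice is the compatible activity that finishes first, and what remains is a single subproblem of the same form.

def select_activities(activities):
    # activities: list of (start, finish) pairs
    chosen = []
    last_finish = float("-inf")
    # Greedy choice: always take the earliest-finishing compatible activity.
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:     # compatible with the choices so far
            chosen.append((start, finish))
            last_finish = finish     # one subproblem remains: what comes after
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(acts))  # [(1, 4), (5, 7), (8, 11)]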
Greedy-Choice Property
A globally optimal solution can be assembled by making locally optimal (greedy) choices: at each step we make the choice that looks best in the problem at hand, without first solving the subproblems.
Optimal Substructure
A problem exhibits optimal substructure if and only if an
optimal solution to the problem contains within it optimal
solutions to subproblems.
Divide-and-Conquer
Divide-and-Conquer
A divide-and-conquer method can be used for problems that can be solved by recursively breaking them down into two or more subproblems of the same (or related) type, until these become simple enough to be solved directly. The solutions to the subproblems are then combined to give a solution to the original problem.
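Merge sort is the standard illustration (an assumed example, not from the slides). Note how, in line with the earlier note, every recursive call receives a fresh half of the input, so no subproblem is ever solved twice.

def merge_sort(a):
    if len(a) <= 1:                # simple enough to solve directly
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])     # divide: two subproblems of the same type
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0        # combine: merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]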
Greedy Algorithms
• Knapsack capacity: W
• There are n items: the i-th item has value vi and weight wi
• Goal: find xi, with 0 ≤ xi ≤ 1 for i = 1, 2, ..., n, such that
Σ wixi ≤ W and
Σ xivi is maximized
Fractional Knapsack - Example
[Figure: a knapsack of capacity 50 holding item 1 (10 lbs, $60), item 2 (20 lbs, $100), and 20 of the 30 lbs of item 3 ($80 of its $120), for a total of $60 + $100 + $80 = $240.]
Fractional Knapsack Problem
Alg.: Fractional-Knapsack(W, v[n], w[n])
1. while W > 0 and there are items remaining
2.     pick the remaining item i with maximum vi/wi
3.     take min(wi, W) pounds of item i and set W ← W − min(wi, W)
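A Python sketch following this pseudocode (the function name and the (value, weight) tuple layout are assumptions):

def fractional_knapsack(W, items):
    # items: list of (value, weight) pairs
    total = 0.0
    # Greedy choice: best value per pound first.
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if W <= 0:
            break
        take = min(w, W)           # take as much of this item as fits
        total += v * take / w
        W -= take
    return total

items = [(60, 10), (100, 20), (120, 30)]  # the items from the example above
print(fractional_knapsack(50, items))     # 240.0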
The Knapsack Problem
• The thief must choose among n items, where the i-th item is worth vi dollars and weighs wi pounds
• Carrying at most W pounds, maximize value
• Note: assume vi, wi, and W are all integers
“0-1” because each item must be taken or left in its entirety
• A variation, the fractional knapsack problem:
The thief can take fractions of items
Think of the items in the 0-1 problem as gold ingots, and in the fractional problem as buckets of gold dust
The Knapsack Problem and Optimal Substructure
• Both variations exhibit optimal substructure
• To show this for the 0-1 problem, consider the most valuable load weighing at most W pounds: if we remove item j from this load, the remaining load must be the most valuable load weighing at most W − wj that can be taken from the remaining n − 1 items
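Because the 0-1 problem has optimal substructure, dynamic programming applies to it even though the greedy strategy does not. A minimal bottom-up Python sketch (an assumed implementation, not from the slides):

def knapsack_01(W, items):
    # items: list of (value, weight) pairs; weights and W are integers
    best = [0] * (W + 1)           # best[j] = max value with capacity j
    for v, w in items:
        # iterate capacities downward so each item is used at most once
        for j in range(W, w - 1, -1):
            best[j] = max(best[j], best[j - w] + v)
    return best[W]

items = [(60, 10), (100, 20), (120, 30)]
print(knapsack_01(50, items))  # 220: items 2 and 3, not the greedy pick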