Knapsack +
[Figure: a knapsack of capacity W and four items of size 10, 15, 20, 20]
An optimization problem is given by:
Set of instances I
Function F that gives for all w ∈ I the set of feasible solutions F(w)
Goal function g that gives for each s ∈ F(w) the value g(s)
Optimization goal: given input w, maximize or minimize the value g(s) among all s ∈ F(w)
Decision problem: given w ∈ I and k ∈ N, decide whether
OPT(w) ≤ k (minimization)
OPT(w) ≥ k (maximization)
where OPT(w) is the optimal function value among all s ∈ F(w).
Approximation ratio: an algorithm A has approximation ratio r if
A(w) / OPT(w) ≤ r for all w ∈ I (minimization),
OPT(w) / A(w) ≤ r for all w ∈ I (maximization).
Definition
A linear program with n variables and m constraints is specified by the following minimization problem:
Cost function f(x) = c · x; c is called the cost vector
m constraints of the form aᵢ · x ⋈ᵢ bᵢ, where ⋈ᵢ ∈ {≤, ≥, =} and aᵢ ∈ Rⁿ
We have L = {x ∈ Rⁿ : x ≥ 0 ∧ ∀1 ≤ i ≤ m : aᵢ · x ⋈ᵢ bᵢ}.
Theorem
A linear program can be solved in polynomial time.
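To make the definition concrete, here is a small Python sketch (not from the slides) that solves such an LP with SciPy's linprog, assuming SciPy is available; linprog minimizes, so a maximization objective is passed with a negated cost vector.

# Sketch (not from the slides): solving a small LP with SciPy's linprog.
# linprog minimizes c·x subject to A_ub·x <= b_ub and variable bounds,
# so "maximize x1 + 2*x2" is expressed by negating the cost vector.
from scipy.optimize import linprog

c = [-1.0, -2.0]                      # maximize x1 + 2*x2
A_ub = [[1.0, 1.0],                   # x1 +   x2 <= 4
        [1.0, 3.0]]                   # x1 + 3*x2 <= 6
b_ub = [4.0, 6.0]
bounds = [(0, None), (0, None)]       # x1, x2 >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)                # optimum (3, 1) with value 5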
Knapsack as an integer linear program:
maximize p · x
subject to w · x ≤ W, xᵢ ∈ {0, 1} for 1 ≤ i ≤ n.

Linear relaxation:
maximize p · x
subject to w · x ≤ W, 0 ≤ xᵢ ≤ 1 for 1 ≤ i ≤ n.
Proof.
Let x* denote the optimal solution of the relaxation. Sort the items by profit density pᵢ/wᵢ, let B be the maximal prefix of items that fits completely, k the first item that does not fit, and S the remaining items. Then:
w · x* = W, otherwise increase some xᵢ*
∀i ∈ B : xᵢ* = 1, otherwise increase xᵢ* and decrease some xⱼ* for j ∈ {k} ∪ S
∀j ∈ S : xⱼ* = 0, otherwise decrease xⱼ* and increase xₖ*
This only leaves xₖ* = (W − Σ_{i∈B} wᵢ) / wₖ.
Lemma
opt ≤ p · x* ≤ 2 · opt.

Proof.
We have Σ_{i∈B} pᵢ ≤ opt. Furthermore, since wₖ ≤ W, also pₖ ≤ opt. We get
opt ≤ Σᵢ xᵢ* pᵢ ≤ Σ_{i∈B} pᵢ + pₖ ≤ opt + opt = 2 · opt.
Hence the optimal solution of the relaxation is
xᵢ = 1 if i ∈ B
xᵢ = (W − Σ_{j∈B} wⱼ) / wₖ if i = k
xᵢ = 0 if i ∈ S
Exercise: Prove that either B or {k} is a 2-approximation of the (nonrelaxed) knapsack problem (see the sketch below).
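As an illustration of this structure, here is a Python sketch of the greedy solution of the relaxation and of the 2-approximation from the exercise; the function names and the instance at the bottom are made up, not from the slides.

# Sketch: greedy optimum of the LP relaxation and the resulting 2-approximation.
# Items are sorted by profit density p_i/w_i; B is filled completely,
# the critical item k gets the remaining capacity fractionally, S is dropped.
def greedy_split(p, w, W):
    order = sorted(range(len(p)), key=lambda i: p[i] / w[i], reverse=True)
    B, cap, k = [], W, None
    for i in order:
        if w[i] <= cap:
            B.append(i)                 # item fits completely
            cap -= w[i]
        else:
            k = i                       # first item that does not fit: critical item
            break
    return B, k, cap

def fractional_knapsack(p, w, W):
    B, k, cap = greedy_split(p, w, W)
    x = [0.0] * len(p)
    for i in B:
        x[i] = 1.0
    if k is not None:
        x[k] = cap / w[k]               # x_k = (W - sum of w_i over B) / w_k
    return x

def two_approximation(p, w, W):
    # Either B or {k} has profit >= opt/2 (assuming every single item fits, w_i <= W).
    B, k, _ = greedy_split(p, w, W)
    if k is not None and p[k] > sum(p[i] for i in B):
        return [k]
    return B

p, w, W = [11, 20, 16, 21], [10, 20, 15, 20], 37   # made-up instance
print(fractional_knapsack(p, w, W))                # [1.0, 0.0, 1.0, 0.6]
print(two_approximation(p, w, W))                  # [0, 2]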
Principle of Optimality
An optimal solution can be viewed as constructed of optimal solutions for subproblems
Solutions with the same objective values are interchangeable
Example: Shortest Paths
Any subpath of a shortest path is a shortest path
Shortest subpaths are interchangeable
[Figure: path s — u — v — t]
Define
P(i, C) = optimal profit from items 1, …, i using capacity ≤ C.
Lemma
P(i, C) = max(P(i − 1, C), P(i − 1, C − wᵢ) + pᵢ) (the second term is dropped if wᵢ > C).
Proof.
The inequality ≥ is immediate, since both candidates correspond to feasible solutions for items 1, …, i with capacity C.
To prove: P(i, C) ≤ max(P(i − 1, C), P(i − 1, C − wᵢ) + pᵢ).
Assume the contrary ⇒ ∃x that is optimal for the subproblem (i, C) with p · x > max(P(i − 1, C), P(i − 1, C − wᵢ) + pᵢ). If xᵢ = 0, then x is feasible for items 1, …, i − 1 with capacity C and profit > P(i − 1, C), a contradiction. If xᵢ = 1, then removing item i gives a solution for capacity C − wᵢ with profit > P(i − 1, C − wᵢ), again a contradiction.
Procedure knapsack(p, w, n, W)
  array P[0 … W] = [0, …, 0]
  bitarray decision[1 … n, 0 … W] = [(0, …, 0), …, (0, …, 0)]
  for i := 1 to n do
    // invariant: ∀C ∈ {1, …, W} : P[C] = P(i − 1, C)
    for C := W downto wᵢ do
      if P[C − wᵢ] + pᵢ > P[C] then
        P[C] := P[C − wᵢ] + pᵢ
        decision[i, C] := 1
  C := W
  array x[1 … n]
  for i := n downto 1 do
    x[i] := decision[i, C]
    if x[i] = 1 then C := C − wᵢ
  endfor
  return x
Analysis:
Time: O(nW) (pseudo-polynomial)
Space: W + O(n) words plus Wn bits.
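A direct Python transcription of the procedure above, as a sketch; the instance in the last line is made up.

# Sketch: 0/1 knapsack by dynamic programming over capacities,
# following the pseudocode above. Returns the 0/1 decision vector x.
def knapsack(p, w, W):
    n = len(p)
    P = [0] * (W + 1)                             # P[C] = P(i, C) after processing i items
    decision = [[0] * (W + 1) for _ in range(n)]
    for i in range(n):
        # go downwards so that P[C - w[i]] still refers to the first i items only
        for C in range(W, w[i] - 1, -1):
            if P[C - w[i]] + p[i] > P[C]:
                P[C] = P[C - w[i]] + p[i]
                decision[i][C] = 1
    # extract the solution from the recorded decisions
    x, C = [0] * n, W
    for i in range(n - 1, -1, -1):
        x[i] = decision[i][C]
        if x[i] == 1:
            C -= w[i]
    return x

print(knapsack([11, 20, 16, 21], [10, 20, 15, 20], 37))   # made-up instance -> [0, 0, 1, 1]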
Define
C(i, P) = smallest capacity from items 1, …, i giving profit ≥ P.
Lemma
C(i, P) = min(C(i − 1, P), C(i − 1, P − pᵢ) + wᵢ), with C(i, P) = 0 for P ≤ 0 and C(0, P) = ∞ for P > 0.
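A Python sketch of this profit-oriented dynamic program (a stand-in for the routine called dynamicProgrammingByProfit below); for simplicity it uses the crude profit bound Σᵢ pᵢ instead of a tighter upper bound P̂, and all names are illustrative.

# Sketch: dynamic programming by profit, runtime O(n * sum(p)).
# C[i][P] = smallest capacity of a subset of items 1..i with profit >= P.
import math

def dynamic_programming_by_profit(p, w, W):
    n = len(p)
    P_hat = sum(p)                                  # crude upper bound on the profit
    INF = math.inf
    C = [[INF] * (P_hat + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        C[i][0] = 0                                 # profit 0 needs no capacity
    for i in range(1, n + 1):
        for P in range(1, P_hat + 1):
            take = C[i - 1][max(P - p[i - 1], 0)] + w[i - 1]
            C[i][P] = min(C[i - 1][P], take)        # Lemma: C(i,P) = min(...)
    best = max(P for P in range(P_hat + 1) if C[n][P] <= W)
    # walk the table backwards to extract the decisions
    x, P = [0] * n, best
    for i in range(n, 0, -1):
        if C[i][P] != C[i - 1][P]:                  # item i was needed to reach profit P
            x[i - 1] = 1
            P = max(P - p[i - 1], 0)
    return x

print(dynamic_programming_by_profit([11, 20, 16, 21], [10, 20, 15, 20], 37))  # -> [0, 0, 1, 1]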
Algorithm A is a (Fully) Polynomial Time Approximation Scheme for a minimization/maximization problem Π if:
Input: instance I, error parameter ε
Output quality: f(x) ≤ (1 + ε) · opt (minimization) / f(x) ≥ (1 − ε) · opt (maximization)
Time: polynomial in |I| (and 1/ε)
PTAS                    FPTAS
n + 2^(1/ε)             n² + 1/ε
n^(log(1/ε))            n + 1/ε⁴
n^(1/ε)                 n/ε
n^(42/ε³)               ⋮
n + 2^(2^(1000/ε))      ⋮
⋮                       ⋮
[Figure: knapsack instance with capacity W and item profits 11, 16, 20, 21]
Recall that pᵢ ∈ N for all i!
P := maxᵢ pᵢ // maximum profit
K := εP/n // scaling factor
pᵢ′ := ⌊pᵢ/K⌋ // scale profits
x′ := dynamicProgrammingByProfit(p′, w, W)
output x′
Example:
ε = 1/3, n = 4, P = 20 → K = 5/3
p = (11, 20, 16, 21) → p′ = (6, 12, 9, 12) (or p′ = (2, 4, 3, 4))
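A Python sketch of the scaling step; it reuses the dynamic_programming_by_profit sketch from above (assumed to be in scope), and the weights and capacity in the usage line are made up, only the profits and ε follow the example.

# Sketch: knapsack FPTAS by profit scaling.
import math

def knapsack_fptas(p, w, W, eps):
    n = len(p)
    P = max(p)                                   # maximum profit
    K = eps * P / n                              # scaling factor K = eps*P/n
    p_scaled = [math.floor(pi / K) for pi in p]  # p'_i = floor(p_i / K)
    # optimal for the scaled profits, hence (1 - eps)-optimal for the original ones
    return dynamic_programming_by_profit(p_scaled, w, W)

p, w, W = [11, 20, 16, 21], [10, 20, 15, 20], 37   # weights/capacity made up
x = knapsack_fptas(p, w, W, eps=1/3)
print(x, sum(pi * xi for pi, xi in zip(p, x)))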
Lemma
p · x′ ≥ (1 − ε) · opt.

Proof.
Consider the optimal solution x*.
p · x* − K p′ · x* = Σ_{i∈x*} (pᵢ − K⌊pᵢ/K⌋) ≤ Σ_{i∈x*} (pᵢ − K(pᵢ/K − 1)) = |x*| · K ≤ nK,
i.e., K p′ · x* ≥ p · x* − nK. Furthermore,
K p′ · x* ≤ K p′ · x′ = Σ_{i∈x′} K⌊pᵢ/K⌋ ≤ Σ_{i∈x′} K · (pᵢ/K) = p · x′.
Hence,
p · x′ ≥ K p′ · x* ≥ p · x* − nK = opt − εP ≥ (1 − ε) · opt.
Proof.
The running time of the dynamic programming step dominates. Recall that this is O(n · P̂′), where P̂′ = p′ · x′ is the optimal scaled profit. We have
n · P̂′ ≤ n · (n · maxᵢ pᵢ′) ≤ n² · P/K = n² · Pn/(εP) = n³/ε.
Simplifying assumptions:
1/ε ∈ N: otherwise set ε := 1/⌈1/ε⌉.
Upper bound P̂ is known: use the linear relaxation to get a quick 2-approximation.
minᵢ pᵢ ≥ εP̂: treat small profits separately. For these items greedy works well. (Costs a factor O(log(1/ε)) in time.)
Lemma
p · x′ ≥ (1 − ε) · opt.
Proof.
Similar to before; note that |x| ≤ 1/ε for any solution.
The best algorithms work in near linear time for almost all inputs, both in a probabilistic and in a practical sense.
[Beier, Vöcking: An Experimental Study of Random Knapsack Problems. European Symposium on Algorithms, 2004.]
[Kellerer, Pferschy, Pisinger: Knapsack Problems. Springer, 2004.]
Main additional tricks:
reduce to core items with good profit density,
Horowitz-Sahni decomposition for dynamic programming