Advanced Algorithms Course. Lecture Notes. Part 3: Addendum: Weighted Hitting Set
As often, the idea of a greedy algorithm is simple: Short paths should minimize the chances of conflicts with other paths, and shortest paths can be computed efficiently. Therefore, the proposed algorithm simply chooses a shortest path that connects some yet unconnected pair and adds it to the solution, as long as such a path exists. After every step we delete the edges of the path just used, in order to avoid collisions with paths chosen later.
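The following sketch shows one way to implement this greedy rule in Python; the graph representation (an adjacency dictionary of vertex sets) and the names greedy_disjoint_paths and bfs_shortest_path are our own choices, not part of the notes.

    import collections

    def bfs_shortest_path(adj, s, t):
        # Breadth-first search: return a shortest s-t path as a list of
        # vertices, or None if t is unreachable in the current graph.
        parent = {s: None}
        queue = collections.deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                path = [t]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return list(reversed(path))
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        return None

    def greedy_disjoint_paths(adj, pairs):
        # adj: vertex -> set of neighbours (undirected graph); pairs: list of (s_i, t_i).
        # Repeatedly connect a pair via a currently shortest path and delete its edges.
        adj = {u: set(nb) for u, nb in adj.items()}    # work on a copy
        connected = {}                                 # index i -> chosen path P_i
        remaining = set(range(len(pairs)))
        while True:
            best_i, best_path = None, None
            for i in remaining:                        # shortest path over all open pairs
                path = bfs_shortest_path(adj, pairs[i][0], pairs[i][1])
                if path is not None and (best_path is None or len(path) < len(best_path)):
                    best_i, best_path = i, path
            if best_i is None:
                return connected                       # no open pair can still be connected
            connected[best_i] = best_path
            remaining.remove(best_i)
            for u, v in zip(best_path, best_path[1:]): # delete the used edges
                adj[u].discard(v)
                adj[v].discard(u)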
However, the idea is not as powerful as one might hope: In each step there could exist many short paths to choose from, and we may easily miss a good one, since we use only the length as selection criterion. But at least we can prove the $O(\sqrt{m})$ factor, as follows. Let $I^*$ and $I$ denote the sets of indices $i$ of the pairs $(s_i, t_i)$ connected by the optimal and the greedy solution, respectively. Let $P_i^*$ and $P_i$ denote the selected paths for index $i$. The analysis works with a case distinction regarding the length: We call a path with at least $\sqrt{m}$ edges long, and other paths are called short. Let $I_s^*$ and $I_s$ be the sets of indices $i$ of the pairs $(s_i, t_i)$ connected by short paths in $I^*$ and $I$, respectively.
Since only $m$ edges exist, $I^*$ can have at most $\sqrt{m}$ long paths. Consider any index $i$ where $P_i^*$ is short, but $(s_i, t_i)$ is not even connected in $I$. (This is the worst that can happen to a pair, hence our worst-case analysis focusses on this case.) The reason why the greedy algorithm has not chosen $P_i^*$ must be that some edge $e \in P_i^*$ is in some $P_j$ chosen earlier. We say that $e$ blocks $P_i^*$. Since the greedy algorithm always picks a shortest available path, we have $|P_j| \le |P_i^*| < \sqrt{m}$. Every edge in $P_j$ can block at most one path of $I^*$, because the paths of the optimal solution are edge-disjoint. Hence $P_j$ blocks at most $\sqrt{m}$ paths of $I^*$. The number of such particularly bad indices $i$ is therefore bounded by $|I_s^* \setminus I| \le |I_s| \cdot \sqrt{m}$.
Finally some simple steps prove the claimed approximation ratio: splitting $I^*$ into the indices with long paths, the indices also connected by the greedy solution, and the blocked indices with short paths, we get
$$|I^*| \;\le\; |I^* \setminus I_s^*| + |I| + |I_s^* \setminus I| \;\le\; \sqrt{m} + |I| + |I_s|\,\sqrt{m} \;\le\; (2\sqrt{m} + 1)\,|I|.$$
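To make the guarantee concrete, here is an illustrative instantiation with numbers of our own choosing: in a graph with $m = 400$ edges,
$$\sqrt{m} = 20, \qquad |I^*| \;\le\; (2 \cdot 20 + 1)\,|I| \;=\; 41\,|I|,$$
i.e., the greedy solution connects at least a $1/41$ fraction of the number of pairs that an optimal solution connects.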
In the Knapsack problem, a knapsack of capacity $W$ is given, as well as $n$ items with weights $w_i$ and values $v_i$ (all integer). The problem is to find a subset $S$ of items with $\sum_{i \in S} w_i \le W$ (so that $S$ fits in the knapsack) and maximum value $\sum_{i \in S} v_i$. Define $v^* := \max_i v_i$.
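For a tiny example instance of our own,
$$W = 10, \qquad (w_1, w_2, w_3) = (6, 5, 5), \qquad (v_1, v_2, v_3) = (9, 5, 5),$$
the optimal solution is $S = \{2, 3\}$ with total weight $10$ and total value $10$, while the single most valuable item alone achieves only $9$.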
You may already know that Knapsack is NP-complete but can be solved by a dynamic programming algorithm. Its time bound $O(nW)$ is polynomial in the numerical value $W$, but not in the input size, since $W$ is encoded with only $O(\log W)$ bits; therefore we call the algorithm pseudopolynomial. (A truly polynomial algorithm for an NP-complete problem cannot exist, unless P=NP.) However, for our approximation scheme we
need another dynamic programming algorithm that differs from the most
natural one, because we need a time bound in terms of values rather than
weights. (This point will become more apparent later on.) Here it comes:
Define $OPT(i, V)$ to be the minimum (necessary) capacity of a knapsack that contains a subset of the first $i$ items, of total value at least $V$. We can compute $OPT(i, V)$ using the $OPT$ values for smaller arguments, as follows. If $V > \sum_{j=1}^{i-1} v_j$ then, obviously, we must add item $i$ to reach $V$. Thus we have $OPT(i, V) = w_i + OPT(i-1, V - v_i)$ in this case. If $V \le \sum_{j=1}^{i-1} v_j$ then item $i$ may be added or not, leading to
$$OPT(i, V) \;=\; \min\bigl\{\, OPT(i-1, V),\; w_i + OPT(i-1, V - v_i) \,\bigr\},$$
where we read $OPT(i-1, V')$ as $0$ whenever $V' \le 0$. The sought optimal value is then the largest $V$ with $OPT(n, V) \le W$.
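The following Python sketch implements this dynamic program (all names are ours, and the backtracking step that recovers an item subset is added for convenience); unreachable values are marked with infinite capacity.

    def knapsack_by_values(weights, values, W):
        # OPT[i][V] = minimum knapsack capacity admitting a subset of the
        # first i items with total value at least V (float('inf') = unreachable).
        n = len(values)
        V_max = sum(values)
        INF = float("inf")
        OPT = [[INF] * (V_max + 1) for _ in range(n + 1)]
        OPT[0][0] = 0                       # value 0 needs no capacity at all
        prefix = 0                          # v_1 + ... + v_{i-1}
        for i in range(1, n + 1):
            w, v = weights[i - 1], values[i - 1]
            for V in range(V_max + 1):
                take = w + OPT[i - 1][max(V - v, 0)]       # item i is added
                if V > prefix:
                    OPT[i][V] = take                       # item i is forced
                else:
                    OPT[i][V] = min(OPT[i - 1][V], take)   # item i is optional
            prefix += v
        # Largest achievable total value whose minimum capacity fits into W.
        best_V = max(V for V in range(V_max + 1) if OPT[n][V] <= W)
        # Backtrack through the table to recover one subset achieving best_V.
        chosen, V = [], best_V
        for i in range(n, 0, -1):
            if OPT[i][V] != OPT[i - 1][V]:  # capacity improves only if item i is taken
                chosen.append(i - 1)
                V = max(V - values[i - 1], 0)
        return best_V, chosen

As written, the sketch fills an $(n+1) \times (\sum_i v_i + 1)$ table, so its running time is $O(n \sum_i v_i) \subseteq O(n^2 v^*)$, a bound in terms of the values rather than the weights.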
Recall the rounded values $v_i' := \lfloor v_i/b \rfloor$. Since $S$ is an optimal solution for the rounded values, whereas $S^*$ is an optimal solution for the original values, clearly $\sum_{i \in S^*} v_i' \le \sum_{i \in S} v_i'$. Now one can easily see:
$$\sum_{i \in S^*} v_i/b \;\le\; \sum_{i \in S^*} (v_i' + 1) \;\le\; n + \sum_{i \in S^*} v_i' \;\le\; n + \sum_{i \in S} v_i' \;\le\; n + \sum_{i \in S} v_i/b.$$
This shows
$$\sum_{i \in S^*} v_i \;\le\; nb + \sum_{i \in S} v_i,$$
in words, the optimal total value is larger than the achieved value by at most an additional amount $nb$. Depending on the maximum value $v^*$ we choose a suitable $b$. By choosing $b := \varepsilon v^*/n$, the above inequality becomes
$$\sum_{i \in S^*} v_i \;\le\; \varepsilon v^* + \sum_{i \in S} v_i.$$
Since trivially $\sum_{i \in S^*} v_i \ge v^*$, this becomes
$$\sum_{i \in S^*} v_i \;\le\; \varepsilon \sum_{i \in S^*} v_i + \sum_{i \in S} v_i,$$
hence
$$(1 - \varepsilon) \sum_{i \in S^*} v_i \;\le\; \sum_{i \in S} v_i.$$
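Combining the rounding step with the value-based dynamic program gives the following Python sketch of the approximation scheme (reusing knapsack_by_values from above; the function name and the exact interface are our own):

    def knapsack_fptas(weights, values, W, eps):
        # Scale the values down with b = eps * v_max / n, solve the rounded
        # instance exactly with the value-based DP, and return that item set.
        n = len(values)
        v_max = max(values)
        b = eps * v_max / n
        rounded = [int(v // b) for v in values]            # v_i' = floor(v_i / b)
        _, chosen = knapsack_by_values(weights, rounded, W)
        achieved = sum(values[i] for i in chosen)
        # By the analysis above: achieved >= (1 - eps) * optimal total value.
        return chosen, achieved

Since every rounded value is at most $\lfloor v_i/b \rfloor \le n/\varepsilon$, the dynamic program now runs in time $O(n \sum_i v_i') = O(n^3/\varepsilon)$, polynomial in $n$ and $1/\varepsilon$.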