The 0-1 Knapsack Problem: An Introductory Survey
Michail G. Lagoudakis
The Center for Advanced Computer Studies
University of Southwestern Louisiana
P.O. Box 44330, Lafayette, LA 70504
[email protected]
while satisfying some constraints. For example, the Knapsack problem is to maximize the obtained profit without exceeding the knapsack capacity. In fact, it is a very special case of the well-known Integer Linear Programming Problem.

Let us formulate the problem in a mathematical way. We number the objects from 1 to n and define, for all i, 1≤i≤n, the non-negative numbers:

wi = the weight of object i, and
pi = the profit of object i.

Also, let W be the capacity of the knapsack, i.e. the maximum weight it can carry. Finally, we introduce a vector X of binary variables xi (i=1,...,n) having the meaning:

xi = 1, if the object i is selected; 0, otherwise.

Then the problem is stated as follows: Find a binary vector X that maximizes the objective function (profit)

Σ_{i=1}^{n} pi xi

subject to the constraint

Σ_{i=1}^{n} wi xi ≤ W.

In the decision form of the problem, the question is whether or not there exists a "solution" with profit no less than P. The decision form is given below:

PROBLEM: 0–1 KNAPSACK
INSTANCE: A finite set of objects U, a weight w(u) ∈ Z+ and a profit p(u) ∈ Z+ for each u ∈ U, and positive integers W (knapsack capacity) and P (desired profit).
QUESTION: Is there a subset U′ of U such that

Σ_{u∈U′} w(u) ≤ W and Σ_{u∈U′} p(u) ≥ P?

The reason for this restriction is that it is desirable to use the theory of NP-completeness (applicable only to decision problems) to derive results for the original problem. The point is that, if the objective function can be easily evaluated, the decision problem is no harder than the corresponding optimization problem. So, the implications of the theory can be extended: if the decision problem is proved to be NP-complete, the corresponding optimization problem is NP-hard [GaJo79].
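To make the decision form concrete, here is a minimal brute-force sketch in Python; the function name and the instance data in the last line are hypothetical, chosen only for illustration.

from itertools import combinations

def knapsack_decision(weights, profits, W, P):
    # Answer the 0-1 KNAPSACK question by checking every subset:
    # is there one with total weight <= W and total profit >= P?
    n = len(weights)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if (sum(weights[i] for i in subset) <= W and
                    sum(profits[i] for i in subset) >= P):
                return True
    return False

# Hypothetical instance: the first two objects answer "yes".
print(knapsack_decision([3, 4, 5], [4, 5, 6], W=7, P=9))  # True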
One may think that this simpler problem would be easier; however, both of them have the same degree of difficulty, as will be revealed in the following section.

Two proofs are presented here. The first (using EXACT–COVER) follows the way the proofs originally appeared for the Knapsack problem to be NP-complete. The second (using 3SAT) is more direct, in the sense that 3SAT is, so to speak, the "second" NP-complete problem (SAT being the "first").
We create |CS| = n objects, so U = {ui : i=1,...,n}. Let β = n+1 and, for i=1,...,n and j=1,...,p, define

xij = 1, if sj ∈ csi; 0, otherwise.

The weight of each object is then

w(ui) = Σ_{j=1}^{p} xij β^{j−1}

and the knapsack capacity is

W = β^0 + β^1 + ... + β^{p−1} = (β^p − 1)/(β − 1).

The object ui, i=1,...,n, corresponds to csi. The weight of ui is the summation of some powers of β: β^{j−1} appears in w(ui) if and only if sj ∈ csi. Therefore, there is a one-to-one correspondence between U and CS and between {β^{j−1} : j=1,...,p} and S. It is obvious that the transformation can be done in polynomial time O(np²).

Here is an example. Consider the following instance of EXACT–COVER:
S={s1, s2, s3, s4} and CS={cs1, cs2, cs3, cs4, cs5}, where cs1={s1}, cs2={s3, s4}, cs3={s1, s2, s4}, cs4={s1, s2}, cs5={s2, s3, s4}.

Then the corresponding instance of 0–1 KNAPSACK-FILL is U={u1, u2, u3, u4, u5}, β=6 and

w(u1) = 6^0 = 1
w(u2) = 6^2 + 6^3 = 252
w(u3) = 6^0 + 6^1 + 6^3 = 223
w(u4) = 6^0 + 6^1 = 7
w(u5) = 6^1 + 6^2 + 6^3 = 258
W = (6^4 − 1)/(6 − 1) = 6^0 + 6^1 + 6^2 + 6^3 = 259

It suffices to show that there exists a set U′ ⊆ U such that Σ_{u∈U′} w(u) = W if and only if CS contains an exact cover for S.

Suppose that for some subset U′ of U the summation of the weights equals W. Then each power of β, β^{j−1} (j=1,...,p), must appear in exactly one of the weights w(u), for u ∈ U′. Since there is a one-to-one correspondence between U and CS and between {β^{j−1} : j=1,...,p} and S, if we take CS′ = {csi : csi ∈ CS and ui ∈ U′, i=1,...,n}, then each element of S belongs to exactly one cs, for cs ∈ CS′; hence CS′ is an exact cover for S. Conversely, an exact cover CS′ yields a subset U′ whose weights sum exactly to W.

We conclude that EXACT–COVER ≤p 0–1 KNAPSACK-FILL. By Theorem 1, 0–1 KNAPSACK-FILL is NP-complete. □
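The transformation is easy to state programmatically. The following Python sketch (function name ad hoc) reproduces the example instance above, with the elements of S represented by the integers 1,...,p:

def exact_cover_to_knapsack_fill(CS, p):
    # Each cs_i becomes an object whose weight contains beta^(j-1)
    # exactly when s_j is a member of cs_i.
    n = len(CS)
    beta = n + 1
    weights = [sum(beta ** (j - 1) for j in cs) for cs in CS]
    W = (beta ** p - 1) // (beta - 1)  # beta^0 + ... + beta^(p-1)
    return weights, W

CS = [{1}, {3, 4}, {1, 2, 4}, {1, 2}, {2, 3, 4}]
weights, W = exact_cover_to_knapsack_fill(CS, p=4)
print(weights, W)  # [1, 252, 223, 7, 258] 259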
Proof 2. [MaYo78] It is obvious that 0–1 KNAPSACK-FILL belongs to NP. A nondeterministic algorithm needs only guess a subset of the objects ui, add up their weights and check whether the summation equals W. This can be done in polynomial time. Therefore, 0–1 KNAPSACK-FILL belongs to NP.

To show that 0–1 KNAPSACK-FILL is NP-hard, we will transform 3SAT to 0–1 KNAPSACK-FILL. Let X={x1, x2, ..., xn} be a set of variables and C={c1, c2, ..., cm} a set of clauses making up an arbitrary instance of 3SAT. Weights must be assigned to a set of objects U and a knapsack capacity W, so that C is satisfiable if and only if there exists a subset U′ of U such that Σ_{u∈U′} w(u) = W.

The set U of the objects will consist of two kinds of objects:

1. For each variable xi, 1≤i≤n, create two objects ui and u′i.
2. For each clause cj, 1≤j≤m, create two objects co1j and co2j (compensating objects).

Now, a weight must be assigned to each of them. The weight w(u) for any u ∈ U will be an (n+m)-digit decimal number (actually, it could be a base-4 number, but let us use decimal for simplicity). The ith digit corresponds
to the variable xi, while the (n+j)th digit corresponds to the clause cj. For the objects ui and u′i, 1≤i≤n, the rightmost n digits are used to identify the corresponding variable xi, setting the ith digit equal to 1 and the rest (n−1) digits equal to 0. The leftmost m digits are used to identify the clauses cj, 1≤j≤m, where xi (for ui) or ¬xi (for u′i) appears, setting the (n+j)th digit equal to 1 and the rest equal to 0.

For the compensating objects co1j and co2j, 1≤j≤m, all the rightmost n digits are 0 and the leftmost m digits are used to identify the corresponding clause cj, setting the (n+j)th digit equal to 1 and the rest (m−1) digits equal to 0. Note that the weights of co1j and co2j are identical.

Finally, the capacity W of the knapsack will be an (n+m)-digit number with:

• the n rightmost digits equal to 1
• the m leftmost digits equal to 3

Note that |U| = 2(n+m) and each weight (and the capacity) has exactly n+m digits, so the instance can be represented by a table of [2(n+m)+1](n+m) entries. Each entry can be calculated in constant time. Therefore, the transformation from 3SAT to 0–1 KNAPSACK-FILL can be done in polynomial time.

An example is provided for clarification purposes. Consider the following instance of 3SAT:
X={x1, x2, x3, x4, x5} and C={c1, c2, c3, c4}, where c1={x1, ¬x2, x4}, c2={x2, ¬x3, ¬x5}, c3={x3, x4, x5}, c4={¬x1, x2, ¬x5}.

The corresponding instance of 0–1 KNAPSACK-FILL is the following:
U={u1, u′1, u2, u′2, u3, u′3, u4, u′4, u5, u′5, co11, co21, co12, co22, co13, co23, co14, co24}.
The weights of all objects are shown in the table below; digit positions are numbered 9 8 7 6 5 4 3 2 1 from left to right, with digits 1–5 identifying the variables and digits 6–9 the clauses. The capacity W is 333311111.

object        weight (digits 987654321)
u1            000100001
u′1           100000001
u2            101000010
u′2           000100010
u3            010000100
u′3           001000100
u4            010101000
u′4           000001000
u5            010010000
u′5           101010000
co11 = co21   000100000
co12 = co22   001000000
co13 = co23   010000000
co14 = co24   100000000
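The construction can be checked mechanically. In the Python sketch below (an illustration, with ad hoc naming), a literal is encoded as +i for xi and −i for ¬xi; it reproduces the weights and capacity of the example:

def threesat_to_knapsack_fill(n, clauses):
    # Build the (n+m)-digit decimal weights described in the text.
    m = len(clauses)
    weights = {}
    for i in range(1, n + 1):
        for sign, name in ((+1, "u%d" % i), (-1, "u'%d" % i)):
            d = 10 ** (i - 1)  # the ith digit identifies variable x_i
            for j, clause in enumerate(clauses, start=1):
                if sign * i in clause:
                    d += 10 ** (n + j - 1)  # the (n+j)th digit marks clause c_j
            weights[name] = d
    for j in range(1, m + 1):  # two identical compensating objects per clause
        weights["co1_%d" % j] = weights["co2_%d" % j] = 10 ** (n + j - 1)
    W = int("3" * m + "1" * n)  # m leftmost digits 3, n rightmost digits 1
    return weights, W

clauses = [{1, -2, 4}, {2, -3, -5}, {3, 4, 5}, {-1, 2, -5}]
weights, W = threesat_to_knapsack_fill(5, clauses)
print(W)              # 333311111
print(weights["u4"])  # 10101000: digits 4 (x4), 6 (c1) and 8 (c3)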
It suffices to show that there exists a subset U′ of U, such that Σ_{u∈U′} w(u) = W, if and only if C is satisfiable.

Suppose that for some subset U′ of U the summation of the weights equals W. The rightmost n digits ensure that only one of ui, u′i, 1≤i≤n, belongs to U′: if both belonged to U′, the ith digit of the summation would be 2, which contradicts the hypothesis. In fact, these objects form a truth assignment for the corresponding variables:

• If ui ∈ U′, then xi = 1.
• If u′i ∈ U′, then ¬xi = 1, i.e. xi = 0.

The m leftmost digits ensure that each clause is satisfied by this assignment. Indeed, there are three cases for the (n+j)th digit, 1≤j≤m:

1. None of co1j, co2j belongs to U′. Then the objects u ∈ U′ with u = ui or u = u′i contribute 3 to this digit. So, all three literals in cj are 1 and, thus, cj is satisfied.

2. One of co1j, co2j belongs to U′. Then the same objects contribute 2 to this digit. So, two of the literals in cj are 1 and, thus, cj is satisfied.

3. Both co1j and co2j belong to U′. Then the same objects contribute 1 to this digit. So, at least one of the literals in cj is 1 and, thus, cj is satisfied.

In each case, every cj is satisfied and, thus, C is satisfiable.

Conversely, if C is satisfiable, then there exists an assignment such that at least one of the literals in each clause is 1. For each i, 1≤i≤n, if xi=1 then select ui, else (xi=0) select u′i. This ensures that, if the weights of these objects are summed up, the n rightmost digits will all be 1 and the m leftmost digits will be either 1, 2 or 3 (since each clause is satisfied by at least one literal). For 1≤j≤m: if the (n+j)th digit of this summation is d, select 3−d of the compensating objects co1j, co2j, so that the digit becomes 3 and the overall summation will be equal to W. So, there exists a subset U′ of U, such that Σ_{u∈U′} w(u) = W.

Finally, it is concluded that 3SAT ≤p 0–1 KNAPSACK-FILL. By Theorem 1, 0–1 KNAPSACK-FILL is NP-complete. □

The above discussion results in the following.

Theorem 3. 0–1 KNAPSACK is NP-complete.

Similar proofs using other NP-complete problems can be found in [Papa94], [MaTo90], [Stin87] and [GaJo79].

4 Related problems

In this section a set of related problems is presented. Usually all of these problems are referred to under the term Knapsack problems. The decision form is preferred, because it is the common form for theoretical study. The optimization form can be derived easily (see also [MaTo90]).

A generalization of the original 0–1 KNAPSACK problem arises under the assumption that bi copies of the object ui, i=1,...,n, are available. This is known as the

PROBLEM: BOUNDED KNAPSACK
INSTANCE: A finite set of objects U={u1, u2, ..., un}, a number of copies bi ∈ Z+, a weight wi ∈ Z+ and a profit pi ∈ Z+ for each ui ∈ U, i=1,...,n, and positive integers W (knapsack capacity) and P (desired profit).
QUESTION: Is there any integer vector X=(x1, x2, ..., xn), such that 0≤xi≤bi for i=1,...,n and

Σ_{i=1}^{n} wi xi ≤ W and Σ_{i=1}^{n} pi xi ≥ P?
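As a small illustration of this QUESTION, the following sketch verifies a candidate vector against a hypothetical BOUNDED KNAPSACK instance:

def bounded_knapsack_check(x, b, w, p, W, P):
    # Verify: 0 <= x_i <= b_i, total weight <= W, total profit >= P.
    if any(xi < 0 or xi > bi for xi, bi in zip(x, b)):
        return False
    return (sum(xi * wi for xi, wi in zip(x, w)) <= W and
            sum(xi * pi for xi, pi in zip(x, p)) >= P)

# Hypothetical instance with two copies of each object available.
print(bounded_knapsack_check(x=[2, 1, 0], b=[2, 2, 2],
                             w=[2, 3, 4], p=[3, 4, 5], W=8, P=10))  # True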
Another related variant is the CHANGE-MAKING problem. The name is related to the situation where a cashier has to assemble change of amount W giving the maximum (or minimum) number of coins. Its question asks for an integer vector X=(x1, ..., xn) such that

Σ_{i=1}^{n} wi xi = W and Σ_{i=1}^{n} xi ≥ P (respectively, ≤ P)?

If more than one knapsack, namely k1, k2, ..., km with capacities W1, W2, ..., Wm respectively, is allowed, the following problem is obtained.

QUESTION: Is there any binary 2-dimensional array X=(xij), such that

Σ_{j=1}^{m} xij ≤ 1 for i=1,...,n,

Σ_{i=1}^{n} wi xij ≤ Wj for j=1,...,m,

and Σ_{j=1}^{m} Σ_{i=1}^{n} pi xij ≥ P?
If, in addition, the weight of an object may depend on the knapsack into which it is placed (wij for object ui in knapsack kj), the corresponding question asks for a binary 2-dimensional array X=(xij), such that

Σ_{j=1}^{m} xij ≤ 1 for i=1,...,n, and

Σ_{i=1}^{n} wij xij ≤ Wj for j=1,...,m.

The early milestones in the algorithmic history of the problem include:

1950's - First dynamic programming algorithm [Bell57]
        - Upper bound on the optimal solution [Dant57]
1960's - Dynamic programming algorithm improvements [GiGo61, GiGo63, GiGo66]
        - First branch and bound algorithm [Kole67]
Although not all of these algorithms are purely polynomial, some of them have an impressive record of running quickly in practice. That makes some researchers consider the Knapsack problem a well-solved problem [GaJo79]. An excellent reference on algorithms for Knapsack problems is [MaTo90]. The descriptions presented here are based on [Stin87].

For the continuous relaxation of the problem, in which the variables xi may take fractional values, the objects are ordered by non-increasing ratio pi/wi and s denotes the first (critical) object that does not fit in the remaining capacity. The optimal solution X0 is then calculated as:

3. Calculate the optimal solution X0:
   x0i = 1, for i=1,...,s−1
   x0i = 0, for i=s+1,...,n
   x0s = (W − Σ_{k=1}^{s−1} wk) / ws
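A minimal Python sketch of this relaxation step (the sorting is done inside the function, so the input need not be pre-ordered; names are ad hoc):

def continuous_relaxation(w, p, W):
    # Dantzig-style bound: sort by profit/weight ratio, fill greedily,
    # and take a fractional piece of the critical object s.
    order = sorted(range(len(w)), key=lambda i: p[i] / w[i], reverse=True)
    cap, profit = W, 0.0
    for i in order:
        if w[i] <= cap:        # x0_i = 1
            cap -= w[i]
            profit += p[i]
        else:                  # critical object: fractional x0_s
            profit += p[i] * cap / w[i]
            break
    return profit

print(continuous_relaxation([2, 3, 4], [4, 5, 5], W=4))  # 7.33...; X0 = (1, 2/3, 0)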
The optimal solution X0 of the continuous relaxation gives an upper bound for the optimal solution of the 0–1 Knapsack problem. Moreover, it provides the idea for the following greedy approximation algorithm for the 0–1 Knapsack problem (the heuristic given in the introduction):

1. Order and rename the objects so that: pi/wi ≥ pi+1/wi+1 for i=1,...,n−1.
2. CW = 0 (the current weight).
3. For i=1,...,n:
   a. If CW + wi ≤ W, then CW ← CW + wi; xi = 1; else xi = 0.

The time complexity for both of these algorithms is polynomial: O(n), apart from the initial O(n log n) sorting of the objects.
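A direct Python sketch of the greedy heuristic (the sorting of step 1 is done inside the function; the instance in the last line is hypothetical):

def greedy_knapsack(w, p, W):
    # Scan objects in non-increasing profit/weight order,
    # taking every object that still fits (steps 1-3 above).
    order = sorted(range(len(w)), key=lambda i: p[i] / w[i], reverse=True)
    x = [0] * len(w)
    cw = 0  # CW, the current weight
    for i in order:
        if cw + w[i] <= W:
            cw += w[i]
            x[i] = 1
    return x

print(greedy_knapsack([2, 3, 4], [4, 5, 5], W=4))  # [1, 0, 0]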
5.3 Branch and Bound

A naive approach to solve the 0–1 Knapsack problem is to consider in turn all the 2^n possible solutions (vectors) X, calculating the profit each time and keeping track of the highest profit found and the corresponding vector. Since each xi, i=1,...,n, can be only 0 or 1, this can be done by a backtracking algorithm traversing a binary tree in a depth-first fashion. The level i of the tree corresponds to variable xi. The internal nodes at some level represent partial solutions extended at the next level. For example, x=(0,1,-,-,-) has two children: (0,1,0,-,-) and (0,1,1,-,-). So the leaves represent all the possible solutions (feasible or not). But this exhaustive algorithm has an exponential complexity Θ(n2^n), which is not desirable.

The average case can be improved in two ways:

1. Pruning the branches that lead to non-feasible solutions. This can be done by calculating the current weight at each node, i.e. the total weight of the objects selected so far in the partial solution. If the current weight at some node is greater than the Knapsack capacity W, the subtree under this node is pruned; it leads only to non-feasible solutions.

2. Pruning the branches that will not give a better profit than the best found so far. This can be done by calculating an upper bound for the solutions under each node (bounding function) and comparing it with the optimal profit found so far. If the bound is less than (or equal to) the optimal so far, the branch is pruned. The bounding function at a node can be calculated by adding the profits of the objects selected so far in the partial solution and the optimal solution obtained from the relaxation of the "remaining" problem using the greedy algorithm.

In fact, this approach is not used in a pure depth-first fashion, but in a best-first fashion: calculate the bounding function on both sons of a node and follow first the most "promising" branch, i.e. the one with the greatest value.

The branch and bound algorithm described above has an exponential time complexity O(n2^n) in the worst case, but it is very fast in the average case. More complicated branch and bound schemes have also been proposed [MaTo90]. A sketch of the basic scheme is given below.
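The following Python sketch implements the basic scheme just described (depth-first rather than best-first, for brevity); it assumes the objects are already sorted by non-increasing profit/weight ratio, so that the continuous relaxation of the remaining subproblem can serve as the bounding function:

def branch_and_bound(w, p, W):
    n = len(w)
    best = 0

    def bound(i, cap, profit):
        # Profit so far plus the continuous relaxation of the
        # remaining subproblem (the bounding function).
        for j in range(i, n):
            if w[j] <= cap:
                cap -= w[j]
                profit += p[j]
            else:
                return profit + p[j] * cap / w[j]
        return profit

    def dfs(i, cap, profit):
        nonlocal best
        best = max(best, profit)
        # Prune when the bound cannot beat the best profit so far.
        if i == n or bound(i, cap, profit) <= best:
            return
        if w[i] <= cap:                    # branch x_i = 1 (only if feasible)
            dfs(i + 1, cap - w[i], profit + p[i])
        dfs(i + 1, cap, profit)            # branch x_i = 0

    dfs(0, W, 0)
    return best

print(branch_and_bound([2, 3, 4], [4, 5, 5], W=5))  # 9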
5.4 Dynamic Programming

The first dynamic programming approach described here is based on the following idea. The value of, say, xn can be either 0 or 1. If xn=0, then the best profit possibly obtained is whatever is attained from the remaining n−1 objects with knapsack capacity W. If xn=1, then the best profit possibly obtained is the profit pn (already selected) plus the best profit obtained by the remaining n−1 objects with knapsack capacity W−wn (in this case it must be wn ≤ W). Therefore, the optimal profit will be the maximum of these two "best" profits.

The idea above leads to the following recurrence relation, where P(i,m), i=1,...,n, m=0,...,W, is the best profit obtainable from the objects 1,...,i
with knapsack capacity m:

P(i,m) = max{ P(i−1, m), P(i−1, m−wi) + pi }, if wi ≤ m
P(i,m) = P(i−1, m), if wi > m

with initial conditions

P(1,m) = 0, if w1 > m
P(1,m) = p1, if w1 ≤ m

So, the optimal profit is P(n,W).

A dynamic programming algorithm simply has to construct an n×W table and calculate the entries P(i,m) (i=1,...,n, m=0,...,W) in a bottom-up fashion. As soon as the optimal profit P(n,W) has been calculated, the optimal solution X can be found by backtracking through the table and assigning 0's and 1's to the xi's according to the selections of the max function.

The algorithm has a time complexity of O(nW). Note that this is not polynomial, since W can be exponentially large with respect to n, e.g. W=2^n. Such a complexity is called pseudo-polynomial.
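A sketch of this algorithm in Python; a zero row P(0, ·) = 0 is added for convenience, which is equivalent to the initial conditions above:

def knapsack_dp(w, p, W):
    n = len(w)
    # P[i][m]: best profit from objects 1..i with capacity m.
    P = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for m in range(W + 1):
            P[i][m] = P[i - 1][m]
            if w[i - 1] <= m:
                P[i][m] = max(P[i][m], P[i - 1][m - w[i - 1]] + p[i - 1])
    # Backtrack through the max selections to recover X.
    x, m = [0] * n, W
    for i in range(n, 0, -1):
        if P[i][m] != P[i - 1][m]:
            x[i - 1] = 1
            m -= w[i - 1]
    return P[n][W], x

print(knapsack_dp([2, 3, 4], [4, 5, 5], W=5))  # (9, [1, 1, 0])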
Another dynamic programming approach attempts to minimize the knapsack capacity required to achieve a specific profit. For i=1,...,n and p=0,...,Σ_{k=1}^{i} pk, W(i,p) is defined to be the minimum knapsack capacity into which a subset of the objects 1,...,i, obtaining a profit of at least p, can fit. Then W(i,p) satisfies the following recurrence relation:

W(i,p) = W(i−1, p−pi) + wi, if Σ_{k=1}^{i−1} pk < p
W(i,p) = min{ W(i−1, p), W(i−1, p−pi) + wi }, if p ≤ Σ_{k=1}^{i−1} pk

with initial conditions

W(1,p) = w1, if p ≤ p1
W(1,p) = W+1, if p > p1

(the value W+1 marks an unattainable profit). Then the optimal profit is

P = max{ p : W(n,p) ≤ W }.

A dynamic programming algorithm simply has to construct an n × (Σ_{k=1}^{n} pk) table and calculate the entries W(i,p) in a bottom-up fashion. As soon as the optimal profit P has been calculated, the optimal solution X can be found by backtracking through the table and assigning 0's and 1's to the xi's according to the selections of the min function. The algorithm has a pseudo-polynomial complexity O(n² maxp), where maxp = max{pi : i=1,...,n}.
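A sketch of the profit-indexed algorithm; W+1 is used as the "unattainable" marker, and W(i, p) = 0 for p ≤ 0, so that an object can supply profit p on its own:

def knapsack_dp_by_profit(w, p, W):
    n, total = len(w), sum(p)
    INF = W + 1  # any value > W means "profit not attainable"
    # Row i = 1, with Wt[0] = 0 (profit 0 needs no capacity).
    Wt = [0] + [w[0] if q <= p[0] else INF for q in range(1, total + 1)]
    for i in range(1, n):
        Wt = [min(Wt[q], Wt[max(q - p[i], 0)] + w[i])
              for q in range(total + 1)]
    # Optimal profit: the largest q whose minimum capacity fits.
    return max(q for q in range(total + 1) if Wt[q] <= W)

print(knapsack_dp_by_profit([2, 3, 4], [4, 5, 5], W=5))  # 9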
5.5 Approximate Algorithms

Occasionally a small loss of accuracy is acceptable for the sake of speed in running time. A near-optimal solution is often as good as an optimal solution, but it can be obtained in significantly less time. This is the reason that several approximation algorithms have been proposed for the Knapsack problem in the literature.

The first approximation algorithm described here is based on the second dynamic programming approach presented above. Recall that its complexity is O(n² pmax). The idea is that, if the profits can be scaled in some manner so that pmax is reduced, a significant improvement in the running time will result. Given a positive integer k, this scaling can be done by the following substitution:

pi ← ⌊pi / 2^k⌋, i=1,...,n.

In fact, the last k bits of the binary representation of each pi are truncated. The result is a small loss of accuracy, but now the computational complexity has been reduced to O(n² pmax / 2^k). It can be proved that the relative error

re = (Popt − Papp) / Popt,

where Popt and Papp are the profits of the optimal and the approximate solution respectively, is bounded by the quantity n(2^k − 1)/pmax.

The integer k can be chosen using the following formula if we desire the relative error to be at most ε>0:

n(2^k − 1)/pmax ≤ ε, or k ≤ log2(1 + ε·pmax/n),

and, since k must be an integer,

k = ⌊log2(1 + ε·pmax/n)⌋.

The complexity now becomes O(n³/ε). For a fixed ε>0, we have an ε-approximation algorithm with complexity O(n³).
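A sketch of the scaling step, reusing knapsack_dp from the dynamic programming section as the exact solver; note that the complexity argument in the text relies on the profit-indexed DP, but any exact method serves for illustration:

def scaled_knapsack(w, p, W, k):
    # Truncate the last k bits of every profit, solve the scaled
    # instance exactly, and report the chosen objects' true profit.
    ps = [pi >> k for pi in p]  # p_i <- floor(p_i / 2^k)
    _, x = knapsack_dp(w, ps, W)
    return sum(pi for pi, xi in zip(p, x) if xi)

print(scaled_knapsack([2, 3, 4], [400, 500, 500], W=5, k=4))  # 900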
Another approximation algorithm can be derived in the following way. Suppose that the greedy approximation algorithm is used to produce a feasible solution X1 with profit P1 for an instance I. Let, also, X2 with profit P2 be the feasible solution that contains only the object with the highest profit. Comparing P1 and P2, we can obtain a feasible solution X* with profit P* = max{P1, P2}. It has been proven that, if Popt is the profit of the optimal solution, then

P* ≤ Popt ≤ 2P*.

Then the relative error is bounded by 0.5, and this is a simple 0.5-approximation algorithm. Now, by taking

k = ⌊log2(1 + ε·P*/n)⌋

for some ε and applying the ε-approximation algorithm above to the initial instance I, it can be proven that the relative error is at most ε and the time complexity becomes O(nPopt/2^k), which is O(n²/ε). Therefore, this improvement leads to another ε-approximation algorithm with better complexity.
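A sketch of the simple 0.5-approximation, reusing greedy_knapsack from the earlier sketch:

def half_approx(w, p, W):
    # P* = max{P1, P2}: greedy profit vs. best single fitting object.
    x1 = greedy_knapsack(w, p, W)
    P1 = sum(pi for pi, xi in zip(p, x1) if xi)
    P2 = max((pi for pi, wi in zip(p, w) if wi <= W), default=0)
    return max(P1, P2)

print(half_approx([1, 4], [3, 5], W=4))  # 5: the single heavy object wins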
6 Applications

The 0-1 Knapsack problem is one of the few problems that have been extensively studied during the last few decades. There are three main reasons [MaTo90]:

a. It is a special case of the very important problem of Integer Linear Programming.
b. It appears very often as a subproblem in other problems.
c. It arises in many practical situations, such as cargo loading, cutting stock, etc.

Suppose we have to invest (totally or partially) an amount of W dollars and we are considering n investments. If pi is the profit expected from investment i and wi the amount of dollars it requires, then finding the optimal choice of investments, i.e. the one that maximizes the profit, is equivalent to solving the corresponding knapsack problem. Suppose, also, that a transportation company has to transport a stack of n items. If item i has weight wi and each truck can carry items with total weight up to W, what is the ideal way to load the trucks so that they carry as much weight as possible on each route? The knapsack (truck) problem appears again.

The problem has also been used in cryptography for public key encryption. Other domains where the problem appears are: cargo loading, project selection, budget control, capital budgeting, cutting stock, network flow, simulated annealing, selection of journals for a library, usage of group theory in integer programming, memory sharing, stochastic signals, etc. A good overview of the early applications can be found in [SaKl75].

7 Conclusion

This paper provides an introduction to the Knapsack problem for the unfamiliar reader. By no means is it a complete reference. The following references and the extensive list in [MaTo90] provide pointers to the literature for the demanding reader.

8 Acknowledgments

I would like to thank my professors Dr. William R. Edwards and Dr. Gui-Liang Feng for their useful comments. Also, I would like to thank Mrs Deana Michna for her invaluable help in the revision of the text.
9 References

[BaZe80] Balas, E. & Zemel, E. "An algorithm for large zero-one knapsack problems", in Operations Research 28, 1980, pp. 1130–1154.

[Bell57] Bellman, R. Dynamic Programming, Princeton University Press, Princeton, NJ, 1957.

[Dant57] Dantzig, G.B. "Discrete variable extremum problems", in Operations Research 5, 1957, pp. 266–277.

[GaJo79] Garey, M.R. & Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-completeness, W.H. Freeman and Comp., San Francisco, 1979.

[GiGo61] Gilmore, P.C. & Gomory, R.E. "A linear programming approach to the cutting stock problem I", in Operations Research 9, 1961, pp. 849–858.

[GiGo63] Gilmore, P.C. & Gomory, R.E. "A linear programming approach to the cutting stock problem II", in Operations Research 11, 1963, pp. 863–888.

[GiGo66] Gilmore, P.C. & Gomory, R.E. "The theory and computation of Knapsack functions", in Operations Research 14, 1966, pp. 1045–1074.

[HoSa74] Horowitz, E. & Sahni, S. "Computing partitions with applications to the knapsack problem", in Journal of the ACM 21, 1974, pp. 277–292.

[IbKi75] Ibarra, O.H. & Kim, C.E. "Fast approximation algorithms for the knapsack and sum of subset problems", in Journal of the ACM 22, 1975, pp. 463–468.

[InKo73] Ingargiola, G.P. & Korsh, J.F. "A reduction algorithm for zero-one single knapsack problems", in Management Science 20, 1973, pp. 460–463.

[Karp72] Karp, R.M. "Reducibility among combinatorial problems", in Miller, R.E. & Thatcher, J.W. (eds.) Complexity of Computer Computations, Plenum Press, New York, 1972, pp. 85–103.

[Ko_I93] Ko, I. "Using AI techniques and learning to solve multi-level knapsack problems", PhD Thesis, University of Colorado at Boulder, Boulder, CO, 1993.

[Kole67] Kolesar, P.J. "A branch and bound algorithm for the knapsack problem", in Management Science 13, 1967, pp. 723–735.

[LoSm92] Loots, W. & Smith, T.H.C. "A parallel algorithm for the zero-one knapsack problem", in International Journal of Parallel Programming 21, 5, 1992, pp. 313–348.

[MaYo78] Machtey, M. & Young, P. An Introduction to the General Theory of Algorithms, Elsevier North-Holland, Inc., New York, 1978, Ch. 7.

[MaTo77] Martello, S. & Toth, P. "An upper bound for the zero-one knapsack problem and a branch and bound algorithm", in European Journal of Operational Research 1, 1977, pp. 169–175.

[MaTo90] Martello, S. & Toth, P. Knapsack Problems: Algorithms and Computer Implementations, John Wiley & Sons, New York, 1990.

[OhPS93] Ohlsson, M., Peterson, C. & Soderberg, B. "Neural networks for optimization problems with inequality constraints: the knapsack problem", in Neural Computation 5, 2, 1993, pp. 331–339.

[Papa94] Papadimitriou, C.H. Computational Complexity, Addison-Wesley Publishing Company, Inc., Reading, MA, 1994, Ch. 9.

[PeHA94] Penn, M., Hasson, D. & Avriel, M. "Solving the 0–1 proportional Knapsack problem by sampling", in Journal of Optimization Theory and Applications 80, 2, 1994, pp. 261–272.

[Sahn75] Sahni, S. "Approximate algorithms for the 0–1 knapsack problem", in Journal of the ACM 22, 1975, pp. 115–124.

[SaKl75] Salkin, H.M. & de Kluyver, C.A. "The knapsack problem: a survey", in Naval Research Logistics Quarterly 22, 1975, pp. 127–144.

[Sava76] Savage, J.E. The Complexity of Computing, John Wiley & Sons, Inc., New York, 1976, Ch. 8.

[Stin87] Stinson, D.R. An Introduction to the Design and Analysis of Algorithms (2nd Ed.), Winnipeg, Manitoba, Canada, 1987, Ch. 3, 4, 6.