Progress in Linear Programming-Based Algorithms For Integer Programming - Johnson Et Al.
This paper is about modeling and solving mixed integer programming (MIP) problems. In the last decade, the use of mixed
integer programming models has increased dramatically. Fifteen years ago, mainframe computers were required to solve
problems with a hundred integer variables. Now it is possible to
solve problems with thousands of integer variables on a personal computer and obtain provably good approximate solutions to problems such as set partitioning with millions of
binary variables. These advances have been made possible by
developments in modeling, algorithms, software, and hardware.
This paper focuses on effective modeling, preprocessing, and
the methodologies of branch-and-cut and branch-and-price,
which are the techniques that make it possible to treat problems
with either a very large number of constraints or a very large
number of variables. We show how these techniques are useful
in important application areas such as network design and crew
scheduling. Finally, we discuss the relatively new research areas of parallel integer programming and stochastic integer programming.
The problem

max cx
subject to Ax ≤ b,
l ≤ x ≤ u,
x_j integral, j = 1, . . . , p,

is called a mixed integer program (MIP). The input data are the
matrices c (1 × n), A (m × n), b (m × 1), l (1 × n), and u (1 × n),
and the n-vector x is to be determined. We assume 1 ≤ p ≤ n;
otherwise, the problem is a linear program (LP). If p = n, the
problem is a pure integer program (PIP). A PIP in which
all variables have to be equal to 0 or 1 is called a binary
integer program (BIP), and a MIP in which all integer variables have to be equal to 0 or 1 is called a mixed binary integer
program (MBIP). Binary integer variables occur very frequently in MIP models of real problems.
We consider problems in which the matrix A has no
special structure and also some structured problems that are
natural to formulate as MIPs. The latter class includes the
traveling salesman problem, fixed-charge network flow
LP-Based Integer Programming Algorithms
Figure 1.
Branch-and-bound tree.
Johnson, Nemhauser, and Savelsbergh
[The formulation examples in this section survive only as fragments: assignment constraints Σ_{j≠i} x_ij = 1 for i = 1, . . . , n and Σ_{i≠j} x_ij = 1 for j = 1, . . . , n, and time-indexed constraints over periods t = 1, . . . , T relating production variables y_t to demands d_k and capacities D_t.]
1. Each node is met by exactly two edges. Let δ(v) be the set
of edges that are incident to node v; then

Σ_{e∈δ(v)} x_e = 2  for all v ∈ V.

2. The chosen edges must connect all of the nodes, which can be enforced by requiring that every nontrivial node set be joined to its complement by at least two chosen edges:

Σ_{e∈δ(U)} x_e ≥ 2  for all U ⊂ V with 2 ≤ |U| ≤ |V|/2.
2. Preprocessing
We have stressed the importance of tight linear programming relaxations. Preprocessing applies simple logic to reformulate the problem and tighten the linear programming
relaxation. In the process, preprocessing may also reduce the
size of an instance by fixing variables and eliminating constraints. Sometimes preprocessing may detect infeasibility.
The simplest logical testing is based on bounds. Let L_i be
any lower bound on the value of the ith row A_i x subject only
to l ≤ x ≤ u, and let U_i be any upper bound on the value of
the ith row A_i x subject only to l ≤ x ≤ u. A constraint A_i x ≤
b_i is redundant if U_i ≤ b_i and is infeasible if L_i > b_i. A bound
on a variable may be tightened by recognizing that a constraint becomes infeasible if the variable is set at that bound.
These considerations apply to linear programs as well as to
integer programs. However, such tests may be unnecessary
for linear programs. For mixed integer programs, one is
willing to spend more time initially in order to reduce the
possibility of long solution times. In the case of 0-1 variables,
the initial lower and upper bounds on the variables are 0
and 1, respectively. If these bounds are improved, then the
variable can be fixed. For example, if the upper bound of a
0-1 variable is less than 1, then it can be fixed to 0. Of course,
if the lower bound is positive and the upper bound is less
than 1, then the problem is integer infeasible. In summary,
considering one row together with lower and upper bounds
may lead to dropping the row if it is redundant, declaring
the problem infeasible if that one row is infeasible, or tightening the bounds on the variables.
These relatively simple logical testing methods can become powerful when combined with probing. Probing means
temporarily setting a 0-1 variable to 0 or 1 and then redoing
the logical testing. If the logical testing shows that the problem has become infeasible, then the variable on which we
probe can be fixed to its other bound. For example, 5x1 + 3x2 ≥ 4 becomes infeasible when x1 is set to 0. We conclude
that x1 must be 1 in every feasible solution. If the logical
testing results in another 0-1 variable being fixed, then a
logical implication has been found. Consider 5x1 + 4x2 + x3 ≤ 8. If x1 is set to 1, then subsequent bound reduction will
fix x2 to 0. Thus, we have found the logical implication x1 = 1 implies x2 = 0, which can be represented by the inequality
x1 + x2 ≤ 1. Adding this inequality tightens the LP relaxation since (1, 0.75, 0) is feasible for the original inequality
but is infeasible for the implication inequality. If the logical
testing shows that a constraint has become redundant, then
it can be tightened by what is called coefficient reduction or
coefficient improvement. For example, 2x1 + x2 + x3 ≥ 1
becomes strictly redundant when x1 is set to 1. Whenever a
variable being set to 1 leads to a strictly redundant constraint, the coefficient of the variable can be reduced by
the amount by which the constraint became redundant. Therefore, 2x1 + x2 + x3 ≥ 1 can be tightened to x1 + x2 + x3 ≥ 1.
Note that (0.5, 0, 0) is no longer feasible to the LP relaxation
of the tightened constraint. Less obvious coefficient improvements can also be found during probing. For example,
if there are inequalities

x1 + x2 ≤ 1, x1 + x3 ≤ 1, and x2 + x3 ≤ 1,   (1)

then they can be combined into the single stronger inequality x1 + x2 + x3 ≤ 1. More generally, whenever probing discovers that every pair of variables in a set S satisfies such a pairwise inequality, the clique inequality

Σ_{j∈S} x_j ≤ 1

is valid and dominates the pairwise inequalities.
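Probing can likewise be sketched in a few lines. The helper below is an illustration only; it assumes all constraints are written in ≤-form (so the ≥-example 5x1 + 3x2 ≥ 4 appears as −5x1 − 3x2 ≤ −4), tentatively fixes each 0-1 variable at each bound, and applies the bound-based infeasibility test described above:

```python
def probe_binary(constraints, n):
    """Probing sketch for a system of <=-constraints (a, b) over n 0-1
    variables: tentatively set x_j to 0 and to 1; if one setting makes
    some row infeasible (its minimum activity exceeds b), then x_j can
    be fixed to the other value. Returns {j: forced value}."""
    fixed = {}
    for j in range(n):
        for v in (0, 1):
            infeasible = False
            for a, b in constraints:
                # minimum activity with x_j = v and the rest free in [0, 1]
                lo = a[j] * v + sum(min(a[k], 0) for k in range(n) if k != j)
                if lo > b:
                    infeasible = True
                    break
            if infeasible:
                fixed[j] = 1 - v
                break
    return fixed
```

On the example above, probing x1 = 0 makes −5x1 − 3x2 ≤ −4 infeasible, so x1 is fixed to 1.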
3. Branching
In the normal course of a branch-and-bound algorithm, an
unevaluated node is chosen (initially MIP(0)), the LP relaxation is solved, and a fractional variable (if there is one) is
chosen on which to branch. If the variable is a 0-1 variable,
one branch sets the variable to 0 (the down branch) and the
other sets it to 1 (the up branch). If it is a general integer
variable and the LP value is, for instance, 3.5, then one
branch constrains it to be at most 3 and the other constrains it to be
at least 4. Even in this simple framework, there are two choices to
be made: 1) which active node to evaluate and 2) the fractional variable on which to branch.
The variable choice can be critical in keeping the tree size
small. A simple rule is to select a variable whose fractional
value is closest to 1/2, i.e., with the maximum integer infeasibility. More sophisticated rules, which are used by many
commercial solvers, try to choose a variable that causes the objective function to deteriorate quickly; estimates of this per-unit change in the objective are known as pseudocosts.
It is difficult to balance the relative advantages and disadvantages of selecting nodes near the top or bottom of the
tree. In general, the number of active nodes may rapidly
increase if the active node is always chosen high up in the
tree. On the other hand, if the node is always chosen from
down low in the tree, the number of active nodes stays
small, but it may take a long time to improve the upper
bound.
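The interplay of node selection and variable selection can be made concrete on a small example. The sketch below is an illustration, not the paper's implementation: LP-based branch-and-bound on a 0-1 knapsack, whose LP relaxation can be solved greedily, with best-bound node selection and branching on the fractional variable.

```python
import heapq

def knapsack_lp(values, weights, capacity, fixed):
    """LP relaxation of a 0-1 knapsack (weights > 0) with the variables
    in `fixed` forced to 0 or 1. Solved greedily by value/weight ratio;
    the greedy solution has at most one fractional variable.
    Returns (bound, x) or None if the fixing is already infeasible."""
    cap = capacity - sum(weights[j] for j, v in fixed.items() if v == 1)
    if cap < 0:
        return None
    x = {j: float(v) for j, v in fixed.items()}
    obj = float(sum(values[j] for j, v in fixed.items() if v == 1))
    free = [j for j in range(len(values)) if j not in fixed]
    for j in sorted(free, key=lambda j: values[j] / weights[j], reverse=True):
        take = min(1.0, cap / weights[j]) if cap > 0 else 0.0
        x[j] = take
        obj += take * values[j]
        cap -= take * weights[j]
    return obj, x

def branch_and_bound(values, weights, capacity):
    """Best-bound node selection; branch on the fractional variable."""
    best_val, best_x = 0.0, None
    root = knapsack_lp(values, weights, capacity, {})
    heap, tie = [(-root[0], 0, {}, root[1])], 1   # max-heap on the LP bound
    while heap:
        neg_bound, _, fixed, x = heapq.heappop(heap)
        if -neg_bound <= best_val + 1e-9:
            continue                               # prune by bound
        frac = [j for j, v in x.items() if 1e-9 < v < 1 - 1e-9]
        if not frac:
            best_val, best_x = -neg_bound, x       # integral: new incumbent
            continue
        for v in (0, 1):                           # down branch and up branch
            child = dict(fixed)
            child[frac[0]] = v
            res = knapsack_lp(values, weights, capacity, child)
            if res and res[0] > best_val + 1e-9:
                heapq.heappush(heap, (-res[0], tie, child, res[1]))
                tie += 1
    return best_val, best_x
```

With best-bound selection, the first integral node popped is optimal; with depth-first selection the same code would find incumbents sooner but prove optimality later.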
More complicated branching involves sets of variables.
An example of branching on a set of variables is the use of
special ordered sets of type 1 (SOS1). It consists of splitting
an ordered set into two parts, one for each branch. That is,
for a constraint such as

x1 + x2 + x3 + x4 + x5 = 1,

one branch fixes the variables in the first part of the set to 0 and the other branch fixes the variables in the second part to 0.
Finally, we return to the example mentioned in the formulation section with the constraint: no more than two variables from an ordered set can be positive, and if there are two
positive variables, then their indices must be consecutive. These
are special ordered sets of type 2 (SOS2). Here we show how
to enforce SOS2 constraints by branching.
Again, consider
x1 + x2 + x3 + x4 + x5 = 1
and suppose {1, 2, 3, 4, 5} is an SOS2 set. The LP solution x1 = 0, x2 = 0.5, x3 = 0, x4 = 0.5, and x5 = 0 does not satisfy SOS2.
Note that x2 positive implies x4 = 0 and x5 = 0 and that x4
positive implies x1 = 0 and x2 = 0. Thus, we can branch by
imposing x1 = 0 and x2 = 0 on one child node and x4 = 0 and
x5 = 0 on the other. The current LP solution is infeasible to
the problems at both child nodes. This is known as SOS2
branching. Note that x3 = 0 does not appear on either
branch, which is the difference between SOS1 and SOS2
branching.
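A minimal sketch of the SOS2 branching rule just described (hypothetical helper, 0-based indices): given the LP values of the set, it either certifies SOS2-feasibility or returns the two groups of variables to fix to 0, one per child node.

```python
def sos2_branch(x, eps=1e-6):
    """Given LP values x[0..n-1] of an SOS2 set, return two branching
    restrictions (lists of indices forced to 0), or None if x already
    satisfies SOS2 (at most two positive entries, and consecutive).
    The split index r has a positive entry on each side, so the current
    LP point is cut off on both branches while x_r is fixed on neither."""
    pos = [i for i, v in enumerate(x) if v > eps]
    if len(pos) <= 1 or (len(pos) == 2 and pos[1] == pos[0] + 1):
        return None                     # SOS2-feasible, no branching needed
    r = pos[0] + 1                      # strictly between first and last positive
    n = len(x)
    left = [i for i in range(n) if i < r]    # fixed to 0 in one child
    right = [i for i in range(n) if i > r]   # fixed to 0 in the other
    return left, right
```

On the example above, x = (0, 0.5, 0, 0.5, 0) yields the branches {x1, x2} and {x4, x5}, and x3 appears on neither, exactly as in the text.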
Many other logical relationships can be handled by
branching. For example, a variable x with domain {0} ∪ [a, b]
with a > 0 is called semi-continuous. Excluding an LP solution with 0 < x < a obviously can be done by branching.
The LP-based branch-and-bound algorithm for integer
programming was developed by Land and Doig.[69] The
following papers are representative of the research on
branching: Beale,[12] Bénichou et al.,[13] Driebeek,[37] Forrest
et al.,[40] and Tomlin.[86] A recent survey of branching techniques is presented by Linderoth and Savelsbergh.[73]
4. Primal Heuristics
As mentioned in the introduction, large trees are the result
of many branchings, and avoiding branching, insofar as is
possible, is equivalent to avoiding the case z(k) > zbest. Recall
that the variable choice rule based on pseudocosts tries to
accomplish that by choosing a variable that causes the objective function to deteriorate quickly, i.e., that results in low
values of z(k). In this section, we focus on trying to avoid branching by improving zbest, i.e., by finding good feasible solutions early in the search.
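As a flavor of what a primal heuristic does, the sketch below (an illustration for a max-knapsack, not one of the heuristics studied in this paper) rounds an LP solution down to a feasible point and then greedily repairs it; the value it returns can serve as an initial zbest.

```python
def round_down_heuristic(x_lp, values, weights, capacity):
    """Primal heuristic sketch: round the LP solution down, then greedily
    improve by adding items that still fit. Rounding down preserves
    feasibility because the data are nonnegative and the constraint is
    a <=-constraint. Returns (0-1 solution, objective value)."""
    x = [1 if v > 1 - 1e-9 else 0 for v in x_lp]   # keep (near-)integral ones
    used = sum(w for w, xi in zip(weights, x) if xi)
    # greedy improvement pass over items not yet chosen, best ratio first
    for j in sorted(range(len(values)), key=lambda j: values[j] / weights[j],
                    reverse=True):
        if not x[j] and used + weights[j] <= capacity:
            x[j] = 1
            used += weights[j]
    return x, sum(v for v, xi in zip(values, x) if xi)
```

Any feasible value found this way raises zbest immediately, which lets the bound test prune nodes that would otherwise require branching.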
algorithm for both BIPs and MBIPs is the class of lift-and-project inequalities.[5] These cuts are derived from the following basic ideas from disjunctive programming:
1. xj binary is equivalent to xj = xj².
2. The convex hull of feasible solutions to a MBIP can be
obtained by taking the convex hull with respect to one
binary variable and then iterating the process.
After selecting a binary variable xj, the original constraint
system Ax ≤ b is multiplied by xj and separately by (1 − xj).
Then xj² is replaced by xj and, for k ≠ j, xk xj is replaced by yk,
where yk satisfies yk ≤ xk, yk ≤ xj, and yk ≥ xk + xj − 1. The
cuts are obtained in the (x, y)-space and then projected back
to the x-space. Separation for these inequalities requires the
solution of an LP problem and, therefore, can be very expensive. However, given any fractional solution, a violated
inequality can always be found.
The first Type II inequalities used in the solution of MBIPs
were derived from knapsack relaxations.[31] Consider a
knapsack inequality

Σ_{j∈N} a_j x_j ≤ b,  x_j ∈ {0, 1} for j ∈ N.

A set C ⊆ N is a cover if

Σ_{j∈C} a_j > b.

Since not all variables in a cover can be equal to 1, the cover inequality

Σ_{j∈C} x_j ≤ |C| − 1   (2)

is valid; it can equivalently be written as

Σ_{j∈C} (1 − x_j) ≥ 1.

A fractional point x* violates (2) if and only if Σ_{j∈C} (1 − x*_j) < 1, so separation amounts to solving

min { Σ_{j∈N} (1 − x*_j) z_j : Σ_{j∈N} a_j z_j > b, z_j ∈ {0, 1} for j ∈ N }

and checking whether the optimal value is less than 1.
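The separation problem is itself a knapsack problem, so solving it exactly can be expensive; a common shortcut is a greedy heuristic. The sketch below is a hypothetical helper, not the paper's code: it builds a cover by adding items with the smallest 1 − x*_j first and reports the cover only if its cover inequality is violated.

```python
def separate_cover(a, b, x_star, eps=1e-9):
    """Heuristic separation for cover inequalities: greedily build a set C
    with sum(a_j for j in C) > b, preferring items with small 1 - x*_j.
    Returns C if the cover inequality sum_{j in C} x_j <= |C| - 1 is
    violated by x*, else None."""
    order = sorted(range(len(a)), key=lambda j: 1 - x_star[j])
    C, weight = [], 0.0
    for j in order:
        C.append(j)
        weight += a[j]
        if weight > b:                  # C is now a cover
            break
    else:
        return None                     # no cover exists at all
    violation = sum(x_star[j] for j in C) - (len(C) - 1)
    return C if violation > eps else None
```

An exact separator would instead solve the 0-1 knapsack stated above; the greedy version may miss violated covers but is cheap enough to call at every node.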
Cover inequalities can be strengthened by lifting. Partitioning a cover C into C1 and C2 and lifting the variables in N ∖ C up and those in C2 down yields lifted cover inequalities of the form

Σ_{j∈C1} x_j + Σ_{j∈N∖C} α_j x_j + Σ_{j∈C2} γ_j x_j ≤ |C1| − 1 + Σ_{j∈C2} γ_j.   (3)

Related inequalities are obtained from relaxations with more structure, such as the single-node flow set

Σ_{j∈N} y_j ≤ d,  y_j ≤ m_j x_j,  j ∈ N,

and the knapsack set with a continuous variable

Σ_{j∈N} a_j x_j ≤ b + s,  x_j ∈ {0, 1} for j ∈ N,  s ≥ 0.
edges from e1, e2, and e3, then we can take no more than two
edges from e7, e8, and e9.
In general, consider a subset of nodes H and an odd set of
node-disjoint edges T, each having one end in H, as shown in
Fig. 3. The figure resembles a comb, with handle H and teeth
T. Let E(H) be the set of edges with both ends in H. How
many edges can we take from E(H) ∪ T in a tour? Arguing
as above, we see that we should take all of T, but then a
simple counting argument shows that the maximum number of edges we can take from E(H) is |H| − ⌈|T|/2⌉. Hence,
we get the simple comb inequality[28]

Σ_{e∈E(H)} x_e + Σ_{e∈T} x_e ≤ |H| + ⌊|T|/2⌋.
These simple comb inequalities can be separated in polynomial time, and in some instances they define facets of the
convex hull of tours. It is interesting to note that they can be
derived as Gomory–Chvátal cuts. Moreover, they can be
generalized by also using the subtour elimination constraints.[50, 51]
The TSP also demonstrates where cuts can be used to
exclude integer solutions that are not feasible. The initial
formulation of the TSP cannot include all of the connectivity
constraints since the number of them is exponential in the
size of the TSP input. Without all of the connectivity constraints, an optimal solution to the LP relaxation can be
integral but not feasible, i.e., a set of two or more subtours
that contain all of the nodes. When this occurs, at least one
connectivity constraint must be a violated cut. Connectivity
constraints can also be used to cut off fractional solutions.
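For an integral LP solution, a violated connectivity constraint can be found by a simple graph search: if the chosen edges do not connect all nodes, the component containing any node gives the cut set U. A minimal sketch, assuming nodes 0, . . . , n−1 and an edge list:

```python
def find_subtour(n, edges):
    """Given an integral TSP relaxation solution as a list of chosen
    edges (u, v) on nodes 0..n-1, return the node set of one subtour if
    the solution is disconnected, else None. A returned set U yields the
    violated connectivity cut sum_{e in delta(U)} x_e >= 2."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:                        # depth-first search from node 0
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen if len(seen) < n else None
```

Separating connectivity constraints over fractional solutions requires a minimum-cut computation instead, but this integral case is what catches subtour solutions like the one described above.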
There is a vast literature on specialized branch-and-cut
algorithms for many combinatorial optimization problems.
A network design problem will be discussed in Section 11.
For some families of cuts, such as connectivity constraints
and combs, the separation problem can be solved exactly
and rapidly. For more complex families of cuts, especially
max Σ_{1≤i≤m} Σ_{1≤j≤n} p_ij z_ij
subject to Σ_{1≤j≤n} z_ij = 1,  i = 1, . . . , m,
Σ_{1≤i≤m} w_ij z_ij ≤ d_j,  j = 1, . . . , n,   (4)
z_ij ∈ {0, 1},  i = 1, . . . , m, j = 1, . . . , n,
where pij is the profit associated with assigning task i to
machine j, wij is the amount of the capacity of machine j used
by task i, dj is the capacity of machine j, and zij is a 0-1
variable indicating whether task i is assigned to machine j.
Alternatively, the problem can be viewed as that of partitioning the set of tasks into subsets that can be performed
on a specific machine. This yields the following (re)formulation:
max Σ_{1≤j≤n} Σ_{1≤k≤K_j} (Σ_{1≤i≤m} p_ij y_i^jk) λ_jk
subject to Σ_{1≤j≤n} Σ_{1≤k≤K_j} y_i^jk λ_jk = 1,  i = 1, . . . , m,
Σ_{1≤k≤K_j} λ_jk = 1,  j = 1, . . . , n,   (5)
λ_jk ∈ {0, 1},  j = 1, . . . , n, k = 1, . . . , K_j,

where the first m entries of a column, given by y^jk = (y_1^jk, y_2^jk, . . . , y_m^jk), satisfy the knapsack constraint

Σ_{1≤i≤m} w_ij x_i ≤ d_j,
x_i ∈ {0, 1},  i = 1, . . . , m,
and Kj denotes the number of feasible solutions to the knapsack constraint. In other words, the first m entries of a
column represent a feasible assignment of tasks to a machine.
The LP relaxation of (5) can be obtained from a Dantzig–Wolfe decomposition of the LP relaxation of (4). The LP
relaxation of (5) is tighter than the LP relaxation of (4) since
fractional solutions that are not convex combinations of 0-1
solutions to the knapsack constraints are not feasible to (5).
However, the number of columns in (5) is exponential in the
size of the input. Fortunately, it is possible to handle the
huge number of columns implicitly rather than explicitly.
The basic idea is simple. Leave most columns out of the LP
relaxation because there are too many columns to handle
efficiently; most of them will have their associated variable
equal to zero in an optimal solution anyway. Then, as in
column generation for linear programming, to check the
optimality of an LP solution, a subproblem, called the pricing
problem, is solved to try to identify columns to enter the
basis. If such columns are found, the LP is reoptimized; if
not, we are done. To solve the LP relaxation of the setpartitioning formulation of the GAP, pricing or column generation is done by solving n knapsack problems.
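For the GAP, the pricing problem for machine j is a 0-1 knapsack in the reduced profits p_ij − u_i, where u_i and v_j denote the duals of the assignment and convexity rows of the master LP. A sketch, assuming integral weights so that dynamic programming over the capacity applies (helper names are illustrative):

```python
def price_column(p_j, w_j, d_j, u, v_j):
    """Pricing sketch for machine j in the GAP master LP: solve the 0-1
    knapsack  max sum_i (p_ij - u_i) y_i  s.t.  sum_i w_ij y_i <= d_j
    by dynamic programming over capacity. Returns (reduced_cost, y);
    the column prices out (may enter the basis) when reduced_cost > 0."""
    m = len(p_j)
    # dp[c] = (best profit, chosen item set) using capacity c
    dp = [(0.0, ())] * (d_j + 1)
    for i in range(m):
        gain = p_j[i] - u[i]
        if gain <= 0:
            continue                    # item can never improve the profit
        new = dp[:]
        for c in range(w_j[i], d_j + 1):
            cand = dp[c - w_j[i]][0] + gain
            if cand > new[c][0]:
                new[c] = (cand, dp[c - w_j[i]][1] + (i,))
        dp = new
    best, chosen = max(dp, key=lambda t: t[0])
    y = [1 if i in chosen else 0 for i in range(m)]
    return best - v_j, y
```

Calling this once per machine and adding every column with positive reduced cost to the restricted master, then reoptimizing, is one iteration of the column generation loop described above.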
Obviously, the LP relaxation may not have an integral
optimal solution and then we have to branch. However,
applying a standard branch-and-bound procedure over the
existing columns is unlikely to find an optimal (or even good
or even feasible) solution to the original problem. Therefore,
it may be necessary to generate additional columns in order
to solve the LP relaxations at non-root nodes of the search
tree. Branch-and-bound algorithms in which the LP relaxations at nodes of the search tree are solved by column
generation are called branch-and-price algorithms.
There are two fundamental difficulties in applying column generation techniques to solve the linear programs
occurring at the nodes of the search tree.
Conventional integer programming branching on variables may not be effective because fixing variables can
destroy the structure of the pricing problem.
Solving these LPs and the subproblems to optimality may
not be efficient, in which case different rules will apply for
managing the search tree.
Consider the set-partitioning formulation of the GAP.
Standard branching on the λ-variables creates a problem
along a branch where a variable has been set to zero. Recall
that y^jk represents a particular solution to the jth knapsack
problem. Thus, λjk = 0 means that this solution is excluded.
However, it is possible (and quite likely) that the next time
the jth knapsack problem is solved, the optimal knapsack
solution is precisely the one represented by y^jk. In that case,
it would be necessary to find the second-best solution to the
knapsack problem. At depth l in the branch-and-bound tree,
we may need to find the lth-best solution. Fortunately, there
is a simple remedy to this difficulty. Instead of branching on
the λs in the master problem, we use a branching rule that
corresponds to branching on the original variables zij. When
zij = 1, all existing columns in the master problem that do
8. Software
Anyone who has ever attempted to use integer programming in practice knows that the road from a real-life decision problem to a satisfactory solution can be quite long and
full of complications. The process involves developing a
model (which typically involves making simplifying assumptions), generating an instance of the model (which may
involve gathering huge amounts of data), solving the instance (which involves transforming the instance data into a
machine-readable form), verifying the solution and the
model (which involves validating the appropriateness of the
simplifying assumptions), and, if the need arises, repeating
these steps. In addition, models may have to be modified
when changes occur in the problem or when user needs
become different. This iterative process represents the modeling life cycle in which a model evolves over time.
Although this paper concentrates on how to solve integer
programs, it is important to realize that this is only a single
step in the overall process. Ideally, a computer-based integer-programming modeling environment has to nurture the
entire modeling life cycle and has to support the management of all resources used in the modeling life cycle, such as
data, models, solvers, and solutions.
Two components of this ideal modeling environment
have received the most attention and are now widely available: the modeling module and the solver module. A modeling module provides an easy-to-use yet powerful language
for describing a decision problem. Most modeling modules
available today are based on an algebraic paradigm to conceptualize the model. A solver module provides an optimizer that reads an instance produced by the modeling
module and then solves it. Obviously, the modeling module
and the solver module need to be able to communicate with
each other. The simplest but least efficient way to accomplish this is through files. The MPS format is a generally
accepted format for the specification of an instance of an
integer linear program. All modeling modules are able to
produce an MPS input file and to process an MPS output
file. All solver modules are able to process an MPS input file
and to produce an MPS output file. Since reading and writing files are slow processes, other more efficient interfaces
have also been developed. There are also some integrated
systems that provide both a modeling module and a solver
module.
As we have emphasized throughout the paper, it is often
essential to use the structure of the underlying problem to
enhance the performance of general-purpose integer-programming software. Such enhancements range from speci-
Table I. Instance Characteristics

Name     Constraints  Variables  Integer Variables  0-1 Variables
EGOUT         98         141            55               all
BLEND2       274         353           264               231
FIBER        363        1298          1254               all
GT2           29         188           188                24
HARP2        112        2993          2993               all
MISC07       212         260           259               all
P2756        755        2756          2756               all
VPM1         234         378           168               all
models, i.e., the evaluation of specific branching rules, primal heuristics, and classes of valid inequalities.
We present a small computational experiment to show the
effects of enhancing a linear programming-based branchand-bound algorithm with some of the techniques discussed
in the previous sections. We have run MINTO (version 3.0)
with six different settings and a time limit of 30 minutes on
a Pentium II processor with 64 MB internal memory running
at 200 MHz on a subset of the MIPLIB 3.0 instances. The six
different settings are:
1. Plain LP-based branch-and-bound selecting the fractional
variable with fraction closest to 0.5 for branching and
selecting the unevaluated node with the best bound for
processing.
2. Add pseudocost branching to 1.
3. Add preprocessing to 2.
4. Add a primal heuristic to 3.
5. Add cut generation to 3.
6. Add a primal heuristic as well as cut generation to 3.
In Table I, we present the number of constraints, the
number of variables, the number of integer variables, and
the number of 0-1 variables for each of the instances used in
our computational experiment. In Table II, we report for
each instance its name, the value of the best solution found,
the value of the LP relaxation at the root node, the number
of evaluated nodes, and the elapsed CPU time. Table III
contains summary statistics on the performance for each of
the different settings.
The results presented clearly show that these enhancements significantly improve the performance. In particular,
the setting that includes all of the enhancements is best
(solves most instances) and plain branch-and-bound is worst
(solves fewest instances). However, for some instances, the
extra computational effort is not worthwhile and even degrades the performance.
In instance blend2, we see that cut generation increases the
solution time even though it reduces the number of evaluated nodes. Generating cuts takes time and unless it significantly reduces the size of the search tree, this may lead to an
increase in the overall solution time. On the other hand, in
instances fiber and p2756, we see that cut generation is essential. Without cut generation, these instances cannot be
solved. Cut generation is also very important in solving
Table II. Computational Results

Name     Setting  zbest         zroot          Nodes    CPU (sec.)
EGOUT    1        568.10        149.59         60779     561.97
EGOUT    2        568.10        149.59          8457      39.14
EGOUT    3        568.10        511.62           198       1.13
EGOUT    4        568.10        511.62           218       1.22
EGOUT    5        568.10        565.95             3       0.33
EGOUT    6        568.10        565.95             3       0.30
BLEND2   1          7.60          6.92         31283     928.92
BLEND2   2          7.60          6.92         17090     226.66
BLEND2   3          7.60          6.92         14787     141.39
BLEND2   4          7.60          6.92         12257     126.50
BLEND2   5          7.60          6.98         10640     381.56
BLEND2   6          7.60          6.98          8359     359.11
FIBER    1        --            156082.52      30619    1800.00
FIBER    2        405935.18     156082.52      83522    1800.00
FIBER    3        405935.18     156082.52      81568    1800.00
FIBER    4        413622.87     156082.52      74286    1800.00
FIBER    5        405935.18     384805.50        677      39.89
FIBER    6        405935.18     384805.50        379      34.75
GT2      1        --            13460.23       38528    1800.00
GT2      2        --            13460.23       39917    1800.00
GT2      3        21166.00      20146.76        1405       8.80
GT2      4        21166.00      20146.76        1912      12.48
GT2      5        21166.00      20146.76        1405       7.16
GT2      6        21166.00      20146.76        1912      12.53
HARP2    1        --            74353341.50    24735    1800.00
HARP2    2        --            74353341.50    37749    1800.00
HARP2    3        --            74325169.35    37813    1800.00
HARP2    4        73899781.00   74325169.35    33527    1800.00
HARP2    5        --            74207104.67    26644    1800.00
HARP2    6        73899727.00   74207104.67    21631    1800.00
MISC07   1        2810.00       1415.00        12278    1800.00
MISC07   2        2810.00       1415.00        43583    1244.67
MISC07   3        2810.00       1415.00        61681    1635.69
MISC07   4        2810.00       1415.00        66344    1800.00
MISC07   5        2810.00       1415.00        62198    1800.00
MISC07   6        2810.00       1415.00        43927    1300.52
P2756    1        --            2688.75        33119    1800.00
P2756    2        --            2688.75        35383    1800.00
P2756    3        --            2701.14        37707    1800.00
P2756    4        3334.00       2701.14        33807    1800.00
P2756    5        3124.00       3117.56          461     140.66
P2756    6        3124.00       3117.56          137      74.56
VPM1     1        --            15.42          42270    1800.00
VPM1     2        20.00         15.42         179402    1631.84
VPM1     3        20.00         16.43          77343     624.14
VPM1     4        20.00         16.43         146538    1339.64
VPM1     5        20.00         19.25             16       0.72
VPM1     6        20.00         19.25              1       0.84

(-- : no solution value reported.)
Table III. Summary Statistics

Setting                                1    2    3    4    5    6
Number of instances solved             2    4    5    4    6    7
Number solved with the fewest nodes    0    1    1    0    2    5
min cx
subject to x(δ_G(W)) ≥ con(W),  for all W ⊂ V, W ≠ ∅, W ≠ V,
x(δ_{G−Z}(W)) ≥ con(W ∖ Z),
x_ij ∈ {0, 1}.   (6)
and Klabjan et al.[68] Dynamic column generation approaches to crew pairing are presented in Vance et al.[87] and
Anbil et al.[1]
11. Promising Research Areas
One might be tempted to conclude that mixed-integer programming is a mature field and that further research is
unlikely to yield large payoffs. On the contrary, recent successes have fostered industry's wishes to solve larger models and to get solutions in real time. We will present one
example of this phenomenon from our experience, but many
others exist.
The new generation of crew-scheduling algorithms discussed in the previous section have typically reduced excess
pay (above pay from flying time) from around 10% to 1%.
However, these results are from planning models and could
only be achieved if the flight schedule was actually flown.
Because of adverse weather and mechanical difficulties,
some flights must be delayed or canceled and the crews
must be rescheduled. Consequently, excess pay may jump
back up to 5–8%.
The need to reschedule raises the issue of the robustness
of the original schedule determined from the planning
model. In fact, the planning model should have been formulated as a stochastic integer program that allows for consideration of many scenarios. However, a k-scenario model is at
least k times the size of a deterministic, i.e., 1-scenario,
model. The need to solve such a model justifies the need for
algorithms that are capable of solving problems an order of
magnitude larger than our current capabilities.
The crew-scheduling problem exemplifies the past,
present, and future of integer programming in logistics and
manufacturing. Through the 1980s, integer programming
could only be used for small planning models. Presently, it
is successful on larger planning models. In the future, integer programming needs to have the capability of dealing
with at least two new types of models.
1. Robust models that integrate planning and operations.
These will be massively large, multi-scenario stochastic
models for which several hours of solution time may be
acceptable.
2. Recourse models whose solutions provide the real-time
operational corrections to the planning solutions. These
models will have to be solved in seconds or minutes
depending on the application.
These observations lead us to two promising directions of
the field: stochastic integer programming and parallel integer programming.
11.1 Stochastic Integer Programming
The linear stochastic scenario programming model for a
two-stage problem is
max c0 x + Σ_{k=1}^{K} pk ck yk
subject to A0 x ≤ b0,
Ak x + Bk yk ≤ bk,  k = 1, . . . , K,
x ≥ 0, yk ≥ 0,  k = 1, . . . , K,
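For modest K, such a model can be attacked by stacking the scenarios into one deterministic equivalent whose constraint matrix is block-angular: every scenario block shares the first-stage variables x. A sketch of the assembly (illustrative helper working on dense row-lists, not a solver):

```python
def deterministic_equivalent(c0, A0, b0, scenarios):
    """Assemble the deterministic equivalent of the two-stage scenario
    model above. `scenarios` is a list of tuples (p_k, c_k, A_k, B_k, b_k).
    Returns (objective, constraint rows, right-hand sides) over the
    stacked variables (x, y_1, ..., y_K)."""
    n = len(c0)
    sizes = [len(ck) for (_, ck, _, _, _) in scenarios]
    total = n + sum(sizes)
    # objective: c0 x + sum_k p_k c_k y_k
    obj = list(c0) + [p * cj for (p, ck, _, _, _) in scenarios for cj in ck]
    rows, rhs = [], []
    for r, b in zip(A0, b0):                 # first-stage rows: A0 x <= b0
        rows.append(list(r) + [0.0] * (total - n))
        rhs.append(b)
    offset = n
    for (p, ck, Ak, Bk, bk), m in zip(scenarios, sizes):
        for ra, rb, b in zip(Ak, Bk, bk):    # A_k x + B_k y_k <= b_k
            row = [0.0] * total
            row[:n] = ra                     # coupling to first-stage x
            row[offset:offset + m] = rb      # this scenario's recourse block
            rows.append(row)
            rhs.append(b)
        offset += m
    return obj, rows, rhs
```

The size grows linearly in K, which is exactly why a k-scenario model is at least k times the size of its deterministic counterpart.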
Acknowledgments
This research was supported by National Science Foundation
Grant No. DDM-9700285. We are grateful to anonymous referees
whose comments led to significant improvements in the exposition.
References
1. R. ANBIL, J.J. FORREST, and W.R. PULLEYBLANK, 1998. Column Generation and the Airline Crew Pairing Problem, Documenta Mathematica III, 677–686.
2. R. ANBIL, R. TANGA, and E. JOHNSON, 1991. A Global Optimization Approach to Crew Scheduling, IBM Systems Journal 31, 71–78.
3. D. APPLEGATE, R.E. BIXBY, V. CHVÁTAL, and W.J. COOK, 1997. ABCC TSP, 16th International Symposium on Mathematical Programming, Lausanne, Switzerland.
4. B.C. ARNTZEN, G.B. BROWN, T.P. HARRISON, and L.L. TRAFTON, 1995. Global Supply Chain Management at Digital Equipment Corporation, Interfaces 25, 69–93.
5. E. BALAS, S. CERIA, and G. CORNUÉJOLS, 1993. A Lift-and-Project Cutting Plane Algorithm for Mixed 0-1 Programs, Mathematical Programming 58, 295–324.
6. E. BALAS, S. CERIA, G. CORNUÉJOLS, and N. NATRAJ, 1996. Gomory Cuts Revisited, Operations Research Letters 19, 1–9.
7. E. BALAS, S. CERIA, M. DAWANDE, F. MARGOT, and G. PATAKI. OCTANE: A New Heuristic for Pure 0-1 Programs, Operations Research, in press.
8. E. BALAS and R. MARTIN, 1980. Pivot and Complement: A Heuristic for 0-1 Programming, Management Science 26, 86–96.
9. C. BARNHART, E.L. JOHNSON, R. ANBIL, and L. HATAY, 1994. A Column Generation Technique for the Long-Haul Crew Assignment Problem, in Optimization in Industry II, T.A. Ciriani and R.C. Leachman, (eds.), Wiley, Chichester, 7–24.
10. C. BARNHART, E.L. JOHNSON, G.L. NEMHAUSER, M.W.P. SAVELSBERGH, and P.H. VANCE, 1998. Branch-and-Price: Column Generation for Solving Integer Programs, Operations Research 46, 316–329.
11. C. BARNHART, E.L. JOHNSON, G.L. NEMHAUSER, and P.H. VANCE, 1999. Crew Scheduling, in Handbook of Transportation Science, R.W. Hall, (ed.), Kluwer, Boston, 493–521.
12. E.M.L. BEALE, 1979. Branch and Bound Methods for Mathematical Programming Systems, in Discrete Optimization II, P.L. Hammer, E.L. Johnson, and B.H. Korte, (eds.), North-Holland, Amsterdam, 201–219.
13. M. BÉNICHOU, J.M. GAUTHIER, P. GIRODET, G. HENTGES, G.
35. … Routing, M.O. Ball, T.L. Magnanti, C. Monma, and G.L. Nemhauser, (eds.), Elsevier, Amsterdam, 35–140.
36. B.L. DIETRICH, L.F. ESCUDERO, and F. CHANCE, 1993. Efficient Reformulation for 0-1 Programs: Methods and Computational Results, Discrete Applied Mathematics 42, 147–175.
37. N.J. DRIEBEEK, 1966. An Algorithm for the Solution of Mixed Integer Programming Problems, Management Science 12, 576–587.
38. J. ECKSTEIN, 1994. Parallel Branch-and-Bound Algorithms for General Mixed Integer Programming on the CM-5, SIAM Journal on Optimization 4, 794–814.
39. L.F. ESCUDERO, S. MARTELLO, and P. TOTH, 1996. A Framework for Tightening 0-1 Programs Based on Extensions of Pure 0-1 KP and SS Problems, in Proceedings of the 4th International IPCO Conference, E. Balas and J. Clausen, (eds.), Springer, Berlin, 110–123.
40. J.J.H. FORREST, J.P.H. HIRST, and J.A. TOMLIN, 1974. Practical Solution of Large Scale Mixed Integer Programming Problems with UMPIRE, Management Science 20, 736–773.
41. R. FOURER, D.M. GAY, and B.W. KERNIGHAN, 1993. AMPL: A Modeling Language for Mathematical Programming, Scientific Press, Redwood City, CA.
42. B. GENDRON and T. CRAINIC, 1994. Parallel Branch-and-Bound Algorithms: Survey and Synthesis, Operations Research 42, 1042–1066.
43. F. GLOVER, 1996, personal communication.
44. M.X. GOEMANS, 1997. Improved Approximation Algorithms for Scheduling with Release Dates, Proceedings of the 8th ACM-SIAM Symposium on Discrete Algorithms, 591–598.
45. R. GOMORY, 1960. An Algorithm for the Mixed Integer Problem, Technical Report RM-2597, The Rand Corporation, Santa Monica, CA.
46. R.E. GOMORY, 1958. Outline of an Algorithm for Integer Solutions to Linear Programs, Bulletin of the American Mathematical Society 64, 275–278.
47. M. GRÖTSCHEL, M. JÜNGER, and G. REINELT, 1984. A Cutting Plane Algorithm for the Linear Ordering Problem, Operations Research 32, 1195–1220.
48. M. GRÖTSCHEL, C.L. MONMA, and M. STOER, 1992. Computational Results with a Cutting Plane Algorithm for Designing Communication Networks with Low-Connectivity Constraints, Operations Research 40, 309–330.
49. M. GRÖTSCHEL, C.L. MONMA, and M. STOER, 1995. Design of Survivable Networks, in Handbooks in Operations Research and Management Science, Volume 7: Network Models, M.O. Ball, T.L. Magnanti, C. Monma, and G.L. Nemhauser, (eds.), Elsevier, Amsterdam, 617–672.
50. M. GRÖTSCHEL and M.W. PADBERG, 1979. On the Symmetric Traveling Salesman Problem II: Lifting Theorems and Facets, Mathematical Programming 16, 281–302.
51. M. GRÖTSCHEL and W.R. PULLEYBLANK, 1986. Clique Tree Inequalities and the Symmetric Traveling Salesman Problem, Mathematics of Operations Research 11, 537–569.
52. Z. GU, G.L. NEMHAUSER, and M.W.P. SAVELSBERGH, 1998. Cover Inequalities for 0-1 Integer Programs: Computation, INFORMS Journal on Computing 10, 427–437.
53. K. GUE, G.L. NEMHAUSER, and M. PADRON, 1997. Production Scheduling in Almost Continuous Time, IIE Transactions 29, 341–358.
54. M. GUIGNARD and K. SPIELBERG, 1981. Logical Reduction Methods in Zero-One Programming, Operations Research 29, 49–74.
55. L.A. HALL, A.S. SCHULZ, D.B. SHMOYS, and J. WEIN, 1997. Scheduling to Minimize Average Completion Time: Off-Line and