NETWORK FLOWS

Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin

Sloan School of Management
Massachusetts Institute of Technology
Cambridge, MA 02139
NETWORK FLOWS

OVERVIEW

1. Introduction
   1.1 Applications
   1.2 Complexity Analysis
   1.3 Notation and Definitions
   1.4 Network Representations
   1.5 Search Algorithms
   1.6 Developing Polynomial-Time Algorithms

2. Basic Properties of Network Flows
   2.1 Flow Decomposition Properties and Optimality Conditions
   2.2 Cycle Free and Spanning Tree Solutions
   2.3 Networks, Linear and Integer Programming
   2.4 Network Transformations

3. Shortest Paths
   3.1 Dijkstra's Algorithm
   3.2 Dial's Implementation
   3.3 R-Heap Implementation
   3.4 Label Correcting Algorithms
   3.5 All Pairs Shortest Path Algorithm

4. Maximum Flows
   4.1 Labeling Algorithm and the Max-Flow Min-Cut Theorem
   4.2 Decreasing the Number of Augmentations
   4.3 Shortest Augmenting Path Algorithm
   4.4 Preflow-Push Algorithms
   4.5 Excess-Scaling Algorithm

5. Minimum Cost Flows
   5.1 Duality and Optimality Conditions
   5.2 Relationship to Shortest Path and Maximum Flow Problems
   5.3 Negative Cycle Algorithm
   5.4 Successive Shortest Path Algorithm
   5.5 Primal-Dual and Out-of-Kilter Algorithms
   5.6 Network Simplex Algorithm
   5.7 Right-Hand-Side Scaling Algorithm
   5.8 Cost Scaling Algorithm
   5.9 Double Scaling Algorithm
   5.10 Sensitivity Analysis
   5.11 Assignment Problem

6. Reference Notes

References
NETWORK FLOWS

Perhaps no subfield of mathematical programming is more alluring than network optimization. Highway, rail, electrical, communication and many other physical networks pervade our everyday lives. As a consequence, even non-specialists recognize the practical importance and the wide ranging applicability of networks. Moreover, because the physical operating characteristics of networks (e.g., flows on arcs and mass balance at nodes) have natural mathematical representations, practitioners and non-specialists can readily understand the mathematical descriptions of network optimization problems and the basic nature of techniques used to solve these problems. This combination of widespread applicability and ease of assimilation has undoubtedly been instrumental in the evolution of network planning models as one of the most widely used modeling techniques in all of operations research.

Network optimization is also alluring to methodologists. Networks provide a concrete setting for testing and devising new theories, and network optimization has inspired many of the most fundamental results in all of optimization. For example, price directive decomposition algorithms for both linear programming and combinatorial optimization had their origins in network optimization. So did cutting plane methods and branch and bound procedures of integer programming, and primal-dual methods of linear and combinatorial optimization. Networks have also served as a major prototype for several theoretical domains (for example, the field of matroids) and as a fertile meeting ground between operations research and computer science: many results in network optimization rely crucially on the design of efficient data structures.

The aim of this paper is to summarize many of the fundamental ideas of network optimization. In particular, we concentrate on network flow problems and highlight a number of recent theoretical and algorithmic advances. We have divided the discussion into the following broad major topics:

  Applications
  Basic Properties of Network Flows
  Shortest Paths
  Maximum Flows
  Minimum Cost Flows
  Assignment Problem

Much of our discussion focuses on the design of provably good (e.g., polynomial-time) algorithms. Among good algorithms, we have presented those that are simple and are likely to be efficient in practice. We have attempted to structure our discussion so that it not only provides a survey of the field for the specialists, but also serves as an introduction and summary to the non-specialists who have a basic working knowledge of the rudiments of optimization, particularly linear programming.

In this chapter, we limit our discussion to the topics listed above. Some important generalizations of these problems, such as (i) generalized network flows, (ii) convex cost flows, (iii) multicommodity flows, and (iv) network design, will not be covered in our survey. We do, however, briefly describe these problems in Section 6.6 and provide some important references.

As a prelude to the remainder of our discussion, in this section we present several important preliminaries. We discuss (i) different ways to measure the performance of algorithms; (ii) graph notation and ways to represent networks quantitatively; (iii) basic ideas from computer science that underlie the design of many algorithms; and (iv) two generic proof techniques that have proven to be useful in designing polynomial-time algorithms.

1.1 Applications
Networks arise in numerous application settings and in a variety of guises. In this section, we briefly describe a few prototypical applications. Our discussion is intended to illustrate how network flow problems arise in practice; a fuller treatment would take us far beyond the scope of our discussion. To illustrate the breadth of network applications, in this discussion we consider four different types of networks arising in practice:

  Physical networks (streets, railbeds, pipelines, wires)
  Route networks
  Space-time networks (scheduling networks)
  Derived networks (through problem transformations)

These categories are not exhaustive and overlap in coverage. Nevertheless, they provide a useful taxonomy for summarizing a variety of applications; that is, a convenient way to organize our discussion. We will illustrate models in each of these categories. We first introduce the basic underlying network flow model and some useful notation.
The Network Flow Model

Let G = (N, A) be a directed network with a cost c_ij, a lower bound l_ij, and a capacity u_ij associated with every arc (i, j) ∈ A. We associate with each node i ∈ N an integer number b(i) representing its supply or demand. If b(i) > 0, then node i is a supply node; if b(i) < 0, then node i is a demand node; and if b(i) = 0, then node i is a transshipment node. Let n = |N| and m = |A|.

The minimum cost network flow problem can be formulated as follows:

  Minimize  ∑_{(i,j) ∈ A} c_ij x_ij                                  (1.1a)

subject to

  ∑_{j: (i,j) ∈ A} x_ij − ∑_{j: (j,i) ∈ A} x_ji = b(i),  for all i ∈ N,   (1.1b)

  l_ij ≤ x_ij ≤ u_ij,  for all (i, j) ∈ A.                           (1.1c)

We refer to the vector x = (x_ij) as the flow in the network. Constraint (1.1b) implies that the total flow out of a node minus the total flow into that node must equal the net supply/demand of the node; we refer to these as the mass balance constraints. The flow must also satisfy the lower bound and capacity constraints (1.1c), which we refer to as the flow bound constraints. Frequently, the given lower bounds l_ij are all zero; we show later that they can be made zero without any loss of generality.

In matrix notation, we represent the minimum cost flow problem as

  minimize { cx : Nx = b and l ≤ x ≤ u },                            (1.2)

in terms of a node-arc incidence matrix N.
The matrix N has one row for each node of the network and one column for each arc. We let N_ij denote the column of N corresponding to arc (i, j), and let e_j denote the j-th unit vector, a column vector of size n whose entries are all zeros except for the j-th entry, which is a 1. Note that each flow variable x_ij appears in exactly two mass balance equations: as an outflow from node i with a +1 coefficient, and as an inflow to node j with a −1 coefficient. Therefore the column corresponding to arc (i, j) is N_ij = e_i − e_j.

The matrix N has very special structure: only 2m out of its nm entries are nonzero, all of its nonzero entries are +1 or −1, and each column has exactly one +1 and one −1. Figure 1.1 gives an example of the node-arc incidence matrix. Later, in Sections 2.2 and 2.3, we consider some of the consequences of this special structure. For now, we make two observations.

(i) Summing all the mass balance constraints eliminates all the flow variables and gives

  ∑_{i ∈ N} b(i) = 0, or equivalently, ∑_{i ∈ N: b(i) > 0} b(i) = − ∑_{i ∈ N: b(i) < 0} b(i).

Consequently, total supply must equal total demand if the mass balance constraints are to have any feasible solution.

(ii) If total supply does equal total demand, then summing all the mass balance equations gives the zero equation 0 = 0; hence any one of the equations equals minus the sum of all the others, and is therefore redundant.
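As a concrete illustration of these observations, the following minimal Python sketch (ours, not from the paper; the example network is hypothetical) builds the node-arc incidence matrix of a small network and verifies both structural properties.

    # Sketch: build a node-arc incidence matrix and check its structure.
    def incidence_matrix(n, arcs):
        """Return the n x m node-arc incidence matrix as a list of rows.
        Column for arc (i, j) has +1 in row i and -1 in row j (1-indexed)."""
        N = [[0] * len(arcs) for _ in range(n)]
        for col, (i, j) in enumerate(arcs):
            N[i - 1][col] = +1   # outflow from the tail node i
            N[j - 1][col] = -1   # inflow to the head node j
        return N

    arcs = [(1, 2), (1, 3), (2, 4), (3, 4)]
    N = incidence_matrix(4, arcs)

    # Each column has exactly one +1 and one -1 ...
    assert all(sorted(col) == [-1, 0, 0, +1] for col in zip(*N))
    # ... so the rows sum to the zero vector: summing all mass balance
    # constraints Nx = b forces sum_i b(i) = 0 (total supply = total demand).
    assert all(sum(col) == 0 for col in zip(*N))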
The following special cases of the minimum cost flow problem play a central role in the theory and applications of network flows.

[Figure 1.1. An example network and its node-arc incidence matrix.]

The Assignment Problem. The data of the assignment problem consist of two equally sized node sets N1 and N2, a collection of pairs A ⊆ N1 × N2 representing possible assignments, and a cost c_ij associated with each pair (i, j) in A. The objective is to pair each node in N1 with exactly one node in N2 in a way that minimizes the cost of the assignment. The assignment problem is a minimum cost flow problem on the network G = (N1 ∪ N2, A) with b(i) = 1 for all i ∈ N1 and b(i) = −1 for all i ∈ N2 (we set l_ij = 0 and u_ij = 1 for all (i, j) ∈ A).
Physical Networks

The familiar city street map is perhaps the prototypical physical network, and the one that most readily comes to mind when we envision a network. Many network planning problems arise in this problem context. As one illustration, consider the problem of managing a street network to decide upon such issues as speed limits, one way street assignments, or whether or not to construct a new road or bridge. In order to make these decisions intelligently, we need a descriptive model that tells us how to model traffic flows, as well as a predictive model for measuring the effect of any change in the system. We can then use these models of equilibrium to answer a variety of "what if" planning questions.

In one such model, each link of the network has an associated delay function that specifies how long it takes to traverse it. The time to do so depends upon traffic conditions; the more traffic that flows on the link, the longer is the travel time to traverse it. Now also suppose that each user of the system has a point of origin (e.g., his or her home) and a point of destination (e.g., his or her workplace in the central business district). Each of these users must choose a route through the network. Note, however, that these choices affect each other; if two users traverse the same link, they add to each other's travel time because of the added congestion on the link. Now let us make the behavioral assumption that each user wishes to travel between his or her origin and destination as quickly as possible, that is, along a shortest travel time path. This viewpoint leads to an equilibrium problem with an embedded set of network optimization problems (shortest path problems). Operations researchers have developed a set of sophisticated models for this problem setting, as well as algorithms for computing equilibria. Used in the mode of "what if" scenario analysis, these models permit analysts to answer the type of questions we posed previously, and they are used extensively in practice.

Similar models arise in many other problem contexts. For example, a network equilibrium model forms the heart of the Project Independence Evaluation System (PIES) model developed by the U.S. Department of Energy as an analysis tool for guiding public policy on energy. The basic equilibrium model of electrical networks is another example. In this setting, Ohm's Law serves as the analog of the congestion function, and Kirchhoff's Law represents the network mass balance equations.

Another type of physical network is a very large-scale integrated (VLSI) circuit. Numerous network planning problems arise in this context as well: for example, how can we lay out the smallest possible circuit, or make the layout so that wires have prescribed separations (to avoid electrical interference)?
Route Networks

Route networks, which are one level of abstraction removed from physical networks, are familiar to most students of operations research and management science. The classical transportation problem is illustrative. A shipper with supplies at its plants must ship to geographically dispersed retail centers, each with a given customer demand. Each arc connecting a supply point to a retail center has an associated cost based upon some underlying physical network. Rather than solving the problem on the physical network, we preprocess the data and construct transportation routes: an arc connecting a plant and a retail center might correspond to a complex four leg distribution channel with legs (i) from the plant to a rail station, (ii) from the rail station to a railhead elsewhere in the system, (iii) from the railhead to a distribution center, and (iv) from the distribution center to the final customer (or, in some cases, just to the distribution center). If we assign to the arc the composite distribution cost of this route, the problem becomes a classical transportation problem: minimize the cost of shipping from plants to customers. This type of model is used in numerous applications. As but one illustration, a prize winning practice paper written several years ago described an application of such a network planning system by the Cahill May Roberts Pharmaceutical Company (of Ireland) to reduce overall distribution costs while improving customer service as well.

Many related problems address this same type of physical/route network distinction. One final example is the assignment problem that we introduced previously: suppose that we wish to assign jobs to machines (machine scheduling). In this application context, we would identify the supply points with jobs, the demand points with available machines, and the cost associated with arc (i, j) as the cost of completing job i on machine j. The solution to the problem specifies the minimum cost assignment of the jobs to the machines, assuming that each machine has the capacity to perform only one job.
Space-Time Networks

Frequently in practice, we wish to schedule some production or service activity over time. In these instances it is often convenient to formulate a network flow problem on a "space-time network," with several nodes representing a particular facility at different points in time. The economic lot size problem, shown in Figure 1.2, is an important example. In this problem context, we wish to meet prescribed demands d_t for a product in each of the T time periods. In each period, we can produce at level x_t and/or we can draw upon inventory I_t carried from the previous period.

Figure 1.2 shows the network for this problem. It has T + 1 nodes: node t (t = 1, 2, ..., T) represents period t, and node 0 represents the "source" of all production. The flow on arc (0, t) prescribes the production level x_t in period t, and the flow on arc (t, t + 1) represents the inventory level I_t carried from period t to period t + 1. The mass balance equation for each period t models the basic accounting relation: incoming inventory plus production in that period must equal demand plus final inventory. The mass balance equation for node 0 indicates that all demand must be produced in some period t = 1, 2, ..., T (assuming zero beginning and ending inventory).

Whenever the production and holding costs are linear, this problem is easily solved as a shortest path problem (for each demand period, we must find the minimum cost path of production and inventory arcs from node 0 to that demand point). If we impose capacities on production or inventory, the problem becomes a minimum cost network flow problem.
One extension of this economic lot sizing problem arises frequently in practice. Assume that production x_t in any period incurs a fixed cost: that is, whenever we produce in period t (i.e., x_t > 0), no matter how much or how little, we incur a fixed cost F_t. In addition, we may incur a per unit production cost c_t in period t and a per unit inventory cost h_t for carrying inventory from period t to period t + 1. Hence, the cost on each arc of this network is either linear (for inventory carrying arcs) or linear plus a fixed cost (for production arcs); consequently, the objective function for the problem is concave. As we indicate in Section 2.2, any such concave cost network flow problem always has a special solution known as a spanning tree solution, and in this setting the spanning tree solution decomposes into disjoint directed paths: the first arc on each path is a production arc (of the form (0, t)) and each other arc is an inventory carrying arc. This observation implies the following production property: in the solution, each time we produce, we produce enough to satisfy the demand for an integral number of contiguous periods. Moreover, in no period do we both produce and carry inventory from the previous period.

The production property permits us to solve the problem very efficiently as a shortest path problem on an auxiliary network G' defined as follows. The network G' consists of nodes 1 to T + 1, and for every pair of nodes i and j with i < j, it contains an arc (i, j). The length of arc (i, j) is equal to the production and inventory cost of satisfying the demand of the periods i through j − 1 by producing in period i. Observe that for every production schedule satisfying the production property, G' contains a directed path from node 1 to node T + 1 of the same objective function value, and vice-versa. Hence we can obtain the optimum production schedule by solving a shortest path problem on G'.
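A minimal sketch of this reduction follows (ours, not from the paper; the cost data and function names are illustrative). It builds the arc lengths of the auxiliary network G' and finds the cheapest schedule by shortest path; since every arc goes from a lower to a higher numbered node, a simple dynamic program over nodes 1, ..., T+1 suffices.

    # Sketch: economic lot sizing with fixed production costs, solved as a
    # shortest path problem on the auxiliary network G'. Data are hypothetical.
    d = [3, 2, 4, 1]          # demand in periods 1..T
    F = [10, 10, 10, 10]      # fixed production cost per period
    c = [1, 1, 1, 1]          # per unit production cost
    h = [0.5, 0.5, 0.5, 0.5]  # per unit cost to carry stock into next period
    T = len(d)

    def arc_length(i, j):
        """Cost of producing in period i to cover demands of periods i..j-1
        (nodes are 1-indexed; arc (i, j) exists for i < j)."""
        total = sum(d[i - 1:j - 1])
        cost = F[i - 1] + c[i - 1] * total
        carried = total - d[i - 1]
        for t in range(i, j - 1):            # pay holding cost period by period
            cost += h[t - 1] * carried
            carried -= d[t]
        return cost

    # Shortest path from node 1 to node T+1 by dynamic programming.
    INF = float("inf")
    dist = [INF] * (T + 2)
    dist[1] = 0.0
    for j in range(2, T + 2):
        dist[j] = min(dist[i] + arc_length(i, j) for i in range(1, j))
    print("minimum total cost:", dist[T + 1])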
Many enhancements of this model are possible, for example (i) the production facility might have limited production capacity or limited storage for inventory, or (ii) the production facility might be producing several products that are linked by common production costs or by common limited production facilities (for example, we may need to change the machine setup when making different products). In most cases, such enhancements yield optimization problems with embedded network structure rather than pure network flow problems.

Another classical example of a space-time network is the scheduling of fleets, for instance the planes used by an airline. In this application setting, each node represents both a geographical location and a point in time (e.g., New York at 10 A.M.). The arcs are of two types: (i) service arcs connecting two airports, for example New York at 10 A.M. to Boston at 11 A.M.; (ii) layover arcs that permit a plane to stay at New York from 10 A.M. until 11 A.M., or to stay overnight at New York from 11 P.M. until 6 A.M. the next morning. If we identify revenues with each service leg, a flow in this network (with no exogenous supply or demand) specifies a set of flight plans, that is, circulations of planes through the network. A network flow model of this type arises in many other dynamic scheduling applications.
Derived Networks

This category is a "grab bag" of specialized applications; it illustrates that sometimes network flow problems arise in surprising ways from problems that on the surface do not appear to involve networks. The following examples illustrate this point.

Single Duty Crew Scheduling. Consider the problem of assigning crews to duty periods so as to cover a given set of hourly shifts at minimum cost, where each available duty covers a number of consecutive shifts. The problem can be stated as an integer program min { cx : Ax = b, x_j = 0 or 1 }, in which x_j indicates whether (x_j = 1) or not (x_j = 0) we select duty j, the matrix A has a row for each shift and a column for each duty, and b is a vector of all ones. Observe that each column of A has its 1's in consecutive rows, because each duty covers a single contiguous block of shifts (no split shifts or work breaks).

We show that this problem is a shortest path problem. To make this identification, we perform the following operations: in the system Ax = b, subtract each equation from the equation just below it, and then append a last row equal to minus the last equation of the original system. Because the 1's in each column of A are consecutive, each column of the resulting matrix will have exactly one +1 (in the row where its block of 1's begins) and one −1 (in the row just below where the block ends, possibly the appended row). The transformed system is therefore the mass balance system of a network in which we must ship one unit of flow from the first node to the last node; the minimum cost way to do so in the network given in Figure 1.4 is a shortest path from node 1 to node 9, which solves the crew scheduling problem.

[Figure 1.4. The network for the crew scheduling example; each arc corresponds to one available duty, and one unit must be shipped from node 1 to node 9.]

If we were to specify a number of available crews to be on duty in each period, the right hand side coefficients (supplies and demands) could be arbitrary, and the problem would be a general minimum cost network flow problem.
Critical Path Scheduling and Networks Derived from Precedence Conditions

In construction and many other project planning applications, workers need to complete a variety of tasks that are related by precedence conditions; for example, in constructing a house, a builder must pour the foundation before framing the house and complete the framing before beginning to install either electrical or plumbing fixtures.

Suppose we need to complete J jobs and that job j (j = 1, 2, ..., J) requires t_j days to complete. We are to choose the start time s_j of each job j so that we honor the precedence constraints and complete the overall project as quickly as possible. If we represent the jobs by nodes, then the precedence constraints can be represented by arcs: the network contains arc (i, j) whenever job j cannot start before job i is completed. For convenience, we add two dummy jobs with zero processing times: a "start" job 0 that must be completed before any other job can begin, and a "completion" job J + 1 that cannot be initiated until all other jobs are completed. Let G = (N, A) represent the network corresponding to this augmented project. Then we wish to solve the following optimization problem:

  minimize  s_{J+1} − s_0

subject to

  s_j ≥ s_i + t_i,  for each arc (i, j) ∈ A.

On the surface, this problem, which is a linear program in the variables s_j, seems to bear no resemblance to network optimization. Note, however, that if we move the variable s_i to the left hand side of each constraint, then each constraint contains exactly two variables, one with a +1 coefficient and one with a −1 coefficient. The linear programming dual of this problem has a familiar structure. If we associate a dual variable x_ij with each arc (i, j), then the dual is

  maximize  ∑_{(i,j) ∈ A} t_i x_ij

subject to

  ∑_{j: (i,j) ∈ A} x_ij − ∑_{j: (j,i) ∈ A} x_ji = b(i),  for all i ∈ N,

  x_ij ≥ 0,  for all (i, j) ∈ A,

with b(0) = 1, b(J+1) = −1, and b(i) = 0 otherwise. This problem requires us to ship one unit of flow from node 0 to node J + 1, with t_i as the arc length of arc (i, j); it is a longest path problem with the following interpretation. The longest (most time-consuming) sequence of jobs needed to honor the specified precedence conditions determines the project duration, and for this reason the problem has become known as the critical path problem. This model has become a principal tool in project management, particularly for managing large-scale construction projects. The critical path itself is important because it identifies those jobs that must be expedited to complete the project earlier. For example, if we could allocate additional resources to jobs on the critical path, we could consider the most efficient use of these resources to complete the overall project as quickly as possible.
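Since the precedence network is acyclic, the critical path (longest path) can be computed in one pass over a topological order. The following sketch is ours, with hypothetical job data; it computes earliest start times and the project duration.

    # Sketch: critical path computation on an acyclic precedence network.
    # Jobs and durations are hypothetical. Arc (i, j) means j waits for i.
    from collections import defaultdict, deque

    t = {0: 0, 1: 4, 2: 3, 3: 2, 4: 0}          # durations; 0 and 4 are dummies
    arcs = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]

    succ = defaultdict(list)
    indeg = defaultdict(int)
    for i, j in arcs:
        succ[i].append(j)
        indeg[j] += 1

    # Topological order via Kahn's algorithm, then a longest-path sweep.
    order, queue = [], deque(v for v in t if indeg[v] == 0)
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)

    s = {v: 0 for v in t}                        # earliest start times
    for v in order:
        for w in succ[v]:
            s[w] = max(s[w], s[v] + t[v])        # job w waits for job v
    print("project duration:", s[4])             # longest path length 0 -> 4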
Certain versions of this model can themselves be formulated as minimum cost flow problems. The open pit mining problem is another network flow problem that arises from precedence conditions. Consider the open pit mine shown in Figure 1.5. As shown in this figure, we have divided the region to be mined into blocks. The provisions of any given mining technology, and perhaps the geography of the mine, impose restrictions on how we can remove the blocks: for example, we can never remove a block until we have removed any block that lies immediately above it, and restrictions on the "angle" of mining the blocks might impose similar precedence conditions. Suppose now that each block j has an associated revenue r_j (e.g., the value of the ore in the block minus the cost of extracting it) and we wish to extract blocks to maximize overall revenue. If we let y_j be a zero-one variable indicating whether (y_j = 1) or not (y_j = 0) we extract block j, the problem will contain (i) a constraint y_j ≤ y_i (or, y_j − y_i ≤ 0) whenever we need to mine block i before block j, and (ii) an objective function maximizing the total revenue ∑_j r_j y_j, summed over all blocks j. The linear programming version of this problem, with the constraints 0 ≤ y_j ≤ 1 rather than y_j = 0 or 1, is the dual of a network flow problem: the dual has a mass balance constraint for each block and, corresponding to each upper bound constraint y_j ≤ 1, an arc from block node j to a dummy "collection node" whose demand equals the sum of the r_j's; the flow on this arc is the dual variable of the constraint y_j ≤ 1.

The critical path scheduling problem and open pit mining problem illustrate one way that network flow problems arise indirectly: whenever, as in these examples, two variables in a linear program are related by a precedence constraint, the variable corresponding to this precedence constraint in the dual linear program will have network flow structure. If the only constraints in the problem are precedence constraints, the dual linear program will be a network flow problem.
Matrix Rounding of Census Information

The data for a census of population are often summarized in two-dimensional tables, for example with rows corresponding to income brackets and columns to hours of service performed, as shown in Figure 1.6(a). The U.S. Census Bureau is required by law to protect the anonymity of the sources of its statistics: no published figure should be attributable to any particular individual. One way to achieve this protection is to round each entry in the table, as well as the row and column sums, either up or down to the nearest multiple of three, say, so that the entries in the table continue to add to the (rounded) row and column sums, and the overall sum of the entries in the new table adds to a rounded version of the overall sum in the original table. Figure 1.6(b) shows a rounded version of the data that meets this criterion.

The problem of finding such a rounding can be cast as a feasible flow problem in a network. The network contains a node i for each row in the table and a node j for each column. It contains an arc connecting node i (corresponding to row i) and node j (corresponding to column j); the flow on this arc must be the ij-th entry in the table, rounded up or down. In addition, we add a supersource s to the network connected to each row node i; the flow on this arc must be the i-th row sum, rounded up or down. Similarly, we add a supersink t with an arc from each column node j; the flow on this arc must be the j-th column sum, rounded up or down. We also add an arc connecting node t and node s; its flow must be the sum of all the entries, rounded up or down. All flows must be integral, and each flow is restricted to one of the two consecutive multiples of the rounding base that bracket the corresponding entry. The formulation of this matrix rounding problem, corresponding to the tables in Figure 1.6, is a feasible flow problem, which we can solve as a maximum flow problem.

[Figure 1.6. (a) A census table of time in service (hours; columns such as <1, 1-5, <5) against income (rows such as "less than $10,000"), with row and column totals. (b) The same data rounded to multiples of 3.]
1.2 Complexity Analysis

There are three basic approaches for measuring the performance of an algorithm: empirical analysis, worst-case analysis, and average-case analysis. Empirical analysis typically measures the computational time of an algorithm using statistical sampling on some distribution of problem instances. The major objective of empirical analysis is to estimate how algorithms behave in practice. Worst-case analysis aims to provide upper bounds on the number of steps that a given algorithm can take on any problem instance. Therefore, this type of analysis provides performance guarantees. The objective of average-case analysis is to estimate the expected number of steps taken by an algorithm.

Each of these three performance measures has its relative merits, and each is appropriate for certain purposes. Nevertheless, this chapter will focus primarily on worst-case analysis. Researchers have designed many of the algorithms that we describe specifically to improve worst-case complexity while simultaneously maintaining good empirical behavior; thus, for the algorithms we present, worst-case analysis is the primary measure of performance.
Worst-Case Analysis

For worst-case analysis, we bound the running time of network algorithms in terms of several basic problem parameters: the number of nodes n, the number of arcs m, and upper bounds C and U on the cost coefficients and the arc capacities. Whenever C (or U) appears in the complexity analysis, we assume that each cost (or capacity) is integer valued. As an example of a worst-case result within this chapter, we will prove that the number of steps for the label correcting algorithm to solve the shortest path problem is less than pnm steps for some sufficiently large constant p. To avoid the need to compute or mention the constant p, researchers typically replace the expression "the algorithm requires pnm steps for some constant p" with the equivalent expression "the running time of the algorithm is O(nm)." The O( ) notation indicates only the dominant terms of the running time; by dominant, we mean the terms that would dominate all other terms for sufficiently large values of n and m. Therefore, these time bounds are called asymptotic running times. For example, if the actual running time is 10nm² + 2¹⁰n²m, then the running time is O(nm²), assuming m ≥ n. Researchers have widely adopted the O( ) notation for several reasons:

1. Ignoring the constants greatly simplifies the analysis, whereas keeping track of them would burden the analysis required to compare algorithms. Most practical algorithms have small constants, so the O( ) bound usually reflects relative performance.

2. Estimating the constants correctly is fundamentally difficult. The least value of the constants is not determined by the algorithm alone; it is sensitive to the choice of the computer language, and even to the choice of the computer.

3. For all of the algorithms that we present, the constant terms are relatively small integers for all the terms in the complexity bound.

4. For large practical problems, the constant factors do not contribute nearly as much to the running time as do the factors involving n, m, C or U.
Counting Steps

The running time of a network algorithm is determined by counting the number of steps it performs. This counting of steps relies on a number of assumptions, most of which are appropriate for most of today's computers.

A1.1 The computer carries out instructions sequentially, with at most one instruction being executed at a time.

A1.2 Each comparison and basic arithmetic operation counts as one step.

By invoking A1.1, we are adhering to a sequential model of computations; we will not discuss parallel implementations. A1.2 implicitly assumes that the only operations to be counted are comparisons and arithmetic operations. In fact, even by counting all other computer operations, on today's computers we would obtain the same asymptotic worst-case results for the algorithms that we present. Our assumption that each operation, be it an addition or a division, takes equal time is justified in part by the fact that the O( ) notation ignores differences in the times required to perform the various operations, which are quite comparable on essentially all modern computers. On the other hand, the assumption that each arithmetic operation takes one step may lead us to underestimate the running time of operations involving very large numbers on real computers since, in practice, a computer must store large numbers in several words of its memory, and operating on them requires more than a constant number of steps. To avoid this systematic underestimation of the running time, we will typically assume that both C and U are polynomially bounded in n, i.e., C = O(n^k) and U = O(n^k), for some constant k. This assumption, known as the similarity assumption, is quite reasonable in practice. For example, if we were to restrict costs to be less than 100n³, we would allow costs to be as large as 100,000,000,000 for networks with 1000 nodes.
Polynomial-Time Algorithms

An algorithm is said to be a polynomial-time algorithm if its running time is bounded by a polynomial function of the input length. The input length of a problem is the number of bits needed to represent it. For a network problem, the input length is a low order polynomial function of n, m, log C and log U (e.g., it is O((n + m)(log n + log C + log U))). Consequently, researchers refer to a network algorithm as a polynomial-time algorithm if its running time is bounded by a polynomial function in n, m, log C and log U. For example, the running time of one of the maximum flow algorithms we consider is O(nm + n² log U).

A polynomial-time algorithm is said to be a strongly polynomial-time algorithm if its running time is bounded by a polynomial function in only n and m, and does not involve log C or log U. The maximum flow algorithm just mentioned, therefore, is not a strongly polynomial-time algorithm. The interest in strongly polynomial-time algorithms is primarily theoretical. In particular, if we invoke the similarity assumption, all polynomial-time algorithms are strongly polynomial-time because log C = O(log n) and log U = O(log n).

An algorithm is said to be an exponential-time algorithm if its running time grows as a function that cannot be polynomially bounded. Some examples of exponential time bounds are O(nC), O(2^n), O(n!) and O(n^(log n)). (Observe that nC cannot be bounded by a polynomial function of n and log C.) We say that an algorithm is a pseudopolynomial-time algorithm if its running time is polynomially bounded in n, m, C and U. The class of pseudopolynomial-time algorithms is an important subclass of exponential-time algorithms. Some instances of pseudopolynomial-time bounds are O(m + nC) and O(mC). For problems that satisfy the similarity assumption, pseudopolynomial-time algorithms become polynomial-time algorithms, but the algorithms will not be attractive if C and U are high degree polynomials in n.

There is little theoretical reason to prefer a polynomial-time algorithm in every circumstance: an algorithm that is asymptotically faster is not necessarily faster on instances of practical size. Even in extreme cases this is true; for example, a running time like n^100 is smaller than one like 2^(n/1000) only when n is very large. The justification for polynomial-time algorithms is, rather, more pragmatic. Much practical experience has shown that, as a rule, polynomial-time algorithms perform better than exponential-time algorithms, and algorithms with smaller degree polynomial bounds perform better than those with larger degree bounds.

[Figure 1.7. Approximate values of typical complexity functions.]
1.3 Notation and Definitions

We consider a directed graph G = (N, A) consisting of a set N of nodes and a set A of arcs. Let n = |N| and m = |A|. We associate with each arc (i, j) ∈ A a cost c_ij and a capacity u_ij. We assume throughout that u_ij ≥ 0 for each (i, j) ∈ A. Frequently, we distinguish two special nodes in a graph: the source s and the sink t.

An arc (i, j) has two endpoints, i and j. The arc (i, j) is incident to nodes i and j. We refer to node i as the tail and node j as the head of arc (i, j), and say that the arc (i, j) emanates from node i. The arc (i, j) is an outgoing arc of node i and an incoming arc of node j. The arc adjacency list of node i, A(i), is the set of arcs emanating from node i, i.e., A(i) = {(i, j) : (i, j) ∈ A, j ∈ N}. The degree of a node is the number of its incoming and outgoing arcs.
A directed path in G = (N, A) is a sequence of distinct nodes and arcs i₁, (i₁, i₂), i₂, (i₂, i₃), i₃, ..., (i_{r−1}, i_r), i_r satisfying the property that (i_k, i_{k+1}) ∈ A for each k = 1, ..., r−1. An undirected path is defined similarly, except that for any two consecutive nodes i_k and i_{k+1} on the path, the path contains either arc (i_k, i_{k+1}) or arc (i_{k+1}, i_k). We refer to the nodes i₂, i₃, ..., i_{r−1} as the internal nodes of the path. A directed cycle is a directed path together with the arc (i_r, i₁), and an undirected cycle is an undirected path together with the arc (i_r, i₁) or (i₁, i_r). We shall often use the terminology path to designate either a directed or an undirected path, whichever is appropriate from context; if any ambiguity could arise, we shall state directed or undirected explicitly. We shall sometimes represent a path or a cycle by its sequence of nodes i₁ − i₂ − ... − i_r when its arcs are apparent from context.

We shall say that a graph G = (N, A) is a bipartite graph if its node set N can be partitioned into two sets N₁ and N₂ so that for each arc (i, j) in A, i ∈ N₁ and j ∈ N₂.

A graph G' = (N', A') is a subgraph of G = (N, A) if N' ⊆ N and A' ⊆ A. A graph G' = (N', A') is a spanning subgraph of G = (N, A) if N' = N and A' ⊆ A.

Two nodes i and j are said to be connected if the graph contains at least one undirected path from i to j. A graph is said to be connected if all pairs of its nodes are connected; otherwise it is disconnected. In this chapter, we always assume that the graph G is connected. We refer to any set Q ⊆ A with the property that the graph G' = (N, A−Q) is disconnected, and no superset of Q has this property, as a cutset of G. A cutset partitions the graph into two sets of nodes, X and N−X. We shall alternatively represent the cutset Q as the node partition (X, N−X).

A graph is acyclic if it contains no cycle. A tree is a connected acyclic graph. A subtree of a tree T is a connected subgraph of T. A tree T is said to be a spanning tree of G if T is a spanning subgraph of G. Arcs belonging to a spanning tree T are called tree arcs, and arcs not belonging to T are called nontree arcs. A node in a tree with degree equal to one is called a leaf node; each tree with at least two nodes has at least two leaf nodes. A spanning tree contains a unique path between any two nodes. The addition of any nontree arc to a spanning tree creates exactly one cycle. Removing any tree arc creates two subtrees. Arcs whose endpoints belong to the two different subtrees of a spanning tree created by deleting a tree arc constitute a cutset; if any arc belonging to this cutset is added to the subtrees, the resulting graph is again a spanning tree.

In this chapter, we assume that logarithms are of base 2 unless we state otherwise. We represent the logarithm of any number b by log b.
1.4 Network Representations

The complexity of a network algorithm depends not only on the algorithm itself, but also upon the manner used to represent the network within a computer and the storage scheme used for maintaining and updating intermediate results. The running time of an algorithm (either worst-case or empirical) can often be improved by representing the network more cleverly and by using improved data structures. In this section, we discuss some popular ways of representing a network.

In Section 1.1, we have already described the node-arc incidence matrix representation of a network. This scheme requires nm words to store a network, of which only 2m words have nonzero values; clearly, this network representation is not space efficient. Another popular way to represent a network is the node-node adjacency matrix representation. This scheme stores an n × n matrix I with the property that the element I_ij = 1 if arc (i, j) ∈ A, and I_ij = 0 otherwise. The arc costs and capacities are also stored in n × n matrices. This representation is adequate for very dense networks, but is not attractive for storing sparse networks.

The forward star and reverse star representations are probably the most popular ways to represent networks, both sparse and dense. (These representations are also known as incidence list representations in the computer science literature.) The forward star representation numbers the arcs in a specific order: we first number the arcs emanating from node 1, then the arcs emanating from node 2, and so on; arcs emanating from the same node can be numbered arbitrarily. We then sequentially store the (tail, head) and the cost of the arcs in this order. We also maintain a pointer with each node i, denoted by point(i), that indicates the smallest-numbered arc in the arc list that emanates from node i. Hence the outgoing arcs of node i are stored at positions point(i) to (point(i+1) − 1) in the arc list. If point(i) > point(i+1) − 1, then node i has no outgoing arc. For consistency, we set point(1) = 1 and point(n+1) = m+1. Figure 1.8(b) gives the forward star representation of the network shown in Figure 1.8(a).

The forward star representation allows us to determine efficiently the set of outgoing arcs at any node. To determine, simultaneously, the set of incoming arcs at any node efficiently, we need an additional data structure known as the reverse star representation. Starting from a forward star representation, we can create a reverse star representation as follows. We examine the nodes j = 1 to n in order and sequentially store the (tail, head) and the cost of the incoming arcs of node j. We also maintain a reverse pointer with each node i, denoted by rpoint(i), which denotes the first position in these arrays that contains information about an incoming arc at node i. For consistency, we set rpoint(1) = 1 and rpoint(n+1) = m+1. As earlier, we store the incoming arcs of node i at positions rpoint(i) to (rpoint(i+1) − 1). This data structure gives us the representation shown in Figure 1.8(c).

Observe that by storing both the forward and reverse star representations, we will maintain a significant amount of duplicate information. We can avoid this duplication by storing arc numbers in the reverse star instead of the (tail, head) and cost of the arcs. For example, in Figure 1.8, arc (3, 2) has arc number 4 in the forward star representation and arc (1, 2) has arc number 1. So instead of storing the arcs themselves, we simply store these arc numbers and, whenever we need the other information, we retrieve it from the forward star representation. We store the arc numbers in an m-array trace. Figure 1.8(d) gives this compact representation.

[Figure 1.8. (a) A network example. (b) The forward star representation (arc number, point, (tail, head), cost). (c) The reverse star representation. (d) The compact representation using the trace array.]
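The following minimal Python sketch (ours, with a hypothetical four-node network) builds the forward star arrays and shows how point(i) delimits the outgoing arcs of each node.

    # Sketch: forward star representation of a directed network.
    # Nodes are 1..n; arcs are sorted by tail so each node's outgoing
    # arcs occupy a contiguous block of the arc list.
    def forward_star(n, arcs):
        """arcs: list of (tail, head, cost). Returns (point, arc_list):
        outgoing arcs of node i sit at 1-indexed positions
        point[i] .. point[i+1]-1 of arc_list."""
        arc_list = sorted(arcs)                  # groups arcs by tail node
        point = [0] * (n + 2)
        point[1] = 1
        k = 0
        for i in range(1, n + 1):
            while k < len(arc_list) and arc_list[k][0] == i:
                k += 1
            point[i + 1] = k + 1                 # one past node i's last arc
        return point, arc_list

    arcs = [(1, 2, 6), (1, 3, 4), (2, 4, 3), (3, 2, 2), (3, 4, 5)]
    point, arc_list = forward_star(4, arcs)
    for i in range(1, 5):
        out = arc_list[point[i] - 1 : point[i + 1] - 1]
        print(f"outgoing arcs of node {i}: {out}")   # node 4 has none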
1.5 Search Algorithms

Search algorithms are fundamental graph techniques; different variants of search lie at the heart of many network algorithms. In this section, we discuss two of the most commonly used search techniques: breadth-first search and depth-first search.

Search algorithms attempt to find all nodes in a network that satisfy a particular property. For purposes of illustration, let us suppose that we wish to find all the nodes in a graph G = (N, A) that are reachable through directed paths from a distinguished node s, called the source. At every point in the search procedure, all nodes in the network are in one of two states: marked or unmarked. The marked nodes are known to be reachable from the source, and the status of the unmarked nodes is yet to be determined. We call an arc (i, j) admissible if node i is marked and node j is unmarked, and inadmissible otherwise. Initially, only the source node is marked. Subsequently, by examining admissible arcs, the search algorithm will mark additional nodes. Whenever the procedure marks a new node j by examining an admissible arc (i, j), we say that node i is the predecessor of node j, i.e., pred(j) = i. The algorithm terminates when the graph contains no admissible arcs. The following algorithm summarizes the basic iterative steps.

algorithm SEARCH;
begin
  unmark all nodes in N;
  mark node s;
  LIST := {s};
  while LIST ≠ ∅ do
  begin
    select a node i in LIST;
    if node i is incident to an admissible arc (i, j) then
    begin
      mark node j;
      pred(j) := i;
      add node j to LIST;
    end
    else delete node i from LIST;
  end;
end;

When this algorithm terminates, it has marked all nodes in G that are reachable from s via directed paths, and the predecessor indices define a tree consisting of the marked nodes. We use the following data structure to identify admissible arcs; the same data structure is also used in the maximum flow and minimum cost flow algorithms discussed in later sections. We maintain with each node i the list A(i) of arcs emanating from it. Arcs in each list can be arranged arbitrarily. Each node has a current arc (i, j), which is the current candidate for being examined next; initially, the current arc of node i is the first arc in A(i). The search algorithm examines this list sequentially: whenever the current arc is inadmissible, it makes the next arc in the list the current arc, and when the algorithm reaches the end of the arc list, it declares that the node has no admissible arc.

It is easy to show that the search algorithm runs in O(m + n) = O(m) time. Each iteration of the while loop either finds an admissible arc or does not. In the former case, the algorithm marks a new node and adds it to LIST, and in the latter case it deletes a marked node from LIST. Since the algorithm marks any node at most once, it executes the while loop at most 2n times. Now consider the effort spent in identifying the admissible arcs. For each node i, the algorithm scans the arcs in A(i) at most once; hence it examines a total of ∑_{i ∈ N} |A(i)| = m arcs, taking O(m) time.

The algorithm, as described, does not specify the order for examining and adding nodes to LIST. Different rules give rise to different search techniques. If the set LIST is maintained as a queue, i.e., nodes are always selected from the front and added to the rear, then the search algorithm selects the marked nodes in first-in, first-out order. This kind of search amounts to visiting the nodes in order of increasing distance from s; therefore, this version of search is called breadth-first search. It marks nodes in nondecreasing order of their distance from s, with the distance from s to node i measured as the minimum number of arcs in a directed path from s to node i.

Another popular method is to maintain LIST as a stack, i.e., nodes are always selected from and added to the front; in this instance, the search algorithm selects the marked nodes in last-in, first-out order. This algorithm performs a deep probe, creating a path as long as possible, and backs up one node to initiate a new probe when it can mark no new nodes from the tip of the path. Hence, this version of search is called depth-first search.
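A minimal sketch of the search algorithm follows (ours, not the paper's code; the example graph is hypothetical). A deque serves as LIST: popping from the left gives breadth-first search, popping from the right gives depth-first search.

    # Sketch: generic search from a source node s; adjacency via dict of lists.
    from collections import deque

    def search(adj, s, breadth_first=True):
        """Return predecessor indices of all nodes reachable from s."""
        pred = {s: None}                      # marked nodes and their tree arcs
        LIST = deque([s])
        while LIST:
            i = LIST.popleft() if breadth_first else LIST.pop()
            for j in adj.get(i, []):          # scan the arc list A(i)
                if j not in pred:             # (i, j) is admissible
                    pred[j] = i
                    LIST.append(j)
        return pred

    adj = {1: [2, 3], 2: [4], 3: [4], 4: []}
    print(search(adj, 1, breadth_first=True))   # queue: BFS order
    print(search(adj, 1, breadth_first=False))  # stack: DFS order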
1.6 Developing Polynomial-Time Algorithms

Researchers frequently employ two important approaches to obtain polynomial-time algorithms for network flow problems: the geometric improvement (or linear convergence) approach, and the scaling approach. In this section, we briefly outline the basic ideas underlying these two approaches.

Geometric Improvement Approach

The geometric improvement approach shows that an algorithm runs in polynomial time if at every iteration it makes an improvement proportional to the difference between the objective function values of the current and optimum solutions. Let H be an upper bound on the difference in objective function values between any two feasible solutions. For most network problems, H is a function of n, m, C, and U. For instance, in the maximum flow problem H = mU, and in the minimum cost flow problem H = mCU.

Lemma 1.1. Suppose z^k is the objective function value of a minimization problem of some solution at the k-th iteration of an algorithm and z* is the minimum objective function value. Further, suppose that the algorithm guarantees that

  (z^k − z^{k+1}) ≥ α(z^k − z*)    (1.3)

(i.e., the improvement at iteration k+1 is at least α times the total possible improvement) for some constant α with 0 < α < 1. Then the algorithm terminates in O((log H)/α) iterations.

Proof. The quantity (z^k − z*) represents the total possible improvement in the objective function value after the k-th iteration. Consider a consecutive sequence of 2/α iterations starting from iteration k. If in each of these iterations the algorithm improves the objective function value by at least α(z^k − z*)/2 units, then it exhausts the total possible improvement and reaches an optimum solution within these 2/α iterations. On the other hand, if at some iteration q in this sequence the algorithm improves the objective function value by less than α(z^k − z*)/2 units, then (1.3) implies that

  α(z^k − z*)/2 > z^q − z^{q+1} ≥ α(z^q − z*),

and, therefore, the algorithm must have reduced the total possible improvement (z^k − z*) by a factor of 2 within these 2/α iterations. Since H is the maximum possible improvement and every objective function value is an integer, the algorithm must terminate within O((log H)/α) iterations.

We have stated this result for minimization versions of optimization problems; a similar result applies to maximization versions. In order to develop polynomial-time algorithms using this approach, we look for local improvement techniques that lead to large (fixed percentage) improvements in the objective function. The maximum augmenting path algorithm for the maximum flow problem and the maximum improvement algorithm for the minimum cost flow problem are two examples of this approach. (See Sections 4.2 and 5.3.)
Scaling Approach

The scaling approach was one of the first techniques used to obtain polynomial-time network flow algorithms, and it is now used extensively. In this discussion, we describe the simplest form of scaling, which we call bit-scaling. Section 5.11 applies bit-scaling to the assignment problem, and Sections 4 and 5 present refined versions of scaling for the maximum flow and minimum cost flow problems.

Using the bit-scaling technique, we solve a problem P parametrically as a sequence of problems P₁, P₂, P₃, ..., P_K: the problem P₁ approximates data to the first most significant bit, the problem P₂ approximates data to the first two most significant bits, and each successive problem is a better approximation, until P_K, which is P itself. Further, for each k = 2, ..., K, the optimum solution of problem P_{k−1} serves as the starting solution for problem P_k. The scaling technique is useful whenever reoptimization from a good starting solution is more efficient than solving the problem from scratch.

For example, consider a network flow problem whose largest arc capacity has value U. Let K = ⌈log U⌉ and suppose that we represent each arc capacity as a K bit binary number, adding leading zeros if necessary to make each capacity K bits long. Then the problem P_k would consider the capacity of each arc to be the k leading bits in its binary representation. Figure 1.10 illustrates an example of this type of scaling. The manner of defining arc capacities easily implies that the capacity of an arc in P_k is twice the capacity of the same arc in P_{k−1} plus 0 or 1.

[Figure 1.10. Example of the bit-scaling technique. (a) Network with arc capacities. (b) Network with binary expansion of arc capacities. (c) The problems P₁, P₂, and P₃.]

The following algorithm encodes a generic version of the bit-scaling technique.

algorithm BIT-SCALING;
begin
  obtain an optimum solution of P₁;
  for k := 2 to K do
  begin
    reoptimize using the optimum solution of P_{k−1} to
    obtain an optimum solution of P_k;
  end;
end;

This approach is very robust, and variants of it have led to improved algorithms for both the maximum flow and minimum cost flow problems. It works well for these applications, in part, for the following reasons. (i) The problem P₁ is generally easy to solve. (ii) The optimum solution of problem P_{k−1} is an excellent starting solution for problem P_k, since P_{k−1} and P_k are quite similar; hence the optimum solution of P_{k−1} can be easily reoptimized to obtain an optimum solution of P_k. (iii) For problems that satisfy the similarity assumption, the number of problems solved is O(log n). Thus for this approach to work, reoptimization needs to be only a little more efficient (i.e., by a factor of log n) than optimization.

Consider, for example, the maximum flow problem. Let v_k denote the maximum flow value for problem P_k and let x_k denote an optimum flow for problem P_k. In the problem P_k, the capacity of an arc is twice its capacity in P_{k−1} plus 0 or 1. If we multiply the optimum flow x_{k−1} for P_{k−1} by 2, we obtain a feasible flow for P_k. Moreover, v_k − 2v_{k−1} ≤ m, because multiplying the flow x_{k−1} by 2 takes care of the doubling of the capacities, and the additional 1's can increase the maximum flow value by at most m units (if we add 1 to the capacity of any arc, then we can increase the maximum flow from source to sink by at most 1). In general, it is easier to reoptimize such a maximum flow problem than to solve it from scratch. For example, the classical labeling algorithm discussed in Section 4.1 would perform the reoptimization in at most m augmentations, taking O(m²) time; hence the scaling version of the labeling algorithm runs in O(m² log U) time, whereas the non-scaling version runs in O(nmU) time. The former bound is polynomial and the latter bound is only pseudopolynomial.
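To make the scaled sequence concrete, the following small sketch (ours; the capacities are hypothetical) constructs the capacities of P₁, ..., P_K from the binary expansion of the original capacities and checks the doubling relation used above.

    # Sketch: building the bit-scaled capacity sequence P_1 .. P_K.
    u = {('s', 'a'): 5, ('a', 't'): 7, ('s', 't'): 2}   # original capacities
    U = max(u.values())
    K = U.bit_length()                                  # number of bits needed

    # Capacity of an arc in P_k = k most significant bits of its expansion.
    P = [{arc: cap >> (K - k) for arc, cap in u.items()} for k in range(1, K + 1)]

    for k in range(1, K):
        for arc in u:
            # In P_{k+1} each capacity is twice its P_k value plus 0 or 1,
            # which is why doubling the old optimum flow stays feasible.
            assert P[k][arc] in (2 * P[k - 1][arc], 2 * P[k - 1][arc] + 1)
    assert P[-1] == u        # the last problem is the original one
    print(P)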
2. BASIC PROPERTIES OF NETWORK FLOWS

As a prelude to the rest of this chapter, in this section we describe several basic properties of network flows. We begin by showing how network flow problems can be modeled in either of two equivalent ways: as flows on arcs (as in our formulation in Section 1.1) or as flows on paths and cycles. Then we partially characterize optimal solutions to network flow problems and demonstrate that these problems always have certain special types of optimal solutions (so-called cycle free and spanning tree solutions); consequently, in designing algorithms, we need only consider these special types of solutions. We next establish several connections between network flows and linear and integer programming. Finally, we discuss a few useful transformations of network flow problems.

2.1 Flow Decomposition Properties and Optimality Conditions

It is natural to view network flow problems in either of two ways: as flows on arcs or as flows on paths and cycles. In the context of developing underlying theory, models, or algorithms, each view has its own advantages; therefore, as the first step in our discussion, we will find it worthwhile to develop several connections between these alternate formulations.

In the arc formulation (1.1), the basic decision variables are flows x_ij on arcs (i, j). The path and cycle formulation starts with an enumeration of the directed paths P and directed cycles Q of the network. Its decision variables are h(p), the flow on path p, and f(q), the flow on cycle q, defined for every p in P and every q in Q.

Notice that every set of path and cycle flows uniquely determines arc flows in a natural way: the flow x_ij on arc (i, j) equals the sum of the flows h(p) and f(q) for all paths p and cycles q that contain this arc. We formalize this observation with some new notation: δ_ij(p) equals 1 if arc (i, j) is contained in path p, and 0 otherwise; similarly, δ_ij(q) equals 1 if arc (i, j) is contained in cycle q, and 0 otherwise. Then

  x_ij = ∑_{p ∈ P} δ_ij(p) h(p) + ∑_{q ∈ Q} δ_ij(q) f(q).
If the flow vector x is expressed in this way, we say that the flow is represented as path flows and cycle flows, and that the path flow vector h and cycle flow vector f constitute a path and cycle flow representation of the flow.

Can we reverse this process? That is, can we decompose any arc flow into (i.e., represent it as) path and cycle flows? The following result provides an affirmative answer to this question.

Theorem 2.1: Flow Decomposition Property (Directed Case). Every path and cycle flow has a unique representation as nonnegative arc flows. Conversely, every nonnegative arc flow x can be represented as a directed path and cycle flow (though not necessarily uniquely) with the following two properties:

C2.1. Every directed path with positive flow connects a supply node of x to a demand node of x.

C2.2. At most n+m paths and cycles have nonzero flow; out of these, at most m cycles have nonzero flow.
Proof. In the light of our previous observations, we need to establish only the converse assertions. We show that any feasible arc flow x can be decomposed into path and cycle flows. Suppose i₀ is a supply node. Then some arc (i₀, i₁) carries a positive flow. If i₁ is a demand node, then we stop; otherwise the mass balance constraint of node i₁ implies that some other arc (i₁, i₂) carries positive flow. We repeat this argument until either we encounter a demand node or we revisit a previously examined node. Note that one of these cases will occur within n steps. In the former case we obtain a directed path p from the supply node i₀ to some demand node i_k, consisting solely of arcs with positive flow; in the latter case we obtain a directed cycle q. If we obtain a path, we let h(p) = min [b(i₀), −b(i_k), min {x_ij : (i, j) ∈ p}], and redefine b(i₀) := b(i₀) − h(p), b(i_k) := b(i_k) + h(p), and x_ij := x_ij − h(p) for each arc (i, j) in p. If we obtain a cycle q, we let f(q) = min {x_ij : (i, j) ∈ q} and redefine x_ij := x_ij − f(q) for each arc (i, j) in q.

We repeat this process with the redefined problem until the network contains no supply node (and hence no demand node). Then we select any node with at least one outgoing arc with positive flow as the starting node, and repeat the procedure, which in this case must find a cycle. We terminate when x = 0 for the redefined problem. Clearly, the original flow is the sum of the flows on the paths and cycles identified by the procedure. Now observe that each time we identify a path, we reduce the supply/demand of some node or the flow on some arc to zero, and each time we identify a cycle, we reduce the flow on some arc to zero. Consequently, the path and cycle representation of the given flow x contains at most (n + m) total paths and cycles, of which there are at most m cycles.
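The proof is constructive. The sketch below (ours; the flow is hypothetical and assumed feasible) decomposes a nonnegative arc flow into path and cycle flows exactly as in the proof.

    # Sketch: flow decomposition following the proof of Theorem 2.1.
    # x maps arcs to nonnegative flows; b maps nodes to supplies (+)/demands (-).
    # Assumes x satisfies the mass balance constraints for b.
    def decompose(x, b):
        x, b = dict(x), dict(b)
        paths, cycles = [], []

        def positive_arc_from(i):
            return next(((i, j) for (t, j) in x if t == i and x[(i, j)] > 0), None)

        while True:
            start = next((i for i in b if b[i] > 0), None)      # a supply node
            if start is None:                                   # else cycles only
                arc = next((a for a in x if x[a] > 0), None)
                if arc is None:
                    return paths, cycles
                start = arc[0]
            trail, pos = [start], {start: 0}
            while True:
                arc = positive_arc_from(trail[-1])  # exists by mass balance
                j = arc[1]
                if b[j] < 0:                        # reached a demand node: path
                    trail.append(j)
                    arcs = list(zip(trail, trail[1:]))
                    h = min([b[trail[0]], -b[j]] + [x[a] for a in arcs])
                    b[trail[0]] -= h; b[j] += h
                    for a in arcs: x[a] -= h
                    paths.append((trail, h))
                    break
                if j in pos:                        # revisited a node: cycle
                    cyc = trail[pos[j]:] + [j]
                    arcs = list(zip(cyc, cyc[1:]))
                    f = min(x[a] for a in arcs)
                    for a in arcs: x[a] -= f
                    cycles.append((cyc, f))
                    break
                pos[j] = len(trail)
                trail.append(j)

    x = {(1, 2): 4, (2, 4): 3, (2, 3): 1, (3, 1): 1}
    b = {1: 3, 2: 0, 3: 0, 4: -3}
    print(decompose(x, b))   # one path 1-2-4 with flow 3, one cycle 1-2-3-1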
It is possible to state the decomposition property in a somewhat more general form that permits arc flows to be negative. In this case, even though the underlying network is directed, the paths and cycles can be undirected and can contain arcs with negative flows. Each undirected path p, which has an orientation from its initial to its final node, has forward arcs and backward arcs, defined as arcs along and opposite to the path's orientation. A path flow on p is a flow with value h(p) on each forward arc and −h(p) on each backward arc; we define a cycle flow in the same way. In this more general setting, our representation using the notation δ_ij(p) and δ_ij(q) is still valid with the following provision: we now define δ_ij(p) and δ_ij(q) to be −1 if arc (i, j) is a backward arc of the path or cycle.

Theorem 2.2: Flow Decomposition Property (Undirected Case). Every path and cycle flow has a unique representation as arc flows. Conversely, every arc flow x can be represented as an (undirected) path and cycle flow (though not necessarily uniquely) with the following three properties:

C2.3. Every path with positive flow connects a source node of x to a sink node of x.

C2.4. For every path and cycle, any arc with positive flow occurs as a forward arc and any arc with negative flow occurs as a backward arc.

C2.5. At most n+m paths and cycles have nonzero flow; out of these, at most m cycles have nonzero flow.

Proof. This proof is similar to that of Theorem 2.1. The major modification is that we extend the path at some node i_{k−1} by adding either an arc (i_{k−1}, i_k) with positive flow or an arc (i_k, i_{k−1}) with negative flow.

The flow decomposition property has a number of important consequences. As one example, it enables us to compare any two solutions of a network flow problem in a particularly convenient way, and to show how we can build one solution from another by a sequence of simple operations.
of simple operations.
We need
flow
f(q)
x.
cycle q with
>
is
called
an augmenting
5jj(q) f(q)
cycle
if
<
Xjj
<
Ujj,
(i, j)
35
if
some
positive
amount
of flow
(namely
cycle
f(q)) is
q.
We
q as
c(q)
V
(i, j)
Cj; 5jj(q).
The
cost of an
A
if
we augment
change
Suppose
< X < u and
that x
to a
i.e.,
Nx
b,
Ny
b,
0<y<u.
-
Then the
difference vector z = y
satisfies the
homogeneous equations Nz = Ny
Nx
0.
i.e.,
we
can find
(i, j)
at
most
<
f(qj.)
of A,
zjj
6ij(qi) f(qi)
5jj(q2) f(q2)
...
SjjCqr) fCq^.
Since y = x +
z,
for
any arc
(i, j)
we have
+
6ij(q2)
<
yjj
Xjj
5jj(q^) fCq^)
f(q2)
...
5jj(qr) f(qr)
<
Ujj.
Now
q-j,
(i, j)
is
either a
q^
that contains
it
or a
qm
that contains
it.
same
<
sign;
moreover,
(i, j)
<
qj^.
yjj
<
Consequently,
<
Xj;
6j:(qj(.) f(qj^^)
Uj; for
each arc
That
we add any
(i, j).
of
qj^ to x,
,
Hence,
each cycle q^
that
q2
...
q,. is
to the
flow
x.
Further, note
(i, j)
(i, j)
(i, j)
(i, j)
(i, j)
k=l
(i,j)A
k=l
36
We have thus established the following important result.

Theorem 2.3: Augmenting Cycle Property. Let x and y be any two feasible solutions of a network flow problem. Then y equals x plus the flow on at most m augmenting cycles with respect to x. Further, the cost of y equals the cost of x plus the cost of flow on these augmenting cycles.

The augmenting cycle property permits us to formulate optimality conditions characterizing the optimum solution of the minimum cost flow problem. Suppose that x is any feasible solution, that x* is a minimum cost flow, and that x ≠ x*. The augmenting cycle property implies that the difference vector x* − x can be decomposed into at most m augmenting cycles with respect to x, and the sum of the costs of the flows on these cycles equals cx* − cx. If cx* < cx, then one of these cycles must have a negative cost. Further, if every augmenting cycle in the decomposition of x* − x has a nonnegative cost, then cx* − cx ≥ 0; since x* is a minimum cost flow, cx* = cx, and x is also an optimum flow. We have thus obtained the following result.

Theorem 2.4: Optimality Conditions. A feasible flow x is an optimum flow if and only if it admits no negative cost augmenting cycle.

2.2 Cycle Free and Spanning Tree Solutions

We start by assuming that x is a feasible solution of the network flow problem

  minimize { cx : Nx = b and 0 ≤ x ≤ u },

with l = 0. Much of the underlying theory of network flows stems from a simple observation concerning the example in Figure 2.1.
[Figure 2.1. Improving flow around a cycle. The arc labels give flow and cost (e.g., 3, $4 and 4, $3); sending θ units around the cycle changes the flows to 3 − θ, 2 + θ, 4 + θ, and so on.]

Let us assume for the time being that all arcs are uncapacitated. The network in Figure 2.1 contains flow around an undirected cycle. Note that adding a given amount of flow θ to all the arcs pointing in one orientation around the cycle and subtracting this flow from all arcs pointing in the opposite orientation preserves the mass balance at each node. Also, note that the per unit incremental cost for this flow change equals the sum of the costs of the arcs whose flow increases minus the sum of the costs of the arcs whose flow decreases; in the example, the per unit change in cost is Δ = $3 − $4 = −$1. Let us refer to this incremental cost Δ as the cycle cost, and say that the cycle is a negative, positive or zero cost cycle depending upon the sign of Δ.

Consequently, to minimize cost in our example, we set θ as large as possible while preserving nonnegativity of all arc flows, i.e., we require 3 − θ ≥ 0, or θ ≤ 3; that is, we set θ = 3. Note that in the new solution (with θ = 3), at least one arc in the cycle carries zero flow. Similarly, if the cycle cost were positive (i.e., if we were to reverse the direction of the flow change, so that Δ = +1 instead of −1), then we would decrease θ as much as possible (i.e., we require 2 + θ ≥ 0 and 4 + θ ≥ 0, or θ ≥ −2), and again at θ = −2 some arc in the cycle has flow value zero.

We can restate this observation in another way: to preserve nonnegativity of all arc flows, we must select θ in the interval −2 ≤ θ ≤ 3. Since the objective function depends linearly on θ, we optimize it by selecting θ = 3 or θ = −2, at which point at least one arc in the cycle has a flow value of zero.

We can extend this observation in several ways:

(i) If the per unit cycle cost Δ = 0, we are indifferent among all solutions with −2 ≤ θ ≤ 3 and therefore can again choose a solution as good as the original one, but with the flow of at least one arc in the cycle at value zero.

(ii) If we impose upper bounds on the flow, e.g., such as 6 units on all arcs, then the range of flow changes that preserves feasibility (i.e., the mass balances and the lower and upper bounds on flows) is again an interval, in this case −2 ≤ θ ≤ 1, and we can find a solution as good as the original one by choosing θ = −2 or θ = 1. At these values of θ, the solution is cycle free: for some arc on the cycle, either the flow is zero (at its lower bound) or the flow is at its upper bound.

Some additional notation will be helpful in encapsulating our observations up to this point. Let us say that an arc (i, j) is a free arc with respect to a given feasible flow x if x_ij lies strictly between the lower and upper bounds imposed upon it, and that arc (i, j) is restricted if its flow x_ij equals either its lower or upper bound. In this terminology, a solution x is cycle free if the network contains no cycle made up entirely of free arcs. Given any initial flow, we can apply our previous argument repeatedly, one cycle at a time, and establish the following fundamental result:

Theorem 2.5: Cycle Free Property. If the objective function of the network optimization problem minimize { cx : Nx = b, 0 ≤ x ≤ u } is bounded from below on the feasible region and the problem has a feasible solution, then at least one cycle free solution solves the problem.

Note that the boundedness condition is necessary to rule out situations in which the flow change variable θ in our prior argument can be made arbitrarily large (negative) in a positive cost cycle, or arbitrarily large in a negative cost cycle; for example, this condition rules out any negative cost directed cycle with no upper bounds on its arc flows.
It is useful to interpret the cycle free property in another way. Suppose that the network is connected (i.e., there is an undirected path connecting every pair of nodes). Then, either a given cycle free solution x contains a free arc that is incident to each node in the network, or we can add to the free arcs some restricted arcs so that the resulting set S of arcs has the following three properties:

(i) S contains all the free arcs in the current solution,

(ii) S contains no undirected cycles, and

(iii) No superset of S satisfies properties (i) and (ii).

We will refer to any set S of arcs satisfying (i) through (iii) as a spanning tree of the network, and to any feasible solution x together with a spanning tree S that contains all free arcs as a spanning tree solution. (At times we will also refer to a given cycle free solution x itself as a spanning tree solution, with the understanding that restricted arcs may be needed to form the spanning tree S.)

Figure 2.2 illustrates a spanning tree corresponding to a cycle free solution. Note that it may be possible (and often is) to complete the set of free arcs into a spanning tree in several ways (e.g., replace arc (2, 4) with arc (3, 5) in Figure 2.2(c)); therefore, a given cycle free solution can correspond to several spanning trees S. We will say that a spanning tree solution x is nondegenerate if the set of free arcs forms a spanning tree; in this case, the spanning tree S corresponding to the flow x is unique. If the free arcs do not span (i.e., are not incident to) all the nodes, then any spanning tree corresponding to this solution will contain at least one arc whose flow equals the arc's lower or upper bound; in this case, we will say that the spanning tree is degenerate.

[Figure 2.2. A spanning tree solution. (a) An example network with arc data (x_ij, u_ij). (b) The set of free arcs. (c) A spanning tree solution.]
When restated in the terminology of spanning trees, the cycle free property becomes another fundamental result of network flow theory.

Theorem 2.6: Spanning Tree Property. If the objective function of the network optimization problem minimize { cx : Nx = b, 0 ≤ x ≤ u } is bounded from below on the feasible region and the problem has a feasible solution, then at least one spanning tree solution solves the problem.

We might note that the cycle free and spanning tree properties are valid for concave cost versions of the flow problem as well, i.e., those in which the objective function is a concave function of the flow vector x. This extension is valid because if the incremental cost of a cycle is negative at some point, then (by concavity) it remains negative as we augment a positive amount of flow around the cycle; hence, we can increase flow around a negative cost cycle until at least one arc reaches its lower or upper bound.

2.3 Networks, Linear and Integer Programming

The cycle free property and spanning tree property have many other important consequences. In particular, these two properties imply that network flow theory lies at the cusp between two large and important subfields of optimization: linear programming and integer programming.
Triangularity Property

Before establishing our first results relating network flows to linear and integer programming, we make a few observations. Note that any spanning tree S has at least one (actually at least two) leaf nodes, that is, a node that is incident to exactly one arc in the spanning tree. Consequently, if we rearrange the rows and columns of the node-arc incidence matrix of S so that a leaf node corresponds to row 1 and its incident arc to column 1, then row 1 has only a single nonzero entry, a +1 or a −1, which lies on the diagonal of the matrix. If we now remove this leaf node and its incident arc from S, the resulting network is a spanning tree on the remaining nodes. Consequently, by rearranging all but the first row and column of the matrix for the smaller spanning tree, we can now assume that row 2 has a +1 or −1 element on the diagonal and zeros to the left of the diagonal. Continuing in this way permits us to rearrange the node-arc incidence matrix of the spanning tree so that its first n−1 rows form a lower triangular matrix. Figure 2.3 shows the lower triangular form L of the node-arc incidence matrix of a spanning tree on 5 nodes.

[Figure 2.3. The lower triangular form L corresponding to a spanning tree on 5 nodes.]
Now consider a spanning tree solution x, and suppose that the vectors b, l and u have all integer components. The flow on each nontree arc equals the arc's lower or upper bound, and hence is integral. Moving these known flows to the right hand side, the flows x¹ on the tree arcs satisfy a system

  M x¹ = b',   (2.1)

in which M is the lower triangular matrix described above (with the redundant row removed), each diagonal component of M equals +1 or −1, and b' is an integer vector. Since the first diagonal element equals +1 or −1, the first equation in (2.1) implies that x¹₁ is integral; now if we move x¹₁ to the right of the equality in the remaining equations, the right hand side remains integral, and the second equation shows that x¹₂ is integral. Continuing this forward substitution argument shows that every component of x¹ is integral.

This argument shows that for problems with integral data, every spanning tree solution is integral. Since the spanning tree property guarantees that some spanning tree solution is optimal, we have established the following fundamental result:

Theorem 2.8: Integrality Property. If the objective value of the network optimization problem minimize { cx : Nx = b, 0 ≤ x ≤ u } is bounded from below on the feasible region, the problem has a feasible solution, and the vectors b, l, and u are integer, then the problem has at least one integer optimum solution.
Our observation at the end of Section 2.2 shows that this integrality property is also valid in the more general situation in which the objective function is concave, rather than linear.

In the special case in which the objective function cx is linear, the network flow problem is a linear program, and linear programs are distinguished as perhaps the most important large class of problems with this property. Relating cycle free solutions to the extreme points of the feasible region provides the link. Recall that a point x of a polyhedron is an extreme point if x cannot be expressed as a weighted combination x = αy + (1−α)z of two distinct feasible points y and z for some weight 0 < α < 1. Since, as we have seen, cycle free solutions are the special solutions produced by optimizing a linear objective, we might expect them to be related to extreme points, and indeed they are, as shown in the next result.

Theorem 2.9: Extreme Point Property. For network flow problems, every cycle free solution is an extreme point and, conversely, every extreme point is a cycle free solution. Consequently, if the objective value of the network optimization problem minimize { cx : Nx = b, 0 ≤ x ≤ u } is bounded from below on the feasible region and the problem has a feasible solution, then the problem has an extreme point solution.
The first half of this result is easy to establish. If a feasible solution x is not cycle free, then it contains a cycle consisting solely of free arcs and, as in our discussion of Figure 2.1, we can perturb the flow around this cycle by a small amount θ in either direction while retaining feasibility; letting y and z be the two solutions so obtained, we can express x as x = αy + (1−α)z with 0 < α < 1, so x is not an extreme point. Conversely, suppose that x is not an extreme point, so that x = αy + (1−α)z for two distinct feasible solutions y and z and some 0 < α < 1. Let z' = y − z denote their (nonzero) difference. For every component with z'_ij ≠ 0, the value x_ij must lie strictly between its bounds, l_ij < x_ij < u_ij, so every arc with a nonzero component of z' is a free arc. Moreover, Nz' = Ny − Nz = 0, so by flow decomposition the arcs with nonzero components of z' contain a cycle; consequently x contains a cycle of free arcs and is not cycle free.

In linear programming, extreme points are usually represented algebraically as basic solutions; for these special solutions, the columns of the constraint matrix corresponding to variables strictly between their lower and upper bounds are linearly independent, and we can extend any such set of columns to a maximal linearly independent set (a basis). The following result connects spanning tree solutions with basic solutions.

Theorem 2.10: Basis Property. Every spanning tree solution to a network flow problem is a basic solution and, conversely, every basic solution is a spanning tree solution.
Let us now make one final connection between networks and linear and integer programming, namely between bases and the integrality property. Consider a linear program with constraints Ax = b, and suppose that (x¹, x²) is a basic solution with a compatible partitioning A = (B, M) of the constraint matrix; we have eliminated any redundant row so that B is a nonsingular matrix. Then

  B x¹ = b − M x², or x¹ = B⁻¹(b − M x²).

By Cramer's rule, it is possible to express each component of x¹ as sums and multiples of components of b' = b − M x², divided by the determinant of B. Therefore, if the determinant of B equals +1 or −1, then x¹ is an integer vector whenever x² and b are composed of all integers. In particular, if the partitioning corresponds to a basic feasible solution and b, l and u are all integers, then x² is an integer (each of its components equals the lower or upper bound of its arc) and consequently x¹ is an integer.

Let us call a square matrix unimodular if its determinant equals +1 or −1, and call a matrix totally unimodular if each of its square submatrices is unimodular or has determinant 0. How are these notions related to network flows and the integrality property? Since bases of the minimum cost network flow problem correspond to spanning trees, the triangularity property shows that the determinant of any basis (excluding the redundant row) equals the product of the diagonal elements in the triangular representation of the basis, and therefore equals +1 or −1. Consequently, the constraint matrix of the minimum cost flow problem is unimodular; in fact, it is totally unimodular. For any square submatrix, either it is singular, in which case it has determinant 0, or it is nonsingular; but then it must correspond to a spanning tree structure on the nodes it covers, and by the triangularity argument its determinant must be equal to +1 or −1. (This observation provides an alternate proof of the integrality property.)

Theorem 2.11: Total Unimodularity Property. The constraint matrix of a minimum cost network flow problem is totally unimodular.
2.4 Network Transformations

Frequently, analysts use network transformations to simplify a network problem, to show equivalences of different network problems, or to put a network problem into a standard form required by a computer code. In this subsection, we describe some of these important transformations.

T1. (Removing Nonzero Lower Bounds). If an arc (i, j) has a positive lower bound l_ij, then we can replace x_ij by x'_ij + l_ij in the problem formulation. As measured by the new variable x'_ij, the flow on arc (i, j) will have a lower bound of 0 and a capacity of u_ij − l_ij. This transformation has a simple network interpretation: we begin by sending l_ij units of flow on the arc, which decreases b(i) by l_ij and increases b(j) by l_ij, and then we measure incremental flow above l_ij.

[Figure 2.4. Transforming a positive lower bound to zero: the supplies become b(i) − l_ij and b(j) + l_ij, and the arc data become (c_ij, u_ij − l_ij).]

T2. (Removing Capacities). If an arc (i, j) has a positive capacity u_ij, then we can remove the capacity, making the arc uncapacitated, by the following construction. The capacity constraint can be written as x_ij + s_ij = u_ij if we introduce a slack variable s_ij ≥ 0. Multiplying both sides by −1, we obtain

  −x_ij − s_ij = −u_ij.   (2.2)

This transformation is tantamount to turning the slack variable into an additional node k with (2.2) as its mass balance constraint. Observe that the variable x_ij now appears in three mass balance constraints and s_ij in only one. By subtracting (2.2) from the mass balance constraint of node j, we assure that each of x_ij and s_ij appears in exactly two constraints, once with a positive sign and once with a negative sign. These algebraic manipulations correspond to the following network transformation: replace the arc (i, j), with data (c_ij, u_ij), by an arc (i, k) with data (c_ij, ∞) and an arc (j, k) with data (0, ∞); the new node k has b(k) = −u_ij, and node j's supply/demand becomes b(j) + u_ij.

[Figure 2.5. Removing arc capacities: the arc (i, j) carrying x_ij becomes arcs (i, k) carrying x_ik and (j, k) carrying s_ij.]

In the network context, this transformation implies the following. If x_ij is a flow on arc (i, j) in the original network, the corresponding flow in the transformed network is x_ik = x_ij and x_jk = u_ij − x_ij; both flows have the same cost. Likewise, a flow x_ik, x_jk in the transformed network yields a flow x_ij = x_ik of the same cost in the original network. Further, since x_ik + x_jk = u_ij and x_ik and x_jk are both nonnegative, x_ij = x_ik ≤ u_ij. Consequently, the transformation is valid.

T3. (Arc Reversal). Let u_ij represent the capacity of arc (i, j), or an upper bound on the arc's flow if the arc is uncapacitated. This transformation is a change in variable: replace x_ij by u_ij − x_ji in the problem formulation. Doing so replaces the arc (i, j), with its associated cost c_ij, by the arc (j, i) with cost −c_ij, and changes the supplies to b(i) − u_ij and b(j) + u_ij. The transformation is typically used to remove arcs with negative costs: conceptually, we send u_ij units of flow on the arc and then "remove" the arc, measuring instead the flow we cancel on the reversed arc (j, i).

[Figure 2.6. An example of arc reversal.]

T4. (Node Splitting). This transformation splits each node i into two nodes i and i'. Each original arc (i, j) emanating from node i is replaced by an arc (i', j) of the same cost and capacity, and each arc (k, i) entering node i keeps its head at node i; we also add an arc (i, i') for each node i, which carries the entire flow passing through node i.

[Figure 2.7. The node splitting transformation: (a) the original network; (b) the transformed network.]

We shall see the usefulness of this transformation in Section 5.11 when we use it to reduce a shortest path problem with arbitrary arc lengths to an assignment problem. This transformation is also used to model node activities and node data in the standard "arc flow" form of the network flow problem: we simply associate the cost or capacity for the throughput of node i with the new throughput arc (i, i').
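As a small illustration, here is a sketch (ours; the instance layout and function name are hypothetical) of transformation T1, removing nonzero lower bounds from a minimum cost flow instance.

    # Sketch: transformation T1 -- remove nonzero lower bounds.
    # An instance is (b, arcs) with arcs mapping (i, j) -> (cost, lower, upper).
    def remove_lower_bounds(b, arcs):
        """Return an equivalent instance in which every lower bound is zero."""
        b = dict(b)
        new_arcs = {}
        for (i, j), (c, low, up) in arcs.items():
            b[i] -= low               # send low units along (i, j) up front ...
            b[j] += low
            new_arcs[(i, j)] = (c, 0, up - low)  # ... and measure flow above it
        return b, new_arcs

    b = {1: 4, 2: 0, 3: -4}
    arcs = {(1, 2): (2, 1, 5), (2, 3): (3, 2, 6), (1, 3): (1, 0, 3)}
    print(remove_lower_bounds(b, arcs))
    # Any feasible flow x' of the new instance gives x = x' + l for the
    # original, with cost differing by the constant sum of c_ij * l_ij.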
3. SHORTEST PATHS

Shortest path problems are among the most fundamental and most commonly encountered problems in the study of transportation and communication networks. The shortest path problem arises when we wish to determine the shortest, cheapest, or most reliable path between one or many pairs of nodes in a network. More importantly, algorithms for a wide variety of combinatorial optimization problems often call for the solution of a large number of shortest path problems as subroutines, so efficient shortest path algorithms play a central role in network optimization.

The major types of shortest path problems, in increasing order of solution difficulty, are (i) finding shortest paths from one node to all other nodes when arc lengths are nonnegative; (ii) finding shortest paths from one node to all other nodes for networks with arbitrary arc lengths; (iii) finding shortest paths from every node to every other node; and (iv) finding various types of constrained shortest paths between nodes (e.g., shortest paths with turn penalties, shortest paths visiting specified nodes, the k-th shortest path).

In this section, we discuss problem types (i), (ii) and (iii). The algorithmic approaches for solving problem types (i) and (ii) can be classified into two groups: label setting and label correcting. The label setting methods are applicable to networks with nonnegative arc lengths, whereas label correcting methods apply to networks with negative arc lengths as well. Each approach assigns tentative distance labels (shortest path distances) to nodes at each step. Label setting methods designate one or more labels as permanent (optimum) at each iteration. Label correcting methods consider all labels as temporary until the final step, when they all become permanent.

We will show that label setting methods have the most attractive worst-case performance; nevertheless, practical experience has shown label correcting methods to be competitive. We first consider a basic implementation of the label setting method, Dijkstra's algorithm, with a running time bound of O(n²). We then describe two more sophisticated implementations, one attractive empirically and one attractive in theory. Next, we consider a generic version of the label correcting method, outlining one special implementation of this general approach that runs in polynomial time and another that performs very well in practice. Finally, we discuss a method to solve the all pairs shortest path problem.
3.1 Dijkstra's Algorithm

We consider a network G = (N, A) with an arc length c_ij associated with each arc (i, j) ∈ A, and let C = max {c_ij : (i, j) ∈ A}. In this section and in Sections 3.2 and 3.3, we assume that arc lengths are integer and, further, that they are nonnegative. We suppose that node s is a specially designated source node, and assume without loss of generality that the network contains a directed path from s to every other node j (we can always ensure this by adding an artificial arc (s, j) with a suitably large length). We invoke this connectivity assumption throughout this section.

Dijkstra's algorithm finds shortest paths from the source node s to all other nodes. The basic idea of the algorithm is to fan out from node s and label nodes in order of their distances from s. Each node i has a label, denoted by d(i): the label is permanent once we know that it represents the shortest distance from s to i, and temporary otherwise. Initially, we give node s a permanent label of zero, and each other node j a temporary label equal to c_sj if (s, j) ∈ A, and ∞ otherwise. At each iteration, the label of a node i is its shortest distance from the source along a path whose internal nodes are all permanently labeled. The algorithm selects a node with the minimum temporary label, makes it permanent, and scans the arcs in A(i) to update the distance labels of adjacent nodes. The algorithm terminates when it has designated all nodes as permanent. The correctness of the algorithm relies on the key observation (which we prove later) that it is always possible to designate the node with the minimum temporary label as permanent. The following procedure is a basic implementation of Dijkstra's algorithm.
algorithm DIJKSTRA;
begin
  P := {s}; T := N − {s};
  d(s) := 0 and pred(s) := 0;
  d(j) := c_sj and pred(j) := s if (s, j) ∈ A, and d(j) := ∞ otherwise;
  while P ≠ N do
  begin
    (node selection) let i ∈ T be a node for which d(i) = min {d(j) : j ∈ T};
    P := P ∪ {i}; T := T − {i};
    (distance update) for each (i, j) ∈ A(i) do
      if d(j) > d(i) + c_ij then d(j) := d(i) + c_ij and pred(j) := i;
  end;
end;
The algorithm associates a predecessor index, denoted by pred(i), with each node i ∈ N, and updates these indices so that pred(i) is the last node prior to node i on the tentative shortest path from node s to node i. At termination, these indices allow us to trace back along a shortest path from each node to the source.

To establish the validity of Dijkstra's algorithm, we use an inductive argument. At each point in the algorithm, the nodes are partitioned into two sets, P and T. Assume that the label of each node in P is the length of a shortest path from the source, whereas the label of each node j in T is the length of a shortest path subject to the restriction that each internal node of the path belongs to P. Then it is valid to transfer the node i in T with the smallest label to P, for the following reason: any path from the source to node i must contain a first node k that is in T. Node k must be at least as far away from the source as node i, since its label is at least that of node i; furthermore, the segment of the path from node k to node i has nonnegative length because arc lengths are nonnegative. This observation shows that the length of any such path is at least d(i), and hence it is valid to permanently label node i. After the algorithm has permanently labeled node i, the temporary labels of some nodes in T − {i} might decrease, because node i could become an internal node in the tentative shortest paths to these nodes; we must thus scan all of the arcs (i, j) in A(i), and if d(j) > d(i) + c_ij, then setting d(j) = d(i) + c_ij updates the labels.

The computational time for this algorithm can be split into the time required by its two basic operations: selecting nodes and updating distances. In an iteration, the algorithm requires O(n) time to identify the node i with minimum temporary label and O(|A(i)|) time to update the distance labels of adjacent nodes. Thus, overall, the algorithm requires O(n²) time for selecting nodes and O(∑_{i ∈ N} |A(i)|) = O(m) time for updating distances. This implementation of Dijkstra's algorithm thus runs in O(n²) time.

Dijkstra's algorithm has been a subject of much research. Researchers have attempted to reduce the node selection time without substantially increasing the time for updating distances. Consequently, they have, using clever data structures, suggested several implementations of the algorithm. These implementations have either dramatically reduced the running time of the algorithm in practice or improved its worst case complexity. We first describe Dial's algorithm, which is comparable to the best label setting algorithms in practice. Subsequently we describe an implementation using R-heaps, which is nearly the best known implementation of Dijkstra's algorithm from the standpoint of worst-case complexity.
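For reference, here is a compact runnable version of Dijkstra's algorithm (our sketch, not the paper's implementation; the example network is hypothetical), using a binary heap with lazy deletion for node selection.

    # Sketch: Dijkstra's algorithm with a binary heap (lazy deletion).
    import heapq

    def dijkstra(adj, s):
        """adj: dict node -> list of (neighbor, nonnegative arc length).
        Returns (d, pred) for all nodes reachable from s."""
        d, pred = {s: 0}, {s: None}
        heap = [(0, s)]
        while heap:
            di, i = heapq.heappop(heap)
            if di > d[i]:
                continue                  # stale entry: i already permanent
            for j, c in adj.get(i, []):   # distance update step
                if d.get(j, float("inf")) > di + c:
                    d[j] = di + c
                    pred[j] = i
                    heapq.heappush(heap, (d[j], j))
        return d, pred

    adj = {'s': [('a', 2), ('b', 4)], 'a': [('b', 1), ('t', 7)], 'b': [('t', 3)]}
    d, pred = dijkstra(adj, 's')
    print(d['t'])        # 6, along the path s -> a -> b -> t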
3.2 Dial's Implementation

The bottleneck operation in Dijkstra's algorithm is node selection. To improve the algorithm's performance, we ask the following question: instead of scanning all temporarily labeled nodes at each iteration to find the one with the minimum distance label, can we reduce the computation time by maintaining distances in some sorted fashion? Dial's algorithm accomplishes this objective, and reduces the computation time in practice, using the following fact:

FACT 3.1. The distance labels that Dijkstra's algorithm designates as permanent are nondecreasing.

This fact follows from the observation that the algorithm permanently labels a node i with the smallest temporary label d(i), and, while scanning arcs in A(i) during the distance update step, never decreases the distance label of any permanently labeled node since arc lengths are nonnegative. FACT 3.1 suggests the following scheme for node selection: we maintain buckets numbered 0, 1, 2, ..., nC, where bucket k stores the nodes whose temporary distance label is k (recall that nC is an upper bound on any distance label). In the node selection step, we scan the buckets in increasing order until we identify the first nonempty bucket. The distance label of each node in this bucket is minimum. One by one, we delete these nodes from the bucket, making them permanent, and scan their arc lists to update the distance labels of adjacent nodes. We then resume the scan of higher numbered buckets in increasing order to select the next nonempty bucket.

By storing the content of these buckets carefully, it is possible to add, delete, and select the next element of any bucket very efficiently; in fact, in O(1) time, i.e., a time bounded by some constant. One implementation uses a data structure known as a doubly linked list, in which we order the content of each bucket arbitrarily, storing with each entry two pointers: one to its immediate predecessor and one to its immediate successor. Doing so permits us, by rearranging the pointers, to select the topmost node from a list, add a bottommost node, or delete a node in O(1) time. Moreover, as we relabel nodes, it is easy to move a node from one bucket to another as its temporary distance label decreases, i.e., we move it from a higher index bucket to a lower index bucket; this transfer also requires O(1) time. Consequently, this scheme performs O(m) distance updates and scans at most nC buckets. The number of buckets can be reduced using the following fact:

FACT 3.2. If d(i) is the distance label most recently made permanent, then every finite temporary label d(j) satisfies d(i) ≤ d(j) ≤ d(i) + C.

To verify this fact, note that for each finitely labeled node j in T, d(j) = d(k) + c_kj for some node k ∈ P (by the property of distance updates), and d(k) ≤ d(i) by FACT 3.1. Hence, d(j) ≤ d(i) + c_kj ≤ d(i) + C. In other words, all finite temporary labels lie in the range [d(i), d(i) + C], which contains only C+1 distinct values. Consequently, we can store the temporarily labeled nodes in C+1 buckets arranged in a circle, placing a node with label d(j) in bucket d(j) mod (C+1). At any point in time, the consecutive buckets k, k+1, ..., C, 0, 1, 2, ..., k−1 (in wraparound order) store nodes with distance labels d(i), d(i)+1, ..., d(i)+C, and each bucket holds only nodes with the same distance label. This storage scheme needs only C+1 buckets.
[Figure 3.1. The bucket arrangement in Dial's algorithm: C+1 buckets 0, 1, ..., C used in a wraparound fashion.]

The algorithm examines the buckets sequentially, in a wraparound fashion, to identify the first nonempty bucket; in the next iteration, it resumes the examination where it left off previously. Dial's algorithm, as compared to the original O(n²) implementation of Dijkstra's algorithm, is very efficient in practice when C is not too large relative to n. In most applications C is modest, and the algorithm examines many fewer than nC buckets. The algorithm, however, is not attractive theoretically: it runs in O(m + nC) time, which is pseudopolynomial, and if C = 2^n the algorithm is not even polynomial. The search for theoretically better data structures has led to several new implementations; in the next section, we describe one whose running time is O(m + n log nC). The discussion of this implementation is of a more technical nature, and the reader can skip it without loss of continuity.
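A minimal sketch of Dial's bucket scheme follows (ours; it uses a list of Python sets rather than doubly linked lists, which keeps the O(1) bucket operations but is easier to read, and the example data are hypothetical).

    # Sketch: Dial's implementation of Dijkstra's algorithm with C+1
    # circular buckets; bucket d mod (C+1) holds nodes with temporary label d.
    def dial(adj, s, C):
        d = {s: 0}
        buckets = [set() for _ in range(C + 1)]
        buckets[0].add(s)
        permanent, base = set(), 0          # base: label made permanent last
        while True:
            # Scan at most C+1 consecutive buckets for the first nonempty one.
            for step in range(C + 1):
                k = (base + step) % (C + 1)
                if buckets[k]:
                    break
            else:
                return d                    # all reachable nodes are permanent
            i = buckets[k].pop()
            base = d[i]
            permanent.add(i)
            for j, c in adj.get(i, []):
                dj = d[i] + c
                if j not in permanent and dj < d.get(j, float("inf")):
                    if j in d:
                        buckets[d[j] % (C + 1)].discard(j)  # move buckets
                    d[j] = dj
                    buckets[dj % (C + 1)].add(j)

    adj = {'s': [('a', 2), ('b', 4)], 'a': [('b', 1), ('t', 7)], 'b': [('t', 3)]}
    print(dial(adj, 's', C=7))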
3.3 R-Heap Implementation

Our first implementation of Dijkstra's algorithm and Dial's implementation represent two extremes. The first implementation considers all the temporarily labeled nodes together (in one large bucket, so to speak) and searches for a node with the smallest label. Dial's algorithm separates nodes by storing any two nodes with different labels in different buckets. Could we improve upon these methods by adopting an intermediate approach, perhaps by storing many, but not all, labels in a bucket? For example, instead of storing only nodes with a temporary label of k in the k-th bucket, we could store temporary labels from 100k to 100k+99 in bucket k. The different temporary labels that can be stored in a bucket make up the range of the bucket; the cardinality of the range is called its width. For the preceding example, the range of bucket k is [100k .. 100k+99] and its width is 100.

Using widths of size k permits us to reduce the number of buckets by a factor of k. But to find the smallest distance label, we need to search all of the elements in the smallest indexed nonempty bucket: if the width is arbitrarily large, we are back to the original Dijkstra implementation, and using a width of 100, say, reduces the number of buckets but still requires searching through the lowest numbered nonempty bucket to find the node with minimum temporary label.

The R-heap algorithm we consider next uses variable width buckets, with widths 1, 1, 2, 4, 8, 16, ..., so that the number of buckets needed is only O(log nC). Moreover, we dynamically modify (reallocate) the ranges of numbers stored in the buckets, redistributing the range of the lowest nonempty bucket in a way that stores the minimum distance label in a bucket whose width is 1. In fact, the running time of this R-heap version of Dijkstra's algorithm is O(m + n log nC).

We now describe the R-heap in more detail. For a given shortest path problem, the R-heap consists of 1 + ⌈log nC⌉ buckets. We do not store the distance label of a node explicitly in the heap; rather, we store a temporary node i in bucket k if d(i) ∈ range(k), where range(k) is a (possibly empty) interval of consecutive integers allocated to bucket k. We store permanent nodes outside the heap. The nodes in a bucket are kept in no particular order, and as the algorithm proceeds, it redistributes the ranges of the buckets.
Initially, the buckets have the following ranges:

  range(0) = [0];
  range(1) = [1];
  range(2) = [2 .. 3];
  range(3) = [4 .. 7];
  range(4) = [8 .. 15];
  ...
  range(K) = [2^(K−1) .. 2^K − 1].

These ranges change dynamically as the algorithm proceeds, but the widths of the buckets never increase beyond their initial widths. Suppose for example that the minimum distance label is determined to lie in the bucket whose range is [8 .. 15]; we could verify this fact by checking that buckets 0 through 3 are all empty. At this point, we know that the minimum temporary label is at least 8, so no temporary label smaller than 8 will ever be needed again, and the ranges currently allocated to buckets 0 to 3 will never be used. Redistributing the range [8 .. 15] over these lower buckets is therefore helpful: we can reassign the ranges [8], [9], [10 .. 11] and [12 .. 15] to buckets 0, 1, 2 and 3, and reinsert the nodes of bucket 4 into these buckets. In any such redistribution a node can only shift to a lower indexed bucket, and since there are K+1 buckets, a node can be shifted at most K = O(log nC) times over the whole algorithm.

Actually, we would carry out the redistribution somewhat differently. Since we will be scanning all of the nodes in the bucket anyway when we redistribute its range, it makes sense to first find the minimum temporary label among them. Suppose for our example that this minimum is 11. Then the buckets need only cover the range [11 .. 15], and we would redistribute this smaller range over buckets 0 to 3 as the ranges
[11], [12], [13 .. 14], [15], with bucket 4's range becoming empty (∅). Moreover, at the end of this redistribution, we are guaranteed that the minimum temporary label is stored in a bucket whose width is 1.

To reiterate, we do not carry out the actual node selection step until the minimum nonempty bucket has width 1. If the minimum nonempty bucket is bucket k and its width is larger than 1, we redistribute the useful part of its range into buckets 0 to k−1, and then reinsert its nodes into those buckets.

We now illustrate R-heaps on the shortest path example given in Figure 3.2, where the number beside each arc indicates its length. For this example, C = 20 and nC = 120, so K = ⌈log 120⌉ = 7 and the R-heap uses buckets 0, 1, ..., 7 with the initial ranges listed above.

[Figure 3.2. The shortest path example with source node s = 1 and C = 20.]

[Figure 3.3. The initial R-heap. The buckets have ranges [0], [1], [2 .. 3], [4 .. 7], [8 .. 15], [16 .. 31], [32 .. 63], [64 .. 127]; for instance, node 3 lies in the bucket with range [2 .. 3], nodes 2 and 4 (labels 13 and 15) lie in the bucket with range [8 .. 15], and node 5 (label 20) lies in the bucket with range [16 .. 31].]

To select the node with the smallest distance label, we scan the buckets 0, 1, 2, ... to find the first nonempty bucket. In our example, after any needed redistribution, node 3 sits alone in a bucket of width 1 and therefore has the minimum temporary label.
The algorithm designates node 3 as permanent, deletes node 3 from the R-heap, and scans the arc (3, 5) to change the distance label of node 5 from 20 to 9. We check whether the new distance label of node 5 is contained in the range of its present bucket, whose range is [16 .. 31]. Since it is not, we move node 5 to a lower index bucket: starting at its present bucket and scanning the bucket ranges from right to left, we identify the first bucket whose range contains the number 9, namely the bucket with range [8 .. 15]. Node 5 joins nodes 2 and 4 in this bucket. At the next node selection, this bucket (of width 8) is the first nonempty bucket, so we redistribute its useful range: the smallest label in the bucket is 9, so we reassign ranges covering [9 .. 15] to buckets 0 through 3 and reinsert the nodes, giving the R-heap of Figure 3.4.

[Figure 3.4. The R-heap after the redistribution: CONTENT(0) = {5}, CONTENT(1) = ∅, CONTENT(2) = ∅, CONTENT(3) = {2, 4}, CONTENT(4) = ∅, with the higher buckets empty.]

We are now in a position to outline the general algorithm and analyze its complexity. Suppose that node j ∈ CONTENT(k) and that its label d(j) decreases. If the modified d(j) ∉ range(k), then we sequentially scan the buckets from right to left, starting at bucket k, to identify the first bucket whose range contains d(j), and we move node j to that bucket. The term R-heap (for redistributive heap) reflects the repeated redistribution of ranges and nodes. Each such move of a node, like each move caused by a redistribution, is to a lower indexed bucket; since there are K+1 buckets, each node moves at most K+1 times over the entire execution. Hence O(nK) is a bound on all node movements, and each of the O(m) distance updates costs O(1) apart from these moves.

Next we consider the node selection step. Node selection begins by scanning the buckets from left to right to identify the first nonempty bucket, say bucket k; this node selection effort is O(K) per selection and O(nK) in total. If k = 0 or k = 1, then any node in the selected bucket has the minimum distance label. If k ≥ 2, we redistribute the useful range of bucket k into the buckets 0, 1, ..., k−1 and reinsert its content into those buckets. If the range of bucket k is [l .. u] and the smallest distance label of a node in the bucket is d_min, then the useful range of the bucket is [d_min .. u]. The algorithm redistributes the useful range in the following manner: we assign the first integer to bucket 0, the next integer to bucket 1, the next two integers to bucket 2, the next four integers to bucket 3, and so on. Since bucket k has width at most 2^(k−1), and since the widths of the first k buckets can be as large as 1, 1, 2, ..., 2^(k−2), for a total potential width of 2^(k−1), we can always redistribute the useful range over the buckets 0, 1, ..., k−1 in this manner. This redistribution and the subsequent reinsertions empty bucket k and move the nodes with the smallest distance labels to the lowest buckets; in particular, the minimum label ends up in a bucket of width 1. Whenever we examine a node in the nonempty bucket with the smallest index, we move it to a lower indexed bucket; each node can move at most K times, so all the nodes together move at most nK times. Thus, the node selection steps take O(nK) total time. Since K = ⌈log nC⌉, and the remaining work is O(m), the algorithm runs in O(m + n log nC) time. We now summarize our discussion.
Theorem 3.1. The R-heap implementation of Dijkstra's algorithm solves the shortest path problem in O(m + n log nC) time.

This algorithm requires 1 + ⌈log nC⌉ buckets. FACT 3.2 permits us to reduce the number of buckets to 1 + ⌈log C⌉, and this refined implementation runs in O(m + n log C) time. For problems that satisfy the similarity assumption (see Section 1.2), this bound becomes O(m + n log n). Using substantially more sophisticated data structures, it is possible to reduce the bound further to O(m + n √(log n)), which is a linear time algorithm for all but the sparsest classes of shortest path problems.
3.4 Label Correcting Algorithms

Label correcting algorithms, as the name implies, maintain tentative distance labels for nodes and correct the labels at every iteration; all labels remain temporary until the end, when they all become permanent simultaneously. These algorithms are applicable to networks with negative arc lengths, provided that the network contains no negative cycle, i.e., a directed cycle whose arc lengths sum to a negative value. Most label correcting algorithms can also detect the presence of negative cycles.

Label correcting algorithms can be viewed as procedures for solving the following recursive equations:

  d(s) = 0,   (3.1)

  d(j) = min {d(i) + c_ij : (i, j) ∈ A},  for each j ∈ N − {s}.   (3.2)

As usual, d(j) denotes the length of a shortest path from the source node to node j. These equations are known as Bellman's equations and represent necessary conditions for shortest path distances. These conditions are also sufficient if every cycle in the network has positive length. We will prove an alternate version of these conditions which is more suitable from the viewpoint of label correcting algorithms.

Theorem 3.2. Let d(i) for i ∈ N be a set of labels. If d(s) = 0 and if in addition the labels satisfy the following conditions, then they represent the shortest path lengths from the source node:

C3.1. d(i) is the length of some directed path from the source node to node i; and

C3.2. d(j) ≤ d(i) + c_ij for all (i, j) ∈ A.
Proof. Since d(i) is the length of some directed path from the source to node i, it is an upper bound on the shortest path length. We show that if the labels satisfy C3.2, then they are also lower bounds on the shortest path lengths, which implies the conclusion of the theorem. Consider any directed path P from the source to node j, consisting of nodes s = i₁ − i₂ − i₃ − ... − i_k = j. Condition C3.2 implies that d(i₂) ≤ d(i₁) + c_{i₁i₂} = c_{i₁i₂}, d(i₃) ≤ d(i₂) + c_{i₂i₃}, ..., d(i_k) ≤ d(i_{k−1}) + c_{i_{k−1}i_k}. Adding these inequalities yields d(j) = d(i_k) ≤ ∑_{(i,j) ∈ P} c_ij. Therefore d(j) is a lower bound on the length of any directed path from the source to node j, including a shortest path.

We note that if the network contains a negative cycle, then no set of labels d(i) satisfies C3.2. For suppose the network contained a negative cycle W and some labels d(i) satisfied C3.2. Then d(i) − d(j) + c_ij ≥ 0 for each arc (i, j) ∈ W, and these inequalities imply that ∑_{(i,j) ∈ W} (d(i) − d(j) + c_ij) = ∑_{(i,j) ∈ W} c_ij ≥ 0, since the labels cancel out in the summation. This conclusion contradicts our assumption that W is a negative cycle.

In the linear programming formulation of the shortest path problem, conditions C3.1 correspond to primal feasibility and conditions C3.2 correspond to dual feasibility. From this perspective, we might view label correcting methods as methods that always maintain primal feasibility and strive to attain dual feasibility.
The generic label correcting algorithm that we consider first is a general procedure for successively updating the distance labels d(i) until they satisfy the conditions C3.2. At any point in the algorithm, the label d(j) is either ∞, indicating that we have yet to discover a directed path from the source to node j, or it is the length of some directed path from the source to node j. The algorithm is based upon the simple observation that whenever d(j) > d(i) + c_ij, the current path from the source to node i, of length d(i), together with the arc (i, j), is a shorter path to node j than the current path of length d(j).
algorithm LABEL CORRECTING;
begin
  d(s) := 0 and pred(s) := 0;
  d(j) := ∞ for each j ∈ N − {s};
  while some arc (i, j) satisfies d(j) > d(i) + c_ij do
  begin
    d(j) := d(i) + c_ij;
    pred(j) := i;
  end;
end;
The correctness of the label correcting algorithm follows from Theorem 3.2: at termination, d(j) ≤ d(i) + c_ij for all (i, j) ∈ A, and hence the labels represent the shortest path lengths. We now note that the algorithm is finite if there are no negative cycles and if the data are integral: each finite label is bounded between −nC and nC, and each update decreases some label by at least one unit, so the algorithm runs in pseudopolynomial time.

A nice feature of this label correcting algorithm is its flexibility: we can select the arcs that violate C3.2 in any order and still assure finite convergence. One drawback of the method, however, is that without a further restriction on the choice of arcs, it does not necessarily run in polynomial time; indeed, if we start with pathological instances and make a poor choice of arcs at every iteration, the number of steps can grow exponentially with n. (Since the algorithm is pseudopolynomial, such instances have exponentially large values of C.) To obtain a polynomial bound, we can organize the computations carefully in the following manner. Arrange the arcs in A in some (possibly arbitrary) order, and make passes through A. In each pass, scan the arcs in order and check the condition d(j) > d(i) + c_ij; if the arc satisfies this condition, then update d(j) = d(i) + c_ij. Terminate the algorithm if no distance label changes during an entire pass. We call this version the modified label correcting algorithm.

Theorem 3.3. When applied to a network containing no negative cycles, the modified label correcting algorithm requires O(nm) time to determine shortest paths from the source to every other node.
Proof. We show that the algorithm performs at most n−1 passes through the arc list; since each pass requires O(1) computations per arc, this conclusion implies the O(nm) bound. Let d^r(j) denote the length of a shortest path from the source to node j consisting of no more than r arcs, and let D^r(j) denote the distance label of node j after r passes through the arc list. We claim, inductively, that D^r(j) ≤ d^r(j) for each j ∈ N and each r = 1, ..., n−1. Suppose D^{r−1}(j) ≤ d^{r−1}(j) for each j ∈ N. The provisions of the modified algorithm imply that

  D^r(j) ≤ min { D^{r−1}(j), min_{i ≠ j} { D^{r−1}(i) + c_ij } }.

Next note that a shortest path to node j containing no more than r arcs either (i) has no more than r−1 arcs, or (ii) contains exactly r arcs. In case (i), d^r(j) = d^{r−1}(j), and in case (ii), d^r(j) = min_{i ≠ j} { d^{r−1}(i) + c_ij }. Consequently,

  d^r(j) = min { d^{r−1}(j), min_{i ≠ j} { d^{r−1}(i) + c_ij } } ≥ min { D^{r−1}(j), min_{i ≠ j} { D^{r−1}(i) + c_ij } } ≥ D^r(j).

Hence, D^r(j) ≤ d^r(j) for all j ∈ N. Finally, we note that if the network contains no negative cycles, then some shortest path from the source to any node j consists of at most n−1 arcs. Therefore, after at most n−1 passes, the algorithm terminates with the shortest path lengths.
The modified label correcting algorithm is also capable of detecting the presence of negative cycles. If the network contains no negative cycle, the algorithm terminates within n−1 passes with the shortest path distances. If some distance label still changes in the n-th pass, then for some node i the algorithm has found a path from the source to node i of length greater than n−1 arcs that has smaller length than every path with fewer arcs; this situation cannot occur unless the network contains a negative cost cycle.
Practical Improvements

As stated so far, the modified label correcting algorithm considers every arc of the network during every pass through the arc list. It need not do so. Suppose we order the arcs in the arc list by their tail nodes, so that all arcs with the same tail node appear consecutively on the list. Now suppose that during one pass, the algorithm does not change the distance label of a node i. Then, during the next pass, d(j) ≤ d(i) + c_ij for every (i, j) ∈ A(i), and the algorithm need not test these conditions. To achieve this savings, the algorithm can maintain a list of nodes whose distance labels have changed since it last examined them. It scans this list in first-in, first-out order; for each node i removed from the list, it examines the arcs in A(i), updating labels and adding newly changed nodes to the list. The following procedure is a formal description of this method.
algorithm MODIFIED LABEL CORRECTING;
begin
  d(s) := 0 and pred(s) := 0;
  d(j) := ∞ for each j ∈ N − {s};
  LIST := {s};
  while LIST ≠ ∅ do
  begin
    select the first element i of LIST;
    delete i from LIST;
    for each (i, j) ∈ A(i) do
      if d(j) > d(i) + c_ij then
      begin
        d(j) := d(i) + c_ij;
        pred(j) := i;
        if j ∉ LIST then add j to the end of LIST;
      end;
  end;
end;
Another modification of this algorithm sacrifices its polynomially bounded running time in the worst case, but greatly improves its running time in practice. The modification alters the manner in which the algorithm adds nodes to LIST. While adding a node i to LIST, we check to see whether it has already appeared on the LIST at some earlier stage. If yes, then we add i to the front of LIST; otherwise, we add it to the end of LIST. This heuristic rests on the observation that if node i has appeared on the LIST before, then some nodes may already have node i as a predecessor; it is advantageous to update the label of node i as early as possible, rather than first examining many other nodes and later repeating those computations. Though this change makes the algorithm very attractive in practice, its worst-case running time is exponential. Empirical studies indicate that this version, implemented with a dequeue, is among the fastest algorithms in practice for finding shortest paths from a single source node to all nodes in non-dense networks. (For dense networks, Dijkstra's algorithm implemented with a simple array is typically more efficient in practice.)
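The sketch below (ours) implements the FIFO version of the modified label correcting algorithm; the hypothetical example includes a negative arc length, which label setting methods cannot handle.

    # Sketch: modified label correcting algorithm (FIFO list). Assumes the
    # network contains no negative cycle reachable from the source.
    from collections import deque

    def label_correcting(adj, s):
        d, pred = {s: 0}, {s: None}
        LIST, on_list = deque([s]), {s}
        while LIST:
            i = LIST.popleft()
            on_list.discard(i)
            for j, c in adj.get(i, []):
                if d[i] + c < d.get(j, float("inf")):
                    d[j] = d[i] + c           # a shorter path to j via i
                    pred[j] = i
                    if j not in on_list:      # enqueue j at most once at a time
                        LIST.append(j)
                        on_list.add(j)
        return d, pred

    adj = {'s': [('a', 4), ('b', 2)], 'b': [('a', -1)], 'a': [('t', 3)]}
    d, pred = label_correcting(adj, 's')
    print(d['t'])     # 4, via s -> b -> a -> t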
3.5 All Pairs Shortest Path Algorithm

In certain applications we need to determine shortest path distances between all pairs of nodes. In this section we describe two algorithms for this problem. The first algorithm is well suited for sparse graphs; it combines the modified label correcting algorithm and Dijkstra's algorithm. The second algorithm is better suited for dense graphs and is based on dynamic programming.

If the network has nonnegative arc lengths, then we can solve the all pairs shortest path problem by applying Dijkstra's algorithm n times, considering each node as the source node once. If the network contains arcs with negative lengths, then we can first transform the network to one with nonnegative arc lengths as follows. Let s be a node from which all nodes in the network are reachable, i.e., connected by directed paths. We use the modified label correcting algorithm to compute shortest path distances d(i) from s to all other nodes; the algorithm either detects a negative cycle or terminates with these distances. In the latter case, we define the new arc lengths as

  c'_ij = c_ij + d(i) − d(j),  for all (i, j) ∈ A.

Condition C3.2 implies that c'_ij ≥ 0 for all (i, j) ∈ A. Further, note that for any path P from node k to node l,

  ∑_{(i,j) ∈ P} c'_ij = ∑_{(i,j) ∈ P} c_ij + d(k) − d(l),

since the intermediate labels cancel out in the summation. This transformation therefore changes the lengths of all paths between a pair of nodes by a constant amount (depending on the pair) and consequently preserves shortest paths. Since arc lengths become nonnegative after the transformation, we can apply Dijkstra's algorithm n times in the transformed network to determine shortest path distances between all pairs of nodes, and then recover the shortest path distance between nodes k and l in the original network by adding d(l) − d(k) to the corresponding distance in the transformed network.

This approach requires O(nm) time to solve the first shortest path problem, and if the network contains no negative cost cycle, the method takes an extra O(n S(n, m, C)) time to compute the remaining shortest path distances. Here S(n, m, C) denotes the time needed to solve a shortest path problem with nonnegative arc lengths; for the R-heap implementation of Dijkstra's algorithm considered previously, S(n, m, C) = m + n log nC.
Another way to solve the all pairs shortest path problem is by dynamic programming; the resulting method is known as Floyd's algorithm. We define the variables d^r(i, j) as follows:

  d^r(i, j) = the length of a shortest path from node i to node j, subject to the condition that the path uses only the nodes 1, 2, ..., r−1 (and i and j) as internal nodes.

Let d(i, j) = d^(n+1)(i, j) denote the actual shortest path distances. To compute d^(r+1)(i, j), observe that a shortest path from node i to node j that uses only nodes 1, 2, ..., r as internal nodes either (i) does not pass through node r, in which case d^(r+1)(i, j) = d^r(i, j), or (ii) does pass through node r, in which case d^(r+1)(i, j) = d^r(i, r) + d^r(r, j). Thus we have

  d^1(i, j) = c_ij,

and

  d^(r+1)(i, j) = min { d^r(i, j), d^r(i, r) + d^r(r, j) }.

We assume that c_ij = ∞ for all node pairs (i, j) ∉ A. It is possible to solve these equations recursively for increasing values of r, varying the node pair (i, j) over N × N for each fixed value of r. The following procedure is a formal description of this algorithm.
algorithm ALL PAIRS SHORTEST PATHS;
begin
  for all node pairs (i, j) ∈ N × N do d(i, j) := ∞ and pred(i, j) := 0;
  for each node i ∈ N do d(i, i) := 0;
  for each arc (i, j) ∈ A do d(i, j) := c_ij and pred(i, j) := i;
  for each r := 1 to n do
    for each (i, j) ∈ N × N do
      if d(i, j) > d(i, r) + d(r, j) then
      begin
        d(i, j) := d(i, r) + d(r, j);
        if i = j and d(i, i) < 0 then the network contains a negative cycle, STOP;
        pred(i, j) := pred(r, j);
      end;
end;
j),
for each
node pair
(i,
p.
The
index pred(i,
j)
denotes the
j.
last
node prior
to
node
from
the
node
to
node
j),
to
node
of length d(i,
indices.
it
performs 0(1)
pair.
Consequently,
it
The algorithm
for
when
<
0.
d(i,
i)
<
some node
some node
from node
r;*i, d(i, r)
i
d(r,
i)
to
node
node
contains a
negative cycle. This cycle can be obtained by using the predecessor indices.
Floyd's algorithm
jilgorithm.
is
in
many
Theorem
(i,
j)
(i)
(ii)
d(i, i)
for all
i;
d(i, j) is the
d(i, j)
to node j;
(Hi)
<
d(i, r)
i,
r,
and
j.
i,
this
theorem
is
a consequence of
Theorem
3.2.
68
4.
MAXIMUM FLOWS
An
important characteristic of a network
is its
capacities
on the
arcs, is the
maximum
The resolution
capacities
and
establishes
Moreover,
maximum
is
the
minimum number
join this pair of
of nodes
paths joining
the
maximum number
reliability
of
node
components.
In this section,
we
maximum
flow
in a network.
We begin
The
by introducing
maximum
flow problem.
upon
the
number
of surprising implications in
We
versions of the basic labeling algorithm with better theoretical performance guarantees.
In particular,
we
maximum
and computationally.
We
Uj; for
integer capacity
any arc
e A.
The source
and sink
(i, j)
are
(j,
of the
loss of
network.
We
assume
in A,
i) is
There
is
no
generality in
making
this
assumption since
all
we
We also
we
can
assume
set the
sum
of the capacities of
list
all
capacitated arcs).
{(i,
Let
U = max
(u^;
(i, j)
A).
As
adjacency
i.
defined as A(i) =
k)
(i,
k)
In the
maximum
flow problem,
t
we
wish
to find the
maximum
s to the
sink node
that satisfies
to
69
Maximize v
subject to
V, ifi
0,
(4.1a)
s,
y
{j
:
Xjj
{)
:
y
(j, i)
Xjj
\
^
ifi*s,t,foraUiG N,
(4.1b)
(i, j)
A)
A)
e A.
"V'
>
^'
<
Xj:
<
Ujj
for each
(i, j)
(4.1c)
It
is
some
Algorithms whose
assume
integrality of data.
by appropriately scaling
the data.
is
The concept
flow
x,
is
we
consider.
Given a
the residual
capacity,
rj;
of
any arc
i
e
j
represents the
(i, j)
maximum
(j,
node
(i)
to
u^:
node
-
and
of arc
i).
x^;,
the
unused capacity
to increase
(i, j),
The and
j.
x;j
on arc
x^:
(j,
i)
flow to node
Consequently,
Uj;
xij
We
call
and
4.1
One
of the simplest
is
and most
path
maximum
The
flow problem
the augmenting
algorithm
due
to
algorithm proceeds by identifying directed paths from the source to the sink in the
residual network
network
algorithm summarizes the basic iterative steps, without specifying any particular
algorithmic strategy for
how
to
70
algorithm
begin
x: =
0;
AUGMENTING PATH;
while there
begin
is
a path
P from
s to
in G(x)
do
A = min
:
(rjj
(i, j)
e P);
augment A
end;
P and update
G(x);
end;
For each
increases
r:j
(i,
j)
P,
r^:
by A and
a
by
A.
We now
more
detail.
First,
we need
method
to
to identify a directed
show
that the
Second,
we need
to
show
that the
algorithm terminates
Finally,
last result
we must
with a
maximum
flow.
The
theorem.
directed path from the source to the sink in the residual network
path.
is
also called
an augmenting
of an
augmenting path
is
the
minimum
an
an additional flow of A
Xj;
increase in
by A
a
in the original
network, or
(i)
a decreeise in
(ii).
Xjj
by A
in the original
it
network, or
(iii)
convex combination of
and
is
easier to
work
and
to
compute
when
the algorithm
terminates.
network
to find a
It
s to find a
directed tree containing nodes that are reachable from the source along a directed path in the residual network.
At any
step,
we
refer to the
labeled
and those
its
The algorithm
selects a labeled
arc
adjacency
list (in
more
uiilabeled nodes.
Eventually, the
maximum
from
s to
it
t.
It
then erases the labels and repeats this process. The algorithm terminates
all
when
has scanned
labeled nodes
algorithmic description specifies the steps of the labeling algorithm in detail. The
71
Network with
arc capacities.
Node
is
the source
and node
is
the sink.
(Arcs not
shown have
zero capacities.)
Network with
a flow
x.
The
72
pred(i), for
indicating the
to trace back
rode
that
caused node
a
to
be labeled.
to the source.
node
algorithm LABELING;
begin
loop
pred(j)
:
for each
N;
L: =
(s);
while L *
begin
and
t is
unlabeled do
select a
node
(i, j)
L;
for each
if
e A(i)
do
rj;
is
unlabeled and
>
then
begin
pred(j)
:
i;
mark
end
end;
as labeled
and add
this
node
to L;
if
is
labeled then
begin
use the predecessor labels to trace back to obtain the augmenting path P
from
s to
:
t;
A = min
:
(rj;
(i, j)
e P);
augment A
erase
all
labels
and go
to loop;
end
else quit the loop;
end; (loop)
end;
The
rjj
uj; - xj:
+
x:j
x:j,the
can be used to obtain the arc flows as follows. Since arc flows satisfy xj: - x:j = uj; - Fjj. Hence, if u^: > rj: we can set x^; =
Ujj
- r^;
and
0;
otherwise
we
set x^:
and
x:j
fj; - u^;.
73
In order to
show
maximum
flow,
we
introduce
some
if
new
definitions
and
notation.
- Q)
N
s.
into
two
N - S:
an
is
the set of
nodes connected
t
to
S defines an
S).
j
s-t cutset.
Consequently,
j
we
(S,
alternatively designate
s-t
cutset as
i
(S,
An
is
arc
(i, j)
with
e S cind
is
and an
arc
(i, j)
with
and S
For
this
flow vector
X,
let
We
refer
cutset (S,
S) as
Fx<S< S)=
i
X
G S
j
X_Xij
e
I_ X
e
Xij.
(4.2)
e S
Def ne
s-t
cutset
(S,
S)
is
defined as
C(S, S) =
X X ie
S je
"ij
^'^^^
S
cutset equals the value of the flow
(4.1
We
in
s-t
and does
not exceed the cutset capacity. Adding the flow conservation constraints
b) for nodes
j
S and noting
-Xjj in
that
when nodes
i,
and
both belong to
S, x^j in
Cemcels
we
obtain
^=1
ie S
Substituting
x^;
I.'^ij
I_
i S
X
je S
''ij
Fx^S, S).
(4.4)
j S
in the first
<
u^;
summation and
xj:
in the
second summation
shows
that
Fx(S,
S)<
Z_
"ij
^ C<S,
S).
(4.5)
i S J6 S
74
This result
is
the
of the
maximum
it
viewed as
a linear program.
weak
duality results,
is
holds as
some
is
choice of
of an
s-t
cutset (S,
This strong
duality property
the
Theorem
equals the
4.1.
to
Proof.
value.
maximum
maximum
flow
(Linear
programming
some
to
be the
set of labeled
initial
nodes
flow
in the residual
x.
Let
S=
N-
Clearly, since x
is
S.
Adding
in
nodes
hence
rj:
for each
forward arc
x;,
(i, j)
in the cutset
xj;
Since
rj:
U|;
Xj:
Xjj,
the conditions
S)
<
Ujj
and
imply that
Uj; for
(S,
and
x^;
for each
backward arc
in the cutset.
Making
V = Fx(S, S) =
i
^
e S
j
]
S
Ujj
= C(S,
S)
(4.6)
But
we have
is
a lower
of
any
s-t cutset.
S) is a
minimum
capacity
cutset
and
its
capacity equals
maximum
flow value
v.
We
The proof
when
it
has at
and
minimum
cutset.
But does
at
it
terminate finitely?
Each labeling
eirc
iteration of the
in A(i).
Consequently, the
If all
arc
and bounded by
a finite
number U, then
N - {s})
is at
most nU. Since the labeling algorithm increases the flow value by
iteration,
it
one unit
in
any
is
terminates within
nU
iterations.
number
is
of iterations
75
exponential in the
number
of nodes.
if
many
iterations.
In addition,
may
not
terminate:
may
maximum
flow value.
Thus
if
the
method
is
to
be effective,
we must
augmenting paths
carefully.
we
overcome
this difficulty
the capacities are irrational; moreover, the max-flow min-cut theorem (and our proof
of
Theorem
4.1)
is
true even
if
A
iteration,
is
its
"forget fulness".
At each
when
it
much
of this information
may be
we
when
4.2
Decreasing the
Number
the
of Augmentations
The bound
not
satisfactory
of
nU on
a
number
is
from
theoretical
perspective.
may
example given
that, in principle,
thein
to find a
maximum
is
flow in no more
initial
any
property,
is
augmentations on
If
we
define
x'
x'
as
the flow vector obtained from y by applying only the augmenting paths, then
also
is
maximum
it
flow (flows around cycles do not change flow value). This result shows that
is,
maximum
flow using
at
most
augmentations.
to
we need
know
maximum
theoretical
flow.
No
bound.
Nevertheless,
on the bound
of
0(nU) augmentations
76
(a)
10
\l
10^,0
(b)
10^,1
10^,1
(0
Figiire 4.2
algorithm.
(a)
(b)
Arc flow
is
(c)
s-b-a-t.
is
s-b-a-t, the
flow
maximum.
77
One
is
to
augment flow
along a "shortest path" from the source to the sink, defined as a path consisting of the
least
number
of arcs.
If
we augment
same or
Moreover, within
m augmentations, the
in the
is
guaranteed
to increase.
(We
will
next section.)
more than
number
of augmentations
most (n-l)m.
a path of
An
the
alternative
is
to
maximum
residual capacity.
This specialization also leads to improved complexity. Let v be any flow value and v* be
maximum
flow value.
at
most
v).
Now
(v*
consider a
v.
sequence of
least
less,
2m
consecutive
maximum
have
At
or
v)/2m
otherwise
we
will
maximum
flow.
Thus
after
augmenting path by
the capacity
is initially at
U and
must be
maximum,
after
maximum.
(Note
we
number
of augmentations.
4.3
Shortest
by
would obtain
a shortest path
in the
Each of these
iterations
worst
case and in practice, and (by our subsequent observations) the resulting computation
is
excessive.
We
can
improve
this
minimum
78
node
to the sink
node
is
all
augmentations.
By
we
The Algorithm
The concept
of distance labels
w^ill
prove
to
be an important construct
in the
4.5.
maximum
we
and
in Sections 4.4
Tj: is
and
A
if it
distance function
-*
Z"*"
a fimction from
is
We say
valid
C4.1
C4-2.
d(t) d(i)
= <
0;
d(j)
(i, j)
We
condition.
validit}/
is
d(i)
a lower
boimd on
i
to
in the residual
network. Let
i^
i3
...
-\ -
be any path of length k in the residual network from node i to t. Then, from C4.2 we have d(i) = d(i|) < d(i2) + 1, d(i2) 2 d(i3) + 1, ... d(ij^) < d(t) + 1 = 1. These inequalities
,
imply that
d(i)
< k for any path of length k in the residual network and, hence, any
node
to
contains at
leaist d(i)
arcs.
i
If
t
for each
node
i,
the distance
from
to
in the residual
network, then
a valid
we
call the
4.1(c),
d =
(0, 0, 0, 0) is
(3, 1, 2, 0)
We now
admissible
if it
define
satisfies
some
d(i)
additional notation.
An
arc
(i, j)
in the residual
network
is
t
d(j)
1.
path from
s to
is
an admissible
path.
next repeatedly augments flow along admissible paths. For any admissible path of length
k, d(s)
k.
Since d(s)
is
a lower
bound on
the length of
to the
Thus,
we
exact.
However,
it
for other
nodes
network
it is
exact distances;
suffices to
distances. There
is
no particular urgency
exactly.
By allowing
flexibility
node
to
be
less
to
t,
we
maintain
79
We can
compute
backward breadth
first
circs,
i',
one
at a time, as follows.
maintains a path
to
some node
path a
We
=
call this
i
and
store
it
using predecessor
of the
pred(j)
(i, j)
two steps
at the current
(i*, j*)
node:
advance or
identifies
some
and
admissible arc
designates
j*
i*,
adds
as the
new
then
i*
(we
i*)
Increasing d(i*)
makes
inadmissible (assuming
s).
Consequently,
we
delete (pred(i*),
i*)
from the
and node
pred(i*)
Whenever
t),
contains node
the algorithm
makes
maximum
possible
augmentation on
this
path and begins again with the source as the current node.
The
algorithm terminates
when
d(s)
to the sink.
We
X =
:
from node
1*
:
s;
while
begin
d(s)
< n do
if i*
ADVANCE(i*)
=
else
if i*
RETREAT(i*);
=
t
then
s;
end; end;
procedure ADVANCE(i);
begin
let (i*, j*)
be an admissible arc in =
i*
A(i*);
pred(j')
and
i*
j*;
end;
80
procedure RETREAT(i');
begin
d(i*)
if !
:
= min
s
d(j)
(i, j)
A(i*)
and
^-
>
);
?t
then
i*
pred(i*);
end;
procedure
begin
AUGMENT;
using predecessor indices identify an augmenting path P from the source to the
sink;
A = min
:
{rjj
(i, j)
P);
augment A
end;
node.
We use the following data structure to select an admissible arc We maintain the list A(i) of arcs emanating from each node
Each node
i
emanating from
Arcs in each
i.
list
can be arranged arbitrarily, but the order, once decided, remains unchanged throughout
the algorithm.
has a current-arc
(i, j)
which
i
is
Initially,
list
is
the
arc in
its
is
arc
list.
The
inadmissible,
makes
When
i
all
arcs in A(i),
becomes the
implicitly
first
arc
list.
we
shall
always
assume
We
maximum
first
show
flow problem. The shortest augmenting path algorithm maintains valid distance
labels at
Lemma
4.1.
each step.
Proof.
We show
by
Initially, the
labels.
Assume, inductively,
i.e.,
condition C4.2.
We
need
to
after
and
(ii)
81
(i)
(i, j)
might delete
this arc
to the residual
i)
with
rjj
>
might, however, create an and, therefore, also create an additional condition d(j) <
j)
Augmentation on arc
that
needs to be
d(i)
satisfied.
1
The distance
though, since
d(j)
(ii)
node
is
when
Observe that
an arc
(i, j)
inadmissible at
some
stage, then
it
d(i) increases
when
A(i),
then no arc
1
:
(i, j)
and
rj;
>
0.
Hence,
d(i)
<
min{d(j) +
(i, j)
A(i)
and
>
0)
d'(i),
lemma.
Finally, the choice for
changing
d(i)
d(i)
<
d(j)
all
(i,
j)
remain valid
Theorem
flow.
4.2.
maximum
Proof.
d(s)
n.
Since
t,
d(s)
is
a lower
bound on
the
to the sink,
which
is
the
s-t
cutset as follows.
< k <
n, let
a^ denote
for
the
number
Note
that Oj^,
must be zero
e N: d(i) > k*)
S,
Let S =
{i
some k* < n - 1 since Oj^ ^n-1. (Recall that d(s) ^ n.) = k and S = N - S. When d(s; ^ n and the algorithm terminates, s e
sets
1
S and
s-t
By
for all
(S,
S).
The
for each
(i, j)
(S,
S).
Hence,
(S,
S)
is
minimum
cutset
is
maximum.
82
We
Lemma
number
Proof.
next
show
computes
maximvun flow
in
O(n^m)
time.
4.2. (a)
Each distance
is
label increases at
most n times.
of relabel steps
at most
n^
(b)
at
most nrnfl.
at
node
increeises d(i)
d(i)
by
at least one.
relabeled
selects
node
i
at
most n times,
n.
From
algorithm never
node
again during an advance step since for every node k in the current path,
relabels a
node
at
total
number
bounded by
n'^.
at least
one
arc,
i.e.,
decreases
its
residual capacity to
Suppose
(i, j)
becomes saturated
sent
at
some
iteration (at
is
which
from
d(i)
j
=
i
d(j)
1).
on
1
(i, j)
until flow
sent back
to
(at
which point
d'(i)
,
d(i)
d(j)
2).
saturations of arc
(i, j)
d(j)
any
arc
(i, j)
can
become saturated
than nm/2.
at
number
of arc saturations is
no more
Theorem
Proof.
4.3.
in
O(n^m)
time.
The algorithm performs 0(nm) flow augmentations and each augmentation takes
in
O(n^m)
augmentation
steps.
and each
length by one;
most
n, the
algorithm
n^m) advance
steps.
The
first
which
are
bounded by nm/2 by
For each node
i,
time.
The
total
time spent in
all relabel
operations
is
V
i
A(i)
= 0(nm).
Finally,
we
N
The time taken
to identify the admissible arc of
arcs.
node
I
is
scanning arcs in
A(i).
A(i)
i.
and
relabels
node
Thus the
time spent in
all
83
scannings
is
0(
V
i
nlA(i)l) = 0(nm).
The combination
of these time
bounds
for
is
The termination
of d(s) ^ n
may
Researchers
in relabeling, a
done
after
it
We
nodes
i.e., ex.
aj^
k, for
^ k <
n.
first
array,
(S,
for
some
k* <
n.
As we have seen
earlier, if
S =
d(s)
>
k*),
then
S)
denotes a
minimum
cutset.
shortest paths
is
intuitively appealing
and
identify at
most 0(nm)
this
bound
to
perform
augmentation.
reduces
OGog
n).
This implementation of
the
maximum
0(nm
log n) time
that tends to increase rather than reduce the computationjd times in practice.
detailed
is
this chapter.
Potential Functions
Lemma
4.2(b)
A
functions.
is
to use potential
Potential function techniques are general purpose techniques for proving the
The use
relationship between the occurrences of various steps of an algorithm that can be used to
84
obtain a
bound on
we
illustrate the
technique by
showing
is
that the
number
0(nm).
Suppose
in the shortest
number
of admissible
eis
end
we
an augmentation or as a
terminates.
steps before
it
m and
many
F(K) ^
capacity of at least one arc to zero and hence reduces F by at least one unit.
relabeling of
Each
node
creates as
cis
A(i)
new
increase in F
is at
most
nm
over
any node
at
most n times
(as a
consequence of
Lemma
its
4.1)
and
V
i
A(i)
nm.
Since the
initial
value of F
is at
most
is
more than
decrease in F due to
is at
all
augmentations
most
m + nm
was
= 0(nm).
representative of the potential function argument.
of augmentations.
This argument
objective
to
is fairly
Our
bound
the
number
We
The
when
the
number
the
of augmentations using
of relabels.
we
of
bound
number
knovm
boiands on the
number
4.4
Freflow-Push Algorithms
a path.
This basic
decomposes
into the
of
A A
decomposes
into
k basic
of
it
maintains
conservation of flow
nodes.
we
develop in
this
85
node
to
this
node.
We
will refer to
of the generic
(ii)
and
updating a
distance label, as in the augmenting path algorithm described in the last section.
define the distance labels and admissible arcs as in the previous section.)
(We
they are
flexible.
A preflow
of (4.1b):
is
a function x:
A R
relaxation
y
{j:(j,i)
Xjj
'^ij
SO
,foralli N-{s,
t).
A)
(j:(i,j)
A)
a
we
N-
{s, t}
as
e(>=
{)
:
Z
(j, i)
''ji
(j
:
A)
X'^ij (i, j) A)
We
refer to a
We
adopt the
convention that the source and sink nodes are never active.
algorithms perform
all
The preflow-push
iteration of the
le<ist
At each
algorithm (except
active node,
i.e.,
its initialization
and
t)
its
one
node
N - {s,
with
its
>
0.
iterative step is to
As
If
path
we send
to
the
it
from
this
node
then
of the
node so
that
new
when
no active nodes.
the
following subroutines:
86
procedure PREPROCESS;
begin
x: =
0;
network,
stairting at
node
t,
(s, j)
e A(s)
and
d(s)
n;
end;
procedure PUSH/RELABEL(i);
begin
if
(i, j)
then
push 5 = min{e(i),
:
r^:)
node
Tj:
to
node
by min
{d(j)
(i, j)
e A(i) and
>
0};
end;
push
of 5 units
e(j)
from node
to
node
decreases both
e(i)
and
r^:
by 6 units and
(i, j)
increases both
saturating
if
and
r;,
by
5 units.
We
is
5 =
rj;
We
The piirpose
is
one admissible arc on which the algorithm can perform further pushes.
algorithm
begin
PREFLOW-PUSH;
PREPROCESS;
while the network contains an
begin
select
active
node do
an active node
i;
PUSH/RELABEL(i);
end; end;
It
might be instructive
of a physical network; arcs represent flexible water pipes, nodes represent joints,
and the
how
far
in this
network,
we v^h
to
In addition,
we
visualize flow in an
87
Initially,
we move
at a
the source
node upward,
to its neighbors.
In general,
node
that has
no downhill
At
this point,
we move
the
towards the
sink.
As we continue
to
move
nodes upwards, the remaining excess flow eventually flows back towards the source.
all
Figure 4.3 illustrates the push/relabel steps applied to the example given in
Figure
4.1(a).
Suppose the
select step
1,
examines node
2.
Since arc
(2,
4)
and
d(2) = d(4) +
push
= min
{2, 1}
units.
node 2
to
1.
Arc
(2, 4) is
(4, 2) is
added
Since node 2
(2, 3)
an active
and
(2, 1)
have positive
do not
new
distance d'(2) =
min
{d(3)
1,
d(l)+l} = min{2,5) =
2.
The preprocessing
node adjacent
to
First,
it
gives each
node
s a
by selecting
all
positive excess.
s,
arcs
a lower
bound on
the length of
t.
any
Since
we
any need
to
s again.
we
is
identify
an admissible arc
in A(i)
data structure
we used
in the shortest
(i, j)
We
maintain vrith
each node
a current arc
which
push operation.
We
We
have seen
earlier that
takes
0(nm)
88
d(3) =
e3=4
d(l) = 4
d(4)
d(2) =
e,= 2
The
residual network after the preprocessing step.
(a)
d(3) =
d(l)
=4
d(4)
d(2)
l 1
6^ =
(b)
89
d(3)
d(l) = 4
d(4) =
d(2) = 2
(c)
Figure 4.3
An
illustration of
steps.
Assuming
we
can easily
resides
show
that
it
finds a
maximum
flow.
either at the source or at the sink implying that the current preflow
r, the residual
to the sink.
This condition
total
is
flow on
the
maximum
flow value.
We now
important
times.
We
4.1,
result:
first
many
The
from
Lemma
when no
it.
The
Lemma
is
43.
connected to
At any stage of the preflow-push algorithm, each node i with positive excess node s by a directed path from i to s in the residual network.
Proof.
By
to the original
(ii)
network
t,
paths from
s to active
nodes, and
(iii)
be an
90
active
node
Then
there
t
must be
a path
P from
s to
in the
flow decomposition of
Then
and hence
a directed path
from
to
s.
This
lemma imples
set.
over an empty
Lemma
Proof.
4.4.
e N,
dii)
<
2n.
The
last
i,
it
had a positive
excess,
i
and hence
s.
the residual network contained a path of length at most n-1 from node
fact that d(s)
to
node
The
d(i)
< d(s) + n -
<
2n.
Lemma
number
4.5.
Each distance
is
label increases at
.
most 2n times.
of relabel steps
at
most 2n^
(b)
The number
of saturating pushes
at
most
nm.
Proof. The proof
is ver>'
much
similar to that of
Lemma
4.2.
Lemma
Proof.
4.6.
The number
of nonsaturating pushes is
O(n^m).
Let
III
We
denote the
V
i
I
d(i).
Since
<
n,
and
d(i)
< 2n for
all
e
is
I,
(after the
preprocessing step)
step,
is at
most
2n^. At termination, F
cases
zero.
must apply:
1.
Case
The
<ilgorithm
is
it
node
increases by e ^
by
at
most
e units.
Since the total increase in d(i) throughout the running time of the
i
is
bounded by
2n''.
F due
to increases in
bounded by
is
Case
2.
The algorithm
it
it
might
1,
new
excess at node
d(j),
j,
number
of active nodes by
and
increasing F by
over
all
saturating pushes.
does not
91
increase III.
will decrease
F by
d(i) since
becomes
inactive,
but
it
simultaneously increases F by
If
d(j)
d(i)
1 if
node
to
become
The net
active.
node
F
was
decrejises
by an amount
d(i).
decreeise in
is at least 1
We
maximum
at
summarize these
possible increase in
facts.
The
initial
value of F
is
at
is Irr-
one unit and F always remains nonnegative. Hence, the nortsaturating pushes can occur
we
indicate
how
adds
to
S nodes
become
active following a
in S,
that
become
from
in
in
it is
preflow-push algorithm
theorem:
O(n'^m) time.
We
Theorem
1.4
in
O(n'^m) time.
is
comparable
to the
bound
of the shortest
its flexibility
potential for
By specifying
operations,
we
can derive
many
max
different algorithms
select
{d(i)
we always
Let h* =
e(i)
>
0,
e N) at
some point
h*-l,
of the algorithm.
in
to
push flow
to
h*-2,
and so on.
Note
all
If a
if
node
relabeled then
excess
dov^n.
that
node during n
node and
we
immediately obtain a bound of O(n^) on the number of node examinations. Each node examination entails
at
92
is
r for
which LlST(r)
is
nonempty.
We
as
doubly linked
lists
We
nonempty
list
We
leave
it
as
an exercise
to
show
bounded by n plus
is
O(n^).
now
evident.
Theorem
4.5
that
ipith the
U
preflow push algorithm
is
straightforward,
Researchers have
time.
We
to
will next
describe
that
number
O(n^m)
We
is
bcised
4.5
Excess-Scaling
Algorithm
at
to
By pushing flows from active nodes, the algorithm The function ej^g^ ~ ^^'^ ^^^'^ i is an
:
infeasibility of a preflow.
we would
0.
we
develop an excess-
The excess-scaling algorithm is based on the following ideas. Let A denote an upper bound on ejj^g^ we refer to this bound as the excess-dominator The excess-scaling
.
is
A/2 S
^jj^ax^^-
"^^
choice assures
that during nonsaturating pushes the algorithm sends relatively large excess closer to the
sink.
little
benefit
The algorithm
maximum
may prove
to
Suppose
93
The algorithm
maximum
may prove
to
Suppose
likely that
node
It is
node
Vkdll
could not
send the accumulated flow closer to the sink, and thus the algorithm
need
its
much
likely to
be a wasted
The
algorithm EXCESS-SCALING;
begin
PREPROCESS; K:=2riogUl.
for k
:
K down
to
do
A: = 2^
while the network contains
a
node
with
e(i)
> A/2 do
end;
end;
to phase.
We
phase with a
A as
Initially,
A=
2'
^6
^ when
'
base
2.
Thus,
< A < 2U. Ehjring the A-scaling phase, A/2 < Cj^g^ < A and ejj^^^
the phase.
may
vary up and
down during
When
Ul +
1
Cjj^g^
< A/2, a
new
and we obtain
The
the
maximum
flow.
in the generic
instead of pushing
units.
6 =
min
{e(i), Tj;}
units of flow,
it
pushes 6 = min
{e(i), Ij;,
- e(j)}
select a
node with
minimum
94
Lemma
C43.
C4.4.
4.7.
The algorithm
satisfies the
at least
No
(i, j),
we have
e(i)
e(j) is
is
label
(i, j)
<
d(i)
since arc
is
min
{A/2,
ijj)
units of flow,
of flow.
we
at leaist
Let
Tj;,
e'(j)
-
be the
e(j))
after
All
the push.
Then
e'(j)
e(j)
+ min
{e(i),
<
A-
e(j)
<A
less
than or equal to A.
Lemma
4.8.
scaling phase
0(n^
log
U) pushes
in total.
^
ie
e(i)
d(i)/A.
Using
N
Since the algorithm has
first.
we
lemma.
Odog U)
at
a consequence of the
The
e(i) is
initial
value of F
bounded by A and
bounded by
2n.
must apply:
Case
1.
The algorithm
is
it
node
increases
e(i)
by
units.
i,
<
A.
increaise in d(i)
4.4),
is
the total
due
to the relabeling of
nodes
bounded by 2n^
is at
in the A-scaling
all
phase
F due
to
node
relabelings
most
2n'^
over
scaling phases).
Case
2.
The algorithm
is
it
it
In either
F decreases.
i
A
and
sends
at leaist
A/2
tmits of flow
at least
from node
1/2
units.
to
node
F decreaases by
is at
Since the
initial
value of F
at the
in
during
this scaling
is
phase sum to
8rr.
at
the
number
of nonsaturating
pushes
bounded by
95
This
lemma
implies a
bound
of
0(nm
all
+ n^ log U)
algorithm since
we have
other operations
such
as saturating
require
0(nm)
time.
Up
to this
we have
if
minimum
distance label
easy,
this identification is
we
to the
one used
label.
4.4 to find a
e(i)
We
is
maintain the
LIST(r) =
{i
r),
and a variable
level
which
a lower
bound on
for
which LlST(r)
is
nonempty.
We
nonempty
starting
at LIST(level)
lists.
We
is
leave as an
exercise to
show
needed
is
operation.
result.
With
this observation,
we
Theorem
time.
4.6
in
0(nm + n^
log
U)
To conclude this section, we show how to solve maximum flow problems vdth nonnegative lower bounds on flows. Let /j; ^ denote the lower bound for flow on any
eu'C (i,
j)
e A.
Although the
maximum
has a feasible solution, the problem wiih nonnegative lower bounds could be
We
We
problem by solving a maximum flow set x^: = /j: for each arc (i, j) e A. This
i
N.
(We
We
e(i)
s*,
and
super sink,
node
t*.
with
>
we add an
t*)
arc
(s*,i)
with capacity
e(i),
and
for each
node
with
e(i)
<
0,
we add an
s* to
t*.
arc
(i,
with capacity
-e(i).
We
then solve a
v*
problem from
Let
x* denote the
maximum
v* =
{i:
flow and
e(i)
If
\
e(i)
>
0)
is
feasible
arc
(i, j)
as
x^;
/jj
a feasible flow;
otherwise,
the problem
infeasible.
96
a feasible flow,
initially
first
we
maximum
flow
as
(i, j)
Xjj)
(xjj
/jj).
The
cmd
on arc
(j,
i).
It is
it
is
maximum maximum
flow
flow
we have
already discussed.
97
5.
we
minimum
cost flow
problem.
We
Minimize
2^
Cj; x;:
'
{5.1a)
(i,j)A^
subject to
{j
(i, j)
X X) X::
(j
(j, i)
X^!k) =
''ii
t)(>)'
for
a"
>
e N,
(5.1b)
<
xjj
<
Ujj,
for each
(i, j)
A.
(5.1c)
We assume
nonnegative.
Let
that the
lower bounds
(
/j;
all
C
}
=
).
max
Cj;
(i, j)
and
max
max
lb(i)l
ie N},
max
ujj
(i, j)
in Section 2.4
We
assumption that
all
data
(cost,
supply/demand and
problem
We
also
assume
that the
minimum
cost flow
satisfies the
A5.1.
Feasibility Assumption.
We assume
that
X
ieN
^(^^
and
that
the
minimum
cost
We
maximum
t*.
feasibility of the
minimum
cost flow
problem by solving a
s*,
sink
node
i
with
arc
b(i)
(i,
>
0,
add an
arc
(s*, i)
with capacity
b(i),
for each
node
with
<
0,
add an
If
t*)
with capacity
-b(i).
flow problem
cost
from
s* to
t*.
the
maximum
T
b(D >
b(i) 0)
then the
minimum
flow problem
A5.2.
is feasible;
otherwise,
it is
infeasible.
Connectedness Assumption.
(i.e.,
directed path
We assume that the network G contains an uncapacitated each arc in the path has infinite capacity) between every pair of nodes.
this condition,
if
We
(j,
impose
j
(1, j)
and
1) for
each
and assigning
a large cost
to each of these
98
arcs.
No
minimum
problem
artificial arcs.
The
residual network
(i, j)
is
defined as follows:
Cj:
by two
arc
(j,
and
(j, i).
The arc
(i, j)
has cost
rjj
and
x^;.
residual capacity
u^j - x^;,
and the
has cost
-Cj:
The
The concept
example,
if
some
(i,
notational difficulties.
j)
For
and
(j,
i),
to
arcs
from node
to
node
costs.
Our
one node
more general
parallel arcs
never arise
by inserting
extra
nodes on
parallel arcs,
we
can produce a
parallel arcs).
in the residual
network G(x)
is
an augmenting
and vice-versa
augmenting
cycle).
Theorem
Theorem
2.4.
5.1.
feasible flow
is
an optimum flow
if
and only
if
5.1.
As we have seen
programming dual
in Section 1.2,
due
minimum
The
cost
linear
many
of these properties.
Moreover, the
minimum
cost flow
problem and
its
we
formally
conditions.
99
We
each arc
generality.
in (5.1b).
consider the
j)
minimum
is
cost flow
problem
that this
(5.1)
assuming
that
Uj;
>
for
(i,
A.
It
possible to
show
7t(i)
assumption imposes no
loss of
i
We
redundant,
,
we
that
can
set
We,
therefore
assume
7c(l)
=
(i,
Further,
in (5.1c).
we
The
dual problem to
(5.1) is:
Maximize
X
ie
t)(') '^(i^
~
(i,j)
X
e
"ij ^i\
^
(5 2a)
'
subject to
7c(i)
7c(j)
6ij
<
Cjj
for all
(i, j)
e A,
(5.2b)
5jjS
0,
foraU
(i,j)e
A,
(5.2c)
and
Ji(i)
are unrestricted.
>
=>
7i(i)
n(j)
5jj
Cjj
(5.3)
6jj
>
Xjj
Ujj.
(5.4)
to the
=*
7c(i)
7t(j)
<
Cjj
0<xjj <u^j=*
= Ujj=>
Jt(i)-
Jt(j)
Cjj,
(5.6)
Xij
n(i)
n{])
^ qj
<
Xj:
(5.7)
To see
this equivalence,
suppose that
<
Uj: for
some
arc
(i, j).
The condition
(5.3)
implies that
7t(i)-7t(j)
-5jj =
Cjj,
(5.8)
Since
(5.6).
Xj:
<
Uj;
Xj;
(5.4)
Uj:
implies that
6jj
0;
Whenever
>
for
some
arc
(i, j),
implies that
n(i)
n(j)
5jj
Cjj
100
Substituting
5jj
in this
6jj
equation gives
(5.7).
Finally,
if
xj:
<
uj;
for
some
arc
(i, j)
then
(5.4)
imples that
and substituting
We
(5.5)
(i, j)
as
Cj;
Cj:
Ji(i)
+
is
n(j).
The conditions
if it
(5.7)
imply
that a pair
x,
of flows and
node
potentials
optimal
satisfies
C5.1 C5.2
X
If
> = <
0, 0, 0,
then
then
Xjj
0.
C5.3
C5.4
If
If
<
x^:
Xj;
<
Ujj.
then
U|j.
Observe
however,
we
retain
is
feasible.
(i, j)
in the residual
network G(x).
Note
note that
if
>
and
Xj;
>
C5.2, C5.3,
(i,
j)
and
C5.4.
To
<
in the original
Cjj.
residual network
C5.6.
would contain
arc
with
Cj;
Cjj
= Xjj
But then
for
Cjj
contradicting
if
<
and
<
Uj;
some
(i, j)
in A.
It is
easy to establish the equivalence between these optimality conditions and the
condition stated in
satisf)'ing
Theorem
5.1.
x,
W be any
X
(i,j)
C:;
'^
S 0. Further
S
(i,j)e
q: =
''
(i,j)
XW
C;;
I
(i,j)
(-Jt(i)
Jt(j))
(i,
-t^ Cjj
cycle,
i)eW
To see
the converse, suppose that x
is
feasible
negative cycle.
Then
The
in the residual
Cj:,
network the shortest distances from node 1, with Let d(i) denote the shortest distance from
node
to
node
i.
shortest path optimality condition C3.2 implies that d(j) < d(i) + q;
101
for aU
(i, j)
in G(x).
Let n =
x,
71
d.
Then
< q; +
d(i)
d(j)
Cj; - Jt(i)
7t(j)
Cj;
for all
(i, j)
in G(x).
satisfies
5^
The minimum
cost
maximum
flow problems.
The
shortest path
s to all
,
other nodes
can be formulated as a minimum cost flow problem by setting b(l) = (n - 1) b(i) = -1 for all 1 * s, and Uj; = for each (i, j) e A (in fact, setting Uj: equal to any integer greater
than (n 1)
will suffice
if
we wish
s to
to
maintain
t
finite capacities).
Similarly, the
maximum
=
node
minimum
cost
(t,
with
Cj:
c^g
= -1 and u^^ =
for each arc
(in fact,
Uj^
max
{u|;
(i, j)
(i, j)
A. Thus,
minimum
cost flow
maximum
minimum
cost flow
problem either
explicitly or implicitly
for these
improved algorithms
minimum
cost flow
for
problem.
the
minimum
We
how
to obtain
to obtain
is
c is the vector of
reduced
costs.
We
Any
arc
(i, j)
bound
defined as follows:
102
(i)
(ii)
(i, j)
in
(i, j)
(iii)
For each
(i,
j)
A with Cj; > 0, A* contains an arc in A with Cj; < 0, A* contains an arc in A with c,; = 0, A* contains an
(i, j)
with
u^:*
1j:
0.
(i, j)
with u^* =
(i,
1^:*
=Uj;.
arc
j)
with
Uj;*
uj;
and
hf =
0-
The lower and upper bounds on arcs in the cost-residual network G* are defined so that any flow in G* satisfies the optimality conditions C5.2-C5.4. If Cj; > for some
(i, j)
xj:
in the
optimum
(i, j)
flow.
Similarly,
if
Cjj
<
for
some
(i, j)
must be
at the arc's
upper bound
in the
optimum
0,
condition C5.3.
.
Now
network
the problem
is
same
supply/demand
We
first
bounds of arcs
problem
to a
maximum
cost
Then
maximum minimum
problem
in G.
5.3.
engineers and
many
others
have extensively studied the minimum cost flow problem and have proposed a number
of different algorithms to solve this problem.
sections,
we
minimum
We
first
The negative
to attain
x and strives
dual
feasibility.
when
when
Theorem
5.1
implies that
has found a
minimum
cost flow.
103
algorithm
NEGATIVE CYCLE;
begin
establish a feasible flow x in the network;
W;
= min
[t^
(i, j)
e W);
augment
end;
end;
A
cycle
3.4,
is
maximum
flow problem
One
the label correcting algorithm for the shortest path problem, described in Section
to identify a negative cycle.
flow cost by
zero
is
one
unit.
Since
mCU
is
an upper bound on an
cost, the
a lower
bound on
the
optimum flow
O(mCU)
iterations
in total.
we
briefly
(i)
much
less
The simplex
algorithm
solution
be discussed
later)
maintains a tree
and node
0(m)
effort.
However, due
to degeneracy, the
amoimt
(ii)
maximum improvement
due
in the objective
function value.
The improvement
is
to the
augmentation
x* be
along a cycle
(i, j)
IW
m
(min
^
(rjj
(i, j)
e W)).
Let x be
an
optimum
flow.
The augmenting
cycle theorem
(Theorem
to x.
Further,
improvements
to ex -ex*.
objective
due
to
sum
Consequently,
at least
to x
function by at least
(ex -cx*)/m.
104
cycle with
obtain
maximum improvement, then Lemma 1.1 implies an optimum flow within 0(m log mCU) iterations.
cycle
is
that the
method would
Finding a
of this
maximum
improvement
modest variation
approach yields
minimum
(iii)
minimum mean
it
cost.
We define
of a cycle
cycle
cost divided
contains.
minimum mean
whose mean
cost
is
as small as possible.
time.
possible to identify a
minimum mean
that
if
cycle in
0(nm)
or 0(Vri
m log nC)
shown
the
minimum mean
is
cycle, then
minimum mean
(negative) cycle
1.1
cycle value
nondecreasing;
iterations.
Since
mean
cost of the
minimum mean
-1/n,
is
Lemma
0(nm
5.4.
Successive
Shortest
Path
Algorithm
The negative
algorithm maintains dual feasibility of the solution at every step and strives to attain
primal
feasibility.
It
supply/demand
At each
step, the
to
The algorithm
when
the
supply/demand
the capacity
i
constraints.
<md normegativity
constraints.
x,
we
as
e(i)
b(i)
+
{j:
(j, i)
X A]
''ii
{j: (i,j)
X a1
e(i)
''ii'
for all
e N.
If e(i)
-e(i) is
>
for
some node
i,
then
e(i) is
of
node
Let S
i,
if e(i)
<
0,
then
node
vdth
called balanced.
105
sets of excess
and
deficit
nodes respectively.
to a
pseudoflow
is
way
that
we
The successive shortest path algorithm successively augments flow along shortest paths computed with respect to the reduced costs Cj;. Observe that for any directed path
P from
node k
to a
node
/, (i,
fe P
C;;
''
=
(i,
Y fe
C:; - nil)
+ n(k).
?'>
potentials change
all
and the
Cjj.
the
same
bls
The correctness
on the following
result.
5.1. Suppose a pseudoflow x satisfies the dual feasibility condition C5.6 unth respect to the node potentials it. Furthermore, suppose that x' is obtained from x by sending flow along a shortest path from a node k to a node I in Gix). Then x' also satisfies the dual feasibility conditions with respect to some node potentials.
Lemma
Proof.
jt,
Since x satisfies the dual feasibility conditions with respect to the node potentials
Cj:
we have
to
for all
(i, j)
in G(x).
node k
any node v
We
satisfies the
= 7t-d.
The
d(j)<d(i)+
Substituting
cjj
for all
(i, j)
in G(x).
Cj:
=
-
Cj;
Jt(i)
n(j) in
these conditions
and using
7t'(i)
7t(i)
d(i) yields
qj"
Cjj
7:'(i)
n'(j)
0,
for all
(i, j)
in G(x).
Hence, x
every arc
every arc
satisfies
(i, j)
(i, j)
n'.
/,
Cj;'
=
Cjj
for
node
for
P and
Cj:
c^;
;c(i)
Jt(j).
We
in
are
now
its
in a position to
arc
Augmenting flow on an
Cj:
arc
(i, j)
(i, j)
may add
,
reversal
arc
(j,
i)
to the residual
Cjj
0,
and so
(j, i)
The node
Besides using
them
to
we
106
lengths are nonnegative, thus enabling us to solve the shortest path subproblems
efficiently.
more
begin
X
:
7t
0;
e(i)
compute imbalances
while S ^
and
S and T;
do
begin
select a
node k
e S
and
node /
T;
d(j)
from node k
to all
Cj;;
P denote
:
a shortest
path from k to
1;
ujxJaten
6
:
= 7t-d;
e(k), -e(/),
= min
min
rj:
(i, j)
];
augment 6
update
end; end;
X,
S and
To
satisfies
we
set x
0,
which
is
a feasible pseudoflow
and
arc
since,
by assumption,
all
5*0,
then
T*
because the
sum
of excesses always
equals the
sum
of deficits.
node k
to
node
/.
Each iteration of
algorithm solves a shortest path problem with nonnegative arc lengths and reduces
the supply of
some node by
Cj:
at least
one
unit.
Consequently,
if
is
an upper bound on
iterations.
nU
Since
this
algorithm
is
S(n,
m, C)
Currently,
is
bound
implement
Dijkstra's algorithm
is
CXm + n
bound
is
0(min {m
log log C,
m
it
+
is
nVlogC
).
The successive
n,
polynomial in
and the
supply U.
The algorithm
however, f>olynomial
107
minimum
cost flow
problem
for
which
1.
In Section 5.7,
we
will
minimum
cost
5.5.
The
primal-dual algorithm
is
except that instead of sending flow on only one path during an iteration,
might send
flow along
many
paths.
To explain
we
transform the
minimum
cost flow
problem
into a single-source
in the
assumption A5.1).
primal-dual
algorithm solves a shortest path problem from the source to update the node potentials
(i.e.,
as before, each
7:(j)
becomes
7t(j) -
d(j))
maximum
flow problem to
send the
reduced
maximum
possible flow from the source to the sink using only arcs with zero
that the excess of
cost.
some node
strictly
decreases at
each iteration, and also assures that the node potential of the sink
latter
strictly decreases.
The
flow
we have
solved the
maximum
problem, the network contains no path from the source to the sink in the residual
network consisting
iteration d(t)
reduced
1.
magnitude
of each
node
potential
is
is
better than that of the successive shortest path algorithm, but, of course, the algorithm
maximum
flow problem
at
each iteration.
Thus, the algorithm has an overall complexity of 0(min (nU S(n, m, C),
nC M(n, m,
U)),
where
S(n,
shortest p>ath
The successive
shortest path
mass balance
constraints.
These algorithnns
comes
mass balance
However, we could
The
just
intermediate steps.
and may
idea
is
on an arc
(i, j)
to Uj;
and the flow bound restrictior. The basic if Cj; > 0, Cj: < 0, drive the flow to zero if
=
0.
and
to permit
and
Uj: if
Cj:
The
kilter
number, represented by
k^:.
108
kjj,
of an arc
(i, j)
is
defined
cis
the
minimum
satisfy its
j)
feasibility condition.
(i,
with
is
Cjj
>
0, k^j
x^j
and
for
an arc
(i, j)
with
c^j
<
0, k^;
u^;
x^:
An
arc with
k^:
said to be in-kilter.
At each
it
number number
terminates
when
all
Suppose the
kilter
the arc.
would obtain
augment
this
a shortest path
to
node
{(i, j)).
in the residual
at least
is
The proof
of the correctness of
algorithm
more
algorithm.
5.6.
for the
minimum
cost flow
problem
for
is
the
linear
problem
offers several
need
simplex tableau.
The
in
two decades
for maintaining
have
substantially
of the algorithm.
testing,
Though no
known
to
minimum
In this section,
we
We
first
define the concept of a basis structure and describe a data structure to store and to
manipulate the
basis,
which
is
a spanning tree.
structure.
and node
potentials for
any basis
We then show how to compute arc flows We next discuss how to perform various
to
simplex operations such as the selection of entering arcs, leaving arcs and pivots using
the tree data structiire. Finally,
we show how
simplex algorithm.
109
each stage
U); B,
minimum
The
cost flow
set
problem
defined by a triple
i.e.,
(B, L,
and
tree,
U p>artition
and L and
B denotes the
arcs of a spanrung
U
by
and upper
U) is j) g U,
bounds.
We
U) as
a basis structure.
(i, j)
A basis
xj:
structure (B, L,
u^: for
called feasible
setting
Xj;
for each
e L,
and
setting
(5.1c).
each
(i,
A
+
U)
is
called an
optimum
basis structure
if it is
Cj;
by
Cj;
nii)
n(j) satisfy
the following
optimality conditions:
Cjj
= S <
for each
(i, j)
e B,
L,
(5.9)
Cij
for each
for each
(i, j)
(5.10)
,
Cjj
(i, j)
U.
/
(5.11)
We
shall see a
that
if
tree path in
B from node
to
node
j.
Then,
imply that -7t(j) denotes the length of the cj; = Cj; - jc(i) + 7t(j) for a nonbeisic arc (i, p in
L denotes
the change in the cost of flow achieved by sending one unit of flow through
node
to
node
j
i,
(i, j),
to
node
1.
The condition
not profitable for any nonbasic arc in L. The condition (5.11) has a
similar interpretation.
becomes an optimum
basic structure.
procedure.
110
algorithm
NETWORK
SIMPLEX;
begin
determine an
initial btisic feasible
compute node
do
an entering arc
(k,
/)
(k, /)
add arc
to the
forming a cycle
we
in greater detail.
Basis Structure
of obtaining an initial
We
(j,
{!),
the network
contains arcs
(1,
j)
and
1)
initial basis
(1, j)
with flow
and arc
set
(j,
1)
with flow
b(j) if b(j)
>
0.
The
this
jmd the
as
U is
computed using
(5.9),
we
The
is
simplex algorithm can perform operations efficiently and update the representation
quickly
when
We
root.
as "hanging"
1 is
We associate
node
in the tree:
node
it
Ill
to the root.
stores the
first
node
in that
i)
number
The Figure
5.1
shows an example
of these indices.
Note
that
by
we
We
is
the predecessor of
node
i
and
is
a successor of
node
The descendants
and so
node
consist of the
node
itself, its
successors, successors of
successors,
on.
set (5, 6, 7, 8, 9)
is
of
node
5 in Figure 5.1.
nodes
4, 7, 8,
and 9 are
leaf nodes.
The thread
threads
its
way through
nodes of the
tree, starting at
the root
and
visiting
nodes
in a
"left to right"
order,
and then
The
first
Section 1.5 and setting the thread of a node to be the node encountered after the
itself
node
in
this
depth
first
search.
i.
two
properties;
(i)
node appears
itself;
and
(ii)
The thread
means
descendants of a node
visited until the
We
5,
at least as large as
node
i.
For
example, starting
node
5,
we
visit
nodes
3.
6,
8, 9,
and 7
in order,
descendants of node
and then
left
visit
node
we know
that
we have
node
5.
As we
will see,
finding the descendant tree of a node efficiently adds sigiuficantly to the efficiency of the
simplex method.
basic steps:
(i)
p>otentials of a
structure.
We
now
describe
how
to
Computing Node
Potentials
We
first
We
assume
that n(l)
0.
Note
112
can be set arbitrarily since one constraint in (5.1b) is redundant. We compute the remaining node potentials using the conditions that Cj: = for each arc (i, j) in B. These
conditions can alternatively be stated as
113
n(j)
Ji(i)
Cjj,
(i, j)
e B.
(5.12)
is
to start at
node
potentials.
j,
The
whenever
this
its
node
it
comput
in
7t(j)
using
(5.12).
The thread
compute
node potentials
method.
procedure
begin
7t(l):
COMPUTE POTENTIALS;
=
0;
j:
= thread(l);
j
while
do
begin
i
:
pred(j);
if
(i, j)
then
;:(])
7t(i)
Cj;; Cjj;
if
(j, i)
A then
7t(j)
7t(i)
= thread (j);
end; end;
compute flows on
We
in
computing flows
this task.
114
procedure
begin
e(i)
:
COMPUTE FLOWS;
=
b(i) for
aU
N;
let
T be
for each
U do
subtract Uj; from
e(i)
set X|j
u^j,
and add
u^: to e(j);
while T*{1) do
begin
select a leaf
i
:
node
in the subtree T;
pred(j);
if
(i, j)
T then
=
e(j);
Xj;
-e(j);
else
Xjj
add
e(j)
to e(i);
j
delete
node
and the
arc incident to
it
from
T;
end; end;
One way
thread indices.
is
to select
nodes
this task in
the nodes
Note
node appears
after
prior to
its
its
examining
descendants.
Now
additional
The
we
set x^;
demand
of Uj; units at
Xj:
node
u^:
explains the
The manner
sum
of the
connected to the
(i, j)
(or
(j,
i)),
this arc
must carry
-e(j)
(or
e(j))
units of flow to satisfy the adjusted supply /demand of nodes in the subtree.
system of equations Bx =
b,
corresponding to
2.6 in Section
is
Theorem
is
precisely
115
Compute
B =
by back
Entering
Arc
types of arcs are eligible to enter the basis:
a negative
is
Two
bound with
aiiy
nonbasic arc
at its
lower
a
at its
(5.10) or
for selecting
inajor effect
selects
I
An
i.e.,
implementation that
Cjj
among such
arcs,
which
is
very time<onsuming.
On
the other
cyclically
the
would quickly
might require a
of the
relatively
number
of iterations
due
to the
list
One
most successful
effective
compromise
strategies.
a candidate
list
minor
iterations.
In a
major
iteration,
we
list.
We
examine arcs
at a time,
adding
(if
We
repeat this
list
until either
we have examined
all
nodes or the
has reached
its
maximum
allowable
size.
node
iteration ended.
Once
minor
list
in a
major
iteration,
it
performs
list
iterations,
scanning
all
most
As we scan
the arcs,
we
list
no longer
conditions.
Once
minor
the
list
on the
list
number
of
iterations to
iteration.
we
rebuild the
116
Leaving Arc
Suppose we
basis
1)
The addition
is
B forms
W, which
sometimes referred
(k, /) if (k, /)
pivot cycle.
We
/)
around
W along and opposite to the cycle's orientation. Sending additional flow the pivot cycle W in the direction of orientation strictly decreases the cost of
its
We
(i, j)
until
in the
W reaches
5j:
its
change
on an arc
|Uj:
X|:
if (i, j)
W,
^j=[Xi;,
if(i,j)eW.
e
We
W.
If P(i)
send 6 = min
{5jj
(i, j)
W)
units of flow
around W, and
is
select
an arc
(p, q)
arc.
The
to the root
node, then
/)}
P(k)
P(/))
P(/))).
P(/).
In other words,
(k,
/)
and the
and
Using predecessor
W as
/
follows.
Start at
predecessor indices trace the path from this node to the root and label
this path.
the nodes in
until
we
labeled, say
the
first
common
P(/)
The
cycle
/).
up
It
arc (k,
but
it
can be improved.
has the drawback of backtracking along some arcs that are not in
the portion of the path P(k) lying between the apex
W, namely,
those in
and the
root.
The simultaneous
use of depth and predecessor indices, as indicated in the following procedure, eliminates
this extra
work.
117
*
'
begin
i
:
= k and
i
/;
while
^ j do
begin
if
pred(i)
j
:
else
depth(j)
= pred(j)
else
= pred(i) and
pred(j);
end;
w
end;
i;
A
/.
simple modification of
this
procedure permits
first
it
W as
it
determines the
common
W,
ancestor
of
nodes k and
flows on arcs.
typically
The
entire flow
worstose, but
Basis
Exchange
In the terminology of the simplex
is
a pivot operation.
If
is
nondegenerate.
basis
is
called degenerate
nondegenerate otherwise.
basis.
Observe
(k, /) for
it
must update the basis structure. If the leaving arc is the same as the entering arc, which would happen when 6 = uj^j the basis does not change. In this instance, the arc (k,J)
,
set
to the set
U, or vice versa.
If
becomes
more extensive chmges are needed. In this instamce, the arc (p, q) nonbasic arc at its lower or upper bound depending upon whether Xpg = or
Oc,
/)
and deleting
tree.
(p, q)
new
basis
spanning
The node
potentials also
follows.
The deletion
Note
of the arc (p, q) from the previous b<isis partitions the set of nodes
,
into
root node,
and the
The
arc (k,
/)
has
118
one endpoint
Cjj - 7t(i)
in
T-j
in T2.
As
is
0,
and
7t(j)
new
nodes
in the
subtree T^ remain unchanged, and the potentials of nodes in the subtree T2 change by a
constant amount.
-
If
k e T^ and
e T2,
then
all
the
node potentials
Cjj.
in
T2 change by
Cj^/
if
e T|
the thread
and depth
procedure
begin
if
UPDATE POTENTIALS;
: :
q e T2 then y = q else y =
:
p;
if k e T| then change =
7t(y)
Cjj else
change = Cjj
:
7t(y)
+ change;
= thread(y);
begin
7c(z)
:
7:(z)
+ change;
= thread (z);
end;
end;
The
exchange
is
This step
is
we
refer the reader to the reference material cited in Section 6.4 for
it is
the details.
time.
We
do
Termination
as just described,
basis structure
easy to
is
show
number
of steps
if
each
pivot operation
nondegenerate.
Recall that
cj^/
which 6 >
the
new
61
cy
structiire.
number
of basis structures
and every
basis structure
cost, the
we address next.
119
number
we impose an
an
leaving arcs. Researchers have constructed very small network examples for which poor
i.e.,
infinite repetitive
is
Degeneracy
in
network problems
Computational studies have shown that as many as 90% of the pivot operations
common
runs
we show
next,
by maintaining a
special type of
finitely;
moreover,
it
practice as well.
minimum
cost flow
problem with
integral
As
earlier,
we
conceive of a basis tree as a tree hanging from the root node. The
upward pointing (towards the root) or are downward pointing (away from
(B, L,
the root).
positive
We
U)
is
strongly feasible
if
we
can send a
amount
any node
in the tree
Observe
no upward pointing
at its
eirc
can be at
lower bound.
for avoiding cycling in the
The
perturbation technique
is
well-known method
fecisible basis is
it
is
easy to
optimum
optimum
solution of the
original problem.
We show
network
simplex method
basis technique.
is
The minimum cost flow problem can be perturbed by changing the supply/demand vector b to b+E We say that e = (Ej, , t^) is a feasible perturbation
.
... ,
if it
satisfies the
following conditions:
(i)
Ej
> n
for all
2, 3,
...
n;
(ii)
i
1
= 2
ti
<
1;
ard
120
r
(iii)
El
^^
= 2
is Cj
,
One
E|
= 1/n
with
for
2,
...
n (and thus
= -{n - l)/n
Another choice
is Ej
= a* for
2,
...
n,
o chosen
as a very small
justification
positive number.
basic arcs.
The
procedure
we gave
Compute-Flows,
1.
If
(i, j)
is
downward
B and
node
Ei,
j,
(i, j)
by
kD(j)
1,
is
Ew-
Since
<
keD(j)
<
2.
If
(i, j)
is
an upward pointing
arc of tree
B and
in arc
D(i)
is
node
El.
i,
(i, j)
by
k
El.-
Since
<
<
rXi)
k CKi)
1,
is
Theorem
U) of
the
minimum
(B, L,
U)
is
(ii)
No upward
the basis
(B, L,
is
at its
at its lower
(iii)
U)
L,
is
feasible if
we
replace b
(iv)
(B,
U)
is
feasible
...
,
if
we
by
b+e,
for
the
perturbation
1/n).
Proof,
(i)
(ii).
(i, j)
is
at its
cannot send any flow to the root, violating the definition of a strongly feasible
the
For
same
=^
reason,
at its
lower bound.
(ii)
(iii).
Suppose
true.
As noted
strictly
earlier,
between
than
its
and
1.
upward pointing
arc
is
integral
and
strictly less
(integral) upp>er
bound, the
Similar reasoning
shows
that after
we have
downward
feeisible.
121
(iii)
=>
(iv).
...
1/n)
is
a feasible
perturbation.
(iv)
=*
(i).
(B, L,
B has a
If
original problem.
we remove
(i.e.,
b +
by
b),
flows on the
the
U|:
downward
upward
Consequently,
(B, L,
x^:
upward pointing arcs decreaise, and > for downward pointing arcs, x^; <
and
U)
is
is
equivalent to
applying the ordinary simplex algorithm to the perturbed problem. This result implies
that both
if
same
As
shows
any
most
nmCU
pivots.
( -
To
problem
1/n).
With
on every arc
is a
1/n units of flow and therefore decreases the objective function value by
units.
1/n
Since
mCU
is is
solution
and zero
bound on
the
minimum
nmCU
iterations.
the
simplex
algorithm
that
maintains a
strongly
basis
runs
in
pseudopolynomial time.
We can
e.
However, there
no need
to actually
Instead,
we
can
equivalent to applying
method
after
we have imposed
the perturbation.
Even though
this
it is
guaranteed to converge.
our discussion of
this
method.
starts
initial basis
basis.
The algorithm
selects the leaving arc in a degenerate pivot carefully so that the next basis is also
122
feasible.
first
Suppose
/) is
at its
w
/).
is
the
common
tree.
Let
(k, /) to
the basis
We define
5jj
After
in the
If
those arcs
(i, j)
W that satisfy
5.
If
unique, then
the
i.e.,
cycle contains
some
upper bounds.
W along
its
introducing an arc into the basis for the network simplex say arc (p, q), encountered in traversing the orientation starting at the apex w.
When
We next
do
so,
show
is
strongly feasible. To
we show
node
in the cycle
root node.
Notice that since the previous basis was strongly feasible, every node could
to the root
node. Let
W^
apex
w and arc
-
(p, q),
when we
Further,
W2 =
for
W|
{(p, q)).
W^ and W2 to
W, no
arc in
be compatable vdth
the orientation of
W. See
W|
is
and
W2
our example.
Since arc
(p, q) is
W2
blocking and
W2
and
via
node w.
If
W2 can send positive flow to the root along the Now consider nodes contained in the segment W^.
We
distinguish
two
cases.
augmented
segment
via
a positive
amount
was a nondegenerate pivot, then the pivot flow along the arcs in Wj; hence, every node in the
to the orientation of
W^
W^
and
in
node w.
was
W^
must be contained
the segment of
feasibility,
between node
and node
/
k,
to
node
w can send a
positive
amount
of
this
degenerate pivot.
Now
must
W^
could send
every node in
W^
be able to send positive flow to the root after the pivot as well.
is
strongly feasible.
We now
study the
change on node potentials during a at its lower bound, cj^j < 0. The leaving
lies in
from node k
the subtree
T2 and
123
the potentials of
all
nodes
in
0.
Consequently,
this
degenerate pivot
sum
of all
assumptions
the
is
integral).
Since the
sum
of all
node
number
So
is finite.
far
we have assumed
lower bound.
If
the entering
arc (k,
/)
is at its
W as opposite to
its
The
leaving arc
starting at
is
the
Icist
along
orientation
node w.
in
In this case,
node
is
pivot
all
nodes
increases the
sum
node
potentials.
Complexity Results
The strongly
some
pivoting in
the arc
(k,
/)
value of
Cj^j
among
all
also yields polynomial lime simplex algorithms for the shortest path
and assignment
problems.
We
have already shown that any version of the network simplex algorithm
that
O(nmCU)
pivots.
0(nmU
defined as
e
mCU. As
...
earlier,
,
we
1/n). Let
of the perturbed
minimum
cost flow
problem
(B, L,
basis
Let
arc.
A >
If
denote the
maximum
any
nonbasic
maximum
Hence,
by
at least
A/n
units.
^k.^k+l^^/n
(513)
We now
total
possible
that
improvement
in the
easy to
show
124
ap>exw
(3,4)
(2,2)
0,5)
(0,5)
Entering arc
Figure
5.2.
capacities
represented as
The entering
arc
(2, 3)
and
(7, 5);
arc
is (7, 5).
This pivot
is
a degenerate pivot.
W2
are as shown.
125
(i,j)e
'
'
(i,j)
A^
ieN
is
node
C:: x;'
(i,j)A^
is
equal
to
the
total
improvement
Further,
the
with
respect
to in
the
the
objective
objective
function
c;; x;;.
''
total
improvement
(i,j)A'^
function
Cj; Xj; is
bounded by
the total
.'"
improvement
(i, j)
'
'
problem:
minimize
subject to
X
{i,j)6
C;; x;;,
(514a)
1]
<
xjj
<
Ujj,
for all
(i, j)
A.
(5.14b)
For a given basis structure (B, L, U), we construct an optimum solution of (5.14) by setting Xj; = u^ for all arcs (i, j) L vdth Cj: < 0, by setting xj: = for all arcs (i, j) e U
with
Cjj
>
0,
and by leaving
most mAU.
We have
thus
shown
that
z^-z^mAU.
Combining
(5.13)
(5.15)
and
(5.15)
we
obtain
nmu
By Lemma
1.1, if
mCU,
0(nmU
log
W)
iterations.
5.3.
We
Theorem
feasible basis
and uses
0(nmU
log
H)
pivots.
126
This result gives polynomial time bounds for the shortest path and assignment
minimum
cost flow
problems with
= n and
respectively.
to
In fact,
it
is
arguments
pivots
show
in
problems
0(n^ log C)
and runs
0(nm
5.7
among
minimum
In this
we
technique.
scaling,
upon
cost
path algorithm
is
that
augmentations
may
amounts
number
augmentations substantially.
We
i.e.,
shall illustrate
minimum
problem with
for each
(i,
j)
e A.
it
This
minimum
cost flow
problem
2.4).
after
has been
e(i)
as defined in
Much
A be
as
we did
maximum
for all
i,
flow problem,
(ii)
we
let
i,
the least
power
of 2 satisfying
Initially,
is
:
e(i)
< 2A
or
e(i)
> -2A
for all
A=
'S
'.
sum
of excesses
{ i
:
(whose magnitude
)
equal to
the
sum
of deficits)
bounded by 2nA.
Let S(A) =
e(i)
^A
and
let
T(A) =
0.
{ j
e(j)
< -A
).
Then
at the
or T(2A) =
In the given
i
A-scaling phase,
we perform
number
node
c S(A) to a
node
j
T(A),
imits of flow.
The
definition of
A by
a factor of at
scaling
least 2.
At
this point,
we
begin a
new
scaling phase.
Hence, within
Odog U)
127
phase,
A <
1.
By
all
imbalances are
now
has found an
optimum
flow.
The driving
is
we
will
prove
later) that
invariant property
we
can always
node
in T(A).
algorithm RHS-SCALING;
begin
X
:= 0, e := b,
>
let
,^ 2f log
S(A):={i N:e(i)^A);
T(A)
:=
{ i
e(i)
< -A
);
while S(A) *
and T(A) * e do
begin
select a
node / e
T(A);
all
other nodes
to the
reduced costs
let
P denote
node k
to
node
/;
update n:=n-d;
augment A
update
end;
X,
S(A)
and
T(A);
A := A/2;
end;
end;
able to send
units of flow
node / e
128
Lemma
5.2.
The residual
A
Proof.
initial
We
The
Each
A because they
or
are either
or .
units
Let
S(n,
problem on a network
Theorem 5.4. The RHS-scaling algorithm correctly computes a minimum cost flow and performs 0(n log U) augmentations and consequently solves the minimum cost flow problem in 0(n log U
Sin, m,
O)
time.
Proof.
is
minimum
We show
performs
l+Flog
at
Ul
would imply
At the
We
when
phase,
S(2A) =
I
0.
A
n.
when
T(2A) =
S(A)
<
Observe
that
A<
at a
e(i)
e S(A).
Each augmentation
starts at a
node
it
in S(A),
I
ends
I
node with a
and
carries
units of flow;
at
therefore,
decreases
S(A)
most n augmentations.
minimum
cost flow
subtlety, because
fails to
Lemma
5.2
this situation.
be true
are
or
Uj;.
As we noted
problem
is
previously, one
cajjacitated
minimum
cost flow
to
We
transformed network.
n+m
phase performs
at
most
n+m
augmentations.
The
shortest path
problem on the
transformed problem can be solved (using some clever techniques) in S(n, m, C) time.
minimum
in
cost flow
problem
in
0(m
log
U S(n,
m,
O)
time.
recently developed
minimum
cost flow
0(m
lof^
129
(m + n
This method
is
polynomial-time
minimum
5.8.
We now
maximum
miiumum
This algorithm can be viewed as a generalization of the preflow-push algorithm for the
flow problem.
flow x
is
said to
>
if
x together with
some node
C5.7
C5.8.
(Primal feasibility) x
(e -EHial feasibility)
is Cj;
feasible.
-e for
each arc
(i, j)
in the residual
network G(x).
We
The
Cjj
<
Cj;
<
for
and reduce to C5.5 and C5.6 when e is 0. an arc (i, j) at its lower bound and e S
is
>
for
an arc
(i, j)
at its
conditions.
The
follovsdng facts are useful for analysing the cost scaling algorithm.
is
Lemma
5.3.
e -optimal for
ekC. Any
E<l/n
is
an optimum flow.
Proof.
Clearly,
any
node
^ C.
Now
/n.
The e-dual
i^
C;:
C;;^-n>-l. Since
(i, j)
X W' ^ 6
^\\
0.
Theorem
5.1
the flow
is
optimum.
The
and
Initially e
= C, and
finally e
procedure that transforms an e-optimal flow into an e/2-optimal flow. After l+Tlog nCl
130
< 1/n and the algorithm terminates with an optimum flow. More
formally,
we
algorithm
COST SCALING;
and
e := C;
begin
j:
:=
let
while e S
/n do
begin
IMPROVE- APPROXIMATION-I(,
E:=/2;
end;
X
is
x,
re);
minimum
end;
i
an
does so by
is
(i) first
pseudoflow
(a
pseudoflow x
(ii)
called e -optimal
We
if
call a
node
c^;
with
0.
e(i)
>
and
call
an arc
(i, j)
in the residual
network admissible
-e/2 <
<
The
basic
shall
operations are selecting active nodes and pushing flows on admissible arcs.
see later that pushing flows on admissible arcs preserves the e/2-dual
conditions.
We
feasibility
procedure PUSH/RELABEL(i);
begin
if
(i, j)
then
push 6
else
Jt(i)
:=
min
e(i), rj:
node
>
0);
to
node
:=
7c(i)
+ e/2 + min
c^:
(i, j)
e A(i) and
r^j
end;
Recall that
r^:
(i, j)
in G(x).
As
if
in
our earlier
r^;,
maximum
it
flow problem,
5 =
then
we
refer to the
is
nonsaturating.
We
relabel operation.
The purpose of
create
new
admissible arcs.
Moreover,
we
131
used
in the
i,
we
i.
which
is
node
The current
arc
is
Improve-Approximation procedure
essential operations.
x,
Jt);
>
then
Cjj
Xj; :=
else
<
then
Xj: := uj;
an active node
i;
PUSH/RELABEL(i);
end;
end;
The correctness
Lemma 5.4. The Improve-Approximation procedure always maintains e /2-optimality of the pseudoflow, and at termination yields an e /2-optimal flow.
Proof. This proof
is
similar to that of
Lemma
4.1.
is
a 0-optiCTiaI
that the
pseudoflow).
We
(j,
show
(i, j)
might
add
its
reversal
i)
to the residual
Cj;
<
admissibility),
Cjj
>
satisfied for
(i, j)
any value of
>
0.
The
when
Cj;
By our
and
fjj
we
Jt(i)
by e/2 + min
rj:
Cj:
(i, j)
e A(i)
>
0) units, the
with
>
still
satisfies
Cj;
^ -e/2.
In
addition, increasing
residual network.
cj^ t -e/2
in the
throughout and,
at termination, yields
an e/2-optimal flow.
132
We
will
We
a
show
to those
maximum
Lemma 5.5. No node potential increases more than 3n times during an execution of the ImproveApproximation procedure.
Proof. Let X be the current /2-optimal pseudoflow and
x'
at the
n'
x'
repectively.
It is
of the flow decomposition properties discussed in Section 2.1, that for every
node v with
node
is
and
(ii)
its
reversal
is
vj
w
...
with the
-
P = vq
- v-j -
...
reversal
to arcs
P = vp
vj.j
V|
is
path
in G(x').
on the path P
in G(x),
we
obtain
^-/(e/2). Alternatively,
(i,j)eP
7i(v)
<
Jt(w)
+ /(e/2) +
Cjj.
(5.16)
apeP^J
Applying the
in G(x'),
we obtain
(5.17)
7l'(w)
<
7t*(v)
I
(j,i)
_C;;
7t'(v)
/ -
C;;.
P^'
(i,j)eP'J
Combii\ing
(5.16)
and
+
(5.17) gives
Jt(v)
<
n'(v)
(7c(w)
n'(w))
(3/2)/.
(5.18)
Now we use
n,
(i)
k(w) =
for push/relabel),
units.
<
is
and
(iii)
Ji(v)
by
at least
e/2
The len\ma
now
immediate.
133
Lemma
Proof.
that
5.6.
This proof
is
similar to that of
Lemma
amounts
to
showing
i
and
also saturates
0(nm)
total saturating
pushes.
To bound
number
of nonsaturating pushes,
result.
We
The
define the admissible network as the network consisting solely of admissible arcs.
following result
is
Lemma
Proof.
5.7.
We
to the
number
of
pushes and
The
result
is
the pseudoflow
We
always
(j, i)
push flow on an
arc
with
Cjj
Cj:
<
0;
hence,
if
its
reversal
to
>
0.
new
node
relabel operation at
may
create
new
but
(k,
it
also deletes
cj^j
all
admissible arcs
i),
7t(i)
by
at least
Lemma
5.8.
Proof (Sketch).
number
of nodes
from node
in the
to
g^i)-
active
showing
most n units
at least
unit.
most 3n2
relabel operations
and
5.6,
bound
O(nTn) on
the
number
of nonsaturating pnjshes.
As
in the
maximum
is
Approximation procedure
time.
The
134
The
between the
Solving
maximum
flow
and
the
minimum
is
cost
flow
problems.
an
Improve-Approximation problem
bottleneck operation
is
the
number
of nonsaturating pushes.
We
called the
wave algorithm.
is
the
same
as the
it
nodes
As
is
well
known, nodes
i
of an acyclic
(i, j)
in the
network,
<
j.
It is
0(m)
time.
Observe
pushes do not
change the admissible network since they do not create new admissible
operations, however,
arcs.
The
relabel
may
create
new
may
affect the
and
if
the
node
then
it
When examined
relabel operation
method again
if
However,
active
their
operations,
we immediately
obtain a
bound
number
of
node
examinations.
entails at
Consequently, the wave algorithm performs O(n^) nor\saturating pushes per Improve-
Approximation.
We now describe a
relabel operation.
An
topological ordering
is
algorithm.
Suppose
that while
examining node
i,
i,
Note
that
node
no incoming admissible
i
arc at
node
Lemma
5.7).
We
then
move node
from
its
present position in
135
first
position.
is
a
(i)
new
node
(i, j),
node
precedes node
in the order;
and
(iii)
is still
valid.
list)
set of
it
doubly linked
in this order.
Whenever
node
i,
the algorithm
this
moves
at
and again
examines nodes in
order starting
node
We
Theorem minimum
result.
5.6.
0(n^
nC)
time.
5.9.
and
cost
we
= 0^^
double scabng algorithm on the N2, A), with Nj and N2 as the sets of
capacitated
minimum
cost flow
problem can
be solved by
first
problem
(as
described in Section 2.4) and then applying the double scaling algorithm.
The double
scaling algorithm
is
it
the
same
Approximation procedure.
natural alternative
would be
an
a path in
is
admissible.
would
0(nm)
at least
this
Lemma
to
0(nm)
arc saturations.
Thus,
We
number
of
augmentations
to
0(n log U)
that
136
number
algorithm
algorithm.
The advantage
problem
shortest path
in the
RHS-scaling algorithm,
in
is
that the
identifies
an augmenting path
fact,
augmentations. In
maximum
also
The double
scaling
procedure
begin
IMPROVE- APPROXIMATION-n(e, x,
and compute node imbalances;
+ E for
,
n);
set X :=
7t(j)
:=
7t(j)
all
N2;
A:=2riogUl;
while the network contains an active node do
begin
S(A) :=
(
Nj
u N2
e(i)
^A
};
while S(A) ^
do
node k
in S(A)
and delete
it
from
S(A);
/
<
0;
augment A
end;
units of flow
on P and update
x;
A := A/2;
end; end;
We
shall describe a
method
to
first
commenting
e
on the correctness of
this procedure.
c^;
^
j
-e for all
(i, j)
at the
for each
e N2/
we
obtain an e/2-
pseudoflow.
5.4, this
Lemma
pseudoflow. Thus,
we
137
property that
all
The algorithm
maintain a
partial
identifies
We
admissible path
P using
a predecessor index,
i.e., if
(u, v)
P then
prediy)
steps,
u.
At any point
is
in the algorithm,
leist
we perform one
of P, say
of the following
two
whichever
has a
applicable, at the
node
node
i,
terminating
when
the last
node
deficit.
advanced).
e(j)
If
(i, j),
then add
(i, j)
to P.
If
<
0,
then stop.
the residual network does not contain an admissible arc
{
rctreat(i).
n(i) to
7t(i)
If
(i, j),
then ujxiate
then delete
+ e/2 + min
Cj;
(i, j)
A(i)
and
r^:
>
0).
If
P has
at least
one
arc,
(pred(i),
i)
from
P.
The
creating
oO node
for the
purpose of
i)
new
admissible arcs emanating from this node; in the process, the arc (pred(i),
becomes inadmissible.
Hence,
we
The proof
of
Lemma
5.4
implies that increasing the node potential maintaii^s e/2-optimality of the pseudoflow.
We
l+flog
e(i)
next
consider
the
complexity
of
this
implementation
of
the
Improve-Approximation procedure.
Ul RHS-scaling
for each
A<
< 2A
node
e S(A).
node k
in S(A) to a
node
with
e(/)
<
0.
A and
node
/,
if
there
is
any,
at
than A.
new
scaling phase.
We next
coimt the number of advance steps. Each advance step adds an arc to the
and
later
acyclic (by
Lemma
5.7), after
first
an admissible path
138
and
vsdll
perform an augmentation.
first typ>e
at
at
Lemma
is
5.5,
0{t\^) times.
The
total
number
The amount
is
0(
i=l
lA(i)ln)
0(nm)
since
i,
examine
result.
A(i)
We
Theorem 5.7. The double scaling 0((nm + rr log U) log nC) time.
To
solve
the capacitated
minimum
cost flow
problem ,we
first
transform
it
into
an uncapacitated transportation problem and then apply the double scaling algorithm.
We
leave
it
show
that
how
to use the
minimum
cost flow
problem
of the
0(nm
log
algorithm.
algorithm
using
structures
is
currently
the
fastest
minimum
Sensitivity Analysis
The purpose
solution of a
is
to
minimum
(supply/demand
practitioners
any
arc).
Traditionally, researchers
and
have conducted
There
simplex algorithms.
is,
approach.
The
by determining changes
by changes
in the data.
The
in
is
do not
simplex based approach does not give information about the changes in the solution as
the data changes; instead,
it
tells
basts tree.
139
We
This approach
we have
just
mentioned.
For simplicity,
we
limit
our
discussion to a unit change of only a particular type. In a sense, however, this discussion
is
quite general:
it
is
possible to reduce
to a
sequence of the
simple changes
cost flow
we
cor^sider.
We
show
minimum
flow
problem
maximum
problems.
Let X* denote an
optimum
solution of a
Cj;
minimum
Cj;
Let n* be
7C*(i)
7t*(j)
Further,
let
d(k,
Since for
P from node k
to
node
/ ,
Z
(i,j)6P
to
^ij
X
(i,j)
Cjj
- K(k)
jt(l),
d(k,
equals the
P
cjj
node k
node
plus
7t*(k)
nonnegative. Hence,
we
/)
nodes k and
Supply/Demand
Sensitivity Analysis
We
becomes
problem
of
first
Suppose
that the
supply/demand
b(/)
of another
node
from Section
b(i)
1.1
minimum
cost flow
dictates that
ie
X N
0;
hence,
we must change
two nodes by equal magnitudes, and must increase one value and decrease the
is
other).
Then x*
pseudoflow
moreover,
of flow from
this
node k
to
node
into
pseudoflow
/ )
Tliis
units.
Lemma
optimum
for the
We
next consider a change in an arc capacity. Suppose that the capacity of an arc
(p, q) increases
by one unit
The flow x*
is
140
addition,
if
Cpq S
0,
it
hence,
it
is
an
optimum flow
Cpg <
capacity.
We
at
satisfy this
by one
unit,
node q and
a deficit of
one unit
node
p.
We
flow by augmenting one unit of flow from node q to node p along the shortest path in the residual network which changes the objective function value by an amount Cpg +
d(q, p).
This flow
is
sensitivity analysis.
When
strictly less
is
than
its
for the
is at its
we
by an amount -Cpn +
d(p, q).
to
due
to unit
We can, however,
/)
obtain useful upper bounds on these changes by solving only two shortest path
problems.
fact that
d(k,
/)
S d{k,
1)
+ d(l,
nodes k and
1
Consequently,
we need
all
to
and from
other nodes to
node
to
all
d(k,
/)
Recent empirical studies have suggested that these upper bounds are very
5%
of each other.
we
we assume
an arc
Cpq =
<
c^
<
0.
Similarly,
if
Cpq >
0,
c_
^
if
In
both the
Ctises,
we
Cpg =
before the
violates the
Cpq =
>
141
condition C5.2.
To
we must
flow on arc
(p, q) to zero, or
that the
becomes
zero.
We
first try to
from node p
to
node q without
violating
(i)
We
at
do so by solving
is set
maximum
flow problem
to zero, thus
creating an excess of
X Pi
sink.
at
node p and
a deficit of x
node
Pi
(iii)
q;
(ii)
define
of
send a
maximum
x__
We
C5.4.
permit the
maximum
would generate
and
Let
v"
If
and
x"
denote the
v =
then
denotes a
minimum
Pi
modified problem. In
cut
On the (X, N- X)
other hand,
if
v < x
then the
maximum
s-t
N - X, and
It is
at the arc's
capacitated.
We
node
that
potential of every
this
node
in
N-X
by one unit.
eeisy to verify
by case
aruilysis
change
in
Consequently,
we
v"
can set
In
x^
minimum
is
cost flow.
x_,
units
more than
5.11
Assignment Problem
is
minimum
is
Section
(
I
1.1
this
problem
,
network flow problem. As already indicated in defined by a set N|, say of f)rsoris, a set N2, say of objects
cost
Nj
N2 = n)
1
a collection of
node
pairs
A C Nj
N2
to-object assignments,
in A.
and a
cost
Cj;
The
objective
is
one object
142
minimum
program:
possible cost.
linear
Minimize
2(i, j)
Cj;X::
(5.18a)
'
subject to
{j
(i. j)
X e A) =l,foraUi
X::
Xji
N-i,
(5.18b)
(i
(i, j)
X X) =l,foraUje N2,
xjj
(5.18c)
0,
for all
(i, j)
A.
(5.1 8d)
minimum
cost flow
Cj;,
set
A, arc costs
b(i)
N| and
b(i)
=
is
-1 if
m= A
|
arcs.
also
known
We
Xjj
^
an assignment.
"ii {j:(i,j)eA)
If
1,
then
is
assigned to
and
is
assigned to
i.
A 0-1
solution x satisfying
for all
Ni and
''ii
1 fo'" 3^'
No
X
.
is
Associated
{i:(i,j)e
A)
is
an index
set
defined as
X=
{(i, j)
x^;
1}.
node
is
unassigned.
problem. Several of these algorithms apply, either explicitly or implicitly, the successive
shortest path algorithm for the
typically select the initial
These algorithms node potentials with the following values: nii) = for all i e N|
cost flow problem.
all
j
minimum
e N2-
and
7t(j)
= min
{cj;
(i, j)
e A) for
node
is
the time
143
The
relaxation approach
is
is
The
person.
is
easy to solve:
to
with the
value.
As
a result,
some
objects
objects
may be
overassigned.
shortest paths
assignment by identifying
these paths.
at
Because
this
One
method,
is
well knovkn solution procedure for the assignment problem, the Hungarian
efficient in practice;
it
bounds.
For problems that satisfy the similarity assumption, however, a cost scaling
algorithm provides the best-knowT> time bound fo- the tissignment problem. Since these
we have
described earlier,
we
will not
we
we show
another intimate
connection between the assignment problem and the shortest path problem.
We
we
can solve
we
we
The
first
application determines
if
the
network contains
shortest path.
Section
2.4.
The node
replaces each arc
each node
(artificial)
by two nodes
(i, i').
and
i',
by an arc
(i,
j),
and adds an
We
first
:
note that the transformed network always has a feasible solution with cost zero
144
all
artificial arcs
is
(i,
i').
We
if
next
show
that the
negative
if
and only
First,
...
Jl^-jj.
negative cost.
^^^ 2 Ok+1 Jk+1^' '^h\' jp^) Therefore, the cost of the optimal assignment must be negative.
(j^,
j
2), (J2
J3)/
(Jk' J])
is
i
negative.
This solution
must contain
at least
(i,
j')
with
*
{
Consequently, the
PA
(j|
(J2
jo
) /
^'-
^^^
nonpositive, because
j,'), (J2
/
it
can be no
^
^
(jj
jA
(Jk- Iv
Since
negative,
some
partial
assignment
PA
j|
must be
J2
Jk
)l
145
(a)
(b)
Figure
5.3.
(a)
The
146
If
we
n,
can obtain
shortest path
between
from node
to
node
as follows.
1'
We
consider the transformed network as described earlier and delete the nodes
the arcs incident to these nodes.
and n and
Now
to
node n
in the original
network has
corresponding assignment of the same cost in the transformed network, and the
converse
is
also true.
For example, the path 1-2-5 in Figure 5.3(a) has the corresponding
in Figure 5.3(b),
assignment
(4,
5'),
and an assignment
(3,
3'))
We now describe an algorithm for the assignment problem known as the auction algorithm. We first describe a pseudopolynomial time version of the algorithm and then
incorporate scaling to
make
is
an
To describe
the auction
this
algorithm,
we
cor\sider the
version appears
more
to
by
auction.
j
Each person
(i, j)
for each
set
A(i).
The
objective
this
is
to find
We can
Cj;
-uj;
to
reduce
problem
is
to (5.18).
max
j,
{lu^jl
(i, j)
e A).
algorithm, there
represented by
i
price(j).
j
for
buying car
is U|j
price(j).
At each
We
assume
and
prices are
measured
a
We
call a
number
-
valued),
which
is
an upper bound on
:
that person's
highest marginal
^ max
{u^: - price(j)
(i, j)
A(i)}.
We
The
bid
(i, j)
admissible
if
valued) = uj:
price(j)
person
is
next in turn to
we
max
(u^j - price(j)
(i, j)
e A(i)).
147
cars.
If
a jjerson
makes
a bid
on
is
assigned to car
for car
j,
if
was
car.
As
to the
persons decrease.
assigned a car.
starts
We now
j
The procedure
can
i.
For example,
we
set price(j)
and
max
{u^
(i, j)
A(i)} for
each person
Although
this initialization is
more
clever initialization.
x.
optimum
tissignment
procedure BIDDING(u,
begin
let
the
initial
unassigned do
an unassigned person
bid
(i, j)
i;
some
is
admissible then
begin
assign person
price(j)
if
:
to car
j;
price(j)
1;
j,
then
end
else update vzJue(i)
:
= max
{uj: - price(j)
(i, j)
A(i)};
end;
let
end;
We now show
of the
that this
utility is
vdthin $n
optimum
utility.
some
optimum assignment.
utility of
person
i.e.,
valued) ^
Uj:
(i, j)
e A(i).
Consequently,
148
X
The
partial
Uji
<
(x,i)eX''
iNi
valued) +
JN2
satisfies the condition
price(j)
(5.19)
assignment \ also
-
value(i) = Ujj
price(j)
1,
for all
(i, j)
e X,
(5.20)
because
priceCj)
at the
Uj:
price(j)
and immediately
goesupby
UB(x)=
(i, j)
Z X e
"ii
^
+
i
I value(i), N
in
(5.21)
with N
N^. Using
obtain
n.
(5.20) in (5.21)
and observing
N2
have zero
prices,
we
UB(x^) ^
value(i) +
J
I
e
price(j)
(5.22)
N2
(5.23)
As we show
in
number
of times. x
is
modify
price
whenever
number
of steps the
Then
utility
assignment (since Nj
less
empty)
Hence, the
most $n
than the
maximum
utility.
It
is
optimum assignment.
Suppose we multiply
Since
all utilities
are
now
will differ
by
at least
(n+1) units.
The procedure
is
within n
units of the
We
assignment problem
largest utility is
multiplied by (n+1 ).
C = (n+l)C. We
show
149
times.
Since
all utilities
Substituting this
valued)
^ -n(C' +
1).
ie
No
1
leist
it
of a person
persor\s
is
needed
O
ie
Ad)
= O(nmC').
N^
We
number
of iterations performed
i
by the procedure.
Each
j.
some
car
By
Further, since
the price of car
j
person
i
hais
and
I
a person
in valued).
A(i)
consecutive decreases
total
This
observation gives us a
bound
O(nmC') on the
the "current
number
of times
all
C = nC,
K
we have
Theorem
5.8.
in
O(n^mC)
it
time.
is
can
increase prices
(and thus decreases values) in small increments of $1 and the final prices can be as
large as
n^C
in the auction
algorithm ensures that the prices and values do not change too many times. As in the
bit -scaling
1.6,
we decompose
sequence of
algorithm.
We
per
sctiling phaise.
Thus,
we
the original
problem
in
0(nm
The scaling version of the auction algorithm first multiplies all utilities by (n+1) and then solves a sequence of K = Flog (n+l)Cl assignment problems Pj, ?,
...
150
Pj^
The problem
Pj^ is
an assignment problem
ujj,
in
which the
utility of arc
(i,j) is
the k
if
Uj; is
bits long.
problem
Pj^
Note
that in the
problem
Pp
all utilities
are
and
subsequently
k+1
u^-
k = 2u- +
{0 or 1),
bit is
or
1.
The
scaling algorithm
works as
algorithm
ASSIGNMENT;
all Uj;
begin
multiply
by
(n+1);
K: = riog(n+l)Cl
price(j)
:
=
:
j;
value(i)
=
to
i;
for k
K do
=
:
begin
let ujj
:
L Ujj /
(i, j)
A;
price(j)
= 2
:
price(j) for
(i)
each car
1
j;
value(i)
= 2 value
i;
BIDDING(uK
end;
end;
x, value, price);
number
k
u--.
It is
easy to verify that before the algorithm invokes the Bidding procedure, prices
satisfy value(i)
and values
max
{uj; - price(j)
(i, j) e.
A(i)), for
its
each person
i.
The
execution.
In the last
Observe
phase, the algorithm starts with a null assignment; the purpose of each scaling phase
to obtain
good
prices
and values
We
is
The
crucial result
and values change only 0(n) times during each execution of the
151
Bidding procedure.
as
We
(i, j)
phase
_
Ujj
= Ujj
ic
price(j)
value(i).
and
value(i)
just before
we have
value(i).
y
(i,
_
u;;
=
(i,
ic
U:: j
)U X
X
e
price(j)
jfe X'^
N2
X
e
Nj
utility of
an
utility of that
maximizes the
utility.
Since
t u-
price(j) for
each
(i, j)
e A,
we have
(5.24)
Uij
<
0, for
aU
(i, j)
e A.
Now
assignment
k-1
assignment
(5.20)
x*^"*
(the final
at tie
end of the
The equality
V
1
implies that
u.
price'(j)
value'(i)
-1,
for all
(i, j)
x*^"',
(5.25)
where
price'(j)
and
corresponding values
at the
end of the
(k-l)-st
scaling phase.
Before calling the Bidding procedure, we set price(j) = 2 price'(j), value(i) k k-1 = 2 value'(i) + 1, and Uj; = 2 u- + (0 or 1). Substituting these relationships in (5.25), we
find that the reduced utilities
Uj; of arcs in x*'"
If
*
are either -2 or
-3.
reduced
is
some
partial
then (5.23) implies that UBCx") t -4n. Using this result and (5.24) in (5.21) yields
I
icNj
valued) ^-4n.
(5.26)
i,
Using
proof of
Theorem
5.7,
we
The assignment algorithm applies the Bidding procedure Odog nC) times and,
consequently, runs in
0(nm
We
152
The
in
improved
to
run
0(Vn
m log nC)
If
This improvement
i
is
(5.26).
we
prohibit person
from bidding
value(i) S
4Vn
then by (5.26)
the
number
of unassigned persons is at
to assign n1
most Vn.
and 0((n
if
-
CXVn m) time
FVn
1 f>ersons
fVn
remaining FVii
persons.
first
For example,
n =
the
99%
of the persons in
1%
and
the remaining
1%
99%
it
of the time.
We
all
when
has assigned
but
rVn
It
persons
to assign these
persons.
so happens that the shortest paths have length 0(n) and thus Oial's
3.2, will find
0(m)
time.
0(Vn m) time
and
its
is
0{-\fn
m log nC).
.
If
we
assumption, then
heis
known
time
bound
problem
153
6.
Reference Notes
In this section,
we
text.
This
to
each topic,
(ii)
among
different algorithms,
and
(iii)
to
comment on
6.1
Introduction
The study
cf
linear
minimum
These
some
algorithms.
Interest in
He
the
optimum
solution.
Orden
work by
specializing the
minimum
The network
minimum
cost flow
bounded
variable simplex
method
programming
by Dantzig
(1955].
(1962] contains a
During the
1950's,
minimum
the
cost flow
problem as well as
maximum
mainly
because of their
to
important applications.
solve these problems.
Whereas Dantzig focused on the primal simplex based algorithms. Ford and
Fulkerson developed primal-dual type combinatorial algorithms to solve these
problems. Their book. Ford and Fulkerson (1962], presents a thorough discussion of
the early research conducted by
of flow decomp)osition theory,
them and by
is
others.
It
development
which
credited to Ford
and Fulkerson.
flow
Since
these
pioneering
works,
network
problems and
their
generalizations
154
is
documented
in
text
We
shall
be surveying
many important
Several
and serve as
a guide to the
11962]
(Programming
Games and
Transportation Networks),
Iri
(1969]
(Network
Hu
[1969] (Integer
Programming and
Frisch [1971]
An
(Combinatorial (Linear
Optimization:
Jarvis [1978]
[1978] (Optimization
Algorithms for
Network
and
Phillips
Swamy and
Thulsiraman
Steiglitz [1982]
Kowalik
[1983] (Discrete
Optimization
Algorithms), Tarjan [1983] (Data Structures and Network Algorithms), Gondran and
Minoux
[1984]
(Programming
in Netorks
and
As an additional source
at
Bonn (Kastning
[1976],
Hausman
[1978],
[1982,
source provides a comprehensive account of network flow models and their impact
on
practice.
application areas.
Notable
among
these
is
and Klingman
[1976]
cost flow
cost flow
A number
also contain
modek. Examples
155
on
facility location
Golden
in Section
lists,
doubly
is
linked
[1983]
another useful source of references for these topics as well as for more complex data
structures such as
dynamic
trees.
We
Gabow
optimization problems.
6^
The
and
its
research literature.
As
we
Ruggen and
Starchi [1982]
Pang
[1984].
especially
The
first
was suggested by
Dijkstra [1959],
and
The
is
original
the optimal
m = fiCn^
)),
since
any algorithm
for sparse
arc.
The following
algorithm that have been designed to improve the running time in the worst case or
in practice.
In the table,
2.
d =
[2
in the
network plus
156
157
whose
analysis
largest
key
D stored
this
in a heap.
The
D).
When
Dijkstra's
algorithm
time.
implemented using
data structure,
it
runs in
0(nC +
m log
log nC)
it
Johiison [1982]
suggested an improvement of
this
to
implement
Dijkstra's algorithm in
0(m
log log
C) time.
The
is
due
to
Fredman and
is
Tarjan [1984]
ingenious, but
who
an
n)
Odog
time for each node selection (and the subsequent deletion) step and an average of
0(1) time for each distance update.
Dijkstra's algorithm in
0(m
+ n log n) time.
its
by Wagner[1976].
Dial, Glover,
have proposed an
improved version of
algorithm
is
Dial's algorithm,
in practice.
Though
Dial's
only pseudopolynomial-time,
case behavior.
that
if
improvements. Observe
max
minlcj,:
(i,j)
A}], then
we
in Dial's
number
of buckets
from 1+
C
if
to
l+(C/w).
is
The
d*
the current
minimum temporary
temporary distance
1]
1.
Then, using a multiple level bucket scheme, Denardo and Fox implemented
0(max{k C^^K
time for
log C).
any choice of
k.
Choosing k = log
yields a time
of
0(m
log log
C+n
0((m+n
log
Olog
log C).
This data
is
the
same
Odog C)
158
factor of
Odog
log C).
in section 3.3
of Dijkstra's algorithm.
structure consists of
(big) buckets,
each
bucket being further subdivided into L (small) subbuckets. Ouring redistribution, the
two-level bucket system redistributes the range of a subbucket over
buckets.
all
of
its
previous
much
0(m+n
log
C/log log C)
time.
to
0(m
+ nVlog
).
If
we
of
graphs except very sparse ones, for which the algorithm of Johnson [1982] appears
more
attractive.
of two-level R-heap
is
very complex,
in practice.
however, and so
is
in skeleton
form, the
first label
Ford and
Though
specific
0(nm)
[1970].
time, the
nonpolynomial-time, as
shown by Edmonds
The
modification that adds a node to the LIST (see the description of the Modified Label
3.4.) at
the front
if
popular.
159
Though
this
runs in
shown by Kershenbaum
[1981].
For
in
0(nm)
computational
attributes can be
[1985].
Dial, Glover,
Karney and
pivoting in
Klingman
[1979]
and Zadeh
[1979]
showed
(i.e.,
the arc with largest violation of optimality condition) for the shortest path problem
starting
from an
0(n)
artificial basis
pivots
is
if all
shortest path
Akgul
[1985a]
developed a simplex algorithm for the shortest path problem that performs O(n^)
pivots.
in
can be reduced
0(nm
Hao and
Kai [1986] described another simplex algorithm for the shortest path
this
Akgul's algorithm.
Orlin [1985]
showed
Dantzig's pivot rule solves the shortest path problem in 0{rr log nC) pivots.
Ahuja
and Orlin
0(n^
log C) pivots
and runs
in
0(nm
log C) time.
structures, uses very T\atural pricing strategies, aiul also permits partial pricing
Most algorithms
manipulation.
The
first
The complexity of
this algorithm is
0(n3
more
sophisticated matrix
multiplication procedures.
is
due
in
to Floyd [1962]
and
is
nms
160
is
another procedure requiring exactly the same order of calculations. The bibliography
From
solve the
all
a worst -case
it
might be desirable
to
problems.
As pointed out
to construct
an
equivalent problem with nonnegative arc lengths and takes 0(n S(n,m,C)) time to
solve the n shortest path problems (recall that S(n,m,C)
shortest path
is
lengths).
approach
worst<ase complexity.
Computational Results
Researchers have extensively tested shortest path algorithms on a variety of
network
classes.
to Gilsinn
and Witzgall
[1973],
Pape
[1974], Kelton
and Law
[1978], [1979],
Van
[1979],
Denardo
,
and Fox
Imai and
[1984], Glover,
results, the
factors:
for
of
tested.
The
depend
greatly
of the network.
Dial's algorithm is the best label setting algorithm for the shortest
network
classes tested
is fcister
by these
researchers.
would be
faster for
implementation and so
available.
at this
161
Among
by Glover
algorithm.
and Schneider
two
fastest.
The study
finds that
their algorithm is
with label
correcting algorithms.
perform
better.
Kelton and
Law
[1978]
aill
faster
in Section 3.5.
6.3
upon
all,
of these
in practice.
Several researchers
and
Elias, Feinstein
and Shannon
min-cut theorem.
maximum
flow problem
[1956]
and
[1956] solved
it
by augmenting
p>ath algorithms.
In the figure,
is
the
number
of nodes,
m is the number of arcs, and U is an upper bound on the integral arc capacities.
algorithms whose time bounds involve
The
assume
bounds
specified for the other algorithms apply to problems with arbitrary rational or real
capacities.
162
#
1
Discoverers
Running Time
[1972]
0(nm2) CKn2m)
0(n3)
2 3 4
5 6
Karzanov
Cherkasky
Malhotra,
[1974]
[1977]
0(n2 VIS")
[1978]
0(n3)
Galil [1980]
0(n5/3m2/3)
[1980]; Shiloach [1978]
7
8
0(nm
CXn3)
log2 n)
9
10
11
and Tarjan
[1983]
0(nm
0(n3)
log n)
Tarjan [1984]
Gabow[1985]
Goldberg [1985]
0(nm
0(n3)
log U)
12 13
14
CXnm
0(n3)
log (n^/m))
15 16
0(n2
Vm
+
,
[1987]
0(nm + n^
,.
Ca)
log
.
J O nm
1^
U)
r?-
log log
log
"
U
.,
17
(b)
uvnm ol
+ n ^VlogU)
(c)
O nm
V
Table
6.2.
Running times
of
Ford and Fulkerson [1956] observed that the labeling algorithm can perform as
many
an
the
They
also
showed
that for arbitrary irrational arc capacities, the labeling algorithm can
perform
infinite
maximum
that
[1972] suggested
two
specializations of
They
one
showed
if
the algorithm
(i.e.,
number
breadth
first
163
Edmonds and
was
to
augment
maximum
residual capacity.
They proved
performs
path
augmentations.
shown how
determine a
this version
residual capacity in
0(m2
log
maximum
flow problem.
layered network
lie
on
to the sink.
.
The nodes
.,
in a layered
(i, j)
in adjacent layers
(i.e.,
Nk and
Nk+1
for
some
k).
network G'
(N', A') is a
flow augmentations
residual capacity
Dinic showed
how
to
construct, in a total of
0(nm)
network by
performing
at
most
augmentations.
and
blocking flow iteration, the length of the layered network increases and a^er at most
The
shortest
maintains distance
in the context
They are
simpler to understand than layered networks, are easier to manipulate, and have led
to
more
efficient algorithms.
[1987]
label
4.3.
They
also
showed
that this
equivalent both to
all
Edmonds and
same augmenting
paths in the same sequence. The algorithms differ only in the manner in which they
obtain these augmenting paths.
complexity of
maximum
164
Even
(1976] for a
comprehensive description of
this
showed
preflows and pushes flows from nodes with excesses, constructs a blocking flow in
0(n2) time. Malhotra, Kumar and Maheshwari [1978] present a conceptually simple
maximum
flow algorithm that runs in OCn^) time. Cherkasky [1977] and Galil [1980]
presented further
The search
more
efficient
maximum
researchers to develop
first
new
The
such data structures were suggested independently by Shiloach [1978] and Galil
[1980].
and Naamad
described in Section 4.3) takes 0(n) time on average to identify an augmenting path
it
saturates
some
If
we
delete the
is
we
The
basic idea
to
some data
Aho,
Hopcroft and Ullman [1974] for a discussion of 2-3 trees) and use them
identify
later to
Naamad
[1980]
showed how
way
0(nm
Sleator
and Tarjan
[1983]
improved
this
approach by using a
trees to store
0(m
log n) time
an
0(nm
log n) time
bound
Gabow
to the
bound by applying
a bit scaling
approach
maximum flow problem. As outlined in Section 1.7, this approach solves a maximum flow problem at each scaling phase with one more bit of every arc's
capacity.
initial
flow value by
most
units
most
augmentations.
Consequently, each
log C) time.
If
0(nm
we
this
time bound
is is
much
simpler to implement.
algorithm achieving
Ga bow's
165
Goldberg and Tarjan [1986] developed the generic preflow push algorithm and
the highest-label preflow
that the
push algorithm.
shoum
in the
FIFO version
first-in-first-out
of
selects a
at this
node, and adds the newly active nodes to the rear of the
queue.) Using a dynamic tree data structure, Goldberg and Tarjan [1986]
the running time of the
improved
This
to
0(nm log
(n^/m).
algorithm currently gives the best strongly polynomial-time bound for solving the
maximum
flow problem.
maximum
minimum
Recently, Cheriyan
and Maheshwari
[1987]
showed
OCn^Vin
OiriNm
time.
Ahuja and Orlin [1987] improved the Goldberg and Tarjan's algorithm using
the excess-scaling technique to obtain an
0(nm
If
we
invoke
the similarity
assumption,
this
and nondense.
Further,
this
a large excess
label,
number
of nonsaturating pushes to
which further
VlogU
).
The use
of the
dynamic
variations,
E>inic's
0(nm + n^ Vlog U
trees, as
algorithm improves to
O nm log
+2
by using dyiuimic
Tarjan [1987]
p noraturating pushes
trees.
can be implemented in
0(nm
log
Although
this
166
conjecture
is
true for
all
algorithms,
it
is
still
open
for the
general case.
maximum
Recently,
essentially
is
is
augmented along
a shortest path
from the
As one would
0(nm)
pivots
and
to
Tarjan[1988] recently
showed how
implement
this
algorithm in
0(nm
logn) using
dynamic
trees.
maximum
(i.e.,
flow problems:
(ii)
the
maximum
flow problem on
(i.e.,
(i)
U=l);
network, except source and sink, has one incoming arc or one outgoing arc)
bipartite networks;
(iii)
and
(iv)
planar networks.
is
maximum
flow value
networks
less
0(nm)
time.
than are problems with large capacities. Even and Tarjan [1975] showed that
maximum
Ahuja [1987] have achieved the same time bounds using a modification of the
shortest
rely
on ideas
maximum
bipartite matching.
[1987]
Versions of the
bipartite
Let
maximum
= (N^
networks
N2, A)
Nj
<<
N2
|(or
N2
.
N^
).
Suppose
first
that
nj < n2
Gusfield, Martel
and Fernandez-Baca
how
the
et al.'s
0(n^
n2
and 0(nj
+ nm) respectively.
that
is
new
bipartite networks.
This result implies that the FIFO preflow push algorithm and the
167
maximum
flow problem on
networks
in
0(n,
m + n,
and 0(n,
m + n,
log U) time.
It is
maximum
much
at the
more
efficiently than
on general networks.
(A network
is
called planar
if it
can be
drawn
nodes.)
in a
6n
arcs;
maximum
solution techniques, that have even better running times, are quite different than
Some important
[1979],
maximum
Itai
and Shiloach
maximum
case
tight,
i.e.,
their worst-
bounds
some
families of networks.
is
Zadeh
[1972]
showed
that the
bound
of
algorithm
tight
when
bound
= n^.
[1975] noted
that the
when
m=
n2-
Baratz [1977]
showed
that the
is tight.
Galil
[1981] constructed
an interesting
class of
Edmonds and
achieve
their worst-case
Cheriyan and Maheshwari [1987] have showTi that the bound of 0(n2
highest-label preflow
Vm)
for the
push algorithm
is tight.
family of examples to
show
that the
bound O(n^)
push algorithm
The research
push algorithms,
these knovkTi worst-case examples are quite artificial and are not likely to arise in
practice.
maximum
flow algorithms.
[1979],
Cheung
168
[1980], Glover,
Klingman, Mote and Whitman [1979, 1984), Imai (1983] and Goldfarb
[1986]
and Grigoriadis
are noteworthy.
that
to the
development of algorithms
use distance
and Karp,
Ehnic's
in increasing
is
most
classes of networks.
Dinic's algorithm
algorithm for sparse networks, but slower for dense networks. Imai [1983] noted that
Galil
and Naamad's
is
data structures,
Sleator
and Tarjan
(1983] reported a similar finding; they observed that their implementation of Dinic's
is
et al.
to
be
A number
the computational
[1988]
are substantially (often 2 to 10 times) faster than Dinic's and Karzanov's algorithms
for
most
classes of networks.
Among
all
highest-label preflow
The
excess-scaling algorithm
and
its
We
do not anticipate
that
dynamic
practice;
Finally,
we
problem:
problem.
(i)
the
flow
flow
maximum flow value between every pair of nodes. Gomory and Hu (1961] showed how to solve the multi-terminal flow problem on undirected networks by solving (n-1) maximum
In the multi-terminal flow problem,
we wish
to
determine the
flow problems. Recently, Gusfield [1987] has suggested a simpler multi-terminal flow
algorithm.
These
results,
however
do not apply
to the multi-terminal
maximum
169
maximum dynamic
tj:
flow problem,
we
associate
(i, j)
in the
is
network a number
to
The
objective
send the
maximum
node
first
to the sink
node within
Ford and
Fulkerson [1958]
showed
that the
maximum dynamic
nunimum
in
this problem).
Orlin [1983]
is to
6.4
Minimum
The minimum
The
classical
the
minimum
cost flow
problem,was posed
[1939],
Koopmans
[1947].
first
programming.
He observed
for linear
optimum
bounding technique
programming
minimum
first
combinatorial algorithms
known
minimum
Jewell [1958],
Iri
[1960]
how
to solve the
minimum
cost flow
problem
[1971]
Tomizava
if
the computations
use node potentials, then these algorithms can be implemented so that the shortest
path problems have nonnegative arc lengths.
Minty
algorithm.
[1960]
and Fulkerson
and
describe the
170
for the
minimum
problem (which
is
perform iterations that can (apparently) not be polynomially bounded. Zadeh [1973a]
describes one such example on which each of
several
algorithms
the primal
rule, the
negative cycle algorithm (which augments flow along a most negative cycle), the
successive shortest path algorithm, the primal-dual algorithm, and the out-of-kilter
algorithm
- performs an
exponential
number
of iterations.
Zadeh
The
fact that
one example
is
bad
for
many network
insightful
algorithms suggests
inter-relationship
among
the algorithms.
The
showed
this relationship
by pointing out
mentioned
of
same sequence
augmentations provided
ties are
rule.
paths.
its
practical implementations
have been
first
tree
first
The
implementations using these ideas, due to Srinivasan and Thompson [1973] and
Glover,
[1974],
significantly
Brown and
Graves
[1977],
and
Barr, Glover,
and Klingman
of
improved data
excellent
structures.
The book
an
developements.
Researchers have conducted extensive studies to determine the most effective
pricing strategy,
i.e.,
choice of the pricing strategy has a significant effect on both solution time and the
number
strategy
BrovkTi
minimum
The candidate
list
we
described
is
due
to
Mulvey
[1978a].
and Graves
[1983]
[1978], Grigoriadis
and Hsu
[1979],
Mead
and Grigoriadis
[1986]
have been
171
effective in practice.
It
appears that the best pricing strategy depends both upon the
size.
minimum
cost
method can be
degenerate (see Bradley, Brown and Graves [1978], Gavish, Schweitzer and Shlifer
[1977]
and Grigoriadis
(1986]).
Thus, degeneracy
is
theoretical issue.
The strongly
proposed by Cunningham
[1977a, 1977b, 1978) has
{1976]
and independently by
Barr, Glover
and Klingman
shown
that maintaining
number
of degenerate pivots.
On
the
theoretical front, the use of this technique led to a finitely converging primal simplex
algorithm. Orlin [1985] showed, using a p>erturbation technique, that for integer data
a strongly
performs
O(nmCU)
pivots
pivots
and 0(nm
C log (mCU))
The strongly
sequence of
number
is
may
be exponential.
This
phenomenon
known
the
as stalling.
Cunningham
[1979]
described an example of stalling and suggested several rules for selecting the entering
variable to avoid stalling.
One such
rule
is
LRC
fixed,
(Leaist
in
an arbitrary, but
manner.
where
it
left
off earlier,
first
Cunningham showed
pivots.
most
nm
consecutive degenerate
Goldfarb,
Hao and
the
minimum
minimum
cost flow
problem or
its
special CJises.
The only
is
minimum
cost flow
problem
a dual
minimum
Developing a polynomial-time
minimum
cost flow
problem
is
still
open.
maximum
Dial et
al.
[1979],
Zadeh
172
Akgul
[1985a], Goldfarb,
Hao and
and Hao
maximum
flow
Hung
and Orlin
[1988]
for the
assignment problem.
The
attractive
relaxation algorithms
minimum
cost
its
generalization.
mirumum
to a deficit
node along
(ii)
In the
satisfy
resets flows
on some arcs
however,
this
and
deficits at nodes.
change
it
in the
node
and when
finally
determines
the
optimum dual
it
optimum primal
Bertsekas
solution.
[1985]
minimum
this
cost flow
problem (with
integer data).
extended
approach
for the
minimum
cost flow
cost flow
problem with
and
minimum
problem
A number
sizes.
of empirical studies
minimum
to
cost flow
and problem
is
NETGEN, due
Klingman, Napier
and Stutz
which
is
minimum
Glover,
Kamey and
Klingman
[1974]
and
Aeishtiani
and Magnanti
out-of-kilter algorithms.
and Whitman
algorithm.
[1980]
more rigorous
,
Glover,
and Graves
[1977],
Mulvey
[1978b], Grigoriadis
[1979]
and Tseng
173
we would
path algorithm, the primal-dual algorithm, the out-of-kilter algorithm, the dual
simplex algorithm, and the primal simplex algorithm with Dantzig's pivot rule
By using more
all arcs,
we would
expect that
All the
computational studies
have verified
this expectation
and
classes of
network
is
problems. Bertsekas and Tseng [1988] have reported that their relaxation algorithm
substantially faster than the primal simplex algorithm.
However, Grigoriadis
[1986]
new
At
Tseng, and the primal simplex algorithm due to Grigoriadis are the two fastest
algorithms for solving the
minimum
cost flow
problem
in practice.
Computer codes
public domain.
for
some minimum
cost flow
in the
developed by Grigoradis and Hsu [1979] and Kennington and Helgason [1980],
respectively,
[1988].
RELAX
Polynomial-Time Algorithms
In the recent past, researchers have actively pursued the design of fast
minimum
if
is
strongly polynomial-time
its
running time
is
polynomial
number
or U.
of nodes
and
arcs,
The
minimum
The
m arcs,
capacitated.
It
bounded
value by C, and the integral capacities, supplies and demands are bounded in absolute value by U. The term S()
is
and the
flow
maximum
174
#
1
Discoverers
Running Time
[1972]
2 3 4
5 6
0((n +
0(n 0(n
log log
7 7
8
0( n^ log nC
Gabow and
Tarjan [1987]
[1987, 1988b]
0(nm
U log nQ
nC)
0(nm
0(nm
0(nm
and Tarjan
[1988]
and
log log
U log nQ
175
we
bounds
problems
are:
Polynomial-Time Bounds
S(n,m, C) =
Discoverers
min (m
log log C,
m + rh/logC
Johnson
[1982],
and
M(n, m, C) =
nm
^%rT^gTJ
log
[
+ 2
J
Discoverers
m) =
m+
nm
n log n
log
[1984]
M(n, m) =
(n^/m)
minimum
L>
cost flow
which
a Vciriant of
[1988].
The
scaling technique
it
did not
many
as
having
practical utility.
that the
scaling technique has great theoretical value as well as potential practical significance.
Rock
[1980]
minimum
cost
flow problem, one using capacity scaling and the other using cost scaling. This cost
scaling algorithm reduces the
minimum
cost flow
problem
to a
sequence of
for the
minimum
cost flow
problem
optimality,
introduced
independently by Bertsekas [1979] and Tardos [1985]. Bertsekas [1986] developed the
first
algorithm to
Tarjan [1984]
5.8.
maximum
flow problem.
176
for the
minimum
cost flow
problem described
in Section 5.8
independently by Goldberg and Tarjan [1987] and Bertsekas and Eckstein [1988],
upon
similar ideas.
Using a dynamic
pseudoflow
bound
of
0(nm
They
also
showed
minimum
cost flow
problem cam be
Using both
Mehlhom
[1984])
and dynamic
tree
0(nm
bound
for ^he
wave
algorithm.
algorithm
is
very practical,
its
is
situation has
prompted researchers
improving the
computational complexity of
minimum
first
any
complex data
Tarjan [1987],
log
structures.
The
was due
to
who
developed a
triple scaling
log nC).
[1988],
who developed
The double
as described in Section
runs in
0(nm
log
Scaling costs by
an
appropriately larger factor improves the algorithm to 0(nm(log U/log log U) log nC)
and
dynamic
tree
0(nm
log log
log nC).
algorithm
for
very dense networks; in these instances, algorithms by Goldberg and Tarjan appear
more
attractive.
Goldberg and Tarjan [1988b] and Barahona and Tardos [1987] have developed
other polynomial-time algorithms.
cycle algorithm
due
to Klein [1967].
if
the
minimum mean
cycle (a
W for which V
(i,j)
Cj;
|W
is
minimum), then
is
strongly polynomial-time.
W
this
approach running
in
time
0(nm(log
n) minflog nC,
log
n)).
[1987], analyzing
an
177
a cycle with
maximum improvement
performs
is
0(m
log
mCU)
iterations.
maximum
improvement
difficult
(i.e.,
determine a disjoint
set of
augmenting cycles
with the property that augmenting flows along these cycles improves the flow cost
by
at least as
much
as
cycle.
in 0(.Tr\^ log
(mCU)
S(n,
m,
O)
time.
[1972]
proposed the
first
minimum
and
This desire
was
motivated primarily by
C and
log
typically
range from
to 20,
two reasons:
run on
real
that can
and
(ii)
they might, at
more fundamental
i.e.,
source of the
difficult or
are problems
more
equally difficult to solve as the values of the tmderlying data becomes increasingly
larger?
The
Tardos
first
strongly polynomial-time
minimum
is
due
to
[1985].
Tardos
time.
[1986],
and Orlin
improvements
in the
running
show
by cancelling minimvun
mean
cycles
is
polynomial-time algorithm
due
to Orlin
[1988].
minimum
cost flow
problems. For very sparse networks, the worst-case running time of this algorithm
nearly as low
cis
assumption.
minimum
Kapoor and
to the
Vaidya [1986] have shown that Karmarkar's [1984] algorithm, when applied
minimum
cost
flow
problem performs
0(n^-^
mK)
operations,
where
178
K=
log n + log
+ log U.
Vaidya [1986]
programming
minimum
cost flow
problem
in
Asymptotically, these time bounds are worse than that of the double scaling
algorithm.
At
fully
community has
assess
the
programming algorithms
folklore,
minimum
According
to the
even though they might provide the best-worst case bounds on running
eu-e
times,
and Orlin
minimum
Bland and Jensen [1985] also reported encouraging results with their cost
scaling algorithm.
We
believe that
techniques, scaling algorithms have the potential to be competitive with the best
other algorithms.
6.5
Assignment Problem
The primary
efficient
many
common
The successive
minimum
algorithms.
[1955],
at the heart of
many assignment
due
to
This algorithm
is
Kuhn
known
and
is explicit
in the papers
by Tomizava
[1971]
When
= (N^
u N2
A)
To use
this solution
approach,
we
(j,t)
first
minimum
arcs
cost flow
(s,i)
problem by adding
node
;
and
a sink
node
t,
for all
iN|, and
for all
JN2
capacity.
The
with respect
to the lir;ar
179
programming reduced
costs,
the assignment
problem by n
and runs
in
is
O(n^) and
for a Fibonacci
heap implementation
is
it is
0(m+nlogn).
log log C,
min(m
m+nVlogC}.
The
path problems with arbitrary arc lengths follows from the works of Jewell [1958],
[1960]
[1961]
Tomizava
Edmonds-Karp
assignment
threshold
and Klingman
algorithm which integrates their threshold shortest path algorithm (see Glover,
Carraresi
and
an assignment problem.
is
node
potentials, the
to
maximum
flow problem
send the
maximum
node
s to the sink
node
Hungarian method
to the sink node.
If
node
we
maximum
0(nm) time
time.
n augmentatior\s
0(nm
some time
after the
community considered
it
to
be O(n^) method.
Oiri^)
180
Subsequently,
many
Hungarian method
in fact runs in
The
minimum
cost flow
problem
is
due
to E>inic
is
and Kronrod
Hung
eind
Rom
[1980]
and Engquist
[1982].
This approach
Both approaches
start writh
is in
an
infeasible assignment
it
feasible.
The successive
object
overassigned.
objects
assigned, but
may
feasibility
by solving
at
most n shortest
The algorithms
of Dinic
and Kronrod
but
and Engquist
just described,
somewhat disguised
Kronrod
[1969].
The algorithm
of
node and,
is
due
to Balinski
and Gomory
[1964].
assignment and
gradually converts
into
negative cycles or by modifying node potentials. Derigs [1985] notes that the shortest
this
it
rurrs in
0(nS(n,m,C)) time.
Researchers have also studied primal simplex algorithms for the assignment
problem.
The
is
highly degenerate; of
its
2n-l
variables, only
n are nonzero.
much
research on the
network simplex method for the assignment problem until Barr, Glover and
Klingman
These authors
to
maintain a strongly feasible basis for the assignment problem; they also reported
encouraging computational
results.
ISl
Hung
most
Hence, his algorithm performs 0(n^log nC) pivots. Akgul [1985b] suggested another primal simplex algorithm performing O(n^) pivots.
This algorithm essentially
in
amounts
to solving
0(nS(n,m,C)) time.
Orlin [1985] studied the theoretical properties of Dantzig's pivot rule for the
netvk'ork simplex algorithm
and showed
problem
this rule
0(n^m
and Orlin
to
run
in
0(nm log C)
The algorithm
cost.
sufficiently large
reduced
C and
within
O(n^) pivots
its
value
is
halved.
is
a dual
simplex
is
this
Balinski's algorithm
and runs
O(n^) time.
0(nm +
is
due
to Bertsekas
Bertsekas
and Eckstein
more
recent
its
Out
somewhat
[1988].
we have
one unit
at a time,
by the
maximum amount
Bertsekas
is
problem which
cost flow
in fact a
minimum
problem
(see
Bertsekas [1985]).
bound
to solve the
assignment
algorithms.
problem
is
0(nm
+ n^ log n)
which
is
achieved by
many assignment
do
better for
problems that
assumption.
Gabow
[1985]
assignment problem.
^m
Gabow and
push algorithm
the assignment
^m log nC). Observe that the generic pseudoflow for the minimum cost flow problem described in Section 5.8 solves problem in 0(nm log nC) since every push is a saturating push.
showed
that the scaling version of the auction
algorithm runs in
this
0(nm
log nC).
Section 5.11
algorithm in Orlin and Ahuja [1988]. They also improved the time bound of the
auction algorithm to
is
comparable
to that of
Gabow and
different
computational attributes.
boimd
to solve the
assignment problem
As mentioned
Over the
many
algorithms.
Some
Glover and Klingman [1977a] on the network simplex method, by McGinnis [1983]
[1988]
[1986]
compared
all
of these zilgorithms,
it
is difficult to
Nevertheless, results to date seem to justify the following observations about the
algorithms' relative performance.
is
Among
due
to
Glover
et al. [1986]
[1987]
appear
to
be the
fastest.
found
competitive with
Jonker and Volgenant's algorithm. Carpento, Martello and Trlh [1988] present
183
several
cases.
FORTRAN
6.6
Other Topics
Our
domain
of
and
practical interest.
(iii)
network design.
We
shall
now
in this chapter
assume
that arcs
the flow entering an arc equals the flow leaving the arc.
If
In
xj:
then
Tj: Xj:
units "arrive" at
arc.
If
node
1,
j;
Tj; is
a
is
<
rj:
<
if
<
Tj;
<
>,
is
gainy.
may
application contexts.
For
example, the multiplier might model pressure losses in a water resource network or
losses incurred in the transportation of perishable goods.
An
maximum
two
flow problem
is
the generalized
maximum
node or maximizes
of
node
(these
Maximize v^
(6ia)
subject to
X
{j:
"ij
{j:
S
(j,i)
"'ji'^ji
K'if = s
S 0,
if
i ?t
(i,j)
A)
A)
s,t,
t
for
aU
(6.1b)
[vj, if
184
<
x^j
<
uj:
for all
(i, j)
e A.
Note
the arcs.
v^,
within arcs.
minimum
minimum
algorithm, the negative cycle algorithm, and the primal-dual algorithm for the
cost flow
problem apply
to the generalized
maximum
flow problem.
The
paper by Truemper [1977] surveys these approaches. These algorithms, however, are
not pseudopolynomial-time, mainly because the optimal arc flows and node
potentials might be fractional.
The
and Tardos
[1986]
describes the
first
generalized
maximum
flow problem.
In the generalized
minimum
which
is
an extension of the
ordinary
minimum
we wish
to
determine the
minimum
first
cost
main approaches
to solve this
problem. The
approach,
is
due
and Klingman
among
they
Elam
it is
et al.
find that
minimum
[1988b],
The
third approach,
due
to Bertsekeis
and Tseng
generalizes their
minimum
cost
generalized
minimum
We
V
(i,j)
Cjj
(x^j).
A
are substantially
X-J3
more
difficult to solve
and continue
to
pose a significant
nonseparable, but convex objective functions are more difficult to solve; typically.
185
programming techniques
to solve these
problems. The separable convex cost flow problem has the follow^ing formulation:
Minimize
V
(i,j)
Cj; (xj;)
(6.2a)
subject to
Y
{j: (i,j)
^i]
{j:
S
(j,i)
''ji
^^'^'
^^ all
N,
(6.2b)
A
<
Ujj
,
<
x^j
for all
(i, j)
e A.
(62c)
(xjj)
for each
(i,j)
e A,
is
a convex function.
The
research
classes of separable
problems:
each Cj;
(xjj) is
of
(ii)
different.
There
a separable
convex
program
Hax
minimum
problem
size.
However,
it
is
possible to
cost
size.
Batra,
and Gupta
Observe that
segments chosen
(if
it
is
More
elaborate
For example,
we knew the optimal solution to a separable course, we don't), then we could solve the
if
any arc
(i,
j)
186
breakpoints: at
0, Uj;
arc.
Any
linear approximation
would be
irrelevant
computationally wasteful.
adaptive approximations that iteratively revise the linear approximation beised upon
the solution to a previous, coarser, approximation.
of this approach).
If
(See
Meyer
[1979] for an
example
could
we were
we
and
progamming
Some important
references on this
[1980],
and Kennington
[1978],
[1981],
Dembo and
Klincewicz
Tseng
[1987].
Some
time.
cases, the
Minoux
one of
[1986]
its
special
mininimum quadratic
Minoux
has also
Muticommodity Flows
Multicommodity flow problems
arise
when
arc capacities.
we
state
programming formulation
of the
multicommodity minimum
problem and
its
cost flow
to contributions to this
specializations.
Suppose
through
r.
that the
b*^
problem contains
distinct
commodities numbered
k.
Let
Then
the
multicommodity minimum
^
Minimize
V
1^=1
V
(i,j)e
k
c^:
k
x^(6.3a)
subject to
187
k
X;;
1]
{j:
{j:
V
(i,j)
k ~
^i
'
^OT a\]
and
k,
(6.3b)
''ii
(i,j)
e A)
y
ktl
'
k
X..
'^
all
(i,j),
(63c)
<
k
Xj.
<
k
u,
for all
(i,j)
and
all
(6.3d)
k
In this formulation,
x--
and
k
c--
represent the
amont
of flow
and the
unit cost
of flow for
commodity k on
arc
(i,j).
As indicated by
its
by
(6.3d), the
restrictions
Observe
that
it
if
constraints, then
decomposes
single
commodity minimum
cost flow
corxstraints
is to
commodities
way
that
We
problem
is
first
a special instance of
commodity k has
objective
a
is
and tK The
t*^
from
s*^
to
for all k.
Hu
[1963]
showed how
network
in
to solve the
Frisch [1968]
showed how
source or a
to solve the
multicommodity
maximum
flow problem
with a
common
common
sink by
maximum
flow algorithm.
maximum
[1960]
Researchers have proposed three basic approaches for solving the general
multicommodity minimum
resource-directive
price-directive
decomposition,
We
188
the excellent surveys by Assad [1978] and Kennington [1978] for descriptions of these
methods.
minimum
cost
flow problem.
minimum
made on
cost flow
at nearly the
the single
commodity minimum
Although specialized
primal simplex software can solve the single commodity problem 10 to 100 times
faster than the
developed for the multicommodity minimum cost flow problems generally solve
thse problems about 3 times faster than the general purpose software (see Ali et
[1984]).
al.
Network Design
We
network;
is
of
its
own.
Many
any flow.
that indicate
Typically, these
models involve
k
x^- are
multicommodity flows.
related
yjj
2 k=l
''ii
"ij yij
^^^
'
^"
^^'^^
of the
form
(6.3c)
in
k
These constraints force the flow
the arc
is
x^-
of each
if
commodity k on
the arc
is
arc
(i,j)
to
be zero
if
(i,j)
flow to be the
arc's
design capacity
constraints
Many
may
restrict the
(for instance, in
some
applications, the
network must be
network might
189
to
Also,
is
many
different objective
One
of the
most popular
""
Minimize
^ k=l
(i^j)e
k
c
k
x^^
Y. ^
(i,j)
ij
A
(as well zs fixed costs
of solution
polyhedral combinatorics.
Magnanti and
Wong
[1984]
and Minoux
[1985,
1987] have described the broad range of applicability of network design models
and
for these
many
underlying
Acknowledgments
We
Wong and
Goemans, Hershel
Safer,
many
for
useful
suggestions.
We
Cunningham
many
The research
Presidential
of the first
and
third authors
was supported
in part
by the
Young
and Prime
Computer.
190
References
Aashtiani, H.A., and T. L. Magnanti.
1976.
055-76,
OR
Aho, A.V.
J.E.
Hop>croft,
and
J.D.
of
Computer
Algorithms.
Addison-Wesley, Reading,
L. Batra,
MA.
1984.
Ahuja, R. K.,
J.
and
S.
K. Gupta.
Goldberg,
J.B.
Orlin,
Finding
Scaling.
Management,
M.I.T.,
Cambridge,
MA.
Orlin.
J.B.
1988.
Personal Communication.
J.B.
Orlin,
and
R.E. Tarjan.
1988.
the Shortest Path Problem. Technical Report No. 193, Operations Research Center,
M.I.T.,
Cambridge, MA.
Orlin.
J.B.
1987.
Fast
for the
Maximum
M.I.T.,
Flow Problem.
Working Paper
Management,
in Oper. Res.
J.B.
1988.
for the
Minimum
and
Ahuja, R.K.,
Bipartite
J.B.
Orlin, C. Stein,
R.E. Tarjan.
1988.
Improved Algorithms
for
Ahuja, R.K.,
Orlin,
and
R.E. Tarjan.
1988.
for the
Maximum Flow
M.I.T.,
Problem.
Working Paper
Management,
Cambridge, MA.
1985a.
Akgul, M.
of
Raleigh, N.C.
191
Akgul, M.
1985b.
Assignment Problem.
Ali,I.,
J.
Kennington,
B. Patty, B. Shetty, B.
McCarl and
P.
Wong.
Ali, A.
I.,
and
J.
L.
Kennington.
1978.
Flow Problem:
State-of-the-Art Survey.
Texeis.
Technical Report
OREM
78001, Southern
Methodist University,
1980.
Implementation and
A Survey.
Networks 8,37-91.
1985.
Balinski, M.L.,
and
R.E.
Comory.
1964.
Sci. 10,
578-593.
Barahona,
F.,
and
E.
Tardos.
1987.
MA.
Baratz, A.E.
1977.
MA.
Basis Algorithm
Ban,
R., F.
1977a..
for the
12, 1-13.
and D. Klingman.
1977b.
A Network Augmenting
of the International
Path Basis
Symposium on
192
Barr, R., F. Glover,
and D. Klingman.
Euro.
].
1978.
and D. Klingman.
1979.
Enhancement
17, 16-34.
of
INFOR
J.J.
Jarvis.
John
Wiley
&
Sons.
Bellman, R. 1958.
On
a Routing Problem.
16, 87-90.
Berge, C.,
and A. Ghouila-Houri.
John Wiley
1979.
1962.
Networks.
&
Sons.
Bertsekas, D.P.
Cambridge,
MA.
Bertsekas, D.P. 1981.
152-171.
21,
Bertsekas, D. P.
1985.
Minimum
Network Flow
Problems.
Proc. of 25th
Bertsekas, D. P.
1987.
Distributed Relaxation
Method
for
MA.
Also in Annals
1988.
Bertsekas, D.P.,
and
J.
Eckstein.
Methods
for Linear
and
R. Gallager.
1987.
Bertsekas, D.
P., P.
A. Hosein,
and
P.
Tseng.
.
1987.
SIAM
of Control
and Optimization
193
Bertsekas, D.P.,
and
P.
Tseng.
1988a.
for Linear
Minimum
Cost
In B. Simeone, et
(ed.),
FORTRAN
As Annals
and
P.
of Operations Research
13, 125-190.
Bertsekas, D.P.,
Tseng.
1988b.
Minimum
Cost
On
Van Emde,
R. Kaas,
and
E. Zijlstra.
1977.
Efficient Priority
Queue. Math.
Bodin, L. D., B.
of Vehicles
L.
Ball.
1983.
Orlin.
1986.
Personal Communication.
1977.
Man.
21, 1-38.
Bradley,
S.
P.,
1977.
Applied
Mathematical
Programming.
Addison-Wesley.
P.J.
Gowen.
1961.
Operational
MD.
Carpento, G.,
S.
Martello,
In B.
and
P. Toth.
1988.
Assignment Problem.
Optimization.
Simeone
et al. (eds.),
FORTRAN
As Annals
and
J.
Carraresi, P.,
C. Sodini.
1986.
An
Efficient
Problem. Eur.
Cheriyan,
J.
1988.
Technical Report,
Institute of
Fundamental Research,
Bombay,
India.
194 Cheriyan,
J.,
1987.
Maximum Network
New
Delhi, India.
Cherkasky, R.V.
1977.
Maximum Flow
in
Networks
Vl
112-125
(in Russian).
Cheung,
T.
1980.
ACM
An
Algorithmic Approach.
Academic
Press.
Cunningham, W.H.
1976.
Cunningham, W.H.
Math, of Oper. Res.
Dantzig, G.B.
In T.C.
4,
1979.
196-208.
1951.
Koopmans
359-373.
&
Sons,
Inc.,
Dantzig, G.B.
in Linear
1955.
Constraints,
Programming. Economeirica
23, 174-183.
On
Man. Sd.
6,
187-190.
Dantzig, G.B.
Princeton, NJ.
1962.
(ed.),
Theory of
91-92.
1956.
On
the
of
Networks.
In
and Related
Dantzig, G.
B.,
and
P.
Wolfe.
1960.
Decomposition Principle
for Linear
Programs.
195
Dembo, R.S., and J.G. Klincewicz. 1981. A Scaled Reduced Gradient Algorithm for Network Flow Problems with Convex Separable Costs. Math. Prog. Study 15, 125-147.

Denardo, E.V., and B.L. Fox. 1979. Shortest-Route Methods: 1. Reaching, Pruning and Buckets. Oper. Res. 27, 161-186.

Deo, N., and C. Pang. 1984. Shortest Path Algorithms: Taxonomy and Annotation. Networks 14, 275-323.
Derigs, U. 1985. The Shortest Augmenting Path Method for Solving Assignment Problems: Motivation and Computational Comparison. Annals of Operations Research 4, 57-102.

Derigs, U. 1988. Programming in Networks and Graphs. Lecture Notes in Economics and Mathematical Systems, Springer-Verlag.

Derigs, U., and W. Meier. 1988. Implementing Goldberg's Max-Flow Algorithm: A Computational Investigation. Technical Report, Germany.
Dial, R. 1969. Algorithm 360: Shortest Path Forest with Topological Ordering. Comm. ACM 12, 632-633.

Dial, R., F. Glover, D. Karney, and D. Klingman. 1979. A Computational Analysis of Alternative Algorithms and Labeling Techniques for Finding Shortest Path Trees. Networks 9, 215-248.
Dijkstra, E. 1959. A Note on Two Problems in Connexion with Graphs. Numerische Mathematik 1, 269-271.
Dinic, E.A. 1970. Algorithm for Solution of a Problem of Maximum Flow in Networks with Power Estimation. Soviet Math. Doklady 11, 1277-1280.

Dinic, E.A., and M.A. Kronrod. 1969. An Algorithm for Solution of the Assignment Problem. Soviet Math. Doklady 10, 1324-1326.

Edmonds, J. 1970. Exponential Growth of the Simplex Method for Shortest Path Problems. Unpublished paper, University of Waterloo.
Edmonds, J., and R.M. Karp. 1972. Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems. J. ACM 19, 248-264.
Elam, J., F. Glover, and D. Klingman. 1979. A Strongly Convergent Primal Simplex Algorithm for Generalized Networks. Math. of Oper. Res. 4, 39-59.

Elias, P., A. Feinstein, and C.E. Shannon. 1956. Note on Maximum Flow Through a Network. IRE Trans. on Inform. Theory IT-2, 117-119.

... Problem. INFOR 20, 370-384.
Even, S. 1976. The Max-Flow Algorithm of Dinic and Karzanov: An Exposition. Technical Report, Computer Science, M.I.T., Cambridge, MA.

Even, S. 1979. Graph Algorithms. Computer Science Press, Maryland.
Even, S., and R.E. Tarjan. 1975. Network Flow and Testing Graph Connectivity. SIAM J. Comput. 4, 507-518.
Fernandez-Baca, D., and C.U. Martel. 1987. On the Efficiency of Maximum Flow Algorithms on Networks with Small Integer Capacities. Research Report, Iowa State University, Ames, IA. To appear in Algorithmica.
Florian, M. 1986. Nonlinear Cost Network Models in Transportation Analysis. Math. Prog. Study 26, 167-196.

Floyd, R.W. 1962. Algorithm 97: Shortest Path. Comm. ACM 5, 345.

Ford, L.R., Jr. 1956. Network Flow Theory. Report P-923, Rand Corporation, Santa Monica, CA.
Ford, L.R., Jr., and D.R. Fulkerson. 1956. Maximal Flow Through a Network. Canad. J. Math. 8, 399-404.

Ford, L.R., Jr., and D.R. Fulkerson. 1956. Solving the Transportation Problem. Man. Sci. 3, 24-32.
Ford, L.R., Jr., and D.R. Fulkerson. 1957. A Primal-Dual Algorithm for the Capacitated Hitchcock Problem. Naval Res. Logist. Quart. 4, 47-54.

Ford, L.R., Jr., and D.R. Fulkerson. 1958. Constructing Maximal Dynamic Flows from Static Flows. Oper. Res. 6, 419-433.

Ford, L.R., Jr., and D.R. Fulkerson. 1958. A Suggested Computation for Maximal Multicommodity Network Flows. Man. Sci. 5, 97-101.

Ford, L.R., Jr., and D.R. Fulkerson. 1962. Flows in Networks. Princeton University Press, Princeton, NJ.
Francis, R., and P. Mirchandani (eds.). 1988. Discrete Location Theory. John Wiley & Sons. To appear.

Frank, H., and I.T. Frisch. 1971. Communication, Transmission, and Transportation Networks. Addison-Wesley.
Fredman, M.L. 1976. New Bounds on the Complexity of the Shortest Path Problem. SIAM J. of Computing 5, 83-89.
Fredman, M.L., and R.E. Tarjan. 1984. Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms. Proc. 25th Annual IEEE Symp. on Found. of Comp. Sci., 338-346.
Fujishige, S. 1986. An O(m^2 log n) Capacity-Rounding Algorithm for the Minimum Cost Circulation Problem: A Dual Framework of Tardos' Algorithm. Math. Prog. 35, 298-309.
Fulkerson, D.R. 1961. An Out-of-Kilter Method for Minimal Cost Flow Problems. SIAM J. on Appl. Math. 9, 18-27.

Fulkerson, D.R., and G.B. Dantzig. 1955. Computation of Maximum Flow in Networks. Naval Res. Logist. Quart. 2, 277-283.
Gabow, H.N. 1985. Scaling Algorithms for Network Problems. J. of Comput. Sys. Sci. 31, 148-168.

Gabow, H.N., and R.E. Tarjan. 1987. Faster Scaling Algorithms for Network Problems. SIAM J. Comput. (submitted).
Galil, Z. 1980. An O(V^{5/3} E^{2/3}) Algorithm for the Maximum Flow Problem. Acta Informatica 14, 221-242.

Galil, Z. 1981. On the Theoretical Efficiency of Various Network Flow Algorithms. Theoretical Comp. Sci. 14, 103-111.
Galil, Z., and A. Naamad. 1980. An O(VE log^2 V) Algorithm for the Maximum Flow Problem. J. of Comput. Sys. Sci. 21, 203-217.

Galil, Z., and E. Tardos. 1986. An O(n^2(m + n log n) log n) Min-Cost Flow Algorithm. Proc. 27th Annual Symp. on the Found. of Comp. Sci., 136-146.
Gallo, G., and S. Pallottino. 1988. Shortest Path Algorithms. In B. Simeone, P. Toth, G. Gallo, F. Maffioli, and S. Pallottino (eds.), FORTRAN Codes for Network Optimization. As Annals of Operations Research 13.

Gallo, G., and G. Starchi. 1982. Shortest Paths: A Bibliography. Sofmat Document 81-P1-4-SOFMAT-27, Rome, Italy.
Gavish, B., P. Schweitzer, and E. Shlifer. 1977. The Zero Pivot Phenomenon in Transportation and Assignment Problems and Its Computational Implications. Math. Prog. 12, 226-240.
Gibby, D., F. Glover, D. Klingman, and M. Mead. 1983. A Comparison of Pivot Selection Rules for Primal Simplex Based Network Codes. Oper. Res. Letters 2, 199-202.

Gilsinn, J., and C. Witzgall. 1973. A Performance Comparison of Labeling Algorithms for Calculating Shortest Path Trees. Technical Note 772, National Bureau of Standards, Washington, D.C.
Glover, F., R. Glover, and D. Klingman. 1984. ...

Glover, F., R. Glover, and D. Klingman. 1986. Threshold Assignment Algorithm. Math. Prog. Study 26, 12-37.
Glover, F., D. Karney, and D. Klingman. 1974. Implementation and Computational Comparisons of Primal, Dual and Primal-Dual Computer Codes for Minimum Cost Network Flow Problems. Networks 4, 191-212.

Glover, F., D. Karney, D. Klingman, and A. Napier. 1974. A Computational Study on Start Procedures, Basis Change Criteria and Solution Algorithms for Transportation Problems. Man. Sci. 20, 793-813.
Glover, F., and D. Klingman. 1976. Network Applications in Industry and Government. AIIE Transactions 9, 363-376.

Glover, F., D. Klingman, J. Mote, and D. Whitman. 1979. Comprehensive Computer Evaluation and Enhancement of Maximum Flow Algorithms. Applications of Management Science 3, 109-175.
Glover, F., D. Klingman, J. Mote, and D. Whitman. 1984. A Primal Simplex Variant for the Maximum Flow Problem. Naval Res. Logist. Quart. 31, 41-61.
Glover, F., D. Klingman, and N. Phillips. 1985. A New Polynomially Bounded Shortest Path Algorithm. Oper. Res. 33, 65-73.

Glover, F., D. Klingman, N. Phillips, and R.F. Schneider. 1985. New Polynomial Shortest Path Algorithms and Their Computational Attributes. Man. Sci. 31, 1106-1128.
Glover, F., D. Klingman, and J. Stutz. 1974. Augmented Threaded Index Method for Network Optimization. INFOR 12, 293-298.
Goldberg, A.V. 1985. A New Max-Flow Algorithm. Technical Report MIT/LCS/TM-291, Laboratory for Computer Science, M.I.T., Cambridge, MA.
Goldberg, A.V., S.A. Plotkin, and E. Tardos. 1988. Combinatorial Algorithms for the Generalized Circulation Problem. Research Report, Cambridge, MA.
Goldberg, A.V., and R.E. Tarjan. 1986. A New Approach to the Maximum Flow Problem. Proc. of the 18th ACM Symp. on the Theory of Computing, 136-146.

Goldberg, A.V., and R.E. Tarjan. 1987. Solving Minimum Cost Flow Problems by Successive Approximation. Proc. of the 19th ACM Symp. on the Theory of Computing.
Goldberg, A.V., and R.E. Tarjan. 1988a. Solving Minimum Cost Flow Problems by Successive Approximation. To appear in Math. of Oper. Res.

Goldberg, A.V., and R.E. Tarjan. 1988b. Finding Minimum-Cost Circulations by Canceling Negative Cycles. Proc. of the 20th ACM Symp. on the Theory of Computing.
Golden, B. 1988. An Application of ... . M.I.T., Cambridge, MA.

Golden, B.L., and T.L. Magnanti. 1977. Deterministic Network Optimization: A Bibliography. Networks 7, 149-183.
Goldfarb, D. 1985. Efficient Dual Simplex Algorithms for the Assignment Problem. Math. Prog. 33, 187-203.
Goldfarb, D., and M.D. Grigoriadis. 1986. A Computational Comparison of the Dinic and Network Simplex Methods for Maximum Flow. In B. Simeone et al. (eds.), FORTRAN Codes for Network Optimization. As Annals of Operations Research 13.
Goldfarb, D., J. Hao, and S. Kai. 1986. Efficient Shortest Path Simplex Algorithms. Research Report, Columbia University, New York, NY.

Goldfarb, D., J. Hao, and S. Kai. 1987. ... Network Simplex Algorithm. Research Report, Dept. of Industrial Engineering and Operations Research, Columbia University, New York, NY.
Goldfarb, D., and J. Hao. 1988. A Primal Simplex Algorithm that Solves the Maximum Flow Problem in At Most nm Pivots and O(n^2 m) Time. Technical Report, Columbia University, New York, NY.
Goldfarb, D., and J.K. Reid. 1977. A Practicable Steepest Edge Simplex Algorithm. Math. Prog. 12, 361-371.

Gomory, R.E., and T.C. Hu. 1961. Multi-Terminal Network Flows. SIAM J. of Appl. Math. 9, 551-570.
Grigoriadis, M.D. 1986. An Efficient Implementation of the Network Simplex Method. Math. Prog. Study 26, 83-111.

Grigoriadis, M.D. 1988. Personal Communication.
Grigoriadis, M.D., and T. Hsu. 1979. The Rutgers Minimum Cost Network Flow Subroutines. SIGMAP Bulletin of the ACM 26, 17-18.

Gusfield, D. 1987. Very Simple Algorithms and Programs for All Pairs Network Flow Analysis. Research Report, University of California, Davis, CA.
Gusfield, D., C. Martel, and D. Fernandez-Baca. 1985. Fast Algorithms for Bipartite Network Flow. Technical Report, Yale University, New Haven, CT.

Hamacher, H. 1979. Numerical Investigations on the Maximal Flow Algorithm of Karzanov. Computing 22, 17-29.
Hassin, R., and D.B. Johnson. 1985. An O(n log^2 n) Algorithm for Maximum Flow in Undirected Planar Networks. SIAM J. Comput. 14, 612-624.

Hausmann, D. 1978. Integer Programming and Related Areas: A Classified Bibliography. Lecture Notes in Economics and Mathematical Systems 160, Springer-Verlag.
Helgason, R.V., and J.L. Kennington. 1977. An Efficient Procedure for Implementing a Dual-Simplex Network Flow Algorithm. AIIE Transactions 9, 63-68.

Hitchcock, F.L. 1941. The Distribution of a Product from Several Sources to Numerous Localities. J. Math. Phys. 20, 224-230.
Hopcroft, J.E., and R.M. Karp. 1973. An n^{5/2} Algorithm for Maximum Matchings in Bipartite Graphs. SIAM J. of Comp. 2, 225-231.

Hu, T.C. 1963. Multicommodity Network Flows. Oper. Res. 11, 344-360.
Hu, T.C. 1969. Integer Programming and Network Flows. Addison-Wesley.

Hung, M.S. 1983. A Polynomial Simplex Method for the Assignment Problem. Oper. Res. 31, 595-600.
Hung, M.S., and W.O. Rom. 1980. Solving the Assignment Problem by Relaxation. Oper. Res. 28, 969-982.
Imai, H. 1983. On the Practical Efficiency of Various Maximum Flow Algorithms. J. of the Oper. Res. Soc. of Japan 26, 61-82.
Imai, H., and M. Iri. 1984. Practical Efficiencies of Existing Shortest-Path Algorithms and a New Bucket Algorithm. J. of the Oper. Res. Soc. of Japan 27, 43-58.

Iri, M. 1960. A New Method of Solving Transportation-Network Problems. J. of the Oper. Res. Soc. of Japan 3, 27-87.

Iri, M. 1969. Network Flow, Transportation and Scheduling. Academic Press.
Itai, A., and Y. Shiloach. 1979. Maximum Flow in Planar Networks. SIAM J. Comput. 8, 135-150.

Jensen, P.A., and W. Barnes. 1980. Network Flow Programming. John Wiley & Sons.
Jewell, W.S. 1958. Optimal Flow Through Networks. Interim Technical Report No. 8, Operations Research Center, M.I.T., Cambridge, MA.

Jewell, W.S. 1962. Optimal Flow Through Networks with Gains. Oper. Res. 10, 476-499.
Johnson, D.B. 1977a. Efficient Algorithms for Shortest Paths in Sparse Networks. J. ACM 24, 1-13.

Johnson, D.B. 1977b. Efficient Special Purpose Priority Queues. Proc. 15th Annual Allerton Conference on Comm., Control and Computing.
Johnson, D.B. 1982. A Priority Queue in Which Initialization and Queue Operations Take O(log log D) Time. Math. Sys. Theory 15, 295-309.
Johnson, D.B., and S. Venkatesan. 1982. Using Divide and Conquer to Find Flows in Directed Planar Networks. Proc. of the 20th Annual Allerton Conference on Comm., Control, and Computing, Univ. of Illinois, Urbana-Champaign, IL.

Johnson, E.L. 1966. Networks and Basic Solutions. Oper. Res. 14, 619-623.

Jonker, R., and A. Volgenant. 1986. Improving the Hungarian Assignment Algorithm. Oper. Res. Letters 5, 171-175.

Jonker, R., and A. Volgenant. 1987. A Shortest Augmenting Path Algorithm for Dense and Sparse Linear Assignment Problems. Computing 38, 325-340.
Kantorovich, L.V. 1939. Mathematical Methods in the Organization and Planning of Production. Publication House of the Leningrad University. Translated in Man. Sci. 6 (1960), 366-422.
Kapoor, S., and P. Vaidya. 1986. Fast Algorithms for Convex Quadratic Programming and Multicommodity Flows. Proc. of the 18th ACM Symp. on the Theory of Computing, 147-159.

Karmarkar, N. 1984. A New Polynomial-Time Algorithm for Linear Programming. Combinatorica 4, 373-395.
Karzanov, A.V. 1974. Determining the Maximal Flow in a Network by the Method of Preflows. Soviet Math. Doklady 15, 434-437.

Kastning, C. 1976. Integer Programming and Related Areas: A Classified Bibliography. Lecture Notes in Economics and Mathematical Systems 128, Springer-Verlag.
Kelton, W.D., and A.M. Law. 1978. A Mean-Time Comparison of Algorithms for the All-Pairs Shortest-Path Problem with Arbitrary Arc Lengths. Networks 8, 97-106.

Kennington, J.L. 1978. Survey of Linear Cost Multicommodity Network Flows. Oper. Res. 26, 209-236.
Kennington, J.L., and R.V. Helgason. 1980. Algorithms for Network Programming. Wiley-Interscience, NY.
Kershenbaum, A. 1981. A Note on Finding Shortest Path Trees. Networks 11, 399-400.

Klein, M. 1967. A Primal Method for Minimal Cost Flows. Man. Sci. 14, 205-220.
Klincewicz, J.G. 1983. A Newton Method for Convex Separable Network Flow Problems. Networks 13, 427-442.

Klingman, D., A. Napier, and J. Stutz. 1974. NETGEN: A Program for Generating Large Scale Capacitated Assignment, Transportation, and Minimum Cost Flow Network Problems. Man. Sci. 20, 814-821.
Koopmans, T.C. 1947. Optimum Utilization of the Transportation System. Proceedings of the International Statistical Conference, Washington, D.C. Reprinted as supplement to Econometrica 17 (1949).

Kuhn, H.W. 1955. The Hungarian Method for the Assignment Problem. Naval Res. Logist. Quart. 2, 83-97.

Lawler, E.L. 1976. Combinatorial Optimization: Networks and Matroids. Holt, Rinehart and Winston.
Magnanti, T.L. 1981. Combinatorial Optimization and Vehicle Fleet Planning: Perspectives and Prospects. Networks 11, 179-213.

Magnanti, T.L., and R.T. Wong. 1984. Network Design and Transportation Planning: Models and Algorithms. Transp. Sci. 18, 1-55.
Malhotra, V.M., M.P. Kumar, and S.N. Maheshwari. 1978. An O(|V|^3) Algorithm for Finding Maximum Flows in Networks. Inform. Process. Lett. 7, 277-278.

Martel, C.U. 1987. A Comparison of Phase and Non-Phase Network Flow Algorithms. Research Report.
McGinnis, L.F. 1983. Implementation and Testing of a Primal-Dual Algorithm for the Assignment Problem. Oper. Res. 31, 277-291.

Mehlhorn, K. 1984. Data Structures and Algorithms. Springer Verlag.
Meyer, R.R. 1979. Two-Segment Separable Programming. Man. Sci. 25, 285-295.

Meyer, R.R., and C.Y. Kao. 1981. Secant Approximation Methods for Convex Optimization. Math. Prog. Study 14, 143-162.
Minieka, E. 1978. Optimization Algorithms for Networks and Graphs. Marcel Dekker, New York.

Minoux, M. 1984. A Polynomial Algorithm for Minimum Quadratic Cost Flow Problems. Eur. J. of Oper. Res. 18, 377-387.

Minoux, M. 1985. ... Université Pierre et Marie Curie, Paris, France.

Minoux, M. 1986. Solving Integer Minimum Cost Flows with Separable Convex Cost Objective Polynomially. Math. Prog. Study 26, 237-239.

Minoux, M. 1987.
Minty, G.J. 1960. Monotone Networks. Proc. Roy. Soc. London A 257, 194-212.

Moore, E.F. 1957. The Shortest Path Through a Maze. In Proceedings of the International Symposium on the Theory of Switching, Part II; Harvard University Press.

Mulvey, J. 1978a. Pivot Strategies for Primal-Simplex Network Codes. J. ACM 25, 266-270.
Mulvey, J. 1978b. Testing of a Large-Scale Network Optimization Program. Math. Prog. 15, 291-314.

Murty, K.G. 1976. Linear and Combinatorial Programming. John Wiley & Sons.

Nemhauser, G.L., and L.A. Wolsey. 1988. Integer and Combinatorial Optimization. John Wiley & Sons.

Orden, A. 1956. The Transhipment Problem. Man. Sci. 2, 276-285.
Orlin, J.B. 1983. Maximum-Throughput Dynamic Network Flows. Math. Prog. 27, 214-231.
Orlin, J.B. 1984. Genuinely Polynomial Simplex and Non-Simplex Algorithms for the Minimum Cost Flow Problem. Technical Report, Sloan School of Management, M.I.T., Cambridge, MA.

Orlin, J.B. 1985. On the Simplex Algorithm for Networks and Generalized Networks. Math. Prog. Study 24, 166-178.

Orlin, J.B. 1988. A Faster Strongly Polynomial Minimum Cost Flow Algorithm. Proc. of the 20th ACM Symp. on the Theory of Computing, 377-387.
Orlin, J.B., and R.K. Ahuja. 1987. New Distance-Directed Algorithms for Maximum Flow and Parametric Maximum Flow Problems. Working Paper 1908-87, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA.
Orlin, J.B., and R.K. Ahuja. 1988. New Scaling Algorithms for the Assignment and Minimum Cycle Mean Problems. Working Paper OR 178-88, Operations Research Center, M.I.T., Cambridge, MA.
Papadimitriou, C.H., and K. Steiglitz. 1982. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall.

Pape, U. 1980. Algorithm 562: Shortest Path Lengths. ACM Trans. Math. Software 6, 450-455.

Phillips, D.T., and A. Garcia-Diaz. 1981. Fundamentals of Network Analysis. Prentice-Hall.
Pollack, M., and W. Wiebenson. 1960. Solution of the Shortest-Route Problem: A Review. Oper. Res. 8, 224-230.

Potts, R.B., and R.M. Oliver. 1972. Flows in Transportation Networks. Academic Press.
Rock, H. 1980. Scaling Techniques for Minimal Cost Network Flows. In V. Page (ed.), Discrete Structures and Algorithms, Carl Hanser, Munich.

Rockafellar, R.T. 1984. Network Flows and Monotropic Optimization. Wiley-Interscience.
Roohy-Laleh, E. 1980. Improvements to the Theoretical Efficiency of the Network Simplex Method. Ph.D. Thesis, Carleton University.

Rothfarb, B., N.P. Shein, and I.T. Frisch. 1968. Common Terminal Multicommodity Flow. Oper. Res. 16, 202-205.
Shiloach, Y. 1978. An O(nI log^2 I) Maximum Flow Algorithm. Technical Report, Computer Science Dept., Stanford University, Stanford, CA.

Shiloach, Y., and U. Vishkin. 1982. An O(n^2 log n) Parallel Max-Flow Algorithm. J. of Algorithms 3, 128-146.
Sleator, D.D., and R.E. Tarjan. 1983. A Data Structure for Dynamic Trees. J. Comput. Sys. Sci. 26, 362-391.

Srinivasan, V., and G.L. Thompson. 1973. Benefit-Cost Analysis of Coding Techniques for the Primal Transportation Algorithm. J. ACM 20, 194-213.
Swamy, M.N.S., and K. Thulasiraman. 1981. Graphs, Networks, and Algorithms. John Wiley & Sons.

Syslo, M.M., N. Deo, and J.S. Kowalik. 1983. Discrete Optimization Algorithms. Prentice-Hall, New Jersey.

Tabourier, Y. 1973. All Shortest Distances in a Graph: An Improvement to Dantzig's Inductive Algorithm. Discrete Math. 4, 83-87.
Tardos, E. 1985. A Strongly Polynomial Minimum Cost Circulation Algorithm. Combinatorica 5, 247-255.

Tarjan, R.E. 1984. A Simple Version of Karzanov's Blocking Flow Algorithm. Oper. Res. Letters 2, 265-268.
Tarjan, R.E. 1986. Algorithms for Maximum Network Flow. Math. Prog. Study 26, 1-11.

Tarjan, R.E. 1987.

Tarjan, R.E. 1988.
Tomizawa, N. 1972. On Some Techniques Useful for Solution of Transportation Network Problems. Networks 1, 173-194.

Truemper, K. 1977. On Max Flows with Gains and Pure Min-Cost Flows. SIAM J. of Appl. Math. 32, 450-456.
Vaidya, P. 1987. An Algorithm for Linear Programming Which Requires O(((m+n)n^2 + (m+n)^{1.5} n) L) Arithmetic Operations. Proc. of the 19th ACM Symp. on the Theory of Computing.
Van Vliet, D. 1978. Improved Shortest Path Algorithms for Transport Networks. Transp. Res. 12, 7-20.
Von Randow, R. 1982. Integer Programming and Related Areas: A Classified Bibliography 1978-1981. Lecture Notes in Economics and Mathematical Systems, Springer-Verlag.

Von Randow, R. 1985. Integer Programming and Related Areas: A Classified Bibliography 1981-1984. Lecture Notes in Economics and Mathematical Systems, Springer-Verlag.
Wagner, R.A. 1976. A Shortest Path Algorithm for Edge-Sparse Graphs. J. ACM 23, 50-57.

Warshall, S. 1962. A Theorem on Boolean Matrices. J. ACM 9, 11-12.
Weintraub, A. 1974. A Primal Algorithm to Solve Network Flow Problems with Convex Costs. Man. Sci. 21, 87-97.

Weintraub, A., and F. Barahona. 1979. A Dual Algorithm for the Assignment Problem. Technical Report, Universidad de Chile-Sede Occidente, Chile.
Whiting, P.D., and J.A. Hillier. 1960. A Method for Finding the Shortest Route Through a Road Network. Oper. Res. Quart. 11, 37-40.

Williams, J.W.J. 1964. Algorithm 232: Heapsort. Comm. ACM 7, 347-348.
Zadeh, N. 1972. Theoretical Efficiency of the Edmonds-Karp Algorithm for Computing Maximal Flows. J. ACM 19, 184-192.

Zadeh, N. 1973a. A Bad Network Problem for the Simplex Method and Other Minimum Cost Flow Algorithms. Math. Prog. 5, 255-266.

Zadeh, N. 1973b. More Pathological Examples for Network Flow Problems. Math. Prog. 5, 217-224.

Zadeh, N. 1979. Near Equivalence of Network Flow Algorithms. Technical Report No. 26, Dept. of Operations Research, Stanford University, CA.