Week 8
1 LP Is Easy: The Ellipsoid Method
The idea behind the ellipsoid method is rather simply explained in a two-dimensional picture (on the blackboard). The proof that it works is technical and based on several ingredients. In what follows let A and b have integer entries and let U be the maximum absolute value of the entries in A and b.
(a) We can guess a ball that contains all extreme points of P , and the volume
of which is not too large in terms of n, m and log U ;
(b) If P is non-empty then its volume is not too small in terms of n, m and
log U ;
(c) In each step either the center of the ellipsoid is in P or the next ellipsoid
contains P , and in fact the entire half of the previous ellipsoid containing
P;
(d) In each step the ratio between the volume of the next ellipsoid and the
volume of the previous ellipsoid is small enough, i.e., bounded away from 1, in terms of n.
1.1 An iteration
We shall first be concerned with showing (c) and (d). We will obtain perfect insight by considering how this works for an ellipsoid that contains the half of the unit ball $E_0$ consisting of all points with non-negative first coordinate; that is, the unit ball $E_0$ intersected with the halfspace $H_0 = \{x \in \mathbb{R}^n \mid e_1^T x \ge 0\}$. The general case can in fact be transformed into this one.
Let us start with a general description of an ellipsoid. For any positive definite symmetric matrix $D$ and centre $z$ an ellipsoid is defined by
\[
E(z, D) = \{x \in \mathbb{R}^n \mid (x - z)^T D^{-1} (x - z) \le 1\}.
\]
Since $D$ is symmetric and positive definite, it can be written as the product of two symmetric non-singular matrices, $D = D^{1/2} D^{1/2}$.
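To make the definition concrete, here is a small illustration (Python/NumPy, not part of the notes; all names are mine) that forms the symmetric square root $D^{1/2}$ of a positive definite $D$ and tests membership of a point in $E(z, D)$.

```python
import numpy as np

def sym_sqrt(D):
    """Symmetric square root of a symmetric positive definite matrix D,
    so that sym_sqrt(D) @ sym_sqrt(D) equals D (up to rounding)."""
    w, V = np.linalg.eigh(D)
    return V @ np.diag(np.sqrt(w)) @ V.T

def in_ellipsoid(x, z, D):
    """Test whether x lies in E(z, D) = {x : (x-z)^T D^{-1} (x-z) <= 1}."""
    d = x - z
    return d @ np.linalg.solve(D, d) <= 1.0

# Example: an axis-aligned ellipsoid with semi-axes 2 and 1, centred at (1, 0).
D = np.diag([4.0, 1.0])
z = np.array([1.0, 0.0])
print(np.allclose(sym_sqrt(D) @ sym_sqrt(D), D))   # True
print(in_ellipsoid(np.array([2.5, 0.5]), z, D))    # True
print(in_ellipsoid(np.array([3.5, 0.0]), z, D))    # False
```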
So, we would like to know the ratio $\mathrm{Volume}(E_0')/\mathrm{Volume}(E_0)$, where $E_0' = E(z, D)$ is the ellipsoid containing this half of the unit ball. We do this by applying an appropriate transformation $F$ such that $F(E_0') = E_0$, which allows us to use the following lemma.
Lemma 1.1 (Lemma 8.1 in [B&T]) Given the affine transformation $S(x) = Bx + b$, with $B$ a non-singular matrix, then $\mathrm{Vol}(S(L)) = |\det(B)| \, \mathrm{Vol}(L)$, where $S(L) = \{y \in \mathbb{R}^n \mid \exists\, x \in L \text{ s.t. } y = Bx + b\}$.
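As a quick sanity check of the lemma (illustrative only, not from [B&T]): the sketch below estimates the volume of the image $S(L)$ of the unit square $L = [0,1]^2$ by rejection sampling and compares it with $|\det(B)| \cdot \mathrm{Vol}(L)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Affine map S(x) = Bx + b with non-singular B; Vol(S(L)) should be |det(B)| * Vol(L).
B = np.array([[2.0, 1.0],
              [0.0, 3.0]])
b = np.array([1.0, -1.0])
L_volume = 1.0                      # L is the unit square [0,1]^2

# Rejection sampling: sample y in a box containing S(L) and test B^{-1}(y - b) in L.
corners = np.array([[i, j] for i in (0, 1) for j in (0, 1)]) @ B.T + b
lo, hi = corners.min(axis=0), corners.max(axis=0)
y = rng.uniform(lo, hi, size=(200_000, 2))
x = np.linalg.solve(B, (y - b).T).T
inside = np.all((x >= 0) & (x <= 1), axis=1)
box_volume = np.prod(hi - lo)

print("estimated Vol(S(L)):", inside.mean() * box_volume)   # approx. 6
print("|det(B)| * Vol(L):  ", abs(np.linalg.det(B)) * L_volume)
```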
The following transformation works:
\[
F(x) = D^{-1/2}(x - z),
\]
because
\[
x \in E_0' \iff (x - z)^T D^{-1} (x - z) \le 1 \iff D^{-1/2}(x - z) \in E_0 \iff F(x) \in E_0.
\]
Applying Lemma 1.1,
\[
\mathrm{Volume}(E_0) = |\det(D^{-1/2})| \, \mathrm{Volume}(E_0').
\]
Using the property that, for any symmetric positive definite matrix $B$,
\[
|\det(B^{-1/2})| = \frac{1}{\sqrt{\det(B)}},
\]
we get
\[
\mathrm{Volume}(E_0') = \sqrt{\det(D)} \; \mathrm{Volume}(E_0).
\]
$\sqrt{\det(D)}$ is easily seen to be
\[
\sqrt{\det(D)} = \left(\frac{n^2}{n^2 - 1}\right)^{n/2} \left(1 - \frac{2}{n+1}\right)^{1/2}.
\]
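For reference, the matrix $D$ of $E_0'$ in this calculation is presumably the standard choice, which also results from specialising the general formulas given further below to $E = E(0, I)$ and $a = e_1$:
\[
D = \frac{n^2}{n^2 - 1}\left(I - \frac{2}{n+1}\, e_1 e_1^T\right),
\qquad
\det(D) = \left(\frac{n^2}{n^2 - 1}\right)^{n}\left(1 - \frac{2}{n+1}\right),
\]
since $I - \frac{2}{n+1} e_1 e_1^T$ has one eigenvalue $1 - \frac{2}{n+1}$ and $n-1$ eigenvalues equal to $1$.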
Hence,
\[
\frac{\mathrm{Volume}(E_0')}{\mathrm{Volume}(E_0)}
= \left(\frac{n^2}{n^2 - 1}\right)^{n/2} \left(1 - \frac{2}{n+1}\right)^{1/2}
= \frac{n}{n+1} \left(\frac{n^2}{n^2 - 1}\right)^{(n-1)/2}
= \left(1 - \frac{1}{n+1}\right) \left(1 + \frac{1}{n^2 - 1}\right)^{(n-1)/2},
\]
and applying $1 + a < e^a$ for $a \neq 0$,
\[
\frac{\mathrm{Volume}(E_0')}{\mathrm{Volume}(E_0)}
< e^{-1/(n+1)} \left(e^{1/(n^2 - 1)}\right)^{(n-1)/2}
= e^{-1/(2(n+1))}.
\]
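A small numerical check of this bound (illustrative only): evaluate the exact ratio for a few values of $n$ and compare it with $e^{-1/(2(n+1))}$.

```python
import math

# Exact volume ratio Vol(E_0')/Vol(E_0) and the bound e^{-1/(2(n+1))}.
for n in (2, 3, 5, 10, 50):
    ratio = (n**2 / (n**2 - 1)) ** (n / 2) * (1 - 2 / (n + 1)) ** 0.5
    bound = math.exp(-1 / (2 * (n + 1)))
    print(f"n={n:3d}  ratio={ratio:.6f}  bound={bound:.6f}  ratio < bound: {ratio < bound}")
```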
In general, let us be given an ellipsoid $E = E(z, D)$ and the halfspace
\[
H = \{x \in \mathbb{R}^n \mid a^T x \ge a^T z\}.
\]
The next ellipsoid is $E' = E(\bar{z}, \bar{D})$ with
\[
\bar{z} = z + \frac{1}{n+1} \, \frac{Da}{\sqrt{a^T D a}}
\]
and
\[
\bar{D} = \frac{n^2}{n^2 - 1} \left( D - \frac{2}{n+1} \, \frac{D a a^T D}{a^T D a} \right).
\]
It contains $E \cap H$, and
\[
\frac{\mathrm{Volume}(E')}{\mathrm{Volume}(E)} = \frac{\mathrm{Volume}(E_0')}{\mathrm{Volume}(E_0)} < e^{-1/(2(n+1))}.
\]
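The update is straightforward to put into code. The sketch below (Python/NumPy; the function name is mine, not from the notes or [B&T]) computes $\bar{z}$ and $\bar{D}$ from $z$, $D$ and $a$, and checks it on the half-ball case $E = E(0, I)$, $a = e_1$.

```python
import numpy as np

def ellipsoid_update(z, D, a):
    """One ellipsoid step: returns (z_bar, D_bar) such that E(z_bar, D_bar)
    contains E(z, D) intersected with the halfspace {x : a^T x >= a^T z}."""
    n = len(z)
    Da = D @ a
    s = np.sqrt(a @ Da)                 # sqrt(a^T D a)
    z_bar = z + Da / ((n + 1) * s)
    D_bar = (n**2 / (n**2 - 1)) * (D - (2 / (n + 1)) * np.outer(Da, Da) / (a @ Da))
    return z_bar, D_bar

# Half-ball example: unit ball, halfspace {x : x_1 >= 0}.
n = 4
z, D = np.zeros(n), np.eye(n)
a = np.zeros(n); a[0] = 1.0
z_bar, D_bar = ellipsoid_update(z, D, a)
print(z_bar)                             # e_1 / (n+1)
print(np.sqrt(np.linalg.det(D_bar)))     # volume ratio Vol(E_0')/Vol(E_0)
print(np.exp(-1 / (2 * (n + 1))))        # the bound e^{-1/(2(n+1))}
```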
(c) Check if the volume dropped below the critical value. If so, conclude that
there does not exist a feasible point: $P = \emptyset$. Otherwise reiterate (b).
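For concreteness, here is a hedged sketch of the whole iteration for $P = \{x \mid Ax \ge b\}$, assuming the steps (a) and (b) preceding this one are the standard ones: start from a ball $E(z_0, r^2 I)$ that contains $P$ and, whenever the centre violates some row $a_i^T x \ge b_i$, use that $a_i$ in the update derived above. The names and the crude iteration cap (standing in for the volume test) are illustrative, not from the notes.

```python
import numpy as np

def ellipsoid_feasibility(A, b, z0, r, max_iter):
    """Sketch of the ellipsoid feasibility method for P = {x : Ax >= b}.
    Starts from the ball E(z0, r^2 I), assumed to contain P; if no feasible
    centre is found within max_iter iterations (chosen from the volume
    bounds V and v), P is declared empty."""
    n = len(z0)
    z = np.array(z0, dtype=float)
    D = r ** 2 * np.eye(n)
    for _ in range(max_iter):
        violated = A @ z < b                      # step (b): is the centre in P?
        if not violated.any():
            return z                              # the centre is a feasible point
        a = A[np.argmax(violated)]                # a violated constraint a^T x >= b_i
        Da = D @ a
        z = z + Da / ((n + 1) * np.sqrt(a @ Da))  # the update derived above
        D = (n**2 / (n**2 - 1)) * (D - (2 / (n + 1)) * np.outer(Da, Da) / (a @ Da))
    return None                                   # step (c): volume too small, so P is empty

# Tiny example: P = {x in R^2 : x_1 >= 1, x_2 >= 1, -x_1 - x_2 >= -3} (a triangle).
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 1.0, -3.0])
print(ellipsoid_feasibility(A, b, z0=[0.0, 0.0], r=10.0, max_iter=200))
```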
and we correctly conclude that $P$ is empty. Correct calculation gives a suitable value for $V$, and
\[
v = n^{-n} (nU)^{-n^2(n+1)}.
\]
Filling in $V$ and $v$ in expression (2) yields a number of iterations of $O(n^4 \log(nU))$.
In case $P$ is not full-dimensional, the perturbed polytope has different bounds on $V$ and $v$, resulting in $O(n^6 \log(nU))$ iterations.
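The counting behind these numbers is the usual volume comparison; as a brief reconstruction (assuming, as is standard, that the initial ball has radius at most $\sqrt{n}(nU)^n$, so that $\log V = O(n^2 \log(nU))$): each iteration multiplies the volume by less than $e^{-1/(2(n+1))}$, so after $t$ iterations the current ellipsoid has volume at most $V e^{-t/(2(n+1))}$, and
\[
V e^{-t/(2(n+1))} < v
\quad\Longleftrightarrow\quad
t > 2(n+1)\,\ln\frac{V}{v},
\]
and since $\ln(V/v) = O(n^3 \log(nU))$ for the values of $V$ and $v$ above, this gives the stated $O(n^4 \log(nU))$ iterations.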
It requires technicalities, which the book does not give, to show that the precision of the square roots and of the multiplications can be limited sufficiently, so that the running time per iteration is also polynomially bounded.
I leave studying how to adapt the method to solve optimisation instead of feasibility problems, as in Section 8.4, to yourself.
It is extremely important for what follows to notice that the number of iterations is independent of m, the number of constraints! It only depends on n, the number of variables, and log U, the size of the largest coefficient.
Thus, if for a family of polyhedra we can solve the separation problem in time polynomial in n and log U only, then we can solve the linear optimization problems over this family in time polynomial in n and log U.
\[
\sum_{e \in K} x_e \ge 1, \quad K \in \mathcal{K},
\qquad
0 \le x_e \le 1, \quad e \in E,
\]
where $\mathcal{K}$ is the set of all $s,t$-paths in $G$. In fact you could see the formulation as the LP-relaxation of a min-cut problem, but it can be shown that all extreme points of this polytope are $\{0,1\}$-vectors. The restrictions say that there should be at least one edge on each $s,t$-path, which is exactly the definition of an $s,t$-cut. We do not wish to specify all these paths since, indeed, there are exponentially many in the number of edges.
Given a $\{0,1\}$-vector $x$, it is however a matter of deleting from the graph all edges $e$ with $x_e = 1$ and using a simple graph search algorithm to decide whether there exists an $s,t$-path in the remaining graph. If there is not, then $x$ represents a cut; otherwise one finds a path $K$ with $\sum_{e \in K} x_e < 1$ and therefore a violated constraint. In fact this was Exercise 8.8.
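A sketch of this separation step in code (Python; only the $\{0,1\}$ case described above, and all names are mine): delete the edges with $x_e = 1$, do a breadth-first search from $s$, and either certify that $x$ is a cut or return a surviving $s,t$-path, i.e. a violated constraint.

```python
from collections import deque

def separate_cut_lp(nodes, edges, x, s, t):
    """Separation for a {0,1}-vector x over the s,t-cut polytope:
    returns None if x is a cut, otherwise a path K (list of edges)
    with sum_{e in K} x_e < 1, i.e. a violated constraint."""
    # Keep only edges e with x_e = 0 and do a BFS from s, remembering parents.
    adj = {v: [] for v in nodes}
    for (u, v) in edges:
        if x[(u, v)] == 0:
            adj[u].append(v)
            adj[v].append(u)
    parent, queue = {s: None}, deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    if t not in parent:
        return None                       # no s,t-path survives: x is a cut
    path, v = [], t                       # reconstruct the surviving s,t-path
    while parent[v] is not None:
        path.append((parent[v], v))
        v = parent[v]
    return list(reversed(path))

# Example: a 4-cycle s-a-t-b-s; x picks the two edges incident to s.
nodes = ["s", "a", "t", "b"]
edges = [("s", "a"), ("a", "t"), ("t", "b"), ("b", "s")]
x = {("s", "a"): 1, ("a", "t"): 0, ("t", "b"): 0, ("b", "s"): 1}
print(separate_cut_lp(nodes, edges, x, "s", "t"))   # None: x is an s,t-cut
x[("b", "s")] = 0
print(separate_cut_lp(nodes, edges, x, "s", "t"))   # [('s','b'), ('b','t')]: a violated constraint
```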
Thus, since separation over the cut-polytope can be done in time polynomial
in the number of edges only, we can conclude immediately that the min-cut
problem can be solved in polynomial time using the ellipsoid method (though
we know it can be done faster).
Here $\delta(i)$ denotes the set of edges incident to node $i$, and $\delta(S)$ the set of edges with exactly one endpoint in $S$. The LP-relaxation of the TSP with the subtour elimination constraints reads
\[
\begin{array}{lll}
\min & \displaystyle\sum_{e \in E} c_e x_e & \\
\text{s.t.} & \displaystyle\sum_{e \in \delta(i)} x_e = 2, & i \in V,\\[1ex]
& \displaystyle\sum_{e \in \delta(S)} x_e \ge 2, & \emptyset \neq S \subsetneq V,\\[1ex]
& 0 \le x_e \le 1, & e \in E.
\end{array}
\]
Clearly there are exponentially many of them, so let us see how we can solve the separation problem over the subtour elimination constraints.

If we interpret $x_e$ as the capacity of the edge $e$, then $\sum_{e \in \delta(S)} x_e$ can be interpreted as the capacity of the cut defined by $S$. Thus, the subtour elimination constraint w.r.t. $S$ is violated if and only if $\sum_{e \in \delta(S)} x_e < 2$, i.e., $S$ defines a cut of capacity less than 2. Hence, checking if some subtour elimination constraint is violated is a matter of finding the minimum cut in $G$ with capacities $x_e$ on the edges, which can be done in polynomial time.
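A sketch of this separation step (Python, using networkx's Stoer-Wagner global minimum cut; assuming networkx is available, and all names are mine): put capacity $x_e$ on each edge, compute a global minimum cut, and report the corresponding set $S$ if its capacity is below 2.

```python
import networkx as nx

def separate_subtour(edges, x):
    """Separation over the subtour elimination constraints: given LP values
    x[e] interpreted as capacities, find S with sum_{e in delta(S)} x_e < 2,
    or return None if no such constraint is violated."""
    G = nx.Graph()
    for (u, v) in edges:
        G.add_edge(u, v, weight=x[(u, v)])
    cut_value, (S, _) = nx.stoer_wagner(G)       # global minimum cut w.r.t. the capacities x
    if cut_value < 2:
        return S                                 # the subtour constraint for S is violated
    return None

# Example: two triangles joined by one edge with a small LP value.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("d", "e"), ("e", "f"), ("f", "d"), ("c", "d")]
x = {e: 1.0 for e in edges}
x[("c", "d")] = 0.5                              # the cut around {a,b,c} has capacity 0.5 < 2
print(separate_subtour(edges, x))                # e.g. ['a', 'b', 'c']
```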
The optimal solution of this LP-relaxation is usually not integer, but gives good lower bounds. The optimal value of the LP-relaxation is conjectured to be at least 3/4 of the integer optimal value.
Exercises of Week 8
8.9