Chapter 8 - Linear Programming


1

Chapter 8: Linear Programming


Scientific Computing

SoICT 2023

2
Contents
1) Simplex method
1. The canonical and standard form of linear
programming problems
2. Basic feasible solution
3. Formula for incremental change of the objective
function. Optimality test
4. The algebra of the simplex method
5. The simplex method in tabular form
6. The simplex method: termination
7. Two-phase simplex method
2) Duality theory
1. Dual problem
2. Duality theory
3. Some applications of duality theory

3
THE SIMPLEX METHOD

The simplex method is the basic algorithm for solving linear programming problems.

4
Content

1. The canonical and standard form of linear


programming problems
2. Basic feasible solution
3. Formula for incremental change of the objective
function. Optimality test
4. The algebra of the simplex method
5. The simplex method in tabular form
6. The simplex method: termination
7. Two-phase simplex method

5
The canonical and standard form of
the linear programming problems

6
General form of the linear programming problem

• The general form of the linear programming problem is the optimization problem in which we have to find the maximum (minimum) of a linear objective function subject to the condition that the variables satisfy a number of linear equations and inequalities. The mathematical model of the problem can be stated as follows:

f(x1, x2, ..., xn) = Σnj=1 cj xj → min(max),   (1)
subject to
Σnj=1 aij xj = bi,  i = 1, 2, ..., p (p ≤ m)   (2)
Σnj=1 aij xj ≥ bi,  i = p + 1, p + 2, ..., m   (3)
xj ≥ 0,  j = 1, 2, ..., q (q ≤ n)   (4)
xj <> 0,  j = q + 1, q + 2, ..., n   (5)

• Notation xj <> 0 shows that variable xj is unrestricted in sign.
General form of the linear programming problem

• Constraints
Σnj=1 aij xj = bi,  i = 1, ..., p
are called basic functional constraints in equation form.

• Constraints
Σnj=1 aij xj ≥ bi,  i = p + 1, ..., m
are called basic functional constraints in inequality form.

• Constraints
xj ≥ 0,  j = 1, ..., q
are called nonnegativity constraints on variables.
Canonical form of the linear programming problem

f(x1, x2, ..., xn) = Σnj=1 cj xj → min,
Σnj=1 aij xj = bi,  i = 1, 2, ..., m
xj ≥ 0,  j = 1, 2, ..., n
Standard form of the linear programming problem

f(x1, x2, ..., xn) = Σnj=1 cj xj → min,
Σnj=1 aij xj ≥ bi,  i = 1, 2, ..., m
xj ≥ 0,  j = 1, 2, ..., n
Transform general form to canonical form
• Obviously, the canonical form is a special case of the general form.

General form:
f(x1, x2, ..., xn) = Σnj=1 cj xj → min(max),   (1)
Σnj=1 aij xj = bi,  i = 1, 2, ..., p (p ≤ m)   (2)
Σnj=1 aij xj ≥ bi,  i = p + 1, p + 2, ..., m   (3)
xj ≥ 0,  j = 1, 2, ..., q (q ≤ n)   (4)
xj <> 0,  j = q + 1, q + 2, ..., n   (5)

Canonical form:
f(x1, x2, ..., xn) = Σnj=1 cj xj → min,
Σnj=1 aij xj = bi,  i = 1, 2, ..., m
xj ≥ 0,  j = 1, 2, ..., n

• On the other hand, any linear programming problem can always be


transformed to the canonical form by using the following
transformations:

11
Transform general form to canonical form

a) Change the inequality form "≤" to the form "≥". Linear inequality
Σnj=1 aij xj ≤ bi
is equivalent to the following linear inequality:
−Σnj=1 aij xj ≥ −bi
Transform general form to canonical form

b) Change the equality form "=" to the form "≥". Linear equation
Σnj=1 aij xj = bi
is equivalent to the following two linear inequalities:
Σnj=1 aij xj ≥ bi
−Σnj=1 aij xj ≥ −bi
Transform general form to canonical form

c) Change the inequality form "≥" to the form "=". Linear inequality
Σnj=1 aij xj ≥ bi   (1)
is "equivalent" to an equality constraint together with a nonnegativity constraint on a new variable:
Σnj=1 aij xj − yi = bi,  yi ≥ 0   (2)
• "Equivalent" means that if (x1, x2, ..., xn, yi) is a solution to (2), then (x1, x2, ..., xn) is a solution to (1), and conversely.
• Variable yi is called a surplus variable.
Transform general form to canonical form
d) Replace each variable xj that is unrestricted in sign by two sign-restricted (nonnegative) variables:
xj = xj+ − xj−,
xj+ ≥ 0, xj− ≥ 0.

e) Transform a maximization problem into a minimization problem.
max {f(x): x ∈ D}
is equivalent to the optimization problem
min {−f(x): x ∈ D}
in the sense that a solution to one problem is also a solution to the other, and vice versa. We have the equation:
max {f(x): x ∈ D} = −min {−f(x): x ∈ D}
Transform general form to canonical form

f) Change the form "≤" to the form "=". Linear inequality
Σnj=1 aij xj ≤ bi   (1)
is "equivalent" to an equality constraint together with a nonnegativity constraint on a new variable:
Σnj=1 aij xj + yi = bi,  yi ≥ 0   (2)
• "Equivalent" means that if (x1, x2, ..., xn, yi) is a solution to (2), then (x1, x2, ..., xn) is a solution to (1), and conversely.
• Variable yi is called a slack variable.
Example

• Linear programming problem


x1 + 2x2 - 3x3 + 4x4 → max,
x1 + 5x2 + 4x3+ 6x4 ≤ 15,
x1 + 2x2 - 3x3+ 3x4 = 9,
x1 , x2 , x4 ≥ 0, x3< >0,
is equivalent to linear programming problem in canonical
form:

17
−x1 − 2x2 + 3(x3+ − x3−) − 4x4 → min,
x1 + 5x2 + 4(x3+ − x3−) + 6x4 + x5 = 15,
x1 + 2x2 − 3(x3+ − x3−) + 3x4 = 9,
x1, x2, x3+, x3−, x4, x5 ≥ 0.

This means that if (x1*, x2*, x3+*, x3−*, x4*, x5*) is an optimal solution to the linear programming problem in canonical form, then (x1*, x2*, x3*, x4*), where x3* = x3+* − x3−*, is an optimal solution to the original problem.
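As a quick sanity check, the transformations above can be verified numerically on this example: a point feasible for the original problem maps to a feasible point of the canonical problem, and the objectives negate each other. The sketch below uses assumed helper names that are not part of the original slides.

```python
# Sketch: verify the general-form -> canonical-form transformation on the
# slide's example. Helper names are illustrative, not from the slides.

def original_feasible(x1, x2, x3, x4, tol=1e-9):
    """Feasibility in the original (general) form; x3 is unrestricted."""
    return (x1 + 5*x2 + 4*x3 + 6*x4 <= 15 + tol
            and abs(x1 + 2*x2 - 3*x3 + 3*x4 - 9) <= tol
            and min(x1, x2, x4) >= -tol)

def to_canonical(x1, x2, x3, x4):
    """Map to (x1, x2, x3+, x3-, x4, x5) with slack x5 for the <= row."""
    x3p, x3m = max(x3, 0.0), max(-x3, 0.0)      # split the free variable
    x5 = 15 - (x1 + 5*x2 + 4*x3 + 6*x4)         # slack variable
    return (x1, x2, x3p, x3m, x4, x5)

def canonical_feasible(v, tol=1e-9):
    x1, x2, x3p, x3m, x4, x5 = v
    return (abs(x1 + 5*x2 + 4*(x3p - x3m) + 6*x4 + x5 - 15) <= tol
            and abs(x1 + 2*x2 - 3*(x3p - x3m) + 3*x4 - 9) <= tol
            and min(v) >= -tol)

def f_original(x1, x2, x3, x4):   # maximized in the original problem
    return x1 + 2*x2 - 3*x3 + 4*x4

def f_canonical(v):               # minimized in the canonical problem
    x1, x2, x3p, x3m, x4, _ = v
    return -x1 - 2*x2 + 3*(x3p - x3m) - 4*x4

point = (9.0, 0.0, 0.0, 0.0)      # a feasible point of the original problem
v = to_canonical(*point)
assert original_feasible(*point) and canonical_feasible(v)
assert abs(f_original(*point) + f_canonical(v)) < 1e-9  # objectives negate
```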

18
Solve linear programming problem
graphically

19
Solve linear programming problem graphically

• Linear programming problem in standard form with 2


variables
f(x1, x2) = c1x1 + c2x2 → min,
subject to
ai1 x1 + ai2 x2 ≥ bi, i = 1, 2, ..., m
• Denote
D = {(x1,x2): ai1 x1 + ai2 x2 ≥ bi, i = 1, 2, ..., m}
the feasible region.

20
Solve linear programming problem graphically

• Each linear inequality
ai1x1 + ai2x2 ≥ bi, i = 1, 2, ..., m
corresponds to a half-plane whose boundary line delimits what is permitted by the constraint.

⇒ The feasible region D, determined as the intersection of these m half-planes, is a convex polygon in the plane.

21
Solve linear programming problem graphically

• The equation
c1x1 + c2x2 = α
has normal vector (c1, c2).
As α changes, it determines a family of parallel lines that we call contour lines (with value α).
Each point u = (u1, u2) ∈ D lies on the contour line with value
αu = c1u1 + c2u2 = f(u1, u2)

22
Example 1

• Solve the following linear programming problem:


x1 – x2 → min
2 x1 + x2 ≥ 2,
– x1 – x2 ≥ – 7,
– x1 + x2 ≥ – 2,
x1 ≥ 0, x2 ≥ 0.

23
Optimal solution
x1 – x2 → min
2 x1 + x2 ≥ 2,
– x1 – x2 ≥ – 7,
– x1 + x2 ≥ – 2,
x1 ≥ 0, x2 ≥ 0.

(Figure: contour lines x1 − x2 = −7 and x1 − x2 = 0; feasible region D = M1M2M3M4M5.)


24
Example 1

• Using geometric reasoning, we find that the optimal solution of the problem is the point M2(0, 7): x* = (0, 7), with the optimal value f* = −7.
• If we replace the objective function of the problem by
−x1 + x2 → min,
then the optimal value will be −2, and all points on the segment M3M4 are optimal solutions of the problem. For example, the optimal solution can be taken as x* = (2, 0) (corresponding to the point M4).
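Since the remark below says an optimal solution is attained at a corner point, Example 1 can also be checked by brute force: enumerate all pairwise intersections of the constraint boundary lines, keep the feasible ones, and take the minimum of the objective over them. A small sketch (helper names are illustrative):

```python
from itertools import combinations

# Constraints of Example 1, each written as a*x1 + b*x2 >= c
cons = [(2, 1, 2), (-1, -1, -7), (-1, 1, -2), (1, 0, 0), (0, 1, 0)]

def intersect(l1, l2):
    """Intersection of boundary lines a*x1 + b*x2 = c, or None if parallel."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1*b2 - a2*b1
    if abs(det) < 1e-12:
        return None
    # Cramer's rule for the 2x2 system
    return ((c1*b2 - c2*b1) / det, (a1*c2 - a2*c1) / det)

def feasible(p, tol=1e-9):
    return all(a*p[0] + b*p[1] >= c - tol for a, b, c in cons)

# Corner points = feasible intersections of two constraint boundaries
corners = {p for l1, l2 in combinations(cons, 2)
           if (p := intersect(l1, l2)) and feasible(p)}
best = min(corners, key=lambda p: p[0] - p[1])   # objective x1 - x2
# best is the point M2 = (0, 7), with objective value -7
```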

25
– x1+ x2 → min
2 x1 + x2 ≥ 2,
– x1 – x2 ≥ – 7,
– x1 + x2 ≥ – 2,
x1 ≥ 0, x2 ≥ 0.
(Figure: contour lines −x1 + x2 = 7 and −x1 + x2 = 0, direction −c; optimal solutions: segment M3M4.)

26
Comments

• In both cases, we always find the optimal solution as a


corner point of the feasible region (a point of
intersection of constraint boundaries).
• “The linear programming problem in the plane always
has the optimal solution as a corner point of the
feasible region.”
• This important geometrical remark has led to the
proposal of the simplex method to solve the linear
programming problem (LP).

27
Example 2

13 x1 + 23 x2 → max
5 x1 + 15 x2 ≤ 480
4 x1 + 4 x2 ≤ 160
35 x1 + 20 x2 ≤ 1190
x1 , x2 ≥ 0

28
Feasible region

13x1 + 23x2 → max
5x1 + 15x2 ≤ 480
4x1 + 4x2 ≤ 160
35x1 + 20x2 ≤ 1190
x1, x2 ≥ 0

(Figure: feasible region bounded by the three constraints, with corner points (0, 0), (34, 0), (26, 14), (12, 28), (0, 32).)

29
Objective function

(Figure: contour lines 13x1 + 23x2 = 442, 800, 1600 drawn over the feasible region with corner points (0, 0), (34, 0), (26, 14), (12, 28), (0, 32).)

30
Geometric meaning

• There exists an optimal solution that is a corner point of the feasible region.

(Figure: the feasible region with its corner points (0, 0), (34, 0), (26, 14), (12, 28), (0, 32) highlighted.)

31
Geometric meaning of LP
• The feasible region is a polyhedral convex set.
• Convex: if y and z are feasible solutions, then αy + (1 − α)z is also a feasible solution for all 0 ≤ α ≤ 1.
• Corner: a feasible solution x which cannot be expressed as αy + (1 − α)z, 0 < α < 1, for any distinct feasible solutions y and z.

(P)  max Σnj=1 cj xj
     Σnj=1 aij xj ≤ bi,  1 ≤ i ≤ m
     xj ≥ 0,  1 ≤ j ≤ n

(Figure: a convex region with a corner point marked, and a non-convex region with points y, z whose connecting segment leaves the region.)
32
Geometric meaning of LP

• Conclusion: If the problem has an optimal solution, then it always has an optimal solution that is a corner of the feasible region, even in higher dimensions.
⇒ It suffices to search for the optimal solution among a finite number of feasible solutions.

33
Simplex algorithm

• Simplex Algorithm.
(Dantzig 1947)
• The algorithm starts from
any corner of the feasible
region and repeatedly
moves to an adjacent
corner of better objective
value, if one exists. When
it gets to a corner with no
better neighbor, it stops:
this is the optimal
solution.
• The algorithm is finite but
has exponential
complexity.

34
Some notations and definitions

• In the following sections, we will only work with the


linear programming in canonical form (LP-C):
Find min:
f(x1, x2, ..., xn) = Σnj=1 cj xj → min,
subject to
Σnj=1 aij xj = bi,  i = 1, 2, ..., m
xj ≥ 0,  j = 1, 2, ..., n.

35
Some notations and definitions

• Notations:

x=(x1, x2, ..., xn)T – variable vector


c=(c1, c2, ..., cn)T – objective function
coefficient vector
A = (aij)m×n – constraint matrix
b=(b1,...,bm)T- constraint vector (right side)

36
Some notations and definitions

• We can rewrite the problem in matrix form:


f(x) = cTx → min,
Ax = b, x ≥ 0
or
min{ f(x) = cTx : Ax = b, x ≥ 0}

• Vector inequality:
y = (y1, y2, ..., yk) ≥ 0
means that each component:
yi ≥ 0 , i = 1, 2, ..., k.

37
Some notations and definitions

• Denote index set:


J = {1,2,...,n} indices for variables
I = {1,2,...,m} indices for constraints
Then we use the following symbols
x = x(J) = {xj: j∈J} – variable vector;
c = c(J) = {cj: j∈J} – objective function coefficient vector;
A = A(I, J) = {aij: i∈I, j∈J} – constraint matrix
Aj = (aij: i∈I) – jth column vector of matrix A.
Basic constraint equations of the LP-C can also be written in the
form:
A1x1+ A2x2+...+ Anxn= b

38
Some notations and definitions

• The set
D = {x: Ax = b, x ≥ 0}
is called the constraint region (feasible region), and each x ∈ D is called a feasible solution.
• A feasible solution x* giving the smallest value of the objective function, i.e.,
cTx* ≤ cTx for all x ∈ D,
is called an optimal solution of the problem, and the value
f* = cTx*
is called the optimal value of the problem.

39
BASIC FEASIBLE SOLUTION

The concept of a basic feasible solution (BF solution) is a central concept in the simplex algorithm.

40
Basic feasible solution

• Firstly, we assume that


rank (A) = m (*)
that is, the system of basic constraint equations
consists of m linearly independent equations.
• Note: In fact, whenever the system of linear equations Ax = b is consistent, linearly dependent equations can be removed, so assumption (*) can be made without loss of generality.
• We will remove these assumptions later.

41
Basic feasible solution

• Definition 1. A basis of matrix A is a set of m linearly independent column vectors B = {Aj1, Aj2, ..., Ajm}.
Assume B = A(I, JB), where JB = {j1, ..., jm} indexes a basis of matrix A.
Then the vector x = (x1, ..., xn) such that:
xj = 0, j ∈ J \ JB;
xjk is the kth element of vector B−1b (k = 1, ..., m),
is called the basic solution corresponding to basis B.
Variables xj, j ∈ JB, are called basic variables, and xj, j ∈ JN, are called nonbasic variables.

42
Basic feasible solution

• Thus, if we denote
xB = x(JB), xN = x(JN)

then the basic solution x corresponding to basis B
could be determined by the following procedure:
1. Set xN=0.
2. Determine xB from equations BxB = b.
• Using the assumption (*), we see that the problem
always has the basic solution.

43
Basic feasible solution

• Assume x = (xB, xN) is the basic solution corresponding to basis B. Then LP-C can be rewritten as follows:

f(xB, xN) = cBxB + cNxN → min
BxB + NxN = b,
xB, xN ≥ 0,

where N = (Aj: j∈JN) is called the non-basic matrix of A.

44
Basic feasible solution

• Consider the LP:


6x1 + 2x2 − 5x3 + x4 + 4x5 − 3x6 + 12x7 → min
x1 + x2 + x3 + x4 =4
x1 + x5 =2
x3 + x6 =3
3x2 + x3 + x7 =6
x1, x2 , x3 , x4 , x5 , x6 , x7 ≥ 0

45
Basic feasible solution

• In the matrix form:


c = (6, 2, –5, 1, 4, –3, 12)T;
b = (4, 2, 3, 6)T;
1 1 1 1 0 0 0
1 0 0 0 1 0 0
A= 0 0 1 0 0 1 0
0 3 1 0 0 0 1
A1 A2 A3 A4 A5 A6 A7
Basic feasible solution

• Consider basis
B = {A4, A5, A6, A7} = E4
• Basic solution x = (x1, x2, ..., x7) corresponding to B could be
obtained by setting:
x1 = 0, x2 = 0, x3 = 0
and values of xB= (x4, x5, x6, x7) could be obtained by solving
equations
BxB = b or E4xB = b
Then we get: xB = (4, 2, 3, 6).
• Thus basic solution corresponding to basis B is
x = (0, 0, 0, 4, 2, 3, 6)
Basic feasible solution
• Consider basis
B1 = {A2, A5, A6, A7}
• Basic solution y = (y1, y2, ..., y7) corresponding to B1 could be obtained by
setting:
y1 = 0; y3 = 0, y4 = 0
and values of yB= (y2, y5, y6, y7) could be obtained by solving equations
B 1y B = b
or
y2 =4
y5 =2
y6 =3
3y2 + y7 =6
Then we get: yB = (4, 2, 3, – 6).
• Thus, basic solution corresponding to basis B1 is
y = (0, 4, 0, 0, 2, 3, – 6)
Basic feasible solution

• We can see that:
• The basic solution corresponding to basis B,
x = (0, 0, 0, 4, 2, 3, 6),
is a feasible solution.
• The basic solution corresponding to basis B1,
y = (0, 4, 0, 0, 2, 3, −6),
is not a feasible solution (y7 < 0).

Definition. A basic solution is called a basic feasible solution if it is a feasible solution.
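The two-step procedure above (set xN = 0, then solve BxB = b) can be sketched directly with exact rational arithmetic; the helper below is illustrative, not from the slides, and reproduces both basic solutions of this example.

```python
from fractions import Fraction

def solve(B, b):
    """Solve the square system B x = b by Gauss-Jordan elimination (exact)."""
    n = len(b)
    M = [[Fraction(B[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]      # normalize pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:             # eliminate elsewhere
                M[r] = [vr - M[r][col] * vc for vr, vc in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def basic_solution(A, b, basis):
    """Basic solution for the given basis column indices (0-based)."""
    m, n = len(A), len(A[0])
    B = [[A[i][j] for j in basis] for i in range(m)]
    xB = solve(B, b)                                    # step 2: B xB = b
    x = [Fraction(0)] * n                               # step 1: xN = 0
    for j, v in zip(basis, xB):
        x[j] = v
    return x

A = [[1, 1, 1, 1, 0, 0, 0],
     [1, 0, 0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0, 1, 0],
     [0, 3, 1, 0, 0, 0, 1]]
b = [4, 2, 3, 6]

x = basic_solution(A, b, [3, 4, 5, 6])   # basis {A4, A5, A6, A7}
y = basic_solution(A, b, [1, 4, 5, 6])   # basis {A2, A5, A6, A7}
# x = (0, 0, 0, 4, 2, 3, 6) is feasible; y = (0, 4, 0, 0, 2, 3, -6) is not
```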
Basic feasible solution

• How many basic feasible solutions are there in an LP?

• The number of basic feasible solutions ≤ the number of bases ≤ C(n, m).

• Thus, there is a finite number of basic feasible solutions in an LP.
Basic feasible solution

• Does the LP always have a basic feasible solution?

• No!
• Example: If the LP has no feasible solution, then it also has no basic feasible solution!
• However, we can prove the following theorem:

• Theorem 1. If the LP has a feasible solution, then it also has a basic feasible solution.
Formula for incremental change of
the objective function

52
Formula for incremental change of the objective function

• Assume x is a basic feasible solution corresponding to basis


B=(Aj: j∈JB). Denote:
JB={j1,j2, ..., jm} – indices of basic variables;
JN=J \ JB – indices of non-basic variables;
B=(Aj: j∈JB) – basis matrix;
N=(Aj: j∈JN) – non-basic matrix;
xB = x(JB) = {xj: j∈JB}, xN = x(JN) = {xj: j∈JN} – basic
variable vector and non-basic variable vector;
cB = c(JB) = {cj: j∈JB}, cN = c(JN) = {cj: j∈JN} – objective
function coefficient vectors of basic variables and non-
basic variables;

53
Formula for incremental change of the objective function

• Consider a feasible solution z = x + ∆x, where ∆x = (∆x1, ∆x2, ..., ∆xn) is the incremental change vector of the variables. We find a formula to calculate the incremental change of the objective function:
∆f = c′z − c′x = c′∆x.
• Since x and z are both feasible solutions, Ax = b and Az = b. Therefore, the incremental change ∆x must satisfy the condition A∆x = 0, which means:
B∆xB + N∆xN = 0,
where ∆xB = (∆xj: j∈JB), ∆xN = (∆xj: j∈JN).

54
Formula for incremental change of the objective function

• Thus
∆xB = −B−1N∆xN.   (1.10)
• So we have
c′∆x = c′B∆xB + c′N∆xN = −(c′BB−1N − c′N)∆xN.
• Denote:
u = c′BB−1 – the multiplier (transposed) vector,
∆N = (∆j: j∈JN) = uN − c′N – the estimate vector.
We get the formula:
∆f = c′z − c′x = −∆N∆xN = −Σj∈JN ∆j∆xj

• The obtained formula is called formula for incremental change


of the objective function

55
Optimality criterion (Optimality test)

56
Optimality criterion (Optimality test)

• Definition. A basic feasible solution x is said to be nondegenerate if all of its basic components are different from 0. The LP is said to be nondegenerate if all of its basic feasible solutions are nondegenerate.

• Theorem 2. (Optimality criterion) Inequality


∆N ≤ 0 (∆j ≤ 0, j ∈JN) (1.13)
is a sufficient condition and in the non-degenerate case is also a
necessary condition for basic feasible solution x to be optimal.

57
Sufficient condition: objective function is unbounded

• Theorem 3. If among the ∆j values of basic feasible solution x there is a positive value ∆j0 > 0, and the corresponding elements of vector B−1Aj0 satisfy B−1Aj0 ≤ 0, then the objective function of the problem is unbounded.

59
SIMPLEX METHOD IN THE MATRIX FORM

60
Simplex iterations

• We continue to analyze the basic feasible solution x corresponding to the basis B. Consider the case when the optimality criterion and the sufficient condition for the objective function to be unbounded are not fulfilled. Then we can find j0 such that ∆j0 > 0.
• From the proof of the necessary condition of the optimality criterion, we can see that it is possible to build a feasible solution x̄ with a smaller value of the objective function. We have the formula to build x̄:
x̄ = x + ∆x,
where the vector ∆x is determined as follows:
∆xj0 = θ, ∆xj = 0, j ≠ j0, j ∈ JN,
∆xB = −θB−1Aj0
61
Simplex iterations
• Then we get
x̄N = ∆xN,  x̄B = xB − θB−1Aj0,
and the value of the objective function at x̄ is
c′x̄ = c′x − θ∆j0
• Clearly, the larger θ is, the more the objective function value is reduced. We are interested in the largest possible value of θ. Since we need to choose θ such that x̄ is a feasible solution, it must satisfy the following linear inequalities:
x̄B = xB − θB−1Aj0 ≥ 0, θ ≥ 0.
• Denoting B−1Aj0 = {xi,j0: i∈JB}, we rewrite the linear inequalities as follows:
xi − θxi,j0 ≥ 0, i∈JB,
θ ≥ 0.
62
Simplex iterations
• Denote
θi = xi / xi,j0 when xi,j0 > 0, and θi = +∞ when xi,j0 ≤ 0, i ∈ JB
θ0 = θi0 = xi0 / xi0,j0 = min {θi : i ∈ JB}

• The solution set of these inequalities is
0 ≤ θ ≤ θ0.
• From this, the largest possible value of θ is θ0. If we replace x by x̄ = x(θ0) = x + ∆x with θ = θ0, the value of the objective function is reduced by the amount θ0∆j0 > 0.

63
Simplex iterations
• We will show that x̄ is also a basic feasible solution. Clearly, x̄j = 0 for j ∈ (JN \ {j0}) ∪ {i0}. Let
J̄B = (JB \ {i0}) ∪ {j0},
B̄ = (Aj : j ∈ J̄B). (B̄ is obtained from B by replacing column Ai0 with column Aj0.) We have
B̄ = BV,
where V is the identity matrix with its i0th column replaced by B−1Aj0, so
det(B−1B̄) = det V = xi0,j0 ≠ 0.
Therefore det B̄ ≠ 0, i.e., B̄ is a basis of the problem. Consequently, x̄ is a basic feasible solution.
We call the conversion from the basic feasible solution x to the basic feasible solution x̄ according to the procedure described above a simplex iteration on the basic feasible solution x.
64
Simplex method in the matrix form

• In the calculations of a simplex iteration, the matrix B−1 plays an important role. The algorithm below is described using the inverse matrix B−1.
• Initialization step.
Find a basic feasible solution x with corresponding
basis B.
Calculate B-1.

65
Simplex method in the matrix form
• Step k = 1, 2, ...: At the beginning of each step, we have the matrix Bk−1 (B1 = B), the basic variable index set JBk, JNk = J \ JBk, and the basic feasible solution xk = (xBk, xNk = 0).
1) Calculate uk = c′BkBk−1 (corresponding to solving the equations uBk = c′Bk).
2) Calculate ∆j = ukAj − cj, j ∈ JNk.
3) If ∆j ≤ 0, ∀j ∈ JNk, then the algorithm finishes and xk is the optimal solution.
4) If among the values ∆j, j ∈ JNk, there is still a positive one, then select j0 with ∆j0 > 0.
5) Calculate xi,j0, i ∈ JBk, the elements of the vector Bk−1Aj0 (corresponding to solving the equations Bkw = Aj0).
6) If xi,j0 ≤ 0, ∀i ∈ JBk, then the objective function of the problem is unbounded. The algorithm finishes.
66
Simplex method in the matrix form

7) Calculate
θ0 = xi0 / xi0,j0 = min {xi / xi,j0 : xi,j0 > 0, i ∈ JBk}.
8) Set
JBk+1 = (JBk \ {i0}) ∪ {j0},  xk+1 = xk + ∆x.
Calculate the inverse matrix Bk+1−1 of Bk+1, go to step k + 1.

67
Simplex method in the matrix form

Note. In 4), ∆j0 can be any positive value. However, in case |JNk| is not too large, we can choose its value by the following rule:

∆j0 = max {∆j : j ∈ JNk}.

To calculate the inverse matrix Bk+1−1 of B̄ = Bk+1 from the inverse matrix Bk−1 of B = Bk, using their relationship in formula (1.21), B̄ = BV, we have:

Bk+1−1 = V−1Bk−1.
68
Simplex method in the matrix form

• Since V−1 differs from the identity matrix only in column i0, denote Bk−1 = (βij). Then, from (1.22), (1.23), we get the following formula to calculate the elements β̄ij of Bk+1−1:
β̄i0,j = βi0,j / xi0,j0,
β̄ij = βij − (xi,j0 / xi0,j0)βi0,j, i ≠ i0.

69
THE SIMPLEX METHOD IN
TABULAR FORM

70
The simplex method in Tabular form

• When we need to solve the problem by hand, we recommend the tabular form described in this section.
• Suppose we have a basic feasible solution x corresponding to basis B. The notations are the same as before.
• The simplex tableau corresponding to the basic feasible (BF) solution x is the following table:

71
Simplex tableau

cj      Basis   Solution   c1 ... cj ... cn    θ
                           A1 ... Aj ... An
cj1     Aj1     xj1        xj1,j               θj1
...     ...     ...        ...                 ...
ci      Ai      xi         xi,j                θi
...     ...     ...        ...                 ...
cjm     Ajm     xjm        xjm,j               θjm
        ∆                  ∆1 ... ∆j ... ∆n
Simplex tableau
• The first column contains the objective function coefficients of the basic variables.
• The second column contains the names of the basis columns.
• The third column contains the values of the basic variables (elements of vector xB = {xj : j∈JB} = B−1b).
• Elements xij, i∈JB, written in the next columns, are calculated by the formula:
{xij, i∈JB} = B−1Aj, j = 1, 2, ..., n.
• The last column contains the values of the ratios θi.
• The first row: the objective function coefficients of the variables (cj).
• The next row contains the names of the columns A1, ..., An.
• The last row is called the estimate line:
∆j = Σi∈JB ci xij − cj, j = 1, 2, ..., n.
• We can see that ∆j = 0, j∈JB.
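Using the earlier small example (c = (6, 2, −5, 1, 4, −3, 12)) with basis B = {A4, A5, A6, A7} = E4, so that B−1Aj = Aj, the estimate line ∆j = Σ ci xij − cj can be computed directly. A brief illustrative check:

```python
# Data of the earlier example; basis columns A4, A5, A6, A7 form E4.
c = [6, 2, -5, 1, 4, -3, 12]
A = [[1, 1, 1, 1, 0, 0, 0],
     [1, 0, 0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0, 1, 0],
     [0, 3, 1, 0, 0, 0, 1]]
basis = [3, 4, 5, 6]               # 0-based indices of A4, A5, A6, A7
cB = [c[j] for j in basis]         # row i holds basic variable of column basis[i]

# Since B = E4, the tableau entries x_ij are simply the columns of A.
delta = [sum(cB[i] * A[i][j] for i in range(4)) - c[j] for j in range(7)]
# delta == [-1, 35, 15, 0, 0, 0, 0]; as expected, delta_j = 0 for j in basis,
# and the positive entries (35, 15) show this BF solution is not yet optimal.
```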
73
The simplex method in Tabular form
With the simplex tableau, we can run a simplex iteration from the current basic feasible solution x as follows:
1. Optimality test: If the elements of the estimate line are not positive (∆j ≤ 0, j = 1, ..., n), then the current feasible solution x is optimal; the algorithm finishes.
2. Test the sufficient condition for the objective function to be unbounded: if there is ∆j0 > 0 while its corresponding elements in the simplex tableau satisfy xi,j0 ≤ 0, i ∈ JB, then the objective function is unbounded; the algorithm finishes.
3. Find the pivot column: Find ∆j0 = max {∆j : j = 1, ..., n} > 0. Column Aj0 is called the pivot column (the column entering the basis), and xj0 is called the entering basic variable.
74
Find the pivot column

cj Basis Solution c1 ... c j0 ... cn


basic θ
A1 ... Aj0 ... An

c j1 Aj1 x j1 x j1 j0 θ j1

... ... ... ... ...

ci Ai xi x i j0 θi

... ... ... ... ...

c jm Ajm x jm x jm j0 θ jm

∆ ∆1 ... ∆j0 ... ∆n


The simplex method in Tabular form
4. Find the pivot row: Calculate
θ0 = θi0 = xi0 / xi0,j0 = min {xi / xi,j0 : xi,j0 > 0, i ∈ JB}.

Row Ai0 is called the pivot row (the row leaving the basis), and xi0 is called the leaving basic variable. Put a box around this pivot row and a box around the pivot column. We call the number xi0,j0 that is in both boxes the pivot number.
5. Perform a transformation from the basic feasible solution x to a basic feasible solution x̄: the simplex tableau corresponding to x̄ (referred to as the new tableau) can be obtained from the simplex tableau corresponding to x (referred to as the old tableau) using the following transformation rules (derived directly from formulas (1.22), (1.23)):

76
Find the pivot row

cj Basis Solution c1 ... c j0 ... cn


basic θ
A1 ... Aj0 ... An

c j1 Aj1 x j1 x j1 j0 θ j1

... ... ... ... ...

c i0 Ai0 x i0 x i0 j0 θ i0

... ... ... ... ...

c jm Ajm x jm x jm j0 θ jm

∆ ∆1 ... ∆j0 ... ∆n


Simplex transformation rules

i. Elements of the pivot row of the new tableau (x̄i0,j) equal the corresponding elements of the old tableau divided by the pivot number: x̄i0,j = xi0,j / xi0,j0, j ∈ J.
ii. Elements of the pivot column of the new tableau: the element in the position of the pivot number equals 1; all remaining elements equal 0.
iii. The remaining elements of the new tableau (x̄ij, ∆̄j) can be calculated from the corresponding elements of the old tableau according to the following formulas:
x̄ij = xij − (xi,j0 / xi0,j0)xi0,j, i ≠ i0,
∆̄j = ∆j − (∆j0 / xi0,j0)xi0,j.
78
The simplex method in Tabular form

• The above formulas can be easily memorized using the following rectangles.

Thus, the formulas above are also called the rectangle rule.

Note. Transformations i), ii), iii) are completely determined once we know the pivot number xi0,j0. We call them simplex transformations with pivot number xi0,j0.
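Rules i)-iii) amount to one Gauss-Jordan elimination step on the pivot number. A sketch applying them to the first tableau of the worked example that follows (pivot row A7, pivot column A1, pivot number 3), with exact fractions to avoid rounding:

```python
from fractions import Fraction as F

# Rows of the first tableau of the worked example: [solution | x_i1 .. x_i7]
tab = [[F(v) for v in row] for row in (
    [9, 1, 0,  0, 1, 0, 6, 0],    # row A4
    [2, 3, 1, -4, 0, 0, 2, 1],    # row A7 (pivot row)
    [6, 1, 2,  0, 0, 1, 2, 0],    # row A5
)]
r0, c0 = 1, 1                     # pivot row/column; pivot number tab[1][1] = 3

piv = tab[r0][c0]
tab[r0] = [v / piv for v in tab[r0]]                 # rule i)
for r in range(len(tab)):
    if r != r0 and tab[r][c0] != 0:                  # rules ii) and iii)
        tab[r] = [v - tab[r][c0] * p for v, p in zip(tab[r], tab[r0])]

# New row A4 is (25/3 | 0, -1/3, 4/3, 1, 0, 16/3, -1/3) and new row A5 is
# (16/3 | 0, 5/3, 4/3, 0, 1, 4/3, -1/3), matching the step-2 tableau.
```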

79
The rectangular rule

cj Basis Solution c1 ... c j0 ... cn


basic θ
A1 ... Aj0 ... An

c j1 Aj1 x j1 xj11 x j1 j0 x j1 n θ j1

... ... ... ... ...

c i0 Ai0 x i0 x i0 1 x i0 j0 x i0 n θ i0

... ... ... ... ...

c jm Ajm x jm xjm1 x jm j0 x jm n θ jm

∆ ∆1 ... ∆j0 ... ∆n


Example

• Example: Solve the following LP using the simplex method (the problem data can be read off the first simplex tableau below, since the starting basis is the identity matrix):

x1 − 6x2 + 32x3 + x4 + x5 + 10x6 + 100x7 → min
x1 + x4 + 6x6 = 9
3x1 + x2 − 4x3 + 2x6 + x7 = 2
x1 + 2x2 + x5 + 2x6 = 6
xj ≥ 0, j = 1, ..., 7

Starting from the basic feasible solution x0 = (0, 0, 0, 9, 6, 0, 2) corresponding to basis B = {A4, A7, A5}. As the basis B is the unit matrix, we can easily build the simplex tableau for the basic feasible solution x0.

81
Simplex tableau

cj Basis Solution 1 -6 32 1 1 10 100


basic θ
A1 A2 A3 A4 A5 A6 A7

1 A4 9 1 0 0 1 0 6 0 9

100 A7 2 3 1 -4 0 0 2 1 2/3

1 A5 6 1 2 0 0 1 2 0 6

∆ 301 108 -432 0 0 198 0

Find the pivot column: The pivot column is the one with the maximum value of ∆j (here column A1, with ∆1 = 301).
Simplex tableau

cj Basis Solution 1 -6 32 1 1 10 100


basic θ
A1 A2 A3 A4 A5 A6 A7

1 A4 9 1 0 0 1 0 6 0 9

100 A7 2 3 1 -4 0 0 2 1 2/3

1 A5 6 1 2 0 0 1 2 0 6

∆ 301 108 -432 0 0 198 0

Find the pivot row: Calculate the values of θi. The pivot row is the one with the minimum value of θi (here row A7, with θ = 2/3).
Change the simplex tableau

cj Basic Solution 1 -6 32 1 1 10 100


basic θ
A1 A2 A3 A4 A5 A6 A7

1 A4

1 A1 2/3 1 1/3 -4/3 0 0 2/3 1/3 2

1 A5

Change the tableau: Elements of the pivot row in the new tableau = corresponding elements of the old tableau divided by the pivot number.
Change the simplex tableau

cj Basis Solution 1 -6 32 1 1 10 100


basic θ
A1 A2 A3 A4 A5 A6 A7

1 A4

1 A1 2/3 1 1/3 -4/3 0 0 2/3 1/3 2

1 A5

Change the tableau: The elements on the basis columns are unit vectors.
Change the simplex tableau

cj Basis Solution 1 -6 32 1 1 10 100


basic θ
A1 A2 A3 A4 A5 A6 A7

1 A4 0 1 0

1 A1 2/3 1 1/3 -4/3 0 0 2/3 1/3

1 A5 0 0 1

∆ 0 0 0

Change the tableau: The elements on the basis columns are unit vectors.
Simplex tableau at Step 2

cj Basis Solution 1 -6 32 1 1 10 100


basic θ
A1 A2 A3 A4 A5 A6 A7

1 A4 25/3 0 -1/3 4/3 1 0 16/3 -1/3 -

1 A1 2/3 1 1/3 -4/3 0 0 2/3 1/3 2

1 A5 16/3 0 5/3 4/3 0 1 4/3 -1/3 16/5

∆ 0 23/3 -92/3 0 0 -8/3 -301/3

Change the tableau: The remaining elements are calculated according to the
rectangular rule.
Simplex tableau at Step 3

cj Basis Solution 1 -6 32 1 1 10 100


basic θ
A1 A2 A3 A4 A5 A6 A7

1 A4 9 1 0 0 1 0 6 0

-6 A2 2 3 1 -4 0 0 2 1

1 A5 2 -5 0 8 0 1 -2 -2

∆ -23 0 0 0 0 -18 -108

The optimality criterion is satisfied. The algorithm terminates.


Optimal solution: x* = (0, 2, 0, 9, 2, 0, 0). Optimal value: f* = -1
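The whole worked example can be reproduced with a compact tableau simplex (max-∆ entering rule, min-ratio leaving rule). This is a sketch, not the slides' own code; it assumes, as in the example, that the starting basis columns form the identity, and uses exact rationals:

```python
from fractions import Fraction as F

def simplex(c, A, b, basis):
    """Tableau simplex for min c'x, Ax = b, x >= 0, given a starting basis
    whose columns form the identity matrix (as in the worked example)."""
    m, n = len(A), len(c)
    T = [[F(b[i])] + [F(v) for v in A[i]] for i in range(m)]
    basis = list(basis)
    while True:
        cB = [c[j] for j in basis]
        # Estimate line: delta_j = sum_i cB_i * x_ij - c_j
        delta = [sum(cB[i] * T[i][1 + j] for i in range(m)) - c[j]
                 for j in range(n)]
        j0 = max(range(n), key=lambda j: delta[j])       # entering column
        if delta[j0] <= 0:                               # optimality test
            x = [F(0)] * n
            for i, j in enumerate(basis):
                x[j] = T[i][0]
            return x, sum(c[j] * x[j] for j in range(n))
        ratios = [(T[i][0] / T[i][1 + j0], i)
                  for i in range(m) if T[i][1 + j0] > 0]
        if not ratios:
            raise ValueError("objective function is unbounded")
        _, i0 = min(ratios)                              # leaving row
        piv = T[i0][1 + j0]
        T[i0] = [v / piv for v in T[i0]]                 # rectangle rule
        for i in range(m):
            if i != i0 and T[i][1 + j0] != 0:
                T[i] = [v - T[i][1 + j0] * p for v, p in zip(T[i], T[i0])]
        basis[i0] = j0

# Data of the worked example (basis columns A4, A7, A5 form the identity)
c = [1, -6, 32, 1, 1, 10, 100]
A = [[1, 0,  0, 1, 0, 6, 0],
     [3, 1, -4, 0, 0, 2, 1],
     [1, 2,  0, 0, 1, 2, 0]]
b = [9, 2, 6]
x, f = simplex(c, A, b, basis=[3, 6, 4])
# x == (0, 2, 0, 9, 2, 0, 0) and f == -1, as in the final tableau above
```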
THE TERMINATION OF SIMPLEX METHOD

89
The termination of simplex method

• Theorem 1.4. Assume that the simplex method is


applied to an LP, and that each generated basic
feasible solution is nondegenerate (xB > 0 at each
iteration). Then, in a finite number of iterations, the
method finds an optimal solution or determines that
the problem is unbounded.

90
THE TWO-PHASE SIMPLEX METHOD

91
The two-phase simplex method

• The simplex method described in the previous section is used to solve the LP:
min {c′x : Ax = b, x ≥ 0}   (1.25)
and it is based on the following assumptions:
i) rank A = m;
ii) D = {x : Ax = b, x ≥ 0} ≠ ∅;
iii) A basic feasible solution is already known.
In this section, we build an algorithm to solve the LP without using the above assumptions.

92
Auxiliary LP
• Original LP:
min {Σnj=1 cj xj : Σnj=1 aij xj = bi, i = 1, 2, ..., m, xj ≥ 0, j = 1, 2, ..., n}   (1.25)
• Without loss of generality, we assume
bi ≥ 0, i = 1, 2, ..., m,
because if bi < 0, we just multiply both sides of the corresponding equation by −1.
• From the parameters of the given problem, we construct the following auxiliary LP:
Σmi=1 xn+i → min,
Σnj=1 aij xj + xn+i = bi, i = 1, 2, ..., m,   (1.27)
xj ≥ 0, j = 1, 2, ..., n, n + 1, ..., n + m.
• The vector xu = (xn+1, xn+2, ..., xn+m) is called the vector of auxiliary variables.


Auxiliary LP
The following lemma shows the relationship between problem (1.25) and problem (1.27).
Lemma 1. Problem (1.25) has a feasible solution if and only if the component xu* of an optimal solution (x*, xu*) of problem (1.27) is equal to 0.
Proof.
Necessary condition: Assume x̄ is a feasible solution of (1.25). Then (x̄, 0) is clearly a feasible solution of (1.27). On the other hand, its objective value is 0, while the objective of (1.27) is nonnegative for every feasible solution (x, xu), so (x̄, 0) is an optimal solution; hence xu* = 0.
Sufficient condition: Clearly, if (x*, xu* = 0) is an optimal solution of (1.27), then x* is a feasible solution of (1.25).

94
First phase: Solve auxiliary LP
• The auxiliary LP immediately has a basic feasible solution

(x = 0, xu = b)

corresponding to the basis

B = {An+1, An+2, ..., An+m} = Em

where An+i is the column vector corresponding to auxiliary variable xn+i, i = 1, 2, ..., m.

• So we can apply the simplex algorithm to solve the auxiliary LP. Solving the auxiliary LP using the simplex method is called the first phase of the two-phase simplex method that solves the canonical linear programming problem (1.25), and the auxiliary LP is also called the first-phase problem.
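Constructing the first-phase (auxiliary) problem is mechanical: flip any row with bi < 0, append an identity block for the auxiliary variables, and cost only those variables. A sketch with illustrative names and made-up sample data (not from the slides):

```python
def make_auxiliary(A, b):
    """Build the phase-1 LP: min sum of auxiliary vars, [A | I]x = b, x >= 0."""
    m, n = len(A), len(A[0])
    A2, b2 = [], []
    for i in range(m):
        sign = -1 if b[i] < 0 else 1                    # ensure b_i >= 0
        row = [sign * v for v in A[i]]
        row += [1 if k == i else 0 for k in range(m)]   # auxiliary column
        A2.append(row)
        b2.append(sign * b[i])
    c2 = [0] * n + [1] * m        # objective: sum of auxiliary variables
    x0 = [0] * n + b2             # the immediate BFS (x = 0, xu = b)
    return c2, A2, b2, x0

# Hypothetical 2x2 example; the first row has b_1 < 0 and gets negated.
c2, A2, b2, x0 = make_auxiliary([[1, -1], [2, 1]], [-3, 4])
assert b2 == [3, 4] and x0 == [0, 0, 3, 4]
# x0 satisfies every equation of the auxiliary problem:
assert all(sum(a * x for a, x in zip(row, x0)) == bi
           for row, bi in zip(A2, b2))
```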

95
First phase: Solve auxiliary LP

Once the first phase terminates, we get an optimal basic feasible solution (x*, xu*) corresponding to a basis B* = {Aj : j ∈ JB*} of problem (1.27). One of the following three possibilities can happen:

• i) xu* ≠ 0;

• ii) xu* = 0 and the matrix B* does not contain any column corresponding to auxiliary variables, which means this matrix only contains columns of the constraint matrix of problem (1.25);

• iii) xu* = 0 and the matrix B* contains a column corresponding to an auxiliary variable.
96
First phase: Solve auxiliary LP

We will consider each case one by one:
i) If xu* ≠ 0, then according to Lemma 1, problem (1.25) has no feasible solution; the algorithm finishes.
ii) In this case, x* is a basic feasible solution of problem (1.25) corresponding to basis B*. Starting from it, we can use the simplex method to solve (1.25). This is called the 2nd phase of the two-phase simplex method, and the whole process just described is called the two-phase simplex method for solving problem (1.25).
iii) Denote by xi*, i* ∈ JB* ∩ {n + 1, ..., n + m}, a component of an auxiliary variable in the optimal basic feasible solution (x*, xu*) of (1.27).
Denote by xi*,j, j ∈ {1, ..., n + m}, the elements of row i* in the simplex tableau corresponding to the optimal solution (x*, xu*) of (1.27).

97
Simplex tableau

cj Basis Solution c*1 ... c*j ... c*n+m θ


basic
A1 ... Aj ... An+m

c*j(1) Aj(1) xj(1) xj(1)j

... ... ... ...

c*i* Ai* xi* xi*j

... ... ... ...

c*j(m) Aj(m) xj(m) xj(m)j

∆ ∆1 ... ∆j ... ∆n+m


First phase: Solve auxiliary LP

• If we can find an index j* ∈ {1, ..., n} \ JB* such that xi*,j* ≠ 0, then, performing a transformation on the simplex tableau with the selected pivot number xi*,j*, we get the auxiliary variable xi* leaving the basis, and in its place the variable xj* enters.
• If xi*,j = 0 for all j ∈ {1, ..., n} \ JB*, then the equation corresponding to this row in the linear system Ax = b is a consequence of the remaining equations. In that case, from the simplex tableau corresponding to the optimal solution (x*, xu*) of (1.27), we can delete this row and also delete the column corresponding to the auxiliary variable xi*.

99
Two-phase simplex method

• Driving all auxiliary variables out of the basis according to the procedure we just performed for xi*, we arrive at a new simplex tableau in which there is no auxiliary variable in the basis, that is, we reach case ii); at the same time, in this process we can also remove all linearly dependent constraints in Ax = b.
• From the obtained tableau, we can start the second phase of the two-phase simplex method.

100
Two-phase simplex method

• Thus, the two-phase simplex method applied to any LP can only terminate in one of the following three situations:
1) The problem has no feasible solution.
2) The objective function of the problem is unbounded.
3) An optimal basic feasible solution to the problem is found.
• At the same time, in the process of implementing the method, we also detect and remove all linearly dependent constraints in the basic constraint system Ax = b.

101
Some theoretical results

• Theorem 1. If an LP has a feasible solution, then it also has a basic feasible solution.
• Proof.
Applying the two-phase simplex method to the given LP, at the end of the first phase we obtain a basic feasible solution to the problem.

102
Some theoretical results

• Theorem 1.6. If an LP has an optimal solution, then it also has an optimal basic feasible solution.
• Proof.
Assume the problem has an optimal solution. Then the two-phase simplex method applied to the given LP can only terminate in situation 3), that is, it yields an optimal basic feasible solution.

103
Some theoretical results
• Theorem 1.7. The LP has an optimal solution if and only if its objective function is bounded below on the non-empty feasible region.
• Proof.
• Necessity. Assume x* is an optimal solution of the problem. Then f(x) ≥ f(x*) for all x ∈ D, which means the objective function is bounded below.
• Sufficiency. If the objective function is bounded below on the non-empty feasible region, then the two-phase simplex method applied to the given LP can only terminate in situation 3), that is, it yields an optimal basic feasible solution.

104
Example
• Solve the following LP by using the two-phase simplex
method

105
Example
• The auxiliary problem is

The basic feasible solution to the auxiliary problem is

106
Example: First phase

107
108
109
110
Example: 2nd phase

111
Solution

• Optimal solution:
x* = (0, 2, 0, 3, 0);
• Optimal value:
f* = 2.

112
The efficiency of the simplex method

• One weakness of the simplex method is that, in theory, it has exponential running time. This was shown by Klee and Minty with the following example:

∑_{j=1}^{n} 10^{n−j} x_j → max,

2 ∑_{j=1}^{i−1} 10^{i−j} x_j + x_i ≤ 100^{i−1},  i = 1, 2,..., n,

x_j ≥ 0,  j = 1, 2,..., n

• To solve this problem, the simplex method (with the classical pivoting rule) requires 2^n − 1 iterations.
The efficiency of the simplex method

• Example Klee-Minty with n=3:


100 x1 + 10 x2 + x3 → max
x1 ≤ 1
20 x1 + x2 ≤ 100
200 x1 + 20 x2 + x3 ≤ 10000
x1 , x2 , x3 ≥ 0
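As a numerical check, the n = 3 instance above can be built and solved with scipy's `linprog` (an assumption of this sketch; the slides use MATLAB). A modern solver reaches the optimum x = (0, 0, 10000) directly rather than visiting all 2³ − 1 = 7 vertices of the worst-case simplex path:

```python
import numpy as np
from scipy.optimize import linprog

def klee_minty(n):
    """Build the Klee-Minty problem of dimension n:
       maximize  sum_j 10^(n-j) x_j
       s.t.      2 * sum_{j<i} 10^(i-j) x_j + x_i <= 100^(i-1),  i = 1..n
                 x >= 0
    """
    c = np.array([10.0 ** (n - j) for j in range(1, n + 1)])
    A = np.zeros((n, n))
    b = np.array([100.0 ** (i - 1) for i in range(1, n + 1)])
    for i in range(1, n + 1):
        for j in range(1, i):
            A[i - 1, j - 1] = 2 * 10.0 ** (i - j)
        A[i - 1, i - 1] = 1.0
    return c, A, b

c, A, b = klee_minty(3)
# linprog minimizes, so negate c to maximize
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3)
print(res.x)     # approximately [0, 0, 10000]
print(-res.fun)  # 10000.0
```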

• In practice, however, the simplex method typically runs in time on the order of O(m³).
Polynomial Algorithms to solve the LP
• Ellipsoid method (Khachiyan 1979, 1980)
• Computation time: O(n⁴L).
• n = number of variables
• L = number of bits needed to represent the input data
• A theoretical breakthrough.
• Not efficient in practice.
• Karmarkar's algorithm (Karmarkar 1984)
• Computation time: O(n^3.5 L).
• Polynomial time and can be implemented efficiently.
• Interior point algorithms
• Computation time: O(n³L).
• Comparable to the simplex method!
• Superior to the simplex method when solving large problems.
• This approach extends to more general problems.
115
Barrier Function Algorithms

[Figure: interior point methods. The simplex method follows a solution path along the boundary of the feasible region, while barrier methods follow a central path through the interior, taking predictor and corrector steps toward the optimum.]

116
DUALITY THEORY

In duality theory, an LP is investigated through a closely related auxiliary LP called its dual problem.

117
Content

1. The dual problem of the general LP.


2. Duality theorem.
3. Solve LP on MATLAB

118
The dual problem

119
The general LP

• Consider the general LP


f(x1, x2,..., xn) = ∑_{j=1}^{n} c_j x_j → min,

∑_{j=1}^{n} a_ij x_j = b_i ,  i = 1, 2,..., p  (p ≤ m)

∑_{j=1}^{n} a_ij x_j ≥ b_i ,  i = p + 1, p + 2,..., m

x_j ≥ 0,  j = 1, 2,..., n1  (n1 ≤ n)

x_j <> 0,  j = n1 + 1, n1 + 2,..., n
The general LP

• Introduce the notation:

      [ a11  a12  ...  a1n ]
  A = [ a21  a22  ...  a2n ]   – the constraint matrix,
      [  ...               ]
      [ am1  am2  ...  amn ]
General LP
Build the dual problem

• Then the general LP can be rewritten in the following form:

f(x) = c'x → min,
a_i x = b_i ,  i ∈ M,
a_i x ≥ b_i ,  i ∈ M̄,   (2.1)
x_j ≥ 0,  j ∈ N,
x_j <> 0,  j ∈ N̄.
The general LP

• Transform the general LP into canonical form by:
• using slack variables x_i^s to convert each inequality constraint to equality form,
• replacing each unsigned variable by the difference of two sign-constrained variables, x_j = x_j⁺ − x_j⁻; each column A_j is then replaced by the two columns A_j and −A_j,

We get the following canonical LP:

124
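The conversion to canonical form described above can be sketched in Python (numpy is assumed; the function name `to_canonical` and the small 2×3 example are hypothetical, chosen only to illustrate the shapes involved):

```python
import numpy as np

def to_canonical(c, A, b, p, q):
    """Convert the general LP
         min c'x,  a_i x = b_i (rows i < p),  a_i x >= b_i (rows i >= p),
         x_j >= 0 (cols j < q),  x_j unrestricted (cols j >= q)
       to canonical form  min ch' xh,  Ah xh = bh,  xh >= 0, by
       - splitting each free x_j into x_j = xp_j - xn_j with xp_j, xn_j >= 0,
       - subtracting a slack s_i >= 0 from each '>=' row.
    """
    c, A, b = map(np.asarray, (c, A, b))
    m, n = A.shape
    nslack = m - p
    # columns: [signed x (q), xp (free parts), xn (negated copies), slacks]
    slack_cols = np.vstack([np.zeros((p, nslack)), -np.eye(nslack)])
    Ah = np.hstack([A[:, :q], A[:, q:], -A[:, q:], slack_cols])
    ch = np.concatenate([c[:q], c[q:], -c[q:], np.zeros(nslack)])
    return ch, Ah, b.astype(float)

# Hypothetical example: one equality row, one '>=' row, x3 unrestricted
ch, Ah, bh = to_canonical([1, 2, 3],
                          [[1, 1, 1], [0, 1, 2]],
                          [4, 1], p=1, q=2)
print(Ah.shape)  # (2, 5): 2 signed vars + split free var (2 cols) + 1 slack
```

For instance, the original point x = (1, 1, 2) with row-2 slack 4 maps to xh = (1, 1, 2, 0, 4), which satisfies Ah xh = bh with the same objective value.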
Build the dual problem

• Problem (2.3):

ĉ' x̂ → min,
Â x̂ = b,   (2.3)
x̂ ≥ 0,

where Â is obtained from A by replacing the column A_j of each unsigned variable with the pair A_j, −A_j and appending a column e_i for each slack variable, and

e_i = (0, 0, …, 0, −1, 0, …, 0)',  i ∈ M̄ – the vector whose ith element equals −1 and whose remaining elements equal 0.
Build the dual problem
• From the optimality criterion of the simplex method, we see that if problem (2.3) has an optimal solution x̂⁰ and the basis corresponding to it is B, then we must have

ĉ_B' B⁻¹ Â_j ≤ ĉ_j for every column Â_j of Â.

• Therefore, the vector y⁰ = ĉ_B' B⁻¹ is a feasible solution of the following system of linear inequalities:

y Â_j ≤ ĉ_j for every column Â_j of Â,   (2.4)

where y ∈ R^m is an m-vector. The linear inequalities in (2.4) can be divided into three groups, depending on which column vector of the matrix A appears in them.

126
Build the dual problem
• The first constraint group, corresponding to the signed variables x_j, j ∈ N, is

y A_j ≤ c_j ,  j ∈ N.   (2.5)

• The second group corresponds to the unsigned variables x_j, j ∈ N̄; these constraints come in pairs,

y A_j ≤ c_j and −y A_j ≤ −c_j ,

or, equivalently, they correspond to the linear equalities

y A_j = c_j ,  j ∈ N̄.   (2.6)

127
Build the dual problem

• The last constraint group corresponds to the slack variables x_i^s, i ∈ M̄:

y e_i ≤ 0, i.e. −y_i ≤ 0,

or

y_i ≥ 0,  i ∈ M̄.   (2.7)

Conditions (2.5)-(2.7) define the feasible region of a new LP, which is called the dual problem of the original LP (2.1). The original LP (2.1) is then called the primal problem.
Then, the original LP (2.1) is called as primal problem.

128
Build the dual problem

• Then the vector

y⁰ = ĉ_B' B⁻¹

is a feasible solution of the dual problem. If we define the objective function of the dual problem as

y'b → max,

then y⁰ is not only a feasible solution but also the optimal solution of the dual problem.
• The above results will be stated precisely in the definitions and theorems below.
Dual problem

• Definition. The dual of the general LP (2.1) (called the primal problem) is the following LP:

Primal Problem                    Dual Problem

f(x) = c'x → min,                 g(y) = y'b → max,
a_i x = b_i ,  i ∈ M,             y_i <> 0,
a_i x ≥ b_i ,  i ∈ M̄,            y_i ≥ 0,
x_j ≥ 0,  j ∈ N,                  y A_j ≤ c_j ,
x_j <> 0,  j ∈ N̄,                y A_j = c_j .
Duality theorem

131
Duality theorem
Primal problem Dual problem

First, we prove the following lemma:

• Lemma 2.1 (Weak duality theorem). Assume {x, y} is a pair of feasible solutions of the primal and dual problems. Then

f(x) = c'x ≥ y'b = g(y).

Proof: As y is a feasible solution of the dual problem,

y A_j ≤ c_j ,  j ∈ N,    y A_j = c_j ,  j ∈ N̄.

On the other hand, x_j ≥ 0 for j ∈ N (as x is a feasible solution of the primal problem), so

c_j x_j ≥ (y A_j) x_j ,  j ∈ N,    c_j x_j = (y A_j) x_j ,  j ∈ N̄,

and summing over j gives c'x ≥ y A x. Similarly, from a_i x = b_i (i ∈ M), a_i x ≥ b_i (i ∈ M̄) and y_i ≥ 0 (i ∈ M̄), we get y A x ≥ y'b.

Therefore c'x ≥ y A x ≥ y'b.
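A tiny numerical illustration of the weak duality lemma in Python (numpy assumed; the data and the feasible pair are hand-picked for illustration, not taken from the slides):

```python
import numpy as np

# Primal:  min c'x  s.t.  A x = b, x >= 0
# Dual:    max y'b  s.t.  y'A <= c'  (y unrestricted)
c = np.array([2.0, 1.0, 3.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])

x = np.array([1.0, 1.0, 1.0])   # primal-feasible: A x = 3, x >= 0
y = np.array([0.5])             # dual-feasible: y'A = (0.5, 0.5, 0.5) <= c

assert np.allclose(A @ x, b) and (x >= 0).all()
assert (y @ A <= c).all()
print(c @ x, y @ b)             # 6.0 1.5 — c'x >= y'b, as the lemma predicts
```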
Duality theorem

• Lemma 2.1 (Weak duality theorem). Assume {x, y} is a pair of feasible solutions of the primal and dual problems. Then

f(x) = c'x ≥ y'b = g(y).

• Corollary 2.1. Assume {x*, y*} is a pair of feasible solutions of the primal and dual problems that satisfy c'x* = (y*)'b. Then {x*, y*} is a pair of optimal solutions of the primal and dual problems.

133
Duality theorem

Corollary 2.1. Assume {x*, y*} is a pair of feasible solutions of the primal and dual problems that satisfy

c'x* = (y*)'b.   (2)

Then {x*, y*} is a pair of optimal solutions of the primal and dual problems.

Proof. From the lemma, c'x ≥ y'b for every feasible solution x of the primal problem and every feasible solution y of the dual problem; in particular,

c'x ≥ (y*)'b.   (1)

Combining (1) and (2),

c'x ≥ (y*)'b = c'x* ≥ y'b

for every pair of feasible solutions {x, y} of the primal and dual problems. The last chain of inequalities proves the optimality of the pair {x*, y*}.

134
Duality theorem
Theorem 2.1. If the primal problem (2.1) has an optimal solution, then its dual problem also has an optimal solution, and their optimal values are equal.
Proof. Assume the primal problem has an optimal solution. Then its corresponding canonical LP (2.3) also has an optimal basic feasible solution x̂⁰ with respect to a basis B. Then, as seen above, the vector y⁰ = ĉ_B' B⁻¹ is a feasible solution of the dual problem. Let x⁰ be the optimal solution of the primal problem obtained from x̂⁰; we have

(y⁰)'b = ĉ_B' B⁻¹ b = ĉ' x̂⁰ = c'x⁰.   (2.9)

From Corollary 2.1 it follows that y⁰ is an optimal solution of the dual problem, and (2.9) shows that the optimal values of the primal and dual problems are equal.
The theorem is proved.

135
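Theorem 2.1 can be illustrated numerically by solving a small primal-dual pair with scipy's `linprog` (assumed here; the data are illustrative): for min {c'x : Ax = b, x ≥ 0} the dual is max {y'b : A'y ≤ c} with y unrestricted, and the two optimal values coincide.

```python
import numpy as np
from scipy.optimize import linprog

# Primal: min c'x  s.t.  A x = b, x >= 0   (illustrative data)
c = np.array([2.0, 3.0, 4.0])
A = np.array([[1.0, 1.0, 2.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 1.0])
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)

# Dual: max b'y  s.t.  A' y <= c, y unrestricted
# (linprog minimizes, so we minimize -b'y and flip the sign afterwards)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 2)

print(primal.fun, -dual.fun)   # both ≈ 6.0: the optimal values coincide
```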
Duality theorem
A feature of duality is the symmetry expressed in the following theorem:
Theorem 2.2. The dual of the dual problem coincides with the primal problem.

Proof. Rewrite the dual problem as

−y'b → min,
−A_j' y ≥ −c_j ,  j ∈ N,
−A_j' y = −c_j ,  j ∈ N̄,
y_i ≥ 0,  i ∈ M̄,
y_i <> 0,  i ∈ M,

and consider it as a primal problem. By definition, its dual has the form

−c'x → max,
−a_i x = −b_i ,  i ∈ M,
−a_i x ≤ −b_i ,  i ∈ M̄,
x_j ≥ 0,  j ∈ N,
x_j <> 0,  j ∈ N̄,

which clearly coincides with the primal problem.


136
Duality theorem

• For any LP, there are three possibilities:


1) The problem has an optimal solution;
2) The objective function of the problem is unbounded;
3) The problem has no feasible solution.

• Therefore, for the pair of primal-dual problems, there


can be 9 situations described in the following table:

137
Duality theorem

   Dual →        Has optimal   Unbounded   No feasible
Primal ↓         solution                  solution
Has optimal
solution         1)            —           —
Unbounded        —             —           3)
No feasible
solution         —             3)          2)

(— : the combination is impossible)

138
Duality theorem

• The duality theorem. For a primal-dual pair of LPs, exactly one of the following three situations occurs:
1) Both problems have optimal solutions and their optimal values are equal;
2) Both problems have no feasible solutions;
3) One problem has an unbounded objective function and the other has no feasible solution.

139
Duality theorem

Example 1. Both problems have no feasible solutions.


Primal problem

Dual problem

140
Duality theorem
Example 2. Primal problem has no feasible solution, and dual
problem has objective function unbounded.
Primal problem

Dual problem

141
Theorem about complement

• Looking closely at the definition of the primal-dual pair, one can sense a balance between the primal problem and the dual problem: the tighter a constraint is in one problem (e.g. a_i x = b_i, i ∈ M), the looser the corresponding condition is in the other (y_i <> 0, i ∈ M).
• To express this equilibrium exactly, we state necessary and sufficient conditions (called the complementary slackness conditions) for a pair of feasible solutions x and y of the primal and dual problems to be optimal.
Theorem about complement

• Theorem 2.4. The pair of feasible solutions of primal-dual problems


{x, y} is optimal if and only if the following conditions are satisfied:
(aix – bi)yi = 0, i = 1,2,...,m;
xj(cj – yAj) = 0, j = 1,2,...,n.
• Proof. Set
ui = (aix – bi)yi , i = 1,2,...,m;
vj = xj(cj – yAj) , j = 1,2,...,n.
As {x, y} is the pair of feasible solutions, so
ui ≥ 0, i = 1,2,...,m; vj ≥ 0 , j = 1,2,...,n.
Theorem about complement

Set

α = ∑_{i=1}^{m} u_i ,   β = ∑_{j=1}^{n} v_j .

Since all u_i ≥ 0 and v_j ≥ 0, we have α + β = 0 if and only if every u_i and v_j is zero. A direct computation gives

α + β = y(Ax − b) + (c − yA)x = c'x − y'b,

so α + β = 0 if and only if c'x = y'b. According to Corollary 2.1 and the duality theorem, the condition c'x = y'b is necessary and sufficient for the pair of feasible solutions x and y of the primal and dual problems to be optimal. The theorem is proved.
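Theorem 2.4 is easy to check numerically. A sketch in Python (numpy assumed; the LP data and the optimal pair below are illustrative, not taken from the slides):

```python
import numpy as np

def slackness_violation(A, b, c, x, y):
    """Maximum violation of the complementarity conditions
       (a_i x - b_i) * y_i = 0   and   x_j * (c_j - y A_j) = 0."""
    u = (A @ x - b) * y          # row slacks times dual variables
    v = x * (c - y @ A)          # primal variables times reduced costs
    return max(np.abs(u).max(), np.abs(v).max())

# Illustrative data for  min c'x,  A x = b,  x >= 0
A = np.array([[1.0, 1.0, 2.0], [0.0, 1.0, 1.0]])
b = np.array([3.0, 1.0])
c = np.array([2.0, 3.0, 4.0])
x_opt = np.array([1.0, 0.0, 1.0])   # primal optimum, f* = 6
y_opt = np.array([2.0, 0.0])        # dual optimum,  g* = 6

val = slackness_violation(A, b, c, x_opt, y_opt)
print(val)   # 0.0 — the optimal pair satisfies complementary slackness
```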
Theorem about complement

Example. Consider the LP

has the optimal solution x* = (0, ½, 0, 5/2, 3/2), the


optimal value is f*=9/2
Theorem about complement

• Its dual problem:


Theorem about complement
• Let y* = (y*1, y*2, y*3) be an optimal solution of the dual problem. Since the primal problem is in canonical form, the condition (a_i x − b_i) y_i = 0, i = 1,..., m, of the complementary slackness theorem is automatically fulfilled at every feasible solution of the primal problem (a_i x = b_i, i ∈ M). From the condition x_j (c_j − y A_j) = 0, j = 1,..., n, since x*2, x*4, x*5 > 0, we get

c_2 − y A_2 = 0,
c_4 − y A_4 = 0,
c_5 − y A_5 = 0.

That means the 2nd, 4th and 5th constraints of the dual problem must hold with equality at its optimal solution y*. Therefore, y* is a solution of the following system of linear equations:
Theorem about complement

• Thus we have the optimal solution of the dual


problem:

and the optimal value of the dual problem is 9/2.


Corollary

• Corollary. A feasible solution x* is an optimal solution of the LP if and only if the following system of linear equations and inequalities has a solution:

(a_i x* − b_i) y_i = 0,  ∀i ∈ M̄,
(c_j − y A_j) x*_j = 0,  ∀j ∈ N,
y_i ≥ 0,  i ∈ M̄,
y A_j ≤ c_j ,  j ∈ N,
y A_j = c_j ,  j ∈ N̄.
Example

• Consider the LP

• Test the optimality of the vector

x* = (0, 0, 16, 31, 14)

for the above LP.

150
Example

• It is easy to check that x* is a feasible solution of the given LP:
A=[1 -4 2 -5 9; 0 1 -3 4 -5; 0 1 -1 1 -1];
x=[0;0;16;31;14]; A*x
ans =
3
6
1
• Using the lemma, x* is optimal if and only if the
following equations and inequalities have solutions

151
Example

Complementary slackness conditions (with the values of x*):

(y1 + 2) x*1 = 0                 x*1 = 0
(–4y1 + y2 + y3 + 6) x*2 = 0     x*2 = 0
(2y1 – 3y2 – y3 – 5) x*3 = 0     x*3 = 16
(–5y1 + 4y2 + y3 + 1) x*4 = 0    x*4 = 31
(9y1 – 5y2 – y3 + 4) x*5 = 0     x*5 = 14

Dual problem:

3y1 + 6y2 + y3 → min
y1 ≥ –2
– 4y1 + y2 + y3 ≥ –6
2y1 – 3y2 – y3 ≥ 5
–5y1 + 4y2 + y3 ≥ –1
9y1 – 5y2 – y3 ≥ –4

152
Example
• This corresponds to the following system of equations and inequalities:
y1 ≥ –2
– 4y1 + y2 + y3 ≥ –6
2y1 – 3y2 – y3 = 5
–5y1 + 4y2 + y3 = –1
9y1 – 5y2 – y3 = –4
• The last three equations have a unique solution y* = (-1, 1, -10).
(A=[2 -3 -1;-5 4 1; 9 -5 -1]; b=[5;-1;-4]; y=A\b)
• It is easy to check that y* satisfies the first two inequalities. Hence y* is a solution of the above system of equations and inequalities. By the lemma, this proves that x* is the optimal solution of the given LP.
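The MATLAB backslash solve above can be reproduced and verified in Python (numpy assumed):

```python
import numpy as np

# The three active dual constraints, solved as a linear system for y*
A = np.array([[2.0, -3.0, -1.0],
              [-5.0, 4.0, 1.0],
              [9.0, -5.0, -1.0]])
b = np.array([5.0, -1.0, -4.0])
y = np.linalg.solve(A, b)
print(y)   # approximately [-1, 1, -10]

# The two remaining dual inequalities also hold at y*
assert y[0] >= -2                       # y1 >= -2
assert -4 * y[0] + y[1] + y[2] >= -6    # -4y1 + y2 + y3 >= -6
```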

153
Solve LP on MATLAB

154
Function LINPROG

• MATLAB provides function linprog to solve the LP.


• Here are some ways to use this function
• X=LINPROG(f,A,b)
• X=LINPROG(f,A,b,Aeq,beq)
• X=LINPROG(f,A,b,Aeq,beq,LB,UB)
• X=LINPROG(f,A,b,Aeq,beq,LB,UB,X0)
• X=LINPROG(f,A,b,Aeq,beq,LB,UB,X0,OPTIONS)
• [X,FVAL]=LINPROG(...)
• [X,FVAL,EXITFLAG] = LINPROG(...)
• [X,FVAL,EXITFLAG,OUTPUT] = LINPROG(...)
• [X,FVAL,EXITFLAG,OUTPUT,LAMBDA]=LINPROG(...)

155
Function LINPROG
• Statement X=LINPROG(f,A,b) used to solve the LP:
min { f 'x : A x <= b }
• Statement X=LINPROG(f,A,b,Aeq,beq) used to solve the LP with
additional basic constraint in equality form Aeq*x = beq.
• Statement X=LINPROG(f,A,b,Aeq,beq,LB,UB) determines the
lower and upper bounds for the variables LB <= X <= UB.
• Assign Aeq=[] and beq=[] (respectively A=[] and b=[]) if the problem has no such constraints.
• Assign LB and UB the empty matrix ([]) if these bounds are not used.
• Assign LB(i)=-Inf if X(i) has no lower bound and UB(i)=Inf if X(i) has no upper bound.

156
Function LINPROG
• Statement X=LINPROG(f,A,b,Aeq,beq,LB,UB,X0)
determines the starting point X0.
• Note: a starting point is only accepted when the active-set algorithm is used; the default interior point algorithm does not accept a starting point.
• Statement
X=LINPROG(f,A,b,Aeq,beq,LB,UB,X0,OPTIONS)
performs to solve the LP with optimal parameters defined by the
structured variable OPTIONS, created by the function
OPTIMSET.
• Assign option=optimset('LargeScale','off','Simplex','on') to select the simplex method for solving the problem.
• Type help OPTIMSET to see more details.

157
Function LINPROG

• Statement [ X,FVAL]=LINPROG(...) returns the value of


objective function at solution X: FVAL = f'*X.
• Statement [X,FVAL,EXITFLAG]=LINPROG(...) returns in EXITFLAG a description of the termination condition of LINPROG. The values of EXITFLAG have the following meanings:
• 1 LINPROG converged to the solution X.
• 0 The iteration limit was reached.
• -2 No feasible solution was found.
• -3 The problem has an unbounded objective function.
• -4 A NaN value appeared during the execution of the algorithm.
• -5 Both the primal and dual problems are infeasible.
• -7 The search direction became too small; the solution could not be improved further.

158
Function LINPROG
• Statement [X,FVAL,EXITFLAG,OUTPUT] =
LINPROG(...) returns the structured variable OUTPUT with
• OUTPUT.iterations - number of iterations to perform
• OUTPUT.algorithm - algorithm used
• OUTPUT.message – termination message

• Statement
[X,FVAL,EXITFLAG,OUTPUT,LAMBDA]=LINPROG(...)
returns the Lagrangian multiplier LAMBDA , corresponding to the
optimal solution:
• LAMBDA.ineqlin – corresponds to the inequality constraint A,
• LAMBDA.eqlin corresponds to the equality constraint Aeq,
• LAMBDA.lower – corresponds to the LB,
• LAMBDA.upper – corresponds to the UB.

159
Example
• Solve the LP:
2x1 + x2 + 3x3 → min
x1 + x2 + x3 + x4 + x5 = 5
x1 + x2 + 2x3 + 2x4 + 2x5 = 8
x1 + x2 =2
x3 + x4 + x5 = 3
x1 , x2 , x3 , x4 , x5 ≥ 0
• f=[2 1 3 0 0]; beq=[5; 8; 2; 3];
• Aeq=[1 1 1 1 1; 1 1 2 2 2; 1 1 0 0 0; 0 0 1 1 1];
• A=[]; b=[]; LB=[0 0 0 0 0]; UB=[]; X0=[];
• [X,FVAL,EXITFLAG,OUTPUT,LAMBDA]=linprog(f,A,b,Aeq,beq,LB,UB,X0)

160
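The same LP can also be solved with Python's `scipy.optimize.linprog` (an assumption of this sketch; the slides use MATLAB), whose interface closely mirrors MATLAB's linprog(f,A,b,Aeq,beq,LB,UB):

```python
from scipy.optimize import linprog

f = [2, 1, 3, 0, 0]
Aeq = [[1, 1, 1, 1, 1],
       [1, 1, 2, 2, 2],
       [1, 1, 0, 0, 0],
       [0, 0, 1, 1, 1]]
beq = [5, 8, 2, 3]

# bounds plays the role of LB/UB; (0, None) means x_i >= 0, no upper bound
res = linprog(f, A_eq=Aeq, b_eq=beq, bounds=[(0, None)] * 5)
print(res.fun)  # 2.0
print(res.x)    # x2 = 2, x1 = x3 = 0; x4 + x5 = 3 (alternative optima exist)
```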
Solution
• X = • OUTPUT =
0.0000 iterations: 5
algorithm: 'large-scale: interior point'
2.0000
cgiterations: 0
0.0000 message: 'Optimization terminated.'

1.5000 • LAMBDA =
1.5000 ineqlin: [0x1 double]
eqlin: [4x1 double]
• FVAL =
upper: [5x1 double]
2.0000
lower: [5x1 double]
• EXITFLAG =
1

161
Example
• Using simplex method:
opt=optimset('LargeScale','off','Simplex','on')
[X,FVAL,EXITFLAG,OUTPUT]=LINPROG(f,A,b,Aeq,beq,LB,UB,X0,opt)

we obtain the result:


• X = [0 2 0 3 0]
• FVAL = 2
• EXITFLAG = 1
• OUTPUT =
iterations: 1
algorithm: 'medium scale: simplex'
cgiterations: []
message: 'Optimization terminated.'

162
163
