Quadratic Programming Problem


Quadratic programming

Question: What is quadratic programming?


Answer: Quadratic programming (QP) is the process of solving certain mathematical optimization
problems involving quadratic functions. Specifically, one seeks to optimize (minimize or maximize) a
multivariate quadratic function subject to linear constraints on the variables.
Question: What is quadratic programming problem?
Answer: A quadratic programming problem (QPP) is a special class of nonlinear optimization
problems in which the objective function is quadratic and all the constraints are linear.
The general form of a quadratic programming problem may be written as

Maximize or Minimize $Z = \sum_{j=1}^{n} c_j x_j + \frac{1}{2}\sum_{j=1}^{n}\sum_{k=1}^{n} d_{jk}\, x_j x_k$

Subject to the constraints: $\sum_{j=1}^{n} a_{ij} x_j \le b_i,\ i = 1, 2, \dots, m$ and $x_j \ge 0,\ j = 1, 2, \dots, n$.


In matrix notation it is written as

Optimize $f(x) = cx + \frac{1}{2} x^T D x$

Subject to the constraints: $Ax \le b$ and $x \ge 0$,

where $x = (x_1, x_2, \dots, x_n)^T$, $c = (c_1, c_2, \dots, c_n)$, $b = (b_1, b_2, \dots, b_m)^T$, $A = (a_{ij})$ is an $m \times n$ matrix and $D = (d_{jk})$ is an $n \times n$ symmetric matrix, i.e. $d_{jk} = d_{kj}$.
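The matrix form above is also the form accepted by general-purpose numerical solvers. The following minimal Python sketch (not part of the original notes) illustrates it with SciPy's SLSQP routine on small illustrative data of my own choosing (c = (1, 2), D = diag(-2, -2), one constraint x1 + x2 <= 1); it maximizes z = cx + (1/2)x^T D x by minimizing -z.

```python
# Minimal sketch: solving a small QPP of the form  max z = c x + (1/2) x^T D x,
# A x <= b, x >= 0, with SciPy's SLSQP solver.  The data are illustrative only.
import numpy as np
from scipy.optimize import minimize

c = np.array([1.0, 2.0])
D = np.array([[-2.0, 0.0], [0.0, -2.0]])   # symmetric, negative semidefinite here
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def neg_z(x):
    return -(c @ x + 0.5 * x @ D @ x)      # SciPy minimizes, so negate z

res = minimize(neg_z, x0=np.zeros(2), method="SLSQP",
               bounds=[(0, None), (0, None)],
               constraints=[{"type": "ineq", "fun": lambda x: b - A @ x}])
print(res.x, -res.fun)    # approximately [0.25, 0.75] and 9/8
```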
Question: How does quadratic programming problem differ from linear programming problem?
Answer: In a linear programming problem both the objective function and the constraints are linear
functions of the variables. In a quadratic programming problem the constraints are still linear, but the
objective function is a quadratic (and hence nonlinear) expression.
Question: What are the uses of the quadratic programming problem in operations research?
Answer: Quadratic programming is widely used in image and signal processing, to optimize financial
portfolios, to perform the least-squares method of regression, to control scheduling in chemical plants,
and in sequential quadratic programming, a technique for solving more general nonlinear
programming problems.
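To make the least-squares connection concrete, the short NumPy sketch below (my own illustration, with made-up data) rewrites the least-squares objective ||Ax - b||^2 as the quadratic form x^T(A^T A)x - 2(A^T b)^T x + b^T b, i.e. an unconstrained QP, and checks numerically that the two forms agree at the least-squares solution.

```python
# Minimal sketch: least squares as a quadratic programme.  A and b are made up.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 3))
b = rng.normal(size=10)

Q = A.T @ A          # quadratic part of the QP objective
p = -2 * A.T @ b     # linear part
const = b @ b

x = np.linalg.lstsq(A, b, rcond=None)[0]        # least-squares solution
qp_value = x @ Q @ x + p @ x + const            # QP form of the objective
print(np.isclose(qp_value, np.sum((A @ x - b) ** 2)))   # True
```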
Question: What are the methods for solving QPP?
Answer: Two standard methods for solving a quadratic programming problem are:
(i) Wolfe's modified simplex method
(ii) Beale's method.
Question: What is Wolfe's method?
Answer: Wolfe's method (Philip Wolfe, 1959) is a modified simplex procedure that solves a quadratic
programming problem through its Kuhn-Tucker conditions; it is the method described below. It should
not be confused with the Frank-Wolfe algorithm, an iterative first-order algorithm for constrained
convex optimization, also known as the conditional gradient method, the reduced gradient algorithm and
the convex combination algorithm, which was proposed by Marguerite Frank and Philip Wolfe in 1956.
Question: Describe Wolfe's method for solving a Q.P.P.
Or, describe Wolfe's modified simplex method for solving a Q.P.P.
Answer: Let us consider the Q.P.P.

Optimize $f(x) = cx + \frac{1}{2} x^T D x$

Subject to the constraints: $Ax \le b$ and $x \ge 0$,

where $x = (x_1, x_2, \dots, x_n)^T$, $c = (c_1, c_2, \dots, c_n)$, $b = (b_1, b_2, \dots, b_m)^T$, $A = (a_{ij})$ is an $m \times n$ matrix and $D = (d_{jk})$ is an $n \times n$ symmetric matrix, i.e. $d_{jk} = d_{kj}$.
The iterative procedure for solving the Q.P.P. by Wolfe's modified simplex method can be summarized as
follows:
Step 1: First we convert the inequality constraints into equations by introducing the slack variables $y_i^2 \ge 0$ ($i = 1, 2, \dots, m$) and the non-negativity restrictions by introducing the variables $s_j^2 \ge 0$ ($j = 1, 2, \dots, n$).
Step 2: Then we construct the Lagrangian function $L(x, s, y, \lambda, \mu) = f(x) - \lambda(Ax + y^2 - b) - \mu(-x + s^2)$.
Step 3: Derive the Kuhn-Tucker conditions by differentiating $L(x, s, y, \lambda, \mu)$ partially with respect to the
components of $x$, $s$, $y$ and equating the derivatives to zero. Thus we have
(i) $c + x^T D - \lambda A + \mu = 0$, i.e. $c_j + \sum_{k=1}^{n} d_{jk} x_k - \sum_{i=1}^{m} \lambda_i a_{ij} + \mu_j = 0$, $j = 1, 2, \dots, n$;
(ii) $-2\mu s = 0$, i.e. $\mu_j x_j = 0$, $j = 1, 2, \dots, n$;
(iii) $-2\lambda y = 0$, i.e. $\lambda_i y_i^2 = 0$, or $\lambda_i\left[\sum_{j=1}^{n} a_{ij} x_j - b_i\right] = 0$, $i = 1, 2, \dots, m$;
(iv) $Ax + y^2 - b = 0$, i.e. $Ax \le b$;
(v) $-x + s^2 = 0$, i.e. $x \ge 0$;
(vi) $\lambda_i, \mu_j, x_j, y_i, s_j \ge 0$, $i = 1, 2, \dots, m$ and $j = 1, 2, \dots, n$.
Step 4: We introduce non-negative artificial variables $a_j$ ($j = 1, 2, \dots, n$) in the Kuhn-Tucker
conditions (i) and construct the objective function $z = a_1 + a_2 + \dots + a_n$.
Step 5: We obtain an initial basic feasible solution to the L.P.P.:

Minimize $z = a_1 + a_2 + \dots + a_n$

Subject to the constraints: $-\sum_{k=1}^{n} d_{jk} x_k + \sum_{i=1}^{m} \lambda_i a_{ij} - \mu_j + a_j = c_j$, $j = 1, 2, \dots, n$;

$\sum_{j=1}^{n} a_{ij} x_j + y_i^2 = b_i$, $i = 1, 2, \dots, m$; and $\lambda_i, \mu_j, x_j, y_i, a_j \ge 0$ for all $i$ and $j$;

$\lambda_i y_i = 0$, $\mu_j x_j = 0$ (complementary slackness conditions), which are maintained during the simplex iterations by a restricted-entry rule.
Step 6: We use two-phase simplex method to obtain optimum solution to the L.P.P. of step 5.

Step 7: Thus the optimum solution obtained in step 6 is an optimum solution to the given quadratic
programming problem.
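In matrix terms, the equality constraints of the Step-5 linear programme are -Dx + A^T lambda - mu + a = c^T (n rows) and Ax + y = b (m rows) in the non-negative variables (x, lambda, mu, y, a). The sketch below (my own illustration; the function name is arbitrary and the data are those of the first worked problem that follows) simply assembles this constraint matrix and right-hand side with NumPy.

```python
# Minimal sketch: assembling the constraint data of the Step-5 (Phase-I) L.P.P.
#   -D x + A^T lam - mu + a = c^T   and   A x + y = b,
# with all of x, lam, mu, y, a >= 0.  Column order below: x, lam, mu, y, a.
import numpy as np

def wolfe_phase1_data(c, D, A, b):
    n, m = len(c), len(b)
    top = np.hstack([-D, A.T, -np.eye(n), np.zeros((n, m)), np.eye(n)])
    bottom = np.hstack([A, np.zeros((m, m)), np.zeros((m, n)),
                        np.eye(m), np.zeros((m, n))])
    return np.vstack([top, bottom]), np.concatenate([c, b])

# Data of the first worked problem below: z = 2x1 + 3x2 - 2x1^2, i.e. D = diag(-4, 0).
c = np.array([2.0, 3.0]); D = np.array([[-4.0, 0.0], [0.0, 0.0]])
A = np.array([[1.0, 4.0], [1.0, 1.0]]); b = np.array([4.0, 2.0])
M, rhs = wolfe_phase1_data(c, D, A, b)
print(M.shape, rhs)    # (4, 10) [2. 3. 4. 2.]
```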
Problem: Solve the following quadratic programming problem using Wolf’s method.
Maximize 𝒛 = 𝟐𝒙𝟏 + 𝟑𝒙𝟐 − 𝟐𝒙𝟐𝟏
Subject to the constraints: 𝒙𝟏 + 𝟒𝒙𝟐 ≤ 𝟒
𝒙𝟏 + 𝒙𝟐 ≤ 𝟐
𝒙𝟏 , 𝒙𝟐 ≥ 𝟎
Solution: The given quadratic programming problem is rewritten in matrix form as

Maximize $z = \begin{pmatrix} 2 & 3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} - \frac{1}{2}\begin{pmatrix} x_1 & x_2 \end{pmatrix}\begin{pmatrix} 4 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$

Subject to the constraints: $\begin{pmatrix} 1 & 4 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \le \begin{pmatrix} 4 \\ 2 \end{pmatrix}$ and $x_1 \ge 0$, $x_2 \ge 0$.

∴ The Kuhn-Tucker conditions (with the slack vector $y = (y_1, y_2)^T$) are

$\begin{pmatrix} 4 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\begin{pmatrix} \lambda_1 \\ \lambda_2 \end{pmatrix} - \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} = \begin{pmatrix} 2 \\ 3 \end{pmatrix}$

and $\begin{pmatrix} 1 & 4 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 4 \\ 2 \end{pmatrix}$,

where $\lambda_1 y_1 = \lambda_2 y_2 = \mu_1 x_1 = \mu_2 x_2 = 0$ and $x_1, x_2, \lambda_1, \lambda_2, \mu_1, \mu_2, y_1, y_2 \ge 0$.
Now introducing artificial variables 𝑎1 ≥ 0 and 𝑎2 ≥ 0 the given Q.P.P. is equivalent to
𝑀𝑖𝑛𝑖𝑚𝑖𝑧𝑒 𝑍 = 𝑎1 + 𝑎2
Subject to the constraints:
4𝑥1 + 𝜆1 + 𝜆2 − 𝜇1 + 𝑎1 = 2
4𝜆1 + 𝜆2 − 𝜇2 + 𝑎2 = 3
𝑥1 + 4𝑥2 + 𝑦1 = 4
𝑥1 + 𝑥2 + 𝑦2 = 2
𝑥1 , 𝑥2 𝑦1 , 𝑦2 , 𝜆1 , 𝜆2 , 𝜇1 , 𝜇2 , 𝑎1 , 𝑎2 ≥ 0
Thus the auxiliary linear programming problem for phase-I is
Maximize Z* = -a1 - a2
Subject to the constraints:
4𝑥1 + 𝜆1 + 𝜆2 − 𝜇1 + 𝑎1 = 2
4𝜆1 + 𝜆2 − 𝜇2 + 𝑎2 = 3
𝑥1 + 4𝑥2 + 𝑦1 = 4
𝑥1 + 𝑥2 + 𝑦2 = 2
𝑥1 , 𝑥2 𝑦1 , 𝑦2 , 𝜆1 , 𝜆2 , 𝜇1 , 𝜇2 , 𝑎1 , 𝑎2 ≥ 0 and 𝜆1 𝑦1 = 𝜆2 𝑦2 = 𝜇1 𝑥1 = 𝜇2 𝑥2 = 0.
The simplex iterations leading to the optimum solution are as follows.
Table for Phase-I:

| c_B | Basis | x1 | x2 | λ1 | λ2 | μ1 | μ2 | y1 | y2 | a1 | a2 | b | Min ratio |
| | c_j | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 | -1 | | |
| -1 | a1 | (4) | 0 | 1 | 1 | -1 | 0 | 0 | 0 | 1 | 0 | 2 | 1/2* |
| -1 | a2 | 0 | 0 | 4 | 1 | 0 | -1 | 0 | 0 | 0 | 1 | 3 | - |
| 0 | y1 | 1 | 4 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 4 | 4 |
| 0 | y2 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 | 2 |
| | z_j* - c_j | -4↑ | 0 | -5 | -2 | 1 | 1 | 0 | 0 | 0 | 0 | z* = -5 | |

Here z_3* - c_3 = -5 is the most negative, but y1 is present in the basis, so λ1 cannot enter the basis. The next
most negative is z_1* - c_1 = -4; x1 can enter the basis since μ1 = 0. Hence x1 is the entering variable and a1
is the departing variable.
First iteration:
| c_B | Basis | x1 | x2 | λ1 | λ2 | μ1 | μ2 | y1 | y2 | a2 | b | Min ratio |
| | c_j | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 | | |
| 0 | x1 | 1 | 0 | 1/4 | 1/4 | -1/4 | 0 | 0 | 0 | 0 | 1/2 | - |
| -1 | a2 | 0 | 0 | 4 | 1 | 0 | -1 | 0 | 0 | 1 | 3 | - |
| 0 | y1 | 0 | (4) | -1/4 | -1/4 | 1/4 | 0 | 1 | 0 | 0 | 7/2 | 7/8* |
| 0 | y2 | 0 | 1 | -1/4 | -1/4 | 1/4 | 0 | 0 | 1 | 0 | 3/2 | 3/2 |
| | z_j* - c_j | 0 | 0↑ | -4 | -1 | 0 | 1 | 0 | 0 | 0 | z* = -3 | |

Here z_3* - c_3 = -4 and z_4* - c_4 = -1, but λ1 and λ2 cannot enter the basis since y1 and y2 are present in
the basis. Let x2 be the entering variable; then y1 is the departing variable.
Second iteration:
| c_B | Basis | x1 | x2 | λ1 | λ2 | μ1 | μ2 | y1 | y2 | a2 | b | Min ratio |
| | c_j | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1 | | |
| 0 | x1 | 1 | 0 | 1/4 | 1/4 | -1/4 | 0 | 0 | 0 | 0 | 1/2 | 2 |
| -1 | a2 | 0 | 0 | (4) | 1 | 0 | -1 | 0 | 0 | 1 | 3 | 3/4* |
| 0 | x2 | 0 | 1 | -1/16 | -1/16 | 1/16 | 0 | 1/4 | 0 | 0 | 7/8 | - |
| 0 | y2 | 0 | 0 | -3/16 | -3/16 | 3/16 | 0 | -1/4 | 1 | 0 | 5/8 | - |
| | z_j* - c_j | 0 | 0 | -4↑ | -1 | 0 | 1 | 0 | 0 | 0 | z* = -3 | |

Here z_3* - c_3 = -4 is the most negative and y1 = 0 (non-basic), so λ1 can enter the basis; a2 is the departing variable.

Final iteration:
| c_B | Basis | x1 | x2 | λ1 | λ2 | μ1 | μ2 | y1 | y2 | b |
| | c_j | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
| 0 | x1 | 1 | 0 | 0 | 3/16 | -1/4 | 1/16 | 0 | 0 | 5/16 |
| 0 | λ1 | 0 | 0 | 1 | 1/4 | 0 | -1/4 | 0 | 0 | 3/4 |
| 0 | x2 | 0 | 1 | 0 | -3/64 | 1/16 | -1/64 | 1/4 | 0 | 59/64 |
| 0 | y2 | 0 | 0 | 0 | -9/64 | 3/16 | -3/64 | -1/4 | 1 | 49/64 |
| | z_j* - c_j | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | z* = 0 |

Since all z_j* - c_j ≥ 0 and z* = 0, all artificial variables have left the basis, and the complementary slackness
conditions λ1 y1 = λ2 y2 = μ1 x1 = μ2 x2 = 0 hold. Hence the optimality conditions are satisfied and the optimum
solution is x1 = 5/16, x2 = 59/64 (with λ1 = 3/4, y2 = 49/64), and Max z = 2(5/16) + 3(59/64) - 2(5/16)² = 409/128.
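The optimum just obtained can be checked independently against the Kuhn-Tucker conditions of Step 3. The sketch below (my own check; the helper name and tolerance are arbitrary) verifies stationarity, feasibility and complementary slackness at x = (5/16, 59/64) with λ = (3/4, 0) and μ = (0, 0).

```python
# Minimal sketch: numerical Kuhn-Tucker check for  max z = c x + (1/2) x^T D x,
# A x <= b, x >= 0, at a candidate point (x, lam, mu).
import numpy as np

def kkt_ok(c, D, A, b, x, lam, mu, tol=1e-9):
    y = b - A @ x                                   # slacks of A x <= b
    stationarity = -D @ x + A.T @ lam - mu - c      # condition (i), should be 0
    return (np.all(np.abs(stationarity) < tol)
            and np.all(y >= -tol) and np.all(x >= -tol)
            and np.all(lam >= -tol) and np.all(mu >= -tol)
            and np.all(np.abs(lam * y) < tol)       # lambda_i y_i = 0
            and np.all(np.abs(mu * x) < tol))       # mu_j x_j = 0

c = np.array([2.0, 3.0]); D = np.array([[-4.0, 0.0], [0.0, 0.0]])
A = np.array([[1.0, 4.0], [1.0, 1.0]]); b = np.array([4.0, 2.0])
print(kkt_ok(c, D, A, b, x=np.array([5/16, 59/64]),
             lam=np.array([3/4, 0.0]), mu=np.zeros(2)))   # True
```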

Problem: Use Wolf’s method in solving the following quadratic programming problem.
Minimize 𝒛 = 𝟔 − 𝟔𝒙𝟏 + 𝟐𝒙𝟐𝟏 − 𝟐𝒙𝟏 𝒙𝟐 + 𝟐𝒙𝟐𝟐
Subject to the constraints: 𝒙𝟏 + 𝒙𝟐 ≤ 𝟐
𝒙𝟏 , 𝒙𝟐 ≥ 𝟎
Solution: The given quadratic programming problem is rewritten as
𝑀𝑎𝑥𝑖𝑚𝑖𝑧𝑒 𝑧 ∗ = −6 + 6𝑥1 − 2𝑥12 + 2𝑥1 𝑥2 − 2𝑥22
Subject to the constraints: 𝑥1 + 𝑥2 ≤ 2
𝑥1 , 𝑥2 ≥ 0
⇒ Maximize $z^* = \begin{pmatrix} 6 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} - \frac{1}{2}\begin{pmatrix} x_1 & x_2 \end{pmatrix}\begin{pmatrix} 4 & -2 \\ -2 & 4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$ (omitting the constant -6, which does not affect the optimal solution)

Subject to the constraints: $\begin{pmatrix} 1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \le 2$ and $x_1, x_2 \ge 0$.

∴ The Kuhn-Tucker conditions are

$\begin{pmatrix} 4 & -2 \\ -2 & 4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}\lambda_1 - \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} = \begin{pmatrix} 6 \\ 0 \end{pmatrix}$

and $\begin{pmatrix} 1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + y_1 = 2$,

where $\lambda_1 y_1 = \mu_1 x_1 = \mu_2 x_2 = 0$ and $x_1, x_2, \lambda_1, \mu_1, \mu_2, y_1 \ge 0$.

⇒ $4x_1 - 2x_2 + \lambda_1 - \mu_1 = 6$
  $-2x_1 + 4x_2 + \lambda_1 - \mu_2 = 0$  ……… (*)
  $x_1 + x_2 + y_1 = 2$
$x_1, x_2, \lambda_1, \mu_1, \mu_2, y_1 \ge 0$ and $\lambda_1 y_1 = \mu_1 x_1 = \mu_2 x_2 = 0$.
Now introducing artificial variables 𝑎1 ≥ 0 and 𝑎2 ≥ 0 the given Q.P.P. is equivalent to
𝑀𝑖𝑛𝑖𝑚𝑖𝑧𝑒 𝑍 = 𝑎1 + 𝑎2

Subject to the constraints:
4𝑥1 − 2𝑥2 + 𝜆1 − 𝜇1 + 𝑎1 = 6
−2𝑥1 + 4𝑥2 + 𝜆1 − 𝜇2 + 𝑎2 = 0
𝑥1 + 𝑥2 + 𝑦1 = 2
x1, x2, y1, λ1, μ1, μ2, a1, a2 ≥ 0 and λ1 y1 = μ1 x1 = μ2 x2 = 0
Thus the auxiliary linear programming problem for phase-I is
𝑀𝑎𝑥𝑖𝑚𝑖𝑧𝑒 𝑍 ∗ = −𝑎1 − 𝑎2
Subject to the constraints:
4𝑥1 − 2𝑥2 + 𝜆1 − 𝜇1 + 0. 𝜇2 + 0. 𝑦1 + 𝑎1 + 0. 𝑎2 = 6
−2𝑥1 + 4𝑥2 + 𝜆1 + 0. 𝜇1 − 𝜇2 + 0. 𝑦1 + 0. 𝑎1 + 𝑎2 = 0
𝑥1 + 𝑥2 + 0. 𝜆1 + 0. 𝜇1 − 0. 𝜇2 + 𝑦1 + 0. 𝑎1 + 0. 𝑎2 = 2
x1, x2, y1, λ1, μ1, μ2, a1, a2 ≥ 0 and λ1 y1 = μ1 x1 = μ2 x2 = 0.
The simplex iterations leading to the optimum solution are as follows.
Table for phase-I
| c_B | Basis | x1 | x2 | λ1 | μ1 | μ2 | y1 | a1 | a2 | b | Min ratio |
| | c_j | 0 | 0 | 0 | 0 | 0 | 0 | -1 | -1 | | |
| -1 | a1 | (4) | -2 | 1 | -1 | 0 | 0 | 1 | 0 | 6 | 3/2* |
| -1 | a2 | -2 | 4 | 1 | 0 | -1 | 0 | 0 | 1 | 0 | - |
| 0 | y1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 2 | 2 |
| | z_j* - c_j | -2↑ | -2 | -2 | 1 | 1 | 0 | 0 | 0 | z* = -6 | |

Here z_1* - c_1 = z_2* - c_2 = z_3* - c_3 = -2 are equally the most negative. λ1 cannot enter the basis because y1 is
present in the basis, while x1 and x2 can both enter. Let x1 be the entering variable; then a1 is the departing variable.
First iteration:
| c_B | Basis | x1 | x2 | λ1 | μ1 | μ2 | y1 | a2 | b | Min ratio |
| | c_j | 0 | 0 | 0 | 0 | 0 | 0 | -1 | | |
| 0 | x1 | 1 | -1/2 | 1/4 | -1/4 | 0 | 0 | 0 | 3/2 | - |
| -1 | a2 | 0 | 3 | 3/2 | -1/2 | -1 | 0 | 1 | 3 | 1 |
| 0 | y1 | 0 | (3/2) | -1/4 | 1/4 | 0 | 1 | 0 | 1/2 | 1/3* |
| | z_j* - c_j | 0 | -3↑ | -3/2 | 1/2 | 1 | 0 | 0 | z* = -3 | |

Here z_2* - c_2 = -3 is the most negative and μ2 = 0 (non-basic); thus x2 can enter the basis and y1 leaves the basis.
Second iteration:

| c_B | Basis | x1 | x2 | λ1 | μ1 | μ2 | y1 | a2 | b | Min ratio |
| | c_j | 0 | 0 | 0 | 0 | 0 | 0 | -1 | | |
| 0 | x1 | 1 | 0 | 1/6 | -1/6 | 0 | 1/3 | 0 | 5/3 | 10 |
| -1 | a2 | 0 | 0 | (2) | -1 | -1 | -2 | 1 | 2 | 1* |
| 0 | x2 | 0 | 1 | -1/6 | 1/6 | 0 | 2/3 | 0 | 1/3 | - |
| | z_j* - c_j | 0 | 0 | -2↑ | 1 | 1 | 2 | 0 | z* = -2 | |

Here z_3* - c_3 = -2 is the most negative; λ1 can enter the basis since y1 = 0, and a2 leaves the basis.
Final iteration:
| c_B | Basis | x1 | x2 | λ1 | μ1 | μ2 | y1 | b |
| | c_j | 0 | 0 | 0 | 0 | 0 | 0 | |
| 0 | x1 | 1 | 0 | 0 | -1/12 | 1/12 | 1/2 | 3/2 |
| 0 | λ1 | 0 | 0 | 1 | -1/2 | -1/2 | -1 | 1 |
| 0 | x2 | 0 | 1 | 0 | 1/12 | -1/12 | 1/2 | 1/2 |
| | z_j* - c_j | 0 | 0 | 0 | 0 | 0 | 0 | z* = 0 |

Since all z_j* - c_j ≥ 0 and z* = 0, all artificial variables have left the basis. Hence the optimality conditions
are satisfied and the optimum solution is x1 = 3/2, x2 = 1/2 (with λ1 = 1), for which 6x1 - 2x1² + 2x1x2 - 2x2² = 11/2.
∴ Max Z* = 11/2 - 6 = -1/2, so x1 = 3/2, x2 = 1/2 and Min Z = 1/2.
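Because μ1 = μ2 = 0 and y1 = 0 at this optimum, the Kuhn-Tucker system (*) reduces to three linear equations in (x1, x2, λ1), which can be solved directly; the short sketch below (my own check) does so with NumPy and recovers x1 = 3/2, x2 = 1/2, λ1 = 1.

```python
# Minimal sketch: with mu1 = mu2 = 0 and y1 = 0, the system (*) becomes
#   4 x1 - 2 x2 + lam1 = 6
#  -2 x1 + 4 x2 + lam1 = 0
#     x1 +   x2        = 2
import numpy as np

M = np.array([[4.0, -2.0, 1.0],
              [-2.0, 4.0, 1.0],
              [1.0, 1.0, 0.0]])
rhs = np.array([6.0, 0.0, 2.0])
print(np.linalg.solve(M, rhs))   # [1.5 0.5 1. ]
```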
Problem: Solve the following quadratic programming problem using Wolf’s method:
Minimize 𝒛 = 𝒙𝟐𝟏 − 𝒙𝟏 𝒙𝟐 + 𝟑𝒙𝟐𝟐 − 𝟒𝒙𝟐 + 𝟒
Subject to the constraints: 𝒙𝟏 + 𝒙𝟐 ≤ 𝟏
𝒙𝟏 , 𝒙𝟐 ≥ 𝟎
Solution: Converting the objective function to maximization type, we get
Maximize z1* = -x1² + x1x2 - 3x2² + 4x2 - 4
Subject to the constraints: x1 + x2 ≤ 1
x1, x2 ≥ 0.
We write the matrix form omitting the constant term -4 of the objective function, since it has no effect on the
optimal solution:
Maximize z* = -x1² + x1x2 - 3x2² + 4x2
Subject to the constraints: x1 + x2 ≤ 1
x1, x2 ≥ 0
⇒ Maximize $z^* = \begin{pmatrix} 0 & 4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} - \frac{1}{2}\begin{pmatrix} x_1 & x_2 \end{pmatrix}\begin{pmatrix} 2 & -1 \\ -1 & 6 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$

Subject to the constraints: $\begin{pmatrix} 1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \le 1$ and $x_1, x_2 \ge 0$.

∴ The Kuhn-Tucker conditions are

$\begin{pmatrix} 2 & -1 \\ -1 & 6 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}\lambda_1 - \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 4 \end{pmatrix}$

and $\begin{pmatrix} 1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + y_1 = 1$,

where $\lambda_1 y_1 = \mu_1 x_1 = \mu_2 x_2 = 0$ and $x_1, x_2, \lambda_1, \mu_1, \mu_2, y_1 \ge 0$.

⇒ $2x_1 - x_2 + \lambda_1 - \mu_1 = 0$
  $-x_1 + 6x_2 + \lambda_1 - \mu_2 = 4$  ……… (*)
  $x_1 + x_2 + y_1 = 1$
$x_1, x_2, \lambda_1, \mu_1, \mu_2, y_1 \ge 0$ and $\lambda_1 y_1 = \mu_1 x_1 = \mu_2 x_2 = 0$.
Now introducing artificial variables 𝑎1 ≥ 0 and 𝑎2 ≥ 0 the given Q.P.P. is equivalent to
𝑀𝑖𝑛𝑖𝑚𝑖𝑧𝑒 𝑍 = 𝑎1 + 𝑎2
Subject to the constraints:
2𝑥1 − 𝑥2 + 𝜆1 − 𝜇1 + 𝑎1 = 0
−𝑥1 + 6𝑥2 + 𝜆1 − 𝜇2 + 𝑎2 = 4
𝑥1 + 𝑥2 + 𝑦1 = 1
x1, x2, y1, λ1, μ1, μ2, a1, a2 ≥ 0
Thus the auxiliary linear programming problem for phase-I is
𝑀𝑎𝑥𝑖𝑚𝑖𝑧𝑒 𝑍 ∗ = −𝑎1 − 𝑎2
Subject to the constraints:
2𝑥1 − 𝑥2 + 𝜆1 − 𝜇1 + 0. 𝜇2 + 0. 𝑦1 + 𝑎1 + 0. 𝑎2 = 0
−𝑥1 + 6𝑥2 + 𝜆1 + 0. 𝜇1 − 𝜇2 + 0. 𝑦1 + 0. 𝑎1 + 𝑎2 = 4
𝑥1 + 𝑥2 + 0. 𝜆1 + 0. 𝜇1 + 0. 𝜇2 +𝑦1 + 0. 𝑎1 + 0. 𝑎2 = 1
x1, x2, y1, λ1, μ1, μ2, a1, a2 ≥ 0 and λ1 y1 = μ1 x1 = μ2 x2 = 0.
The simplex iterations leading to the optimum solution are as follows.
Table for phase-I
| c_B | Basis | x1 | x2 | λ1 | μ1 | μ2 | y1 | a1 | a2 | b | Min ratio |
| | c_j | 0 | 0 | 0 | 0 | 0 | 0 | -1 | -1 | | |
| -1 | a1 | 2 | -1 | 1 | -1 | 0 | 0 | 1 | 0 | 0 | - |
| -1 | a2 | -1 | (6) | 1 | 0 | -1 | 0 | 0 | 1 | 4 | 2/3* |
| 0 | y1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 |
| | z_j* - c_j | -1 | -5↑ | -2 | 1 | 1 | 0 | 0 | 0 | z* = -4 | |

Here z_2* - c_2 = -5 is the most negative, so x2 can enter the basis since μ2 = 0. Hence x2 is the entering
variable and a2 is the departing variable.
First iteration:
| c_B | Basis | x1 | x2 | λ1 | μ1 | μ2 | y1 | a1 | b | Min ratio |
| | c_j | 0 | 0 | 0 | 0 | 0 | 0 | -1 | | |
| -1 | a1 | 11/6 | 0 | 7/6 | -1 | -1/6 | 0 | 1 | 2/3 | 4/11 |
| 0 | x2 | -1/6 | 1 | 1/6 | 0 | -1/6 | 0 | 0 | 2/3 | - |
| 0 | y1 | (7/6) | 0 | -1/6 | 0 | 1/6 | 1 | 0 | 1/3 | 2/7* |
| | z_j* - c_j | -11/6↑ | 0 | -7/6 | 1 | 1/6 | 0 | 0 | z* = -2/3 | |

Here z_1* - c_1 = -11/6 is the most negative, so x1 can enter the basis because μ1 = 0; the minimum ratio occurs
in the y1 row, so y1 leaves the basis.

Second iteration:
| c_B | Basis | x1 | x2 | λ1 | μ1 | μ2 | y1 | a1 | b | Min ratio |
| | c_j | 0 | 0 | 0 | 0 | 0 | 0 | -1 | | |
| -1 | a1 | 0 | 0 | (10/7) | -1 | -3/7 | -11/7 | 1 | 1/7 | 1/10* |
| 0 | x2 | 0 | 1 | 1/7 | 0 | -1/7 | 1/7 | 0 | 5/7 | 5 |
| 0 | x1 | 1 | 0 | -1/7 | 0 | 1/7 | 6/7 | 0 | 2/7 | - |
| | z_j* - c_j | 0 | 0 | -10/7↑ | 1 | 3/7 | 11/7 | 0 | z* = -1/7 | |

Here z_3* - c_3 = -10/7 is the most negative, so λ1 can enter the basis because y1 = 0, and a1 leaves the basis.
Final iteration:
| c_B | Basis | x1 | x2 | λ1 | μ1 | μ2 | y1 | b |
| | c_j | 0 | 0 | 0 | 0 | 0 | 0 | |
| 0 | λ1 | 0 | 0 | 1 | -7/10 | -3/10 | -11/10 | 1/10 |
| 0 | x2 | 0 | 1 | 0 | 1/10 | -1/10 | 3/10 | 7/10 |
| 0 | x1 | 1 | 0 | 0 | -1/10 | 1/10 | 7/10 | 3/10 |
| | z_j* - c_j | 0 | 0 | 0 | 0 | 0 | 0 | z* = 0 |

Since all z_j* - c_j ≥ 0 and z* = 0, all artificial variables have left the basis. Hence the optimality conditions
are satisfied and the optimum solution is x1 = 3/10, x2 = 7/10 (with λ1 = 1/10), giving Max z* = 29/20.
Max z1* = -x1² + x1x2 - 3x2² + 4x2 - 4 = 29/20 - 4 = -51/20
∴ Min z = 51/20.

Question: What is Beale’s method?
Answer: In 1959, E.M.L. Beale developed a technique for solving the quadratic programming
problem that does not use the Kuhn-Tucker conditions to reach the optimum solution. His technique
involves partitioning the variables into basic and non-basic ones and using classical calculus results.
At each iteration, the objective function is expressed in terms of the non-basic variables only.
Question: Describe Beale’s method for solving quadratic programming problem.
Answer: Let us consider the Q.P.P.

Optimize $f(x) = cx + \frac{1}{2} x^T D x$

Subject to the constraints: $Ax \le b$ and $x \ge 0$,

where $x = (x_1, x_2, \dots, x_n)^T$, $c = (c_1, c_2, \dots, c_n)$, $b = (b_1, b_2, \dots, b_m)^T$, $A = (a_{ij})$ is an $m \times n$ matrix and $D = (d_{jk})$ is an $n \times n$ symmetric matrix, i.e. $d_{jk} = d_{kj}$.
The iterative procedure for solving Q.P.P. by Beale’s method can be summarized as follows:
Step 1: If the problem is of minimization type, convert it to maximization type. Write the problem in standard
form by introducing slack or surplus variables.
Step 2: Now select arbitrarily m variables as basic and the remaining n - m variables as non-basic.
Denote the basic variables by x_B = (x_B1, x_B2, …, x_Bm) and the non-basic variables by
x_NB = (x_NB1, x_NB2, …, x_NB(n-m)).
Step 3: Express each basic variable and the objective function entirely in terms of the non-basic variables,
using the constraints.
Step 4: (i) If $\left(\dfrac{\partial f(x)}{\partial x_{NB_k}}\right)_{x_{NB}=0,\,u=0} \le 0$ for each k = 1, 2, …, n - m,

and $\left(\dfrac{\partial f(x)}{\partial u_i}\right)_{x_{NB}=0,\,u=0} = 0$ for each i,

the solution is optimal; go to Step 7.

(ii) If $\left(\dfrac{\partial f(x)}{\partial x_{NB_k}}\right)_{x_{NB}=0,\,u=0} > 0$ for at least one k,

then choose the most positive one. The corresponding non-basic variable will become basic.

(iii) If $\left(\dfrac{\partial f(x)}{\partial x_{NB_k}}\right)_{x_{NB}=0,\,u=0} \le 0$ for each k = 1, 2, …, n - m,

but $\left(\dfrac{\partial f(x)}{\partial u_i}\right)_{x_{NB}=0,\,u=0} \ne 0$ for some i = r,

then introduce a new non-basic variable u_j, defined by

$u_j = \dfrac{1}{2}\dfrac{\partial f(x)}{\partial u_r}$,

and treat u_r as a basic variable (to be ignored later). Go to Step 3.
Step 5: Let x_NBk = x_k be the entering variable identified in Step 4(ii). Compute the minimum of the ratios

$\min_h\left\{ \dfrac{\alpha_{h0}}{|\alpha_{hk}|},\ \dfrac{\gamma_{k0}}{|\gamma_{kk}|} \right\}$

over all basic variables x_h, where α_h0 is the constant term and α_hk is the coefficient of x_k in the
expression of the basic variable x_h in terms of the non-basic variables, and γ_k0 is the constant term and
γ_kk is the coefficient of x_k in ∂f(x)/∂x_k.

(i) If the minimum ratio occurs for some α_h0/|α_hk|, the corresponding basic variable x_h will leave
the basis (the set of basic variables).

(ii) If the minimum ratio occurs for γ_k0/|γ_kk|, then introduce an additional non-basic variable,
called a free variable, defined by $u_j = \dfrac{1}{2}\dfrac{\partial f(x)}{\partial x_k}$; u_j is unrestricted in sign.

Step 6: Go to Step 3 and repeat the procedure until an optimal basic solution is obtained.
Step 7: Determine the value of the original objective function f(x) at x_B with x_NB = 0, using the expressions
obtained in Step 3.
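The bookkeeping of Steps 3 and 4 is easy to reproduce symbolically. The sketch below (my own illustration using SymPy; it is not part of Beale's original description) carries out one pass of Steps 3-4 on the data of the first example that follows: it expresses the basic slack through the constraint and evaluates the partial derivatives of f at x_NB = 0 to pick the entering variable.

```python
# Minimal sketch of Steps 3-4 of Beale's method, done symbolically with SymPy,
# for  max f = 4 x1 + 6 x2 - x1^2 - 3 x2^2  s.t.  x1 + 2 x2 + x3 = 4 (x3 slack).
import sympy as sp

x1, x2 = sp.symbols("x1 x2", nonnegative=True)
f = 4*x1 + 6*x2 - x1**2 - 3*x2**2      # objective in the non-basic variables x1, x2
x3 = 4 - x1 - 2*x2                     # basic variable expressed via the constraint

# Step 4: partial derivatives w.r.t. the non-basic variables, evaluated at x_NB = 0
partials = {str(v): sp.diff(f, v).subs({x1: 0, x2: 0}) for v in (x1, x2)}
print(partials)                        # {'x1': 4, 'x2': 6} -> x2 enters the basis
```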
Problem: Solve the following quadratic programming problem using Beale’s method.
𝑴𝒂𝒙𝒊𝒎𝒊𝒛𝒆 𝒇(𝒙) = 𝟒𝒙𝟏 + 𝟔𝒙𝟐 − 𝒙𝟐𝟏 − 𝟑𝒙𝟐𝟐
Subject to: 𝒙𝟏 + 𝟐𝒙𝟐 ≤ 𝟒
𝒙 𝟏 , 𝒙𝟐 ≥ 𝟎
Solution: Introducing the slack variable x3 ≥ 0, the standard form of the problem becomes
𝑀𝑎𝑥𝑖𝑚𝑖𝑧𝑒 𝑓(𝑥) = 4𝑥1 + 6𝑥2 − 𝑥12 − 3𝑥22
Subject to: 𝑥1 + 2𝑥2 + 𝑥3 = 4
𝑥1 , 𝑥2 , 𝑥3 ≥ 0
Let 𝑥3 be the initial basic variable, with 𝑥3 = 4, 𝑥1 = 𝑥2 = 0.
∴ Basic variable 𝑥𝐵 = (𝑥3 ) and Non-basic variables 𝑥𝑁𝐵 = (𝑥1 , 𝑥2 )
Expressing the basic variable x_B and the objective function f(x) entirely in terms of x_NB, we get
x3 = 4 - x1 - 2x2
and f(x) = 4x1 + 6x2 - x1² - 3x2².

$\left(\dfrac{\partial f}{\partial x_1}\right)_{x_1=x_2=0} = (4 - 2x_1)_{x_1=x_2=0} = 4$

$\left(\dfrac{\partial f}{\partial x_2}\right)_{x_1=x_2=0} = (6 - 6x_2)_{x_1=x_2=0} = 6$

Since both partial derivatives are positive at x_NB = 0, the current solution is not optimal. The most
positive partial derivative is ∂f/∂x2, so x2 must enter the basis.

Now, to determine the leaving basic variable, we compute the ratios:

$\min\left\{\dfrac{\alpha_{30}}{|\alpha_{32}|},\ \dfrac{\gamma_{20}}{|\gamma_{22}|}\right\} = \min\left\{\dfrac{4}{|-2|},\ \dfrac{6}{|-6|}\right\} = \min\{2, 1\} = 1.$

[Here α_30 is the constant term and α_32 is the coefficient of the entering variable x2 in x3 = 4 - x1 - 2x2;
γ_20 and γ_22 are the constant term and the coefficient of x2 in ∂f/∂x2 = 6 - 6x2.]

The minimum occurs for γ_20/|γ_22|; therefore we introduce a free variable u1 into x_NB, defined by

$u_1 = \dfrac{1}{2}\dfrac{\partial f}{\partial x_2} = \dfrac{1}{2}(6 - 6x_2) = 3 - 3x_2.$

∴ x_B = (x2, x3) and x_NB = (x1, u1)


Expressing x_B and f(x) in terms of x_NB, using u1 = 3 - 3x2 and the given constraint:

$x_2 = 1 - \tfrac{1}{3}u_1$ and $x_3 = 4 - x_1 - 2\left(1 - \tfrac{1}{3}u_1\right) = 2 - x_1 + \tfrac{2}{3}u_1$

$\therefore\ f(u) = 4x_1 + 6\left(1 - \tfrac{1}{3}u_1\right) - x_1^2 - 3\left(1 - \tfrac{1}{3}u_1\right)^2$
$= 4x_1 + 6 - 2u_1 - x_1^2 - 3\left(1 - \tfrac{2}{3}u_1 + \tfrac{1}{9}u_1^2\right)$
$= 3 + 4x_1 - x_1^2 - \tfrac{1}{3}u_1^2.$

Again compute the partial derivatives of f(x) with respect to x_NB at x_NB = 0:

$\left(\dfrac{\partial f}{\partial x_1}\right)_{x_1=u_1=0} = (4 - 2x_1)_{x_1=u_1=0} = 4$

$\left(\dfrac{\partial f}{\partial u_1}\right)_{x_1=u_1=0} = \left(-\tfrac{2}{3}u_1\right)_{x_1=u_1=0} = 0$

Since ∂f/∂x1 > 0, x1 must enter the basis. To determine the leaving basic variable, we compute

$\min\left\{\dfrac{\alpha_{20}}{|\alpha_{21}|},\ \dfrac{\alpha_{30}}{|\alpha_{31}|},\ \dfrac{\gamma_{10}}{|\gamma_{11}|}\right\} = \min\left\{\dfrac{1}{0},\ \dfrac{2}{|-1|},\ \dfrac{4}{|-2|}\right\} = \min\{\infty, 2, 2\} = 2.$

Since the minimum ratio occurs for γ_10/|γ_11|, we have to introduce another non-basic free variable u2, defined
by $u_2 = \dfrac{1}{2}\dfrac{\partial f}{\partial x_1} = \dfrac{1}{2}(4 - 2x_1) = 2 - x_1.$

∴ Set of basic variables x_B = (x2, x3, x1) and set of non-basic variables x_NB = (u1, u2).
Expressing the basic variables x_B and the objective function f(x) entirely in terms of x_NB, we have

$x_1 = 2 - u_2$
$x_2 = 1 - \tfrac{1}{3}u_1$
$x_3 = 2 - (2 - u_2) + \tfrac{2}{3}u_1 = u_2 + \tfrac{2}{3}u_1$

and $f(x) = 3 + 4(2 - u_2) - (2 - u_2)^2 - \tfrac{1}{3}u_1^2 = 3 + 8 - 4u_2 - 4 + 4u_2 - u_2^2 - \tfrac{1}{3}u_1^2 = 7 - u_2^2 - \tfrac{1}{3}u_1^2.$

Now $\left(\dfrac{\partial f}{\partial u_1}\right)_{u_1=u_2=0} = \left(-\tfrac{2}{3}u_1\right)_{u_1=u_2=0} = 0$

and $\left(\dfrac{\partial f}{\partial u_2}\right)_{u_1=u_2=0} = (-2u_2)_{u_1=u_2=0} = 0.$

Since all ∂f/∂u_i = 0, the current solution is optimal. Hence the optimal solution to the given problem is
(setting u1 = u2 = 0 in the last expressions of x_B and f(x))

x1 = 2, x2 = 1, x3 = 0 and f(x) = 7 [or z = 4·2 + 6·1 - 4 - 3·1 = 7]
∴ x1 = 2, x2 = 1 and max f(x) = 7.
Question: Solve the following Q.P.P. by using Beale’s method
𝑴𝒂𝒙𝒊𝒎𝒊𝒛𝒆 𝒇(𝒙) = 𝟐𝒙𝟏 + 𝟑𝒙𝟐 − 𝒙𝟐𝟏 .
Subject to: 𝒙𝟏 + 𝟐𝒙𝟐 ≤ 𝟒
𝒙 𝟏 , 𝒙𝟐 ≥ 𝟎
Solution: Introducing the slack variable x3 ≥ 0 in the constraint, the standard form of the problem becomes
Maximize f(x) = 2x1 + 3x2 - x1²
Subject to: x1 + 2x2 + x3 = 4
x1, x2, x3 ≥ 0
Since there is only one constraint, let x3 be the initial basic variable, with x3 = 4, x1 = x2 = 0.
∴ Basic variable x_B = (x3) and non-basic variables x_NB = (x1, x2)
Expressing the basic variable x_B and the objective function f(x) entirely in terms of x_NB,
we get x3 = 4 - x1 - 2x2
and f(x) = 2x1 + 3x2 - x1².
$\left(\dfrac{\partial f}{\partial x_1}\right)_{x_1=x_2=0} = (2 - 2x_1)_{x_1=x_2=0} = 2$

$\left(\dfrac{\partial f}{\partial x_2}\right)_{x_1=x_2=0} = (3)_{x_1=x_2=0} = 3$

Since both partial derivatives are positive at x_NB = 0, the current solution is not optimal. The most
positive partial derivative is ∂f/∂x2, so x2 must enter the basis.

Now, to determine the leaving basic variable, we compute the ratios:

$\min\left\{\dfrac{\alpha_{30}}{|\alpha_{32}|},\ \dfrac{\gamma_{20}}{|\gamma_{22}|}\right\} = \min\left\{\dfrac{4}{|-2|},\ \dfrac{3}{0}\right\} = \min\{2, \infty\} = 2.$

This minimum occurs for α_30/|α_32|, so x3 will leave the basis. We then have

∴ x_B = (x2) and x_NB = (x1, x3)

Expressing the new basic variable x_B = (x2) and f(x) in terms of the new non-basic variables x_NB = (x1, x3):

$x_2 = 2 - \tfrac{1}{2}x_1 - \tfrac{1}{2}x_3$ and $f = 2x_1 + 3\left(2 - \tfrac{x_1}{2} - \tfrac{x_3}{2}\right) - x_1^2 = 6 + \tfrac{x_1}{2} - \tfrac{3}{2}x_3 - x_1^2.$

Again compute the partial derivatives of f(x) with respect to x_NB at x_NB = 0:

$\left(\dfrac{\partial f}{\partial x_1}\right)_{x_1=x_3=0} = \left(\tfrac{1}{2} - 2x_1\right)_{x_1=x_3=0} = \tfrac{1}{2}$

$\left(\dfrac{\partial f}{\partial x_3}\right)_{x_1=x_3=0} = -\tfrac{3}{2}$

Since ∂f/∂x1 > 0, the current solution is not optimal, and x1 must enter the basis as ∂f/∂x1 is the most positive. To
determine the leaving basic variable, we compute the ratios

$\min\left\{\dfrac{\alpha_{20}}{|\alpha_{21}|},\ \dfrac{\gamma_{10}}{|\gamma_{11}|}\right\} = \min\left\{\dfrac{2}{|-1/2|},\ \dfrac{1/2}{|-2|}\right\} = \min\left\{4, \tfrac{1}{4}\right\} = \tfrac{1}{4}.$

This minimum occurs for γ_10/|γ_11|, so we introduce a new non-basic free variable u1 defined by

$u_1 = \dfrac{1}{2}\dfrac{\partial f}{\partial x_1} = \dfrac{1}{2}\left(\tfrac{1}{2} - 2x_1\right) = \tfrac{1}{4} - x_1.$

∴ New basic variables x_B = (x1, x2) and set of non-basic variables x_NB = (x3, u1)
Expressing the basic variables x_B and the objective function f(x) entirely in terms of x_NB, we have

$x_1 = \tfrac{1}{4} - u_1$

$x_2 = 2 - \tfrac{1}{2}x_1 - \tfrac{1}{2}x_3 = 2 - \tfrac{1}{2}\left(\tfrac{1}{4} - u_1\right) - \tfrac{1}{2}x_3 = \tfrac{15}{8} + \tfrac{1}{2}u_1 - \tfrac{1}{2}x_3$

and $f(x) = 2\left(\tfrac{1}{4} - u_1\right) + 3\left(\tfrac{15}{8} + \tfrac{1}{2}u_1 - \tfrac{1}{2}x_3\right) - \left(\tfrac{1}{4} - u_1\right)^2 = \tfrac{97}{16} - \tfrac{3}{2}x_3 - u_1^2.$

Now the partial derivatives of f(x) with respect to x_NB at x_NB = 0:

$\left(\dfrac{\partial f}{\partial x_3}\right)_{x_3=u_1=0} = -\tfrac{3}{2}$

$\left(\dfrac{\partial f}{\partial u_1}\right)_{x_3=u_1=0} = (-2u_1)_{x_3=u_1=0} = 0$

Since ∂f/∂x_j ≤ 0 for all x_j ∈ x_NB, the current solution is optimal. We find the solution by putting x3 = u1 = 0
in the last expressions of x_B and f(x). We have

$x_1 = \tfrac{1}{4}$, $x_2 = \tfrac{15}{8}$ and $f(x) = \tfrac{97}{16}$.

Hence the optimal basic feasible solution to the given NLPP is

$x_1 = \tfrac{1}{4}$, $x_2 = \tfrac{15}{8}$ and max $f(x) = \tfrac{97}{16}$.

Question: Solve the following Q.P.P. by using Beale’s method


𝑴𝒂𝒙𝒊𝒎𝒊𝒛𝒆 𝒇(𝒙) = −𝟐𝒙𝟏 + 𝟑𝒙𝟐 − 𝟐𝒙𝟐𝟐 .

Subject to: 𝒙𝟏 + 𝟐𝒙𝟐 ≤ 𝟒
𝒙 𝟏 + 𝒙𝟐 ≤ 𝟐
𝒙 𝟏 , 𝒙𝟐 ≥ 𝟎
Solution: Introducing the slack variables x3 ≥ 0 and x4 ≥ 0 in the first and second constraints, the standard form of
the problem becomes
Maximize f(x) = -2x1 + 3x2 - 2x2²
Subject to: x1 + 2x2 + x3 = 4
x1 + x2 + x4 = 2
x1, x2, x3, x4 ≥ 0
Arbitrarily choose the initial basic variables x_B = (x3, x4) and non-basic variables x_NB = (x1, x2), so that initially
x3 = 4, x4 = 2 and x1 = x2 = 0.
Expressing the basic variables x_B and the objective function f(x) entirely in terms of x_NB, we get
x3 = 4 - x1 - 2x2
x4 = 2 - x1 - x2
and f(x) = -2x1 + 3x2 - 2x2².

$\left(\dfrac{\partial f}{\partial x_1}\right)_{x_1=x_2=0} = (-2)_{x_1=x_2=0} = -2$

$\left(\dfrac{\partial f}{\partial x_2}\right)_{x_1=x_2=0} = (3 - 4x_2)_{x_1=x_2=0} = 3$

Here at least one derivative is positive, and ∂f/∂x2 is the most positive, so x2 must enter the basis.

Now, to determine the leaving basic variable, we compute the ratios:

$\min\left\{\dfrac{\alpha_{30}}{|\alpha_{32}|},\ \dfrac{\alpha_{40}}{|\alpha_{42}|},\ \dfrac{\gamma_{20}}{|\gamma_{22}|}\right\} = \min\left\{\dfrac{4}{|-2|},\ \dfrac{2}{|-1|},\ \dfrac{3}{|-4|}\right\} = \tfrac{3}{4}.$

This minimum occurs for γ_20/|γ_22|, so no basic variable leaves the basis. We introduce a new free
variable defined by

$u_1 = \dfrac{1}{2}\dfrac{\partial f}{\partial x_2} = \dfrac{1}{2}(3 - 4x_2) = \tfrac{3}{2} - 2x_2.$

∴ Set of basic variables x_B = (x3, x4, x2) and set of non-basic variables x_NB = (x1, u1).
Expressing the new basic variables x_B = (x3, x4, x2) and f(x) in terms of the new non-basic variables x_NB = (x1, u1):

$x_2 = \tfrac{3}{4} - \tfrac{1}{2}u_1$
$x_3 = 4 - x_1 - 2\left(\tfrac{3}{4} - \tfrac{1}{2}u_1\right) = \tfrac{5}{2} - x_1 + u_1$
$x_4 = 2 - x_1 - \left(\tfrac{3}{4} - \tfrac{1}{2}u_1\right) = \tfrac{5}{4} - x_1 + \tfrac{1}{2}u_1$

and $f = -2x_1 + 3\left(\tfrac{3}{4} - \tfrac{u_1}{2}\right) - 2\left(\tfrac{3}{4} - \tfrac{u_1}{2}\right)^2 = -2x_1 + \tfrac{9}{4} - \tfrac{3}{2}u_1 - 2\left(\tfrac{9}{16} - \tfrac{3}{4}u_1 + \tfrac{u_1^2}{4}\right) = \tfrac{9}{8} - 2x_1 - \tfrac{1}{2}u_1^2.$

Again compute the partial derivatives of f(x) with respect to x_NB at x_NB = 0:

$\left(\dfrac{\partial f}{\partial x_1}\right)_{x_1=u_1=0} = (-2)_{x_1=u_1=0} = -2$

$\left(\dfrac{\partial f}{\partial u_1}\right)_{x_1=u_1=0} = (-u_1)_{x_1=u_1=0} = 0$

Since ∂f/∂x_j ≤ 0 for every x_j ∈ x_NB, the current solution is optimal. The optimum solution is

$x_2 = \tfrac{3}{4}$, $x_3 = \tfrac{5}{2}$, $x_4 = \tfrac{5}{4}$, $x_1 = 0$ and $f(x) = \tfrac{9}{8}$.

Hence the optimal basic feasible solution to the given NLPP is

$x_1 = 0$, $x_2 = \tfrac{3}{4}$ and max $f(x) = \tfrac{9}{8}$.
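As a rough independent check of this answer (my own sanity check, not part of the notes), a coarse brute-force search over the feasible region confirms that the maximum of f = -2x1 + 3x2 - 2x2² is about 9/8, attained near (x1, x2) = (0, 0.75).

```python
# Minimal sketch: brute-force check of the Beale solution x1 = 0, x2 = 3/4, f = 9/8.
import numpy as np

xs = np.linspace(0.0, 2.0, 401)
best = max(((-2*x1 + 3*x2 - 2*x2**2, x1, x2)
            for x1 in xs for x2 in xs
            if x1 + 2*x2 <= 4 and x1 + x2 <= 2), key=lambda t: t[0])
print(best)    # approximately (1.125, 0.0, 0.75)
```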

Question: Solve the following Q.P.P. by using Beale’s method


𝑴𝒊𝒏𝒊𝒎𝒊𝒛𝒆 𝒛 = −𝟒𝒙𝟏 + 𝒙𝟐𝟏 − 𝟐𝒙𝟏 𝒙𝟐 + 𝟐𝒙𝟐𝟐 .
Subject to: 𝟐𝒙𝟏 + 𝒙𝟐 ≥ 𝟔
𝒙𝟏 − 𝟒𝒙𝟐 ≥ 𝟎
𝒙 𝟏 , 𝒙𝟐 ≥ 𝟎
Solution: Introducing the surplus variables x3 ≥ 0 and x4 ≥ 0, and converting the objective function to
maximization, we have
𝑀𝑎𝑥𝑖𝑚𝑖𝑧𝑒 𝑓(𝑥) = 4𝑥1 − 𝑥12 + 2𝑥1 𝑥2 − 2𝑥22
Subject to: 2𝑥1 + 𝑥2 − 𝑥3 = 6
𝑥1 − 4𝑥2 − 𝑥4 = 0
𝑥1 , 𝑥2 , 𝑥3 𝑥4 ≥ 0
Let x3 and x4 be the initial basic variables; then x_B = (x3, x4) and the non-basic variables are
x_NB = (x1, x2).
Expressing the basic variables x_B and the objective function f(x) entirely in terms of x_NB,
We get, 𝑥3 = −6 + 2𝑥1 + 𝑥2
𝑥4 = 𝑥1 − 4𝑥2
and 𝑓(𝑥) = 4𝑥1 − 𝑥12 + 2𝑥1 𝑥2 − 2𝑥22
The partial derivatives of f with respect to the non-basic variables at the point x_NB = 0 are

$\left(\dfrac{\partial f}{\partial x_1}\right)_{x_1=x_2=0} = (4 - 2x_1 + 2x_2)_{x_1=x_2=0} = 4$

$\left(\dfrac{\partial f}{\partial x_2}\right)_{x_1=x_2=0} = (2x_1 - 4x_2)_{x_1=x_2=0} = 0$

Here ∂f/∂x1 is the most positive, so x1 must enter the basis.

Now, to determine the leaving basic variable, we compute the ratios:

$\min\left\{\dfrac{\alpha_{30}}{|\alpha_{31}|},\ \dfrac{\alpha_{40}}{|\alpha_{41}|},\ \dfrac{\gamma_{10}}{|\gamma_{11}|}\right\} = \min\left\{\dfrac{-6}{|2|},\ \dfrac{0}{|1|},\ \dfrac{4}{|-2|}\right\} = -3,$

so x3 will leave the basis.


Expressing the new basic variables x_B = (x1, x4) and f(x) in terms of the new non-basic variables x_NB = (x2, x3):

$x_1 = 3 - \tfrac{1}{2}x_2 + \tfrac{1}{2}x_3$
$x_4 = 3 - \tfrac{9}{2}x_2 + \tfrac{1}{2}x_3$

and $f(x) = 4\left(3 - \tfrac{1}{2}x_2 + \tfrac{1}{2}x_3\right) - \left(3 - \tfrac{1}{2}x_2 + \tfrac{1}{2}x_3\right)^2 + 2\left(3 - \tfrac{1}{2}x_2 + \tfrac{1}{2}x_3\right)x_2 - 2x_2^2$
$= 9 + x_2 - x_3 + \tfrac{3}{2}x_2 x_3 - \tfrac{13}{4}x_2^2 - \tfrac{1}{4}x_3^2.$

Again compute the partial derivatives of f(x) with respect to x_NB at x_NB = 0:

$\left(\dfrac{\partial f}{\partial x_2}\right)_{x_2=x_3=0} = \left(1 + \tfrac{3}{2}x_3 - \tfrac{13}{2}x_2\right)_{x_2=x_3=0} = 1$

$\left(\dfrac{\partial f}{\partial x_3}\right)_{x_2=x_3=0} = \left(-1 + \tfrac{3}{2}x_2 - \tfrac{1}{2}x_3\right)_{x_2=x_3=0} = -1$

Since at least one ∂f/∂x_NB > 0, the current solution is not optimal and it can be improved; clearly x2 will
enter the basis.


Now $\min\left\{\dfrac{\alpha_{10}}{|\alpha_{12}|},\ \dfrac{\alpha_{40}}{|\alpha_{42}|},\ \dfrac{\gamma_{20}}{|\gamma_{22}|}\right\} = \min\left\{\dfrac{3}{|-1/2|},\ \dfrac{3}{|-9/2|},\ \dfrac{1}{|-13/2|}\right\} = \tfrac{2}{13}.$

Since the minimum ratio corresponds to γ_20/|γ_22|, we introduce a non-basic free variable u1 defined by

$u_1 = \dfrac{1}{2}\dfrac{\partial f}{\partial x_2} = \tfrac{1}{2} + \tfrac{3}{4}x_3 - \tfrac{13}{4}x_2.$

Thus now we have 𝑥𝐵 = (𝑥1 , 𝑥4 , 𝑥2 ) and 𝑥𝑁𝐵 = (𝑥3, 𝑢1 )


Expressing the new basic variables x_B = (x1, x4, x2) and f(x) in terms of the new non-basic variables x_NB = (x3, u1):

$x_1 = \tfrac{38}{13} - \tfrac{3}{26}x_3 + \tfrac{2}{13}u_1$
$x_2 = \tfrac{2}{13} + \tfrac{3}{13}x_3 - \tfrac{4}{13}u_1$
$x_4 = \tfrac{30}{13} - \tfrac{27}{26}x_3 + \tfrac{18}{13}u_1$

and $f(x) = 9 + \tfrac{1}{13}(2 + 3x_3 - 4u_1) - x_3 + \tfrac{3}{26}x_3(2 + 3x_3 - 4u_1) - \tfrac{1}{52}(2 + 3x_3 - 4u_1)^2 - \tfrac{1}{4}x_3^2.$

Again compute the partial derivatives of 𝑓(𝑥) for 𝑥𝑁𝐵 at 𝑥𝑁𝐵 = 0

$\left(\dfrac{\partial f}{\partial x_3}\right)_{x_3=u_1=0} = \left[\tfrac{3}{13} - 1 + \tfrac{3}{26}(2 - 4u_1) + \tfrac{18}{26}x_3 - \tfrac{6}{52}(2 + 3x_3 - 4u_1) - \tfrac{1}{2}x_3\right]_{x_3=u_1=0} = \tfrac{3}{13} - 1 + \tfrac{6}{26} - \tfrac{12}{52} = -\tfrac{10}{13}$

$\left(\dfrac{\partial f}{\partial u_1}\right)_{x_3=u_1=0} = \left[-\tfrac{4}{13} - \tfrac{12}{26}x_3 + \tfrac{8}{52}(2 + 3x_3 - 4u_1)\right]_{x_3=u_1=0} = -\tfrac{4}{13} + \tfrac{4}{13} = 0$

Since ∂f/∂x_j ≤ 0 for every x_j ∈ x_NB, the current solution is optimal. The optimum solution is

$x_1 = \tfrac{38}{13}$, $x_2 = \tfrac{2}{13}$ and $f(x) = \tfrac{116}{13}$.

Hence the optimal basic feasible solution to the given NLPP is

$x_1 = \tfrac{38}{13}$, $x_2 = \tfrac{2}{13}$ and min $z = -\tfrac{116}{13}$.
[Note: In Phase I we solve the auxiliary L.P.P. by the simplex method until one of the following three cases arises:
(i) Max z* < 0 and at least one artificial variable appears in the optimum basis at a positive level;
(ii) Max z* = 0 and at least one artificial variable appears in the optimum basis at zero level;
(iii) Max z* = 0 and no artificial variable appears in the optimum basis.
In case (i) the given L.P.P. does not possess any feasible solution, whereas in cases (ii) and (iii) we proceed
on to Phase II.]

