

CBB 4333

Process Optimisation

ONE-DIMENSIONAL
UNCONSTRAINED OPTIMISATION
Dr Abd Halim Shah Maulud
Universiti Teknologi PETRONAS
Sept 2013
TOPIC OUTCOMES
By the end of the topic, students are able to:
discuss the concept of one-dimensional unconstrained optimisation
solve one-dimensional unconstrained optimisation:
  analytically: via solving ∇f(x) = 0 and determining x*
  analytically or numerically: function value-based methods,
  derivative-based methods
OPTIMALITY
necessary condition:
  ∇f(x*) = 0, i.e. x* is a stationary point
sufficient conditions (sign of ∇²f(x*)):
  ∇²f(x*) < 0: maximum
  ∇²f(x*) > 0: minimum
  ∇²f(x*) = 0: inconclusive (possible saddle/inflection point)
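The conditions above can be checked directly in code. The sketch below is illustrative (not part of the slides) and assumes the first and second derivatives are supplied analytically; it is applied to f(x) = x^2 - x, the function used in Example 1 later.

```python
# Illustrative sketch: classify a candidate point x* with the optimality conditions.
# The derivatives df and d2f are assumed to be supplied analytically.

def classify_stationary_point(df, d2f, x_star, tol=1e-8):
    if abs(df(x_star)) > tol:
        return "not a stationary point"        # necessary condition f'(x*) = 0 fails
    if d2f(x_star) > 0:
        return "minimum"                       # sufficient condition f''(x*) > 0
    if d2f(x_star) < 0:
        return "maximum"                       # sufficient condition f''(x*) < 0
    return "inconclusive (saddle/inflection possible)"  # f''(x*) = 0

# f(x) = x**2 - x  =>  f'(x) = 2x - 1, f''(x) = 2
print(classify_stationary_point(lambda x: 2*x - 1, lambda x: 2.0, 0.5))  # minimum
```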
ANALYTICAL METHOD
necessary condition for an optimal solution:

  ∂f/∂x_i = 0,   i = 1, ..., n

problems?
  these are non-linear equations, often difficult to solve analytically
  more convenient (numerical) methods are needed
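For simple functions the stationarity condition can still be handled symbolically, which also shows why harder cases call for numerical search. The sketch below is illustrative (not from the slides) and assumes the SymPy library is available; it uses f(x) = x^4 - x + 1, the function of Example 2 later, whose condition ∇f(x) = 0 is already a cubic equation.

```python
# Illustrative sketch (assumes SymPy is installed): solve f'(x) = 0 symbolically.
import sympy as sp

x = sp.symbols('x', real=True)
f = x**4 - x + 1                      # same function as Example 2 later
df = sp.diff(f, x)                    # f'(x) = 4*x**3 - 1

roots = [r for r in sp.solve(sp.Eq(df, 0), x) if r.is_real]
for x_star in roots:
    curvature = sp.diff(f, x, 2).subs(x, x_star)   # f''(x) = 12*x**2 at x*
    kind = "minimum" if curvature > 0 else "maximum or inconclusive"
    print(x_star, float(x_star), kind)             # x* = 4**(-1/3) ≈ 0.62996, minimum
```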

METHODS FOR OPTIMAL SOLUTION SEARCH
Function value-based methods
  search by comparing function values at a sequence of trial points
Derivative-based methods
  use derivatives at each iteration and locate a potential optimum
  via the necessary condition
FUNCTION VALUE-BASED METHODS
general procedure for minimisation:
1. Start with an initial value, x^0
2. Calculate f(x^0)
3. Change x^0 to obtain the next stage, x^1
4. Calculate f(x^1)
5. Make sure f(x^(k+1)) < f(x^k) at each stage k
6. Stop the calculations when |f(x^(k+1)) - f(x^k)| < tol,
   where tol is a pre-specified tolerance (criterion of precision)
Note: maximising f(x) is equivalent to minimising -f(x)
(One simple realisation of this procedure is sketched in code below.)
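The sketch below illustrates one simple realisation of the procedure; the specific rule for changing x (try the neighbours on either side, halve the step when neither improves) is an assumption for illustration, not a rule prescribed by the slides.

```python
# Minimal value-based search (illustrative variant): compare f at trial points,
# accept only improvements, shrink the step when no improvement is found.

def value_based_min(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        improved = False
        for trial in (x + step, x - step):        # trial points either side of x
            ft = f(trial)
            if ft < fx:                           # keep f(x_{k+1}) < f(x_k)
                if fx - ft < tol:                 # |f(x_{k+1}) - f(x_k)| < tol: stop
                    return trial
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5                           # refine the search around x
            if step < tol:
                return x
    return x

print(value_based_min(lambda x: x**2 - x, x0=3.0))   # ≈ 0.5, as in Example 1 below
```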
Example 1
Solve min f(x) = x^2 - x using function values
Solution:
Assume x^0 = 3;  f(x^0) = 3^2 - 3 = 6
[Plot of f(x) = x^2 - x over -1 ≤ x ≤ 3.5, showing the minimum at x* = 0.5]

  x      f(x) = x^2 - x
 -1       2
 -0.5     0.75
  0       0
  0.5    -0.25
  1       0
  1.5     0.75
  2       2
  2.5     3.75
  3       6
  3.5     8.75
Narrow down the solution region
issue in one-dimensional problems:
  an exhaustive search over a wide range is inefficient
bracketing method:
  narrows down the solution region
  avoids excessive search within a wide range
how?
  assume an initial bracket, b^0 (containing the optimum)
  determine a reduced bracket, b^k, at each stage k
Example
Use the bracketing method to solve:
  min f(x) = (x - 100)^2

Solution:
Consider a sequence of x given by x^(k+1) = x^k + b·2^(k-1)
Assume b = 1 and x^1 = 0, so f(x^1) = 10000
x^2 = x^1 + (1)·2^(1-1) = 0 + 1 = 1,  f(x^2) = 9801
x^3 = x^2 + (1)·2^(2-1) = 1 + 2 = 3,  f(x^3) = 9409

  x      0      1      3      7      15     31     63     127    255
  f(x)   10000  9801   9409   8649   7225   4761   1369   729    24025

The new bracket is 63 < x < 255.
Continue the search within this bracket using the bracketing method.
As the bracket decreases, b also needs to be reduced:
assume b = 0.5 and repeat the calculations.
(A code sketch of this expanding-step search follows below.)
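The sketch below implements the expanding-step search above; it assumes f decreases at the first step and returns the bracket as soon as f starts to increase.

```python
# Illustrative bracketing search using x_{k+1} = x_k + b*2**(k-1).
# Assumes f decreases at the first step (otherwise reverse the sign of b).

def bracket_minimum(f, x1=0.0, b=1.0, max_steps=60):
    xs = [x1]
    fs = [f(x1)]
    for k in range(1, max_steps):
        xs.append(xs[-1] + b * 2**(k - 1))   # step size doubles each stage
        fs.append(f(xs[-1]))
        if fs[-1] > fs[-2]:                  # f increased: the minimum lies behind
            return xs[-3], xs[-1]            # bracket [last-but-one good point, overshoot]
    raise RuntimeError("no bracket found within max_steps")

lo, hi = bracket_minimum(lambda x: (x - 100)**2, x1=0.0, b=1.0)
print(lo, hi)    # 63.0 255.0, matching the table above
```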


Derivative-based method
Newton's method
(Quasi-Newton) Secant method

DERIVATIVE-BASED METHODS
general procedure for minimisation:
1. Start with an initial value, x^0
2. Calculate f(x^0) and the derivatives of f at x^0
3. Change x^0 to obtain the next stage, x^1, using the derivatives
4. Calculate f(x^1) and the derivatives of f at x^1
5. Make sure f(x^(k+1)) < f(x^k) at each stage k
6. Stop the calculations when |∇f(x^(k+1))| < tol,
   where tol is a pre-specified tolerance (criterion of precision)
Note: maximising f(x) is equivalent to minimising -f(x)
(A generic sketch of this loop is given in code below.)
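The generic loop can be coded once, with the specific update rule (Newton or secant, introduced next) passed in. The sketch below is illustrative, not the slides' prescription, and stops when |∇f(x)| falls below the tolerance as in step 6.

```python
# Generic derivative-based minimisation loop (illustrative): the update rule is
# supplied by the caller; termination uses |f'(x_k)| < tol as in step 6 above.

def derivative_based_min(dfdx, update, x0, tol=1e-7, max_iter=100):
    x = x0
    for _ in range(max_iter):
        g = dfdx(x)
        if abs(g) < tol:
            return x
        x = update(x, g)              # move to the next stage using the derivative
    return x

# Example: a Newton-type update for f(x) = x**2 - x, where f''(x) = 2
x_star = derivative_based_min(lambda x: 2*x - 1,
                              lambda x, g: x - g / 2.0,
                              x0=3.0)
print(x_star)    # 0.5
```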


Derivative-based method
Newton's method

NEWTON'S METHOD
the necessary condition for f(x) to have an optimum is that ∇f(x) = 0
apply Newton's method to solve ∇f(x) = 0:
  x^(k+1) = x^k - ∇f(x^k) / ∇²f(x^k)

[Plot of ∇f(x) illustrating the Newton iteration]
NEWTON'S METHOD
approximate f(x) by a quadratic function at x^k using a Taylor expansion:

  f(x) ≈ f(x^k) + ∇f(x^k)(x - x^k) + (1/2)∇²f(x^k)(x - x^k)^2

the stationary point of this approximation is found from

  df(x)/dx = ∇f(x^k) + ∇²f(x^k)(x - x^k) = 0
  x = x^k - ∇f(x^k) / ∇²f(x^k)

Newton's method is therefore equivalent to using a quadratic
approximation of the function and applying the necessary condition.
Example 1
Solve min f(x) = x^2 - x using Newton's method
Solution:
∇f(x) = 2x - 1,  ∇²f(x) = 2
Assume x^0 = 3;  ∇f(x^0) = 2(3) - 1 = 5
Newton's update: x^1 = x^0 - 5/2 = 3 - 2.5 = 0.5
Repeat the calculation for x^2 with x^1 = 0.5:
the minimum is found at x* = 0.5 (the value does not improve further).
(Only one iteration is needed here because f is quadratic, so ∇f is linear;
in general more iterations are required.)
[Plot of f(x) = x^2 - x over -1 ≤ x ≤ 3.5, showing the Newton step landing at x^1 = 0.5]
Example 2
Find the minimum of f(x) = x^4 - x + 1, tol = 10^-7
Solution:
∇f(x) = 4x^3 - 1,  ∇²f(x) = 12x^2
Assume x^0 = 3;
x^1 = x^0 - (4(3)^3 - 1) / (12(3)^2) = 3 - 107/108 = 2.00926
x^2 = x^1 - (4(2.00926)^3 - 1) / (12(2.00926)^2) = 2.00926 - 31.4465/48.4454 = 1.36015
The iterations continue until |x^(k+1) - x^k| < tol, giving the solution
x* = 0.6299605
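A code sketch of this iteration is given below (an illustrative implementation, stopping on |x^(k+1) - x^k| < tol as in the solution above); run from x^0 = 3 it reproduces the iterates 2.00926, 1.36015, ... and converges to x* ≈ 0.6299605.

```python
# Newton's method for Example 2: x_{k+1} = x_k - f'(x_k)/f''(x_k)
# with f'(x) = 4*x**3 - 1 and f''(x) = 12*x**2.

def newton_min(df, d2f, x0, tol=1e-7, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_new = x - df(x) / d2f(x)        # Newton update
        if abs(x_new - x) < tol:          # stop when |x_{k+1} - x_k| < tol
            return x_new
        x = x_new
    return x

x_star = newton_min(lambda x: 4*x**3 - 1, lambda x: 12*x**2, x0=3.0)
print(x_star)    # ≈ 0.6299605
```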
NEWTON'S METHOD
this method approximates the function by a quadratic function
for quadratic functions, the optimum is obtained in one iteration
both ∇f(x) and ∇²f(x) have to be calculated
if ∇²f(x) < 0, the method converges slowly


Derivative-based method
(Quasi-Newton) Secant method
SECANT METHOD
starts out by using two points, x_p and x_q, spanning the interval
the first derivatives should have opposite signs at x_p and x_q:
  ∇f(x_p) · ∇f(x_q) < 0
∇f(x) is approximated by a straight line
the interval bounds are updated and narrowed down at each iteration
the procedure terminates when |∇f(x̃)| < tol
QUASI-NEWTON (SECANT) METHOD

  m = [∇f(x_q) - ∇f(x_p)] / (x_q - x_p)
  x̃ = x_q - ∇f(x_q) / m

x̃ replaces x_p in the next iteration
Remarks:
this method only uses first-order derivatives

[Plot of ∇f(x) with its straight-line (secant) approximation]
Example
Find the minimum of f(x) = x^4 - x + 1, tol = 10^-7
Solution:
∇f(x) = 4x^3 - 1
Assume x_p = -3, x_q = 3;
  ∇f(x_p) = -109
  ∇f(x_q) = 107
opposite signs, so these are acceptable choices

[Plot of ∇f(x) = 4x^3 - 1 over the interval]
The shape of f(x) implies that a large number of iterations are needed.

Solution (cont.):
at each iteration the new point x̃ replaces x_p, and the search continues
until |∇f(x̃)| < tol, converging to the same minimum, x* ≈ 0.6299605.
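A code sketch of the secant iteration is given below; it is an illustrative variant that always keeps a pair of points whose derivatives have opposite signs (for this example the new point keeps replacing x_p, consistent with the remark above) and terminates when |∇f| drops below the tolerance.

```python
# Quasi-Newton (secant) method sketch: f'(x) is approximated by the straight
# line through (x_p, f'(x_p)) and (x_q, f'(x_q)); the new point replaces the
# bound whose derivative has the same sign, so the bracket is preserved.

def secant_min(df, xp, xq, tol=1e-7, max_iter=500):
    gp, gq = df(xp), df(xq)
    assert gp * gq < 0, "f' must have opposite signs at xp and xq"
    x_new = xq
    for _ in range(max_iter):
        m = (gq - gp) / (xq - xp)         # slope of the secant through the derivatives
        x_new = xq - gq / m               # zero of the straight-line model of f'
        g_new = df(x_new)
        if abs(g_new) < tol:              # terminate when |f'(x_new)| < tol
            return x_new
        if g_new * gq < 0:
            xp, gp = x_new, g_new         # same sign as the x_p side: replace x_p
        else:
            xq, gq = x_new, g_new
    return x_new

print(secant_min(lambda x: 4*x**3 - 1, xp=-3.0, xq=3.0))   # ≈ 0.6299605, many iterations
```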
RATE OF CONVERGENCE
three rates of convergence are used to compare the effectiveness of different
search methods:

Linear
  |x^(k+1) - x*| / |x^k - x*| ≤ c,  0 ≤ c < 1, for k large
  slow in practice
  the secant method has linear convergence

Order p
  |x^(k+1) - x*| / |x^k - x*|^p ≤ c,  c ≥ 0, p > 1, for k large
  fastest in practice
  Newton's method has order p = 2 convergence

Superlinear
  |x^(k+1) - x*| / |x^k - x*| ≤ c_k, with c_k → 0 as k → ∞
  fast in practice
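These definitions can be checked numerically. The short sketch below (illustrative, not from the slides) uses the Newton iterates of Example 2: the linear error ratio |x^(k+1) - x*| / |x^k - x*| tends towards zero, while the order-2 ratio |x^(k+1) - x*| / |x^k - x*|^2 settles near a constant, consistent with quadratic convergence.

```python
# Illustrative check of convergence order using Newton iterates for f = x**4 - x + 1.
x_star = 0.25 ** (1.0 / 3.0)             # exact minimiser: 4*x**3 - 1 = 0
xs, x = [], 3.0
for _ in range(8):
    xs.append(x)
    x = x - (4*x**3 - 1) / (12*x**2)     # Newton update

errs = [abs(xk - x_star) for xk in xs]
for e_k, e_k1 in zip(errs, errs[1:]):
    print(f"linear ratio {e_k1/e_k:10.6f}    order-2 ratio {e_k1/e_k**2:8.3f}")
```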
TERMINATION CRITERIA
For minimisation you can use up to three criteria for termination:

(1)  |f(x^(k+1)) - f(x^k)| / |f(x^k)| < ε1;
     except when f(x^k) ≈ 0, then use |f(x^(k+1)) - f(x^k)| < ε2

(2)  |x_i^(k+1) - x_i^k| / |x_i^k| < ε3;
     except when x_i^k ≈ 0, then use |x_i^(k+1) - x_i^k| < ε4

(3)  |∇f(x^k)| < ε5  or  |s^k| < ε6

Cautions:
  a big change in f(x) with little change in x: the code will stop prematurely
  if Δx is the sole criterion
  a big change in x with little change in f(x): the code will stop prematurely
  if Δf is the sole criterion
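The sketch below is an illustrative combination of the three tests (the tolerance names eps_f, eps_x, eps_g are hypothetical); requiring all of them avoids the two premature-stop cases cautioned above.

```python
# Illustrative termination test combining criteria (1)-(3) above.
# Relative tests switch to absolute tests near zero, as in the slides.

def converged(f_prev, f_new, x_prev, x_new, grad_new,
              eps_f=1e-8, eps_x=1e-8, eps_g=1e-6):
    # (1) change in f, relative unless f(x_k) is (near) zero
    df_ok = (abs(f_new - f_prev) / abs(f_prev) < eps_f) if abs(f_prev) > eps_f \
            else (abs(f_new - f_prev) < eps_f)
    # (2) change in x, relative unless x_k is (near) zero
    dx_ok = (abs(x_new - x_prev) / abs(x_prev) < eps_x) if abs(x_prev) > eps_x \
            else (abs(x_new - x_prev) < eps_x)
    # (3) derivative close to zero
    grad_ok = abs(grad_new) < eps_g
    return df_ok and dx_ok and grad_ok    # no single criterion decides on its own

# Example: near x* = 0.5 of f(x) = x**2 - x, all three tests are satisfied
print(converged(f_prev=-0.25, f_new=-0.25, x_prev=0.5, x_new=0.5, grad_new=0.0))  # True
```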
RECAP
discuss the concept of one-dimensional unconstrained optimisation
solve one-dimensional unconstrained optimisation:
  analytically: via solving ∇f(x) = 0 and determining x*
  analytically or numerically: function value-based methods,
  derivative-based methods
Assignment 3
REFERENCES
1. Edgar, T.F. and Himmelblau, D.M., Optimization of Chemical Processes,
   McGraw-Hill, 2001.
2. Biegler, L.T., Grossmann, I.E. and Westerberg, A.W., Systematic Methods
   of Chemical Process Design, Prentice Hall, 1997.
