Optimization Methods
1.0. Introduction: In optimization of a design, the design objective could be simply to minimize the cost of production or to maximize the efficiency of production. An optimization algorithm is a procedure which is executed iteratively, comparing various solutions till an optimum or a satisfactory solution is found. With the advent of computers, optimization has become a part of computer-aided design activities. There are two distinct types of optimization algorithms widely used today. (a) Deterministic Algorithms. They use specific rules for moving from one solution to another. These algorithms have been in use for quite some time and have been successfully applied to many engineering design problems.
(b) Stochastic Algorithms. The stochastic algorithms are probabilistic in nature, with probabilistic transition rules. These are gaining popularity due to certain properties which deterministic algorithms do not have. 2.0 Optimal problem formulation: A naive optimal design is achieved by comparing a few (limited up to ten or so) alternative solutions created by using a priori problem knowledge. In this method the feasibility of each design solution is first investigated. Thereafter an estimate of the underlying objective (cost, profit, etc.) of each solution is compared and the best solution is adopted. It is impossible to apply a single formulation procedure to all engineering design problems, since the objective and the associated design parameters vary from product to product; therefore different techniques are used in
different problems. The purpose of formulation is to create a mathematical model of the optimal design problem, which can then be solved using an optimization algorithm. Figure 1 shows an outline of the steps usually involved in an optimal design formulation.
Design variables: The formulation of an optimization problem begins with identifying the underlying design variables, which are primarily varied during the optimization process. A design problem usually involves many design parameters, of which some are highly sensitive to the proper working of the design. These parameters are called design variables in the parlance of optimization procedures. Other (not so important) design parameters usually remain fixed or vary in relation to the design variables. The first thumb rule of the formulation of an optimization problem is to choose as few design variables as possible. The outcome of that optimization procedure may indicate whether to include more design variables in a revised formulation or to replace some previously considered design variables with new design variables.
Constraints: The constraints represent functional relationships among the design variables and other design parameters, satisfying certain physical phenomena and certain resource limitations. The nature and number of constraints to be included in the formulation depend on the user. Constraints may or may not have exact mathematical expressions. For example, the maximum stress is a constraint on a structure. If the structure has a regular shape, there is an exact mathematical relation between the maximum stress and the dimensions. But in the case of an irregular shape, finite element simulation software may be necessary to compute the maximum stress. The following two types of constraints emerge from most considerations: 1. Inequality type constraints. 2. Equality type constraints.
Inequality constraints state that the functional relationships among variables are either greater than, smaller than, or equal to a resource value. Example: the stress σ(x) developed anywhere in a component must be smaller than or equal to the allowable strength Sallowable of the material, i.e. σ(x) ≤ Sallowable. Some constraints may be of the greater-than-or-equal-to type. For example, the natural frequency f(x) of a system may be required to be greater than 2 Hz, or in notation f(x) ≥ 2.
Equality constraints state that a functional relationship should exactly match a resource value. Example: the deflection δ(x) of a point in the component must be exactly equal to 5 mm; then δ(x) = 5. It is very difficult to handle equality constraints in optimization algorithms. In such cases, the equality constraint is relaxed by replacing it with two inequality constraints, as given below. Example: previously δ(x) = 5; now it is changed to the inequality constraints δ(x) ≥ 4 and δ(x) ≤ 6.
Objective functions: The next task in the formulation procedure is to find the objective function in terms of the design variables and other problem parameters. The common engineering objectives involve minimization of overall cost of manufacturing or minimization of overall weight of a component or maximization of total life of a product or others. Although most of the objectives can be quantified (expressed in mathematical form), there are some objectives (such as aesthetic aspect of a design, ride characteristics of a car suspension design and reliability of a design) that may not be possible to formulate mathematically. In such a case an approximating mathematical expression is used.
In real-world optimization, there could be more than one objective that the designer may want to optimize simultaneously. Multiple-objective optimization algorithms are complex and computationally expensive. Therefore the most important objective is chosen as the objective function and the other objectives are included as constraints by restricting their values within a certain range. For example, consider the optimal truss structure design problem. The designer may be interested in minimizing the overall weight of the structure and simultaneously be concerned with minimizing the deflection of a specific point in the truss. In the optimal problem formulation, the designer may use the weight of the truss (as a function of the cross-sections of the members) as the objective function and impose a constraint that the deflection of the concerned point be less than a specified limit.
The objective function can be of two types: either it is to be maximized or it is to be minimized. Usually an optimization algorithm is written either for minimization or for maximization. In some algorithms, minor structural changes would enable performing either minimization or maximization, but this requires extensive knowledge of the algorithm. The duality principle helps by allowing the same algorithm to be used for minimization or maximization with a minor change in the objective function instead of a change in the entire algorithm. If the algorithm is written for solving a minimization problem, it can easily be applied to a maximization problem by multiplying the objective function by −1, and vice versa.
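To make the duality principle concrete, here is a minimal Python sketch (the efficiency curve and the crude scanning minimizer are invented for illustration and are not part of the original text): a routine written only for minimization is reused for maximization simply by negating the objective.

# Duality principle: maximize a function by minimizing its negative.
def minimize_scan(func, a, b, n=1000):
    # Crude minimizer: evaluate func at n + 1 equally spaced points and keep the best.
    best_x, best_f = a, func(a)
    step = (b - a) / n
    for i in range(1, n + 1):
        x = a + i * step
        fx = func(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

def efficiency(x):
    # Hypothetical efficiency curve with its peak at x = 2 (illustrative only).
    return -(x - 2.0) ** 2 + 5.0

# Maximize efficiency(x) by minimizing -efficiency(x).
x_star, neg_f = minimize_scan(lambda x: -efficiency(x), 0.0, 5.0)
print("maximum near x =", round(x_star, 3), "with efficiency =", round(-neg_f, 3))

Only the sign of the reported objective value has to be flipped back at the end; the search routine itself is unchanged.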
Variable bounds: The final task of the formulation procedure is to set the minimum and the maximum bounds on each design variable. Certain optimization algorithms do not require this information; in these problems, the constraints completely surround the feasible region. Other problems require the search algorithm to work within these bounds. In general, all N design variables are restricted to lie within the minimum and the maximum bounds as follows:
xi(L) ≤ xi ≤ xi(U) for i = 1, 2, ..., N.     (1)
In any given problem, the determination of the variable bounds xi(L) and xi(U) may be difficult. One way to remedy this situation is to make a guess about the optimal solution and set the minimum and maximum bounds so that the optimal solution lies within these two bounds.
If any design variable corresponding to the optimal solution is found to lie on or near a minimum or maximum bound, the chosen bound may be adjusted and the optimization algorithm simulated again. After the above four tasks are completed, the optimization problem can be mathematically written in a special format, known as the nonlinear programming (NLP) format. General format: Denoting the design variables as a column vector x = (x1, x2, ..., xN)^T, the objective function as a scalar quantity f(x), the J inequality constraints as gj(x) ≥ 0 and the K equality constraints as hk(x) = 0, we write the NLP problem as:
Minimize f(x)
subject to
gj(x) ≥ 0,          j = 1, 2, ..., J;
hk(x) = 0,          k = 1, 2, ..., K;
xi(L) ≤ xi ≤ xi(U),   i = 1, 2, ..., N.
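As an illustration of this general format, the following sketch maps the NLP statement onto SciPy's minimize routine (a minimal sketch assuming SciPy is available; the toy objective, constraints and bounds are invented for illustration, and SciPy's "ineq" convention matches gj(x) ≥ 0):

# Toy problem written in the NLP format above and solved with SciPy (illustrative only).
import numpy as np
from scipy.optimize import minimize

def f(x):                          # objective function f(x)
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def g1(x):                         # inequality constraint, g1(x) >= 0
    return 4.0 - (x[0] + x[1])

def h1(x):                         # equality constraint, h1(x) = 0
    return x[0] - 0.5 * x[1]

bounds = [(0.0, 5.0), (0.0, 5.0)]  # xi(L) <= xi <= xi(U)
constraints = [{"type": "ineq", "fun": g1},
               {"type": "eq", "fun": h1}]

result = minimize(f, x0=np.array([2.0, 2.0]), method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x, result.fun)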
Since the connectivity of the truss is given, the cross-sectional areas and the material properties of the members are the design parameters. We choose the cross-sectional areas of the members as the design variables. Using the symmetry of the truss, A7 = A1; A6 = A2; A3 = A5. Thus, there are practically four design variables (A1 to A4). Formulation of the constraints: For the truss to carry the given load P = 2 kN, the tensile and compressive stresses generated in each member must not be more than the corresponding allowable strengths Syt and Syc of the material. Let us assume Syt = Syc = 500 MPa and modulus of elasticity E = 200 GPa. The axial forces in the members of the truss are:
Member AB: 0.5 P csc θ (compressive); the axial forces in the remaining members follow from statics in a similar manner.
Now, the axial stress can be calculated by dividing the axial load by the cross-sectional area of that member. Thus, the first set of constraints can be written as
P csc θ / (2 A1) ≤ Syc,
P cot θ / (2 A2) ≤ Syt,
P csc γ / (2 A3) ≤ Syt,
P (cot θ + cot γ) / (2 A4) ≤ Syc.
In the above structure, tan θ = 1.0 and tan γ = 2/3. The other set of constraints arises from the stability consideration of the compression members AB, BD, and DE. The Euler buckling conditions on the axial loads in members AB and BD give two further constraints, which are included in the NLP formulation below.
In this problem, we are interested in minimizing the weight of the truss structure. Since we assumed the same material for all members, the minimization of the total volume of material will yield the same optimal solution as the minimization of the total weight. Thus, we write the objective function as
Minimize V = Σ Ai li, where the sum runs over all members of the truss and li is the length of member i.
The next task is to set lower and upper bounds for the four cross-sectional areas. We may choose to make all four areas lie between 10 and 500 mm². Thus the variable bounds are 10 mm² ≤ A1, A2, A3, A4 ≤ 500 mm².
In the following, we present the above truss structure problem in NLP format.
Minimize V = Σ Ai li
Subject to
Syc − P csc θ / (2 A1) ≥ 0,
Syt − P cot θ / (2 A2) ≥ 0,
Syt − P csc γ / (2 A3) ≥ 0,
Syc − P (cot θ + cot γ) / (2 A4) ≥ 0,
E A1² / (1.281 l²) − P / (2 sin θ) ≥ 0,
E A4² / (1.281 l²) − P (cot θ + cot γ) / 2 ≥ 0,
10 ≤ A1, A2, A3, A4 ≤ 500 (areas in mm²).
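The constraint functions above can be checked numerically for any candidate design. The sketch below evaluates the four stress constraints for an arbitrary trial set of areas, using the values tan θ = 1.0 and tan γ = 2/3 stated earlier; the trial areas are made up, and the constraint expressions follow the reconstruction given above, so this is an illustrative check only:

# Check the four stress constraints for an arbitrary trial design (illustrative only).
import math

P = 2000.0                        # applied load in N (2 kN)
Syt = Syc = 500.0                 # allowable strengths in MPa (N/mm^2)
theta = math.atan(1.0)            # tan(theta) = 1.0
gamma = math.atan(2.0 / 3.0)      # tan(gamma) = 2/3

A1, A2, A3, A4 = 40.0, 30.0, 30.0, 40.0    # trial cross-sectional areas in mm^2

s1 = P / (2.0 * A1 * math.sin(theta))                   # stress in the member with area A1
s2 = P / (2.0 * A2 * math.tan(theta))                   # stress in the member with area A2
s3 = P / (2.0 * A3 * math.sin(gamma))                   # stress in the member with area A3
s4 = P * (1.0 / math.tan(theta) + 1.0 / math.tan(gamma)) / (2.0 * A4)

print("stress constraints satisfied:",
      s1 <= Syc and s2 <= Syt and s3 <= Syt and s4 <= Syc)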
Example: 2
The comfort in riding a car largely depends on the suspension characteristics. The car body is usually supported by a suspension coil spring and a damper at each wheel (Figure 4). In order to formulate the optimal design problem, the first task is to identify the important design variables.
The important design parameters are: sprung mass ms, front unsprung mass mfu, rear unsprung mass mru, front damper coefficient αf, rear damper coefficient αr, front coil stiffness kfs, rear coil stiffness krs, front tyre stiffness kft, rear tyre stiffness krt, axle-to-axle distance l, and polar moment of inertia of the car J. Since a long time is taken for the convergence of the optimization with all parameters as design variables, only four important parameters (front coil stiffness kfs, rear coil stiffness krs, front damper coefficient αf, and rear damper coefficient αr) are considered as design variables. The other design parameters are kept constant:
ms = 1000 kg, mfu = 70 kg, mru = 150 kg, kft = 20 kg/mm, krt = 20 kg/mm, l = 3.2 m, l1 = 1.6 m, l2 = 1.6 m, J = 550 kg-m².
Using these parameters, differential equations governing the vertical motion of the unsprung mass at the front axle (q1), the sprung mass (q2), the unsprung mass at the rear axle (q4), and the angular motion of the sprung mass (q3) are written (Fig. 5):
Fig.5. The dynamic model of the car suspension system. The above model has four degrees-of-freedom (q1 to q4)
Equations (9) to (12), the four coupled equations of motion of this model, are not reproduced here; the forces F1 to F6 appearing in them are calculated as follows:
F1 = kft d1,   F2 = kfs d2,   F3 = αf ḋ2,
F4 = krt d3,   F5 = krs d4,   F6 = αr ḋ4.     (13)
The parameters d1, d2, d3, and d4 are the relative deformations in the front tyre, the front spring, the rear tyre, and the rear spring respectively. Figure 5 shows all the four degrees of freedom of the above system (q1 to q4). The relative deformations in springs and tyres can be written as follows:
d1 = q1 − f1(t),
d2 = q2 + l1 q3 − q1,
d3 = q4 − f2(t),
d4 = q2 − l2 q3 − q4.
The time-varying functions f1(t) and f2(t) are road irregularities as functions of time. Any function can be used for f1(t). For example, a bump can be modeled as f1(t) = A sin(πt/T), where A is the amplitude of the bump and T is the time required to cross the bump. When a car is moving forward, the front wheel experiences the bump first, while the rear wheel experiences the same bump a little later, depending upon the speed of the car. Thus, the function f2(t) can be written as f2(t) = f1(t − l/v), where l is the axle-to-axle distance and v is the speed of the car. For the above bump, f2(t) = A sin(π(t − l/v)/T).
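A minimal Python sketch of the two road inputs follows; the amplitude A, crossing time T and car speed v are assumed values chosen only for illustration:

# Road excitation at the front and rear wheels (A, T and v are assumed values).
import math

A = 0.05     # bump amplitude in m (assumed)
T = 0.2      # time required to cross the bump in s (assumed)
v = 10.0     # car speed in m/s (assumed)
l = 3.2      # axle-to-axle distance in m

def f1(t):
    # The front wheel sees a half-sine bump between t = 0 and t = T.
    return A * math.sin(math.pi * t / T) if 0.0 <= t <= T else 0.0

def f2(t):
    # The rear wheel sees the same bump delayed by the travel time l/v.
    return f1(t - l / v)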
The coupled differential equations specified in equations (9) to (12) can be solved using a numerical integration technique (for example, a fourth-order Runge-Kutta method can be used) to obtain the pitching and bouncing dynamics of the sprung mass ms. The equations can be integrated over a time range from zero to tmax. After the design variables are chosen, the next task is to formulate the constraints associated with the above car suspension problem. In order to simplify the problem, we consider only one constraint. The jerk (the rate of change of the vertical acceleration of the sprung mass) is a major factor concerning the comfort of the riding passengers. The guideline used in the car industry suggests that the maximum jerk experienced by the passengers should not be more than about 18 m/s³. Mathematically,
max |d³q2(t)/dt³| ≤ 18.     (14)
When the four coupled differential equations (9) to (12) are solved, the above constraint can be computed by numerically differentiating the vertical movement of the sprung mass (q2) thrice with respect to time. The next task is to formulate the objective function. In this problem, the primary objective is to minimize the transmissibility factor which is calculated as the ratio of the bouncing amplitude q2(t) of the sprung mass to the road excitation amplitude A. Thus, we write the objective function as
Minimize   max |q2(t)| / A
The above objective function can be calculated from the solution of the four differential equations mentioned earlier. A minimum value of the transmissibility factor suggests the minimum transmission of road vibration to the passengers. This factor is also directly related to the ride characteristics as specified by the ISO standard.
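Assuming the bounce response q2(t) has already been obtained by integrating equations (9) to (12) numerically (for example with a Runge-Kutta routine), the jerk constraint and the transmissibility objective can be evaluated from the sampled solution, as in the following sketch:

# Post-processing a sampled bounce response q2(t) for the jerk constraint and the
# transmissibility objective (assumes q2 was obtained by integrating equations (9)-(12)).
import numpy as np

def jerk_and_transmissibility(t, q2, A):
    # t  : uniformly spaced time samples
    # q2 : bouncing displacement of the sprung mass at those times
    # A  : road excitation (bump) amplitude
    dt = t[1] - t[0]
    jerk = np.gradient(np.gradient(np.gradient(q2, dt), dt), dt)   # third time derivative
    max_jerk = np.max(np.abs(jerk))                # must not exceed about 18 m/s^3
    transmissibility = np.max(np.abs(q2)) / A      # objective to be minimized
    return max_jerk, transmissibility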
Thus, the optimized design of the above car suspension system would provide the minimum transmissibility of the road vibration to the passengers with a limited level of jerk. Finally, a minimum and maximum limit for each design variable can be set. This may require some previous experience with a car suspension design, but the following limits for the above car may include the optimal solution:
0 ≤ kfs, krs ≤ 2 kg/mm,   0 ≤ αf, αr ≤ 300 kg/(m/s).
Thus, the above optimal car suspension design problem can be written in NLP form as follows:
Minimize   max |q2(t)| / A
Subject to
18 − max |d³q2(t)/dt³| ≥ 0,
0 ≤ kfs, krs ≤ 2 kg/mm,
0 ≤ αf, αr ≤ 300 kg/(m/s).
Fig. 2. A typical transit system network.
Figure 2 shows a typical transit system network. The solid lines represent different routes, the points on the lines represent the stops and the circled intersections of the routes represent the transfer stations. The problem is to determine schedules for the routes such that the transit system provides the best Level of Service (LOS) to its passengers, within the resources available.
One good measure of the LOS is the amount of time passengers wait during their journey: the lesser the waiting time, the better the LOS. On any transit network, passengers wait either to board the vehicle at the station of origin (the Initial Wait Time, IWT) or at a transfer station where they transfer from one vehicle to another (the Transfer Time, TT). The vehicles are to be scheduled so that (IWT + TT) is minimum. The design variables are the arrival time a_i^k and the departure time d_i^k of the k-th vehicle on the i-th route at the transfer station. If there are M routes and K vehicles on each route, the number of design variables is 2MK.
Minimum stopping time: (d_i^k − a_i^k) ≥ s_min for all i and k,     (1)
Maximum stopping time: (d_i^k − a_i^k) ≤ s_max for all i and k.     (2)
Maximum allowable transfer time: No passenger on the transit network should have to wait more than a certain period of time T at any transfer station. This can be enforced by checking all possible differences between departure and arrival times and limiting those values to T. This constraint can be formulated by introducing a new set of variables δ_ij^kl between the k-th vehicle of the i-th route and the l-th vehicle of the j-th route. These variables can take either a zero or a one. A value of zero means that the transfer of passengers between those two vehicles is not feasible; a value of one means otherwise.
Consider the arrival and departure times of vehicles in two different routes at a particular station, as shown in Fig.3.
Fig.3. Transfers from the k-th vehicle on the i-th route to three consecutive vehicles in the j-th route.
A passenger from the k-th vehicle in the i-th route can only transfer to a vehicle in the j-th route which arrives at the station after a_i^k. According to the figure, the transfer of passengers from the k-th vehicle in the i-th route is not possible to the (l−1)-th vehicle in the j-th route, because the departure time of the latter vehicle, d_j^(l−1), is earlier than a_i^k. Thus, the parameter δ_ij^k,l−1 takes the value zero, whereas the parameter δ_ij^kl takes the value one. In order to simplify the model, we assume that transfers to vehicles departing after the l-th vehicle in the j-th route are also not possible. All parameters δ_ij^kq for q = (l+1), (l+2), ... are also zero. Thus, between any two vehicles, the following condition must be satisfied:
δ_ij^kl (d_j^l − a_i^k) ≤ T for all i, j, k and l.     (3)
The left side expression of the above condition is zero for those transfers that are not feasible. Since transfers only to the next available vehicle are assumed, only one δ_ij^kl (for l = 1, 2, ...) is one and the rest are all zeros for fixed values of i, j and k. Mathematically,
Σ_l δ_ij^kl = 1 for all i, j and k.     (4)
The introduction of the artificial variables δ_ij^kl makes the formulation easier, but causes a difficulty. Many optimization algorithms cannot handle discrete design variables efficiently. Since the artificial design variables δ_ij^kl can only take a value of zero or one, another set of constraints is added:
(d_j^l − a_i^k) ≤ M δ_ij^kl for all i, j, k and l,     (5)
where M is a large positive number. The above constraint ensures that the variable δ_ij^kl takes the value one whenever a transfer is possible and the value zero whenever a transfer is not possible. Maximum headway: The headway between two consecutive vehicles should be less than or equal to the policy headway h_i, or
(a_i^k − a_i^(k−1)) ≤ h_i for all i and k.     (6)
The objective function consists of two terms: the first term represents the total transfer time (TT) over all the passengers and the second term represents the initial waiting time (IWT) for all the passengers. The objective is to minimize the following function:
Σ_i Σ_j Σ_k Σ_l δ_ij^kl (d_j^l − a_i^k) w_ij^k  +  Σ_i Σ_k ∫_0^(a_i^k − a_i^(k−1)) v_ik(t) [(a_i^k − a_i^(k−1)) − t] dt
The parameter w_ij^k is the number of passengers transferring from the k-th vehicle of the i-th route to the j-th route. The first term is obtained by summing the individual transfer time (d_j^l − a_i^k) over all passengers, for all the vehicles, for every pair of routes. The parameter v_ik(t) is the number of passengers arriving at the stop for the k-th vehicle in the i-th route at a given time t. Since the arrival time for passengers can be anywhere between t = 0 and t = (a_i^k − a_i^(k−1)) (the headway), the initial waiting time also differs from one passenger to another. For example, a passenger arriving at the stop just after the previous vehicle has left has to wait for the full headway time (a_i^k − a_i^(k−1)) before the next vehicle arrives. On the other hand, a passenger arriving at the stop later has to wait for a shorter time. The calculation of the second term assumes that passengers arrive at the stop during the time interval a_i^(k−1) to a_i^k
according to the known time-varying function v_ik(t), where t is measured from a_i^(k−1). Then the quantity
∫_0^(a_i^k − a_i^(k−1)) v_ik(t) [(a_i^k − a_i^(k−1)) − t] dt     (7)
gives the sum of the initial waiting times for all passengers who board the k-th vehicle of the i-th route. We then sum it over all the routes and vehicles to estimate the network total of the IWT. Thus, the complete NLP problem can be written as follows: Minimize
Σ_i Σ_j Σ_k Σ_l δ_ij^kl (d_j^l − a_i^k) w_ij^k  +  Σ_i Σ_k ∫_0^(a_i^k − a_i^(k−1)) v_ik(t) [(a_i^k − a_i^(k−1)) − t] dt     (8)
Subject to
s_max − (d_i^k − a_i^k) ≥ 0 for all i and k,
(d_i^k − a_i^k) − s_min ≥ 0 for all i and k,
T − δ_ij^kl (d_j^l − a_i^k) ≥ 0 for all i, j, k and l,
M δ_ij^kl − (d_j^l − a_i^k) ≥ 0 for all i, j, k and l,
h_i − (a_i^k − a_i^(k−1)) ≥ 0 for all i and k,
Σ_l δ_ij^kl = 1 for all i, j and k,
with the variables δ_ij^kl taking only a value of zero or one, and the other variables a_i^k and d_i^k being real-valued.
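To make the initial-wait-time term concrete, the integral in expression (7) has a simple closed form when the passenger arrival rate is constant over the headway. The following sketch (with assumed values for the arrival rate and headway) checks the numerical integral against that closed form:

# Initial wait time of expression (7) for a constant arrival rate (assumed values).
import numpy as np

v0 = 2.0      # passengers arriving per minute, assumed constant
H = 10.0      # headway a_i^k - a_i^(k-1) in minutes, assumed

t = np.linspace(0.0, H, 1001)
iwt_numeric = np.trapz(v0 * (H - t), t)      # integral of v(t) * [(headway) - t] dt
iwt_closed_form = v0 * H ** 2 / 2.0          # boarding passengers times half the headway
print(iwt_numeric, iwt_closed_form)          # both give 100.0 passenger-minutes

In other words, with a constant arrival rate the total initial waiting time equals the number of boarding passengers times half the headway.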
3. Optimization Algorithms
The formulations of engineering design problems differ from problem to problem. They may contain (i) linear terms for the constraints and the objective function, (ii) nonlinear terms for the constraints and the objective function, or terms that are not explicit functions of the design variables. No single optimization algorithm will work efficiently on all optimization problems. For the sake of clarity, the optimization algorithms are classified into a number of groups, which are now briefly discussed.
(a) Single-variable optimization algorithms. These algorithms are classified into two categories: (i) direct methods and (ii) gradient-based methods.
Direct methods do not use any derivative information of the objective function; only objective function values are used to guide the search process. However, gradient-based methods use derivative information (first and/or second order) to guide the search process. Although engineering optimization problems usually contain more than one variable, single-variable optimization algorithms are mainly used as unidirectional search methods in multivariable optimization algorithms. (b) Multi-variable optimization algorithms. These algorithms demonstrate how the search for the optimum point progresses in multiple dimensions. Depending on whether gradient information is used or not, these algorithms are also classified into direct and gradient-based techniques.
(c) Constrained optimization algorithms. These algorithms use the single-variable and multivariable optimization algorithms repeatedly while simultaneously keeping the search effort inside the feasible search region. These algorithms are mostly used in engineering optimization problems. (d) Specialized optimization algorithms. Two of these algorithms, integer programming and geometric programming, are often used in engineering design problems. Integer programming methods can solve optimization problems with integer design variables. Geometric programming methods solve optimization problems with objective functions and constraints written in a special form. (e) Non-traditional optimization algorithms. There are two algorithms which are non-traditional; these are: (a) genetic algorithms and (b) simulated annealing.
4.0 Single-variable optimization algorithms
The algorithms described in this section can be used to solve minimization problems of the following type:
Minimize f(x)
where f(x) is the objective function and x is a real variable. The purpose of an optimization algorithm is to find a solution x for which the function f(x) is minimum.
4.1 Optimality criteria
There are three different types of optimal points:
(i) Local optimal point: A point or solution x* is said to be a local optimal point if no point in its neighbourhood has a function value smaller than f(x*).
(ii) Global optimal point: A point or solution x** is said to be a global optimal point if no point in the entire search space has a function value smaller than f(x**).
(iii) Inflection point: A point x* is an inflection point if f(x*) increases locally as x increases and decreases locally as x decreases, or if f(x*) decreases locally as x increases and increases locally as x decreases.
Let the objective function f(x) be defined on the chosen search space, and let f'(x) and f''(x) be its first and second derivatives. A point x is a minimum if f'(x) = 0 and f''(x) > 0. If only f'(x) = 0 is known, the point is a stationary point: either a minimum, a maximum or an inflection point. Suppose at a point x* the first derivative is zero and the order of the first non-zero higher-order derivative is n; then:
If n is odd, x* is an inflection point.
If n is even, x* is a local optimum:
(i) if that derivative is +ve, x* is a local minimum;
(ii) if that derivative is −ve, x* is a local maximum.
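These optimality criteria can also be checked numerically with finite differences. The following sketch (the quadratic test function is an assumed example, not from the original text) estimates f'(x) and f''(x) at a candidate point and applies the conditions above:

# Numerical check of the optimality criteria at a candidate point (illustrative function).
def f(x):
    return (x - 2.0) ** 2 + 3.0        # known minimum at x = 2

def derivative(func, x, order=1, h=1e-4):
    # Central-difference estimates of the first and second derivatives.
    if order == 1:
        return (func(x + h) - func(x - h)) / (2.0 * h)
    return (func(x + h) - 2.0 * func(x) + func(x - h)) / h ** 2

x_star = 2.0
print("f'(x*)  ~", round(derivative(f, x_star, order=1), 6))   # ~0  -> stationary point
print("f''(x*) ~", round(derivative(f, x_star, order=2), 6))   # > 0 -> local minimum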
Consider f(x) = x³. From the figure, we can see that the point x = 0 is an inflection point, since f(x) increases locally for x ≥ 0 and decreases locally for x ≤ 0. Using the sufficient conditions:
f'(x = 0) = 3x² at x = 0, which is 0,
f''(x = 0) = 6x at x = 0, which is 0,
f'''(x = 0) = 6 (a non-zero value).
The first non-zero derivative is the third derivative, so n = 3, which is odd; hence x = 0 is an inflection point.
From the figure, we can see that the point x = 0 is a minimum point; this can be verified using the sufficient conditions given above.
4.2 Bracketing Methods: The minimum of a function is found in two phases. Initially an approximate method is used to find a lower and an upper bound of the minimum. Next, a sophisticated technique is used to search within these two limits to find the optimal solution. (a) Exhaustive search method. It is the simplest of all search methods. The optimum of a function is bracketed by calculating the function values at a number of equally spaced points (Fig. 7).
Fig.7 The exhaustive search method that uses equally spaced points
Usually the search begins from a lower bound on the variable, and three consecutive function values are compared at a time, based on the assumption of unimodality of the function. Based on the outcome of the comparison, the search is either terminated or continued by replacing one of the three points with a new point.
Algorithm:
Step 1. Set x1 = a, Δx = (b − a)/n (n is the number of intermediate points), x2 = x1 + Δx, x3 = x2 + Δx.
Step 2. If f(x1) ≥ f(x2) ≤ f(x3), the minimum point lies in (x1, x3); terminate. Else set x1 = x2, x2 = x3, x3 = x2 + Δx and go to Step 3.
Step 3. Is x3 ≤ b? If yes, go to Step 2. Else no minimum exists in (a, b), or a boundary point (a or b) is the minimum point.
Example 6: Minimize f(x) = x² + 54/x in the interval (0, 5). A plot of the function is shown in Fig. 8.
According to sufficiency conditions, x =3 is a local minimum. Now consider n = 10 for exhaustive search. Step 1 According to the parameter chosen,
x1 = a = 0, b = 5, Δx = (5 − 0)/10 = 0.5.
We set x2 = 0 + 0.5 = 0.5 and x3 = 0.5 + 0.5 = 1.0.
Step 2: f(x1) is very large (the function is unbounded as x approaches 0), f(0.5) = 108.25 and f(1.0) = 55.00, so the minimum is not yet bracketed. Set x1 = 0.5, x2 = 1.0, x3 = 1.5.
Step 3: x3 = 1.5 < 5, so go to Step 2.
Step 2: f(0.5) = 108.25, f(1.0) = 55.00, f(1.5) = 38.25. Since f(x1) > f(x2) > f(x3), the minimum is not yet bracketed in (0.5, 1.5); the three points are shifted again.
The process is repeated until the condition f(x1) > f(x2) < f(x3) is met at x1 = 2.5, x2 = 3.0, x3 = 3.5; therefore the solution lies between (2.5, 3.5).
The accuracy of the solution is 2(b − a)/n = 2(5 − 0)/10 = 1.0. If a more accurate solution is required, the interval is divided into a larger number of parts by increasing n.
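A compact Python sketch of the exhaustive search applied to Example 6 is given below; since f(x) is undefined at x = 0, a lower bound slightly above zero is used, so the printed bracket is close to, but not exactly, (2.5, 3.5):

# Exhaustive search on f(x) = x^2 + 54/x, as in Example 6.
def f(x):
    return x ** 2 + 54.0 / x

def exhaustive_search(func, a, b, n):
    dx = (b - a) / n
    x1, x2, x3 = a, a + dx, a + 2.0 * dx
    while x3 <= b:
        if func(x1) >= func(x2) <= func(x3):
            return (x1, x3)                     # minimum bracketed in (x1, x3)
        x1, x2, x3 = x2, x3, x3 + dx            # shift the three points to the right
    return None                                 # no interior minimum found in (a, b)

print(exhaustive_search(f, 0.001, 5.0, 10))     # bracket close to (2.5, 3.5)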
(b) Bounding phase method. This method brackets the minimum by starting from an initial guess and taking exponentially growing steps in the descent direction.
Algorithm:
Step 1. Choose an initial guess x(0) and an increment Δ. Set k = 0.
Step 2. If f(x(0) − |Δ|) ≥ f(x(0)) ≥ f(x(0) + |Δ|), then Δ is +ve; else if f(x(0) − |Δ|) ≤ f(x(0)) ≤ f(x(0) + |Δ|), then Δ is −ve; else go to Step 1.
Step 3. Set x(k+1) = x(k) + 2^k Δ.
Step 4. If f(x(k+1)) < f(x(k)), set k = k + 1 and go to Step 3. Else, the minimum lies in the interval (x(k−1), x(k+1)) and terminate.
If Δ is large, the accuracy of the bracketing is poor.
Example 7: Minimize f(x) = x² + 54/x using the bounding phase method.
1. Choose an initial guess x(0) = 0.6 and an increment Δ = 0.5. Set k = 0.
2. We observe that f(0.1) > f(0.6) > f(1.1), so Δ is +ve.
3. Next guess: x(1) = x(0) + 2⁰Δ = 0.6 + 0.5 = 1.1.
4. f(x(1)) = 50.301 < f(x(0)); set k = 1 and go to Step 3.
3. Next guess: x(2) = x(1) + 2¹Δ = 1.1 + 2(0.5) = 2.1.
4. f(x(2)) = 30.124 < f(x(1)); set k = 2 and go to Step 3.
3. Next guess: x(3) = x(2) + 2²Δ = 2.1 + 4(0.5) = 4.1.
4. f(x(3)) = 29.981 < f(x(2)); set k = 3 and go to Step 3.
3. Next guess: x(4) = x(3) + 2³Δ = 4.1 + 8(0.5) = 8.1.
4. f(x(4)) = 72.277 > f(x(3)). Thus terminate with the interval (2.1, 8.1).
With Δ = 0.5 the obtained bracketing is poor; the number of function evaluations is 7. With Δ = 0.001, the obtained interval is (1.623, 4.695), requiring 15 function evaluations.
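The bounding phase steps above translate directly into code; the sketch below reproduces the bracket of Example 7 (the helper name bounding_phase and the list-based bookkeeping are implementation choices, not from the original text):

# Bounding phase method on f(x) = x^2 + 54/x, reproducing Example 7.
def f(x):
    return x ** 2 + 54.0 / x

def bounding_phase(func, x0, delta):
    # Step 2: choose the sign of the increment.
    if func(x0 - abs(delta)) >= func(x0) >= func(x0 + abs(delta)):
        step = abs(delta)
    elif func(x0 - abs(delta)) <= func(x0) <= func(x0 + abs(delta)):
        step = -abs(delta)
    else:
        return (x0 - abs(delta), x0 + abs(delta))     # already bracketed around x0
    # Steps 3 and 4: take exponentially growing steps until the function value rises.
    k = 0
    points = [x0 - step, x0, x0 + step]               # x(-1), x(0), x(1)
    while func(points[-1]) < func(points[-2]):
        k += 1
        points.append(points[-1] + (2 ** k) * step)
    return (min(points[-3], points[-1]), max(points[-3], points[-1]))

print(bounding_phase(f, 0.6, 0.5))                    # gives the bracket (2.1, 8.1)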
4.3. Region elimination methods: Once the minimum point is bracketed, a more sophisticated algorithm is used to improve the accuracy of the solution. Region elimination methods are used for this purpose. The fundamental rule for region elimination method is as follows:
Fig.9. A typical single-variable unimodal function with function values at two distinct points
Consider a unimodal function drawn in Fig.9. The two points x1 and x2 lie in the interval (a,b) and satisfy x1 < x2. For minimization, the following conditions apply If f ( x1 ) > f ( x2 ) then the minimum does not lie in (a, x1) If f ( x1 ) < f ( x2 ) then the minimum does not lie in (x2, b) If f ( x1 ) = f ( x2 ) then the minimum does not lie in (a, x1) and (x2, b)
(a) Interval halving method
Fig.10 Three points x1, xm, and x2 used in the interval halving
Fig.10 shows a region in the interval (a, b). Three points divide the search space into four regions. The fundamental rule for region elimination is used to eliminate a portion of the search space based on the function values at the three chosen points x1, xm and x2. Two of the function values are compared at a time and some region is eliminated. The possibilities are:
(i) If f(x1) < f(xm), the minimum cannot lie beyond xm; the interval is reduced from (a, b) to (a, xm), so the search space is reduced by 50 percent.
(ii) If f(x1) > f(xm), the minimum cannot lie in the interval (a, x1); eliminating (a, x1) reduces the search space by 25 percent. Then f(x2) and f(xm) are compared to eliminate a further 25 percent of the search space.
The process is continued until a small enough interval is found.
Algorithm:
1. Choose a lower bound a, an upper bound b and a small value ε. Set xm = (a + b)/2, L0 = L = b − a. Compute f(xm).
2. Set x1 = a + L/4 and x2 = b − L/4. Compute f(x1) and f(x2).
3. If f(x1) < f(xm), set b = xm and xm = x1; go to Step 5. Else go to Step 4.
4. If f(x2) < f(xm), set a = xm and xm = x2; go to Step 5. Else set a = x1, b = x2; go to Step 5.
5. Compute L = b − a. If |L| < ε, terminate; else go to Step 2.
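A Python sketch of the interval halving iteration follows, applied to the bracket (2.1, 8.1) obtained earlier for f(x) = x² + 54/x; the tolerance value is an assumed choice:

# Interval halving applied to a bracketed interval (tolerance value is assumed).
def f(x):
    return x ** 2 + 54.0 / x

def interval_halving(func, a, b, eps=0.01):
    xm = (a + b) / 2.0
    fm = func(xm)
    while (b - a) > eps:
        L = b - a
        x1, x2 = a + L / 4.0, b - L / 4.0
        f1, f2 = func(x1), func(x2)
        if f1 < fm:
            b, xm, fm = xm, x1, f1      # eliminate (xm, b)
        elif f2 < fm:
            a, xm, fm = xm, x2, f2      # eliminate (a, xm)
        else:
            a, b = x1, x2               # eliminate (a, x1) and (x2, b)
    return (a, b)

print(interval_halving(f, 2.1, 8.1))    # the bracket shrinks around x = 3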
(b) Fibonacci search method: The search interval is reduced according to the Fibonacci numbers, defined by Fn = Fn−1 + Fn−2, where n = 2, 3, 4, ..., and F0 = F1 = 1.
Algorithm:
1. Choose a lower bound a and an upper bound b. Set L = b − a. Assume the desired number of function evaluations to be n. Set k = 2.
2. Compute Lk* = (Fn−k+1 / Fn+1) L. Set x1 = a + Lk* and x2 = b − Lk*.
3. Compute one of f(x1) or f(x2), whichever was not evaluated earlier. Use the fundamental region elimination rule to eliminate a region. Set the new a and b.
4. Is k = n? If no, set k = k + 1 and go to Step 2. Else terminate.
(c) Golden section search method
Difficulties with the Fibonacci search method: (i) the Fibonacci numbers have to be calculated and stored; (ii) at every iteration the proportion of the eliminated region is not the same. In the golden section search method, the search space (a, b) is first linearly mapped to a unit interval search space (0, 1). Two points at a fraction τ from either end of the search space are chosen so that at every iteration the eliminated region is (1 − τ) times that in the previous iteration (Fig. 12). This can be achieved by equating 1 − τ with τ². This yields the golden number τ = 0.618.
Fig.12. The points (x1 and x2) used in the golden section
Algorithm:
1. Choose a lower bound a and an upper bound b. Choose a small value ε. Normalize the variable x by using the equation w = (x − a)/(b − a). Thus aw = 0, bw = 1 and Lw = 1. Set k = 1.
2. Set w1 = aw + (0.618)Lw and w2 = bw − (0.618)Lw. Compute f(w1) or f(w2), depending on whichever of the two was not evaluated earlier. Use the fundamental region elimination rule to eliminate a region and set the new aw and bw.
3. Is |Lw| sufficiently small? If no, set k = k + 1 and go to Step 2. Else terminate.
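The same algorithm can be written compactly in Python. The sketch below works directly with x rather than the normalized variable w (the mapping w = (x − a)/(b − a) only rescales the same two interior points) and uses the bracket found earlier for f(x) = x² + 54/x; the tolerance value is assumed:

# Golden section search on a bracketed interval (tolerance value is assumed).
GOLDEN = 0.618

def f(x):
    return x ** 2 + 54.0 / x

def golden_section(func, a, b, eps=1e-3):
    x1 = a + (1.0 - GOLDEN) * (b - a)
    x2 = a + GOLDEN * (b - a)
    f1, f2 = func(x1), func(x2)
    while (b - a) > eps:
        if f1 < f2:                     # minimum cannot lie in (x2, b)
            b, x2, f2 = x2, x1, f1
            x1 = a + (1.0 - GOLDEN) * (b - a)
            f1 = func(x1)
        else:                           # minimum cannot lie in (a, x1)
            a, x1, f1 = x1, x2, f2
            x2 = a + GOLDEN * (b - a)
            f2 = func(x2)
    return (a + b) / 2.0

print(golden_section(f, 2.1, 8.1))      # approaches the true minimum at x = 3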
4.4 Gradient-based methods. Despite the difficulty of finding the derivatives, these methods are popular because of their effectiveness. (a) Newton-Raphson method
Considering the Taylor series expansion of f', successive iterates are computed as x(n+1) = x(n) − f'(x(n)) / f''(x(n)).
Algorithm:
1. Choose an initial guess x(1) and a small value ε. Set k = 1. Compute f'(x(1)).
2. Compute f''(x(k)).
3. Calculate x(k+1) = x(k) − f'(x(k)) / f''(x(k)). Compute f'(x(k+1)).
4. If |f'(x(k+1))| < ε, terminate. Else set k = k + 1 and go to Step 2.
Convergence will depend on initial guess value and nature of the objective function.
Example 8: Minimize f(x) = x² + 54/x using the Newton-Raphson method.
1. Choose an initial guess x(1) = 1 and a small value ε = 10⁻³. Set k = 1.
The derivatives are computed numerically:
f'(x(n)) = [f(x(n) + Δx(n)) − f(x(n) − Δx(n))] / (2 Δx(n)),
f''(x(n)) = [f(x(n) + Δx(n)) − 2 f(x(n)) + f(x(n) − Δx(n))] / (Δx(n))².
At x(1) = 1, this gives f'(x(1)) = −52.005 and f''(x(1)) = 110.011.
The subsequent iterations proceed as follows:
k     x(k)      f'(x(k))     f''(x(k))
1     1.000     −52.005      110.011
2     1.473     −21.944       35.796
3     2.086      −8.239       13.899
4     2.679      −2.167
The iterations are continued until |f'(x(k))| < ε.
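The Newton-Raphson iterations of Example 8 can be reproduced with a short routine that estimates the derivatives by central differences, as in the example; the step size dx and the iteration limit below are assumed values:

# Newton-Raphson with numerically computed derivatives, as in Example 8
# (the step size dx and the iteration limit are assumed values).
def f(x):
    return x ** 2 + 54.0 / x

def newton_raphson(func, x0, eps=1e-3, dx=1e-2, max_iter=50):
    x = x0
    for _ in range(max_iter):
        f1 = (func(x + dx) - func(x - dx)) / (2.0 * dx)                # f'(x)
        f2 = (func(x + dx) - 2.0 * func(x) + func(x - dx)) / dx ** 2   # f''(x)
        if abs(f1) < eps:
            break
        x = x - f1 / f2
    return x

print(newton_raphson(f, 1.0))    # converges to approximately x = 3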