Modified Differential Evolution (MDE) For Optimization of Non-Linear Chemical Processes


Computers and Chemical Engineering 30 (2006) 989–1002

Modified differential evolution (MDE) for optimization of non-linear chemical processes
B.V. Babu ∗ , Rakesh Angira 1
Department of Chemical Engineering, Birla Institute of Technology & Science, Pilani 333 031, India
Received 13 May 2005; received in revised form 9 November 2005; accepted 27 December 2005
Available online 15 March 2006

Abstract
In recent years, evolutionary algorithms (EAs) have been gaining popularity for finding the optimal solution of non-linear multimodal problems encountered in many engineering disciplines. Differential evolution (DE), one of the evolutionary algorithms, is a novel optimization method capable of handling non-differentiable, non-linear and multimodal objective functions. Previous studies have shown that differential evolution is an efficient, effective and robust evolutionary optimization method. However, DE still requires a large computational time for optimizing computationally expensive objective functions, and an attempt to speed up DE is therefore considered necessary. This paper introduces a modification to the original DE that enhances the convergence rate without compromising solution quality. Our modified differential evolution (MDE) algorithm utilizes only one set of population, as against two sets in the original DE, at any given point of time in a generation. Such an improvement reduces the memory and computational effort. The proposed MDE is applied to benchmark test functions and five non-linear chemical engineering problems. Results obtained are compared with those obtained using DE by considering the convergence history (CPU time and the number of runs converged to the global optimum) and established statistical techniques that take the variability of the results into account, such as the t-test. As compared to DE, MDE is found to perform better in locating the global optimal solution for all the problems considered.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Optimization; Chemical processes; Systems engineering; Design; Evolutionary algorithm; Non-linear programming problems; Differential evolution;
Modified differential evolution; Dynamic optimization

∗ Corresponding author. Tel.: +91 1596 245073x205/224 (work)/+91 1596 244977 (home); fax: +91 1596 244183. E-mail addresses: [email protected] (B.V. Babu), [email protected] (R. Angira). URLs: http://discovery.bits-pilani.ac.in/discipline/chemical/bvb/, http://discovery.bits-pilani.ac.in/discipline/chemical/rangira/.
1 Tel.: +91 1596 245073x205/216 (work)/+91 1596 245665 (home).
doi:10.1016/j.compchemeng.2005.12.020

1. Introduction

Many engineering optimization problems contain multiple optimal solutions, among which one or more may be the absolute minimum or maximum solutions. These absolute optimum solutions are known as global optimal solutions and other optimum solutions are known as local optimal solutions. Ideally, we are interested in the global optimal solutions because they correspond to the absolute optimum objective function value. Most of the traditional optimization algorithms based on gradient methods have the possibility of getting trapped at a local optimum depending upon the degree of non-linearity and initial guess. Unfortunately, none of the traditional algorithms are guaranteed to find the global optimal solution, but population based search algorithms are found to have a better global perspective than the traditional methods (Onwubolu & Babu, 2004). In the recent past, non-traditional search and optimization techniques based on natural phenomena (evolutionary computation) such as genetic algorithms (GA) (Holland, 1975; Goldberg, 1989), evolution strategies (ESs) (Schwefel, 1981), simulated annealing (SA) (Kirkpatrick et al., 1983), and differential evolution (DE) (Price & Storn, 1997), to name a few, have been developed to overcome these problems. Among their advantages are: (1) they do not require the objective function to be continuous and/or differentiable, (2) they do not require extensive problem formulation (in the case of traditional methods such as integer programming, geometric programming, branch and bound methods, etc., a special mathematical formulation is required for solving a problem), (3) they are not sensitive to the starting point, (4) they usually do not get stuck in so-called local optima, and (5) they are
more likely to find out a function’s true global optimum. These advantages enhance their application to various fields. They have been successfully applied in many engineering design problems (Androulakis & Venkatasubramanian, 1991; Angira & Babu, 2003; Babu, 2004; Babu & Angira, 2002a,b; Babu, Pallavi, & Syed Mubeen, 2005; Babu & Sastry, 1999; Chiou, Chang, & Su, 2004; Chiou & Wang, 1999; Deb, 1996, 2001; Hendtlass, 2001; Lee, Han, & Chang, 1999; Lu & Wang, 2001; Storn, 1995, etc.), to name a few. Recently, Onwubolu and Babu (2004) compiled new techniques and their applications to various disciplines of engineering and management.

Previous studies have shown that DE is an efficient, effective and robust evolutionary optimization method. Still, DE takes large computational time for optimizing the computationally expensive objective functions. And therefore, an attempt to speed up DE is considered necessary. In this paper, modified differential evolution (MDE), an evolutionary optimization technique, is proposed that utilizes only one set of population as against two sets in original DE at any given point of time in a generation. Further, it is applied to nine benchmark test functions and five non-linear chemical processes. Also the performance of MDE is compared with that of differential evolution (DE). The comparison is made by considering the convergence history (CPU time and the number of runs converged to global optimum) and the established statistical techniques, taking into account the variability in the results, such as t-test.

Nomenclature

Ci          frequency factors
CPU-timen   time taken by CPU per execution (average of ‘n’ executions)
CR          crossover constant
df          fractional difference
DE          differential evolution
Ei          activation energies (cal/mol)
f           objective function
fbest       best value of objective function in all ‘n’ experiments using MDE and DE
F           scaling factor
HEND        heat exchanger network design problem
lower(xi)   lower bound of the ith dimension
MDE         modified differential evolution
NFEn        number of function evaluations (average of ‘n’ executions/experiments)
NP          population size
NRCn        number of runs converged to global optima out of ‘n’ executions
randi[0,1]  random number for ith dimension
R           gas constant (1.9872 cal/mol/K)
RND         reactor network design problem
upper(xi)   upper bound of the ith dimension
xi^j        jth individual of dimension (i = 1, 2, . . ., D) in population P
Xi,j,G      jth individual with ith dimension in Gth generation

2. Differential evolution (DE) in brief

Differential evolution (DE), a recent optimization technique, is an exceptionally simple and easy to use evolution strategy, which is significantly faster and robust at numerical optimization and is more likely to find a function’s true global optimum (Price & Storn, 1997). Simple GA uses a binary coding for representing problem parameters whereas DE uses real coding of floating point numbers. Among DE’s advantages are its simple structure, ease of use, speed and robustness. It can be used for optimizing functions with real variables and many local optima. Original DE dealt with a single strategy (Price & Storn, 1997). Later on, 10 different strategies have been suggested by Price and Storn (2005). A strategy that works out to be the best for a given problem may not work well when applied to a different problem. The strategy to be adopted for each problem is to be determined separately by trial and error. The details of the DE algorithm and pseudo code are available in the literature (Babu & Sastry, 1999; Price & Storn, 2005).
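To make the strategy concrete, a minimal sketch of one generation of the classical DE/rand/1/bin scheme described above is given below (illustrative Python, not the authors' C++ implementation; the function and variable names are ours). Note that the trial vectors are collected in a separate, second array, so improved individuals only become available in the next generation.

import random

def de_generation(pop, costs, f, F=0.8, CR=0.5):
    """One generation of classical DE/rand/1/bin with two population arrays."""
    NP, D = len(pop), len(pop[0])
    new_pop, new_costs = [], []
    for j in range(NP):
        # mutation: three mutually distinct individuals, all different from j
        r1, r2, r3 = random.sample([i for i in range(NP) if i != j], 3)
        i_rand = random.randrange(D)
        # binomial crossover between the target vector and the mutant vector
        trial = list(pop[j])
        for i in range(D):
            if random.random() < CR or i == i_rand:
                trial[i] = pop[r3][i] + F * (pop[r1][i] - pop[r2][i])
        c = f(trial)
        # selection: the better of target and trial goes into the NEW array
        if c <= costs[j]:
            new_pop.append(trial)
            new_costs.append(c)
        else:
            new_pop.append(list(pop[j]))
            new_costs.append(costs[j])
    return new_pop, new_costs  # the old array is replaced only after the whole generation

This two-array behaviour is exactly what the modification described in Section 4 changes.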
3. Improvements on DE

When using any population based search algorithm in general and DE in particular to optimize a function, an acceptable trade-off between convergence rate (with reference to locating the optimum) and robustness (with reference to not missing the global optima) must generally be determined. Convergence rate implies a fast convergence although it may be to a local optimum. On the other hand, robustness guarantees a high probability of obtaining the global optimum. A few attempts have already been made to achieve this trade-off (Chiou & Wang, 1999; Chiou et al., 2004; Hendtlass, 2001; Lee et al., 1999; Tasoulis, Pavlidis, Plagianakos, & Vrahatis, 2004; Wang & Chiou, 1997). Wang and Chiou (1997) suggested adjustment of the scaling factor F of the mutation operator and the number of individuals NP in DE. If NP is increased while simultaneously slightly lowering its F value, DE becomes increasingly robust. However, by doing so, much computation time should be expended to evaluate the objective function. This fact is particularly relevant when using DE to solve optimal control problems due to the large amount of CPU time required for solving the differential equations. Chiou and Wang (1999) embedded an accelerated phase and a migration phase into the original algorithm of DE. These two phases are used to improve the convergence speed without decreasing the diversity among individuals. Also, several alternative strategies are compared. Lee et al. (1999) introduced a local search operation in DE to enhance the convergence speed. An algorithm that is a combination of the particle swarm and differential evolution algorithms is introduced by Hendtlass (2001). It was shown that the combined algorithm outperforms both of the component algorithms under most conditions, in both absolute and computational load weighted terms. Chiou et al. (2004) proposed an ant direction hybrid differential evolution (ADHDE) that utilizes the concept of ant colony search to search the proper mutation
operator to accelerate searching out the global solution. They


found the performance of the ADHDE method better than the
other methods. Tasoulis et al. (2004) explored how differential
evolution can be parallelized in a virtual parallel environment so
as to improve both the speed and the performance of the method.
In this paper an attempt has been made to increase the conver-
gence speed of DE without compromising with the robustness
(possibility of obtaining the global optimum). A modified dif-
ferential evolution (MDE) is proposed in the present work to
achieve this trade-off and its performance is compared with that
of original DE.

4. Modified differential evolution (MDE)


The principle of modified DE is same as DE. The major difference between DE and MDE is that MDE maintains only one array (Figs. 1 and 2). The array is updated as and when a better solution is found. Also, these newly found better solutions can take part in mutation and crossover operation in the current generation itself as opposed to DE (where another array is maintained and these better solutions take part in mutation and crossover operations in the next generation). Updating the single array continuously enhances the convergence speed leading to less function evaluations as compared to DE. However, DE maintains two arrays consuming extra memory and CPU-time (more function evaluations). The schematic diagram of the MDE algorithm is shown in Fig. 2.

Fig. 2 clearly shows the use of a single array in MDE instead of the double array in DE. This modification enables the algorithm to get a better trade-off between the convergence rate and the robustness. Thus it is possible to increase the convergence rate of the differential evolution algorithm and thereby obtain an acceptable solution with a lower number of objective function evaluations. Such an improvement can be advantageous in many real-world problems where the evaluation of a candidate solution is a computationally expensive operation and consequently finding the global optimum or a good sub-optimal solution with the original differential evolution algorithm is too time-consuming, or even impossible within the time available. The pseudo code of the proposed MDE is given below:

Let P be a population of size NP, xj the jth individual of dimension D in population P, and CR the crossover probability.

input D, NP ≥ 4; F ∈ (0, 1+); CR ∈ [0, 1], and initial bounds: lower(xi), upper(xi), i = 1, . . ., D
initialize P = {x1, . . ., xNP} as
    for each individual j ∈ P
        xi^j = lower(xi) + randi[0, 1] × (upper(xi) − lower(xi)); i = 1, . . ., D
    end for each
evaluate P
while the stopping criterion is not satisfied do
    forall j ≤ NP
        randomly select r1, r2, r3 ∈ (1, . . ., NP), j ≠ r1 ≠ r2 ≠ r3
        randomly select irand ∈ (1, . . ., D)
        forall i ≤ D
            x′i = xi^r3 + F × (xi^r1 − xi^r2)   if (random[0, 1) < CR ∨ i = irand)
            x′i = xi^j                          otherwise
        end forall
        if f(x′) ≤ f(xj) then xj = x′; f(xj) = f(x′)
    end forall
end while
print the results

Fig. 1. Schematic of DE.
Fig. 2. Schematic of MDE.
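A minimal sketch of the single-array idea in the pseudo code above is given below (illustrative Python rather than the authors' C++ code; the DE/rand/1/bin strategy, the parameter defaults and the bound-repair rule are assumptions taken from Sections 7 and 8).

import random

def mde_minimize(f, lower, upper, NP=None, F=0.8, CR=0.5, max_gen=1000):
    """Modified DE: one population array is updated in place, so improved
    individuals can already be reused for mutation and crossover within the
    same generation."""
    D = len(lower)
    NP = NP or 10 * D
    pop = [[lo + random.random() * (hi - lo) for lo, hi in zip(lower, upper)]
           for _ in range(NP)]
    cost = [f(x) for x in pop]
    for _ in range(max_gen):
        for j in range(NP):
            r1, r2, r3 = random.sample([i for i in range(NP) if i != j], 3)
            i_rand = random.randrange(D)
            trial = list(pop[j])
            for i in range(D):
                if random.random() < CR or i == i_rand:
                    v = pop[r3][i] + F * (pop[r1][i] - pop[r2][i])
                    if not (lower[i] <= v <= upper[i]):
                        # bound repair by 'relocating to interior' (Section 7)
                        v = lower[i] + random.random() * (upper[i] - lower[i])
                    trial[i] = v
            c = f(trial)
            if c <= cost[j]:
                pop[j], cost[j] = trial, c  # in-place update of the single array
        # a termination test such as |f_max - f_min| < tol can be added here
    best = min(range(NP), key=cost.__getitem__)
    return pop[best], cost[best]

Compared with the two-array loop sketched in Section 2, the only structural change is that pop[j] is overwritten as soon as a better trial vector is found, which is what reduces the memory requirement and the number of function evaluations reported in Section 8.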

The modified differential evolution algorithm is demonstrated by applying it to several benchmark test functions, and is further examined with selected chemical engineering problems. The obtained numerical simulation results provide empirical evidence on the efficiency and effectiveness of the proposed modified differential evolution algorithm.

5. Bench mark test functions

The reliability and efficiency of DE and MDE are tested and compared for several multimodal functions, which were used in earlier literature (Teh & Rangaiah, 2003). The selected benchmark functions are briefly described below and details of the global minimum are summarized in Table 1.

Table 1
Details of global minimum

Function   D             Global minimum                                     Remarks
GP2        2             3 at x = {0, −1}                                   Four local minima
ES2        2             −1 at x = {π, π}                                   Several local minima
H3         3             −3.86278 at x = {0.114624, 0.555649, 0.852547}     Four local minima
RD         2, 5, and 10  0 at x = {1, . . ., 1}                             Several local minima
ZD         2, 5, and 10  0 at x = {0, . . ., 0}                             Several local minima

5.1. Test function-1

The objective is to minimize the function (GP2).

f = [1 + A1A2][30 + A3A4]

where

A1 = (x1 + x2 + 1)²;
A2 = (19 − 14x1 + 3x1² − 14x2 + 6x1x2 + 3x2²);
A3 = (2x1 − 3x2)²;
A4 = (18 − 32x1 + 12x1² + 48x2 − 36x1x2 + 27x2²);

Subject to −2 ≤ x1, x2 ≤ 2.

5.2. Test function-2

The objective is to minimize the function (ES2).

f = −cos(x1) cos(x2) exp[−((x1 − π)² + (x2 − π)²)]

Subject to −100 ≤ x1, x2 ≤ 100.

5.3. Test function-3

The objective is to minimize the function (H3).

f = −Σ_{i=1}^{4} ci exp[−Σ_{j=1}^{3} aij (xj − pij)²]

Subject to 0 ≤ xj ≤ 1.

5.4. Test function-4, 5, and 6

The objective is to minimize the function (RD).

f = Σ_{i=1}^{D−1} [100(xi² − xi+1)² + (xi − 1)²]

Subject to −5 ≤ xi ≤ 10; i = 1, . . ., D − 1.

5.5. Test function-7, 8, and 9

The objective is to minimize the function (ZD).

f = Σ_{i=1}^{D} xi² + (Σ_{i=1}^{D} 0.5 i xi)² + (Σ_{i=1}^{D} 0.5 i xi)⁴

Subject to −5 ≤ xi ≤ 10, i = 1, . . ., D.
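For reference, the two scalable test families above can be coded directly from their definitions; the sketch below (the function names are ours, not from the paper) can be passed as the objective f to a DE/MDE routine such as the one sketched in Section 4.

def rd_function(x):
    """RD: sum over i = 1..D-1 of 100*(x_i^2 - x_{i+1})^2 + (x_i - 1)^2."""
    return sum(100.0 * (x[i] ** 2 - x[i + 1]) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def zd_function(x):
    """ZD: sum x_i^2 + (sum 0.5*i*x_i)^2 + (sum 0.5*i*x_i)^4, i = 1..D."""
    s1 = sum(xi ** 2 for xi in x)
    s2 = sum(0.5 * (i + 1) * xi for i, xi in enumerate(x))
    return s1 + s2 ** 2 + s2 ** 4

assert rd_function([1.0] * 5) == 0.0  # global minimum at x = (1, ..., 1), f = 0
assert zd_function([0.0] * 5) == 0.0  # global minimum at x = (0, ..., 0), f = 0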
6. Optimization of selected non-linear chemical processes

The optimization of non-linear constraint problems is relevant to chemical engineering practice (Salcedo, 1992; Floudas, 1995). Non-linearities are introduced by process equipment design relations, by equilibrium relations and by combined heat and mass balances. There are many chemical processes which are highly non-linear and complex with reference to optimal operating conditions with many equality and inequality constraints. In this paper the following processes are considered for applying DE and MDE: (1) optimal operation of alkylation unit, (2) optimal design of heat exchanger network, (3) reactor network design, (4) optimization of an isothermal CSTR, and (5) dynamic optimization of a batch reactor.

6.1. Optimal operation of alkylation unit

Alkylation process is common in the petroleum industry. A simplified process flow diagram of an alkylation process is shown in Fig. 3.

Fig. 3. Simplified alkylation process flow sheet.



The process model was described and solved by Sauer, Coville, and Burwick (1964) using successive linear programming. The process model seeks to determine the optimum set of operating conditions for the process, based on a mathematical model, which allowed maximization of profit. Bracken and McCormick (1968) formulated the problem as a direct non-linear programming model with mixed non-linear inequality and equality constraints and a non-linear profit function to be maximized. They used the Sequential Unconstrained Minimization Technique (SUMT) for solving the same. Later, Dembo (1976) transformed the NLP problem with 10 variables, which Bracken and McCormick (1968) derived, into a problem with seven variables. All equality constraints are eliminated and the problem has been formulated as a signomial optimization problem. This problem involves seven variables subject to 12 non-linear and 2 linear inequality constraints. Edgar and Himmelblau (2001) used sequential quadratic programming to solve the problem as formulated by Bracken and McCormick (1968). Maranas and Floudas (1997) used generalized geometric programming to solve the seven variables problem as formulated by Dembo (1976). Adjiman, Dallwig, Floudas, and Neumaier (1998) used the αBB algorithm (for general twice-differentiable constrained NLPs) for solving this problem. In the present study, the problem formulation is same as that of Maranas and Floudas (1997) and Adjiman et al. (1998). The problem is briefly discussed below.

As shown in Fig. 3, an olefin feed (100% butane), a pure isobutane recycle and a 100% isobutane make-up stream are introduced in a reactor together with an acid catalyst. The reactor product stream is then passed through a fractionator where the isobutane and the alkylate product are separated. The spent acid is also removed from the reactor. The variables are defined as shown in Table 2 along with the upper and lower bounds on each variable. The bounds represent economic, physical and performance constraints.

Table 2
Variables and their bounds

Symbol   Variable                                        Lower bound   Upper bound
x1       Olefin feed rate (barrels/day)                  1500          2000
x2       Acid addition rate (thousands of pounds/day)    1             120
x3       Alkylate yield (barrels/day)                    3000          3500
x4       Acid strength (wt.%)                            85            93
x5       Motor octane no.                                90            95
x6       External isobutane-to-olefin ratio              3             12
x7       F-4 performance no.                             145           162

6.1.1. Profit function

The objective is to improve the octane number of some olefin feed by reacting it with isobutane in the presence of acid. The product of the reaction is distilled and the un-reacted isobutane is recycled back to the reactor. The objective function was defined in terms of alkylate product, or output value minus feed and recycle costs. Operating costs were not reflected in the function. The total profit ($ per day), to be maximized (Adjiman et al., 1998), is given as follows:

Max profit = 1.715x1 + 0.035x1x6 + 4.0565x3 + 10.0x2 − 0.063x3x5

Subject to

0.0059553571x6²x1 + 0.88392857x3 − 0.1175625x6x1 − x1 ≤ 0,
1.1088x1 + 0.1303533x1x6 − 0.0066033x1x6² − x3 ≤ 0,
6.66173269x6² + 172.39878x5 − 56.596669x4 − 191.20592x6 − 10000 ≤ 0,
1.08702x6 + 0.32175x4 − 0.03762x6² − x5 + 56.85075 ≤ 0,
0.006198x7x4x3 + 2462.3121x2 − 25.125634x2x4 − x3x4 ≤ 0,
161.18996x3x4 + 5000.0x2x4 − 489510.0x2 − x3x4x7 ≤ 0,
0.33x7 − x5 + 44.333333 ≤ 0,
0.022556x5 − 0.007595x7 − 1.0 ≤ 0,
0.00061x3 − 0.0005x1 − 1.0 ≤ 0,
0.819672x1 − x3 + 0.819672 ≤ 0,
24500.0x2 − 250.0x2x4 − x3x4 ≤ 0,
1020.4082x4x2 + 1.2244898x3x4 − 100000x2 ≤ 0,
6.25x1x6 + 6.25x1 − 7.625x3 − 100000 ≤ 0,
1.22x3 − x6x1 − x1 + 1.0 ≤ 0.

The maximum profit as reported in Adjiman et al. (1998) is $1772.77 per day, and the optimal variable values are x1 = 1698.18, x2 = 53.66, x3 = 3031.3, x4 = 90.11, x5 = 95.0, x6 = 10.50, x7 = 153.53.

6.2. Heat exchanger network design (HEND)

This problem addresses the design of a heat exchanger network as shown in Fig. 4. It has been taken from Floudas and Pardalos (1990). Also, it has been solved by Adjiman et al. (1998) using the αBB algorithm. One cold stream must be heated from 100 °F (37.78 °C) to 500 °F (260 °C) using three hot streams with different inlet temperatures. The goal is to minimize the overall heat exchange area.

Fig. 4. Heat exchanger network design problem.

Min f = x1 + x2 + x3

Subject to

0.0025(x4 + x6) − 1 = 0,
0.0025(−x4 + x5 + x7) − 1 = 0,
0.01(−x5 + x8) − 1 = 0,
100x1 − x1x6 + 833.33252x4 − 83333.333 ≤ 0,
x2x4 − x2x7 − 1250x4 + 1250x5 ≤ 0,
x3x5 − x3x8 − 2500x5 + 1250000 ≤ 0,
100 ≤ x1 ≤ 10000, 1000 ≤ x2, x3 ≤ 10000, 10 ≤ x4, x5, x6, x7, x8 ≤ 1000.

The global optimum is: (x1, x2, x3, x4, x5, x6, x7, x8; f) = (579.19, 1360.13, 5109.92, 182.01, 295.60, 217.9, 286.40, 395.60; 7049.25).

The above problem can be reformulated by eliminating the equality constraints as given below:

Min f = x1 + x2 + x3

Subject to

100x1 − x1(400 − x4) + 833.33252x4 − 83333.333 ≤ 0,
x2x4 − x2(400 − x5 + x4) − 1250x4 + 1250x5 ≤ 0,
x3x5 − x3(100 + x5) − 2500x5 + 1250000 ≤ 0,
100 ≤ x1 ≤ 10000, 1000 ≤ x2, x3 ≤ 10000, 10 ≤ x4, x5 ≤ 1000;

Global optimum is same after reformulation.

6.3. Reactor network design (RND)

This example, taken from Ryoo and Sahinidis (1995), is a reactor network design problem, describing the system shown in Fig. 5. It involves the design of a sequence of two CSTR reactors where the consecutive reaction A → B → C takes place. The goal is to maximize the concentration of product B (x4 = CB2) in the exit stream. This problem is known to have caused difficulties for other global optimization methods.

Fig. 5. Reactor network design problem.

Min f = −x4

Subject to

x1 + k1x1x5 = 1,
x2 − x1 + k2x2x6 = 0,
x3 + x1 + k3x3x5 = 1,
x4 − x3 + x2 − x1 + k4x4x6 = 0,
x5^0.5 + x6^0.5 ≤ 4,
(0, 0, 0, 0, 10^−5, 10^−5) ≤ (x1, x2, x3, x4, x5, x6) ≤ (1, 1, 1, 1, 16, 16),

where k1 = 0.09755988, k2 = 0.99k1, k3 = 0.0391908, and k4 = 0.9k3.

The global optimum is (x1, x2, x3, x4, x5, x6; f) = (0.771462, 0.516997, 0.204234, 0.388812, 3.036504, 5.096052; −0.388812).

Ryoo and Sahinidis (1995) solved globally the same problem by convex lower bounding the bilinear terms for a different set of reaction constants. A number of variations of the main algorithm were considered and the best one yielded a CPU-time of 23 s on SPARC2. Maranas and Floudas (1997) used a geometric programming approach for the above problem and obtained the global optimum after 299 iterations and 20 s of CPU-time.

This example constitutes a very difficult test problem, as it possesses a local minimum with an objective function value that is very close to that of the global solution. The local solutions are with f = −0.37461 and f = −0.38808. Interestingly enough, the two local solutions utilize only one of the two reactors whereas the global solution makes use of both reactors. This problem can be reformulated by eliminating the equality constraints as follows:

Max f = [k2x6(1 + k3x5) + k1x5(1 + k2x6)] / [(1 + k1x5)(1 + k2x6)(1 + k3x5)(1 + k4x6)]

Subject to

x5^0.5 + x6^0.5 ≤ 4,
(10^−5, 10^−5) ≤ (x5, x6) ≤ (16, 16).

Global optimum is same after reformulation.
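As a quick sanity check (ours, not from the paper), the reformulated objective can be evaluated at the reported optimum; with the constants above it should reproduce the reported value f = 0.388811 to the printed precision.

import math

k1 = 0.09755988
k2 = 0.99 * k1
k3 = 0.0391908
k4 = 0.9 * k3

def rnd_objective(x5, x6):
    """Reformulated RND objective, f = x4 = CB2."""
    num = k2 * x6 * (1.0 + k3 * x5) + k1 * x5 * (1.0 + k2 * x6)
    den = (1.0 + k1 * x5) * (1.0 + k2 * x6) * (1.0 + k3 * x5) * (1.0 + k4 * x6)
    return num / den

f_opt = rnd_objective(3.036504, 5.096052)
print(round(f_opt, 6))                             # approx. 0.388811
print(math.sqrt(3.036504) + math.sqrt(5.096052))   # approx. 4.0: the constraint is active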
6.4. Optimization of an isothermal continuous stirred tank reactor (CSTR)

For a reaction scheme shown below, a problem of finding the best isothermal yield of a product C in a CSTR was chosen. This problem is taken from Rosenbrock and Storey (1966). Later, Umeda and Ichikawa (1971) solved this problem using modified complex method. In this reaction scheme, A is the feed material, B is a transient intermediate, C is the required product, and D is an unwanted side product. The reaction scheme is as follows:

The rate constants are given by the equation

ki = Ci exp[−(Ei/R)(1/T − 1/658)]

where Ci is the frequency factor, Ei the activation energy (cal/mol), R the gas constant (1.9872 cal/mol/K), and T is the absolute temperature in K. The values of Ci and Ei for the five rate constants are given in Table 3.

Table 3
Values of Ci and Ei

i     1       2       3       4       5
Ci    1.02    0.93    0.386   3.28    0.084
Ei    16000   14000   15000   10000   15000

Mathematical formulation: The kinetic equations for the reaction scheme are given as

dx1/dt = −(k1 + k2 + k3)x1;
dx2/dt = k1x1 − k4x2;
dx3/dt = k4x2 − k5x3

with the initial conditions x1(0) = 1, x2(0) = 0 and x3(0) = 0, where x1, x2, and x3 are the concentrations of material A, B, and C, respectively. For constant temperature T, the kinetic equations stated above can be integrated analytically to give the following expression for C (f = x3):

x3 = f = [k1k4 e^(−k5t) / (k − k4)] [(1 − e^(−(k4−k5)t))/(k4 − k5) − (1 − e^(−(k−k5)t))/(k − k5)]

where k = k1 + k2 + k3.

Hence the maximizing function is given by

Max x3 = f

Subject to the following constraints,

0 ≤ t ≤ 10; reaction time (s)
200 ≤ T ≤ 2000; reaction temperature (K)

In this problem, the best isothermal yield of C (f) is found by maximizing this expression with respect to the contact time (t) and the temperature (T). The results reported in literature are shown in Table 4.

Table 4
Results reported in literature

Methods                                            Yield of C (f = x3)   Temperature (T)   Holding time (t)
Hill climbing (Rosenbrock & Storey, 1966)          0.42308               978.96            0.0781
Modified complex method (Umeda & Ichikawa, 1971)   0.42308               983.3             0.07572
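As a minimal illustration (ours, not the authors' code), the analytic yield expression above can be coded directly and handed to DE/MDE as the objective in the variables (T, t); evaluating it at the optimum reported later in Table 11 (T = 983.2028 K, t = 0.075901 s) should give approximately 0.4231 if the expression is implemented correctly.

import math

C = [1.02, 0.93, 0.386, 3.28, 0.084]     # frequency factors Ci (Table 3)
E = [16000, 14000, 15000, 10000, 15000]  # activation energies Ei (cal/mol)
R = 1.9872                               # gas constant (cal/mol/K)

def rate_constants(T):
    """ki = Ci * exp[-(Ei/R) * (1/T - 1/658)], i = 1..5."""
    return [Ci * math.exp(-(Ei / R) * (1.0 / T - 1.0 / 658.0)) for Ci, Ei in zip(C, E)]

def yield_of_c(T, t):
    """Analytic isothermal yield x3(T, t) of product C."""
    k1, k2, k3, k4, k5 = rate_constants(T)
    k = k1 + k2 + k3
    term1 = (1.0 - math.exp(-(k4 - k5) * t)) / (k4 - k5)
    term2 = (1.0 - math.exp(-(k - k5) * t)) / (k - k5)
    return (k1 * k4 * math.exp(-k5 * t) / (k - k4)) * (term1 - term2)

print(yield_of_c(983.2028, 0.075901))    # expected to be close to 0.4231 (cf. Table 11)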
6.5. Dynamic optimization of a batch reactor

In this problem, we consider the consecutive reaction A →(k1) B →(k2) C in a batch reactor (Ray, 1981). The objective is to obtain the optimal reactor temperature progression, which maximizes the intermediate product B for a fixed batch time. The dynamics are given by the following design equations:

dCA/dt = −k1CA²;
dCB/dt = k1CA² − k2CB

where

k1 = 4000 exp(−2500/T);  k2 = 620000 exp(−5000/T)

and 298 K ≤ T ≤ 398 K, with the initial conditions CA(0) = 1.0, CB(0) = 0.0, and the objective function is f = Maximize CB. We need to find out the optimal temperature profile, which gives maximum intermediate product concentration. This problem has been solved by Renfro, Morshedi, and Osbjornsen (1987) using piecewise constant controls. Dadebo and Mcauley (1995) used dynamic programming for solving this problem and reported results for different number of stages.

In this problem our parameters are the temperatures at different time intervals. With these temperatures we need to find out the optimal final concentration of B (by solving the above model equations along with DE and MDE).
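The decision variables in this problem are the stage temperatures of a piecewise-constant profile. A rough sketch of how a candidate profile can be scored is given below (ours, not the authors' C++/Runge-Kutta implementation); the fixed batch time is not restated in this excerpt, so t_final is only a placeholder parameter, and every stage temperature must respect the 298-398 K bounds.

import math

def final_cb(temps, t_final=1.0, steps_per_stage=1000):
    """Integrate dCA/dt = -k1*CA^2 and dCB/dt = k1*CA^2 - k2*CB with a
    piecewise-constant temperature profile (one value per stage) using
    fourth-order Runge-Kutta, and return the final concentration of B."""
    ca, cb = 1.0, 0.0                                  # CA(0) = 1.0, CB(0) = 0.0
    h = t_final / (len(temps) * steps_per_stage)

    def rhs(ca, cb, T):
        k1 = 4000.0 * math.exp(-2500.0 / T)
        k2 = 620000.0 * math.exp(-5000.0 / T)
        return -k1 * ca * ca, k1 * ca * ca - k2 * cb

    for T in temps:                                    # one stage per control interval
        for _ in range(steps_per_stage):
            a1, b1 = rhs(ca, cb, T)
            a2, b2 = rhs(ca + 0.5 * h * a1, cb + 0.5 * h * b1, T)
            a3, b3 = rhs(ca + 0.5 * h * a2, cb + 0.5 * h * b2, T)
            a4, b4 = rhs(ca + h * a3, cb + h * b3, T)
            ca += h * (a1 + 2 * a2 + 2 * a3 + a4) / 6.0
            cb += h * (b1 + 2 * b2 + 2 * b3 + b4) / 6.0
    return cb

# Example: score a flat 361.4 K profile over 10 intervals (361.4 K is the optimal
# initial temperature reported in Table 12; the full optimal profile is in Fig. 16).
print(final_cb([361.4] * 10))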


7. Constraint handling

In Sections 5 and 6, we have discussed the various test functions and selected chemical engineering problems. Most of the engineering optimization problems are constrained. The difficulty of using EAs in constrained optimization is that the evolutionary operators used to manipulate the individuals of the population often produce solutions which are unfeasible. There are many methods to handle it. The handling of constraints in the present study is discussed below.

Bound violations (whether upper or lower) may occur after the mutation step of DE and MDE. These can be repaired by one of the following methods: (1) if there is a bound violation for a parameter, then assign the upper or lower bound value if the upper or lower bound is violated (we call it ‘relocating to boundary’) and (2) if there is a bound violation for a parameter, then that parameter is again generated randomly between the given lower and upper bound (we call it ‘relocating to interior’) using the following equation:

xi^j = lower(xi) + randi[0, 1] × (upper(xi) − lower(xi));  i = 1, . . ., D

where D is the number of parameters. The above first and second methods (Relocating to Boundary and Relocating to Interior) are abbreviated as RB and RI, respectively.

The penalty function methods are one of the most popular


techniques in EAs to handle constraints. The techniques trans-
form the constrained problem into an unconstrained problem by
penalizing unfeasible solutions. In addition, the penalty function
methods are easy to implement and considered efficient. In the
present study, an absolute value of constraint violation is mul-
tiplied with a high penalty and added/subtracted to objective
function depending upon the type of problem, i.e. minimiza-
tion/maximization. In case of more than one constraint, all such
absolute violations are first multiplied with high penalty and then
added to objective function value (for minimization problem)
or subtracted from objective function value (for maximization
problem).
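As a rough sketch (the function names and the simple data structures are ours, not the authors' implementation), the two bound-repair rules of Section 7 and the penalty approach described above can be written as:

import random

PENALTY = 1e5  # a high penalty of this order is used in the present study (Table 7, footnote a)

def repair_rb(x, lower, upper):
    """Relocating to boundary (RB): clip a violated parameter to the violated bound."""
    return [min(max(xi, lo), hi) for xi, lo, hi in zip(x, lower, upper)]

def repair_ri(x, lower, upper):
    """Relocating to interior (RI): regenerate a violated parameter randomly in [lower, upper]."""
    return [xi if lo <= xi <= hi else lo + random.random() * (hi - lo)
            for xi, lo, hi in zip(x, lower, upper)]

def penalized(f_value, g_values, minimize=True):
    """Penalize absolute violations of inequality constraints written as g(x) <= 0:
    violations are added for minimization and subtracted for maximization."""
    violation = sum(abs(g) for g in g_values if g > 0.0)
    return f_value + PENALTY * violation if minimize else f_value - PENALTY * violation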

8. Results and discussion

In Section 7, we have discussed the methods used for handling


bound violation and constraints in the present study. The follow-
ing subsections discuss the results obtained using DE and MDE
for test functions followed by selected optimization problems
from chemical engineering. Extensive computational compar-
isons have been made for all the chemical engineering problems
considered using standard statistical hypothesis testing methods
such as t-test.
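For instance, the means and standard deviations of the fractional difference df over 100 runs that are reported later for the alkylation problem can be compared with a two-sample t statistic; the sketch below is only illustrative, since the exact t-test variant used by the authors is not specified in this excerpt.

import math

# df statistics over 100 runs for the alkylation problem (Section 8.2.1): MDE vs. DE
m_mde, s_mde, n_mde = 5.66e-10, 4.02e-10, 100
m_de, s_de, n_de = 6.06e-10, 4.85e-10, 100

# Welch's two-sample t statistic
t = (m_mde - m_de) / math.sqrt(s_mde ** 2 / n_mde + s_de ** 2 / n_de)
print(round(t, 2))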

8.1. Bench mark test functions

Table 5 shows the results obtained using DE and MDE. Both


the methods are coded in C++ using BorlandC++ on PC with
Pentium III, 500 MHz/128 MB RAM. The results are compared
on the basis of average CPU-time and success rate in locating the global optimum for the given tolerance, in an overall 10 executions implemented. The termination criterion used is |OFcal − OFAnal| ≤ 1 × 10^−6 (where OFcal is the objective function value at the best point found in each successful run and OFAnal is the known global minimum) for both DE and MDE for comparison purposes. It is clear that MDE takes less CPU-time and hence number of function evaluations. Also, the success rate is 100% for all the test functions using DE and MDE. The strategy used is DE/rand/1/bin and the key parameters used are NP = 10D, CR = 0.5, F = 0.8. All bound violations are repaired using the second method, i.e., relocating to interior, as mentioned earlier.

Fig. 6. Convergence history of GP2 and ES2.

There is a saving in CPU-time in all test functions using MDE except for one test function, named R2. It is evident from Table 5 that the saving is significant (ranges from 5.72 to 27.27%) in ES2, GP2, Z2, H3, and Z5 as compared to R5 and R10 (0.88 and 0.76%, respectively). Fig. 6a and b show the convergence history of a typical experiment using DE and MDE for the test functions named GP2 and ES2. Error is the difference between the average cost of the population in a generation and the cost or objective function value corresponding to the known global optimum.

Error variation (Fig. 6a) indicates that the error is more in case of MDE than DE till the third generation. After that it reduces at a faster rate and becomes much less than that in DE (at the 22nd generation, the error using DE is 11.22 while using MDE it is 0.841). Similarly, as shown in Fig. 6b, the error reduces to 0.0014 after 52 generations (for MDE) while for DE it reduces to 0.00167 only even after 73 generations. This clearly explains the higher speed of MDE to attain the global optimum.

Table 5
Results of DE and MDE for test functions

Function   DE NFE10 (CPU-time (a))   MDE NFE10 (CPU-time (a))   Percentage time saving   SR (%)
ES2        3052 (0.094)              2512 (0.083)               11.7                     100
GP2        1222 (0.020)              1024 (0.016)               20.0                     100
R2         2056 (0.055)              2042 (0.055)               Nil                      100
Z2         716 (0.022)               704 (0.016)                27.27                    100
H3         1704 (0.066)              1563 (0.055)               16.67                    100
R5         50020 (2.033)             49525 (2.015)              0.88                     100
Z5         10370 (0.418)             9525 (0.374)               10.53                    100
R10        417510 (26.962)           417280 (26.758)            0.76                     100
Z10        139530 (8.654)            136530 (8.159)             5.72                     100

(a) CPU-time on PC with Pentium PIII, 500 MHz/128 MB RAM with strategy DE/rand/1/bin.

8.2. Selected non-linear chemical processes

8.2.1. Optimal operation of alkylation unit

Table 6 presents the comparison of results obtained in earlier studies and those obtained using DE in the present study.

Table 6
Comparison of various methods

S. No.   Methods                                   Objective function (f)   CPU-time (s)
1        SUMT (Bracken & McCormick, 1968)          1769                     Not reported
2        NPSOL (Edgar & Himmelblau, 2001)          1768.75                  Not reported
3        GGP in GAMS (Maranas & Floudas, 1997)     1773                     30 (a)
4        αBB algorithm (Adjiman et al., 1998)      1772.77                  13.6 (b)
5        DE (present study)                        1766.36                  5.12 (c)

(a) CPU-time on HP-730 workstation.
(b) HP9000/730, using scaled Gerschgorin theorem method.
(c) CPU-time on PC with Pentium PIII, 500 MHz/128 MB RAM.

Fig. 7. Convergence history of alkylation process problem.

The optimal solution obtained using DE is: x1 = 1698.256922; x2 = 54.274463; x3 = 3031.357313; x4 = 90.190233; x5 = 95.0; x6 = 10.504119; x7 = 153.535355; and the value of the objective function is 1766.36. This solution satisfies all the inequality constraints to six decimal places while the solution reported by Maranas and Floudas (1997) and Adjiman et al. (1998) violates the first, third and sixth constraints. The values of the first, third and sixth constraints are found to be 0.016501, 4.752125 and 1727.867362, respectively instead of 0 or less than 0. Hence the global optimal solution is 1766.36, which satisfies all the constraints to six decimal places (i.e. 0.0000001).

Table 7 shows the results obtained using DE and MDE and their comparison in terms of the number of objective function evaluations, CPU-time and proportion of convergences to the optimum. The termination criterion used is an accuracy value of 1 × 10^−6. NFE100, NRC100 and CPU-time100 in Table 7 represent respectively the mean number of objective function evaluations over all the 100 executions (with different seed values), the percentage of convergences to the global optimum, and the average CPU time per execution (key parameters used are NP = 10D, CR = 0.8, F = 0.5). Both DE and MDE are able to locate the global optimum in all the experiments, as NRC100 is 100%. MDE takes 2.47% less CPU-time than DE in case of relocating to interior, while it is 7.33% in case of relocating to boundary. This is due to the fact that some of the optimum values of variables are lying on the extreme, i.e., either on the upper or on the lower bound. Fig. 7 shows the convergence history of the alkylation problem using DE and MDE. Fig. 7 presents the averaged results of 100 executions/experiments. After the 160th generation, the value of the objective function is 1734.814 using MDE and 1719.956 using DE. This indicates that MDE is faster than DE and takes less CPU-time to locate the global optimum solution (Table 7).

The results of the 100 different experiments are plotted in Fig. 8 as fractional differences of the optimal objective function value from its best-found optimal value (fbest, of all 100 experiments using DE and MDE algorithms) versus experiment number. The fractional difference (df) is given by df = 1 − f/fbest. Thus, in the figure, a point closer to the abscissa denotes a more accurate result. The best optimal value of 1766.365179 was obtained from the 100 experiments. The overall accuracy of the 100 values of df is given by the mean of 5.66 × 10^−10 and 6.06 × 10^−10 respectively for MDE and DE. In both DE and MDE the value of this mean is quite low. The precision of the 100 values of df is expressed in terms of the low standard deviation of 4.02 × 10^−10 and 4.85 × 10^−10 respectively for MDE and DE.

Table 7
Results of DE and MDE for alkylation problem

Methods (a)   NFE100    NRC100 (%)   CPU-time100 (s) (b)
DE (RI)       114895    100          5.81
MDE (RI)      108103    100          5.67
DE (RB)       100126    100          5.12
MDE (RB)      92287     100          4.77

(a) Penalty used is 10^5.
(b) CPU-time on PC with Pentium PIII, 500 MHz/128 MB RAM.

Fig. 8. Fractional difference variation for alkylation process problem.

Table 8
Results of DE and its comparison with αBB algorithm

Problem   DE CPU-time100/NFE100/NRC100 (RB)   DE CPU-time100/NFE100/NRC100 (RI)   Key parameters of DE (NP/CR/F)   αBB algorithm (CPU-time) (s)
HEND      1.513 (a)/38824/89                  1.477 (a)/37810/100                 50/0.8/0.5                       54.4 (b)
RND       0.049 (a)/1605/10                   0.041 (a)/1468/57                   20/0.8/0.5                       5.5 (b)

(a) CPU-time obtained using Pentium-III (500 MHz) with convergence tolerance of 0.00001 (present study).
(b) CPU-time obtained using HP9000/730 (66 MHz) with convergence tolerance of 0.001 (Adjiman et al., 1998).

Table 9
Results of MDE and its comparison with DE

Problem   (MDE) CPU-time100/NFE100/NRC100 (RB)   (MDE) CPU-time100/NFE100/NRC100 (RI)   (DE) CPU-time100/NFE100/NRC100 (RB)   (DE) CPU-time100/NFE100/NRC100 (RI)
HEND      1.292 (a)/33146/88                     1.236 (a)/31877/100                    1.513 (a)/38824/89                    1.477 (a)/37810/100
RND       0.041 (a)/1289/08                      0.034 (a)/1253/44                      0.049 (a)/1605/10                     0.041 (a)/1468/57

(a) CPU-time obtained using Pentium-III (500 MHz).

8.2.2. Heat exchanger network design (HEND)

Table 8 shows the results obtained using DE with variable violations relocating to boundary/relocating to interior, and the comparison of DE performance with that of the αBB algorithm. The stopping criteria adopted for DE is to terminate the search process when one of the following conditions is satisfied: (1) the maximum number of generations is reached (assumed 5000 generations), (2) |fmax^k − fmin^k| < 10^−5, where f is the value of the objective function for the kth generation. In Tables 8 and 9, NFE100 and NRC100 represent respectively the mean number of objective function evaluations and the percentage of runs converged to the global optimum in all the 100 executions (with different seed values).

In this problem, NFE100 in case of the relocating to interior method is 2.68% less than NFE100 with the relocating to boundary method (Table 8). Also, the NRC100 is 100 and 89% respectively for the relocating to interior and relocating to boundary methods. Convergence tolerance in the present case is 10^−5 as compared to 10^−3 in Adjiman et al. (1998).

The time taken by DE is much less than that of the αBB algorithm. Of course the CPU-times cannot be compared directly because different computers are used. However, a comparison can be made after considering a factor of 10 (high enough), i.e. had the same problem been solved on HP9000/730 (66 MHz) using DE it might have taken 10 times more CPU-time than on the Pentium-III, 500 MHz. As we know the processor speed of the HP9000/730 machine is 66 MHz while for the P-III used in the present study it is 500 MHz, therefore a factor of 10 seems to be appropriate for comparison purposes. Even then the CPU-time using DE is 73.44% less for the HEND problem and 92% less for the RND problem than that in the αBB algorithm respectively (Table 8).

Table 9 shows the results obtained using MDE (relocating to boundary/relocating to interior) and its comparison to DE. The parameters used are CR = 0.8, F = 0.5 and NP = 10D. The performance of MDE, as is evident from the results presented in Table 9, is better than that of DE (a saving of 16.32% CPU-time). The reliability of DE and MDE is nearly the same as indicated by NRC100, which is the same in both the cases (100% for relocating to interior and 88% for the relocating to boundary method). The convergence history of the HEND problem is shown in Fig. 9. It represents the averaged results of 100 experiments. The error obtained after 200 generations is 0.986 for MDE while it is 2.344 for DE. MDE is able to locate the global optimal solution for the test problem considered faster than DE.

The results of 100 different experiments are plotted in Fig. 10 as fractional differences of the optimal objective function value from its best-found optimal value (in all 100 experiments) versus experiment number. Thus, in the figure, a point closer to the 0 value of df denotes a more accurate result. The best optimal value (fbest) of 7049.248022 was obtained from the 100 experiments using the DE and MDE algorithms. The overall accuracy of the 100 values of df is given by the low mean value of −1.16 × 10^−9 and −5.61 × 10^−9 respectively for MDE and DE. The precision of the 100 values of df is expressed in terms of the low standard deviation of 1.45 × 10^−9 and 2.12 × 10^−9 respectively for MDE and DE.

Fig. 9. Convergence history of HEND problem.

Fig. 10. Fractional difference variation for HEND problem.

8.3. Reactor network design (RND)

RND is a difficult problem as mentioned already, having two local optima near the global optimum. Still MDE and DE are able to locate the global optimum although the success rate is 44–57% (Table 9). For the RND problem, the first termination condition is the same as above but the second condition is |fmax^k − fmin^k| < 10^−6. In this problem, NFE100 with the relocating to boundary method is 2.87 and 9.33% (for MDE and DE, respectively) more than NFE100 for the relocating to interior method (Table 9). It is important to note that in this problem, NRC100 with the relocating to boundary method is just 8 and 10% respectively for MDE and DE (Table 9). It is because when the upper limit of a variable is violated, the value of the variable is relocated to the upper limit, which resulted in convergence to a non-optimal solution. This happened as one of the local solutions near to the global optimum is lying on the bound, and hence trapped at the local optimum.

Further, in order to enhance the robustness of the DE and MDE algorithms, the CR value is increased. And it is found that CR alone is not affecting the value of NRC. An increase in F value significantly affects the NRC value. Table 10 shows the comparison of DE and MDE algorithms for the increased values of CR and F. Also, it has been found out that at CR = 1.0 and F = 0.9, both DE and MDE are able to achieve almost 100% NRC100 although the computational time is 46 and 53% more than that shown in Table 9 respectively for DE (RI) and MDE (RI) algorithms. Further, it is clear from Table 10 that MDE is taking 13.33% less CPU-time as compared to DE. Therefore, MDE seems to be computationally efficient.

Table 10
Comparison of DE and MDE for RND problem

Methods    CPU-time100 (s) (a)   NFE100/NRC100   Key parameters (CR/F)
DE (RI)    0.060                 2074/100        1.0/0.9
MDE (RI)   0.052                 1860/99         1.0/0.9

(a) CPU-time obtained using Pentium-III (500 MHz).

The results of 100 different experiments carried out are plotted in Fig. 11 as fractional differences of the optimal objective function value from its best-found optimal value (in all 100 experiments) versus experiment number. Thus, in the figure, a point closer to the 0 value of fractional difference denotes a more accurate result. The best optimal value (fbest) of 0.388811 was obtained from the 100 experiments using the DE and MDE algorithms. The overall accuracy of the 100 values of df is given by the low mean value of 5.97 × 10^−7 and 4.89 × 10^−7 respectively for MDE and DE. The precision of the 100 values of df is expressed in terms of the low standard deviation of 2.00 × 10^−6 and 1.30 × 10^−6 respectively for MDE and DE.

Fig. 11. Fractional difference variation for RND problem.

8.4. Optimization of an isothermal continuous stirred tank reactor (CSTR)

This problem has been taken from Rosenbrock and Storey (1966). Later, Umeda and Ichikawa (1971) solved this problem using a modified complex method. Neither Rosenbrock and Storey (1966) nor Umeda and Ichikawa (1971) obtained the global optimum solution for this problem. In the current study, DE and MDE are able to locate the global optima. The computational results are shown in Table 11 where NFE100, NRC100, and CPU-time100 are averages of 100 executions. The optimal result obtained by the hill climbing method (Rosenbrock and Storey, 1966) is x3 = 0.42308, T = 978.96 and t = 0.0781 at the 146th iteration for the initial point (T, t) = (1073.0, 0.5). They reported that it was not always possible to find the optimal point because of the existence of a ridge (Fig. 12a and b). Fig. 12a and b shows the variation of the objective function with temperature and time. Umeda and Ichikawa (1971) reported a slightly higher value for temperature (983.3 K) and a lesser value for holding time (0.07572 s) as compared to Rosenbrock and

Table 11
Computational results using DE and MDE for isothermal reactor

Methods                   CPU-time100/NRC100/NFE100 (RB)   CPU-time100/NRC100/NFE100 (RI)   Yield of C (f = x3)   Temperature (T)   Holding time (t)
DE                        0.033 (a)/100/8600               0.032 (a)/100/7996               0.423084              983.2028          0.075901
MDE                       0.027 (a)/100/7351               0.029 (a)/100/7685               0.423084              983.2028          0.075901
Hill climbing             –                                –                                0.423080              978.96            0.0781
Modified complex method   –                                –                                0.423083 (b)          983.3             0.07572

(a) On Pentium-4/2.4 GHz/256 MB RAM.
(b) Calculated value using T = 983.3 K and t = 0.07572 s.

Storey (1966) using hill climbing method although the value of


the objective function is reported to be same in both the cases,
i.e., 0.42308. However, as shown in Table 11, DE and MDE
are able to find global optimum (f = x3 = 0.423084, with tem-
perature, T = 983.2028, and the holding time, t = 0.075901). DE
takes 10.34 and 22.22% more CPU-time than that of MDE in
case of method relocating to interior and relocating to bound-
ary method, respectively. Fig. 13 shows the convergence history of the isothermal reactor problem; it presents the averaged results of 100 experiments.

Fig. 13. Convergence history of isothermal CSTR problem.

Though to the naked eye the objective
function value appears to be the same for DE and MDE after 60
generations, actually it is not so. After 82 generations the value
of the objective function is 0.423074 for DE and 0.423084 for
MDE while DE reaches a value of 0.423084 in 96th generation.
This shows that MDE is able to locate global optimum faster
than DE.
The results of the 100 different experiments are plotted in
Fig. 14 as fractional differences of optimal objective function
value from its best-found optimal value (in all 100 experiments)
versus experiment number. Thus, in the figure, a point closer to
0 value of fractional difference denotes a more accurate result.
The best optimal value (fbest ) of 0.423084 was obtained from the
100 experiments using DE and MDE algorithms. The overall
accuracy of the 100 values of df is given by low mean value of
0.0 and 0.0 respectively for MDE and DE. The precision of 100
values of df is expressed in terms of low standard deviation of
0.0 and 0.0 respectively for MDE and DE.

8.5. Dynamic optimization of a batch reactor

In this problem, we need to find out optimal temperature pro-


file, which gives maximum intermediate product concentration.
This problem has been solved by Renfro et al. (1987) using
piecewise constant controls. They reported a value of 0.61 for
the objective function. Logsdon and Biegler (1989) obtained a

Fig. 12. Variation of f with time and temperature.

Table 12
Results of DE and MDE for 10 intervals

Methods   Yield      Optimal initial temperature T(0)   CPU-time20 (s) (a)   NRC20 (%)
DE        0.610079   361.4                              151.28               100
MDE       0.610079   361.4                              130.55               100

(a) On Pentium-4/2.4 GHz/256 MB RAM using R–K method of fourth order with step size of 0.0001.

value of 0.610767. Dadebo and Mcauley (1995) used dynamic


programming for solving this problem. They reported results for
different number of stages. Dadebo and Mcauley (1995) reported
a yield of 0.610070 for 10 stages which is same as shown in
Table 12 for DE and MDE. In Table 12, CPU-time20 is average
of 20 executions. It is evident that MDE takes about 13.7% (20 s)
of CPU-time less than that of DE. Also, it is to be noted that time
saving is highly desirable in this type of problem: though the percentage (13.7%) appears small, the amount is 20 s, which is a significant amount as compared to the other problems where the saving is of the order of 1.0 s or less. Also, the NRC20 is 100% using both MDE and DE algorithms.

Fig. 15. Fractional difference variation for dynamic optimization of a batch reactor.
The results of the 100 different experiments are plotted in
Fig. 15 as fractional differences of optimal objective function
value from its best-found optimal value (in all 100 experiments)
versus experiment number. Thus, in the figure, a point closer to
0 value of fractional difference denotes a more accurate result.
The best optimal value (fbest ) of 0.610079 was obtained from the
100 experiments using DE and MDE algorithms. The overall
accuracy of the 100 values of df is given by low mean value of
0.0 and 0.0 respectively for MDE and DE. The precision of 100
values of df is expressed in terms of low standard deviation of
0.0 and 0.0 respectively for MDE and DE.
Fig. 16 shows the optimal temperature profile obtained using
DE and MDE for 10 intervals of total time (i.e. for 10 stages).
Both the profiles are exactly the same. Having obtained the same profile in lesser CPU-time establishes the fact that MDE is able to find the optimal temperature profile faster than DE.

Fig. 16. Optimal temperature profile for 10 intervals.
9. Conclusions

The Modified Differential Evolution (MDE) algorithm has


been introduced and compared to Differential Evolution (DE)
for global optimization of benchmark test functions and selected
non-linear chemical processes. Extensive computational com-
parisons have been made for all the chemical engineering prob-
lems considered using standard statistical hypothesis testing
methods such as t-test. The results stated above clearly show
the improvement upon the performance characteristics of DE
with regard to the number of function evaluations (NFE)/CPU-
time required to find the global optimum. The enhancement was
accomplished by using single array in MDE as compared to two
arrays in DE. The second method, i.e., relocating to interior is
found to be better than the relocating to boundary, as it avoids
trapping at local optimal solution if they are present at extreme
(upper or lower bound). RND problem is an example of such a
situation.
Fig. 14. Fractional difference variation for isothermal CSTR optimization problem.

In this paper, we focused on applying DE and MDE to non-linear unconstrained and constrained problems encountered in

chemical engineering. An interesting area of future study would involve the application of MDE to other numeric domains such as mixed integer problems or other problem domains such as multi-criteria optimization.

References

Adjiman, C. S., Dallwig, S., Floudas, C. A., & Neumaier, A. (1998). A global optimization method, αBB, for general twice-differentiable constrained NLPs. I. Theoretical advances. Computers and Chemical Engineering, 22(9), 1137–1158.
Androulakis, I. P., & Venkatasubramanian, V. (1991). A genetic algorithm framework for process design and optimization. Computers and Chemical Engineering, 15(4), 217–228.
Angira, R., & Babu, B. V. (2003). Evolutionary computation for global optimization of non-linear chemical engineering processes. In Proceedings of the international symposium on process systems engineering and control (ISPSEC '03)—for productivity enhancement through design and optimization, Paper No. FMA2.
Babu, B. V. (2004). Process plant simulation. New York: Oxford University Press.
Babu, B. V., & Angira, R. (2002a). Optimization of non-linear chemical processes using evolutionary algorithm. In Proceedings of the international symposium and 55th annual session of IIChE (CHEMCON-2002).
Babu, B. V., & Angira, R. (2002b). A differential evolution approach for global optimization of MINLP problems. In Proceedings of the fourth Asia Pacific conference on simulated evolution and learning (SEAL-2002), vol. 2 (pp. 880–884).
Babu, B. V., Pallavi, G. C., & Syed Mubeen, J. H. (2005). Multiobjective differential evolution (MODE) for optimization of adiabatic styrene reactor. Chemical Engineering Science, 60(17), 4822–4837.
Babu, B. V., & Sastry, K. K. N. (1999). Estimation of heat-transfer parameters in a trickle-bed reactor using differential evolution and orthogonal collocation. Computers and Chemical Engineering, 23, 327–339.
Bracken, J., & McCormick, G. P. (1968). Selected applications of non-linear programming. New York: John Wiley & Sons, Inc.
Chiou, J. P., Chang, C. F., & Su, C. T. (2004). Ant direction hybrid differential evolution for solving large capacitor placement problems. IEEE Transactions on Power Systems, 19(4), 1794–1800.
Chiou, J. P., & Wang, F. S. (1999). Hybrid method of evolutionary algorithms for static and dynamic optimization problems with application to a fed-batch fermentation process. Computers and Chemical Engineering, 23(9), 1277–1291.
Dadebo, S. A., & Mcauley, K. B. (1995). Dynamic optimization of constrained chemical engineering problems using dynamic programming. Computers and Chemical Engineering, 19, 513–525.
Deb, K. (1996). Optimization for engineering design: Algorithms and examples. New Delhi: Prentice-Hall.
Deb, K. (2001). Multi-objective optimization using evolutionary algorithms. Singapore: John Wiley.
Dembo, R. S. (1976). A set of geometric programming test problems and their solutions. Mathematical Programming, 10, 193–213.
Edgar, T. F., & Himmelblau, D. M. (2001). Optimization of chemical processes. Singapore: McGraw-Hill, Inc, pp. 492–495.
Floudas, C. A. (1995). Non-linear and mixed-integer optimization. New York: Oxford University Press.
Floudas, C. A., & Pardalos, P. M. (1990). A collection of test problems for constrained global optimization algorithms. Lecture notes in computer science: vol. 455. Berlin, Germany: Springer.
Goldberg, D. E. (1989). Genetic algorithms in search, optimisation, and machine learning. Reading, MA: Addison-Wesley.
Hendtlass, T. (2001). A combined swarm differential evolution algorithm for optimization problems. Lecture notes in computer science: vol. 2070. Springer-Verlag, pp. 11–18, ISSN 0302-9743.
Holland, J. H. (1975). Adaptation in natural and artificial systems. Ann Arbor, Michigan: The University of Michigan Press.
Kirkpatrick, S., Gelatt, C. D., & Vechhi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671–680.
Lee, M. H., Han, C., & Chang, K. S. (1999). Dynamic optimization of a continuous polymer reactor using a modified differential evolution algorithm. Industrial and Engineering Chemistry Research, 38(12), 4825–4831.
Logsdon, J. S., & Biegler, L. T. (1989). Accurate solution of differential algebraic equations. Industrial and Engineering Chemistry Research, 28, 1628–1639.
Lu, J. C., & Wang, F. S. (2001). Optimization of low pressure chemical vapour deposition reactors using hybrid differential evolution. Canadian Journal of Chemical Engineering, 79(2), 246–254.
Maranas, C. D., & Floudas, C. A. (1997). Global optimization in generalized geometric programming. Computers and Chemical Engineering, 21(4), 351–370.
Onwubolu, G. C., & Babu, B. V. (2004). New optimization techniques in engineering. Heidelberg, Germany: Springer-Verlag.
Price, K., & Storn, R. (1997). Differential evolution—A simple evolution strategy for fast optimization. Dr. Dobb's Journal, 22, 18–24 and 78.
Price, K., & Storn, R. (2005). Home page of differential evolution as on April 25. URL: http://www.ICSI.Berkeley.edu/~storn/code.html.
Ray, W. H. (1981). Advanced process control. New York: McGraw-Hill.
Renfro, J. G., Morshedi, A. M., & Osbjornsen, O. A. (1987). Simultaneous optimization and solution of systems described by differential/algebraic equations. Computers and Chemical Engineering, 11, 503–517.
Rosenbrock, H. H., & Storey, C. (1966). Computational techniques for chemical engineers. London: Pergamon Press.
Ryoo, H. S., & Sahinidis, N. V. (1995). Global optimization of nonconvex NLPs and MINLPs with applications in process design. Computers and Chemical Engineering, 19(5), 551–566.
Salcedo, R. L. (1992). Solving nonconvex non-linear programming problems with adaptive random search. Industrial and Engineering Chemistry Research, 31, 262.
Sauer, R. N., Coville, A. R., & Burwick, C. W. (1964). Computer points way to more profits. Hydrocarbon Processing Petroleum Refiner, 43, 84.
Schwefel, H. P. (1981). Numerical optimization of computer models. New York: John Wiley & Sons.
Storn, R. (1995). Differential evolution design of an IIR-filter with requirements for magnitude and group delay. International Computer Science Institute, TR-95-018.
Tasoulis, D. K., Pavlidis, N. G., Plagianakos, V. P., & Vrahatis, M. N. (2004). Parallel differential evolution. Available at: http://www.math.upatras.gr/~dtas/papers/TasoulisPPV2004.pdf.
Teh, Y. S., & Rangaiah, G. P. (2003). Tabu search for global optimization of continuous functions with application to phase equilibrium calculations. Computers and Chemical Engineering, 27, 1665–1679.
Umeda, T., & Ichikawa, A. (1971). A modified complex method for optimization. Industrial and Engineering Chemistry Process Design and Development, 10, 229–236.
Wang, F. S., & Chiou, J. P. (1997). Optimal control and optimal time location problems of differential-algebraic systems by differential evolution. Industrial and Engineering Chemistry Research, 36, 5348–5357.
