
A Variant of Evolution Strategies

for Vector Optimization


Frank Kursawe
University of Dortmund, Department of Computer Science XI
D-44221 Dortmund, Germany

1 Introduction
It has become increasingly obvious that optimization under a single scalar-valued
criterion - often a monetary one - fails to reflect the variety of aspects in a world getting
more and more complex. Although the mathematical foundations were already laid by
V. Pareto [1] about a hundred years ago, the existing tools for multiple criteria decision
making (MCDM) are still too difficult to be used by non-mathematicians.

In order to overcome these difficulties a new method based on evolution strategies has
been developed which is capable of giving a good insight into the structure of the Pareto
set by computing a finite number of efficient solutions. (A vector f1 is called efficient if
no f2 exists which is 'better' than f1 in all its components.)
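As a small illustration (not part of the paper), this efficiency test can be written down directly; assuming all objectives are to be minimized, a point is kept unless some other point is strictly better in every component:

```python
from typing import Sequence

def dominates_all(f2: Sequence[float], f1: Sequence[float]) -> bool:
    """True if f2 is strictly better (here: smaller) than f1 in every component."""
    return all(a < b for a, b in zip(f2, f1))

def efficient_subset(points: list) -> list:
    """Keep the points for which no other point is better in all components."""
    return [p for p in points
            if not any(dominates_all(q, p) for q in points if q is not p)]

# Example with two conflicting objectives, both to be minimized:
# (3.0, 3.0) is removed because (2.0, 2.0) beats it in both components.
print(efficient_subset([(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]))
```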

2 Shortcomings of Conventional Methods


Numerous methods have been developed for vector optimization, e.g. linear vector optimization, game theory, one global replacement criterion and others [2, 3]. Many of these
algorithms reduce the problem formulation to a scalar one. This seems tempting for two
reasons:
In contrast to the original problem the reduced one (hopefully) has one distinct
solution.
The decision maker may choose from a variety of approved procedures from the
domain of single criterion optimization.
Like many other approaches, however, this idea reduces the decision space prematurely,
i.e. before enough information is available.
Despite the large number of numerical tools for vector optimization problems, several
questions still remain:

If a utility function is assumed, does it exist? And if so, has it been understood
correctly? How does the chosen utility function influence the solution? How can
correlated objectives be dealt with? Is the solution obtained efficient? Furthermore,
does it make sense to obliterate the information obtainable from K objectives?
If aspiration levels are demanded from the decision maker, how can one measure
their influence on the solution? How can one deal with the fact that objectives
with small absolute values are discriminated against by any metric? How does the solution
depend on the chosen metric?
What is there to do if the objectives are not commensurable?

3 Evolution Strategies ...


3.1 ... for Single Criterion Optimization
Evolution can be regarded as a sequence of self-organization steps, i.e. as the underlying
universal principle of any kind of self-organization. Modern research has proved nature's
strategies worth copying for technical or numerical optimization. Pioneering work in this
direction has been done by Holland [4], Bremermann [5], Rechenberg [6] and Schwefel [7].

Multi-membered evolution strategies were proposed by Schwefel [7, 8] as a robust, general
purpose optimization algorithm that is very modest in terms of prerequisite assumptions.
Like genetic algorithms (GAs), they have been shown to be capable of searching for
the global optimum in parameter spaces which cause difficulties for gradient algorithms.
Their range of application also covers NP-hard problems such as the Traveling Salesman
Problem or problems with an optimum changing over time. Even for pattern matching,
which does not belong to the classical domain of optimization, evolution strategies turned
out to be useful. In fact, all problems which provide some criterion (environment) evaluating an individual's fitness can be treated.

In order to solve these single criterion optimization problems it proved to be sufficient to
imitate the following principles of nature:
population
haploid individuals
synchronous generations
sexual propagation with recombination/crossing-over
random mating
mutation ('driving force')
selection ('steering wheel')

Generally, one can understand mutation as a process of varying or generating propositions.


The selection step then evaluates their usefulness.
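As a rough sketch of how these principles map onto a single criterion algorithm, the following minimal (mu, lambda)-style evolution strategy mutates a population, evaluates it and keeps the best survivors. The fixed step size and the missing recombination and step-size self-adaptation are simplifications for illustration, not features of Schwefel's strategies [7, 8]:

```python
import random

def es_minimize(f, dim, mu=5, lam=35, sigma=0.3, generations=200):
    """Minimal (mu, lambda)-ES sketch: mutate, evaluate, select the mu best.

    Assumptions (not from the paper): fixed step size sigma, no recombination,
    no self-adaptation of the strategy parameters.
    """
    parents = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            p = random.choice(parents)                         # random mating (trivial here)
            child = [x + random.gauss(0.0, sigma) for x in p]  # mutation: the 'driving force'
            offspring.append(child)
        offspring.sort(key=f)                                  # selection: the 'steering wheel'
        parents = offspring[:mu]
    return parents[0]

# Usage: minimize a simple sphere function in five dimensions
best = es_minimize(lambda x: sum(v * v for v in x), dim=5)
print(best)
```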

3.2 ... for Multiple Criteria Optimization


For multiple criteria problems, however, two of the principles mentioned above have to
be modified:
Since the environment now consists of K objectives, the selection step is provided
with a fixed user-definable vector that determines the probability of each objective
becoming the sorting criterion in the K iterations of the selection loop. Alternatively,
this vector may be allowed to change randomly over time.
Furthermore, the extension of an individual's genes by recessive information turned
out to be necessary in order to maintain the population's capability of coping with
a changing environment. The recessive genes enable a fast reaction after a sudden
variation of the probability vector. One can also observe this behaviour in nature:
the younger the environment, the higher the portion of polyploid organisms. Figure
1 illustrates the interior structure of a diploid individual:

Figure 1: Genotype / Phenotype of an Individual

The x are called decision (object) variables, in contrast to the step-size (strategy) variables σ.
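The following sketch illustrates one possible reading of this structure and of the modified selection step. The data layout, the function names and the way survivors are drawn per round are assumptions made for illustration, not the paper's actual implementation:

```python
import random
from dataclasses import dataclass, field

@dataclass
class DiploidIndividual:
    # dominant gene set: expressed decision variables and step sizes (the phenotype)
    x: list
    sigma: list
    # recessive gene set: carried along unexpressed, available after a sudden
    # change of the probability vector (illustrative layout, not the paper's encoding)
    x_recessive: list = field(default_factory=list)
    sigma_recessive: list = field(default_factory=list)

def select(population, objectives, prob_vector, mu):
    """One selection step: in each of the K rounds, draw the sorting criterion
    at random according to prob_vector and keep a slice of the best individuals."""
    k = len(objectives)
    survivors, per_round = [], max(1, mu // k)
    for _ in range(k):
        j = random.choices(range(k), weights=prob_vector)[0]   # objective becomes criterion
        ranked = sorted(population, key=lambda ind: objectives[j](ind.x))
        survivors.extend(ranked[:per_round])
    return survivors[:mu]
```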
By presenting a number of solutions the program provides the user with an idea of the
trade-offs between the objectives. It should be noted that efficient solutions in one generation may become dominated by individuals emerging in a later generation. This explains
the non-efficient points in Figure 2 (left). For efficiency reasons the 'parents' of the next
generation are stored provisionally in an array that is cleaned out if there is not enough
space left for further individuals. If this operation does not result in enough free space,
solutions 'close' to each other are deleted. As an important side effect the elements of
the Pareto set are forced apart, thus allowing a good survey with only a finite number of
solutions. Figure 2 (right) displays the situation after tidying up.
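The paper does not spell out the clean-out rule, but one plausible reading of deleting solutions 'close' to each other is to repeatedly remove one member of the closest pair in objective space until the array fits again; the distance measure and capacity below are assumptions:

```python
import math

def prune_archive(archive, objective_vector, capacity):
    """Thin out a list of efficient solutions by repeatedly deleting one of the
    two points whose objective vectors lie closest together, until the archive
    fits. A guess at the 'delete close solutions' rule, not the paper's code."""
    def dist(a, b):
        return math.dist(objective_vector(a), objective_vector(b))
    while len(archive) > capacity:
        # find the closest pair and drop one of its members
        (_, _, victim) = min(
            (dist(archive[i], archive[j]), i, j)
            for i in range(len(archive)) for j in range(i + 1, len(archive)))
        archive.pop(victim)
    return archive
```

As a side effect of always removing from the closest pair, the surviving points spread out along the Pareto set, which matches the survey behaviour described above.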
f_1(x) = \sum_{i=1}^{n-1} \left( -10 \, e^{-0.2 \sqrt{x_i^2 + x_{i+1}^2}} \right)

f_2(x) = \sum_{i=1}^{n} \left( |x_i|^{0.8} + 5 \sin(x_i)^3 \right)
Figure 2: Graphical Output of the Program
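For reference, the two test objectives above can be evaluated with a few lines of code (a direct transcription of the formulas, not the author's program):

```python
import math

def f1(x):
    """First objective: sum over i of -10 * exp(-0.2 * sqrt(x_i^2 + x_{i+1}^2))."""
    return sum(-10.0 * math.exp(-0.2 * math.sqrt(x[i] ** 2 + x[i + 1] ** 2))
               for i in range(len(x) - 1))

def f2(x):
    """Second objective: sum over i of |x_i|^0.8 + 5 * sin(x_i)^3."""
    return sum(abs(xi) ** 0.8 + 5.0 * math.sin(xi) ** 3 for xi in x)

# Example evaluation for a three-dimensional decision vector
x = [0.5, -0.3, 1.2]
print(f1(x), f2(x))
```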

4 Results
When working with diploid individuals the inclusion of the recessive genes in the selection
step turns out to be vital. Otherwise, undisturbed by the outside world, they lead such a
life of their own that an individual whose dominant genes have been freshened up with
recessive material has no chance of surviving the next selection step. The best results were
achieved with a probability of about 1/3 for exchanging dominant and recessive genes.
This value also serves as a factor when putting together the overall fitness vector. Only in
this way can the additional recessive material serve as a stock of variants. From further
test runs one can also conclude that diploid or, in general, polyploid individuals are not
worth the additional computing time in a static environment.
Since covering the Pareto set as well as possible is the algorithm's objective, a probability
distribution forcing certain minimum changes during the mutation step ought to yield
better results. Indeed, the (symmetric) Weibull distribution turned out to be better than
the Gaussian distribution. This result also holds for the single criterion case.
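A 'symmetric' Weibull mutation can be sketched by drawing a Weibull-distributed magnitude and attaching a random sign; the shape and scale parameters below are illustrative assumptions, since the paper does not report the values used:

```python
import random

def symmetric_weibull(scale=1.0, shape=2.0):
    """Draw one mutation step from a 'symmetric' Weibull distribution: a
    Weibull-distributed magnitude with a random sign. The scale and shape
    values are illustrative, not taken from the paper."""
    magnitude = random.weibullvariate(scale, shape)
    return magnitude if random.random() < 0.5 else -magnitude

def mutate(x, sigma):
    """Mutate a decision vector by adding a symmetric Weibull step per
    component, scaled by the individual's step sizes sigma."""
    return [xi + si * symmetric_weibull() for xi, si in zip(x, sigma)]
```

Unlike a Gaussian step, such a distribution puts little mass near zero, which enforces the minimum changes mentioned above.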
The stochastic approach towards vector optimization problems via evolution strategies
leads to one major advantage: in contrast to other methods, no subjective decisions are
required during the course of the iterations. Instead of narrowing the control variable
space or the objective space by deciding about the future direction of the search from an
'information vacuum' [2], the decision maker can collect as much information as needed
before making a choice of which of the alternatives should be realized. Moreover, using a
population while looking for a set of efficient solutions seems to be more appropriate than
just trying to improve one 'current best' solution.

5 Outlook
In future research it will be interesting to see whether further principles of nature are
worth copying, such as aging, fertility rates depending on the relative fitness, or parallel
(sub-)populations. For example, the selection loop could be modified in the following
way: each time, the appropriate fraction of the next generation is selected according to all
elements of the fitness vector, one after another. By doing so one guarantees the survival
of the best individuals on each objective and, simultaneously, enhances the reproduction
probability of those individuals selected more than once, i.e. those that are better than
average on more than one objective.
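This suggested selection loop might look roughly as follows; the fraction taken per objective and the handling of duplicates are assumptions, not details given in the text:

```python
def select_per_objective(population, objectives, mu):
    """Outlook variant (sketch): take the best mu/K individuals under every
    objective in turn. Individuals that excel on several objectives are picked
    more than once and thus reproduce more often."""
    k = len(objectives)
    per_obj = max(1, mu // k)
    survivors = []
    for f in objectives:
        survivors.extend(sorted(population, key=f)[:per_obj])
    return survivors
```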
'Mutations' of the algorithm, however, should be carried out carefully and only if the
underlying natural principle has been fully understood.

References
[1] Pareto, V.: Cours d'Economie Politique, Rouge, Lausanne, 1896
[2] Hwang, C.-L., Masud, A. S. M.: Multiple Objective Decision Making - Methods and Applications, Springer, Berlin, 1979
[3] Peschel, M.: Modellbildung und Steuerung mit Hilfe der Polyoptimierung, VEB Verlag Technik, Berlin, 1980
[4] Holland, J. H.: Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor (Michigan), 1975
[5] Bremermann, H. J.: Optimization through Evolution and Recombination, pp. 93-106 in Yovits, Jacobi and Goldstein (Eds.): Self-organizing Systems, Pergamon Press, Oxford, 1962
[6] Rechenberg, I.: Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution, Frommann-Holzboog, Stuttgart, 1973
[7] Schwefel, H.-P.: Numerische Optimierung von Computer-Modellen mittels der Evolutionsstrategie, Birkhäuser, Basel, 1977
[8] Schwefel, H.-P.: Numerical Optimization of Computer Models, Wiley & Sons, Chichester, 1981
