BTP Report Final

The document describes a thesis submitted for the degree of Bachelor of Technology. The thesis proposes a novel decomposition-recombination strategy for large scale optimization of power distribution networks. The thesis was submitted by Saumitra Sharma and Nikhil Menghrajani under the guidance of Dr. Gaurav Trivedi at the Indian Institute of Technology Guwahati in April 2018.


A Method for Large Scale Optimization of Power Distribution Network Using a Novel Decomposition-Recombination Strategy

A thesis submitted in partial fulfillment of


the requirements for the degree of

Bachelor of Technology

by

Saumitra Sharma, Nikhil Menghrajani


(Roll No. 140108043, 140102080)

Under the guidance of


Dr. Gaurav Trivedi

DEPARTMENT OF ELECTRONICS & ELECTRICAL ENGINEERING


INDIAN INSTITUTE OF TECHNOLOGY GUWAHATI
April 2018
CERTIFICATE

This is to certify that the work contained in this thesis entitled

A Method for Large Scale Optimization of Power Distribution Network Using a Novel Decomposition-Recombination Strategy

is the work of

Saumitra Sharma, Nikhil Menghrajani


(Roll No. 140108043, 140102080)

for the award of the degree of Bachelor of Technology, carried out in the
Department of Electronics and Electrical Engineering, Indian Institute of
Technology Guwahati under my supervision and that it has not been
submitted elsewhere for a degree.

Guide

Date:

Place:
DECLARATION

The work contained in this thesis is our own work under the supervision of the
guides. We have read and understood the “B. Tech./B. Des. Ordinances and
Regulations” of IIT Guwahati and the “FAQ Document on Academic Malpractice
and Plagiarism” of the EEE Department of IIT Guwahati. To the best of our
knowledge, this thesis is an honest representation of our work.

Author

Date:

Place:
Acknowledgments

Needless to say, this project would not have been possible without the support and guidance of
Dr. Gaurav Trivedi, and we are grateful to him. We would also like to thank Mr. Satyabrata
Dash, research scholar, IIT Guwahati, for providing the necessary guidance whenever we ran into
problems. Finally, we thank CDAC, Pune and IIT Guwahati for allowing us to use Param-Ishan,
the supercomputer on which our algorithm was tested.
Abstract

Technology has been growing at an unfathomable pace, with leaps of progress in machine
learning and AI and with ever more complex and powerful circuits being developed. Many of
these fields share a common need: finding optimal parameter values at a large scale. A number
of heuristic algorithms have been developed for large scale optimization problems. However,
most of them suffer from premature convergence or become computationally inefficient as the
number of variables in the objective function increases. In this thesis, we propose a methodology
which draws inspiration from co-operative co-evolution, introduces a new grouping strategy,
and achieves convergence on large scale optimization problems. Experimental results show that
this method is more effective than existing evolutionary algorithms and requires considerably
less computational time. Furthermore, by slightly tweaking this algorithm and combining it with
river formation dynamics (RFD), we have attempted to solve the power distribution network
design problem, an area of increasing concern in the semiconductor industry, with positive
results.

iv
Contents

Abstract iv

List of Figures vii

1 Introduction 1
1.1 VLSI Power Distribution Network (PDN) Design . . . . . . . . . . . . . . . . 1
1.2 Optimization Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Evolutionary Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.2 Co-operative Co-evolutionary algorithms (CC) . . . . . . . . . . . . . 16
1.2.3 River Formation Dynamics (RFD) . . . . . . . . . . . . . . . . . . . . 18

2 Proposed Methodologies 22
2.1 Proposed Optimization Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 22
2.1.1 Strategy 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.1.2 Strategy 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.1.3 Strategy 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2 Applying RFD to PDN design problem . . . . . . . . . . . . . . . . . . . . . . 25
2.2.1 Graph approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.2.2 Electrical Circuit Theory Approach . . . . . . . . . . . . . . . . . . . 26
2.2.3 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.3 Applying Proposed Optimization Algorithm over RFD to solve PDN design
problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3 Simulations and Results 29


3.1 Comparison between Optimization Algorithms . . . . . . . . . . . . . . . . . 29
3.2 Comparison between RFD and Proposed Algorithm for PDN design problem . 36

v
4 Conclusion and Future work 37

vi
List of Figures

1.1 Modeling power lines to resistive network . . . . . . . . . . . . . . . . . . . . 2


1.2 Population, Chromosomes and Genes . . . . . . . . . . . . . . . . . . . . . . 7
1.3 CFO decision space for 3 variables . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4 Parallelized implementation of CFO . . . . . . . . . . . . . . . . . . . . . . . 15
1.5 Decision space adaptation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.6 River Formation Dynamics scheme . . . . . . . . . . . . . . . . . . . . . . . . 20

2.1 Propagation of solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

3.1 Convergence time analysis of CFO . . . . . . . . . . . . . . . . . . . . . . . . 30


3.2 Convergence time analysis of parallelized CFO . . . . . . . . . . . . . . . . . 31
3.3 Convergence time analysis of GA . . . . . . . . . . . . . . . . . . . . . . . . 32
3.4 Convergence time analysis of proposed algorithm for 50 decision circles . . . . 33
3.5 Convergence time analysis of proposed algorithm for 100 decision circles . . . 34
3.6 Convergence time analysis of proposed algorithm for 200 decision circles . . . 35
3.7 Convergence time analysis of serial vs parallel RFD . . . . . . . . . . . . . . . 36

vii
Chapter 1

Introduction

Engineering is all about building useful objects by applying the laws of nature, and while
building them an engineer is often interested not merely in a design that works in a nominal
fashion, but in the design that is best in some way. With the onset of machine learning, AI and
big data there is a growing need for better large scale optimization techniques. Power
Distribution Network (PDN) design has been gaining major attention in the last couple of years,
chiefly due to the drop in operating voltages and the increases in operating frequencies, grid
density and currents, which have strained the power grid as technology evolves. This calls for a
faster, more efficient optimization algorithm that can help mitigate these problems for a
significantly large number of variables. Through our project we have attempted to solve this
problem and have proposed an optimization algorithm that performs better than current
evolutionary algorithms.
In the following section we present in detail the issues faced in designing a Power
Distribution Network and how the task reduces to an optimization problem. Section 1.2 explains
the currently available meta-heuristic optimization algorithms, chiefly covering evolutionary
algorithms, co-operative co-evolutionary algorithms and River Formation Dynamics. Chapter 2
covers our proposed optimization algorithm, the application of RFD to the PDN design problem,
and the combination of RFD with our algorithm to solve the same. The simulations and results of
the proposed methods are then shown in Chapter 3, followed by the conclusion and future scope.

1.1 VLSI Power Distribution Network (PDN) Design


VLSI Power Distribution Network (PDN) design has become one of the most important topics of
nano-scale VLSI design, as the PDN delivers power and ground voltages from the pad locations
to all the devices in the design.

Figure 1.1: Modeling power lines to resistive network

With a decrease in the operating voltage of high performance processors and an increase in the
current density of a single chip, the target impedance drops significantly, and ensuring the
proper functionality of an integrated circuit becomes a challenging job [1]. With
low supply voltage, issues related to supply voltage fluctuations cannot be ignored and can
severely compromise the functionality by introducing signal irregularities and logical faults.
Furthermore, with increasing size of the power distribution network, voltage contours typically
degrade as the current travels from the periphery to the centre of the chip. According to the
International Technology Roadmap for Semiconductors (ITRS) [4], the average current entering
an integrated circuit increases substantially with advancing technology, since higher currents
allow faster gate switching. Hence, reducing the IR drop and switching noise of a PDN, caused
by leakage and parasitic resistance, loop inductance and transient currents, has become
extremely important. A foolproof design of a power distribution network requires repeated
planning and thorough refinement using optimization techniques, which need to be equipped with
efficient strategies to administer multiple variables with high computational efficiency.

Much research has been done on PDN modelling and noise reduction by minimizing different
parameters, such as area, decoupling capacitance and voltage drop. One solution to the large IR
drop issue is using wires wider than signal nets, or adding a minimum of decoupling capacitors
based on reliability, area and the current consumed by the PDN. Many approaches have been
proposed to reduce the wire area by constructing a mesh or tree topology for the PDN. Much work
has also been done on optimizing the number and location of power supply pads in order to limit
the IR drop.

The magnitude of the voltage drops and variations depends on the position and placement of the
pads as well as on the wire width [2]. Consequently, the IR drop at a particular node affects
the nearby region. Hence, we formulate the IR drop minimization problem as a mixed-integer
nonlinear programming problem, where the objective is to minimize the IR drop of the entire PDN
by considering each region individually, subject to area, effective width and current
constraints. In order to formulate this problem, the PDN needs to be modelled as a network
having a set of horizontal and vertical DC loads connected to a Vdd power supply, as shown in
Figure 1.1. To calculate the amount of current drawn by the transistor circuits, current sinks
are placed at each of the nodes except the power supply nodes. For steady state analysis the
entire PDN can be described by the linear system $GV = S$, with $V, S \in \mathbb{R}^{m \times 1}$
and $G \in \mathbb{R}^{m \times m}$, where the conductance matrix $G$ models the branch
conductances, the unknown vector $V$ models the node potentials, and the right-hand-side vector
$S$ models the independent sources (power supply nodes and current sinks). We apply nodal
analysis to set up the system. The equations are as follows:

(A_G G A_G^T)\, v_n = -A_J i_J + (A_G G A_G^T)\, V_{dd}

where AG and AJ are reduced incidence matrices across conductance and independent
current sinks respectively and vn and iJ are the vectors of node potential and current sinks
respectively. If Vir is the vector of IR drop at each node then the above equation can be rewritten
as:

V_{ir} = V_{dd} - v_n = V_{dd} - (A_G G A_G^T)^{-1}\left(-A_J i_J + (A_G G A_G^T)\, V_{dd}\right)

We have assumed here that topology and the physical locations of the power supply pads
(pins and bond wires) are fixed. The optimal IR drop at different nodes of the PDN can be re-
placed as a minimization problem with the above mentioned equation as the objective equation
subject to the following constraints:

\sum_{(x,y)\in R} \rho\, l_{xy}^2\, g_{xy} \le M_{area},

\rho\, l_{xy}\, g_{xy} \ge w_{min},

\sum_{(x,y)} \frac{i_{xy}}{w_{xy}} \le I_{max},

\sum_{(x,y)} (v_y - v_x)\, g_{xy} \ge i_x,

V_{dd} - v_x \le V_{th}, \quad \forall x \in N

where Marea denotes the area of the PDN, ρ denotes the sheet resistance, and lxy, wxy, gxy and
ixy represent the length, width, conductance and current of any branch (x, y); wmin represents
the minimum width of any branch, Imax represents the maximum current density and Vth represents
the threshold of IR drop. The width and area of the wires are considered as constraints to
bound the IR drop, and the current density at a node can be bounded through the absolute
potential difference across its branches. During analysis each node must satisfy Kirchhoff’s
current law, which is represented as a nonlinear equality constraint. Furthermore, the IR drop
at each node is bounded by the threshold Vth. Formulating the IR drop objective function
requires knowledge of the voltage profile of the PDN. However, with increasing network size the
matrix AG grows quadratically, and the system matrix quickly becomes diagonally dominant and
irreducible. The deterministic approaches reported in the literature on the PDN optimization
problem become unreliable, as they require large computational time and give poor accuracy.
There is thus a need for a heuristic approach to analyze the PDN and minimize the IR drop.
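The nodal-analysis formulation above can be illustrated on a toy example. The sketch below is an illustration, not the thesis implementation: the 4-node rail, the conductance value and the sink currents are invented for demonstration. It builds the reduced conductance matrix for a short power rail fed from a single Vdd pad and solves the resulting GV = S system for node potentials and IR drops with NumPy:

```python
import numpy as np

# Toy 4-node power rail: node 0 is the Vdd pad, nodes 1-3 draw current.
Vdd = 1.0                              # supply voltage (V)
g = 10.0                               # branch conductance between adjacent nodes (S)
sinks = np.array([0.05, 0.05, 0.05])   # current drawn at nodes 1-3 (A)

# Reduced conductance (Laplacian) matrix for unknown nodes 1-3,
# with the fixed Vdd node eliminated by nodal analysis.
G = g * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])

# Right-hand side: current sinks plus the coupling to the fixed Vdd node.
S = -sinks.copy()
S[0] += g * Vdd

v_n = np.linalg.solve(G, S)   # node potentials
ir_drop = Vdd - v_n           # IR drop at each node
```

As expected, the IR drop grows monotonically with distance from the supply pad, matching the observation that voltage contours degrade toward the centre of the chip.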

4
1.2 Optimization Techniques

1.2.1 Evolutionary Algorithms

Evolutionary computation, as the name suggests, consists of systems inspired by the idea of
natural evolution. The field is relatively new, with a wide collection of theories explaining
different evolutionary processes of reproduction, selection and mutation. Conventional
optimization methods that search for parameter values to minimize or maximize the objective
function work for linear functions but are ineffective against more complex ones. Evolution
laid down the foundation for strategies that helped in computing solutions to problems which
appeared difficult. Along these lines, Holland proposed genetic algorithms as methods capable
of recombining the successful scattered pieces of possible solutions by taking into account the
learning of independent individuals [8]. Later, different approaches were developed to generate
machine intelligence by performing evolution on different algorithms, which in turn laid the
foundation for research into evolutionary programming.

Evolutionary algorithms are a broad class of stochastic optimization algorithms based on models
of the biological growth of individuals in a natural environment, following biological processes
such as inheritance and survival of the fittest. They are particularly useful in optimization
problems where the number of variables is large and a global optimum is difficult to obtain.
The objective function is optimized through iterative selection and mutation. The key issues
with these algorithms, from a computational point of view, are representation and searching.
These algorithms have been applied in various fields that overlap with optimization to some
extent, chief among them the traveling salesman problem and the transportation problem. They
have also been used in artificial neural networks for finding the optimum set of weights, and
in the design of network topologies.

Evolutionary algorithms chiefly rely on three principles [9] that make them stand out among
other search algorithms:

1) Population based: These algorithms use the collective learning of a population of
individuals, essentially updating a whole population of candidate solutions in each iteration,
starting from a random initialization. This amounts to exploring the search space of the
problem in parallel for a global optimum, analogous to a population of organisms adapting to
their environment.

2) Evolution: Populations are evolved by the use of stochastic operators, which include
selection, mutation and recombination. Selection chooses better suited individuals for
reproduction (survival of the fittest), mutation randomly perturbs individuals, and
recombination allows parents to pass down certain characteristics to their children.

3) Fitness: In order to ensure that the fittest survive during evolution, each individual is
assigned a measure of fitness based on the objective. The value of this function, which
measures the quality of the individual, is then used to carry out the selection process. The
properties of an individual form a unique representation, which is used in implementing the
fitness evaluation. Minor differences in the use of these basic principles characterize the
various evolutionary techniques.

The pseudo code of a general evolutionary algorithm is shown below -


generation = 0;
create initial population randomly;
while not termination condition do
    generation = generation + 1;
    calculate fitness of the population;
    select parents from population;
    recombine the population;
    mutate the population;
end
Algorithm 1: General Evolutionary Algorithm Pseudo code
The main loop runs until convergence to the fittest individual. Some of the evolutionary
algorithms which have gained wide popularity and use are mentioned below.
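The loop of Algorithm 1 can also be sketched as executable code. The following minimal Python version is an illustrative sketch, not the thesis implementation: the population size, search range, half-truncation selection and Gaussian mutation are arbitrary choices made for the example.

```python
import random

def evolve(fitness, dim=2, pop_size=30, gens=60, sigma=0.3):
    """Minimal evolutionary loop in the shape of Algorithm 1 (minimization)."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                      # rank by fitness
        parents = pop[:pop_size // 2]              # select the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]          # recombine
            child = [x + random.gauss(0, sigma) for x in child]  # mutate
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve(lambda x: sum(v * v for v in x))  # minimize the sphere function
```

Because the fitter half of each generation is carried over unchanged, the best solution found never worsens across iterations.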

Genetic Algorithm (GA)

In this algorithm an initial set of solutions is randomly generated, constituting the first
generation of individuals. The population should be large enough that there is sufficient
genetic diversity, but not so large that the computational time explodes. Each candidate
solution is termed a chromosome and comprises genes, the decision variables that need to be
optimized. Our proposed algorithm uses the genetic algorithm for sub-problems; the details are
explained later.

Figure 1.2: Population, Chromosomes and Genes

1. Initialization: The genetic algorithm initiates a random population from the search space,
which is specified by the range of values each variable can take. Each individual of the
population is a chromosome and consists of a vector of variables, the genes. Each chromosome
has a fitness value defined by the objective function: substituting the values of the variables
of an individual into the objective function yields the fitness of that individual. If the
objective function is to be minimized, then the smaller the value of the function, the better
the fitness, and vice versa. In this fashion every individual in the population has a fitness
value linked with it.

2. Ranking: The randomly generated population is then ranked on the basis of fitness, such that
individuals with better fitness scores are placed at the front and those with poor fitness
scores at the end. The purpose of this step is to group the best individuals for crossover.
3. Selection: Starting from this step we generate the next generation, which will be better
than the previous ones. In the new generation we preserve the top 5% - 7% of the individuals of
the previous generation. This process, called elitism, conserves the best solutions of the
previous iteration and has proved to yield faster convergence.
We then select two individuals from the population using roulette wheel selection. We generate
a random number between 0 and the total sum of the fitness values of all individuals. We then
sum the fitness values, starting from the most fit individual, until the partial sum exceeds
the randomly generated value, and return the index at which this happens as the index of the
first parent to be bred. The second individual is selected in the same way. Using this
methodology, individuals with better fitness values get more chances to produce offspring, and
individuals with low fitness values get fewer chances to breed. This in turn ensures the
generation of a population better than the previous one.
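The roulette wheel procedure described above can be sketched as the following helper. This is an illustrative sketch assuming non-negative fitness values where larger means fitter (for minimization problems the fitnesses would first be inverted).

```python
import random

def roulette_select(fitnesses):
    """Pick an index with probability proportional to fitness (larger = fitter)."""
    total = sum(fitnesses)
    r = random.uniform(0, total)          # spin the wheel
    partial = 0.0
    for i, f in enumerate(fitnesses):     # walk from the fittest-first ranking
        partial += f
        if partial >= r:
            return i
    return len(fitnesses) - 1             # guard against floating-point rounding
```

Over many draws, an individual with fitness 5 is selected roughly 2.5 times as often as one with fitness 2, which is exactly the bias toward fit parents described above.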
4. Crossover: In this step two individuals selected from the population generate two new
individuals for the new population. We define a crossover rate, which gives the probability
that the two individuals cross over: if a randomly generated number between 0 and 1 falls below
the crossover rate, the two individuals cross; otherwise they are passed on unchanged to the
mutation step. If two individuals are fit for crossover, we apply a modified mid-point
crossover technique to generate the offspring. Since the individuals are real coded rather than
binary coded, the modified midpoint technique is used: we evaluate the fitness of the two
parents x1 and x2 along with (x1 + x2)/2, (x1 − 3x2)/2 and (3x1 − x2)/2, and from these 5
candidates we select the best two as the new offspring. This method ensures that the newly
generated population is at least as good as the old population.
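The modified mid-point crossover can be sketched as below. This is an illustrative sketch in which `fitness` is assumed to be a function to minimize, so "best" means smallest value.

```python
import numpy as np

def midpoint_crossover(x1, x2, fitness):
    """Modified mid-point crossover: evaluate both parents and the three
    combinations named in the text, then keep the best two (minimization)."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    candidates = [x1, x2,
                  (x1 + x2) / 2,
                  (x1 - 3 * x2) / 2,
                  (3 * x1 - x2) / 2]
    candidates.sort(key=fitness)          # sort the 5 candidates by fitness
    return candidates[0], candidates[1]   # best two become the offspring
```

Since the parents themselves are among the five candidates, the returned pair can never be worse than the parents, which is the "at least as good" property claimed above.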
5. Mutation: In order to more closely replicate natural reproduction, we also incorporate
mutations in the generated off-springs, which make an individual different from its
predecessor. We define a mutation rate which, in a fashion similar to the crossover rate,
defines how often a mutation occurs. The individual is then modified according to the truncated
Gaussian mutation operator:

p(x_i'; x_i, \sigma_i) =
\begin{cases}
\dfrac{1}{\sigma_i}\,\phi\!\left(\dfrac{x_i' - x_i}{\sigma_i}\right), & a_i < x_i' < b_i \\
0, & \text{otherwise}
\end{cases}

After the mutation, the individual is ready for inclusion in the new population. Once the new
population is formed, it is ranked in order to find the most fit individual, and the old
generation is discarded. The new population then again undergoes selection, crossover and
mutation to produce the next population, which is at least as good as its parent population. In
this fashion we keep getting closer and closer to the solution until we converge.

8
The pseudo code of the genetic algorithm is given below:
gen = 0;
pop(gen) = randomly initialized population;
evaluate fitness of each individual in pop(gen);
while not termination condition do
    gen = gen + 1;
    select pop(gen) from pop(gen-1) based on fitness;
    apply genetic operators to pop(gen);
    evaluate fitness of each individual in pop(gen);
end
Algorithm 2: Genetic Algorithm Pseudo code

Particle Swarm Optimization (PSO)

PSO gets its inspiration from the way a flock of birds communicates in order to travel to a
desired location. Initially each bird flies in a set direction, but it changes its path when
notified by other birds in the population, so as to follow the most fit bird, i.e. the most fit
solution, toward the destination. The birds thus steer their velocities toward that particular
bird. Each bird then explores its new local position, and the process of choosing and following
the bird with the best direction continues till they reach the destination. Note that the flock
reaches the final destination by learning from each other's individual local searches.

PSO imitates this behavior by starting with N random particles and assigning them random
velocities. The location of the ith particle is denoted by a vector in n-dimensional space,
where n is the number of decision variables. Every particle keeps track of its own best
solution and of the entire swarm's most fit solution. Each particle has a position vector
x_i(t) and a velocity vector v_i(t), along with the location of the most fit solution it has
encountered, pbest_i, and the index of the best particle in the swarm, gbest. In each
generation, the velocity of a particle is updated using the particle's best solution and the
population's best solution, through the following equations:

9
v_i(t+1) = w\, v_i(t) + c_1\,(pbest_i - x_i(t)) + c_2\,(gbest - x_i(t))

x_i(t+1) = x_i(t) + v_i(t+1)
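The update equations can be sketched in Python. Note that standard PSO additionally multiplies the c1 and c2 pulls by fresh random numbers in each update; the sketch below includes those factors even though the equations above omit them, and all parameter values are illustrative choices rather than values from the thesis.

```python
import random

def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO sketch for minimization of f over [-5, 5]^dim."""
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]              # each particle's best position
    gbest = min(X, key=f)[:]               # swarm's best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # inertia + cognitive pull (pbest) + social pull (gbest)
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
                if f(X[i]) < f(gbest):
                    gbest = X[i][:]
    return gbest
```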

Ant Colony Optimization (ACO)

Ants use pheromone trails to communicate with each other in order to find the shortest path
from their home to a source of food. These trails serve as paths for other ants roaming in
various directions, who follow a trail and reinforce it, making it more intense. The ant that
finds a food particle returns to the nest laying a trail along the shortest route, which the
remaining ants follow while adding to the trail. Ants thus find optimal solutions using local
state knowledge and each other's actions [9]. ACO is implemented by representing a variable x
for each ant, and a variable i storing ni options with values cij. The pheromone concentration
is represented by Mij. We start by taking m random ants. The concentration on each available
path is changed so that better solutions are reinforced, as given by the equation:
Mij (t) = qMij (t − 1) + ∆Mij

where t is the iteration number, Mij(t − 1) is the concentration at the previous iteration
(t − 1), Mij(t) is the newly obtained concentration for the option indexed by cij at the given
iteration, ∆Mij is the change in pheromone concentration, and q (0 < q < 1) controls the
evaporation of the old pheromone.
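The pheromone update can be sketched directly from the equation. This is an illustrative helper in which M and ∆M are given as plain nested lists; q < 1 makes the old pheromone decay while the new deposits reinforce good paths.

```python
def update_pheromone(M, delta, q=0.9):
    """Apply M_ij(t) = q * M_ij(t-1) + dM_ij elementwise:
    evaporation of old pheromone plus fresh reinforcement."""
    return [[q * m + d for m, d in zip(row_m, row_d)]
            for row_m, row_d in zip(M, delta)]
```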

Cultural Algorithm

Cultural algorithms consist of an evolutionary population whose experiences are added into a
belief space consisting of different knowledge paradigms. It is a dual inheritance system,
inheriting from the population as well as from the belief space. Initially, individuals in the
population space are evaluated with a fitness function. An acceptance function selects the
individuals that are believed to impact the belief space, and the knowledge of these
individuals is used to evolve the belief space. The beliefs are then employed to influence the
evolution of the population: new individuals are generated using the knowledge in the belief
space together with the old individuals, forming a new population [10]. The pseudo code of the
algorithm

10
is as follows -
initialize it = 0, belief(it), pop(it);
while not termination condition do
    evaluate pop(it);
    update(belief(it), accept(pop(it)));
    generate(pop(it), influence(belief(it)));
    it = it + 1;
    select pop(it) from pop(it-1);
end
Algorithm 3: Cultural Algorithm Pseudo code

Differential Evolution (DE)

DE is a stochastic optimization algorithm designed to optimize real valued functions; it can
find near-accurate solutions even to problems that are non-differentiable and non-continuous
[11]. In DE we create a population of size N, define a range for each parameter, and randomly
initialize values within these bounds. Each of the N parameter vectors undergoes mutation,
recombination and selection operations. Mutation is used to enhance the exploration of the
search space: for a given parameter vector A(t, G), a donor vector is generated using a
mutation factor m and three randomly chosen distinct vectors:

B(t, G+1) = A(r_1, G) + m\,(A(r_2, G) - A(r_3, G))

Recombination mixes the best solutions from the previous iteration of the population. A trial
vector u(t, G+1) is built from the elements of the target vector A(t, G) and the elements of
the donor vector B(t, G+1). The target vector A(t, G) is then compared with the trial vector
u(t, G+1), and the one with the smaller function value is carried into the next iteration of
the population. The process reaches convergence when a particular best solution retains its
value over subsequent iterations.
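The mutation-recombination-selection cycle of DE can be sketched as below. This is an illustrative DE/rand/1 variant with binomial-style recombination; the population size, generation count and control parameters are arbitrary choices, not values from the thesis.

```python
import random

def de(f, bounds, pop_size=20, gens=100, m=0.8, cr=0.9):
    """Minimal differential evolution sketch (minimization)."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for t in range(pop_size):
            # three distinct vectors, none equal to the target index t
            r1, r2, r3 = random.sample([i for i in range(pop_size) if i != t], 3)
            # mutation: donor B = A(r1) + m * (A(r2) - A(r3))
            donor = [pop[r1][d] + m * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
            # recombination: mix donor and target elements into a trial vector
            trial = [donor[d] if random.random() < cr else pop[t][d]
                     for d in range(dim)]
            # selection: keep whichever of target and trial is better
            if f(trial) <= f(pop[t]):
                pop[t] = trial
    return min(pop, key=f)
```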

Further developments in evolutionary algorithms use different types of evolutionary techniques,
notably hybrid algorithms, which apply different algorithms for the local and global search,
and memetic algorithms, which combine genetic algorithms with extensive local search.

11
Central Force Optimization (CFO)

Central Force Optimization (CFO) is a meta-heuristic evolutionary optimization algorithm based
on gravitational kinematics. Unlike other meta-heuristics such as PSO and ACO, CFO is a
deterministic algorithm. It derives inspiration from Newton's law of gravitation, which defines
the force of attraction between two particles of masses m1 and m2:

F = G m_1 m_2 / r^2

where r is the distance between the two particles and G is the gravitational constant. Gravity
is a central force, as it acts along the line joining the centres of mass of the two particles.
The two particles are attracted towards each other, with the acceleration of m1 given by:

a = -G m_2\, \hat{r} / r^2

where r̂ is the unit vector pointing from m2 to m1. The position vector of a particle
experiencing constant acceleration in the interval t to t + ∆t can be written as:

R(t + \Delta t) = R(t) + V(t)\, \Delta t + 0.5\, a(t)\, \Delta t^2

where R(t + ∆t) is the position of the particle at time t + ∆t, and R(t), V(t) and a(t) denote
the position, velocity and acceleration of the particle at time t.
Given an objective function f(x_1, x_2, \ldots, x_{N_d}) and a decision space given by
X_i(min) \le X_i \le X_i(max), CFO allocates N_p probes in the N_d-dimensional space. These
probes move through the decision space in discrete time steps, and at each time step each probe
has a fitness value given by the objective function evaluated at its coordinates [5]. The
position vector R_j^p denotes the position of probe p at discrete time step j:

R_j^p = \sum_{k=1}^{N_d} x_k^{p,j}\, \hat{e}_k

As time passes, the probes move through the space driven by equations of motion derived from
the gravitational forces. CFO defines the acceleration that probe p experiences due to probe n
at time step j as:

G \cdot U(M_{j-1}^n - M_{j-1}^p) \cdot (M_{j-1}^n - M_{j-1}^p)^\alpha \,
\frac{R_{j-1}^n - R_{j-1}^p}{\left| R_{j-1}^n - R_{j-1}^p \right|^\beta}

Figure 1.3: CFO decision space for 3 variables

where M_{j-1}^p is the fitness value of probe p at position R_{j-1}^p, and
(M_{j-1}^n - M_{j-1}^p)^\alpha represents the mass. The acceleration in CFO differs from the
gravitational one through the parameters \alpha > 0, \beta > 0 and the unit step function
U(\cdot). The mass defined in CFO is a user defined function of the fitness values which
ensures that probe p is drawn to probe n if and only if n has a higher fitness value than p,
thus pushing the probes towards the position at which the objective function attains its global
maximum. In a physical system nearby probes may exert a greater gravitational pull on each
other; however, taking the fitness difference as mass turns out to be a better measure of the
pull that a higher-fitness probe should exert on a lower-fitness one. Finally, the total
acceleration that p experiences as it moves from R_{j-1}^p to R_j^p is given by:

a_{j-1}^p = \sum_{k=1,\, k \ne p}^{N_p} G \cdot U(M_{j-1}^k - M_{j-1}^p) \cdot
(M_{j-1}^k - M_{j-1}^p)^\alpha \,
\frac{R_{j-1}^k - R_{j-1}^p}{\left| R_{j-1}^k - R_{j-1}^p \right|^\beta}

The new position is then updated according to the position equation above, taking V(t) = 0 and
\Delta t = 1 for simplicity's sake. The pseudo code of CFO is as follows:
Initialise parameters alpha, beta, G, Nt, Np;
while p <= Np do
    Initialise initial position vector R0p uniformly over the decision space;
    Initialise initial acceleration Ap0 to 0;
    Compute initial fitness M0p;
end
while not termination condition do
    Update positions according to the equation of motion;
    Retrieve stray probes;
    Update fitness values;
    Compute accelerations;
end
Algorithm 4: CFO Algorithm Pseudo code
The sequential time distribution of CFO shows that the majority of the time is spent updating the acceleration of the probes, which depends on the updated positions and fitness values of the probes. CFO can therefore be further sped up by introducing parallelism, using multiple threads to update the positions and accelerations of the probes. [7]
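The acceleration and position update described above can be sketched as a single deterministic iteration. The function below is an illustrative sketch, not the thesis's implementation; the values of G, alpha and beta are assumptions.

```python
import numpy as np

def cfo_step(R, f, G=2.0, alpha=2.0, beta=2.0):
    """One deterministic CFO iteration: fitness 'masses', the resulting
    accelerations, and the new positions taking V = 0 and dt = 1.
    Parameter values and the fitness function are illustrative only."""
    n_p = len(R)
    M = np.array([f(r) for r in R])        # fitness of each probe
    A = np.zeros_like(R)
    for p in range(n_p):
        for k in range(n_p):
            if k == p:
                continue
            dM = M[k] - M[p]
            if dM <= 0:                    # unit step U(.): only fitter probes attract
                continue
            d = R[k] - R[p]
            dist = np.linalg.norm(d)
            A[p] += G * dM ** alpha * d / dist ** beta
    # R_j = R_{j-1} + (1/2) a_{j-1} with V = 0 and dt = 1
    return R + 0.5 * A
```

Note that a large acceleration can throw a probe far outside the decision space, which is exactly the stray-probe problem discussed below.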

Parameter-free Central Force Optimization (CFO)

A major issue with all evolutionary algorithms is parameter setup: ACO, for example, requires the user to set 9 parameters, and PSO requires the population size and six bounded parameters. The problem is compounded by the fact that there is no systematic methodology for arriving at suitable values of these parameters; moreover, the right values are problem dependent and the solutions are sensitive to minute changes. Unlike other EAs, CFO is completely deterministic, involves no randomness, and behaves strikingly like near-earth objects. [6] CFO initially places probes uniformly on lines parallel to the coordinate axes, intersecting at a point on the principal diagonal of the space given by:

$$D = X_{\min} + \gamma\,(X_{\max} - X_{\min})$$

Stray probes are a problem in CFO, since the acceleration at some points is too great to keep the probes

Figure 1.4: Parallelized implementation of CFO

Figure 1.5: Decision space adaptation

inside the boundaries. In this approach a repositioning factor $F_{rep} \le 1$ is introduced that is incremented by a certain amount after each iteration. The approach also adaptively reconfigures the decision space to improve the convergence speed, which is done after every 10th step as follows:

$$X'_{\min} = X_{\min} + (R_{best} - X_{\min})/2$$

$$X'_{\max} = X_{\max} - (X_{\max} - R_{best})/2$$

As with CFO, this approach too can be parallelized by using multiple threads to update the positions and accelerations of the probes.
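The decision-space adaptation above is a two-line computation; a minimal sketch:

```python
import numpy as np

def shrink_space(x_min, x_max, r_best):
    """Adaptive decision-space reconfiguration around the best probe
    position r_best, applied every 10th step:
    X'_min = X_min + (R_best - X_min)/2,  X'_max = X_max - (X_max - R_best)/2."""
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    r_best = np.asarray(r_best, dtype=float)
    return x_min + (r_best - x_min) / 2.0, x_max - (x_max - r_best) / 2.0
```

Each application halves the distance from either boundary to the best probe, so the space contracts around the current best solution.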

1.2.2 Co-operative Co-evolutionary algorithms (CC)

To deal with complex structures we need to define and develop modularity, giving intermediate solutions of an optimization problem the chance to improve their fitness and to interact as solutions of sub-problems within a global optimization problem. [12] Cooperative co-evolutionary systems develop this idea as follows:

• A species is a solution of a sub-problem and an individual of a sub-population.

• Complete solutions are obtained by combining the best solutions of these sub-populations.

• Fitness at the species level is obtained from the contribution of that species combined with the best individuals of every other species.

• The number of species should evolve by itself whenever there is a need.

• Each species is evolved by a vanilla evolutionary algorithm.

Multi-objective optimization problems (MOPs) involve the optimization of several objective functions at the same time; solving them means finding the best possible trade-off between these objectives. The objective function values at the solution set of a MOP form what is called a Pareto front. The efficiency of multi-objective evolutionary algorithms (MOEAs) decreases as the dimension of the search space increases. Empirical evidence indicates that cooperative co-evolutionary (CC) frameworks work better on large-scale optimization problems. [13] The CC framework takes a divide-and-conquer approach: the whole set of decision variables is divided into smaller sub-populations, each of which is optimized by a separate evolutionary algorithm.
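The divide-and-conquer loop just described can be sketched compactly. This is an illustrative sketch under stated assumptions: a seeded random search stands in for the per-group evolutionary algorithm, and the objective is maximized.

```python
import numpy as np

def cc_round(f, best, groups, trials=20, rng=None):
    """One cooperative co-evolution round: each sub-population (a group of
    variable indices) is optimized in turn while every other variable stays
    at its best-known value (the 'collaborator' context)."""
    rng = rng or np.random.default_rng(0)
    best = np.asarray(best, dtype=float).copy()
    for g in groups:
        for _ in range(trials):
            cand = best.copy()
            cand[g] = cand[g] + rng.normal(0.0, 1.0, size=len(g))
            if f(cand) > f(best):          # fitness of the complete, recombined solution
                best = cand
    return best
```

Because a candidate is accepted only when the complete recombined solution improves, the loop is monotone in the global fitness.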

While MOEAs can generate a set of candidate solutions to MOPs, coping with complicated Pareto front shapes while remaining easy to implement, they must undergo many evolutions, which is computationally expensive, and as the complexity of the MOP grows, still more computational power is required. A distributed parallel cooperative co-evolutionary algorithm was therefore suggested that reduces optimization time through distributed parallelism. [14] While CC algorithms decompose a MOP into a set of lower-dimensional sub-problems and perform better than various state-of-the-art algorithms, they are implemented serially, which leaves the parallel computation power at hand unutilized. The proposed algorithm is implemented on a distributed platform in order to increase the amount of parallelism and harness the power of multiple processors.

While CC algorithms introduced an encouraging approach to solving large-variable optimization problems, few studies reduced the problem to solving several lower-dimensional problems, and those reduction approaches failed to find the strong connections among different components. A new approach was introduced that changes these variable groupings dynamically, significantly improving the chances of optimizing interacting components together; furthermore, co-adaptation across the different bundles helps the overall solution. The chief idea of this approach is to reduce the dimension of the objective function into smaller pieces and let each piece learn and develop with a basic evolutionary algorithm. For co-adaptation, a density value is applied to each bundle after each iteration and this density vector is evolved. Various other grouping algorithms have also been published, two of which are described below.

CC with Delta Grouping

The lack of an innate method for decomposing non-separable problems is the biggest hitch for CC frameworks. Grouping interacting components into different bundles imposes a ceiling on how well these components can be optimized; put simply, it hampers the growth of interacting components. [15] Delta grouping calculates the mean difference of each variable across the population and uses it to identify interacting components.
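The delta signal described above amounts to one reduction per variable. A minimal sketch (the sorting step is how delta grouping typically co-locates variables with similar deltas, an assumption of this sketch):

```python
import numpy as np

def delta_order(pop_prev, pop_curr):
    """Delta grouping signal: the mean absolute change of every decision
    variable across the population between two consecutive generations.
    Variables are then ordered by delta so ones with similar deltas
    end up adjacent, ready to be cut into bundles."""
    delta = np.mean(np.abs(np.asarray(pop_curr) - np.asarray(pop_prev)), axis=0)
    return np.argsort(delta)
```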

CC with Random Grouping

It has been shown that grouping interacting components into one bundle becomes more difficult as the number of components increases. Multilevel Cooperative Co-evolution (MLCC) uses random grouping to place interacting components in the same bundle. Adaptive weighting proves to be useless: it neither speeds up the solution nor improves its quality, thus wasting computing time. The method instead randomly groups components into bundles to improve the chances of co-locating interacting components in problems that are difficult to separate. [16] It also evolves density values for the transformation of bundles, which helps in obtaining better solutions.
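Random grouping itself is a simple shuffle-and-split of variable indices, repeated each cycle. A sketch:

```python
import random

def random_groups(n_vars, n_groups, seed=None):
    """MLCC-style random grouping: shuffle the variable indices each
    co-evolutionary cycle so interacting variables get repeated chances
    of landing in the same bundle."""
    idx = list(range(n_vars))
    random.Random(seed).shuffle(idx)
    base, extra = divmod(n_vars, n_groups)
    groups, start = [], 0
    for i in range(n_groups):
        size = base + (1 if i < extra else 0)
        groups.append(idx[start:start + size])
        start += size
    return groups
```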

1.2.3 River Formation Dynamics (RFD)

RFD is a meta-heuristic algorithm that searches for an optimal trajectory by deciding, through a transition probability, which of its neighbouring nodes a drop moves to from each node of a graph. RFD has proven effective on various NP-complete problems such as wireless routing, VLSI PDN analysis and analog circuit design. The algorithm uses the analogy of water drops in a river trying to find an optimal trajectory to the bottom, the ocean, which is a flat surface. While flowing, the drops choose their trajectories probabilistically at each level of altitude according to the downward slope gradient, and carry some sediment along with them (sediment transport) through the process of erosion. The water drops are simply moving points in a multidimensional search space which, by following different paths, try to find the best possible path for optimization.

Initially the water mass is released at the top of a terrain (say a mountain), from where it tries to find a path to the bottom of the terrain, a flat surface (say a sea or an ocean) where there is no further gradient in height. While the water drops flow down the slope to reach the bottom, they take some soil particles along with them, a process called erosion, and also deposit eroded particles at intermediate altitudes along the path to their destination. This erosion transforms the landscape during the traversal and thus modifies the terrain for subsequent water drops. The drops generated by rain at some point of origin therefore tend to increase or decrease the altitudes at various points of their trajectory, optimizing the path to the bottom. If there is only one source of water mass release, this path is the shortest optimal path.
In general, the RFD algorithm is used to find the shortest path in the search space and solves the travelling salesman problem quite efficiently; it improves on its predecessor, Ant Colony Optimization, by providing more accurate results. In this algorithm each node is assigned an altitude value. The nodes are connected by edges of specific lengths whose gradients depend on the altitudes of the nodes they connect. Water drops travel from node to node through these edges, carrying some soil with them and either eroding the ground (decreasing the altitudes of nodes) or depositing sediment on it (increasing the altitudes of nodes). While traversing, the choice of edge at a node depends on the gradients of the connecting edges. Initially we assume a flat environment: the source node (where the water falls) is placed at a certain height, the destination node is a hole, and the rest of the surface is flat with zero gradient. To avoid water stalling at these flat points, we assign a small non-zero transition probability even to zero-gradient edges. The drops are then allowed to flow from node to node through the edges until they reach the destination node; this erodes the intermediate nodes and creates slopes. Therefore, once a short path is found in RFD, it is likely to be followed again by subsequent water drops. A noticeable point is that the difference between the altitudes of the source and destination nodes remains the same throughout the algorithm, while the path lengths vary, creating gradients: edges on shorter paths have steeper slopes and are therefore preferred.
Figure 1.6: River Formation Dynamics scheme

A strength of RFD over its predecessor (ACO) is that it is capable of avoiding inefficient solutions. [3] If a path leads to a node whose altitude is lower than that of all its surrounding nodes, water drops deposit sediment on that path and thus increase its altitude.
Initialize Drops;
Initialize Nodes;
while not allDropsFollowSamePath and not terminationCondition do
    move Drops;
    erode Paths;
    deposit Sediments;
    analyze Paths;
end
Algorithm 5: Pseudo code for RFD
First the drops are initialized, i.e. placed at node 0 so that they can traverse freely downslope. Next all the nodes are initialized: the start node with altitude 1, the destination node with altitude 0, and the rest with a constant non-zero altitude. A drop then moves from node to node based on the probabilistic rule:

$$P(n_i, n_j) = \begin{cases} \dfrac{\nabla Dr(n_i, n_j)}{\sum_{k=1}^{\deg(n_i)} \nabla Dr(n_i, n_k)}, & \text{if } n_j \in N(n_i) \\[1ex] 0, & \text{otherwise} \end{cases}$$

where $\deg(n_i)$ is the number of nodes connected to $n_i$ and $N(n_i)$ denotes the set of those nodes. $\nabla Dr(n_i, n_j)$ is the negative gradient between nodes $n_i$ and $n_j$, given by

$$\nabla Dr(n_i, n_j) = \frac{V_{alt}(n_i) - V_{alt}(n_j)}{W(n_i, n_j)}$$

where $V_{alt}(n_i)$ is the altitude of node $n_i$ and $W(n_i, n_j)$ is the weight or cost of the edge from $n_i$ to $n_j$.
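The transition rule can be sketched directly from these definitions. Names and the handling of flat or uphill edges (a small positive floor so drops do not stall) are assumptions of this sketch, not the thesis's implementation:

```python
def transition_probs(alt, weight, node, neighbours, eps=0.01):
    """Transition probabilities for a drop sitting at `node`.
    grad(i, j) = (V_alt(i) - V_alt(j)) / W(i, j); eps is substituted for
    non-positive gradients so flat edges stay traversable."""
    grads = {}
    for j in neighbours:
        g = (alt[node] - alt[j]) / weight[(node, j)]
        grads[j] = g if g > 0 else eps
    total = sum(grads.values())
    return {j: g / total for j, g in grads.items()}
```

Note how the steeper edge gets the larger share of the probability mass, which is exactly what makes shorter (steeper) paths self-reinforcing.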

Some important points to be noted:

• The altitude of the sink remains constant throughout the algorithm.

• Once erosion is finished, the altitudes of all nodes are increased slightly to account for the sediment flushed through the sink.

• Erosion avoids looping through the same sequence of nodes in the graph: once an area is eroded, the probability of going back upstream is very low. When multiple drops flow from several initial nodes, the RFD algorithm will not necessarily provide a single best solution [17]. This property of forming an individual path to the destination from each initial node can be applied to power grid analysis.

Chapter 2

Proposed Methodologies

2.1 Proposed Optimization Algorithm


Our method further increases the capabilities of cooperative co-evolution by iteratively dividing and recombining the set of variables over optimization contours. The proposed method is expected to show faster convergence for complex optimization problems whose sub-problems, or in simpler words whose sets of independent species, are harder to find. Moreover, we exploit parallel computing to boost the speed of our algorithm and fully utilize the resources available in the modern era. The bedrock of the algorithm is the division of the set of variables. Suppose we have an objective function $f(x_1, x_2, \ldots, x_N)$ with $x_1, x_2, \ldots, x_N$ as the variables to be optimized. We propose 3 strategies of intelligent division and recombination of variables that result in faster convergence of the algorithm.

2.1.1 Strategy 1

Strategy 1 is the unsorted equal division of variables into sub-problems. Suppose we have N variables and we decide to keep k sub-problems, which we call decision circles; then N/k variables are placed on each circle. The variables are divided such that the first N/k variables are placed on the outermost circle, the next N/k on the next circle inward, and so on until the last set of variables is placed on the innermost circle.

After being distributed, a controlled GA is applied to the variables on each circle. The characteristics of this controlled GA are as follows:

• The algorithm runs for a specific number of iterations rather than running indefinitely until convergence.

• We incorporate the solutions of the previous iteration into the set of individuals rather than generating all individuals randomly.

We define 2 sets of variables for each circle, viz. true variables and pseudo variables. True variables are treated as the search variables for the genetic algorithm, while pseudo variables are used only to evaluate fitness values. This list is sent to the underlying GA along with the two best individuals from the previous iteration, so that the solution of the previous iteration is incorporated into the current one: the two previous individuals directly replace two randomly generated individuals in the original list. Rather than running the entire GA until convergence, we run it for a few iterations so that the current best individual moves closer to the global best solution. The resulting two individuals are kept in separate buffers for recombination once the GA has run on all circles. When the best two individuals have been obtained from each circle, they are combined to form an individual of the entire population; these individuals are either the final solutions or are used as parents for the next iteration.

We repeat the above procedure until the fitness value of the best individual of the overall population converges. We call this solution the global solution of the overall optimization algorithm.
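The division-then-recombination loop of Strategy 1 can be sketched end to end. This is an illustrative sketch under stated assumptions: simple seeded mutation stands in for the controlled GA, the objective is maximized, and names are hypothetical.

```python
import numpy as np

def strategy1(f, x0, k, rounds=5, ga_iters=15, seed=0):
    """Strategy 1 sketch: N variables split equally onto k decision
    circles; each circle's 'true' variables are searched while all the
    others act as frozen 'pseudo' variables, and the per-circle winners
    are recombined into the full individual."""
    rng = np.random.default_rng(seed)
    best = np.asarray(x0, dtype=float).copy()
    circles = np.array_split(np.arange(len(best)), k)
    for _ in range(rounds):                        # global iterations
        for idx in circles:
            for _ in range(ga_iters):              # a few GA steps, not full convergence
                cand = best.copy()                 # pseudo variables stay frozen
                cand[idx] += rng.normal(0.0, 0.5, size=len(idx))
                if f(cand) > f(best):
                    best = cand
    return best
```

Since a candidate replaces the incumbent only when the full recombined individual improves, the global best fitness never decreases across rounds.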

2.1.2 Strategy 2

Here we use unsorted unequal division of variables over the optimization contours, or circles: some circles hold more variables and some fewer. This strategy was designed keeping in mind that some dimensions of the current best individual will be closer to the global optimum while other dimensions, or variables, might be farther away from it.

In such a case, the dimensions that are closer to the global optimum need fewer GA iterations to reach their optimal values, whereas those farther away need more iterations to reach the point of global optimum. In cooperative co-evolution a similar number of iterations is run on all variables, wasting computation power that could be used to speed things up. We compute the change in the variable values that occurs in each iteration and push the variables that are essentially constant to the inner circles, or optimization contours. We claim a major boost in convergence time from this technique.

2.1.3 Strategy 3

Here we use sorted equal division of the variables over the optimization contours. In this strategy we initially divide the decision variables equally over all the optimization contours and run one iteration of our method on all of them. After obtaining the top two individuals from this first global iteration, we sort the variables by their contribution to the optimization procedure. Once sorted, the variables are grouped in equal numbers according to their contribution to the fitness of the overall objective function: the ones with the greatest contribution are grouped together, followed by the ones with lesser contribution, and so on until we reach the ones contributing least.

We claim this strategy improves on existing cooperative co-evolution methods because it avoids convergence of the algorithm to a local minimum/maximum. As the grouping of variables changes during the optimization procedure, the individual is perturbed in the decision space and eventually catches a direction that leads to the global minimum/maximum. This strategy is repeated in every global iteration until we converge to a stable solution.

Even though the above strategies prove effective in obtaining a major performance boost, we can exploit one further resource, parallel computing, to gain a manifold increase in the speed of these optimization strategies. We run the GA on all the optimization contours in parallel, and we further hypothesize parallelizing the GA itself by evaluating its population in parallel. All the strategies mentioned so far can be implemented in a parallel fashion in which each optimization contour is given a core to optimize its sub-problem.

2.2 Applying RFD to PDN design problem


As shown earlier, minimizing the IR drop in a PDN boils down to an optimization problem. The PDN can be represented by a linear system of equations, and the potential at each node can be found by determining the best path taken by a drop to reach the required destination. PDN analysis can be carried out by letting a number of drops take different routes and taking the average gradient of each path as a rough solution. [3] If a sufficient number of water flows is considered, the average gradient gives a fairly accurate solution.

2.2.1 Graph approach

Consider a connected graph with N vertices and E edges. If we allow a drop of water to flow from an initial parent node, say $n_0$, then the probability of finding the drop at node $n_i$ after the $i$-th transition is inversely proportional to the dimension of the graph. Consider a drop that flows through the power grid network with system matrix $AGA^T$. We define $T$ as the matrix of transition probabilities:

$$T = (P_{n_a, n_b})_{n_a, n_b \in N} = GA$$

where G is the diagonal conductance matrix and A is the adjacency matrix representing the PDN. The transition rules can be written as

$$P_{n_{i+1}} = T^T P_{n_i}, \qquad P_{n_i} = (T^T)^i P_0$$

where $P_{n_i}$ is the probability distribution after the $i$-th step. Harmonic function theory further supports the feasibility of using RFD to represent the node potentials of the network.
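The transition rule above is a repeated matrix-vector product. A minimal sketch, assuming for illustration a row-stochastic T (the thesis's T = GA is not necessarily normalized):

```python
import numpy as np

def drop_distribution(T, p0, steps):
    """Evolve the probability distribution of a single drop over the
    graph nodes: P_{i+1} = T^T P_i, hence P_i = (T^T)^i P_0."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p = T.T @ p
    return p
```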

2.2.2 Electrical Circuit Theory Approach

DC analysis of the network requires modeling it in terms of source voltages, resistors and currents. Applying Kirchhoff's current law at a node gives:

$$\sum_{a=1}^{n} I_a = I_b$$

where $n$ is the number of neighbours of $b$, $I_b$ is the current drawn from $b$ and $I_a$ is the current flowing in the branch connected to the $a$-th neighbour. This can be further simplified to:

$$\sum_{a=1}^{n} I_a = \sum_{a=1}^{n} g_a (V_a - V_b)$$

where $g_a$ and $V_a$ are the conductance and voltage of node $a$, and $V_b$ is the voltage of node $b$. To apply RFD, we treat the $V_{dd}$ pads as the destination (the sea), the potential of a node as its altitude, and replace the slope of an edge with its resistance. If $p(a, b)$ is the probability of a drop flowing from $a$ to $b$, then:

$$\sum_{b=1}^{n} p_{a,b} = 1, \qquad p_{a,b} = \frac{\Delta D(a,b)}{\sum_{b=1}^{n} \Delta D(a,b)}, \qquad \Delta D(a,b) = g_a (V_a - V_b)$$

$$\text{Thus, } \quad p_{a,b} = \frac{g_a (V_a - V_b)}{\sum_{b=1}^{n} g_a (V_a - V_b)}$$

2.2.3 Implementation

RFD benefits from grid locality, increasing the computational efficiency of finding the optimum solution by providing local estimates for hotspots. This performance gain comes from the fact that we completely avoid solving the problem using matrices. The water droplets in RFD search for new ways to reach the destination according to their current and adjoining contour, viz. the gradients of the adjacent edges. A new edge is chosen at each node after computing the transition probabilities from the adjacent gradients. Once a water drop selects an edge to flow along, the altitude of the corresponding node is updated according to the erosion caused. To apply RFD productively, the erosion at a node is taken as the potential over the current-sink branch of that node and can be represented as:

$$v_a = -\frac{I_a^{sink}}{G_a^{total}}$$

where $v_a$ denotes the potential across the sink at node $a$, $I_a^{sink}$ is the current flowing through that branch and $G_a^{total}$ is the total sum of the conductances connected to node $a$. This process continues until the droplet reaches a $V_{dd}$ node, at which point the potential of the start node is computed by summing the erosion at all visited nodes together with the potential of the destination. This is repeated until the potential at every node converges; the termination condition can be written as:

$$-\alpha < V_a - V_a' < \alpha$$
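The erosion rule and the convergence test are both one-liners; a minimal sketch:

```python
def erosion(i_sink, g_total):
    """Erosion at node a: potential over its current-sink branch,
    v_a = -I_a^sink / G_a^total."""
    return -i_sink / g_total

def converged(v_new, v_old, alpha=1e-3):
    """Termination test: every node potential changed by less than alpha
    since the previous pass."""
    return all(abs(n - o) < alpha for n, o in zip(v_new, v_old))
```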

2.3 Applying Proposed Optimization Algorithm over RFD to solve PDN design problem
We finally propose to solve the power distribution network analysis problem using our proposed optimization algorithm. Since our method requires an underlying algorithm to work, we choose RFD to tackle the given problem at hand. Initially we divide the entire set of nodes into circles, which is essentially a grouping of the nodes; the group size depends on the extent of parallelization we wish to achieve while solving the problem. Since resistances are attached between nodes, we are essentially dividing the set of variables into several sets, and because these nodes are connected together, an inter-group dependency exists in this structure. For example, if we divide 10K nodes into sets of 2K each, the first set of 2K nodes is connected to the next set of 2K nodes, as nodes in these two sets may have interconnects between them.
To tackle this problem we propose the concept of pseudo mesh ends. A batch of nodes that shares common ends with the next batch imports those common nodes and their interconnections into itself; the same strategy is applied to the other batches, which import pseudo mesh ends from both the previous and the next batch. Importing these pseudo nodes increases the size of each batch according to the number of nodes the batches share with each other, and the imported nodes have fewer edges than in their original batch. After importing these nodes,
Figure 2.1: Propagation of solution

we run instances of RFD over every batch of nodes. When the RFD algorithm converges for each batch, we propagate the solution between batches with the help of the pseudo mesh ends.

The common mesh ends carry potential values, which are essentially the altitudes of nodes in the river terrain. Whenever we allow flow from one compartment to another, the water drops flow from the region of higher altitude to the lower one, tending to average out the water altitude at those points. We adopt this behaviour by averaging the node altitudes at the points where the common nodes of different batches join. To ensure that the solution propagates properly, the above methodology is repeated until convergence is reached. In this fashion we applied Strategy 1 of our proposed method to solve this problem.
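The averaging step at the shared pseudo mesh ends can be sketched as follows; the data layout (per-batch dictionaries of node altitudes and a map of shared nodes to owning batches) is an assumption of this sketch:

```python
def merge_pseudo_ends(batch_alt, shared):
    """Propagate the solution across batches by averaging the altitude
    (node potential) of every pseudo mesh end over the batches sharing it.
    batch_alt: {batch_id: {node: altitude}}; shared: {node: [batch_ids]}."""
    for node, owners in shared.items():
        avg = sum(batch_alt[b][node] for b in owners) / len(owners)
        for b in owners:
            batch_alt[b][node] = avg
    return batch_alt
```

Repeating this merge after each per-batch RFD pass mimics water levelling out across compartment boundaries until the whole grid converges.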
The proposed methodology improves performance because the task of evaluating the altitude of every node, initially done serially, can now be done in parallel in the form of batches. Moreover, the batch size is significantly reduced, giving memory-efficiency benefits on a single processor.

Chapter 3

Simulations and Results

3.1 Comparison between Optimization Algorithms


The code for the genetic algorithm, central force optimization and Strategy 1 of the proposed optimization algorithm was written in Python. For the parallel computing architecture we used MPI (the Message Passing Interface), a message-passing standard for parallel computing, through the mpi4py Python library, which interacts with the MPI installation on the system to run our code on parallel processors.

The code was run on an Intel Core i5 processor and on Param-Ishan, the supercomputing facility at IIT Guwahati. On Param-Ishan the code was run for 100, 500, 1000, 10,000, 100,000, 500,000, 1,000,000 and 5,000,000 variables, with the sphere function as the objective. The convergence time was observed while varying the number of decision circles; to exploit the parallel architecture, an entire CPU core was allocated to each decision circle. The convergence times for varying numbers of decision circles were recorded over 10 observations, and their mean values were tabulated and plotted as follows:

Figure 3.1: Convergence time analysis of CFO

Figure 3.2: Convergence time analysis of parallelized CFO

Figure 3.3: Convergence time analysis of GA

Figure 3.4: Convergence time analysis of proposed algorithm for 50 decision circles

Figure 3.5: Convergence time analysis of proposed algorithm for 100 decision circles

Figure 3.6: Convergence time analysis of proposed algorithm for 200 decision circles

Figure 3.7: Convergence time analysis of serial vs parallel RFD

3.2 Comparison between RFD and Proposed Algorithm for PDN design problem
The original RFD algorithm was implemented in C++ and run over 6 different benchmarks to optimize the power distribution network problem. These were standard industry benchmarks; the input file of initial node potentials and initial current values was produced by LU decomposition. The benchmark files contained 10,000, 250,000 and 1,000,000 nodes; in addition, 3 IBM standard files were used, each with a different configuration of nodes. The optimization was run on Param-Ishan, the supercomputing facility available at IIT Guwahati, on a single node using a single processor.

The parallel RFD implementation was also written in C++, on top of the serial RFD implementation, and parallelized using the Message Passing Interface (MPI). It was run on the same industry-standard input files with 5 divisions, on a single Param-Ishan node utilizing 5 processors (one batch running on each processor). The speedup thus obtained is as follows:

Chapter 4

Conclusion and Future work

With the inherent increase in problem sizes, from VLSI power distribution networks to more recent topics such as big data and machine learning, the process of optimization is unavoidable, and the number of variables involved keeps growing. Solving such huge problems in real time requires optimization algorithms that efficiently use the power of the parallel processors available today. Our proposed meta-heuristic algorithm has shown promising results given the constraints and is thus well suited to real-world problems requiring large-scale optimization.

The results on the power distribution network design problem can be further improved by running on GPU resources. Moreover, the proposed method can be ported to other areas such as big data and real-time machine learning problems.

Bibliography

[1] Y. Kim et al., ”Power distribution network design and optimization based on frequency dependent
target impedance,” 2015 IEEE Electrical Design of Advanced Packaging and Systems Symposium
(EDAPS), Seoul, 2015, pp. 89-92. doi: 10.1109/EDAPS.2015.7383675

[2] Pant, Sanjay. (2008). Design and Analysis of Power Distribution Networks in VLSI Circuits.

[3] S. Dash, K. L. Baishnab and G. Trivedi, ”Applying River Formation Dynamics to Analyze
VLSI Power Grid Networks,” 2016 29th International Conference on VLSI Design and 2016
15th International Conference on Embedded Systems (VLSID), Kolkata, 2016, pp. 258-263. doi:
10.1109/VLSID.2016.42

[4] A. Allan, D. Edenfeld, W. H. Joyner, A. B. Kahng, M. Rodgers and Y. Zorian, ”2001 tech-
nology road map for semiconductors,” in Computer, vol. 35, no. 1, pp. 42-53, Jan. 2002. doi:
10.1109/2.976918

[5] Formato, R.A. OPSEARCH (2009) 46: 25. https://doi.org/10.1007/s12597-009-0003-4

[6] Formato, Richard. (2015). Central Force Optimization. 10.13140/RG.2.1.4831.1767.

[7] Korany R. Mahmoud and Safwat Hamad, Parallel Implementation of Hybrid GSA-NM Algorithm
for Adaptive Beam-Forming Applications

[8] X. Ma et al., ”A Multi objective Evolutionary Algorithm Based on Decision Variable Analyses
for Multi objective Optimization Problems With Large-Scale Variables,” in IEEE Transactions on
Evolutionary Computation, vol. 20, no. 2, pp. 275-298, April 2016.

[9] http://hdl.handle.net/10603/151006

[10] R. G. Reynolds and B. Peng, ”Cultural algorithms: modeling of how cultures learn to solve prob-
lems,” 16th IEEE International Conference on Tools with Artificial Intelligence, 2004, pp. 166-172.
doi: 10.1109/ICTAI.2004.45

[11] Arunachalam, Vasan, ”Optimization Using Differential Evolution” (2008). Water Resources Re-
search Report. 22.

[12] Luis Miguel Antonio and Carlos A. Coello Coello. Use of Cooperative Co evolution for Solv-
ing Large Scale Multi objective Optimization Problems, in 2013 IEEE Congress on Evolutionary
Computation (CEC’2013), pp. 2758–2765, IEEE Press, Cancn, Mxico, 20-23 June, 2013.

[13] Zhenyu Yang, Ke Tang, Xin Yao, Large scale evolutionary optimization using cooperative coevo-
lution, In Information Sciences, Volume 178, Issue 15, 2008, Pages 2985-2999, ISSN 0020-0255

[14] B. Cao, J. Zhao, Z. Lv and X. Liu, ”A Distributed Parallel Cooperative Co evolutionary Multi ob-
jective Evolutionary Algorithm for Large-Scale Optimization,” in IEEE Transactions on Industrial
Informatics, vol. 13, no. 4, pp. 2030-2038, Aug. 2017.

[15] M. N. Omidvar, X. Li and X. Yao, ”Cooperative Co-evolution with delta grouping for large scale
non-separable function optimization,” IEEE Congress on Evolutionary Computation, Barcelona,
2010, pp. 1-8.

[16] M. N. Omidvar, X. Li, Z. Yang and X. Yao, ”Cooperative Co-evolution for large scale opti-
mization through more frequent random grouping,” IEEE Congress on Evolutionary Computation,
Barcelona, 2010, pp. 1-8. doi: 10.1109/CEC.2010.5586127

[17] S.G. Akl et al.(Eds.): UC 2007, LNCS 4618
