Deadline Constrained Cloud Computing Resources Scheduling For Cost Optimization Based On Dynamic Objective Genetic Algorithm

Zong-Gan Chen, Student Member, IEEE, Ke-Jing Du, Zhi-Hui Zhan (Corresponding Author), Member,
IEEE, and Jun Zhang, Senior Member, IEEE

Abstract—Cloud computing resource scheduling is significant for executing workflows on a cloud platform because it relates to both execution time and execution cost. In order to take both time and cost into consideration, Rodriguez and Buyya proposed a cost-minimization and deadline-constrained workflow scheduling model for cloud computing. Their model has great applicability, but the solution of their particle swarm optimization (PSO) approach is not good enough and cannot meet tight deadline conditions. In this paper, we propose a genetic algorithm (GA) approach to solve this model. In order to tackle tight deadline conditions, a dynamic objective strategy is further proposed that lets the GA focus on optimizing the execution time objective, so as to meet the deadline constraint, as long as no feasible solution has been obtained. After a feasible solution is obtained, the GA focuses on optimizing the execution cost within the deadline constraint. Therefore, the proposed dynamic objective GA (DOGA) adapts its search to the different objectives. We have conducted extensive experiments based on workflows with different scales and different cloud resources. Experimental results show that DOGA finds better solutions with smaller cost than PSO does on different scheduling scales and under different deadline conditions. The DOGA approach is therefore more applicable to commercial activities.

Keywords—cloud computing; resource scheduling; genetic algorithm; dynamic objective strategy

Z.-G. Chen, K.-J. Du, Z.-H. Zhan, and J. Zhang are with the Department of Computer Science, Sun Yat-Sen University, Guangzhou, 510275, China, with the School of Advanced Computing, Sun Yat-Sen University, Guangzhou, 510275, China, with the Key Laboratory of Machine Intelligence and Advanced Computing (Sun Yat-sen University), Ministry of Education, China, with the Engineering Research Center of Supercomputing Engineering Software (Sun Yat-sen University), Ministry of Education, China, and also with the Key Laboratory of Software Technology, Education Department of Guangdong Province, China. Zhi-Hui Zhan is the corresponding author, email: [email protected].

This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61402545, the Natural Science Foundation of Guangdong Province for Distinguished Young Scholars under Grant 2014A030306038, the Project for Pearl River New Star in Science and Technology, Guangzhou, China, the Fundamental Research Funds for the Central Universities, the NSFC Key Program under Grant 61332002, the NSFC for Distinguished Young Scholars under Grant 61125205, and the National High-Technology Research and Development Program (863 Program) of China under Grant 2013AA01A212.

I. INTRODUCTION

Cloud computing is the development of distributed computing, parallel computing, and grid computing. It relies not only on the applications delivered as services over the Internet, but also on the hardware and software in the data centers that provide those services [1]. The NIST (National Institute of Standards and Technology) definition of cloud computing [2] is that cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

In 2006, Eric Schmidt, the CEO of Google, first proposed the concept of "cloud computing" at SES San Jose 2006. Although cloud computing was born less than 10 years ago, the present availability of high-capacity networks, low-cost computers and storage devices, as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing, have led to strong growth in cloud computing. Cloud vendors are experiencing growth rates of 50% per annum [4].

Cloud computing has several features, as follows.
- The vast scale of data. Google's cloud already has over 100 million servers, and Amazon, IBM, Microsoft, Yahoo, and other "clouds" all have hundreds of thousands of servers. The "cloud" can give users unprecedented computing power.
- Hardware virtualization. By using virtualization technology, cloud computing can provide users with computing infrastructure, server platforms, application software, and other services via infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS), and other models [2][3]. This enables us to manage different types of resources easily [5]. Developers only need to consider the scheduling logic at the application level, without considering the underlying resource scheduling.
- Expansibility. The scale of the "cloud" is flexible, so it can meet the needs of applications and the growth in the number of users.
- On-demand service. The "cloud" is a huge resource pool, and users can pay to lease the resources that meet their demands.

At the dawn of the "era of big data", cloud computing is faced with problems that involve huge scales of data. Therefore, workflow scheduling on clouds has become a significant research topic; it relates directly to the cost and efficiency of cloud computing. In reality, however, workflow scheduling is an NP-hard problem, so it is impossible to generate an optimal solution within polynomial time. What we can do is find an approximate or near-optimal solution. With the development of intelligent computing algorithms, using them to solve the cloud workflow scheduling problem has become popular.
Much research has concentrated on minimizing the workflow's execution time. For example, Rahman et al. [6] used the workflow's dynamic critical paths to find a solution. Chen and Zhang [7] used ant colony optimization (ACO) to meet various kinds of QoS requirements. But in business activities, money is definitely one of the factors that cannot be ignored. Pursuing the solution that completes the task fastest often requires a larger investment. As a result, we need to balance cost and time in order to find a solution that maximizes the economic profit. Mao and Humphrey [8] proposed a dynamic approach for scheduling workflows, based on finding the most economical and practical resources and merging several tasks into a single one. However, their method can only find a cheap solution, not a near-optimal one. Malawski et al. [9] used various dynamic and static algorithms to generate solutions that meet users' QoS. However, their method uses only one type of virtual machine (VM), which ignores the elasticity and heterogeneity of the resources in cloud computing.

In 2014, Rodriguez and Buyya proposed a deadline based resource provisioning and scheduling algorithm [10]. They developed a cost-minimization and deadline-constrained model, which aims to meet the deadline constraint while minimizing the cost. This model is very practical: it matches the business need that a workflow should be completed within a deadline, and what remains is to minimize the cost in order to gain the biggest profit. To solve their optimization model, they used particle swarm optimization (PSO). Although promising results have been obtained by their PSO approach, there is still room for enhancement.

1. In the particle code, the index of the resource is used to represent the position of the particle. However, the resource index does not carry any feature of the resource. As a result, the process of particles flying toward the globally best and personally best positions may not lead to a better solution.
2. If the scale of the scheduling problem is large, PSO may easily converge to a local optimum.
3. When the deadline is tight, PSO has difficulty finding a feasible solution.

In this paper, we propose a genetic algorithm (GA) based approach to solve the deadline based cloud resource scheduling model [10] proposed by Rodriguez and Buyya. We still use the index of the resource to code the chromosome. However, the index is an integer value rather than a floating-point value as in PSO. Moreover, as GA does not have a learning process like PSO, and the crossover of GA can preserve good combinations of different index values, GA has good guidance toward promising solutions under such a code scheme.

We further propose a dynamic objective strategy (DOS) to deal with tight deadline constraints. When all the chromosomes in the population fail to represent a feasible solution, we set execution time as the optimization objective until at least one chromosome becomes feasible. Once at least one feasible solution has been found, we set execution cost as the optimization objective. We name this DOS based GA approach DOGA.

The experimental study in this paper shows that our proposed DOGA approach is better than the PSO approach at solving the cost-minimization and deadline-constrained model. DOGA not only generates solutions with smaller cost than PSO does in many test cases, but also has better adaptability than PSO, which means that DOGA can meet a tighter deadline while PSO sometimes fails to obtain feasible solutions.

The rest of this paper is organized as follows. Section II presents background on the deadline based model, its solution code, and fitness evaluation. Section III presents the DOGA approach and the strategy for handling tight deadlines. Section IV presents the experimental results. Finally, Section V presents the conclusion.

II. BACKGROUND

A. Cost-minimization and Deadline-constrained Model

The workflow has a set of tasks, and these tasks have a topology structure. An example is shown in Fig. 1. The number on each edge represents the time that a parent task needs to transfer data to its child when they run on different resources. There is also a set of resources, named R_ini, that can be leased to execute the tasks.

Figure 1. A simple example of workflow

We define the 'total execution cost' (TEC) and the 'total execution time' (TET) as follows:

TET = max{ ET_{t_i} : t_i ∈ T }    (1)

TEC = Σ_{j=1}^{|R|} C_{r_j} × (LET_{r_j} − LST_{r_j})    (2)

where ET_{t_i} represents the 'end time' at which task t_i ends its execution, C_{r_j} represents the cost to lease resource r_j for a unit of time, LST_{r_j} represents the 'lease start time' of r_j, and LET_{r_j} represents the 'lease end time' of r_j. TET is calculated as the end time of the latest finished task. TEC is calculated by accumulating the cost of leasing every resource, where the cost of leasing a resource is its unit cost multiplied by its lease time.
As we want to find a solution that minimizes the cost while meeting the deadline constraint, the target is to minimize TEC while TET meets the deadline. The formulas are shown below:

Minimize TEC    (3)
subject to TET ≤ deadline    (4)

B. PSO Framework

PSO is an evolutionary computation algorithm proposed by Eberhart and Kennedy in 1995 [11], and it has developed rapidly in recent years [12][13][14]. Every particle has two vectors, x and v. The x vector represents the position of the particle, which is also a solution to the defined problem. The v vector determines the movement of the particle in the problem space. Every particle's best position pBest and the best position of the population gBest are stored and updated. For every particle i, the update of x and v in every generation is shown in formulas (5) and (6):

x_i^d = x_i^d + v_i^d    (5)

v_i^d = ω × v_i^d + c1 × r1 × (pBest_i^d − x_i^d) + c2 × r2 × (gBest^d − x_i^d)    (6)

where d is the dimension index of the vector, ω is the inertia weight, c1 and c2 are the acceleration coefficients [15][16], and r1 and r2 are random numbers between 0 and 1. In every generation, the fitness of every particle is evaluated, pBest and gBest are updated, and then the x and v of every particle are updated according to formulas (5) and (6).
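For concreteness, the sketch below shows the standard position and velocity update of formulas (5) and (6) in Python. The function name and the use of NumPy arrays are our own illustrative assumptions; the decoding of continuous positions into resource indices used in [10] is not shown here.

import numpy as np

def pso_update(x, v, pbest, gbest, w=0.5, c1=2.0, c2=2.0):
    """One PSO generation step following formulas (5) and (6).

    x, v, pbest: arrays of shape (num_particles, num_tasks)
    gbest: array of shape (num_tasks,)
    """
    r1 = np.random.rand(*x.shape)   # random factors in [0, 1)
    r2 = np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # formula (6)
    x = x + v                                                   # formula (5)
    return x, v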
C. Scheduling Scheme and Fitness Evaluation

As our DOGA uses a similar scheduling scheme to encode the solution, we describe the particle code scheme here according to [10]. To program the PSO approach, the index of the resource is used to encode the particle. The length of the position is the same as the number of tasks, and the value of each dimension represents the resource that executes the corresponding task. Fig. 2 is a simple example with 6 tasks and 3 resources.

Figure 2. An example of encoding the particle.

As the figure shows, every dimension stands for the corresponding task, and its value represents the resource to which the task is scheduled.

In this case, formulas (3) and (4) are used to evaluate a particle's fitness. So we need to obtain TEC and TET from the particle's position and then calculate the fitness function.

In order to obtain TEC and TET, we need to know the time that each task takes to execute on every resource. The execution time is stored in an array exetime, where exetime[i][j] represents the execution time of t_i on resource r_j. Moreover, when a parent task and its child are executed on different resources, the parent may need some time to transfer data to its child. In this case, the transfer time in the array transfertime is used, where transfertime[i][j] represents the time that t_i needs to transfer data to t_j. An example of transfertime and exetime corresponding to Fig. 1 is shown in Fig. 3:

    transfertime =          exetime =
    [0 2 3 0 0 0]           [1 4 6]
    [0 0 0 3 0 0]           [5 4 7]
    [0 0 0 0 2 0]           [3 6 8]
    [0 0 0 0 0 4]           [2 1 4]
    [0 0 0 0 0 5]           [5 4 9]
    [0 0 0 0 0 0]           [3 2 5]

Figure 3. An example of transfertime (rows and columns t1..t6) and exetime (rows t1..t6, columns r1..r3)

Given these variables, we can obtain the workflow scheduling. The workflow scheduling corresponding to Fig. 2 and Fig. 3 is shown in Fig. 4, where r1 executes t1 and t4, r2 executes t2 and t5, and r3 executes t3 and t6 along a time axis from 5 to 30.

Figure 4. The workflow scheduling

The pseudo-code to calculate TEC and TET is shown in Fig. 5:

    Input: pos[|T|]                      // resource index assigned to each task
    TET = 0; TEC = 0; R = ∅              // R: set of leased resources
    for each task t_i:
        if parents(t_i) = ∅:
            ST_{t_i} = LET_{r_{pos[i]}}
        else:
            ST_{t_i} = max(max{ ET_{t_p} : t_p ∈ parents(t_i) }, LET_{r_{pos[i]}})
        exe = exetime[i][pos[i]]; transfer = 0
        for each child c of t_i executed on a different resource:
            transfer += transfertime[i][c]
        PT_{t_i} = exe + transfer; ET_{t_i} = ST_{t_i} + PT_{t_i}
        if r_{pos[i]} ∉ R:
            LST_{r_{pos[i]}} = ST_{t_i}
            R = R ∪ { r_{pos[i]} }
        LET_{r_{pos[i]}} = ET_{t_i}
    TET = max{ ET_{t_i} : t_i ∈ T }
    TEC = Σ_{i=1}^{|R|} C_{r_i} × (LET_{r_i} − LST_{r_i})

Figure 5. The pseudo-code to calculate TEC and TET

At first, we initialize TET, TEC, and R, where R is the set of resources leased during the execution, initialized as ∅. The algorithm then iterates over every coordinate i. For every task t_i, we get the resource it runs on according to pos[i]. The start time ST_{t_i} of t_i is determined by two factors.
It should wait until r_{pos[i]} is available, which is LET_{r_{pos[i]}}, and if t_i has a parent t_a, t_i should also wait until t_a ends its execution. The execution time exe plus the time that t_i uses to transfer data to its children, named transfer, gives t_i's processing time PT_{t_i}. The end time ET_{t_i} of t_i is calculated as ST_{t_i} plus PT_{t_i}. If r_{pos[i]} has not been leased yet, the lease start time LST_{r_{pos[i]}} is set to ST_{t_i}. The lease end time LET_{r_{pos[i]}} is the end time of t_i. Finally, we calculate TEC and TET through formulas (1) and (2), and thus obtain the TEC and TET of the solution.
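The following Python sketch mirrors the evaluation procedure of Fig. 5 under our reading of it; the data structures (lists of parents and children per task, a cost list per resource) are illustrative assumptions rather than the authors' actual implementation.

def evaluate(pos, exetime, transfertime, parents, children, cost):
    """Compute (TEC, TET) for a schedule 'pos', where pos[i] is the resource of task i."""
    n = len(pos)
    ET = [0.0] * n           # end time of each task
    LST, LET = {}, {}        # lease start / end time of each leased resource
    for i in range(n):       # tasks are assumed to be topologically ordered
        r = pos[i]
        ready = max((ET[p] for p in parents[i]), default=0.0)
        ST = max(ready, LET.get(r, 0.0))      # wait for parents and for the resource
        transfer = sum(transfertime[i][c] for c in children[i] if pos[c] != r)
        PT = exetime[i][r] + transfer         # processing time of task i
        ET[i] = ST + PT
        if r not in LST:                      # first time this resource is leased
            LST[r] = ST
        LET[r] = ET[i]
    TET = max(ET)                                            # formula (1)
    TEC = sum(cost[r] * (LET[r] - LST[r]) for r in LST)      # formula (2)
    return TEC, TET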

III. DOGA APPROACH

GA [17][18], proposed by Holland, is a search heuristic that mimics the process of natural selection. It can be applied to optimization problems through selection, crossover, and mutation. Our proposed DOGA approach has a similar algorithmic structure to a traditional GA, except that DOGA uses a dynamic objective strategy to adaptively evaluate the chromosomes during the evolutionary process. The whole flowchart of DOGA is illustrated in Fig. 6 and is described in the following subsections.

Figure 6. The flowchart of DOGA approach

A. Encoding

The way to encode the chromosomes of the population is similar to the PSO approach. The only difference is that we use integers rather than floating-point numbers. The value of coordinate i represents the resource that t_i runs on. For example, dim_i = j means that t_i runs on resource r_j.
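As a minimal illustration of this integer encoding, the snippet below builds a random initial population; the population size used in the example and the uniform choice of resources are assumptions for illustration only.

import random

def random_chromosome(num_tasks, num_resources):
    """A chromosome: for each task index i, an integer resource index in [0, num_resources)."""
    return [random.randrange(num_resources) for _ in range(num_tasks)]

def init_population(pop_size, num_tasks, num_resources):
    return [random_chromosome(num_tasks, num_resources) for _ in range(pop_size)]

# Example: 100 chromosomes for a workflow of 50 tasks and 15 leasable resources.
population = init_population(100, 50, 15)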

B. Selection

In DOGA, we use roulette wheel selection [19] to select chromosomes for the next generation. A chromosome with larger fitness is more likely to be chosen. As we want to minimize the cost, we use 1/TEC to evaluate a chromosome's fitness. The formulation is shown below:

fitness_i = 1 / TEC_i    (7)

p_i = fitness_i / Σ_{j=0}^{N-1} fitness_j    (8)

where N is the population size.
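A compact roulette wheel selection based on formulas (7) and (8) might look as follows; using Python's random.choices for the proportional draw is an implementation convenience on our part, not something specified in the paper.

import random

def roulette_select(population, tec_values, num_selected):
    """Select chromosomes with probability proportional to fitness = 1 / TEC."""
    fitness = [1.0 / tec for tec in tec_values]     # formula (7)
    total = sum(fitness)
    probs = [f / total for f in fitness]            # formula (8)
    return random.choices(population, weights=probs, k=num_selected)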

C. Crossover

Every chromosome has a probability Pc to take part in crossover. If Random(0, 1) < Pc, the chromosome can cross over. The crossover is processed by generating a random number n; the two chromosomes then exchange their first n coordinates. A simple example of crossover is shown in Fig. 7.

Figure 7. Example of crossover
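The single-point exchange of the first n coordinates described above could be sketched as follows; drawing the cut point uniformly while excluding trivial swaps is our assumption.

import random

def crossover(parent_a, parent_b):
    """Exchange the first n coordinates of two chromosomes (single-point crossover)."""
    n = random.randint(1, len(parent_a) - 1)    # cut point, assumed to avoid trivial swaps
    child_a = parent_b[:n] + parent_a[n:]
    child_b = parent_a[:n] + parent_b[n:]
    return child_a, child_b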
D. Mutation

Every coordinate of a chromosome has a probability Pm to mutate. The mutation is processed by generating two random numbers, m and v, and then the value of coordinate m mutates to v. A simple example of mutation is shown in Fig. 8.

Figure 8. Example of mutation, v=1
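A literal reading of this operator is sketched below: each coordinate is tested against Pm, and a mutated coordinate m is reassigned to a uniformly chosen resource index v; this per-coordinate interpretation is our assumption.

import random

def mutate(chromosome, num_resources, pm):
    """Mutate each coordinate with probability pm to a random resource index v."""
    mutated = list(chromosome)
    for m in range(len(mutated)):
        if random.random() < pm:
            mutated[m] = random.randrange(num_resources)   # new value v for coordinate m
    return mutated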


E. Keep-the-Best Strategy

De Jong proposed to "maintain the best solution found over time before selection" [19] to avoid wrecking the best chromosome during mutation or crossover. In our approach, we define a variable to store the globally best chromosome. In every generation, if the best chromosome of the generation is better than the global best one, the global best chromosome is updated; if not, the global best chromosome replaces the worst chromosome of the current generation. This strategy can accelerate the convergence of DOGA.
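In code form, the keep-the-best bookkeeping could look like the sketch below; taking "better" to mean higher fitness is how we read the text.

def keep_the_best(population, fitnesses, global_best, global_best_fit):
    """Update the stored global best, or re-insert it over the worst chromosome."""
    best_idx = max(range(len(population)), key=lambda i: fitnesses[i])
    if global_best is None or fitnesses[best_idx] > global_best_fit:
        global_best, global_best_fit = list(population[best_idx]), fitnesses[best_idx]
    else:
        worst_idx = min(range(len(population)), key=lambda i: fitnesses[i])
        population[worst_idx] = list(global_best)   # replace the worst chromosome
        fitnesses[worst_idx] = global_best_fit
    return global_best, global_best_fit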
F. Dynamic Objective Strategy

GA is not designed to solve constrained problems. So when the deadline is tight, the algorithm may not even find a feasible solution in the first generation, and if we still use formulas (3) and (4) to evaluate the fitness, the algorithm may have difficulty reaching a feasible solution. This problem is more serious in PSO, because if there is no feasible solution in the first generation, there is no gBest or pBest, so PSO cannot continue running. Rodriguez and Buyya [10] used the constraint-handling strategy proposed by Deb et al. [20]: when both solutions are infeasible, the one with smaller overall constraint violation is selected, which means that if two solutions cannot meet the deadline constraint, the one with a smaller TET is selected.

Differently, in this paper we propose a dynamic objective strategy (DOS). For ease of programming, if a chromosome cannot meet the deadline constraint, we define its fitness as 1/1000000, which is far smaller than the fitness of any feasible chromosome. If the fitness of all the chromosomes in the population is 1/1000000, which means no chromosome represents a feasible solution, DOGA sets TET as the optimization objective until one chromosome obtains a feasible solution. After that, DOGA again uses formulas (3) and (4) to evaluate a chromosome's fitness. By using DOS, when the deadline is very tight, DOGA first tries to find a solution with a smaller TET that can meet the deadline. As a result, DOGA is more likely to meet a tight deadline. Moreover, DOGA can use different Pc and Pm values under the different optimization objectives, so the algorithm can be more adaptive.

After finding a feasible solution, the algorithm runs 4000 more generations to find the final best solution. We consider that the algorithm cannot find a solution if it cannot find a feasible solution within 100000 generations.
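A sketch of the dynamic objective evaluation is given below. The switch between the time objective and the cost objective and the 1/1000000 penalty fitness for infeasible chromosomes follow the description above; wrapping them in two helper functions is our own packaging.

INFEASIBLE_FITNESS = 1.0 / 1000000.0   # assigned when the deadline constraint is violated

def dos_fitness(tec, tet, deadline, time_objective):
    """Dynamic objective fitness: optimize TET while no feasible solution exists,
    otherwise optimize TEC subject to TET <= deadline."""
    if time_objective:
        return 1.0 / tet                  # reward smaller execution time
    return 1.0 / tec if tet <= deadline else INFEASIBLE_FITNESS

def choose_objective(tets, deadline):
    """Stay in the time-minimization stage until at least one chromosome is feasible."""
    return not any(tet <= deadline for tet in tets)   # True -> optimize TET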
IV. EXPERIMENTS AND COMPARISONS

In our experiment, the price C_j of resource r_j is a random value generated in the range [1, 5], as shown in formula (9). For task t_i, exetime[i][j] is determined by two factors. We know that a resource with higher cost is likely to execute tasks more efficiently; we use 6 − C_j to represent this factor. However, there are various kinds of tasks and resources with different properties, and some resources may execute some of the tasks particularly efficiently, so cost cannot totally determine the execution efficiency for all of the tasks; we use Random(1, 5) to represent this factor. We consider that these two factors have the same influence on the execution time, so we use formula (10) to determine exetime[i][j]. We set transfertime[i][child(i)] as Random(0, 1), as shown in formula (11).

C_j = Random(1, 5)    (9)

exetime[i][j] = (6 − C_j) × 0.5 + Random(1, 5) × 0.5    (10)

transfertime[i][child(i)] = Random(0, 1)    (11)
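A possible reading of formulas (9)-(11) in code is shown below; whether Random(a, b) draws a real or an integer value is not stated in the paper, so we assume uniform real values here.

import random

def generate_instance(num_tasks, num_resources, children):
    """Generate resource prices, execution times, and transfer times per formulas (9)-(11)."""
    cost = [random.uniform(1, 5) for _ in range(num_resources)]                   # (9)
    exetime = [[(6 - cost[j]) * 0.5 + random.uniform(1, 5) * 0.5
                for j in range(num_resources)] for _ in range(num_tasks)]         # (10)
    transfertime = [[0.0] * num_tasks for _ in range(num_tasks)]
    for i in range(num_tasks):
        for c in children[i]:
            transfertime[i][c] = random.uniform(0, 1)                             # (11)
    return cost, exetime, transfertime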
We use the algorithm in Fig. 9 to generate the topology structure of the tasks. We consider that the tasks can be executed sequentially, which means that for t_i, child(i) > i. For t_i, we set Pchild to be the probability that a task t_k becomes a child of t_i. In order to generate a balanced structure, Pchild increases as i increases.

Figure 9. Algorithm of generating topology structure.
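Since Fig. 9 is only described in words here, the following generator is our interpretation: each later task t_k is attached as a child of t_i with a probability Pchild that grows with i; the exact growth schedule of Pchild is an assumption.

import random

def generate_topology(num_tasks):
    """Generate a DAG where child(i) > i and Pchild grows with i (assumed schedule)."""
    children = [[] for _ in range(num_tasks)]
    for i in range(num_tasks):
        p_child = (i + 1) / num_tasks        # assumed: probability increases with i
        for k in range(i + 1, num_tasks):
            if random.random() < p_child:
                children[i].append(k)
    return children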

We use 3 different scales of data in our experiment. We first compare GA with DOGA to show the effect of the dynamic objective strategy, and then compare the results of DOGA and PSO, presenting the result of every data scale via a table and a figure.

DOGA and PSO may need some generations to find a feasible solution; we call this the find generation (FGEN). If FGEN > 100000, we regard the algorithm as unable to find a solution. Otherwise, the algorithm runs 4000 more generations after finding a feasible solution, so as to find the best solution with the smallest TEC. We define Time as the computational time of the two algorithms, measured in seconds. The results compare the FGEN and TEC of PSO and DOGA under different deadline constraints. Moreover, we plot the convergence curves on the TEC metric along the 4000 generations of DOGA and PSO, where the X-axis is the generation (GEN) and the Y-axis is TEC.

For the PSO approach, we follow the settings proposed in [10]: c1 = c2 = 2.0, ω = 0.5, and a population of 100. The population of DOGA is also set to 100. For the crossover and mutation probabilities of DOGA, since the dynamic objective strategy works in different stages, if the algorithm is in the stage of finding a feasible solution, we set Pc = 0.15 and Pm = 0.008; otherwise, in the stage of minimizing TEC during the 4000 generations after a feasible solution has been found, we set Pc = 0.8 and Pm = 0.002. Moreover, because of randomness, we execute both DOGA and PSO 30 times on each instance and use the average result to evaluate the two approaches.

A. Compare GA with DOGA

As mentioned in Section III, the dynamic objective strategy is designed to meet tight deadline constraints. In this section, we compare the tightest deadline that DOGA and GA can meet to show the effect of the strategy.

TABLE I. THE TIGHTEST DEADLINE THAT GA AND DOGA CAN MEET

Case                       GA     DOGA   (GA-DOGA)/GA
50 tasks, 15 resources     115    60     47.8%
75 tasks, 25 resources     200    100    50.0%
100 tasks, 30 resources    330    150    54.5%

Notice that the results in Table I are measured in units of 5 seconds. Table I shows that DOS helps GA become more adaptive in dealing with tighter deadlines in the cost-minimization and deadline-constrained model. More significantly, as the scheduling scale increases, the advantage of DOGA becomes more obvious, as indicated by the value of (GA-DOGA)/GA.


B. Compare PSO with DOGA

1) Small Scale of Data

In the small scale of data, the workflow has 50 tasks and there are 15 kinds of resource that can be leased. We set 3 deadlines as 80, 100, and 120. The results are compared in Table II and Fig. 10.

The better results in the table are marked in boldface. The results show that DOGA always obtains smaller TEC values than PSO does under the different deadline constraints. Moreover, the computational time of DOGA is always less than that of PSO. The FGEN values tell how many generations PSO and DOGA need until they find at least one feasible solution. When the deadline constraint is loose, e.g., 120, both PSO and DOGA obtain a feasible solution at the beginning, with FGEN = 0. The PSO approach may sometimes converge faster than DOGA in obtaining the feasible solution, as indicated by the case with deadline 100. However, when the deadline becomes tighter, e.g., 80, PSO cannot even find a feasible solution, while DOGA still finds one in about 743.5 generations.

TABLE II. FGEN AND TEC UNDER DIFFERENT DEADLINE CONSTRAINTS ON SMALL SCALE OF DATA

Deadline   PSO: FGEN   PSO: TEC   PSO: Time   DOGA: FGEN   DOGA: TEC   DOGA: Time
120        0           315.87     16          0            259.92      13
100        63.3        484.12     18          188.1        367.63      14
80         N/A         N/A        N/A         743.5        424.59      16

Figure 10. The results on small scale of data with deadline=120 (TEC convergence curves of PSO and DOGA over 4000 generations)

2) Medium Scale of Data

In the medium scale of data, the workflow has 75 tasks and there are 25 kinds of resource that can be leased. We set 3 deadlines: 120, 200, and 280. The results are compared in Table III and Fig. 11.

TABLE III. FGEN AND TEC UNDER DIFFERENT DEADLINE CONSTRAINTS ON MEDIUM SCALE OF DATA

Deadline   PSO: FGEN   PSO: TEC   PSO: Time   DOGA: FGEN   DOGA: TEC   DOGA: Time
280        0           776.689    31          0            540.829     27
200        11.9        914.840    32          21.1         663.633     28
120        N/A         N/A        N/A         3670.1       738.395     51

Figure 11. The results on medium scale of data with deadline=280 (TEC convergence curves of PSO and DOGA over 4000 generations)

The results in Table III also show that DOGA does better than PSO under different deadline constraints, with smaller final TEC values and less CPU computational time. Moreover, the curves in Fig. 11 show that DOGA has a faster convergence speed than PSO on the medium scale of data with deadline=280.

3) Large Scale of Data

In the large scale of data, the workflow has 100 tasks and there are 30 kinds of resource that can be leased. We set 3 deadline constraints as 200, 300, and 400. The results are compared in Table IV and Fig. 12. These results further show that DOGA obtains feasible solutions under different deadline constraints and that DOGA generally performs better than PSO in terms of both solution quality and computational time.

TABLE IV. FGEN AND TEC UNDER DIFFERENT DEADLINE CONSTRAINTS ON LARGE SCALE OF DATA

Deadline   PSO: FGEN   PSO: TEC    PSO: Time   DOGA: FGEN   DOGA: TEC    DOGA: Time
400        0           1169.462    52          0            756.595      45
300        93.33       1647.823    54          245.6        978.411      48
200        N/A         N/A         N/A         4490.3       1164.298     92

Figure 12. The results on large scale of data with deadline=400 (TEC convergence curves of PSO and DOGA over 4000 generations)


4) Analysis

1. Convergence. In the case of the small scale of data, PSO's convergence speed is better than DOGA's. As the scale of data increases, DOGA shows better convergence speed than PSO.

2. Result. On the 3 different scales of data, DOGA generates better solutions with smaller TEC than PSO does. As the scale of data increases, the disparity between PSO and DOGA becomes more and more obvious.

3. Meeting the deadline constraint. On the 3 different scales of data, DOGA can meet a much tighter deadline than PSO. So DOGA has higher applicability than PSO, which means that DOGA is more likely to meet the needs of different kinds of users.

V. CONCLUSION

In this paper, we propose a DOGA approach to solve the resource scheduling problem in the cloud computing environment under the cost-minimization and deadline-constrained model [10]. The model has great applicability and can meet the needs of business organizations, but the PSO approach proposed in [10] has some problems and its results are not good enough. So we use GA to solve this model. During the experiments, we found that GA still cannot obtain a solution in the case of a tight deadline, so we proposed a GA approach with a dynamic objective strategy (DOS), named DOGA. The experiments under different scheduling scales and different deadline constraints show that DOGA is more adaptive to the constraint of various deadlines and can find better solutions with smaller TEC than PSO does.

In future work, other evolutionary computation algorithms such as ant colony optimization [21], differential evolution [22], artificial bee colony [23], and brain storm optimization [24] will be investigated to solve this resource scheduling problem in the cloud computing environment. Moreover, dynamic and multi-objective characteristics will be considered and related algorithms will be studied [25][13].

REFERENCES

[1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, "A view of cloud computing," Communications of the ACM, vol. 53, no. 4, pp. 50-58, Apr. 2010.
[2] P. Mell and T. Grance, "The NIST definition of cloud computing," Communications of the ACM, vol. 53, no. 6, pp. 50-50, Jun. 2011.
[3] V. William, B. James, and R. Buyya, "Introduction to cloud computing," in Cloud Computing: Principles and Paradigms. New York, USA: Wiley Press, 2011, pp. 3-37.
[4] "The economy is flat so why are financials Cloud vendors growing at more than 90 percent per annum?" FSN, March 5, 2013.
[5] X. F. Liu, Z. H. Zhan, K. J. Du, and W. N. Chen, "Energy aware virtual machine placement scheduling in cloud computing based on ant colony optimization approach," in Proc. Genetic Evol. Comput. Conf., Vancouver, Canada, Jul. 2014, pp. 41-47.
[6] M. Rahman, S. Venugopal, and R. Buyya, "A dynamic critical path algorithm for scheduling scientific workflow applications on global grids," in Proc. 3rd IEEE Int. Conf. e-Sci. Grid Comput., 2007, pp. 35-42.
[7] W. N. Chen and J. Zhang, "An ant colony optimization approach to a grid workflow scheduling problem with various QoS requirements," IEEE Trans. Syst., Man, Cybern., Part C: Appl. Rev., vol. 39, no. 1, pp. 29-43, Jan. 2009.
[8] M. Mao and M. Humphrey, "Auto-scaling to minimize cost and meet application deadlines in cloud workflows," in Proc. Int. Conf. High Perform. Comput., Netw., Storage Anal., 2011, pp. 1-12.
[9] M. Malawski, G. Juve, E. Deelman, and J. Nabrzyski, "Cost- and deadline-constrained provisioning for scientific workflow ensembles in IaaS clouds," in Proc. Int. Conf. High Perform. Comput., Netw., Storage Anal., 2012, pp. 1-11.
[10] M. A. Rodriguez and R. Buyya, "Deadline based resource provisioning and scheduling algorithm for scientific workflows on clouds," IEEE Transactions on Cloud Computing, vol. 2, no. 2, pp. 222-235, Apr.-Jun. 2014.
[11] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. 6th IEEE Int. Conf. Neural Netw., 1995, pp. 1942-1948.
[12] M. Shen, Z. H. Zhan, W. N. Chen, Y. J. Gong, J. Zhang, and Y. Li, "Bi-velocity discrete particle swarm optimization and its application to multicast routing problem in communication networks," IEEE Transactions on Industrial Electronics, vol. 61, no. 12, pp. 7141-7151, Dec. 2014.
[13] Z. H. Zhan, J. Li, J. Cao, J. Zhang, H. Chung, and Y. H. Shi, "Multiple populations for multiple objectives: A coevolutionary technique for solving multiobjective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445-463, Apr. 2013.
[14] Y. H. Li, Z. H. Zhan, S. Lin, J. Zhang, and X. N. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, no. 1, pp. 370-382, 2015.
[15] Z. H. Zhan, J. Zhang, Y. Li, and H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics--Part B, vol. 39, no. 6, pp. 1362-1381, Dec. 2009.
[16] Z. H. Zhan, J. Zhang, Y. Li, and Y. H. Shi, "Orthogonal learning particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832-847, Dec. 2011.
[17] Z. H. Zhan, G. Y. Zhang, Y. J. Gong, and J. Zhang, "Load balance aware genetic algorithm for task scheduling in cloud computing," in Proc. Simulated Evolution and Learning, 2014, pp. 644-655.
[18] J. Zhang, Z. H. Zhan, Y. Lin, N. Chen, Y. J. Gong, J. H. Zhong, H. S. H. Chung, Y. Li, and Y. H. Shi, "Evolutionary computation meets machine learning: A survey," IEEE Computational Intelligence Magazine, vol. 6, no. 4, pp. 68-75, Nov. 2011.
[19] K. A. De Jong, "An analysis of the behavior of a class of genetic adaptive systems," Ph.D. dissertation, University of Michigan, 1975.
[20] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Trans. Evol. Comput., vol. 6, no. 2, pp. 182-197, Apr. 2002.
[21] Z. H. Zhan, J. Zhang, Y. Li, O. Liu, S. K. Kwok, W. H. Ip, and O. Kaynak, "An efficient ant colony system based on receding horizon control for the aircraft arrival sequencing and scheduling problem," IEEE Trans. Intell. Transport. Syst., vol. 11, no. 2, pp. 399-412, Jun. 2010.
[22] Y. L. Li, Z. H. Zhan, Y. J. Gong, W. N. Chen, J. Zhang, and Y. Li, "Differential evolution with an evolution path: A DEEP evolutionary algorithm," IEEE Trans. on Cybernetics, DOI: 10.1109/TCYB.2014.2360752, 2014.
[23] M. D. Zhang, Z. H. Zhan, J. J. Li, and J. Zhang, "Tournament selection based artificial bee colony algorithm with elitist strategy," in Proc. Conf. Technologies and Applications of Artificial Intelligence, Taiwan, Nov. 2014, pp. 387-396.
[24] Z. H. Zhan, J. Zhang, Y. H. Shi, and H. L. Liu, "A modified brain storm optimization," in Proc. IEEE World Congr. Comput. Intell., Brisbane, Australia, Jun. 2012, pp. 1-8.
[25] Z. H. Zhan, J. J. Li, and J. Zhang, "Adaptive particle swarm optimization with variable relocation for dynamic optimization problems," in Proc. IEEE Congr. Evol. Comput., 2014, pp. 1565-1570.


