DOI: 10.1002/dac.4379

RESEARCH ARTICLE

An energy-efficient task-scheduling algorithm based on a multi-criteria decision-making method in cloud computing

Reihaneh Khorsand1 | Mohammadreza Ramezanpour2

1 Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran
2 Department of Computer Engineering, Mobarakeh Branch, Islamic Azad University, Isfahan, Iran

Correspondence
Reihaneh Khorsand, Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran.
Emails: [email protected]; [email protected]

Summary
The massive growth of cloud computing has led to huge amounts of energy consumption and carbon emissions by a large number of servers. One of the major aspects of cloud computing is its scheduling of many task requests submitted by users. Minimizing energy consumption while ensuring the user's QoS preferences is very important to achieving profit maximization for the cloud service providers and ensuring the user's service level agreement (SLA). Therefore, in addition to implementing user's tasks, cloud data centers should meet the different criteria in applying the cloud resources by considering the multiple requirements of different users. Mapping of user requests to cloud
resources for processing in a distributed environment is a well-known NP-
hard problem. To resolve this problem, this paper proposes an energy-efficient
task-scheduling algorithm based on best-worst (BWM) and the Technique for
Order Preference by Similarity to Ideal Solution (TOPSIS) methodology. The
main objective of this paper is to determine which cloud scheduling solution
is the most appropriate to select. First, a decision-making group identifies the evalu-
ation criteria. After that, a BWM process is applied to assign the importance
weights for each criterion, because the selected criteria have varied impor-
tance. Then, TOPSIS uses these weighted criteria as inputs to evaluate and
measure the performance of each alternative. The performance of the
proposed and existing algorithms is evaluated using several benchmarks in
the CloudSim toolkit and statistical testing through ANOVA, where the
evaluation metrics include the makespan, energy consumption, and resource
utilization.
KEYWORDS
best-worst method (BWM), cloud computing, energy consumption, multi-criteria decision
making, TOPSIS method
1 | INTRODUCTION
Cloud computing is a high-performance computing environment that represents the cooperation among multiple virtual machines (VMs) via a network and provides services to users using a pay-as-you-go model.1, 2 It has emerged as the
fifth utility service after water, electricity, gas, and telephony.3, 4 Using cloud computing, customers can access a shared resource pool on demand.5–7 Cloud computing providers are equipped with computational facilities called data centers, which enable them to provide the capacity required by businesses and customers. The process of mapping user requests to VMs in cloud data centers based on the preferences of the user(s) is called multi-criteria task scheduling, where multiple conflicting criteria are considered within the scheduling strategy. Most of the available studies on task scheduling focus on single-criterion decision making, whereas many real-world problems involve multiple criteria that must be considered. Multi-criteria task-scheduling management raises challenges such as determining the values of the different criteria and scheduling the tasks while considering the user's QoS preferences, energy efficiency, and so on.8–10
QoS-based algorithms are designed to utilize data center resources efficiently in order to meet the users' requirements. A nature-inspired algorithm called Osmosis Load Balancing (OLB) for better performance and load balancing was presented by Mallikarjuna and Krishna.11 The algorithm uses the principle of osmosis to reschedule the tasks of VMs and is based on a distributed hash table (DHT) with a chord overlay mechanism. Similarly, Mallikarjuna and Krishna12 designed a nature-inspired load balancing algorithm for VMs called honeybee foraging (LB-HBF). Their algorithm applies an equal time allocation policy and reduces task migrations across VMs.
Minimizing the energy usage of data centers is an important and complex challenge in the cloud.13, 14 To address this problem, many techniques, such as load balancing, have been applied to reduce energy usage. The main objective of load balancing is to distribute the load among all available VMs in a way that improves VM utilization and minimizes energy consumption. Energy-efficient task scheduling can be applied as a dynamic load balancing mechanism that distributes the workload based on the speed and memory capabilities of the VMs and reflects the dynamic nature of the cloud computing environment.15 Safari and Khorsand16 proposed an optimization model for time-constrained workflow scheduling that minimizes energy consumption using a dynamic voltage and frequency scaling (DVFS) method in cloud computing. Their approach reduces energy consumption by mapping tasks to VMs while considering the task deadline constraints. In general, methods that only find an energy-aware solution are not sufficiently good, especially for problems in which multiple criteria must be considered simultaneously. To the best of our knowledge, the idea of merging energy-efficient load balancing with ranking and selecting the appropriate task assignment operation to assure the user's QoS preferences has not been studied. Consequently, in this paper, an energy-efficient algorithm is proposed for multi-criteria scheduling based on a hybrid BWM-TOPSIS methodology, which allows the better cloud scheduling solution to be discovered and adopted through a systematic ranking process. Hence, first, a decision-making committee performs the identification of the evaluation criteria. Then, a BWM process17 is used to assign an importance weight to each criterion because of the varied importance of the selected criteria. Finally, a TOPSIS process employs these weighted criteria as inputs to evaluate and measure the performance of each alternative. The main contributions of this paper are as follows:
• designing an energy-efficient load balancing framework based on the user's QoS preferences, which can be applied to address the dynamic nature of the scheduling problem in the cloud environment,
• developing a task-scheduling strategy based on the integration of the best-worst and TOPSIS multi-criteria decision-making algorithms, which allows the better cloud scheduling solution to be discovered and adopted, and
• comparing the proposed and existing algorithms with several benchmarks on the CloudSim toolkit and a statistical test through analysis of variance (ANOVA), where the evaluation indicators include the makespan, energy consumption, and the VM utilization.
The rest of the paper is organized as follows: Section 2 investigates related work in the recent literature. Section 3 presents the proposed framework. Section 4 describes the proposed algorithm. Then, the simulation results are evaluated in Section 5. Finally, the paper is concluded in Section 6.
2 | RELATED WORK
The task-scheduling problem is an active research area in the heterogeneous cloud environment. However, there is no exact optimal solution that optimizes all criteria in cloud scheduling. Traditional task-scheduling algorithms for clouds focus on minimizing the makespan without considering energy consumption. For example, Panda et al18 designed a task-scheduling algorithm to balance the load in the cloud and partitioned the tasks into preprocessing and processing phases. They evaluated the makespan and average cloud resource utilization to compare the proposed and existing algorithms and used a statistical test through ANOVA. Khorsand et al19 designed a deadline-constrained scheduling algorithm that adaptively finds the schedule for a workflow application under run-time circumstances in cloud computing. They applied the opportunistic load balancing algorithm in the first stage and the load-balanced Min-Min algorithm in the second stage. Their approach does not consider dynamic workloads in cloud computing and may therefore not perform appropriately in many cases.
Optimization is one of the challenging and complex issues of modern applied mathematics, and many papers have studied optimization techniques to improve load balancing in the cloud environment. For example, Keshanchi and Navimipour20 addressed the task-scheduling problem using multiple priority queues and a memetic algorithm (MPQMA) in a virtualized heterogeneous cloud while minimizing the makespan. To assign a priority to each subtask, a genetic algorithm (GA) combined with hill climbing was introduced, which finds a solution for the task-to-VM mapping. Mallikarjuna and Krishna21 investigated the problem of efficient task scheduling by using a bee colony optimization model and an iterative process for improving load balancing. Keshanchi et al22 addressed the task-scheduling problem in virtualized heterogeneous environments using priority queues and a powerful one-point crossover operator in an improved GA. The objective is to minimize the execution time and SLA violations. Statistical analyses on different randomly generated task graphs were applied to verify the results. Rafieyan et al23 studied the cloud scheduling problem based on a combination of a multi-criteria decision-making method and the compromise ranking method (VIKOR) in virtualized heterogeneous systems, while considering three levels of priorities: the scheduling level, the resources level, and the job level. The authors investigated the effect of the priority of each job compared with the other jobs separately using job comparison matrices.
Among the energy-efficient load balancing algorithms, Safari and Khorsand13 proposed a new approach for minimizing the energy consumption of task allocation to VMs on a heterogeneous cloud system. They combined a power-aware list-based scheduling algorithm with the DVFS technique under a deadline constraint to maintain the quality of service; the goal is therefore to improve both performance and energy consumption in task scheduling. Similarly, Alahmadi et al24 investigated the problem of energy-efficient task scheduling in a cloud application environment. Besides DVFS energy-saving techniques and VM reuse, the authors introduced a new EATS-FFD algorithm, where FFD is used as the base task-scheduling policy. The objective is to minimize energy consumption while maintaining QoS. Fernández-Cerero et al25 proposed a set of energy-aware strategies for task scheduling in clouds with the ability to hibernate VMs that are idle, simultaneously minimizing the makespan and power consumption. Garg and Goraya26 introduced an energy-efficient heuristic for scheduling tasks under a deadline constraint in a virtualized cloud. Their method improves energy efficiency in two ways: by executing the maximum workload on a VM and by maximizing the energy saving in the idle state of a VM.
The surveyed scheduling algorithms have various limitations and main contributions associated with them, as listed in Table 1. It is observed that the surveyed algorithms typically do not combine energy-efficient load balancing with determining the values of the different criteria and the scheduling alternatives according to the user's QoS preferences. To overcome this drawback, an energy-efficient task-scheduling algorithm based on multi-criteria decision making in cloud computing is introduced, which performs energy-efficient load balancing and ranks and selects the appropriate task assignment operation to assure QoS.
3 | PROPOSED FRAMEWORK

The proposed energy-efficient task-scheduling framework, based on best-worst and TOPSIS multi-criteria decision making in cloud computing, is illustrated in Figure 1 and explained as follows:
In the first stage, user tasks are submitted and stored in the global waiting queue based on their arrival time. In the second stage, a decision-making committee (eg, a group of experts) performs the identification of the evaluation criteria
or the QoS preferences. Because the selected criteria have varied importance, we apply the best-worst method by Rezaei17 in the third stage to assign an importance weight to each criterion from the users' perspective. Then, in the fourth stage, the TOPSIS process uses these weighted criteria as inputs to determine the rank of each task. The ranked tasks are then sent to four priority levels (low, medium, high, and very high). In the fifth stage, an energy-efficient dispatcher is responsible for dispatching the tasks among the dynamic priority queues at four levels, namely, the very high, high, medium, and low priority queues. To set the priority of the VMs, they are sorted based on their processing power in ascending order. Finally, the energy-efficient dispatcher sends the next task from the dynamic priority queues to the proper VM for processing.
4 | PROPOSED ALGORITHM
When the user tasks are submitted and stored in the global waiting queue, a ranking algorithm based on the best-worst and TOPSIS methodology is applied. Multi-criteria decision making is the process of identifying and selecting alternatives based on the decision maker's preferences when several conflicting criteria are involved. In this paper, different end users have different criteria during the decision process; therefore, the recently developed best-worst method by Rezaei,17 which considers two vectors of pairwise comparisons and allows the weight of each criterion to be assessed in relation to the user requests in the input queue, is applied. The steps of the best-worst method, in pseudo-code format, for assessing the weight of each criterion in relation to the user requests are presented in Algorithm 1.
The steps of the best-worst–based criteria weighting method are described as follows:

Step 1. The end users rely on the judgements of a group of experts and initialize a set of preferred criteria {c1, c2, …, cn} for their requests, for instance, as shown in Table 2. The most important criteria considered include task length, burst time, deadline, waiting time, and VM utilization.

Step 2. The best (eg, most desirable/important) and the worst (eg, least desirable/important) criteria are analyzed and determined from the users' perspective, for instance, as shown in Table 3.
ALGORITHM 1
Steps of best-worst method in pseudo-code format for the assessment of the weight of each criterion
in relation to user requests
Input: A set of preferred criteria
Output: The weight of each criterion in relation to user requests
1: Initialize a set of preferred criteria with respect to the QoS parameters of the requests
2: Determine the best and the worst criteria from the users' perspective
3: Identify the preference of the best criterion in relation to the other criteria
4: Identify the preference of the criteria in relation to the worst criterion
5: Calculate the criteria weights such that the maximum absolute differences for all i are minimized
6: Obtain the optimal weights for the preferred criteria through Equation (1)
7: End
TABLE 2 A set of preferred criteria

Row   Criteria
1     Throughput (C1)
2     Cost (C2)
3     Makespan (C3)
4     Waiting time (C4)
5     VM utilization (C5)
Step 3. The preference of the best criterion in relation to the other criteria is identified. This is done using a scale (eg, aBi) from 1 to 9, where the aBi value denotes the preference of the best criterion B (eg, C1) in comparison with criterion i, for instance, as shown in Table 4.

Step 4. The preference of the criteria in relation to the worst criterion is identified. This is done using a scale (eg, aiW) from 1 to 9, where the aiW value denotes the preference of criterion i in comparison with the worst criterion W, for instance, as shown in Table 5.
Step 5. The criteria weights (W1, …, Wn) are calculated such that the maximum absolute differences $\left| \frac{W_B}{W_i} - a_{Bi} \right|$ and $\left| \frac{W_i}{W_W} - a_{iW} \right|$ for all i are minimized. The minmax model for calculating the criteria weights is expressed as follows:

$$ \min_{W} \max_{i} \left\{ \left| \frac{W_B}{W_i} - a_{Bi} \right|, \left| \frac{W_i}{W_W} - a_{iW} \right| \right\} \quad \text{subject to} \quad \sum_{i} W_i = 1, \; W_i \ge 0 \ \text{for all } i. \tag{1} $$
By solving the above problem, for instance, the optimal weight for the preferred criteria is obtained, as shown in
Table 6.
TABLE 4 The preference of the best criterion compared to the other criteria

TABLE 5 The preference of all the criteria compared to the worst criterion

Indicator   Worst criterion: C4
C1          8
C2          4
C3          4
C4          1
C5          3
TABLE 6 The optimal weights for the preferred criteria

Criterion        Weight
Throughput       0.42
Cost             0.20
Makespan         0.21
Waiting time     0.05
VM utilization   0.12
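The weight-derivation step can be made concrete with the following sketch, which solves the linear variant of the BWM model (from the same reference by Rezaei17) as a small linear program with SciPy, rather than the nonlinear ratio model of Equation (1). This is an illustrative implementation, not the authors' code: the aBi vector below is a hypothetical placeholder because the Table 4 values are not reproduced here, while the aiW vector follows Table 5.

```python
import numpy as np
from scipy.optimize import linprog

def bwm_weights(a_best_to_others, a_others_to_worst, best_idx, worst_idx):
    """Linear BWM: minimize xi subject to |w_best - a_Bi*w_i| <= xi,
    |w_i - a_iW*w_worst| <= xi, sum(w) = 1, w >= 0.
    Decision variables are [w_1, ..., w_n, xi]."""
    a_b = np.asarray(a_best_to_others, dtype=float)
    a_w = np.asarray(a_others_to_worst, dtype=float)
    n = len(a_b)
    c = np.zeros(n + 1)
    c[-1] = 1.0                                   # objective: minimize xi

    rows, rhs = [], []
    def bound_abs(expr):                          # encode |expr . x| <= xi as two rows
        r1 = expr.copy(); r1[-1] = -1.0
        r2 = -expr;       r2[-1] = -1.0
        rows.extend([r1, r2]); rhs.extend([0.0, 0.0])

    for i in range(n):
        e = np.zeros(n + 1)
        e[best_idx] += 1.0; e[i] -= a_b[i]        # w_best - a_Bi * w_i
        bound_abs(e)
        e = np.zeros(n + 1)
        e[i] += 1.0; e[worst_idx] -= a_w[i]       # w_i - a_iW * w_worst
        bound_abs(e)

    A_eq = np.zeros((1, n + 1)); A_eq[0, :n] = 1.0    # sum of weights = 1
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs), A_eq=A_eq,
                  b_eq=[1.0], bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[:n], res.x[-1]                   # weights, consistency indicator xi

# Criteria order C1..C5; best = C1 (throughput), worst = C4 (waiting time).
a_Bi = [1, 2, 2, 8, 3]   # hypothetical best-to-others preferences (Table 4 data not shown)
a_iW = [8, 4, 4, 1, 3]   # others-to-worst preferences from Table 5
weights, xi = bwm_weights(a_Bi, a_iW, best_idx=0, worst_idx=3)
print(np.round(weights, 2), round(xi, 3))
```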
TOPSIS is a well-known multi-criteria ranking method devised by Hwang and Yoon27 to determine the best alternative over a finite set of criteria. According to this technique, the best alternative is the one having both the shortest distance from the positive ideal solution and the longest distance from the negative ideal solution. The TOPSIS method offers easier computation and a more comprehensible concept than other multi-criteria ranking methods. The criteria weights obtained through the best-worst method are used as the inputs to evaluate and rank each request. The steps of the TOPSIS method, in pseudo-code format, for ranking the user requests are presented in Algorithm 2 and described as follows:
Step 1. A decision matrix of m user requests and n criteria is constructed, where each row represents a user request and each column represents a QoS criterion of the user requests. An integer value (eg, xij) between 1 and 9 can be the average value of a certain QoS criterion of a user request, as shown in Table 7.

Because of the heterogeneity of the QoS criteria, their values may have different measurement units, which leads to the construction of a normalized decision matrix. The matrix (rij)m×n represents the normalized form of (xij)m×n according to the normalization method expressed through Equation (2).
$$ r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^{2}}}. \tag{2} $$
For instance, the elements of normalized decision matrix r are tabulated in Table 8.
ALGORITHM 2
Steps of TOPSIS method in pseudo-code format for ranking user requests
Input: The criteria weights determined through the best-worst method
Output: The evaluation and ranking of each request
1: Create the normalized decision matrix according to Equation (2)
2: Create the weighted normalized decision matrix according to Equation (3)
3: Determine the positive ideal (A+) and negative ideal (A−) solutions according to Equations (4) and (5)
4: Calculate the Euclidean distances (Si+, Si−) of each alternative in matrix V from the positive ideal solution A+ and the negative ideal solution A− through Equations (6) and (7)
5: Calculate the relative closeness to the ideal solution through Equation (8)
6: Rank the alternatives based on the closeness coefficient Ci* values
7: End
Step 2. The weighted normalized decision matrix V is constructed according to Equation (3):

$$ V_{ij} = W_j \cdot r_{ij}, \tag{3} $$

where Wj is the optimal weight of the preferred criterion obtained using the best-worst method (for instance, the optimal weights for the preferred criteria W1, W2, W3, …, Wn are as shown in Table 9), and the sum of all the weights is equal to 1.
For instance, the weighted normalized decision matrix V is tabulated in Table 10.
Step 3. According to the weighted normalized decision matrix V, the positive ideal (A+) and negative ideal (A−) solutions are defined according to Equations (4) and (5), with their details tabulated in Table 11:

$$ A^{+} = \left( v_1^{+}, v_2^{+}, \ldots, v_n^{+} \right), \tag{4} $$

$$ A^{-} = \left( v_1^{-}, v_2^{-}, \ldots, v_n^{-} \right), \tag{5} $$

where $v_j^{+} = \max_i \{ v_{ij} \}$ and $v_j^{-} = \min_i \{ v_{ij} \}$ from matrix V.

TABLE 11 The positive ideal (A+) and negative ideal (A−) solutions
Step 4. The Euclidean distance (Si + , Si − ) is calculated for each alternative in matrix V from the positive ideal
solution A+ through Equation 6 and Table 12 and from the negative ideal solution A− through Equation 7 and
Table 13.
$$ S_i^{+} = \sqrt{ \sum_{j=1}^{n} \left( V_{ij} - v_j^{+} \right)^{2} }, \quad i = 1, 2, \ldots, m, \tag{6} $$

$$ S_i^{-} = \sqrt{ \sum_{j=1}^{n} \left( V_{ij} - v_j^{-} \right)^{2} }, \quad i = 1, 2, \ldots, m. \tag{7} $$
Step 5. The relative closeness coefficient C i* to the ideal solution A+ of each request is calculated through Equation 8
and as shown in Table 14.
$$ C_i^{*} = \frac{S_i^{-}}{S_i^{-} + S_i^{+}}, \quad 0 < C_i^{*} < 1, \; i = 1, 2, \ldots, m. \tag{8} $$
Step 6. A set of requests can now be ranked in descending order according to the closeness coefficient Ci*. The requests with higher priority should be sent to the adaptive dispatcher first. The final priorities of the requests are tabulated in Table 15.
TABLE 12 The distance of each task from the positive ideal solution (Si+)

Task 1   0.040202
Task 2   0.036861
Task 3   0.088118

TABLE 13 The distance of each task from the negative ideal solution (Si−)

Task 1   0.070732
Task 2   0.071169
Task 3   0.0265

TABLE 14 The relative closeness coefficient (Ci*) of each task

Task 1   0.637606
Task 2   0.658791
Task 3   0.231202
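For illustration, the following sketch implements the TOPSIS steps above (Equations (2) to (8)). It is a minimal example, not the authors' implementation: the decision matrix values are hypothetical placeholders rather than the paper's Table 7, all criteria are treated as benefit criteria exactly as Equations (4) and (5) state, and the weights follow the worked BWM example.

```python
import numpy as np

def topsis_rank(x, weights):
    """Rank alternatives (rows of x) against criteria (columns of x) with TOPSIS.
    All criteria are treated as benefit criteria, matching Equations (4) and (5)."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(weights, dtype=float)
    r = x / np.sqrt((x ** 2).sum(axis=0))              # Eq. (2): vector normalization
    v = w * r                                          # Eq. (3): weighted normalized matrix
    a_pos, a_neg = v.max(axis=0), v.min(axis=0)        # Eqs. (4)-(5): ideal solutions
    s_pos = np.sqrt(((v - a_pos) ** 2).sum(axis=1))    # Eq. (6): distance to A+
    s_neg = np.sqrt(((v - a_neg) ** 2).sum(axis=1))    # Eq. (7): distance to A-
    closeness = s_neg / (s_neg + s_pos)                # Eq. (8): relative closeness
    return closeness, np.argsort(-closeness)           # higher closeness = higher rank

# Hypothetical 3-task, 5-criterion decision matrix with 1-9 scores (not the paper's data)
# and the criterion weights from the BWM example step.
x = [[7, 5, 4, 6, 8],
     [8, 4, 5, 5, 7],
     [3, 6, 7, 2, 4]]
weights = [0.42, 0.20, 0.21, 0.05, 0.12]
closeness, order = topsis_rank(x, weights)
print(np.round(closeness, 3), order)
```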
The TOPSIS process described in the previous section uses the weighted criteria as inputs to determine the rank of each task. All the tasks that have a very high priority after the ranking process should be quickly allocated to VMs. The energy-efficient dispatcher is responsible for dispatching the tasks to the dynamic priority queues, where the priority can be very high, high, medium, or low. Inspired by Jun et al,28 we determine three thresholds α, β, and γ to prioritize the tasks, queues, and VMs as follows:
Very high priority task: Task(i) > γT and Expected finish timeTask(i) > DeadlineTask(i)
High priority task: βT < Task(i) ≤ γT
Medium priority task: αT < Task(i) ≤ βT
Low priority task: Task(i) ≤ αT
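A minimal sketch of this threshold-based classification is shown below. It is illustrative only: the task score is taken to be its closeness coefficient from the TOPSIS step, the threshold values are made up, and a task whose score exceeds γT but whose expected finish time does not exceed its deadline is mapped to the high level, an assumption for a case the rules above leave open.

```python
from enum import Enum

class Priority(Enum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    VERY_HIGH = 3

def classify_task(score, expected_finish, deadline, alpha_t, beta_t, gamma_t):
    """Map a ranked task to a priority level using thresholds alpha_t < beta_t < gamma_t."""
    if score > gamma_t:
        # "Very high" also requires the task to risk missing its deadline;
        # otherwise fall back to "high" (an assumption for the uncovered case).
        return Priority.VERY_HIGH if expected_finish > deadline else Priority.HIGH
    if score > beta_t:
        return Priority.HIGH
    if score > alpha_t:
        return Priority.MEDIUM
    return Priority.LOW

# Example: closeness coefficients from Table 14 with illustrative thresholds.
for score in (0.637606, 0.658791, 0.231202):
    print(score, classify_task(score, expected_finish=120, deadline=100,
                               alpha_t=0.25, beta_t=0.5, gamma_t=0.6))
```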
In addition to good load balancing, we consider the prioritization of the VMs based on their computing capabilities in the next step. To set the priority of the VMs, they are sorted based on their processing power in ascending order. We assume that all VMs can be classified into four priority levels, namely, low, medium, high, and very high priority VMs, according to three thresholds αv, βv, and γv. Next, for each priority group, all queues are examined to find the optimal task allocation to the VMs. Then, the energy-efficient dispatcher sends the next task from the dynamic priority queues to the proper VM for processing. Finally, the algorithm determines the makespan of the tasks and returns the maximum makespan among the VM groups. It then keeps track of the best solution for allocating tasks to the VMs. The pseudo-code of the energy-efficient dispatcher component is presented in Algorithm 3.
ALGORITHM 3
Pseudo-code for energy-efficient dispatcher ()
Input: Ranked tasks with a low, medium, high, or very high priority
Output: The best solution of scheduling tasks among VMs
1: Calculate the capacity of all VMs
2: //Classification of VMs
3: Sort the list of VMs in descending order
4: Determine the priority order of each VM
5: Group the VMs into priority groups according to the thresholds
6: Receive list of all queues
7: Sort all the queues in descending order
8: While true
9: for (Every task queue) do
10: Send the TaskVery high priority to the VMVery high priority
11: Send the Taskhigh priority to the VMhigh priority
12: Send the Taskmedium priority to the VMmedium priority
13: Send the Tasklow priority to the VMlow priority
14: End
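The dispatcher loop of Algorithm 3 can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions, not the authors' implementation: priority levels are encoded as integers (3 = very high down to 0 = low), and tasks are assigned round-robin to the VMs of the matching priority group.

```python
from collections import defaultdict, deque

def dispatch(ranked_tasks, vms):
    """ranked_tasks: list of (task_id, priority); vms: list of (vm_id, priority, mips)."""
    vm_groups = defaultdict(list)
    for vm_id, level, mips in sorted(vms, key=lambda v: v[2], reverse=True):
        vm_groups[level].append(vm_id)          # fastest VM first within each group

    queues = defaultdict(deque)                 # one dynamic queue per priority level
    for task_id, level in ranked_tasks:
        queues[level].append(task_id)

    schedule = []
    for level in (3, 2, 1, 0):                  # serve very high -> low
        group = vm_groups[level] or [v[0] for v in vms]   # fall back to any VM
        i = 0
        while queues[level]:
            schedule.append((queues[level].popleft(), group[i % len(group)]))
            i += 1
    return schedule

# Example: four VMs, one per priority group, and a small set of ranked tasks.
vms = [("vm0", 0, 500), ("vm1", 1, 1000), ("vm2", 2, 1500), ("vm3", 3, 2000)]
tasks = [("t1", 3), ("t2", 3), ("t3", 1), ("t4", 0)]
print(dispatch(tasks, vms))
```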
5 | PERFORMANCE EVALUATION
In this paper, the simulation is performed using the CloudSim 3.0.3 toolkit,29–31 and a statistical test is performed through ANOVA32 to evaluate the proposed approach. The objective of the proposed approach is to meet the multi-criteria QoS requirements in both the cloud users' and the providers' context by minimizing the makespan for the users and by minimizing energy consumption, improving resource utilization, and balancing the load for the providers. The simulation setup applied in modeling the proposed approach is tabulated in Table 16.
Experiments are run with three sets of scenarios (Table 17).
The performance analysis of the proposed energy-efficient task-scheduling algorithm is carried out through the energy consumption, makespan, and VM utilization metrics as follows:
• Total energy consumption: The energy consumption of the tasks' execution is determined as follows18 (a small numerical sketch of this model is given after the metric list):

$$ E(x) = \sum EN_{ij} + E_o, \quad \text{subject to } \gamma_j(task_i) < T_{deadline}, \qquad EN_{ij} = R_j \cdot \gamma_j(task_i), $$

where ENij is the energy consumption produced by task Ti running on VM Vj, Eo is the power needed to operate a data center, Rj represents the energy consumption rate of the VM, and Tdeadline represents the maximum latency that a user can tolerate.
• Makespan: It is the completion time of processing a set of tasks submitted to the system.
• VM utilization: It is defined as CPU utilization, RAM utilization, and storage utilization. For example, in this paper,
VM utilization indicates the value between 0 and 1, which explains the percentage of CPU usage. Value 1 means that
CPU is not available for further processes.33
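As a small numerical sketch of the energy model above: total energy is the sum of ENij = Rj · γj(taski) over the scheduled task-VM pairs plus the operating energy Eo. The rates, run times, and Eo below are made-up values, not numbers from the paper.

```python
def total_energy(assignments, rates, run_times, e_o):
    """assignments: list of (task, vm); rates: R_j per VM; run_times[(task, vm)]: gamma_j(task_i)."""
    en = sum(rates[vm] * run_times[(task, vm)] for task, vm in assignments)
    return en + e_o                                   # E(x) = sum(EN_ij) + E_o

assignments = [("t1", "vm1"), ("t2", "vm2")]
rates = {"vm1": 120.0, "vm2": 95.0}                   # energy consumption rate of each VM
run_times = {("t1", "vm1"): 30.0, ("t2", "vm2"): 45.0}
print(total_energy(assignments, rates, run_times, e_o=500.0))
```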
To evaluate the proposed approach and to provide a better view of its advantages, the following algorithms are described and then compared in the following subsections:

• Power-aware list-based scheduling with DVFS (PL-DVFS): This algorithm works toward minimizing the energy of directed acyclic graph-structured applications on a heterogeneous cloud system. It combines a power-aware list-based scheduling algorithm with the dynamic voltage and frequency scaling (DVFS) technique for real-time tasks to maintain the quality of service while considering task deadlines.13
• Energy aware tasks scheduling with deadline constraint (EATSD): This approach considers different users' priorities and deadline constraints and allocates the resources on the basis of their class as well as the class of the received tasks so that energy consumption is reduced.34
The newly proposed approach is compared with the above-mentioned approaches with respect to the characteristics tabulated in Table 4. In the simulation, the reported values are obtained by running each algorithm 20 times.
The objective of this experiment is to identify the makespan under various arrival rates. In the results exhibited in Figure 2, the X-axis shows the number of tasks, and the Y-axis shows the makespan. The number of user tasks varies within 100-1000, and a direct relation is evident between the increase in makespan and the task count, due to the increase in task requests per VM. When the number of user tasks increases, a significant difference is observed between the different algorithms' makespans because the proposed approach selects the VM based on the task priority. For example, with 1000 user tasks, the proposed approach provides users with a lower makespan than EATSD and PL-DVFS. The proposed approach outperforms the other algorithms because the energy-efficient scheduler prioritizes all VMs based on their processing power and assigns the corresponding output queue to the appropriate VMs. In addition, all of the tasks are considered based on their priorities.
The mean VM utilization for different numbers of user tasks, compared with the other algorithms, is shown as a bar chart in Figure 3. In PL-DVFS, resource provisioning is not considered, because the objective is to satisfy the cloud user's objectives. The objectives of EATSD are to meet the user requirements, to stabilize the number of VMs, and to promote resource utilization with the assistance of resource provisioning, with no concern for multi-criteria scheduling. As observed in Figure 3, the proposed approach has better utilization than the other algorithms in all cases. The proposed approach seeks to increase the revenue of the service provider by utilizing the VMs optimally and concentrates on data center utilization, which is determined through the number of VMs applied in processing the given task requests. The reason for the improved mean utilization of the proposed algorithm is its good load balancing among the different available resources, since the proposed approach prioritizes the VMs based on their processing power.
The results for the energy consumption metric in relation to the variation in the task count are shown as a bar chart in Figure 4. One of the reasons for evaluating this metric is to assure cost reduction, as there is a direct relation between cost and energy consumption. As observed in Figure 4, the energy consumption of PL-DVFS and EATSD increases rapidly when the number of tasks increases, while in the case of our algorithm, it increases very slowly. The number of tasks varies over time. Unlike the available algorithms, which do not use the system load to allocate resources, the proposed approach does so. Here, it can be deduced that the proposed approach outperforms PL-DVFS because it sends the tasks with higher cost and size to the VMs with higher power.
To statistically analyze the experimental results and to compare the proposed algorithm with the PL-DVFS and EATSD algorithms, we present the boxplots of the makespan, VM utilization, and energy consumption for the simulation results. We also performed ANOVA in the SPSS tool.32 ANOVA is a well-known statistical approach that we use to validate our simulation results; it compares the means of two or more groups to determine whether there is a considerable difference among the groups. We consider a null hypothesis (H0) stating that the means of all the groups are equal and an alternative hypothesis (H1) stating that the means are not equal. For the statistical analysis of the 1000-task case, we performed two separate ANOVA tests based on energy efficiency and makespan; the test results are presented in Tables 18 to 20, respectively. If the sig. value is less than the selected α level (=.05), the null hypothesis is rejected. We reject the null hypothesis because the means are not equal. Note that df denotes the degrees of freedom in Tables 18 to 20 and is determined by calculating the difference between the number of samples and the number of groups. In addition, the boxplots of the makespan, VM utilization, and energy consumption using the different approaches are illustrated in Figures 5–7, respectively.
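As an illustration of this test, the following sketch runs a one-way ANOVA with SciPy instead of SPSS; the sample values are placeholders, not the measured results from the paper.

```python
from scipy.stats import f_oneway

# Placeholder samples of a metric (e.g., makespan) over repeated runs for each algorithm.
proposed = [41.2, 39.8, 40.5, 42.1, 40.9]
pl_dvfs = [48.3, 47.6, 49.1, 48.8, 47.9]
eatsd = [45.0, 44.2, 46.1, 45.7, 44.8]

f_stat, p_value = f_oneway(proposed, pl_dvfs, eatsd)
print(f_stat, p_value)
# The null hypothesis of equal group means is rejected when p_value < 0.05.
```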
6 | CONCLUSION

Cloud computing is a popular computing paradigm on the internet that enables a computing-as-a-service model and provides a shared pool of virtualized resources applicable on demand. In many real-world task-scheduling problems, multiple criteria need to be considered when choosing VMs depending on the users' requirements. Most of the available studies in this area focus only on a single criterion, which cannot satisfy the users' requirements or the service provider. To address this issue, this paper proposes an energy-efficient task-scheduling algorithm based on the BWM and TOPSIS methodology. The main objective of this paper is to determine which cloud scheduling solution is the most appropriate to select, and the ranking process using the BWM-TOPSIS methodology allows this objective to be pursued in a comprehensive and principled manner. In addition, the energy-efficient dispatcher is responsible for dispatching the tasks to the dynamic priority queues such that all the tasks that have a very high priority after the ranking process are quickly allocated to VMs.
The applicability of the proposed approach is demonstrated through different experimental scenarios by applying simulation and a statistical test through ANOVA, and it is compared with two other approaches. The experimental results show that the proposed approach can effectively reduce the makespan and energy consumption in comparison with its counterparts. Moreover, the proposed approach can improve VM utilization, making it appropriate for exploring large-scale problems. In future studies, other QoS parameters such as reliability and availability need to be applied for the possible development of the energy-aware scheduler for well-known scientific workflows. Moreover, our proposed work is simulated in the CloudSim toolkit, which has become one of the most popular open-source cloud simulators in research and academia. However, in the proposed algorithm, we have not considered large-scale data centers, where the amount of workload may exceed the capacity of a single scheduler in the CloudSim simulator. Therefore, our future effort is aimed at using the SCORE simulator,35, 36 an extension of the Google Omega lightweight simulator that focuses on large-scale data centers, to analyze the efficiency of the proposed task-scheduling model.
ORCID
Reihaneh Khorsand https://orcid.org/0000-0003-1613-4938
Mohammadreza Ramezanpour https://orcid.org/0000-0002-1588-0982
R EF E RE N C E S
1. Ashouraie M, Jafari NN. Priority-based task scheduling on heterogeneous resources in the Expert Cloud. Kybernetes. 2015 Nov 2;44(10):
1455-1471.
2. Azad P, Navimipour NJ. An energy-aware task scheduling in the cloud computing using a hybrid cultural and ant colony optimization
algorithm. International Journal of Cloud Applications and Computing (IJCAC). 2017 Oct 1;7(4):20-40.
3. Ghobaei-Arani M, Khorsand R, Ramezanpour M. An autonomous resource provisioning framework for massively multiplayer online
games in cloud environment. Journal of Network and Computer Applications. 2019 Jun 7;142:76-97.
4. Khorsand R, Safi-Esfahani F, Nematbakhsh N, Mohsenzade M. Taxonomy of workflow partitioning problems and methods in distrib-
uted environments. Journal of Systems and Software. 2017 Oct 1;132(2):253-271.
5. Khorsand R, Ghobaei-Arani M, Ramezanpour M. FAHP approach for autonomic resource provisioning of multitier applications in cloud
computing environments. Software: Practice and Experience. 2018 Dec;48(12):2147-2173.
6. Dordaie N, Navimipour NJ. A Hybrid Particle Swarm Optimization and Hill Climbing Algorithm for Task Scheduling in the Cloud
Environments. ICT Express; 2017 Sep 20.
7. Hoseinnejhad M, Navimipour NJ. Deadline constrained task scheduling in the cloud computing using a discrete firefly algorithm. International Journal of Next-Generation Computing. 2017 Nov 1;8(3):198-209.
8. Jafari Navimipour N, Masoud Rahmani A, Habibizad Navin A, Hosseinzadeh M. Job scheduling in the Expert Cloud based on genetic
algorithms. Kybernetes. 2014 Aug 26;43(8):1262-1275.
9. Milani AS, Navimipour NJ. Load balancing mechanisms and techniques in the cloud environments: systematic literature review and
future trends. Journal of Network and Computer Applications. 2016 Aug 1;71(10):86-98.
10. Navimipour NJ. Task scheduling in the cloud environments based on an artificial bee colony algorithm. In International Conference on
Image Processing 2015 Jun 3 (pp. 38-44).
11. Mallikarjuna B, Krishna PV. OLB: a nature inspired approach for load balancing in cloud computing. Cybernetics and Information
Technologies. 2015 Nov 1;15(4):138-148.
12. Mallikarjuna B, Krishna PV. A nature inspired approach for load balancing of tasks in cloud computing using equal time allocation.
International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN. 2014;2278-3079.
13. Safari M, Khorsand R. PL-DVFS: combining power-aware list-based scheduling algorithm with DVFS technique for real-time tasks in
cloud computing. The Journal of Supercomputing. 2018 Oct 1;74(10):5578-5600.
14. Khorsand R, Ghobaei-Arani M, Ramezanpour M. A self-learning fuzzy approach for proactive resource provisioning in cloud environ-
ment. Software: Practice and Experience. 2019 Nov 1. 1618–1642.
15. Gonçalves Junior R, Sampaio A, Rolim T, Mendonça NC. Cloud application architecture appraiser (CA3): a multicriteria approach and
tool for assessing cloud deployment options based on nonfunctional requirements. Software: Practice and Experience. 2018 Dec;48(12):
2278-2301.
16. Safari M, Khorsand R. Energy-aware scheduling algorithm for time-constrained workflow tasks in DVFS-enabled cloud environment.
Simulation Modelling Practice and Theory. 2018 Sep 1;87(7):311-326.
17. Rezaei J. Best-worst multi-criteria decision-making method: some properties and a linear model. Omega. 2016 Oct 1;64:126-130.
18. Panda SK, Pande SK, Das S. Task partitioning scheduling algorithms for heterogeneous multi-cloud environment. Arabian Journal for
Science and Engineering. 2018 Feb 1;43(2):913-933.
19. Khorsand R, Safi-Esfahani F, Nematbakhsh N, Mohsenzade M. ATSDS: adaptive two-stage deadline-constrained workflow
scheduling considering run-time circumstances in cloud computing environments. The Journal of Supercomputing. 2017 Jun 1;73(6):
2430-2455.
20. Keshanchi B, Navimipour NJ. Priority-based task scheduling in the cloud systems using a memetic algorithm. Journal of Circuits,
Systems and Computers. 2016 Oct 20;25(10):119–131.
21. Mallikarjuna B, Krishna PV. A nature inspired bee colony optimization model for improving load balancing in cloud computing.
International Journal of Innovative Technology and Exploring Engineering (IJITEE). December, 2018;8(2S2):51–54.
22. Keshanchi B, Souri A, Navimipour NJ. An improved genetic algorithm for task scheduling in the cloud environments using the priority
queues: formal verification, simulation, and statistical testing. Journal of Systems and Software. 2017 Feb 1;124:1-21.
23. Rafieyan E, Khorsand R, Ramezanpour M. An adaptive scheduling approach based on integrated best-worst and VIKOR for cloud
computing. Computers & Industrial Engineering. 2020 Jan;8:106272–106293.
24. Alahmadi A, Che D, Khaleel M, Zhu MM, Ghodous P. An innovative energy-aware cloud task scheduling framework. In2015 IEEE 8th
International Conference on Cloud Computing 2015 Jun 27 (pp. 493-500). IEEE.
25. Fernández-Cerero D, Jakóbik A, Grzonka D, Kołodziej J, Fernández-Montes A. Security supportive energy-aware scheduling and energy
policies for cloud environments. Journal of Parallel and Distributed Computing. 2018 Sep 1;119:191-202.
26. Garg N, Goraya MS. Task deadline-aware energy-efficient scheduling model for a virtualized cloud. Arabian Journal for Science and
Engineering. 2018 Feb 1;43(2):829-841.
27. Hwang CL, Yoon K. Methods for multiple attribute decision making. In: Multiple attribute decision making. Heidelberg: Springer, Berlin:
1981, 58-1191.
28. Jun Y, Qingqiang M, Song W, Duanchao L, Taigui H, Wanchun D. Energy-aware tasks scheduling with deadline-constrained in clouds.
In2016 International Conference on Advanced Cloud and Big Data (CBD) 2016 Aug 13 (pp. 116-121). IEEE.
29. Calheiros RN, Ranjan R, Beloglazov A, De Rose CA, Buyya R. CloudSim: a toolkit for modeling and simulation of cloud computing
environments and evaluation of resource provisioning algorithms. Software: Practice and experience. 2011 Jan;41(1):23-50.
30. Navimipour NJ, Milani FS. Task scheduling in the cloud computing based on the cuckoo search algorithm. International Journal of
Modeling and Optimization. 2015 Feb 1;5(1):44–55.
31. Navin AH, Navimipour NJ, Rahmani AM, Hosseinzadeh M. Expert grid: new type of grid to manage the human resources and study the
effectiveness of its task scheduler. Arabian Journal for Science and Engineering. 2014 Aug 1;39(8):6175-6188.
32. Chiarandini M, Paquete L. In: Bartz-Beielstein T, Preuss M, eds. Experimental methods for the analysis of optimization algorithms. Berlin:
Springer; 2010 Nov 1.
33. Hemasian-Etefagh F, Safi-Esfahani F. Dynamic scheduling applying new population grouping of whales meta-heuristic in cloud
computing. The Journal of Supercomputing. 2019;1-65.
34. Ben Alla S, Allah B, Touhafi A, Ezzati A. An efficient energy-aware tasks scheduling with deadline-constrained in cloud computing.
Computers. 2019 Jun;8(2):46–58.
35. Fernández-Cerero D, Fernández-Montes A, Jakóbik A, Kołodziej J, Toro M. SCORE: simulator for cloud optimization of resources and
energy consumption. Simulation Modelling Practice and Theory. 2018 Mar 1;82:160-173.
36. Fernández-Cerero D, Jakóbik A, Fernández-Montes A, Kołodziej J. GAME-SCORE: game-based energy-aware cloud scheduler and
simulator for computational clouds. Simulation Modelling Practice and Theory. 2019 May 1;93:3-20.
How to cite this article: Khorsand R, Ramezanpour M. An energy-efficient task-scheduling algorithm based on
a multi-criteria decision-making method in cloud computing. Int J Commun Syst. 2020;e4379. https://doi.org/10.
1002/dac.4379