

Energy Optimization and Saving for

Green Data Centers

Niharika Raskar
Department of Computer Science
Virginia Commonwealth University
Richmond, Virginia, USA
[email protected]

Abstract—Cloud computing has paved the way for a new generation of technology which gives the view of having an infinite amount of computational power and resources. This has resulted in the rise of data centers, which are consuming energy at a very fast pace, thereby increasing the carbon footprint of organizations and GHG emissions. This paper aims to provide a survey of the various techniques suggested and the state of the art towards the process of "greenifying" these data centers, making them environment friendly. The paper also provides a detailed look at the data center architecture, which will give the intended audience a clear picture of the main areas targeted for optimization in the papers surveyed. Optimization techniques covered include virtualization, energy-aware algorithms, virtual machine management, temperature management, data center design, mathematical models, software techniques and general strategies. The paper ends with a discussion of the challenges and future work which need due consideration in order to deal with rising demands in the future.

Keywords—cloud computing, data centers, optimization, virtualization, energy efficiency

I. INTRODUCTION

Cloud computing has gained a lot of popularity as it offers one of the primary advantages of on-demand memory and computation power. This IT paradigm is served and supported by data centers, which can be defined as structures dedicated to the centralized accommodation, interconnection and operation of IT and network/telecommunications equipment providing data storage, processing and transport services, together with all the support facilities for power supply and environmental control, with the necessary levels of resilience and security required to provide the desired service availability [1]. These data centers can also be viewed as power-hungry giants which can consume as much electricity as is needed to run a modern-day city. The rising CO2 emissions from them significantly increase the negative impact of information and communication technology on the environment.

Organizations, whether public or private, are striving to meet the demands of their consumers and, in order to gain competitive advantage, are moving most of their operations to the cloud, which involves data centers running day and night, 24 by 365. The annual electricity consumed by these data centers amounts to 450 terawatt-hours, which is equivalent to 90 billion USD [2]. Microsoft's single data center at Quincy, WA consumes 48 megawatts of electricity, enough to power 40,000 homes [3]. A significant amount of energy is consumed in running the servers and the cooling infrastructure necessary to maintain the environmental conditions for smooth operation. Figure 1 provides a breakdown of energy consumption in a typical data center. The consumption can be higher if the layout of the infrastructure is improper and the airflow is insufficient. This increases greenhouse gas emissions, and energy consumption is bound to grow and negatively impact the environment in the future. For the past several years the main focus has been on delivering to consumers a service with high quality, reliability and performance without any regard to how the design might impact the environment. But now organizations can face huge fines and are expected to pay a certain amount based on their carbon footprint as part of a scheme called the "Carbon Reduction Commitment" [14]. There is currently a huge reliance on fossil fuels, and the need of the hour is to use renewable sources of energy. Gartner now lists "Green IT" as the leading issue among the "top ten strategic trends and technologies" for ICT organizations [38].

Significant research is being done in this regard to optimize the energy consumption of data centers, use renewable sources of energy and follow the thermal guidelines prescribed by ASHRAE [4] since the early 2000s. Numerous works have been released, ranging from server virtualization, green strategies, software features and frameworks for VM management to temperature-aware capping, provisioning of resources and the design of algorithms. Some of the strategies work for small data centers while others work only for large data centers, which might be generating their own electricity. This is because not all organizations, especially the small and medium scale ones, have the technical manpower to implement the solutions suggested; these organizations can take the help of local utility providers to optimize their energy consumption. Servers and cooling infrastructure have received a lot of attention and have improved over time; however, attention needs to be paid to the networking infrastructure as well, whose energy intake might contribute up to 50% of the total energy consumed in a data center in the future [39].
This paper aims to provide a survey of the various methods and techniques suggested, and the state of the art, towards the process of "greenifying" these data centers. This survey includes 18 papers classified according to the type of solution suggested: [9, 16, 17, 22] provide approaches for virtual machine management to reduce energy consumption, [5] suggests a virtualization technique, [11, 15, 19] suggest energy-aware algorithms, [10, 12] provide temperature management techniques, [13, 14] suggest efficient data center designs, [18] provides a mathematical model to maximize the profit of running a data center while reducing energy consumption, [7] provides details of software provided by Intel, [6, 20] provide general strategies for creating green data centers, [8] talks about the statistics for data center energy consumption, and [21] discusses in detail the performance metrics used to measure how "green" a data center is.

Thus, this paper summarizes the previous work done in this regard, along with challenges and future work. The paper starts by discussing the motivation, followed by an overview of data center architecture in Section 3, a survey of the papers mentioned previously in Section 4, challenges and future work in Section 5, related work in Section 6 and the conclusion in Section 7.

Fig. 1. Data center energy consumption breakdown [22]

II. MOTIVATION

With the rising demand for data centers there will be a need for energy optimizing strategies. There are a few papers [6, 20, 21] which provide an overview of the work done towards energy optimization using a single approach (either managing the virtual machines, maintaining the temperature, using renewable resources or modifying the data center design), but I could not find a survey paper which provides a holistic overview of previous research using different approaches under the same roof. This paper aims to address the various aspects of green data centers in addition to energy optimization, to guide businesses looking for options to reduce their carbon footprint.

Data centers were previously designed keeping just one aspect in mind: that they should provide excellent performance (reliability, availability, speed, computational power, memory), without any regard to energy consumption or GHG emissions. This was the main reason behind the success of cloud computing. However, now that the power demands of these data centers are rising incessantly, steps are being taken to use more energy efficient schemes.

III. OVERVIEW OF DATA CENTER ARCHITECTURE

The main parts of data center infrastructure include the IT room, the data center support area and additional spaces. The core of the data center, that is, the IT equipment and connecting cables, is housed in the IT room [20]. The support room has controlling systems like switchboards, the UPS (Uninterruptible Power Supply) and the cooling system [20]. The rest of the area is used by the personnel deployed at the data center for physical security and other operations. The IT infrastructure is mainly composed of several structures called racks, each of which houses a number of servers and storage devices, the core of any data center. The back of the rack holds the networking cables connecting the devices. The architecture is best illustrated with the help of Figure 2. The environmental conditions of the surrounding area (weather, humidity, temperature, precipitation) play an important role in deciding the cooling system. The equipment in the IT room needs to be protected under all circumstances and weather conditions, as a compromise of data confidentiality or integrity can cause substantial financial and reputational loss to any organization.

ASHRAE [3] provides the thermal guidelines for running data centers, specifying the required temperature, humidity, particulate and gaseous levels. The majority of medium to large scale organizations follow these guidelines. They help in maintaining reliability while providing high energy efficiency, and were developed for use across all types of businesses and conditions [3]. The main motive of these guidelines is to improve the average PUE (Power Usage Effectiveness), the ratio of the data center's total power consumption to the power consumption of its IT infrastructure alone, which is considered a measure of a data center's efficiency. Currently, the typical value for most enterprise data centers is 2.0 or more; however, recent studies have suggested that many data centers can soon reach a PUE of 1.7 [40].

The power distribution system of a data center is designed specially to provide an uninterrupted power supply without any dips or spikes which could jeopardize the IT equipment. Diesel generators provide backup power, and a UPS provides a smooth transition in case of a power failure [20]. There are four main types of power distribution systems used for data centers. Tier I corresponds to the basic structure and has non-redundant capacity components and distribution paths. Tier II is quite similar to Tier I, including redundancies in capacity components but not in power distribution paths. Tiers III and IV present redundancies both in capacity components and distribution paths, Tier IV being the only fault tolerant site infrastructure, in which the failure of any component will not impact the computer equipment [22].
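The PUE metric is a simple ratio and can be computed directly from facility metering data. A minimal sketch (the wattage figures below are illustrative, not taken from the paper):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power.

    A PUE of 1.0 would mean every watt reaches the IT equipment; the survey
    cites 2.0 or more as typical for enterprise data centers.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative metering: 1200 kW drawn by the facility, 600 kW of it by IT gear.
print(pue(1200, 600))  # 2.0 -> half the power feeds cooling, UPS losses, etc.
```

By this definition, every watt shaved off cooling or power distribution lowers PUE toward the ideal value of 1.0 without touching the IT load.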
Cooling System

There are three main approaches used for cooling in a data center, as explained by Ioannis Manousakis et al. in [10]: chiller based, water-side economized, and direct evaporative/free cooling. Chiller based cooling uses chillers, cooling towers and computer room air handlers to carry the hot air from the servers to water chiller coils. The heat from the water is transferred to a water loop directed to the cooling towers, which cool down the water using evaporation. The remaining heat is removed by the chillers and the resulting chilled water is sent back to the cooling coils [10]. This system is expensive and leads to the creation of hot spots if the airflow design is poor. Water-side economized cooling uses only the cooling towers to cool the water, and in turn the hot air from the servers; it keeps the chillers off to save energy unless they are explicitly required because the towers are not sufficient. It does not mix outside and inside air [10]. Direct evaporative or free cooling is the most economical option, in which cool air from outside is tapped to maintain the temperature of the IT equipment. It does not require any cooling towers or water chillers: either large fans are used to blow cold air inside, or water is sprayed on the outside air as it passes through a washer system. However, unexpected weather may sometimes cause variations in temperature and humidity levels which would not occur in the previous two approaches.

Apart from these three techniques, some literature differentiates cooling based on whether it is performed by air or liquid. Air based cooling is the use of simple air conditioning; it is the traditional method and is used in a number of small to medium scale data centers. Liquid based cooling involves a cold-plate heat exchanger positioned directly above the Central Processing Unit (CPU), or some other approach that brings water in close but not immersive contact with the CPU [20]. Newcombe et al. propose another taxonomy for free cooling technologies in [41] as follows:

• Direct Air (DA) free cooling: External air is used to cool the facility.

• Indirect Air (IA) free cooling: Circulating air within the facility is passed through a heat exchanger to remove heat to the atmosphere.

• Direct Water (DW) free cooling: Chilled water is cooled by the external air via a free cooling coil.

• Indirect Water (IW) free cooling: Chilled water is cooled by the external conditions via a heat exchanger placed between the condenser and chilled water circuits.

Getting a clear understanding of the cooling technology is imperative to the study of energy optimization techniques, as cooling is one of the major sources of power consumption and GHG emissions in a data center.

Fig. 2. Power supply from main grid to IT equipment [20]

Green Data Centers

As described in [21], there are two ways of establishing green data centers: either use "green" or environment friendly methods to build and design a data center, or "greenify" the day to day running and maintenance of the data center. The literature surveyed in this paper covers both ways. The green data center is a recent concept in which organizations have started using renewable sources of energy to run their centers, trying to reduce the wasteful consumption of electricity by under provisioning resources and running the cooling equipment efficiently, while still providing an acceptable level of performance and meeting SLAs. Google contracted to buy 114 MW of wind power for 20 years from a wind project in Ames to power Google's DC in Council Bluffs, Iowa [23], and Microsoft bought wind power to supply part of its 22.2 MW DC in Dublin, Ireland [24]. There are also methods involving a combination of modifications to infrastructure and processes to make operations more eco-friendly.

IV. OPTIMIZATION TECHNIQUES

This section is the core of the paper and explains the various optimization techniques used to reduce the energy consumption of green data centers. Some of the papers suggest ways to reduce cost but indirectly help in reducing energy intake. The research work ranges from highly technical solutions, which will require considerable programming and customization effort, to generic strategies which can be adopted by almost any data center in any region with little effort and cost.

A. Virtualization

Jun-Ho Huh presented a study in [5] on reducing the cost of running a network server based on NFV (Network Function Virtualization) technology; it mainly helps telecommunication providers, who need to incur excessive costs to maintain sufficient network capacity and bandwidth at all times. Usually the telecommunication providers buy equipment from giants such as Ericsson, Nokia, Cisco and Samsung, which is quite costly as it is developed using advanced technologies. Instead, equipment virtualization can be performed using an x86 compatible server rather than the manufacturer's hardware, since NFV can separate the hardware and software of various pieces of network equipment using a hypervisor. The paper proposes a virtual Evolved Packet Core (vEPC) model for server operation and virtualization to save energy. The Evolved Packet Core unifies voice and data on an Internet Protocol (IP) service architecture, and voice is treated as just another IP application. This allows operators to deploy and operate one packet network for 2G, 3G, WLAN, WiMAX, LTE and fixed access (Ethernet, DSL, cable and fiber) [25].

The advantage of virtualization is reduced cost and efficiency of operation: a normal system using one OS has an effective use rate of 2%-30%, but virtualization can increase it to 80% by operating multiple operating systems on one physical system [5]. The author creates a test bed using a Java application and shows a 24% reduction in cost using virtualization. The paper suggests two user interfaces, one for vEPC server management and another for monitoring server use volume.
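The utilization gains claimed for virtualization translate into energy savings because an idle server still draws a large share of its peak power. A back-of-the-envelope sketch using a simple linear server power model (the wattages, server counts and utilization figures are illustrative assumptions, not values from [5]):

```python
def server_power(idle_w: float, peak_w: float, utilization: float) -> float:
    """Linear power model: idle draw plus utilization-proportional dynamic power."""
    return idle_w + (peak_w - idle_w) * utilization

# Ten lightly loaded physical servers vs. the same aggregate work consolidated
# onto two virtualized hosts (all figures are illustrative assumptions).
before = 10 * server_power(idle_w=150, peak_w=300, utilization=0.10)  # 1650 W
after = 2 * server_power(idle_w=150, peak_w=300, utilization=0.50)    # 450 W
print(f"power saving from consolidation: {1 - after / before:.0%}")
```

The saving comes almost entirely from powering down the eight hosts whose idle draw served no work, which is the intuition behind the consolidation techniques surveyed in the following subsections.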
B. Using Energy Aware Algorithms

Keramollahi et al. present an algorithm in [11] which makes use of a Cloud Broker to reduce energy consumption. The Cloud Broker acts as an intermediary between the users and the data centers, specifically data centers that are geographically diverse. The algorithm proposed by the authors attempts to assign a data center to a user by reading the ratings of the various data centers and assigning the best one to each user, thereby also enhancing user satisfaction. There are three major components of cloud networks considered in this paper: users, brokers and data centers. When a user sends a request to a cloud broker, the broker assigns a data center to the user based on pre-defined policies. The algorithm has two major phases: (1) the Introduction Phase, in which data centers are ranked according to a number of characteristics to compute energy, cost and performance ranks; (2) the Running Phase, which combines another characteristic, the low time of the servers, with the characteristics of phase 1 and computes a lowTime rank. This rank points to the peak work time of servers; if a server is in its peak workload time, the request is diverted to another server [11]. An overall rank is then calculated for each data center at the end of these phases.

The algorithm [11] was tested on 11 data centers using CloudSim, a Java based tool used to simulate various algorithms in cloud networks. The paper claims to save 44% of energy and 7% of cost in the simulated user space. The formula for calculating ranks is the core of this algorithm and is very much based on the authors' discretion; for example, when calculating the energy rank they give 15% weightage to main memory and zero weightage to equipment such as motherboards, fans, PCI slots, etc., which might need to be considered in the case of some medium to large scale data centers. The overall algorithm is quite flexible, though, and can be considered for small to medium data centers.

Kavita A. Sultanpure et al. propose an algorithm in [15] which tries to reduce energy consumption through more efficient task scheduling across the available servers, because scheduling requires a lot of computation. An idle server consumes about 70% of its peak power [26]. This is an immense wastage of energy, and the root cause is the improper way in which task scheduling is done. Energy efficient task scheduling solutions are required to attain variable resource management, live migration, minimal virtual machine design, overall system efficiency, reduction in operating costs and increased system reliability, and to promote environmental protection, all with minimal performance overhead [15]. Their work involves creating an energy aware task scheduling algorithm based on cache memory and broadcasting. The algorithm covers the assignment of jobs to a server and also the creation of new virtual machines in case the servers are overloaded.

Their solution [15] enhances the MBFD algorithm for optimal VM placement by using the Artificial Bee Colony algorithm, which monitors the server performance to make sure it is not overloaded. Whenever a user creates a job, it is posted to the central server for scheduling. The central server then checks the cache memory for any available sub servers. The chosen sub server, which will ideally be within the user's range (distance), processes the job and responds back to the central server. The algorithm suggested in this work is intuitive and makes use of one of the best known algorithms in Artificial Intelligence, but if a sub server gets overloaded, its VMs will have to be migrated to a new sub server, a process that can consume a lot of computational power if not implemented efficiently.

Ismail Alan et al. in [19] propose three data transfer algorithms which aim to achieve a high throughput rate while reducing energy consumption. TCP pipelining, parallelism and concurrency play a significant role in the network throughput that can be achieved. Tuning these parameters is a challenging problem, as poor choices might lead to underutilization of the network, degrade performance, overly increase energy consumption or overload the network [19]. The first algorithm proposed by the authors is the Minimum Energy algorithm (MinE), which tries to minimize energy consumption without any regard to performance. It divides the data into small, medium and large chunks based on file sizes and the Bandwidth Delay Product (BDP). It then fine tunes the three protocol parameters accordingly; for example, concurrency levels for small and medium files are kept higher, while concurrency levels for large files are kept very low, as using more concurrent channels for large files increases power consumption. The second algorithm suggested by the authors in [19] is the High Throughput Energy-Efficient (HTEE) algorithm, which targets the best possible concurrency levels to achieve high throughput and low energy consumption. It works along similar lines to MinE, dividing the data into chunks and using the BDP. The third algorithm is the SLA Based Energy-Efficient Transfer Algorithm, whose main purpose is to meet the SLA requirements while reducing energy consumption. End users define their throughput requirements depending on the SLA, and the authors suggest a way of fine tuning the concurrency parameter to achieve the desired performance.

The authors evaluate the algorithms on testbeds suggested by previous research: XSEDE [27], FutureGrid [28] and DIDCLab [29]. Based on their results, the authors claim that these algorithms not only reduce the energy usage at the end systems but also reduce the energy consumed by the network infrastructure. The algorithms provide a flexible way of modifying the parameters to achieve results based on the requirements of the organization. In this work the authors used a particular kind of hardware equipment, which drives the selection of the tuning parameters; if the equipment changes, the parameters will have to be changed accordingly.
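The chunking idea behind MinE and HTEE, classifying files relative to the Bandwidth Delay Product and assigning concurrency per class, can be sketched as follows (the thresholds and concurrency levels are illustrative assumptions, not the authors' tuned values):

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth Delay Product: bytes in flight needed to keep the pipe full."""
    return bandwidth_bps / 8 * rtt_s

def classify(file_size_bytes: int, bdp: float) -> str:
    """Bucket a file relative to the BDP (thresholds are illustrative)."""
    if file_size_bytes < bdp:
        return "small"
    if file_size_bytes < 10 * bdp:
        return "medium"
    return "large"

# Higher concurrency for small/medium transfers, very low for large ones,
# mirroring the qualitative MinE rule (values are assumptions, not tuned).
CONCURRENCY = {"small": 8, "medium": 4, "large": 1}

bdp = bdp_bytes(1e9, 0.05)          # 1 Gbps link, 50 ms RTT -> 6.25 MB
bucket = classify(100 * 1024**2, bdp)
print(bucket, CONCURRENCY[bucket])  # large 1
```

The single channel for large files reflects the observation in [19] that extra concurrent channels on bulk transfers raise power draw without a matching throughput gain.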
C. Virtual Machine Management

Ali Pahlevan et al. propose a VM allocation scheme in [9] which requires a minimal number of servers and reduces the carbon footprint and the total energy consumption of data centers. Their solution makes use of Hybrid Electric Systems (HES) and renewable energy sources in its multi-level and multi-objective framework. The framework basically consists of two modules running in parallel: (1) a Data Center Energy Controller, which reduces energy intake without any noticeable QoS degradation and provides energy consumption data; and (2) a Green Energy Controller, which manages the renewable sources and the HES and provides feedback to the Data Center Energy Controller. The latter maximizes the lifetime of the battery banks and manages a photovoltaic source to reduce the energy consumed. Module 1 is based on the correlation-aware VM allocation scheme proposed in [30]. The authors' work mainly proposes an HES architecture which can replace the standard UPS (Uninterruptible Power Supply) and can exploit the energy buffers for locally generated renewable energy. They also design a control loop which connects the VM allocation scheme [30] to the HES manager and optimizes resources in real time.

They make use of real data center setups to gather utilization data and test their framework. The authors claim to attain 11.6% energy savings, a 10.4% improvement in QoS and a 96% saving in the electricity bill. The solution proposed in this work can be implemented in large scale data centers, and the energy savings look considerably good.

Sambit Kumar Mishra et al. propose an algorithm in [16] which tries to solve the optimal Virtual Machine placement problem, which is NP-complete. The problem is to map tasks to a VM based on their resource requirements, and then the VM to a physical machine (PM), in order to reduce energy consumption and enhance resource utilization. Resource allocation depends on the kind of tasks that need to be scheduled: CPU intensive, memory intensive, I/O intensive and communication intensive. The authors suggest a sub-optimal solution by proposing an algorithm that reduces the number of active physical machines, the makespan of the project and the task rejection rate, thereby reducing energy intake. It works by initializing all the hosts in the sleep state. When a task arrives, the host manager chooses a VM type based on the resource requirement. If an active (awake) host has the necessary resources, the VM is created on top of it; otherwise a sleeping host is awakened to create the VM on top of it. The time limit of this flow is constrained by the SLA of the task that needs to be completed. They call it the Energy-aware Task-based Virtual Machine Consolidation (ETVMC) algorithm.

They design the algorithm in Java and test it using the CloudSim simulator and Xen hypervisors with randomly generated service requests. The authors claim that this algorithm performs better than FCFS, Round-Robin and EERACC [31], with a lower task rejection rate and higher scalability. The solution suggested in this paper will work in almost any practical scenario, as the algorithm takes into consideration a number of possible types of tasks and VMs. Figure 3 and Figure 4 show the results of using their algorithm.

Fig. 3. Comparison of energy consumption of the system for ETVMC, EERACC, Round-Robin and FCFS [16]

Fig. 4. Comparison of energy consumption of the system for ETVMC, EERACC, Round-Robin and FCFS [16]

Bo Zhang et al. in [17] propose a garbage collection mechanism for the cloud which reduces unnecessary carbon emissions by cloud providers through efficient provisioning of the underlying infrastructure. As cloud computing gives an image of an infinite number of resources in order to maintain elasticity, there is a lot of energy wastage as a result of resource leaks. The authors propose an approach for detecting, recycling and recovering idle VM instances so that their resources become usable again when the cloud infrastructure reaches its limits of exhaustion. They name it CloudGC; it is a dedicated service for VM garbage collection working in integration with OpenStack, and it is designed as a middleware service so that it can easily integrate with existing IaaS services. The idea is based on the garbage collection mechanism used in the Java Virtual Machine (JVM) to free the memory occupied by objects which no longer exist in the application.

The approach involves three phases: (1) Detecting idle VM instances, in which a VM instance is added to the list of available instances as soon as it becomes idle; (2) Recycling, which saves the current state of the instance once its resources are released so that it can be recovered later; and (3) Recovering, which automatically loads the snapshot from the OpenStack Image Service (Glance) and restores the initial VM configuration. The authors claim that as long as enough resources are available in the cloud, CloudGC will incur very little overhead, adding only 5 seconds to the standard provisioning process. The solution is based on the already established, successful concept of garbage collection and effectively demonstrates how it is applicable in the cloud. The approach looks promising and could be evaluated in integration with other cloud solutions like AWS and Microsoft Azure.

Amine Barkat et al. present a framework in [22] for the joint management of cloud data centers and their networks. They name it the "Global Green Cloud Management Framework" and propose to use it in a Platform-as-a-Service (PaaS) scenario where the IT infrastructure is provided by geographically distributed data centers connected by a network. As part of their solution they designed an optimization model based on Mixed Integer Linear Programming (MILP) which aims to reduce energy costs and exploit locally available green energy sources. The energy costs discussed involve the cost of consumption by the data center servers and the interconnecting network. The goal of the optimization includes reducing energy intake from the electrical grids as well as from renewable sources of energy.

They consider data centers that have thousands of physical servers, and their model is based on studying the traffic profile of the interconnecting links. Based on this profile, the system takes decisions on allocating VMs to servers and migrating VMs across the data centers to reduce energy. The model also takes into consideration the energy cost of VM migration, the location of the data center and its time zone, the price of energy in that region and the availability of renewable resources. The mathematical model in turn uses various parameters, such as the available bandwidth, the number of VMs of each type, the PUE (Power Usage Effectiveness) of the data center, the number of routers in the link, the energy loss rate and many more. The authors show that the model can be solved using a MILP solver (CPLEX) and tested it, achieving significant cost savings compared to previous research. According to the authors, their model will force data center providers to reduce greenhouse gas emissions by using more renewable sources of energy.

D. Temperature Management Techniques

Ioannis Manousakis et al. provide a framework in [10] that reduces cooling costs, and thereby energy consumption, by under provisioning the cooling infrastructure. A few organizations have already started working towards reducing energy intake using techniques like increasing the operating temperature of the equipment and taking advantage of the weather conditions at the location; however, there is no significant reduction in capital costs, as traditional provisioning approaches are overly expensive and no prior work had been done towards efficient provisioning of cooling infrastructure. This paper claims to reduce both operational and capital costs by making use of cheap cooling technologies and reducing the cooling capacity required. As a solution the authors propose to deviate from the traditional conservative approach of provisioning the cooling infrastructure so that it can support peak IT power consumption regardless of the surrounding weather conditions. The authors' approach involves using a profile of the data center, which includes the expected power consumption based on a rough estimate of the highs and lows of power consumption over a period of time (e.g. one week). Using this "power profile" and the expected outside conditions, the provisioning of the cooling infrastructure is decided, with the aim of reducing the total cost of ownership while simultaneously meeting the performance constraints.

They describe the architecture of the two main components, the data center simulator and the optimizer, and provide the complex cost calculations and trade-offs involved in deploying the suggested system. They run a number of simulations to design the cooling behavior, power and cost models, starting from a small area and then extrapolating the results to a larger data center. They then evaluate the solution for Parasol, covering four locations under diverse weather conditions, and claim to achieve 55% cost savings in energy consumption.
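The profile-based provisioning idea in [10], sizing cooling against the observed power profile rather than the theoretical peak, can be sketched as follows (the quantile rule and the figures are illustrative assumptions, not the authors' optimizer):

```python
# Size the cooling plant near the top of the measured power profile instead of
# the servers' combined nameplate peak. The quantile cutoff is an assumed,
# illustrative policy; [10] instead optimizes per-location cost models.
def provisioned_cooling_kw(profile_kw: list, quantile: float = 0.95) -> float:
    ordered = sorted(profile_kw)
    return ordered[int(quantile * (len(ordered) - 1))]

# One reading per half-day over a week (illustrative kW figures).
week = [310, 295, 402, 388, 420, 350, 298, 510, 470, 330, 305, 340, 415, 360]
print(provisioned_cooling_kw(week))  # 470, below the 510 kW worst-case peak
```

Under-provisioned capacity like this relies on performance capping or free-cooling headroom to ride out the rare intervals above the cutoff, which is exactly the trade-off the next technique makes explicit.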

Sungkap Yeo et al. propose a technique in [12] which is specially designed for small and medium scale data centers, which find it difficult to cut down their expenditure on cooling. These centers end up wasting a lot of energy on cooling for two reasons mentioned by the authors: (1) the safety margin that the cooling units must ensure, and (2) the non-uniform inlet air temperatures across servers [12]. Each server is located at a different position with respect to the CRAC (computer room air conditioning) unit and the floor, which leads to non-uniform inlet air temperatures. Cooling costs amount to about 70% of the entire electrical power used in these data centers. The authors therefore propose a technique for maximizing power efficiency while minimizing overheating using ambient temperature-aware capping. The authors claim that their technique guarantees the reliability of the IT equipment by trading off performance, which greatly improves the cooling efficiency. In their ATAC (Ambient Temperature-Aware Capping) technique, the inlet air supply provides less cooling air to the servers, which then need to lower their frequency and voltage, thereby capping their performance and reducing energy consumption.

The mechanism involves collecting information about each server, such as the temperature of the core and of the inlet air, the rotational speed of the fans, and the thermal design power (TDP) of the CPU. The algorithm starts by supplying the air at the coolest possible temperature. It then compares the inlet air temperature (T_inletair) with the emergency temperature (T_emergency) of the servers. If the inlet air temperature is greater than the emergency temperature, the mechanism kick starts performance capping by lowering the supply air temperature. The monitoring is done using

generated which includes data generated by wind tunnel simulations and operational log data from the data centers, used to build data-driven models for the simulator components [13]. This data can be explored analytically to find out patterns of results that repeat across different kinds of configurations.

Kashif Bilal et al. propose to include "Green Networking" as part of data center design in [14]. According to the authors, as attention has already been paid to reducing the energy consumption of the cooling equipment and servers, the focus needs to shift to the networking equipment, which dissipates a lot of heat and ends up contributing significantly to greenhouse gas emissions. The use
thermal sensors embedded in the servers. They used of energy efficient equipment can reduce this cost and
modified SimWare [32] as an evaluating platform. The data wastage but legacy equipment needs to be dealt with in
center architecture used by the authors is a raised platform order to “greenify” the existing data centers. There are
with hot and cold aisle layout without any partition between existing green networking techniques classified into 3 kinds
the aisles. They make use of Google cluster data [33, 34] to – consolidation, selective connectedness and proportional
feed SimWare. The authors claim 38% savings in cooling computing [14]. Consolidation tries to consolidate entire
power, 7% savings in total power with a performance traffic on a subset of devices to exploit overprovisioning and
degradation of only 1%. path diversity. Selective connectedness requires transferring
idle devices into sleep (inactive) modes. Proportional
E. Data Center Design
computing refers to the process of system consuming energy
Avrilia Floratou et al. propose to change the data center proportional to utilization. The authors highlight that
design process into a systematic process in [13] using an existing architectures mainly focus on performance,
integrated approach. They suggest that a data center should reliability, fault tolerance, bandwidth and agility. These
be designed taking into consideration hardware and software architectures involve over provisioning and underutilization
interdependencies and their impact on user’s expectations. of resources.
Additionally, a large scale simulation is needed to assess the
impact of design process on the requirements of users and The paper mainly talks about 2 green networking
cloud providers. For this simulation, a “wind tunnel” should techniques: (1) workload consolidation and (2) scaling down
be used. Traditional methods of data center design which the communication link data rate. Consolidating the entire
work iteratively by designing the software configuration network traffic on a subset of devices and links can increase
first followed by hardware setup accordingly do not utilization and save power by sending some devices to sleep
consider the interaction between the two components and mode. But the transition from sleep to active mode may
are therefore not optimal. The authors suggest 3 methods for cause performance degradation as it is time consuming. So,
systematic data center design: another kind of optimization is used called Adaptive Link
(1) Perform what-if analysis on small prototype clusters Rate (ALR). As the name suggests, this technique scales
with different types of configurations [13] – This method down the data rate of network equipment which is
will be expensive to carry out if it to be done for a large underutilized to save energy. This reduces the overall
setup and will be time consuming. A number of different energy saved as compared to the consolidation method but it
sizes of clusters will have to be analyzed to reach the final does not affect the network operations.
design. F. Mathematical Model
(2) Analytical Modelling which makes use of analytical
Mahdi Ghamkhari et al. propose a mathematical model
methods [13] – This approach will require extensive
[18] to maximize the data center profit while minimizing
understanding of mathematical models and will be
energy consumption. The authors’ main motivation behind
challenging to implement to get accurate results.
suggesting this approach was that there was no prior work to
(3) Simulation-based Wind Tunnel [13] – In this method the
determine this trade-off between cost and revenue of a data
behavior of all the data center components such as hardware
center. The trade-off exists because minimizing cost
and software are simulated in order to realize a more
involves turning down servers, scaling down CPU clocks or
complex model. It gets rid of the repetitive and costly
migrating VMs [18] which degrades the QoS. So, the
process of testing with different sizes of clusters to reach a
authors suggest that their approach is systematically
final goal. Thus, this method is better from a financial
maximizes a green data center’s profit. Their model takes
perspective and can handle a number of design problems
into account a number of parameters such as computer
such as Performance SLAs, Availability SLAs and
server’s power consumption profiles, data center’s power
Hardware provisioning. The authors claim that their design
usage effectiveness, price of electricity, availability of
is modular and extensible. A large amount simulation data is
renewable generation, total workload in terms of the rate at
which service requests are received at each time of day, at majority of the data centers to get significant energy
practical service-level agreements and their parameters for savings without requiring the addition of any special
service deadline, service payment, and service violation equipment or changes in networking infrastructure. The
penalty [18]. They provide profit maximizing strategies paper lists out 7 basic strategies as follows:
(mathematical formulae) for 2 cases: with and without (1) Regular Hardware Audits - This involves replacing the
behind the meter renewable generators. IT equipment which is no longer in use specifically the
“Comatose servers” which refers to servers that are still on
The system model that they evaluated consists of service
the rack but are no longer in use. This strategy was adopted
requests being placed in a queue before being selected by
by Barclays Bank to achieve savings of $4.5 million by
one of the servers and results in optimal utilization. They
replacing 9126 unused servers [6]. Non IT equipment such
tested the performance of their model with experimental
as UPS (Uninterruptable Power Supply) should also be
values for workload, renewable power generation, SLA and
inspected regularly for operational efficiency. This will help
price of electricity using computer simulations. They then
in achieving a targeted PUE at all times.
compare their optimization-based designs against two
(2) Routine PUE Measurement – This will help in getting a
previous algorithms proposed [35, 36]. The authors claim
clear picture of how PUE gets affected by changing
that the profit gain achieved by their model outperforms the
configuration or weather. A regular monitoring of PUE
previous designs [35, 36] mainly because their model
fluctuations will help the data center manager in critical
considers data center profit as a function of service rate and
decision making. Figure 5 illustrates the PUE monitoring
the queueing model they used (G/D/1) provides a realistic
dashboard.
distribution of workload as compared to the queueing model
(3) Upgrade hardware and improve server efficiency – This
(M/M/1) used by previous approaches.
strategy mainly hints at the techniques discussed previously
G. Software Technique in this paper such as workload consolidation and
Sreelekshmy Syamalakumari published a case study on virtualization to optimize server utilization.
using Intel Intelligent Power Node Manager [7] for energy (4) Improving data center cooling efficiency – This strategy
efficiency in Neusoft Aclome, a cloud management deals with working on the cooling infrastructure to save
environment. It is a software component that comes as part energy such as hot and cold aisle containment to prevent the
of Intel Xeon processor E5 family. Its various features creation of hot spots, thermal monitoring, use of
include providing power monitoring, thermal monitoring, temperature sensors and taking advantage of cold air outside
and policy-based power management for individual servers to utilize “free cooling”.
[7]. It is embedded in Intel server chipsets and is an out-of- (5) Increase the data center temperature – The paper states
band power management policy engine. It works with that setting the data center operating temperatures as per
server’s hardware, software and operating system to ASHRAE [4] recommendations to 18℃ ~ 27℃ (64.4°F ~
dynamically adjust platform power to achieve maximum 80.6°F) will help in achieving considerable amount of
performance for a single node. Its key features include (1) savings in cooling costs as compared to the conventional
Dynamic power monitoring (2) Platform power capping by 7℃.
setting a targeted power budget while maintaining (6) Introduction of DCIM system – A Data Center
performance levels (3) Power threshold alerting when the Infrastructure Management System will provide an intuitive
targeted budget cannot be maintained, it sends alerts to the user interface to the administrator to get a holistic view of
console. the data center infrastructure. The administrator can easily
identify unused servers and other resources by monitoring
Neusoft Aclom is provided by China based IT services their utilization from time to time. This will help in
provider, Neusoft. It provides high level power management implementation of strategy 1 and 2.
features example, resource on demand allocation and power (7) Use clean, renewable energy sources – Organizations
on demand allocation so that data center managers can use which require large scale data centers such as Apple,
energy efficiently. It optimizes productivity per watt in Google and Microsoft have already shelled a huge amount
order to reduce total cost of ownership [7]. A user can of sum to install solar panels and wind turbines in order to
monitor the power in real time and make adjustments tap renewable energy. The paper states that small and
dynamically. Both these software - Neusoft Aclome and medium scale data centers should also take advantage of
Intel Node Manager can interact using an IPMI (Intelligent renewable energy by cooperating with local utility
Platform Management) based interface. The tests performed providers.
with these two software show that a 13% power saving can
be achieved with less than 0.5% degradation in primary
performance [7].
H. General Strategies
Delta Power Solutions which is the leading provider of
data center infrastructure across the globe [37] published a
white paper [6] providing generic strategies that can be used Fig. 5. PUE Monitoring Dashboard [6]
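To make strategy (2) concrete: PUE is the ratio of total facility power to IT equipment power, and tracking it over time is what a monitoring dashboard automates. A minimal sketch of the calculation follows; all meter readings are hypothetical and not taken from [6].

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power.
    An ideal facility approaches 1.0; typical surveyed values are near 1.8."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical hourly meter readings in kW: (facility meter, IT meter)
readings = [(950, 520), (1010, 540), (880, 500)]
samples = [pue(facility, it) for facility, it in readings]
avg_pue = sum(samples) / len(samples)
print(f"average PUE = {avg_pue:.2f}")  # prints "average PUE = 1.82"
```

Logging this value alongside weather and configuration changes is what lets the data center manager correlate PUE fluctuations with their causes, as the white paper recommends.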
Eduard Oró et al. in [20] provide various ways in which modern day data centers are integrating their infrastructure with renewable sources of energy. They give an overview of three such concrete ways: (1) Data centers with "on-site" generation – These centers generate their own energy, using solar collectors to generate thermal energy, and photovoltaic panels, wind or hydro turbines to generate electrical energy. The energy generation and data center infrastructure coexist at the same location. The main advantage of an "on-site" facility is that there is not much loss of power, as it is not transmitted over long distances [20]. (2) Data centers with "off-site" generation – Sometimes the geographical and environmental conditions are not conducive to "on-site" generation of renewable energy. In this case, the data center owners invest in renewable energy plants at different locations and tap the energy produced using heating/cooling grids for transmission [20]. There are some losses due to transmission over a certain distance, but these can be compensated for by increasing the plant's generation capacity. (3) Data centers with renewable energy provided by third parties – In this case the data center operator is not actively involved in energy generation and provision.

I. Trends in Energy Consumption

Maria Avgerinou et al. present a detailed view of trends in energy consumption in data centers across the European Union in [8]. The authors provide statistical results of a study done to capture the energy consumption, the steps implemented by data centers, and the PUE efficiency achieved as a result. The data for analysis was provided by a number of participating organizations (Figure 6) and included information such as operations, IT measurement, electricity data, building information and other practices implemented in the facility from 2000 to 2012. Organizations used free cooling technologies such as Direct Air (DA), Indirect Air (IA), Direct Water (DW) and Indirect Water (IW) [8]. The reports shared showed that the average PUE is lowest in the case of the IW technique, so it is the most efficient form of free cooling.

Fig. 6. Geographical distribution of data centers participating in the Code of Conduct [8]

The main aim of this study was to show that by using renewable resources of energy (e.g., photovoltaic, wind power, shared generation using biomass/bio-fuel, etc.) the carbon footprint can be reduced considerably. The data was submitted either monthly or annually. Results show that the average PUE of all approved sites is 1.80 (Figure 7) and that the average annual consumption has decreased by more than 700 MWh, suggesting the adoption of better practices and more energy efficient systems [8].

Fig. 7. Number of Data Centers per PUE range [8]

J. How green is your data center?

Lizhe Wang et al. provide performance metrics in [21] to measure how green a data center is. These performance metrics help in evaluating the environmental impacts which occur as a result of operating a data center, by providing a qualitative and a quantitative assessment. The authors state that their main motivation is that existing research focusses on energy optimization and management, and little work has been done to study "green" metrics; so, in this paper they provide a taxonomy study of these metrics.

The authors also specify the criteria that metrics must meet: technically sound, holistic and balanced system evaluation, all-featured evaluation, low cost to implement, and ease of use. They express a performance metric as a ratio of useful work to environmental cost. The metrics are further divided into 2 categories: basic metrics and extended metrics. Basic metrics include greenhouse gas, humidity, thermal metrics and power metrics, and provide a view of how green a data center is. Extended metrics are functions of basic metrics, and they include server utilization, network utilization, storage utilization, power, area, weight, cooling and airflow. The paper can be referred to by researchers and organizations to carry out a monthly assessment of how efficient their green data center is and what kind of optimizations can be achieved in the future.
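The ratio view of the metrics above can be sketched in code. The field names, the sample numbers and the CO2-based extended metric below are illustrative assumptions for the sketch, not definitions taken from [21].

```python
from dataclasses import dataclass

@dataclass
class BasicMetrics:
    # Basic metrics: direct monthly readings indicating how green the site is
    it_energy_kwh: float        # energy consumed by the IT equipment
    facility_energy_kwh: float  # total facility energy (IT + cooling + losses)
    useful_work_units: float    # e.g., service requests completed
    co2_kg: float               # greenhouse gas emitted for the energy used

def pue(m: BasicMetrics) -> float:
    # Power metric: total facility energy over IT energy
    return m.facility_energy_kwh / m.it_energy_kwh

def work_per_env_cost(m: BasicMetrics) -> float:
    # Extended metric in the spirit of "useful work / environmental cost"
    return m.useful_work_units / m.co2_kg

# One hypothetical month of readings for a mid-size facility
month = BasicMetrics(it_energy_kwh=400_000, facility_energy_kwh=720_000,
                     useful_work_units=9.0e9, co2_kg=250_000)
print(f"PUE = {pue(month):.2f}, requests per kg CO2 = {work_per_env_cost(month):,.0f}")
```

Computing such ratios monthly, as the paper suggests, turns the taxonomy into a running assessment: a falling work-per-CO2 ratio flags the facility before the absolute energy bill does.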
Table 1. Open Issues and Summary of the Optimization Techniques Suggested

1. Jun-Ho Huh (2018) [5]
   Technique: Virtualization
   Proposed scheme: The paper proposes a virtual Evolved Packet Core (vEPC) model for server operation and virtualization to save energy.
   Open issues: Issues can arise in the future when traffic increases, as it will result in an increased number of hypervisors competing for insufficient resources.

2. Keramollahi et al. (2017) [11]
   Technique: Energy Aware Algorithms
   Proposed scheme: The paper makes use of a Cloud Broker to design an algorithm for reduced energy consumption; it attempts to assign a data center to a user by reading the ratings of various data centers and assigning the best one to each user, thereby enhancing user satisfaction as well.

3. Kavita A. Sultanpure et al. (2018) [15]
   Technique: Energy Aware Algorithms
   Proposed scheme: The solution proposes an energy aware task scheduling algorithm based on the Artificial Bee Colony algorithm.
   Open issues: There is no information available on whether the algorithm has been tested and evaluated in a realistic scenario.

4. Ismail Alan et al. (2015) [19]
   Technique: Energy Aware Algorithms
   Proposed scheme: Proposes 3 data transfer algorithms which aim to achieve a high throughput rate while reducing energy consumption via fine tuning of parameters such as TCP pipelining, parallelism and concurrency.
   Open issues: Environmental conditions of the surrounding region in which the data center is located, which could be tapped in cold countries, are not considered at all, and they will affect the fine tuning of the parameters again.

5. Ali Pahlevan et al. (2017) [9]
   Technique: Virtual Machine Management
   Proposed scheme: Proposes a VM allocation scheme which uses a HES architecture that can replace the standard UPS system and can exploit the energy buffers for locally generated renewable energy.
   Open issues: The use of photovoltaic sources depends a lot on outside weather conditions, so this solution might not be applicable in a number of regions where data centers already exist.

6. Sambit Kumar Mishra et al. (2018) [16]
   Technique: Virtual Machine Management
   Proposed scheme: The authors suggest a sub-optimal solution to the NP-complete Virtual Machine placement problem by proposing an algorithm that reduces the number of active physical machines, the makespan of the project and the task rejection rate, thereby reducing energy intake.
   Open issues: Since the allocation scheme tries to minimize energy intake within the given SLA, it would be interesting to see how the algorithm works in a real-world scenario and whether it leads to a significant reduction in energy consumption.

7. Bo Zhang et al. (2017) [17]
   Technique: Virtual Machine Management
   Proposed scheme: Proposes a garbage collection mechanism in the cloud which will reduce unnecessary carbon emissions by cloud providers through efficient provisioning of the underlying infrastructure and by detecting, recycling and recovering idle VM instances.

8. Amine Barkat et al. (2017) [22]
   Technique: Virtual Machine Management
   Proposed scheme: Proposes a framework based on a Mixed Integer Linear Programming (MILP) optimization model that aims to reduce energy costs by exploiting locally available green energy sources; it specifically targets geo-distributed data centers which are fully interconnected.

9. Ioannis Manousakis et al. (2015) [10]
   Technique: Temperature Management Techniques
   Proposed scheme: Provides a framework that reduces cooling costs, and thereby energy consumption, by under-provisioning the cooling infrastructure while maintaining performance constraints.
   Open issues: Implementation of the solution would be difficult in regions where the temperature crosses the 32 degrees Celsius mark; large scale data centers already exist in such regions.

10. Sungkap Yeo et al. (2014) [12]
    Technique: Temperature Management Techniques
    Proposed scheme: The authors propose to enable the inlet air supply to provide less cooling air to the servers, which must then lower their frequency and voltage, thereby capping their performance and reducing energy consumption.
    Open issues: The scheme can be risky if the inlet air temperature cannot be controlled accurately, as businesses are dealing with expensive hardware equipment.

11. Avrilia Floratou et al. (2014) [13]
    Technique: Data Center Design
    Proposed scheme: Proposes to change the data center design process into a systematic process using an integrated approach of what-if analysis followed by analytical modelling and a simulation-based wind tunnel.
    Open issues: The approach could be very expensive and technically overwhelming for small to medium scale data centers.

12. Kashif Bilal et al. (2013) [14]
    Technique: Data Center Design
    Proposed scheme: The paper proposes to include 2 green networking techniques, workload consolidation and scaling down the communication link data rate, as part of the data center design process.

13. Mahdi Ghamkhari et al. (2013) [18]
    Technique: Mathematical Model
    Proposed scheme: Proposes a mathematical model to maximize data center profit while minimizing energy consumption.

14. Sreelekshmy Syamalakumari (2012) [7]
    Technique: Software Technique
    Proposed scheme: Provides a case study on using Intel Intelligent Power Node Manager for energy efficiency in Neusoft Aclome, a cloud management environment.

15. Delta Power Solutions (2014) [6]
    Technique: General Strategies
    Proposed scheme: Provides generic strategies that can be used at a majority of data centers to achieve significant energy savings without requiring the addition of any special equipment or changes in networking infrastructure.

16. Eduard Oró et al. (2015) [20]
    Technique: General Strategies
    Proposed scheme: Provides various ways in which modern data centers are integrating their infrastructure with renewable sources of energy.

17. Maria Avgerinou et al. (2017) [8]
    Technique: Trends in Energy Consumption
    Proposed scheme: The authors provide statistical results of a study done to capture the energy consumption, the steps implemented by data centers, and the PUE efficiency achieved as a result.

18. Lizhe Wang et al. (2011) [21]
    Technique: How green is your data center?
    Proposed scheme: Provides a taxonomy of performance metrics to measure how "green" a data center is.
V. CHALLENGES AND FUTURE WORK

As mentioned earlier in the paper, research into energy optimization techniques started in the early 2000s, but a lot still needs to be done to encourage businesses and service providers to implement those techniques. Major players in the IT industry are doing their bit by moving towards renewable sources of energy, but the use of other techniques remains a challenge due to either a technology gap or financial constraints. The techniques need to be evaluated and tested in real-time scenarios, but that would incur significant implementation costs. For example, in [5] the author claims that the suggested technology will be a stepping stone for future sustainable computing. However, even though virtualization using vEPC can increase network capacity by creating additional servers without the use of new equipment, issues can arise as traffic increases further in the future as a result of the increased usage of IoT devices, because an increasing number of hypervisors will start competing for insufficient resources. The organizations will also have to shell out additional money for purchasing virtualization software, so there could be a trade-off between the cost savings from spending less energy and the costs incurred in setting up the vEPC model.

Another challenge is the development of techniques that keep in mind the outside weather conditions: a particular solution might work for data centers in one region while failing in other regions. The solution suggested in [19] will incur significant effort to fine-tune the parameters as and when the weather changes in order to maintain throughput while reducing energy consumption. A similar issue arises in areas depending on wind and solar energy as renewable sources, because the conditions do not stay the same throughout the year. The risk of damage to expensive IT equipment, and its non-availability, is another challenge that prevents the implementation of a number of techniques like the ones suggested in [10, 12]. Careful consideration needs to be given to the temperature settings recommended by commercial hardware equipment providers. Right from the time cloud computing became popular, the main aim has been to meet the SLAs signed with consumers in order to gain a competitive advantage; future work to improve green data centers should be done with due consideration to SLA requirements. Another important aspect is the customizability of a solution, so that legacy data centers can easily use it without requiring design changes.

Networking infrastructure needs to be designed efficiently in order to "greenify" a data center. Overprovisioned or underutilized resources can cause significant wastage of energy. At the same time, improving utilization efficiency should not result in performance degradation and latency. Simulating multiple scenarios with different kinds of infrastructure might be helpful to determine the perfect setup, but it is challenging as it requires realistic traffic generation. Evaluation of the cooling system being used is imperative, as it is the main source of energy consumption. This is where metrics play an important role. However, most organizations fail to perform metrics monitoring as a regular activity and often end up using old, inefficient cooling infrastructure, which leads to increased GHG emissions. Data center personnel need to be suitably trained for the maintenance and monitoring of the cooling infrastructure on a monthly basis. Energy-aware VM allocation schemes have been discussed in detail in this paper and look promising, but future research also needs to find a solution to reduce the overhead caused by VM management.

VI. RELATED WORK

Anton Beloglazov et al. [42] provide a survey and taxonomy of energy-efficient data centers. They survey and classify a number of ways to achieve power and energy efficiency in data centers, but the work is quite old, as it was published in 2011; a lot of the research done in the past few years is covered in this paper. On the same lines, Kashif Bilal et al. [43] give an overview of research done in data center networks up till 2013 and stress data center network architecture, traffic management, characterization and performance monitoring. Fanxin Kong et al. [44] cover only three ways of achieving optimization, namely workload scheduling, virtual machine management, and energy capacity planning. San Murugesan et al. [45] discuss the challenges in introducing green practices in IT, developments prior to 2008 and directions for the future. Ramon Mata-Toledo et al. [46] discuss the impacts of toxic waste generated by electronic devices and of global warming specifically with respect to data centers, but do not provide a detailed view of the optimization techniques from past research.

VII. CONCLUSION

This paper provides an in-depth view of the energy optimization techniques used in green data centers. The paper begins with a discussion of cloud computing and the negative effects of data centers on the environment, with supporting facts and figures. This is followed by a detailed view of the architecture of a typical data center and the cooling infrastructure and power supply that are used. Next comes the survey of a number of optimization techniques suggested by the academic and research community, which is the core of the paper. The papers surveyed cover a wide variety of techniques such as virtualization, usage of energy aware algorithms, virtual machine management, temperature management, data center design, mathematical models, software techniques and general strategies. Current trends in data center energy consumption, and metrics for measuring how environment friendly or "green" a data center is, have also been covered. The paper ends by highlighting the challenges which could occur and the suggestions which need to be addressed as part of future work.
REFERENCES

[1] Data Centre solutions. Available from http://www.mayflex.com/_assets/downloads/MF_Data_Centre_Product_Guide_Web.pdf
[2] M. Gupta and S. Singh. Greening of the internet. In ACM SIGCOMM.
[3] R. Katz, "Tech titans building boom," IEEE Spectrum, vol. 46, no. 2, pp. 40–54, Feb. 2009.
[4] ASHRAE whitepaper. Thermal guidelines for data processing environments—expanded data center classes and usage guidance, Technical Committee (TC) 9.9, ASHRAE; 2011.
[5] Huh, Jun-Ho. (2018). Server Operation and Virtualization to Save Energy and Cost in Future Sustainable Computing. Sustainability, 10, 1919. 10.3390/su10061919.
[6] Delta Power Solutions. (2014). Overview of Green Energy Strategies and Techniques for Modern Data Centers.
[7] Syamalakumari, S. (2012). Energy Efficiency using Intel® Intelligent Power Node Manager. Retrieved October 19, 2018, from https://software.intel.com/en-us/articles/energy-efficiency-using-intel-intelligent-power-node-manager
[8] Castellazzi, Luca & Maria, AVGERINOU & Bertoldi, Paolo. (2017). Trends in data centre energy consumption under the European Code of Conduct for data centre energy efficiency. 10.2760/358256.
[9] Pahlevan, Ali & Rossi, Maurizio & G. Del Valle, Pablo & Brunelli, Davide & Atienza, David. (2017). Joint Computing and Electric Systems Optimization for Green Datacenters. 1-21. 10.1007/978-94-017-7358-4_35-1.
[10] Ioannis Manousakis, Íñigo Goiri, Sriram Sankar, Thu D. Nguyen, and Ricardo Bianchini. 2015. CoolProvision: underprovisioning datacenter cooling. In Proceedings of the Sixth ACM Symposium on Cloud Computing (SoCC '15). ACM, New York, NY, USA, 356-367. DOI: https://doi.org/10.1145/2806777.2806938
[11] Keramollahi, Ahmadreza & Chalechale, Abdolah & Ahmadi, Mahmood. (2017). Energy consumption improvement and cost saving by Cloud broker in Cloud datacenters. The International Arab Journal of Information Technology, 15.
[12] Yeo, Sungkap & M. Hossain, Mohammad & Huang, J.-C & Lee, H.-H.S. (2014). ATAC: Ambient Temperature-Aware Capping for Power Efficient Datacenters. Proceedings of the 5th ACM Symposium on Cloud Computing, SOCC 2014. 10.1145/2670979.2670996.
[13] Floratou, Avrilia & Bertsch, Frank & Patel, Jignesh & Laskaris, Georgios. (2014). Towards building wind tunnels for data center design. Proceedings of the VLDB Endowment, 7, 781-784. 10.14778/2732939.2732950.
[14] Bilal, Kashif & Khan, Samee & Zomaya, Albert. (2013). Green Data Center Networks: Challenges and Opportunities. Proceedings - 11th International Conference on Frontiers of Information Technology, FIT 2013, 229-234. 10.1109/FIT.2013.49.
[15] A. Sultanpure, Kavita & Reddy, L.S.S. (2018). An Energy Aware Resource Utilization Framework to Control Traffic in Cloud Network and Overloads. International Journal of Electrical and Computer Engineering, 8(2), 1018-1027. 10.11591/ijece.v8i2.pp1018-1027.
[16] Mishra, Sambit & Puthal, Deepak & Sahoo, Bibhudatta & Jayaraman,
[22] Barkat, Amine & Kechadi, Tahar & Verticale, Giacomo & Filippini, Ilario & Capone, Antonio. (2017). Green Approach for Joint Management of Geo-Distributed Data Centers and Interconnection Networks. Journal of Network and Systems Management. 10.1007/s10922-017-9441-0.
[21] Shuja, Junaid & Bilal, Kashif & Madani, Sajjad & Othman, Mazliza & Ranjan, R & Balaji, Pavan & Khan, Samee. (2016). Survey of Techniques and Architectures for Designing Energy-Efficient Data Centers. IEEE Systems Journal, 10, 507-519. 10.1109/JSYST.2014.2315823.
[22] Turner WP, Seader JH, Renaud V. Tier classification define site infrastructure performance. Uptime Institute; white paper; 2008.
[23] Google's Green PPAs: What, How, and Why. Available from <http://static.googleusercontent.com/external_content/untrusted_dlcp/cfz.cc/en/us/green/pdfs/renewable-energy.pdf>; 2013.
[24] I.D.A. Ireland, Microsoft to expand its Dublin data center to support growing demand for its cloud services; 2012. Available from <http://www.idaireland.com/news-media/press-releases/microsoft-to-expand-its-d/>; 2014.
[25] What is Evolved Packet Core (EPC)? - Definition from WhatIs.com. SearchTelecom. https://searchtelecom.techtarget.com/definition/Evolved-Packet-Core-EPC. Accessed November 6, 2018.
[26] E. Naone, "Conjuring clouds," Technology Review, vol. 112, no. 4, pp. 54–56, 2009.
[27] The extreme science and engineering discovery environment (XSEDE). https://www.xsede.org/.
[28] FutureGrid testbed. http://www.futuregrid.org.
[29] DIDCLab. Data intensive distributed computing laboratory. http://www.didclab.org.
[30] Kim, J., et al.: Correlation-aware virtual machine allocation for energy-efficient datacenters. In: Design, Automation & Test in Europe (DATE) Conference, pp. 1345–1350 (2013).
[31] S. Vakilinia, B. Heidarpour, M. Cheriet, Energy efficient resource allocation in cloud computing environments, IEEE Access, doi:10.1109/ACCESS.2016.2633558.
[32] S. Yeo and H.-H. S. Lee. SimWare: A holistic warehouse-scale computer simulator. Computer, 45(9):48–55, 2012.
[33] C. Reiss, A. Tumanov, G. R. Ganger, R. H. Katz, and M. A. Kozuch. Towards understanding heterogeneous clouds at scale: Google trace analysis. Technical report, 2012.
[34] C. Reiss, A. Tumanov, G. R. Ganger, R. H. Katz, and M. A. Kozuch. Heterogeneity and dynamicity of clouds at scale: Google trace analysis. In Proceedings of the Third ACM Symposium on Cloud Computing, SoCC, 2012.
[35] Z. Liu, M. Lin, A. Wierman, S. Low, and L. L. H. Andrew, "Geographical load balancing with renewables," in Proc. ACM GreenMetrics Workshop, San Jose, CA, USA, Apr. 2011.
[36] L. Rao, X. Liu, L. Xie, and W. Liu, "Coordinated energy cost management of distributed internet data centers in smart grid," IEEE Trans. Smart Grid, vol. 3, no. 1, pp. 50–58, Mar. 2012.
[37] Data Center, Critical Infrastructure & Telecom Power Solutions - DELTA. http://www.deltapowersolutions.com/en/. Accessed November 6, 2018.
[38] Energy efficiency in the data center, Hewlett-Packard, Sep. 2008.
[39] D. Abts, M. Marty, P. Wells, P. Klausler, and H. Liu, "Energy
Prem Prakash & Jun, Song & Zomaya, Albert & Ranjan, R. (2018). Proportional Datacenter Networks,” In Proceedings of the 37th
Energy-Efficient VM-Placement in Cloud Data Center. Sustainable International Symposium on Computer Architecture, Saint Malo, France,
Computing: Informatics and Systems. 10.1016/j.suscom.2018.01.002. June 2010, pp. 338–347.
[17] Zhang, Bo & al-dhuraibi, Yahya & Rouvoy, Romain & Paraiso, Fawaz [40] United States Environmental Protection Agency, “EPA report on
& Seinturier, Lionel. (2017). CloudGC: Recycling Idle Virtual Machines in server and data center energy efficiency,” Final Rep. to Congress, Aug.
the Cloud. 105-115. 10.1109/IC2E.2017.26. 2007
[18] Ghamkhari, Mahdi & Mohsenian-Rad, Hamed. (2013). Energy and [41] Newcombe, L.; Acton, M.; Booth, J.; Flucker, S.; Rabbetts, A.;
Performance Management of Green Data Centers: A Profit Maximization Rouyer, A. 2014 Best Practices for the EU Code of Conduct on Data
Approach. Smart Grid, IEEE Transactions on. 4. 1017-1025. Centres. Version 5.1.1. Available online:
10.1109/TSG.2013.2237929. http://www.daxten.com/de/download/whitepapers/2014%20EU%20Code%
[19] Alan, Ismail & Arslan, Engin & Kosar, Tevfik. (2015). Energy-aware 20of%20Conduct%20Best%20Practices%20Data%20Centre.pdf(accessed
data transfer algorithms. 1-12. 10.1145/2807591.2807628. on 10 February 2017).
[20] Oró, Eduard & Depoorter, Victor & Garcia, Albert & Salom, Jaume. [42] Beloglazov, Anton & Buyya, Rajkumar & Choon Lee, Young &
(2015). Energy efficiency and renewable energy integration in data centres. Zomaya, Albert. (2010). A Taxonomy and Survey of Energy-Efficient Data
Strategies and modelling review. Renewable and Sustainable Energy Centers and Cloud Computing Systems. Advances in Computers. 82.
Reviews. 42. 10.1016/j.rser.2014.10.035. 10.1016/B978-0-12-385512-1.00003-7.
[21] Wang, Lizhe & Khan, Samee. (2011). Review of performance metrics [43] Bilal, Kashif & Malik, Saif & Khalid, Osman & Hameed, Abdul &
for green data centers: A taxonomy study. The Journal of Supercomputing. Alvarez, Enrique & Wijayasekara, Vidura & Irfan, Rizwana & Shrestha,
63. 1-18. 10.1007/s11227-011-0704-3. Sarjan & Dwivedy, Debjyoti & Ali, Mazhar & Shahid Khan, Usman &
Abbas, Assad & Jalil, Nauman & U. Khan, Samee. (2013). A taxonomy
and survey on Green Data Center Networks. Future Generation Computer
Systems. 36. 10.1016/j.future.2013.07.006.
[44] Kong, Fanxin & Liu, Xue. (2014). A Survey on Green-Energy-Aware
Power Management for Datacenters. ACM Computing Surveys. 47. 1-38.
10.1145/2642708.
[45] Murugesan, San & Gangadharan, G R. (2012). Harnessing Green It:
Principles and Practices. 1-21. 10.1002/9781118305393.ch1.
[46] Mata-Toledo, Ramon & Gupta, Pranshu. (2010). Green data center:
how green can we perform?