
Case Studies in Thermal Engineering 36 (2022) 102228

Contents lists available at ScienceDirect

Case Studies in Thermal Engineering


journal homepage: www.elsevier.com/locate/csite

Retrofitting of an air-cooled data center for energy efficiency


Mustafa Kuzay a,d, Aras Dogan a,d, Sibel Yilmaz a, Oguzhan Herkiloglu b,
Ali Serdar Atalay b, Atilla Cemberci b, Cagatay Yilmaz c, Ender Demirel a,d,*

a Design and Simulation Tech. Inc., 26480, Eskisehir, Turkey
b Radius Solution Center, Kisikli, Bahar Arkasi Street, No:11, Camlica, 34692, Uskudar, Istanbul, Turkey
c Lande Industrial Metal Products Inc. Co., Organized Industrial Zone, 20th Street, No: 14, Eskisehir, Turkey
d Eskisehir Osmangazi University, 26480, Eskisehir, Turkey

ARTICLE INFO

Keywords:
Data center
Thermal efficiency
Cooling efficiency
CFD modeling
Open-source

ABSTRACT

Small-scale data centers suffering from low cooling efficiency consume substantial power for reliable operation of IT equipment. In this study, the thermal distribution in an air-cooled data center is simulated using an open-source Computational Fluid Dynamics (CFD) model to examine the underlying mechanism that reduces thermal and cooling efficiencies. The numerical model is validated with temperature measurements conducted in the data center. Numerical simulations have revealed that recirculating hot flows observed at the top of the racks increased the air temperature beyond the allowable maximum temperature. To this end, the data center has been retrofitted by creating a hot aisle with the implementation of a moving baffle at the rear of the rack. Numerical simulations conducted for two working scenarios have demonstrated that such a minor modification can result in a remarkable enhancement of the cooling efficiency. The efficiency of the data center improved by 47.2% and 22.7% with respect to the RCI (Rack Cooling Index) and RHI (Return Heat Index), respectively. The present numerical model can capture distributions of the efficiency metrics over the racks. The present methodology can be used to reduce power consumption by the cooling and ventilation systems in existing data centers.

1. Introduction
Data centers have been widely deployed all over the world to meet demands for IT services, cloud computing and IoT applications.
Power consumption in data centers amounts to about 1.3% of global electricity usage [1], and about 40% of the total power consumption in a data center originates from cooling devices. Existing data centers need to be retrofitted in order to lower the energy cost of cooling and ventilation systems and to reach net zero emissions for the ICT ecosystem [2]. The drastic increase in power consumption by computer room air conditioning (CRAC) units in recent years has led data center managers and public authorities to investigate efficiency opportunities for cooling.
Experimental and numerical studies on the thermal distribution in data centers have demonstrated that the cooling efficiency is
mainly associated with the airflow distribution and resultant thermal structure. Therefore, cooling efficiency of a data center can be
improved by geometrical modifications such as installing air guides, isolating hot and cold regions and preventing air leakage. Tozer
and Salim [3] suggested a cold/hot aisle, return air plenum, evacuating all unused server locations, and installing space panels between
cabinets to reduce hot air recirculation. Niemann et al. [4] examined hot/cold aisle containments, as it would be more useful than the

* Corresponding author. Design and Simulation Tech. Inc., 26480, Eskisehir, Turkey.
E-mail address: [email protected] (E. Demirel).

https://doi.org/10.1016/j.csite.2022.102228
Received 11 April 2022; Received in revised form 5 June 2022; Accepted 21 June 2022
Available online 24 June 2022
2214-157X/© 2022 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license
(http://creativecommons.org/licenses/by-nc-nd/4.0/).
classical design. They reported that the hot aisle containment could save 43% of the energy consumption of the cooling devices in
comparison to the cold aisle containment. Zheng et al. [5] studied thermal environment and energy usage of a data center located in
Shanxi Province before and after retrofitting. The total cooling capacity of 600 kW could not meet the heat dissipation demand of fourteen racks with high heat generation capacity, and the outlet temperatures of the racks were found to be above 30 °C. Therefore, four rack cabinets were supplemented with an external water-cooler system, and the energy consumption of the cooling system could be decreased by
18% in summer months. Wang et al. [6] investigated design alternatives using CFD methods to provide a uniform airflow distribution
in a newly constructed data center. It was determined that the hot/cold aisle containment provides a uniform airflow distribution with
high cooling efficiency. Bhopte et al. [7] and Alissa et al. [8] investigated the impact of the underfloor barriers on the tile distribution
and thermal performance of a data center. Cold aisle containment was achieved with critically placed cooling pipes and a concept of
entrance tile was introduced to mitigate bypass effects observed at the entrance of the cold aisle. Ham and Jeong [9] analyzed the effect
of cold aisle containment on the thermal management of a data center along with the amount of energy consumption by the cooling
devices. Energy performance of the containment could be maximized by incorporating an economizer in a retrofitted design. Oró et al.
[10] proposed a series of retrofits for a data center in Barcelona to mitigate bypass and negative pressure that may reduce cooling
efficiency. Implementing a cold aisle, increasing the supply air temperature, and reducing the air flow rate by closing two CRAC units
in the proposed design significantly increased efficiency and resulted in economic savings. Tian et al. [11] proposed a new mathematical model for multi-scale thermal management based on entropy theory. Wang et al. [12] performed numerical simulations to eliminate bypass and recirculation in a data center with respect to the performance indexes and showed that the Return Temperature Index (RTI) and Return Heat Index (RHI) could be improved by retrofitting the data center. Turkmen et al. [13] reported that geometrical features, placement of IT equipment and contact of hot and cold air might reduce the efficiency of a small data center. As a result, the estimated Power Usage Effectiveness (PUE) value was decreased by 28.2% due to the retrofitting of some features. Meng
et al. [14] analyzed thermal environment and flow field in a small-scale data center located in China by means of CFD simulations.
Some equipment was retrofitted in order to eliminate uneven temperature distribution and chaotic air supply issues. Meng et al. state that closing empty slots, opening perforated tiles under the racks and installing an air guide at the outlet of the air conditioner can improve the overall performance of the data center. The RHI increased from 0.918 to 0.93, and the RTI increased from 0.222 to 0.342 with the implementation of such design recommendations. Experimental and numerical studies were carried out to investigate the effects of plenum height, perforation percentage, and positions of the CRAC units [15]. It was determined that reducing the perforation percentage and increasing the plenum height could improve airflow homogeneity and thermal performance. Uneven airflow and temperature distributions were observed when the CRAC units were placed perpendicular to the racks. It was observed that the thermal performance near the upper parts of the racks was significantly influenced by changing the flow rate and temperature of the supply air.
Efficiency of a data center is substantially influenced by hot air recirculation and cold air bypass. On the other hand, high back pressure
in the containment may reduce airflow rate of the server and possibly raise the inlet temperature beyond the allowable maximum
temperatures suggested by the regulations [16]. Xiong and Lee [17] proposed a new aisle mechanism for air-cooled data centers that
uses vortex flows as an alternative to the traditional hot aisle approach. Average Supply Heat Index (SHI) in the vortex hot aisle
containment could be reduced to 0.11 according to the experimental results. Experimental and numerical findings from such studies in
an attempt to increase the efficiency of existing data centers show that each data center has its own dynamics and design alternatives
should be investigated considering flow and thermal structures, as well as workloads to achieve the highest enhancement.
CFD simulations of flow and thermal structures in data centers are gaining importance with the rapid advances in computational technology in recent years. Many commercial software products are available for the CFD simulation of data centers. On the other hand, open-source CFD models have proven to be very effective in the development of novel computational models for the accurate simulation of complex engineering problems in the last decade. However, open-source CFD models are not yet extensively employed in data center applications due to the complexity of modeling a complete picture of a typical data center, including IT equipment, CRAC units and perforated components that may substantially influence the accuracy of the simulation results. In this study, the thermal distribution in an open-aisle data center was simulated using an open-source CFD model to examine efficiency issues. Experimental studies were conducted for the validation of the numerical model. Then, the data center was retrofitted, based on the numerical simulation results, to eliminate the diagnosed hot air recirculation and cold air bypass. Numerical simulations are performed on the retrofitted design

Fig. 1. Three-dimensional geometry of the Bitnet data center: (a) front and (b) back views.


for two thermal scenarios, and the results are compared with those of the previous design in terms of efficiency metrics for the performance evaluation of the new design. The methodology described in the present study can be used to investigate the efficiency opportunities available in existing data centers.

2. Configuration of the Bitnet Data Center


The Bitnet data center was established as the Radius Solution Center in Turkey, with a power capacity of 125 kW and a floor area of 14.5 m². The three-dimensional geometry and dimensions of the data center are shown in Fig. 1. IT components are located in four high-power racks of 42U height. An in-row CRAC unit is located between the racks for the cooling of the IT equipment. Two air conditioners mounted at the ceiling are treated as internal solid objects in the numerical model, since these cooling devices do not supply cold air under the current cooling conditions. Small cabinets located on the left-hand side of the data center house network components and batteries that provide electricity in emergencies.
The layout of the IT equipment is defined systematically in the present numerical model. The rack layout is specified in an input file that lists the specifications of the IT devices, such as rack number, starting position from the bottom, power consumption and flow rate. Python scripts were developed and incorporated into the present numerical model to implement the corresponding server layout in the CFD model.
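The layout-reading step can be sketched as follows. This is a minimal illustration, not the authors' actual script: the whitespace-separated column format (rack, first row in U, power in W) and the `Server`/`read_layout` names are assumptions, while the flow-per-power constant of 212 m³/h per kW is taken from the paper.

```python
# Sketch of parsing a rack-layout input file into per-server records.
# The file format shown here is a hypothetical example.
from dataclasses import dataclass

@dataclass
class Server:
    rack: int        # rack number (1-4)
    first_row: int   # starting U position from the bottom
    power_w: float   # power consumption in W
    flow_m3s: float  # volumetric flow rate in m^3/s

def read_layout(lines):
    """Parse 'rack first_row power' lines and assign each server
    a flow rate of 212 m^3/h per kW of power (see Eq. (7))."""
    servers = []
    for line in lines:
        if not line.strip() or line.startswith("#"):
            continue  # skip blanks and comments
        rack, first_row, power = line.split()
        q = float(power) / 1000.0 * 212.0 / 3600.0  # kW -> m^3/s
        servers.append(Server(int(rack), int(first_row), float(power), q))
    return servers

layout = ["1 13 800", "2 11 2500"]
for s in read_layout(layout):
    print(s.rack, s.first_row, round(s.flow_m3s, 4))
# prints: 1 13 0.0471, then 2 11 0.1472 (matching the values in Table 5)
```

The derived flow rates reproduce the Q values listed for the working scenarios in Table 5, which suggests the table was generated in exactly this proportional manner.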

3. Computational model
3.1. Governing equations
Buoyancy-driven flow inside the data center can be represented by the following continuity, momentum and energy equations for compressible and turbulent flow:

\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u_i)}{\partial x_i} = 0 \qquad (1)

\frac{\partial (\rho u_i)}{\partial t} + \frac{\partial (\rho u_i u_j)}{\partial x_j} = -\frac{\partial p}{\partial x_i} + \frac{\partial}{\partial x_j}\left[(\mu + \mu_t)\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)\right] + \rho g_i + S_i \qquad (2)

\frac{\partial (\rho h)}{\partial t} + \frac{\partial (\rho u_j h)}{\partial x_j} = \frac{\partial}{\partial x_j}\left[(\mu + \mu_t)\frac{\partial h}{\partial x_j}\right] \qquad (3)

where ρ is the density of the fluid, u_i is the mean velocity component in the i-direction, t is the time, p is the pressure, x_i and x_j are the Cartesian coordinates, g_i is the gravitational acceleration in the i-direction, h is the Favre-averaged enthalpy, S_i is the momentum source, and μ and μ_t are the molecular and turbulent viscosities, respectively. The molecular viscosity is calculated using the Sutherland viscosity model, and the turbulent viscosity is calculated from the following equation:

\nu_t = \frac{a_1 k}{\max(a_1 \omega, S F_2)} \qquad (4)
In order to account for adverse pressure gradients and boundary layer effects near the walls, the k-ω Shear Stress Transport (SST) turbulence closure model is used in the present study. The turbulence kinetic energy and the specific dissipation rate are determined from the solutions of the following transport equations:

\frac{\partial k}{\partial t} + u_i \frac{\partial k}{\partial x_i} = \frac{\partial}{\partial x_i}\left[(\nu + \sigma_k \nu_t)\frac{\partial k}{\partial x_i}\right] + P - \beta^* k \omega \qquad (5)

\frac{\partial \omega}{\partial t} + u_i \frac{\partial \omega}{\partial x_i} = \frac{\partial}{\partial x_i}\left[(\nu + \sigma_\omega \nu_t)\frac{\partial \omega}{\partial x_i}\right] + \alpha S^2 - \beta \omega^2 + 2(1 - F_1)\frac{\sigma_{\omega 2}}{\omega}\frac{\partial k}{\partial x_i}\frac{\partial \omega}{\partial x_i} \qquad (6)

The coefficients that appear in the turbulence equations can be found in the literature [18].

3.2. Modeling IT equipment


The open-box model is used for the IT equipment in this study, whereby a mesh is created to define the heat source inside the server according to its power consumption. Thus, the corresponding temperature rise from the inlet to the outlet of the server can be calculated by the solver. Numerical tests showed that black-box modeling of the server components might produce stability issues due to local changes in flow variables caused by the implementation of jump boundary conditions.
Prediction of the flow rate through the server is fundamental to the accuracy of the numerical model. The most common approach
used in the literature is to fix the flow rate between inlet and outlet boundaries to satisfy mass conservation inside the server [19]:
Q = 212 \ \mathrm{m^3\,h^{-1}\,kW^{-1}} \qquad (7)
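The implication of this flow-rate scaling can be checked with a short calculation. The air properties below (ρ ≈ 1.2 kg/m³, c_p ≈ 1005 J/(kg·K)) are nominal assumptions for air near room temperature, not values stated in the paper:

```python
# Air-side temperature rise implied by Eq. (7): Q = 212 m^3/h per kW.
# Air density and specific heat are nominal assumed values.
RHO = 1.2    # air density, kg/m^3 (assumed)
CP = 1005.0  # specific heat of air, J/(kg K) (assumed)

def server_flow(power_kw):
    """Volumetric flow rate in m^3/s for a server of given power, Eq. (7)."""
    return power_kw * 212.0 / 3600.0

def delta_t(power_kw):
    """Inlet-to-outlet temperature rise from P = rho * cp * Q * dT."""
    q = server_flow(power_kw)
    return power_kw * 1000.0 / (RHO * CP * q)

print(round(server_flow(0.8), 4))  # 0.0471 m^3/s, matching Table 5
print(round(delta_t(0.8), 1))      # 14.1 K
```

Because the flow rate is proportional to power, the implied temperature rise across every active server is the same (about 14 K under the assumed air properties), regardless of its power draw.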

The server geometry is modeled as a porous zone to mimic the resistance effects of the internal components, since the calculation of the turbulent flow through the actual server geometry requires high computational memory and time. Thus, active servers are modeled as porous zones for the calculation of energy losses and pressure drops. On the other hand, the inlets and outlets of the passive servers are


closed by walls in the numerical model to prevent cold air bypass through the server. A source term is added to the momentum equations to represent the resistance effects encountered in the porous zone. The Darcy-Forchheimer model is the most common approach for modeling the inertial and viscous resistance of the porous medium. The source term is calculated from the following equation:
S_i = -\left(\mu D + \frac{1}{2}\rho |u| F\right) u_i \qquad (9)

where the Darcy and Forchheimer coefficients are calculated from the following empirical equations suggested by Sheth and Saha [20]:

D = \frac{120 (1 - \Phi)^2}{\Phi^3 D_h^2} \qquad (10)

F = \frac{2.3 (1 - \Phi)}{\Phi^3 D_h} \qquad (11)

Here Φ is the porosity of the server, which is suggested to be between 0.35 and 0.65 for IT equipment [20], and D_h is the hydraulic diameter defined at the inlet of the server.
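Equations (10) and (11) can be evaluated directly. The porosity and hydraulic diameter below are illustrative values (mid-range of the suggested 0.35-0.65 band and an assumed inlet size), not the settings used in the paper:

```python
# Darcy (D) and Forchheimer (F) coefficients from the empirical
# relations of Sheth and Saha, Eqs. (10)-(11).
def darcy(phi, dh):
    """D = 120 (1 - phi)^2 / (phi^3 * dh^2), units 1/m^2."""
    return 120.0 * (1.0 - phi) ** 2 / (phi ** 3 * dh ** 2)

def forchheimer(phi, dh):
    """F = 2.3 (1 - phi) / (phi^3 * dh), units 1/m."""
    return 2.3 * (1.0 - phi) / (phi ** 3 * dh)

phi = 0.5  # server porosity, assumed mid-range of 0.35-0.65
dh = 0.05  # hydraulic diameter at the server inlet in m (assumed)
print(round(darcy(phi, dh)), round(forchheimer(phi, dh), 1))  # 96000 184.0
```

Both coefficients grow rapidly as the porosity decreases, so the assumed porosity value has a strong effect on the predicted pressure drop across the server.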

3.3. Modeling CRAC unit


The CRAC unit is modeled using the black-box approach, in which inlet and outlet boundary conditions are applied without solving the internal flow field. A fixed temperature and a fixed flow rate are prescribed at the outlet of the CRAC, which can be set according to the specifications of the air conditioner. However, special attention is required for the boundary conditions at the inlet of the CRAC, since inappropriate boundary conditions may produce an unrealistic flow field there. The two boundary conditions defined in Table 1 are tested for WS1 to set appropriate boundary conditions at the inlet. The flow rate of the CRAC is 0.811 m³/s and the supply air temperature is 21 °C.
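In OpenFOAM terms, the BC2 combination would correspond to patch entries of the following form. This is a sketch only: the patch name `cracReturn` is a placeholder assumption, as the actual patch names and field values of the case are not published.

```
// 0/U -- assumed entry for the CRAC return (inlet) patch, BC2
cracReturn
{
    type            pressureInletOutletVelocity;
    value           uniform (0 0 0);
}

// 0/T -- zero-gradient temperature at the return
cracReturn
{
    type            zeroGradient;
}

// 0/p_rgh -- fixed value anchors the pressure level at the return
cracReturn
{
    type            fixedValue;
    value           uniform 0;
}
```

With `pressureInletOutletVelocity`, the velocity is extrapolated for outflow faces and derived from the patch-normal flux for inflow faces, which is what suppresses the spurious reverse-flow pattern reported for BC1.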

3.4. Efficiency metrics


Cooling and thermal efficiencies of a data center can be evaluated with respect to non-dimensional efficiency metrics. The metrics RCIHI and RCILO measure the extent to which server inlet temperatures exceed the recommended maximum temperature or fall below the recommended minimum temperature, respectively:
RCI_{HI} = \left[1 - \frac{\sum_{i=0}^{n}(T_i - T_{max\text{-}rec})}{n\,(T_{max\text{-}all} - T_{max\text{-}rec})}\right] \times 100\% \qquad (12)

RCI_{LO} = \left[1 - \frac{\sum_{i=0}^{n}(T_{min\text{-}rec} - T_i)}{n\,(T_{min\text{-}rec} - T_{min\text{-}all})}\right] \times 100\% \qquad (13)

Under ideal conditions, these indexes are expected to be 100%, and deviations from 100% represent a reduction in the cooling efficiency. As given in Table 2, T_max-rec and T_min-rec are the recommended maximum and minimum temperatures, and T_max-all and T_min-all are the allowable maximum and minimum temperatures, respectively.
The thermal performance of a data center can be evaluated using the RHI [22]:

RHI = \frac{Q}{Q + \delta Q} \qquad (14)

where Q and δQ are the total heat dissipation and the rise in enthalpy of the air, respectively.
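These metric definitions can be evaluated per server and aggregated as follows. Note two assumptions in this sketch: following the usual RCI definition, only servers exceeding (or falling below) the recommended limits contribute to the sums, and the inlet temperatures used in the example are illustrative, while the limits are those of Table 2:

```python
# RCI_HI, RCI_LO (Eqs. (12)-(13)) and RHI (Eq. (14)) for a set of
# server inlet temperatures, using the limits of Table 2.
T_MAX_REC, T_MIN_REC = 27.0, 18.0  # recommended limits, deg C
T_MAX_ALL, T_MIN_ALL = 32.0, 15.0  # allowable limits, deg C

def rci_hi(t_in):
    """Only over-temperature servers contribute to the sum."""
    n = len(t_in)
    over = sum(max(0.0, t - T_MAX_REC) for t in t_in)
    return (1.0 - over / (n * (T_MAX_ALL - T_MAX_REC))) * 100.0

def rci_lo(t_in):
    """Only under-temperature servers contribute to the sum."""
    n = len(t_in)
    under = sum(max(0.0, T_MIN_REC - t) for t in t_in)
    return (1.0 - under / (n * (T_MIN_REC - T_MIN_ALL))) * 100.0

def rhi(q_total, dq):
    """RHI = Q / (Q + dQ); dq is the enthalpy rise of the air."""
    return q_total / (q_total + dq)

t_inlet = [25.0, 26.0, 28.0, 30.0]  # illustrative inlet temperatures
print(rci_hi(t_inlet))              # 80.0
print(rci_lo(t_inlet))              # 100.0
print(round(rhi(14850.0, 1650.0), 2))  # 0.9
```

Evaluating `rci_hi` per rack section rather than over all servers gives exactly the kind of spatial RCI distribution reported later in Fig. 11.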

3.5. Computational mesh


The computational mesh was generated as a hexahedral and split-hexahedral mesh using the snappyHexMesh utility with parallel computing. The stereolithography (STL) file contains the geometry of the data center and its internal components, such as racks, beams and columns as structural components, and the CRAC units. A hex-dominant meshing algorithm is used for snapping the internal solid objects with a high-quality mesh that satisfies the maximum allowable skewness and non-orthogonality. The computational mesh shown in Fig. 2 consists of about 1.5 million cells to capture sudden variations in flow variables around walls, considering turbulence and boundary layer effects.
A mesh independence study was carried out using the five mesh resolutions given in Table 3. Predicted exhaust temperatures at the rear of Rack 3 are compared in Fig. 3. Consistency between the results of Mesh 4 and Mesh 5 shows that the numerical results are

Table 1
Boundary conditions at the inlet of the CRAC unit.

BC     U                              T             p_rgh
BC1    inletOutlet                    zeroGradient  prghPressure
BC2    pressureInletOutletVelocity    zeroGradient  fixedValue


Table 2
Allowable and recommended temperatures suggested by Ref. [21].

Tmax-rec (°C)    Tmin-rec (°C)    Tmax-all (°C)    Tmin-all (°C)
27               18               32               15

Fig. 2. Three-dimensional views of the mesh from different perspectives.

independent of the mesh resolution when Mesh 4 is used. Thus, Mesh 4 is used for the rest of the simulations conducted in the present study.
The skewness and non-orthogonality of the mesh affect the calculation of the gradients over the cell faces. The characteristics of the mesh are listed in Table 4 for the previous and retrofitted designs. The maximum mesh non-orthogonality is lower than the recommended maximum value of 70° in OpenFOAM. However, a skewness correction was applied while calculating gradients on the cell surfaces from the neighboring nodal values, since the maximum skewness is higher than the recommended value of 5. A validated open-source numerical model is employed for the accurate and robust simulation of the thermal distribution in data centers, considering compressibility, turbulence and buoyancy effects [23,24]. The Sutherland viscosity model and the perfect gas model were used to account for the effect of temperature on the molecular viscosity and density [23].

3.6. Assumptions in the numerical model


The present numerical model considers the effect of temperature on the viscosity and density, as well as viscous and turbulence effects, for realistic simulations of flow and thermal structures in the data center. The following assumptions are made while modeling the data center:
1. The porosity of the servers is assumed to be identical, since variations in porosity with server model are minor.
2. The power consumption and flow rate of each server are kept constant during the working scenarios considered in the present study.
3. The flow rate passing through a server is proportional to its power consumption, which is consistent with the literature [19].
4. The total power consumption of a server is assumed to be converted to heat.
These assumptions are appropriate for the thermal management analysis of the data center.

Table 3
Mesh independence study.

Mesh Number Of Cells

Mesh 1 203,532
Mesh 2 701,294
Mesh 3 1,000,638
Mesh 4 1,612,085
Mesh 5 2,113,412


Fig. 3. Comparison of (a) inlet and (b) exhaust temperature profiles for different mesh resolutions.

Table 4
Mesh statistics for the data center.

Design                Number of Cells    Maximum Skewness    Maximum Non-orthogonality    Minimum Volume    Maximum Volume
Previous Design       1,501,803          22.81               65.35°                       9.44e-08          0.00018936
Retrofitted Design    1,612,085          8.89                65.59°                       9.52e-08          0.00020665

4. Results and discussions


Numerical simulations are performed using the open-source CFD model for two fictitious scenarios with total power consumptions of 14.85 kW and 15.5 kW. The simulation results are analyzed in detail in terms of velocity vectors, three-dimensional streamlines and temperature distributions, as well as the distributions of the efficiency metrics over the racks, for the performance evaluation of the previous and retrofitted designs.

4.1. Analysis of the boundary conditions


Numerical simulations are performed using BC1 and BC2 under the thermal conditions of WS1 in order to establish appropriate boundary conditions for the velocity, temperature and pressure at the inlet. Comparison of the velocity distributions at the inlet in

Fig. 4. Distributions of the mean velocity at the inlet of the CRAC for: (a) BC1 and (b) BC2.

6
M. Kuzay et al. Case Studies in Thermal Engineering 36 (2022) 102228

Fig. 4 shows that a reverse flow occurs near the top of the CRAC due to buoyancy effects, and high-velocity regions are observed when BC1 is used at the inlet. On the other hand, BC2 yields a uniform velocity field with a reasonable maximum flow velocity at the inlet. This test reveals that inlet boundary conditions should be carefully selected for CRAC units to avoid unrealistic flow conditions. Thus, BC2 is used in the rest of the numerical simulations conducted in the present study.

4.2. Investigation of the thermal distribution


Numerical simulations are performed for the working scenarios described in Table 5. The flow rate passing through each server is calculated from Equation (7) for the corresponding power consumption of the server components. The server layout is read from an input file by Python scripts and implemented in the CFD model using the open-box model for the IT equipment and the black-box approach for the CRAC unit. Although only WS1 is evaluated here for the sake of space, performance improvements are reported for both WS1 and WS2 in the next section.
Streamlines colored by the local temperature are visualized on planes extracted at vertical coordinates of z = 1.35 m and 2 m in Fig. 5. Application of the open-box model allows us to see the flow field and temperature distribution inside the servers in Fig. 5a. The white regions shown in Fig. 5a depict passive servers, since the inlet and outlet boundaries of the passive servers are closed by panels in the numerical model to prevent air leakage. The cold air emerging from the CRAC impinges on the front wall and splits into two parts directed to the racks at the left and right of the CRAC unit. The racks located at the left of the CRAC (R3 and R4) benefit from the cold air more than the racks located at the right, since most of the cold air is directed to the large region in which the network and battery units are located. A significant hot air recirculation is observed at the top of R1 in Fig. 5b due to buoyancy effects, which increases the local temperatures up to 50 °C for this working scenario. Note that servers shut themselves down when the CPU temperature exceeds a critical temperature specified by the manufacturer. The cabinets located in front of the left wall cause recirculated flow to be trapped in an additional dead zone behind R1. The hot regions observed near the rack containing batteries may also decrease the life of the batteries in the long term [25,26].
Fig. 6 shows three-dimensional streamlines colored by the velocity magnitude in the regions where the local temperature is above the recommended maximum temperature. The cold air supplied from the in-row CRAC unit splits into two zones, with most of the flow directed to the open area of the data center. The interaction of the server components and cooling devices yields a three-dimensional flow structure inside the data center, and this flow field is captured in detail by the present computational model. Furthermore, flow and thermal distributions in a data center can be visualized to identify design issues such as hot regions where the local temperature exceeds the recommended or allowable maximum temperatures. Based on the high-resolution CFD simulation results, the data center will be retrofitted to suppress hot regions and increase the cooling efficiency. Note that enhancing the cooling efficiency may also reduce the power consumption of the IT equipment itself, since server power draw increases with inlet temperature [27–29].

4.3. Retrofitting of the data center


Numerical simulations predicted thermal issues in the present data center, such as an uneven flow distribution and hot air recirculation, that may reduce the efficiency of the cooling system. In this section, the data center is retrofitted with the implementation of a sliding door (Fig. 7b) to isolate the hot region from the rest of the data center. The sliding door enables data center managers to enter the hot region and change the server layout according to customer needs. Numerical simulations are performed for the retrofitted design under the same thermal conditions to compare the results of the previous and retrofitted designs.
Fig. 8 compares streamlines and temperature distributions for the previous and retrofitted designs on the vertical planes at z = 1.35 m and 2.0 m. The exhaust air produced by the running server components is trapped in the hot aisle. The hot air is drawn in by the

Table 5
Definitions of the working scenarios.

       WS1                                      WS2
Rack   First Row   PC [W]   Q [m³/s]     Rack   First Row   PC [W]   Q [m³/s]

1 13 800 0.0471 1 3 800 0.0471


1 17 800 0.0471 1 7 800 0.0471
1 23 800 0.0471 1 9 800 0.0471
1 29 800 0.0471 1 13 800 0.0471
2 11 2500 0.1472 2 11 3000 0.1767
2 27 700 0.0412 2 23 800 0.0471
2 31 1000 0.0588 3 11 650 0.0383
2 35 1000 0.0588 3 21 800 0.0471
3 11 650 0.0382 3 24 900 0.0530
3 21 800 0.0471 3 28 900 0.0530
3 28 900 0.0530 3 36 650 0.0383
3 32 900 0.0530 4 13 800 0.0471
4 13 800 0.0471 4 15 800 0.0471
4 15 800 0.0471 4 20 650 0.0383
4 21 900 0.0530 4 21 900 0.0530
4 33 700 0.0412 4 24 650 0.0383
– – – – 4 27 800 0.0471


Fig. 5. Visualization of velocity vectors and streamlines for WS1 on the planes at vertical coordinates of: (a) 1.35 m and (b) 2 m.

Fig. 6. Visualization of the streamlines where local temperatures are above the recommended temperature for WS1.

Fig. 7. Three-dimensional geometry of: (a) previous and (b) retrofitted designs.

vents of the CRAC unit and blown back to the front of the servers. The large recirculation zone observed near the top of R1 could be suppressed by the implementation of the sliding door, and the local temperatures decreased near the racks containing the network and battery units. The geometry of the data center results in an uneven flow and thermal structure even in the retrofitted design, since the region at the left of the data center receives most of the cold air.
Volumetric distributions of the temperature field are illustrated for the two cases in Fig. 9 to show clearly the effect of retrofitting on the overall thermal structure. The hot regions covering most of the data center could be successfully suppressed by isolating them from the rest of the data center. Furthermore, the maximum local temperature could be reduced from 46 °C to 42 °C in the retrofitted design. Mitigating the hot regions forming near the batteries would also provide appropriate ambient conditions for the batteries. Another design


Fig. 8. Visualization of the velocity vectors and streamlines over the planes located at vertical coordinates of 1.35 m and 2.0 m for: (a) previous and (b) retrofitted designs.

modification would be isolating the cold air in the left region of the data center by using the same approach as in the present study. However, an additional cooling system would be needed in that case for the cooling of the racks containing network units. Nevertheless, the present retrofitting approach provided a significant improvement in the thermal distribution, which is confirmed by the efficiency metrics.
Fig. 10 compares three-dimensional streamlines where local temperatures are higher than the recommended maximum temper­
ature for previous and retrofitted designs. Isolating the hot zone by a solid panel eliminated recirculating hot flows at both front and
rear parts of the data center. Elimination of the recirculating flows originating from the buoyancy effects will contribute to the
reduction of energy losses induced by the turbulence effects as well.
Efficiency indexes are calculated for each working scenario and compared for the previous and retrofitted designs in Table 6. The RCIHI has increased from 50.27% to 97.45% for WS1 and from 77.10% to 100% for WS2 after the data center is retrofitted. Under ideal conditions, the RCIHI approaches 100%, meaning that the inlet temperatures of the servers are below the recommended maximum temperature. Deviations from 100% in the RCIHI show that inlet temperatures of the servers exceed the recommended maximum temperature. The RCILO is calculated as 100% for each design, since the inlet temperatures are higher than the recommended minimum temperatures. Although a reduction in the RTI represents an improvement in the thermal efficiency, the deviation from 100% in this metric shows that hot air recirculation persists even in the retrofitted design. The SHI remains almost constant for each design, since the amount of heat absorbed at the inlet of the IT equipment and the amount of cold air absorbed at the outlet remain constant. As the flow rate of the cold air passing through the IT equipment increased, the RHI increased in the retrofitted design. Norouzi-Khangah et al. [30] recommend a lower SHI value and a greater RHI value (RHI > 0.8) to achieve an efficient cooling system in data centers.
Open-source CFD models enable novel submodels to be incorporated into the computational model depending on the features of the engineering problem. This advantage is exploited in the present study by calculating the efficiency indexes not only as an overall measure of efficiency but also as a field, to monitor the distribution of efficiency over the rack layout. The present numerical model thus allows the calculation of the efficiency metrics server by server, in addition to the overall efficiency of the data center. Fig. 11 shows the distribution of the RCIHI over the racks. Since the RCIHI identifies servers whose inlet temperature is above the recommended maximum temperature, servers located at the upper parts of R1, R2 and R3 exhibit degraded RCIHI values. Thus, spatial variations of the efficiency metrics enable us to detect the locations where hot air recirculation or cold air bypass are


Fig. 9. Temperature fields inside the Bitnet data center for: (a) previous and (b) retrofitted designs.

effective. Higher power consumption by the servers produces a greater temperature rise in the data center. The rise of the heated
air due to buoyancy effects will also increase recirculation effects in open-aisle data centers such as the Bitnet Data Center.
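The "metric as a field" idea can be sketched as follows: instead of reporting a single overall RCIHI, the over-temperature penalty is evaluated server by server and mapped onto the rack layout so that recirculation locations can be flagged directly. The rack positions, inlet temperatures and thresholds below are illustrative assumptions, not data from this study.

```python
# Illustrative sketch: per-server RCI_HI evaluated over a rack layout so that
# hot spots (inlet temperature above the recommended maximum) can be located.
# Temperatures, thresholds and rack/U positions are made up for illustration.

T_MAX_REC = 27.0   # assumed recommended maximum inlet temperature (degC)
T_MAX_ALL = 32.0   # assumed allowable maximum inlet temperature (degC)

def server_rci_hi(t_in):
    """Single-server RCI_HI (%): 100% if the inlet stays within the
    recommended range, decreasing as it approaches the allowable limit."""
    over = max(0.0, t_in - T_MAX_REC)
    return (1.0 - over / (T_MAX_ALL - T_MAX_REC)) * 100.0

# (rack, U position) -> simulated inlet temperature (degC), illustrative
inlet_temps = {
    ("R1", 38): 29.5, ("R1", 20): 24.0,
    ("R2", 38): 30.2, ("R2", 20): 23.1,
    ("R3", 38): 28.8, ("R3", 20): 22.7,
}

rci_field = {loc: server_rci_hi(t) for loc, t in inlet_temps.items()}
hot_spots = sorted(loc for loc, v in rci_field.items() if v < 100.0)
# Only the upper (U=38) positions are flagged here, mirroring the observation
# that recirculation affects the top of racks R1-R3.
```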

4.4. Experimental validation of the proposed design


In order to show the applicability of the proposed design, experimental studies were carried out for the previous and retrofitted designs
under the working scenario described in Table 7. APC temperature sensors were located at the rear of Rack 3 and connected to a
NetBotz rack monitor to collect temperature data for 2 h at a sampling frequency of 0.1 Hz.
Experimental measurements were conducted for the previous and retrofitted designs using this measurement setup. Numerical simu­
lations were performed, and the predicted exhaust temperatures are compared with the measured data in Fig. 12. Time averages of the
measured and simulated data were evaluated for steady-state conditions. The proposed design reduced the exhaust temperature by about
2.5 °C at the rear of Rack 3. The consistency between the numerical and measured data shows that the present numerical model can be
reliably used to investigate the thermal behavior of a real data center.
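The post-processing of such a sensor record can be sketched as below: the 0.1 Hz samples are time-averaged after discarding an initial transient. The 30 min warm-up window is an assumed value for illustration; the paper does not state how the steady-state portion was selected.

```python
# Sketch of the time averaging applied to an exhaust-temperature record:
# 0.1 Hz sampling over 2 h gives 720 samples; the steady-state mean is taken
# after discarding an assumed warm-up transient. The record below is synthetic.
import statistics

SAMPLE_PERIOD_S = 10.0        # 0.1 Hz sampling frequency
RECORD_S = 2 * 3600           # 2 h measurement window -> 720 samples
WARMUP_S = 30 * 60            # assumed transient to discard (illustrative)

def steady_state_mean(samples, warmup_s=WARMUP_S, dt=SAMPLE_PERIOD_S):
    """Time average of the samples after the initial transient."""
    return statistics.fmean(samples[int(warmup_s / dt):])

# Synthetic record that ramps up and settles at 34.5 degC:
record = [30.0 + 4.5 * min(1.0, i * SAMPLE_PERIOD_S / WARMUP_S)
          for i in range(int(RECORD_S / SAMPLE_PERIOD_S))]
mean_exhaust = steady_state_mean(record)   # settles at 34.5 degC
```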

5. Conclusions
This study examined the thermal distribution in an air-cooled open-aisle data center using a validated open-source CFD model,
which allows for the accurate prediction of thermal issues such as hot air recirculation and cold air bypass originating from an uneven
distribution of airflow. Efficiency metrics are calculated as a field in the present numerical model to capture the distribution of the


Fig. 10. Comparison of streamlines where local temperatures are above the recommended temperature for: (a) previous and (b) retrofitted designs.

Table 6
Performance evaluation.

Scenario  Index        Previous Design  Retrofitted Design  Classification  Enhancement (%)
WS1       RCIHI [31]   50.27%           97.45%              Good            47.18
          RCILO [31]   100.00%          100.00%             Good            0.00
          RTI [23]     106.88%          103.96%             Recirculation   2.92
          SHI [22]     0.0947           0.0945              *               0.21
          RHI [22]     1.010            0.781               **              22.67
WS2       RCIHI        77.10%           100.00%             Good            22.90
          RCILO        100.00%          100.00%             Good            0.00
          RTI          110.36%          101.26%             Recirculation   9.10
          SHI          0.0948           0.0947              *               0.10
          RHI          0.880            0.982               **              10.38

efficiency over the rack layout, since a heterogeneous thermal field is formed inside the data center due to turbulence and buoyancy
effects. Numerical simulations were performed for two working scenarios of 14.85 kW and 15.50 kW to assess the thermal
and cooling efficiencies of the data center, as well as to examine the efficiency opportunities available in the present facility.
The simulation results revealed that the data center was occupied by a large hot air recirculation zone in which the local temperature
exceeded the recommended maximum temperature for the reliable operation of IT equipment.
The data center was retrofitted by implementing a sliding door at the rear of the rack to create a hot aisle isolated
from the rest of the data center. Comparison of the three-dimensional streamlines sketched at the locations where the local temperature
exceeded the recommended maximum showed that the hot air regions could be eliminated and that the maximum temperature
inside the data center decreased by 4 °C owing to the retrofit. Distributions of the efficiency metrics showed that the
retrofitted design locally mitigated the degradation of cooling efficiency over the rack layout. Performance improvements


Fig. 11. Distribution of the RCIHI over the racks for WS1: (a) previous and (b) retrofitted designs.

Table 7
Definition of the working scenario for the experimental study.

Rack  First Row  Height [U]  PC [W]  Q [m3/s]
1     38         1           84      0.0078
2     27         2           144     0.0239
2     29         2           144     0.0239
3     25         2           154     0.0330
3     27         2           322     0.0266
3     29         2           196     0.0406
3     31         2           112     0.0330
3     35         2           210     0.0190
4     34         2           468     0.0445
4     38         1           165     0.0140
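As a quick consistency check on Table 7, the per-rack power and airflow of the experimental scenario can be aggregated as below; the numbers are transcribed directly from the table.

```python
# Aggregate the experimental working scenario of Table 7 per rack.
# Tuples: (rack, first row, height in U, power PC in W, flow rate Q in m3/s).
from collections import defaultdict

scenario = [
    (1, 38, 1,  84, 0.0078),
    (2, 27, 2, 144, 0.0239),
    (2, 29, 2, 144, 0.0239),
    (3, 25, 2, 154, 0.0330),
    (3, 27, 2, 322, 0.0266),
    (3, 29, 2, 196, 0.0406),
    (3, 31, 2, 112, 0.0330),
    (3, 35, 2, 210, 0.0190),
    (4, 34, 2, 468, 0.0445),
    (4, 38, 1, 165, 0.0140),
]

power_W = defaultdict(int)
flow_m3s = defaultdict(float)
for rack, _row, _height, p, q in scenario:
    power_W[rack] += p
    flow_m3s[rack] += q

total_power_W = sum(power_W.values())  # ~2 kW of IT load in this scenario
```

Rack 3 carries roughly half of the approximately 2 kW total load, consistent with the temperature sensors being placed at its rear.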

Fig. 12. Comparison of the simulated and measured exhaust temperatures for previous and retrofitted designs.

gained in the cooling and thermal efficiencies were determined with respect to the efficiency metrics. The cooling efficiency increased by
47.2% according to the RCIHI and by 22.7% according to the RHI. These improvements in cooling efficiency will result in a remarkable
reduction in the power consumed by the air cooling system. Experimental studies conducted for the previous and retrofitted designs
showed that the proposed design reduced the exhaust temperature by about 2.5 °C. The present methodology can be used to reduce the power
consumption of cooling and ventilation systems in existing data centers, as well as to reduce the carbon footprint of data centers in the
future.
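The enhancement figures quoted above can be reproduced from Table 6. Reading the tabulated numbers, the percentage-valued indices (RCI, RTI) appear to be compared in absolute percentage points, while the dimensionless SHI and RHI appear to be compared relative to the larger of the two values; this interpretation is inferred from the table, not stated explicitly in the paper.

```python
# Hedged reconstruction of the "Enhancement (%)" column of Table 6.

def enhancement_points(prev, retro):
    """Change in percentage points, as used for RCI_HI, RCI_LO and RTI."""
    return abs(retro - prev)

def enhancement_relative(prev, retro):
    """Relative change (%) w.r.t. the larger value, as used for SHI and RHI."""
    return abs(retro - prev) / max(prev, retro) * 100.0

# WS1 values from Table 6:
rci_hi_gain = enhancement_points(50.27, 97.45)    # 47.18 points
rti_gain = enhancement_points(106.88, 103.96)     # 2.92 points
rhi_gain = enhancement_relative(1.010, 0.781)     # ~22.67 %
```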


Declaration of competing interest


The authors declare that they have no known competing financial interests or personal relationships that could have appeared to
influence the work reported in this paper.

Acknowledgements
This paper is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 956059.

Nomenclature

ρ Density of the fluid (kg/m3)
ui Mean velocity component in the i-direction (m/s)
t Time (s)
p Pressure (Pa)
xi, xj Cartesian coordinates (m)
gi Gravitational acceleration in the i-direction (m/s2)
h Favre-averaged enthalpy
Si Momentum source
μ, μt Molecular and turbulent viscosities (m2/s)
Q Flow rate (m3/s)
Φ Porosity of the server
Dh Hydraulic diameter defined at the inlet of the server (m)
D Darcy coefficient
F Forchheimer coefficient
U Velocity (m/s)
T Temperature (°C)
p_rgh Dynamic pressure (Pa)
Tmax-rec Maximum recommended temperature (°C)
Tmin-rec Minimum recommended temperature (°C)
Tmax-all Maximum allowable temperature (°C)
RCI Rack Cooling Index
RTI Return Temperature Index
RHI Return Heat Index
SHI Supply Heat Index

References
[1] J. Koomey, Growth in Data Center Electricity Use 2005 to 2010, A Report by Analytical Press, Completed at the Request of the New York Times, vol. 161, 2011.
[2] Green digital sector. https://digital-strategy.ec.europa.eu/en/policies/green-digital.
[3] S. Tozer, M. Salim, Data center air management metrics-practical approach, in: 12th IEEE Intersociety Conference on Thermal and Thermomechanical
Phenomena in Electronic Systems, 2010, Las Vegas, NV, USA, 2010, https://doi.org/10.1109/itherm.2010.5501366. June 2-5.
[4] J. Niemann, K. Brown, V. Avelar, Impact of hot and cold aisle containment on data center temperature and efficiency, Schneider Electric Data Center Science
Center White Paper 135 (2011) 1–14.
[5] Y. Zheng, Z. Li, X. Liu, Z. Tong, R. Tu, Retrofit of air-conditioning system in data center using separate heat pipe system, in: Proceedings of the 8th International
Symposium on Heating, Ventilation and Air Conditioning, 2013, https://doi.org/10.1007/978-3-642-39581-9_67. Xi’an, China, October 19-21.
[6] F.J. Wang, C.M. Lai, Y.S. Huang, J.S. Huang, Alternative layouts for air distribution improvement of a computing data center, Adv. Mater. Res. 677 (2013)
282–285. https://doi.org/10.4028/www.scientific.net/amr.677.282.
[7] S. Bhopte, B. Sammakia, M.K. Iyengar, R. Schmidt, Guidelines on managing under floor blockages for improved data center performance, in: IMECE2006–13711
Proceedings of IMECE 2006 ASME International Mechanical Engineering Congress and Exposition, 2006, https://doi.org/10.1115/imece2006-13711. Chicago,
IL, USA, November 5–10.
[8] H. Alissa, S. Alkharabsheh, S. Bhopte, B. Sammakia, Numerical investigation of underfloor obstructions in open-contained data center with fan curves, in:
ITHERM, 2014, https://doi.org/10.1109/itherm.2014.6892359. Orlando, FL, USA, May 27-30.
[9] S.W. Ham, J.W. Jeong, Impact of aisle containment on energy performance of a data center when using an integrated water-side economizer, Appl. Therm. Eng.
105 (2016) 372–384, https://doi.org/10.1016/j.applthermaleng.2015.05.069.
[10] E. Oró, A. Garcia, J. Salom, Experimental and numerical analysis of the air management in a data centre in Spain, Energy Build. 116 (2016) 553–561, https://doi.org/10.1016/j.enbuild.2016.01.037.
[11] H. Tian, H. Liang, Z. Li, A new mathematical model for multi-scale thermal management of data centers using entransy theory, Build. Simulat. 12 (2019)
323–336, https://doi.org/10.1007/s12273-018-0479-z.
[12] F. Wang, Y. Huang, B.Y. Prasetyo, Energy-efficient improvement approaches through numerical simulation and field measurement for a data center, Energies 12
(14) (2019) 2757, https://doi.org/10.3390/en12142757.
[13] I. Turkmen, C.A. Mercan, H.S. Erden, Experimental and computational investigations of the thermal environment in a small operational data center for potential
energy efficiency improvements, J. Electron. Packag. 142 (3) (2020), 031116, https://doi.org/10.1115/1.4047845.


[14] X. Meng, J. Zhou, X. Zhang, Z. Luo, H. Gong, T. Gan, Optimization of the thermal environment of a small-scale data center in China, Energy 196 (2020), 117080,
https://doi.org/10.1016/j.energy.2020.117080.
[15] H. Lu, Z. Zhang, Numerical and experimental investigations on the thermal performance of a data center, Appl. Therm. Eng. 180 (2020), 115759, https://doi.org/10.1016/j.applthermaleng.2020.115759.
[16] ASHRAE, Special Publication, Thermal Guidelines for Data Processing Environments, American Society of Heating, Refrigerating and Air-Conditioning
Engineers, Inc., Atlanta, GA, USA, 2004.
[17] X. Xiong, P.S. Lee, Vortex-enhanced thermal environment for air-cooled data center: an experimental and numerical study, Energy Build. 250 (2021), 111287,
https://doi.org/10.1016/j.enbuild.2021.111287.
[18] D.C. Wilcox, Formulation of the k-ω turbulence model revisited, AIAA J. 46 (11) (2008) 2823–2838, https://doi.org/10.2514/1.36541.
[19] X. Han, W. Tian, J. VanGilder, W. Zuo, C. Faulkner, An open source fast fluid dynamics model for data centre thermal management, Energy Build. 230 (2021),
110599, https://doi.org/10.1016/j.enbuild.2020.110599.
[20] D.V. Sheth, S.K. Saha, Numerical study of thermal management of data centre using porous medium approach, J. Build. Eng. 22 (2019) 200–215, https://doi.org/10.1016/j.jobe.2018.12.012.
[21] ASHRAE TC 9.9, Thermal Guideline for Data Processing Environments, American Society of Heating Refrigerating and Air-Conditioning Engineers. Inc., Atlanta,
GA USA, 2015.
[22] R. Sharma, C. Bash, C. Patel, Dimensionless parameters for evaluation of thermal design and performance of large-scale data centres, in: Proceedings of the 8th
AIAA/ASME Joint Thermophysics and Heat Transfer Conference, 2002, https://doi.org/10.2514/6.2002-3091. St. Louis, Missouri, USA, June 24-26.
[23] A. Dogan, S. Yilmaz, M. Kuzay, E. Demirel, Development and validation of an open-source CFD model for the efficiency assessment of data centers, Open Research Europe 2 (41) (2022) 1–20, https://doi.org/10.12688/openreseurope.14579.1.
[24] A. Dogan, S. Yilmaz, M. Kuzay, E. Demirel, OpenFOAM cases of the paper “Development and validation of an open-source CFD model for the efficiency assessment of data centers” [Data set], Zenodo (Version 2), 2022, https://doi.org/10.5281/zenodo.6336674.
[25] S. McCluer, Battery Technology for Data Centers and Network Rooms: VRLA Reliability and Safety, vol. 39, Schneider Electric Data Center Science Center White
Paper, 2012, pp. 1–10.
[26] M.K. Patterson, The effect of data center temperature on energy efficiency, in: 11th Intersociety Conference on Thermal and Thermomechanical Phenomena in
Electronic Systems. 2008, Orlando, FL, USA, 2008, https://doi.org/10.1109/ITHERM.2008.4544393. May 28-31.
[27] S.M.M. Nejad, G. Badawy, D.G. Down, EAWA: energy-aware workload assignment in data centers, in: International Conference on High Performance Computing
and Simulation (HPCS), 2018, Orleans, France, 2018, https://doi.org/10.1109/HPCS.2018.00053. July 16-20.
[28] N. El-Sayed, I.A. Stefanovici, G. Amvrosiadis, A.A. Hwang, B. Schroeder, Temperature management in data centers: why some (might) like it hot, in: Proceedings
of the 12th ACM SIGMETRICS/PERFORMANCE Joint International Conference on Measurement and Modeling of Computer Systems, 2012, pp. 163–174,
https://doi.org/10.1145/2254756.2254778. London, England, UK, June 11-15.
[29] H. Moazamigoodarzi, R. Gupta, S. Pal, P.J. Tsai, S. Ghosh, I.K. Puri, Modeling temperature distribution and power consumption in IT server enclosures with row-
based cooling architectures, Appl. Energy 261 (2020), 114355, https://doi.org/10.1016/j.apenergy.2019.114355.
[30] B. Norouzi-Khangah, M.B. Mohammadsadeghi-Azad, S.M. Hoseyni, S.M. Hoseyni, Performance assessment of cooling systems in data centers; Methodology and
application of a new thermal metric, Case Stud. Therm. Eng. 8 (2016) 152–163, https://doi.org/10.1016/j.csite.2016.06.004.
[31] M.K. Herrlin, Rack cooling effectiveness in data centers and telecom central offices: the rack cooling index (RCI), Transactions-American Society of Heating
Refrigerating and Air conditioning Engineers 111 (2) (2005) 725.

