Hey there guys, Paul here, from TheEngineeringMindset.com. In this video, we're going to
be looking at Data Center HVAC with a focus on the cooling systems used. We'll compare how the
different strategies work and how to improve their efficiency, especially with the growing trend of using computational fluid
dynamics, or CFD software, and we'll also run some simulations to show just how powerful this is. Data centers are rooms of computer servers which provide networking
and internet-based services. They range in size from a small single room serving a single organization all the way up to the enormous facilities of internet giants such as Google and Facebook. More and more data centers
are opening each year as we use and increasingly
rely on the internet and remote services to store,
access, and stream our data. With this growing trend, it's
important that the buildings run as efficiently as possible. As data centers are operational 24/7, they can consume vast amounts of electricity, and as this electricity is
used to power the servers and process all the data,
it generates a lot of heat. This heat needs to be removed. Otherwise, the electrical
components will overheat and fail or even catch fire. The energy consumption for a typical data center will be split with around 50% being used by the IT equipment, 35% on cooling and HVAC, 10% on electrical infrastructure and support, and around 5% on lighting.
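To put a rough number on what that split means for efficiency, here's a quick Python sketch estimating the Power Usage Effectiveness, or PUE, which is just the total facility energy divided by the IT energy. The percentages are the ones above; the 1,000 kW facility figure is only an example value, not a measurement from a real site.

```python
# PUE = total facility energy / IT equipment energy.
# With roughly 50% going to IT, 35% to cooling/HVAC, 10% to electrical
# infrastructure and 5% to lighting, PUE works out at about 2.0.

SPLIT = {"it": 0.50, "cooling": 0.35, "electrical": 0.10, "lighting": 0.05}

def pue(split):
    """Power Usage Effectiveness from a fractional energy breakdown."""
    return sum(split.values()) / split["it"]

print(pue(SPLIT))  # -> 2.0

# Example (assumed figure): a 1,000 kW facility with this split would
# deliver only about 500 kW to the servers themselves.
```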
The electrical demand for data centers really does vary from just a few kilowatts up into the megawatts, depending on the size and location. So we're going to look at a
few examples of data centers and their air conditioning systems, as well as the efficiency
improvements that can be made. The first part we'll briefly cover is the non-data hall areas. These are the areas where staff are normally located: the security guards, the engineers and technicians, et cetera. These cover the offices, the toilets, the workshop, and the rest areas. These areas will be served by a separate mechanical ventilation system and will use either an air handling unit or a rooftop unit to
distribute conditioned air to suit the thermal comfort
needs of the humans. They might also use a
separate split unit heat pump, or VRF system, for temperature
control in these areas. I won't go into too much
detail in this segment as we've covered these
in our previous videos on chillers, AHUs, and RTUs. We also have videos on VRF, heat pumps, and split AC units. If you want to learn more about these, links are in the video description below. Coming over to the server room, one of the most common
methods currently used is to place the server racks onto a raised floor and then use computer room air conditioners, or CRAC units, to distribute the conditioned air to the server racks. The CRAC units have
heat exchangers inside which are connected to refrigeration units or chilled water systems
to remove the heat from the server racks. Some can also humidify
or dehumidify the air, which is very important in order to control static electricity in the air. They have filters inside to
remove dust from the room as well as a fan to circulate
and distribute the air. For extra efficiency, the crack units should use energy efficient filters, EC, or electronically controlled fans, and pressure sensors in the floor void to precisely control the air supply rate. Placing temperature sensors
on the intake grills of the server rack is often recommended to control the supply
temperature from the crack units as this matches the actual intake. The conditioned air will be forced by a fan in the crack unit into the void under the floor, and the small holes or
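As a rough sketch of that control idea, here's a small Python example that nudges the CRAC supply setpoint based on the warmest rack intake reading. The target, gain, limits, and sensor values are all made-up illustration figures, not a real CRAC controller interface.

```python
# Illustrative only: adjust the CRAC supply-air setpoint from the warmest
# server-rack intake temperature, so control follows what the servers
# actually receive rather than the room return temperature.

TARGET_INTAKE_C = 24.0   # desired rack intake temperature (assumed)
GAIN = 0.5               # proportional gain (assumed)

def new_supply_setpoint(current_setpoint_c, rack_intake_temps_c):
    """Nudge the supply setpoint so the hottest rack intake approaches the target."""
    hottest_intake = max(rack_intake_temps_c)
    error = hottest_intake - TARGET_INTAKE_C      # positive = racks too warm
    setpoint = current_setpoint_c - GAIN * error  # supply cooler air if too warm
    return max(16.0, min(setpoint, 27.0))         # keep within a sensible band

# Example: three rack intake sensors, current supply setpoint of 20 degrees C
print(new_supply_setpoint(20.0, [23.5, 25.2, 26.1]))  # -> slightly lower setpoint
```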
The conditioned air will be forced by a fan in the CRAC unit into the void under the floor, and small holes or grilles in the floor tiles allow the air to leave the void in strategic places. This air will collect the heat and rise up towards the ceiling. The CRAC units then suck this warm air back into the unit to be reconditioned. In the early days, the server racks were positioned facing different ways, but engineers soon realized
this was very inefficient, because the fresh cold air was just mixing with the warm discharge air of the servers, and this meant that the servers were receiving different air temperatures, some hot, some cold. This
led to high energy consumption as well as a high failure
rate of the servers. To combat this, the
servers were positioned so that all the server racks
were facing the same way. This was a slightly improved strategy, but quite often, some of the discharged air was being pulled into the intake of the server racks sitting behind, which led to mixing and an increased air temperature. The next strategy used, which
is still very common today, is the use of hot and cold aisles. This is a great improvement
on the previous designs because it separates the
fresh cold air stream from the hot discharge air. The cold air rises out
from the floor grills and is pulled through the servers. All the hot discharged air
collects in the hot aisle and rises up towards the ceiling, where it's then pulled
back into the CRAC units. This means the servers should receive only fresh cold air, and the CRAC units receive the hot discharged air. This increases the temperature differential across the CRAC unit's heat exchanger, and that will improve the efficiency of the machine.
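To see why that bigger temperature difference helps, here's a quick Python sketch using the standard sensible heat relation, Q = m-dot x cp x delta-T. The airflow and temperature figures are example values only, not numbers from a real installation.

```python
# Sensible cooling duty: Q = m_dot * cp * dT.
# A hotter return-air stream (less mixing) means more heat is removed
# for the same airflow, or the same heat with less fan energy.

AIR_DENSITY = 1.2   # kg/m^3 (approximate)
CP_AIR = 1.005      # kJ/(kg*K) (approximate)

def cooling_kw(airflow_m3_s, supply_c, return_c):
    """Sensible cooling capacity of a CRAC unit in kW."""
    m_dot = airflow_m3_s * AIR_DENSITY
    return m_dot * CP_AIR * (return_c - supply_c)

# Example values (assumed): 5 m^3/s of air supplied at 18 degrees C
print(cooling_kw(5.0, 18.0, 28.0))  # mixed return air, roughly 60 kW
print(cooling_kw(5.0, 18.0, 35.0))  # hot-aisle return air, roughly 103 kW
```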
This is not perfect, however, because there would still be some mixing of the hot and cold air streams. Cut-outs in the floor
can result in air leaks, which means that the cold air can leak straight into the hot aisle. Floor grilles which are too close to the CRAC units result in air recirculating straight back to the CRAC unit and mixing with the return air stream. Gaps between the servers can result in air recirculating around
inside the server rack. This can easily be solved, though, by installing blanking plates. If more cold air is supplied than needed, it will flow over the units and mix with the discharge air. If insufficient cold air is supplied, then warm discharged air will be pulled over the top and around the sides of the servers into the cold aisle, where it will mix with the cold air stream. We're going to look at some
CFD simulation examples of this occurring shortly. A much improved design, and
one that is very popular currently for both new
and existing data centers, is to use a physical barrier to separate the two air streams. There are a couple of ways to do this. We can use a barrier around the server racks to contain either the hot air or the cold air. Cold air containment is
a very popular choice for existing data centers. That's because it is easy
and cheap to implement, which means that the payback is quick. The cold air fills the cold aisle, and then the hot discharge fills the rest of the room, with the CRAC units pulling this in for reconditioning. However, it does also mean that any equipment located outside the cold zone will only receive hot air. The other containment strategy in use is hot aisle containment. This is best suited to new builds, as it costs more to install. In this strategy, the
cold air fills the room and the hot discharged air is pushed into a void within the ceiling. The intake for the CRAC unit is also ducted into the ceiling to pull this hot air out for reconditioning. Hot aisle containment
provides superior performance and also allows a slight
buffer for cooling, should the power or cooling system fail. We can actually compare the performance of different server room
setups quickly and easily using CFD or computational fluid dynamics. These simulations on the screen were generated using a
revolutionary cloud-based CFD and FEA engineering platform by SimScale, who have kindly sponsored this video. You can access this
software free of charge using the links in the
video description below, and they offer a number
of different account types depending on your level. It's not just limited
to data center design, it's also used for HVAC,
AEC, electronics design, as well as thermal and
structural analysis. Just a quick browse on their website and you can find thousands
of designs for everything from race cars to heat exchangers, pumps, and valves, which can all be copied
and used as templates for your own design. They also offer free webinars,
courses, and tutorials to help you build and
run your own simulations. If like me, you have some experience with creating CFD simulations, then you know that normally
this kind of software is very expensive, and you would also need a powerful computer to run it. SimScale, however, can all be down from the internet browser, and as they're cloud based, their service do all the work, and we can access our
designs from anywhere, which I'm really pleased about, as it makes our lives as
engineers a lot easier. So if you're an engineer,
a product designer, a student or a hobbyist,
then I highly recommend you try this out, get your free account by following the links in
the video description below. Okay, so for the first design we've used a standard hot aisle configuration. The arrows indicate the direction of flow, and the colors indicate the velocity. You can clearly see there's a huge amount of recirculation occurring between the aisles, and I've highlighted these in the boxes. The second design uses
partial hot aisle containment with a third hot aisle at
the end being uncontained. You can see the first hot
aisle has a very good profile and no recirculation is occurring. In the second aisle, however, there is some recirculation occurring towards the end of the row, so some measures, like blanking plates, need to be installed here. The third hot aisle has some
major recirculation occurring and that's because there
is no physical wall to separate the hot and cold air streams. If we then run a simulation
for thermal analysis of the designs, we can compare the two designs and show the resulting temperature distribution at different levels. The simulation starts at floor level and moves up to the top of the racks. From the comparison,
we can clearly observe that at the lower
levels, the second design has a much cooler cold aisle as compared to the first design. As we move to the upper
levels, the air temperatures start to mix, but the second design still maintains much cooler levels, well below 28 degrees Celsius or 82 degrees Fahrenheit, whereas the first design has temperatures above 29 degrees Celsius, or 84 degrees Fahrenheit. The recommended range from standards requires that the inlet air temperature be within 18 to 27 degrees Celsius, or 64 to 80 degrees Fahrenheit.
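If you wanted to sanity-check simulation results or sensor readings against that recommended range, a quick Python sketch like the one below would do it. The rack names and temperatures here are made-up example values, not output from these simulations.

```python
# Quick check of rack inlet temperatures against the 18-27 degrees C
# recommended inlet envelope mentioned above.

RECOMMENDED_C = (18.0, 27.0)

def c_to_f(c):
    """Convert Celsius to Fahrenheit."""
    return c * 9 / 5 + 32

def check_inlets(inlet_temps_c, limits=RECOMMENDED_C):
    """Print a pass/fail line for each rack inlet reading."""
    low, high = limits
    for rack, t in inlet_temps_c.items():
        status = "OK" if low <= t <= high else "OUT OF RANGE"
        print(f"{rack}: {t:.1f} C ({c_to_f(t):.1f} F) -> {status}")

# Example readings (assumed)
check_inlets({"rack-A1": 22.5, "rack-A2": 26.8, "rack-B3": 29.4})
```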
At the very top levels, the temperatures for the first design are now in the hotter range of 40-plus degrees Celsius, or 104 degrees Fahrenheit, while the second design, the partial containment one, has only a maximum of 30 degrees Celsius, or 86 degrees Fahrenheit. Thus, the second design
performs much better in this case and further
design improvements, such as hot aisle or cold aisle containment, can be studied using cloud-based CFD to improve data center cooling, as well as optimizing energy consumption of both the server
equipment and cooling units. Another type of data center design, which is becoming increasingly popular, is free and evaporative cooling. It can be retrofitted into some existing designs, but it's especially popular with large new purpose-built data centers like those of Facebook and Google. Some of these new designs do not use any refrigeration equipment for cooling. This can only be done in some regions of the world where ambient conditions are right, but it allows data centers to cool their servers without any refrigeration plant.
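To give a rough idea of why the local climate matters so much, here's a small Python sketch estimating what a direct evaporative cooler could achieve from the outdoor dry-bulb and wet-bulb temperatures. The effectiveness figure and the supply limit are assumed example values, not design data.

```python
# A direct evaporative cooler can cool air towards (but never below) the
# outdoor wet-bulb temperature. This sketch checks whether the achievable
# supply temperature stays within an assumed rack inlet limit.

EFFECTIVENESS = 0.9   # evaporative cooler effectiveness (assumed)
MAX_SUPPLY_C = 27.0   # assumed upper limit for rack inlet air

def evaporative_supply_c(dry_bulb_c, wet_bulb_c, effectiveness=EFFECTIVENESS):
    """Estimated leaving-air temperature of a direct evaporative cooler."""
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

def free_cooling_feasible(dry_bulb_c, wet_bulb_c):
    """True if evaporative cooling alone can meet the assumed supply limit."""
    return evaporative_supply_c(dry_bulb_c, wet_bulb_c) <= MAX_SUPPLY_C

# Example conditions (assumed): a dry climate versus a hot, humid one
print(free_cooling_feasible(35.0, 20.0))  # dry air -> True, about 21.5 C supply
print(free_cooling_feasible(35.0, 30.0))  # humid air -> False, about 30.5 C supply
```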
The ambient air is sucked into the building through louvers, is then heavily filtered, cooled, and humidified by evaporative coolers, and is then forced into the data hall in a hot aisle configuration. The exhaust of the hot
aisle is then connected to another set of fans
which pulls the hot air out and discharges this off
into the atmosphere. Some other cooling strategies which are slightly less common are the use of ducted systems with heat wheels or
heat exchangers fitted. These allow thermal energy to be transferred from one air stream to another without introducing fresh outside air into the building. The fresh air could contain dust, moisture, and salt particles, which deteriorate the servers' electrical components.
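As a rough sketch of that heat-transfer idea, the snippet below applies a simple sensible effectiveness model to a heat wheel separating the two air streams. The effectiveness and temperatures are assumed example values, not figures from a real system.

```python
# Sensible heat wheel: the hot data hall return air gives up heat to the
# cooler outdoor air stream without the two streams ever mixing.
# Air returned to the hall: T_return - eff * (T_return - T_outdoor)

EFFECTIVENESS = 0.75   # assumed sensible effectiveness of the wheel

def hall_supply_c(return_air_c, outdoor_c, effectiveness=EFFECTIVENESS):
    """Temperature of the indoor air leaving the wheel, heading back to the hall."""
    return return_air_c - effectiveness * (return_air_c - outdoor_c)

# Example (assumed): 35 degrees C return air, 15 degrees C outdoor air
print(hall_supply_c(35.0, 15.0))  # -> 20.0 degrees C supplied back to the hall
```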
To provide cooling to the CRAC units, you usually find a chilled water system using a traditional chiller. Depending on the location, some systems are able to turn off the chillers and use just the evaporative cooling capacity of the cooling towers for normal operation, then use the chillers as a backup if the cooling towers are unable to reach the set point. Some CRAC units contain their own small individual refrigeration system, which either uses a remote dry air cooler or dumps its heat into
a condenser water system. If the condenser system is used, then you'll often find a free
cooler connected to the system, or sometimes built into the chillers. This allows the heat to be removed with minimal or no use of the compressor, using just the fans to blow cooler ambient air across the condenser, which removes the heat. Okay guys, that's it for this video, thanks for watching. Don't forget to sign up for
your free SimScale CFD Account using links in the
video description below. Also, you can follow us on
Facebook, Twitter, Google+, and Instagram, links are below. Once again, thanks for watching.