IT Handbook: Energy Efficient Cooling - Data Centers

OPTIMIZE AIRFLOW AND ELIMINATE HOT SPOTS
One of the most common problems data centers face is hot spots; raising inlet air temperatures can exacerbate them. Hot spots can occur for many reasons, most often because of uneven or insufficient distribution of cool air or because hot aisle air mixes back into the cold aisle. They can also occur when hot air recirculates within the cabinet.

A variety of methods and devices can mitigate hot spots. Try redistributing the perforated floor tiles or grates so that more cool air reaches the warmest aisles, but be careful not to inadvertently create new hot spots. If you add too many perforated tiles or grates, the overall underfloor static pressure will drop and reduce cooling in other areas. You can do this through trial and error or by using computational fluid dynamics (CFD) modeling. However, CFD results will be meaningless unless you have an accurate model with all underfloor cables and obstructions properly mapped.
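
To build intuition for why adding tiles lowers underfloor pressure, here is a minimal Python sketch using the standard orifice-flow approximation, in which airflow through a tile scales with the square root of plenum static pressure. The discharge coefficient, tile open area, and total supply flow below are illustrative assumptions, not measured or vendor figures.

```python
import math

# Orifice-flow approximation: airflow through a perforated tile scales
# with the square root of the underfloor static pressure.
# All numbers below are illustrative assumptions, not vendor data.

RHO = 1.2          # air density, kg/m^3
CD = 0.65          # discharge coefficient for a perforated tile (assumed)
OPEN_AREA = 0.06   # open area per tile, m^2 (assumed, ~25% open tile)

def tile_airflow_m3s(static_pressure_pa: float) -> float:
    """Volumetric flow through one tile at a given plenum pressure."""
    return CD * OPEN_AREA * math.sqrt(2 * static_pressure_pa / RHO)

# A fixed-speed CRAC delivers roughly constant total airflow, so adding
# tiles divides that flow across more openings and drops plenum pressure.
total_flow = 8.0  # total underfloor supply, m^3/s (assumed)

for tiles in (20, 25, 30):
    per_tile = total_flow / tiles
    # Invert the orifice equation to estimate the resulting plenum pressure.
    pressure = RHO / 2 * (per_tile / (CD * OPEN_AREA)) ** 2
    print(f"{tiles} tiles: {per_tile:.2f} m^3/s per tile at ~{pressure:.0f} Pa")
```

Running this shows per-tile flow and plenum pressure both falling as tiles are added, which is exactly how extra tiles in one aisle can starve cooling elsewhere.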

If you aren’t using blanking plates in your cabinets to prevent cold-air bypass and recirculation, you should consider doing so. They are a low-cost investment that will pay off immediately and will be critical if you use containment systems. In addition, mitigate underfloor airflow leaking into cabinets through cable openings with air-containment devices such as brush collars.

TAKING INCREMENTAL TEMPERATURE STEPS
Once you’ve balanced and corrected the airflow, eliminated any hot spots and maintained a stable baseline temperature of 64.4 to 80.6 degrees F for a day or more, you can begin to slowly and methodically increase the temperature. Begin by raising the temperature one or two degrees Fahrenheit. Then wait 24 hours, ideally taking measurements at the same time of day or averaged over a 24-hour period, and compare your temperature and energy readings. You should see some improvement in energy use. Remember: You’re only looking for 1% to 3% savings of cooling energy per degree Fahrenheit.
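
As a quick sanity check on that rule of thumb, the following sketch computes the expected range of savings for a given setpoint increase. The baseline cooling load is an assumed example figure, not data from the handbook.

```python
# Quick arithmetic for the rule of thumb above: roughly 1% to 3%
# cooling-energy savings per degree Fahrenheit of setpoint increase.
# The baseline load is an assumed example figure.

baseline_cooling_kw = 100.0   # assumed cooling load before any changes
degrees_raised = 5            # total setpoint increase, in deg F

for pct_per_degree in (0.01, 0.03):
    saved_kw = baseline_cooling_kw * pct_per_degree * degrees_raised
    print(f"At {pct_per_degree:.0%}/deg F: about {saved_kw:.1f} kW saved")
```

For a 100 kW cooling load, a five-degree increase lands somewhere between 5 kW and 15 kW of savings, which is why each individual one-degree step produces only a modest, gradual change in your readings.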

Once you reach a room temperature of 75 degrees F or higher, raise the temperature in one-degree increments and keep a close watch on the air-inlet temperatures at the front of the hottest racks. Watch for IT systems that begin to report internal temperature problems.
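
A minimal sketch of that monitoring step is below. The rack names and readings are hypothetical stand-ins for whatever your DCIM tool or sensor network actually exposes; the 80.6 degrees F ceiling comes from the recommended envelope cited above, and the warning margin is an assumption.

```python
# Flag any rack whose inlet temperature approaches the top of the
# recommended envelope (80.6 F). Readings below are hypothetical.

RECOMMENDED_MAX_F = 80.6
WARNING_MARGIN_F = 2.0  # alert before the limit is reached (assumed)

inlet_temps_f = {       # hypothetical readings from the hottest racks
    "rack-A01": 77.4,
    "rack-A07": 79.1,
    "rack-B03": 80.2,
}

for rack, temp in sorted(inlet_temps_f.items(), key=lambda kv: -kv[1]):
    if temp >= RECOMMENDED_MAX_F:
        print(f"{rack}: {temp:.1f} F  OVER max - stop raising setpoint")
    elif temp >= RECOMMENDED_MAX_F - WARNING_MARGIN_F:
        print(f"{rack}: {temp:.1f} F  near limit - hold and re-measure")
    else:
        print(f"{rack}: {temp:.1f} F  within envelope")
```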

DRAWBACKS OF RAISING THE TEMPERATURE
One of the inherent problems with pushing the temperature envelope is that your thermal reserve time will be reduced or nonexistent if you experience a loss of cooling system capacity. Consider that carefully before increasing temperatures in your data center.

The issue of time-to-recover from a cooling system failure versus thermal rise time is a complex one. Before you increase your data center’s temperature, examine the redundancy and recovery time of your cooling systems. If you don’t start with a solid system and recovery plan in place, the problem becomes much more significant at hotter intake temperatures, since you may have less time before your IT equipment needs to be shut down (or shuts itself down on internal thermal protection).

This can be a serious factor during a simple utility power outage, even without a cooling system failure. In most cases, the backup generator will start and begin to stabilize, and the automatic transfer switch (ATS) will bring the power back online within 30 to 60 seconds. However, the compressors, whether in the CRACs or in the chiller plant, will have a restart timer delay of three to five minutes. This can cause a problem unless you have some form of stored cooling reserve, especially if you plan to use a containment system to combat high-density heat loads. In a chilled-water system, the chilled-water pumps and air handlers can continue to run or restart immediately.
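
To see why that three-to-five-minute gap matters, the following back-of-the-envelope sketch estimates how fast room air heats once mechanical cooling stops, using a simple sensible-heat balance. It deliberately ignores the thermal mass of walls, floor, and equipment, so it is pessimistic; the room volume and IT load are assumed for illustration.

```python
# Thermal rise during the compressor restart gap described above
# (ATS transfer plus a 3-5 minute restart timer). Ignores building
# and equipment thermal mass, so this is a worst-case estimate.

RHO_AIR = 1.2      # air density, kg/m^3
CP_AIR = 1005.0    # specific heat of air, J/(kg*K)

room_volume_m3 = 500.0   # assumed room volume
it_load_kw = 100.0       # assumed IT heat load
gap_minutes = 5.0        # ATS transfer + compressor restart delay

air_mass = RHO_AIR * room_volume_m3
# Rate of temperature rise if the full load heats only the room air:
rise_c_per_s = (it_load_kw * 1000) / (air_mass * CP_AIR)
rise_c = rise_c_per_s * gap_minutes * 60

print(f"~{rise_c_per_s * 60:.1f} C/min -> ~{rise_c:.0f} C over {gap_minutes:.0f} min")
```

Real rooms rise far more slowly because structures and hardware absorb heat, but the calculation makes the point: the warmer your baseline setpoint, the less of that restart gap you can ride through before inlet temperatures leave the safe envelope.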

Remember, each data center and cooling system is different. Your overall goal is to improve your data center’s energy efficiency while still ensuring that the IT equipment remains within its environmentally safe operating envelope.
