Beat the Heat – Effective Cooling Strategies for Today’s Data Centers and Server Rooms
Uptime and cooling are top concerns for many Data Center managers. Excess heat in a server room adversely affects equipment performance and shortens equipment lifespan. Keeping a server room at the recommended 68° to 77°F (1) is not an easy task. Many factors make cooling today’s Data Centers a significant challenge, including high-density computing clusters and rapid changes in technology. The strategies below can help alleviate this challenge.
High-Density Computing Clusters
The rise of blade servers and virtual servers has greatly increased the potential power consumed per rack, and with it the resulting heat output. While the heat dissipated by a 2 ft by 2.5 ft rack is currently about 10 kilowatts or more, experts estimate that future equipment designs will require dissipating 30-50 kW in the same rack space.(2) The trend toward increased power consumption has been documented in several studies, including a recent multi-year study of 19 computer rooms in which power consumption rose by 39% between 1999 and 2005.(2)
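To put these figures in perspective, the per-rack loads cited above can be converted to footprint heat density. The sketch below is simple unit arithmetic using the 2 ft by 2.5 ft rack footprint and the kW figures from the text; nothing else is assumed.

```python
# Illustrative heat-density arithmetic based on the figures cited above.
# The rack footprint (2 ft x 2.5 ft) and kW loads come from the text;
# the rest is unit conversion.

RACK_FOOTPRINT_SQFT = 2.0 * 2.5  # 5 sq ft of floor space per rack

def watts_per_sqft(rack_kw: float) -> float:
    """Convert a per-rack heat load in kW to watts per square foot of footprint."""
    return rack_kw * 1000.0 / RACK_FOOTPRINT_SQFT

print(watts_per_sqft(10))  # today's ~10 kW rack -> 2000.0 W/sq ft
print(watts_per_sqft(30))  # projected 30 kW rack -> 6000.0 W/sq ft
print(watts_per_sqft(50))  # projected 50 kW rack -> 10000.0 W/sq ft
```

Even before the projected densities arrive, 2,000 W per square foot is far beyond what traditional raised-floor air distribution was designed to handle, which is why the strategies below matter.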
Changes in Technology
Advances in technology have historically occurred at an extremely rapid rate, as illustrated by “Moore’s Law,” which predicts the doubling of semiconductor performance approximately every 24 months. Increased computational ability has, in turn, led to increased power consumption and heat loads. Data Center managers should take these trends into account when planning for future expansion. According to Gartner Research, “Without careful planning and coordination between the data center facilities staff and the server procurement staff, data centers will not be able to increase power or cooling in line with increases in server deployments…through year-end 2008, heat and cooling requirements for servers will prevent 90 percent of enterprise data centers from achieving the maximum theoretical server density.” (3)
Isolating Hot Spots
Hot Spots are areas in a Data Center that are not properly cooled, often resulting in temperatures that exceed recommended conditions for maximum equipment reliability and performance. Hot spots are not necessarily caused by a lack of cooling capacity; they commonly occur in Data Centers with sufficient or even excess capacity and are often the result of poor circulation or improper air flow.
- Zone Hot Spots can span fairly large areas of a Data Center and occur when temperatures at all air intake levels of a rack or cabinet are too high because expelled hot air is not properly routed away.
- Vertical Hot Spots occur over a small area and often affect a single server rack. They occur when equipment at the bottom of a rack consumes the available supply of cold air and devices higher up in the rack pull in the hot air exhaust of adjacent equipment or ambient air.
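The two hot-spot patterns above can be distinguished from intake-temperature readings taken at several heights on a rack. The sketch below is a hypothetical illustration: the sensor layout and data shape are assumptions, while the 77°F ceiling is the recommended maximum cited earlier in the article.

```python
# Hypothetical sketch: classify hot spots from intake-temperature readings.
# Sensor placement and reading format are assumptions for illustration;
# 77°F is the recommended intake ceiling cited earlier.

MAX_INTAKE_F = 77.0  # recommended upper bound for intake air temperature

def classify_rack(intake_temps_f):
    """intake_temps_f: readings from bottom to top of one rack's air intakes."""
    hot = [t > MAX_INTAKE_F for t in intake_temps_f]
    if all(hot):
        # Entire intake column too hot: exhaust air is recirculating over the area.
        return "zone hot spot (entire intake column too hot)"
    if any(hot) and not hot[0]:
        # Bottom is fine but upper intakes run hot: cold air consumed lower down.
        return "vertical hot spot (upper intakes starved of cold air)"
    if any(hot):
        return "hot spot (check airflow routing)"
    return "ok"

print(classify_rack([72, 74, 75]))  # all intakes within range -> "ok"
print(classify_rack([80, 81, 82]))  # whole column hot -> zone hot spot
print(classify_rack([72, 76, 84]))  # only the top is hot -> vertical hot spot
```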
Strategies to Improve Data Center Cooling
Data Center managers can take several steps to meet Data Center cooling challenges, including choosing the right rack, increasing Data Center energy efficiency, using liquid cooling units, and taking advantage of environmental monitoring.
- Select The Right Rack And Accessories
To maximize equipment cooling, consider intelligent and space-efficient design features when selecting a server rack, including frame profile and capacity for increased packing density. Use blanking panels to manage air flow, and select a rack with built-in channels for better cable management and improved air flow. Fully perforated doors and top panels can further improve ventilation. Also consider server rack accessories that improve cooling, such as fans, enclosure blowers, and rack air conditioners. In addition, consider higher-voltage power distribution, such as 220V, which delivers significantly more power per circuit at the same amperage, using fewer circuits while providing a more balanced power load. This can reduce the overall number of PDUs needed to power equipment, leaving more space for airflow.
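The power-per-circuit point can be sketched with simple arithmetic. The breaker ratings and the 80% continuous-load derating below are typical North American electrical practice, used here as illustrative assumptions rather than figures from the text.

```python
# Illustrative power-per-circuit arithmetic for the higher-voltage point above.
# Breaker ratings and the 80% continuous-load derating are typical practice;
# treat the specific values as assumptions for illustration.
import math

def usable_watts(voltage: float, breaker_amps: float, derate: float = 0.8) -> float:
    """Continuous power available on one circuit after derating."""
    return voltage * breaker_amps * derate

w_120 = usable_watts(120, 20)  # 1920.0 W per 120V/20A circuit
w_220 = usable_watts(220, 30)  # 5280.0 W per 220V/30A circuit

# Circuits needed to feed a 10 kW rack:
print(math.ceil(10_000 / w_120))  # 6 circuits at 120V/20A
print(math.ceil(10_000 / w_220))  # 2 circuits at 220V/30A
```

Fewer circuits means fewer PDUs and whips inside and under the rack, which is exactly the airflow benefit the paragraph above describes.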
- Aim for Energy Efficiency
There are several steps you can take to reduce overall energy consumption and the resulting heat loads in your data center. To begin with, consider hiring an expert to conduct room diagnostics, measure airflow, and correct any cooling problems identified. Next, conduct a thorough audit of your equipment and determine whether any servers can be consolidated or decommissioned; in some organizations this process can cut power consumption by up to 30%. (4) Finally, clear any clutter under your Data Center floor, including cabling, that might be impeding air flow.
- Deploy Liquid Cooling Units
As power-intensive applications and server densities have increased, Liquid Cooling Packages (LCPs) have become a valuable alternative to ambient air cooling and can better meet the challenges presented by high-density computing clusters. These modular, temperature-neutral high-density cooling solutions use air/water heat exchangers to provide uniform, effective cooling. Liquid cooling units use horizontal airflow, supplying constant-temperature cold air at the front intake and removing hot air from the rear of the enclosure. They can be mounted at the rack base or in a rack “side car.” A fully loaded LCP provides 30 kW of cooling output, with up to three cooling modules per equipment rack, and varies fan speed and water flow based on the actual heat load generated in the cabinet.
- Use Environmental Monitoring
Environmental monitoring devices allow administrators to proactively monitor rack and server room temperatures, including hot spots, at any time and from anywhere, protecting mission-critical applications. They also allow administrators to continuously monitor amperage draw per circuit, water leaks, and physical security, and can automatically send alerts via SMTP, SMS, or SNMP when conditions exceed established thresholds. This lets IT managers respond quickly to irregularities before they become larger problems. These devices also aid future planning, as they provide valuable data for trend analysis.
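The threshold-and-alert pattern described above can be sketched in a few lines. This is a hypothetical illustration, not a vendor’s actual firmware: the sensor names, threshold values, email addresses, and the choice of plain SMTP as the alert transport are all assumptions.

```python
# Hypothetical monitoring sketch: compare readings against thresholds and
# alert when they are exceeded. Sensor names, threshold values, addresses,
# and the SMTP transport are assumptions for illustration only.
import smtplib
from email.message import EmailMessage

THRESHOLDS = {"intake_temp_f": 77.0, "amps_per_circuit": 16.0}

def check(readings: dict) -> list:
    """Return human-readable alerts for any out-of-range readings."""
    return [
        f"{name} = {value} exceeds threshold {THRESHOLDS[name]}"
        for name, value in readings.items()
        if name in THRESHOLDS and value > THRESHOLDS[name]
    ]

def send_alerts(alerts, host="localhost"):
    """Email the alert list to the on-call address (placeholder addresses)."""
    msg = EmailMessage()
    msg["Subject"] = "Data center environmental alert"
    msg["From"] = "monitor@example.com"   # placeholder address
    msg["To"] = "oncall@example.com"      # placeholder address
    msg.set_content("\n".join(alerts))
    with smtplib.SMTP(host) as s:
        s.send_message(msg)

# One reading over the temperature ceiling, one circuit well within range:
alerts = check({"intake_temp_f": 82.5, "amps_per_circuit": 12.0})
print(alerts)  # only the over-temperature reading produces an alert
```

Real monitoring appliances add polling schedules, SNMP traps, and trend logging on top of this basic loop; the trending data is what makes the capacity-planning use mentioned above possible.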
When considering Data Center management, cooling is only one piece of the puzzle. Other key considerations include intelligent power management and KVM console management tools. The team at 42U specializes in assessing needs, creating solutions, and supporting our clients to ensure that IT professionals maximize their use of current technologies and improve overall business performance.
(1) Flaherty, Escott, Soucey, and Khankari (2005). “CFD Simulation Helps Optimize Data Center Cooling Performance.” Data Center Journal. Referencing “Thermal Guidelines for Data Processing Environments” (2004).
(2) Brill, K. (2005) “2005-2010 Heat Density Trends in Data Processing, Computer Systems and Telecommunications Equipment.” Uptime Institute.
(3) Connor, D. and Mears, J. (2004). “Blades won’t cut it unless you keep them cool.” TechWorld. Quoting Gartner Research (2004).
(4) Mitchell, R. (2006) “Sidebar: Eight Tips For a More Efficient Data Center.” Computerworld.