Close-Coupled Cooling

Close-Coupled Cooling Solutions for Server Racks, Computer Rooms & Data Centers

Close-Coupled Cooling is an ideal solution for high-density configurations, eliminating hot spots while improving energy efficiency.
As computing grows denser and more powerful, the traditional layout of perimeter CRACs, fans, and perforated floor tiles struggles to deliver the required volume of cool air, to maintain a uniform server inlet temperature, and to expel exhaust air from the room. A potential remedy for these problems is close-coupled cooling, which moves the once-distant air conditioner closer to the actual compute load.

What is Close-Coupled Cooling?
Close-coupled cooling is a recent entry into the data center lexicon, and many manufacturers apply the term to their latest generation of cooling products. Though these solutions vary in configuration and capacity, the approach is the same: bring heat transfer as close as possible to its source, the equipment rack. Moving the air conditioner closer to the equipment rack ensures more precise delivery of inlet air and more immediate capture of exhaust air.

We can divide commercially available close-coupled solutions into two categories: open loop and closed loop.

Open-Loop Configuration:
Open-loop solutions bring the heat transfer closer to the equipment rack, but they are not completely independent of the room in which they’re installed. Their air streams interact to some extent with the ambient room environment.

These products circulate either chilled water or refrigerant through their cooling coils, and all require remote heat rejection, whether through a mechanical chiller plant or a condensing unit.

In-Row, In-Line Air Conditioners
The trend in computing is densification: from chips to processors to servers, a small footprint can now pack a significant punch. In some instances, the infrastructure has shrunk alongside the computers. The In-Row, In-Line air conditioner brings the functionality of the perimeter CRAC into the data center row itself, in a cabinet with dimensions similar to the server enclosures. These units and the computer load benefit from proximity: neither the cold air nor the hot exhaust air has far to travel.

End users can deploy these products in various ways. If the infrastructure is in place, the In-Row, In-Line products may serve as supplemental cooling solutions in existing data centers. They may serve as localized cooling within new rows or pods, addressing higher density installations. They may be used with a containment strategy to achieve even greater efficiency.

Rear Door Heat Exchangers
Replacing the rear door of an existing rack, these heat exchangers leverage the front-to-back airflow of most IT equipment. The servers exhaust warm air, which passes through the heat exchanger coil and returns to the room at a palatable temperature, often close to the server inlet temperature. These units can remedy hot spots in existing data centers, supplementing the existing air conditioning; for smaller loads and rooms, they can provide cooling for spaces not originally designed as data centers: data rooms, closets, and labs. Installed on the racks themselves, these units take up no floor space, an important point in smaller installations.
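
For a sense of scale, the heat such a door must absorb can be estimated with the standard sensible-heat approximation for air, Q (BTU/hr) ≈ 1.08 × CFM × ΔT (degrees F). The Python sketch below is a back-of-the-envelope illustration only; the rack airflow and exhaust temperature rise are assumed figures, not vendor data.

```python
# Rough capacity check for a rear door heat exchanger, using the
# standard sensible-heat approximation for sea-level air:
#   Q (BTU/hr) ~= 1.08 * CFM * delta_T (deg F)
# The airflow and temperature-rise figures below are assumptions.

def sensible_heat_btu_hr(cfm: float, delta_t_f: float) -> float:
    """Heat carried by an airstream, in BTU/hr."""
    return 1.08 * cfm * delta_t_f

BTU_PER_KW = 3412.0

rack_airflow_cfm = 1600.0  # combined server fan airflow (assumed)
exhaust_rise_f = 25.0      # exhaust minus inlet temperature (assumed)

q_btu = sensible_heat_btu_hr(rack_airflow_cfm, exhaust_rise_f)
print(f"Coil must absorb ~{q_btu / BTU_PER_KW:.1f} kW")  # ~12.7 kW
```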

Overhead Heat Exchangers
Overhead, in the “traditional” cooling sense, is an alternative to a raised-floor plenum: the AC discharges air from the ceiling into the cold aisle, and exhaust air rises into vents in the ceiling. The close-coupled version brings the air distribution and return much closer to the enclosures. Instead of the ceiling, these units sit directly on or above the server enclosure, making cold air delivery and hot air return far more precise. Deployed overhead, the units occupy no floor space. End users can deploy them as either supplemental cooling or localized cooling within the data center, depending on the overall load.

Closed-Loop Configuration:
Closed-loop cooling addresses the compute load independently of the room in which it’s installed. The rack and heat exchanger work exclusively with one another, creating a microclimate within the enclosure. Those familiar with containment strategies can think of closed-loop, close-coupled cooling as containment fully evolved: both hot aisle and cold aisle in the same rack footprint.

In-Rack Cooling
The In-Row/In-Line air conditioner, mentioned earlier, is a key part of this approach. The AC adjoins the server rack, and both are fully sealed. The solid doors on the enclosure and the In-Row AC contain the airflow, directing cold air to the server inlets and pulling exhaust air, via fans, through the cooling coil. The closed-loop design allows for very focused cooling at the rack level, so users can install very dense equipment independent of the ambient environment. As a result, they have the flexibility to house IT equipment in unconventional rooms and spaces.

In-Rack Close-Coupled Cooling Solution

Close-Coupled Cooling Efficiencies
Modular and Scalable Infrastructure
Data center professionals must understand their current requirements for space, power, and cooling and predict how those needs will grow over time; otherwise, the data center can quickly become obsolete. A past approach addressed these concerns with excess: bigger spaces, more racks and air conditioners, and large chiller plants. This left ample headroom, it was thought, for redundancy and growth.

For today’s data center, immersed in discussions of high density and even higher energy costs, this approach is problematic. As Sun Microsystems states in its Energy Efficient Data Centers paper, “Building data centers equipped with the maximum power and cooling capacity from day one is the wrong approach. Running the infrastructure at full capacity in anticipation of future compute loads only increases operating costs and greenhouse gas emissions, and lowers data center efficiency.”

Close-coupled cooling embodies two of the industry’s favorite buzzwords: modularity and scalability. Instead of building larger spaces and installing more air conditioners, professionals can “right-size” from the very beginning. Perhaps a 40 kW installation ordinarily spread over 10 racks now fits in 5 racks using rear door heat exchangers; perhaps a planned 5000 sq ft facility becomes a 2000 sq ft facility, with fewer CRACs supplemented by overhead cooling. Due to the modularity of these products, end users can add pieces in a “building-block” fashion, increasing capacity as business needs dictate.
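
The underlying arithmetic is simple but worth making explicit. A minimal sketch, using the illustrative figures from the paragraph above:

```python
# Back-of-the-envelope "right-sizing" math; all figures are
# illustrative, not vendor specifications.

total_load_kw = 40.0

for racks in (10, 5):
    print(f"{racks:>2} racks -> {total_load_kw / racks:.0f} kW per rack")

# Output:
# 10 racks -> 4 kW per rack   (traditional spread)
#  5 racks -> 8 kW per rack   (with rear door heat exchangers)
```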

The close-coupled design offers consistency in cooling performance, independent of the raised floors, floor tiles, and fans associated with traditional cooling. These products scale to support the full gamut of rack loads, from minimal to high density. A user with a closed-loop, close-coupled design knows that predictable capacity is available as the operation grows.

Fan Energy
In the traditional layout, fans must move air from the perimeter of the room, under the raised floor, and up through a perforated floor tile into the server intake. This process requires energy, and the amount varies from facility to facility. Impediments (large cable bundles, conduits) often exist under the raised floor, requiring additional fan energy to move the required volume of cold air.

The In-Row air conditioner reduces fan energy by proximity. The AC unit is embedded within the row of racks, ensuring the air does not have far to go. In addition, the AC delivers air directly to the row; there are no underfloor impediments to overcome.

In addition, certain close-coupled products operate with variable speed fans, where fan velocity aligns with the installed compute load within the rack. For fans, this point is far from insignificant: per the fan affinity laws, power consumption is proportional to the cube of fan speed. A SearchDataCenter.com article puts the savings into perspective: “[I]f you can reduce fan speed by 10%, fan power consumption decreases by 27%. If you reduce fan speed by 20%, fan power drops by 49%” (Fontecchio, 2009).
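
Those quoted figures follow directly from the cube-law relationship. A minimal sketch reproducing them:

```python
# Fan affinity law: fan power scales with the cube of fan speed.
# This reproduces the savings quoted above (Fontecchio, 2009).

def fan_power_fraction(speed_fraction: float) -> float:
    """Fan power as a fraction of full-speed power."""
    return speed_fraction ** 3

for reduction in (0.10, 0.20):
    speed = 1.0 - reduction
    savings = 1.0 - fan_power_fraction(speed)
    print(f"{reduction:.0%} slower -> {savings:.0%} less fan power")

# Output:
# 10% slower -> 27% less fan power
# 20% slower -> 49% less fan power
```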

Furthermore, certain rear door heat exchangers function without fans of their own, using the servers’ internal fans to push hot air through the heat exchanger coil, where it is cooled and released into the room.

Higher Chilled Water Temperatures
As documented on this site, chilled water supply temperatures typically range from 42 to 45 degrees F. The cold water is needed to produce cold air, which offsets the mixing that occurs on the data center floor. As cold inlet air and warm exhaust air interact, the hope is that the resulting inlet temperature falls within the ASHRAE recommended range of 64.4 to 80.6 degrees F.

Some close-coupled designs can accept warmer inlet water temperatures. Due to the proximity of the heat transfer and the design of the cooling coil, a warmer water temperature can provide a desired server inlet temperature within ASHRAE’s guidelines.

This point is significant for three reasons:

  • Chillers are estimated to represent 33% to 40% of a facility’s energy consumption (estimates vary by source), due in large part to the mechanical refrigeration process
  • A higher inlet water temperature maximizes the number of hours in which “free cooling” is possible through water-side economizers, where the environment permits
  • Chiller efficiency, measured in kW/ton, improves at higher supply water temperatures: fewer kW are needed per ton of cooling (see the sketch after this list)
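
As a rough illustration of the chiller-efficiency point, the sketch below applies a commonly cited rule of thumb that chiller efficiency improves on the order of 1-2% per degree F of increase in chilled water supply temperature. The baseline efficiency and the exact percentage are assumptions; actual gains depend on the specific chiller and climate.

```python
# Illustrative chiller-energy sketch. Assumes a rule-of-thumb gain of
# ~1.5% efficiency per deg F of warmer chilled water supply; real
# results vary by chiller and climate.

baseline_kw_per_ton = 0.60  # assumed efficiency at 45 F supply water
gain_per_deg_f = 0.015      # assumed 1.5% improvement per deg F
temp_raise_f = 10.0         # e.g., 45 F -> 55 F supply water

new_kw_per_ton = baseline_kw_per_ton * (1 - gain_per_deg_f) ** temp_raise_f
savings = 1 - new_kw_per_ton / baseline_kw_per_ton
print(f"~{baseline_kw_per_ton:.2f} -> ~{new_kw_per_ton:.2f} kW/ton "
      f"({savings:.0%} less chiller energy)")  # ~0.60 -> ~0.52, 14%
```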

However, the process is not as simple as raising a setpoint; it may not be practical, for example, for facilities that share a chiller plant with offices or other spaces.

Installation Considerations

Close-coupled products require connections to a chiller system; new supply and return lines must be run to the heat exchangers. This raises numerous design and installation considerations.

Infrastructure

  • Will the supply and return pipes be run under the floor or overhead? What are the challenges and risks with either option?
  • Does the existing chiller plant have sufficient qualities (pressure differential, water quality, flow rate) to ensure the products function as advertised?
  • Does the capacity of the product vary based on the chilled water temperature? Can the facility use elevated temperatures to improve energy efficiency?
  • Will there be an isolated data center loop, fed by a coolant distribution unit (CDU)? If so, what’s the interaction between building chilled water and the CDU? Does the particular product require a CDU?
  • How is redundancy accomplished?

Compatibility
Certain close-coupled solutions are compatible with existing server rack products. Rear door heat exchangers are intended to retrofit onto third-party racks. Overhead heat exchangers, assuming the supporting architecture is in place, can be deployed above any rack. Existing facilities may find these features friendly, as there is less need to re-rack servers and other equipment into different enclosures.

The In-Row air conditioners, in a closed-loop design, often require a proprietary rack enclosure to ensure the entire assembly is properly sealed. Even in an open-loop design, the products install at the row level, which may require building new rows or disrupting existing ones. Furthermore, these products do occupy floor space, something that smaller facilities must consider.

If a facility is willing to re-rack existing equipment, or is deploying new equipment, it can greatly reduce the total required floor space due to the increased capacities. The brownfield/greenfield distinction is very pertinent for close-coupled cooling: existing facilities must weigh the complexity and practicality of deploying the technology within their data centers, while greenfield facilities have time to plan and design the entire cooling scheme from soup to nuts.

Conclusion
The considerations for close-coupled designs are many. Product selection depends largely on the individual installation and requires input from a number of groups: IT and facilities staff, consulting engineers, and vendor representatives. Yet the result of this concerted effort can be considerable. Consider the following examples:

  • An EYP presentation compares a conventional cooling architecture using 45 degree chilled water with a closed-loop, close-coupled strategy using 55 degree chilled water, and projects annual energy savings of over $1 million with the close-coupled design.
  • For facilities where raising the chilled water temperature is impractical, a manufacturer study using a 45 degree chilled water supply with an In-Row product reports $60,000 in annual energy savings over perimeter CRAH technology (Bean & Dunlap, 2008).
  • The oft-cited EPA Report to Congress christens close-coupled liquid cooling a “State of the Art” technology, a key element in curbing cooling plant energy consumption.
  • Close-coupled cooling can accommodate a “Zone” approach to data center expansion, addressing the higher performance loads. According to Gartner, the approach allows for “adequate power distribution and cooling…without having to create a power envelope for the entire data center that supports high density” (Cappuccio, 2008). This zone approach reduces both first costs and ongoing electricity costs.

The final point is key. The benefits of energy efficiency are not exclusive to the environment. Close-coupled cooling products, along with other best practices, offer financial incentives. From potential utility rebates to lower utility bills, the business case is there. The ecological results are simply an added bonus.


References

Bean, J., & Dunlap, K. (2008). Energy Efficient Data Centers: A Close-coupled Row Solution. ASHRAE Journal, 34-40.

Cappuccio, D. (2008). Creating Energy-Efficient Low Cost, High Performance Data Centers. Gartner Data Center Conference (p. 18). Las Vegas.

EPA. (2007, August 2). EPA Report to Congress on Server and Data Center Energy Efficiency. Retrieved January 5, 2009, from Energy Star: http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Report_Exec_Summary_Final.pdf

EYP Mission Critical Facilities. (2006, July 26). Energy Intensive Buildings Trends and Solutions: Data Centers. Retrieved February 2, 2009, from Critical Facilities Roundtable: http://www.cfroundtable.org/energy/072106/myatt.pdf

Fontecchio, M. (2009, January 21). Data Center Air Conditioning Fans Blow Savings Your Way. Retrieved January 22, 2009, from Search Data Center: http://searchdatacenter.techtarget.com/news/article/0,289142,sid80_gci1345584,00.html

Sun Microsystems. (2008). Energy Efficient Data Centers: The Role of Modularity in Data Center Design. Sun Microsystems.

