
Free Cooling Webinar


Air-side and water-side economizers offer the potential for significant energy savings by using the ambient environment to augment or replace mechanical air conditioning. Depending on your location, environment, data center design and existing infrastructure, implementing a “free cooling” strategy can be extremely challenging.

Scot Heath, PE, 42U’s CTO, will introduce water-side and air-side techniques, discuss the costs and benefits, and shed light on what experts and ASHRAE are saying about Free Cooling and Economization.

This Webinar covers the following Free Cooling / Economization topics:

  • Economization Basics
  • Water-side Techniques
  • Air-side Techniques
  • Costs & Benefits
  • Controversy

Read More about Air-side Economizers


Read the Transcription

Tanisha White: Ladies and gentlemen, thank you for standing by and welcome to today’s session in the 2011 42U Web Seminar Series.

My name is Tanisha White and I’ll be your moderator for today’s webinar on Free Cooling and Economization 101.

During this presentation all participants will be in a listen-only mode. However, we encourage your questions or comments at any time through the Chat feature located at the lower left of your screen. These questions will be addressed as time allows. Since we’ve had a high number of participants for this webinar, we will try to allocate as much time as we can to field questions.

This webinar is being recorded today, May 5, 2011, and a copy of this webinar recording will be available on our Web site at www.42U.com approximately 48 hours after our presentation.

Our presenter today is 42U’s Chief Technology Officer, Scot Heath. Scot brings more than 30 years of data center and IT equipment engineering experience, and leads 42U’s Professional Services and Engineering Division.

At this time, I would like to welcome Scot.

Scot Heath: Thank you, Tanisha. As you all know, we’re going to talk today about Free Cooling and Economization, and I put at the top, you know, Free Cooling is not exactly free. I mean, if it were free we’d all be doing it.

So really, the term economization, as it’s applied to cooling in general and data center cooling specifically, refers to the ability to unload the refrigeration part of the cooling plant. So, you know, with chilled water we’ve got a big chiller out there someplace that has a refrigeration-cycle compressor, and with CRACs, same thing: they all have their own compressors in the units themselves that have a refrigeration cycle. And the objective of economization is to run those less or run those with less load.

So, the two kinds of economization we’re going to chat about are Air-Side and Water-Side, Air-Side being the exchange of heat directly from the air inside the data center to the air outside the data center. That can be done by exchanging the air itself, so bringing in outside air and replacing the air in the data center with that outside air, either in totality or partially.

Or it can be done indirectly through a heat exchanger, either a rotating wheel, those are in the news a lot lately, or even a static heat exchanger that has, you know, plates in close proximity to alternating air streams, so the outside air passes through on one side of the plate or set of plates, the inside air passes through on the other, and heat is exchanged across that heat-conductive surface.

Water-Side economization is where we exchange heat from the secondary loop, which is the nice clean water loop inside the data center directly to the primary loop, and then heat is rejected out of the primary loop in the normal manner. So, both of these techniques, again, you know, the objective is to unload that refrigeration cycle.

So, let’s start with Air-Side. We have a map here of potential Air-Side economization. This is from The Green Grid. They’ve got some very nice resources to both calculate the ability to do economization using particulars for your facility, as well as just some general information.

And you can see here that they have chosen a dry bulb temperature of less than 81 degrees, and a dew point of less than 59 degrees. So, you know, trying to control humidity when bringing in that air from outside, as well as being concerned with the temperature of the air.

Now, this is a fairly aggressive temperature set point, 81 degrees, well within the ASHRAE recommended range, but quite uncommon in most data centers, particularly data centers that don’t go to great lengths to maintain a very tight temperature tolerance across their inlets.

And as you expect, you know, the farther you go north the more hours you get free cooling-wise, and eventually of course you can get to a point where you just run 100% of the time on free cooling bringing in that outside air.

Now, if you’ve got a heat exchanger involved in this equation you’ll have some temperature delta that you have to assign for the temperature difference between the air on the outside and the air on the inside as it exits the heat exchanger, and we call that approach. So, you have to adjust the temperature of the air that you’d like to consider appropriately.
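
To make the hourly qualification concrete, here is a minimal sketch, not The Green Grid tool itself, of the test just described: an hour counts toward free cooling only if the outside dry bulb, adjusted for any heat-exchanger approach, and the outside dew point are at or below the chosen limits. The 81 and 59 degree limits come from the map above; the function name and the 5 degree approach in the example are illustrative assumptions.

```python
# A minimal sketch (not The Green Grid tool) of the free-cooling hour test
# described above. Limits and the approach value are illustrative assumptions.

def hour_qualifies(outside_drybulb_f: float,
                   outside_dewpoint_f: float,
                   max_drybulb_f: float = 81.0,
                   max_dewpoint_f: float = 59.0,
                   approach_f: float = 0.0) -> bool:
    """Return True if this hour can run on air-side free cooling.

    approach_f is the temperature penalty of an indirect heat exchanger;
    a direct (open) air-side system can use 0.
    """
    effective_drybulb = outside_drybulb_f + approach_f
    return effective_drybulb <= max_drybulb_f and outside_dewpoint_f <= max_dewpoint_f

# Example: 76 F dry bulb, 55 F dew point, 5 F approach on an indirect exchanger.
print(hour_qualifies(76.0, 55.0, approach_f=5.0))   # True
```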

Let’s go on.

So here, I have a diagram of a direct-exchange system, a very common method for doing Air-Side economization. I show it here with an open return; it doesn’t have to be open, it certainly can be above ceiling. The last one of these I worked on actually had an above-ceiling return, and the exhaust fans that are shown in pink there were just on the roof, a completely big open space.

The inlet fans, on the other hand, were quite some distance away and ducted down to the CRACs. And it’s important that in the case that we have ducting in place, the fans are sized appropriately, and then cost analysis is done appropriately to take into account the losses associated with that ducting.

It certainly takes energy to push the air that far, and it takes energy to exhaust the air as well. This is part of that not-free part. Not only does it cost money to install the pathways to get air in and out, but it costs money in an operating sense to keep that air moving.

You know, the diagonal lines that you see above the CRAC there are control louvers, and typically those are linked, either logically or actually physically linked, so that as one set opens the other set closes. So in non-economizer mode, the set that’s on the top there, the horizontal line, would be completely closed off and the set that’s on the vertical line on the right-hand side would be completely open, and we’d be operating in the mode that most data centers operate in.

That is, hot air would be returning back to the CRAC through that set of louvers, being cooled by the CRAC, and pushed out under the floor. You know, we show here a very common under-floor delivery scheme, and in this case we have containment. Containment is certainly a benefit, again, in being able to even out that temperature on the inlet side, and thus raise the temperature as much as possible so that we can take advantage of as many hours as we can to bring in that fresh air, but it’s not necessary.

And then of course, when the hot air’s exhausted it would go back to the CRAC, or in the case of the economizer, some portion of that air would go outside. And when I say, “Some portion,” what we’re doing with those louvers is we typically regulate the temperature of air in that plenum above the CRAC, and the set point that I like to use is slightly lower than the delivery air that I expect out of the CRAC.

You know, I want to open those louvers and introduce free cooling anytime I’ve got outside air that is cooler than the return air and meets my humidity specifications, but I’m going to try and regulate that mixed temperature to be the temperature going out the bottom. So, what happens in that case is if it’s warm outside, very close to the return air temperature, those louvers that return air from the room will close, all the return air will be exhausted, and we’ll run on 100% outside air, or very close to that.

As it gets colder outside and as it gets drier outside, we use less and less outside air and recirculate the air inside through the CRACs so that we maintain good temperature regulation. We don’t want to over-cool the equipment and then introduce warm, moist air, or maybe not warm, but moist air, as could happen. Let’s say an approaching storm front comes along and we’ve got our machinery inside cooled down to quite a cool temperature, maybe 60 degrees on the grills themselves.

If I suddenly introduce air that has a dew point higher than that, I can condense water on the IT equipment, and that’s a big no-no. All the manufacturers will spec quite a wide humidity range, typically as wide as the ASHRAE allowable range and sometimes wider, but they all say non-condensing. You don’t want water in your operating IT equipment.
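
As a rough illustration of the louver logic just described, here is a hedged sketch of a mixed-air damper controller: it admits outside air only when it is cooler than the return air and its dew point is safely below the coldest equipment surface, and it modulates the outside-air fraction to hold the plenum mix at the desired set point. Names and numbers are illustrative assumptions, not the control scheme from the slides.

```python
# A hedged sketch of the louver (damper) logic described above. The controller
# mixes return air and outside air to hold a plenum mix temperature near the
# desired CRAC supply temperature, and locks out outside air when its dew point
# risks condensation on cold equipment surfaces. Numbers are illustrative.

def outside_air_fraction(return_f: float,
                         outside_f: float,
                         outside_dewpoint_f: float,
                         mix_setpoint_f: float,
                         coldest_surface_f: float) -> float:
    """Fraction (0..1) of outside air to admit through the intake louvers."""
    # Condensation guard: never admit air whose dew point exceeds the
    # temperature of the coldest surface it will touch.
    if outside_dewpoint_f >= coldest_surface_f:
        return 0.0
    # No benefit unless outside air is cooler than the return air.
    if outside_f >= return_f:
        return 0.0
    # Ideal mixing: mix = f*outside + (1-f)*return  ->  solve for f and clamp.
    f = (return_f - mix_setpoint_f) / (return_f - outside_f)
    return max(0.0, min(1.0, f))

# Warm, dry day: 70 F outside, 85 F return, regulate the mix toward 62 F.
print(outside_air_fraction(85.0, 70.0, 50.0, 62.0, 60.0))  # 1.0 (all outside air)
```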

So, this next diagram is the other air economization technique that I mentioned, which is Indirect Air-Side, and that is the exchange of heat, but not the exchange of air. So, I show a static heat exchanger here. We’ve just got, as I say, alternating plates with channels such that the air inside the room goes through one set of channels, the air outside goes through the other set of channels, and heat is exchanged through that conductive material.

It doesn’t have to be static. As I said, the rotating wheels have been in the news a lot lately. In fact, I just saw an article about this being used at a university, I believe it was in Montana, and it’s quite an effective means of taking advantage of cool air outside while maintaining the integrity of the inside environment.

So, you know, the advantages here are that we don’t have to pay as much attention to what the humidity is outside, or to airborne contaminants, be they particulates or gaseous contaminants, but we still get to take advantage of the cost savings associated with cooling without that refrigeration cycle.

So, this is just a brief comparison of those two. With the direct method we have the least energy used; it takes less energy because I’m eliminating some restriction in my loop. If I remember back to those diagrams, I’ve got that ducting, and pushing air through that ducting takes energy. Pushing air through that heat exchanger also takes energy, so on the indirect side I’m going to have more fan load than I would if I were just exhausting air to the atmosphere, bringing air in from the atmosphere, and getting rid of that heat exchanger.

On the plus side, of course, I just mentioned this contamination issue, and we’ll talk more about that in just a second. Some of the things that make us more able to use Air-Side economization are “better” humidity control. And I put better in quotes because my definition of better here is that I want to be able to run with as wide a humidity range in my data center as possible.

There was an article last year, in fact in March, so just about a year ago, in the ASHRAE Journal questioning the need for any kind of humidity specification at all in data centers. And I’ve got some examples later on of data centers that use no humidity control. One of my pet peeves here is the way that we control humidity in the data center. If we do need to control it at all, and certainly on the high end we need to prevent that condensation I was talking about, there has to be some means to make sure that’s true.

But currently, the vast majority of cooling equipment out there, you know, stand-alone CRACs, air handlers, whatever the case may be, has got a specific humidity point that it tries to control to. And you’re allowed to work with a range around that somewhat by introducing deadband and sensitivity.

But I’d like to see a distinct upper and lower limit that we control humidity between, but don’t really care where it is in there. I think this matches up very well with both the allowed range, which is 20% to 80% relative humidity in the ASHRAE Guidelines, as well as the recent article I mentioned.
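
A minimal sketch of what that limit-band control might look like, assuming a simple 20% to 80% relative humidity band measured at the IT inlet; the function and thresholds are illustrative, not a specific CRAC’s control logic.

```python
# A minimal sketch of limit-band humidity control: the humidifier or
# dehumidifier only acts when relative humidity leaves the 20%-80% band,
# rather than chasing a single set point. Values are illustrative.

def humidity_action(rh_percent: float,
                    low_limit: float = 20.0,
                    high_limit: float = 80.0) -> str:
    if rh_percent < low_limit:
        return "humidify"        # only enough to pull back above the lower limit
    if rh_percent > high_limit:
        return "dehumidify"      # only enough to pull back below the upper limit
    return "off"                 # anywhere in between: do nothing

print(humidity_action(45.0))  # "off"
```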

The other pet peeve that I’ve got is that most of the time humidity control is done at the return side of the air handler. The return side is hardly interesting, let alone germane; we care about what temperature and what humidity the air entering our IT equipment is, not necessarily so much what temperature and humidity the air exiting our IT equipment is.

So another technique, which allows us to take advantage of economization as much as possible, is better air flow control. And the reason that better air flow control is important is to reduce recirculation. Recirculation induces a temperature delta on the inlet side of the IT equipment, and deltas of 20 degrees and above are something I see all the time in data centers.

And the typical reasons for this are poor adherence to best practices: we don’t use blanking panels 100%, we don’t have our equipment tightly placed together, we have gaps in rows, we don’t do a good job tuning the floor for vent tile placement. All those things add up to some range of temperatures across the face of the IT equipment.

We have margin in our data center, and margin is a good thing. We need to have a temperature such that when an event occurs, I break a belt on an air handler or I lose power momentarily, there’s some thermal mass in there that’s at a temperature that allows me to ride through until I reintroduce cooling.

By reducing that margin to the absolute minimum necessary, I’m able to use free cooling more often than if I had to have a bigger margin. And getting that temperature to be as even as possible across the face of the IT equipment inlets is very important to be able to do that.

And finally, I have better measurements here, and that is, you know, I need to know what that delta is. I need to know what the hottest spot in my data center is. And of course, the vast majority of time my only indicator is the return temperature back to my air handling units or CRACs. A, it’s a bad place to measure; B, it gives me no idea what that temperature range is across my IT equipment.

So, I’m a big fan of pervasive temperature monitoring, you know, using any of those systems that are out there. We handle SynapSense, there are others, but the fact that there are multiple sense points across the data center gives me the confidence to adjust that point up as high as possible.

So, let’s talk a little bit about contamination. I think the vast majority of the questions that we got, you know, in the pre-questions we asked for the webinar had to do with either return on investment or humidity/contamination.

Contamination is a very low risk in the vast majority of the United States. We have clean air in this country, and part of the reason for that is that we enforce some regulations that cause that. You know, I spent a lot of time in my previous position, and part of that was spent analyzing failures on boards that had been returned, and I have seen boards that fail due to airborne contaminants.

Those were from very specialized situations in the U.S., and I’ll give you an example of that. You know, we had failures that were induced by high sulfur-content air where the sulfur was put into the air by modeling clay. Big automotive manufacturers, when they create models of cars, use clay that has a high sulfur content, and that sulfur gets in the air. If there are servers or other pieces of IT equipment in close proximity to that clay, that sulfur gets sucked in.

Modern board manufacturing that requires the use of lead-free solder in turn requires extensive cleaning and very aggressive fluxes to cause the lead-free solder to stick. If any of that gets left on the board, and even if it doesn’t, you know, if certain metals are exposed to that air flow then the potential for contamination is greater when we have elements like sulfur and chlorine, and some other things in the air. Mixes of these sometimes make it worse.

Foreign countries, you know, who knows what quality diesel fuel is being used in the delivery truck that’s sitting outside idling and putting exhaust into the intakes of a data center?

So, the bottom line is it’s probably going to be just fine, but I always recommend that you test. There’s a specification that I’ve got here, this ISA specification, and it uses limits based on growth on copper for contaminants that are of the types I have listed here, so chlorine, sulfur, nitrogen oxide, basically anything that causes a nitrite or oxide growth on the copper itself.

And the test consists of, you know, placing a bare high oxygen copper target in an airflow for a period of time, and I believe it’s 30 days, and then you know send that back to the testing lab that provided you the sample and they measure the amount of growth that took place on there. And it doesn’t necessarily measure the kind of contaminant, you know, was it chlorine, was it sulfur, was it some combination, but it does tell you a go, no-go point.

And there are limits listed in that ISA spec for the amount of growth that they recommend for different kinds of environments. So, the best environment is typically what we consider a data center environment, maybe the next step down is data closets and remote servers. Maybe the next step down is kind of relay rooms.

So it’s, you know, maybe a $1000 test, well worth it if you’re considering spending the money to implement an Air-Side economization system for the peace of mind that it brings. And if you do happen to have a problem, all is not lost. I mean, there are filtering techniques, activated charcoal, for instance, that can be employed to clean that air. You may want to consider using indirect where you have a heat exchanger providing a barrier between the contamination and your inside air.

Humidity, you know, plays a role in this largely because humidity can accelerate the effects of those contaminants. Humidity control itself, for the sake of controlling humidity, is not so important. And in fact, I’ve got some examples here from the paper I mentioned previously on humidity control for data centers: many, many facilities on the West Coast, so lots of different humidity areas, that run with no humidity control whatsoever.

One of the authors of this paper is a recognized authority on ESD and ESD protection. And largely, when I think about humidity in the data center, the two concerns I hear most often are, “What about ESD?” and the other concern of, “What about condensation?”

On the ESD side, the argument given in the paper for extending that range quite low is, look, equipment manufacturers do very rigorous testing of the IT equipment that’s going to be placed in the data center for ESD susceptibility. And the ability to pass those tests far exceeds the amount of typical ESD that can be introduced by, you know, walking up to the machine and touching it. So, no matter what humidity level you have, you’re safe on the side of equipment that is in its case and in a grounded rack and very well protected.

The other part of the ESD concern is service: we have to do service on the IT equipment in the room, so I have my piece of equipment opened up, I take out memory boards, whatever the case may be, and I may have very sensitive circuits exposed. If I have very sensitive circuits exposed, no amount of humidity is good enough to protect against the damage that could occur from a discharge from a human body onto a very sensitive circuit.

So, other means of protection are necessary, grounding, using, you know, static dissipative mats, taking precautions when you open the machine to make sure that you drain charge from your body first. So, I – you know, I was thrilled to see this article. I personally have seen many, many data centers that run with no humidity control, and it’s quite dry here.

And, you know, I know of data centers that run in single digits quite often. And while I haven’t done the exercise of gathering failure data there, they certainly don’t report any association of ESD with an increase in failure rate. So, by all means, I say go for as wide a range as possible here.

So, let’s go on to a Water-Side and kind of leave these concerns about air for a while here, and I’m showing the same kind of map here. The big difference with Water-Side economization is that if cooling towers are being used to reject the heat from the primary loop to the atmosphere, we get to take advantage of wet bulb temperature.

And wet bulb temperature gives us some distinct advantages in the dry areas, particularly in the West. You can see that, you know, there’s kind of a large dip down of that blue and green and yellow color there with some big blue patches in the middle, and that’s due to the fact that these areas are very dry. We happen to live in one of those very dry areas right here.

You know, we’re on the Eastern side of the Rocky Mountains. When moisture comes in from the Pacific Ocean it tends to get lost in the Rockies and not make it to us; and therefore, we don’t have a lot of rain here. And the good news for us is we get to take advantage of that in wet bulb-kind of heat rejection mechanisms.

In fact, there was an article just last month: Hewlett Packard, just up the road from us, built a new data center that is currently 100% evaporative air-cooled. It’s a fairly good-sized facility, about 33,000 square feet and 6 megawatts, and no chillers whatsoever. They have existing chillers and a heat exchanger from the plant that’s there in case they have an unusually hot day, one of those kind of 100-year things, but the plan is to never run on those.

So again here, I want to point out that the wet bulb temperature that’s been specified to generate this map is less than 50 degrees. That’s a very aggressive point for most existing data centers. Most existing data centers in the United States use chilled water around 45 to 47 degrees. And what that means is all the cooling equipment that exists in that plant is designed around that set point.

So, I probably am not just cooling what’s on the floor. I’m probably also cooling my equipment rooms. I probably also have some comfort cooling on there if I’ve got meeting rooms and so on and so forth in my data center. So, I have to be very careful, when raising the temperature of my chilled water to take advantage of more free-cooling days, that I don’t go past the capacity of any of the installed equipment.

Now on the flip side, in the Hewlett Packard case for example, the design point for the chilled water there is 60 degrees. Sixty-degree chilled water provides a lot more flexibility for being able to use free cooling. In fact, if we could go another ten degrees, that would just extend those cooler colors down further in the map, and we’d get to take advantage of more hours of free cooling. So, it’s much easier to implement these kinds of things in aggressively designed, typically newer data centers where the chilled water is (unintelligible) temperature.

So, I’ve got a diagram here of a typical wet bulb-kind of heat rejection mechanism where we use a plate and frame heat exchanger to exchange the heat between the primary and secondary loop. So, the secondary loop’s on the left-hand side there. I happen to be showing an in-row cooler, but any kind of air handler that’s water cooled is appropriate here. And by water cooled, I mean a chilled water air handler, not a water-cooled compressor unit.

So, a chilled water air handler where I’ve got this big chiller plant someplace providing water at whatever temperature, the higher the better, normally. When my outside temperature gets low enough I reset the set point on my cooling towers, because I’m no longer pumping that cooler water into the condensers on the chiller when it’s operating, so I reset that down to the temperature that I’m interested in for the operation of my heat exchanger.

I roll all those valves over so that now I’m cycling water through my plate and frame heat exchanger, which is that device there at the top, rather than cycling it through the condenser side of my chiller. And then, rather than cycle water through the evaporator side of the chiller, I cycle it through the other set of plates or the other set of channels in that plate and frame heat exchanger for the secondary loop.

And heat is exchanged, of course, from the left-hand side to the right-hand side, either through the chiller, if I’m not on the heat exchanger, or through the heat exchanger if I am. And then it gets rejected to the atmosphere via evaporative cooling by that cooling tower I have over there on the right side.

Now, you don’t strictly have to use a cooling tower here. You can share, you know, a dry cooler if you’ve got an air-cooled chiller, but then you don’t get to take advantage of that wet bulb temperature. So, I’ve actually got a diagram of a case where we use a dry cooler, but we use it in a slightly different configuration.

It’s not shared. It’s a separate loop in this case from the one that’s the primary cooler for the chiller, and I use it as a pre-cooler, or it could be the 100% cooling source. And the advantage of this particular configuration is that any time the outside dry bulb temperature is lower than the return water temperature, I put this pre-cooler in the circuit.

It cuts the number of hours I can run down significantly, compared to, you know, a wet bulb heat rejection typically, but in the Northern areas of the country where it’s quite cold outside for a significant portion of the year, I get to take a load away from the chillers the whole time the outside air temperature is cooler than that return water. So, it may not be 100% free cooling, but it does offload that compressor significantly when I’m able to introduce this cooler into the circuit here.

So the kind of high-level advantages and disadvantages of these two are just what I was talking about. We get the wet bulb range of temperatures when we have a water-based heat rejection system. It’s a higher installed cost, typically, because I’ve got to size my cooling towers appropriately to get the sump temperature down to a much lower point than I would have if I just had condenser water coming out of those.

I have a little bit more pump restriction because I’m going through that plate and frame, so some more piping, some more horsepower that I lose for the pumps. On the pre-cooler, you know it’s very simple to install. Typically just a splice into the line, but you know I don’t get advantage of that wet bulb temperature.

How much I get to use this, as I mentioned before, is just largely dependent on what my secondary water temperature is. And just like, you know, the ability to get air in and out of the room in an Air-Side can be the limiter on existing installations, so can, you know, the water temperature set point that is required on Water-Side.

Not only, you know, the set point, but also the configuration of the equipment. Do I have room for this? Can I afford to shut down my loop? Can I get to things? Oftentimes, both Air-Side and Water-Side are considered when some other change is happening. You know, I’ve got a chiller upgrade coming along or I want to increase the capacity of my floor. I can take advantage of that time when I’m going to have to be doing work anyway to minimize the impact, and therefore the expense, of getting that economization installed.

Again, better air flow control helps. It allows us to get the set point up higher. Any time we get the set point up higher, I get to run all my refrigeration at a more efficient point when I’m on refrigeration, and it gives me the capability of running more hours on the economizer when I am on the economizer. And oftentimes that can be a big enough swing that it’s the deciding factor for being able to justify installation or not.

So, cost benefit. Gee, a lot of questions about cost benefits. Does this make sense for me? I can’t tell you sitting here whether it makes sense for you or not. It takes a significant amount of calculation to figure this out to the level of accuracy needed to make a financial decision of the magnitude we’re typically talking about.

Now, your situation varies widely, and new construction, as I say, or opportunistic times are the best for ROI. The bullet there about some CRACs having louver control: gee, if you have outside walls on your data center and you happen to have CRACs that have the capability of having a little add-on unit put on top, and all you have to do is cut a hole in the wall, your install cost may be very little. It might be quite easy for you to do.

You know, take that versus the case of the person who has a data center buried in the bowels of a building someplace with no access to outside air. The only way you get heat out of that place is through refrigerant lines. It’s going to be pretty tough to do economization in a case like that.

Some of the things on here we’ve already talked about; the containment and the monitoring help us with the ability to do economization. I didn’t talk about variable frequency drives, or even ECMs, but the ability to vary air flow is another huge advantage to being able to employ pre-cooling or economization. With the ability to vary air flow I get to maximize the delta-T across my cooling equipment.

And a lot of manufacturers out there, and I shouldn’t pick on manufacturers, a lot of sales guys out there make claims about being able to vary the delta-T with strategies like containment. Containment does nothing to vary delta-T. Delta-T is absolutely fixed by the amount of air that comes out of that cooling unit and the amount of load that’s in the room. So, I have to vary one of those two things to vary the delta-T that is seen by my cooling equipment.

I either change the amount of air that I’m supplying, more air, less delta-T, less air, more delta-T, or I change the heat load in the room. Even fans ramping up and down in the IT equipment in the room, other than the heat load they add, don’t vary the delta-T back at my cooling equipment. They do locally: at my rack, if I were to measure front to back, I’d see a difference as the fans cycle, but I don’t see that reflected back at the cooling equipment at all.
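
The relationship Scot is describing is just the sensible-heat balance: delta-T is set by room load and total supplied airflow, nothing else. A small sketch, assuming the common rule of thumb that Q in BTU/hr is roughly 1.08 times CFM times delta-T in degrees F, makes the point; the numbers are illustrative.

```python
# A hedged sketch: the delta-T seen at the cooling units is fixed by room heat
# load and total supplied airflow. Uses the common sensible-heat rule of thumb
# Q[BTU/hr] ~= 1.08 * CFM * dT[F]. Example numbers are made up.

def cooling_delta_t_f(load_kw: float, supply_cfm: float) -> float:
    btu_per_hr = load_kw * 3412.0
    return btu_per_hr / (1.08 * supply_cfm)

# 250 kW room load with 100,000 CFM of supply air -> about 7.9 F delta-T;
# halve the airflow and the delta-T doubles, regardless of containment.
print(round(cooling_delta_t_f(250.0, 100_000), 1))
print(round(cooling_delta_t_f(250.0, 50_000), 1))
```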

So, I happened to have the opportunity to visit a site in California just last week where they’re considering Air-Side economization. It’s in Los Angeles, I’ve got the zip code listed up there, and the zip code is a necessary part of using that GreenGrid tool I talked about before that lets you do calculations of the expected hours to run free cooling.

In this case, they had about a 1/4 megawatt load. It’s a chilled water plant and I estimated a 2.0 COP chiller; COP is coefficient of performance. A coefficient of performance is just what it says, a unitless coefficient that’s the ratio of the amount of heat that I’m rejecting to the amount of power that I have to put into that compressor.

So, you know, I can express heat in kilowatts, just like I can in any other unit, tons or BTU per hour, whatever the case may be. I like kilowatts because then I can measure the kilowatts of the chiller directly. It’s a very simple calculation. So, average COP; probably their chillers weren’t quite this good, but that’s kind of a nice average number.

You know, we want 72 degree or cooler air going into our equipment, and so if we were able to use 72 degree air outside and were able to take advantage of that entire 20% to 80% allowed humidity range from ASHRAE in the particular location that we’re at, it turns out that we have 8228 hours of free cooling available, and that’s 100% free cooling.

So, as I said before with that mixing action in the louvers above the air handlers, if you remember that picture, we can take advantage of partial free cooling, not running at 100% but just offloading the compressor, but I was so close to 100% here that it didn’t make much sense to look at the other 6% of the time.

So I just said, “All right, if I’m going to run, you know, 94% of the time on free cooling and I’ve got a 1/4 megawatt load with a 2.0 COP chiller, I’m going to save about $103,000 based on 10 cent a kilowatt hour power.” Now, it’s going to cost me some, because I’ve got to put in some ducting and I’ve got to put in some fans.

And so, I just did a quick back-of-the-envelope calculation for the amount of ducting it would take and the losses that would be there, and the intake and exhaust fans; it’s about $18,000 to operate those fans during that 94% of the time, and that includes some additional filtering on the outside air. But, you know, in the end I end up with about $85,000 a year in savings.
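
For readers who want to reproduce the arithmetic, here is a back-of-the-envelope sketch under the figures stated above (250 kW load, COP 2.0, 8,228 free-cooling hours, $0.10/kWh, roughly $18,000 of added fan and filtering cost). The function name is made up for illustration; the $18,000 fan figure is taken from the talk, not derived.

```python
# A back-of-the-envelope sketch of the savings arithmetic above. COP is heat
# rejected divided by compressor power, so compressor power avoided while
# economizing is load / COP.

def annual_free_cooling_savings(load_kw: float,
                                chiller_cop: float,
                                free_hours: float,
                                rate_per_kwh: float,
                                added_fan_cost: float = 0.0) -> float:
    compressor_kw = load_kw / chiller_cop            # power avoided while economizing
    gross = compressor_kw * free_hours * rate_per_kwh
    return gross - added_fan_cost

# 250 kW load, COP 2.0, 8228 free-cooling hours, $0.10/kWh, ~$18k of fan energy:
print(annual_free_cooling_savings(250.0, 2.0, 8228, 0.10, 18_000))  # ~ $85,000
```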

Now, the expense to put this system in, to run the ducts and cut the holes in the wall and install the fans, as well as purchase the equipment necessary, is a pretty wide range, right? I didn’t do a thorough examination of the pathways.

I don’t know what’s in the road. You know, the distance we have to run isn’t that great, but we do have to run through some (unintelligible) space. So, there could be existing things that have to be moved, if it’s even possible to do. What if we encounter asbestos? Lots of different factors that could drive the price over a very wide range.

So, at $85,000 in savings and, let’s just use the low end of our installation estimate, $250,000, is that a good investment? If I use a very simple break-even time kind of calculation, I may not want to make this investment. But I would counter that that is a poor methodology for making decisions on changes to infrastructure for the building.

The building’s lifetime is very long. This is not an IT equipment kind of decision where I’m going to have a rollover every two or three years, or whatever my rollover cycle is. I’m going to live with this for a significant period of time, oftentimes ten years or more. So, I need to look at more sophisticated analysis techniques like net present value of money, for instance, where I have a choice.

I’ve got my $250,000; I can invest it in this, or I can invest it in the stock market, or whatever the case may be. What’s the best place to put my money? Using those techniques, oftentimes a two or three-year payback is quite attractive. And in this case we’re right at a three-year payback if the construction happens to be on that lower end. So yes, I believe that this would be a good investment, particularly if the increasing cost of energy is considered.
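
A hedged sketch of the two ways of looking at the same numbers, simple payback versus net present value over the life of the infrastructure; the 7% discount rate and 10-year horizon are illustrative assumptions, not figures from the webinar.

```python
# Simple payback versus a net-present-value view of the same retrofit.
# Discount rate and horizon are illustrative assumptions.

def simple_payback_years(install_cost: float, annual_savings: float) -> float:
    return install_cost / annual_savings

def net_present_value(install_cost: float, annual_savings: float,
                      years: int = 10, discount_rate: float = 0.07) -> float:
    pv = sum(annual_savings / (1 + discount_rate) ** y for y in range(1, years + 1))
    return pv - install_cost

print(round(simple_payback_years(250_000, 85_000), 1))   # ~2.9 years
print(round(net_present_value(250_000, 85_000)))         # well positive over 10 years
```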

So it’d be nice if that were the case, but that’s not the case. The case really is, as I have it here, that 72 degree air is all we need to cool our data center. That’s what the operator’s comfortable with to ride through an event. And unfortunately, to get 72 degree air everywhere at the inlets of that data center I have to supply about 52 degree air. And the reason for that is there are poor air flow practices in the data center as it exists today.

There are lots of missing blanking panels, lots of gaps in rows, and I measured a 20 degree delta in air from top to bottom on many, many racks. Probably there are some racks that are greater than this, so this may even be optimistic at the point that I’ve got it. On the other hand, I probably can run higher than 72 degrees quite successfully and make it through a power failure.

But nonetheless, if I look at the need for 52 degree air to be pulled in to run on economization, it changes the number of hours I can run significantly. Now, I’m down around 8%, and I use the same argument, “Can I get some partial cooling?” Yes, I can, but I can’t get a lot, so I didn’t take into account the partial offloading either. I said, “All right, 8% at 100% free cooling, and I’m only saving $8500 a year.” I don’t care what kind of analysis you use now; there is no way that a retrofit like this makes sense.

So, the point that I’m trying to get at here is that to be able to take advantage of free cooling, either Water-Side or Air-Side, you have to get best practices installed in your data center as it stands today. And it makes sense to do that whether or not economization makes sense in the end; being able to minimize the energy that you use with the current cooling infrastructure is the right thing to do. So, how do I do that?

The way I save money in a data center is two ways, and two ways only. I save money on air flow, and I do that by reducing the amount of air that I deliver, or I save money on the refrigeration cycle, and I do that by making my equipment run at a more efficient point. And a more efficient point for a refrigeration cycle is increasing the temperature, the average temperature at those evaporator coils.

So, in a data center with CRACs and fixed-speed fans, I don’t have any choice about the air flow. I have to run those fans at full speed all the time. That means my only choice for saving energy, or my only opportunity for saving energy, is raising the set point in the data center and making both the air going into the machines and coming out of the machines go up in temperature, so that I now run the compressors in my CRACs at a more efficient point.

The way I get there is the way that ASHRAE and others have been touting for many, many years: I use blanking panels. I seal up all my gaps. I don’t waste air anywhere. And I would add to that that containment is a very, very useful tool in being able to take advantage of the increased efficiency that I get from higher temperatures.

And what containment does is kind of remove a lot of the air flow concerns from the equation. In a typical open, non-contained data center I want to see between 15% and 20% excess air flow to account for things like air sneaking around the ends of rows and air coming back over the tops of cabinets and propagating back to the CRACs out of the exhaust, so on and so forth.

If I contain that, and it doesn’t matter hot aisle or cold aisle from an even temperature point of view, I kind of remove that from the equation. I mean, there are some other considerations that I have to maintain adequate air flow, so on and so forth, but those are really no worse with containment than without.

The other biggie, in my opinion, is monitoring and measuring. I would really like to know what the worst spot in my data center is at all times. And the only way I’m going to know that is to measure it. And I might not be able to measure the precisely worst spot, but I can measure very, very nearby. You know, monitoring systems have advanced tremendously in recent years, and pervasive monitoring of every other or every third rack, depending on the density of equipment that I have in the data center, is very, very prudent.

It allows me, you know, to tune the data center, make the corrections that I need with the blanking panels and the floor tile placement and the containment, driven by actual data. I can see the changes I make in real time. And it also allows me to use the information I’m getting back from that monitoring system in a day-to-day scheme for controlling the temperature on my cooling equipment.

You know, in most data centers where we have return temperature control, the practice is typically, I’m just going to go around and set those all to the same temperature. That rarely results in the same temperature of air being delivered, so pay attention to that. You don’t have to have the same setting on the return point of every single air handler.

What temperature that’s going to be depends on what load each unit sees. So, adjust those individually and take advantage of the fact that I only have to supply a certain temperature of air to the IT equipment. I don’t need to go below that point.

Now having said that, the temperature that is going to result in the maximum efficiency is going to be the temperature that results in the minimum energy use for the entire data center, not just the cooling equipment. The fans in the IT equipment are going to start ramping up at some point. Be cognizant of that; part of your measuring scheme should be total energy consumed, so both IT equipment and cooling equipment.

And is it significant? You bet it’s significant. I ran an experiment before I left my previous position with a unit in an environmental chamber where I measured the total energy consumed by the cooling and the unit itself. And it turned out that I saw more than a 1-1/2 kilowatt swing out of about 8 kilowatts just due to the fan energy in that unit as the fans ramped up and down, and it swamped the savings at about 77 degrees in that chamber.

So, the set point that you use is not necessarily going to be the hottest, and it’s going to be different whether you’re on economizers or not. The only way you’re going to be able to tell what the proper set point is to achieve minimum energy is to measure it. Measure the parameters that you’d like to control.
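
In practice that search can be as simple as logging total facility power at a few candidate set points and picking the minimum. The sketch below uses made-up numbers purely to illustrate the shape of the curve Scot describes, where IT fan ramp-up eventually outweighs compressor savings.

```python
# A minimal sketch of the measurement-driven approach: log total power
# (IT plus cooling) at each candidate supply set point and choose the minimum.
# The sample numbers below are invented for illustration only.

measured_total_kw = {           # set point (F) -> measured facility total (kW)
    70: 318.0,
    72: 314.5,
    75: 312.0,
    77: 311.5,
    80: 315.0,                  # IT fans ramping up outweigh compressor savings
}

best_setpoint = min(measured_total_kw, key=measured_total_kw.get)
print(best_setpoint, measured_total_kw[best_setpoint])   # 77 311.5
```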

All right, I think we have time for some questions if we have any.

Tanisha White: Yes, thank you, Scot. We do have some time for questions. The first question: you’ve talked about containment, but the first few slides showed pictures of economization with cold aisle containment. And (Juan Carlos) asks, “Can this work with hot aisle containment instead of cold aisle containment?”

Scot Heath: Sure. You know, hot aisle containment is actually, in my opinion, a slightly better scheme just because it gives me a little bit more margin. I have a larger mass of cool air to work with in case I have an event, and it also keeps the rest of the room cool.

Now, anytime I have hot aisle containment that means I’ve got to have some scheme, either an over-ceiling plenum or ducting, to get that air back to my air handling units. So, the way I implement economization with hot aisle containment is to tap into that return methodology. The data center I just mentioned previously, the one that Hewlett Packard did, in fact has hot aisle containment, an over-ceiling return plenum, and Air-Side economization in part of that data center.

And the way that’s effected is the CRACs are ducted up to the fans, which bring air in just like they did on that slide that I showed, and that over-ceiling plenum is then brought up to that set of louvers, just as in the picture. The picture I showed didn’t have an over-ceiling plenum, but imagine there was one there and we had the hot aisle containment going into it; it works just fine.

Tanisha White: All right. Thank you. Another question, both from (Brett) and (Dan), they have questions about whether they can use Air-Side economization in a specific geographic area. These are both areas of higher humidity. Any comments you would like to share?

Scot Heath: Well, sure. You know, the key is the control of the air that you’re bringing in. So again, what level of humidity are you willing to allow in your data center? I maintain that as long as you’re not condensing you should run at least as wide as the ASHRAE allowed range, which is up to 80% relative humidity.

Now, that 80% relative humidity is at the inlet to the IT equipment. So, it’s not where it’s typically measured, right? If I go into a data center today, 90% of them have the sensor for both temperature and humidity at the return point to the CRAC. If I’m running 80% relative humidity at the return point to my CRAC and I take that air and cool it down from, let’s just say, 90 to 72 degrees, I’m probably saturated. I’m probably at 100% relative humidity.

So, I can’t do that. I have to be measuring at the proper place, and that is the air that’s going to go into the equipment itself. So, I measure and convert to dew point, right? Let’s say, just for the sake of argument, my set point is 77 degrees; 80% relative humidity at 77 degrees is a dew point of something like 69 degrees. I install a dew point monitoring device in the outside air and I make sure that I never bring in air more moist than whatever dew point I calculate for the inlet temperature that I need to satisfy my data center.
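
For reference, the temperature and relative humidity to dew point conversion Scot describes can be approximated with the Magnus formula. This is a minimal sketch with standard coefficients; the exact dew point ceiling you enforce should come from your own inlet limits.

```python
# A minimal sketch of the conversion described above, using the Magnus
# approximation: given the inlet temperature limit and the maximum allowed
# relative humidity at that temperature, compute the outside-air dew point
# ceiling the economizer controls should enforce. Treat the result as approximate.

import math

def dew_point_f(temp_f: float, rh_percent: float) -> float:
    temp_c = (temp_f - 32.0) / 1.8
    a, b = 17.62, 243.12
    gamma = math.log(rh_percent / 100.0) + (a * temp_c) / (b + temp_c)
    dew_c = (b * gamma) / (a - gamma)
    return dew_c * 1.8 + 32.0

# 80% RH at a 77 F inlet corresponds to roughly a 69-70 F dew point, so outside
# air with a higher dew point than that should keep the economizer closed.
print(round(dew_point_f(77.0, 80.0), 1))
```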

So, yes, you can take advantage of it. Are you going to be shut off sometimes because the humidity gets too high? Absolutely. You know, the control schemes, the last one I worked on, you know, sensed both low and high humidity and we got shut off all the time on the low side because we’re in Colorado. You’ll get shut off on the high side as well. You just have to pay attention to those things.

Tanisha White: Okay, great. Another question here is, “Why does indirect mode provide higher operating costs?”

Scot Heath: It’s because of the fan energy necessary to force the air through that heat exchanger. So, assuming that my ducting load is the same, getting the air in from the outside and from the inside back to the outside, if I introduce the heat exchanger into that loop I have to push the air at least as far, plus overcome the static pressure of the heat exchanger itself.

Tanisha White: Okay, great. And we have time for one more question and I think it is, “How do you deal with durations of time when the plant will cycle in and out of economizer mode due to ambient conditions near switchover set point range?”

Scot Heath: So, you know, hysteresis is the most classical way to deal with any control problem that has, you know, a slow ramp. There can be big swings in the weather, and big swings in the weather can certainly cause me to go in and out of economization when I don’t necessarily want to do that, so perhaps both a time and hysteresis factor.

So hysteresis, of course, is, you know, if I require the temperature of the air outside to be, let’s just say, at least 75 degrees or cooler before I enable economization, I may stay on economization until the temperature outside reaches at least 77 degrees. And that prevents the small oscillations around that point that I’m interested in from causing me to go in and out and in and out and in and out.

So, the amount of hysteresis I want to introduce is site-specific. It depends on what kind of extremes you see, and also time, right? We already have time delays built into things like the compressors on the CRACs, so if I shut a compressor off I can’t start it right away; things like that.

So, the same thing can be accomplished with the control system on the economizer itself. If I’ve gone out of economization mode, I may not want to enable going back into economization mode for five or ten minutes, or some period of time.
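
Putting those two ideas together, hysteresis plus a minimum lockout time, might look something like the following sketch; the 75 and 77 degree thresholds echo the example above, while the 10-minute lockout and the class name are illustrative assumptions.

```python
# A hedged sketch of the switchover logic described above: hysteresis on the
# outside-air temperature plus a minimum lockout time before re-entering
# economizer mode. Thresholds and the 10-minute lockout are illustrative.

import time

class EconomizerSwitch:
    def __init__(self, enable_below_f=75.0, disable_above_f=77.0, min_off_seconds=600):
        self.enable_below_f = enable_below_f      # enter economizer mode below this
        self.disable_above_f = disable_above_f    # leave economizer mode above this
        self.min_off_seconds = min_off_seconds    # lockout after leaving
        self.economizing = False
        self._left_at = float("-inf")

    def update(self, outside_f, now=None):
        """Update state from the current outside temperature; True while economizing."""
        now = time.time() if now is None else now
        if self.economizing:
            if outside_f > self.disable_above_f:
                self.economizing = False
                self._left_at = now
        else:
            locked_out = (now - self._left_at) < self.min_off_seconds
            if outside_f < self.enable_below_f and not locked_out:
                self.economizing = True
        return self.economizing

# Small swings between 75 F and 77 F no longer toggle the plant in and out of
# economizer mode, and once we leave we stay off for at least ten minutes.
```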

Tanisha White: Great. Well, thank you, Scot. It looks like we’re out of time, but I want to thank you for a great presentation on Free Cooling and Economization and answering some of our questions.

I want to remind our participants that this webinar will be available within 48 hours on our Web site at www.42U.com.

If you feel like your questions were not addressed today, I would like to invite you to call us at 1-800-638-2638 or send your questions via our Project Evaluation form on our Web site.

And last but not least, I would like to thank everyone for your participation. Have a great day.