Saving Energy in an Existing Data Center

April 1996, Power Quality Assurance Magazine

The focus in data centers has always been on reliability, not energy efficiency. As I've often said when discussing this issue, "nobody ever got fired because the utility bill was too high, but somebody always gets fired when there is an outage at a data center!" Is it any wonder that we spend thousands of hours trying to eliminate the single points of failure that could cause a data center outage, yet only a fraction of that time thinking about the energy efficiency of those highly reliable systems we design?

The one constant in this industry, however, has been change. Now that profit margins are much smaller (some would say more realistic) than in the heady days of collocation data centers leasing space for $100 per square foot and up, energy costs in these data centers are finally getting attention. Energy consumption in a large data center can run into the millions of dollars per year, so reducing energy usage by 20% can produce dramatic savings over a period of years. One of the data centers I am currently working with spends an average of $100,000 per month on its utility bills. A 20% reduction in energy usage would save it $240,000 over the course of a year.
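The savings arithmetic above is straightforward; a minimal sketch, using the $100,000-per-month bill and 20% reduction figures from the article:

```python
# Annual savings from a percentage reduction in energy usage.
# The monthly bill and reduction figures come from the article.
monthly_bill = 100_000  # dollars per month
reduction = 0.20        # 20% reduction in energy usage

annual_bill = monthly_bill * 12
annual_savings = annual_bill * reduction
print(f"Annual savings: ${annual_savings:,.0f}")  # → Annual savings: $240,000
```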

Energy usage can be reduced in almost any data center without a major redesign of the infrastructure. This article focuses on things you can do to reduce energy consumption in an existing data center without redesigning or modifying mechanical or electrical systems. There are three main areas in a data center to focus on in an effort to reduce energy consumption: (1) the HVAC or cooling systems, (2) the UPS loads and how they are distributed, and (3) the lighting systems. Changes can often be made to the way these systems are operated that will reduce energy usage without negatively impacting reliability or redundancy.

Cooling systems in a data center can be as simple as direct expansion computer room air conditioning (CRAC) units with roof-mounted condensers or as complicated as chilled water systems with primary and secondary pumping loops. The technology used doesn't really matter, since for purposes of this discussion, it's already installed. What is important is how it is being utilized and whether it is operating as efficiently as possible.

One of the first steps in evaluating the efficiency of the cooling system is to look at the design parameters, if available. The design documents will tell you the cooling capacity of the equipment installed as well as the number of redundant units. Many data centers run all of their CRAC units all the time, even if 25 percent of them were designed to be redundant. This usually leaves all of the units operating lightly loaded, which is their least efficient operating mode. Often the single biggest energy reduction a data center can achieve is to shut down the redundant CRAC units.

Another area related to cooling systems that can be a big user of energy is the humidification/dehumidification process. I have been in dozens of data centers where one CRAC unit is putting moisture into the air (humidifying) while another unit 30 feet away is trying to wring the moisture out of the air (dehumidifying). I've also seen data centers where the amount of unconditioned makeup air (outside air) being introduced into the raised floor area far exceeded what was required for air changes by code, resulting in problems with humidity control. Setting up the controls properly so that CRAC units do not fight each other, and limiting the amount of outside air introduced into the raised floor area, can result in significant reductions in energy usage.

If your data center uses a chilled water system with multiple chillers, take a look at the loading of the chillers. The efficiency of chillers goes up dramatically as they are loaded closer to their capacity. Having two chillers running at 30% of capacity uses significantly more energy than having one chiller running at 60% of capacity. The same rules also apply to the cooling towers or whatever method you are using for heat rejection.
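A rough sketch of why consolidating chiller load saves energy. The kW-per-ton part-load figures below are illustrative assumptions, not from the article; they simply reflect the rule that chiller efficiency improves as loading approaches capacity:

```python
# Illustrative only: the kW-per-ton part-load efficiency figures are
# assumed for this sketch, not measured values from the article.
chiller_capacity_tons = 500.0
kw_per_ton = {0.30: 0.95, 0.60: 0.70}  # assumed efficiency at 30% and 60% load

# Same total cooling load (300 tons) served two ways:

# Two chillers each carrying 30% of their capacity
two_chillers_kw = 2 * (0.30 * chiller_capacity_tons) * kw_per_ton[0.30]

# One chiller carrying 60% of its capacity
one_chiller_kw = (0.60 * chiller_capacity_tons) * kw_per_ton[0.60]

print(f"Two chillers @ 30%: {two_chillers_kw:.0f} kW")  # → 285 kW
print(f"One chiller  @ 60%: {one_chiller_kw:.0f} kW")   # → 210 kW
```

Even with modest assumed part-load penalties, the consolidated chiller delivers the same cooling for noticeably less power.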

Temperature settings in the data center and the chiller plant can also be an area of potential savings. Many data centers have their CRAC units set at 70 degrees, which is much cooler than is required for the operation of the data processing equipment. Raising the temperature in the data center to 72 to 74 degrees will result in energy savings while having no impact on the operation of the data center. An additional savings can be achieved by raising the chilled water temperature of the chiller plant by a couple of degrees. Some studies have shown a 3 to 5% savings in energy usage from taking these steps.

One of the biggest impediments to energy efficiency is the oversizing of the infrastructure. Most data centers have much more capacity installed than they are actually using. This results in less efficient operating modes for everything from UPS modules to chillers, cooling towers, and CRAC units. While the purpose of this article is not to focus on design changes, taking a more modular approach to future changes in the infrastructure can yield big savings. Installing a diversity of module sizes can allow you to operate the infrastructure equipment at loads closer to their most efficient operating modes.

While UPS modules have improved their energy efficiency at light loads, the most efficient operating modes are still at 80% of capacity or more. Depending on the design of your electrical distribution system, it may be possible to shut down a module or shift the load to fewer modules. A UPS system from a major manufacturer is rated at 89% efficient at a 20% load, but 96% efficient at an 80% load. With the efficiency of UPS systems varying by 6 to 8% depending on the load, having fewer modules operating can result in significant energy savings.
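The effect of consolidating UPS load can be sketched with the 89%-at-20%-load and 96%-at-80%-load efficiency figures cited above. The 400 kW module rating and 320 kW total load are assumptions chosen to make the load percentages work out:

```python
# The 89% and 96% efficiency figures come from the article; the module
# rating and total load are assumed for illustration.
def input_power_kw(load_kw, efficiency):
    """Power drawn from the utility to deliver load_kw through the UPS."""
    return load_kw / efficiency

module_rating_kw = 400.0
total_load_kw = 320.0  # same IT load in both scenarios

# Scenario 1: load spread across four modules, each at 20% load (89% efficient)
spread_kw = 4 * input_power_kw(total_load_kw / 4, 0.89)

# Scenario 2: load consolidated on one module at 80% load (96% efficient)
consolidated_kw = input_power_kw(total_load_kw, 0.96)

print(f"Losses, spread:       {spread_kw - total_load_kw:.1f} kW")        # → 39.6 kW
print(f"Losses, consolidated: {consolidated_kw - total_load_kw:.1f} kW")  # → 13.3 kW
```

Every kilowatt of UPS loss is paid for twice in a data center, once at the meter and again as heat the cooling system must remove, so cutting losses from roughly 40 kW to 13 kW compounds the savings.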

How many times have you walked through a data center with nobody on the raised floor, but every light on in the area? I've seen this more times than I can count. Having a building management system (BMS) control the lighting, or installing occupancy sensors that turn the lights on and off as people enter and leave the raised floor area, can make a dramatic difference in energy usage. If people are on the raised floor constantly during working hours, the occupancy sensors can be disabled between 8:00 a.m. and 5:00 p.m. Short of installing more energy efficient lighting, the single most effective thing you can do with lighting is to turn it off when it's not in use.

Reducing energy usage in an existing data center can often be achieved without spending tens of thousands of dollars modifying systems or making changes that impact the reliability of the data center. Given the higher power requirements of much of the new equipment currently being installed in data centers, the savings could be dramatic. Proactively reviewing the operation and loading of your infrastructure support systems and managing the lighting in your facility are just a few of the many steps you can take to reduce energy usage. While reliability may always be the primary focus in data centers, energy efficiency needs to become more of a consideration.
