Data Centers of the Future - Part II

This is Part 2 of a four-part series on data centers of the future. Part 2 discusses the impact of high-density loads on data centers being designed today.

I will make this prediction with certainty: long term, the impact of high-density loads on data centers will be huge. With loads for fully configured racks reaching 30 kW or more, getting sufficient power and cooling to the rack is a significant problem. Currently, there are few if any examples of high-density data centers that are fully populated. However, if data centers ever become fully populated with nothing but blade and 1U servers, I believe that 95% or more of the data centers in existence today will have to be redesigned. Here are some of the challenges inherent in designing a data center that can support these types of loads.

Power

At loads that vary from 12 to 30 kW per rack, getting the correct number and size of circuits to each rack is a challenge. A fairly common configuration for blade servers is to supply 8 × 50 amp, 3-phase power feeds (A and B) to a single rack. This equates to 20 to 24 kW to each side. To put this in perspective, the average electrical feed to a rack 5 years ago was 1 × 20 amp circuit. The average rack load at that time was 1 kW, single feed. We are now at 20 times that design load, and dual feed! Below is a chart showing power requirements from some of the manufacturers:
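To translate those circuit ratings into kW at the rack, here is a minimal sketch of the standard three-phase power arithmetic. The 208 V service voltage, 80% continuous-load derating, and unity power factor are my illustrative assumptions, not figures from the chart above.

```python
import math

# Sketch: usable kW from a single rack power feed.
# Assumptions (illustrative): 208 V three-phase service, NEC-style 80%
# continuous-load derating, power factor of 1.0.
def feed_kw(amps: float, volts: float = 208.0, three_phase: bool = True,
            derate: float = 0.8, power_factor: float = 1.0) -> float:
    va = amps * volts * (math.sqrt(3) if three_phase else 1.0)
    return va * derate * power_factor / 1000.0

print(f"{feed_kw(50):.1f} kW")  # one 50 A, 3-phase, 208 V feed -> ~14.4 kW
print(f"{feed_kw(20, volts=120, three_phase=False):.1f} kW")  # legacy 20 A, 120 V circuit -> ~1.9 kW
```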

These loads far exceed the capacity of most data centers if large quantities of high-density racks are ever installed in them. What has saved most data centers thus far from having to address this issue is that they still have a lot of lower-power legacy equipment. As that equipment gets replaced, an awful lot of data centers are going to realize that they don't have sufficient power in place to support these loads. A fully loaded high-density data center could in theory exceed 500 watts per square foot!
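As a rough sanity check on that 500-watt figure, here is a sketch of the arithmetic. The rack load and the ~50 square feet of gross floor per rack (the rack footprint plus its share of aisles and clearances) are my illustrative assumptions, not figures from the article.

```python
# Rough check of the 500 W per square foot ceiling.
rack_kw = 25.0             # fully configured high-density rack (assumed)
gross_sqft_per_rack = 50   # rack footprint + aisle share (assumed)

watts_per_sqft = rack_kw * 1000 / gross_sqft_per_rack
print(f"{watts_per_sqft:.0f} W per square foot")   # -> 500 W per square foot
```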

So what are the impacts of these loads? Here is a chart showing the utility power requirements of a large 50,000 square foot data center at various design loads. This chart assumes a 1-to-1 ratio between power and cooling; that is, every watt of IT load requires roughly another watt to cool and support it. Depending upon your mechanical design, the actual ratio could be significantly better or significantly worse.
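The arithmetic behind that chart is straightforward; here is a minimal sketch using the 1-to-1 power-to-cooling assumption. The specific density steps are my own choices for illustration.

```python
AREA_SQFT = 50_000

# Total utility demand = IT load + an equal cooling/support load (1:1).
for w_per_sqft in (50, 100, 250, 500):
    it_mw = AREA_SQFT * w_per_sqft / 1e6
    total_mw = it_mw * 2
    print(f"{w_per_sqft:>3} W/sq ft -> {it_mw:5.1f} MW IT, {total_mw:5.1f} MW total")
# At 500 W/sq ft the total reaches 50 MW -- the figure discussed next.
```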

While I can't speak for utilities nationwide, I can tell you that no utility I've ever worked with has a spare 50 megawatts sitting around that they can give you without years of advance planning and construction. For many utilities, anything beyond 4 to 5 megawatts would be problematic. At these densities, co-generation starts to become a very feasible alternative.

Utility Costs

Assuming that you could get this level of power from the utility, what would be the cost? The chart below shows the monthly utility cost at various densities for a 50,000 square foot data center.

At these levels of cost, energy efficiency becomes critical, and life cycle costing of equipment starts to make sense. A difference of 2 to 3% in energy efficiency can mean hundreds of thousands of dollars in electrical utility costs saved per year.
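To make those savings concrete, here is a sketch of the monthly cost calculation and what a 2% efficiency gain is worth. The $0.10/kWh rate is my placeholder assumption; actual tariffs vary widely by region and demand profile.

```python
RATE_PER_KWH = 0.10      # $/kWh (assumed placeholder)
AREA_SQFT = 50_000
HOURS_PER_MONTH = 730    # average hours in a month

for w_per_sqft in (50, 100, 250, 500):
    total_kw = AREA_SQFT * w_per_sqft / 1000 * 2   # 1:1 power-to-cooling
    monthly = total_kw * HOURS_PER_MONTH * RATE_PER_KWH
    savings_2pct = monthly * 12 * 0.02
    print(f"{w_per_sqft:>3} W/sq ft: ${monthly:>12,.0f}/month, "
          f"2% efficiency ~ ${savings_2pct:,.0f}/year")
```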

Cooling

"Data centers that have infrastructure designs based on providing vertical streams of cooling air through plenum floors and servers that require cooling air to be distributed horizontally are at cross purposes."

Once you've "solved" the challenge of getting sufficient power to a high-density data center, it's time to address cooling. There are really three issues with cooling high-density loads: having sufficient capacity in your overall mechanical systems to support the increased densities, delivering sufficient airflow to the front of the racks, and removing the heat being discharged into the hot aisle.

Here is a drawing showing the number of CRAC units required for a 10,000 square foot data center at 500 watts per square foot. As you can see, the capacity required is staggering. For a 50,000 square foot data center you could potentially have as many as 300 × 30-ton CRAC units!
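The unit count follows from simple arithmetic; here is a sketch. The 3.517 kW per ton of refrigeration conversion is standard, but the 25% redundancy allowance is my assumption.

```python
import math

KW_PER_TON = 3.517   # kW of heat removed per ton of refrigeration

def crac_units(area_sqft: int, w_per_sqft: float,
               unit_tons: float = 30, redundancy: float = 1.25) -> int:
    load_tons = area_sqft * w_per_sqft / 1000 / KW_PER_TON
    return math.ceil(load_tons * redundancy / unit_tons)

print(crac_units(10_000, 500))   # ~60 x 30-ton units for 10,000 sq ft
print(crac_units(50_000, 500))   # ~297 -- close to the 300 cited above
```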

The second big challenge is having enough airflow across the front of the servers for the loads. A server requires roughly 160 cfm of cooling airflow per kW of load, so a 24 kW rack of servers requires approximately 3,840 cfm of airflow across the front of the rack to cool the load. The typical perforated tile is capable of providing around 300 cfm to the bottom of the rack, so properly cooling a rack of servers at these densities would require 13 perforated tiles.
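Those airflow figures follow from a common rule of thumb, cfm ≈ 3.16 × watts / ΔT(°F); the 20°F temperature rise across the servers is my assumed value, not a figure from the article.

```python
# Airflow rule of thumb: cfm ~ 3.16 * watts / delta_T (deg F).
# The 20 F temperature rise across the servers is an assumed value.
def airflow_cfm(watts: float, delta_t_f: float = 20.0) -> float:
    return 3.16 * watts / delta_t_f

rack_cfm = airflow_cfm(24_000)   # ~3,790 cfm for a 24 kW rack
tiles = rack_cfm / 300           # ~300 cfm per perforated tile
print(f"{rack_cfm:,.0f} cfm -> about {tiles:.0f} perforated tiles")
```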

*Graphic and quotes provided courtesy of APC

The final challenge is getting rid of the hot air being discharged from the servers. This hot air needs to be returned to the computer room air conditioning (CRAC) units without being allowed to mix with the cool air being supplied to the intake side of the cabinet. This short-circuiting of hot return air into the cold supply air is one of the biggest problems faced by high-density data centers.

Net Usable Square Footage

So assuming you get past the power and cooling problems, what is the impact of high-density loads on your net usable square footage? Here is a graphic showing the impact of different densities on the net usable square footage. Of course, every project is unique and would have slightly different square footages, but as this graphic demonstrates, you soon have much more space devoted to support equipment than to servers; see the sketch below.
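To illustrate the trend with made-up but plausible numbers (since every project differs), here is a sketch. The 4 square feet of support space per kW of IT load is my assumed planning factor covering UPS, batteries, switchgear, and the cooling plant; it is not a figure from the graphic.

```python
SUPPORT_SQFT_PER_KW = 4.0   # UPS, batteries, switchgear, cooling plant (assumed)
SERVER_FLOOR_SQFT = 50_000

for w_per_sqft in (50, 100, 250, 500):
    it_kw = SERVER_FLOOR_SQFT * w_per_sqft / 1000
    support_sqft = it_kw * SUPPORT_SQFT_PER_KW
    print(f"{w_per_sqft:>3} W/sq ft: ~{support_sqft:>7,.0f} sq ft of support space "
          f"({support_sqft / SERVER_FLOOR_SQFT:.1f}x the server floor)")
```

At the assumed planning factor, support space overtakes the server floor somewhere above 250 W per square foot, which is the crossover the graphic is meant to show.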

Stay tuned for Part 3, where we will discuss practical solutions to some of these challenges.

Ron Hughes, President of California Data Center Design Group, has been involved in the design, construction, and operation of data centers for over 25 years. In the last six years alone, he has managed the design of over 1,600,000 square feet of data center space in the US, Europe, Asia, the Middle East, and Mexico.
