The 100W/ft2 Data Center Myth
April 1996, Power Quality Assurance Magazine
We're sure you've all heard the predictions: that the shrinking size and growing power requirements of mainframe computer hardware would render the infrastructure of all existing data centers obsolete, and that old data centers designed for 40W/ft2 of electrical load would not be able to accommodate a new generation of data processing hardware pushing the electrical load to 100W/ft2 or greater. These predictions have been around for ten years or more. The truth is, it has not happened yet, and it's questionable whether it will happen in the near future.
If you examine the latest designs of CPUs, DASD and Unix-type processors from the major hardware vendors, most are lower in watts per square foot than their predecessors. This trend has been under way for several years, but it has been accelerated by the introduction of CMOS-type processors, which consume dramatically less power than the old-style mainframes. Combining these new processors with the conversion of many traditionally mainframe applications to Unix-style processors leads to lower watts-per-square-foot requirements than existed several years ago.
DASD has always been an area used to justify the predictions of increasing density and growing watts-per-square-foot requirements. Indeed, it was increasing dramatically five or six years ago. However, with the introduction of IBM's 3990 Model 9s, that trend was reversed. HDS followed suit with its version of the "Fat DASD," which showed similar reductions in watts-per-square-foot requirements. The increasing use of RAID-type DASD has also reduced power consumption per square foot in data centers. In addition, the use of ATLs (Automated Tape Libraries) to store information that previously would have been stored on DASD has contributed to the reduction.
The result of these changes is a stabilization, or even a reduction, in watts-per-square-foot requirements for data centers, particularly those implementing new technology. This could save users millions of dollars in electrical and mechanical infrastructure that does not have to be installed. In most instances, designing the entire raised-floor area for 100W/ft2 no longer makes sense. A better approach might be to design certain areas, such as DASD, server and CPU areas, for somewhat higher watts per square foot, while areas with lower requirements, such as tape libraries, command centers, network and print areas, are designed accordingly. The net result might be an overall average of 50W/ft2, with certain areas at 100 to 120W/ft2.
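To make the zoned-design arithmetic concrete, here is a minimal sketch of the area-weighted average. The zone names, square footages and design densities are assumed illustrative numbers, not figures from the article; the point is only that a few small high-density zones still average out to roughly 50W/ft2 overall.

```python
# Area-weighted average power density for a zoned raised-floor design.
# All zone areas and densities below are assumed example values.

zones = {
    # zone name: (area in ft2, design density in W/ft2)
    "DASD / CPU / server": (10_000, 110),
    "tape library":        (12_000, 25),
    "command center":      (4_000,  20),
    "network / print":     (6_000,  30),
}

total_area = sum(area for area, _ in zones.values())
total_watts = sum(area * wpsf for area, wpsf in zones.values())
average = total_watts / total_area

print(f"overall average: {average:.1f} W/ft2")  # → overall average: 51.9 W/ft2
```

Even with the server zone designed at 110W/ft2, the building-wide electrical plant only has to support about half that density.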
The Uninterruptible Uptime Users Group (UUUG) recently completed a survey of its members' data centers to determine the average watts per square foot. The result was a 34W/ft2 average. We also looked at our clients' data centers; the results were about the same, in the low 30s. This is exactly the opposite of the predictions we had been hearing of 100W/ft2 or even higher, particularly since the latest announcements from the major hardware vendors were also showing a large, one-time reduction in power requirements for CPUs due to the introduction of CMOS technology.
The consequences of overdesigning the electrical infrastructure are not limited to the electrical systems. The mechanical systems have to be designed to accommodate the ultimate electrical loads. This forces mechanical engineers to install greater cooling capacity than is really necessary, increasing the mechanical costs. The result is often chillers running lightly loaded, which is their least efficient mode of operation, and additional air handlers in the raised-floor areas that are not needed and take up valuable floor space. Proper sizing of the mechanical systems is directly tied to the design capacity of the electrical systems.
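The coupling between electrical design capacity and chiller tonnage can be sketched with the standard conversion of 1 ton of refrigeration = 12,000 BTU/hr ≈ 3.517 kW. The raised-floor area and the two design densities below are assumed example values, chosen only to show how directly the cooling plant scales with the electrical design figure.

```python
# How required chiller capacity tracks the electrical design load.
# 1 ton of refrigeration = 12,000 BTU/hr ≈ 3.517 kW of heat rejection.
# The 20,000 ft2 floor and the densities are assumed example values.

KW_PER_TON = 3.517
AREA_FT2 = 20_000

def chiller_tons(watts_per_ft2: float) -> float:
    """Cooling tons needed to reject the full electrical design load."""
    load_kw = watts_per_ft2 * AREA_FT2 / 1000
    return load_kw / KW_PER_TON

print(f"designed at 100 W/ft2: {chiller_tons(100):.0f} tons")  # → 569 tons
print(f"designed at  50 W/ft2: {chiller_tons(50):.0f} tons")   # → 284 tons
```

Halving the electrical design density halves the installed tonnage, which is exactly the overcapacity that ends up running chillers lightly loaded.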
This issue is important for several reasons. First of all, it's enormously expensive to overdesign a data center's electrical capacity. The electrical cost for a new data center can be as much as one-third of the overall project cost. Overdesigning an electrical system may be safe engineering, but it is not necessarily good engineering. A better solution might be to design in the flexibility to add electrical and cooling capacity in the future.
Secondly, if you have an existing data center that is marginal in terms of electrical capacity, it might make more sense to upgrade computer hardware than to add electrical capacity. With footprints one-tenth the size of previous DASD and CPUs, it's not likely that users will buy enough hardware to fill the space they have gained back through technology upgrades. Just because a data center has room to install 5,000 MIPS where there once were 500 MIPS does not mean it will do so any time soon.
We have brought this issue up several times while listening to presentations by facilities consultants on the data center of the future. Most of them are still showing projections of greatly increased watts per square foot, and all are surprised that anyone would challenge this perception. However, if the reaction of the users at the most recent UUUG Conference is any indication, many doubt the validity of these projections. Many users at the conference were seeing a drop in the load on their UPS while doubling or tripling their processing and storage capabilities.
As for the design of future data centers, our only prediction is that they will be smaller and have enormous processing, storage and networking capabilities. The power may be distributed differently, i.e., dual power paths or distributed redundant, but the overall electrical capacity may not be significantly different from what we see today. The key is to build in the flexibility to add capacity in an uninterrupted manner.