Data Centers of the Future Part III
This is part 3 of a four-part series on data centers of the future. Part 3 discusses solutions available to meet the challenges of supporting the high density loads identified in Parts 1 and 2. Part 4 will be a discussion of future technologies that might help solve some of the existing challenges.
Now that we have identified the impact that high density loads have on the infrastructure and operation of the data center, what kinds of solutions are available to meet this challenge? With loads for fully configured racks reaching 30 kW or more, how do you get enough power and cooling to the rack to support those loads? In practical terms, how can you run eight 50 amp, 208 volt circuits to a rack, and how can you cool a cabinet that requires 13 perforated tiles to deliver the volume of air needed for cooling?
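The scale of the problem is easy to ballpark. Here is a rough sizing sketch for a 30 kW rack; the derating factor, temperature rise, and per-tile airflow are my own illustrative assumptions (80% continuous-load derating, a 20 °F rise across the servers, roughly 350 CFM through a standard perforated tile), not design numbers:

```python
import math

# Rough sizing for a 30 kW rack: branch circuits and cooling airflow.
# Assumed values: 80% continuous-load derating, 20 F server delta-T,
# ~350 CFM per perforated tile. Illustrative only, not design numbers.

RACK_KW = 30.0

# Power side: a 50 A, 208 V single-phase circuit, derated to 80%.
circuit_kw = 208 * 50 * 0.80 / 1000            # ~8.3 kW per circuit
circuits_per_feed = math.ceil(RACK_KW / circuit_kw)  # 4 circuits per feed
# With dual-corded A and B feeds, that is 8 circuits to one rack.

# Cooling side: sensible heat formula CFM = BTU/hr / (1.08 * delta_T_F).
btu_per_hr = RACK_KW * 1000 * 3.412            # kW to BTU/hr
delta_t_f = 20.0
cfm = btu_per_hr / (1.08 * delta_t_f)          # ~4,700 CFM
tiles = cfm / 350.0                            # ~13 perforated tiles

print(f"{circuits_per_feed} x 50 A circuits per feed, "
      f"{cfm:.0f} CFM, {tiles:.1f} tiles")
```

Run the numbers and you land right on the figures above: about four 50 amp circuits per feed (eight with dual A/B cords) and roughly 13 perforated tiles' worth of air.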
There are a number of innovative approaches to solving these problems. Some concepts adapt traditional designs with minor changes that will allow you to support loads of up to 6 or 7 kW per rack. Others take completely different approaches, using supplemental cooling systems that can accommodate extremely high density loads. Water cooling is also making a comeback! What follows is a discussion of some of the solutions available. It is not intended to be all-inclusive, and there are a number of viable solutions that I have not included due to space limitations. While I am using some vendor-specific products as examples, there are solutions available from just about every vendor out there. My recommendation is to find the solution that best fits your data center's requirements.
With dual-corded equipment becoming the norm rather than the exception, 2N electrical distribution systems have become the de facto standard for data center design. A true 2N or 2N+1 electrical design provides two separate and independent sources of power (A and B) to the load. With no interconnections or interdependencies, there are no single points of failure that could impact both sources. This provides incredibly high levels of reliability and, in my opinion, is much simpler and easier for the user to manage. At any time, one side can be taken down for maintenance, for testing, or due to a failure, while the load is supported by the other side. Of course, this design works best if you are utilizing only dual-corded equipment.
In order to avoid the proliferation of thousands of branch circuits underfloor, most users are now using some type of rack distribution cabinet, or RDC. These cabinets typically house two to four 42-pole panelboards and can be fed from several power sources. The primary advantage is that they put the branch circuits closer to the load and minimize the underfloor wiring that can create air dams and impair cooling. Highly adaptable, RDCs allow you to run your branch circuits either above or below the floor. Here is an example of Liebert's RDC.
Almost all high density server racks require some combination of three-phase 208 volt power from both an A and a B source. The most common approach we see is to use power distribution strips that break the 208 volt power into 15 or 20 amp, 110 volt circuits for distribution at the rack level. Some of these strips even provide branch circuit monitoring and will alarm should loads exceed a predetermined level. The power distribution strips can be mounted vertically or horizontally, depending upon your needs. Below is an example of an APC power distribution unit that supplies metered power to a cabinet.
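To see how far one three-phase feed goes once the strip splits it up, here is a quick back-of-the-envelope sketch. The feed and branch ratings are assumptions for illustration (a 50 amp three-phase feed, 20 amp branches, 80% continuous-load derating), not any particular vendor's specs:

```python
import math

# Capacity of one three-phase feed when a rack power strip splits it
# into single-phase branch circuits. Assumed example ratings, not
# vendor specifications.

feed_amps, feed_volts = 50, 208        # three-phase feed to the strip
branch_amps, branch_volts = 20, 110    # single-phase branch circuits
derate = 0.80                          # continuous-load derating

feed_kw = math.sqrt(3) * feed_volts * feed_amps * derate / 1000   # ~14.4 kW
branch_kw = branch_volts * branch_amps * derate / 1000            # ~1.76 kW

branches = int(feed_kw // branch_kw)
print(f"Feed capacity {feed_kw:.1f} kW supports about {branches} branches")
```

A single 50 amp three-phase feed carries roughly 14 kW after derating, enough for about eight fully loaded 20 amp branch circuits, which is why one strip per A and B feed can serve an entire dense cabinet.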
Getting sufficient power to the rack is a considerable challenge. Getting sufficient cooling to the rack to support high density loads is even more of a concern. Almost all of our clients who have implemented high density racks in any quantity have had cooling problems. There are a number of design solutions that work for high density loads up to 6 or 7 kW. One is to implement a hot aisle/cold aisle configuration with return grilles over the hot aisles and with CRAC unit returns ducted into the ceiling plenum. We are currently working with EYP-Mission Critical Facilities on this approach, which we believe will support loads up to 7 kW per rack. Here is a simplified drawing showing that approach.
Once you get beyond that range, supplemental cooling systems become a requirement. There are a large number of approaches, and all of them have their place. Two of the most common are to partition off the hot aisle to eliminate recirculation of hot air, and to provide supplemental cooling air in the cold aisle, usually from above. Here are two examples: APC's hot aisle containment systems and Liebert's supplemental cooling systems.
Another approach is to use a different cooling medium. While I was happy to see water cooled mainframes disappear from an operational point of view, water cooled cabinets make sense from a design standpoint. Water is a much more effective cooling medium: the amount of heat that can be removed by water is roughly 3,500 times the amount that can be removed by the same volume of air. There are several manufacturers making these cabinets. Here are a couple of examples of water cooled cabinets that are currently on the market.
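The 3,500-times figure falls out of the ratio of volumetric heat capacities (density times specific heat) of the two fluids. A quick sanity check, using textbook property values at roughly room conditions:

```python
# Sanity check on the "3,500 times" figure: ratio of volumetric heat
# capacities (density * specific heat) of water vs. air. Property
# values are textbook approximations at roughly room conditions.

water_density, water_cp = 1000.0, 4186.0   # kg/m^3, J/(kg*K)
air_density, air_cp = 1.2, 1005.0          # kg/m^3 at ~20 C, J/(kg*K)

ratio = (water_density * water_cp) / (air_density * air_cp)
print(f"Water holds ~{ratio:,.0f}x the heat of the same volume of air")
```

The ratio comes out just under 3,500, which is why a pair of small chilled water lines can replace thousands of CFM of airflow at the cabinet.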
If your company makes the decision to implement a large number of high density servers such as blades, 1U or now .5U servers, I'd recommend that you research the different technologies that are available before deciding on one. There are many solutions out there for power distribution and cooling. All of them are designed to make it easier for you to support high density loads. Don't feel like you are limited by the technology you currently use for cooling. There are high density cooling solutions that will work with chilled water, condenser water or direct expansion systems. The important thing is to have an experienced engineering company design a total solution for you that takes into account your electrical as well as mechanical requirements. Only by taking an integrated systems approach to installing high density servers will you achieve the results you are looking for.
Stay tuned for part 4, where we will discuss new technologies currently being developed for data centers of the future.