
For data centers that are cooling constrained, changes to the cooling system will create additional data center capacity to support IT growth.
Photos courtesy Emerson Network Power/Liebert

5 Ways to Help Data Center Customers Save Energy, Increase Performance

The advice presented in this article is focused on return on investment in the long- and short-term. Working with data center or facility managers to implement these changes can allow contractors to build stronger relationships in this important segment and generate incremental revenue.

Data centers today support a wide variety of business services and represent a significant investment for the businesses they support. With continued growth in data center capacities, and rising energy prices, more and more of that investment is coming in the form of energy costs. Where data center energy costs were once considered inconsequential relative to other IT costs, they have now risen to the point where they are a target of cost reduction initiatives.

This focus on operating efficiency is providing an opportunity for contractors, who can help their customers save money by implementing changes to the data center cooling system that generate energy savings and enhance performance. For data centers that are cooling constrained, this has the added benefit of creating additional data center capacity to support IT growth.

The most common measure of energy efficiency in the data center industry today is Power Usage Effectiveness (PUE). PUE shows the ratio of the total energy consumed by the facility to the energy consumed by IT systems.

For example, a data center that consumes 1 MW of facility power, of which 600 kW is used by IT equipment and 400 kW by the power and cooling that supports the data center infrastructure, would have a PUE of 1.67 (1,000/600). The goal is to get PUE as close to 1 as possible, as this means a high percentage of data center energy is being consumed by the IT equipment doing the work of the data center. Because cooling is typically the second largest user of data center energy, after only the IT equipment, reducing cooling energy consumption can have a significant impact on PUE, and translate into real economic savings.
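As a quick sanity check, the PUE arithmetic above can be expressed in a few lines of Python (the figures are the article's example; the function name is ours):

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility energy divided by IT energy."""
    return total_facility_kw / it_kw

# The article's example: 1 MW total facility load, 600 kW of IT load
print(round(pue(1000, 600), 2))  # 1.67
```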

The strategies outlined in this article all represent proven approaches to increasing efficiency; the first four provide a faster return on investment, while the fifth offers a longer-term ROI.

1. Raise Return Air Temperatures

The IT equipment that populates data centers today is, in general, more resilient than the equipment you would have found in a data center ten years ago. Manufacturers have widened the window of acceptable operating conditions for their equipment, and ASHRAE has responded by raising its recommended maximum server inlet temperature to 80.6F. This is much warmer than the mid-sixties temperatures at which many data centers still operate.


This creates the opportunity to raise the temperature of air returning to the cooling units, which increases cooling system efficiency, while keeping IT equipment well within safe operating conditions.


For example, some systems today operate with a return air control setpoint of 72F. Since return air is usually 20-25 degrees hotter than supply air, this results in a supply air temperature of about 52F, well below the low end of ASHRAE's rack inlet temperature range of 64.4F to 80.6F. Emerson Network Power estimates that for every degree the temperature of return air is raised, energy costs are reduced by one to two percent. In this example, raising the return air temperature by 10-12F (to roughly 82-84F, with a supply air temperature in the low-to-mid 60s) can reduce energy costs by 10-20% while increasing unit capacity by roughly 20%.
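To see what the one-to-two-percent rule of thumb means in dollars, here is a minimal sketch. The cost figure and the 1.5% midpoint are illustrative assumptions on our part, not Emerson data:

```python
def projected_savings(annual_cooling_cost, degrees_raised, pct_per_degree=1.5):
    """Estimate annual savings from raising return air temperature.

    Applies the rule of thumb cited above: roughly 1-2% energy savings
    per degree F raised (a 1.5% midpoint is assumed here). Savings are
    treated as simple, non-compounding, for illustration only.
    """
    return annual_cooling_cost * degrees_raised * pct_per_degree / 100

# Hypothetical site: $100,000 annual cooling energy cost, 10F raise
print(projected_savings(100_000, 10))  # 15000.0
```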


To determine whether return air temperature can be increased, and by how much, first monitor the server inlet temperature at various points across the data center. A new generation of wireless temperature sensors has greatly simplified the task of server inlet temperature monitoring. Once the baseline is established, you can begin to increase the return air temperature a degree or two at a time, letting the room balance out after each change.

Continue to monitor the inlet temperatures until you reach a threshold within the ASHRAE range at which your customer is comfortable operating, or until the room temperatures become unstable. If the unit controller offers multiple humidity control modes, set it to operate off dew point temperature instead of relative humidity (RH), because the ASHRAE limits are defined by maximum and minimum dew points, not RH. This will also help avoid unnecessary humidification and dehumidification.
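The stepwise adjustment procedure described above can be sketched as a simple control loop. This is only an illustration: `read_inlet_temps` and `set_return_setpoint` are hypothetical stand-ins for whatever monitoring and CRAC control interfaces the site actually provides, and any real change should be supervised by the operator.

```python
import time

ASHRAE_MAX_INLET_F = 80.6   # top of the ASHRAE recommended inlet range
STEP_F = 1.0                # raise the setpoint one degree at a time
MARGIN_F = 3.0              # stop this far below the ASHRAE maximum

def raise_return_setpoint(read_inlet_temps, set_return_setpoint,
                          start_f, target_f, settle_seconds=3600):
    """Incrementally raise the return air setpoint, watching inlet temps.

    `read_inlet_temps` returns a list of server inlet temperatures (F);
    `set_return_setpoint` pushes a new return air setpoint to the CRAC
    units. Both are hypothetical interfaces, named here for illustration.
    """
    setpoint = start_f
    while setpoint + STEP_F <= target_f:
        setpoint += STEP_F
        set_return_setpoint(setpoint)
        time.sleep(settle_seconds)          # let the room balance out
        hottest = max(read_inlet_temps())   # worst-case server inlet
        if hottest > ASHRAE_MAX_INLET_F - MARGIN_F:
            setpoint -= STEP_F              # back off one step and stop
            set_return_setpoint(setpoint)
            break
    return setpoint
```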

2. Implement Aisle Containment
When operating at higher temperatures, hot air wrapping around from the back of the servers into the cold aisle can create “hot spots” that threaten the availability of the IT equipment. Alternatively, air from the cold aisle can mix with the return air from the hot aisle, lowering the return air temperature and reducing computer room air conditioning (CRAC) unit efficiency.


Maintaining separation between hot and cold air within the data center is best accomplished through an aisle containment system, on either the cold or hot side. This is a relatively simple system to implement: an enclosure extends across the top of two rows of racks, with side panels and doors at each end of the aisle.

This creates a physical barrier between the air in the cold or hot aisle and the air in the rest of the data center, assuming your customer is using blanking panels to cover any open slots in the equipment racks, which can otherwise serve as conduits between the two aisles. If your customer is not using blanking panels, you should recommend them; this is a best practice regardless of whether containment is being implemented.

3. Apply Variable Capacity Technology
Cooling system loads are almost never static, and one of the challenges the data center cooling system must address is operating efficiently at partial load. The introduction of digital, variable capacity compressors represented a huge step forward in this regard, and these systems have been on the market long enough that any CRAC unit not using variable capacity compressors should be considered a prime candidate for an upgrade.

The other opportunity for variable capacity technology is the cooling unit fan. Variable frequency fan drives represent a significant improvement over fixed-speed fans; a 20% reduction in fan speed provides almost 50% savings in fan power consumption.
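The "20% slower, almost 50% less power" figure follows from the fan affinity laws, under which fan power varies with the cube of fan speed:

```python
def fan_power_fraction(speed_fraction):
    """Fan affinity law: power draw varies with the cube of fan speed."""
    return speed_fraction ** 3

# Running at 80% speed (a 20% reduction) draws about 51% of full power,
# i.e. the "almost 50% savings" cited above.
print(round(1 - fan_power_fraction(0.8), 3))  # 0.488
```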

Electronically commutated (EC) fans may provide an even better option for increasing cooling unit efficiency. EC plug fans eliminate belt losses that occur in traditional fans. The EC fan typically requires a minimum 24-inch raised floor to obtain maximum operating efficiency and may not be suitable for ducted upflow cooling units where higher static pressures are required. In these cases, variable frequency drive fans are a better choice.

Both options save energy and can be installed on existing cooling units or specified in new units.


4. Utilize Cooling System Controls
A common issue in many data centers is that the cooling system provides more airflow than required in some areas of the data center and not enough in others. This severely impacts the efficiency of the cooling system. One of the best ways to rectify this situation is to make full use of the cooling system controls.

An amazing amount of functionality is built into the current generation of cooling system controls. For instance, those offered by Emerson Network Power on its Liebert data center cooling equipment feature multi-unit teamwork control with fan coordination, coordination between external condensers and indoor cooling units, capacity and power usage monitoring, auto-tuning, economizer control and custom staging and sequencing.

With this level of functionality and integration, custom building management system programming or a separate thermal management control system is not required to optimize system performance. Unfortunately, many data centers do not completely utilize the controls. The opportunity exists to help data center personnel understand the full functionality and potential of the controls and better utilize them to increase efficiency. 

5. Employ Economization Methods
If your customer is showing an even greater appetite for energy savings, consider evaluating the potential of economization. Because data center cooling systems operate 365 days a year, they can benefit significantly from what is often referred to as “free cooling,” or the use of outside air, plate heat exchangers or new pumped refrigerant technology to support data center thermal management.

Using economizers when outside conditions and ambient temperatures are favorable reduces or eliminates the use of compressors or chillers, which are the largest consumers of energy in a cooling system. Typical economizer systems can lower cooling system energy usage by 30% to 50%, depending on the average temperature and humidity conditions of the installation site.
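A back-of-the-envelope savings estimate can be framed as follows. Both inputs are illustrative assumptions on our part, not figures from the article: the compressor share of cooling energy and the fraction of the year economization is available vary widely by site and climate.

```python
def economizer_savings_kwh(annual_cooling_kwh, free_cooling_fraction,
                           compressor_share=0.75):
    """Rough annual kWh saved by economization.

    Assumes compressors account for `compressor_share` of cooling energy
    (0.75 here is an illustrative guess) and that economization displaces
    compressor energy for `free_cooling_fraction` of the year.
    """
    return annual_cooling_kwh * compressor_share * free_cooling_fraction

# Hypothetical: 500,000 kWh/yr of cooling energy, economizing half the year
print(economizer_savings_kwh(500_000, 0.5))  # 187500.0
```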


Fueling interest in economizers is the fact that energy codes are being updated to include requirements for economizers in certain regions of the U.S. ASHRAE Standard 90.1, which stipulates incorporating economizers into the cooling system design in new commercial buildings depending on geographic location and system cooling capacity, is further drawing attention to economizers. Data centers had been exempt from this standard until recently.

Pumped refrigerant, a new economizer technology, is worthy of further mention because of its ability to shut off compressors fully or partially up to 80% of the time without any use of water, outside air, louvers or dampers. This technology automatically shifts into economization mode in a matter of minutes, allowing the system to take advantage of even small periods of energy-saving weather. Additionally, there is no risk of outside air contaminants or maintenance costs associated with dampers, louvers or water treatment, which is common in most economization systems.

Taking the Next Step
Whether it’s a relatively simple shift in return air temperature or the addition of a cooling system with integrated economization, there are plenty of opportunities for HVAC contractors to work with their data center customers to reduce energy costs and lower PUE. HVAC services for the data center sector are a rapidly expanding market, one that will keep growing for the foreseeable future. Equipment manufacturers can help you get up to speed with training on best practices and new technologies to take advantage of this opportunity.

David F. Kelley is Director of Thermal Management Application Engineering at Emerson Network Power-Liebert North America.

 
