by Patrick Fogarty…

19 May 2009 – Few people understand the impact that the computing world has on our carbon footprint, despite the fact that data centres, the workhorses that support the internet and the growth in remote computing, are massive power users. The good news is that innovation in data centre engineering is finding ways to significantly reduce the energy needed to run them, and these developments can dramatically cut both the cost base and the carbon footprint of a business.

The invisible problem

The impact data centres have on the environment is one that many people underestimate. Take a medium-size data centre typical of what is being built at the moment, such as a 2,000 square metre, 4 megawatt facility. Its annual energy consumption would be roughly equal to that of 3,000 houses.
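
As a rough sanity check of that figure, the sketch below works through the arithmetic. The average load factor and household consumption used here are assumptions chosen for illustration; the actual numbers will vary by facility and by country.

```python
# Back-of-envelope check (illustrative assumptions, not measured data):
# a 4 MW facility at an assumed 50% average load, compared with an assumed
# household consumption of about 6 MWh per year.
FACILITY_CAPACITY_MW = 4.0      # nameplate electrical capacity
AVERAGE_LOAD_FACTOR = 0.5       # assumed average utilisation of that capacity
HOURS_PER_YEAR = 8760
HOUSEHOLD_MWH_PER_YEAR = 6.0    # assumed typical household usage

annual_mwh = FACILITY_CAPACITY_MW * AVERAGE_LOAD_FACTOR * HOURS_PER_YEAR
equivalent_households = annual_mwh / HOUSEHOLD_MWH_PER_YEAR
print(f"Annual consumption: {annual_mwh:,.0f} MWh "
      f"~ {equivalent_households:,.0f} households")
# -> roughly 17,500 MWh a year, on the order of 3,000 households
```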

A 5 kW rack, typical of racks in new high-density computing, produces in one week the equivalent of 100 kg of carbon. To put it in IT terms, each week the rack produces 7 RU of carbon by volume, or in seven weeks it produces its own volume in carbon. The only reason this is not recognised as a problem is that we can’t see it.
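
The weekly figure can be reproduced with simple arithmetic. The grid emission factor below is an assumption made for illustration, not a measured value for any particular grid:

```python
# Illustrative check of the weekly carbon figure for a 5 kW rack.
RACK_POWER_KW = 5.0
HOURS_PER_WEEK = 24 * 7
GRID_KG_CO2_PER_KWH = 0.45        # assumed grid emission factor
CARBON_FRACTION_OF_CO2 = 12 / 44  # mass fraction of elemental carbon in CO2

weekly_kwh = RACK_POWER_KW * HOURS_PER_WEEK
weekly_kg_carbon = weekly_kwh * GRID_KG_CO2_PER_KWH * CARBON_FRACTION_OF_CO2
print(f"{weekly_kwh:.0f} kWh/week ~ {weekly_kg_carbon:.0f} kg of carbon")
# -> 840 kWh per week, roughly 100 kg of elemental carbon
```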

Inverting the issue

As engineers we typically consider the power available to a site and engineer systems, such as switchboards, that use this power effectively to deliver services to the operation, in this case the IT equipment. This is really a back-to-front view of the world.

Let us consider the whole system in reverse, starting at the point where useful work is done – the central processing unit (CPU). Working back up the electricity supply chain from this point (first within the IT system, then through the data centre and finally across the transmission grid), a series of auxiliary systems is required to support the operation of the chip.

Each of these systems adds to the power overhead of the system and effectively reduces the overall efficiency.

For each component in the system, the efficiency can be considered as the power output divided by the power input. Power lost in each component, typically as heat, does not do useful work.

Figure 1 illustrates the efficiencies of a typical transmission grid/data centre/IT chain for a data centre. The blue bars indicate the efficiency of each component in the chain, and the red line shows what proportion of the original 100 per cent of power leaving the power station reaches each component in the series. The net result is that less than 15 per cent of the power that leaves the power station reaches the server CPU.
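
The calculation behind such a chain is simply a running product of component efficiencies. The sketch below uses illustrative figures chosen to land in the same range as Figure 1; they are assumptions, not the values behind the published chart:

```python
# Each value is the fraction of incoming power passed on to the next stage
# (assumed, illustrative figures only).
chain = {
    "transmission grid":            0.92,
    "UPS":                          0.85,
    "cooling and auxiliaries":      0.50,  # share of site power reaching the IT load
    "power distribution":           0.97,
    "server power supply":          0.60,
    "voltage regulation":           0.80,
    "fans and other server losses": 0.80,
}

remaining = 1.0
for stage, efficiency in chain.items():
    remaining *= efficiency
    print(f"{stage:<30} {efficiency:>4.0%}  cumulative {remaining:>5.1%}")
# With these assumed values, under 15% of the power leaving the power
# station reaches the CPU.
```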

It is worth noting that it is difficult to obtain reliable data on the power usage that actually occurs within servers, as this is generally closely guarded commercial information. Current moves to apply EPA green star ratings to servers should help make this information more readily available.

It is also worth noting that this graph is drawn for a well-loaded data centre. Data centres with low loads will typically be less efficient than this example.

This graph illustrates that the overall efficiency is the product of a series of component efficiencies. One small saving in a component in the chain, particularly on the downstream (IT) side, can result in a massive net gain overall, because every watt saved at the chip is a watt that no longer has to be pushed through the losses of every upstream stage.

The orange line added in Figure 2 illustrates the net effect on the overall power usage due to one change, in this case an improvement in the server power supply efficiency from 60 per cent to 80 per cent.

This is readily achievable now, but it is a step that is not typically taken. There are obviously design-induced factors and latency associated with existing systems, but provided the upstream design recognises and takes account of this one change, overall usage could be reduced to 75 per cent of the current value: for a fixed useful load, input power scales with the inverse of efficiency, and 60/80 is 0.75.
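
Expressed as a couple of lines of arithmetic (assuming the useful load downstream of the power supply stays fixed):

```python
# Improving one component's efficiency scales the power it draws by the
# ratio of old to new efficiency, for the same useful output.
def relative_draw(old_efficiency: float, new_efficiency: float) -> float:
    """Fraction of the original input power needed after the improvement."""
    return old_efficiency / new_efficiency

print(f"{relative_draw(0.60, 0.80):.0%} of the original power usage")  # -> 75%
```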

This cumulative effect gives real cause for optimism in the push for energy efficiency in data centres.

Figure 3 provides guidance as to where we should be heading as an industry. Significant savings can be made immediately (orange line) using known and proven technology. If we as an industry were to embrace new, forward-thinking technology, further savings would also be possible (green line). While there is an element of “crystal balling” in this calculation, the shortcomings in existing technology are well known; if the industry works together, the savings are achievable.

The yellow line represents further improvements that could be made if the IT/software fraternity were to exploit even a fraction of the efficiency gains available through technologies such as virtualisation and multi-core processing. The yellow line represents an improvement of 3:1 in efficiency, which is very conservative: industry experts estimate gains of up to 100:1 are achievable for some application platforms.
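
To see where a figure like 3:1 can come from, consider a hypothetical consolidation exercise. All of the numbers below are assumptions made purely for illustration:

```python
import math

# Lightly loaded physical servers are virtualised onto fewer, busier hosts.
PHYSICAL_SERVERS = 300
AVERAGE_UTILISATION_PCT = 20   # assumed utilisation of the existing servers
TARGET_UTILISATION_PCT = 60    # assumed utilisation of a virtualised host
WATTS_PER_SERVER = 300         # assumed draw, largely independent of load

hosts_needed = math.ceil(
    PHYSICAL_SERVERS * AVERAGE_UTILISATION_PCT / TARGET_UTILISATION_PCT)
power_before_kw = PHYSICAL_SERVERS * WATTS_PER_SERVER / 1000
power_after_kw = hosts_needed * WATTS_PER_SERVER / 1000
print(f"{PHYSICAL_SERVERS} servers -> {hosts_needed} hosts, "
      f"{power_before_kw:.0f} kW -> {power_after_kw:.0f} kW "
      f"({power_before_kw / power_after_kw:.0f}:1)")
# -> 300 servers -> 100 hosts, 90 kW -> 30 kW (3:1)
```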

Industry fragmentation inhibiting change

A common misconception is that the centralisation of computing causes the problem. This is not true. The centralised IT model is the most efficient model for delivering IT capacity with current technology. The real frustration is that the industry is struggling to gain the efficiencies that such a model makes possible.

So if the tools are available to make a significant improvement to the IT carbon footprint, why hasn’t it happened already? There are a number of reasons, one of the main ones being the fragmentation of the industry: the data centre “industry” is not actually an industry but a collection of disparate groups.

The IT sector and the data centre sector have traditionally had little cross-fertilisation. What has tended to happen is that a data centre owner/operator would provide an amount of space to a specification nominated by the IT equipment supplier. Engineers, being what they are, would make these specifications incredibly tight, based on “lore” passed down through 40-plus years of IT installations. Ironically, it has only been with the recent sharp increase in load densities, particularly with blade servers, that the limitations of physics have forced the two parties to the negotiating table.

IT systems have, since the demise of the water cooled mainframes, relied on air as a cooling medium. For low load densities this has not been an issue but as load densities have increased it has become impossible to achieve the required cooling by utilising air without a co-ordinated response. The recent “standardisation” of hot/cold aisles and front-to-back air flow equipment has been a result of this co-ordination.

Too much spin

There are a number of other factors in the industry that have hampered progress. Much of the equipment is sold on a discrete-component basis, and as the green discussion has (rightly) gained momentum, there has been a rush to re-package much of the existing equipment as “green”. The resulting information in the marketplace has, in many cases, simply added to the confusion. Most solutions, in any case, have been offered on a component-by-component rather than a whole-of-system basis, and the marketing blurb and the reality have been well separated.

The same is true of the rush to renewables. Many companies have been at pains to advertise their renewable credentials but have neglected the larger savings available from properly engineering their systems. Those savings would, in many cases, cut greenhouse gas emissions by two orders of magnitude more.

The commercial reality of contractual relationships has often been a disincentive for change. In many contractual models there is no mechanism to transfer funds from an operating expenditure budget into capital expenditure, no matter how short the payback period. The more steps in the contractual chain, the less likely it is that an energy-saving measure will be accepted, even when it is cheaper and more efficient.

Creating sustainable solutions

The recent focus on the IT carbon footprint, and the collaboration between the IT sector and data centre space providers, has opened the doors to new and exciting developments in “green” data centres.

Initially, the basic arrangement of IT equipment will not change, and so solutions will focus on optimising current technologies. As Figure 1 shows, there are “easy gains” to be made, especially in server power supply efficiency and air conditioning systems.

Server power supply efficiency is easy to rectify; the technology exists and, in fact, involves little, if any, additional cost. In the past, IT server manufacturers have appeared to resist the drive to better power supplies, believing, wrongly, that a 20 per cent improvement in a 300 watt power supply is not worth pursuing.

Fortunately, customers have begun to realise that installed server efficiency is a key to data centre efficiency. Resultant pressure on manufacturers has borne results. Green Star programs for servers, which also focus on part load server efficiency, have accelerated development.

Big savings available

An air conditioning (A/C) system’s prime function is to remove heat at board/chip level and dissipate it to the outside environment. Key improvements in the efficiency of A/C systems involve tuning traditional recirculating systems, using outside air conditions (via forms of fresh air cooling), or using a liquid medium (generally water or CO2) to provide “close coupled” cooling at rack level. Of these, outside air cooling represents, for many climates (particularly in Europe), the simplest and cheapest way to achieve a significant improvement.

NDY has developed full fresh air and traditional/full fresh air hybrid designs, which enable the desired conditions to be met for more than 95 per cent of the summer period without the need for chillers. This in turn represents a saving of over 40 per cent in operational energy usage compared with a traditional data centre.

For a four hall data centre such as that modelled in Figure 4, this represents a saving of over £5 million (A$11 million) a year in energy costs or 25,000 tonnes of CO2.
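
The scale of these figures can be reconstructed with a simple model. Every input below (total IT load, the two PUE values, the electricity tariff and the grid emission factor) is an assumption made for illustration, not a value taken from the Figure 4 model:

```python
# Illustrative reconstruction of the order of magnitude of the Figure 4 savings.
IT_LOAD_KW = 7200               # assumed total IT load across four halls
PUE_CONVENTIONAL = 2.0          # assumed chiller-based design
PUE_FRESH_AIR = 1.2             # assumed fresh-air/hybrid design
HOURS_PER_YEAR = 8760
TARIFF_GBP_PER_KWH = 0.10       # assumed electricity price
GRID_KG_CO2_PER_KWH = 0.50      # assumed grid emission factor

saved_kwh = IT_LOAD_KW * HOURS_PER_YEAR * (PUE_CONVENTIONAL - PUE_FRESH_AIR)
print(f"Energy saved: {saved_kwh / 1e6:.0f} GWh/year")
print(f"Cost saved:   £{saved_kwh * TARIFF_GBP_PER_KWH / 1e6:.1f} million/year")
print(f"CO2 avoided:  {saved_kwh * GRID_KG_CO2_PER_KWH / 1000:,.0f} tonnes/year")
# Moving from PUE 2.0 to 1.2 also means total site energy falls to
# 1.2/2.0 = 60% of its former value, i.e. the 40 per cent saving quoted above.
```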

In cases where the IT loads are designed to accept wider temperature and humidity conditions, such systems, if correctly designed, can remove the requirement for chillers altogether, creating significant cost savings. NDY has designed such a system, as part of a fully optimised design, for a very large data centre in the United Kingdom.

Many options are being pursued for “liquid to server” cooling. These will create significant improvements in cooling efficiency and will also need less space because of the greater heat-carrying capacity of liquid over air. This is not new technology: the mainframes of the 1970s were liquid cooled.

It is expected that there will be many options on the market and a period of consolidation before the industry standardises. Of the options being developed, one of the more exciting involves phase-change loops and solid couplings to remove the requirement for liquid in the server. Another utilises a reduced-pressure, closed-circuit, water-based evaporative loop currently used for high-power military applications.

Improvements to software, which is often inefficiently written, would also reduce energy consumption.

Data centres that are now being built need to show an appreciation of these trends to enable simple refits as new technology becomes available. By maximising efficiencies and being mindful of new advancements, data centre owners can significantly reduce their carbon footprint and cut their on-going costs. And this is good news for all.

Patrick Fogarty is a London based director of Norman Disney & Young.