Are data centres still big energy guzzlers?

Data centres consume 3 per cent of the world’s energy, so it’s fortunate these buildings are becoming increasingly energy efficient. Sadly, those efficiency gains could be swallowed up as the world’s appetite for data accelerates with the proliferation of artificial intelligence and autonomous cars.

Data centre operators are taking energy efficiency more seriously.

According to Bob Sharon, founder and chief innovation officer of Blue IoT, this is driven by the potential to save on energy costs and, increasingly, the need to become better corporate citizens and cut emissions.

Data centre energy efficiency is measured using a ratio called Power Usage Effectiveness (PUE): the facility’s total energy consumption divided by the energy used by the computing equipment itself. The closer the figure is to 1.0, the less energy is going to cooling and other overhead.

Arup’s Australasia technology market leader Dave Martin told The Fifth Estate that five or 10 years ago data centres were looking at PUEs of 2.0 or more – meaning the facility as a whole was drawing at least twice the energy needed to keep the servers themselves running. But that ratio can now be as low as 1.15 for best-in-class facilities.
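To make those ratios concrete, here is a minimal sketch of the PUE calculation using purely hypothetical loads (the kilowatt figures below are illustrative, not measurements from any facility Martin describes):

```python
# Illustrative PUE calculation with hypothetical figures.
# PUE = total facility energy / IT equipment energy; 1.0 would mean zero overhead.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return Power Usage Effectiveness for a given facility load."""
    return total_facility_kw / it_equipment_kw

it_load = 1000.0                      # kW drawn by the servers themselves (hypothetical)
older_site = pue(2100.0, it_load)     # ~2.1: overhead exceeds the IT load itself
best_in_class = pue(1150.0, it_load)  # ~1.15: roughly 15% extra for cooling and other overhead

print(f"Older facility PUE: {older_site:.2f}")
print(f"Best-in-class PUE: {best_in_class:.2f}")
```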

He says the problem is our insatiable appetite for data. Despite these improvements in energy efficiency, around the world there has been exponential growth in the size of data centres and in the amount of power consumed per rack.

He says the explosion in online gaming and streaming services such as Netflix, along with social media sites such as Instagram and Facebook, has driven much of this growth.

“But we haven’t really touched autonomous cars or AI yet, and the expectation is that demand for data and storage is only going to grow.”

Where there needs to be a “step change” in energy efficiency innovation now, he says, is in the servers themselves.

The manufacturers of chips and servers are working on the problem, but it doesn’t help that the silicon chip – the technology that has underpinned computing hardware for more than half a century – is approaching the limit of its efficiency improvements.

The likes of quantum computing, nanomechanics and graphene technologies are all under investigation, but no commercially available technology yet offers an efficiency improvement on silicon chips, Martin says.

It’s all about the cool factor

It’s traditionally been important to keep data centres cool – typically around 18 degrees Celsius – to stop the servers from overheating. But improvements in server technology mean that data rooms can now run at temperatures as high as 35 degrees Celsius.

This is complicated by the traditional co-location ownership model where data centre operators own and operate a centre but rent out rack space to enterprise and government customers to store their data. These customers sign a service agreement on entry that stipulates factors such as humidity and maximum temperature, which Bob Sharon says is often far lower than it now needs to be.

Sharon, who has been a judge at the Datacloud Global Awards, says although some data centres have lifted their temperatures it’s still not common to see data centres above 21 or 22 degrees Celsius.

And although data rooms can now withstand higher temperatures there’s still some cooling required, especially for centres in warmer climates.

One practice that’s been widely adopted is facing the racks the same way so that there’s a hot aisle and a cold aisle, which allows better control of the airflows and less energy use overall.

It’s also important to make these rows airtight to stop the hot air sections mixing with the cold air.

Sharon says there’s a tendency for owners to be “a bit conservative in terms of cooling infrastructure”.

This is largely because energy efficiency is far from a data centre’s top priority. Uptime is always the number one concern, with centres ranked according to their resilience and ability to run 24/7.

The fixation on uptime means that there’s a lack of innovation in the cooling infrastructure space, with many operators wary of new technology in case it fails and causes an outage.

But one emerging method is immersion cooling, where the racks sit in an oil or other liquid that keeps the whole system cool without the need for air conditioning. Sharon says there’s been a lot of research into the technology but it’s still “early days.”

There’s also the option of placing data centres in cold locations, which has happened a lot in the northern hemisphere. But the problem is many companies want to keep their data centres on home soil for data sovereignty reasons, which means the world’s data centres can’t just be plonked in Iceland to save on cooling.

Tasmania might offer a suitable data centre climate for Australia but links to the mainland and operational costs pose challenges, Sharon says.

Renewables are becoming a data centre priority

Another big opportunity is in renewables. Although the high-consumption, 24/7 nature of data centre operations can make the buildings unsuitable for rooftop solar, some companies are looking to locate data centres closer to large-scale wind or solar farms.

Dave Martin from Arup says this movement is largely driven by the large tech players such as Apple and Facebook committing to ambitious renewable energy targets for their own data centres. In 2018, Google was the largest corporate buyer of renewable energy in the world.

Are data centre owners getting rated by NABERS?

NABERS for data centres, introduced in 2013, was the first scheme in the world specifically for rating the energy efficiency of data centres.

Bob Sharon says that despite operators like Fujitsu leading the way by rating all of their data centres, it’s still not commonplace for data centres in Australia to get a NABERS rating.

He suspects it’s because they “don’t want to put it out there for reasons of competitiveness”. Others are likely worried they won’t score particularly well.