The rapid rise of AI has reshaped global digital infrastructure, pushing rack densities to unprecedented levels.
But that appetite for hardware carries a cost: energy demands that can scale tenfold, raising urgent questions about sustainability.
For Dr. John Frey, chief technologist for sustainable transformation at Hewlett Packard Enterprise (HPE), the key to resolving that tension is simple: start at the beginning.
“Most companies try to bolt on sustainability late in the process,” Frey told Capacity. “But with AI, we actually have the rare chance to get it right from day one.”
It’s a lesson he’s spent more than two decades refining. Since founding HPE’s sustainable transformation programme 25 years ago, Frey has worked with thousands of organisations to embed efficiency into IT systems. And with AI scaling up infrastructure demands in ways few anticipated, the time to act, he argues, is now.
Frey warns that many organisations approach AI as an isolated experiment before they have even decided which processor to deploy, often ringfencing budgets and bypassing traditional planning and ROI assessments.
“Only about 10% of AI projects globally ever make it to production,” he said, citing a Forbes study. “And just a third of those have an enterprise-wide strategy for efficiency. It’s no wonder so many stall out.”
The solution, he believes, lies in full-stack thinking: from hardware selection to application design to data centre cooling and interconnects. “Optimising just one layer doesn’t cut it anymore. It has to be the whole system.”
To avoid that fate, he outlines what has worked for HPE over the past two decades:
At the heart of HPE’s approach is a framework Frey calls the five levers — a set of efficiency principles designed to work across any technology stack, including AI workloads. Each lever tackles a different layer of the infrastructure puzzle, from raw compute to long-term storage habits.
The first of Frey’s levers focuses on getting more out of the hardware you already own. In most enterprise environments, server utilisation rates remain stubbornly low, around 30% in virtualised setups, and closer to 10% without virtualisation. That unused capacity still consumes power, generates heat, and takes up space.
“We still see organisations buying new kit without fully using what they have,” he said. “If you’re running an application that only uses one CPU core on a 64-core server, you’re wasting most of the available compute.”
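The arithmetic is stark. As a rough sketch, take a simple linear server power model, a common approximation; the wattages below are illustrative assumptions, not HPE figures:

    # Rough sketch: energy cost of low utilisation under a linear power model.
    # The power figures below are illustrative assumptions, not HPE data.
    IDLE_W = 200.0   # assumed idle draw of a two-socket server, watts
    PEAK_W = 500.0   # assumed draw at full load, watts

    def power_at(utilisation: float) -> float:
        """Approximate draw at a given utilisation (0.0 to 1.0)."""
        return IDLE_W + (PEAK_W - IDLE_W) * utilisation

    for u in (0.10, 0.30, 0.80):
        w = power_at(u)
        # Energy spent per unit of useful work, relative to a fully loaded box.
        print(f"{u:>4.0%} utilised: {w:.0f} W total, "
              f"{w / (PEAK_W * u):.1f}x the energy per unit of work vs full load")

Under these assumptions a server at 10% utilisation still draws nearly half its peak power, so each unit of useful work costs several times the energy it would at full load.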
The second lever targets idle energy consumption, another major blind spot beyond raw utilisation. Power-saving features, such as low-power states and auto-sleep functions, are often disabled by default, either for performance reasons or out of caution.
“Some customers will turn those settings off straight away,” he said. “But if your utilisation is already low, you’re just leaving gear running hot for no reason.”
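As a minimal illustration, the sketch below (assuming a Linux host that exposes cpufreq through sysfs) checks one such setting: whether every core's frequency-scaling governor is pinned to 'performance', which trades idle savings for headroom:

    # Minimal sketch, Linux-specific: report the CPU frequency-scaling
    # governor per core. Pinning everything to "performance" disables
    # frequency scaling, one of the power-saving features Frey describes.
    from pathlib import Path

    governors = set()
    for path in Path("/sys/devices/system/cpu").glob(
            "cpu[0-9]*/cpufreq/scaling_governor"):
        governors.add(path.read_text().strip())

    if governors == {"performance"}:
        print("All cores pinned to 'performance'; frequency scaling is off.")
    else:
        print(f"Scaling governors in use: {governors or 'none exposed'}")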
The third lever covers cooling and power conversion, both major cost and energy drains in any data centre. Frey notes that power is often converted multiple times, stepping down voltages and switching from AC to DC and back again, before it ever reaches the servers.
“Each conversion step comes with a loss,” he said. “Same with inefficient cooling setups, especially if they’re not tuned to workload density. You’re burning energy just to move air around.”
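Those losses compound. A back-of-the-envelope sketch, with illustrative per-stage efficiencies rather than measured figures, shows how quickly they add up:

    # Back-of-the-envelope: losses compound across each power conversion stage.
    # The stage efficiencies are illustrative assumptions, not measured values.
    stages = {
        "UPS (AC-DC-AC)":     0.94,
        "PDU/transformer":    0.98,
        "Server PSU (AC-DC)": 0.92,
        "Board VRMs (DC-DC)": 0.90,
    }

    delivered = 1.0
    for name, efficiency in stages.items():
        delivered *= efficiency

    print(f"Power reaching the silicon: {delivered:.1%} of what entered the room")
    print(f"Lost to conversion alone:   {1 - delivered:.1%}")

With these example figures, nearly a quarter of the power entering the room is gone before the chips see it, and that is before any cooling overhead.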
The fourth lever looks beyond hardware into the application layer. Programming languages vary widely in how much compute and memory they demand. Frey highlights languages like Rust and C++ as notably more efficient, in part because they’re compiled rather than interpreted, while higher-level languages like Python, though popular, can be significantly more resource-intensive.
“We’re not saying rewrite everything in Rust,” he clarified. “But if you’re building something new, or refreshing an app anyway, choosing a more efficient language can have a real impact. It’s not just about sustainability — it’s performance too.”
He also cited the Green Software Foundation’s Software Carbon Intensity (SCI) metric as a useful tool for organisations looking to quantify software emissions.
Lastly, Frey warns against what he calls ‘data hoarding’ — the tendency to collect and retain vast amounts of information without a clear purpose or strategy.
“A lot of companies store everything users ask for, just in case,” he said. “But most of that data never gets touched. It eats up storage, draws power, and complicates compliance.”
Instead, HPE encourages organisations to create explicit data strategies: deciding what to collect, how often, how long to retain it, and what format it actually needs to live in. For rarely accessed archives, even something as simple as offloading to tape, which consumes no active energy, can bring significant savings.
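The arithmetic behind that last point is simple. In the rough sketch below, the watts-per-terabyte figure for disk is an illustrative assumption, not a vendor specification:

    # Rough comparison of archival storage energy. The watts-per-terabyte
    # figure for disk is an illustrative assumption, not a vendor spec.
    DISK_W_PER_TB = 0.8   # assumed: always-spinning HDD array, incl. overhead
    archive_tb = 500      # hypothetical cold archive
    years = 5
    hours = years * 365 * 24

    disk_kwh = DISK_W_PER_TB * archive_tb / 1000 * hours
    print(f"{archive_tb} TB on spinning disk for {years} years: ~{disk_kwh:,.0f} kWh")
    print("The same archive on shelved tape draws no power at rest: ~0 kWh")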

Of the five levers, software efficiency is perhaps the most underestimated. Yet according to Frey, it is also among the most powerful.
“People tend to focus on hardware when they think about sustainability,” he said. “But inefficient software wastes compute, energy, and storage just as easily.”
That inefficiency shows up in a number of ways: bloated applications, mismatched workloads, and a lack of visibility into what's actually running.
Frey recounted that many enterprise environments still host applications no one uses, simply because no one’s sure if they’re safe to turn off.
“There’s usually a graveyard of orphaned apps running in the background, taking up resources,” he said. “We work with customers to catalogue what’s still in use, what can be rehosted, retired, or rewritten. It’s basic blocking and tackling, but it adds up fast.”
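As a toy illustration of that cataloguing step, the sketch below flags anything untouched for a year as a retirement candidate; the inventory, application names and fields are all hypothetical:

    # Hypothetical triage pass over an application inventory: anything
    # untouched for a year is flagged for retirement review.
    from datetime import date, timedelta

    inventory = [
        {"app": "payroll-v2",   "last_access": date(2025, 6, 1)},
        {"app": "legacy-crm",   "last_access": date(2022, 3, 9)},
        {"app": "report-batch", "last_access": date(2023, 1, 15)},
    ]

    cutoff = date.today() - timedelta(days=365)
    for item in inventory:
        verdict = ("keep / assess for rehosting"
                   if item["last_access"] >= cutoff
                   else "candidate to retire")
        print(f"{item['app']:<14} last used {item['last_access']}  ->  {verdict}")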
Then there’s the matter of the code itself. Academic studies, Frey noted, have shown major energy differences between programming languages.
Compiled languages like Rust and C++ often come out ahead, using less memory and drawing less power than interpreted languages like Python, or managed-runtime languages like Java.
“If you rewrote all global applications in a more efficient language like Rust, some studies suggest it could reduce total compute energy use by as much as 50%. Even if that’s optimistic, the magnitude is clear.”
While he doesn’t advocate ripping and replacing legacy applications, Frey does advise setting company-wide language defaults going forward, and educating developers on the sustainability impact of their choices.
“Start by picking a more efficient language as your default for new projects,” he said. “Then, when it’s time to refactor or replace a system, you’re already headed in the right direction.”
To make that impact measurable, HPE also recommends that customers formally adopt the Software Carbon Intensity (SCI) score mentioned earlier, the Green Software Foundation's metric for tracking and comparing software energy performance.
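The specification defines the score as energy consumed (E) multiplied by grid carbon intensity (I), plus embodied emissions (M), divided by a functional unit (R). A minimal sketch with made-up inputs:

    # Software Carbon Intensity, per the Green Software Foundation spec:
    #   SCI = ((E * I) + M) per R
    # E: energy consumed (kWh), I: grid carbon intensity (gCO2e/kWh),
    # M: embodied emissions attributed to the software (gCO2e),
    # R: the functional unit (e.g. API requests served).
    # All input values below are made up for illustration.

    def sci(energy_kwh: float, intensity_g_per_kwh: float,
            embodied_g: float, functional_units: float) -> float:
        return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

    score = sci(energy_kwh=12.0, intensity_g_per_kwh=400.0,
                embodied_g=1500.0, functional_units=1_000_000)
    print(f"SCI: {score * 1000:.2f} mgCO2e per request")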
While picking the right server or software can unlock sustainability gains, Frey also pointed to visibility as a limiting factor, particularly in mixed or hybrid environments.
“You can’t optimise what you can’t measure,” he said. “And right now, the data isn’t always where it needs to be.”
Many organisations still rely on tools that report power use at the device level rather than the application level. That makes it difficult to see which workloads are driving consumption, especially in virtualised or containerised environments, and it ultimately limits the scope for software-level optimisation.
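Application-level attribution is becoming feasible, though. As one minimal sketch (Intel hardware on Linux only, and typically requiring elevated privileges), the RAPL energy counter can be read before and after a workload; the workload below is a stand-in:

    # Minimal sketch, Intel/Linux only: attribute energy to a block of code
    # by reading the RAPL package counter (microjoules) before and after.
    # Assumes the powercap interface is exposed; usually needs root.
    from pathlib import Path

    RAPL = Path("/sys/class/powercap/intel-rapl:0/energy_uj")  # package 0

    def read_uj() -> int:
        return int(RAPL.read_text())

    before = read_uj()
    sum(i * i for i in range(10_000_000))  # stand-in for the workload under test
    after = read_uj()

    # Note: the counter wraps around; production tooling must handle rollover.
    print(f"Package energy for this workload: {(after - before) / 1e6:.2f} J")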
The lack of standardised, cross-platform monitoring compounds the issue. Public cloud, on-premises, edge, and colocation setups often use siloed reporting systems, while tools are frequently optimised for specific hardware vendors.
To move forward, Frey said, organisations need integrated solutions that bring together IT analytics with building management systems (BMS), DCIM platforms, and heat reuse data, creating a single view of consumption and impact.
“We’re encouraging customers to go beyond compliance reporting and build real-time, estate-wide insight,” he added. “That’s where the next layer of savings will come from.”
Frey’s closing message ran through the entire conversation: efficiency can’t be bolted on later. Whether it’s software design, data strategy, workload visibility or the physical infrastructure itself, sustainable outcomes depend on choices made early and made deliberately.
“We already know what works,” he said. “The opportunity now is to design with it from day one, not try to retrofit it after the fact.”
“AI gives us a new starting point. Let’s not waste it.”