At Datacloud Middle East, a diverse panel of operators, engineers and technology vendors convened to discuss a critical issue for the data centre sector: how to manage the cooling demands of next-generation, high-density infrastructure while maintaining a strong focus on sustainability.
The session, chaired by Dan Thompson of S&P Global, brought together industry experts including Fouad Ibrahim of Khazna Data Centres, Philip Todd of BSE 3D, Ayan Mitra of CoolIT Systems, Joe Thomas of STULZ, and Albert Puig of Submer. Early in the discussion, it became apparent that cooling has evolved far beyond a purely technical or engineering concern. Today, it sits at the intersection of multiple, often competing priorities, including power consumption, water use, carbon emissions, economic viability, and the rapidly changing requirements of chip design.
As data centres strive to accommodate increasing rack densities and new processor technologies, the challenge lies not only in delivering effective cooling, but also in doing so sustainably, without creating stranded assets or compromising on environmental goals.
Thompson opened with a challenge that framed the conversation: the near impossibility of building a data centre today for processors that have not yet been released. Rack densities have climbed from 40–50kW to 130kW, then 300kW, and are now being discussed in megawatt terms.
Ibrahim described it as the “100 million dollar question that everybody’s looking at and trying to find an answer”.
“We design, we build and operate always,” he said. “Looking at the future is something that’s very important. We’re building now for something that’s going to come in the future.”
For operators such as Khazna, the issue is not simply cooling capacity but ensuring that the fundamentals of a site do not become stranded assets. “When you are designing something, you need to make sure that your main upstream infrastructure is not going to be forced to change. Otherwise you cannot be flexible to accommodate changes.”
One constraint that is increasingly critical is structural loading. As densities increase, so does rack weight. “It’s not only the power and the cooling,” Ibrahim warned. “It’s the physical weight of the rack that’s changing dramatically that you cannot avoid in the future. You cannot go and change your structural loading.”
That emphasis on fundamentals was echoed by Philip Todd, who noted that the thermal envelope itself is evolving. New processors, including NVIDIA’s latest generations, are designed to tolerate higher inlet temperatures.
“The flow temperature to those is actually allowed to rise,” Todd explained. “That’s allowing us to be much more efficient with how we’re cooling some of those loads.”
Higher temperatures potentially unlock free cooling opportunities and reduce chiller dependency, but they also expose the limits of conventional planning. “What you design for today is very much going to change in a couple of years’ time,” he said. “They want more capacity, more density.”
Albert Puig offered a reframing of the concept of future-proofing. Instead of oversizing infrastructure for speculative chip roadmaps that frequently change, he suggested an alternative: economic-proofing.
“If you want to future-proof, that’s what we used to do in the past,” he said. “You have this challenge about the roadmap of the chip vendors and the server vendors that are unpredictable.”
Puig described two approaches. The first is to optimise for rapid deployment and lower operational costs, accepting that hardware refresh cycles are accelerating. “Instead of future-proofing, it’s economically proving that in three, four, five years you are able to get the money done.”
The second approach is modular design that enables phased upgrades without shutting down the entire facility. “Let’s design data centres that are modular in a way that you achieve the fastest deployment today with the technology today, with the lower operational cost.”
Modularity was a recurring theme. Joe Thomas acknowledged that manufacturers have historically benefited from rapid change. “First and foremost, anyone in the cooling industry will be happy if things change as quickly, because you can sell more kit,” he joked. But he stressed that this is no longer a viable strategy.
“We have to think about future-proofing. Modular – yes, it’s a good approach to think about it. It’s about the feasibility and all the locations that are there.”
As densities rise, so too does the complexity of choosing between air, water and hybrid systems. In the Middle East, the debate is particularly acute given water scarcity and high temperatures.
Todd argued that water should not be dismissed out of hand. “We should consider water in a great many places on the globe,” he said. “There’s lots of areas where we have an abundance of water, and it is certainly a very efficient way to cool data centres.”
However, regional factors complicate the equation. “In terms of the Middle Eastern market, there’s other challenges that are also very influential in selecting what type of system to use – the local climate conditions, sandstorms, etc.”
Adiabatic systems can perform well, but operators must weigh electricity consumption against water use. “The cost of water and to drain that water, and the precious resource it is in this region, has to be balanced against the electricity consumption that you’re using to cool your data centres.”
Puig reframed the debate as one of thermal equilibrium. “Everything is about a thermal equilibrium, and that maps to energy and water consumption,” he said.
Instead of seeking a universal answer, he advocated designing around workload and location. “Which type of technology are you deploying? It’s not the same [for] AI [as for] cloud computing. Not even in AI, inferencing of some vendors are the same as other vendors.”
If facilities can operate at 48°C or 50°C water temperatures, the site’s ambient climate becomes less restrictive. “If you can have a facility with a system of 48, 50 Celsius cold water, you almost don’t care where you are located.”
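The logic here can be illustrated with a minimal sketch, using assumed figures rather than anything cited by the panel: a dry cooler can reject heat without chillers whenever the ambient air is sufficiently below the required supply water temperature, so warmer loops widen the range of viable climates. The approach temperature and ambient values below are illustrative assumptions only.

```python
# Minimal sketch (assumed parameters) of Puig's point: the warmer the facility
# water loop, the more ambient conditions a dry cooler alone can handle.

DRY_COOLER_APPROACH_K = 6.0   # assumed approach between ambient air and supply water


def dry_cooler_sufficient(ambient_c: float, supply_water_c: float,
                          approach_k: float = DRY_COOLER_APPROACH_K) -> bool:
    """True if heat can be rejected without chillers at this ambient temperature."""
    return ambient_c + approach_k <= supply_water_c


# Illustrative ambient temperatures (assumed, not site data).
for ambient in (30.0, 38.0, 45.0):
    for supply in (32.0, 48.0):
        ok = dry_cooler_sufficient(ambient, supply)
        print(f"ambient {ambient:.0f}°C, supply {supply:.0f}°C -> "
              f"{'free cooling' if ok else 'mechanical cooling needed'}")
```

With a 32°C loop, free cooling disappears at typical Gulf summer temperatures in this sketch; with a 48°C loop it survives across all three assumed ambients, which is the sense in which location becomes less restrictive.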
Yet even as chipmakers signal higher allowable temperatures, operators remain cautious. Ibrahim noted that announcements about 45°C inlet temperatures have yet to fully materialise in practice. “We are still getting lower temperatures on the supply that’s going to be in the future,” he said. “I’m waiting for that future.”
Thomas introduced another dimension to the temperature debate: chip efficiency. “When you increase the temperatures, the operating temperatures of the chips, what about the efficiency of the chipsets?” he asked. “Even though the cooling side is getting more efficient, if you take the total PUE of the data centre, is that going to be still efficient? That’s a factor nobody has calculated yet.”
While higher facility temperatures may reduce mechanical cooling loads, they could impact IT performance or power draw. The lack of transparency around that trade-off leaves operators navigating incomplete data.
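A short, hedged sketch shows why PUE alone does not settle the question Thomas raises. All figures below are assumptions for illustration, not panel data: scenario B runs a warmer water loop that cuts mechanical cooling power but is assumed to add roughly 3% to IT power draw (for example through higher chip leakage).

```python
# Illustrative only: every figure below is an assumption, not reported data.

def total_facility_power(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """Total facility power in kW (IT load + cooling + everything else)."""
    return it_kw + cooling_kw + other_kw


def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power."""
    return total_facility_power(it_kw, cooling_kw, other_kw) / it_kw


# Scenario A: conventional supply temperature (assumed numbers).
it_a, cooling_a, other_a = 1000.0, 300.0, 80.0   # kW
# Scenario B: warmer loop halves-ish the cooling power, but IT power is
# assumed to rise ~3% at the higher operating temperature.
it_b, cooling_b, other_b = 1030.0, 180.0, 80.0   # kW

print(f"A: PUE={pue(it_a, cooling_a, other_a):.2f}, "
      f"total={total_facility_power(it_a, cooling_a, other_a):.0f} kW")
print(f"B: PUE={pue(it_b, cooling_b, other_b):.2f}, "
      f"total={total_facility_power(it_b, cooling_b, other_b):.0f} kW")
```

In this sketch the PUE improves in scenario B partly because the denominator (IT power) grew; whether the warmer chips still deliver the same work per watt is exactly the unquantified factor Thomas points to.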
On the question of whether liquid cooling is inevitable, the panel broadly agreed that physics is pushing the industry in that direction. Puig explained that heat flux, not just total kilowatts, is the decisive factor.
“It’s about how much heat can you take out with air compared to how much heat you can take out with water or other liquids,” he said. “The physical capacity of how fast can I take out the heat and how much heat I can store – that’s the key thing.”
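That heat-flux argument can be made concrete with a back-of-the-envelope comparison based on the standard relation Q = ṁ·cp·ΔT. The load and temperature rise below are assumptions chosen only to illustrate the ratio.

```python
# Illustrative physics sketch (assumed conditions): coolant flow needed to
# remove 100 kW with air versus water, using Q = m_dot * c_p * delta_T.

HEAT_LOAD_KW = 100.0   # heat to remove (assumed row-scale load)
DELTA_T_K = 10.0       # coolant temperature rise across the IT load (assumed)

# Approximate fluid properties near typical operating conditions.
AIR_CP = 1.005      # kJ/(kg*K)
AIR_RHO = 1.2       # kg/m^3
WATER_CP = 4.18     # kJ/(kg*K)
WATER_RHO = 998.0   # kg/m^3


def mass_flow_kg_s(q_kw: float, cp: float, dt: float) -> float:
    """Mass flow (kg/s) required to carry q_kw at the given cp and delta-T."""
    return q_kw / (cp * dt)


air_kg_s = mass_flow_kg_s(HEAT_LOAD_KW, AIR_CP, DELTA_T_K)
water_kg_s = mass_flow_kg_s(HEAT_LOAD_KW, WATER_CP, DELTA_T_K)

print(f"Air:   {air_kg_s:.1f} kg/s ≈ {air_kg_s / AIR_RHO:.2f} m³/s")
print(f"Water: {water_kg_s:.1f} kg/s ≈ {water_kg_s / WATER_RHO * 1000:.1f} L/s")
```

Under these assumptions, water’s far higher specific heat and density mean it needs roughly three orders of magnitude less volumetric flow than air for the same load, which is why rising heat flux per chip pushes the industry towards liquid.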
However, he cautioned against assuming that all workloads require direct-to-chip or immersion solutions. “Not all the chips will need liquid cooling to be efficient,” he said. “If the chip does not need that type of technology, why would you use water that is dangerous?”
Ibrahim confirmed that hyperscale customers are not dictating specific cooling technologies, but they are driving outcomes through temperature and load requirements. “They will never push you to use a certain type of cooling, but it’s very clear what temperatures they’re going to be giving you and how their planning is to load your facility.”
In practice, this results in hybrid deployments. “The mix is a must,” he said. “You cannot say it’s 100% this way or it’s 100% that way.”
Even in AI clusters, networking and storage often remain air-cooled. “There is always an ask for a mix,” Ibrahim said. “Liquid is moving… but mostly it’s going towards the liquid.”
Sustainability considerations extend well beyond cooling plant selection. For Ibrahim, it begins with site selection. “It’s not any more normal site selection,” he said. “You’re looking at PUE and WUE. What type of source of water can be available? We’re looking at TSE [treated sewage effluent] instead of potable water.”
Renewable integration is also central. “When we started the first data centre in Abu Dhabi in Masdar, we had seven megawatt of solar plant being there. Now we are going with more and more solar to be built.”
Ultimately, sustainability is inseparable from measurement. Khazna is developing centralised command and control capabilities to monitor performance across campuses. “You cannot enhance, you cannot be more successful… if you don’t have data and information,” Ibrahim said. “We’re going to be using the AI itself to learn and be more efficient and go towards better sustainability.”
The panel concluded with a consensus on one point: cooling strategy must be adaptable. As chip densities climb and environmental scrutiny intensifies, operators must balance energy, water and carbon in increasingly complex ways.
The next five years, as Thompson suggested, are likely to be as transformative as the last. Whether through higher operating temperatures, modular deployments, hybrid cooling or liquid dominance, the sustainability–cooling nexus will remain one of the defining challenges of modern data centre design.