Fast-track to scale: designing data centres for high-density growth

10 February 2026
6 minutes

As rack densities soar beyond 100kW, industry leaders reveal what engineering, operational and strategic shifts are transforming data centre development in the Middle East.

Speakers:

  • Adam Gibson, director, techvox
  • Muhammad Naveed, senior vice president, topology & global tier authority, Uptime Institute
  • Alistair Davis, director, Black and White Engineering
  • Greg Parker, senior managing director, construction, projects & assets (CP&A), FTI Consulting
  • Magide Sebtioui, regional head, sales, DataVolt
  • Lee Perrin, data centre director, MEA, CBRE

During a panel discussion at Datacloud Middle East 2026, data centre executives discussed the realities of AI data centres and how the industry can confront density challenges.

The main debate: How do you design infrastructure today for the technology demands of the future?

It’s no secret that the demands of AI are impacting the traditional model of data centre design. As AI workloads push rack densities to levels never seen before, the playbook for 2026 is being written in real time.

From construction to cooling: confronting industry challenges

The panel noted that, whilst many facilities still operate comfortably at 10-20kW per rack, AI training clusters are already demanding 100kW and above. In fact, some projections have suggested 600kW racks could arrive within the next few years.

Such rapid acceleration has resulted in developers making billion-dollar infrastructure decisions without clear visibility into what their customers will need.

“Clients come to us and say they want to develop a data centre using the latest specs from the hyperscalers,” explains Alistair Davis, director at Black and White Engineering. “We then design and implement those within the building that will operate, typically in two years’ time, 18 months for construction. We’re looking at that sort of timeline.”

However, the gap between design and deployment creates a fundamental problem, the panellists shared. By the time a facility opens its doors, the technology landscape may have shifted entirely.

“I work with a lot of operators, and their biggest concern right now is predictability,” shared Greg Parker, senior managing director at FTI Consulting.

“We know how to build something in 18 months at this size. We now need to build it in 12 or 10 months, and we want the same predictability. And that’s both the challenge and the opportunity.

“You can’t sit here for 45 minutes deciding on the best option. We need to make a decision and go with it,” he continued.

Likewise, liquid cooling is rapidly becoming essential within the data centre, as traditional air-cooling methods cannot keep pace with rising rack densities.

“At the scale and speed that we want to go to, some form of liquid cooling is the solution,” Parker added. “If you think about liquid cooling in design, you also have to balance that with the speed and ease to market.”

However, the transition isn’t always straightforward, with liquid cooling introducing many new complexities into the data centre.

“Operating a liquid cooling system isn’t just about the air conditioning,” notes Magide Sebtioui, regional head, sales at DataVolt. “Maybe not everybody agrees with this, but it’s like operating a power plant. If you don’t have the talent tomorrow, you might struggle to operate it.”

Meanwhile, the panellists agreed that approaches to liquid cooling vary widely, from direct-to-chip solutions to full immersion cooling.

“Even if it is 70% liquid cooling, 30% air cooling for example, it’s still 36 kilowatts of air cooling,” points out Muhammad Naveed, senior vice president at Uptime Institute. “36 kilowatts of air cooling is a big challenge with perimeter cooling systems.”
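Naveed's figure can be sanity-checked with simple arithmetic. A minimal sketch, assuming a hypothetical 120kW rack (the panel did not state the rack density; 120kW is inferred because 30% of 120kW is 36kW):

```python
# Back-of-envelope check of the residual air-cooling load in a hybrid
# liquid/air deployment. The 120 kW rack density is an assumption
# inferred from the quoted numbers, not a figure from the panel.

def residual_air_load_kw(rack_kw: float, liquid_fraction: float) -> float:
    """Heat (kW) that still has to be removed by air cooling."""
    return round(rack_kw * (1.0 - liquid_fraction), 1)

rack_kw = 120.0          # assumed rack density
liquid_fraction = 0.70   # 70% of heat captured by liquid cooling

print(residual_air_load_kw(rack_kw, liquid_fraction))  # 36.0 kW left for air
```

Even at high liquid-cooling fractions, the air-side residual of a dense rack rivals the entire load of a legacy 10-20kW rack, which is why perimeter cooling systems struggle.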

A question of power and location

Beyond cooling, the panel noted that the overwhelming power requirements of AI workloads are even reshaping site selection. Traditional considerations, including proximity to internet exchange points (IXPs), network latency and land availability, are being overtaken by a single priority: can you secure enough power?

“The size of these facilities they want to build is so big that you can’t just connect to the local grid. You need dedicated power,” said Lee Perrin, data centre director for MEA at CBRE. “This is where the hyperscalers need to commit and they need to commit upfront with solid deployment plans over a five-year period.”

The challenge has created an unusual dynamic, the panellists said. Whilst anyone can theoretically get approval for a gigawatt site, they argued that delivering on that capacity requires commitments and infrastructure that can take years to materialise.

“If a company builds a power plant … for a campus that includes multiple data centres, and they have economies of scale in place, this is going to remove all the effort required to run generators and chillers for individual data centres,” Naveed suggested.

However, without concrete deployment plans from anchor tenants, the panel said utilities can be reluctant to invest in necessary grid upgrades.

Some industry observers have seen these challenges as an opportunity for the Middle East to take a different approach and integrate power generation, cooling infrastructure and data centre facilities from the outset.

“You’ve got to look at every data centre opportunity as its own opportunity,” Parker said. “What does the market here in Dubai need compared to somewhere else? What’s the opportunity? You’ve got to look at what your tolerance level is, what your risk profile is and your level of certainty.”

For training workloads, the panel said this model for data centres could be transformative, as Sebtioui stated: “We are more in an energy infrastructure project than a traditional data centre. We are bringing together wind power, solar power, hydrogen, battery storage and new cooling technologies as well. What’s important is the power.”

What’s next for the data centre industry in the Middle East?

Whilst much of the industry focus centres on new builds optimised for AI workloads, can existing facilities adapt and retrofit to accommodate next-generation technology?

“I think hybrid solutions are where the future is going to be,” Perrin suggested. “They’ve still got their traditional cooling, they’ve added a bit of in-row cooling with a little bit of direct-to-chip cooling and it has to be this combination of a few things, because only that can allow them to stay competitive in the AI era.”

The panellists agreed that unoccupied data halls present the most straightforward retrofit opportunities, whilst occupied spaces require more creative solutions. The key, they argued, is understanding which elements of the infrastructure can be modified and which represent hard constraints.

What’s clear, however, is that the industry will continue to evolve at a pace that challenges traditional infrastructure planning.

“Our world is moving too fast now,” Perrin said. “I don’t think we’ll just have legacy technologies. I think there’s going to be new ways and means of doing things better and faster.”

For now, the panellists said the focus remains on execution: building facilities that can handle today’s demands whilst remaining adaptable enough to accommodate tomorrow’s workloads, whatever they turn out to be.