Capacity spoke with Andrea Ferro, VP power & IT systems EMEA at Vertiv, about how edge data centre design is adapting to meet the demands of AI workloads, and how carriers can future-proof their networks.
Ferro highlights the fundamental shift AI is bringing to infrastructure planning. “We are seeing a fundamental shift in expectations driven by the evolution from traditional connectivity to AI-powered network intelligence,” he explains. “AI systems require a new level of responsiveness and scale, and that is changing how digital infrastructure is planned across the telecommunications ecosystem.”
This transformation is not limited to centralised facilities. Increasingly, compute and storage must be positioned closer to users and endpoints to support applications such as network optimisation, predictive maintenance, and real-time analytics. “It is no longer just about centralised capacity,” Ferro says.
“From a technical perspective, we’re moving from traditional rack densities of 5-10kW to AI-ready configurations of 30-80kW or more. This transformation is driving the need for integrated solutions like our Vertiv 360AI reference architectures, which combine power, cooling, and monitoring in pre-engineered systems specifically designed for these demanding workloads.”
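To put that density shift in perspective, a rough back-of-envelope sketch shows how a site's power budget changes when racks move from the legacy range to AI-ready configurations. The rack count and mid-range densities below are illustrative assumptions, not figures from the article:

```python
# Rough sketch of how the density shift changes a site's power budget.
# Rack count and per-rack densities are illustrative assumptions only.
def site_power_kw(racks: int, kw_per_rack: float) -> float:
    """Total IT power draw for a site with uniform rack density."""
    return racks * kw_per_rack

legacy = site_power_kw(racks=20, kw_per_rack=7.5)   # mid-range of 5-10kW
ai_ready = site_power_kw(racks=20, kw_per_rack=50)  # mid-range of 30-80kW
print(f"legacy: {legacy:.0f}kW, AI-ready: {ai_ready:.0f}kW, "
      f"ratio: {ai_ready / legacy:.1f}x")
```

Even at the midpoints of the quoted ranges, the same rack footprint draws several times more power, which is why pre-engineered power and cooling become the constraint rather than floor space.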
For carriers, these developments have significant practical implications. “Edge deployments are becoming mission-critical infrastructure, not just supplementary capacity,” Ferro notes. Use cases such as 5G network slicing, low-latency content delivery, autonomous network management, and AI-powered fraud detection depend on processing massive data streams locally.
Supporting distributed AI workloads without compromising the 99.999% uptime expectations of telecom networks is a major challenge. Vertiv’s PowerDirect Rack solutions integrate 33kW DC power shelves directly into rack architecture, removing traditional power distribution bottlenecks while supporting the high densities AI workloads require.
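The "five nines" figure translates into a very small annual downtime budget, which is what makes distributed AI deployments so unforgiving. A quick illustrative calculation (standard availability arithmetic, not from the article):

```python
# Back-of-envelope check on what an availability target allows per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

def allowed_downtime_minutes(availability: float) -> float:
    """Annual downtime budget, in minutes, for a given availability fraction."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, avail in [("99.9%", 0.999), ("99.99%", 0.9999), ("99.999%", 0.99999)]:
    print(f"{label}: {allowed_downtime_minutes(avail):.1f} min/year")
```

At 99.999%, the budget is roughly five minutes a year across the whole distributed footprint, so every added edge site must meet the same bar as the core network.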
“Network operators are essentially building distributed AI factories within their infrastructure footprint,” Ferro adds.
Not all edge deployments are alike. “The key is aligning infrastructure with both the telecommunications use case and the AI workload characteristics,” Ferro explains. Edge sites range from small cell towers to regional points of presence (PoPs) serving entire metropolitan areas. Despite this diversity, core principles remain consistent: carrier-grade reliability, thermal efficiency, and scalability.
Vertiv’s 360AI reference designs provide configurations from 88kW to 115kW, allowing carriers to select densities appropriate for each site while maintaining standardisation. “Modularity is crucial because telecommunications networks often require rapid deployment and consistent performance across diverse geographic locations,” Ferro says.
AI workloads differ from traditional telecom applications in intensity and variability. “AI-ready servers – whether supporting network optimisation, real-time traffic analysis, or autonomous network management – draw significantly more power and produce more heat than legacy network equipment,” Ferro notes. Rack densities have risen from 5-15kW to 80kW or higher, necessitating advanced cooling systems such as Vertiv’s CoolChip CDU series.
Another complexity is that AI workloads often coexist with traditional network functions, which have different power and cooling profiles. “This requires intelligent power management that can handle diverse loads while maintaining the isolation and redundancy that carrier networks require,” Ferro explains. Structured cabling also becomes critical, as edge sites must support both high-bandwidth AI processing and legacy connectivity.
Standardisation and modularity are key to scaling telecommunications infrastructure. Vertiv works with carriers to create repeatable design standards that can adapt to different deployment scenarios while maintaining operational consistency. Pre-integrated power and cooling blocks, standardised cabling, and flexible layouts reduce deployment time by up to 50% compared with custom integrations.
“Our partnership with the Open Compute Project (OCP) helps establish these standards, meaning that telecommunications operators can leverage industry best practices while meeting their specific reliability and performance requirements,” Ferro notes. This approach ensures consistent operations for carriers managing networks across multiple countries, while allowing local adaptations for regulatory and environmental considerations.
Ferro emphasises that infrastructure requirements vary by region. Energy availability, climate, and regulatory frameworks all influence how AI-enabled edge facilities are designed. In Europe, for instance, limited power capacity or strict energy regulations make high-efficiency solutions such as PowerDirect Racks especially valuable. Extreme climates also create thermal management challenges, while regulatory mandates affect data distribution and environmental compliance.
Sustainability is increasingly important. “There’s growing pressure from both regulators and corporate customers to demonstrate measurable environmental improvements,” Ferro explains.
Operators are exploring innovative approaches such as waste heat reuse, intelligent power management, and liquid cooling systems that improve PUE while enabling high-density deployments. Advanced cooling solutions also support circular economy practices such as heat recovery and water recycling, minimising environmental impact without compromising network performance.
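PUE (power usage effectiveness) is the ratio of total facility power to IT power, so a lower figure means less overhead spent on cooling and distribution. A minimal sketch with assumed, illustrative load figures (not Vertiv data) shows why liquid cooling moves the number:

```python
# Illustrative PUE arithmetic. All load figures are assumptions for the
# example, not measured or vendor data. PUE = total facility power / IT
# power; 1.0 is the theoretical floor.
def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """Power usage effectiveness for a site with the given load breakdown."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Hypothetical 100kW-IT edge site under two cooling regimes.
air_cooled = pue(it_kw=100, cooling_kw=45, other_kw=10)     # 1.55
liquid_cooled = pue(it_kw=100, cooling_kw=15, other_kw=10)  # 1.25
print(f"air-cooled: {air_cooled:.2f}, liquid-cooled: {liquid_cooled:.2f}")
```

Because the cooling term sits entirely in the numerator's overhead, cutting cooling power directly improves PUE even as rack densities climb.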
AI deployments can be retrofitted into existing telecommunications infrastructure, but careful assessment is required. “Traditional telecom facilities weren’t designed for rack loads of 80+ kW or sophisticated cooling systems,” Ferro notes.
Vertiv’s 360AI reference designs and high-density solutions from acquisitions such as Great Lakes provide retrofit-friendly options that maintain existing network functions while adding AI capabilities. A combined approach – retrofitting hub sites while deploying purpose-built edge capacity – is often optimal.
OCP standards are increasingly important for AI infrastructure. “OCP provides standardised architectures that can reduce deployment costs and complexity while improving interoperability across different vendor ecosystems,” Ferro explains. Telecommunications operators can leverage hyperscale-inspired designs while maintaining carrier-grade reliability, electromagnetic compatibility, and power distribution requirements.
Ferro offers clear guidance for operators scaling AI at the edge: “Focus on systems integration from day one. Design for today’s network AI applications but architect for tomorrow’s possibilities. Engage with partners who understand both telecommunications requirements and AI infrastructure realities. Standardise where possible, customise where necessary.”
He concludes: “AI at the telecommunications edge isn’t just about adding compute power; it’s about creating intelligent network infrastructure that can adapt, optimise, and evolve with changing demands. The infrastructure decisions you make today will determine how effectively your network can leverage AI innovations for years to come.”