A growing number of senior AI researchers have left leading model developers over the past year, publicly warning that commercial pressures risk outpacing safety work, a development with implications that extend well beyond Silicon Valley.
High-profile departures from companies including OpenAI have brought internal tensions into sharper focus. In 2024, co-founder and chief scientist Ilya Sutskever exited the company, followed by alignment lead Jan Leike, who said he believed safety culture and processes had taken a back seat to product development. Both had been closely associated with long-term AI safety research.
While staff turnover is common in fast-growing technology firms, the public nature of these exits and the language used to explain them have been notable. Departing researchers have warned that competitive pressure to release increasingly capable models is compressing timelines for evaluation, red-teaming and governance. Several have called for stronger oversight and greater investment in alignment research.
Rival developer Anthropic has positioned itself as a safety-focused alternative in the frontier model race. Yet even there, the same structural forces apply: escalating compute costs, investor expectations and the race to secure enterprise customers.
For the digital infrastructure sector, these debates are far from abstract.
Generative AI systems rely on vast quantities of compute, high-capacity fibre networks and increasingly dense data centre clusters. Hyperscale campuses are expanding across Europe, North America and the Middle East to support training and inference workloads. Subsea cable investment is accelerating to handle east-west traffic flows between AI hubs, while edge deployments are being explored to reduce latency for enterprise applications.
In short, AI’s commercial trajectory is directly shaping demand for physical infrastructure.
If safety concerns slow model releases or prompt tighter regulation, infrastructure build-outs could face additional scrutiny. Conversely, if competitive dynamics continue to favour rapid scaling, demand for power, rack space and long-haul connectivity is likely to intensify.
Regulators are already paying closer attention. The EU AI Act introduces obligations around high-risk systems, while UK authorities have signalled a pro-innovation but safety-aware approach.
In the United States, policymakers continue to debate federal standards for frontier models. Greater transparency requirements could influence how developers disclose training methods, compute thresholds, and risk mitigation processes, all factors relevant to long-term infrastructure planning.
Agent autonomy enters public view
Against this backdrop, a new development has drawn attention to how AI systems may evolve beyond tightly controlled environments. Moltbook, an AI-only social network launched in 2026 by US entrepreneur Matt Schlicht, allows AI agents to post and interact with one another without direct human participation. The platform attracted widespread interest after its agents were observed debating topics, sharing information and organising into thematic groups.
Although such behaviour does not demonstrate autonomy or self-awareness, it has intensified discussion around agent-to-agent interaction and the potential for unsupervised machine communication at scale. Security analysts have cautioned that poorly governed deployments of AI agents in enterprise settings could introduce new operational and reputational risks.
Experts also warn against overstating the novelty of the phenomenon, noting that much of the apparent “social” behaviour reflects model design and prompting rather than independent machine intention.
Taken together, senior staff departures and emerging agent platforms illustrate the same underlying tension: AI capability is advancing rapidly, while governance frameworks, safety processes and commercial incentives remain in flux.