Cisco has unveiled a new chip, the Cisco Silicon One G300 (G300), designed to power and scale AI data centres for the agentic AI era.
The technology giant said its Cisco AI Networking innovation is designed to address the next phase of AI buildouts, with the G300 chip able to power gigawatt-scale AI clusters for training, inference and real-time agentic workloads. It can do this, Cisco said, while also maximising GPU utilisation with a 28% improvement in job completion time.
Built to power new Cisco N9000 and Cisco 8000 systems, the chip is intended, Cisco hopes, to push the frontier of AI networking within the data centre.
“We are spearheading performance, manageability and security in AI networking by innovating across the full stack – from silicon to systems and software,” said Jeetu Patel, president and chief product officer at Cisco. “We’re building the foundation for the future of infrastructure, supporting every type of customer – from hyperscalers to enterprises – as they shift to AI-powered workloads.”
Expected to go on sale in the second half of 2026, the new chip is designed to power massive, distributed AI clusters with high performance, security and reliability. According to Reuters, the chip will be made with Taiwan Semiconductor Manufacturing Company’s (TSMC) chipmaking technology.
The system will also feature innovative liquid cooling and support high-density optics to achieve new efficiency benchmarks. This should ensure customers get the most out of their GPU investments, Cisco said, in addition to harnessing Nexus One to remove the complexity that prevents companies from scaling AI data centres.
“As AI training and inference continues to scale, data movement is the key to efficient AI compute; the network becomes part of the compute itself. It’s not just about faster GPUs – the network must deliver scalable bandwidth and reliable, congestion-free data movement,” said Martin Lund, executive vice president of Cisco’s Common Hardware Group.
“Cisco Silicon One G300, powering our new Cisco N9000 and Cisco 8000 systems, delivers high-performance, programmable and deterministic networking – enabling every customer to fully utilise their compute and scale AI securely and reliably in production.”
Cisco Silicon One is fully programmable and underpins a complete portfolio of networking devices across AI, hyperscaler, data centre, enterprise and service provider use cases.
Cisco expects the G300 chip to help some AI computing jobs complete 28% faster, partly by automatically re-routing data around network problems within microseconds, Reuters said.
“Organisations need greater flexibility in where and how they run AI workloads,” the company said in its press release. “To address the diverse requirements of these environments, Cisco is advancing Nexus One with a unified management plane that brings together silicon, systems, optics, software and programmable intelligence as a single integrated solution.”
Nexus One provides networking across a range of environments, delivering data centre use cases across two fabric technologies.
AI networking has become increasingly competitive, and Cisco's latest systems are positioned to rival the likes of Broadcom and Nvidia. Speaking at the Cisco AI Summit last week, Cisco CEO Chuck Robbins said the industry was undergoing the largest AI transition it had ever seen.
“Those of us who embrace AI will ultimately be the winners,” he said. “We all know this moves fast and none of us can do it alone.”
He added: “We’re really seeing the enterprise start to pick up. We’re doing this through partnerships with Nvidia, AMD, OpenAI, and many others.”