Cisco Silicon One combines uniform chip design with specific deployments

Cisco recently announced the Silicon One G300, a 102.4 terabit networking chip that represents one of the most advanced switching solutions available for AI data center infrastructure. During Cisco Live EMEA, we had the chance to dive a bit deeper into this new development and what it means for the industry with Martin Lund, Executive Vice President of Cisco’s Common Hardware Group. We didn’t only talk about how Cisco built this enormous piece of silicon, but spent a good amount of time talking about what’s next too, including silicon photonics.

The G300 is the latest addition to the Silicon One portfolio. It’s a 3-nanometer chip built by TSMC that delivers the switching capacity needed to connect large-scale GPU clusters. With the G300, Cisco joins an exclusive group of only three companies worldwide capable of producing networking silicon at this performance level, alongside Nvidia and Broadcom, according to Lund.

The Common Hardware Group consolidates all silicon development, hardware engineering, and silicon photonics research across Cisco’s product lines. This unified approach has the benefit of enabling Cisco to build complete vertical solutions spanning from campus Wi-Fi access points and video cameras to Catalyst switches, core routers, and hyperscale AI infrastructure switches.

Cisco Silicon One G300

The G300 chip powers 64 ports of 1.6 terabit Ethernet, for a total switching capacity of 102.4 terabits per second. That doubles the capacity of its predecessor, the G200, from just a couple of years ago. To put this into perspective: the chip offers 10,000 times more bandwidth than the 10 gigabit Ethernet standard introduced almost 25 years ago.
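As a quick sanity check, the headline figures are simple arithmetic; the sketch below merely restates the article's numbers:

```python
# Back-of-the-envelope check of the capacity figures quoted above.
# All numbers simply restate the article; nothing here is a measurement.

ports = 64
port_speed_gbps = 1600                   # 1.6 terabit Ethernet per port

total_gbps = ports * port_speed_gbps
print(total_gbps)                        # 102400 Gb/s = 102.4 Tb/s

# Versus the 10 Gb/s standard from roughly 25 years ago:
print(total_gbps // 10)                  # 10240 -> the "10,000 times" figure
```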

Built on TSMC’s 3-nanometer process technology, the G300 pushes the physical limits of current semiconductor manufacturing. According to Lund, manufacturers cannot currently build chips significantly larger than this device due to fabrication constraints. The extreme performance comes with substantial thermal challenges: the chip generates so much heat that liquid cooling is required for deployment. That in itself is noteworthy, because it means switching now enters the realm of liquid cooling as well.

Programmable architecture for AI workload optimization

What distinguishes Silicon One from competing solutions is its programmable architecture, Lund tells us. The chip can be programmed to perform exactly the tasks it is supposed to perform. More importantly, though, the G300 can be reconfigured after deployment to adapt to changing network requirements. This capability proves particularly valuable for AI infrastructure, Lund argues, where workload patterns and protocols continue to evolve rapidly.

The programmability allows network operators to modify chip behavior, implement new protocols, and adjust load balancing architectures without replacing hardware. This extends device lifetime and enables continuous optimization as AI technologies mature. For large AI factories that might deploy 100,000 of these chips, or smaller installations with hundreds of units, this flexibility represents significant operational and financial advantages.
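The article doesn't describe Silicon One's actual SDK, so the following is purely a conceptual sketch with hypothetical names. It illustrates what post-deployment reprogrammability means in principle: forwarding behavior is a replaceable policy rather than fixed silicon, shown here as a load-balancing policy swapped at runtime.

```python
# Conceptual sketch only: every name here is hypothetical, not the real
# Silicon One SDK. It illustrates the idea that forwarding behavior is a
# replaceable policy rather than fixed hardware logic.
import zlib

def ecmp_hash(flow, n_paths, load):
    """Classic ECMP: hash the flow tuple onto one of n equal-cost paths."""
    return zlib.crc32(repr(flow).encode()) % n_paths

def least_loaded(flow, n_paths, load):
    """Alternative policy: steer new flows toward the least-loaded path."""
    return min(range(n_paths), key=lambda i: load[i])

class ProgrammableSwitch:
    def __init__(self, n_paths):
        self.n_paths = n_paths
        self.load = [0] * n_paths
        self.policy = ecmp_hash          # behavior installed at deployment...

    def reprogram(self, policy):
        self.policy = policy             # ...and replaced without new hardware

    def forward(self, flow):
        path = self.policy(flow, self.n_paths, self.load)
        self.load[path] += 1
        return path

sw = ProgrammableSwitch(n_paths=4)
sw.forward(("10.0.0.1", "10.0.0.2", 6, 1234, 80))   # hashed ECMP choice
sw.reprogram(least_loaded)                           # field reconfiguration
sw.forward(("10.0.0.3", "10.0.0.4", 6, 4321, 80))   # now least-loaded
```

The operational point is the `reprogram` call: the device keeps forwarding while its behavior changes, which is what extends hardware lifetime as AI protocols evolve.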

Ethernet wins the AI networking standard debate

The networking industry has witnessed an extended debate between Ethernet and InfiniBand technologies for AI infrastructure connectivity. According to Lund, this question has been definitively settled in favor of Ethernet, particularly following the Ultra Ethernet Consortium’s formation and Nvidia’s public endorsement of Ethernet technology despite the company’s ownership of InfiniBand through its Mellanox acquisition.

InfiniBand served effectively for narrow use cases requiring low latency and high performance. However, the technology has some serious scalability limitations. Most notably, InfiniBand’s addressing space supports only around 65,000 nodes, according to Lund. That sounds like a lot, but it is inadequate for AI compute clusters expanding to hundreds of thousands or even millions of compute nodes. Ethernet provides the addressing capacity, interoperability, and ecosystem support required for these massive deployments.
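The scale gap Lund describes is easy to quantify. The 16-bit versus 48-bit framing below is our gloss (InfiniBand addresses nodes within a subnet via 16-bit local identifiers, while Ethernet uses 48-bit MAC addresses), not something Lund spelled out:

```python
# Rough address-space comparison behind the "only 65,000 nodes" remark.
# The 16-bit vs. 48-bit framing is our addition, not from the interview.

infiniband_lids = 2 ** 16       # 16-bit local identifiers per subnet
ethernet_macs = 2 ** 48         # 48-bit Ethernet MAC addresses

print(infiniband_lids)          # 65536 -> "only 65,000 nodes"
print(ethernet_macs)            # 281474976710656
```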

The shift to Ethernet as the universal standard enables disaggregated AI compute architectures, in which different types of processors and accelerators connect over a common network fabric. This flexibility will prove increasingly important as the AI hardware landscape diversifies beyond current GPU-centric designs, Lund argues.

Deployment targets and customer adoption

The initial deployment focus for the G300 is on AI data centers interconnecting very large GPU clusters. These include both massive AI factories and smaller enterprise AI installations. Lund also notes that five of the six major hyperscalers have already adopted Cisco’s Silicon One technology. Efforts are underway to secure the sixth too.

The rise of neocloud providers and sovereign cloud initiatives expands the addressable market beyond traditional hyperscalers. Enterprises are also beginning to deploy dedicated AI factories that require the switching capacity the G300 provides. These deployments have GPU counts in the thousands rather than hundreds of thousands, but the G300 can still make a difference there, according to Lund.

The G300 sits at the top of Cisco’s five-member family of Silicon One chips. This family (besides the G-series, there are also P-, K-, A- and E-series) spans from campus switching through service provider infrastructure. Technologies developed for the highest-performance chips trickle down across the product line too, Lund says. For example, features like the G300’s 1.6 terabit Ethernet ports will eventually appear in service provider and enterprise equipment as those markets mature.

Silicon photonics and the future of optical networking

Up until this point in the conversation, we talked about what is available now (or very soon, at least). However, there’s also quite a bit of buzz around photonics nowadays. So we asked Lund what he sees happening in that area.

Silicon photonics should be the next major technological transition. The first phase involves co-packaged optics, where optical engines mount very close to networking chips with light signals emanating directly from the substrate. This approach could reduce power consumption by up to 70 percent compared to current electrical-to-optical conversion methods.

The question becomes how deeply optical technology can penetrate into the chips themselves. Current optical switching uses small mirrors to reconfigure network paths, but this approach lacks the speed for packet-by-packet switching. True packet switching in the optical domain remains several years away, Lund expects. He suggests quantum computing might arrive before fully optical switching becomes practical.

Reliability challenges for photonic systems

Photonics faces inherent reliability challenges compared to copper-based electrical systems. Lasers have finite lifespans and introduce additional points of failure. One approach Lund mentions to deal with this involves external pluggable lasers that can be replaced without removing entire switches, somewhat similar to hot-swappable power supplies in modular equipment.

The industry continues working to solve these reliability issues while balancing the power efficiency gains photonics offers. As networking speeds continue to increase, copper transmission distances shrink dramatically. What once worked at 10 meters for earlier speeds might only function at 3 meters when bandwidth doubles, and possibly just 1.5 meters or 1 meter with subsequent doublings. This physical limitation drives the transition to optical connectivity. However, the optimal balance point between copper and photonics continues to evolve with each generation of the technology.
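The trajectory Lund sketches can be laid out directly; note that these are the article's illustrative figures, not measured cable specifications:

```python
# Copper reach vs. successive bandwidth doublings, using the article's own
# illustrative numbers; these are not measured cable specifications.

reach_after_doublings = [10, 3, 1.5, 1]   # meters, per the text

for doublings, meters in enumerate(reach_after_doublings):
    print(f"after {doublings} doubling(s): ~{meters} m of usable copper")
```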

Silicon One is the foundation for a vertical approach

Cisco’s approach mirrors Apple’s vertical integration model. The company designs its own silicon, builds its own hardware, develops its own software and platforms, creates its own management tools, and implements its own security stack.

While companies like Google, Microsoft, and AWS build custom data center chips, they retain those technologies for internal use. Cisco sells its silicon-based solutions to millions of customers worldwide. This requires different design considerations around programmability, lifecycle management, and broad ecosystem compatibility. The Silicon One architecture functions like an instruction set that scales across different optimization points and use cases, from campus networking through hyperscale AI infrastructure.

Also read: Silicon One is the engine under the hood of Cisco’s AI story