Google Rejects CPO: $185B Bet Reshapes AI Infrastructure
Google’s latest infrastructure strategy is not just a product update—it is a decisive architectural shift. Announced alongside a massive $175–$185 billion capital expenditure plan for 2026, the company made one message unmistakably clear: Co-Packaged Optics (CPO) is not part of its near-term future.
Instead, Google is standardizing around Near-Packaged Optics (NPO) and Optical Circuit Switching (OCS), redefining how hyperscale AI systems are interconnected, powered, and scaled.
⚙️ A Single Sentence That Rewrote a Roadmap #
At a major industry event, CEO Sundar Pichai delivered a statement with far-reaching consequences:
“CPO is not our answer right now.”
This was not a tentative evaluation—it was a clear rejection. Given the scale of Google’s infrastructure investments, this effectively redraws the near-term roadmap for optical interconnect technologies across the industry.
🌐 Optical Modules: The Backbone of AI Data Centers #
Modern AI data centers operate like massive distributed systems, where tens of thousands of accelerators continuously exchange data.
Optical modules enable this communication by converting electrical signals into optical signals for transmission over fiber, then back into electrical form at the destination. Over the past decade, bandwidth scaling has progressed from:
- 100G → 400G
- 400G → 800G
- Now moving toward 1.6T and 3.2T
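The practical effect of this scaling is easy to see with a back-of-envelope calculation. The sketch below (illustrative only; the 1 Pb/s fabric target is an assumed figure, not from the article) shows how each module generation shrinks the number of optics needed for a fixed aggregate bandwidth:

```python
# Toy illustration (assumed fabric size, not Google data): how per-module
# bandwidth growth reduces the optics count for a fixed fabric.

GENERATIONS_GBPS = [100, 400, 800, 1600, 3200]  # optical-module generations

def modules_needed(target_tbps: float, module_gbps: int) -> int:
    """Modules required to reach the target aggregate bandwidth (ceiling division)."""
    target_gbps = int(target_tbps * 1000)
    return -(-target_gbps // module_gbps)

# Hypothetical fabric: 1 Pb/s (1000 Tb/s) of aggregate bandwidth.
for gbps in GENERATIONS_GBPS:
    print(f"{gbps:>5}G modules: {modules_needed(1000, gbps):>6} units")
```

Each doubling roughly halves the module count, which is why the 1.6T/3.2T transition concentrates so much supply-chain leverage in whoever sets the component requirements.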
At this transition point, three architectural paths have emerged:
Competing Interconnect Approaches #
- Pluggable optics: mature and flexible, but power-hungry and physically bulky
- Near-Packaged Optics (NPO): places optics close to the ASIC, reducing distance and power
- Co-Packaged Optics (CPO): integrates optics directly with the chip for maximum efficiency
CPO has long been considered the endgame. Google just challenged that assumption.
🔌 NPO + OCS: Google’s Chosen Architecture #
Rather than pursuing CPO, Google is standardizing on a combination of NPO and Optical Circuit Switching (OCS).
What is OCS? #
OCS eliminates electrical switching from the data path. Instead of converting optical signals into electrical form for routing, it uses micro-mirror arrays to redirect light directly.
Key Advantages #
- ~40% reduction in power consumption
- ~90% reduction in latency
- No electrical-optical conversion overhead
- Simplified data paths at scale
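Conceptually, an OCS behaves like a reconfigurable patch panel rather than a packet switch. The toy model below (a deliberate simplification; real OCS hardware steers light with MEMS micro-mirror arrays) reduces the mirror configuration to a mapping of input ports to output ports:

```python
# Toy model (illustrative only, not a real OCS API): circuit switching
# as a port mapping. Once a circuit is set, light passes straight through
# with no electrical conversion and no per-packet processing.

class OpticalCircuitSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mapping: dict[int, int] = {}  # input port -> output port

    def configure(self, circuits: dict[int, int]) -> None:
        """Set up circuits; each output port may serve only one input."""
        if len(set(circuits.values())) != len(circuits):
            raise ValueError("two inputs cannot share an output port")
        self.mapping = dict(circuits)

    def forward(self, in_port: int):
        # Light is redirected directly by the "mirrors": there is no
        # O-E-O conversion step in the data path.
        return self.mapping.get(in_port)

ocs = OpticalCircuitSwitch(num_ports=8)
ocs.configure({0: 5, 1: 3, 2: 7})
print(ocs.forward(0))  # -> 5
```

The trade-off this model hides is that circuits must be configured ahead of time: an OCS excels when traffic patterns are predictable (as in large, scheduled AI training jobs), which is exactly the hyperscale regime described above.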
This fundamentally changes the switching model inside AI clusters, especially at hyperscale.
🚀 Deployment Scale and Technology Choices #
Google’s roadmap is already operationalized:
- TPU v8 clusters will standardize on OCS
- ~15,000 OCS switches deployed globally in 2026
- Scaling to over 100,000 units within three years
At the optical component level, Google is also enforcing strict requirements:
- 1.6T and 3.2T modules must use Indium Phosphide (InP) EML
- Silicon photonics solutions are excluded from this generation
This is not exploratory—it is prescriptive. The supply chain implications are immediate.
📉 What This Means for CPO #
CPO is not dead—but it is delayed.
Google’s decision signals:
- Commercial deployment of CPO is at least several years away
- NPO + OCS becomes the dominant near-term architecture
- Investment priorities across the ecosystem will shift accordingly
For vendors aligned with CPO-first strategies, this creates both timing risk and capital allocation pressure.
🧠 The Axion CPU: A Strategic Shift in Compute #
Alongside networking changes, Google introduced its first custom data center CPU: Axion.
Built on ARM architecture and optimized for AI workloads, Axion delivers:
- ~50% higher performance vs. x86 alternatives
- ~60% better energy efficiency
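Taking the headline figures at face value (these are Google's claims, not independent measurements), a quick calculation shows why the efficiency number matters more than the performance number in a power-constrained data center:

```python
# Back-of-envelope using the claimed figures above (Google's numbers,
# not measurements): 1.5x per-socket performance, 1.6x perf-per-watt.

PERF_GAIN = 1.5        # "~50% higher performance vs. x86"
EFFICIENCY_GAIN = 1.6  # "~60% better energy efficiency"

# Sockets needed to match an x86 fleet's throughput:
sockets_ratio = 1 / PERF_GAIN               # ~0.667x
# Energy consumed per unit of work, relative to x86:
energy_per_work = 1 / EFFICIENCY_GAIN       # 0.625x
# Implied per-socket power draw, relative to an x86 socket:
socket_power = PERF_GAIN / EFFICIENCY_GAIN  # 0.9375x

print(f"sockets: {sockets_ratio:.3f}x, energy/work: {energy_per_work:.3f}x, "
      f"socket power: {socket_power:.4f}x")
```

If the claims hold, Axion would deliver 50% more work per socket while drawing slightly less power per socket, which is precisely the profile a fixed rack power budget rewards.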
This is not just about replacing CPUs—it reflects a deeper shift in workload characteristics.
AI Agents Redefine CPU Demand #
AI systems are evolving from single inference tasks to multi-step agent workflows:
- Coordinating multiple model invocations
- Managing tool calls and data pipelines
- Handling real-time decision logic
These orchestration-heavy tasks are CPU-dominated.
Axion is designed specifically for this layer, increasing concurrency under fixed power budgets—something traditional x86 platforms struggle to optimize for.
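The shape of that orchestration layer can be sketched with a minimal example (a hypothetical workload, not Google's stack; tool names and delays are invented): an agent step fans out several tool calls concurrently, and the host CPU's job is scheduling and I/O multiplexing rather than raw compute.

```python
# Sketch of an orchestration-heavy agent step (hypothetical workload).
# Model inference happens elsewhere; the CPU coordinates tool calls,
# so concurrency per watt is what limits agents served per host.

import asyncio

async def call_tool(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stands in for a network round-trip
    return f"{name}:ok"

async def agent_step() -> list[str]:
    # Fan out tool calls concurrently; the CPU spends its time on
    # scheduling and result merging, not floating-point math.
    tasks = [
        call_tool("search", 0.05),
        call_tool("calendar", 0.03),
        call_tool("database", 0.04),
    ]
    return await asyncio.gather(*tasks)

results = asyncio.run(agent_step())
print(results)  # -> ['search:ok', 'calendar:ok', 'database:ok']
```

Because each step is dominated by waiting on external services, throughput scales with how many such event loops a core can sustain under its power budget, which is the workload profile the article says Axion targets.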
🏁 Hyperscaler Trend: Vertical Integration #
Google’s move mirrors a broader industry pattern:
- Google → TPU + Axion
- Amazon → Trainium + Graviton
- Microsoft → Maia + Cobalt
The implication is clear: custom silicon is no longer optional. It is a core capability for controlling performance, cost, and supply chain risk.
💰 $185 Billion Reality: Constraints Beyond Capital #
Despite massive investment, Google highlighted two critical bottlenecks:
- Advanced wafer capacity
- High-bandwidth memory (HBM) supply
This underscores a key constraint in the AI era: scaling infrastructure is no longer just about capital—it is about access to scarce manufacturing resources.
🔮 Redefining Search and AI Systems #
Beyond infrastructure, Google is also redefining how AI systems interact with users.
Instead of returning links, future systems will execute tasks:
- Planning events
- Coordinating services
- Managing multi-step workflows
This shift from “information retrieval” to “task execution” increases backend complexity—and reinforces the need for efficient, scalable infrastructure.
🧭 Industry Impact: A Clear Signal #
Google’s decision sends a strong signal across multiple domains:
- Optical networking → NPO + OCS is the near-term standard
- Component suppliers → InP EML demand increases; silicon photonics faces pressure
- Chip ecosystem → Custom silicon becomes mandatory
- AI infrastructure → Power, interconnect, and orchestration dominate design constraints
🧩 The Real Shift: Decision-Making Under Uncertainty #
What ultimately stands out is not just the technology choice, but the decisiveness behind it.
In a landscape where multiple viable paths exist, the competitive edge comes from:
- Making high-stakes architectural bets early
- Aligning the entire ecosystem behind them
- Executing at hyperscale
What was considered the “optimal” path yesterday can be sidelined overnight by a single roadmap decision.
That is not instability—it is the defining characteristic of the modern semiconductor and AI infrastructure industry.