OCI MSA Explained: Optical Interconnects for AI Infrastructure
In a major shift for AI infrastructure, industry leaders—including hyperscalers and chip vendors—have formed the Optical Compute Interconnect (OCI) Multi-Source Agreement (MSA). Announced in March 2026, this initiative aims to replace traditional copper links with standardized optical interconnects inside AI racks.
As AI clusters scale to tens of thousands of accelerators, copper-based connectivity is reaching its physical limits. OCI represents a coordinated effort to overcome these constraints using light-based communication.
🌐 The Vision: A Universal Optical Physical Layer #
At the heart of OCI is a fundamental architectural shift: separating the communication protocol from the physical transmission medium.
Key Idea #
- Define a common optical Physical Layer (PHY)
- Allow different interconnect protocols to run on top of it
Why This Matters #
- **Protocol Agnostic**: Proprietary and open standards, such as NVIDIA's NVLink and emerging alternatives, can coexist on the same optical infrastructure.
- **Vendor Interoperability**: Data center operators can mix GPUs, CPUs, and switches from different vendors without redesigning the physical interconnect layer.
- **Plug-and-Play Ecosystem**: Reduces integration complexity and accelerates deployment cycles.
This model mirrors how Ethernet standardized networking, but applied to intra-rack AI connectivity.
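The separation described above can be sketched in code. This is a hypothetical model (the class and protocol names are illustrative, not part of any OCI specification): one shared optical PHY object, with different link protocols layered on top of it unchanged.

```python
from dataclasses import dataclass

@dataclass
class OpticalPHY:
    """Common physical layer: WDM channels x per-wavelength line rate."""
    lanes: int          # number of WDM channels on the fiber
    gbps_per_lane: int  # line rate per wavelength, in Gbps

    def capacity_gbps(self) -> int:
        return self.lanes * self.gbps_per_lane

class LinkProtocol:
    """Any interconnect protocol that frames its traffic over the shared PHY."""
    def __init__(self, name: str, phy: OpticalPHY):
        self.name = name
        self.phy = phy

    def describe(self) -> str:
        return f"{self.name} over a {self.phy.capacity_gbps()} Gbps optical PHY"

# The same PHY instance carries different protocols without modification,
# using the Gen 1 figures from the roadmap table (4 channels x 50 Gbps).
phy = OpticalPHY(lanes=4, gbps_per_lane=50)
for link in [LinkProtocol("NVLink-style", phy), LinkProtocol("open-standard", phy)]:
    print(link.describe())
```

The design point is that swapping the protocol never touches the PHY, which is exactly the decoupling the MSA targets.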
⚙️ Technical Roadmap: Scaling to Terabit Speeds #
OCI focuses on bringing optics closer to compute silicon through Co-Packaged Optics (CPO) and chiplet-based designs.
| Feature | OCI Gen 1 | OCI Gen 2 (Roadmap) |
|---|---|---|
| Throughput | 200 Gbps (per direction) | 400 Gbps (BiDi) / 800 Gbps total |
| Modulation | 50G NRZ | 100G+ PAM4 or advanced NRZ |
| Wavelengths | 4-channel WDM | 8–16 channel DWDM |
| Fiber Capacity | Up to 800 Gbps | Up to 3.2 Tbps per fiber |
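The roadmap figures above can be sanity-checked with simple arithmetic. The only assumption here is the formula itself: per-direction capacity equals WDM channels times the per-wavelength line rate, and bidirectional (BiDi) operation doubles it.

```python
def capacity_gbps(channels: int, gbps_per_channel: int, bidi: bool = False) -> int:
    """Fiber capacity = WDM channels x line rate, doubled when bidirectional."""
    per_direction = channels * gbps_per_channel
    return per_direction * 2 if bidi else per_direction

# Gen 1: 4 channels x 50 Gbps NRZ = 200 Gbps per direction
print(capacity_gbps(4, 50))
# Gen 2 ceiling: 16 channels x 100 Gbps PAM4, BiDi = 3200 Gbps = 3.2 Tbps
print(capacity_gbps(16, 100, bidi=True))
```

This matches the table's 200 Gbps Gen 1 figure and the 3.2 Tbps Gen 2 per-fiber ceiling.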
Key Technologies #
- **Wavelength Division Multiplexing (WDM)**: Multiple data streams transmitted simultaneously over different wavelengths.
- **Advanced Modulation (PAM4)**: Higher data density per signal.
- **Silicon Photonics Integration**: Embedding optical components directly alongside compute dies.
These advances enable massive bandwidth increases without proportional growth in power consumption.
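Why PAM4 raises data density follows from general signaling math (not an OCI-specific rule): an M-level symbol carries log2(M) bits, so a link can hold its symbol (baud) rate constant while doubling throughput when moving from 2-level NRZ to 4-level PAM4.

```python
import math

def bit_rate_gbps(baud_gbd: float, levels: int) -> float:
    """Bit rate = symbol rate x bits per symbol, where bits/symbol = log2(levels)."""
    return baud_gbd * math.log2(levels)

print(bit_rate_gbps(50, 2))  # NRZ,  2 levels: 50 GBd -> 50.0 Gbps
print(bit_rate_gbps(50, 4))  # PAM4, 4 levels: 50 GBd -> 100.0 Gbps
```

The trade-off, not shown here, is that the four amplitude levels shrink the voltage margin between symbols, which is part of why PAM4 links lean harder on equalization and error correction.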
⚡ Redefining the “Scale-Up” Domain #
AI infrastructure typically distinguishes between:
- Scale-Out → Connecting racks (Ethernet, InfiniBand)
- Scale-Up → Connecting accelerators within a rack
Historically, scale-up has relied on copper—but this approach is hitting hard limits.
Limitations of Copper #
- Distance: ~2–3 meters at high speeds
- Power: Increasing sharply with bandwidth
- Signal integrity: Degrades rapidly over distance
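A rough loss-budget model shows why copper reach collapses as speeds rise. The dB/m figures below are placeholder assumptions for a passive twinax cable, not measured values; only the trend matters: cable loss scales with frequency and distance, while the receiver's equalization budget is roughly fixed.

```python
def passive_reach_m(rx_budget_db: float, loss_db_per_m: float) -> float:
    """Max passive cable length before loss exhausts the receiver's budget."""
    return rx_budget_db / loss_db_per_m

RX_BUDGET_DB = 20.0                      # assumed receiver equalization budget
LOSS_DB_PER_M = {25: 4.0, 50: 8.0}       # assumed loss at Nyquist freq (GHz)

for nyquist_ghz, loss in LOSS_DB_PER_M.items():
    print(f"~{nyquist_ghz} GHz signaling: ~{passive_reach_m(RX_BUDGET_DB, loss):.1f} m reach")
```

With these illustrative numbers, doubling the signaling rate halves the reach to about 2.5 m, consistent with the ~2–3 meter limit noted above.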
OCI Advantages #
- **Extended Reach**: Up to ~100 meters, enabling rack-to-row scale coherence.
- **Improved Power Efficiency**: Targeting ~9 W per link, comparable to copper but with far higher bandwidth.
- **Low Latency**: Maintained through integrated photonics and efficient signal conversion.
This effectively expands the “scale-up” boundary, allowing larger clusters of accelerators to behave like a single logical system.
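The power-efficiency claim can be expressed as energy per bit, the usual metric for interconnects. Using the ~9 W per-link target and the 800 Gbps total Gen 2 rate from the figures above (pairing these two numbers is this sketch's assumption):

```python
def pj_per_bit(watts: float, gbps: float) -> float:
    """Energy per bit in picojoules: power / bit rate."""
    return watts / (gbps * 1e9) * 1e12

print(pj_per_bit(9, 800))  # ~11.25 pJ/bit at 9 W over 800 Gbps
```

The same 9 W spent on a 200 Gbps copper link would work out to four times the energy per bit, which is the sense in which optics wins even at "comparable" per-link power.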
🏗️ Industry Dynamics: Hyperscaler-Driven Standards #
Unlike traditional standards bodies (e.g., IEEE or JEDEC), OCI is being driven through a Multi-Source Agreement (MSA) model.
What Makes MSA Different? #
- Faster development cycles
- Engineering-first collaboration
- Direct alignment with hyperscaler needs
Strategic Implications #
- Hyperscalers take control of infrastructure direction
- Faster innovation cycles compared to formal standards bodies
- Competitive positioning among chip vendors becomes more visible
Notably, some major players are absent from the founding group, highlighting potential fragmentation in how optical interconnect standards evolve.
🧩 Conclusion #
The OCI MSA marks a turning point in data center architecture. By standardizing optical interconnects at the physical layer, the industry is:
- Breaking the bandwidth and distance limits of copper
- Enabling multi-vendor interoperability
- Scaling AI systems more efficiently within and across racks
As adoption grows, the internal wiring of AI systems will increasingly shift from electrical signals to photonic pathways.
In the coming years, the “computer” will no longer be defined by a single box—but by a fabric of light connecting thousands of processors into one unified system.