
Why Ethernet Is Taking Over AI Data Center Networking

AI Infrastructure Data Centers Networking Ethernet Semiconductors

The Dominance of Ethernet: The End and Rebirth of AI Infrastructure Rivalry

For years, AI data center networking has been defined by a fundamental tension between Scale-Up and Scale-Out architectures. Ethernet—once dismissed as a legacy technology unsuited for high-performance computing—has now re-emerged as the unifying force across both domains.

By 2030, the global AI data center networking market is projected to approach $200 billion, with Ethernet positioned at the center of this transformation.


🧱 Architectural Divergence and Key Breakthroughs

Understanding Ethernet’s resurgence requires separating three foundational pillars: Scale-Out, Scale-Up, and Co-Packaged Optics (CPO).

Scale-Out vs. Scale-Up

  • Scale-Out (Horizontal Expansion)
    Connects thousands of servers or racks using low-latency fabrics. It dominates large distributed AI training clusters and traditionally favored InfiniBand or Ethernet.

  • Scale-Up (Vertical Expansion)
    Aggregates multiple GPUs into a tightly coupled “Super GPU” with a shared memory space. This model demands extreme bandwidth and ultra-low latency, historically dominated by proprietary interconnects such as NVLink.

These two approaches were once treated as mutually exclusive networking domains.
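A back-of-envelope comparison makes the bandwidth gap between the two domains concrete. The figures below are illustrative assumptions in the range of recent hardware (an NVLink-class scale-up link around 900 GB/s per GPU, a 400G scale-out NIC), not vendor specifications:

```python
# Illustrative per-GPU bandwidth in the two domains.
# Assumed figures, not vendor specs:
#   scale-up link (NVLink-class): ~900 GB/s per GPU
#   scale-out NIC (Ethernet/InfiniBand): 400 Gb/s per GPU

SCALE_UP_GBPS_PER_GPU = 900 * 8   # 900 GB/s converted to gigabits per second
SCALE_OUT_GBPS_PER_GPU = 400      # a 400G NIC

ratio = SCALE_UP_GBPS_PER_GPU / SCALE_OUT_GBPS_PER_GPU
print(f"Scale-up offers ~{ratio:.0f}x the per-GPU bandwidth of a 400G NIC")
```

Under these assumptions the scale-up fabric carries roughly an order of magnitude more traffic per GPU, which is why that domain historically demanded purpose-built proprietary interconnects.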


🔌 CPO: The Enabler of Ethernet’s Expansion

As switch bandwidth pushes toward 800G, 1.6T, and beyond, conventional pluggable optics face growing limitations in power efficiency and signal integrity.

Co-Packaged Optics (CPO) directly integrates optical interfaces with switch ASICs, delivering:

  • Lower power consumption per bit
  • Higher bandwidth density
  • Shorter electrical traces and improved signal quality

CPO provides the physical foundation that allows Ethernet to extend beyond Scale-Out and penetrate the high-performance Scale-Up domain.
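The power argument can be sketched with simple arithmetic. The pJ/bit figures below are assumptions for illustration (DSP-based pluggables are often cited around 15 pJ/bit, CPO around 5 pJ/bit); actual numbers vary by generation and vendor:

```python
# Back-of-envelope optics power estimate for a 51.2 Tb/s switch.
# pJ/bit figures are illustrative assumptions, not measured values.

SWITCH_TBPS = 51.2            # aggregate switch bandwidth, Tb/s
PLUGGABLE_PJ_PER_BIT = 15.0   # assumed DSP-based pluggable optics energy
CPO_PJ_PER_BIT = 5.0          # assumed co-packaged optics energy

def optics_power_watts(tbps: float, pj_per_bit: float) -> float:
    """Power = bits/s * energy per bit; 1 Tb/s at 1 pJ/bit = 1 W."""
    return tbps * pj_per_bit

pluggable_w = optics_power_watts(SWITCH_TBPS, PLUGGABLE_PJ_PER_BIT)
cpo_w = optics_power_watts(SWITCH_TBPS, CPO_PJ_PER_BIT)
print(f"Pluggables: ~{pluggable_w:.0f} W, CPO: ~{cpo_w:.0f} W "
      f"(saving ~{pluggable_w - cpo_w:.0f} W per switch)")
```

A convenient identity keeps the arithmetic readable: 1 Tb/s at 1 pJ/bit dissipates exactly 1 W, so the savings scale linearly as switch bandwidth climbs toward 102.4 Tb/s and beyond.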


🌍 Market Shift: Ethernet as the Unified Fabric

Since 2026, Ethernet has rapidly evolved into the default fabric for AI infrastructure.

Scale-Out: Ethernet Becomes the Default

Driven by cost efficiency and open standards promoted by the Ultra Ethernet Consortium (UEC), Ethernet is overtaking proprietary alternatives.

  • Market Outlook: AI Scale-Out Ethernet revenue is projected to exceed $100 billion by 2030.
  • Optics Impact: 800G and 1.6T pluggables, combined with CPO, are expected to account for over 50% of switch revenue, addressing power and reach constraints at scale.

Ethernet’s economics and ecosystem depth make it the natural choice for hyperscale deployments.


🔄 Scale-Up: Breaking the Proprietary Lock-In

Scale-Up networking was long dominated by NVIDIA’s NVLink, effectively locking customers into a single vendor stack. That monopoly is now under pressure.

A non-NVIDIA coalition—including AMD, Intel, and Broadcom—has pushed Ethernet into the Scale-Up domain.

From Fragmentation to ESUN

Two competing approaches initially emerged:

  • UALink: PCIe-like, fixed-frame transport
  • SUE (Broadcom): Ethernet packet-based design

This fragmentation led to the creation of ESUN (Ethernet for Scale-Up Networking) in late 2025.

ESUN Key Properties:

  • Standardizes Ethernet as the base transport layer
  • Allows Scale-Up traffic to run on commodity Ethernet switch silicon
  • Enables multi-vendor interoperability
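To make "Ethernet as the base transport layer" concrete, here is a hypothetical sketch of wrapping a memory-semantic write transaction in a standard Ethernet II frame. The transaction header layout and opcode are invented for illustration; ESUN's actual wire format is not assumed here. The EtherType 0x88B5 is one of the IEEE "local experimental" values:

```python
import struct

# Hypothetical frame layout -- NOT the ESUN wire format.
ETHERTYPE_EXPERIMENTAL = 0x88B5  # IEEE local-experimental EtherType

def build_frame(dst_mac: bytes, src_mac: bytes,
                opcode: int, address: int, payload: bytes) -> bytes:
    """Ethernet II frame carrying a made-up 16-byte memory-transaction
    header: opcode (2B) | reserved (2B) | 64-bit address (8B) | length (4B)."""
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, ETHERTYPE_EXPERIMENTAL)
    txn_header = struct.pack("!HHQI", opcode, 0, address, len(payload))
    return eth_header + txn_header + payload

OP_WRITE = 0x0001  # invented opcode for illustration
frame = build_frame(b"\x02\x00\x00\x00\x00\x01", b"\x02\x00\x00\x00\x00\x02",
                    OP_WRITE, 0x1000_0000, b"\xde\xad\xbe\xef")
print(len(frame))  # 14B Ethernet header + 16B txn header + 4B payload = 34
```

The point of the sketch: because the outer framing is plain Ethernet, such traffic can traverse commodity switch silicon unchanged; the scale-up semantics live entirely inside the payload.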

2030 Revenue Outlook

Technology          Estimated Revenue
NVLink              ~$25B (still dominant)
Ethernet (ESUN)     ~$8B+ (fastest growth)
PCIe / UALink       ~$3B

Ethernet is not replacing NVLink overnight—but it is ending exclusivity.


⚖️ Opportunities and Constraints

Established Vendors (Broadcom, Cisco, etc.)

  • Opportunity: Ethernet + CPO opens access to high-margin Scale-Up systems.
  • Challenge: Sustained R&D investment is required to integrate optics and compete with NVIDIA’s vertically integrated roadmap.

Startups

  • Opportunity: Hyperscalers actively seek alternatives to reduce vendor lock-in.
  • Challenge: Survival requires delivering a 10× advantage—in port density, power efficiency, or total cost of ownership—to justify switching risk.


🌐 Conclusion: The Era of Unified AI Fabric

AI infrastructure is no longer a zero-sum contest between architectures. Ethernet, empowered by CPO and open standards, is converging Scale-Up and Scale-Out into a single fabric.

In 2026 and beyond, Ethernet is not just catching up—it is redefining the rules. By connecting tightly coupled “Super GPUs” to massive distributed clusters under one interoperable network, Ethernet is becoming the universal backbone of AI computing.
