Intel Diamond Rapids Leak: Disaggregated Xeon Architecture

Linux kernel patches are beginning to expose key details of Intel’s upcoming Diamond Rapids Xeon processors. The most notable change is architectural rather than incremental: Intel is moving to a fully disaggregated server CPU design, separating compute from memory and I/O at the chiplet level.

This marks a clear break from earlier Xeon generations and signals Intel’s response to escalating core counts, bandwidth demands, and AI-driven workloads.

🧩 CBB vs. IMH: A Fundamental Split

Diamond Rapids introduces two clearly separated functional domains:

  • CBB (Core Building Block)
    Pure compute tiles containing Panther Cove P-cores. These blocks focus exclusively on execution resources, frequency scaling, and core density.

  • IMH (Integrated I/O and Memory Hub)
    A dedicated chiplet responsible for memory controllers, PCIe lanes, and system I/O, completely removed from the compute tiles.

This is the first Xeon design where memory and I/O are no longer tightly coupled to the cores.

⚙️ Why Intel Is Doing This

Decoupling compute from memory enables several strategic advantages:

  • Independent Scaling: Core count and memory bandwidth can now scale independently, with rumors pointing to 192–256 cores per socket.
  • Process Optimization:
    • CBB tiles can be manufactured on Intel 18A, maximizing performance density.
    • IMH tiles may use a more mature node, improving yield and cost efficiency.
  • Faster Iteration: Intel can revise I/O and memory features without redesigning compute silicon.

This approach mirrors broader industry trends toward system-in-package design rather than monolithic CPUs.

🔍 Platform Monitoring and Interconnects

The architectural split is visible even in low-level system management:

  • Separate Discovery Paths
    • IMH PMON: Enumerated through PCI configuration space.
    • CBB PMON: Accessed via traditional MSRs (Model-Specific Registers).
  • PCIe Gen6 Support:
    Diamond Rapids is designed for PCIe Gen6, doubling per-lane bandwidth and enabling next-generation accelerators and storage.
  • Extreme Power Envelope:
    The Oak Stream platform will use the massive LGA 9324 socket, with top SKUs rumored to reach ~650W TDP.
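To make the "separate discovery paths" concrete, here is a rough userspace sketch of how a Linux tool might probe each domain: the IMH PMON side by reading a device's PCI configuration space through sysfs, and the CBB PMON side by reading a Model-Specific Register through the `/dev/cpu/N/msr` interface. The device address and MSR number below are placeholders for illustration — the real Diamond Rapids PMON IDs are not public.

```python
import struct

# Hypothetical identifiers -- the actual Diamond Rapids PMON device and
# MSR addresses have not been published; these are placeholders only.
FAKE_IMH_PMON_BDF = "0000:00:1e.0"
FAKE_CBB_PMON_MSR = 0x0700

def parse_pci_ids(config_bytes):
    """Decode the vendor and device ID from the first 4 bytes of
    PCI configuration space (both are little-endian 16-bit fields)."""
    vendor, device = struct.unpack_from("<HH", config_bytes, 0)
    return vendor, device

def discover_imh_pmon(bdf=FAKE_IMH_PMON_BDF):
    """IMH PMON path: enumerate through PCI config space via sysfs."""
    with open(f"/sys/bus/pci/devices/{bdf}/config", "rb") as f:
        return parse_pci_ids(f.read(4))

def read_cbb_pmon(cpu=0, msr=FAKE_CBB_PMON_MSR):
    """CBB PMON path: read an MSR via /dev/cpu/N/msr (requires root
    and the msr kernel module); the register is a 64-bit value."""
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        f.seek(msr)
        (value,) = struct.unpack("<Q", f.read(8))
    return value
```

For example, `parse_pci_ids(bytes([0x86, 0x80, 0x34, 0x12]))` returns `(0x8086, 0x1234)`: Intel's vendor ID followed by a placeholder device ID.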

📊 Diamond Rapids vs. Granite Rapids

| Feature | Granite Rapids | Diamond Rapids |
| --- | --- | --- |
| CPU Cores | Redwood Cove | Panther Cove |
| Process Node | Intel 3 | Intel 18A |
| Memory Controller | Integrated | Dedicated IMH chiplet |
| PCIe Support | Gen5 | Gen6 |
| Memory Channels | 12 | 16 |
| Socket | LGA 7529 | LGA 9324 |
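The Gen5-to-Gen6 jump can be put in rough numbers. PCIe Gen5 signals at 32 GT/s with 128b/130b encoding; Gen6 doubles the transfer rate to 64 GT/s using PAM4. The back-of-envelope calculation below ignores Gen6's FLIT-mode and FEC overhead, so the Gen6 figure is a slight overestimate:

```python
def pcie_lane_bandwidth_gbps(gigatransfers, encoding_efficiency):
    """Per-lane, per-direction bandwidth in GB/s: transfer rate (GT/s)
    times line-code efficiency, divided by 8 bits per byte."""
    return gigatransfers * encoding_efficiency / 8

# PCIe Gen5: 32 GT/s NRZ with 128b/130b encoding
gen5 = pcie_lane_bandwidth_gbps(32, 128 / 130)
# PCIe Gen6: 64 GT/s PAM4 (FLIT/FEC overhead ignored for simplicity)
gen6 = pcie_lane_bandwidth_gbps(64, 1.0)

print(f"Gen5 x16: {16 * gen5:.0f} GB/s, Gen6 x16: {16 * gen6:.0f} GB/s")
```

That works out to roughly 63 GB/s per direction for a Gen5 x16 link versus about 128 GB/s for Gen6 — the doubling that matters for next-generation accelerators and storage.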

🧠 What This Signals for the Data Center

Diamond Rapids is not a routine generational update; it is a system-level redesign. By separating the "brain" (compute) from the "nervous system" (memory and I/O), Intel is preparing Xeon for:

  • AI-heavy, bandwidth-bound workloads
  • Heterogeneous accelerator platforms
  • Future scalability beyond traditional socket limits

If these leaks are accurate, Diamond Rapids represents Intel’s clearest acknowledgment yet that the future of server CPUs lies not in bigger dies, but in modular, disaggregated architectures built for flexibility and scale.
