
CES 2026: NVIDIA, Intel, and AMD Redefine AI Platforms

·730 words·4 mins
CES NVIDIA Intel AMD AI Hardware

At CES 2026, NVIDIA, Intel, and AMD each revealed major new computing platforms aimed at improving AI efficiency, enabling Physical AI, and advancing modular, system-level chip design. While their targets differ—ranging from hyperscale data centers to AI PCs and super-APUs—all three vendors emphasized tighter integration between compute, memory, and interconnect.

🟢 NVIDIA: The Vera Rubin Platform

Named after the astronomer who uncovered evidence of dark matter, Vera Rubin represents NVIDIA’s most aggressive move yet toward full-stack AI system design. Rather than a single processor, Rubin is a six-chip platform built around extreme co-design, intended to reduce AI inference costs by up to 10×.

Platform Highlights

  • Rubin GPU
    Delivers 50 PFLOPS of AI inference performance using NVFP4 precision. It integrates a third-generation Transformer Engine and 288GB of HBM4, providing 22 TB/s of memory bandwidth (a back-of-envelope compute-to-bandwidth check follows this list).

  • Vera CPU
    An 88-core custom processor based on Arm v9.2, delivering 176 threads. The CPU is optimized for data orchestration, agentic workflows, and large-scale inference coordination.

  • BlueField-4 DPU
    Serves as the control plane for NVIDIA’s new Inference Context Memory Storage Platform, effectively acting as a shared external memory pool of up to 150TB per system.

  • Networking Stack
    Includes the ConnectX-9 SuperNIC with 1.6 TB/s throughput and the Spectrum-6 Ethernet switch, which uses silicon photonics to improve power efficiency.
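
As a rough sanity check on the Rubin GPU figures above (derived arithmetic, not an NVIDIA-published metric), the ratio of peak NVFP4 compute to HBM4 bandwidth shows how much work a kernel must do per byte fetched before it stops being memory-bound:

```python
# Back-of-envelope roofline arithmetic for the Rubin GPU, using only the
# figures quoted above; the ops-per-byte threshold is derived, not official.
peak_nvfp4_flops = 50e15   # 50 PFLOPS of NVFP4 inference compute
hbm4_bandwidth   = 22e12   # 22 TB/s of HBM4 bandwidth
hbm4_capacity    = 288e9   # 288 GB of HBM4

# Arithmetic intensity needed to be compute-bound rather than memory-bound.
ops_per_byte = peak_nvfp4_flops / hbm4_bandwidth

# Time to stream the full HBM4 contents once at peak bandwidth.
full_pass_ms = hbm4_capacity / hbm4_bandwidth * 1e3

print(f"Compute-to-bandwidth ratio: ~{ops_per_byte:,.0f} ops per byte")
print(f"One full pass over HBM4:    ~{full_pass_ms:.1f} ms")
```

Kernels performing fewer than roughly two thousand operations per byte of HBM4 traffic will be bandwidth-bound on this part, which is one reason low-precision formats like NVFP4 feature so prominently here.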

System-Level Performance

  • NVL72 Rack
    Integrates 72 Rubin GPUs and 36 Vera CPUs into a single coherent system delivering 260 TB/s of internal bandwidth, exceeding the estimated aggregate bandwidth of the global internet (per-rack compute and memory totals are tallied in the sketch after this list).

  • Efficiency Gains
    NVIDIA claims Mixture-of-Experts (MoE) models can be trained using 4× fewer GPUs compared to the previous Blackwell generation.
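
Scaling the per-GPU numbers from the platform highlights to a full NVL72 rack is straightforward multiplication; the totals below are derived from the figures quoted in this post, not separately published rack specs:

```python
# Aggregate NVL72 figures derived from the per-GPU specs quoted above.
gpus_per_rack   = 72
nvfp4_per_gpu   = 50e15   # 50 PFLOPS NVFP4 per Rubin GPU
hbm4_per_gpu    = 288e9   # 288 GB HBM4 per Rubin GPU
hbm4_bw_per_gpu = 22e12   # 22 TB/s HBM4 bandwidth per Rubin GPU

print(f"Peak NVFP4 compute : ~{gpus_per_rack * nvfp4_per_gpu / 1e18:.1f} EFLOPS")
print(f"Total HBM4 capacity: ~{gpus_per_rack * hbm4_per_gpu / 1e12:.1f} TB")
print(f"Aggregate HBM4 BW  : ~{gpus_per_rack * hbm4_bw_per_gpu / 1e15:.2f} PB/s")
```

Note that the 260 TB/s internal-bandwidth figure above describes the rack's interconnect fabric, which is separate from the aggregate HBM4 bandwidth computed here.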

🔵 Intel: Panther Lake (Core Ultra Series 3)
#

Intel used CES 2026 to officially launch Panther Lake, its first high-volume client platform manufactured on the Intel 18A (2nm-class) process. The focus is squarely on AI PCs, power efficiency, and competitive integrated graphics.

Architectural Breakthroughs

  • Intel 18A Process
    Combines RibbonFET (gate-all-around transistors) with PowerVia (backside power delivery), achieving up to a 40% reduction in power consumption at equivalent performance.

  • Xe3 Integrated Graphics (Arc B390)
    The flagship configuration includes 12 Xe3 cores. Intel claims a 77% uplift in gaming performance over Lunar Lake and competitiveness with a 60W RTX 4050 laptop dGPU.

  • Disaggregated Tile Design
    Panther Lake uses a three-tile architecture—Compute, Graphics, and Platform Controller—connected via Foveros packaging. The CPU and NPU reside on the 18A Compute Tile, while the GPU Tile is built on TSMC N3E.

Availability

  • Pre-orders: January 6, 2026
  • Global availability: January 27, 2026

🔴 AMD: Strix Halo (Ryzen AI MAX+) and Gorgon Point

AMD expanded its mobile portfolio with updates that blur the line between traditional laptops, gaming systems, and compact workstations.

🧩 Ryzen AI MAX+ (Strix Halo)

AMD’s MAX+ 392 (12-core) and MAX+ 388 (8-core) SKUs now both ship with the full 40-CU Radeon 8060S iGPU, previously exclusive to higher-end parts.

  • Graphics Performance
    The 40-CU RDNA 3.5 GPU delivers up to 60 TFLOPS FP16, approaching the performance of mid-range discrete GPUs.

  • Unified Memory Architecture
    Supports up to 128GB of LPDDR5X-8533 on a 256-bit bus, with up to 96GB addressable by the GPU as VRAM, enabling large AI models to run locally without a discrete GPU (see the sizing sketch after this list).
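
A quick sizing sketch shows why that 96GB window matters for local inference; the model sizes and quantization widths below are illustrative assumptions, not AMD figures:

```python
# Rough local-LLM sizing against Strix Halo's unified memory. The 96 GB
# GPU-addressable window and 256-bit LPDDR5X-8533 bus come from the post;
# model and quantization choices are illustrative assumptions.
GPU_ADDRESSABLE_GB = 96

# 256-bit bus at 8533 MT/s moves 32 bytes per transfer.
unified_bw_gbs = 8533e6 * 32 / 1e9   # ~273 GB/s

def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight footprint of a quantized model, in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params, bits in [(8, 8), (70, 4), (120, 4)]:
    gb = weights_gb(params, bits)
    verdict = "fits" if gb < GPU_ADDRESSABLE_GB else "does not fit"
    print(f"{params}B model @ {bits}-bit: ~{gb:.0f} GB of weights -> {verdict}")

print(f"Unified memory bandwidth: ~{unified_bw_gbs:.0f} GB/s")
```

Weights are only part of the footprint (KV cache and activations add on top), but even a 70B-class model quantized to 4 bits leaves considerable headroom inside 96GB.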

🧠 Ryzen AI 400 (Gorgon Point)

The mainstream mobile lineup received a significant AI-focused refresh:

  • NPU Performance: Up to 60 TOPS using XDNA 2
  • CPU Boost Clocks: Up to 5.2GHz
  • Focus Areas: Improved battery life, stronger ROCm support, and broader AI developer enablement (a minimal ROCm-detection sketch follows this list)
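
On the ROCm point, here is a minimal sketch of how a developer would verify that a ROCm-enabled PyTorch build sees an AMD GPU; ROCm builds expose AMD devices through the torch.cuda API surface, and whether a particular Ryzen AI 400 iGPU is actually supported depends on the ROCm and driver stack shipped for it (an assumption here, not something detailed in this post):

```python
# Minimal ROCm/PyTorch sanity check. On ROCm builds of PyTorch, AMD GPUs are
# exposed through the torch.cuda namespace and torch.version.hip is non-None.
# Support for a specific Ryzen AI 400 iGPU depends on the installed stack.
import torch

if torch.cuda.is_available():
    backend = "ROCm/HIP" if torch.version.hip else "CUDA"
    print(f"Backend: {backend}, device: {torch.cuda.get_device_name(0)}")
    x = torch.randn(1024, 1024, device="cuda")
    print("Matmul OK:", (x @ x).shape)
else:
    print("No GPU backend visible to PyTorch; running on CPU.")
```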

📊 Comparison: 2026 Mobile Flagships

Feature          | Intel Core Ultra X9 388H             | AMD Ryzen AI MAX+ 388
-----------------|--------------------------------------|--------------------------------------
Architecture     | Panther Lake (18A)                   | Strix Halo (Zen 5)
Core Count       | 16 (4P + 8E + 4LP)                   | 8 (Zen 5)
Integrated GPU   | Arc B390 (12 Xe3)                    | Radeon 8060S (40 CU)
NPU Performance  | 50 TOPS (NPU) / 120 TOPS (Platform)  | 50 TOPS (NPU) / 126 TOPS (Platform)
Differentiator   | PowerVia, XeSS-MFG                   | 256-bit unified memory

🧭 Big Picture

CES 2026 highlighted a clear industry shift:

  • NVIDIA is redefining AI infrastructure through system-scale co-design and disaggregated memory.
  • Intel is betting on manufacturing leadership and efficiency to win the AI PC era.
  • AMD is leveraging unified memory and powerful integrated GPUs to collapse traditional device categories.

Together, these platforms signal that future AI performance gains will come as much from architecture and integration as from raw transistor counts.

Related

  • Intel x NVIDIA Serpent Lake: A Mega-APU Challenge to AMD Strix Halo
  • AMD Warns of Risks from Intel–NVIDIA Alliance
  • AMD Reaffirms AI Strategy Amid Intel-NVIDIA Partnership