From: BeenRetired, 12/3/2025 5:50:24 AM
 
AMD expands partnership with HPE to build open rack-scale AI infrastructure

AMD (AMD) has expanded its partnership with Hewlett Packard Enterprise (HPE) to build open, rack-scale artificial intelligence infrastructure for high-performance computing and to advance Sovereign AI research.

HPE will become one of the first system providers to adopt AMD's Helios rack-scale AI architecture. It will integrate a purpose-built HPE Juniper Networking scale-up switch in collaboration with Broadcom (AVGO), the companies announced today. The system is designed to simplify the deployment of larger-scale AI clusters.

"With 'Helios', we're taking that collaboration further, bringing together the full stack of AMD compute technologies and HPE's system innovation to deliver an open, rack-scale AI platform that drives new levels of efficiency, scalability, and breakthrough performance for our customers in the AI era," said AMD CEO Lisa Su.

"With the introduction of the new AMD 'Helios' and our purpose-built HPE scale-up networking solution, we are providing our cloud service provider customers with faster deployments, greater flexibility, and reduced risk in how they scale AI computing in their businesses," added HPE CEO Anotnio Neri.

The platform utilizes AMD Instinct MI455X GPUs, AMD EPYC Venice CPUs and AMD Pensando Vulcano NICs for scale-out networking*.

Helios is also being used to power Herder, a new supercomputer for the High-Performance Computing Center Stuttgart in Germany.

"Our scientific user community requires that we continue to support traditional applications of HPC for numerical simulation," said Center Director Michael Resch. "At the same time, we are seeing growing interest in machine learning and artificial intelligence. Herder's system architecture will enable us to support both of these approaches, while also giving our users the ability to develop and benefit from new kinds of hybrid HPC/AI workflows."

The new supercomputer is expected to go online by the end of 2027.

*Copilot:
Quick Answer:

  • AMD Instinct MI455X GPUs → built on TSMC 3nm (CDNA 5 architecture)

  • AMD EPYC Venice CPUs (Zen 6) → fabbed on TSMC 2nm (N2/N2P process)

  • AMD Pensando Vulcano NICs → next-gen 800G NICs for PCIe Gen6; node size not explicitly disclosed, but expected to align with advanced 5nm-class or below networking silicon

Detailed Breakdown

1. AMD Instinct MI455X GPUs
  • Part of the MI400 series, launching in 2026.

  • Based on CDNA 5 architecture.

  • Uses TSMC 3nm process node (successor to MI350X on 5nm/6nm).

  • Designed for AI training and inference, with up to 432 GB HBM4 memory and 19.6 TB/s bandwidth (a quick arithmetic check on these figures follows this list).

  • Node shrink from 5nm → 3nm enables higher density and efficiency, critical for rack-scale AI clusters.
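
As a rough illustration of what the quoted figures imply, here is a minimal back-of-envelope sketch in Python. The 432 GB and 19.6 TB/s numbers are taken from the bullets above; the "one full memory sweep" framing is just a common rule of thumb for memory-bound AI work, not an AMD specification.

    # Back-of-envelope check on the MI455X figures quoted above:
    # 432 GB of HBM4 at 19.6 TB/s. Streaming the whole memory once is a rough
    # lower bound for memory-bound steps (e.g., reading all weights one time).
    capacity_gb = 432            # GB of HBM4, per the post
    bandwidth_gb_per_s = 19.6e3  # 19.6 TB/s expressed in GB/s
    full_sweep_s = capacity_gb / bandwidth_gb_per_s
    print(f"One full pass over HBM: ~{full_sweep_s * 1e3:.0f} ms")  # ~22 ms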

2. AMD EPYC Venice CPUs
  • Next-generation Zen 6 EPYC “Venice” CPUs.

  • Manufactured on TSMC’s 2nm (N2/N2P) process node, using gate-all-around nanosheet transistors.

  • Supports up to 256 cores / 512 threads.

  • Offers a 70% performance uplift and 30% higher thread density compared to Zen 5 EPYC Turin (a quick sanity check on the thread count follows this list).

  • Launch window: 2026, aligned with MI400 GPU rollout.

  • The 2nm node is a major leap, enabling higher transistor density and lower power consumption.
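
As a quick sanity check on the thread-density claim above, here is a minimal arithmetic sketch in Python. It assumes the top Zen 5 EPYC "Turin" part ships with 192 cores / 384 threads; the 256-core / 512-thread Venice figure comes from the bullet above.

    # Rough check of the "~30% higher thread density" claim vs. Zen 5 Turin.
    # Assumption: top Turin SKU = 192 cores / 384 threads (SMT-2);
    # Venice = 256 cores / 512 threads, per the post.
    turin_threads = 192 * 2
    venice_threads = 256 * 2
    uplift = venice_threads / turin_threads - 1
    print(f"Per-socket thread uplift: {uplift:.0%}")  # ~33%, in line with the ~30% density claim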

3. AMD Pensando Vulcano NICs
  • The Vulcano NIC is AMD’s next-gen 800G networking card for PCIe Gen6 clusters (see the bandwidth check after this list).

  • Supports Ultra Ethernet Consortium (UEC) networking and UALink 1.0, the open industry alternative to NVIDIA’s NVLink that AMD backs.

  • Designed for scale-out AI networking, handling massive GPU-to-GPU and CPU-to-GPU traffic.

  • Process node not explicitly disclosed. Given industry norms, high-performance NICs in 2026 are likely built on 5nm or 3nm-class silicon, but AMD has not confirmed.

  • Focus is on bandwidth (800G) and open ecosystem interoperability, rather than node marketing.
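
For a sense of why PCIe Gen6 matters for an 800G NIC, here is a hedged back-of-envelope sketch in Python. The 64 GT/s-per-lane figure is the PCIe 6.x signaling rate; the ~0.94 efficiency factor is a rough estimate for FLIT/protocol overhead, not a published Pensando number.

    # Does one 800G NIC fit in a single PCIe Gen6 x16 slot?
    nic_gb_per_s = 800 / 8                 # 800 Gb/s Ethernet -> 100 GB/s
    raw_gb_per_s = 64 * 16 / 8             # 64 GT/s per lane x 16 lanes -> 128 GB/s per direction
    usable_gb_per_s = raw_gb_per_s * 0.94  # rough FLIT/protocol overhead estimate (assumption)
    print(f"NIC needs ~{nic_gb_per_s:.0f} GB/s; Gen6 x16 offers ~{usable_gb_per_s:.0f} GB/s usable")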

In short: Helios is a bleeding-edge rack-scale platform—GPUs at 3nm, CPUs at 2nm, and NICs at ultra-high bandwidth (800G), likely on advanced sub-5nm silicon. This combination is engineered to simplify and accelerate large-scale AI cluster deployments.
Would you like me to also compare these node choices against NVIDIA’s equivalents (e.g., Blackwell GPUs, Grace CPUs, Spectrum-X NICs) to see how AMD/HPE are positioning Helios competitively?