Technology Stocks : ASML Holding NV

From: BeenRetired  11/18/2025 10:32:38 AM
   of 42746
 
N2P + MI430X:
AMD and Eviden to Power Europe’s New Exascale

The Alice Recoque system is powered by next-gen AMD EPYC™ CPUs, codenamed "Venice," AMD Instinct™ MI430X GPUs (a new MI400 Series accelerator engineered for sovereign AI and scientific computing), and AMD FPGAs, interconnected by Eviden's network solution (BXI) into its newest BullSequana XH3500 platform, with DDN storage.

Copilot:
AMD EPYC "Venice" CPUs are built on TSMC's advanced 2nm (N2P) process node, while the AMD Instinct MI430X GPUs are part of the MI400 Series based on the CDNA 5 architecture, likely also fabricated on a 2nm-class node. The specific AMD FPGA used in the Alice Recoque system has not been publicly disclosed yet.

Here’s a breakdown of each component:

AMD EPYC “Venice” CPUs
  • Architecture: Zen 6
  • Process Node: TSMC 2nm (N2P) with nanosheet (GAA) transistors
  • Core Count: Up to 256 cores / 512 threads
  • Memory Bandwidth: Up to 1.6 TB/s per socket
  • Packaging: SP7 form factor, enabling higher CCD density and power delivery
  • Performance Gains: Claimed 70% improvement over Zen 5 EPYC “Turin”
This marks AMD’s first use of 2nm-class silicon in server CPUs, pushing density and efficiency for exascale-class workloads.

AMD Instinct MI430X GPUs
  • Series: MI400 (2026 launch)
  • Architecture: CDNA 5
  • Memory: Up to 432 GB HBM4 with 19.6 TB/s bandwidth
  • Target Use: High-precision HPC (e.g., climate modeling, scientific simulation)
  • Networking: Uses point-to-point mesh topology for tight synchronization in small clusters
While AMD hasn’t confirmed the MI430X node explicitly, it’s likely aligned with the Venice CPU at TSMC 2nm, given the shared launch window and performance tier.

AMD FPGAs in Alice Recoque
  • Status: Undisclosed
  • AMD has not yet revealed which FPGA family is used in the Alice Recoque system. Possibilities include:
      • Xilinx Versal Premium (TSMC 7nm)
      • Versal AI Edge (TSMC 6nm)
      • A custom variant for HPC interconnect or acceleration
Given the system’s sovereign AI focus and BXI interconnect, the FPGA may support network offload, protocol bridging, or real-time telemetry, but node size and model remain speculative.

System Integration: BullSequana XH3500
  • Interconnect: Eviden’s BXI (Bull eXascale Interconnect)
  • Cooling: Direct Liquid Cooling (DLC), fanless, warm-water support
  • Storage: DDN parallel file system*
  • Modularity: Supports mixing CPUs, GPUs, FPGAs, and future quantum-classical nodes
If you’re mapping sovereign AI silicon stacks, this combo—Venice + MI430X + BXI + DDN—represents Europe’s push for exascale autonomy. Want to compare this to France’s Jules Verne system or Spain’s MareNostrum 5?

*Exactly, Mike: DDN is the performance juggernaut, but VAST Data and WEKA carve out distinct niches that matter depending on the AI deployment model.

PS
My story?
Vast R&D bucks being thrown at Shrink n Stack. Not just AI. From what I'm reading.
Eagerly await 4Q25 on.
Sticking with it