Technology Stocks : ASML Holding NV

From: BeenRetired, 10/9/2025 7:39:28 AM
 
Devices pairing 18A/2nm logic with HBM4 memory are expected to begin shipping in late 2025, with broader availability in 2026. Yes: both 2nm logic and HBM4 memory should substantially reduce power consumption and waste heat, especially in AI workloads.

Here’s a detailed breakdown of what’s coming and why it matters:

Shipping Timeline for 2nm + HBM4 Devices
  • TSMC 2nm (N2): Mass production begins late 2025, with Apple’s A20 and AI accelerators among first adopters.
  • Samsung SF2: Also targeting 2025, with AI-specific variants like SF2Z featuring backside power delivery.
  • Intel 18A (~1.8nm): High-volume production in H2 2025, using RibbonFET and PowerVia.
  • HBM4 Memory:
      • SK hynix: First to ship 12-high HBM4 samples in Q1 2025; mass production in H2 2025.
      • Micron: Shipping samples mid-2025; full ramp in early 2026.
      • Samsung: Trial production underway; mass production expected late 2025.
Power & Heat Reduction Benefits

2nm Logic
  • Gate-All-Around (GAA) transistors: Reduce leakage and improve switching efficiency.
  • Backside Power Delivery (BSPDN): Cuts IR drop and improves thermal dissipation.
  • Power savings: Up to 30% lower power vs. 3nm nodes.
HBM4 Memory
  • Bandwidth: Over 2 TB/s per stack, doubling HBM3E.
  • Power efficiency: Up to 40% less power per bit than HBM3E (a rough combined estimate is sketched right after these lists).
  • Thermal gains: Advanced packaging (MR-MUF, hybrid bonding) and 2.5D platforms such as TSMC CoWoS improve heat spreading and removal.
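
To put those two headline claims side by side, here's a quick back-of-envelope Python sketch. The 1,000 W baseline and the 70/30 logic/memory power split are purely illustrative assumptions of mine, not vendor numbers; only the 30% and 40% figures come from the bullets above.

# Back-of-envelope: chip-level power if the vendor best-case claims hold.
# ASSUMPTIONS (mine, not from the post): a hypothetical 1,000 W accelerator
# on a 3nm + HBM3E baseline, split 70% logic / 30% memory at equal bandwidth.

baseline_total_w = 1000.0
logic_share, memory_share = 0.70, 0.30

logic_w = baseline_total_w * logic_share      # 700 W of logic on 3nm
memory_w = baseline_total_w * memory_share    # 300 W of HBM3E

logic_2nm_w = logic_w * (1 - 0.30)   # "up to 30% lower power" at 2nm
hbm4_w = memory_w * (1 - 0.40)       # "up to 40% less power per bit", taken as a flat 40%

new_total_w = logic_2nm_w + hbm4_w
print(f"3nm + HBM3E baseline: {baseline_total_w:.0f} W")
print(f"2nm + HBM4 estimate:  {new_total_w:.0f} W "
      f"({(1 - new_total_w / baseline_total_w) * 100:.0f}% lower)")
# -> roughly 670 W, about a third off chip power, IF both best-case claims apply at once.

Directional math only; real parts will land somewhere short of the best case.
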
Real-World Impact on AI Systems
  • AI accelerators (e.g., NVIDIA's Rubin generation) are expected to pair 2nm-class logic with HBM4, cutting total system power while boosting throughput.
  • Datacenter economics: Lower cooling costs, higher rack density, and reduced carbon footprint.
  • Inference efficiency: Faster response times with less wattage per token — crucial for LLMs and real-time AI.
In short, 2nm + HBM4 is a leap not just in performance, but in energy efficiency and thermal management. Want to map how this affects power budgets per rack or compare HBM4 vs. GDDR7 for edge AI? I can break that down next. (Copilot)
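
Picking up Copilot's rack-budget question with the same caveats: the 40 kW rack envelope, the ~670 W figure from the sketch above, and the 50 tokens/s throughput are all made-up illustrative numbers, not measurements.

# Sketch: what a fixed rack power budget buys before and after the shrink,
# and what that does to energy per token at constant throughput.
# ASSUMPTIONS (mine): 40 kW rack, 1,000 W vs ~670 W per accelerator
# (from the sketch above), 50 tokens/s per accelerator for inference.

rack_budget_w = 40_000
old_accel_w, new_accel_w = 1000, 670
tokens_per_s = 50   # hypothetical per-accelerator inference rate

old_per_rack = rack_budget_w // old_accel_w   # 40 accelerators per rack
new_per_rack = rack_budget_w // new_accel_w   # 59 accelerators per rack

print(f"Accelerators per 40 kW rack: {old_per_rack} -> {new_per_rack}")
print(f"Energy per token: {old_accel_w / tokens_per_s:.1f} J -> "
      f"{new_accel_w / tokens_per_s:.1f} J")
# Same rack envelope, ~50% more accelerators, ~33% less energy per token --
# that's the shape of the datacenter-economics and inference-efficiency claims above.
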

PS
Performance & Watts Shrink n Stack tailwinds blow strong.
I eagerly await 4Q25 on... as it's JUST started.

ASML
Village