Technology Stocks : ASML Holding NV

From: BeenRetired, 10/23/2025 11:12:38 AM
SEDEX 2025: Samsung debuts HBM4; SK Hynix showcases rival design
Staff reporter, Seoul; Levi Li, DIGITIMES Asia
Thursday 23 October 2025


Credit: DIGITIMES

Samsung Electronics and SK Hynix unveiled their next-generation high-bandwidth memory (HBM4) at SEDEX 2025 in South Korea, marking Samsung's first public display of an HBM4 sample domestically. The launch drew strong attention as the two memory leaders...

Copilot:
Yes, the stakes for HBM4 are enormous. Demand is surging on the back of generative AI, and pairing HBM4 with 2nm-class logic promises a major leap in bandwidth, latency, and energy efficiency. Samsung and SK Hynix publicly debuted HBM4 prototypes at SEDEX 2025, with mass production expected to begin in late 2025 or early 2026.

Here’s a breakdown of why HBM4 matters and where it stands:

Why the Stakes Are Huge
  • AI Workload Explosion: Generative AI models like GPT-5 and multimodal agents require massive memory bandwidth. HBM4 delivers up to 1.5 TB/s per stack, roughly a 50% jump over HBM3E.
  • NVIDIA Rubin & AMD MI450: Both next-gen AI accelerators are expected to adopt HBM4, making it a critical enabler for 2026 inference and training platforms.
  • Data Center Bottlenecks: As logic nodes shrink (e.g., 2nm), memory becomes the limiting factor. HBM4’s 12-layer stack and TSV density help close that gap.
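To put the bandwidth figures above in context, peak per-stack HBM bandwidth is just interface width times per-pin data rate. The sketch below is illustrative only: the 2048-bit HBM4 bus width follows the JEDEC direction widely reported for HBM4, but the per-pin rates chosen here are assumptions picked to reproduce the article's ~1.5 TB/s figure, not vendor specifications.

```python
def stack_bandwidth_tbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak per-stack bandwidth in TB/s:
    bus width (bits) * per-pin rate (Gb/s) / 8 bits-per-byte / 1000 GB-per-TB."""
    return bus_width_bits * pin_rate_gbps / 8 / 1000

# HBM3E: 1024-bit interface; ~9.6 Gb/s per pin (assumed) -> ~1.2 TB/s
hbm3e = stack_bandwidth_tbps(1024, 9.6)

# HBM4: interface doubled to 2048 bits; even a modest ~6 Gb/s per pin
# (assumed) already clears the 1.5 TB/s cited in the article
hbm4 = stack_bandwidth_tbps(2048, 6.0)

print(f"HBM3E ~ {hbm3e:.2f} TB/s, HBM4 ~ {hbm4:.2f} TB/s")
```

The takeaway from the arithmetic: HBM4's doubled interface width means it can exceed HBM3E's bandwidth even at lower per-pin signaling rates, which is also friendlier to power and signal integrity.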
Pairing with 2nm Logic: A Perfect Match
  • Samsung’s Strategy: At SEDEX 2025, Samsung showcased HBM4 alongside its 2nm Exynos 2600 and AI6 chip for Tesla, emphasizing synergy between logic and memory.
  • Process Alignment: Samsung uses its 1c-nm DRAM node for HBM4 and 4nm or 2nm logic for the controller die, enabling tighter integration and lower latency.
  • SK Hynix’s Approach: Focused on full-stack AI memory—HBM4, DDR5, CXL, and eSSD—positioning itself as a turnkey supplier for hyperscale AI infrastructure.
Timeline: When Did HBM4 Start?
  • Development Start: R&D began around 2022–2023, with JEDEC standardization progressing through 2024.
  • Prototype Debut: First public display at SEDEX 2025 (Oct 22–24) in Seoul.
  • Mass Production: Expected to begin late 2025 or early 2026, in time for NVIDIA Rubin and AMD MI450 ramp.