Technology Stocks : ASML Holding NV


To: BeenRetired who wrote (42618)12/13/2025 9:07:41 AM
From: BeenRetired  Respond to of 42701
 
HBM:


2025: Demand Explodes, Supply Falls Behind Again

Micron says:
  • All of its 2025 HBM supply is already sold out.
  • HBM demand is projected to jump from $18B in 2024 to $35B in 2025.
That’s ~94% demand growth in a single year.

Meanwhile, supply is not doubling in 2025; the 105% bit-supply jump came in 2024, driven by TSV (through-silicon via) capacity expansions.

Translation

Demand growth in 2025 far outstrips supply growth. The sufficiency ratio falls below parity again, meaning shortage.
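As a back-of-the-envelope check on those numbers: the dollar figures and the 105% 2024 bit-supply jump are from the text above, but the 2025 supply-growth figure below is a hypothetical placeholder (the post only says supply is "not doubling"), used to show how a supply/demand sufficiency ratio flips into deficit.

```python
# Figures from the post
demand_2024_b = 18.0   # HBM demand, $B, 2024
demand_2025_b = 35.0   # HBM demand, $B, 2025

demand_growth = demand_2025_b / demand_2024_b - 1
print(f"2025 demand growth: {demand_growth:.0%}")   # ~94%

supply_growth_2024 = 1.05   # 105% bit-supply jump (from the post)
supply_growth_2025 = 0.60   # hypothetical: anything well short of doubling

# Sufficiency ratio here = supply index / demand index, normalized to 2024.
# Below 1.0 means supply covers less than demand, i.e. shortage.
sufficiency_2025 = (1 + supply_growth_2025) / (1 + demand_growth)
print(f"2025 sufficiency ratio: {sufficiency_2025:.2f}")
```

Any plausible 2025 supply-growth number below ~94% leaves the ratio under 1.0, which is the shortage the post describes.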

2026+: Even With Capacity Doubling, Demand Still Outruns Supply

Bloomberg Intelligence notes that HBM production capacity is expected to double annually from 2023 to 2026, but the supply-demand balance is still projected to remain tight.

This is the key point:

Even with 2× annual capacity growth, AI accelerator demand is growing faster.



Bottom Line

HBM remains supply-constrained through at least 2026.

Even with aggressive TSV expansion and capacity doubling:

  • AI accelerator demand (Nvidia, AMD, Intel, Cerebras, DensityAI, cloud TPUs) is growing faster than HBM supply.
  • Every major vendor (SK hynix, Samsung, Micron) is sold out for 2025.
  • The sufficiency ratio only briefly reaches parity in 2024 before falling back into deficit.
This is why wafer-scale players like Cerebras and DensityAI are so strategically interesting — they’re trying to escape the HBM bottleneck entirely.

If you want, I can break down:

  • Vendor-by-vendor HBM output (SK hynix vs Samsung vs Micron)
  • How much HBM each AI accelerator generation consumes
  • Whether HBM4 changes the supply curve in 2026–2027
Just say the word.

Here’s the blunt truth, Mike — and it lines up perfectly with what you’ve been tracking across wafer-scale, HBM, and the broader capex cycle:

DRAM (RAM): The Plan Is… Don’t Increase Supply

The memory makers have essentially decided not to solve the RAM shortage. They’re doing the opposite.

What the big three (Samsung, SK hynix, Micron) are actually doing
  • No meaningful DRAM capacity expansion, even with “crushing shortages”.
  • Capex is being funneled into HBM, not commodity DRAM.
  • Existing DRAM lines are being converted to HBM, which reduces output of standard DDR4/DDR5.
  • They’re hedging against an AI bubble, avoiding multi-billion-dollar DRAM fabs that might go underutilized.
  • Even with 14% YoY DRAM investment growth, bit output isn’t increasing because the money is going to density improvements and HBM R&D, not wafer starts.
Why they’re doing it
  • HBM wafers generate far more revenue per mm² than DRAM.
  • AI datacenters are outbidding everyone else for DRAM and LPDDR.
  • They believe shortages and high prices will last into 2027–2028 until new fabs come online.
Translation

The DRAM shortage is intentional. The plan is to ride high margins and prioritize HBM for AI accelerators — exactly the chips you’ve been tracking (Cerebras, DensityAI, Nvidia, etc.).

PS
I thought Shrink n Stack craze would end collusion by oligarchs.
Maybe...silly me.
Maybe...just strict market reaction.

PSS
Shrink n Stack bits ONLY soar from here...the no brainer.

ASML
Village