Technology Stocks : ASML Holding NV

From: BeenRetired, 11/24/2025 1:01:07 PM
 
How Google is quietly stealing Nvidia’s AI thunder

Impact on global chip supply chain

Google’s expansion into chipmaking sends ripples across the semiconductor industry. Foundries, component suppliers, and memory manufacturers must now adapt to the growing demand for AI-specific silicon.

As hyperscalers build their own processors, the power dynamics of the chip market shift, gradually reducing dependency on long-established GPU vendors and sparking a wave of innovation in how AI hardware is designed, produced, and deployed worldwide.



Me:
"Google’s hardware independence is intensifying rivalry among major cloud providers. Amazon’s AWS and Microsoft’s Azure are also developing custom chips, creating a three-way race to control the future of AI compute." I could easily add more names. Musk and his terrafab claim. Altman is making all kind of noise about chips. I think ASML diverted buyback money to expansion and training centers having a true insider view of demand. Shortages being cited. I think the world is on a Shrink n Stack mega-cycle. Reasonable conclusion?

Copilot:
You’re actually sketching out a very plausible thesis, Mike. Let’s unpack it systematically:

Cloud Providers & Hardware Independence
  • Google (TPU, Axion CPU): Their push for in-house silicon reduces reliance on Nvidia/Intel, giving them tighter control over cost, performance, and supply chain.
  • AWS (Graviton, Trainium, Inferentia): Amazon has been aggressive in custom silicon, especially for AI inference and training workloads.
  • Microsoft Azure (Cobalt CPU, Maia AI Accelerator): Microsoft is now openly positioning its chips as a hedge against Nvidia dependency.
This three-way race is real: hyperscalers want to own the full stack — from datacenter to silicon — to avoid bottlenecks and margin capture by external vendors.

Other Entrants
  • Musk / Tesla Dojo: While framed as automotive AI, Dojo is essentially a wafer-scale compute experiment. DensityAI and Cerebras are in similar territory.
  • Altman / OpenAI chip ambitions: Altman has floated ideas about building a chip ecosystem, possibly backed by sovereign capital. Even if noisy, it signals demand pressure.
  • Meta & Apple: Both are investing in custom silicon for AI workloads, though less loudly than the hyperscalers.

ASML & Supply Chain Signals
  • ASML diverting buyback funds toward expansion and training centers is a strong tell. They don’t do that lightly — it suggests they see structural demand growth for EUV/DUV tools.
  • Shortages being cited across the stack (substrates, packaging, HBM memory) reinforce the idea that supply is lagging demand.
  • Training centers imply they’re scaling workforce capacity to match tool demand — a forward-looking move rather than a short-term patch.

Shrink n Stack Mega-Cycle

Your phrase captures the essence of what’s happening:
  • Shrink: Continued node scaling (3nm → 2nm → 1.4nm) with gate-all-around and backside power delivery.
  • Stack: 3D integration — HBM, chiplets, wafer-on-wafer, hybrid bonding.
  • Together, this defines the next decade of compute: performance gains will come less from raw shrink alone and more from stacking innovations.
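A minimal back-of-envelope sketch of that last point, in Python. The per-node density gain, the number of node steps, and the stack height are illustrative assumptions chosen for the arithmetic, not figures from this thread:

    # Rough comparison: density gain from shrink alone vs. shrink plus stacking.
    # All numbers are illustrative assumptions, not sourced data.
    shrink_gain_per_node = 1.3   # assumed ~1.3x transistor density per full node step
    nodes_advanced = 2           # e.g. 3nm -> 2nm -> 1.4nm is two steps
    stack_layers = 2             # assumed 2-high die stacking (hybrid bonding / HBM-style)

    shrink_only = shrink_gain_per_node ** nodes_advanced
    shrink_and_stack = shrink_only * stack_layers

    print(f"Shrink only:    ~{shrink_only:.1f}x effective density")
    print(f"Shrink + stack: ~{shrink_and_stack:.1f}x effective density")

Under those assumed numbers, two node steps alone yield roughly 1.7x, while adding a second stacked layer takes the same footprint to roughly 3.4x, which is the sense in which stacking rather than raw shrink carries the next round of gains.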
This is not just cyclical demand; it looks like a structural mega-cycle driven by AI workloads, sovereign compute ambitions, and hyperscaler independence.

Reasonable Conclusion?

Yes — framing it as a Shrink n Stack mega-cycle is reasonable. It synthesizes:
  • hyperscaler chip independence,
  • sovereign/entrepreneurial entrants (Musk, Altman),
  • ASML’s expansion signals,
  • and the bottlenecks in advanced packaging and memory.
It’s a systems-level view: compute demand is forcing both vertical integration (cloud providers making chips) and horizontal expansion (toolmakers scaling capacity).

Mike, given your watchlist (Multibeam, DensityAI, Cerebras), you’re already tuned into wafer-scale disruption. The Shrink n Stack framing dovetails perfectly with that — wafer-scale is the extreme end of stacking, while EUV/GAA is the extreme end of shrinking. Together, they define the boundaries of this mega-cycle.

Would you like me to map out which players dominate each layer of Shrink vs Stack (e.g., ASML in shrink, TSMC/Samsung in stack, Cerebras/DensityAI in wafer-scale) so you can see the competitive landscape more clearly?