Technology Stocks : ASML Holding NV

From: BeenRetired, 1/26/2026 5:02:04 PM
 
3nm Maia 200:

Microsoft announces new 'faster and more cost-efficient' AI chip

Microsoft has unveiled a new specialist chip designed to make large AI applications "faster and more cost-efficient" to run. The company says its Maia 200 chip is built specifically for running AI models (inference), as opposed to training them.

The company said on Monday that Maia 200 is set to be deployed in its Central US data centres and act as an "AI accelerator," offering high energy efficiency and a better price-performance ratio than competing systems.

Microsoft introduced its first home-grown AI chip, Maia 100, in November 2023, aimed chiefly at making AI content generation more efficient. The software giant is betting heavily on artificial intelligence in its core business and has struck a multi-billion-dollar pact with ChatGPT maker OpenAI to bring its technology into Microsoft products.

For AI applications, systems from semiconductor specialist Nvidia are currently the benchmark. Microsoft’s self-designed chips are intended to reduce strategic dependence on Nvidia and keep operating costs under control.

The company also expects much tighter alignment of software and hardware to deliver significantly more performance per watt.

In the fiercely contested AI market, Microsoft rivals Google, Amazon and Meta are also pursuing ambitious in-house hardware projects to reduce reliance on Nvidia.

Maia 200 has been optimized for the demands of next-generation "reasoning models" that execute complex chains of computation and "think" longer before replying. The new Microsoft chip is tuned to handle these intensive, multi-stage processes more efficiently.

It is manufactured by Taiwanese chip specialist TSMC on a modern 3-nanometre process*. "In practical terms, Maia 200 can effortlessly run today’s largest models, with plenty of headroom for even bigger models in the future," Microsoft said.

Microsoft is already working on a successor to Maia 200, developed under the codename Clea (Maia 300). It is set to be manufactured not at TSMC in Taiwan, but in the United States.

The aim is to secure the supply chain: should tensions arise around Taiwan, supply would still be assured via Intel fabs in the US states of Arizona or Ohio.

The decision in favour of Intel was also aided by the fact that TSMC is typically fully booked with orders from Apple, Nvidia and others.

*Me: Does innovation go up as more players attack Shrink-n-Stack? There's a horde going after AI chips. A highly paid horde.

Copilot:
Innovation doesn’t just “go up” when more players attack Shrink-n-Stack — it accelerates, fragments, and becomes more uneven, but the ceiling rises dramatically. What you’re noticing with Microsoft Clea/Maia 300 moving to U.S. fabs is part of a much bigger pattern: the number of teams trying to build AI silicon has exploded, and they’re all highly paid, highly motivated, and under existential pressure.

Let's break down what actually happens when a horde of elite engineers piles into the same frontier.

1. Innovation rate increases — because competition forces it

When only NVIDIA, Google, and a few hyperscalers were pushing the frontier, the cadence was predictable:

  • New node → new architecture → new packaging → repeat.
Now you have:
  • Microsoft (Maia)
  • Amazon (Trainium/Inferentia)
  • Google (TPU v7 "Ironwood")
  • Meta (MTIA)
  • OpenAI (Broadcom partnership)
  • Tesla (Dojo)
  • Cerebras, Etched, Groq, Tenstorrent, SambaNova
  • China Inc (MetaX, Biren, Moore Threads, Huawei Ascend)
  • And dozens of stealth startups with $100M+ seed rounds
This creates forced acceleration. Nobody can afford to slip a generation. Everyone is terrified of being the next Intel 10nm.

So yes — the pace of innovation increases.

2. But innovation becomes messier

More players = more dead ends, more architectural experiments, more failures.

Shrink-n-Stack is brutally unforgiving:
  • Every layer adds thermal load
  • Every TSV adds yield risk (a quick yield sketch follows this list)
  • Every HBM stack adds supply chain fragility
  • Every reticle stitch adds CDU/EPE (critical-dimension uniformity / edge-placement error) headaches
  • Every new node adds stochastic defect sensitivity
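
To put the yield bullets in numbers, here is a minimal sketch in Python. The 95% per-layer and 99% per-bond figures are illustrative assumptions, not anyone's published data:

# Compound yield of a die stack: every layer must be good AND every
# bonding/TSV step between layers must succeed (assumed independent).

def stacked_yield(layer_yield: float, layers: int, bond_yield: float) -> float:
    """Net yield of an N-layer stack: N good layers, N-1 good bonds."""
    return (layer_yield ** layers) * (bond_yield ** (layers - 1))

# Illustrative assumptions only: 95% per layer, 99% per bond/TSV step.
for n in (1, 4, 8, 12):
    print(f"{n:2d} layers -> net yield {stacked_yield(0.95, n, 0.99):.1%}")

With those assumed numbers, net yield falls from 95% for a single die to roughly 79% at four layers, 62% at eight and 48% at twelve. Losses multiply rather than add, which is exactly the "brutally unforgiving" part.
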
A horde of teams attacking the same physics wall means:
  • More breakthroughs
  • More failures
  • More divergence in approaches
This is why you see:
  • Cerebras going wafer-scale
  • Etched going fully hard-wired
  • Groq going deterministic single-core
  • Nvidia going giant monolith + chiplets + HBM
  • Microsoft trying U.S. fabs for Clea
  • Google pushing TPU systolic arrays
  • Amazon optimizing for cost per training token
It’s not one race — it’s multiple orthogonal races.

3. Shrink-n-Stack forces innovation because physics is the bottleneck

This is the part that fits your operator’s mindset perfectly.

When the bottleneck is physics — not marketing, not software — the only way forward is:
  • Better lithography
  • Better packaging
  • Better materials
  • Better thermal paths
  • Better interconnects
  • Better memory locality
  • Better floorplanning
  • Better power delivery
That’s why ASML, TSMC, Samsung, Intel, Lam, KLA, Nova, and Brewer Science are the real winners. Everyone else is downstream of physics.

More teams attacking the frontier = more pressure on the physics stack = more innovation.

4. Microsoft moving Clea/Maia 300 to U.S. fabs is a signal

It means:
  • They want tighter co-design loops
  • They want supply chain sovereignty
  • They want to avoid TSMC congestion
  • They want to differentiate at the packaging level
  • They want to own their destiny instead of waiting in line behind NVIDIA
This is exactly what you'd expect when the horde grows: vertical integration becomes mandatory.

5. The horde is highly paid, but that's the point

When you put:
  • $500k–$2M/year engineers
  • Unlimited capex
  • National-level industrial policy
  • A trillion-dollar TAM
  • A physics bottleneck
  • A 12-month obsolescence cycle
…you get the fastest innovation curve in the history of compute.

This is why you feel “overwhelmed, happily.” Your pattern recognition is correct: the pace is not slowing — it’s compounding.

6. Bottom line

Yes — innovation goes up when more players attack Shrink-n-Stack. But it doesn’t go up linearly. It goes up chaotically, expensively, and unevenly — with physics as the ultimate arbiter.

The horde doesn't dilute innovation. It forces it.

If you want, I can map the Shrink-n-Stack innovation pressure across:
  • Lithography
  • Packaging
  • Memory
  • Interconnect
  • Architecture
  • Software
  • Data center design
…to show where the next breakthroughs are most likely to emerge.

PS
It's JUST started.
ONLY accelerates.
History repeats...ever faster.

ASML
Village