Technology Stocks : ASML Holding NV

From: BeenRetired, 12/13/2025 3:11:41 PM
of 42697
 
OpenAI chip ambition to bear fruit in '26: a 3 nm design.
Copilot:
Here’s the straight answer, Mike — and Pravin wasn’t imagining things. There is an OpenAI chip effort, and it’s no longer rumor-tier. The search results confirm it clearly.

What OpenAI’s “upcoming chip” actually is
Based on the latest reporting:

1. OpenAI is co-designing a custom AI accelerator with Broadcom
CNBC reports that OpenAI and Broadcom have been collaborating for 18 months on a new line of OpenAI-designed AI chips, with Broadcom providing the silicon platform and Ethernet-based system architecture.
  • These are not off-the-shelf XPUs — they are OpenAI’s own chips, built on Broadcom’s custom silicon program.

  • They will be deployed in full racks starting late next year.

  • This is the chip Sam Altman was referring to on CNBC.
2. OpenAI’s first in-house chip design is nearly finalized
AI Magazine reports that OpenAI is finalizing its first custom chip design in 2025, to be manufactured on TSMC's 3 nm process.

Key architectural notes:
  • Systolic array compute core (like TPU/Nvidia tensor cores)

  • High-bandwidth memory integrated

  • Networking baked into the die

  • Designed for both training and inference

This aligns with the Broadcom partnership: Broadcom handles the system and networking stack; TSMC handles fabrication.
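For anyone unfamiliar with the "systolic array" term above, here's a toy sketch of the idea: a grid of multiply-accumulate cells through which operands are pulsed, each output cell accumulating one dot product. This is purely illustrative (it simulates the dataflow in plain Python); it says nothing about OpenAI's actual, undisclosed microarchitecture.

```python
# Toy simulation of an output-stationary systolic-array matmul,
# the scheme behind TPU-style tensor cores. Illustrative only --
# not a description of OpenAI's chip.

def systolic_matmul(A, B):
    """Multiply A (n x k) by B (k x m): each output cell keeps its
    own accumulator while operands stream past, one pair per step."""
    n, k = len(A), len(A[0])
    m = len(B[0])
    acc = [[0] * m for _ in range(n)]  # one accumulator per PE cell
    for t in range(k):                 # each "clock step" pushes the
        for i in range(n):             # t-th operand pair through
            for j in range(m):
                acc[i][j] += A[i][t] * B[t][j]  # one MAC per cell per step
    return acc

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```

The appeal of the layout is that operands are reused as they flow through the grid, so memory bandwidth per MAC stays low.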

3. Launch window: 2026
Analytics Insight reports that OpenAI plans to launch the chip in 2026, initially for internal use only (not sold to customers).

This mirrors the strategy behind Google's TPU, Amazon's Trainium/Inferentia, and Meta's MTIA.

Why this matters (and why Sam Altman is “very excited”)
This is OpenAI’s attempt to:
  • Reduce dependence on Nvidia

  • Lower training costs

  • Control their own hardware roadmap

  • Scale to the 10-GW compute footprint they’re planning
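To give the 10-GW figure some scale, here's a back-of-envelope sizing. The per-accelerator power draw and PUE below are illustrative assumptions, not disclosed OpenAI numbers:

```python
# Rough sizing of a 10 GW compute footprint. All figures below
# are assumptions for illustration, not OpenAI disclosures.
FOOTPRINT_W = 10e9   # 10 GW total facility power (from the reporting)
PUE = 1.3            # assumed power usage effectiveness (cooling/overhead)
ACCEL_W = 1200       # assumed all-in watts per accelerator
                     # (chip + share of host, memory, networking)

it_power = FOOTPRINT_W / PUE        # power left for IT equipment
accelerators = it_power / ACCEL_W
print(f"~{accelerators / 1e6:.1f} million accelerators")
# ~6.4 million accelerators
```

Even with generous error bars on those assumptions, the footprint implies millions of accelerators, which is why owning the chip design (and its cost structure) matters so much.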

Broadcom even disclosed $10B+ in new AI chip orders from an unnamed customer widely believed to be OpenAI.

How this fits into the broader wafer-scale / accelerator landscape
Given your watchlist — Multibeam, DensityAI, Cerebras — this is another data point in the same direction:
  • Everyone building frontier models is moving to custom silicon.

  • Networking + memory + compute co-design is the new battleground.

  • OpenAI’s chip is not wafer-scale like Cerebras WSE-3, but it’s clearly a vertically integrated training accelerator.
    This is the first time OpenAI has openly acknowledged it.

Bottom line
Yes — OpenAI has an upcoming chip. Yes — it’s real. Yes — Sam Altman was referring to this Broadcom-co-designed, TSMC-built accelerator.

Pravin didn’t miss anything; the news only recently became public.

If you want, I can break