Broadcom and Marvell land major AI accelerator orders, TSMC remains key manufacturer
Bryan Chuang, Taipei; Emily Kuo, DIGITIMES Asia
Thursday 23 October 2025
Nvidia currently holds over 90% of the GPU market. However, as Google and AWS continue to invest in self-developed AI accelerators and expand their applications, Nvidia's market share is expected to face downward pressure after 2026, potentially falling...
Copilot: Nvidia's GPU market share, currently over 90% in AI accelerators, is projected to decline after 2026, potentially falling to around 75–80% by 2028, according to industry analysts. The shift is driven by hyperscalers such as Google, AWS, and Microsoft ramping up their in-house silicon.
What’s Driving the Projected Decline
- Google’s TPU roadmap: With TPU v5p and future iterations, Google is increasingly training and serving its own models (e.g. Gemini) on internal hardware.
- AWS Trainium and Inferentia: Amazon is deploying its own chips at scale for both training and inference, reducing reliance on Nvidia.
- Microsoft's Maia and Cobalt chips: Microsoft is deploying its Maia AI accelerators in Azure for inference and training workloads, alongside its Cobalt Arm CPUs.
- Meta and OpenAI: Both remain major Nvidia customers, but Meta is already running its MTIA silicon for inference, and OpenAI is reportedly developing custom chips to optimize cost and performance.

Why Nvidia Still Leads
- Full-stack advantage: CUDA, cuDNN, TensorRT, and ecosystem lock-in make switching costly.
- Superior performance: Blackwell and its successors (e.g. Vera Rubin) continue to lead in raw compute and memory bandwidth; a roofline-style sketch after this list shows how those two numbers interact.
- Supply chain scale: Nvidia’s partnerships with TSMC, SK Hynix, and Foxconn give it unmatched production capacity.
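As a concrete way to see why both raw compute and memory bandwidth matter, here is a minimal roofline-style sketch in Python. All chip names and spec numbers below are hypothetical placeholders, not published figures for any chip mentioned above.

```python
# Minimal roofline sketch: given an accelerator's peak compute and memory
# bandwidth, estimate attainable throughput at a given arithmetic intensity.
# All spec numbers are illustrative placeholders, not vendor data.

from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    peak_tflops: float   # peak dense FP16/BF16 throughput, TFLOPS
    mem_bw_tbps: float   # HBM bandwidth, TB/s

    def ridge_point(self) -> float:
        """Arithmetic intensity (FLOPs/byte) where the roofline flattens."""
        return self.peak_tflops / self.mem_bw_tbps  # TFLOPS / (TB/s) = FLOPs/byte

def attainable_tflops(acc: Accelerator, intensity: float) -> float:
    """Roofline model: min(peak compute, bandwidth * arithmetic intensity)."""
    return min(acc.peak_tflops, acc.mem_bw_tbps * intensity)

# Hypothetical spec sheets, for illustration only
gpu = Accelerator("VendorGPU", peak_tflops=2000.0, mem_bw_tbps=8.0)
asic = Accelerator("CustomASIC", peak_tflops=900.0, mem_bw_tbps=6.0)

for acc in (gpu, asic):
    # LLM decode steps often sit at low arithmetic intensity, so memory
    # bandwidth can matter more than peak compute for that workload.
    for ai in (50.0, 500.0):
        print(f"{acc.name}: intensity={ai:.0f} FLOPs/B -> "
              f"{attainable_tflops(acc, ai):.0f} TFLOPS attainable "
              f"(ridge at {acc.ridge_point():.0f} FLOPs/B)")
```

At low arithmetic intensity, typical of LLM decoding, the bandwidth term dominates, which is why HBM bandwidth figures as heavily as peak FLOPS in accelerator comparisons.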
Strategic Implications
- Nvidia's TAM will still grow, even if share declines, because the AI compute pie is expanding rapidly; a toy calculation after this list makes the arithmetic concrete.
- Hyperscalers will dual-source: Most will still buy Nvidia chips while deploying their own accelerators for specific workloads.
- Software moat matters: Nvidia’s dominance in developer tools and frameworks remains a key differentiator.
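To make the first point concrete, here is a toy calculation. The TAM and share numbers are hypothetical illustrations, not forecasts from the article.

```python
# Toy arithmetic for the "shrinking share of a growing pie" point.
# Dollar figures and share numbers are hypothetical, not forecasts.

def accelerator_revenue(tam_usd_b: float, share: float) -> float:
    """Implied accelerator revenue in $B given market size and share."""
    return tam_usd_b * share

# Hypothetical: TAM doubles while share slips from 90% to 75%.
today = accelerator_revenue(tam_usd_b=100.0, share=0.90)   # $90B
later = accelerator_revenue(tam_usd_b=200.0, share=0.75)   # $150B
print(f"today: ${today:.0f}B, later: ${later:.0f}B, "
      f"growth: {later / today - 1:.0%}")                  # +67%
```

Even with share sliding from 90% to 75%, a doubling pie leaves absolute revenue up roughly two-thirds in this toy scenario.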
Let me know if you want to track specific hyperscaler chip roadmaps (like TPU v6 or Trainium 3) or benchmark their FLOPS/$ vs Nvidia's GB200.
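As a starting point for that FLOPS/$ comparison, a minimal sketch might look like the following. The chip names, throughput figures, and prices are placeholders, and any real comparison should use measured, workload-level throughput rather than datasheet peaks.

```python
# Back-of-envelope FLOPS-per-dollar comparison.
# Throughput and price numbers are hypothetical placeholders.

def tflops_per_dollar(peak_tflops: float, unit_price_usd: float) -> float:
    return peak_tflops / unit_price_usd

chips = {
    # name: (peak dense BF16 TFLOPS, estimated unit price in USD) - hypothetical
    "VendorGPU": (2000.0, 35000.0),
    "CloudASIC": (900.0, 12000.0),
}

for name, (tflops, price) in chips.items():
    print(f"{name}: {tflops_per_dollar(tflops, price) * 1000:.1f} GFLOPS/$")
```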