Here’s How Google’s TPU Chip May Have Managed to End the Exclusive Reign of Nvidia Stock
Story by Annika Masrani
Google is suddenly seizing control of the artificial intelligence (AI) hardware conversation. The shift began with the celebrated release of its Gemini 3 model, which was trained entirely on Google’s proprietary chips, known as Tensor Processing Units (TPUs). The resulting rally gained significant momentum after reports that Meta Platforms (META) is in talks with Google to purchase these chips.
Alphabet’s (GOOGL) shares are up 12% since the Gemini 3 debut, while Nvidia’s (NVDA) stock is down 3.4%. Google’s success has also buoyed related companies: Broadcom (AVGO), which helps design the TPUs, has seen its stock rise 16%. Google is now the third-largest company globally, behind only Nvidia and Apple (AAPL), a sign the market is taking the TPU threat seriously.
Google Creates Its Own Hardware Solution

In the 2010s, Google faced the same problem that many AI companies encounter now: traditional servers were inadequate, and Nvidia’s Graphics Processing Units (GPUs) were expensive and difficult to obtain in the massive quantities Google required. Operating at that scale demanded an internal, specialized solution.
Google deployed its first-generation TPU in 2015. Before the public even knew the hardware existed, it was already powering critical back-end services across Google’s massive product suite, including Maps, Photos, and Translate. That head start has given Google a decade of experience refining its custom silicon.
Customers Discover TPU’s Cost Advantage

Google is now on its seventh generation of TPUs and continues to use them extensively in-house. Crucially, the company is starting to win key external customers who might otherwise be running their workloads on Nvidia hardware.
Notable customers include Apple, which used TPUs to train its Apple Intelligence models, and the highly valued AI startup Anthropic, which has folded a TPU deal into its multi-cloud strategy. For many current AI workloads, TPUs offer a much better cost structure than Nvidia GPUs: they are purpose-built to perform the dense matrix math at the heart of deep learning at extremely high throughput.
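To make "matrix math for deep learning" concrete, here is a minimal, hedged sketch in Python using the JAX library, a Google-developed framework and a standard way to program TPUs. The function name dense_layer and the array shapes are illustrative, not taken from any Google codebase; on a machine without a TPU, JAX runs the same code on CPU or GPU.

    import jax
    import jax.numpy as jnp

    @jax.jit  # compiled via XLA; on a TPU host this targets the chip's matrix units
    def dense_layer(x, w, b):
        # One fully connected layer: a matrix multiply plus bias and ReLU,
        # the operation repeated billions of times when training a model.
        return jax.nn.relu(x @ w + b)

    key = jax.random.PRNGKey(0)
    x = jax.random.normal(key, (128, 512))  # a batch of activations
    w = jax.random.normal(key, (512, 256))  # a weight matrix
    b = jnp.zeros(256)

    print(dense_layer(x, w, b).shape)  # (128, 256)

Because models are expressed at this level of abstraction rather than against a specific chip, workloads written this way are comparatively easy to move between GPUs and TPUs.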
Nvidia Builds Defense Using Proprietary CUDA Software

Despite the rise of TPUs and custom chips from Amazon (AMZN) and others, Nvidia’s dominance remains largely intact for now. Customers are still highly dependent on Nvidia hardware, which allowed the company to achieve an astounding gross profit margin of 73% in the third quarter.
Nvidia began building its moat in 2004, when development of its CUDA software started. CUDA makes GPUs programmable for general-purpose tasks beyond graphics, allowing developers to write code in common languages like C. Today, virtually all AI researchers know how to use CUDA, and it serves as a high software wall that competitors, including those designing TPUs, have not yet breached.
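As an illustration of how CUDA exposes the GPU as a general-purpose processor, here is a minimal, hedged sketch. Production CUDA kernels are usually written in C or C++; this version uses Python with the Numba library, which wraps the same programming model, and it assumes an Nvidia GPU with CUDA drivers installed. The kernel name vector_add and the array sizes are illustrative only.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def vector_add(a, b, out):
        # Each GPU thread computes one element in parallel -- the
        # general-purpose, beyond-graphics computation CUDA enables.
        i = cuda.grid(1)
        if i < out.size:
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.ones(n, dtype=np.float32)
    b = np.ones(n, dtype=np.float32)
    out = np.zeros(n, dtype=np.float32)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    vector_add[blocks, threads_per_block](a, b, out)  # Numba copies the arrays to and from the GPU
    print(out[:3])  # [2. 2. 2.]

This is the moat in miniature: code like this, and the far larger kernels inside every major AI framework, runs only on Nvidia hardware.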
Rivals Diversify to Limit Vendor Risk

The strategy employed by AI leader Anthropic provides a clear glimpse into the future of enterprise AI infrastructure. Anthropic holds large contracts with Amazon Web Services, Microsoft (MSFT) Azure, and Google.
This approach employs a mix of Nvidia GPUs, Google TPUs, and Amazon’s custom Trainium hardware. By spreading its chip purchases across multiple vendors and clouds, Anthropic is actively lowering its vendor risk and avoiding dependence on any single provider, signaling a long-term headwind for Nvidia’s market share.
To sum up, Google’s Gemini 3 success and the potential sale of TPUs to Meta mark the most significant challenge yet to Nvidia’s near-monopoly on AI infrastructure. While Nvidia’s market share and 73% gross margin are currently protected by its proprietary CUDA software, the industry-wide rush to build alternatives confirms that competition will eventually erode the chip giant’s highly profitable dominance.