To: Savant who wrote (26708)11/20/2025 1:30:29 AM
From: i-node
AI Replies to questions I had about the chip discussion.

Based on the timing (last night, November 18, 2025) and details like "billions" of chips, a 3-year timeline, and threats to build a massive fab because Samsung and Intel can't keep up, it sounds like you're referring to Elon Musk's virtual fireside chat at the Baron Investment Conference. The interviewer was Ron Baron, founder and CEO of Baron Capital (a longtime Tesla investor; Musk has done several of these chats with him before). The event was livestreamed around 1:05 PM ET on November 18, and clips spread quickly on X and news sites. Musk dove deep into Tesla's (and xAI's) AI ambitions, stressing the chip shortage as a massive bottleneck.

The "Billions" of Chips: Scale and Timeline

Musk didn't say "billions" lightly—he threw out eye-popping numbers like 100–200 billion AI chips per year for Tesla alone, with a ramp-up to 10–20 billion in the first 3 years if they build their own fab. (Some reports vary slightly on the exact phrasing, but the core is consistent: industry giants like TSMC, Samsung, and Intel can't scale fast enough.) He praised Samsung and TSMC ("tremendous respect") but called their 5-year fab build times "an eternity" for Tesla's goals. Intel got a shoutout as a potential U.S. partner, but Musk hinted even their capacity might fall short without Tesla stepping in.

To hit this, Musk floated the "TeraFab" (or "Terafab"): a hypothetical "gigantic" semiconductor plant producing well beyond the ~100,000 wafers per month that defines TSMC's "Gigafab" class. It'd be optimized for Tesla's custom AI chips (the upcoming AI5, AI6, and beyond), which are inference-focused (running AI models rather than training them) and designed to be ~10x cheaper and more power-efficient than Nvidia's equivalents. Timeline: initial build in 1–2 years, "infinite expansion" by year 3 to match demand. Cost? Not specified, but it'd dwarf Tesla's existing $16.5B Samsung deal from July 2025.

Why That Many Chips? Musk's Master Plan

Musk's not hoarding chips for kicks—this is about fueling an AI/robotics revolution at unprecedented scale. He's betting everything on AI ending scarcity (e.g., "AI and robots will eliminate poverty and make everyone wealthy" via superhuman medical care, endless fun games, etc.). But it requires exponential compute power, and chips are the chokepoint. Here's the breakdown:

1. Tesla's Fleet: Full Self-Driving (FSD) for Millions of Cars
  • Every Tesla needs ~8–12 AI chips for real-time inference (processing camera/radar data for autonomy).
  • Musk projects 20–50 million Tesla vehicles/year by 2028–2030, all FSD-enabled.
  • Math: 20M cars × 10 chips = 200 million chips/year just for new builds. Add retrofits/upgrades for the existing 6M+ fleet, and it balloons.
  • AI5 (samples 2026, volume 2027) is 40x faster than current AI4; AI6 (2028) doubles that. Without enough chips, FSD stalls—no robotaxis, no $10T valuation Musk dreams of.
2. Optimus: Humanoid Robots at Billion-Scale Production
  • This is the big one—Musk sees Optimus as "the most important product" ever, outstripping cars in revenue.
  • Goal: 1 billion+ Optimus robots deployed worldwide by 2030s (starting with factories, then homes/services).
  • Each robot needs ~20–50 chips for edge AI (walking, grabbing, learning on-device, without cloud lag).
  • Math: 100M robots/year × 30 chips = 3 billion chips/year. Musk said there'll be "a lot of robots in 3 years," but scaling to billions of units would take a flood of chips.
  • Why? Robots do mundane work (manufacturing, elder care), freeing humans. But training/inference demands insane parallelism—hence Cortex 2 supercluster for Optimus sims.
3. xAI's Grok and Massive Training Clusters
  • xAI (Musk's AI startup) needs chips for training Grok models (Grok 4.1 just dropped; Grok 5 Q1 2026).
  • Musk's building Colossus (world's largest supercomputer) and plans orbital AI (solar-powered satellites via Starship).
  • Near-term: 100GW+ of AI compute per year in space alone, a pace Musk claims would exceed the entire U.S. economy's processing power every 2 years.
  • Chips here are for both training (thousands in clusters) and inference (distributed). xAI's "Grokipedia" (massive knowledge repo) needs redundant, solar-system-wide copies—chips everywhere.
  • Overlap: Tesla's Dojo supercomputer shares tech with xAI; chips power it all.
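The back-of-envelope math in items 1 and 2 above can be sketched in a few lines of Python. Note that the production rates and chips-per-unit figures are Musk's projections as quoted in the post, not confirmed Tesla numbers:

```python
# Sketch of the chip-demand arithmetic quoted above.
# Inputs are the post's projections, not confirmed figures.

def annual_chip_demand(units_per_year: int, chips_per_unit: int) -> int:
    """Chips needed per year for a given production rate."""
    return units_per_year * chips_per_unit

# FSD fleet: ~20M vehicles/year x ~10 inference chips each
fsd = annual_chip_demand(20_000_000, 10)        # 200 million chips/year

# Optimus: ~100M robots/year x ~30 edge-AI chips each
optimus = annual_chip_demand(100_000_000, 30)   # 3 billion chips/year

print(f"FSD fleet: {fsd:,} chips/year")
print(f"Optimus:   {optimus:,} chips/year")
print(f"Combined:  {fsd + optimus:,} chips/year")
```

Even combined, that's ~3.2 billion chips/year, so the "100–200 billion per year" figure implies demand sources (retrofits, clusters, satellites) far beyond new vehicle and robot builds.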


The Bigger Picture: Why Threaten a Fab?

Existing foundries are maxed—Nvidia alone gobbles 80% of TSMC's high-end capacity. Musk's worried about geopolitics (Taiwan/China risks), U.S. energy limits, and speed. A TeraFab lets Tesla control costs (~1/10th Nvidia's), power (down to 250W/chip for robots), and supply. Nvidia's Jensen Huang called it "extremely hard," but Musk thrives on hard.

If this pans out, it's not just cars/robots—it's AI reshaping everything. Musk's all-in: "Chips on the brain." For more, check the Baron chat clips on X or Tesla's Q3 '25 earnings replay. What's your take—excited for Optimus, or skeptical on the timeline?