Technology Stocks : ASML Holding NV

From: BeenRetired, 10/9/2025 4:09:32 AM
1 Recommendation (Tobias Ekman)

TSMC 3Q25 +30% YoY on EUV/ArFi.

TSMC posts Q3 revenue of $32.5 billion, above forecasts

Story by Reuters

TAIPEI (Reuters) - TSMC, the world's largest contract chipmaker, reported on Thursday third-quarter revenue of T$989.92 billion ($32.48 billion), beating market forecasts and up 30% from the year-ago period on surging interest in artificial intelligence applications.

An LSEG SmartEstimate, drawn from 22 analysts, had predicted third-quarter revenue of T$973.26 billion.

Taiwan Semiconductor Manufacturing Co (TSMC) is a major supplier to companies including Nvidia and Apple.

($1 = 30.4740 Taiwan dollars)
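
A quick sanity check on those figures (Python sketch; the only inputs are the numbers quoted in the story, and the year-ago base is inferred from the stated 30% growth, not a reported figure):

# Check of the quoted figures using only what the story gives.
twd_revenue_bn = 989.92          # 3Q25 revenue, T$ billion (Reuters)
fx = 30.4740                     # Taiwan dollars per U.S. dollar (Reuters)
yoy_growth = 0.30                # "up 30% from the year-ago period"

usd_revenue_bn = twd_revenue_bn / fx                  # ~32.48, matching the $32.48B headline
implied_3q24_bn = twd_revenue_bn / (1 + yoy_growth)   # ~761.5 T$ billion, inferred, not reported

print(f"USD revenue: ${usd_revenue_bn:.2f}B")
print(f"Implied 3Q24 base: T${implied_3q24_bn:.1f}B")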

Copilot:
Exactly, Mike — that’s the kind of field-driven lens that cuts through the surface narrative. The AI boom is the headline, but the real throughput story is buried in the fab floor. TSMC’s revenue surge likely reflects:
  • EUV ramp at N3 and N3E nodes: As Apple, NVIDIA, and AMD push into 3nm-class designs, EUV layers multiply. Each layer adds cost, and TSMC’s EUV utilization is climbing — not just in count but in complexity (multi-patterning, tighter pitch).
  • ArFi litho still dominant at legacy and mid nodes: N5 and N7 still rely heavily on ArFi with multiple exposures. Even N3E isn’t fully EUV — ArFi remains essential for contacts, vias, and BEOL layers.
  • AI demand = high-performance logic = advanced nodes: AI workloads drive demand for SRAM-rich, high-density logic — exactly what N5/N3 are optimized for. That means more reticle shots, more mask sets, and more litho hours per wafer.
So while Reuters frames it as “AI interest,” the actual revenue driver is the litho intensity per wafer. EUV and ArFi are the silent engines behind that 30% YoY jump. Want to riff on how this maps to ASML’s eScan 2200 throughput or KLA’s Gen5 inspection cycle?
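
Rough sketch of that "litho intensity per wafer" idea, using invented round numbers; the exposure counts and scanner throughputs below are placeholders for illustration, not TSMC or ASML disclosures:

# Illustrative only: why scanner time per wafer tends to climb at advanced nodes
# even as EUV replaces some multi-patterned ArFi steps. All values are assumed.
EUV_WPH, ARFI_WPH = 160, 295     # assumed wafers per hour per scanner pass

nodes = {
    #  node: (EUV exposures, ArFi exposures) per wafer -- assumed
    "N7":  (0,  95),
    "N5":  (14, 75),
    "N3E": (22, 65),
}

for node, (euv, arfi) in nodes.items():
    scanner_hours = euv / EUV_WPH + arfi / ARFI_WPH   # scanner-hours per wafer across all passes
    print(f"{node}: ~{scanner_hours:.2f} scanner-hours/wafer "
          f"({euv} EUV + {arfi} ArFi exposures)")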

AI-related workloads—including logic and HBM—likely account for 30–40% of total EUV/ArFi lithography output at advanced nodes, with EUV skewing higher. NAND is excluded, as it's largely ArF/ArFi and not AI-driven.

Here’s how that breaks down from a process and workload perspective:

Lithography Output Breakdown by AI Workload
  • EUV Lithography (5nm and below):
    • AI-centric logic (NPU, GPU, CPU): These dominate EUV usage. NVIDIA, AMD, Apple, and Intel’s AI accelerators and SoCs are built on 5nm/3nm nodes with heavy EUV layer counts.
    • HBM peripheral logic: EUV is used for logic and I/O layers in HBM stacks (e.g., Micron, SK hynix, Samsung), though not for the DRAM cell arrays themselves.
    • Estimated AI share of EUV output: 40–50%, driven by hyperscale, edge AI, and mobile SoCs.
  • ArFi Lithography (7nm–28nm):
    • AI logic at older nodes: ArFi still supports mid-node AI chips, especially in edge devices and embedded systems.
    • HBM and other memory logic: ArFi is used for peripheral logic in HBM at older nodes.
    • Non-AI logic and legacy compute: Smartphones, PCs, and general-purpose chips still consume large ArFi volumes.
    • Estimated AI share of ArFi output: 20–30%, with growth driven by AI edge deployments.
Why NPU Is Just the Tip of the Iceberg
  • The NPU is one block in a larger SoC. AI workloads also drive:
    • SRAM-heavy cache structures
    • High-bandwidth interconnects
    • Power-efficient logic for inference and training
  • These all require advanced litho, especially EUV, and are counted in AI-driven output.
Combined Estimate
  • Weighted across EUV and ArFi, AI workloads (logic + HBM) likely represent 30–40% of total advanced-node lithography output.
  • This includes hyperscale GPUs, mobile SoCs, edge AI chips, and HBM logic layers—not NAND, which is excluded due to its minimal AI relevance and different process path.
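
A minimal sketch of how the blended 30–40% range falls out of the per-bucket estimates above, assuming (purely for illustration) that EUV carries about 40% of advanced-node litho output and ArFi the remaining 60%:

# Blend the per-bucket AI shares quoted above into one range.
# The 40/60 EUV-vs-ArFi output split is an assumption, not a published figure.
euv_weight, arfi_weight = 0.40, 0.60

ai_share_euv  = (0.40, 0.50)    # "Estimated AI share of EUV output: 40-50%"
ai_share_arfi = (0.20, 0.30)    # "Estimated AI share of ArFi output: 20-30%"

low  = euv_weight * ai_share_euv[0] + arfi_weight * ai_share_arfi[0]
high = euv_weight * ai_share_euv[1] + arfi_weight * ai_share_arfi[1]
print(f"Blended AI share of advanced-node litho output: {low:.0%}-{high:.0%}")   # ~28%-38%
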
Want to map this to ASML’s tool utilization or KLA’s inspection load per AI wafer? We could break down reticle counts or overlay budgets next.