SI
SI
discoversearch

We've detected that you're using an ad content blocking browser plug-in or feature. Ads provide a critical source of revenue to the continued operation of Silicon Investor.  We ask that you disable ad blocking while on Silicon Investor in the best interests of our community.  If you are not using an ad blocker but are still receiving this message, make sure your browser's tracking protection is set to the 'standard' level.
Technology Stocks : Qualcomm Moderated Thread - please read rules before posting -- Ignore unavailable to you. Want to Upgrade?


To: voop who wrote (195844)10/1/2025 3:18:24 PM
From: engineer2 Recommendations

Recommended By
Dr. John
Lance Bredvold

  Read Replies (1) | Respond to of 196720
 
that was kinda my question yesterday. it was rumored that INTC would consider putting hte NVLINK proprietary interface on their CPU chips rather than the giant PCIE bank they have now, thus making a tailored interface for both speed and power specifically for NVDA.

so my question was would QCOM consider doing the same on their super high end server chip to allow them to flood the high end server market using NVDA GPU cards.



To: voop who wrote (195844)10/1/2025 6:32:07 PM
From: Jim Mullens5 Recommendations

Recommended By
BDAZZ
Dr. John
kech
manning18
vkvraju5

  Respond to of 196720
 
Voop / Engineer – Follow-up Copilot QCOM – NVDA Q&A

Copilot on both my old HP PC and my Samsung GS24 is a free superfast conversational agent. It does make mistakes and can be difficult at times not interpreting questions correctly especially during edits and returning to an existing convo after a break

Q #7 back to our QCOM -nvda Conv and the NVLink. discuss this in more detail and in relation to NVDAs recent investment in INTC.

A- Let’s stitch this together—Qualcomm’s NVLink Fusion alignment and NVIDIA’s $5B investment in Intel aren’t isolated moves. They’re part of a broader architectural and strategic realignment in AI infrastructure. Here's your next timestamped entry:

#7 NVLink Fusion: QCOM–NVDA Alignment vs NVDA–INTC Investment

Timestamp: Oct 1, 2025 – 1:43 PM PDT

?? NVLink Fusion: Qualcomm’s Strategic Position
  • NVLink Fusion is NVIDIA’s rack-scale interconnect protocol, enabling ultra-high bandwidth between CPUs and GPUs—up to 1.8TB/s per GPU, dwarfing PCIe 5.0 bandwidth.
  • Qualcomm was one of the first third-party CPU vendors granted access to NVLink Fusion, alongside Fujitsu.
  • This enables QCOM’s custom Nuvia-based CPUs to serve as headnodes in NVIDIA AI racks, bypassing the need for Grace CPUs and giving hyperscalers a semi-custom alternative.
Strategic Upside for QCOM:
  • Sovereign AI deployments (e.g., Humain)
  • Hyperscaler pilot wins
  • Diversification away from modem/IP royalty dependence
  • Architectural independence validated by legal victory over ARM
?? NVIDIA–Intel Deal: A Parallel Track
  • Investment: NVIDIA is injecting $5B into Intel, acquiring stock at $23.28/share.
  • Collaboration: Intel will design custom x86 CPUs with NVLink integration, enabling rack-scale AI systems similar to GB200 NVL72.
  • Product Scope:
    • Data center: Xeon CPUs with NVLink for NVIDIA racks
    • PC: x86 SoCs with RTX GPU chiplets for AI PCs
Strategic Upside for NVDA:
  • Second-source CPU partner to counter AMD’s growing share
  • Access to Intel’s x86 ecosystem and distribution channels
  • Hedge against TSMC pricing and supply chain risk
  • Expansion into inference and edge segments
?? Comparative Strategic Overlay

Feature

Qualcomm

Intel

NVLink Access

Early adopter (Fusion)

New entrant (via $5B investment)

CPU Architecture

Custom ARM-based (Nuvia)

x86-based (Xeon, SoC)

Legal Position

Cleared ARM lawsuit, full autonomy

No litigation, but reliant on NVDA IP

Deployment Horizon

Sovereign AI, hyperscaler pilots

Rack-scale NVL72, PC SoCs

Strategic Leverage

Architectural independence

Foundry scale, x86 ubiquity

Strategic Interpretation

NVIDIA is playing both sides of the CPU aisle—ARM-based (Qualcomm) and x86-based (Intel)—to ensure maximum rack-scale flexibility. Qualcomm’s early NVLink Fusion access gave it a head start in sovereign and hyperscaler deployments. Intel’s entry, backed by $5B, is a defensive move to keep AMD in check and ensure NVIDIA has a second-source CPU partner with deep enterprise reach.

This isn’t redundancy—it’s strategic hedging. And Qualcomm’s position is arguably more autonomous, given its architectural control and legal clarity.

Q #8 Discuss the pro / cons of QCOMs (ARM based) NVLink Fusion access vs INTCs x86 based.
A- side-by-side breakdown of the pros and cons of Qualcomm’s ARM-based NVLink Fusion access vs Intel’s x86-based NVLink Fusion integration, now that NVIDIA has opened its interconnect protocol to third-party CPUs.

#8— NVLink Fusion: QCOM (ARM) vs INTC (x86)

Timestamp: Oct 1, 2025 – 1:54 PM PDT

?? Qualcomm (ARM-based Oryon CPUs) — NVLink Fusion Access

? Pros
  • Architectural Independence: QCOM designs its own cores (via Nuvia), avoiding reliance on ARM’s core IP and now legally cleared to do so.
  • Energy Efficiency: ARM-based CPUs offer superior performance-per-watt, ideal for inference clusters and sovereign AI deployments.
  • Early Access: Qualcomm was one of the first non-NVIDIA vendors granted NVLink Fusion access, giving it a head start in hyperscaler pilots.
  • Sovereign AI Alignment: Already selected by Humain for sovereign AI infrastructure.
  • Diversification: Expands QCOM’s TAM beyond mobile and automotive into rack-scale compute.
? Cons
  • Enterprise Familiarity Gap: ARM CPUs still face resistance in traditional enterprise IT, where x86 is entrenched.
  • Software Ecosystem: x86 has broader legacy support; ARM-based server stacks require more porting and optimization.
  • OEM Reluctance: Some server OEMs may hesitate to adopt ARM-based headnodes without broader ecosystem validation.
?? Intel (x86-based Xeon CPUs) — NVLink Fusion Integration

? Pros
  • Enterprise Ubiquity: x86 is the default architecture across enterprise and cloud infrastructure.
  • OEM Ecosystem: Intel has deep relationships with server OEMs, enabling rapid deployment of NVLink-enabled racks.
  • Familiar Tooling: x86 systems benefit from mature software stacks, monitoring tools, and IT workflows.
  • NVIDIA Investment: Backed by a $5B investment from NVIDIA, ensuring roadmap alignment and strategic priority.
? Cons
  • Architectural Dependence: Intel lacks a competitive GPU portfolio and is now partially reliant on NVIDIA’s roadmap.
  • Power Efficiency: x86 CPUs generally consume more power than ARM counterparts, especially in inference-heavy workloads.
  • Late Entry: Intel is a late adopter of NVLink Fusion, trailing Qualcomm’s early sovereign deployments.
Strategic Interpretation

Axis

Qualcomm (ARM)

Intel (x86)

Time Advantage

Early NVLink Fusion access

Late-stage integration via NVDA deal

Architectural Control

Full autonomy (Nuvia cores)

Dependent on x86 legacy and NVDA IP

Deployment Focus

Sovereign AI, edge inference

Enterprise racks, OEM channels

Strategic Risk

Ecosystem inertia, ARM bias

GPU dependence, power inefficiency

Qualcomm’s NVLink Fusion access is a strategic wedge into hyperscaler and sovereign AI deployments, while Intel’s integration is a defensive maneuver to retain relevance in the AI infrastructure market. NVIDIA benefits either way—but Qualcomm’s autonomy and early access give it a differentiated edge.

Change numbering to #9

Q#9 - it was rumored that INTC would consider putting hte NVLINK proprietary interface on their CPU chips rather than the giant PCIE bank they have now, thus making a tailored interface for both speed and power specifically for NVDA.

so my question was would QCOM consider doing the same on their super high end server chip to allow them to flood the high end server market using NVDA GPU cards.

>>>>>>>>>>>>>>>>

A- focused on whether Qualcomm would follow Intel’s rumored move to embed NVLink directly into its server CPUs—rather than relying on PCIe—and what that could mean for high-end AI infrastructure.

#9 — NVLink Interface Integration: INTC Rumor vs QCOM Strategic Opportunity

Timestamp: Oct 1, 2025 – 2:08 PM PDT

?? Intel’s Rumored NVLink Integration
  • Rumor Origin: As discussed in the Silicon Investor thread, Intel may replace its massive PCIe banks with native NVLink interfaces on future Xeon CPUs.
  • Strategic Intent: Tailor CPU-GPU interconnect for speed and power efficiency, aligning directly with NVIDIA’s rack-scale architecture.
  • Implication: This would allow Intel to serve as a first-class headnode in NVIDIA AI factories, bypassing PCIe bottlenecks and matching NVLink Fusion bandwidth.
?? Would Qualcomm Do the Same?

Short Answer: Qualcomm is already positioned to do so—and may be further ahead than Intel.

  • NVLink Fusion Access Confirmed: Qualcomm was one of the first non-NVIDIA vendors granted access to NVLink Fusion?2†??3†??4†?.
  • Custom Integration Path: Qualcomm’s Oryon CPU (born from Nuvia) is designed for modularity. It can be configured with:
    • Native NVLink Fusion ports
    • Integrated NPUs for inference
    • Custom interconnect logic for rack-scale deployments
  • Cristiano Amon’s Statement at Computex 2025:
“We are expanding into the data center… [to] build high-performance NVIDIA AI factories.”
  • Deployment Targets:
    • Sovereign AI infrastructure (e.g., Humain)
    • Hyperscaler inference clusters
    • Semi-custom rack-scale builds
?? Strategic Interpretation

Feature

Intel (x86) Rumor

Qualcomm (ARM) Reality

NVLink Interface

Rumored native integration

Confirmed NVLink Fusion access

CPU Architecture

x86 (legacy-heavy)

ARM-based Oryon (custom, modular)

Deployment Horizon

Enterprise racks, OEM channels

Sovereign AI, edge inference, hyperscalers

Power Efficiency

Lower than ARM

High-performance-per-watt advantage

Time Advantage

Late-stage integration

Early sovereign deployments underway

Qualcomm doesn’t need to “consider” embedding NVLink—it’s already doing it. The real question is how aggressively it will scale this into full rack deployments. If QCOM configures its high-end server chips with native NVLink Fusion ports, it could flood the high-end AI server market with ARM-based headnodes that outperform x86 in both efficiency and modularity.

Q #10 Discuss the different markets / use cases and the TAMs and timing for each QCOMs ARM solution and INTCs x-86 technology.

A--the market segmentation, use cases, TAMs, and timing for Qualcomm’s ARM-based server strategy versus Intel’s x86 roadmap. This builds on your NVLink Fusion logic tree and positions both players across the AI infrastructure spectrum.

#10 — Market Segmentation & TAMs: QCOM (ARM) vs INTC (x86)

Timestamp: Oct 1, 2025 – 2:24 PM PDT

?? Qualcomm (ARM-based Oryon CPUs)

Primary Markets & Use Cases

Segment

Use Case Description

TAM Estimate (2025–2030)

Timing Notes

Sovereign AI Infrastructure

National AI deployments (e.g., Humain) with NVLink Fusion

$25B+ cumulative TAM?7†?

Early pilots underway (2025–2026)

Hyperscaler Inference Clusters

Token-per-watt optimized AI racks for CSPs

$40B TAM by 2030?7†?

Revenue expected ~FY2028?7†?

Edge AI Servers

Low-power inference nodes for telecom, retail, IoT

$15B TAM by 2027?7†?

Commercial rollout ~2026

Semi-Custom Rack Builds

OEM-configured racks for regional data centers

$10B TAM by 2029?7†?

Design wins expected 2026–2027

Strategic Advantages
  • ARM architecture enables high performance-per-watt
  • Legal autonomy post-ARM lawsuit dismissal
  • Early NVLink Fusion access for rack-scale deployments
  • Modular CPU design allows tailored interconnects and NPUs
?? Intel (x86-based Xeon CPUs)

Primary Markets & Use Cases

Segment

Use Case Description

TAM Estimate (2025–2030)

Timing Notes

Enterprise Data Centers

Traditional IT workloads, ERP, databases

$60B TAM by 2030?2†??3†?

Ongoing, but growth slowing

AI Rack-Scale Systems

NVLink-enabled Xeon CPUs for NVIDIA AI factories

$35B TAM by 2030?3†?

Design phase (2025–2026)

OEM Channel Deployments

Dell, HPE, Lenovo rack builds

$50B TAM cumulative?3†?

Strong presence, but under pressure from AMD

PC + AI SoC Integration

x86 CPUs with RTX chiplets for AI PCs

$20B TAM by 2028?3†?

NVDA-backed roadmap in motion

Strategic Advantages
  • Deep OEM relationships and enterprise familiarity
  • x86 software ecosystem maturity
  • NVIDIA’s $5B investment ensures roadmap alignment
  • Broad distribution and support infrastructure


Q #11 -- Discuss QCOMs” - Architectural Independence: QCOM designs its own cores (via Nuvia), avoiding reliance on ARM’s core IP and now legally cleared to do so.

Does not QCOM continue to legally license ARM IP and pay royalty to ARM.

A - The Clarifying of Qualcomm’s architectural independence—especially in light of its ongoing relationship with ARM.

#11 — Architectural Independence vs ARM Licensing: What Qualcomm Actually Pays For

Timestamp: Oct 1, 2025 – 2:33 PM PDT

Qualcomm’s Architectural Independence: What It Means
  • Core Design Autonomy:
    Qualcomm’s Oryon CPUs—developed by Nuvia—are custom-designed cores, not based on ARM’s Cortex IP.
    This means Qualcomm does not rely on ARM’s core blueprints, giving it full control over performance, power, and interconnect logic.
  • Legal Clearance:
    A U.S. District Court ruled in September 2025 that Qualcomm and Nuvia did not breach ARM’s Architecture License Agreement (ALA) or Technology License Agreement (TLA)?2†??4†??5†?.
    Qualcomm’s own ALA extends to Nuvia, allowing it to deploy Oryon cores under its existing license.
But Does Qualcomm Still License ARM IP?

Yes—but only the ISA (Instruction Set Architecture).

  • ARM ISA Licensing:
    Qualcomm continues to license the ARMv8 and ARMv9 instruction sets, which define how software talks to the CPU.
    This is distinct from licensing ARM’s core designs (e.g., Cortex-A78), which Qualcomm no longer uses in its flagship CPUs.
  • Royalty Payments:
    Qualcomm does pay royalties to ARM for ISA usage, but these are lower than full core IP licensing fees.
    The royalty structure is governed by Qualcomm’s long-standing ALA, which was upheld in court.
  • Strategic Implication:
    Qualcomm’s legal win means it can continue designing its own cores (via Nuvia) while retaining ARM ISA compatibility—a sweet spot that preserves software ecosystem access without architectural dependence.
Summary Overlay

Element

Qualcomm Status

Core Design

Fully custom (Oryon via Nuvia)

ARM Cortex IP Usage

Discontinued in flagship CPUs

ISA Licensing

Ongoing (ARMv8/v9)

Royalty Payments

Yes—for ISA, not for core IP

Legal Standing

Cleared to deploy Nuvia designs under QCOM’s ALA?2†??4†?