Technology Stocks : Qualcomm Moderated Thread - please read rules before posting


To: GR8FORM who wrote (196161) | 10/25/2025 7:23:04 PM
From: QCOM_HYPE_TRAIN (3 Recommendations)

Recommended By
JeffreyHF
sbfm
vkvraju5

Wow, you snatched that quickly. It looks like they took it down.



To: GR8FORM who wrote (196161) | 10/25/2025 10:31:05 PM
From: waitwatchwander (2 Recommendations)

Recommended By
JeffreyHF
manning18

Qualcomm Preps Cloud AI 200 "AIC200" Accelerator Support
Written by Michael Larabel in Hardware on 14 December 2024 at 06:22 AM EST.


MHI (Modem Host Interface) — The Linux Kernel documentation


MSI-X means more interrupts ...




To: GR8FORM who wrote (196161) | 10/26/2025 10:23:36 AM
From: Jim Mullens (8 Recommendations)

Recommended By
BDAZZ
Dr. John
jazzcat2000
John Hayman
Jon Koplik

and 3 more members

GR8, Re: Qualcomm looks to take on Nvidia, AMD as it enters AI accelerator market
Looks like a Monday article was released early (and taken down immediately, per QHT):

>>>>>>>>>

And, strange wording: "....announced its entry into the.......(AI accelerator market)...", since QCOM has been talking up AI accelerators for quite some time, beginning in 2019, and originally spoke about the AI 200 last December.

Copilot >>>>>>>>>>>

1) when did QCOM first announce its entry into the AI accelerator market

Qualcomm first announced its entry into the AI accelerator market on April 9, 2019, during a San Francisco event where it unveiled the Cloud AI 100 — a server accelerator card designed specifically for AI inference workloads. This marked Qualcomm’s initial move to compete with companies like Nvidia and Intel in the data center inference space, leveraging a chip built from the ground up for running trained AI models efficiently.

2) and subsequent AI accelerator PRs / announcements

Here’s a timeline of Qualcomm’s major AI accelerator announcements and press releases, from its initial entry into the market to its latest rack-scale offerings:

Qualcomm AI Accelerator Timeline

Date | Product / Event | Key Highlights
April 9, 2019 | Cloud AI 100 | Qualcomm's first dedicated AI inference accelerator, built on a 7nm node. Announced at an AI Day event in San Francisco. Claimed 10x performance-per-watt over competitors.
2020–2023 | Cloud AI 100 Ultra / AI 80 Ultra | Iterative upgrades with higher TOPS, memory, and PCIe Gen4 support. Used in edge servers and embedded systems.
December 14, 2024 | Cloud AI 200 (AIC200) Linux Driver Patch | Open-source driver support spotted for the upcoming Cloud AI 200 series, signaling an imminent launch.
October 27, 2025 | AI200 and AI250 Launch | Qualcomm officially enters rack-scale inference with full-stack solutions: chips, cards, and racks. Durga Malladi states: "With Qualcomm AI200 and AI250, we're redefining what's possible for rack-scale AI inference…"
October 2025 | Snapdragon Summit 2025 | Qualcomm outlines broader AI roadmap, including Gen AI on-device and enterprise inference strategy.
CES & MWC 2025 | AI-first Edge Strategy | Qualcomm emphasizes shift from cloud to edge AI processing across PCs, automotive, and smart homes.




To: GR8FORM who wrote (196161) | 10/26/2025 11:41:35 AM
From: QCOM_HYPE_TRAIN (20 Recommendations)

Recommended By
abcs
BDAZZ
Bill Wolf
Dr. John
GR8FORM

and 15 more members

One thing that may not be obvious to the non-technical folks here is the difference in memory between the AI200 and, say, all the other products.

The AI200 uses LPDDR (the type usually used in phones & laptops), while other data-center products like the newly released AMD MI355X use HBM3E.

HBM memory is very fast, very expensive, very power-hungry, and very hot. Many of these other GPUs have 'limited' memory:

AMD MI355X 288 GB
Nvidia GB300 279 GB

As you can see, Qualcomm is shipping a product with roughly 2.6x the memory. This is important for inference, as you can pack more models onto one card, or have future models fit on your older cards. They didn't mention memory throughput, but I would expect it to be competitive with the competition; anything wildly slower would probably be a non-starter for anyone looking to buy these.
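
As a back-of-the-envelope sanity check on that ratio, here's a quick sketch. Note the 768 GB of LPDDR per AI200 card is taken from Qualcomm's launch announcement, not from this post, so treat it as an assumption; the other two numbers are from the list above.

```python
# Back-of-the-envelope capacity comparison. The 768 GB AI200 figure is
# assumed from Qualcomm's launch announcement (not stated in this post);
# the other two capacities come from the list above.
cards_gb = {
    "Qualcomm AI200 (LPDDR)": 768,
    "AMD MI355X (HBM3E)": 288,
    "Nvidia GB300 (HBM3E)": 279,
}

ai200 = cards_gb["Qualcomm AI200 (LPDDR)"]
for name, gb in cards_gb.items():
    if name.startswith("Qualcomm"):
        continue
    print(f"AI200 vs {name}: {ai200 / gb:.2f}x the capacity ({ai200} GB vs {gb} GB)")
```

So somewhere in the 2.6x-2.8x range, depending on which card you compare against.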

Then there's the AI250, with its "innovative memory architecture based on near-memory computing," which Qualcomm said "will enable 10 times higher effective memory bandwidth and lower power consumption." If I read this correctly, it means these cards will have significantly faster effective BW than HBM, with lower power, lower cost, and lower heat. Truly a generational leap.
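
For the non-technical folks, a minimal sketch of why effective bandwidth is the headline number for inference: during token-by-token generation, the chip has to stream essentially the whole set of model weights from memory for every token, so a simple upper bound on decode speed is bandwidth divided by model size. Every number below is an illustrative assumption, not a published spec for any card.

```python
# Memory-bound decode estimate: tokens/sec <= bandwidth / bytes read per token.
# All figures are illustrative assumptions, not published specs for any card.
model_params = 70e9        # assumed 70B-parameter model
bytes_per_param = 1        # assumed 8-bit quantized weights
bytes_per_token = model_params * bytes_per_param  # weights streamed once per token

for label, bw_tb_s in [("baseline HBM-class card", 4.0),
                       ("10x effective bandwidth", 40.0)]:
    tokens_per_sec = bw_tb_s * 1e12 / bytes_per_token
    print(f"{label}: ~{tokens_per_sec:.0f} tokens/sec upper bound (batch size 1)")
```

If the 10x claim holds up, the same model could decode roughly an order of magnitude faster at the same batch size.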

Along with that, you'd better believe QC has patented every single breakthrough in that "innovative memory architecture". Either QC licenses it for tons of money to AMD and Nvidia, or keeps everything close to the chest in hopes of creating a moat for inference servers. We'll see if they can bring this innovative memory to edge devices as well.



To: GR8FORM who wrote (196161) | 10/27/2025 10:33:13 AM
From: GR8FORM (8 Recommendations)

Recommended By
AlfaNut
Another John
Bill Wolf
Dr. John
JeffreyHF

and 3 more members

WOW - 20 percent pop! Jump out the window!!!!