Technology Stocks : Qualcomm Moderated Thread - please read rules before posting


To: sbfm who wrote (196066)10/16/2025 10:26:16 AM
From: QCOM_HYPE_TRAIN
 
Hopefully it will also filter through to the analysts that no one is going to be doing inference on a CPU. Arm doesn't have quality peripheral IP, like an NPU or GPU, the way Snapdragon does.



To: sbfm who wrote (196066)10/16/2025 11:04:02 AM
From: waitwatchwander
 
Distributing anything doesn't reduce anything. It just moves the load elsewhere.

2nm fabrication can be applied anywhere, including to moving bits to and from the places of compute.

Haas isn't being smart. He's just painting a picture built on hypocrisy, without stating anything that shows a full understanding.

youtu.be



To: sbfm who wrote (196066)10/16/2025 1:04:06 PM
From: Jim Mullens
 
Sbfm, Haas planted the seed...

Re: Arm Holdings CEO Haas: move AI workloads from the cloud to reduce power

QCOM has been talking about, and developing its chips for, AI at the edge for years. Eventually Haas' comments will filter through to the analysts, and someone will figure out that Snapdragons will power that move.

>>>>>>>>>>>

Interestingly, Haas is promoting QCOM's high-efficiency / low-power case for QCOM SoCs performing inference at the edge, but he did not suggest performing inference within the data center (where QCOM is also developing a solution).

1. "Over time, he suggested, a large number of multi-gigawatt data centers won't be sustainable."

2. "One is low power, the lowest power solution you can get in the cloud. Arm really contributes there. But I think even more specifically is moving those AI workloads away from the cloud to local applications."

This is exactly the point of my recent posts. QCOM is developing solutions to do exactly that: drastically lower-power solutions and drastically lower TCO (cost savings) for inference within the data center, covering both new builds and retrofits of the existing 11,800 data centers, averaging 500 racks per DC (each rack requiring 16 Cloud AI 100/200 accelerators and 2 Oryon CPUs, about $220K of QCOM revenue per rack). Again, over a decade the TCO saved is $300K per rack, which more than covers the original $220K cost and leaves about $80K in net savings.
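To sanity-check the figures above, here is a quick back-of-envelope calculation. All inputs (11,800 data centers, 500 racks each, $220K per rack, $300K ten-year TCO savings) are the poster's own assumptions, not confirmed QCOM numbers:

```python
# All inputs below are the poster's assumptions, not confirmed QCOM figures.
data_centers = 11_800           # existing data centers to retrofit
racks_per_dc = 500              # average racks per data center
revenue_per_rack = 220_000      # 16 Cloud AI accelerators + 2 Oryon CPUs, $

total_racks = data_centers * racks_per_dc        # 5,900,000 racks
total_revenue = total_racks * revenue_per_rack   # total retrofit revenue, $

tco_saved_per_rack = 300_000    # claimed 10-year TCO savings per rack, $
net_savings_per_rack = tco_saved_per_rack - revenue_per_rack

print(f"Total racks:        {total_racks:,}")
print(f"Total revenue:      ${total_revenue / 1e12:.3f} trillion")
print(f"Net savings / rack: ${net_savings_per_rack:,}")
```

Under those assumptions the retrofit opportunity works out to 5.9 million racks and $1.298 trillion in revenue, with each rack netting its owner about $80K after the hardware cost, which matches the figures quoted in the post.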

>>>>>>>>>>>>

Haas planted the seed for CA to expand upon this in QCOM's upcoming earnings CC.

Will CA have the foresight to follow through???

+ Broadcom's stock popped 10% on their OpenAI announcement, and that was only $30B ('26-'29).

+ What would the QCOM SP move be on $1.298 trillion of revenue (DC retrofit only) over 10 years???

CA, the ball's in your court!!! Let's get this stock moving up where it belongs.