Technology Stocks : Qualcomm Moderated Thread - please read rules before posting
QCOM 138.87+1.1%Feb 9 3:59 PM EST

To: METMAN who wrote (197620)2/6/2026 11:50:23 AM
From: Wildbiftek (10 Recommendations)

Recommended By
Dr. John
GR8FORM
Jon Koplik
kech
manning18

and 5 more members

To me, it's a pivot by Nvidia that confirms GPUs' unsuitability for inference. They are general-purpose von Neumann architectures that run generic code quite well, but that same generality saddles them with higher-precision ALUs and data paths that shuttle data around far more than inference workloads require.
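A rough way to see the precision point: weight traffic scales directly with bits per parameter, so a datapath fixed at fp32/fp16 moves several times the bytes an int8 or int4 inference engine needs. A minimal sketch (the 70B parameter count is an assumption for illustration, not a figure from the post):

```python
# Illustrative only: weight-memory footprint of a hypothetical
# 70B-parameter model at different numeric precisions. Inference can
# often run at int8/int4, so wide general-purpose datapaths shuttle
# far more bytes than the model strictly needs.
PARAMS = 70e9  # assumed parameter count

def weights_gib(bits_per_param: float) -> float:
    """Weight storage in GiB at a given precision."""
    return PARAMS * bits_per_param / 8 / 2**30

for name, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: {weights_gib(bits):6.1f} GiB")
```

Every halving of precision halves the bytes that must move through the memory hierarchy per token, which is where much of the inference energy goes.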

Nvidia has been much more flexible on the software side and less so on the hardware side. These parts take years to design and validate, deployments run to hundreds of billions of dollars, and the energy-guzzling TCO is off the charts, so hyperscalers will want more efficient options for their deployments in the medium term.

Longer term, I fully expect model paradigms to evolve even further. The current paradigm is bake-once, infer-everywhere, with no dynamic feedback loop of local learning from fresh information or "world models." Closing that loop will require edge data, which Qualcomm is well equipped to both ingest and process. It may well require more compute than edge parts can deliver, but Qualcomm's expertise in wireless would still leave it well situated as part of such a pipeline.