Technology Stocks : Qualcomm Moderated Thread - please read rules before posting
QCOM 181.03 -3.5% 9:30 AM EDT

To: Doug M. who wrote (196270)10/28/2025 4:15:05 PM
From: Doug M. | 7 Recommendations

French article - October 28, 2025

<< The next generation, the AI250, expected in 2027, promises even more significant advances thanks to a near-memory computing architecture. This innovation is expected to increase effective memory bandwidth tenfold while significantly reducing power consumption. The AI250 will also introduce disaggregated computing, via CXL, allowing compute and memory resources to be shared dynamically across cards. >>
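To see why a tenfold jump in effective memory bandwidth matters for inference, here is a back-of-envelope sketch: during autoregressive decoding, each generated token must stream roughly all of the model's weights from memory, so token rate is bandwidth-bound. All numbers below are illustrative assumptions, not Qualcomm specifications.

```python
def decode_tokens_per_second(bandwidth_gb_s: float,
                             model_size_gb: float) -> float:
    """Upper bound on tokens/s when each token reads all weights once."""
    return bandwidth_gb_s / model_size_gb

model_gb = 140.0             # e.g. a 70B-parameter model at 16-bit weights
baseline_bw = 800.0          # assumed baseline effective bandwidth (GB/s)
near_memory_bw = baseline_bw * 10  # the article's claimed tenfold gain

print(decode_tokens_per_second(baseline_bw, model_gb))     # ~5.7 tokens/s
print(decode_tokens_per_second(near_memory_bw, model_gb))  # ~57 tokens/s
```

Under these assumptions the bandwidth gain translates one-for-one into single-stream decode throughput, which is the article's implicit point.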


<< Scott Young, senior director of consulting services at Info-Tech Research Group, pointed out that training is bound by compute and tolerant of time, while inference is bound by memory and/or latency and is "presumably continuous." These two processes require different types of servers, he noted. Training hardware is expensive, and while it can be used for inference, it is not always optimal. Demand for inference will continue to grow as companies deploy more agentic AI, he said. "Hardware specifically designed for inference can be more cost-effective and can scale appropriately with the rapid growth in the use of these AI models," Young explained. >>
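Young's compute-bound vs. memory-bound distinction can be sketched as an arithmetic-intensity comparison: training amortizes each weight fetch over a large batch, while single-stream decoding performs only a couple of FLOPs per parameter fetched. The constants below (2 FLOPs/parameter for a forward pass, 6 for forward plus backward, 2 bytes per 16-bit weight, batch of 512) are common rules of thumb used here as illustrative assumptions.

```python
def arithmetic_intensity(flops_per_param: float,
                         bytes_per_param: float,
                         batch: int) -> float:
    """FLOPs performed per byte of weights fetched, amortized over a batch."""
    return flops_per_param * batch / bytes_per_param

# Single-stream decoding: ~2 FLOPs per 2-byte weight -> memory-bound
print(arithmetic_intensity(2, 2, batch=1))    # 1.0 FLOP/byte

# Training: each weight fetch serves the whole batch -> compute-bound
print(arithmetic_intensity(6, 2, batch=512))  # 1536.0 FLOPs/byte
```

The three-orders-of-magnitude gap is why hardware tuned for one workload is rarely optimal for the other.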

<< This seems like a natural step, as Qualcomm has found success with its AI100 accelerator, a capable inference chip, Kimball explained. At present, Nvidia and AMD dominate the training market, with CUDA and ROCm enjoying strong customer loyalty. "If I'm a semiconductor giant like Qualcomm, who understands the balance between performance and power so well, this inference market makes perfect sense," Kimball said. He also highlighted the company's plans to return to the data center processor market with its Oryon CPU, which powers Snapdragon chips and derives from technology gained in its $1.4 billion acquisition of Nuvia. Ultimately, Qualcomm's decision demonstrates how open the inference market is, the analyst said. He stressed that the company has always chosen its target markets wisely and has succeeded in entering them. "It makes sense for the company to decide to get more involved in the inference market," said Kimball. He added that, from an ROI perspective, inference will "eclipse" training in both volume and dollars. >>

With its AI200 and AI250 racks, Qualcomm challenges Nvidia in the AI market - Le Monde Informatique


