Technology Stocks : Qualcomm Moderated Thread - please read rules before posting
QCOM 170.90-1.3%Nov 7 9:30 AM EST

From: Wildbiftek10/31/2025 2:36:59 PM
9 Recommendations

A somewhat long-winded post about the AI200/250 from someone not directly associated with QCOM (though he is Brazilian, which smacks of ties to the CEO...)

linkedin.com

Interesting tidbits (none of what he says has citations or sources) re: the AI200:
The AI200 design incorporates three independent Power Domains (A, B, C), each with voltage-frequency control (Dynamic Voltage and Frequency Scaling – DVFS).
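For context, per-domain DVFS of the sort described can be sketched as a simple feedback loop: each domain reads its temperature and steps its voltage/frequency operating point up or down independently. Everything below (the three domains, the V/f table, the thermal thresholds) is an illustrative assumption, not an AI200 specification.

```python
# Toy per-domain DVFS controller. Each power domain independently
# throttles or ramps its voltage/frequency point from a temperature
# reading. All values (domains, V/f table, thresholds) are invented.

VF_TABLE = [  # (voltage in volts, frequency in GHz), lowest to highest
    (0.65, 1.0),
    (0.75, 1.4),
    (0.85, 1.8),
]

class PowerDomain:
    def __init__(self, name, level=1):
        self.name = name
        self.level = level  # index into VF_TABLE

    def step(self, temp_c, hot=85.0, cool=70.0):
        """Throttle when hot, ramp up when cool, otherwise hold."""
        if temp_c > hot and self.level > 0:
            self.level -= 1
        elif temp_c < cool and self.level < len(VF_TABLE) - 1:
            self.level += 1
        return VF_TABLE[self.level]

domains = {name: PowerDomain(name) for name in "ABC"}
print(domains["A"].step(90.0))  # hot  -> drops to (0.65, 1.0)
print(domains["B"].step(60.0))  # cool -> ramps to (0.85, 1.8)
```

The point of independent domains is that A can throttle under a hotspot while B and C keep running at full frequency, rather than the whole chip clocking down together.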
Blackwell employs HBM3E stacked memory with NVLink 5 and NVSwitch interconnects, consuming 700–1000 watts per chip with active liquid cooling and targeting model training. In contrast, the AI200 operates at 250–300 watts, uses high-capacity LPDRAM (768 GB), and scales through modular optical backplanes — targeting massive-scale inference with passive cooling and thermal self-regulation.
Measured efficiency values range between 5–6 GFLOPS/W, outperforming NVIDIA Blackwell GPUs, which typically operate around 2–3 GFLOPS/W under full load.
This advantage arises from Qualcomm’s adaptive domain control, dynamically modulating voltage and frequency to prevent hotspots while maintaining computational coherence.
Qualcomm’s architecture implements thermal symbiosis across domains. The energy dissipated by one domain is partially recycled by another through controlled electrical redistribution, forming a quasi-closed energy loop:

Σ(P_in_i − P_out_i) = 0 for i ∈ {A, B, C}

where:
P_in_i = power input to domain i
P_out_i = power dissipated locally by domain i

This equilibrium condition minimizes the global thermal gradient, extends component lifespan, and maintains frequency stability under sustained computational load.
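As a sanity check on what that balance condition would mean: a strict quasi-closed loop requires the per-domain surpluses and deficits to cancel in aggregate, which is easy to verify numerically. The wattages below are made up purely for illustration.

```python
# Illustrative check of the claimed energy balance: summed over domains,
# power in minus locally dissipated power should net to zero if the
# "recycled" power merely moves between domains. All figures invented.

domains = {
    "A": {"p_in": 100.0, "p_out": 90.0},   # sheds 10 W toward B/C
    "B": {"p_in": 100.0, "p_out": 105.0},  # absorbs 5 W
    "C": {"p_in": 100.0, "p_out": 105.0},  # absorbs 5 W
}

residual = sum(d["p_in"] - d["p_out"] for d in domains.values())
print(f"net residual: {residual} W")  # 0.0 for a closed loop
```

Note this only shows the bookkeeping can balance; it says nothing about whether real silicon can usefully move dissipated heat back into electrical work, which is the physically dubious part.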
Key Engineering Insights
--Qualcomm extends mobile SoC efficiency principles to rack-scale datacenters.
--LPDRAM integration (768 GB) minimizes latency and energy loss in inference workloads.
--Power Domains A/B/C implement a self-balancing, bioinspired thermal management system.
--The architecture operates as a quasi-isothermal computation fabric, recycling heat and energy dynamically.
--The efficiency equation η = C_AI / P_eff becomes a design invariant, not just a performance metric.
--NVIDIA dissipates; Qualcomm recycles.
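The efficiency claim is just the ratio η = C_AI / P_eff. The throughput figures below are back-solved from the post's own GFLOPS/W and wattage ranges to make the arithmetic concrete; they are not measured or published numbers for either chip.

```python
# Efficiency metric from the post: eta = compute throughput / effective power.
# Throughput inputs are back-solved assumptions, not real specs.

def efficiency_gflops_per_watt(tflops, watts):
    return tflops * 1000.0 / watts

# AI200 at the claimed 300 W envelope, hitting the post's 5 GFLOPS/W:
ai200 = efficiency_gflops_per_watt(1.5, 300.0)       # assumed 1.5 TFLOPS
# Blackwell at the claimed 1000 W, hitting the post's 2.5 GFLOPS/W:
blackwell = efficiency_gflops_per_watt(2.5, 1000.0)  # assumed 2.5 TFLOPS
print(f"AI200: {ai200:.1f} GFLOPS/W, Blackwell: {blackwell:.1f} GFLOPS/W")
```

Either way, both GFLOPS/W figures in the post look orders of magnitude too low for modern accelerators, which is another reason to treat the source skeptically.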

I have my doubts the card can actually "recycle" power so much as redistribute it when one domain runs hot.

Still, some work went into this, and it's possible he knows something the general public doesn't: his ties to frontier robo-trading / AI / financial consulting in Brazil (not to mention complex fraud) may have given him access to preview hardware or slide decks.