Technology Stocks : Artificial Intelligence, Robotics, Chat bots - ChatGPT
From: Frank Sully7/18/2025 9:49:29 PM
 
Beth Kindig on NVIDIA's Blackwell ramp:

We continue to own Nvidia – yet six months ago, we built bigger positions in other AI stocks. With that said, I believe Nvidia will return to lead the market in the second half of the year as Blackwell is (finally) arriving.

Nvidia's premier Blackwell SKU, the GB200 NVL72, delivers real-time trillion-parameter LLM inference along with claimed gains of 4X in LLM training, 25X in energy efficiency, and 18X in data processing. The GB200 provides 4X faster training performance than H100 HGX systems and includes a second-generation transformer engine with FP4/FP6 Tensor cores. Built on a 4nm process, each GPU integrates two dies connected by a 10 TB/s NVLink interconnect, for a total of 208 billion transistors.

According to management commentary, the ramp is happening very quickly: “On average, major hyperscalers are each deploying nearly 1,000 NVL72 racks or 72,000 Blackwell GPUs per week and are on track to further ramp output this quarter.” The rough math implies each hyperscaler is deploying roughly $3 billion of hardware per week right now, since each rack sells for about $3 million.
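The rack arithmetic in that quote can be checked with a quick sketch. All figures come from the post itself (1,000 racks per week per hyperscaler, 72 GPUs per NVL72 rack, ~$3 million per rack is the author's price estimate):

```python
# Back-of-the-envelope check of the quoted deployment figures.
# Inputs are taken from the post; the rack price is the author's estimate.
RACKS_PER_WEEK = 1_000          # NVL72 racks per hyperscaler per week
GPUS_PER_RACK = 72              # Blackwell GPUs in one NVL72 rack
PRICE_PER_RACK_USD = 3_000_000  # estimated selling price per rack

gpus_per_week = RACKS_PER_WEEK * GPUS_PER_RACK
spend_per_week_usd = RACKS_PER_WEEK * PRICE_PER_RACK_USD

print(f"GPUs per week:  {gpus_per_week:,}")                 # 72,000
print(f"Spend per week: ${spend_per_week_usd / 1e9:.0f}B")  # $3B
```

The two outputs match the 72,000-GPU and $3-billion-per-week figures cited in the quote.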

Nvidia will also be a leader in AI inference. The B200 is helping startups triple their token generation rate, and Nvidia Dynamo on Blackwell NVL72s is stated to “turbocharge inference throughput 30X for the new reasoning models” [...] with the CEO later stating: “in the latest MLPerf Inference results, we submitted our first results using GB200 NVL72, delivering up to 30X higher inference throughput compared to our 8-GPU H200 submission on the challenging Llama 3.1 benchmark.”

To put it plainly, you haven’t seen anything yet in terms of Nvidia’s hardware capabilities. The generation shipping now is by far Nvidia’s most ambitious, and with a $3 million price tag for its largest systems, any Big Tech company that goes elsewhere risks lagging its peers, and that is something Big Tech is not willing to chance. Look for not only Blackwell, but also Blackwell Ultra following in quick succession, to be the moment when Nvidia defies the markets (yet again).