Technology Stocks : Semi Equipment Analysis

From: Julius Wong, 3/18/2025 11:00:12 PM
Nvidia GTC 2025: Rubin, reasoning, robotics and more

Mar. 18, 2025 4:32 PM ET
By: Chris Ciaccia, SA News Editor

Nvidia (NASDAQ: NVDA) Chief Executive Jensen Huang kicked off the tech giant's annual GTC event with a rousing keynote touching on a wide array of topics, including how artificial intelligence has drastically changed the world in just the past few years.

“AI has made extraordinary progress,” Huang said at the event, which he called the “Super Bowl of AI,” adding that the world has really only become enthralled with artificial intelligence over the past decade. And now generative AI has “fundamentally changed how computing is done,” Huang added.

The rise of agentic AI, where AI systems can pursue goals and make decisions on their own, will require “easily 100 times more” computing power than was believed just last year, Huang said. To underscore the point, he noted that the top four cloud service providers (Amazon Web Services, Azure, Google Cloud Platform and Oracle Cloud) have bought 3.6M Blackwell GPUs, compared to the 1.3M Hopper GPUs purchased last year. Blackwell GPUs only just started shipping in the latter part of 2024.

By the end of the decade (and perhaps sooner), data center buildout should top $1T, Huang added.

AI is going through an inflection point and there will be CUDA-X libraries for every field of science, Huang said, including quantum computing, physics, 5G and 6G networks, gene sequencing, computational lithography and more.

Huang showed off a new CUDA library, cuDSS, which is for computer-aided engineering. Partners include Ansys (ANSS), Cadence Design Systems (CDNS) and more. “We've now reached the tipping point of accelerated computing,” Huang said, with Nvidia's CUDA software being a large part of that.
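To make the accelerated-computing point concrete, here is a minimal Python sketch of the kind of GPU-resident sparse linear solve that sits at the heart of CAE workloads. It uses CuPy's sparse conjugate-gradient solver as a stand-in, not cuDSS itself, and the toy matrix and problem size are purely illustrative assumptions.

    import cupy as cp
    import cupyx.scipy.sparse as sp
    from cupyx.scipy.sparse.linalg import cg

    # Toy 1-D Poisson (tridiagonal) system: a tiny stand-in for the large
    # sparse linear systems that dominate CAE simulation workloads.
    n = 1000
    main = 2.0 * cp.ones(n)
    off = -1.0 * cp.ones(n - 1)
    A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
    b = cp.ones(n)

    # Solve A x = b entirely on the GPU with conjugate gradient.
    x, info = cg(A, b)
    print("converged:", info == 0,
          "residual:", float(cp.linalg.norm(A @ x - b)))

The same pattern scales to the much larger structural and fluid systems CAE vendors care about; the point is simply that the matrix, the solver and the residual check all stay on the GPU.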

Partnerships

As part of the keynote, Huang announced several new partnerships, including that Nvidia would team with Cisco (CSCO), T-Mobile (TMUS), and Cerberus ODC to build full-stack radio networks in the U.S.

In addition, GM (GM) has selected Nvidia to help it build its self-driving vehicle fleet, Huang added.

Several other partnerships were announced, including ones with IBM (IBM), Micron (MU), GE Healthcare (GEHC), Super Micro Computer (SMCI) and ServiceNow (NOW), among others.

Blackwell boost

With Grace Blackwell now in full production, Huang also showed off Nvidia Dynamo. Dynamo is a distributed inference serving library, which Huang described as “essentially the operating system of an AI factory.”

Dynamo can aid Blackwell (and Hopper) with a “giant leap” in inferencing performance, Huang said.
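For intuition about what an inference-serving layer coordinates, here is a small, purely conceptual Python sketch of disaggregated serving, with separate prefill and decode workers sharing a KV cache. This is not Dynamo's actual API; every class and function name below is invented for illustration.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Request:
        req_id: str
        prompt_tokens: List[int]
        kv_cache: Dict[int, bytes] = field(default_factory=dict)

    class PrefillWorker:
        """Builds the KV cache for a prompt (the compute-heavy phase)."""
        def run(self, req: Request) -> Request:
            req.kv_cache = {i: b"kv-block" for i in range(len(req.prompt_tokens))}
            return req

    class DecodeWorker:
        """Streams new tokens by reusing an already-built KV cache."""
        def run(self, req: Request, max_new_tokens: int = 4) -> List[int]:
            return [len(req.kv_cache) + i for i in range(max_new_tokens)]

    def serve(requests: List[Request]) -> Dict[str, List[int]]:
        prefill, decode = PrefillWorker(), DecodeWorker()
        # A real scheduler would route requests across many GPUs based on
        # KV-cache locality and load; here the two phases just run in order.
        return {r.req_id: decode.run(prefill.run(r)) for r in requests}

    print(serve([Request("r1", [101, 7592, 2088])]))

Splitting the two phases is what lets a serving layer put the bursty, compute-bound prefill work and the steady, memory-bound decode work on different hardware, which is the kind of scheduling an "operating system of an AI factory" is meant to handle.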

Blackwell to Rubin to Feynman

Huang also took the wraps off Nvidia's upcoming GPU generations, including Blackwell Ultra and the widely speculated Rubin.

Blackwell Ultra, which has more memory and twice the bandwidth of Blackwell, will be coming in the second half of 2025, Huang said.

Rubin, which uses HBM4 memory, will be coming in the second half of 2026, Huang added. The next generation after that, Rubin Ultra, will be available in the second half of 2027. Rubin Ultra delivers 15 exaflops for inferencing and 5 exaflops for training, 14 times more than the GB300 NVL72.

"Rubin is going to drive the cost down tremendously," Huang said.

The generation after Rubin, named after American theoretical physicist Richard Feynman, will be available in 2028, Huang added.

New products and co-packaged optics

Huang also showed off Nvidia's new Spectrum-X "Supercharged" Ethernet switch, along with its entry into co-packaged optics with Quantum-X, Nvidia's silicon photonics offering.

The Quantum-X package, which Nvidia worked on with Taiwan Semiconductor (TSM), will be available in the second half of 2025, while the Spectrum-X option will be available in the second half of 2026, Huang said.

Huang also showed off new offerings for the enterprise: DGX Spark and DGX Station. These are geared toward data scientists and researchers and will be available from PC makers such as HP (HPQ), Dell (DELL) and others, Huang added.

Huang also announced that Nvidia is working with the storage industry to bring artificial intelligence to data storage.

"For the very first time, your storage stack will be GPU-accelerated," Huang explained.

He also showed off Nvidia's new Llama Nemotron reasoning model, an AI model that any company can run anywhere.
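As a rough sketch of what "run anywhere" looks like in practice, the snippet below loads an openly distributed model with the Hugging Face transformers pipeline. The model ID is a placeholder assumption, not a confirmed Nemotron checkpoint name; check Nvidia's model pages for the real identifiers and license terms.

    from transformers import pipeline

    # Placeholder model ID: substitute the actual Llama Nemotron checkpoint
    # name from Nvidia's Hugging Face page before running this.
    generator = pipeline(
        "text-generation",
        model="nvidia/llama-nemotron-placeholder",
        device_map="auto",
    )

    out = generator(
        "Think step by step: how many GPUs are in an NVL72 rack?",
        max_new_tokens=128,
    )
    print(out[0]["generated_text"])

The same few lines work on a laptop GPU, an on-prem server or a cloud instance, which is the practical meaning of a model "any company can run anywhere."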

Robots

Huang also showed off GR00T N1, Nvidia's open-source foundation model for robots.

"Physical AI and robotics are moving so fast," Huang said. "Everybody pay attention: this could be the largest industry of all."

Huang also showed off a collaboration between Google's (GOOG) (GOOGL) DeepMind, Disney (DIS) Research and Nvidia, called Newton. A robot, Blue, demonstrated the collaboration between the three companies.