Technology Stocks : ASML Holding NV
From: BeenRetired10/17/2025 5:32:28 AM
Intel/Nvidia to join AMD, Apple, Qualcomm unified memory club?
Soaring RAM bits.

AMD, Apple, and Qualcomm all have shared memory already
On the Mac side, Apple Silicon is the poster child for unified memory. You can buy a MacBook with 128GB of unified memory, and that memory can be used by both the MacBook’s CPU and GPU.

Meanwhile, an Nvidia GeForce RTX 5090 has 32GB of VRAM. For AI workloads where memory capacity is the bottleneck, a MacBook can outperform a beefy desktop PC with a high-end Intel CPU and an RTX 5090, even though Nvidia’s RTX GPU is much faster than Apple’s.
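The capacity argument above comes down to simple arithmetic. Here's a minimal sketch (the model sizes and quantization levels are illustrative examples, not figures from the article, and real workloads also need headroom for the KV cache and activations):

```python
# Back-of-envelope check: does a model's weight footprint fit in memory?

def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A hypothetical 70B-parameter model at FP16 (2 bytes per parameter):
fp16_gb = weights_gb(70, 2)    # 140 GB: far beyond a 32GB RTX 5090
# The same model quantized to 4-bit (0.5 bytes per parameter):
q4_gb = weights_gb(70, 0.5)    # 35 GB: still over 32GB of VRAM,
                               # but easily inside 128GB of unified memory
print(f"70B @ FP16:  ~{fp16_gb:.0f} GB")
print(f"70B @ 4-bit: ~{q4_gb:.0f} GB")
```

This is why a 128GB unified-memory MacBook can run models that a discrete-GPU desktop simply cannot load, regardless of raw compute speed.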

You also have AMD’s APU architecture (which offers system memory shared between CPU and GPU), Qualcomm’s Snapdragon X (which has unified memory like Apple), and Intel’s Lunar Lake platform (which has on-package memory shared between CPU and GPU).

[Image: "Nvidia and Intel could win the AI PC war with this secret weapon" (credit: PC World / Apple)]

AMD’s Ryzen AI Max+ supports up to 128GB of memory, but many AI tools and GPU compute workloads are built for CUDA, Nvidia’s software platform. Without CUDA, that AMD system may not be compatible with the tools you want to run. (And the integrated Qualcomm and Intel GPUs just aren’t that fast. Right now, only AMD and Apple are truly in the running.)

Though they offer unified memory, Apple, AMD, Qualcomm, and Intel lack Nvidia’s powerful GPUs and its mature CUDA platform, which is what makes Nvidia the standard for GPU compute. Yet while CUDA is the de facto standard, the industry is trying to change that with technologies like Windows ML, and competitors are chipping away at Nvidia’s lead.

To pull ahead again, Nvidia needs to find a way for its GPUs to access a system’s main pool of memory—just like Apple, AMD, and Qualcomm already allow on their SoCs. It doesn’t really matter if Nvidia has the most powerful GPU compute solution if the AI models and data can’t fit into the VRAM of said solution.

Not official, but the hints are there
In Nvidia’s official announcement, the company says “Intel will build and offer to the market x86 system-on-chips (SOCs) that integrate Nvidia RTX GPU chiplets. These new x86 RTX SOCs will power a wide range of PCs that demand integration of world-class CPUs and GPUs.” And in Intel’s official announcement, the two will “focus on seamlessly connecting Nvidia and Intel architectures using Nvidia NVLink.”

So, yeah, there aren’t many details here, and neither Nvidia nor Intel is talking about memory. Will the memory be on the GPU itself? Well, Nvidia describes this as “a new class of integrated graphics.” At this point, all major SoCs with integrated graphics—from Apple, AMD, Qualcomm, and even Intel itself—use unified or pooled memory. (Some platforms unify memory at the hardware level, while others just make it fast for the GPU to share access to the system’s RAM. The key is that the GPU isn’t stuck with only its own small amount of VRAM.)

Nvidia and Intel could win the AI PC war with this secret weapon

PS
My story?
Shrink n Stack wars JUST started.
Great for ASML and Village.
Sticking with it.