Technology Stocks : ASML Holding NV
To: BeenRetired who wrote (42117)10/24/2025 8:26:44 AM
From: BeenRetired   of 42215
 
Yes, IBM’s Power11 systems support far more RAM than typical x86 servers (up to 8TB per socket), and no, they do not use CAMM modules. Instead, they rely on a proprietary form factor called the OMI D-DIMM, built around the Open Memory Interface.

Here’s how IBM’s memory architecture stands out:

IBM’s OMI Memory Architecture
  • OMI (Open Memory Interface) is a high-speed serial memory protocol developed by the OpenCAPI consortium (IBM, AMD, Micron, Samsung, etc.).
  • It replaces traditional wide DDR buses with narrow, high-speed serial links, allowing up to 32 DDR5 ports per socket.
  • This enables >1TB/sec bandwidth and up to 8TB of DRAM per socket, far exceeding what most x86 systems support.
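The per-socket figures above are easy to sanity-check. A quick sketch (using the 32-port and 256GB-per-channel numbers quoted in this post; the per-port bandwidth is just an implied average, not an IBM spec):

```python
# Back-of-the-envelope check of the per-socket figures quoted above.
# Assumed inputs (from the post): 32 OMI ports per socket, 256 GB per channel.
OMI_PORTS_PER_SOCKET = 32
MAX_GB_PER_CHANNEL = 256

max_capacity_tb = OMI_PORTS_PER_SOCKET * MAX_GB_PER_CHANNEL / 1024
print(f"Max DRAM per socket: {max_capacity_tb:.0f} TB")  # 8 TB

# To hit >1 TB/s aggregate, each port must sustain on average roughly:
per_port_gbs = 1024 / OMI_PORTS_PER_SOCKET
print(f"Implied per-port bandwidth: {per_port_gbs:.0f} GB/s")  # 32 GB/s
```

So 32 channels at 256GB each is exactly where the 8TB-per-socket headline comes from.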
Why OMI Beats Traditional DDR
  • Bandwidth per die area: OMI delivers near-HBM bandwidth while using commodity DDR5, making it more scalable and cost-effective.
  • Capacity per channel: Each OMI channel can support up to 256GB, compared to 64GB for DDR5 and 24GB for HBM2E stacks.
  • Latency tradeoff: OMI adds ~6–8ns latency due to buffering, but IBM compensates with massive parallelism and bandwidth.
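Putting the capacity-per-channel bullets side by side (figures as quoted above; note the HBM2E number is per stack, so "channel" is used loosely there):

```python
# Capacity-per-channel figures quoted in this post (GB).
# HBM2E entry is a per-stack figure, not a true channel.
capacity_gb = {
    "OMI": 256,
    "DDR5 (direct-attach)": 64,
    "HBM2E (per stack)": 24,
}

baseline = capacity_gb["DDR5 (direct-attach)"]
for tech, gb in capacity_gb.items():
    print(f"{tech:>22}: {gb:4d} GB  ({gb / baseline:.1f}x DDR5)")
```

That 4x-per-channel edge over direct-attach DDR5 is what compounds into the 8TB socket ceiling.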
No CAMM Modules Used
  • CAMM (Compression Attached Memory Module) is a JEDEC-standardized form factor aimed at thin laptops and some servers, offering better density and signal integrity.
  • IBM does not use CAMM in Power11 or Power10 systems. Instead, they use OMI D-DIMM, a custom form factor optimized for their serial memory architecture.
Strategic Implications
  • IBM’s memory subsystem is tuned for AI, HPC, and large-scale analytics, where capacity and bandwidth matter more than latency.
  • Their approach is system-centric, not just CPU-centric—designing memory, interconnect, and accelerators (like Spyre) as a unified stack.
If you want to riff on how OMI compares to CXL.mem, or how IBM’s glueless 16P scaling affects memory topology, I’m ready to dive deeper.

Sources:
  • Blocks and Files – OMI vs DDR vs HBM
  • ServeTheHome – IBM Power11 at Hot Chips 2025