Yes, IBM’s Power11 systems use far more RAM than typical x86 servers (up to 8TB per socket), and no, they do not use CAMM modules. Instead, they rely on a proprietary form factor called the OMI D-DIMM, built around the Open Memory Interface.
 Here’s how IBM’s memory architecture stands out:
 
 IBM’s OMI Memory Architecture
 
- OMI (Open Memory Interface) is a high-speed serial memory protocol developed by the OpenCAPI consortium (IBM, AMD, Micron, Samsung, and others).
- It replaces traditional wide DDR buses with narrow, high-speed serial links, allowing up to 32 DDR5 ports per socket.
- This enables >1TB/sec of bandwidth and up to 8TB of DRAM per socket, far exceeding what most x86 systems support.
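As a quick sanity check, the headline numbers above are self-consistent: multiplying 32 channels by the per-channel capacity quoted in the comparison below reproduces the socket totals. The per-channel bandwidth figure here is an assumption implied by the >1TB/sec aggregate, not an official IBM spec:

```python
# Back-of-envelope check of the per-socket figures quoted above.
# Assumptions (not official IBM specs): 32 OMI channels per socket,
# 256 GB per channel, and ~32 GB/s of bandwidth per channel.

OMI_CHANNELS_PER_SOCKET = 32
CAPACITY_PER_CHANNEL_GB = 256   # max capacity per channel (from the comparison below)
BANDWIDTH_PER_CHANNEL_GBS = 32  # assumed; implied by >1 TB/s across 32 channels

total_capacity_tb = OMI_CHANNELS_PER_SOCKET * CAPACITY_PER_CHANNEL_GB / 1024
total_bandwidth_tbs = OMI_CHANNELS_PER_SOCKET * BANDWIDTH_PER_CHANNEL_GBS / 1024

print(f"Capacity per socket:  {total_capacity_tb:.0f} TB")    # -> 8 TB
print(f"Bandwidth per socket: {total_bandwidth_tbs:.0f} TB/s")  # -> 1 TB/s
```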
 
Why OMI Beats Traditional DDR

- Bandwidth per die area: OMI delivers near-HBM bandwidth while using commodity DDR5, making it more scalable and cost-effective.
- Capacity per channel: each OMI channel can support up to 256GB, compared to 64GB for DDR5 and 24GB for HBM2E stacks.
- Latency tradeoff: OMI adds ~6–8ns of latency due to buffering, but IBM compensates with massive parallelism and bandwidth, as the sketch below illustrates.
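To see why the buffering penalty is easy to hide, here is a minimal Little’s Law sketch. The base DRAM latency and per-channel bandwidth are illustrative assumptions, not IBM-published figures; only the ~6–8ns penalty and the 128-byte Power cache line come from known sources:

```python
# Minimal sketch (assumed figures) of why a ~7 ns buffering penalty
# barely matters at OMI bandwidths. By Little's Law, in-flight data =
# bandwidth x latency, so the extra latency only requires a few more
# outstanding 128-byte cache lines per channel to keep it saturated.

CACHE_LINE_BYTES = 128   # Power processors use 128 B cache lines
CHANNEL_BW_GBS = 32      # assumed per-channel bandwidth (see above)
BASE_LATENCY_NS = 90     # assumed loaded DRAM latency without OMI
OMI_PENALTY_NS = 7       # midpoint of the ~6-8 ns figure above

def lines_in_flight(bw_gbs: float, latency_ns: float) -> float:
    """Outstanding cache lines needed to keep the channel saturated."""
    in_flight_bytes = bw_gbs * latency_ns  # GB/s * ns == bytes
    return in_flight_bytes / CACHE_LINE_BYTES

before = lines_in_flight(CHANNEL_BW_GBS, BASE_LATENCY_NS)
after = lines_in_flight(CHANNEL_BW_GBS, BASE_LATENCY_NS + OMI_PENALTY_NS)
print(f"Outstanding lines without OMI penalty: {before:.1f}")  # ~22.5
print(f"Outstanding lines with OMI penalty:    {after:.1f}")   # ~24.3
# A couple of extra in-flight requests per channel hides the added latency.
```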
 
No CAMM Modules Used

- CAMM (Compression Attached Memory Module) is a JEDEC-standardized form factor aimed at thin laptops and some servers, offering better density and signal integrity than the SO-DIMMs it replaces.
- IBM does not use CAMM in Power11 or Power10 systems. Instead, they use the OMI D-DIMM, a custom form factor optimized for their serial memory architecture.
 
Strategic Implications

- IBM’s memory subsystem is tuned for AI, HPC, and large-scale analytics, where capacity and bandwidth matter more than latency.
- Their approach is system-centric, not just CPU-centric, designing memory, interconnect, and accelerators (like Spyre) as a unified stack.

If you want to riff on how OMI compares to CXL.mem, or how IBM’s glueless 16P scaling affects memory topology, I’m ready to dive deeper.
 
Sources:
- Blocks and Files – OMI vs DDR vs HBM
- ServeTheHome – IBM Power11 at Hot Chips 2025