To: BeenRetired who wrote (42117)
10/24/2025 8:26:44 AM
From: BeenRetired

Yes, IBM's Power11 systems use far more RAM than typical x86 servers (up to 8TB per socket), and no, they do not use CAMM modules. Instead, they rely on a proprietary form factor called the OMI D-DIMM, built around the Open Memory Interface.

OMI (Open Memory Interface) is a high-speed serial memory protocol developed by the OpenCAPI consortium (IBM, AMD, Micron, Samsung, etc.). It replaces traditional wide DDR buses with narrow, high-speed serial links, allowing up to 32 DDR5 ports per socket. This enables >1TB/sec of bandwidth and up to 8TB of DRAM per socket, far exceeding what most x86 systems support. (A quick sanity check of those numbers is sketched at the bottom of this post.)

- Bandwidth per die area: OMI delivers near-HBM bandwidth while using commodity DDR5, making it more scalable and cost-effective.
- Capacity per channel: each OMI channel can support up to 256GB, compared to 64GB for DDR5 and 24GB for HBM2E stacks.
- Latency tradeoff: OMI adds roughly 6-8ns of latency due to buffering, but IBM compensates with massive parallelism and bandwidth.

CAMM (Compression Attached Memory Module) is a JEDEC-standardized form factor aimed at thin laptops and some servers, offering better density and signal integrity. IBM does not use CAMM in Power11 or Power10 systems; it uses the OMI D-DIMM, a custom form factor optimized for its serial memory architecture.

IBM's memory subsystem is tuned for AI, HPC, and large-scale analytics, where capacity and bandwidth matter more than latency. The approach is system-centric, not just CPU-centric: memory, interconnect, and accelerators (like Spyre) are designed as a unified stack.

Sources:
- Blocks and Files – OMI vs DDR vs HBM
- ServeTheHome – IBM Power11 at Hot Chips 2025
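
For anyone who wants to check the arithmetic behind the headline figures, here is a minimal back-of-envelope sketch. The 32 OMI ports per socket and 256GB per channel come from the post above; the ~32 GB/s of effective bandwidth per serial OMI link is my own assumption, used only to show how the >1TB/sec number falls out, not an official IBM spec.

```python
# Back-of-envelope check of the Power11/OMI per-socket numbers quoted above.
# Port count and per-channel capacity are from the post; the per-link
# bandwidth is an assumption for illustration, not an IBM specification.

OMI_PORTS_PER_SOCKET = 32   # serial OMI links per Power11 socket (from the post)
GB_PER_OMI_CHANNEL = 256    # buffered DDR5 capacity behind each link (from the post)
GBPS_PER_OMI_LINK = 32      # assumed effective GB/s per link (illustrative only)

capacity_tb = OMI_PORTS_PER_SOCKET * GB_PER_OMI_CHANNEL / 1024
bandwidth_tb_s = OMI_PORTS_PER_SOCKET * GBPS_PER_OMI_LINK / 1024

print(f"Per-socket DRAM capacity: {capacity_tb:.0f} TB")      # ~8 TB
print(f"Per-socket bandwidth:     {bandwidth_tb_s:.1f} TB/s")  # ~1 TB/s
```

With those inputs the arithmetic reproduces the 8TB-per-socket capacity and roughly 1TB/sec of bandwidth quoted above.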