Technology Stocks : WDC, NAND, NVM, enterprise storage systems, etc.
From: BeenRetired, 10/6/2025 9:41:46 AM
2 Recommendations

Recommended By
Lance Bredvold
Sam

Kioxia and Nvidia develop SSDs 100x faster for next-gen servers
Chiang, Jen-Chieh, Taipei; Sherri Wang, DIGITIMES Asia
Sunday 5 October 2025

Kioxia, Japan's leading memory chip maker, is partnering with Nvidia to develop a revolutionary solid-state drive (SSD) nearly 100 times faster than conventional models. The new drive, designed for generative AI servers, is scheduled for a 2027 debut...


Copilot expansion:
Kioxia and NVIDIA are co-developing ultra-fast SSDs for AI workloads that deliver up to 100 million IOPS—nearly 100× faster than today’s SSDs—by directly connecting to GPUs and bypassing CPUs. These SSDs are expected to launch in 2027 and support PCIe 7.0.

Here’s the deeper breakdown:

What Makes These SSDs 100× Faster?
  • IOPS Target: Kioxia is building SSDs that hit 100 million IOPS (Input/Output Operations Per Second)—a massive leap from today’s top-tier SSDs, which typically max out around 1 million IOPS.
  • GPU-Direct Architecture: Instead of routing data through CPUs, these SSDs will connect directly to NVIDIA GPUs, drastically reducing latency and boosting throughput for AI inference and training.
  • PCIe 7.0 Interface: They’ll support PCIe Gen 7, which offers up to 512 GB/s of bidirectional bandwidth on an x16 link (4× the per-lane data rate of PCIe 5.0), enabling ultra-low latency and high-speed data exchange.
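A quick sanity check shows why a faster interface is needed at these IOPS levels. The sketch below uses the article's 100 million IOPS target; the 512 B and 4 KiB I/O sizes are assumed for illustration, since the article does not state a transfer size:

```python
# Back-of-envelope: sustained bandwidth implied by an IOPS target.
def required_bandwidth_gbs(iops, io_size_bytes):
    """GB/s needed to sustain `iops` operations of `io_size_bytes` each."""
    return iops * io_size_bytes / 1e9

IOPS_TARGET = 100e6  # the article's 100 million IOPS figure

# I/O sizes are assumptions for illustration, not from the article.
for size in (512, 4096):
    gbs = required_bandwidth_gbs(IOPS_TARGET, size)
    print(f"{size:>4} B I/Os at 100M IOPS -> {gbs:.1f} GB/s sustained")
```

At 4 KiB per I/O this works out to roughly 410 GB/s sustained, well beyond a PCIe 5.0 x16 link (~64 GB/s per direction) but within reach of PCIe 7.0, which is consistent with the Gen 7 requirement above.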
Why This Matters for AI
  • HBM Alternative: NVIDIA is exploring these SSDs as partial replacements for HBM (High Bandwidth Memory), expanding memory pools without relying solely on expensive DRAM.
  • Memory Expansion Strategy: With two SSDs per GPU, NVIDIA aims to hit 200 million IOPS, enabling larger model contexts and faster inference cycles.
  • AI-Driven NAND Demand: Kioxia projects that AI workloads will drive nearly half of NAND demand by 2029, reshaping the flash memory market.
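The two-SSDs-per-GPU figure above works out as follows; the 512 B block size is an assumed illustration, not something the article specifies:

```python
# Aggregate small-block capability per GPU at the stated targets.
IOPS_PER_SSD = 100e6   # article's per-SSD target
SSDS_PER_GPU = 2       # article's stated configuration
BLOCK_BYTES = 512      # assumed small-block size for illustration

total_iops = IOPS_PER_SSD * SSDS_PER_GPU
throughput_gbs = total_iops * BLOCK_BYTES / 1e9
print(f"Per-GPU IOPS: {total_iops / 1e6:.0f}M")  # 200M, matching the article
print(f"Small-block throughput at 512 B: {throughput_gbs:.1f} GB/s")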
Under the Hood: XL-Flash and NAND Tech
  • Kioxia may leverage Gen 2 XL-Flash in SLC mode (1 bit/cell), which offers:
    • <5 µs read latency
    • Up to 14 GB/s sequential read
    • 3.5 million IOPS per SSD in current prototypes
  • To reach 100M IOPS, Kioxia will likely scale plane-level parallelism, controller architecture, and NAND die density.
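The size of that scaling challenge can be quantified directly from the prototype and target figures quoted above:

```python
# Gap between current XL-Flash prototypes and the 2027 goal (figures from the article).
PROTOTYPE_IOPS = 3.5e6   # per-SSD IOPS of current prototypes
TARGET_IOPS = 100e6      # per-SSD goal for the 2027 part

speedup_needed = TARGET_IOPS / PROTOTYPE_IOPS
print(f"Required per-SSD speedup: {speedup_needed:.1f}x")  # ~28.6x
```

A roughly 29× jump in a single product generation is why the bullet above points to gains on several axes at once (plane-level parallelism, controller architecture, and die density) rather than any single improvement.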
Timeline and Market Impact
  • Sample Shipments: Expected in late 2026
  • Commercial Launch: Targeted for 2027
  • Market Disruption: Could trigger a NAND shortage by 2026–2027 as AI clusters shift from HDDs and DRAM to flash-based memory expansion
This move could redefine GPU memory architecture, especially for GenAI and LLM workloads.