Solid-state drives: HDD replacements? Huge disappointments? Neither. By Brian Dipert, Senior Technical Editor -- 5/1/2008 EDN
SSDs will be in 25% of all notebook PCs within three years, according to bullish claims from Toshiba's president. Or not, according to a contrarian response from StorageMojo. Seeing these contending positions on the topic in recent days—as well as recently published underwhelming (albeit noncomprehensive) SSD (versus HDD) performance and power-consumption benchmark results—prompted me to weigh in with some points I believe the pundits are missing.
I've long followed developments in the flash-memory-as-mass-storage arena in my role with EDN (see the list of related links at the end of this column). Prior to that, I spent my first eight years out of college in Intel's nonvolatile memory group, all but the first 1.5 years' worth working with flash memory. During that short time, I personally experienced at least three complete oscillations of the constrained-supply-then-oversupply-then-constrained-again pendulum. Why this occurred, and how it affects SSDs' chances going forward, centers on two fundamental factors:
* Human psychology, and
* Economics
Flash memory, like many other semiconductor products, is (generally, with the exception of some specialized product flavors) a commodity. Near-identical chips from multiple high-volume suppliers are available for sale. As such, commodity flash memory garners widespread application usage, by virtue of its ubiquity and competition-driven attractive prices.
The commodity marketplace requires a symbiotic relationship between sellers and buyers (albeit one that's generally uncoordinated in a formal sense, for antitrust-avoidance and other reasons). If high-volume, low-priced supply weren't available, widespread demand wouldn't exist, and vice versa. Therefore, the two "invisible hands of the marketplace" need to work in sync so that both can succeed in the long term. The need for suppliers to achieve sustainable long-term fiscal success is, however, a concept that their customers frequently overlook in the short-term frenzy to secure the best-possible deal.
As a result, companies frequently broker deals (and break them, in favor of alternative sources) over price differences of fractions of a cent per chip. Considering the billions of dollars needed to construct a new semiconductor-fabrication facility (along with the hundreds of millions of dollars needed just to retrofit an existing fab shell with leading-edge fabrication capability), along with the overall uncertainty of demand in key high-volume markets (case study: Apple iPods and iPhones), suppliers are understandably cautious about making new manufacturing-, packaging-, and testing-facility investments. And once the decision to invest is made, it takes several years to make a facility fully operational.
So what happens? The decision to invest in the next generation of manufacturing capability happens at the peak of constrained supply, when customers are therefore paying the highest per-chip price premium. And every supplier makes that decision at roughly the same time. Several years later, a substantial amount of incremental supply capability is poised and ready. And, because it's almost always more fiscally attractive to sell a full fab's worth of chips at a loss rather than run a half-empty fab, every supplier dumps all of its capability into the supply chain. The resultant price crash not only satisfies existing high-volume markets, it also creates new ones...like SSDs. And a year or a few later, we're back in a constrained-supply situation.
Right now, we're at the point where both existing NAND flash memory suppliers are ramping incremental manufacturing capability and where new NAND players, such as the Intel/Micron joint venture alliance, are also muscling into the market. Apple's demand isn't enough to soak up all this added supply, and none of the flash-memory vendors want to shutter their fabs. So they'll do everything in their pricing power to create substantial SSD demand, as quickly as possible.
What about technical factors? Where do SSDs cleanly fit, and in contrast where are their capabilities (and limitations) clearly out of sync with market demands? I feel a bit like a broken record as I write the words that'll follow, because they echo observations I made last fall...or for that matter, throughout my EDN career following flash memory...or for that matter, my entire near-20-year time-to-date as a participant in the flash-memory market. The fundamental decision criteria haven't changed, although plummeting cost-per-Gbyte for both HDDs and SSDs has heavily influenced the capacity threshold at which the one-or-other choice becomes clear. At times like these, I smile as I recall that in 1992, Intel introduced a 1-Mbyte flash memory (the 28F008SA) at what was then an awe-inspiring price of $30.
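To make that capacity-threshold idea concrete, here's a back-of-the-envelope sketch in Python. The dollar figures are purely illustrative assumptions on my part, not supplier quotes or figures from this column; the point is simply that the crossover capacity equals the cheapest available HDD's price divided by flash memory's cost per Gbyte.

# Back-of-the-envelope crossover arithmetic; all prices are illustrative assumptions.
hdd_floor_price_usd = 35.0     # assumed price floor of the cheapest available HDD
ssd_cost_per_gbyte_usd = 3.0   # assumed built cost of flash storage per Gbyte

crossover_gbytes = hdd_floor_price_usd / ssd_cost_per_gbyte_usd
print("Flash wins on cost below roughly %.0f Gbytes of capacity" % crossover_gbytes)

# For perspective, the 1992 price cited above works out to:
print("1992 flash: $%.0f per Gbyte" % (30.0 * 1024))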
Where do SSDs clearly win?
* Where application demand for storage capacity is moderate, and when the constructed cost of that capacity in flash memory trends below the alternative cost of the smallest-possible HDD (case study: ASUS's Eee PCs).
* Where the available space for a storage subsystem is less than the volume consumed by the smallest-available HDD form factor (assuming that application requirements don't demand that the SSD be supplied in that same form factor).
* Where application demand for storage ruggedness (temperature and moisture tolerance, shock resistance) exceeds demand for lowest possible cost-per-Gbyte (considering the lack of moving mechanical parts in a flash-memory array).
* Where application demand for fastest-possible random read performance exceeds demand for lowest possible cost-per-Gbyte, especially when it's also possible to eliminate the performance bottleneck of legacy mass-storage interfaces such as ATA, SCSI, and their serial equivalents (random read speeds for NAND flash are slower than its sequential read speeds, but they're still orders of magnitude faster than the seek and rotation latencies of an HDD; see the latency sketch after this list).
* Where application demand for lowest-possible peak power consumption and heat dissipation exceeds demand for lowest possible cost-per-Gbyte (note the emphasis on peak; in conjunction with RAM buffering both within the drive and within the system containing the drive, HDDs have made impressive improvements in average power consumption, fueled by the historical requirements of iPods and other portable-electronics devices).
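Here's the latency sketch promised in the random-read bullet above. The spindle speed, average seek time, and NAND page-read time are assumed, representative values rather than measurements, and host-interface transfer time is ignored for simplicity.

# Rough random-read latency comparison; all figures are illustrative assumptions.
rpm = 5400                                       # assumed notebook-class spindle speed
avg_rotational_latency_ms = 0.5 * 60000.0 / rpm  # half a revolution, in milliseconds
avg_seek_ms = 12.0                               # assumed average seek time
hdd_random_read_ms = avg_seek_ms + avg_rotational_latency_ms

nand_page_read_us = 25.0                         # assumed NAND array-to-register read time
nand_random_read_ms = nand_page_read_us / 1000.0

print("HDD random read:  ~%.1f ms" % hdd_random_read_ms)
print("NAND random read: ~%.3f ms" % nand_random_read_ms)
print("Ratio: roughly %.0fx" % (hdd_random_read_ms / nand_random_read_ms))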
Conversely, where do HDDs clearly win?
* Where lowest cost-per-Gbyte is paramount, where the application's capacity demands can be served by the range of HDD sizes available at the time, and where an HDD can adequately address form-factor, ruggedness, power-consumption, and heat-dissipation concerns. Perhaps, if folks quit shooting video and don't start widely downloading movie and TV-show rentals and purchases, HDDs' edge here will tangibly diminish. Frankly, though, I see quite the contrary occurring in both mass-storage usage areas.
* Generally speaking, where the majority of read accesses are sequential. Especially in this era of high-RPM platters coupled with high areal per-platter bit-packing density, courtesy of PMR (perpendicular magnetic recording) technology, HDDs (in conjunction with their integrated buffers) can stream an impressive amount of data in a given amount of time, as long as seek and rotation latencies aren't a notable factor.
* Again generally speaking, whenever write performance is paramount. Whenever the data set to be written fits entirely within an HDD's RAM buffer, and whenever that buffer's contents can be transferred to the platter in the background prior to the next buffer fill, an HDD will always win out (a simple timing model follows this list). In more obscure cases (large data sets and/or constant writes), the HDD-versus-SSD comparison largely depends on how fragmented the HDD is at the time, and therefore on the degree to which seek and rotation latencies factor into the overall HDD write-performance equation.
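Here's the simple timing model mentioned in the write-performance bullet above. The buffer sizes and transfer rates are illustrative assumptions only, and fragmentation, seek, and rotation effects are ignored; the point is that a burst which fits in the drive's buffer appears to complete at the host-interface rate, while larger bursts are throttled by the media rate.

# Simple buffered-write timing model; all rates and sizes are illustrative assumptions.
def apparent_write_ms(burst_mbytes, buffer_mbytes, interface_mbytes_per_s, media_mbytes_per_s):
    interface_ms = burst_mbytes / interface_mbytes_per_s * 1000.0
    overflow_mbytes = max(0.0, burst_mbytes - buffer_mbytes)  # portion the buffer can't absorb
    media_ms = overflow_mbytes / media_mbytes_per_s * 1000.0  # time for the media to drain it
    return max(interface_ms, media_ms)

# Assumed: 150-Mbyte/s host interface, 8-Mbyte HDD buffer, 60-Mbyte/s platter rate,
# and a 2008-era SSD with a tiny buffer and a 30-Mbyte/s sustained write rate.
print("HDD, 4-Mbyte burst:   %.1f ms" % apparent_write_ms(4, 8, 150, 60))
print("HDD, 256-Mbyte burst: %.0f ms" % apparent_write_ms(256, 8, 150, 60))
print("SSD, 256-Mbyte burst: %.0f ms" % apparent_write_ms(256, 0.1, 150, 30))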
In between the "where SSDs clearly win" and "where HDDs clearly win" ends of the spectrum, there's a large "either/or" grey zone. Here's where incremental flash-memory supply will go, ebbing and flowing depending on whether we're in a constrained-supply or an oversupply state at any given point in time. On that note, I'd like to comment on a few concerns Robin Harris mentioned in his StorageMojo missive:
* Flash-memory manufacturers have substantial leeway in trading off read/write performance against cost per Gbyte. They can, for example, read and write multiple flash-memory components in parallel and beef up the on-chip RAM buffers (at a die-size penalty). They can also selectively incorporate single-bit-per-cell flash memories (for access speed) or multilevel-cell products (for lowest per-bit cost). This is conceptually no different from, for example, a 4200-versus-5400-versus-7200-versus-10,000-RPM HDD, or one with a 2-versus-8-versus-16-Mbyte buffer.
* I'm not as concerned about flash-memory reliability as Robin is, although I acknowledge that the bulk of his audience's focus is on enterprise applications, where the issue is particularly critical. Toshiba gave its first public presentations on NAND and NOR flash memory in late 1984, Intel announced its first NOR flash-memory device in 1988, and multilevel-cell technology rolled out in 1996, right before I left Intel. (Aside: my former boss and then-peer Saul Zales, who's now vice president and general manager at Intel/STMicroelectronics joint venture Numonyx, personally coined the StrataFlash marketing moniker for Intel's MLC technology achievement.) Mass-storage applications were front-and-center in both companies' (and partners') focuses from the very beginning; I personally worked on the ASIC designs for the second generation of Intel's PCMCIA flash-memory cards and advised the team that developed Intel's first-generation (albeit never formally introduced) SSD. Flash file systems, both hardware- and software-based, incorporating concepts such as wear leveling, EDAC, and block retirement and replacement at extended cycle counts, are well understood by this point (a minimal sketch of the wear-leveling idea appears after this list).
* For the bulk of notebook PC designs going forward, I don't believe that the design decision will be exclusively HDD-or-SSD. Seagate just shipped its billionth hard drive, after all, and the company is not going to go quietly into the night (as its recent legal saber-rattling exemplifies). Rather, as the MacBook Air showcases, the end customer will be able to order an otherwise-identical product with the choice of either an HDD or an SSD inside. HDD-versus-SSD pricing will be determined by the computer supplier, based both on its interpretation of the perceived consumer value of one versus the other and on the comparative costs of HDDs and SSDs (driven by their supply situations, since both are commodities) at any particular point in time.
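As promised in the second bullet above, here's a minimal conceptual sketch of dynamic wear leveling with block retirement. It's an illustrative model of my own, not any vendor's flash-file-system code, and the cycle-count limit is an assumed figure.

# Minimal sketch of dynamic wear leveling with block retirement (illustrative only).
ERASE_CYCLE_LIMIT = 10000   # assumed rated program/erase endurance per block

class FlashBlockPool:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks
        self.retired = set()

    def allocate(self):
        # Dynamic wear leveling: hand out the least-worn block still in service.
        candidates = [b for b in range(len(self.erase_counts)) if b not in self.retired]
        if not candidates:
            raise RuntimeError("no usable blocks left")
        return min(candidates, key=lambda b: self.erase_counts[b])

    def erase(self, block):
        # Count the cycle; retire the block once it reaches its rated limit.
        self.erase_counts[block] += 1
        if self.erase_counts[block] >= ERASE_CYCLE_LIMIT:
            self.retired.add(block)

# Usage: spread 50,000 erasures across 16 blocks instead of hammering one of them.
pool = FlashBlockPool(16)
for _ in range(50000):
    pool.erase(pool.allocate())
print("maximum per-block erase count:", max(pool.erase_counts))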
It's amusing (as well as a bit frustrating) to see some companies hedging their magnetic-versus-semiconductor bets. Samsung, for example, unveiled a 128-Gbyte MLC flash-memory-based SSD at CES—on the very same night that the company launched 30- and 40-Gbyte, 1-in. HDDs...and a 500-Gbyte 2.5-in. HDD. The 1-in. HDD announcement is particularly baffling to me; as my last-fall article predicted, and as subsequent supplier actions confirmed, most HDD vendors are abandoning ultrasmall form factors in acknowledgement of the formidable flash-memory alternative (remember my earlier floor cost comparison discussion?). While on the one hand I understand Samsung's near-term desire to service all market opportunities to the best of its abilities, the Blu-ray-versus-HD DVD case study also provides a valuable lesson in the market paralysis that can occur in the presence of multiple contending technology options.
And the big losers in this magnetic-versus-semiconductor tussle, at least for the moment? Hybrid hard drives, as well as their "Robson" (and AMD-equivalent) system-based flash-memory array alternatives. Not because there's anything conceptually wrong with either approach (although, as I've pointed out before, the Robson approach certainly plays better to computer OEMs' desires for continued HDD commoditization and for system implementation flexibility). No, it's because both techniques are intimately tied to Microsoft's Windows Vista operating system. Need I elaborate on its continued underwhelming industry embrace? I thought not.
References
My past coverage of flash memory includes a recent look at the MacBook Air, specifically its SSD option, my late-September 2007 feature article and its online addendums, and my abundant blog focus on the topic over the past three-plus years.
[Harry: Permanence of data remains the issue for me. Flash memory has a finite number of write/erase cycles. Though designed to last, on average, a few years under typical read/write usage when transferring data, applications that use the storage medium as scratch memory see a significantly higher number of write cycles, which significantly reduces the usable life of a solid-state drive. I currently don't see solid-state drives replacing hard drives in anything but disposable devices. Obviously, laptop manufacturers would love to see you replacing your system every three years.]
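For readers who want to put rough numbers on that write-endurance concern, here's a back-of-the-envelope estimate. The capacity, cycle rating, wear-leveling efficiency, and daily write volumes are illustrative assumptions rather than figures from the comment or the column, and write amplification is ignored for simplicity.

# Back-of-the-envelope SSD endurance estimate; all inputs are illustrative assumptions.
def lifetime_years(capacity_gbytes, pe_cycles, daily_write_gbytes, leveling_efficiency=0.8):
    # Total writable volume divided by daily write volume, expressed in years.
    total_writable_gbytes = capacity_gbytes * pe_cycles * leveling_efficiency
    return total_writable_gbytes / daily_write_gbytes / 365.0

# Assumed 64-Gbyte MLC drive rated at 5,000 program/erase cycles per block.
print("light-duty use,      5 Gbytes/day: %.0f years" % lifetime_years(64, 5000, 5))
print("scratch-heavy use, 200 Gbytes/day: %.1f years" % lifetime_years(64, 5000, 200))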