Hi all. A short critique of a Rambus white paper:
As for DDR SDRAM, there are forecasts for reasonable volumes of these devices as well, in particular for server main memory. It is important to distinguish that for the server market, the volume configuration is a x4 200/266 MHz device (need x4 for system reliability reasons). This organization is NOT usable in the high performance consumer/networking market (see more detailed discussion below). Because this segment requires much more bandwidth per device, a x16 and often a x32 organization are necessary. These wider data paths are niche, premium parts due to its inherently smaller volumes, more expensive die, package and test costs. Rambus estimates the volumes for the standard (x4) and niche DDR SDRAMs as follows (figure 2).
[Figure 2: Rambus's volume estimates for standard (x4) vs. niche DDR SDRAMs -- rambus.com]
It is true that servers want the x4 devices while others want x8 or wider, but the conclusion is false. The x8 and x16 chips are produced with only a one-metal-layer difference from the x4, and on the same equipment. Since that difference is in the last metal layer, the memory maker can choose which width to produce at the last minute.
The situation is identical to the current SDRAM market, where you will notice no difference in pricing between x4, x8, and x16 memory chips. In essence, the various widths are the same chip, with the same die, in the same package. They are priced the same, except for brief differences on the spot market. Go to any DRAM distributor and you will see this pattern of pricing in the x4, x8 and x16 DDR chips, as well as in regular SDRAM chips. It is true, on the other hand, that the x32 chips come in a different package and cost more. But they are still far cheaper than RDRAM, and besides, they won the graphics market, so they can't be horribly expensive. On to system costs of DDR and RDRAM:
Increased number of pins impacts:
1. Packaging costs, both on the controller and the memory devices,
2. Silicon costs, in terms of minimum die area for pad limited designs,
3. PCB material costs, both area and layers due to routing complexity.
While it is possibly true that DDR increases the cost of the controller because of the package, this is not the case for the memory chip. (I say possibly, because RDRAM has its own set of high costs. These include Rambus licensing, as well as the more complicated and larger silicon area used in the RSL I/O buffers, as compared to DDR or SDRAM pins. In addition, the RDRAM chips require more expensive testers and, because of the tighter timing requirements, are probably showing lower yields.) The package that RDRAM has to use (because of electrical considerations) is a specialty BGA, and is more expensive than the one used by standard memory chips. Later DDR chips will also move to a BGA pinout, but by then the technology will be more comparable in cost. DDR is designed to be cheap.
Silicon costs would go up on designs that are pad limited, but memory chips are certainly not in that category, so the statement has to refer to the controller chip. After all, we know that Rambus memory chips already carry a die area penalty of at least 10%. But the typical controller chip is a lot cheaper than the total memory in a system (anyone doubting this should compare the cost of a motherboard, which includes the controller chip and a lot of other stuff, with the cost of the memory), so adding a relatively small number of pins to a single controller chip is much less expensive than requiring all of the memory chips to be 10% larger. In short, there is a lot more memory silicon than controller silicon in the typical computer, so to minimize total system silicon, concentrate on the memory chips, not the controller chip.
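To put rough numbers on it, here is the back-of-envelope arithmetic in Python. Every figure in it (chip prices, chip count, pin count) is an illustrative assumption of mine, not anything from the white paper, and it treats a 10% die-area penalty as roughly a 10% cost penalty:

  # Hypothetical system: 8 memory chips at $5 each, one controller chip.
  memory_chips = 8
  memory_chip_cost = 5.00   # dollars, assumed

  # RDRAM route: every memory chip carries a ~10% die-area penalty,
  # which we take as roughly a 10% cost penalty.
  rdram_penalty = memory_chips * memory_chip_cost * 0.10   # $4.00

  # DDR route: add, say, 100 extra pins to the one controller package,
  # at an assumed marginal cost of a cent per pin.
  ddr_penalty = 100 * 0.01                                 # $1.00

  print(rdram_penalty, ddr_penalty)   # 4.0 vs. 1.0

The exact numbers don't matter; the point is that the memory-side penalty scales with the number of memory chips, while the controller-side penalty is paid once per system.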
I've already shown that the number of pads on the motherboard of a DDR design is comparable to the number in an RDRAM design. RDRAM designs typically use more layers, not fewer, because of the sensitive nature of the RSL interfaces. In addition, there are more restrictions, not fewer, on where components may be placed and on how traces may be routed. The trade press is replete with articles about how the industry is trying to reduce the high cost of Rambus motherboards and RIMM modules. The article continues:
Memory granularity is the other major cost consideration. Although system memory requirements have increased, they have not kept up with the increases in DRAM densities, which typically quadruple every 3 years. The net effect is that the number of devices shipped per system actually decreases over time as systems utilize the higher density, lower cost memories. Memory granularity is especially critical in consumer/networking applications since they only require a fixed and small amount of DRAM (typically 16-64MB); anything over this adds to cost. Since 128Mb (16MB) is presently the cheapest/bit, the minimum granularity to build a high performance DDR SDRAM system is 64-128MB! This increases to a minimum 128-256MB when 256Mbit generation becomes the most cost effective in the 2002 timeframe.
While it is true that the number of chips per system is likely to continue to decrease, this hasn't bothered SDRAM much. The industry has responded to this trend by packaging memories in "wider" organizations. Thus the x16, which would once have been a specialty DRAM, is now a standard commodity width, with pricing identical to the x4 and x8 parts. As this trend continues, DRAM will keep moving to wider and wider organizations. But the trend in packaging is a decreasing cost per pin, which keeps the wider parts from becoming much more expensive. The arguments in the above paragraph map directly onto the graphics market, where bandwidth per device is of the highest concern, and Rambus has lost that market pretty much completely.
That Rambus would note that the minimum granularity for a high performance DDR SDRAM system is "64-128MB!" is kind of strange. Go look on the Dell website. If granularity below 64MB is such a big deal, why don't we see RDRAM in granularities smaller than 64MB? It's pretty clear that 64MB is a small enough amount of memory that choosing a granularity of that size is not an engineering problem. And if it were, the DDR makers would build x32 or x64 chips, and those chips would still be cheaper than RDRAM. As an example, note that the Hitachi x64 SDRAM is in a 108-pad BGA, not much more than an RDRAM chip.
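The granularity arithmetic is simple enough to write down. A minimal sketch in Python, assuming a 64-bit DDR data bus and the 128Mbit (16MB) devices the paper itself cites:

  # Minimum granularity = (bus width / device width) * device capacity.
  bus_width_bits = 64      # assumed 64-bit DDR data bus
  device_capacity_MB = 16  # 128Mbit device, per the white paper

  for device_width in (4, 8, 16, 32, 64):
      chips = bus_width_bits // device_width
      print(f"x{device_width}: {chips} chips, {chips * device_capacity_MB}MB minimum")

This prints 256MB for x4, 128MB for x8, 64MB for x16, 32MB for x32, and 16MB for x64: every doubling of device width halves the minimum granularity, which is exactly why wider parts answer the paper's complaint.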
Link to Samsung Rambus page: usa.samsungsemi.com Note in the above link that the 144Mb RDRAM chip is in a 62-pin uBGA. That is only 46 pins fewer than the 108-pad x64 SDRAM chip from Hitachi. With the marginal cost of pins well under a cent each, the cost difference is under $0.46. To put this in perspective, note that a 2% license fee on a $50 memory chip amounts to $1.00.
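Checking that arithmetic in Python (the pin counts come from the parts cited above; the cent-per-pin figure is my assumed upper bound):

  sdram_x64_pads = 108   # Hitachi x64 SDRAM, 108-pad BGA
  rdram_pads = 62        # Samsung 144Mb RDRAM, 62-pin uBGA
  cost_per_pin = 0.01    # dollars; assumed upper bound on marginal pin cost

  pin_cost_delta = (sdram_x64_pads - rdram_pads) * cost_per_pin   # $0.46 at most
  royalty = 0.02 * 50.00   # 2% Rambus license fee on a $50 chip: $1.00
  print(pin_cost_delta, royalty)

Even at a full cent per pin, the extra package pins cost less than half of the license fee alone.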
In short, the packaging issue for memory is a non-issue. Modern packages are providing pins for darn near free, compared to how much they used to cost.
In a nutshell, Rambus is a technology before its time. While one would think that it would be great to be ahead of the industry, this is actually not the case. If we have to wait until 2002 for the Rambus granularity advantage over DDR to become substantial, then what kind of cost advantage will DDR's higher volume have earned it by then? The time for Rambus to show its great advantage is now, not two years from now.
The rest of the article talks about performance issues. Those are a lot harder to put one's finger on; granularity and system costs are much easier to quantify. The performance question is one to be argued out with benchmarks, and since a lot of DDR systems are coming out over the next few months, I am going to leave that subject alone. The benchmarks will speak for themselves.
-- Carl