To: Zeev Hed who wrote (26633) 8/7/1999 11:21:00 PM From: Dan3
Re: without a high speed memory

This may sound backwards, but I think the beauty of Rambus isn't that it's fast, but that it's cheap. Rambus seems to have been conceived around the time of the difficult move from 66 MHz to 100 MHz, when 100 MHz looked like the end of the line for capacitor memory. The Pentium's bus overcame the lack of progress in memory speed by doubling the bus width from 32 to 64 bits, but the next step, to 128 bits, would have meant expensive, hard-to-design motherboards and high pin counts.

So Rambus built a clever arrangement of 128-bit-wide 100 MHz cells, then integrated a high-speed control system to serialize the data over a fast 16-bit bus. Not only is the 128-bit memory now easy to place on the motherboard, but Rambus also interleaved the 100 MHz cells the same way DIMMs are interleaved on servers, to provide high streaming data rates. The result is excellent performance with lower pin and trace counts. Great, so far.

Trouble is, as clever as this design is, it increases the die size by around 40%, and the chip structures that drive the serialization to 300-400 MHz and transfer data on both edges of the clock haven't been easy to produce. There is also a big lag before each set of transfers can begin: it takes 40 to 50 ns before any data starts to come from the chip, and that's on the 800 MHz (400 MHz double-data-rate) parts. I don't understand why it takes so long for a part running at 400 MHz internally (even if the actual cells run at 100 MHz, that's still only 10 ns per cycle), but it does.

Now, that wouldn't matter if capacitor RAM technology had stayed stuck at 100 MHz, but it hasn't. Instead, the same process that can produce the 800 MHz Rambus parts is now producing 166 MHz SDRAM. Some of those designs are being spec'd at single CAS latencies (e.g., 1 x 6 ns instead of 2 or 3 x 6 ns), and 133 MHz SDRAM is being produced that transfers data on both edges of the clock, just like Rambus.
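The width-for-speed trade described above comes down to simple arithmetic: serializing a wide, slow core onto a narrow, fast bus preserves the same peak rate. Here's a minimal sketch using the figures from the post (illustrative numbers, not benchmarks):

```python
def peak_bandwidth_mb_s(bus_bits, clock_mhz, transfers_per_clock=1):
    """Peak transfer rate in MB/s for a memory bus."""
    return bus_bits / 8 * clock_mhz * transfers_per_clock

# Internal Rambus core: 128-bit-wide cells at 100 MHz
core = peak_bandwidth_mb_s(128, 100)         # 1600 MB/s
# External Rambus channel: 16-bit bus at 400 MHz, data on both clock edges
channel = peak_bandwidth_mb_s(16, 400, 2)    # 1600 MB/s -- matches the core
# Conventional PC100 SDRAM: 64-bit bus at 100 MHz
sdram = peak_bandwidth_mb_s(64, 100)         # 800 MB/s

print(core, channel, sdram)
```

The point is that the narrow 16-bit channel, run eight times faster with double-data-rate signaling, streams exactly as fast as the wide internal array, which is what lets Rambus cut the pin and trace counts.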
For most mainstream software, latency is as important as, if not more important than, streaming data rates in determining performance. And standard SDRAM can be interleaved just like Rambus by using more modules, if the expense can be justified (as in a server). But DDR and 16-byte bursts (instead of 8 or fewer) will probably provide sufficient streaming data rates without resorting to interleaving. The net result is performance that is as good as, or better, possibly noticeably better, than Rambus. And these technologies don't add significantly to the cost of the memory module.

You do pay more for Rambus: first you have to buy more silicon for a given memory capacity, then there are royalties on top of that added expense. So why do I say that the beauty of Rambus is not performance but price? Because I think the price of the silicon will come down rapidly over the next few years, while the cost of higher pin counts on chipsets and CPUs, and the cost of motherboard area per cm2, will stay the same or rise. So ultimately Rambus could become a winner for the great mass of entry-level computers, as well as cost-sensitive devices such as video games, digital television, etc. But it'll only happen when the price comes way down, and that won't happen right away.

Good luck to you. As I said before, market factors (e.g., Intel regains control of the high end and won't support anything else) can easily overwhelm these suppositions, whether they are right or wrong.

Dan
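The latency-versus-streaming argument can be put in rough numbers: a short access (one cache line) pays the full initial latency, so a lower-latency part can beat a higher-bandwidth one. A sketch with illustrative figures loosely based on the post (the ~30 ns SDRAM latency is an assumption, not a quoted number):

```python
def fetch_time_ns(latency_ns, bandwidth_mb_s, nbytes=64):
    """Initial access latency plus burst transfer time for nbytes, in ns."""
    bytes_per_ns = bandwidth_mb_s / 1000.0  # 1 MB/s == 0.001 bytes/ns
    return latency_ns + nbytes / bytes_per_ns

print(fetch_time_ns(45, 1600))  # Rambus-like: ~45 ns lag, 1.6 GB/s -> 85.0 ns
print(fetch_time_ns(30, 800))   # assumed low-latency SDRAM, 0.8 GB/s -> 110.0 ns
print(fetch_time_ns(30, 1600))  # same latency at DDR bandwidth -> 70.0 ns
```

With short bursts the fixed latency dominates, which is why DDR SDRAM with a comparable streaming rate but lower initial lag can come out ahead of Rambus on mainstream workloads.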