To: Michael Wilson who wrote (2353) 11/21/1997 2:03:00 PM From: Michael Wilson
Technical Assessment of Rambus

For all those who are not familiar with what Rambus' technology really is, the following technical assessment was written by a colleague of mine. I have been following this thread for some time, and have found this type of discussion notably lacking.

---------------------------------------------------------------

First let me state that I am a chip design engineer and that I have assessed Rambus' DRAM technology three times, starting in 1990 while I was designing digital set-top boxes for General Instrument. At that time I chose EDO DRAM because it had neither the cost penalties nor the physics challenges of the RDRAM interface. More recently I looked at their second-generation technology for use in ATM switching designs and reached a similar conclusion: its performance was no better than SDRAM (synchronous DRAM), and the reduced pin count was more than offset by the premium for RDRAMs as well as the royalties we would have had to pay our ASIC vendor (LSI Logic) for use of the RDRAM core.

Rambus is a high-speed interface wrapped around a standard DRAM core. What does this mean? It means it does not make the fundamental DRAM access any faster. That will always be limited by what is known as Trac (row address access time) and Trc (RAS, or row address strobe, cycle time). Historically, DRAM access times have decreased linearly as process geometries shrink. Densities, however, increase as the square of the geometry shrink: when geometries go from .5 um to .25 um, densities go from 16 Mbit to 64 Mbit (not 32 Mbit).

So what is the advantage of the RDRAM interface? Because it is a faster path into the DRAM core, it requires fewer interface pins to carry the same amount of data than a slower interface does. For example, a 64-bit-wide SDRAM interface running at 133 MHz can transfer data at 8.512 Gbit/sec. A 9-bit-wide RDRAM interface running at 600 MHz can transfer data at 5.4 Gbit/sec. On the surface this sounds like a 64 - 9 = 55 pin savings. The reality is that, due to the increased power and ground pins RDRAM requires to maintain signal integrity at 600 MHz, the pin savings is much smaller. I found that the pin savings of an RDRAM interface versus an SDRAM interface achieving similar data rates for 64-byte bursts was about 20 pins.

That still sounds pretty good, doesn't it? So let's add up the savings and look at the pros and cons. For a typical high-pin-count package such as a 456-pin plastic ball grid array (PBGA), the cost is about $5 in the types of volumes used by the PC industry, or about 1.1 cents per pin. Thus saving 20 pins grosses you a savings of about $.22. Not bad, you say. Oops ... we forgot to account for the royalties, higher PCB (printed circuit board) costs, and higher manufacturing costs. Going from H&Q's numbers of royalties of 1-2% (I will use 1.5% to be fair) and assuming similar royalties for the interface used in the interfacing chip, the royalty cost alone would erode the savings by about $.75 for a $50 (price is my guess) 64 Mbit DRAM and another $.30 for the interfacing ASIC, for a net savings of ($.83) (the parentheses mean LOSS). So, forgetting about any of the other cost burdens, you are already losing 83 cents on the deal.

So, if RDRAM costs more for the same level of performance, why did Nintendo choose the technology? Plain and simple, I believe they made a bad choice ... unless Rambus gave them a sweet deal to buy into the market and gain credibility. Mind you, I don't think it is hurting Nintendo. It does not cost them much more, and although the Sony PlayStation uses a conventional DRAM interface, I doubt users either notice or care.
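For anyone who wants to check the bandwidth arithmetic, here is a trivial Python sketch using the bus widths and clock rates quoted above. These are the illustrative numbers from this post, not vendor data:

# Peak interface transfer rates quoted above. This is pure
# interface arithmetic; it says nothing about core access time.
sdram_bits = 64          # SDRAM bus width, bits
sdram_clk  = 133e6       # SDRAM clock, Hz
rdram_bits = 9           # RDRAM bus width, bits
rdram_clk  = 600e6       # RDRAM clock, Hz

sdram_rate = sdram_bits * sdram_clk   # 8.512e9 bit/s = 8.512 Gbit/s
rdram_rate = rdram_bits * rdram_clk   # 5.4e9 bit/s  = 5.4 Gbit/s
print(sdram_rate / 1e9, rdram_rate / 1e9)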
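And the cost arithmetic in the same style. Remember that the $50 DRAM price is my guess, as stated above, and the $.30 ASIC royalty is likewise my estimate:

# Pin savings versus royalty cost, using the numbers above.
pkg_cost_per_pin = 5.00 / 456       # ~$0.011/pin for a $5, 456-pin PBGA
pins_saved       = 20               # net savings after extra power/ground pins
pin_savings      = pins_saved * pkg_cost_per_pin   # ~$0.22

royalty_rate = 0.015                # midpoint of H&Q's 1-2% figure
dram_price   = 50.00                # my guess for a 64 Mbit RDRAM
asic_royalty = 0.30                 # my estimate for the interfacing ASIC
royalties    = royalty_rate * dram_price + asic_royalty  # $1.05

net = pin_savings - royalties       # about -$0.83, i.e. a LOSS
print(round(net, 2))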
So, if RDRAM costs more for the same level of performance, why are the obviously very technically smart people at Intel pursuing the technology? Processor speeds _are_ outstripping DRAM access rates, and Intel needs to put pressure on the DRAM industry to continue to improve DRAM performance. What better way than to side with Rambus and throw down the gauntlet to the rest of the industry? Has it worked? You bet. Two new competing DRAM interface standards are now emerging from JEDEC (a DRAM industry standards body). Because they are industry standards they are FREE, with no associated royalties. Beginning to get the picture? Intel commits a paltry sum to develop an RDRAM interface in exchange for forcing the industry to develop an equivalent alternative for FREE. Such a deal. It is also a good hedge strategy ... if the DRAM industry does indeed fail to deliver (of which there is NO evidence that it will), then Intel would at least have the RDRAM technology to depend on.

However, like I said, there are two new competing standards. They are:

1. DDR SDRAM - double data rate SDRAM. Sound familiar? It is. The SDRAM interface is being enhanced to transfer data on both edges of the data clock, EXACTLY THE SAME WAY THAT THE RDRAM INTERFACE DOES! (A quick sketch of the arithmetic appears near the end of this post.) "Rambus will sue for patent infringement!" you say. Sorry ... transferring data on both edges of the clock is not covered.

2. SLDRAM - SyncLink DRAM ... for all intents and purposes the same as RDRAM. It is a narrow high-speed bus with separate address and data busses. Check out their web site at sldram.com. If you are long on the 'bus, I _strongly_ suggest you make yourself aware of the competition.

Let's reiterate what is probably the most important point of my entire post. RDRAM, SLDRAM and DDR SDRAM only improve the time it takes to move data from point A to point B. For example, putting these interfaces on a disk drive does not change the fundamental disk drive seek time and access time. I cannot stress this enough. DRAM core access times are solely a function of process geometry and, to a lesser degree, the cleverness of the DRAM vendor. Currently the fastest DRAM core is built by Ramtron. Ever heard of them? They have had the fastest DRAM core for the past 7 years and have thus far only managed to lose money and go bankrupt once in the process. The bottom line is that interface speed has nothing to do with DRAM access time; it is DRAM access time that is the real bottleneck, and Rambus does not claim to solve that problem, nor could it even if it tried. (A rough model of this appears just before my summary.)

Finally, there is the whole monopolistic side of the story. Who in their right mind would license RDRAM technology and pay royalties when there are equivalent interfaces available with NO royalties? Why would Intel allow themselves to be held over a barrel by a single vendor that holds patents and demands royalties for the very heart of PC memory systems? I came from just such an industry ... in the late 80's General Instrument owned the C-Band backyard satellite dish market. Quite by accident, mind you, but it happened nonetheless. When it came time to standardize digital TV, GI tried again by introducing their own proprietary digital compression system. The cable industry would have no part of it after suffering GI's monopoly of the C-Band market, and the MPEG industry standard was born.
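To put a number on the DDR point above: transferring on both clock edges doubles peak bandwidth without raising the clock. A minimal sketch, reusing the same illustrative 64-bit, 133 MHz bus from earlier:

# DDR SDRAM: same 64-bit bus, same 133 MHz clock, but data moves
# on both clock edges, so peak bandwidth doubles versus plain SDRAM.
bus_bits = 64
clk_hz   = 133e6
edges    = 2                          # rising and falling edge
ddr_rate = bus_bits * clk_hz * edges  # 17.024e9 bit/s, double the SDR rate
print(ddr_rate / 1e9)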
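And before the summary, here is the rough model I promised of why the interface is not the bottleneck. Assume a 64-byte burst and an illustrative 60 ns core access time (Trac); these are my numbers for the sake of the argument, not vendor specs:

# Total time to fetch a 64-byte burst = core access + transfer time.
# A faster interface only shrinks the second term; Trac is untouched.
trac_s     = 60e-9                        # illustrative row access time
burst_bits = 64 * 8

for name, rate in [("SDRAM 64b @ 133 MHz", 64 * 133e6),
                   ("RDRAM  9b @ 600 MHz",  9 * 600e6)]:
    xfer  = burst_bits / rate             # time on the wire for the burst
    total = trac_s + xfer                 # ~120 ns vs ~155 ns
    print(name, round(total * 1e9, 1), "ns total")

No matter how fast the interface gets, the total can never drop below the 60 ns core access, which is exactly why the core, not the interface, is the real bottleneck.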
So, in summary:

1. Rambus provides a pin-saving DRAM interface.
2. The cost savings in pins is much less than the added system cost of using RDRAM.
3. There are free competing standards.
4. The industry is very unlikely to all line up and pay Rambus royalties for the rest of DRAM history.

I welcome all comments, both nice and nasty, to what I believe is an even-handed assessment of Rambus' market and future.