Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Bilow who wrote (31143), 9/29/1999 11:26:00 PM
From: Bilow
 
In the long run, Rambus is doomed by inevitable technological change. Probably within the next 4 years, most DRAM is going to be sucked onto the processor. Embedded DRAM will still be way too expensive, but other techniques for combining DRAM and logic into a shared package will proliferate...

The days when standard desktop computers allow the user to upgrade their own memory are fast coming to a close. The Camino launch was sunk by the requirement that the user be able to upgrade memory (i.e. RIMMs), and this requirement will become more and more restrictive for future system designers.

Once, long ago, the user was able to purchase cache memory separately from the processor, but those days are now long gone. So long gone, in fact, that people rarely ask what happened to them. The fact is that allowing the user to touch the cache memory was too expensive and too slow for competitive systems. The engineering tradeoff slowly tilted so as to eliminate expandable cache memory from most computers. We will see the next leg of this natural, evolutionary growth of computers soon. USER SERVICEABLE DRAM MUST DIE!

Instead, most DRAM will be added to the processor "package" at the factory. If the user wants more memory, he buys a different processor / memory combination, just like he now buys a different processor / cache combination.

The decrease in pin costs for modern technology will also allow incredibly wide external chip buses, which means system bandwidths will explode upward. But the cheapest way of connecting memory to the processor will be one of a variety of chip connection techniques. These techniques will eliminate the horrible dead hand of Printed Circuit Board (PCB) limitations on signal speeds, and, therefore, the necessity of buying Rambus licenses. Some of the inter-chip connection techniques are listed in #reply-11393109.

The days when PCB is used to connect the processor to main memory are numbered. This will, of course, obviate the need to pay Rambus royalties, since the Rambus patents are designed to allow high bandwidth signaling on PCBs. Instead, the connections will be extremely efficient direct chip-to-chip links, which will reduce both interface power consumption per pin and propagation delays.

These techniques will remove about 20ns from typical memory latency to the CPU, allowing much faster processor execution. Bandwidths will be limited only by CPU technology for years, due to the extremely large number of pins, the reduced noise environment, and the high data rates available on the extremely short wires attached to those pins. (Prop delays measured in ps.)
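To put that 20ns in perspective, here is a quick back-of-the-envelope sketch in Python (the clock rates are illustrative assumptions, not figures from the post):

latency_saved_ns = 20                       # latency removed, per the estimate above
for clock_ghz in (0.5, 1.0, 2.0):           # assumed CPU clock rates
    cycles = latency_saved_ns * clock_ghz   # ns * GHz = clock cycles
    print(f"At {clock_ghz} GHz, 20 ns is {cycles:.0f} CPU cycles per memory access")

In other words, every access that misses the on-chip caches gets back tens of CPU cycles.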

Both high end and low end processors will be forced to incorporate DRAM in the processor package, but for different reasons. The high end processors will require it for performance reasons, while the rest will need it for cost reasons - these new processors will be considerably cheaper than systems that are spread out over a motherboard, and even more so than systems that are spread out over a motherboard plus DIMM or RIMM modules.

Memory for very large systems, those so large that the memory cannot fit in the same package as the processor, will also be unlike what we currently use. In order to minimize system latency, very large memory systems will be placed in single packages, along with "bit-sliced" memory controller interfaces. The combined unit will have a short bus connection to the processor package, and will be organized so as to allow easy interleaving. That is, memory will be packaged in mega-package units, each with a variable number of data bus pins but the same control interface. A complete external memory system could consist of a single x128 unit, or two x64s, or four x32s, or eight x16s. This packaging will be similar to the way that memory chips are currently sold as x4s, x8s, or x16s, all with the same process and a shared JEDEC standard package.
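As a minimal sketch of the interleaving arithmetic (the bus width and unit widths are just the examples from the paragraph above, not a real specification):

BUS_WIDTH = 128                     # example total external data-bus width in bits
UNIT_WIDTHS = [128, 64, 32, 16]     # example per-package data widths

for width in UNIT_WIDTHS:
    units = BUS_WIDTH // width      # identical units needed to fill the bus
    print(f"{units} x{width} unit(s) -> {units * width}-bit bus, "
          f"up to {units}-way interleaving")

This is the same arithmetic that puts sixteen x4 chips, eight x8 chips, or four x16 chips on a 64-bit DIMM today.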

-- Carl



To: Bilow who wrote (31143), 5/5/2000 11:45:00 PM
From: Bilow
 
Hi all; Pin count reversal: 256Mb RDRAM has 26 more pins than the 256Mb DDR.

Samsung finally has a preliminary spec sheet for 288Mb RDRAM:
usa.samsungsemi.com

Some surprises. The package is a 92-pin uBGA with a ball pitch of 0.8mm, and all of those pins have to be soldered down - zero NCs. Total package size is 17.6mm x 10.6mm. This is not going to be a cheap package. How is it ever going to be within 10% of the cost of SDRAM?

Speed grades are 600 to 800MHz, no change from the present parts. The data width is still x18. In short, the technology hasn't advanced at all from the previous RDRAM, except that the part has a higher density. The pin count increase didn't buy any advantage at all: same old bandwidth, but worse granularity.
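To make the granularity point concrete, here is a small sketch; the assumption that the previous generation was the 144Mb part (128Mb of data) and that the 288Mb part carries 256Mb of data plus ECC is mine, not from the spec sheet:

BITS_PER_BYTE = 8
# Because a single x18 RDRAM device spans the whole channel, the smallest
# capacity increment per channel is one device's worth of data.
old_step_mb = 128 / BITS_PER_BYTE   # assumed 144Mb (128Mb-data) generation -> 16 MB
new_step_mb = 256 / BITS_PER_BYTE   # 288Mb (256Mb-data) part               -> 32 MB
print(f"Minimum capacity step per channel: {old_step_mb:.0f} MB -> {new_step_mb:.0f} MB")

Same bandwidth per channel, but you now buy capacity in twice the chunk size.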

Now we are in a position to compare this against 256Mb DDR specifications.

The JEDEC standard 256Mb DDR SDRAM from Hitachi comes in a 66-pin plastic TSOP, which is a lot cheaper than RDRAM's uBGA. There are 6 NCs included in the x16 part, and lots more in the narrower widths, allowing this pinout to be used for later extensions. The total package size is a little larger (21% by area) than the RDRAM, at 22.22mm x 10.16mm, but since there is room to put 18 of these on a DIMM, the size is not an issue. Clock frequency goes to 143MHz (286MHz data rate), and the same package suffices for the x4, x8 and x16 parts:
semiconductor.hitachi.com
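
For reference, here is a small side-by-side of the per-device numbers quoted above (treating the x18 RDRAM interface as 16 data bits, and using 800 and 286 Mbps/pin as the peak per-pin data rates; these interpretations are mine):

parts = {
    "288Mb RDRAM uBGA": {"pins": 92, "size_mm": (17.6, 10.6),   "mbps_per_pin": 800, "data_pins": 16},
    "256Mb DDR TSOP":   {"pins": 66, "size_mm": (22.22, 10.16), "mbps_per_pin": 286, "data_pins": 16},
}
for name, p in parts.items():
    area = p["size_mm"][0] * p["size_mm"][1]          # package area in mm^2
    peak = p["mbps_per_pin"] * p["data_pins"] / 8     # peak MB/s per device
    print(f"{name}: {p['pins']} pins, {area:.0f} mm^2, {peak:.0f} MB/s peak")

The area ratio works out to about 1.21, which is where the "21% by area" figure above comes from.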

I know that I have stated repeatedly on this thread that pin count isn't what it used to be, so RDRAM being in a package with more pins isn't a big deal. But we have to remember that the whole reason for Rambus' existence is to save pins - so what is the industry doing throwing pins away?

The above information on pin counts for the 256Mb packages shows that the industry agrees with me - pin counts on BGA packages are not a significant cost. See my post #reply-13483793 for links to places that will let you estimate the cost of pins in various packages. Please note that these facts are in total contradiction to the claims of the various amateurs who have been posting on this thread about the great pin savings that Rambus provides.

The simple fact is that the pin count economics that made Rambus a good idea in 1990 no longer hold. Rambus provided an expensive, barely manufacturable solution to a problem that went away. The only thing keeping RDRAM alive now is Intel, but watch what happens this summer.

-- Carl