To: Bilow who wrote (31143) 9/29/1999 11:26:00 PM From: Bilow
In the long run, Rambus is very doomed, due to inevitable technological change. Probably within the next 4 years, most DRAM is going to be sucked onto the processor. Embedded DRAM will still be way too expensive, but other techniques for combining DRAM and logic into a shared package will proliferate...

The days when standard desktop computers allowed the user to upgrade their own memory are fast coming to a close. The Camino launch was sunk by the requirement that the user be able to upgrade memory (i.e. RIMMs), and this requirement will become more and more restrictive for future system designers. Once, long ago, the user was able to purchase cache memory separately from his processor, but those days are now long gone. So long gone, in fact, that people don't much ask what happened to them. The fact is that allowing the user to touch the cache memory was too expensive and too slow for competitive systems. The engineering tradeoff slowly tilted until extendable cache memory was eliminated from most computers. We will soon see the next leg in this natural, evolutionary growth of computers. USER SERVICEABLE DRAM MUST DIE! Instead, most DRAM will be added to the processor "package" at the factory. If the user wants more memory, he buys a different processor / memory combination, just like he now buys a different processor / cache combination.

The decrease in pin costs for modern technology will also allow incredibly wide external chip busses, which means that system bandwidths will explode upward. But the cheapest way of connecting memory to the processor will be one of a variety of chip connection techniques. These techniques eliminate the horrible dead hand of Printed Circuit Board (PCB) limitations on signal speed, and, therefore, the necessity of buying Rambus licenses. Some of the inter-chip connection techniques are listed in #reply-11393109. The days when PCB is used to connect processor to main memory are numbered. This will, of course, obviate the necessity of paying Rambus royalties, since the Rambus patents are designed to allow high bandwidth signaling on PCBs. Instead, the connections will be extremely efficient, direct chip to chip, and this will reduce both interface power consumption per pin and propagation delay. These techniques will remove about 20ns from typical memory latency to the CPU, allowing much faster processor execution. Bandwidths will be limited only by CPU technology for years, due to the extremely large number of pins, the reduced noise environment, and the high data rates available on the extremely short wires attached to those pins (prop delays measured in ps).
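To put rough numbers on those bandwidth and latency claims, here is a minimal back-of-the-envelope sketch. The RDRAM channel figures (16 data bits at 800 MT/s) are the published PC800 numbers; the 512-bit in-package bus, its 400 MT/s rate, the 120 ns baseline latency, and the 700 MHz CPU clock are illustrative assumptions, not measurements.

# Back-of-the-envelope comparison: narrow PCB channel vs. a wide in-package bus.
# All "in-package" numbers below are illustrative assumptions, not product specs.

def bandwidth_gb_per_s(bus_width_bits: int, transfer_rate_mt_s: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bytes) * (transfers per second)."""
    return (bus_width_bits / 8) * transfer_rate_mt_s * 1e6 / 1e9

# A single Direct RDRAM channel: 16 data bits at 800 MT/s (PC800).
rdram_channel = bandwidth_gb_per_s(bus_width_bits=16, transfer_rate_mt_s=800)

# Hypothetical in-package bus: 512 data bits at a modest 400 MT/s.
# Wide, short, low-noise wires make the width cheap; the rate is a guess.
in_package = bandwidth_gb_per_s(bus_width_bits=512, transfer_rate_mt_s=400)

print(f"RDRAM channel : {rdram_channel:5.1f} GB/s")   # ~1.6 GB/s
print(f"In-package bus: {in_package:5.1f} GB/s")      # ~25.6 GB/s

# Latency side of the argument: skipping the PCB / module path (drivers,
# termination, connector, long traces) is estimated above to save ~20 ns.
typical_latency_ns = 120          # assumed round-trip DRAM latency seen by the CPU
pcb_overhead_ns = 20              # the post's estimate for PCB-related overhead
cpu_clock_ns = 1.0 / 0.7          # a 700 MHz CPU of the era, ~1.4 ns per cycle

cycles_saved = pcb_overhead_ns / cpu_clock_ns
print(f"~{pcb_overhead_ns} ns saved is roughly {cycles_saved:.0f} CPU cycles per miss")

The exact figures matter less than the shape of the tradeoff: width and short wires buy bandwidth almost for free once the PCB and its connectors are out of the path, and a ~20ns saving is already a double-digit number of CPU cycles on every memory access.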
Both high and low end processors will be forced to incorporate DRAM in the processor package, but for different reasons. The high end processors will require it for performance reasons, while the rest will need it for cost reasons - these new processors will be considerably cheaper than systems that are spread out over a motherboard, and cheaper still than systems that are spread out over a motherboard plus DIMM or RIMM modules.

Memory for very large systems, those so large that the memory cannot fit in the same package as the processor, will also be unlike what we currently use. In order to minimize system latency, very large memory systems will be placed in single packages, along with "bit-sliced" memory controller interfaces. The combined unit will have a short bus connection to the processor package, and will be organized so as to allow easy interleaving. That is, memory will be packaged in mega-package units, each with a variable number of data bus pins, but the same control interface. A complete external memory system could consist of a single x128 unit, or two x64s, or four x32s, or eight x16s. This packaging will be similar to the way memory chips are currently sold as x4s, x8s, or x16s, but all with the same process and a shared JEDEC standard package. (A rough sketch of what such a bit-sliced configuration might look like follows at the end of this post.)

-- Carl
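As promised above, a minimal sketch of the bit-sliced packaging idea, seen from the controller side. Everything here (the class names, the fixed 128-bit data bus, the toy read/write interface) is an illustrative assumption used to show that differently sized units can share one control interface; it does not describe any actual part.

# Sketch of the "bit-sliced" packaging idea: the external memory system always
# presents a 128-bit data bus, built from 1 x128, 2 x64, 4 x32, or 8 x16 units.
# Every unit sees the same address/control; each supplies its slice of the word.

TOTAL_DATA_BITS = 128

class MemoryUnit:
    """One mega-package unit, 'width' data pins wide, shared control interface."""
    def __init__(self, width: int):
        self.width = width
        self.cells = {}                      # local address -> 'width'-bit slice

    def write(self, addr: int, slice_bits: int):
        self.cells[addr] = slice_bits & ((1 << self.width) - 1)

    def read(self, addr: int) -> int:
        return self.cells.get(addr, 0)

class SlicedMemorySystem:
    """128-bit memory built from identical units; all units share one address bus."""
    def __init__(self, unit_width: int):
        assert TOTAL_DATA_BITS % unit_width == 0
        self.unit_width = unit_width
        self.units = [MemoryUnit(unit_width)
                      for _ in range(TOTAL_DATA_BITS // unit_width)]

    def write128(self, addr: int, word: int):
        # Split the 128-bit word into slices; unit i stores bits [i*w, (i+1)*w).
        w = self.unit_width
        for i, unit in enumerate(self.units):
            unit.write(addr, (word >> (i * w)) & ((1 << w) - 1))

    def read128(self, addr: int) -> int:
        # Reassemble the word from each unit's slice at the same local address.
        w = self.unit_width
        return sum(unit.read(addr) << (i * w) for i, unit in enumerate(self.units))

# Any of these configurations behaves identically from the processor's side:
for width in (128, 64, 32, 16):
    mem = SlicedMemorySystem(width)
    value = 0x0123456789ABCDEF_FEDCBA9876543210
    mem.write128(0x40, value)
    assert mem.read128(0x40) == value
    print(f"{TOTAL_DATA_BITS // width} x{width} unit(s): OK")

The design point this is meant to show: because every unit answers the same address with its own slice of the data bus, the controller logic is identical whether the system is populated with one x128 package or eight x16s, which is what makes the interleaving easy.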