To: Joe NYC who wrote (107738) 4/25/2000 6:43:00 AM From: Bilow
Hi Jozef Halada; The subject of latency is complicated enough that real engineers run simulations rather than listen to the flak put out by the various liars... The HardwareCentral article is (more or less) the same as the one that Samsung put out several years ago, when they were still flogging RDRAM (due to their lead over the other memory makers in that technology). They are now flogging DDR, as is everybody else, but only to design engineers. The public will start to see the products this summer, and they will be sold as heavily as Nvidia publicized its use of DDR.

The article compares the best available RDRAM against the most tepid available PC133. This is sort of traditional when hyping new technology. Some time ago, I posted a list of the DDR and RDRAM part numbers currently available. Each maker puts out about a dozen different RDRAM chips, and they have different bandwidths and latencies. Same with SDRAM. You can make the comparisons come out either way by choosing the right parts.

The Samsung article is shot through with other misrepresentations. For instance, they measure latency as the time to the first bits back from memory. But the interface to the processor is 64 bits wide, and Rambus only gives you 16 bits at a time. So you have to wait 3.75 ns more in order to assemble enough bits to actually do something with them.

Another thing the Samsung article talks about is the granularity of the RDRAM interface. In actual use, this has to be multiplied by the number of bits you are going to move around at a time, similar to the latency issue. Sixteen bits just doesn't cut it in a memory controller; no processor on the planet still executes instructions that slowly, for instance.

The two-cycle addressing problem is the time required for the address bus to settle to a stable value. It's worst on the address bus because those lines have to go to every chip. The clock lines do too, but they don't have to carry information, and generally end up better terminated.
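The 3.75 ns figure comes straight from the channel arithmetic. Here's a minimal sketch, assuming PC800-class RDRAM numbers (16 bits per transfer at 800 megatransfers/second) and a 64-bit processor bus — the transfer rate is my assumption, chosen because it reproduces the figure in the post:

```python
# Extra latency to assemble a full 64-bit word from a narrow RDRAM channel.
# Assumed figures: 16-bit channel, 800 MT/s (so 1.25 ns per transfer).

CHANNEL_WIDTH_BITS = 16
TRANSFER_RATE_MTS = 800       # megatransfers per second (assumed PC800)
BUS_WIDTH_BITS = 64           # processor interface width

ns_per_transfer = 1000.0 / TRANSFER_RATE_MTS              # 1.25 ns
transfers_needed = BUS_WIDTH_BITS // CHANNEL_WIDTH_BITS   # 4 transfers

# The first transfer arrives at the quoted "first bits back" latency;
# the remaining transfers add dead time before the word is usable.
extra_ns = (transfers_needed - 1) * ns_per_transfer

print(extra_ns)  # 3.75
```

In other words, quoting latency to the first 16 bits understates the time to a usable 64-bit word by three more transfer slots.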
DDR fixes this problem by going to a newer voltage level definition, SSTL2 (the interface itself probably uses more power than the old 3.3 V LVTTL, but the chips as a whole use less per bit, due to the voltage reduction). Controllers for very large memory arrays will duplicate the address pins in order to minimize this problem. Of course, that takes more controller pins. Naturally, the Rambus hypesters make latency calculations assuming that you don't have the extra controller pins, but make pin-savings calculations assuming you had to use them. And when you do have a lot of memory chips, so that the problem is at its worst, it corresponds to the case in an RDRAM system where the extra delay due to trace propagation is worst. And that, naturally, isn't included in their latency calculations.

Real latency calculations would be so much more complicated than what shows up in these white papers as to be almost beyond belief. The real equations should allow choice of components, frequencies, topologies, number of chips, organization of chips, and also allow for various choices in termination, connectors, etc. Real life is complicated, and also depends on the ratio of cache misses, and how far away those misses go. (Making DRAM rows longer increases the chance of a row hit, thereby decreasing average latency, but also increases power consumption, for instance.) Engineering is all about trade-offs, and it is complicated to an amazing degree.

Rambus is all about hyping a stock. The managements of some of their industry partners are thinking about making money off of options rather than making products that offer the best price/performance. This has dulled their decision-making capability. AMD will probably continue to pick up market share for the next 18 months, and then will become a completely entrenched major player in the x86 marketplace (and elsewhere), as it will take a long time for Intel to dig themselves out of this. -- Carl