To: Tenchusatsu who wrote (48964) 8/5/2000 6:12:31 AM
From: Bilow

Hi Tenchusatsu; I went back through the document, and it gives no reason why DDR shouldn't hit the server market next year. The biggest dig they have is that some of the DDR specs are still being changed. That is just the memory makers and chipset people honing the specs so they get the maximum yields and bin splits. Intel makes much of these changes, but they are not that big a deal. If that were the worst thing designers had to put up with, my designing life would be total heaven. In fact, didn't Intel modify the RIMM specification a few months ago and issue a big press release about it? The time to make these kinds of changes is before production starts, and that is what they are doing. Hey, Intel is supporting DDR in servers; what more needs to be said?

But about latency... I still don't like Intel's tendency to muddy the latency issue by hiding it inside a measurement that is basically one of bandwidth, but they seem to have succeeded in shifting the discussion from "latency and bandwidth" to "average latency". In some sense this is good, since average latency is what the system really sees. But they've been using it to beat up PC100/PC133 with the RDRAM club, and RDRAM needs to be compared to DDR, not SDRAM. Of course the "average latency" of PC133 bites when it is run close to its bandwidth limit; that is true of every memory technology ever built. If they had run their graph up to RDRAM's bandwidth limit, it would have gone through the roof too (a toy model at the end of this post sketches the blow-up). DDR does considerably better, since its bandwidth limit is up around RDRAM's.

But the basic fact is that Intel is going with DDR for servers.

I believe the current situation is that memory bandwidth is not a critical bottleneck for single-processor system performance. I have no doubt that this is why Intel is accepting being forced to use SDRAM with the P4. Memory bandwidth not being the major system bottleneck may now be the official Intel line. Did you catch this quote from Gelsinger? "The high cache memory [included] with Pentium 4 ameliorates the [data rate] difference with SDRAMs. It makes a slow memory look fast." (#reply-14168269) In other words, because of the cache, DRAM bandwidth is not a system performance bottleneck, and therefore the very high "average latency" that SDRAM shows at high bandwidth does not show up in real systems (a quick cache calculation at the end of this post illustrates the effect). Of course, anyone can write a synthetic benchmark and prove almost anything.

That the FSB bandwidth of the P3 matches the bandwidth of PC133 SDRAM is not a big coincidence, nor is it an indication that either bandwidth limit is a significant bottleneck. (At 133 MHz with an 8-byte-wide bus, the FSB moves about 1.06 GB/s, which is exactly PC133's peak.) With a P3 on a 133 MHz x 8-byte FSB in a single-processor system, it is obvious that SDRAM provides sufficient bandwidth; the P4 is another story. Maybe Intel's decision to support SDRAM with the P4 is an admission that neither the DRAM bandwidth nor the FSB bandwidth was a significant (i.e., 50%) system performance bottleneck.

One thing to note is that the situation is completely reversed in the Nvidia GeForce chipsets, which are totally memory-bandwidth bound (and use DDR). This is proved by the overclockers, who get no performance gain from overclocking the processor but nearly 1-to-1 gains from overclocking the memory. With x86 processors the tendency is reversed: overclocking the memory buys almost nothing, while overclocking the processor buys a lot (the last sketch at the end of this post puts rough numbers on this).
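To put a rough picture on the "latency goes through the roof near the bandwidth limit" point, here is a toy queueing-style sketch. The shape L(u) = L0 / (1 - u) and all of the idle-latency and peak-bandwidth numbers are my own round assumptions for illustration, not measured figures from anybody's graph:

    # Toy model of "average latency" vs. offered bandwidth.
    # Assumption (mine): latency grows roughly as L0 / (1 - u) as
    # utilization u approaches the channel's bandwidth ceiling.

    def avg_latency_ns(offered_gb_s, peak_gb_s, idle_ns):
        u = offered_gb_s / peak_gb_s      # utilization of the memory channel
        if u >= 1.0:
            return float('inf')           # past the ceiling the queue never drains
        return idle_ns / (1.0 - u)

    techs = [                             # (name, peak GB/s, idle latency ns) -- illustrative
        ('PC133 SDRAM', 1.06, 50.0),
        ('DDR266     ', 2.10, 50.0),
        ('PC800 RDRAM', 1.60, 60.0),
    ]

    for name, peak, idle in techs:
        cols = ['%6.0f' % avg_latency_ns(bw, peak, idle) for bw in (0.25, 0.5, 1.0, 1.5)]
        print(name, 'avg latency (ns) at 0.25/0.5/1.0/1.5 GB/s:', ' '.join(cols))

PC133 blows up first, DDR holds out well past it, and RDRAM blows up too once you push it toward its own limit, which is the point about where they stopped drawing the graph.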
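On the Gelsinger quote, a quick average-memory-access-time calculation shows why a big on-die cache hides most of the DRAM speed difference. The hit rates and latencies here are assumed round numbers, not Intel data:

    # Back-of-envelope AMAT: why a big cache "makes a slow memory look fast".

    def amat_ns(hit_rate, cache_ns, dram_ns):
        return hit_rate * cache_ns + (1.0 - hit_rate) * dram_ns

    cache_ns = 3.0                        # assumed on-die cache hit time
    for dram_ns in (60.0, 40.0):          # "slow" vs. "fast" DRAM, assumed
        for hit_rate in (0.95, 0.98, 0.99):
            print('hit %.0f%%  dram %2.0f ns  ->  amat %.2f ns'
                  % (hit_rate * 100, dram_ns, amat_ns(hit_rate, cache_ns, dram_ns)))

At a 99% hit rate the difference between a 60 ns and a 40 ns DRAM shrinks to a couple of tenths of a nanosecond of average access time, which is exactly the "ameliorates the difference" argument.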
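Finally, a toy two-part model of the overclocking observation: total time per unit of work is a compute piece that scales with the CPU clock plus a memory piece that scales with the memory clock. The 90/10 and 10/90 splits below are assumptions I picked to illustrate the x86 and GeForce cases, and the 50/50 case is the crossover I'm guessing at next:

    # Toy bottleneck model: time = compute part / cpu clock + memory part / memory clock.

    def speedup(compute_frac, cpu_oc, mem_oc):
        mem_frac = 1.0 - compute_frac
        new_time = compute_frac / cpu_oc + mem_frac / mem_oc
        return 1.0 / new_time

    # Assumed splits: x86 mostly compute-bound, GeForce mostly memory-bound.
    for name, compute_frac in (('x86 CPU', 0.9), ('GeForce', 0.1)):
        cpu_gain = (speedup(compute_frac, 1.1, 1.0) - 1.0) * 100
        mem_gain = (speedup(compute_frac, 1.0, 1.1) - 1.0) * 100
        print('%s: +10%% CPU clock -> +%.1f%%, +10%% memory clock -> +%.1f%%'
              % (name, cpu_gain, mem_gain))

    # At a 50/50 split, a 10% bump to either clock buys roughly a 5% system gain.
    print('50/50 split, +10%% to either clock -> +%.1f%%'
          % ((speedup(0.5, 1.1, 1.0) - 1.0) * 100))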
My guess is that if we could clock current processors at around 2 to 3 GHz, they would reach the point where memory bandwidth and CPU clock are each about a 50% bottleneck. That is, at around that speed, overclocking either the memory or the processor by a small percentage should give a system performance improvement of about half that percentage, as in the 50/50 case of the toy model above. But that is a long way off, except in multiprocessor systems. DDR should take us to around 5 GHz or so.

Comments?

-- Carl