Carl, re: Intel's DDR comments during Spring IDF 2000
(By the way, the foils are indeed from Spring IDF 2000. The date on the first slide is probably a typo.)
<Power consumption is difficult to analyze. Intel is probably not the most evenhanded at this analysis.>
Also remember that this presentation is from the viewpoint of servers, which obviously have different power characteristics from desktops.
<Intel's little problems on latency are already well noted. If latency weren't such a big issue, how come the PC133 beats RDRAM?>
Because once again, we are talking about the difference between the core latency of the DRAM and the average latency that the processor sees. The latter is what ultimately matters. Core latency does affect average latency, but it is only one factor.
What Intel is saying is that in high-load situations, RDRAM-800 utilizes bandwidth more efficiently than DDR. That means less delay from bandwidth saturation, which means lower average latency, and thus higher performance. DDR attacks the problem with brute force by pushing the data rate to 266 MHz, but Intel is talking about DDR-200 here.
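Here's a back-of-envelope way to see it (my numbers, not Intel's): model the memory interface as a single queue, where the average latency the processor sees is core latency plus a queueing delay that blows up as the interface nears saturation. A rough Python sketch with an M/M/1-style rho/(1 - rho) queueing term; the core latencies and efficiency figures are made up for illustration, though both parts really do peak at 1.6 GB/s:

def avg_latency_ns(core_ns, offered_gbs, peak_gbs, efficiency):
    # Usable bandwidth is peak scaled by how efficiently the
    # interface schedules transfers under load.
    usable_gbs = peak_gbs * efficiency
    rho = offered_gbs / usable_gbs        # utilization of the interface
    if rho >= 1.0:
        return float("inf")               # saturated: latency unbounded
    service_ns = 64.0 / usable_gbs        # ~time to move one 64-byte line
    # Queueing delay grows as rho / (1 - rho).
    return core_ns + service_ns * rho / (1.0 - rho)

# Hypothetical parts: DDR-200 with the lower core latency but poorer
# bandwidth efficiency, RDRAM-800 with the higher core latency but
# better utilization, both under 0.8 GB/s of demand.
print("DDR-200:   %.0f ns" % avg_latency_ns(45, 0.8, 1.6, 0.65))
print("RDRAM-800: %.0f ns" % avg_latency_ns(60, 0.8, 1.6, 0.95))

With these (invented) numbers, DDR-200 comes out around 250 ns average and RDRAM-800 around 107 ns, even though RDRAM-800's core latency is worse. The queueing delay swamps the core latency once the less efficient interface gets close to saturation.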
PC133 beats RDRAM on benchmarks that don't really exercise the bandwidth all that much. In that case, average latency is dominated by core latency, where PC133-based chipsets do very well (and RDRAM-based chipsets still have a way to go).
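Plugging a light load into the same toy model shows the flip side: the queueing term nearly vanishes, average latency collapses to core latency, and the part with the better core latency wins. Again, illustrative numbers only:

# Light load: rho is small, the rho / (1 - rho) queueing term is tiny,
# so average latency is essentially core latency. Figures are made up.
for name, core_ns, usable_gbs in [("PC133", 40, 0.7), ("RDRAM-800", 60, 1.5)]:
    rho = 0.1 / usable_gbs                # 0.1 GB/s of demand
    service_ns = 64.0 / usable_gbs        # ~time to move one 64-byte line
    print("%-10s %.0f ns" % (name, core_ns + service_ns * rho / (1.0 - rho)))

That prints roughly 55 ns for PC133 versus 63 ns for RDRAM-800: exactly the regime where today's low-bandwidth benchmarks live.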
<But basically, this report is way out of date, and doesn't deserve comment.>
Now that you know that the report is indeed recent, could you take a look at it and make some comments? This isn't (necessarily) another DDR-vs-RDRAM bash-fest, but more of a serious look at Intel's challenges in implementing DDR. After all, Intel is going to do DDR anyway, so why not open the lines of discussion?
Tenchusatsu