Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Bilow who wrote (48945) 8/4/2000 8:28:50 AM
From: Steve Lee
<Hi jim kelley; That "Intel Spring IDF" conference presentation you quoted extensively from is from spring of 1999.

February 15-17, 1999
developer.intel.com

But basically, this report is way out of date, and doesn't deserve comment.

That's quite some dated milk you're peddling. Try going back to the cow for some fresher stuff.>


Bilow, anyone looking at the details of that Intel doc would see that it has the wrong date on it. It is clearly from the Spring 2000 IDF.



To: Bilow who wrote (48945) 8/4/2000 10:09:45 AM
From: jim kelley
BILOW

You are speculating far too much! This does nothing but muddy the water.



To: Bilow who wrote (48945) 8/4/2000 2:03:15 PM
From: Tenchusatsu
Carl, re: Intel's DDR comments during Spring IDF 2000

(By the way, the foils are indeed from Spring IDF 2000. The date on the first slide is probably a typo.)

<Power consumption is difficult to analyze. Intel is probably not the most evenhanded at this analysis.>

Also remember that this presentation is from the viewpoint of servers, which obviously have different power characteristics than desktops.

<Intel's little problems on latency are already well noted. If latency weren't such a big issue, how come the PC133 beats RDRAM?>

Because once again, we are talking about the difference between the core latency of the DRAM and the average latency that the processor sees. The latter is what ultimately matters. Core latency does have an effect on average latency, but it is only one factor.

What Intel is saying is that in high-load situations, RDRAM-800 is more efficient at utilizing bandwidth than DDR. That means less delay due to bandwidth overload, which in turn means lower average latency and thus higher performance. DDR attempts to solve this problem by brute-forcing the data rate up to 266 MHz, but Intel is talking about DDR-200 here.

PC133 is doing better than RDRAM on benchmarks that really don't exercise the bandwidth all that much. In this case, average latency is dominated by core latency, where PC133-based chipsets do very well (and RDRAM-based chipsets still have a way to go).
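To make the two regimes concrete, here's a quick Python sketch. Fair warning: it models the memory system as a single M/M/1 queue whose service time is the core latency, which is a gross simplification, and every number in it (the "PC133-like" and "RDRAM800-like" parameters) is invented just to illustrate the shape of the argument, not taken from any measurement or from Intel's foils.

# Back-of-the-envelope sketch of "average latency = core latency plus
# queueing delay." The memory system is treated as a single M/M/1 queue
# whose service time is the core latency -- a gross simplification.
# All parameters below are invented for illustration only.

def avg_latency_ns(core_ns, peak_gbs, efficiency, demand_gbs):
    """Average latency the processor sees at a given bandwidth demand."""
    usable = peak_gbs * efficiency        # efficiency-derated bandwidth
    util = demand_gbs / usable            # bus utilization
    if util >= 1.0:
        return float("inf")               # saturated: latency unbounded
    return core_ns / (1.0 - util)         # M/M/1 response time

# Hypothetical systems: "PC133-like" has the lower core latency;
# "RDRAM800-like" has more raw bandwidth and uses it more efficiently.
systems = {
    "PC133-like":    dict(core_ns=45, peak_gbs=1.06, efficiency=0.65),
    "RDRAM800-like": dict(core_ns=60, peak_gbs=1.60, efficiency=0.80),
}

for demand in (0.2, 0.6):                 # light vs. heavy load, in GB/s
    for name, params in systems.items():
        lat = avg_latency_ns(demand_gbs=demand, **params)
        print(f"demand {demand} GB/s  {name:14s} {lat:6.0f} ns")

At light load the queue is nearly empty, so the system with the lower core latency wins (roughly 63 ns vs. 71 ns with these made-up numbers). As demand approaches the usable, efficiency-derated bandwidth, queueing delay swamps core latency and the higher-bandwidth, higher-efficiency system comes out well ahead (roughly 113 ns vs. 349 ns). That's the shape of Intel's argument, not the real numbers.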

<But basically, this report is way out of date, and doesn't deserve comment.>

Now that you know that the report is indeed recent, could you take a look at it and make some comments? This isn't (necessarily) another DDR-vs-RDRAM bash-fest, but more of a serious look at Intel's challenges in implementing DDR. After all, Intel is going to do DDR anyway, so why not open the lines of discussion?

Tenchusatsu