Politics : Formerly About Advanced Micro Devices


To: Scumbria who wrote (115686) 6/13/2000 1:40:00 AM
From: Elmer
 
Re: "All that an on-chip memory controller does for you is to reduce latency by a few clocks, so there is less impact on performance"

It would also be easier to run the link between an on-die memory controller and the CPU at higher speeds than you would ever see on an external FSB.

EP



To: Scumbria who wrote (115686) 6/13/2000 2:08:00 AM
From: Joe NYC
 
Scumbria,

"The advantage the BX carries is lots of research into the algorithms used. This can make a huge difference in the ratio of page hits vs. page misses, and thus a large difference in latency."

I agree with that. Plus, the BX's interface was synchronous, vs. all the other chipsets on the market, which are asynchronous (except the 760 and 750?). I don't know how this would apply to Rambus, but you could think of a scheme where the critical word is fed to the CPU first (2 bytes at a time), and once it arrives the CPU could continue to run, even before the full 8 bytes are retrieved (as is the case with an off-chip north bridge).
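The critical-word-first idea Joe is describing can be sketched in a few lines. This is a toy model of the ordering only, not any real chipset's logic:

```python
# Toy model of critical-word-first burst ordering (illustrative only,
# not any actual chipset's implementation). The CPU is stalled on one
# word of a cache line; the controller delivers that word first and
# wraps the rest of the line around behind it, so execution can
# resume before the full line has arrived.
def critical_word_first(line, critical_index):
    """Return the words of `line` in wrap-around burst order."""
    return line[critical_index:] + line[:critical_index]

# An 8-word line where the CPU is waiting on word 5:
burst = critical_word_first(list(range(8)), 5)
print(burst)  # [5, 6, 7, 0, 1, 2, 3, 4]
```

The point is that the stall ends as soon as the first word of the rotated burst lands, rather than after the whole line transfer.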

"All that an on-chip memory controller does for you is to reduce latency by a few clocks, so there is less impact on performance."

Well, Rambus may die because of a few clocks of latency. I think those few clocks are going to become very important as CPU speeds go up: a few clocks of latency on the FSB will translate to several dozen CPU clocks of stall.
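The arithmetic behind "several dozen CPU clocks" is just the clock-ratio multiplier. The frequencies below are illustrative round numbers for the era, not measurements of any particular part:

```python
# Convert front-side-bus latency clocks into stalled CPU clocks.
# A clock of latency on a slow bus costs (cpu_mhz / fsb_mhz) clocks
# on the fast core. The example frequencies are made-up round
# numbers, not figures for any specific CPU or chipset.
def fsb_to_cpu_clocks(fsb_clocks, cpu_mhz, fsb_mhz):
    return fsb_clocks * cpu_mhz // fsb_mhz

# 3 extra FSB clocks at 100 MHz under a 1 GHz core:
print(fsb_to_cpu_clocks(3, 1000, 100))  # 30 CPU clocks of stall
```

And the ratio only gets worse as the core clock climbs while the bus clock stays put.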

Joe



To: Scumbria who wrote (115686) 6/13/2000 4:15:00 PM
From: Daniel Schuh
 
Scumbria, can you explain a little what you mean by "algorithms used" here? I assume you're talking about managing row buffers and the like in DRAM. I know Rambus has some, er, flexibility there, but I thought with conventional DRAM that kind of thing was pretty much fixed by the order of memory accesses. Are there multiple row buffers?

There is the possibility of interleaving, though I didn't think that had been done since the early PPro FPM chipsets. The P5 chipsets managed the external L2 cache, so there was some cache-management policy to be set there, but that's long gone. What algorithmic flexibility is there in what a memory controller does these days (leaving the Rambus can of worms aside, of course)?
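For reference, the interleaving Dan mentions is just a carve-up of the address bits. A minimal sketch of low-order bank interleaving, with a hypothetical 4-bank, 32-byte-line layout:

```python
# Low-order (cache-line) interleaving: consecutive cache lines land
# in successive banks, so back-to-back sequential accesses can
# overlap one bank's busy time with another's data transfer.
# The bank count and line size here are hypothetical, chosen only
# to make the address math easy to follow.
def interleave(addr, n_banks=4, line_bytes=32):
    line = addr // line_bytes   # which cache line the address falls in
    return line % n_banks       # which bank services that line

# Four consecutive cache lines hit four different banks:
print([interleave(a) for a in (0, 32, 64, 96)])  # [0, 1, 2, 3]
```

With the low bits of the line number selecting the bank, a sequential sweep of memory cycles through all the banks before returning to the first one.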

Cheers, Dan.