To: Dan3 who wrote (30723) 9/26/1999 9:52:00 PM From: Tenchusatsu
Dan, allow me to apologize for my nasty tone in my earlier post. It seems to me that you do indeed know what you're talking about. Most of the time, anyway.

<Rambus advocates keep reiterating that it has better bandwidth, get kind of mumbly when it comes to latency>

Intel's own foils talk about the importance of both latency and bandwidth, i.e. "random accesses" and "streaming accesses." They argue that latency is improved with Rambus in a system with some real memory traffic. In other words, they're more concerned with "average latency" than "core latency." I guess the assumption is that other memory technologies aren't going to reduce core latency enough to affect average latency in a loaded system.

<Overclockers are getting plenty of benefit from it now [PC166]>

Let's talk about systems that aren't overclocked, shall we? Overclocking, at least on an Intel chipset, overclocks not only the memory but also the chipset, processor bus, AGP, and PCI. You can't draw any definitive conclusions about memory performance from overclocking.

<The gist of the articles, as best as I can recall, is that even with a small cache like 16K, a system executes many instructions before it goes out to main memory for more.>

Yeah, but that benefit was realized years ago, back when processors like the 486 first started employing caches. Now we're at the point where a cache miss rate of just 5% or so is considered terrible. Yet that small fraction of memory accesses that do miss the cache is what's starting to limit system performance. Amdahl's Law is in full swing here. (I'll put some rough numbers on this at the end of the post.)

<so most accesses to main memory are not serial, where Rambus has its big advantage.>

As I already explained, there are two aspects of memory performance: latency and bandwidth. Obviously the CPU cares more about latency than bandwidth. But if you don't have enough bandwidth, or you can't use the bandwidth you have efficiently enough to satisfy requests from multiple sources, actual latency is going to take a hit. It's no good having the CPU wait while an AGP device accesses memory. (That can happen even when the CPU has priority over AGP and PCI.)

<Sorry if you are a big believer in rambus, but I have real doubts about the whole approach, even after this current problem is solved.>

You may be surprised. From my point of view, system interfaces are already moving toward low-pin-count, high-bandwidth, stubless, source-synchronous, packetized designs: Rambus, HubLink, NGIO, FIO (I assume), and others. That's the direction Intel is heading over the long term. If it's the wrong direction, then I'd sure like to see the alternatives, especially since silicon feature sizes keep shrinking while external traces can't.

Tenchusatsu
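
P.S. To put some rough numbers on the "average latency" and Amdahl's Law points above, here's a minimal back-of-the-envelope sketch. Every figure in it (hit time, miss rate, core latency, queuing delay) is a made-up illustration, not a measurement of any real processor, chipset, or DRAM.

```python
# Back-of-the-envelope average memory access time (AMAT).
# All numbers are illustrative assumptions, not measurements.

hit_time_ns = 10        # time to service a load that hits in cache
miss_rate = 0.05        # ~5% of accesses miss, as in the post
core_latency_ns = 150   # DRAM "core" latency for an unloaded access
queuing_delay_ns = 100  # extra wait when the memory bus is busy (AGP/PCI traffic)

# Unloaded system: the average is hit time plus the miss fraction's penalty.
amat_unloaded = hit_time_ns + miss_rate * core_latency_ns

# Loaded system: each miss also pays a queuing penalty, so the small miss
# fraction still drags down the average -- that's the Amdahl's Law point.
amat_loaded = hit_time_ns + miss_rate * (core_latency_ns + queuing_delay_ns)

print(f"unloaded AMAT: {amat_unloaded:.1f} ns")  # 17.5 ns
print(f"loaded AMAT:   {amat_loaded:.1f} ns")    # 22.5 ns
```

The takeaway: even with a 95% hit rate, the few accesses that miss set the average, and anything that adds queuing delay to those misses (contention from AGP or PCI, for example) shows up directly in average latency no matter how good the core latency looks on paper.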