To: Dan3 who wrote (74818) 3/18/2002 10:10:02 AM
From: fyodor_

Dan3:

"As the chips run at higher multiples to a given memory speed, the Dp/Df drops. And these two parts are being measured at different points along their curves. Northwood 2000 is running at 5 times the speed of its memory, while the XP 2000+ is running at 6.5 times the speed of its memory. You are comparing scaling factors at substantially different points on the curve - this is significant."

You bet it is! As I've stated, AMD can (most probably) significantly improve scalability by adding more cache and increasing memory (and FSB) bandwidth. I wasn't considering Palomino and Northwood alone in the universe, but rather together with their current infrastructure. Clearly, infrastructure also influences how well a processor scales with frequency (scaling as defined by the resulting performance increase).

"An analysis like this with both chips on a single SDRAM channel, or both on a single DDR channel motherboard, with memory timings set the same, would be a lot more informative."

I disagree. It's not that I wouldn't like to see the results; I just don't see what they have to do with anything. Unless AMD wants to (and can) change the available infrastructure (primarily by giving the FSB a bump), they are stuck with 2x133 MHz. Similarly, unless AMD wants to (and can) increase the L2 cache (or come up with other architectural improvements), they are stuck where they currently are. That leaves only frequency as the variable, and this is what I was looking at. It simply doesn't make sense to take the processor, or parts of it, in isolation and look at how it scales with increasing frequency; you need to look at the whole system. Something like prefetching can dramatically influence scaling, but if the available bandwidth and cache are limited, prefetching isn't going to be very helpful.
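To make the Dp/Df point concrete, here is a toy model of my own (purely illustrative, with made-up work splits, not numbers from any benchmark discussed here). Execution time is split into a core-bound share that speeds up with CPU frequency and a memory-bound share pinned to the memory clock; the marginal performance per extra MHz then falls as the CPU-to-memory clock multiple grows:

```python
# Illustrative toy model, not benchmark data: total time is a core-bound
# component (scales with CPU MHz) plus a memory-bound component (fixed by
# the memory clock). The work splits below are arbitrary assumptions.

def performance(cpu_mhz, mem_mhz, core_work=1.0, mem_work=0.25):
    """Relative performance for a given CPU clock and memory clock."""
    time = core_work / cpu_mhz + mem_work / mem_mhz
    return 1.0 / time

def dp_df(cpu_mhz, mem_mhz, step=100):
    """Finite-difference Dp/Df: performance gained per extra MHz."""
    return (performance(cpu_mhz + step, mem_mhz)
            - performance(cpu_mhz, mem_mhz)) / step

# Northwood-like point: 2000 MHz on a 400 MHz effective bus -> 5x multiple.
# Palomino-like point: 1733 MHz on a 266 MHz effective bus -> 6.5x multiple.
for cpu, mem in [(2000, 400), (1733, 266)]:
    print(f"{cpu / mem:.1f}x multiple: Dp/Df = {dp_df(cpu, mem):.4f}")
```

In this sketch the chip sitting at the higher multiple shows the smaller Dp/Df, which is exactly why comparing scaling factors at different points on the curve is misleading.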
Actually, if the cache or bandwidth were low enough, prefetching could very well have a negative impact on performance.

"The level of total performance on a given memory technology is what is at issue."

I disagree. It makes no sense to separate the processor core from the memory subsystem. The processor on its own has zero performance; it needs a link to the rest of the system, and this link (or links) will most certainly influence both performance and scalability. Hammer will presumably gain quite a bit, both in base performance and in scalability, from the integrated memory controller and the HT link. These are an integral part of Hammer, and everything needs to fit together. The same goes for EV6 and the Athlon, as well as for AGTL+ (or whatever they call it) and the P4. Providing excellent infrastructure is a huge part of "making a processor" today. If Intel has an advantage in that department right now, then that's the way it is and the credit is theirs.

Intel designed the P4 specifically with certain elements in mind (the width and frequency of the FSB, as well as the memory subsystem), and it makes no sense to compare a P4 coupled with an inferior memory subsystem to an Athlon coupled with the memory subsystem for which it was designed. Sure, if a better memory subsystem were available, you could do the comparison with that. I actually did that, since the scores reported used the KT333 chipset coupled with PC2700 memory. When, and if, something better arrives, we can do the comparison again, but the results are quite clear: unless AMD does something, simply keeping up in quantiherz isn't going to cut it.

-fyo