I finally read Anand's attempt to earn back the credibility he lost, or, as he calls it, Part 2 of the Rambus article: anandtech.com
What I got out of it is that he doesn't understand the meaning of bandwidth and latency. I am not saying that I do; I have only a vague grasp of the terms, but enough to know that he is not on solid ground writing the article, and that his concept of the terms, and of the relation between the two, is even vaguer than mine.
I think the main point to realize is that it is impossible to construct a test machine (using off-the-shelf components) that tests these concepts in isolation. There are more variables than bandwidth and latency, and any benchmark you run, on any machine you configure, will involve some of those other variables to some degree.
One variable that comes to mind is the clock speed of the FSB. If you clock the Celeron's bus at 66 MHz and later at 100 MHz, you are not only increasing bandwidth, you are also raising the FSB clock itself, which influences latency. If you could instead control bandwidth by widening the data bus (say, from 64 bits to 128 bits), you would increase bandwidth, but at the same time you would reduce the time it takes to fill a cache line, which is itself a component of latency. Even so, my understanding is that the key measure of latency is the time it takes to deliver the critical piece of data a stalled CPU needs to resume execution.
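To make that concrete, here is a rough back-of-the-envelope sketch in Python. The numbers are my own illustrative assumptions, not anything from the article: a 32-byte cache line and some fixed delay before the first word arrives from memory.

# Back-of-envelope: how FSB clock and bus width affect cache-line fill time.
# All numbers below are illustrative assumptions, not measurements.

LINE_SIZE_BYTES = 32      # assumed cache line size
FIXED_LATENCY_NS = 60.0   # assumed fixed delay before the first word arrives

def fill_time_ns(fsb_mhz, bus_width_bits):
    # time to move one full cache line across the bus
    transfers = LINE_SIZE_BYTES / (bus_width_bits / 8)  # bus cycles needed
    cycle_ns = 1000.0 / fsb_mhz                         # length of one bus cycle
    return transfers * cycle_ns

for fsb, width in [(66, 64), (100, 64), (66, 128)]:
    fill = fill_time_ns(fsb, width)
    total = FIXED_LATENCY_NS + fill
    print(f"FSB {fsb} MHz, {width}-bit bus: fill {fill:.1f} ns, last word at {total:.1f} ns")

Going from 66 to 100 MHz, or from 64 to 128 bits, shrinks the fill time either way, so a "bandwidth" change leaks into what a benchmark reports as latency; and this sketch doesn't even account for critical-word-first delivery, which is the part the stalled CPU actually waits on.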
Anyway, I am not any smarter after reading the article than I was before.
Joe