<1. The assumption that you have a greater number of banks open in DRDRAM is not necessarily true. It depends on the size and configuration of the SDRAM.>
SDRAM modules currently come with two, four, or eight banks, and mixing modules may reduce the number of banks the memory controller can keep open. For example, if you pair a two-bank module with a four-bank module, the controller has to treat the four-bank module as a two-bank module.
This doesn't happen with RDRAM. First, I believe each module already starts with 20 banks. Second, adding modules also adds to the number of banks.
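To put rough numbers on it, here's a quick sketch of the difference. The bank counts and the pooling rules are my own placeholders for illustration, not real module specs or the guaranteed behavior of any controller:

# Rough sketch of usable open banks; the counts and rules below are
# assumptions for illustration, not real part specs.

def sdram_open_banks(module_bank_counts):
    # Assumption: when SDRAM modules are mixed, the controller treats
    # every module as if it had the bank count of the smallest one.
    return len(module_bank_counts) * min(module_bank_counts)

def rdram_open_banks(module_bank_counts):
    # Assumption: each RDRAM module contributes its own banks,
    # so adding modules simply adds to the total.
    return sum(module_bank_counts)

print(sdram_open_banks([2, 4]))    # mixed 2-bank + 4-bank SDRAM -> 4
print(rdram_open_banks([20, 20]))  # two hypothetical 20-bank modules -> 40

Under those assumptions, the mixed SDRAM pair leaves the controller with 4 open banks instead of 6, while every RDRAM module you add just grows the pool.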
<2. The locality of addresses behind a large L2 is extremely poor. The vast majority of CPU dram accesses will be page misses, which DRDRAM does slowly.>
Ali says the same thing, and two months ago I would have agreed with both of you. But Anand's tests with the K6-3 are making me think otherwise. I thought the K6-3's L1 and L2 caches would render the motherboard L3 cache next to useless, because the on-die L2 cache would increase the randomness of the addresses behind it. But Anand's tests clearly show that the L3 cache still has a major impact on office performance. That surprised even the AMD engineers who commented on Anand's results.
So it seems that even a processor with a two-level cache will issue addresses onto the processor bus that aren't very "random," suggesting that an increased number of memory banks will indeed have a positive impact on performance.
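Here's a toy model of why that could be the case. It treats the open pages as one shared pool instead of mapping each page to a fixed bank, and the page count and locality figure are made up, so don't read anything into the exact numbers; the point is just that if the address stream has any reuse at all, more open banks mean more page hits:

import random

# Toy model: estimate how often an access lands in an already-open page
# as the number of banks grows. Purely illustrative; not meant to model
# any real chipset or memory controller.

PAGES = 4096          # hypothetical number of DRAM pages
ACCESSES = 100_000
LOCALITY = 0.7        # assumed chance an access reuses a recently touched page

def page_hit_rate(num_banks):
    open_pages = []   # pool of open pages, at most one per bank
    recent = []       # recently touched pages, to model locality
    hits = 0
    for _ in range(ACCESSES):
        if recent and random.random() < LOCALITY:
            page = random.choice(recent)       # reuse a nearby address
        else:
            page = random.randrange(PAGES)     # jump somewhere random
        if page in open_pages:
            hits += 1                          # page hit: already open
        else:
            open_pages.append(page)            # page miss: open it...
            if len(open_pages) > num_banks:
                open_pages.pop(0)              # ...and close the oldest
        recent.append(page)
        if len(recent) > 16:
            recent.pop(0)
    return hits / ACCESSES

for banks in (4, 8, 16, 32):
    print(banks, "banks ->", round(page_hit_rate(banks), 3))

Run it and the hit rate climbs as the bank count goes up, which is all I'm really arguing here.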
Let's not forget that Intel also wants to pair AGP 4X with RDRAM on the Camino chipset. AGP-to-memory transactions, mixed in with processor memory accesses, can also benefit from RDRAM's greater number of banks.
All this, of course, should just be taken as theory. The guys who know the real answers to these questions aren't talking, nor are they allowed to, I guess.
Tenchusatsu