Elmer and Dan, both of you are off the mark.
Elmer, what I was talking about was having 3 RIMMs on a single RDRAM channel. Protocol-wise, RDRAM can have up to 4 RIMMs per channel. Electrically, RDRAM can go up to 3 RIMMs per channel (although I wonder if 4 can still be achieved). For the past few months, Intel has been limiting each channel to 2 RIMMs as an interim solution to the September fiasco. But soon, Intel will spin a new version of the 820 that will fix the problem once and for all, allowing for 3 RIMMs per channel.
None of this has anything to do with a repeater hub, Elmer. Actually, the MRH-R repeater hub turns one RDRAM channel into two (or four if you use two MRH-R components on a single channel).
And Dan, you're wrong in saying that 3 RIMMs is only achievable under "perfect circumstances." Intel's respin of the 820 chipset is proof that 3 RIMMs are achievable in practical situations. You're also wrong in saying that RDRAM is undesirable for servers because of the "delays inherent in such an architecture"; in fact, for servers, sustainable bandwidth matters more than latency. (I thought I said all this already back on the RMBS thread.)
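To see why latency matters less for server-style workloads, here's a quick back-of-envelope sketch. All the numbers in it (50 ns access latency, 1.6 GB/s channel bandwidth, transfer sizes) are illustrative assumptions, not measured RDRAM or SDRAM specs; the point is only that as transfers get larger, the one-time latency shrinks to a rounding error next to the streaming time.

```python
# Back-of-envelope: total transfer time = one-time access latency + streaming time.
# All numbers below are illustrative assumptions, not real memory specs.

def transfer_time_ns(bytes_requested, latency_ns, bandwidth_gb_s):
    """Time to fetch a block: fixed latency plus bytes / bandwidth.

    Note: 1 GB/s conveniently equals 1 byte/ns, so the units cancel.
    """
    return latency_ns + bytes_requested / bandwidth_gb_s

# A single 64-byte cache line: latency is more than half the total time.
small = transfer_time_ns(64, latency_ns=50, bandwidth_gb_s=1.6)

# A 64 KB server-style burst: latency is about 0.1% of the total time.
large = transfer_time_ns(64 * 1024, latency_ns=50, bandwidth_gb_s=1.6)

print(f"64 B fetch:  {small:.0f} ns (latency share {50 / small:.0%})")
print(f"64 KB burst: {large:.0f} ns (latency share {50 / large:.2%})")
```

For desktop workloads full of scattered cache-line fetches, the first case dominates and latency hurts; for servers moving big buffers, the second case dominates and sustained bandwidth is what you actually feel.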
It turns out, incidentally, that DDR SDRAM is more desirable for servers because of cost, not necessarily performance. When you have gigabytes of memory in a server, it becomes a major factor in the price of the server. And I think the industry (even Intel) agrees that DDR SDRAM will be cheaper than RDRAM in large quantities.
Tenchusatsu