Re: If you think that one DDR channel is all that's necessary....
Tenchusatsu,
Read the last paragraph of that post; here it is again:
If the system is running a lot of substantial CGI, ASP scripts, etc., it will be running many concurrent processes and hitting memory heavily as it executes the various tasks and swaps them in and out. But those accesses will be primarily short bursts, where latency is very important. I would think that streaming large blocks of data for an extended period (where Rambus could shine) probably won't be a common occurrence.
Big servers, workstations, and many other machines can take advantage of as much memory performance as they can get. But their demands on memory aren't for streaming huge blocks; they're for many smaller bursts from random locations. In that kind of application, the streaming capability of Rambus, its "efficiency," is of little use, while the lower latency of VC DDR is put to best advantage.
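To make the point concrete, here's a back-of-envelope sketch. The latency and peak-bandwidth figures in it are round numbers I'm assuming purely for illustration, not measurements of any real RDRAM or DDR part; the arithmetic is the point, not the inputs.

#include <stdio.h>

/* Effective bandwidth of a transfer = bytes / (latency + transfer time).
 * All latency/peak figures below are assumed, illustrative numbers. */
static double eff_bw(double latency_ns, double peak_gb_s, double bytes)
{
    double transfer_ns = bytes / peak_gb_s;     /* 1 GB/s moves 1 byte/ns */
    return bytes / (latency_ns + transfer_ns);  /* bytes/ns, i.e. GB/s    */
}

int main(void)
{
    const double line  = 64.0;              /* one cache line            */
    const double block = 1024.0 * 1024.0;   /* a 1 MB streaming transfer */

    /* Assumed figures for a single channel of each technology. */
    printf("RDRAM, 64 B random fetch: %.2f GB/s\n", eff_bw(50.0, 1.6, line));
    printf("DDR,   64 B random fetch: %.2f GB/s\n", eff_bw(40.0, 2.1, line));
    printf("RDRAM, 1 MB stream:       %.2f GB/s\n", eff_bw(50.0, 1.6, block));
    printf("DDR,   1 MB stream:       %.2f GB/s\n", eff_bw(40.0, 2.1, block));
    return 0;
}

With those made-up numbers, the 64-byte case comes out well under half of either part's peak, and the lower-latency part wins simply because the latency term dominates; the 1 MB streaming case lands at essentially full peak for both. That's the sense in which "efficiency at using bandwidth" doesn't buy you much when the workload is small random bursts.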
You had said: "the EV7 will have four RDRAM channels per processor, so a 4-way system will have 16 RDRAM channels. And that will have roughly the same effective bandwidth as 16 DDR channels (not peak, but it's well-known that DDR is less efficient than RDRAM at using bandwidth)."
The implication was that the streaming capability of Rambus would let it outperform DDR on a MHz-for-MHz basis. For a handful of video applications, like certain games, it should. But I'm arguing that for almost any server application, it is DDR that has the better MHz-for-MHz performance, because of its lower latency.
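If it helps, here is the kind of toy code I have in mind when I talk about the two access patterns. It's only a sketch under my own assumptions (made-up array size, crude clock() timing, no attempt at a rigorous benchmark), but it shows why a serialized chain of random loads cares about latency while a sequential sweep cares about raw bandwidth.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 22)    /* 4M elements; assumed much larger than the caches */

int main(void)
{
    size_t *next = malloc(N * sizeof *next);
    long   *data = malloc(N * sizeof *data);
    if (!next || !data)
        return 1;

    /* Build one random cycle (Sattolo's shuffle) so every load in the
     * chase depends on the one before it -- a crude stand-in for walking
     * linked structures, hash chains, etc. in server code. */
    for (size_t i = 0; i < N; i++)
        next[i] = i;
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;          /* j < i keeps it one cycle */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }
    for (size_t i = 0; i < N; i++)
        data[i] = (long)i;

    clock_t t0 = clock();
    size_t idx = 0;
    for (size_t s = 0; s < N; s++)    /* latency-bound: each miss must     */
        idx = next[idx];              /* finish before the next address    */
    clock_t t1 = clock();             /* is even known                     */

    long sum = 0;
    for (size_t i = 0; i < N; i++)    /* bandwidth-bound: addresses known  */
        sum += data[i];               /* in advance, transfers pipeline    */
    clock_t t2 = clock();

    printf("pointer chase: %.0f ms (idx=%zu)\n",
           1000.0 * (double)(t1 - t0) / CLOCKS_PER_SEC, idx);
    printf("stream sum:    %.0f ms (sum=%ld)\n",
           1000.0 * (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
    free(next);
    free(data);
    return 0;
}

The first loop is closer to what a script-heavy server does all day; the second is what the peak-bandwidth number on the spec sheet actually describes.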
You have said that you are a chipset designer, and I am not, so I would tend to defer to your conclusion, except that it seems to fly in the face of common sense. If I am mistaken, I'd be interested in hearing why.
Dan