Politics : Formerly About Advanced Micro Devices

To: Petz who wrote (83265), 12/16/1999 4:49:00 PM
From: Tenchusatsu
 
Petz, I never claimed that RDRAM is the holy grail for servers. If I gave that impression, let me restate things a little here.

<a server with multi-giga-ops of CPU power will likely have hundreds of simultaneous tasks, overwhelming the RAMBUS system with merciless thrashing and reducing its performance to SDRAM.>

The reverse is actually true. RDRAM, in general, is much better equipped to handle such random traffic than SDRAM because of the multitude of banks that can be accessed concurrently.
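To put a rough number on that intuition, here's a back-of-the-envelope sketch (Python used purely as a calculator, not anything from a real chipset). The bank counts are assumptions for illustration only: an SDRAM chip of this era exposes roughly 4 banks, an RDRAM device exposes on the order of 16 to 32, and a channel full of devices multiplies that further. The more banks there are, the less likely a pile of independent requests is to collide on the same bank and force an extra precharge/activate cycle.

```python
# Back-of-the-envelope estimate: probability that a burst of independent,
# randomly addressed requests all land in distinct banks, i.e. no bank
# conflict forces an extra precharge/activate.  Bank counts are assumptions
# for illustration only (roughly 4 banks per SDRAM chip vs. 16-32 per RDRAM
# device, with multiple devices per channel adding even more).

def p_no_conflict(requests: int, banks: int) -> float:
    """Birthday-problem estimate: chance that `requests` random accesses
    all map to different banks out of `banks` available."""
    if requests > banks:
        return 0.0  # more outstanding requests than banks: a conflict is certain
    p = 1.0
    for i in range(requests):
        p *= (banks - i) / banks
    return p

for banks in (4, 16, 32, 128):  # assumed effective bank counts
    print(f"{banks:4d} banks: P(no conflict among 8 requests) = "
          f"{p_no_conflict(8, banks):.2f}")
```

With only a handful of banks, eight outstanding requests are almost guaranteed to collide; with dozens of banks across a channel, most of them can proceed concurrently.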

<Don't I remember reading that the number of open pipes (wrong terminology?) to RAMBUS memory was recently reduced by a factor of two to reduce power dissipation in the RIMM's?>

I'm not really sure whether servers will face the same power-dissipation restrictions that the desktop chipsets have. I don't know exactly why the 820 and 840 chipsets have those restrictions, but I suspect the main reason is that they come from a desktop chipset group working under desktop constraints. I'm hoping the problems you mentioned, Petz, are merely by-products of a first implementation.

Anyway, the way I see it, the problem with RDRAM in servers is two-fold. First, it's widely assumed that RDRAM will be more expensive than DDR SDRAM. I'm not talking about the current exorbitant prices of RDRAM, either. Instead, I'm talking about the steady-state commodity prices that RDRAM and DDR SDRAM will have. A lot of OEMs are pushing Intel to support DDR in servers because of the cost issue.

And second, a single RDRAM channel has a limit of 32 devices, meaning that with 256 Mbit technology, you can only get to 1 GB per channel. DDR, on the other hand, can support 32 devices per module, and four modules per channel, meaning that with 256 Mbit technology, you can get up to 4 GB per channel. The RDRAM memory limit can be solved using branch channels, but that can get costly, and the branch hub will add latency, thereby sapping performance somewhat. (Sure, 4 GB may sound like much more than anyone needs at the moment, but even that amount might not be enough for servers coming out in the next two years.)
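For what it's worth, the arithmetic behind those per-channel figures checks out. Here's a quick sketch using the device and module counts stated above (256 Mbit = 32 MB per device):

```python
# Quick sanity check of the per-channel capacities quoted above,
# using 256 Mbit parts (256 Mbit = 32 MB per device).

MB_PER_DEVICE = 256 // 8  # 32 MB

# RDRAM: at most 32 devices directly on one channel
rdram_channel_mb = 32 * MB_PER_DEVICE        # 1024 MB = 1 GB

# DDR SDRAM (per the post): 32 devices per module, 4 modules per channel
ddr_channel_mb = 4 * 32 * MB_PER_DEVICE      # 4096 MB = 4 GB

print(f"RDRAM channel: {rdram_channel_mb} MB ({rdram_channel_mb // 1024} GB)")
print(f"DDR channel:   {ddr_channel_mb} MB ({ddr_channel_mb // 1024} GB)")
```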

I just thought it was very interesting to see that the Alpha EV7, which will most definitely be used in enterprise servers, will support RDRAM. Apparently they feel that the advantages of integrating four RDRAM controllers onto the processor core outweigh the inherent disadvantages of RDRAM in servers.

Tenchusatsu