Hi Sun Tzu; Re: "So you are both agreeing that one cannot fit enough RDRAM in a server. At least not the kind of server that IA-64 was made for. Yet somehow Intel saw fit to make McKinley RDRAM compatible?! Why?"
Intel made a mistake.
They assumed that RDRAM would become the next mainstream memory and would offer the highest capacity per chip. In fact, the memory industry didn't increase bits per chip (i.e. density) as fast as predicted, and RDRAM prices never dropped to SDRAM levels. Moreover, at any given density, SDRAM chips became available long before RDRAM chips of the same size, so at any given time SDRAM offers the higher density. On top of the per-chip advantage, SDRAM provides higher density per module and more modules per channel:
RDRAM, 512MB/RIMM, mass production at Samsung: samsungelectronics.com
SDRAM, 1GB/DIMM, mass production at Samsung: samsungelectronics.com
SDRAM can have 4 DIMMs per channel, while RDRAM can only have 2 RIMMs. Result: maximum memory per channel is 4GB for SDRAM versus 1GB for RDRAM. You can add more RDRAM channels (and more SDRAM channels too), but each extra channel adds latency, cost, risk and board space. The upshot is that even at Samsung, Rambus' biggest supporter, SDRAM has a 4x density advantage over RDRAM.
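To make the arithmetic concrete, here's a quick back-of-the-envelope calculation in Python. The module counts and capacities are the Samsung figures above; the channel-count parameter is just there to show the scaling, not a spec for any particular chipset:

    # Sketch: peak capacity = channels x modules/channel x GB/module.
    # Numbers are the Samsung parts cited above; "channels" is illustrative.
    def max_memory_gb(modules_per_channel, gb_per_module, channels=1):
        return channels * modules_per_channel * gb_per_module

    sdram = max_memory_gb(modules_per_channel=4, gb_per_module=1.0)  # 1GB DIMMs
    rdram = max_memory_gb(modules_per_channel=2, gb_per_module=0.5)  # 512MB RIMMs

    print(f"SDRAM per channel: {sdram} GB")              # 4.0 GB
    print(f"RDRAM per channel: {rdram} GB")              # 1.0 GB
    print(f"SDRAM advantage:   {sdram / rdram:.0f}x")    # 4x

So to match one SDRAM channel, you'd need four RDRAM channels, with the extra latency, cost and board space that implies.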
-- Carl