Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Dave B who wrote (19458)4/28/1999 2:50:00 PM
From: Alan Bell
 
Dave,

This is clearly an academic paper, and its implications for Rambus are very hard to interpret, but I don't see any smoking guns. I can't imagine a reporter being able to accurately understand this paper.

[ They are actually modeling an 8-way superscalar architecture rather than a multiprocessor. (But I was never convinced that the claim that Rambus was not good for multiprocessors was accurate.) ]

Your observation about their choice of a 128-bit memory bus rather than the typical 64-bit width raises questions of applicability.

More quotes -

"Ignoring price premiums, cost is a good argument for the high-speed narrow-bus DRAMs. Rambus and SLDRAM parts give the performance of other DRAM organizations at a fraction of the cost."

"However, as the studies show, we will soon hit the limit of these benefits: the limiting factors are now the speed of the bus and, to a lesser degree, the speed of the DRAM core."

-- Alan



To: Dave B who wrote (19458)4/28/1999 3:09:00 PM
From: Tenchusatsu
 
<First, was this an 8-processor system. If so, I believe someone already claimed that Rambus was not good for multiple-processor servers. But I'm not sure that that's what this means.>

They meant an 8-way superscalar processor, not an 8-processor system. The toolset they use is called SimpleScalar, and I think it's a popular simulation tool in academia. But I think SimpleScalar would choke on an eight-processor simulation, at least on the servers that the University of Maryland can afford. ;-/ (By the way, my sister attends Maryland right now, although her only knowledge of computers comes from AOL.)

<Second, is the data path 128 bits (16 bytes) between the DRAM and the chipset for current PCs? Isn't it 4 bytes? Their test assumed a 16 byte-wide path, and I think, though am not positive, that their test results may have been different with a 4-byte wide path. Maybe not. Any other thoughts?>

What they're trying to do is create some sort of apples-to-apples comparison. PC100 SDRAM normally runs over a 64-bit data path, i.e. an 8-byte wide path. At 100 MHz, this gets 0.8 GB/sec peak bandwidth. RDRAM has a 16-bit data path running at an effective data rate of 800 MHz, thus achieving 1.6 GB/sec peak bandwidth.

PC100 SDRAM can achieve the same peak bandwidth of RDRAM by mounting two identical SDRAM DIMMs side-by-side and working them in parallel. That will get you an effective 128-bit data path, leading to a peak bandwidth of 1.6 GB/sec. However, this is less efficient than the RDRAM method, as you might imagine.
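The peak-bandwidth figures above come down to simple arithmetic (bytes per transfer times transfers per second). A quick sketch in Python checks the numbers quoted; the function name here is just mine for illustration:

```python
def peak_bandwidth_gb_s(bus_width_bits, data_rate_mhz):
    """Peak bandwidth in GB/sec: (bus width in bytes) x (transfers per second)."""
    return (bus_width_bits / 8) * (data_rate_mhz * 1e6) / 1e9

# PC100 SDRAM: 64-bit bus at 100 MHz
print(peak_bandwidth_gb_s(64, 100))    # 0.8 GB/sec
# RDRAM: 16-bit bus at an effective 800 MHz data rate
print(peak_bandwidth_gb_s(16, 800))    # 1.6 GB/sec
# Two PC100 DIMMs side-by-side: effective 128-bit bus at 100 MHz
print(peak_bandwidth_gb_s(128, 100))   # 1.6 GB/sec
```

These are peak numbers only; as noted above, the sustained efficiency of the two-DIMM SDRAM arrangement is lower than RDRAM's.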

As for the details of the paper, I'm printing it out right now, and I'll read it when I get a chance.

I'm also going to attend some seminars here at Intel Oregon. I missed the one earlier this morning on RDRAM and Intel chipset support because I had a dental appointment. However, a coworker of mine will send me the slides, and I'll see what I can get out of them.

Tenchusatsu