Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: TigerPaw who wrote (19457) 4/28/1999 2:36:00 PM
From: Dave B
 
Wow, now I remember why I didn't want to get into academia. <g> Sorry for speaking before reading the article earlier. <gg>

So they simulated the performance of a variety of memory technologies (with some good descriptions of how the technologies differ, by the way). From what I understood (which was limited), it appears that they simulated a 128-bit-wide path to a workstation-class processor. I've attached their summary paragraph below. It raises a couple of questions for me.

First, was this an 8-processor system? If so, I believe someone already claimed that Rambus was not good for multiple-processor servers. But I'm not sure that's what their "eight-way superscalar processor" actually means.

Second, is the data path between the DRAM and the chipset really 128 bits (16 bytes) in current PCs? Isn't it 4 bytes? Their test assumed a 16-byte-wide path, and I think, though I'm not positive, that their results might have been different with a 4-byte-wide path. Maybe not. Any other thoughts?
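
Just to put rough numbers on that second question, here's a quick back-of-the-envelope sketch of my own (Python, purely illustrative, not from the paper) of peak transfer rates for the widths we're debating, assuming the 100MHz bus speed their summary cites and one transfer per clock:

# Peak memory-bus bandwidth for a few data-path widths, assuming the
# 100 MHz bus the summary mentions and one transfer per clock (no DDR).
# Illustrative arithmetic only; these are not figures from the paper.

BUS_MHZ = 100

def peak_bandwidth_mb_per_s(width_bytes, transfers_per_clock=1):
    """Peak transfer rate in MB/s for a given data-path width."""
    return width_bytes * BUS_MHZ * transfers_per_clock

for width_bytes in (4, 8, 16):   # 32-, 64-, and 128-bit paths
    print(f"{width_bytes * 8:3d}-bit path: {peak_bandwidth_mb_per_s(width_bytes):6.0f} MB/s peak")

At 100MHz that works out to 400 MB/s peak for a 4-byte path versus 1600 MB/s for a 16-byte path, so if they really modeled a 16-byte path, the bandwidth side of their results might not map directly onto today's PCs. Anyway, here's their summary paragraph: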

We have simulated seven commercial DRAM architectures in a workstation-class setting, connected to a fast, out-of-order, eight-way superscalar processor with lockup-free caches. We have found the following: (a) contemporary DRAM technologies are addressing the memory bandwidth problem but not the memory latency problem; (b) the memory latency problem is closely tied to current mid- to high-performance memory bus speeds (100MHz), which will soon become inadequate for high-performance DRAM designs; (c) there is a significant degree of locality in the addresses that are presented to the primary memory system—this locality seems to be exploited well by DRAM designs that are multi-banked internally and therefore have more than one row buffer; and (d) exploiting this locality will become more important in future systems when memory buses widen, exposing row access time as a significant factor.
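
For anyone else trying to build an intuition for points (c) and (d), here's a toy sketch of my own (Python; the row size, bank mapping, and access streams are all made up, and it's nothing like their actual simulator) showing why more internal banks, and therefore more open row buffers, let a DRAM exploit that kind of locality:

import random
random.seed(0)

ROW_BYTES = 2048            # assumed DRAM row (page) size, illustrative only

def row_buffer_hit_rate(trace, num_banks):
    """Fraction of accesses whose row is already open in its bank's row buffer."""
    open_rows = {}          # bank -> row currently latched in that bank's row buffer
    hits = 0
    for addr in trace:
        row = addr // ROW_BYTES
        bank = row % num_banks       # simple row-interleaved bank mapping
        if open_rows.get(bank) == row:
            hits += 1                # row-buffer hit: no new row access needed
        else:
            open_rows[bank] = row    # miss: close the old row, open the new one
    return hits / len(trace)

# Four sequential streams in different regions of memory, interleaved at
# random -- a crude stand-in for the address locality the paper measured.
cursors = [row * ROW_BYTES for row in (7, 100, 513, 2050)]
trace = []
for _ in range(20000):
    i = random.randrange(len(cursors))
    trace.append(cursors[i])
    cursors[i] += 64                 # advance one 64-byte cache line

for banks in (1, 2, 4, 8):
    print(f"{banks} bank(s): row-buffer hit rate = {row_buffer_hit_rate(trace, banks):.0%}")

The hit rate should climb as banks are added, which is their point (c) in miniature: with more than one row buffer, interleaved streams stop evicting each other's open rows. And per point (d), that matters more once a wider bus makes row access time the dominant cost.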


Dave