Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Estephen who wrote (5612), 7/12/1998 1:27:00 AM
From: Boplicity
 
Some educational information, and also some food for the Bears on the thread.

DRAM Performance:
Latency vs. Bandwidth

By Bert McComas

The industry is in the midst of a raging debate over DRAM performance. Today, chip makers are fighting it out, but very soon the battle zone will expand to include system manufacturers, all the way down to individual users. The debate is over bandwidth vs. latency and DRAM chip interfaces. <snip>

For the rest of the article, read this:

www2.tomshardware.com
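
To make the latency vs. bandwidth point concrete, here is a rough back-of-the-envelope sketch. The numbers are my own illustrative assumptions, not figures from McComas's article: the time to fetch one cache line is roughly the access latency plus the line size divided by peak bandwidth, so for small transfers the latency term dominates no matter how fat the pipe is.

/* Illustrative only: rough model of a single cache-line fetch.
 * time = latency + bytes / bandwidth
 * The numbers below are assumptions, not taken from the article. */
#include <stdio.h>

int main(void)
{
    double latency_ns = 60.0;     /* assumed DRAM access latency, ns   */
    double bandwidth  = 1.6e9;    /* assumed 1.6 GB/s channel          */
    double line_bytes = 32.0;     /* typical late-90s cache line size  */

    double transfer_ns = line_bytes / bandwidth * 1e9;
    double total_ns    = latency_ns + transfer_ns;

    printf("transfer time: %.1f ns\n", transfer_ns);              /* ~20 ns */
    printf("total fetch:   %.1f ns\n", total_ns);                 /* ~80 ns */
    printf("latency share: %.0f%%\n", 100.0 * latency_ns / total_ns);
    return 0;
}

With these assumed numbers the raw transfer is about 20 ns but the whole fetch takes about 80 ns, so roughly three quarters of the time is latency - which is why higher peak bandwidth alone doesn't settle the debate.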

Greg






To: Estephen who wrote (5612), 7/13/1998 10:05:00 AM
From: TigerPaw
 
Not Downside - Design Constraint
Perhaps I should not have labeled the tolerance issue as a downside to Rambus, but rather as a design constraint for board makers. I deduce from the article at biz.yahoo.com that Compaq intends to design more Rambus interface ports into the AlphaServer than Intel is designing into the Merced. This is a smart move on their part to provide differentiation. Bigger DRAM chips will also mean more memory; it's just that they are not ready - YET!
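
As a quick sketch of why the number of Rambus interface ports matters (my own assumed per-channel figures, not anything from the article): peak bandwidth and capacity both scale roughly with the channel count, which is where the differentiation comes from.

/* Illustrative only: how aggregate figures scale with the number of
 * Rambus channels on a board. Per-channel numbers are assumptions. */
#include <stdio.h>

int main(void)
{
    double per_channel_gbs = 1.6;  /* assumed RDRAM channel bandwidth, GB/s */
    double per_channel_gb  = 1.0;  /* assumed memory per channel, GB        */

    for (int channels = 1; channels <= 4; channels++)
        printf("%d channel(s): %.1f GB/s peak, %.1f GB capacity\n",
               channels,
               channels * per_channel_gbs,
               channels * per_channel_gb);
    return 0;
}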

In the quote by Toprani, 4-way servers each with four 1GB channels equal a 16GB machine. I am ignorant of Windows NT scheduling, but in other multiprocessor operating systems the whole point was that each processor could get to all (or most) of the memory in a coordinated but independent way. That way any processor could begin running the next scheduled task. If each processor has separate memory, the design is not much different from having four separate computers side-by-side.
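
A minimal sketch of that shared-memory scheduling point (my own illustration, not based on Windows NT or anything in the Toprani quote): when all processors see one address space, any idle worker can pull the next task off a single common queue.

/* Illustrative sketch of shared-memory scheduling: every worker thread
 * pulls the next task from one queue in the shared address space, so
 * whichever processor is free runs the next job. Hypothetical example. */
#include <pthread.h>
#include <stdio.h>

#define NTASKS   16
#define NWORKERS 4

static int next_task = 0;                        /* shared task counter */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int task = next_task < NTASKS ? next_task++ : -1;
        pthread_mutex_unlock(&lock);
        if (task < 0)
            break;                               /* no work left anywhere */
        printf("worker %ld runs task %d\n", id, task);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NWORKERS];
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}

(Compile with cc -pthread.) If each processor instead had its own private memory, the queue itself could not be shared, and an idle processor could not pick up work sitting in another processor's memory - which is exactly the "four computers side-by-side" problem.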