Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Bilow who wrote (49000) 8/5/2000 12:03:13 PM
From: NightOwl
 
Hey! What's the big idea?!?

Here I am trying to bring EE design to the Unwashed Masses and you come along and mux the whole thing up with your "so-called" simple questions! :8)

Well,... the peculiar morass I was attempting to explicate before I was so unceremoniously interleaved can be found at:

cs.berkeley.edu
(if the server isn't too busy - I do believe they are using DRDRAM)

There you will find The Official "Commando Cody" EE Version of my Truthful Table For Excellon Mom's & Pop's. The paper, complete with pretty pictures, provides a full description of this "packetization" of the RAMBus which only an EE could love. But beware, you riders of the Bus!! It also contains the following solemn warning for the peak-bandwidth addicted:

High Speed Operation:

With a little skill, a RAC designer should be able to take advantage of address line bit switching with the device ID number to evenly distribute data accesses across many banks and devices, resulting in a single virtual device with up to 1024 banks (32 banks/part * 32 parts/channel), of which 512 may be activated at any one time. The banks should be alternated so that split banks look contiguous to activates. Best case, continuous reads or writes spread across devices and banks allow command requests to be totally hidden and should result in 100% access efficiency, or 1.6 Gbytes/sec. Worst case, where adjacent banks are being thrashed, speeds drop to as little as 16 bytes / (tCYCLE * tRC) = 16 bytes / (2.5 ns * 28) = 230 Mbytes/sec, or (230 Mbytes / 1600 Mbytes) = 14.4% efficiency.
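
(Not from the Berkeley paper; just a quick Python scratchpad of my own to check those best/worst case numbers, assuming tCYCLE = 2.5 ns, a 16-byte data packet occupying 4 tCYCLE, and tRC = 28 tCYCLE:)

# Back-of-the-envelope check of the peak and worst-case figures quoted above.
t_cycle = 2.5e-9        # seconds per channel clock
packet_bytes = 16       # one data packet, assumed to occupy 4 clocks
t_rc_cycles = 28        # row cycle time in clocks, taken from the 28 in the formula above

peak = packet_bytes / (4 * t_cycle)             # back-to-back packets, commands hidden
worst = packet_bytes / (t_rc_cycles * t_cycle)  # thrashing adjacent banks, one packet per tRC

print(peak / 1e9)    # -> 1.6  Gbytes/sec
print(worst / 1e6)   # -> ~229 Mbytes/sec
print(worst / peak)  # -> ~0.143, i.e. ~14.4% efficiency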

For a typical access, thrashing between reads and writes of already opened rows, a bubble equal to the total round-trip read delay will be inserted between each write data packet and the read data that follows it. This would result in an 8 tCYCLE / (8 tCYCLE + 2 tCYCLE) = 80% efficiency, or 1.28 Gbytes/sec. If a system typically had 7 reads for every write (which seems logical for a VLIW type of machine with 128-bit instruction words, 75% instruction reads, and the remainder split between reads and writes), then (8 * 4 tCYCLE) / ((8 * 4 tCYCLE) + 2 tCYCLE) = 94%. One thing that often gets overlooked in efficiency equations dealing with bursts or data packets is that they assume you actually needed all 16 bytes of data. If you assume that 128-bit instructions fully utilize the packet but a read and a write each utilize only one byte, then efficiency would be ((16 bytes * 6 packets) + (1 byte * 2 packets)) / ((16 bytes * 8 packets) + 4 [r/w propagation]) = 74.2%.
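
(Same scratchpad, extended to the bubble and packet-utilization cases; the 2 tCYCLE bubble and the 4 [r/w propagation] term are lifted straight from the paragraph above:)

# Read/write turnaround and packet-utilization checks.
bubble_cycles = 2                        # round-trip bubble between write data and the next read data

rw_thrash_eff = 8 / (8 + bubble_cycles)  # alternating reads and writes of already open rows
print(rw_thrash_eff)                     # -> 0.80, i.e. 1.28 Gbytes/sec of 1.6

accesses = 8                             # 7 reads for every write
rw_7to1_eff = (accesses * 4) / (accesses * 4 + bubble_cycles)
print(rw_7to1_eff)                       # -> ~0.94

useful_bytes = 16 * 6 + 1 * 2            # 6 fully used instruction packets + 2 one-byte accesses
transferred  = 16 * 8 + 4                # 8 full packets plus the r/w propagation term
print(useful_bytes / transferred)        # -> ~0.742, i.e. 74.2%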

DRDRAM refresh detracts about 1% from system efficiency: (8 tCYCLE * Banks * Rows) / tREF = (8 * 2.5 ns * 32 * 512) / 32 ms = 1.02%.
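
(And one last bit of Python for the refresh hit, assuming every row of every bank needs one 8 tCYCLE refresh per 32 ms tREF:)

# Fraction of the channel spent on refresh.
t_cycle = 2.5e-9
refresh_cycles = 8       # clocks per row refresh
banks, rows = 32, 512
t_ref = 32e-3            # refresh interval in seconds

overhead = (refresh_cycles * t_cycle * banks * rows) / t_ref
print(overhead)          # -> ~0.0102, i.e. about 1% lost to refresh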


Yes, yes, there may have been a few bytes lost in translation to Mom&Popese. Nevertheless, after all your many long nights of waging war on the Angry Villagers of DRDRAMburg, one would expect that you'd be just a tad more conversant with the local dialect. (Who Hee!:8)

0|0