Technology Stocks : CYRIX / NSM


To: FJB who wrote (22908) 1/12/1998 3:32:00 PM
From: patrick tang
 
Robert,

This goes to how x86 accesses data in 4-byte chunks - you need a long latency to set up for the first byte, but once it's set up, the next bytes come rather fast in a 'burst' mode, whereby the chip set does not need to do anything at all except read the burst of data that comes out of the DRAM automatically.

For SDRAMs, there is an extra feature in that the memory is divided into 2 or more 'banks'. As such, if the chip set is set up to do so (and I believe the earlier chip sets are not, but e.g. the TXPro that I am using, which has an on/off BIOS setting for DRAM look-ahead cache, is), it can take advantage of this. Essentially, while the chip set is reading from the first bank, instead of having its control signals sit there doing nothing because the DRAM can burst automatically, the control signals are busy setting up for the long latency on the 2nd bank. Thus, if the next CPU request matches the set-up, the chip set can present the data in another burst, but without any long delay.
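The overlap described here can be sketched as a toy model (my own illustration, not any real controller; the 4-cycle latency and 4-cycle burst figures are the SDRAM numbers from the cycle counts below):

```python
# Toy model of two-bank interleaving: while one bank streams its
# 4-byte burst, the chip set issues the setup for the other bank,
# so every burst after the first pays no visible latency.

LATENCY = 4  # cycles to set up a bank before its first byte
BURST = 4    # cycles to stream 4 bytes at 1 cycle per byte

def serial(reads):
    """No overlap: every read pays the full setup latency."""
    return reads * (LATENCY + BURST)

def interleaved(reads):
    """Setup for the next bank overlaps the current burst
    (possible here because LATENCY <= BURST)."""
    return LATENCY + reads * BURST

print(serial(2), interleaved(2))  # 16 vs 12 cycles for two 4-byte reads
```

The win per read is the hidden latency, so it keeps growing as more consecutive reads hit the pre-set bank.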

If we count CPU clocks for 4-byte x86 reads, e.g.:

Fast Page - 3 latency/3 1st byte/3 2nd byte/3 3rd byte/3 4th byte = 15 cycles; repeat 15 cycles for the next 4 bytes = 30 cycles for 8 bytes

EDO - 3 latency/2 per byte thereafter = 11 cycles; repeat 11 cycles for the next 4 bytes = 22 cycles for 8 bytes

SDRAM w/o banking - 4 latency/1 per byte thereafter = 8 cycles; repeat 8 cycles for the next 4 bytes = 16 cycles for 8 bytes

SDRAM w/banking - 4 latency/1 per byte thereafter = 8 cycles, but only 4 cycles for the next 4 bytes = 12 cycles for 8 bytes, and the advantage grows even more if a 3rd 4-byte read also matches the 'cached' set-up.

As can be seen, with long continuous data, as in a video app where the whole screen might get dumped at once, the banking system is a huge advantage.
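The per-scheme counts above can be reproduced with a small model (my own sketch; the latency and per-byte figures come straight from this post, not from any datasheet):

```python
# Each scheme is modeled as (initial latency, cycles per byte in the
# burst). A banked SDRAM hides the latency of every burst after the
# first behind the previous burst.

def cycles(latency, per_byte, bursts, burst_len=4, banked=False):
    total = 0
    for i in range(bursts):
        setup = 0 if (banked and i > 0) else latency
        total += setup + per_byte * burst_len
    return total

print(cycles(3, 3, 2))               # Fast Page: 30 cycles for 8 bytes
print(cycles(3, 2, 2))               # EDO: 22 cycles
print(cycles(4, 1, 2))               # SDRAM w/o banking: 16 cycles
print(cycles(4, 1, 2, banked=True))  # SDRAM w/banking: 12 cycles
```

Raising `bursts` shows the gap widening for long sequential transfers like a full-screen dump.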

patrick tang