Technology Stocks : Intel Corporation (INTC)


To: wanna_bmw who wrote (165624)6/2/2002 12:21:25 AM
From: tcmay  Read Replies (1) | Respond to of 186894
 
"PT, the 20MHz thing looks like a misprint. Maybe they meant 200MHz?"

A memory subsystem made up of DRAM can access a particular bit in a time on the order of 50 nsec. (Remember when SIMMS and DIMMS were 70 nsec, etc., just a few years ago?) This is presumably where he got his 20 MHz figure.

A 200 MHz access rate would be consistent with 5 nsec access times (from inside the DRAM, through the glue logic on the board, to the memory bus). Possible with SRAM, but pricey. (And the author mentions cache memory, separately, so he clearly was talking about main memory, not SRAM cache.)
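The latency-to-rate conversions above are just reciprocals. A quick sketch (the helper name is my own, for illustration):

```python
# Convert a memory access latency to an equivalent random-access rate.
# 50 ns DRAM -> 20 MHz, 5 ns SRAM -> 200 MHz, as in the post above.

def access_rate_mhz(latency_ns: float) -> float:
    """Return the access rate in MHz for a given latency in nanoseconds."""
    return 1_000.0 / latency_ns  # 1000 ns per us, so 1000/ns gives MHz

print(access_rate_mhz(50))  # 50 ns DRAM   -> 20.0 MHz
print(access_rate_mhz(70))  # 70 ns DIMM   -> ~14.3 MHz
print(access_rate_mhz(5))   # 5 ns SRAM    -> 200.0 MHz
```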

Of course, the effective number of bits retrieved per second can be increased by accessing N bits at the same time. Access 50 bits each at 50 nsec and one has an effective 1 GHz access rate. But nobody defines memory access this way.

There are lots of good reasons why DRAM cells and bit lines and all don't keep up in speed with SRAM cells and SRAM logic.

One of the reasons for VLIW/EPIC is to increase the effective memory parallelism by using very long instructions which can be worked on in pieces, so to speak.

--Tim May



To: wanna_bmw who wrote (165624)6/2/2002 4:08:47 AM
From: Dan3  Respond to of 186894
 
Re: the 20MHz thing looks like a misprint

No, but it implies that there is no cache. A read to main memory takes 50 ns, which works out to 20 MHz.

This, by the way, is why Rambus failed (higher latency). Rambus motherboards need dual memory channels to compete with single-channel SDRAM/DDR motherboards that have lower bandwidth, and doubling the number of channels adds cost.

Edit - I see Tcmay wrote about this already. If you want to see more about it, go to the Rambus board and start reading the messages from Q2 of 1999.

It's also worth noting that Hammer's on-die memory controller means that Hammer will "see" memory running at "300 MHz", while all other chips from Intel and AMD are stuck with "200 MHz."



To: wanna_bmw who wrote (165624)6/2/2002 5:16:08 PM
From: ptanner  Read Replies (1) | Respond to of 186894
 
wbmw, re: "This gap in performance will continue to widen because processor makers get paid based on clock speed, memory makers on megabytes."

What struck me as funny was that this seemed to imply that it was purely market forces that were limiting the progress of memory speeds - as if a memory which offered higher performance couldn't command a premium price.

I see the slower pace of memory-speed development more as a function of the inherent limitations of an off-CPU component (signals take time to get from place to place) and other technical constraints, not simply a result of the product being priced primarily on its capacity.

-PT