Technology Stocks : Advanced Micro Devices - Moderated (AMD)
To: combjelly who wrote (79798), 5/10/2002 11:04:11 PM
From: Joe NYC | Read Replies (1) of 275872
 
combjelly,

All of these interfaces, DRDRAM, SDRAM, DDR, DDR II, are interfaces to the essentially unchanged DRAM core from the 1970's. DRDRAM gets an awful lot of its latency from its otherwise clever daisy-chain scheme: it has to allow for the signals to propagate through the arrays and back again.

I don't have much of a clue about how the DRAM interface works. Are you saying that part of the latency problem is caused by several devices sitting on the same channel, with the signal daisy-chained between them and the host? How much benefit would a "fixed" interface bring, for example one specifying that a memory channel can have only 1 DIMM, and that this DIMM can have only 8 (or 9) chips? Would this allow lower latency?
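For what it's worth, the daisy-chain question can be put into a toy model. Every number below is a made-up placeholder for illustration, not an RDRAM spec: the idea is just that the command has to travel past every device on the chain and the data has to travel back the same way, so fewer devices means a shorter round trip on top of the (unchanged) core access time.

```python
# Toy model of daisy-chain channel latency. All figures are assumed
# placeholders, not measured RDRAM numbers.

PER_DEVICE_NS = 0.3    # assumed pass-through delay per device on the chain
CORE_ACCESS_NS = 40.0  # assumed DRAM-core access time (roughly constant for decades)

def chained_ns(n_devices: int) -> float:
    """Worst-case access: command out past every device, data back again."""
    return CORE_ACCESS_NS + 2 * n_devices * PER_DEVICE_NS

# Long RDRAM-style chain vs a hypothetical fixed 1-DIMM, 8-chip channel:
long_chain = chained_ns(32)
short_chain = chained_ns(8)
```

Under these made-up numbers the fixed topology shaves a few nanoseconds of flight time but leaves the core access untouched, which is consistent with the point that the core itself is the floor.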

I thought the latency would come from the overhead of creating HT packets and decoding them at the destination. Isn't this one of the reasons SDRAM beats RDRAM on latency?
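A rough way to see the packet-overhead argument (again, the cycle counts below are assumptions picked for illustration, not HyperTransport or RDRAM figures): a packetized interface pays encode/decode time on both the command and the returning data, while a plain parallel SDRAM-style interface drives the address and command lines almost directly.

```python
# Hedged sketch: packetized vs parallel memory-interface latency.
# All cycle counts and clock periods are assumed placeholders.

def packetized_latency_ns(core_ns: float, clock_ns: float,
                          encode_cycles: int, decode_cycles: int) -> float:
    """Core access plus packet framing paid twice: command out, data back."""
    overhead = (encode_cycles + decode_cycles) * clock_ns
    return core_ns + 2 * overhead

def parallel_latency_ns(core_ns: float, clock_ns: float,
                        control_cycles: int) -> float:
    """Address/command lines drive the DRAM directly; minimal framing."""
    return core_ns + control_cycles * clock_ns

# Example with assumed values: 40 ns core, 2.5 ns packet clock,
# 2 cycles to encode and 2 to decode, vs 1 control cycle at 7.5 ns.
packet = packetized_latency_ns(40.0, 2.5, 2, 2)
plain = parallel_latency_ns(40.0, 7.5, 1)
```

The absolute numbers mean nothing here; the shape of the comparison is the point, and it would apply whether the packets cross an HT link or a serial memory channel.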

I don't know how big this overhead would be compared to the current DRAM interface, whether built onto the CPU die or in the traditional Memory <-> Northbridge <-> CPU configuration.

Joe