Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: combjelly who wrote (79798)5/8/2002 9:28:07 PM
From: fyodor_Read Replies (1) | Respond to of 275872
 
combjelly: HTT used as a memory interface would have the problem that it is inherently shallow, i.e. it's point to point. But for the same reason there wouldn't be the propagation delays either. It would also have the nice property that as you add memory, your bandwidth goes up. Its downside is that memory expansion is limited by the number of HTT ports, so for very large memory arrays you would need HTT switches, which would add some latency.

You can also daisy-chain using HyperTransport. I assume this is what he was referring to.

Unless you are talking about a fixed memory configuration (i.e. a closed box like a console), daisy-chaining would be the only feasible solution. It just wouldn't make sense to lay down separate traces to every memory bank when 90% of them would go unused.
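The trade-off fyodor describes (daisy-chain hops vs. switch levels) can be put in rough numbers. This is a back-of-the-envelope sketch; the nanosecond figures are invented for illustration and are not HyperTransport spec values:

```python
# Illustrative latency model. All three constants are assumptions
# chosen for illustration, not real HyperTransport or DRAM timings.
HOP_NS = 10.0     # assumed per-device forwarding delay in a daisy chain
SWITCH_NS = 25.0  # assumed added latency of one HT switch level
BASE_NS = 40.0    # assumed DRAM core access time

def daisy_chain_latency(devices: int) -> float:
    """Worst case: request and reply each traverse every upstream device."""
    return BASE_NS + 2 * HOP_NS * (devices - 1)

def switched_latency(switch_levels: int) -> float:
    """Point-to-point links fanned out behind a tree of switches."""
    return BASE_NS + 2 * SWITCH_NS * switch_levels

for n in (1, 2, 4, 8):
    print(f"{n} chained devices: {daisy_chain_latency(n):.0f} ns vs. "
          f"1 switch level: {switched_latency(1):.0f} ns")
```

With these made-up numbers the daisy chain wins for short chains and loses once the chain gets deep, which matches the point that both topologies pay for scale, just in different places.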

-fyo



To: combjelly who wrote (79798)5/10/2002 11:04:11 PM
From: Joe NYCRead Replies (1) | Respond to of 275872
 
combjelly,

All of these interfaces (DRDRAM, SDRAM, DDR, DDR II) are interfaces to an essentially unchanged DRAM core from the 1970s. DRDRAM gets an awful lot of its latency from its otherwise clever daisy-chain scheme: it has to allow time for the signals to propagate through the arrays and back again.

I don't have much of a clue about how the DRAM interface works. Are you saying that part of the latency problem is caused by the fact that there may be several devices on the same channel, with the signal daisy-chained between them and the host? How much benefit would a "fixed" interface bring, for example one specifying that a memory channel can have only 1 DIMM, and that DIMM only 8 (9) chips? Would this allow lower latency?

I thought the latency would come from the overhead of creating HT packets and decoding them at the destination. Isn't this one of the reasons SDRAM beats RDRAM on latency?

I don't know how big this overhead would be compared to the current DRAM interface, whether built onto the CPU die or in the traditional Memory<->Northbridge<->CPU configuration.

Joe