Hi John Walliker. You wrote: "If DDR offered greater timing margins than SDRAM as you suggest, then why are there so few DDR systems with four or more DIMM slots? (The reference designs for those with three or four slots recommend duplicating the address bus to avoid excessive bus loading. This would of course have significant implications for EMC.)"
This is silly. How long has it been since you designed a DDR or SDRAM system? The address bus on DDR runs at SDRAM rates; it is not doubled in frequency the way the data bus is. PC133 and DDR266 address/control wires run at exactly the same frequency, and carry dang near exactly the same data. When an SDRAM design uses 4 slots, it is also supposed to duplicate the address bus, which "would of course have significant implications for EMC." If you had ever designed a significant SDRAM memory system you'd have known this. The alternative is to drive a single set of control buses (except for the CEs) and allow an extra clock for the transients to die out. In this respect there is no change between SDRAM and DDR.
For your education, here's the data sheet for the Intel i815 northbridge: ftp://download.intel.com/design/chipsets/designex/29823401.pdf Take a look at page 61, and note that they have triplicated part of the address bus to better support 3-DIMM motherboards. (The address lines that are not triplicated are undoubtedly restricted to changing only every other clock, with sampling controlled by the CE pins, which are separate.)
As to why there are fewer slots, first note that DDR supports more slots than RDRAM. With DDR, the slot restriction comes from data bus loading, not the address bus, and it is again mirrored in the SDRAM world. With more than two DDR DIMMs installed, you're supposed to use registered sticks, as in server memory. With SDRAM, if you installed four sticks of unregistered, fully loaded (i.e. 4-bank) SDRAM, instead of meeting timing requirements at 133 MHz you very likely had a system that didn't run. This means the user can trade memory capacity against performance by choosing which type of memory, registered or unbuffered, to use. RDRAM doesn't allow this freedom; it effectively requires the equivalent of unbuffered memory.
Modern computers need fewer memory chips to reach a typical capacity, and largely because of this the number of slots has been shrinking for nearly two decades. And as long as I'm on the subject, why is it that RDRAM has absolutely no machines with 4 RIMM slots? Are there even any 3-slot RIMM systems out there? Does that imply that RDRAM has considerably less margin than DDR? Do explain!
In reference to the question of what the V/ns spec is on SDRAM vs. DDR, you wrote: "As you well know, the answers to these questions are unlikely to be found in the device data sheets. If that were the case there would be little work for you to do as a memory system designer." It was silly of you to write this, as you know enough about me to know that I was well aware of the facts before I made the suggestion. So now you've cut your expert standing down by another notch.
Okay, so maybe you don't know where to look for specs on parts, and you don't know that there are people who do. Yeah, you're a big time expert on memory. The nature of your BS is not that you are saying things that you know to be false, but instead that you are making statements about a subject you have little knowledge of. I doubt that you have ever looked carefully through a DDR data sheet with the intention of finding the answers to the questions you have so readily given answers to.
I find that IBM has pretty good data sheets, so let's look there...
For SDRAM, the clock rise and fall times are given as 0.5 ns minimum, 10.0 ns maximum. Note that since the cycle time of 133 MHz memory is only 7.5 ns, it is impossible to assume rise and fall times longer than half a cycle, 3.75 ns, and still reach Vih and Vil every half-cycle. With Vih = 2.0 V and Vil = 0.3 V, the 1.7 V swing gives a slew rate as high as 1.7 V / 0.5 ns = 3.4 V/ns, and in no case less than 1.7 V / 3.75 ns = 0.45 V/ns. Short-circuit output current is given as 50 mA. Here's the part, a 256Mb 133 MHz SDRAM: chips.ibm.com
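If you want to check my arithmetic rather than take my word for it, here it is spelled out. The threshold and rise-time numbers are the ones quoted above from the IBM sheet; substitute your own part's figures as needed:

```python
# Slew-rate bounds for a 133 MHz SDRAM clock input, using the
# data-sheet figures quoted above (check them against your own part).
V_IH = 2.0            # V, input-high threshold
V_IL = 0.3            # V, input-low level used for the swing estimate
swing = V_IH - V_IL   # 1.7 V

t_rise_min = 0.5      # ns, data-sheet minimum rise/fall time
t_cycle = 7.5         # ns, cycle time at 133 MHz
t_rise_usable_max = t_cycle / 2  # 3.75 ns: any slower and the clock
                                 # can't cross Vih/Vil each half-cycle

slew_max = swing / t_rise_min        # fastest edge the input must tolerate
slew_min = swing / t_rise_usable_max # slowest edge that still works

print(f"max slew: {slew_max:.2f} V/ns")  # max slew: 3.40 V/ns
print(f"min slew: {slew_min:.2f} V/ns")  # min slew: 0.45 V/ns
```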
For DDR, the (256Mbit Rev B) part assumes slew rates of 0.5 V/ns on inputs, and includes a derating table that allows calculations down to 0.3 V/ns. That is roughly 7x slower than the 3.4 V/ns maximum assumed on SDRAM (3.4 / 0.5 = 6.8), and more than 11x slower with full derating. Short-circuit output current is given as 50 mA, just as in SDRAM. www-3.ibm.com
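The comparison above, worked out explicitly (again, the inputs are the data-sheet figures quoted in this post, not anything new):

```python
# How much gentler are the edge-rate assumptions on DDR inputs versus
# the fastest edge an SDRAM input must handle? Figures from the IBM
# sheets quoted above.
sdram_slew_max = 3.4    # V/ns  (1.7 V swing / 0.5 ns minimum rise time)
ddr_slew_nominal = 0.5  # V/ns, assumed input slew for DDR timing
ddr_slew_derated = 0.3  # V/ns, floor of the DDR derating table

ratio_nominal = sdram_slew_max / ddr_slew_nominal
ratio_derated = sdram_slew_max / ddr_slew_derated

print(f"{ratio_nominal:.1f}x slower at nominal")  # 6.8x slower at nominal
print(f"{ratio_derated:.1f}x slower fully derated")  # 11.3x slower fully derated
```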
If you want to comment on defects of DDR technology, go read the data sheets, and understand them, before you show up to the party. The vast majority of the industry has already signed up for DDR, where are you?
If you want to pursue this research, why don't you look up the difference in power consumption between outputs active and outputs inactive for SDRAM and DDR, at the module level. That may give you an idea of how much power is being shoved out the bus. But the primary reason DDR is nicer than SDRAM is the tighter specs on the lines. Most of the performance increase comes from the source-synchronous clocking, not from the increase in frequency; in fact, as you know, DDR clocks run at the same rate as SDRAM clocks. On the other hand, timing on the data bus (and hence the slew-rate requirement) is eased in DDR precisely because of that source-synchronous design.
Comments, Ali Chen?
-- Carl |