To: Ian Anderson who wrote (48770), 8/2/2000 8:14:39 PM
From: Bilow

Hi Ian Anderson;

The DDR guys are taking into account the fact that the boards have to run in so many different installations with so many different DIMM populations &c. I think that if you listen to the industry presentations you will be impressed. Not the high level viewgraph stuff, but the audio presentations of the DDR guys. I'll post some links later, when I get to my other computer. You will have about 4 hours of lecture to listen to, but there really isn't any point in my talking to you about this without your having listened to the other side.

You've made some comments on the difficulty of getting SDRAM to operate reliably in DIMMs. (And believe me, the comment about the FCC compliance guys hits home with me.) It is possible that you are not taking into account the signalling differences between SDRAM and DDR:

(1) DDR went to SSTL-2 signalling, not CMOS. This means that voltage swings are drastically reduced (about half the amplitude). Since EMI is typically proportional to something like the square of the voltage swing (with the same rise and fall times), EMI is probably going to be easier with PC2100 than with PC100 (see the rough numbers at the end of this post). The termination has been standardized and improved quite a bit. It still isn't as sweet as a decent ECL SSI board (I used to be in supercomputers), but the signal levels are beautiful and the EMI has been reduced. In short, the engineers learned from your PC100 EMI experience and took it into account. Also, SSTL-2 uses a relative threshold voltage level instead of the ground reference. This makes signal integrity less susceptible to ground bounce and reference problems, allowing lower voltage swings. This is not your daddy's TTL.

(2) DDR went to source synchronous timing, while SDRAM has a fully synchronous system clock. This means that the timing margins are huge relative to SDRAM. (That is, much less time is getting eaten up by clock skew.) Consequently, the designers were able to leave the edge rates alone.

DDR is sweet stuff. As for power supply problems, it uses considerably less power than SDRAM at the same bandwidth, so that shouldn't be an issue.

I think that DDR266 is going to be fine for the PC industry for the next three years or so, and after that, I think other trends will take over and alleviate the necessity of running large bandwidth interfaces over long PCB traces. One of those trends is embedded memory, another is SOC &c., the third is the explosion in cheap high pin-count packages, and the fourth is a slow but sure reduction in memory chip count per system (which encourages point to point designs). So DDR doesn't have to have legs that far into the future.

If it were really the case that DDR wasn't scalable to large systems, as you are suggesting, then you must find it very odd that essentially all the server chipset design companies (including Intel) are making DDR chipsets for servers. The basic fact is that DDR has sewed up the low end, point to point market, as well as the 2000 design wins in the high end, buffered system market. The middle market (including desktop workstations) now has dozens of chipsets in development; it will convert next year. Looking forward to the granularity issues that will arise from 1Gbit DRAM chips, the DDR guys are designing memory interfaces as wide as x64. So DDR definitely has enough legs to satisfy the industry for quite some time.
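To put rough numbers on two of those points (the voltage swing / EMI argument and the 1Gbit granularity issue), here is a quick back-of-the-envelope sketch in Python. The swing figures below are ballpark assumptions of mine, not datasheet or JEDEC values:

  # Rough numbers for two of the arguments above.  The constants are
  # ballpark assumptions, not datasheet figures.

  # (1) Signal swing and EMI.  PC100 SDRAM uses LVTTL; DDR uses SSTL-2,
  # which cuts the swing roughly in half.  With rise/fall times held
  # constant, radiated emission scales roughly with the square of the swing.
  lvttl_swing_v = 3.0          # assumed PC100 LVTTL swing, volts
  sstl2_swing_v = 1.5          # assumed terminated SSTL-2 swing, volts
  emi_ratio = (sstl2_swing_v / lvttl_swing_v) ** 2
  print(f"relative EMI, DDR vs SDRAM: ~{emi_ratio:.2f}x")   # ~0.25x

  # (2) Granularity with 1 Gbit parts on a 64-bit data bus: the wider the
  # chip, the fewer chips per rank and the smaller the minimum module.
  chip_bits = 2**30            # 1 Gbit DRAM die
  for chip_width in (8, 16, 32, 64):
      chips_per_rank = 64 // chip_width
      module_bytes = chips_per_rank * chip_bits // 8
      print(f"x{chip_width}: {chips_per_rank} chips, "
            f"{module_bytes // 2**20} MB minimum module")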
-- Carl

P.S. Glad to welcome another VRAM survivor to the thread... That stuff had the nastiest collection of timing requirements of any memory I've dealt with. No one who hasn't dealt with it first hand really understands how bad it was. You can still get data sheets on it from OKI, which, believe it or not, still makes the stuff. I'd give a link, but you have to sign up for their data sheets.