To: Dave B who wrote (19561) 5/2/1999 2:39:00 AM From: Tenchusatsu
Dave and others interested in technical discussions: You are right. The problem is that we have a discrepancy between common-clock and source-synchronous transfer methods.

The way things traditionally worked in the electronics world, the pins of an electrical interface behaved just like regular electrical nodes. The voltage of the whole line rose to a logical '1', then fell to a logical '0'. Back then, the clock signal was a real square wave with near-vertical edges. Sure, the clock switched twice as fast as the data, but we didn't think of things that way. Rather, we worried about things like setup and hold times (the time before and after the rising edge of the clock during which the data had to hold a definite '1' or '0' voltage level). This is what we now call "common-clock" mode.

However, as signals switch faster and faster, you can't assume that the whole node changes voltage at every point at the same time. Now you have to start looking at electronic signals not as voltage levels, but as traveling waves, like the ones going through a telephone line or a fiber-optic cable. That's the idea behind "source-synchronous" mode, and that's how tricks like latching data on both edges of the clock arise. That's also why RDRAM "packetizes" information for transmission over its narrow bus. "Packet" is a term you usually hear when someone talks about networks, communication protocols, and so on.

The electrical details get pretty messy, that's for sure. But this is becoming necessary because the lines on a motherboard that connect two devices, such as the memory to the chipset, aren't getting any shorter. Sure, transistors on a piece of silicon can get smaller and smaller, which is why a microprocessor can still be designed around "common-clock" mode. But the interface lines between these chips can't get shorter because it's just not practical.
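To make the setup-time idea concrete, here's a quick back-of-the-envelope sketch in Python of a common-clock timing budget. All the numbers here are made up for illustration, not taken from any real datasheet — the point is just that the trace flight time eats into the clock period, so a trace that's fine at one clock speed blows the budget when you double the clock:

```python
# Hypothetical common-clock timing check; all delays in nanoseconds.
def meets_timing(clock_period_ns, clk_to_out_ns, flight_time_ns, setup_ns):
    """Data launched by the driver must arrive at the receiver at least
    setup_ns before the next rising clock edge."""
    arrival = clk_to_out_ns + flight_time_ns
    return arrival <= clock_period_ns - setup_ns

# A 100 MHz bus (10 ns period): 3 ns clock-to-out, 2 ns trace flight,
# 2 ns setup leaves 3 ns of margin.
print(meets_timing(10.0, 3.0, 2.0, 2.0))   # True

# Double the clock to 200 MHz (5 ns period): same chip, same trace,
# and the budget no longer closes.
print(meets_timing(5.0, 3.0, 2.0, 2.0))    # False
```

Shrinking the silicon improves clock-to-out, but the flight-time term is set by trace length, which is exactly the part that can't shrink.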
That's why, in order to increase the speed of these interfaces, you have to start thinking in a "source-synchronous" mode. RDRAM thinks in terms of "source-synchronous." SDRAM is still mostly "common-clock." DDR SDRAM may have to use the "source-synchronous" method in order to achieve a "double data rate" on the data lines, although its method is clumsier than RDRAM's because DDR SDRAM seeks an easier upgrade path from regular SDRAM. As usual, confused? Tenchusatsu
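P.S. A toy Python sketch of the "latch on both edges" trick, for anyone who wants to see the arithmetic. This is not a model of any real DRAM interface — just a sampler that shows how clocking data on the falling edge as well as the rising edge doubles the number of values you move per clock cycle:

```python
# Toy sampler: latch a value from data_stream on each rising edge of
# clock, and on each falling edge too when both_edges is True.
def sample(data_stream, clock, both_edges=False):
    latched = []
    for i in range(1, len(clock)):
        rising = clock[i - 1] == 0 and clock[i] == 1
        falling = clock[i - 1] == 1 and clock[i] == 0
        if rising or (both_edges and falling):
            latched.append(data_stream[i])
    return latched

clock = [0, 1, 0, 1, 0, 1, 0, 1]          # four clock cycles
data  = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']

# Single-edge (SDRAM-style): one value per cycle.
print(sample(data, clock))                  # ['b', 'd', 'f', 'h']

# Double-edge (DDR-style): falling edges transfer data too.
print(sample(data, clock, both_edges=True)) # ['b', 'c', 'd', 'e', 'f', 'g', 'h']
```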