
Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Bilow, who wrote #32505 | 10/21/1999 7:10:00 PM
From: John Walliker
 
Carl,

Thanks for the compliment on my understanding of Rambus. (I have to take these as I get them...)

I think we both understand the subtleties of Rambus (and some of the competing technologies) far better than we did a few weeks ago :-)


The above thought came to me as well. Then I realized that for this to happen, the data outputs from the two chips would also have to collide at the controller; in other words, there would have to be a data bus collision. Because of that, I concluded it wasn't a realistic scenario, but you may be right.


I think I am right on this one.

If the first device is nearer to the controller, then its first transmission goes to the controller and is reflected. Its second transmission is superimposed on the reflection, and the component of the second transmission travelling away from the controller travels along with the reflection of the first transmission towards the termination. The second device is further away from the controller and transmits one clock edge after the second transmission of the first device, so there is no conflict at the controller. (Remember, the relevant clock is the one travelling towards the controller, and these three transmissions will appear in an orderly sequence at the controller, which is the only device that looks at them.)

However, in doing so the second device is transmitting at just the moment when the superposition of the reflected first transmission and the backwards component of the second transmission is passing it on the way to being absorbed by the terminator. The second device now has to deal with this plus its own contribution, giving a signal 1.5 times larger than a normal superposition and just outside its guaranteed operating range.
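The amplitude bookkeeping above can be sketched in a few lines. This is a toy model, not a field solver: every amplitude is in arbitrary units, with "1.0" standing for the swing one driver puts on the line in each direction.

```python
# Toy amplitude model of the scenario above. All values are illustrative:
# 1.0 = the swing a single driver contributes in one direction on the bus.

own_drive = 1.0        # the second device's own outgoing transmission
reflected_first = 1.0  # reflection of device 1's first packet, heading for the terminator
backward_second = 1.0  # backward-travelling component of device 1's second packet

# The common case the receivers are designed for: two superposed signals.
normal_superposition = own_drive + reflected_first

# The rare case: all three coincide at the second device's output.
worst_case = own_drive + reflected_first + backward_second

print(worst_case / normal_superposition)  # -> 1.5
```

Because the drivers are linear current sources, the three contributions simply add, which is why the worst case lands at exactly 1.5 times the normal two-signal superposition.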

There is no conflict at the controller in any of this, so the system would work normally most of the time even if this event, which clearly should be forbidden (and probably is in theory, if not in practice, in the 820), were occasionally to occur.

As you say, variations in bus impedance caused by mismatches between the motherboard and RIMMs would cause additional small changes in the voltage. (This can be compared to the effect of small ocean waves becoming taller and turning to surf when they reach shallow water.) These variations might just be enough to make the effect a problem with some RIMMs and not others on particular motherboards. On other motherboards there might not be any devices positioned at just the right combination of distances for this scenario ever to happen.
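The size of those additional voltage changes follows from the standard reflection coefficient at an impedance discontinuity. The numbers below are hypothetical, assuming a nominal 28-ohm channel and a RIMM whose effective impedance is off by roughly ten percent:

```python
def reflection_coefficient(z_load, z_source):
    """Voltage reflection coefficient where a line of impedance z_source
    meets a section of impedance z_load."""
    return (z_load - z_source) / (z_load + z_source)

# Hypothetical numbers: a 28 ohm motherboard trace meeting a RIMM section
# that actually presents about 31 ohms.
gamma = reflection_coefficient(31.0, 28.0)
print(f"{gamma:+.3f}")  # about 5% of each edge re-reflects at the joint
```

A few percent per discontinuity sounds small, but it is exactly the kind of margin-nibbling that could push the 1.5x worst case over the edge on some board/RIMM combinations and not others.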


I just had another thought... If Rambus solved this problem by requiring a dead space of 1.25ns (or maybe 2.5ns, since it is a DDR-type clock), the effect might be a lot worse than merely a slight decrease in bandwidth. The reason is that RDRAM controllers are clocked internally at much lower frequencies than 400MHz or 800MHz. One would end up adding a dead time of one period of the slower clock; otherwise you would lose the synchronicity between what happens on the RDRAM bus and what happens on the internal clock. (You could design a system that let those clock domains run independently, but it would add a lot of complexity, and if you thought you had solved the bus contention issue you wouldn't implement it, since it would be unneeded. The extra complexity arises because the read latency, for instance, would be harder to predict, rather than simply being a constant number of slower-clock cycles.)
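To make the penalty concrete: rounding the dead time up to a whole internal-clock cycle can inflate it by nearly an order of magnitude. The 100MHz core clock below is an assumption for illustration, not a figure from any datasheet:

```python
import math

def effective_dead_time(dead_ns, internal_clock_mhz):
    """Round a bus dead time up to a whole number of internal clock
    cycles, as needed to keep the bus schedule synchronous with the
    controller core."""
    period_ns = 1000.0 / internal_clock_mhz
    return math.ceil(dead_ns / period_ns) * period_ns

# Hypothetical: a 1.25 ns gap on the 800 MHz bus, with a controller core
# clocked at an assumed 100 MHz, balloons into a full 10 ns stall.
print(effective_dead_time(1.25, 100.0))  # -> 10.0
```

So a nominally tiny 1.25ns gap would cost a full slow-clock period every time it is inserted, which is why the bandwidth hit could be far from slight.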


There could be (probably is) a simple finite state machine running at the full Rambus clock speed to deal with these things. Also, at system startup the controller calibrates the delays to each memory device and downloads this information to them.

Then, to avoid what amounts to a bus contention issue, they would have to specify that each RDRAM chip provide enough "break before make" margin to avoid contending.


The neat thing about Rambus's constant-current drivers is that bus contention in the conventional sense does not matter (except under certain specific read/write turnarounds, which are clearly discussed in the documentation and dealt with by wait states). It only becomes a problem when three contending signals coincide at a device's output. For two contending signals, which happens much of the time, the margins are easily large enough to cope with modest bus mismatches.

In many ways I hope this is the cause of the problems, because there would then be a clear solution with negligible performance impact, and it would not cast any doubt over the long-term reliability of the Rambus concept.

John