Hi John Walliker; I'm sorry, but I just can't let this one go by without correction, because metastability is a subject near and dear to me.
Re: "The clock travels together with data, so I don't think the latching of data is the problem." The problem is that the clock and data drift away from each other due to small random differences in environment. I cover this in a post between these of ours.
Re: "Conversely, failing to meet the setup time will cause random errors whose probability is exponentially related to the amount by which the specification is missed and the inherent speed of the data latch." This is simply not the case. The fact is that the specification provides a time region where the part is guaranteed to operate under. Any particular device is sure to operate under an even wider range of times. Getting metastability isn't particularly easy, just busting the setup or hold will not get it very often. Instead of having to do with the amount that a specification is abused, the metastability exponential decay parameters have to do (1) predicting the actual width of the metastability zone, and (2) predicting the duration of metastability, once that state is entered.
Re: "This is not true. Consider the IBM 256 Mbit DDR RAM you have mentioned previously. The setup and hold times are 0.075 * TCK, or 0.525 ns for the fastest grade device at a clock frequency of 143MHz. So the time window in which data must be stable is 1.05ns. Compare this with the 0.2 + 0.2 = 0.4ns time window needed by the older generation Samsung DRDRAM. The older DRDRAM actually latches data 2.6 times faster than the latest DDR RAM." Remarkably, the above calculations are correct, but you then fail to complete the calculations. The correct next step is to compute how much time the engineer has available to make the switch. Lets do that now.
With the DDR, a data cycle is 1/286 MHz = 3.5 ns. Since the data must be valid for 1.05 ns, the window in which the data may change is 2.45 ns long.
With the RDRAM, a data cycle is 1/800 MHz = 1.25 ns. Since the data must be valid for 0.4 ns, the window in which the data may change is 0.85 ns long. That is almost three times as tight as the DDR specification. No big surprise here; it is your half-finished results that are counter-intuitive.
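The arithmetic is simple enough to check by hand, but here it is as a few lines of Python, using only the figures quoted above:

    # Data-change window = data period - (setup + hold) window.
    def change_window_ns(data_rate_mhz, setup_plus_hold_ns):
        period_ns = 1000.0 / data_rate_mhz
        return period_ns - setup_plus_hold_ns

    ddr   = change_window_ns(286.0, 1.05)  # DDR: 143 MHz clock, both edges
    rdram = change_window_ns(800.0, 0.40)  # DRDRAM: 800 MHz data rate
    print(ddr, rdram, ddr / rdram)         # -> ~2.45, ~0.85, ~2.9x tighter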
Re: "Because of the exponential metastability relationship which will be much steeper for the faster DRDRAM data latches, less extra time is needed to provide an equivalent safety margin." This statement has no connection to engineering. The safety margin was calculated above, it is not the time that the data must be valid, it is, instead, the time that is available to switch the data. And incidentally, the exponential metastability characteristics depend on the design details of the flip-flops used in the two systems, and, given that they are running on the same process, and therefore can implement identical flip-flops, the metastability numbers are likely to be identical.
But metastability has nothing to do with working memory systems. When a memory designer starts computing metastability numbers (for the data paths), it is a sign that something has gone seriously wrong. None of the metastability calculations has much to do with the inherent safety of DRDRAM.
The trouble they are having producing this part comes from sending a lot of data down a small number of pins. That is not a safe way to engineer, and a lot of companies are now in trouble over it.
-- Carl