Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Tenchusatsu who wrote (43156), 5/30/2000 12:59:00 AM
From: Joe NYC
 
Tenchusatsu,

- One of the benchmarks used, "Stream," is the same one that Bert McComas used to "demonstrate" the supposed inferiority of Intel's 840 dual-RDRAM chipset to Micron's Samurai single-DDR chipset.

Are you now going to denounce the Stream benchmark? Your ownership of Rambus stock seems to be clouding your vision a bit.

This benchmark has been used for some time to measure the effectiveness of the memory / chipset combination. When the BX chipset was released, it smashed everything else by a wide margin, and it was used by Intel supporters to (rightly) claim the superiority of the Intel platform at that time.

- Coincidence?

It's no more a coincidence than the fact that people all over the world use a spoon to eat their soup. It's just the best tool for the job.

Here is the source code, which is very easy to understand:
ftp://ftp.cs.virginia.edu/pub/stream/Code/stream_d.c
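
For anyone who doesn't want to download it, here is a rough sketch of the four kernels STREAM times. This is my own simplification for illustration only; the array size, timing and reporting are cut down, and stream_d.c at the link above remains the authoritative version:

/* A rough sketch of the four kernels STREAM times (Copy, Scale, Add, Triad).
   This is a simplification for illustration only: the real stream_d.c adds
   timing calibration, multiple trials and result validation.  N must be much
   larger than the caches so that main memory, not cache, is being measured. */
#include <stdio.h>
#include <time.h>

#define N 2000000                    /* ~16 MB per array of doubles */

static double a[N], b[N], c[N];

static double mb_per_s(double bytes, clock_t ticks)
{
    return bytes / ((double)ticks / CLOCKS_PER_SEC) / 1e6;
}

int main(void)
{
    const double scalar = 3.0;
    for (long i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

    clock_t t0 = clock();
    for (long i = 0; i < N; i++) c[i] = a[i];                 /* Copy  */
    clock_t t1 = clock();
    for (long i = 0; i < N; i++) b[i] = scalar * c[i];        /* Scale */
    clock_t t2 = clock();
    for (long i = 0; i < N; i++) c[i] = a[i] + b[i];          /* Add   */
    clock_t t3 = clock();
    for (long i = 0; i < N; i++) a[i] = b[i] + scalar * c[i]; /* Triad */
    clock_t t4 = clock();

    double sz = (double)N * sizeof(double);
    printf("Copy:  %8.1f MB/s\n", mb_per_s(2 * sz, t1 - t0));
    printf("Scale: %8.1f MB/s\n", mb_per_s(2 * sz, t2 - t1));
    printf("Add:   %8.1f MB/s\n", mb_per_s(3 * sz, t3 - t2));
    printf("Triad: %8.1f MB/s\n", mb_per_s(3 * sz, t4 - t3));

    /* Touch the results so the compiler cannot discard the loops. */
    printf("checksum: %f\n", a[0] + b[0] + c[0]);
    return 0;
}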

- The article uses that same overclocked 440BX chipset as before to "demonstrate" PC133 SDRAM's supposed superiority over RDRAM. As I said before, an overclocked 440BX is not a valid platform for these sorts of tests. Tom (or Van Smith in this case) still thinks Intel's 815 chipset (which will support PC133) will outperform the overclocked 440BX.

I don't think the 815 will match the BX in performance. But it is not because it can't be done; it is because Intel chose not to do it. In the BX, the memory and the CPU run synchronously. All the other chipsets are asynchronous, that is, the FSB of the CPU does not necessarily match the memory clock. The VIA Apollo and the 810 are examples of this. You trade performance for flexibility. I believe the 815 will trade performance for flexibility as well. Incidentally, Rambus is also asynchronous, which is the cause of some of its performance penalty.

(FYI, the AMD 760 will be synchronous, and I believe it will support both DDR and SDR memory. So I guess this will be the real BX successor. The performance of the 760, even with SDR, will be a reminder to Intel of what might have been.)

- What if they're wrong? Will they take back their assertions that PC133 outperforms RDRAM?

I don't think they need to, since the 815 will be a different kind of chipset, a lot closer to the VIA Apollo than to the BX.

- Some of the benchmarks used in the article were hand-written by the author of the article himself

You can check the validity of his benchmarks when he releases them as open source. His explanation seems straightforward.

- Can we trust the validity of his benchmarking?

I personally put more trust in a benchmark that I can understand, and his synthetic benchmarks are very easy to understand. I think these synthetic benchmarks are better for testing some aspects of CPU / memory performance than application or game benchmarks, since those involve the interaction of many variables (graphics card, memory characteristics of the graphics card, the interface between the graphics card and the system (PCI, AGP Xx), hard disk, drivers, etc.). A sketch of how small such a synthetic test can be follows below.
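
As an illustration, here is a hypothetical pointer-chasing latency loop of the kind such synthetic tests use. To be clear, this is my own sketch, not one of the benchmarks from the article:

/* Hypothetical sketch of a synthetic memory-latency test: walk a chain of
   dependent pointers so that each load must wait for the previous one to
   finish.  Sattolo's shuffle guarantees the chain is a single cycle that
   visits every slot, so a large table cannot be served from cache.  This is
   not one of the benchmarks from the article, just an illustration of how
   little code a meaningful synthetic test needs. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NODES (1 << 22)              /* 4M slots, far larger than any cache */
#define WALKS (10L * NODES)          /* number of dependent loads to time   */

int main(void)
{
    size_t *next = malloc(NODES * sizeof *next);
    if (next == NULL) return 1;

    /* Sattolo's algorithm: a random permutation that is one single cycle. */
    for (size_t i = 0; i < NODES; i++) next[i] = i;
    srand(1);
    for (size_t i = NODES - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;          /* 0 <= j < i */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    clock_t t0 = clock();
    size_t p = 0;
    for (long k = 0; k < WALKS; k++) p = next[p];   /* the timed chain walk */
    clock_t t1 = clock();

    double ns = (double)(t1 - t0) / CLOCKS_PER_SEC * 1e9 / (double)WALKS;
    printf("average load-to-load latency: %.1f ns (final index %zu)\n", ns, p);

    free(next);
    return 0;
}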

- In conclusion, this is nothing more than an attempt to make "dead dead dead" a self-fulfilling prophecy for Rambus.

I don't know about other people, but I did not know that Rambus performs so poorly in the Stream benchmark, which does undermine one of the few claims to fame Rambus has to offer in the PC environment - that is, bandwidth. How much more Rambus-friendly can a benchmark get?

Joe



To: Tenchusatsu who wrote (43156), 5/30/2000 1:38:00 AM
From: Bilow
 
Hi Tenchusatsu; "- A common piece of anti-Rambus FUD is echoed yet again in this article: "RDRAM is rather more susceptible to EMI (electromagnetic interference) than SDRAM, and it is often advised to enable ECC (Error Correcting Code) especially with the faster PC800 RIMMs." I don't know of any engineer that would try and mask reliability problems with ECC. That's absolutely ludicrous. It's like driving a car more recklessly just because the car is equipped with airbags."

The truth is a lot more subtle than this. If EMI-induced memory failures were bit-independent and occurred at a sufficiently low rate, then you could reduce that rate considerably by going to ECC. While this would not entirely eliminate the problem, as it would still be possible for two bits to err in the same 64-bit word, it is possible that the error rate would be moved from an unacceptably high rate to an acceptably low rate. Everything in engineering has a limit, and error rates are no different.

As an example, suppose that bit errors are independent and occur at a rate of one per 10^15 bits. This would mean that the computer would have a memory failure every few weeks or so, depending on how fast memory was accessed. This would be an unacceptably high failure rate. But if ECC could then take that rate to one error per 10^30 bits, then it is highly likely that no user of any such computer ever built would ever see an error, even assuming that they ship 100 million of them and that each one computes for 100 years.
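
To put rough numbers on that, here is the back-of-the-envelope arithmetic. The sustained memory traffic figure is my own assumption, chosen only to make the calculation concrete:

/* Back-of-the-envelope version of the failure-rate argument above.  The
   sustained memory traffic figure (100 MB/s) is an assumption chosen only
   to make the numbers concrete; the conclusion does not change much within
   a couple of orders of magnitude either way. */
#include <stdio.h>

int main(void)
{
    const double bits_per_sec  = 100e6 * 8;   /* assumed sustained traffic  */
    const double raw_rate      = 1e-15;       /* errors per bit, no ECC     */
    const double ecc_rate      = 1e-30;       /* errors per bit, with ECC   */
    const double secs_per_year = 3600.0 * 24 * 365;

    double mtbf_raw = 1.0 / (raw_rate * bits_per_sec);   /* seconds */
    double mtbf_ecc = 1.0 / (ecc_rate * bits_per_sec);

    printf("no ECC : one error every %.1f days\n", mtbf_raw / 86400.0);
    printf("ECC    : one error every %.2e years\n", mtbf_ecc / secs_per_year);

    /* Fleet check: 100 million machines running for 100 years each. */
    double machine_years   = 100e6 * 100.0;
    double expected_errors = machine_years * secs_per_year / mtbf_ecc;
    printf("expected errors across the whole fleet: %.1e\n", expected_errors);
    return 0;
}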

Metastability is the classic example of engineers shipping products with a non-zero chance of failure. As was mentioned on the thread, the i820 and VIA chipsets are "asynchronous". With such interfaces, metastabilities are inevitable. (You get these when, in this case, you transfer data from one clock domain to another, asynchronous one.) Engineers essentially eliminate the problem by adding delays to the system, delays long enough to force the probability that a metastability will cause a system error to be extremely tiny. In fact, every time you type a character on a keyboard, there is a non-zero chance that you will, through a (highly improbable) cascading metastability, cause your computer to err.

The subject is quite fascinating; here's a link to a typical industry article giving metastability calculations. The fact is that in this imperfect world of mucus, blood and feces (as opposed to God's world of infinite perfection), engineers do drive faster (but not necessarily recklessly) when they have airbags:

Metastability Considerations January 1997
Metastability is unavoidable in asynchronous systems. However, using the formulas and test measurements supplied here, designers can calculate the probability of failure. Design techniques for minimizing metastability are also provided.
xilinx.com
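
For reference, the kind of calculation such application notes walk through is a mean-time-between-failures estimate of the form MTBF = exp(t_met / tau) / (T0 * f_clk * f_data). The little program below just evaluates that formula; the device constants in it are invented placeholders, not figures from the Xilinx article:

/* Sketch of the usual metastability MTBF estimate:
       MTBF = exp(t_met / tau) / (T0 * f_clk * f_data)
   where t_met is the settling time allowed before the synchronized signal is
   used, tau and T0 are device-dependent constants, and f_clk / f_data are
   the receiving clock rate and the asynchronous data rate.  The constants
   below are invented placeholders, NOT figures from the Xilinx article; the
   point is only the exponential payoff from a little extra settling time. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double tau    = 0.2e-9;   /* placeholder resolution time constant (s) */
    const double T0     = 1.0e-9;   /* placeholder metastability window (s)     */
    const double f_clk  = 100e6;    /* receiving clock, 100 MHz                 */
    const double f_data = 50e6;     /* asynchronous data rate, 50 MHz           */

    for (int ns = 1; ns <= 8; ns++) {
        double t_met = ns * 1e-9;   /* settling delay added by the designer */
        double mtbf  = exp(t_met / tau) / (T0 * f_clk * f_data);
        printf("t_met = %d ns -> MTBF = %.2e seconds (%.2e years)\n",
               ns, mtbf, mtbf / (3600.0 * 24 * 365));
    }
    return 0;
}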

-- Carl



To: Tenchusatsu who wrote (43156), 5/30/2000 3:05:00 AM
From: The Prophet
 
Thanks, Ten. No worries. The ever-present short position constitutes a natural buffer against the usual FUD during the ramp-up of RDRAM.