Technology Stocks : Rambus (RMBS) - Eagle or Penguin
To: mishedlo who wrote (53198)9/13/2000 8:36:41 AM
From: mishedlo   of 93625
 
Thoughts on bandwidth from Ron at the FOOL
Ryan (rbroner) is asking the right questions, folks. I welcome his input. I get frustrated with some of the others, though, who - for example - keep posting about how DDR offers similar data transfer speed to RDRAM, and support their argument by using peak bandwidth figures. Peak bandwidth is theoretical. The important criterion is effective bandwidth, which is the sustained data transfer rate. I don't think there's any dispute that SDRAM and its variants offer no greater than 65% effective bandwidth, while RDRAM offers 90-95%. You absolutely need to take this into account when comparing the two technologies. Otherwise it's not an apples-to-apples comparison.
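As a quick sanity check, here's the peak-vs-effective arithmetic in Python. The efficiency figures (65% for SDRAM, ~90% for RDRAM) are the ones claimed above, not measured values:

```python
def peak_bw_mb(clock_mhz, bus_bytes, transfers_per_clock=1):
    """Peak bandwidth in MB/s: clock rate x bus width x transfers per clock."""
    return clock_mhz * bus_bytes * transfers_per_clock

# PC100 SDRAM: 64-bit (8-byte) bus, single data rate at 100 MHz
pc100_peak = peak_bw_mb(100, 8)        # 800 MB/s peak
pc100_eff = pc100_peak * 0.65          # ~520 MB/s sustained

# PC800 RDRAM: 16-bit (2-byte) channel, 400 MHz clock, both edges (800 MT/s)
pc800_peak = peak_bw_mb(400, 2, 2)     # 1600 MB/s peak
pc800_eff = pc800_peak * 0.90          # ~1440 MB/s sustained
```

On paper the narrow Rambus channel "only" doubles the peak, but once you apply the efficiency figures the sustained gap is nearly 3x. That's why comparing peak numbers alone isn't apples-to-apples.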

This is a complicated technology. It's not possible to answer the same questions over and over without writing a novel. The main issue is that the need for these high-speed data transfer rates is dependent upon software. Compare MS Word on a SDRAM system versus a RDRAM system and there won't be a significant difference, regardless of how fast your processor is running. So to a large extent, the RDRAM/SDRAM debate is theoretical, dependent upon what future bandwidth needs will be. So you can only take current benchmarks so far. Given the current software, I won't disagree that RDRAM offers little or no improvement over SDRAM (except in special cases like Sony PS2). The real need for RDRAM will arise with more bandwidth-intensive software (e.g., voice recognition, high-end 3-D graphics, and the like) coming in the future. However, I believe that this is coming sooner than we may think.

Scalability: IMO, the main problem with SDRAM is that it's not scalable. Right now, we're looking at the very highest end that SDRAM can handle. Now the SDRAM folks are determined to double-clock, or use both the rising and falling edges of the clock cycle, which is called DDR. This makes a 100MHz system bus equivalent to 200 MHz. The problem with this approach is that beyond 100MHz, there are all kinds of mis-timing problems. Going to the next step, which is 133/266 MHz, actually offers worse effective bandwidth because of timing errors, different stub lengths or some other engineering minutiae that I'm not capable of understanding. Here's what Samsung says, for example:

Surprisingly, due to the mismatch between its interface and core timing, the 133 MHz SDRAM is significantly slower than the PC100 SDRAM.

usa.samsungsemi.com

And here's what Dataquest says:

There has been a lot of talk lately about double-data rate DRAMs, or DDR, and the possibility that this less-expensive technology could postpone the need for RDRAMs indefinitely. Dataquest does not buy into this theory for a number of reasons. First, the DDR interface, although easier to implement than RDRAMs (without the burdens of an increased die size and a royalty payment), is a solution only at the DRAM chip level. Many of the cookbook details that have been so neatly worked out by Rambus' engineers, details like signal paths, termination, and clocking, are left to the individual OEM in the case of DDR. This can significantly slow down time-to-market. Second, the high-signal frequencies used by DDR are alien to most circuit board designers and are likely to end up causing trouble when brought through one or more connectors to an indeterminate number of DRAM chips. Other reasons include a lack of the required support components and even a lack of rigid standardization that could impede the acceptance of this technology until the true winner is determined from an array of nearly compatible devices.

messages.yahoo.com

This guy seems to know what he's talking about:

DDR technology does not have legs. It is very hard to scale to higher speeds. DDR is just increasing speed of the data group signals, not the address, control, clock signals. Why not? Because it's too hard to keep all the lines in sync at high speeds! Rambus takes a completely different approach of packetizing the transfers into a protocol and sending them around in a loop (kind of like token ring) between the controller and the memory…

If you're using a DIMM-type package, DDR RAM has reflection problems because of the stub length off the bus for each DIMM. Rambus minimizes these with almost no stub length and a continuous bus from DIMM to DIMM, making for less signal degradation and higher clock speeds.

Message 10509331

So when you push DDR up to 133/266 MHz, timing errors and other problems cause the bandwidth efficiency to become much worse as a percentage of peak bandwidth. This is what the Samsung white paper is addressing. The figures I've seen indicate that DDR at 133/266MHz would only offer about 40% effective bandwidth. If you compare that to DDR at 100/200MHz at 65% effective bandwidth, you'll see there's no improvement at all. That seems to be the crux of the criticism by Intel and others. In summary, SDRAM/DDR doesn't scale.
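To see why this matters, plug in the efficiency figures cited above (65% at 100/200, ~40% at 133/266 — estimates, not measurements) on a standard 64-bit bus:

```python
def effective_bw_mb(transfers_mhz, bus_bytes, efficiency):
    """Sustained bandwidth in MB/s: transfer rate x bus width x efficiency."""
    return transfers_mhz * bus_bytes * efficiency

# 64-bit (8-byte) DDR bus, both clock edges used
ddr200 = effective_bw_mb(200, 8, 0.65)   # 100 MHz bus -> 1040 MB/s sustained
ddr266 = effective_bw_mb(266, 8, 0.40)   # 133 MHz bus -> ~851 MB/s sustained
```

If those efficiency numbers hold, the nominally faster 133/266 part actually delivers *less* sustained bandwidth than 100/200 — which is the whole scalability argument in two lines.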

Zorloc and others will vehemently disagree with the above statements. That's fine, and I have no way to rebut them. Why? Because 133/266 DDR doesn't yet exist as a working product! That's right, it's still in the testing stages. Presumably, there will be a 100/200 DDR system coming out with the AMD Athlon in a few months. But right now, DDR and beyond (e.g., DDR-2, which is set for what, 2002? 2003?) aren't here yet to any significant degree, so we really can't discuss them that intelligently.

Alternatives to RDRAM: Okay, so now let's talk about what SDRAM could do to increase data transfer speed or memory bandwidth to remain competitive with RDRAM. First idea is to double-clock the system (i.e., DDR), as mentioned above. Problem #1 is that double-clocking doesn't seem to work effectively at 133MHz and beyond. Problem #2 is that DDR may run afoul of RMBS' patents. My personal belief is that DDR does violate the patents, but obviously others may disagree. Time will tell. Here's an analysis of the issue.

dramreview.com

The other two things that SDRAM can do to increase memory bandwidth are to widen the system bus beyond 64 bits / 8 bytes, or to increase the speed of the system bus. Let's consider both of these:

Alternative #1: Expand SDRAM width to 128 bits

The other way to increase SDRAM bandwidth would be to expand the width of the data channel itself - such as 128 bits instead of the current 64 bits - but this could result in unwieldy module designs and increased power demands.

zdnet.com

Adding a second Rambus channel in the chip set ... takes far fewer pins than doubling the width of an SDRAM bank... This approach is far more expensive with SDRAM.... The small number of pins required by an RDRAM interface will also make integration of the interface onto the processor chip compelling, eliminating the latency of the chip set.

mdronline.com@20061605zxvqsp/slater/perspective/1314sp.html

Alternative #2: Increase SDRAM clock speed beyond 133MHz / 266 MHz

Current SDRAM designs may have difficulty beyond 133 MHz, as the varying lengths and signal loads on the circuitry may cause problems for identifying data signals.

zdnet.com

PC133 at best offers a modest incremental improvement over the ubiquitous PC100 SDRAM. The speed probably can't be increased much further without changing the interface....

mdronline.com@20061605zxvqsp/slater/perspective/1314sp.html

RDRAM, on the other hand, is easily scalable to higher memory bandwidths. RMBS will be announcing a 1.6 GHz product with two memory channels offering 6.4 GB/sec peak bandwidth and around 6 GB/sec effective bandwidth. Now compare that to any of the SDRAM/DDR solutions on the following chart, and draw your own conclusions about which memory technology will ultimately become predominant.

theregister.co.uk
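The arithmetic behind that 6.4 GB/sec figure is straightforward channel scaling — each Rambus channel is 16 bits (2 bytes) wide, and you multiply channels rather than redesign the bus. The ~94% efficiency below is my interpolation of the 90-95% range claimed earlier:

```python
def rdram_peak_mb(transfer_rate_mhz, channels, channel_bytes=2):
    """Peak bandwidth in MB/s for n Rambus channels (16 bits = 2 bytes each)."""
    return transfer_rate_mhz * channel_bytes * channels

peak = rdram_peak_mb(1600, 2)   # 1.6 GHz x 2 bytes x 2 channels = 6400 MB/s
effective = peak * 0.94         # ~6000 MB/s, matching the ~6 GB/sec claim
```

Note the contrast with SDRAM: doubling bandwidth here costs one more narrow channel, not a doubled 64-bit bus.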

Basically, SDRAM/DDR seems to max out at around 1 GB/sec effective bandwidth. That's fine while the fastest processor on the market is only 1 GHz. My personal opinion is that 1 GB/sec is as fast as we'll ever see a SDRAM/DDR system go in terms of effective bandwidth. (Others, I'm sure, will disagree.) So what about when 1.5 GHz processors go on the market? Here's what the general manager of Intel's desktop division says about that scenario:

[W]ith a 1.5-GHz processor, there just isn't sufficient bandwidth with SDRAMs. For a 1.5-GHz processor, it would be like giving up 500 MHz of processor speed, and that is why we have shuffled our strategy to make RDRAM the primary memory for Willamette.

eetimes.com

On the other hand, the 6 GB/sec effective bandwidth that will be announced by RMBS in June should carry us into the foreseeable future, at least.

Price: The current price differential exists because RMBS memory is supply-constrained and demand-driven. The actual cost differential at present is 40%, and RMBS' goal is to bring it down to 10% by the end of the year. This is the guidance that Geoff Tate has given over the last year. Take it or leave it; there's probably no way to know for sure (unless somebody wants to go interview some Samsung guys). Personally, I believe Geoff. (Stop laughing, zorloc.) He's always been more than up-front that the cost differential is a problem for RDRAM, and reducing it is RMBS' greatest focus at present.

Take it away, Geoff:

But having said that, it is certainly the case that price premiums initially are going to be quite high. Very significant, well over 50% versus SDRAM, on any comparison. The reasons for that are two. One is demand versus supply ... The other issue is the SDRAM itself has gotten to be such a low-priced part ... [Y]ou do have to invest to build and sell Rambus DRAMs. With SDRAM ... they already built fabs, they already bought the testers, they already have the assembly lines. With Rambus, they can use the same wafer fab, which is the big investment, but they do need to buy new testers and they do need to buy [equipment] for chip-scale assembly. In that case, they can use sub contractors as well, but at a minimum they have to buy testers…

Our target is to get within the range of a 10% price premium by the second half of 2000, a year or so from now. So we do that two ways. We work on costs with our partners and certainly volume helps cost, and the third thing is just increasing the number of suppliers and have supply exceed demand like it does for SDRAM so that price and cost correlate.

ebnews.com

The actual breakdown of the cost differential can be found midway through the following Rambusite post:

rambusite.com

The other huge issue that some people seem to be missing in terms of cost is that while the component cost is higher, RDRAM's higher bandwidth per pin means that RDRAM is already cost-effective for certain low-memory devices like video games. For example, from the same shareholder meeting posted immediately above, Geoff Tate estimates that using SDRAM chips in the PS2 would have added $30-$50 to the cost of making each machine. You can agree or disagree, but if RDRAM isn't cost-effective, why is Sony using it?
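Here's a rough sketch of what "bandwidth per pin" means. The pin counts below are illustrative assumptions for the memory interface (data plus address/control signals), not datasheet figures — the point is the order-of-magnitude gap, not the exact numbers:

```python
# Illustrative bandwidth-per-pin comparison. Pin counts are rough
# assumptions (~30 signal pins for a 16-bit Rambus channel, ~90 for a
# 64-bit SDRAM interface with address/control), not datasheet figures.

def bw_per_pin(peak_mb, signal_pins):
    """Peak bandwidth carried per interface signal pin, in MB/s."""
    return peak_mb / signal_pins

rdram = bw_per_pin(1600, 30)   # ~53 MB/s per pin (800 MT/s x 2 bytes)
sdram = bw_per_pin(800, 90)    # ~9 MB/s per pin (PC100, 64-bit bus)
```

Fewer pins means cheaper packages, smaller controllers, and simpler boards — which is why a narrow, fast channel can win on system cost in a console even when the chips themselves cost more.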

Heat: Every now and then, you hear SDRAM supporters criticize RDRAM's heat production, although I don't seem to come across references to the heat issue anywhere other than on the RMBS message board. Whatever.

Latency: Geoff Tate and others in the RMBS camp maintain that RDRAM and SDRAM have equal latency. SDRAM supporters say that RDRAM's latency is much worse. Anyway, here's what one source said about latency.

Some critics assert that RDRAM performance suffers because, even though it offers higher bandwidth, it has greater latency. Perhaps this concern is left over from first-generation RDRAMs, which did have long latencies. Direct RDRAM, however, roughly matches SDRAM latency and has unquestionably higher bandwidth.

mdronline.com

Miscellaneous: PCs aren't the only potential growth driver mentioned by RMBS management. Also in the pipeline are communications products, HDTV and printers that are targeted to use RMBS technology. Something for RMBS investors to look forward to in the future.

Big caveat: RDRAM's success is dependent upon software that will make use of these high speed technologies once they're introduced. We're talking about the future, so there's no way to know for sure whether this type of software will ever become predominant. As an investor, that's your call to make.

Here's to more civil, respectful and fact-specific discussions about this technology.

Ron