Politics : Formerly About Advanced Micro Devices


To: jim kelley who wrote (110938)5/15/2000 12:09:00 PM
From: 5dave22  Read Replies (1) | Respond to of 1578147
 
Jim Kelley, from the article

<As the current industry leader, Intel is taking a risk in supporting RDRAM. They are betting on the fact that there is a clear need for a higher bandwidth memory solution and that RDRAM seems to be that very solution. If it turns out that RDRAM is the solution we've all been waiting for, then don't be surprised if you see AMD supporting RDRAM shortly thereafter, but at the same time, since Intel is the one putting themselves on the line here, if RDRAM isn't all that it's cracked up to be, AMD can just sit back and say "we told you so" without losing face.>

Just as long-lost Paul used to point out that Intel could sit back and let AMD work out all of the Cu pitfalls, AMD can do the same with RMBS/Intel. As far as you saying that "AMDroids" fear Rambus, I doubt it. Personally, I want to see INTC fail with Rambus in the short term so I can realize some income sooner, but it is in NO WAY a threat to my AMD investment. I might hedge my AMD bet and buy a small allotment of RMBS stock. Seems like a good strategy, eh?!

Dave



To: jim kelley who wrote (110938)5/15/2000 12:15:00 PM
From: chic_hearne  Read Replies (3) | Respond to of 1578147
 
Re: http://www.anandtech.com/showdoc.html?i=1239

Jim, Pete, and everyone else discussing technicals of RAMBUS-

I read the article. I don't understand much of it. I believe you don't need to, though, because there's only one important sentence, and everyone can understand it:

Currently, you can find a 128MB PC800 for under $600 if you shop around, but if you compare this to $100 you can pick up a generic 128MB PC133 module for, that's still quite pricey.

Maybe this one is also important:

The price of RDRAM will go down, but by year's end, don't expect it to be the same price as SDRAM although the price difference will definitely decrease. That decrease may not only be because the price of RDRAM will be falling, but potentially because the price of SDRAM may rise again.

Let me interject a little common sense into this argument: as long as RAMBUS is $500 more expensive per 128 MB, RAMBUS is just STUPID STUPID STUPID!!!!

For a performance system with 512 MB, that's $2000 more for the RAMBUS system, all other things being equal.
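
Just to spell out the arithmetic (the ~$600 and ~$100 per-128MB figures are the article's ballpark street prices, nothing more precise than that):

# Rough cost delta of filling a box with RDRAM instead of SDRAM,
# using the article's ballpark May-2000 prices.
RDRAM_PER_128MB = 600   # ~street price of a 128MB PC800 RIMM
SDRAM_PER_128MB = 100   # ~street price of a generic 128MB PC133 DIMM

def rdram_premium(total_mb):
    """Extra dollars spent if total_mb of memory is RDRAM instead of SDRAM."""
    modules = total_mb // 128
    return modules * (RDRAM_PER_128MB - SDRAM_PER_128MB)

print(rdram_premium(512))   # -> 2000, the $2000 figure above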

That's like me selling you a Geo Metro with a Ferrari engine for $50,000.

Burns is our friend. Barrett is our friend. RAMBUST is our friend.

chic (who loves AMD more and more as the days go by)



To: jim kelley who wrote (110938)5/15/2000 1:09:00 PM
From: pgerassi  Read Replies (1) | Respond to of 1578147
 
Dear Jim:

I read over the editorial. There are glaring holes in his "Debunks". First: you can get more out of SDRAM without a large increase in pins. This is called interleaving. You connect the data pins together and skew the clocks given to each SDRAM DIMM. Doing this to pairs of DIMMs, you get PC266 from paired PC133: on the rising edge of the base clock, DIMM one's data is valid; on the falling edge (the rising edge of the skewed clock), DIMM two's data is valid. This is effectively DDR. Interleaving 4 DIMMs gets you PC533. Usually this takes buffering on the DIMMs and tight control of trace characteristics to operate reliably, which is why workstations and servers usually require buffered DIMMs.
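
Here's a quick sketch of that bandwidth arithmetic; the 64-bit DIMM data width and 133 MHz clock are just the standard PC133 numbers, and the rest follows from the interleave factor:

# Peak transfer rate of N-way interleaved PC133 SDRAM: with the data
# pins shared and each DIMM's clock skewed, each DIMM contributes one
# 64-bit transfer per base clock, so the interleave factor multiplies
# the effective data rate.
BUS_WIDTH_BITS = 64     # standard SDRAM DIMM data width
BASE_CLOCK_MHZ = 133    # PC133

def interleaved_bw_mb_s(interleave):
    transfers_per_s = BASE_CLOCK_MHZ * 1e6 * interleave
    return transfers_per_s * BUS_WIDTH_BITS / 8 / 1e6

print(interleaved_bw_mb_s(1))   # ~1064 MB/s, plain PC133
print(interleaved_bw_mb_s(2))   # ~2128 MB/s, "PC266" / DDR-equivalent
print(interleaved_bw_mb_s(4))   # ~4256 MB/s, "PC533"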

Now when you increase width, you do not double all the pins, just the data pins plus one or two additional control pins at most. Thus a 2-wide setup only adds 64 pins to the memory controller, and 4-wide adds 192 pins. 4-wide and 4-way interleaved gives you 16 times the bandwidth. If you start with PC133, you get 16.96 GB/sec, far higher than any RAMBUS currently designed. That is roughly 10 times PC800, and ten PC800 channels would use 290 pins where the above uses 278 pins. Now who has the lower pin count? Most SDRAM controllers hang 4 DIMMs off a trace, so the above is quite usable and not out of the ordinary.
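
To put numbers on that comparison (the per-interface pin counts below are my assumptions, picked to line up with the 278 and 290 figures above; treat them as a sketch, not a datasheet):

# 4-wide, 4-way interleaved PC133 vs. ten PC800 RDRAM channels.
# Assumed pin budget: ~86 pins for the base 64-bit SDRAM interface
# (data + address + control), ~64 extra data pins per additional
# "width", and ~29 pins per RDRAM channel.
PC133_GB_S = 64 * 133e6 / 8 / 1e9   # ~1.06 GB/s per 64-bit PC133 channel
PC800_GB_S = 1.6                    # peak of one PC800 RDRAM channel

def wide_interleaved_sdram(width, interleave):
    bandwidth = PC133_GB_S * width * interleave
    pins = 86 + (width - 1) * 64
    return bandwidth, pins

def rdram_channels(n):
    return PC800_GB_S * n, 29 * n

print(wide_interleaved_sdram(4, 4))   # ~ (17.0 GB/s, 278 pins)
print(rdram_channels(10))             # ~ (16.0 GB/s, 290 pins)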

Second, power usage: if one RAMBUS chip is referenced constantly (quite probable, with current working sets being less than 16MB, the size of a RAMBUS chip), its power draw is far greater than that of an SDRAM chip of the same size. Engineers must design for worst-case loads. That one chip uses up to 4 times the power of an SDRAM chip (and this assumes the SDRAM is like all the others). That higher thermal density, having to get four times the heat off the same area, requires a heat sink not because the heat needs to be spread, but because it needs to be removed.
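
A toy illustration of why the thermal density, not the total heat, is the problem (the 0.3 W baseline and 1 cm^2 area are made-up placeholder numbers; only the 4x ratio comes from the argument above):

# Same package area with four times the watts means four times the
# W/cm^2 that has to be pulled off the part -- hence the heat spreader.
SDRAM_ACTIVE_W   = 0.3    # hypothetical active power of one SDRAM chip
POWER_RATIO      = 4.0    # worst case: one RDRAM chip hammered constantly
PACKAGE_AREA_CM2 = 1.0    # hypothetical package area, same for both

sdram_density = SDRAM_ACTIVE_W / PACKAGE_AREA_CM2
rdram_density = SDRAM_ACTIVE_W * POWER_RATIO / PACKAGE_AREA_CM2
print(sdram_density, rdram_density)   # 0.3 vs 1.2 W/cm^2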

Third, it costs more because of the way RDRAM is tested. Due to the high heat of the chip, full die testing requires the heat sink, and the heat sink is not in place until the RDRAM dies are assembled into a RIMM. If one chip does not meet spec, the whole RIMM is wasted. Thus, if a RAMBUS die yields 95% good after some minor die testing, the overall yield of a module is 95% ^ 8 * RIMM board yield * RIMM assembly yield, or about 60%. If die yields are 90%, you get roughly 40% RIMM yield. I believe individual RDRAM chips are within 2-5% of SDRAM yields, but RIMM yields are much worse than SDRAM DIMM yields (around 99%) because full chip testing, or "Known Good Die", isn't possible before assembly. A yield of 60% would double the price, and a yield of 30% would quadruple it. Even if 95% were correct, a 256MB RIMM uses 16 chips (die), and 95% ^ 16 is only about 44% before board and assembly losses. RAMBUS costs will only come down if single dies can be more fully tested. The cost differential is due to the RAMBUS design of RIMMs. The heat appears to be concentrated in the I/O portion of the die, most likely the high-speed drivers and control logic. On buffered DIMMs, this job is relegated to driver ICs on those DIMMs.
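
The yield math is easy to check yourself; the 0.97 board and 0.97 assembly yields below are my own guesses, since only the die yields and chip counts are given above:

# If every die on a RIMM must be good (no full per-die test before
# assembly), module yield is die_yield^chips times board and assembly
# yields. Board/assembly yields here are assumed, not from the post.
def rimm_yield(die_yield, chips, board_yield=0.97, assembly_yield=0.97):
    return die_yield ** chips * board_yield * assembly_yield

print(rimm_yield(0.95, 8))    # ~0.62 -- the "about 60%" case
print(rimm_yield(0.90, 8))    # ~0.40
print(rimm_yield(0.95, 16))   # ~0.41 -- 16-chip 256MB RIMM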

Fourth, for embedded applications, high-width (x32) SDRAMs are used where performance is not critical and DDR where it is (see most high-end 3D cards). Here, the cost/benefit analysis is more of a "wash" and depends on what is desired more.

Pete