Technology Stocks : Intel Corporation (INTC)


To: Elmer who wrote (152245)12/12/2001 1:16:20 PM
From: combjelly
 
" It's obvious but it's no more contrived than the claim that Intel is losing money on Xbox."

Good point. And you are right, that comparison was uncalled for.

While I am sure Intel does not make the same money per chip on the Xbox part as it does on its regular processors, it is probably still a good ways above breakeven, especially if they put it in a flip-chip BGA package.



To: Elmer who wrote (152245)12/12/2001 11:51:08 PM
From: Dan3
 
Re: What makes the people who claim that any different from Dan?

You might consider that, while you and Paul were raving about how RDRAM was going to let Intel crush AMD, I was writing this:

Friday, Jul 30, 1999 8:09 AM

I wouldn't expect the K7 to ever use Rambus. Why take on the complexity and latency costs of squeezing 128 bits of 100 MHz RAM down to 16 bits, and then opening it back out to 64 bits, if you don't have to? Rambus was designed on the premise that it was impossible to build DRAM faster than 100 MHz. But all the major RAM makers are shipping 166 MHz SDRAM - today - check out the spec sheets at Micron, LG, Hitachi, etc. Rambus was a clever solution to a problem that turned out not to exist.
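As a rough sketch (the bus widths and clocks are from the post; the helper function and the double-data-rate framing of the 800 MT/s channel are my assumptions), the raw-bandwidth arithmetic behind that objection works out like this:

```python
# Peak bandwidth of a memory interface: width * clock * transfers per clock.
def bandwidth_bytes_per_sec(width_bits, clock_hz, transfers_per_clock=1):
    return width_bits // 8 * clock_hz * transfers_per_clock

# A 128-bit-wide array running at 100 MHz...
wide = bandwidth_bytes_per_sec(128, 100e6)
# ...serialized onto a 16-bit channel clocked at 400 MHz, both edges (800 MT/s).
narrow = bandwidth_bytes_per_sec(16, 400e6, transfers_per_clock=2)

print(wide, narrow)  # identical: 1.6e9 bytes/s either way
```

So the narrow fast channel buys no raw bandwidth over the wide slow one; it only trades pin count for serialization overhead, which is the post's point.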


...and this:

Saturday, Jul 31, 1999 12:02 AM

Rambus is still listing bin splits for the 800 parts with part numbers for latency between 40 and 50 ns. SDRAM for delivery in the same time frame is running between 13 and 30 ns. SDRAM loses some of that advantage in bandwidth due to setup time (I think it's 10 ns - can someone correct this for me?), but that still leaves a delta of up to 27 ns. There was a lot of analysis I remember from 8 or 10 years ago, when many systems were offered with or without cache. No cache - as was reprised in the early Celerons - caused about a 35% performance hit. An 8K cache caught about 85% of memory requests, 16K about 90%, 32K about 95% - but beyond that the hit rate for most code asymptotically approached 98% (this is all from memory - please correct the inevitable errors).
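The cache-hit figures above can be turned into an average access time in the standard way. A sketch with hypothetical latencies (a 2 ns cache hit is my assumption; 45 ns is the midpoint of the 40-50 ns bin splits quoted):

```python
def avg_access_ns(hit_rate, hit_ns, miss_ns):
    """Average memory access time: hits are served from cache, misses pay full latency."""
    return hit_rate * hit_ns + (1 - hit_rate) * miss_ns

# Hypothetical 2 ns cache hit; 45 ns miss (mid-range of the quoted bin splits).
for rate in (0.85, 0.90, 0.95, 0.98):
    print(f"{rate:.0%} hits -> {avg_access_ns(rate, 2, 45):.2f} ns average")
```

The average falls steeply as the hit rate climbs, which is why main-memory latency matters less once caches catch most requests - but the RDRAM-vs-SDRAM latency delta still shows up in every miss.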

Rambus, with its 128-bit-wide 100 MHz RAMs being fed through a 16-bit-wide bus at 8x the speed of the back-end devices, isn't compelling as a performance solution. It's still reading from 100 MHz RAM, and now there is the overhead associated with ordering the bits and pumping the data down the bus. But for a motherboard/CPU manufacturer like Intel it radically reduces pin and board trace counts and so reduces costs. But you can only save so much money on a motherboard that retails for $75 to $200 including chipset and power components. And if your memory costs an extra $50, and your $700 processor now performs exactly like a $500 processor, you have a problem (if most of this is correct).
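The cost argument reduces to a subtraction. A sketch with one made-up number (the $50 memory premium is from the post; the $20 board savings is a hypothetical stand-in for the pin- and trace-count reduction):

```python
memory_premium = 50   # extra cost of RDRAM per system (figure from the post)
board_savings = 20    # assumed savings from fewer pins/traces (hypothetical)

net_system_cost_delta = memory_premium - board_savings
print(net_system_cost_delta)  # positive: the system still comes out more expensive
```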

If AMD can't pump out enough K7s to be a significant market presence, this doesn't matter. If lots of K7s come out at speeds equal to or greater than Coppermine, it's back to the motherboard and chipset drawing boards for Intel.