Technology Stocks : Energy Conversion Devices

To: wily who wrote (4934), 5/25/2000 2:35:00 PM
From: Plaz  Read Replies (1) of 8393
 
"If OUM can achieve significant density advantages over current cache RAM, on-die OUM cache or large OUM chips as part of a multi-chip module could solve the memory bandwidth problem (which is not really a problem until processors get up to about 2 GHz) and make DDR/Rambus irrelevant."

Very unlikely. There will always be a need for higher-speed chip-to-chip interfaces; to think otherwise goes against the entire history of computing.

It's also much more likely that we'd see OUM (if we ever see it) in a flash chip first (because of price and demand in that segment), then in DRAM, and only then in an embedded-cache scenario. Yes, integration saves money, but it always trails the leading edge of technology. This fall, Intel will finally release its system-on-a-chip (i.e., highly integrated) Timna product. Intel and AMD have enough to worry about in making 40M+ transistor chips without also integrating OUM and adding 7 steps (is that right?) to their fab process right off the bat. OUM could end up on-chip eventually as an embedded DRAM cache, but that's too far off to be making predictions about. I'd just be happy to see an OUM product!
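For what it's worth, the "not a problem until about 2 GHz" claim above can be sanity-checked with a back-of-envelope calculation. All the inputs below are illustrative assumptions on my part (instruction mix, miss rate, line size, and the era's bus peaks), not figures from the post:

```python
# Back-of-envelope: off-chip bandwidth a fast CPU demands from main memory.
# Every number here is an assumed, illustrative value, not a vendor spec.

clock_hz = 2e9           # 2 GHz processor (the figure quoted above)
mem_ops_per_cycle = 0.3  # assume ~30% of instructions are loads/stores
miss_rate = 0.02         # assume 2% of those miss the on-die cache
line_bytes = 64          # assume a 64-byte cache line fetched per miss

demand = clock_hz * mem_ops_per_cycle * miss_rate * line_bytes
print(f"DRAM bandwidth demand: {demand / 1e9:.2f} GB/s")

# Peak (not sustained) bandwidth of memory buses of the era, assumed values:
pc100_sdram = 0.8e9      # 64-bit PC100 SDRAM, ~0.8 GB/s peak
rambus_pc800 = 1.6e9     # single-channel PC800 RDRAM, ~1.6 GB/s peak
print(f"Demand as a fraction of PC100 peak: {demand / pc100_sdram:.2f}")
print(f"Demand as a fraction of PC800 peak: {demand / rambus_pc800:.2f}")
```

Under these assumptions the demand works out to roughly 0.77 GB/s, i.e., a 2 GHz part on its own nearly saturates a PC100 bus's peak, which is consistent with the quoted claim that bandwidth only becomes the bottleneck around that clock speed.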

Plaz