Plaz,
I agree that Flash is the first target for OUM -- the warmup, so to speak. But I don't see why system memory and embedded applications (particularly cpu cache) couldn't come along soon after. Until they get the multibit functionality going in OUM (which will give it a great density/cost advantage over DRAM), I think that there may be even more incentive for someone like Intel to use it as cpu cache than for a DRAM manufacturer to use it for system memory.
On-die cache provides a huge advantage for cpu efficiency, but it is extremely expensive (in its present form). Check out the prices on Xeon processors with 1 and 2 MB of on-die cache. Don't they use SRAM for this because of its speed? But SRAM has a very large memory cell (six transistors per bit), so the cost in die space is extreme.
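A quick back-of-envelope sketch of why that die-space cost is so extreme, assuming the standard 6-transistor (6T) SRAM cell and ignoring tag arrays and decoders (which only add more):

```python
# Back-of-envelope: transistor cost of on-die SRAM cache.
# Assumes the standard 6-transistor (6T) SRAM cell; tag arrays,
# decoders, and other overhead are ignored and would only add more.

TRANSISTORS_PER_SRAM_CELL = 6
BITS_PER_BYTE = 8

def sram_transistors(cache_bytes):
    """Approximate transistor count for an SRAM cache of the given size."""
    return cache_bytes * BITS_PER_BYTE * TRANSISTORS_PER_SRAM_CELL

for mb in (1, 2):
    cache_bytes = mb * 1024 * 1024
    print(f"{mb} MB cache: ~{sram_transistors(cache_bytes):,} transistors")
    # 1 MB -> ~50 million transistors, 2 MB -> ~100 million
```

So a 2 MB SRAM cache burns on the order of 100 million transistors just for the data array -- you can see why Intel charges what it does for those Xeons.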
It's possible that you're right -- that integrating OUM on a cpu would be a huge engineering challenge -- we'll have to wait and see, but it's fun to speculate... Tyler has said that it should integrate easily. OUM adds two or three mask steps to the CMOS process.
OUM has a small cell, and because it is nonvolatile it needs no refresh -- it is static, like SRAM. I think this could be a huge and coveted advantage for Intel over AMD in the processor wars (not that Intel will have exclusive rights to OUM, but they will definitely have the early-experience advantage, and if it provides a real edge, at least one of them is going to want it), since there isn't much advantage to be gained anymore in terms of pure architectural innovation.
As for your rule that "There will always be need for higher speed chip-to-chip interfaces", I don't know. Engineers are always looking for the cheapest way to accomplish something, and you get a lot of bandwidth bang for the buck by integrating in a package or on-die. I think (though I'm not an expert) that new packaging technologies are enabling multi-chip modules, and that this is a strong trend for the future.
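To put a rough number on that "bandwidth bang for the buck" point -- all figures here are hypothetical, just to show the scale of the gap between a wide on-die bus and a pin-limited external one:

```python
# Rough illustration (all numbers hypothetical) of why on-die or
# in-package integration gives so much bandwidth: wide, fast buses
# are nearly free on-die but expensive as package pins.

def bandwidth_gb_s(bus_bits, clock_hz):
    """Peak bandwidth in GB/s for a bus of the given width and clock."""
    return bus_bits / 8 * clock_hz / 1e9

on_die   = bandwidth_gb_s(256, 2.0e9)  # e.g. a 256-bit on-die bus at 2 GHz
off_chip = bandwidth_gb_s(64, 400e6)   # e.g. a 64-bit external bus at 400 MHz

print(f"on-die:   {on_die:.1f} GB/s")    # 64.0 GB/s
print(f"off-chip: {off_chip:.1f} GB/s")  # 3.2 GB/s
print(f"ratio:    {on_die / off_chip:.0f}x")  # 20x
```

With those (made-up but plausible) numbers, the on-die path has 20x the peak bandwidth, which is why moving memory on-die or into the package is so tempting compared to building ever-faster chip-to-chip interfaces.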
wily |