To: Zeev Hed who wrote (7048) 11/1/2000 2:11:22 PM
From: SBHX

Zeev,

It is easy to overestimate the ability of Intel's architecture team to correctly predict performance gains, or even to correctly identify the true bottlenecks.

Consider the question of fetching long strings (to use your term). One of Intel's top I/O architects has been heard to say in public forums like IDF that he would like to deal with bursts of 1K or larger. That would certainly solve the efficiency problem from a streaming-data point of view, but the penalties to everyone else running other applications are quite severe.

First of all, one of the main design targets for all this fast memory is actually AGP/shared-memory style graphics; host processing is probably second. This means that whatever memory is being designed has to work well with graphics. Let's take the 1KB burst by itself and study a single problem: texture fetching by a 3D graphics accelerator. Texture accesses are typically bilinear fetches, which require a 2x2 block of texels to be read from the texture map. Ideally those texels would all be sitting in the texel cache on the graphics chip, but they never end up that way, because there is not enough cache reuse. A graphics chip that wants to optimize performance on AGP (or whatever replaces it) has to organize its cache lines to match this long optimal burst size of the external memory subsystem. That requires some very interesting juggling of how the texels are tiled, as well as a punishing requirement on the absolute size of the texel cache in EACH of the texture units. Judged in that light, I doubt that Intel or anyone else really has a complete handle on what the real solution is (or so my graphics-chip-designer friend tells me (vbg)).

We also give too much credit to each engineer's ability to make absolutely correct design decisions. The higher up the application chain one goes, the more abstract the problem becomes, and the harder it is to analyze correctly with queueing networks or simulations. Sometimes, when there is too much system-level interaction, it's just a gut feel for where the bottlenecks are. That's why the I/O architect wants to deal with 1K bursts: he knows he then has a solid, solvable problem with 'optimal' efficiency.

As an aside, based on these potential royalties and the SDR/DDR licensing, I would not hesitate to play RMBS short term and buy it on severe dips. The one thing holding me back from considering RMBS as a long-term hold is actually the amount of conviction from many people that the INTC-DRDRAM thing is still a reality. Ignoring all that, I still don't understand, from a purely balance-sheet view, why RMBS would be hurt if DRDRAM were dropped. Even if DDR were used, as long as RMBS still gets the royalties, what difference does it make? But the level of conviction makes me nervous, and the 'leaked' Intel document is a good indication that when the final reality sinks in, people will not make rational decisions based on potential revenues, but will react to the irrational aspect of whether RDRAM is used or not.

SbH
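P.S. A tiny back-of-the-envelope sketch of the bilinear-footprint problem, in C. The numbers are all my own assumptions (32-bit texels, a 1024-texel-wide map, the 1KB burst the architect wants, and a 16x16 tile that happens to fill one burst exactly); nothing here is Intel's or anyone's actual design. It just counts how many 1KB bursts a single 2x2 bilinear fetch touches under a plain row-major texel layout versus a tiled one.

/* Hypothetical sketch (my own numbers, not any vendor's actual design):
 * count how many 1KB memory bursts one bilinear 2x2 texel fetch touches
 * under a linear (row-major) texel layout versus a 16x16 tiled layout,
 * assuming 32-bit texels and a 1024-byte burst/cache-line size.
 */
#include <stdio.h>

#define TEXEL_BYTES   4          /* 32-bit RGBA texel (assumed)             */
#define BURST_BYTES   1024       /* the "1K or larger" burst from the post  */
#define TEX_WIDTH     1024       /* texture width in texels (assumed)       */
#define TILE_DIM      16         /* 16x16 texels * 4 bytes = 1024 bytes     */

/* Burst index holding texel (x,y) when texels are stored row-major. */
static long burst_linear(int x, int y)
{
    long byte_addr = ((long)y * TEX_WIDTH + x) * TEXEL_BYTES;
    return byte_addr / BURST_BYTES;
}

/* Burst index holding texel (x,y) when the texture is tiled so that each
 * 16x16 block of texels occupies exactly one 1KB burst. */
static long burst_tiled(int x, int y)
{
    long tiles_per_row = TEX_WIDTH / TILE_DIM;
    return (long)(y / TILE_DIM) * tiles_per_row + (x / TILE_DIM);
}

/* Count the distinct bursts touched by the 2x2 footprint anchored at (x,y). */
static int bursts_touched(long (*burst_of)(int, int), int x, int y)
{
    long seen[4];
    int n = 0;
    for (int dy = 0; dy < 2; dy++)
        for (int dx = 0; dx < 2; dx++) {
            long b = burst_of(x + dx, y + dy);
            int dup = 0;
            for (int i = 0; i < n; i++)
                if (seen[i] == b) dup = 1;
            if (!dup) seen[n++] = b;
        }
    return n;
}

int main(void)
{
    /* A bilinear fetch that happens to straddle two texture rows. */
    int x = 100, y = 200;
    printf("useful data per bilinear fetch: %d bytes\n", 4 * TEXEL_BYTES);
    printf("linear layout: %d burst(s) of %d bytes each\n",
           bursts_touched(burst_linear, x, y), BURST_BYTES);
    printf("tiled  layout: %d burst(s) of %d bytes each\n",
           bursts_touched(burst_tiled, x, y), BURST_BYTES);
    return 0;
}

On these assumed numbers, the 2x2 fetch needs only 16 bytes of texel data, yet the row-major layout drags in two full 1KB bursts whenever the footprint straddles a row, while the tiled layout usually touches one burst (up to four only at tile corners). That is the juggling act I was referring to: the tiling and the per-unit texel cache have to be sized around the burst, not the other way around.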