To: Tenchusatsu who wrote (107876 ) 8/21/2000 1:22:49 PM From: EricRR

I didn't even know that HP had such a feature in their hands. I know that HP has it for PA-RISC, and is working on it for IA-64. I don't know the status of the second project though. Look at this HP link: hpl.hp.com

The motivation for this project came from our observation that software and hardware technologies appear to be headed in conflicting directions, making traditional performance delivery mechanisms less effective. As a direct consequence of this, we anticipated that dynamic code modification might play an increasingly important role in future computer systems.

Consider the following trends in software technology, for example. The use of object-oriented languages and techniques in modern software development has resulted in a greater degree of delayed binding, limiting the program scope available to a static compiler, which in turn limits the effectiveness of static compiler optimization. Shrink-wrapped software is shipped as a collection of DLLs (dynamically linked libraries) rather than a single monolithic executable, making whole-program optimization at static compile-time virtually impossible.

(edit: if you think this is bad, polymorphic code is even harder, and scripting programs (JSP and ASP server apps) are the worst. /edit)

Even in cases where powerful static compiler optimizations can be applied, the computer system vendors have to depend on the ISV (independent software vendor) to enable these optimizations. But most ISVs are reluctant to do this for a variety of reasons. Advanced compiler optimizations generally slow down compile times significantly, thus lengthening the software development cycle. Furthermore, a highly optimized binary cannot be debugged using standard debugging tools, making it difficult to fix any bugs that might be reported in the field.
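To see why delayed binding ties a static compiler's hands, here is a minimal C++ sketch (the Shape/Square/Circle names are invented for illustration, not from HP's material): the call through the base-class pointer is resolved at run time via the vtable, so a static compiler that cannot see every concrete type (say, one living in a separately shipped DLL) cannot inline or specialize it. A dynamic optimizer watching the running program, by contrast, could observe which target actually dominates and inline it behind a cheap guard.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Toy class hierarchy illustrating delayed binding: area() is bound
// at run time through the vtable, so the concrete body that executes
// may come from code the static compiler never saw (e.g. a DLL).
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

struct Circle : Shape {
    double radius;
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.14159265 * radius * radius; }
};

// A static compiler cannot know which area() bodies run here, so it
// cannot inline the call. A runtime optimizer that sees, say, 99% of
// calls dispatch to Square::area could inline that body behind a
// cheap type-check guard.
double total_area(const std::vector<std::unique_ptr<Shape>>& shapes) {
    double sum = 0.0;
    for (const auto& s : shapes) sum += s->area();  // late-bound call
    return sum;
}
```

The point of the sketch is only that the binding happens at execution time; that is exactly the window in which a dynamic optimizer, unlike a static one, still has full information.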
The reluctance by ISVs to enable advanced machine-specific optimizations puts computer system vendors in a difficult position, because they do not control the keys to unlock the performance potential of their own systems!

Meanwhile, the trend in hardware technology is in the direction of offloading more complexity from the hardware logic to the software compiler. The move from CISC to RISC to VLIW clearly illustrates this. The motivation from the hardware perspective is clear: simpler hardware can potentially run cooler, can be implemented on a smaller die (increasing the chip yield), and can possibly cost less to manufacture. But the problem is that the static compiler is being asked to take on an increasingly greater performance burden at a time when the obstacles to static compiler analysis are continuing to increase. The result will inevitably be either very complex compiler software that provides only modest performance gains on general-purpose applications, or highly customized compilers that only work well for very narrow classes of applications tailored for those compilers.

Clearly, a technology is needed to bridge this growing gap between software and hardware trends, so we can avoid the pitfall of simply taking the complexity from one side of the hardware/software boundary and shifting it to the other. In our view, the ideal bridging technology is a dynamic performance delivery mechanism, implemented primarily in software, that operates very late in the program compile-install-load-execute spectrum. It should complement rather than compete with the strengths of the static compiler and microarchitecture hardware. And most importantly, it should be possible to deploy the technology without any disruption to existing software and hardware technologies, or else its adoption will be too slow to make any impact.
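The "operate very late, at execution time" idea can be sketched with a toy hot-code cache. This is not HP's actual implementation and every name below is invented; real systems like Dynamo work on native instruction traces, and a std::function stands in here for generated code. The shape of the technique is: run a routine the slow way while it is cold, count its executions, and once a counter crosses a "hot" threshold, install an optimized version in a software code cache and dispatch to that instead.

```cpp
#include <cassert>
#include <functional>
#include <unordered_map>

// Toy sketch of a dynamic optimizer's dispatch loop (invented names):
// interpret a routine while it is cold, and once it becomes hot,
// install a fast version in a software code cache and run that.
class HotCodeCache {
public:
    explicit HotCodeCache(int hot_threshold) : threshold_(hot_threshold) {}

    // Execute routine `id` with argument `arg`. `interpret` is the slow
    // cold path; `optimized` stands in for code a real system would
    // generate on the fly once the routine proves hot.
    long run(int id, long arg,
             const std::function<long(long)>& interpret,
             const std::function<long(long)>& optimized) {
        auto cached = cache_.find(id);
        if (cached != cache_.end()) return cached->second(arg); // hit: run "compiled" code
        if (++counters_[id] >= threshold_) {
            cache_[id] = optimized;  // "compile" into the code cache
        }
        return interpret(arg);       // cold path: interpret
    }

    bool is_hot(int id) const { return cache_.count(id) != 0; }

private:
    int threshold_;
    std::unordered_map<int, int> counters_;                       // execution counts
    std::unordered_map<int, std::function<long(long)>> cache_;    // the code cache
};
```

Because the decision is made while the program runs, the optimizer sees the actual binding targets and actual hot paths, which is precisely the information the static compiler lacked in the paragraphs above.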
The Dynamo project was started in 1995 to understand the challenges in designing, engineering, and deploying such a technology in the marketplace, in a way that provides a clear value proposition to system vendors, software designers, and end users.

And look at the diagram on this page. It compares Transmeta to the new HP paradigm. hpl.hp.com

Now look at the Intel diagram for Itanium's firmware spec: eetimes.com Not being too familiar with such things, I found it hard to interpret. But I think it looks like exactly what HP is talking about. The question is: will EACH Itanium system include this technology like Transmeta's, or will ONLY HP systems include it?