Lizzie - I am a microprocessor design engineer by training (although I have not practiced in many years), so any new processor design is exciting to look at. Kind of like guessing what the baby will be like before it is born.
But years before we had silicon, before we even had the simulator, people were questioning the 'hard over' stance Merced took on instruction optimization. Basically, the design depended on having a reliable stream of executable instructions with very few misses. That was supposed to come from specially designed compilers. Since we didn't have the compilers yet, we 'assumed' for the purposes of analysis that they would be there. Even then, there were lots of questions, which led to a major redesign in McKinley.
Of course, the 'magic' compilers never did come to pass, and so the subsequent generations of IA64 went to an increasingly traditional pipelined design to accommodate out-of-order instruction execution, large caches to reduce miss penalties, and the usual tricks.
In the early days, there were lots of VLIW advocates who were in love with the general architecture, and a whole spectrum of opinion beyond that, ranging down to 'this dog won't hunt' on the negative end from people who were more in the Alpha camp (remember that Alpha was the leading 64-bit processor architecture at the time).
All of that discussion was years before silicon was at hand. By the time even good simulators were available, it was pretty clear that a lot would need to be done to make IA64 work.