To: Steve Porter who wrote (58807) 6/25/1998 11:57:00 PM
From: Haim Barad
Steve Porter wrote on Jun 25 1998 7:34PM EST:

"You are talking insanity now. Yes, there is a bottleneck in the CPU at this stage of the game. But this isn't something Intel (or their investors) should count on for much longer. The next generation of chips, already under development, offloads even more work from the CPU. Coupled with DirectX6, which has the ability to feed even more 'raw' data into the graphics chipset, the CPU will be used for less and less, eventually only for sound routines and the like (which are also being offloaded to dedicated co-processors)."

This is not at all true. Games will make more and more use of the CPU (and therefore be more and more scalable). How? You can make a laundry list of techniques: more complex models, deformable surfaces, better physical modelling, displacement mapping, volume rendering, mixed rendering, better lighting models, fog volumes, and so on.

"On top of this, the difference we are talking about is going from 35 to 57 fps. Well, the human eye can only see between 24 and 30 reliably anyway, so what difference does it make?"

Again, this is shortsighted. If a game developer sees that a high-end platform is rendering his content at a high rate, then that means he has more "headroom" to enhance his content (i.e., use more and better features on the high-end gaming platforms than you would see on the low-end platforms). In other words, a fast Pentium II system with a high-end graphics card can show a "more exciting" game than a cheaper platform. Many games in the past were designed this way, and game developers will be doing this even more in the future.

Haim
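
To make the "headroom" argument concrete, here is a minimal C sketch. It is not from the original post, and the function names and constants are illustrative assumptions, but it shows the basic idea: the game measures how quickly each frame finishes and grows or shrinks its CPU-side work to fill whatever spare time the platform has.

/*
 * Minimal sketch of scalable, CPU-bound game content.
 * The game tracks the last frame time and adjusts the size of a toy
 * particle simulation to use the available headroom. All names and
 * numbers are illustrative assumptions, not from the original post.
 */
#include <stdio.h>

#define TARGET_FRAME_MS 33.3   /* aim for roughly 30 fps on any platform */
#define MIN_PARTICLES   200    /* detail floor for low-end machines      */
#define MAX_PARTICLES   5000   /* detail ceiling for high-end machines   */

static int particle_count = MIN_PARTICLES;

/* Grow or shrink the simulation to match the measured frame time. */
static void scale_detail(double last_frame_ms)
{
    if (last_frame_ms < TARGET_FRAME_MS * 0.8 && particle_count < MAX_PARTICLES)
        particle_count += 100;      /* plenty of headroom: add CPU work */
    else if (last_frame_ms > TARGET_FRAME_MS && particle_count > MIN_PARTICLES)
        particle_count -= 100;      /* falling behind: cut detail       */
}

int main(void)
{
    /* Pretend frame times: first a fast platform, then a slow one. */
    double frame_ms[] = { 12.0, 13.0, 14.0, 40.0, 42.0, 38.0 };
    int i;

    for (i = 0; i < 6; i++) {
        scale_detail(frame_ms[i]);
        printf("frame %d: %.1f ms -> simulating %d particles\n",
               i, frame_ms[i], particle_count);
    }
    return 0;
}

On a fast Pentium II with spare frame time, this kind of loop ends up simulating far more detail than on a slower machine, which is exactly the "more exciting game on the high-end platform" effect described above.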