Soup -- thanks, now we are talking. It's good to have some numbers like this as a starting point. But I'm not sure that his test with Mathematica:
`The real proof that faster is better comes when you apply a high-performance Power Mac to a computation-heavy job. Rather than pick from the usual graphics applications, I selected Wolfram Research Inc.'s Mathematica 3.0, which was recently benchmarked by Karl Unterkofler at fampm201.tu-graz.ac.at;
is what I'm talking about. You see, for a well-compiled binary running at the assembly or machine-code level, yes, we'll see that RISC beats a Pentium and so on, but what we're measuring there is the processor, and maybe a bit of how the OS handles tight resources, not the kind of everyday sluggishness I was talking about. Once you start the compute-intensive Mathematica job you won't see much difference between Win95 and WinNT, for example, because you aren't opening and closing windows or otherwise exercising the OS; you're just running the processor, which is the same. The big improvement with Mathematica on Intel under NeXT is nice to see, but it may just mean that a big program like Mathematica was probably written in a high-level language and that the compilers on the Unix-origin OS were more efficient than the Windows ones, i.e. Mathematica began life on Unix and was perhaps ported to Windows inefficiently. Still, it's a good point (which Philip Lee made) that big programs will tend to port well into Rhapsody, and your numbers confirm this for me. If it shows up across a range of programs I'd be even happier.
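Just to illustrate what I mean (a toy sketch of mine, nothing taken from the actual benchmark): a compute-bound job like the loop below spends essentially all its time in the processor and in whatever code the compiler generated, and makes almost no calls into the operating system, which is why the choice of OS barely shows up in that kind of test, while all the window-and-event traffic that makes a machine feel sluggish never enters into it at all.

    /* Toy illustration (my own, not from the benchmark): a purely
     * compute-bound job.  Its running time depends on the processor
     * and on the compiler's code generation, because the loop makes
     * no operating-system calls until the final printf. */
    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;
        long i;

        /* Tight floating-point loop: no window system, no disk,
         * no OS interaction -- just the CPU grinding away. */
        for (i = 1; i <= 50000000L; i++)
            sum += 1.0 / (double)i;

        printf("harmonic sum = %f\n", sum);  /* the only OS call */
        return 0;
    }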
But what the everyday user, who doesn't run Mathematica, sees is the kind of thing I mentioned in my last post. Maybe the only way to know is to wait for the product and see. All I know is that on my own computer, yes, big compute-intensive jobs run great, but I still dread turning on the NeXT now because of all the general waiting around. Anyhow, your link is a good frame of reference. Thanks,
Shahn