To: Charles Tutt who wrote (51685) 10/19/2000 12:54:08 PM
From: rudedog

Charles - there are always anecdotal cases for uptime. That's why statistical evidence from independent third parties is necessary. I drew my numbers from Gartner, Giga, and Meta, and I actually did some discounting for the effects of the I/O problems, which drove availability down to 94.5% in a small sample of affected systems.

I know that SUNW has made improving its uptime statistics a priority. Once you try to get above 98.5% availability, the practices of the IT department, the network configuration, the way power is routed to the servers, and a host of other factors become important in achieving better numbers. SUNW has claimed that part of the problem is that the "dot-coms" do not have the kind of infrastructure needed to support big systems, and that better education on best practices is at least as important as the reliability of the base systems. I happen to agree with that.

CPQ has a comprehensive set of programs around best practices, one for their advanced server deployments and an even more rigorous set for the datacenter products. Those services include educating the customers, analyzing proposed architectures and their likely availability results, and ongoing support of the infrastructure both for performance (service level agreements, or SLAs, guaranteeing availability) and for comprehensive life-cycle management - in short, the whole range of IT services that big customers have come to expect from vendors like IBM.

Sun has been playing catch-up in the services side of its business, and in my opinion this will be the critical area for them over the next few years. Not to say they can't do it... but some of the less informed SUNW partisans posting here (not including you, of course!) don't seem to understand that when they start down the availability path, they are likely to highlight a competitive advantage of the current MSFT proposition.
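
For a sense of scale on those percentages, here's a quick back-of-the-envelope conversion to annual downtime (my own arithmetic sketch, not a figure from Gartner, Giga, or Meta), as a short Python snippet:

# Annual downtime implied by an availability percentage.
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(availability_pct):
    """Hours of downtime per year at the given availability level."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (94.5, 98.5, 99.9):
    print("%.1f%% available -> %.0f hours down per year" % (pct, downtime_hours(pct)))

# Output:
# 94.5% available -> 482 hours down per year  (~20 days)
# 98.5% available -> 131 hours down per year  (~5.5 days)
# 99.9% available -> 9 hours down per year

The point being that the gap between 94.5% and 98.5% is the difference between roughly three weeks and five and a half days of downtime a year, which is why the infrastructure and best-practices factors above start to dominate once the base hardware is reasonably solid.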