Technology Stocks : All About Sun Microsystems


To: JC Jaros who wrote (28549) - 3/4/2000 11:08:00 PM
From: rudedog
 
JC - That series was fascinating - especially given the time it was written. One of Petreley's comments is especially interesting - he says the central theme of this series has not been that Microsoft is failing to innovate, but that Microsoft's misplaced competitive priorities may adversely affect the development of NT.

After the launch of NT4, when "Cairo" was the Next Big Thing, MSFT announced technology-sharing agreements with several "big iron" companies, including Sequent, DEC and Tandem. The notion was that some of that enterprise technology would work its way into NT. As far as I know, none of the initiatives proposed back then ever came to pass.

A MSFT manager who was involved in that work "retired" last year, and I had dinner with him a few weeks later. Over a bottle of wine (well, a couple of bottles...) we ended up talking about what happened, or rather didn't happen, as a result of those initiatives. According to him, the prevailing opinion was that the fundamental problem with MSFT's ability to play in the enterprise was not so much the technology they had as the way they managed the development process (leaving aside support, for the moment).

Companies that service the most demanding mission-critical accounts have a largely customer-driven development cycle for the products those customers use - new features are designed to solve customer issues, not to trump the competition. As a result, features are introduced only rarely, and only in a well-integrated way that minimizes disruption to the existing technology base. And each feature introduction is followed by a release that does nothing but clean up whatever transition or integration issues the new features created.

MSFT of course is at the other end of the spectrum - they never met a feature they didn't like, and the notion of slowing down product cycles to improve stability, and to support the longer development and life cycles of large enterprise customers, works against the core competitive nature of the MSFT machine.

Now we see MSFT paying some lip service to the notion of service packs which do not introduce features. But at the same time the next version and the version after that are being hyped, with delivery dates well within the window of a single development and deployment cycle for an enterprise customer.

I wonder what would have happened if somewhere along the way, MSFT had decided to do an OS aimed at meeting the needs of high end customers without regard to the rest of the market. But given the MSFT culture, I can't see anyone inside the company ever building enough traction to make something like that happen.



To: JC Jaros who wrote (28549) - 3/5/2000 12:11:00 AM
From: rudedog
 
JC -
One more comment on Petreley - he is a big fan of X. Although his comments on the difficulties Citrix has encountered in supporting multiple user contexts on NT are all true - in fact they couldn't do it on standard NT - he glosses over the weaknesses in X with some questionable statements.

He says X allows the meat of the application to run at the server and simply display and take input from the remote terminal or workstation.
True as far as it goes. But of course X was developed when a full-up RISC workstation was well over $10K, and $10K was a whole lot more money than it is today, so the protocols were designed to minimize the intelligence required on the client. As a result, the interface is cut at the level of raw drawing operations - the client knows much less about the properties of screen objects than a Citrix MetaFrame client does. I can run a MetaFrame client at a display resolution of 1280 by 1024 over a 28.8K modem and get good response - plenty usable for remote access and even remote development and debug. Try that with X... even on DSL, X is a sloth. Nice and snappy on 100Mb Ethernet though...
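
To make the "dumb client" point concrete, here's roughly what an X program looks like on the wire - a minimal sketch using the standard Xlib API, nothing more. (Note that X's naming is inverted from the way I've been describing it: the application is the X "client", and the machine with the screen runs the X "server".) Every drawing call below becomes a protocol request shipped to the display end, and every keystroke comes back the same way:

/* Minimal Xlib sketch. Build: cc xdemo.c -lX11
   Run with DISPLAY=remotehost:0 to put the screen on another box. */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);   /* honors $DISPLAY */
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                     10, 10, 300, 200, 1,
                                     BlackPixel(dpy, scr), WhitePixel(dpy, scr));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    GC gc = DefaultGC(dpy, scr);
    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);            /* input events cross the wire back */
        if (ev.type == Expose) {
            /* each primitive is a separate request - the display end keeps
               no knowledge of widgets, menus, or any higher-level objects */
            XDrawLine(dpy, win, gc, 0, 0, 300, 200);
            XDrawString(dpy, win, gc, 20, 20, "hello", 5);
        }
        if (ev.type == KeyPress)
            break;
    }
    XCloseDisplay(dpy);
    return 0;
}

The application never learns that "hello" is part of a button or a menu - and neither does the display. A repaint means resending the primitives.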

He goes on to say X11 is harder on a network wire because it is more distributed.
This is EXACTLY BACKWARDS!! X is LESS distributed. The bandwidth is used because the client cannot deal in metadata and must request every component of the display from the server.
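
A crude back-of-envelope makes the cost visible. Take a worst case where the display end has to be sent a full 1280x1024 8-bit repaint, versus a MetaFrame-style client that caches widget state and only needs a short command to redraw locally - the 64-byte command size and the DSL rate are just my assumptions for illustration, not measured numbers:

#include <stdio.h>

/* Illustrative arithmetic only - assumed sizes, not measurements. */
int main(void)
{
    const double repaint_bytes = 1280.0 * 1024 * 1;  /* 1280x1024, 8 bits/pixel */
    const double command_bytes = 64;                 /* assumed "redraw widget" msg */
    struct { const char *name; double bps; } links[] = {
        { "28.8K modem",      28800.0 },
        { "DSL (640K)",      640000.0 },
        { "100Mb Ethernet",  100e6 },
    };
    for (int i = 0; i < 3; i++) {
        printf("%-15s full repaint %7.1f s, cached-widget command %.4f s\n",
               links[i].name,
               repaint_bytes * 8 / links[i].bps,
               command_bytes * 8 / links[i].bps);
    }
    return 0;
}

On the modem, the full repaint comes to about six minutes against a few hundredths of a second for the command - exactly the difference between "sloth" and "plenty usable". X doesn't literally ship raw pixels for everything, but the more the client can cache and reason about, the closer it sits to the command end of that range.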

He goes on to compare the number of users an NT server supporting Citrix can handle with a Unix system supporting X - and correctly points out that the Unix box, even running the X engine, carries only a fraction of the load of the NT system and can support a lot more users. That is, as he says, because the Citrix system not only has to manage the graphics interface but also has to "fake out" NT and manage all of the individual user contexts.
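
For a feel for what "faking out" NT involves, here's a conceptual sketch - emphatically not Citrix's code, just my cartoon of the problem - of the state a multi-user layer has to keep and swap per user on an OS that assumes a single interactive user:

#include <stdio.h>

#define MAX_SESSIONS 64

/* Everything the single-user OS thinks is machine-global has to be
   duplicated per session and swapped in on every request. */
typedef struct {
    int   session_id;
    int   user_token;        /* stand-in for per-user security credentials */
    void *display_surface;   /* off-screen framebuffer for this user */
    void *input_queue;       /* this user's keyboard/mouse event stream */
    void *object_namespace;  /* per-session copy of "global" window lists etc. */
} Session;

static Session sessions[MAX_SESSIONS];

/* Re-home each incoming request into the right context before the
   OS sees it, then swap everything back out afterwards. */
static void dispatch(int id, void (*handler)(Session *))
{
    Session *s = &sessions[id];
    /* impersonate s->user_token, map in s->object_namespace ... */
    handler(s);
    /* ... unmap, revert - for every single request, for every user */
}

static void paint(Session *s)
{
    printf("repainting surface for session %d\n", s->session_id);
}

int main(void)
{
    for (int i = 0; i < 3; i++) sessions[i].session_id = i;
    for (int i = 0; i < 3; i++) dispatch(i, paint);
    return 0;
}

Multiply that bookkeeping by every user and every graphics call and you get the load difference Petreley points out. A Unix box never needs the layer at all - multiple user contexts are what the kernel does natively.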

While I agree with his criticisms of NT in that regard, there is no comparison between X and the MetaFrame architecture. I have been playing with the beta MetaFrame engine for Solaris, and it has better performance and less server load than X at a fraction of the network load. I also have the ability to run any mix of apps (NT or Solaris) on any client, NT or Unix - as soon as I get my 11Mb wireless fully shaken out, that will be a big benefit, especially sitting by the pool...

Take a look at citrix.com if you're interested.