Technology Stocks : Intel Corporation (INTC)


To: Amy J who wrote (99620) | 2/22/2000 9:40:00 AM
From: rudedog
 
Amy -
Linux does indeed provide a good platform for "scale out" solutions, using exactly the same architecture DELL used for their on-stage demo at the W2K launch. CPQ has been using that architecture for years. A CSCO router is used along with a transaction monitor to retain a user's request for a web page and resubmit it if a node fails. In the current state of web interaction, this provides complete recovery. The architecture also offers nearly unlimited scaling, limited only by whatever back-end services (such as database access) the application requires.
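The retain-and-resubmit idea above can be sketched in a few lines of Python. This is only an illustration of the pattern, not any vendor's actual product: `submit_with_retry`, `failing_node`, and `healthy_node` are hypothetical names, and a real transaction monitor sits in front of the web farm rather than in application code.

```python
def submit_with_retry(request, nodes, max_attempts=3):
    """Retain the request and resubmit it to the next node if one fails."""
    last_error = None
    for attempt in range(max_attempts):
        node = nodes[attempt % len(nodes)]  # simple round-robin over the farm
        try:
            return node(request)
        except ConnectionError as err:
            last_error = err  # node failed; the retained request is resubmitted
    raise RuntimeError("all nodes failed") from last_error

# Two stand-in "nodes": one down, one healthy.
def failing_node(request):
    raise ConnectionError("node down")

def healthy_node(request):
    return "page for " + request

# The first attempt fails; the monitor resubmits and the user never notices.
result = submit_with_retry("GET /index.html", [failing_node, healthy_node])
```

Because the web request itself is retained outside the failed node, recovery is invisible to the user - which is exactly why the OS on each individual web host matters so little here.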

MSFT provides well-integrated services in W2K for that kind of transaction monitoring, but there are plenty of other ways to do it, both commercial and home-grown. There is little difference in implementation among W2K, Linux, or any other web hosting package in this regard, since the OS on the web host is by design not much involved in either scaling or failover.

There is almost no reason to increase the power of an individual node - it is a pure economics game: which package puts processing power in a rack at the lowest cost? Just in the last few years that "sweet spot" has moved from one processor to two, and the next generation of products may take it to four.

There is nothing inherent in Linux that keeps it from "scaling up" well, except for the attention and experience of the designers working the problem. There has simply not been much interest in the Linux community in solving it. An engineer needs access to big hardware, which might reduce the number of people who tackle the issue compared to architecture-independent problems, but the real reason is that there is no burning need to go after big SMP scaling at the moment. Code paths around lock management, reducing the "weight" of a thread, careful analysis of pipeline stalls and of conditions that flush the I & D caches, design of asynchronous I/O subsystems - all of this needs careful tuning to improve big SMP performance, and much of it is processor dependent. In that sense, people designing specifically for IA32 SMP have an advantage, since the systems are readily available and the problems well known.
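One of the lock-management ideas alluded to above can be shown in miniature: instead of every CPU contending for a single hot lock, split the protected data into shards, each with its own lock, so concurrent updaters rarely collide. The sketch below is a toy Python illustration under stated assumptions (`ShardedCounter` is a hypothetical name; real kernel work of this kind is done in C at a much lower level), meant only to show the shape of the technique.

```python
import threading

class ShardedCounter:
    """Reduce lock contention by splitting one global lock into per-shard locks."""

    def __init__(self, shards=8):
        self._locks = [threading.Lock() for _ in range(shards)]
        self._counts = [0] * shards

    def increment(self, key):
        # Different keys usually hash to different shards, so two threads
        # incrementing different keys rarely wait on the same lock.
        shard = hash(key) % len(self._locks)
        with self._locks[shard]:
            self._counts[shard] += 1

    def total(self):
        # Reads must take every shard lock to get a consistent sum.
        for lock in self._locks:
            lock.acquire()
        try:
            return sum(self._counts)
        finally:
            for lock in self._locks:
                lock.release()

counter = ShardedCounter(shards=4)

def worker():
    for key in range(1000):
        counter.increment(key)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The trade-off is typical of big-SMP tuning: updates get cheaper while consistent reads get more expensive - which is why this kind of work needs the careful, hardware-in-hand analysis described above.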

However, as one moves into larger SMP configurations, at least at today's price points, other components - management and performance-analysis tools, for example - become increasingly important, and the cost of the OS becomes less of a factor. So there is not much justification for designers to go after base-OS scale-up performance until those other pieces are available.

Now on to the pending conflict in the market between Intel and some of the big OEMs like CPQ. Intel is moving up the food chain, as Barrett and Grove discussed recently - Grove laid this out quite clearly in the last few days. As Intel makes larger and more complete building blocks, it inevitably reduces differentiation for vendors like CPQ who engineer those same blocks. And it is in Intel's business interest to foster the broadest possible adoption of those blocks - getting into the ASP space with a vendor like DELL, who can effectively take advantage of the most integrated blocks Intel can produce, is just one example. Intel gets more of the "value add" - the portion of the sticker price that goes to Intel's bottom line rather than to the vendor's margins.

It may eventually come to direct competition between Intel and CPQ in that space, although I doubt that is Intel's goal. But since the trend is inevitable and driven by fundamental economics, there is little either company can do to avoid the conflict - just as CPQ was unable to avoid the inevitable conflict with its own channel over the last few years. Intel is probably wise to go after this problem as directly as possible, communicate its plans clearly, and let the chips fall where they may (pun intended...).

I believe that CPQ is still Intel's largest customer - it's either them or DELL, and all of CPQ's server and commercial desktop systems, as well as a majority of their consumer line (despite the AMD press), are Intel based. In any event, they are certainly an important part of Intel's business, and I would imagine the management of the two companies is working this in a pragmatic way. It's just a hard problem, and Intel has fewer constraints in its solution matrix than CPQ does.

As for "Glad you've joined the INTC thread" - thanks. I have been an investor in INTC for a long time and have posted here frequently in the past. I read a lot more than I write on most of the threads I monitor, and over the last year or so the regulars on this thread have done a good job of highlighting most of the points I would have made, so I felt little need to put my oar in the water. This thread has always had a high level of quality technical content, and it's always a pleasure to get into a discussion here.