Technology Stocks : Intel Corporation (INTC)


To: Mary Cluney who wrote (140704) 8/2/2001 10:38:10 AM
From: Tony Viola
 
Mary, good post. I'd like to see the look on Mike Ruettgers' face when told that his RAID arrays, SANs, etc., were counted in server sales instead of storage sales. Same with the Cisco, Brocade, NTAP, etc. people. I'd even like to see the look on the IBM Shark top VP's face if he were told his sales went into IBM's server coffers. Bottom line: you can't lump everything in the corporate IT center room together as a server. This argument is getting old and going nowhere. We're right, they're wrong, and that's it for me on this subject.

Tony



To: Mary Cluney who wrote (140704) 8/3/2001 4:08:31 AM
From: pgerassi
 
Dear Mary:

<<< The bill that you refer to is for a server used by the Transaction Processing Performance Council (TPC) for its transaction processing and database benchmarks. >>>

<<< This has nothing to do with our discussion. Our discussion is about the 4 million servers shipped each year from manufacturers and used by IDC to track server sales. DBMS sales belong to Oracle, IBM (DB2), Informix, Sybase and others. SAP, Ariba, ITWO software sales are not included. Mass storage devices shipped in boxes on their own are also not included. Cisco, Lucent, and other hardware sales are not included. >>>

Pure garbage! Each of those 4 million servers includes storage of some type. If a server uses virtual memory, that implies a disk for it to page to. Any Windows "swap" file is an extension of main memory kept on disk. So storage is included in every server that is attached to storage through its I/O buses. IDE, SCSI, and Fibre Channel controllers are plugged into a PCI bus. Sometimes that bus is on the motherboard, but most times the controller is a PCI card plugged into a PCI slot (and sometimes there are other buses, like CompactPCI, CardBus on laptops, or point-to-point links like HyperTransport). In all these cases the controller card and its resources plug into the server's I/O bus somewhere inside the server. The drives themselves have little intelligence and cannot be considered servers, as much as it pains you to hear it. Whatever intelligence there is sits on the controller, inside the server proper, connected to the server CPUs via chipsets and PCI bridges. If the controllers are not in a server, the disks do nothing. Thus, by no means are they servers by themselves. The same goes for Fibre Channel attached disks.
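To make that concrete, here is a rough Python sketch (assuming a Linux box and the standard sysfs view of PCI; the path and class codes are ordinary PCI conventions, nothing to do with how IDC counts) that lists the mass-storage controllers hanging off the host's own PCI bus:

# List PCI devices whose class code marks them as mass-storage controllers
# (base class 0x01 covers SCSI, IDE, RAID, Fibre Channel, etc.).  Every entry
# found here lives inside the server's own I/O complex, which is the point:
# the disks are peripherals of the server, not servers themselves.
import os

PCI_ROOT = "/sys/bus/pci/devices"   # sysfs view of the PCI bus on Linux

def storage_controllers():
    found = []
    for dev in sorted(os.listdir(PCI_ROOT)):
        with open(os.path.join(PCI_ROOT, dev, "class")) as f:
            class_code = int(f.read().strip(), 16)
        if (class_code >> 16) == 0x01:      # base class 0x01 = mass storage
            found.append(dev)               # e.g. "0000:00:1f.2"
    return found

if __name__ == "__main__":
    for dev in storage_controllers():
        print("storage controller on host PCI bus:", dev)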

This stupid division only exists for those who do not think servers have storage. WRONG! Registers are storage, cache is storage, main memory is storage, ROM is storage, flash is storage, and so is disk or tape. No server runs without some storage (it is part of the definition of a computer, even if it is nothing more than a signal traveling down a wire). Each is storage, just with different attributes: access latency, bandwidth, volatility, and so forth. You have no trouble with the first five types.
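To put the "same abstraction, different attributes" point in one place, a tiny sketch; the latency figures are order-of-magnitude guesses for hardware of this era, nothing more:

# Rough storage hierarchy: same abstraction, different latency and volatility.
# Latencies are order-of-magnitude guesses, not measurements.
hierarchy = [
    # (name,          approx_latency_seconds, volatile)
    ("registers",     1e-9, True),
    ("cache",         5e-9, True),
    ("main memory",   1e-7, True),
    ("ROM / flash",   1e-6, False),
    ("disk",          1e-2, False),
    ("tape",          1e+1, False),
]

for name, latency, volatile in hierarchy:
    print(f"{name:12s}  ~{latency:.0e} s  {'volatile' if volatile else 'non-volatile'}")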

A cabinet that can hold 1260 drives would be pretty big. It would be hard to haul through doors and move around. So they separate it into smaller cabinets, each holding a fraction of the total. It does not matter how many cabinets they split them across; they are still connected via IDE, SCSI, or fiber cables to the I/O bus portion of the main box (which could itself be in a separate cabinet behind a suitable PCI-to-PCI bridge). Why is this so hard to understand? Logically they are one unit.

Logically you can have 30 PCI devices per PCI bus and up to 255 PCI buses bridged together. It would also be hard to put all the CPUs, all of the memory, and up to 15K PCI devices in one cabinet, for the same reasons.
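Back-of-the-envelope, using the figures above (30 devices per bus times 255 buses gives roughly 7.6K device slots; getting to ~15K presumably counts multi-function devices, and that last step is my assumption, not something stated above):

# Rough PCI fan-out arithmetic based on the figures above.
devices_per_bus = 30      # usable device slots per PCI bus (figure from the text)
buses           = 255     # logically bridged PCI buses    (figure from the text)
functions       = 2       # assumed functions per device to reach the ~15K figure

single_function = devices_per_bus * buses        # ~7,650 devices
multi_function  = single_function * functions    # ~15,300 logical devices

print(single_function, multi_function)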

<<< Your example is equivalent to the Comparative Museum Council's report on Museum valuations. It includes the cost of museum construction and the acquisition cost of all exhibits. That report would be different than the one the Museum Construction Cost Council provides - that report would only consider the actual construction cost of a museum. >>>

Wrong! They separate the server from the client computer portion. The client portion simulates N users concurrently using the server to do transactions. N is usually a very big number, since the typical user does 6 to 10 transactions per minute; to get 100K tpmC you need more than 10K users. They separate the server software into another category. They separate out the client systems and the networking infrastructure connecting the server to the client simulators. Serving 10K users would need much more network infrastructure. And I have met few users who never print at all (payroll checks are one of those times almost everyone wants hard copy). And notice, they all include at least one tape drive (although it is highly inadequate for real backups on real servers).
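The user math is easy to sanity-check; a quick sketch, using the 6 to 10 transactions per minute per simulated user from above:

# How many simulated users does a given tpmC result imply?
def users_needed(tpmc, tx_per_min_per_user):
    return tpmc / tx_per_min_per_user

for rate in (6, 10):
    print(f"100K tpmC at {rate} tx/min/user -> ~{users_needed(100_000, rate):,.0f} users")
# -> roughly 10,000 to 17,000 simulated users, hence the client farms and the networking.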

Your arguments against this particular bill (since you brought up ERP) do not hold water. This would be a core cost and definitely a lower bound of actual server costs.

<<< <sarcastic on>Yeah, its real hard to justify the 2% figure.<sarcastic off> >>>
<<< Your ally, Dan, reported that Intel sold 6 million units of the Xeon chip for an ASP of $1200. That comes out to $7.2B. That one chip alone that goes into servers, using your 2% figure, would give you a server market of $360B for one chip. >>>
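For reference, the arithmetic in that quoted argument is simply this (the unit count, ASP, and 2% share are the numbers quoted above, nothing else):

# The quoted reductio: 6M Xeons at a $1200 ASP, divided by a 2% CPU share.
xeon_units = 6_000_000
xeon_asp   = 1_200
cpu_share  = 0.02

xeon_revenue   = xeon_units * xeon_asp        # $7.2B
implied_market = xeon_revenue / cpu_share     # $360B -- the supposedly absurd conclusion
print(f"${xeon_revenue/1e9:.1f}B in Xeons -> ${implied_market/1e9:.0f}B server market at 2%")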

I disagree with Dan as to the number of chips going into the server market. I believe he was talking about upper bounds, and those can be much higher than actuals. Of that 6 million, most went into workstations, not servers. Many of the 4 million servers were probably powered by Celerons, P2s, and ordinary P3s (even AMD Athlons and some cheap RISC CPUs like MIPS). These went into the far more numerous low end servers, web servers and the like, and probably make up the bulk of the units, say 3.5M. I believe fewer than 1 million Xeons went into servers. Of the 1MB and 2MB cache Xeons, a few hundred thousand went into servers; that keeps ASPs in the $1200 range, yet makes up only $200 million or so. That leaves about 100K or so for the large SMP servers, say $400 to $600 million or so. Even if I am off by 100% and the numbers are double these, it is a far cry from your 25% or $15 billion.
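Putting rough numbers on that split (these are the loose estimates above, with an illustrative midpoint where I gave a range, so treat it as a sketch rather than data):

# Rough estimate of Xeon revenue that actually lands in servers.
small_xeon_revenue = 200e6   # "a few hundred thousand" 1-2MB Xeons at ~$1200 ASP
big_smp_revenue    = 500e6   # midpoint of the $400-600M guess for large SMP boxes

server_xeon_revenue = small_xeon_revenue + big_smp_revenue   # ~$0.7B
even_if_doubled     = 2 * server_xeon_revenue                # ~$1.4B, "off by 100%"

print(f"~${server_xeon_revenue/1e9:.1f}B, or ~${even_if_doubled/1e9:.1f}B if doubled, "
      f"versus the $15B claim")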

What is different is that many of the servers counted by IDC are very small web servers and the like. 3 million office-PC-class boxes would cost only $15 billion yet hold only $150 million in CPUs. 750K entry level servers would cost another $15 billion and hold $250 million in CPUs. 200K mid level servers would cost another $15 billion and have $350 million in CPUs. The last 50K of large servers cost $15 billion and have CPUs that cost $450 million. That totals $60 billion in servers and $1.2 billion in CPU chips. The last two tiers depend on whether you place large servers as starting at 5 CPUs or at 9 CPUs (maximum). I assume the entry-to-mid boundary is at 3 CPUs (duals are entry level).
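Here is the same breakdown as a quick sanity check in Python; the unit counts and dollar figures are the rough estimates above, nothing more:

# Rough server-market breakdown from the estimates above:
# tier -> (units, total system revenue, total CPU revenue).
tiers = {
    "PC-class / web": (3_000_000, 15e9, 150e6),
    "entry (2-way)":  (  750_000, 15e9, 250e6),
    "mid (4-way)":    (  200_000, 15e9, 350e6),
    "large (8-way+)": (   50_000, 15e9, 450e6),
}

units    = sum(u for u, _, _ in tiers.values())   # 4.0M servers
revenue  = sum(r for _, r, _ in tiers.values())   # $60B
cpu_cost = sum(c for _, _, c in tiers.values())   # $1.2B

print(f"{units/1e6:.1f}M units, ${revenue/1e9:.0f}B in systems, ${cpu_cost/1e9:.1f}B in CPUs")
print(f"CPU share of system revenue: {cpu_cost/revenue:.1%}")   # ~2%
print(f"average system price: ${revenue/units:,.0f}")           # ~$15K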

<<< The 2% figure that you relentlessly push is therefore not possible. It does not pass the sanity test. >>>

Your assumption that all 4 million must be better than 2-way servers is not possible. I push only that 2% is about right for mid-size and larger servers. PCs are higher, but far below 25% for the average unit. Given IDC's figures of 4 million units and $60 billion, the average server system is $15K. That is below the average 2-way server. So I figure that a majority are very small 1-way (PC-class) web, print, SAN, and similar servers. These usually sell for $1-2K, as 1U or PC-style boxes built around Athlons, P3s, Celerons, or RISC chips like MIPS. High end or power PCs make up the rest of the small servers. My home system could easily handle a small business of 10 to 50 employees (depending on the type of business).

Entry level servers are 2-way and usually use P3s (and now Athlon MPs). They are usually a cut above high end PCs and can work like them, if desired. Server cases for them go up to 15 bays or so. These are the workhorses for companies of 50 to 250 employees and the division systems for larger companies. They are also configured as workstations, so they can cover both market areas. They go for $3-25K depending on configuration.

Mid range systems are 4-way, and here we start seeing Xeons and mid range RISC chips. These go for $15-250K. High end systems start at 8-way and go up from there. They start at $100K and go up (double that for 24x7 operation and double again for a scalable cluster).
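If it helps to see the tiers in one place, here is a small sketch of the cut I am using (the CPU-count boundaries and price ranges are the ones above; the function name and the exact mid/high boundary are just for illustration, since I gave 5 or 9 CPUs as possible cut points for "large"):

# Classify a server by socket count, using the tier boundaries described above.
def server_tier(cpus: int) -> str:
    if cpus <= 1:
        return "small / PC-class (1-way, ~$1-2K)"
    if cpus <= 2:
        return "entry level (2-way, ~$3-25K)"
    if cpus <= 7:
        # mid range is typically 4-way; high end starts at 8-way here
        return "mid range (typically 4-way, ~$15-250K)"
    return "high end (8-way and up, $100K+; x2 for 24x7, x2 again for scalable clusters)"

for n in (1, 2, 4, 8, 32):
    print(n, "->", server_tier(n))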

If your definitions are different, how would you separate the levels?

Pete