Technology Stocks : Intel Corporation (INTC)


To: Tony Viola who wrote (140686)8/2/2001 9:01:47 AM
From: Dan3  Read Replies (1) | Respond to of 186894
 
Re: They're separate cabinets and considered storage, mass storage. Mass storage is not considered part of a server

Cool. Would you classify them as human resources costs? Cleaning supplies? What, then?

:-)



To: Tony Viola who wrote (140686)8/3/2001 2:33:11 AM
From: pgerassi  Respond to of 186894
 
Dear Tony:

There are two rights sold here. The first is the cost of OS access to each cell board; that is the lower charge and is treated by HP as hardware. The second is the license to run software on each processor; that is the higher charge. In other words, HP charges you to let the server see each cell board and the processors on it, and separately charges you for the right to run anything on a given processor. These charges probably cover the firmware (BIOS, microcode, management drivers, etc.) that gives the processors hot swap and remote dynamic management. HP is simply breaking out a charge that other server OEMs bury in their processor costs. These are costs not in the chips themselves but for everything needed to keep those processors running and able to do the things that are not inherent in the CPUs.
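To make the split concrete, here is a toy illustration in C. The prices and partition sizes are made up out of thin air (HP's real figures are not quoted anywhere in this thread); the only point is that one charge is billed per cell board as hardware and the other per licensed processor as software, on top of the silicon itself.

#include <stdio.h>

/* Hypothetical per-unit prices -- placeholders only, not HP's actual list prices. */
#define CELL_BOARD_ACCESS_FEE   5000.0   /* "hardware" charge: OS may see the cell board  */
#define PER_CPU_LICENSE_FEE    15000.0   /* "software" charge: right to run code on a CPU */

int main(void)
{
    int cell_boards    = 4;              /* example partition: 4 cell boards (made up)     */
    int cpus_per_board = 4;              /* 4 processors per cell board (made up)          */
    int total_cpus     = cell_boards * cpus_per_board;
    int licensed_cpus  = 12;             /* only 12 of the 16 CPUs licensed to run code    */

    double hw_charge = cell_boards * CELL_BOARD_ACCESS_FEE;
    double sw_charge = licensed_cpus * PER_CPU_LICENSE_FEE;

    printf("CPUs visible to the OS: %d\n", total_cpus);
    printf("cell board access, billed as hardware:  $%.2f\n", hw_charge);
    printf("per-CPU run licenses, billed as software: $%.2f\n", sw_charge);
    printf("total on top of the chips themselves:   $%.2f\n", hw_charge + sw_charge);
    return 0;
}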

As to the disks, when will you get it through your head that disk has been part of the server since the beginning of time? It was used as part of main memory (what do you think the term virtual memory refers to?). Using disk as an extension of main memory has occurred all throughout history. Even Windows (all versions) uses swap files on disk to extend memory. On all 386 and later CPUs, although physical main memory cannot go beyond 4GB, virtual (disk-backed) memory can be much larger (x86 puts it at 64TB, IIRC). Thus disk is the same as adding non-volatile memory to a server.
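For what it's worth, here is a minimal POSIX C sketch of that "disk as more memory" idea: map a file into the address space and the kernel pages it between RAM and disk on demand, exactly like a swap or page file does. The file name and size are arbitrary; nothing here is specific to HP, Windows, or any particular server.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 1 << 20;   /* 1 MB of "memory" that really lives on disk */

    int fd = open("backing.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, (off_t)len) < 0) { perror("backing file"); return 1; }

    /* Map the file; the kernel pages it in and out of RAM on demand,
       just as it does with a swap file or page file. */
    char *mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(mem, "this byte range is backed by the disk, not by RAM alone");
    printf("%s\n", mem);

    munmap(mem, len);
    close(fd);
    return 0;
}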

You do not seem to have any problem with cache memory or solid state memory. And HP, IBM, Sun, and the various Intel OEMs all agree that servers do include hardware external to the main enclosure. Heck, in the old days the CPU took up multiple cabinets and main core memory had its own cabinet. In the ancient IBM 1400 series mainframes, the main CPU was the 1401, the card read-punch was the 1402, the line printer was the 1403, the console inquiry station was the 1407, and the rotating disk was the 1405. They were all considered part of the main computer and were sold by IBM that way. Ditto for the IBM 360 series, the 370 series, the DEC PDP-11 series, the DECsystem-10 and DECsystem-20, and the DEC VAX series.

To this day the option lists include all of these peripherals as parts of and extensions to the server, just like CPUs and memory. Did you see that the HP9000 Superdome also includes external I/O cabinets for additional PCI cards? Whether the disk is connected through an IDE controller on the main backplane, a SCSI controller on a PCI card, or a Fibre Channel controller on a PCI card, whether plain or RAID, cached or not, does not matter. It is all considered part of the server. Whether it sits in the main enclosure or another enclosure makes no difference. The only time it may not be is when it is connected over a network, and even then it is debatable whether that is one server unit or two. Also, some SCSI disk drives can have up to 7 slave drives behind them (the drive that connects directly to the controller is the master; this is where the logical units described in the SCSI standards since SCSI-1 come in).
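Since the master/slave and logical-unit business can be hard to picture, here is a rough C sketch of narrow SCSI addressing as I understand it: 8 IDs on the bus, the host adapter conventionally sitting at ID 7, and each target able to answer for several logical units behind it. It is only an illustration, not tied to any particular controller or driver.

#include <stdio.h>

/* Rough sketch of narrow (8-bit) SCSI addressing: 8 IDs on the bus, the host
   adapter conventionally at ID 7, and each target able to expose several
   logical units (LUNs) behind it -- the "logical units" mentioned above. */
int main(void)
{
    const int host_adapter_id = 7;   /* the controller takes one of the 8 bus IDs */
    const int luns_per_target = 8;   /* SCSI-1/SCSI-2 allow LUNs 0-7 per target   */

    for (int id = 0; id < 8; id++) {
        if (id == host_adapter_id) {
            printf("ID %d: host adapter (controller)\n", id);
            continue;
        }
        printf("ID %d: target device, may answer for LUNs 0-%d\n",
               id, luns_per_target - 1);
    }
    return 0;
}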

It gets especially confusing with clusters. Some see a cluster as one unit and others see it as many individual units. HP, IBM, Sun, and Compaq call it consolidation when many different computers are brought into one of these scalable servers, so they seem to think that a cluster of N nodes is one server no matter how it is architected. You manage it as one partitionable unit, and it essentially acts as a distributed single-server system. That goes against your view that it is N compute units, X storage units, Y communication units, Z entry units, and P print units. Even the TPC seems to count these MPP systems as one unit. Even when they are composed of many dissimilar nodes (a heterogeneous system), they are still called one server.

I think you are overruled by the majority.

Pete