Strategies & Market Trends : Gorilla and King Portfolio Candidates

To: saukriver who wrote (51039)4/23/2002 3:46:33 PM
From: paul_philp  Read Replies (1) of 54805
 
Saukriver,

Let me see if I can answer your questions about blade servers. A blade server is simply a motherboard with at least one CPU, some local disk and some memory. Blade servers have no external chassis. They are mounted into cabinet-sized units called racks. A rack can handle 8 - 64 blades. The original driving force behind rack-based servers was floor space and power. During the boom, space in the data centre became as expensive as NYC real estate, and a NYC solution was needed - build up. Thus we entered the wacky world of MIPS/sq. ft.

The rack-mounted server has also cut down on power usage. By removing the fan and the power supply from each server, total power consumption dropped: all the servers in the rack share a power supply and a centralized cooling system. Think of it as a computer without da noise. A significant number of large web-sites are run on rack-mounted servers today, but the role of rack-mounted servers is still pretty much limited to front-end web serving.

So far this is little more than incremental marketing for a mainstream product. Things get interesting when you add some of the newer technologies being commercialized as you read this:

- Intel Itanium processors (a commodity 64-bit server chip)
- Linux (a Unix which is good enough AND cheap)
- Networked storage (all our blades can share a file system)
- InfiniBand (a high-speed external bus to connect the blades)
- Clustering software (share the workload between all the blades)
- 10 Gigabit Ethernet and 10 Gigabit Fibre Channel

Add all these ingredients together and something quite magical happens - you bust up a core metaphor which lies at the heart of IT technology. Up until now there has always been a direct connection between the task to be performed by the 'computer' and the physical machine(s) which run that task. No more. With the emerging blade architecture and the newer interconnect technologies you can strap a bunch of (dirt cheap) blades together and have a highly reliable, highly scalable, massively distributed server resource.
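To make that decoupling concrete, here is a toy sketch in Python - mine, not taken from any real clustering product. Tasks go into a shared queue and whichever "blade" is free picks up the next one; the caller never names a machine, so adding or removing blades changes capacity, not correctness:

```python
# Toy illustration: tasks are queued against a pool of interchangeable
# "blades" (threads here) rather than bound to one physical machine.
import queue
import threading

def run_cluster(num_blades, tasks):
    """Dispatch tasks to whichever blade is free; callers never pick a machine."""
    work = queue.Queue()
    results = []
    lock = threading.Lock()

    def blade(blade_id):
        # Each blade drains the shared queue until no work remains.
        while True:
            try:
                task = work.get_nowait()
            except queue.Empty:
                return
            with lock:
                results.append((blade_id, task()))

    for t in tasks:
        work.put(t)
    workers = [threading.Thread(target=blade, args=(i,)) for i in range(num_blades)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results

# Four blades, ten tasks: any blade can serve any task.
out = run_cluster(4, [lambda i=i: i * i for i in range(10)])
```

The same results come back whether the pool has 4 blades or 64 - which is exactly the point: capacity becomes a dial, not a procurement decision.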

Now computing resources can be doled out on an as-needed basis. There is no longer a direct connection between the user, the app and the hardware. Exactly the same thing is happening in storage. As a matter of fact, the blade server itself is much like the disk drive: a high-volume commodity item.

One interesting question is what happens to the software companies when this happens. Almost all software licences depend on counting the number of CPUs or the number of users. In the massively distributed world this may not make any sense. Do all software companies become service providers, renting out their capability on an as-needed basis? Maybe.
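A rough back-of-the-envelope run shows why per-CPU counting breaks down. All the numbers below are hypothetical, chosen only to illustrate the shape of the problem: a job that once ran full-time on a few big licensed CPUs might instead burst across dozens of cheap blades for minutes a day.

```python
# Hypothetical pricing, purely illustrative - not any vendor's real rates.
PER_CPU_LICENCE = 10_000   # $/CPU/year under the old counting model
USAGE_RATE = 0.50          # $/CPU-hour under an assumed metered model

# Old model: 4 dedicated CPUs, licensed around the clock all year.
old_cost = 4 * PER_CPU_LICENCE

# Blade model: the same workload bursts across 64 blades,
# but each blade does only about half an hour of real work per day.
cpu_hours = 64 * 0.5 * 365
metered_cost = cpu_hours * USAGE_RATE

print(old_cost, metered_cost)
```

Sixteen times the nominal CPU count, yet a fraction of the licence bill - counting sockets simply measures the wrong thing once the hardware is a shared pool.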

That is a quick overview of my thinking.

Paul