Technology Stocks : Intel Corporation (INTC)

To: Mary Cluney who wrote (138793)7/7/2001 9:59:39 PM
From: pgerassi  Read Replies (1) of 186894
 
Dear Mary:

Have you ever written such an application? Did you install a system for such an application? Did you administer such a system? Did you purchase server hardware?

The quickest servers for RDBMS applications are those with SSD (Solid State Disk): essentially banks of DRAM behind a disk-drive interface, with access times measured in microseconds (0.000001 sec). These large companies run hundreds of server systems, with databases measured in TB (trillions of bytes). The rule of thumb is 1 byte of memory for every 10 bytes in the database. Thus, these servers carry on the order of 100GB of memory, either as main memory or as the larger amount of somewhat cheaper memory in SSD. Even at today's depressed prices, 2 to 4TB of SSD (databases must have room for indexes and work areas) costs around $1M to $4M. The CPUs, whether PA-RISC, Power4, or Xeon, cost far less than that. You keep citing the bare-bones price of the servers and ignore the extra cost of the components needed to make them actually work.
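The 1:10 memory-to-database rule of thumb above can be sketched as a few lines of arithmetic. The function name and the 1 TB example are mine, for illustration; the ratio is the post's own figure.

```python
# Sketch of the sizing rule of thumb: 1 byte of memory (main memory
# or DRAM-based SSD) for every 10 bytes of database.

def server_memory_estimate(db_bytes, ratio=10):
    """Memory needed, in bytes, for a database of db_bytes at a
    1-to-`ratio` memory-to-data ratio."""
    return db_bytes // ratio

TB = 10**12
GB = 10**9

db_size = 1 * TB                         # a 1 TB database
memory = server_memory_estimate(db_size)
print(memory // GB, "GB of memory")      # prints: 100 GB of memory
```

At 2 to 4 TB of DRAM-based SSD, the memory alone dominates the hardware bill, which is the point being made about CPU cost.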

A warehousing server (a typical application in any large manufacturing operation) needs a database of 100 to 300GB. The twin HP PA-RISC servers (a main server plus a hot backup / testing server) cost $1M; each had 20GB of memory and four PA-RISC chips ($12,000 of CPUs per box, $24,000 total). The base price for that platform was $100K. The OS cost about $500K. Oracle cost another $500K (2 x 250 concurrent users). The RF terminals, CRTs, terminal servers, scanners, printers, etc., cost another $1M. The application software added $10M. None of these prices included maintenance or the communication links to the business mainframes and factory systems. It was sold to the company as a turnkey system for $14M: $14M in revenue with only $24K of CPU chips, $3M in actual hardware, and $4M by my method of counting. In full-scale testing, every RF terminal and CRT had a measured response time between 0.2 and 0.6 seconds.
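Totaling the line items above makes the point explicit: the CPU silicon is a rounding error in the turnkey price. The figures below are the ones quoted in the paragraph, simply added up.

```python
# Line-item costs from the warehousing-server example, totaled to show
# how small the CPU share of the $14M turnkey price is.
costs = {
    "twin PA-RISC servers (hardware)": 1_000_000,
    "operating system":                  500_000,
    "Oracle licenses (2 x 250 users)":   500_000,
    "terminals, scanners, printers":   1_000_000,
    "application software":           10_000_000,
}
cpu_chips = 24_000        # 8 PA-RISC chips across both servers
turnkey_price = 14_000_000

component_total = sum(costs.values())
print(f"components: ${component_total:,}")              # $13,000,000
print(f"CPU share:  {cpu_chips / turnkey_price:.2%}")   # 0.17%
```

Less than a fifth of one percent of the deal was CPU chips, which is why PC-market percentages do not transfer to this market.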

Warehousing in a manufacturing or distribution environment is, as you say, a real-time system. Any database, when set up correctly, can deliver thousands of transactions per second. You build your applications so that the original and most-used scans retrieve as few records as possible. On today's PCs, even full index scans take milliseconds, even with dozens of concurrent users (full scans of large tables are to be avoided, but that is true even in big data centers).
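The indexed-retrieval point above can be sketched with Python's bundled sqlite3: a lookup through an index touches only the matching rows, while an unindexed query would scan the whole table. The table and column names here are invented for illustration.

```python
# Minimal sketch: an indexed lookup on a warehouse-style table.
# SQLite's query planner should report that it uses the index rather
# than scanning all 10,000 rows.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE inventory (sku TEXT, qty INTEGER)")
con.executemany("INSERT INTO inventory VALUES (?, ?)",
                ((f"SKU{i:05d}", i % 50) for i in range(10_000)))
con.execute("CREATE INDEX idx_sku ON inventory (sku)")

# Ask the planner how it will execute the indexed lookup.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT qty FROM inventory WHERE sku = 'SKU00042'").fetchall()
print(plan)   # the plan text mentions idx_sku, not a full scan

row = con.execute(
    "SELECT qty FROM inventory WHERE sku = 'SKU00042'").fetchone()
print(row)    # (42,)
```

Keeping the hot queries on indexed columns is what makes millisecond responses with dozens of concurrent users routine, even on commodity hardware.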

Why are big servers still used? Mostly inertia. The value of the existing software is so high, and rewriting it so costly in time, money, and trust (new software is never as trustworthy as tried-and-true code in production), that most businesses simply upgrade the hardware when response time falters (grows beyond some limit). Another reason is the cost of, and the short track record of, DRDBMS (Distributed RDBMS) systems.

But look at the companies that do real-time, reliable serving (AT&T, Lucent, Northern Telecom, etc.): they build lots of clustered, small, fault-tolerant servers within a higher-order topology, like the ones in my previous message to you. Going down is not an option for those kinds of systems, and they handle hundreds of calls every second and deliver a guaranteed number of packets between each pair of users every second, with no interruptions allowed. Most of the work is done by highly specialized custom chips, embedded CPUs, and programmable logic. Only a small part is done by the main CPUs, but that part is key to the operation: keeping histories, metering usage, journaling, and other housekeeping tasks.

I am not saying that everything can be done with small servers. I am saying that the main-CPU portion of a big-ticket server is a lot smaller than it appears at first glance. You cannot take your PC experience and apply the same percentages to this market. It is just like the home-building market: you see these million-dollar fancy homes and assume their builders are making lots of money. You would be wrong. I know builders at both ends of the spectrum, and they all agree that the small starter-home builders make the most money. They do not get the big payouts, but a few thousand here and a few thousand there beats one or two $10K payouts a quarter (rich people want real value for their money, and they know what things cost to build; that is how they got rich). Volume makes up for earning only nickels and dimes when you earn them millions or billions of times a day.

Pete