Old...but it talks a lot about what McAdams alluded to in each of the conference calls
September 15, 1997 (Vol. 19, Issue 37)
Efficiency and technology fuel centralized processing
Users benefit from increasing power in multiprocessor systems
By Julie Bort
Whatever people in the PC world think about the mainframe's dominance in days gone by, the mainframe did offer IS one asset that has been virtually unknown since the client/server revolution took place a decade ago: centralized control. Today, thanks to powerful multiprocessor servers -- and budgets depleted by the high cost of managing distributed systems -- the pendulum is swinging back. A growing number of companies are consolidating servers to build a network schematic that looks amazingly similar to the mainframe/terminal structure of yesteryear. These projects typically reduce the number of servers in use within an organization by colocating many applications onto fewer servers, with the bulk of those machines physically placed in a central location. Stamford, Conn.'s Gartner Group predicts that by 2001, more than 50 percent of data-center growth will come from data-center server consolidation.
It's not for everyone. Experts say that in some cases a distributed architecture that places servers close to users to improve performance makes sense. But with processing power constantly growing and the cost of telecommunications dropping, many users are centralizing their processing.
What is fueling this trend? Quite simply, the bottom line.
"The fundamental reason to consolidate is total cost of ownership," says Shahin Kahn, director of marketing for data-center and high-performance computing at Sun Microsystems, in Beaverton, Ore.
"Analysis shows that total cost of ownership can be reduced by 30 percent [by consolidating], most by manageability -- even though hardware costs can go up," Kahn says. "It's not like you fundamentally change things. You simply do what you're doing more efficiently."
Dramatically less management is the operational theme here. A distributed system is far more labor-intensive than a centralized model, according to the Gartner Group. For each server, costs involve asset management, capacity planning, support strategies, training, security auditing and control, vendor selection, change management, migration strategies, and service agreements. Reducing the number of servers reduces the time and expense required to perform these tasks. Moreover, the falling cost of telecommunications makes remote access more affordable.
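A rough way to see how that management math works is to treat those per-server tasks as recurring labor costs and compare the totals before and after consolidation. The sketch below is purely illustrative; the cost figures and server counts are hypothetical and are not drawn from the Gartner Group analysis.

```python
# Illustrative only: hypothetical yearly labor cost per server, by task.
PER_SERVER_TASKS = {
    "asset management": 1200,
    "capacity planning": 1500,
    "support strategy": 2000,
    "training": 800,
    "security audit and control": 1800,
    "vendor selection": 500,
    "change management": 2200,
    "migration strategy": 900,
    "service agreements": 1100,
}

def annual_management_cost(server_count: int) -> int:
    """Total yearly management labor for a given number of servers."""
    return server_count * sum(PER_SERVER_TASKS.values())

before = annual_management_cost(33)  # distributed: many small servers
after = annual_management_cost(6)    # consolidated: a handful of big ones

print(f"Distributed: ${before:,}  Consolidated: ${after:,}")
print(f"Management labor saved: {1 - after / before:.0%}")
```

Hardware spending may rise, as Kahn notes, but in this simplified model the recurring labor line shrinks in direct proportion to the server count.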
In most cases, no one intentionally chose to create a chaotic infrastructure. It created itself. Spurred by the low purchase cost of PC-based servers, IT acquisition slowly fell into the budgetary realm of the average department manager. These managers could pay for PC-based systems but were often at a loss to support them; unfortunately, without centralized control, so were the IS folks in the data center who traditionally handled those matters.
"Driven by massive investments in personal computing, networking, and server technology, organizations are rapidly migrating from a hierarchical, monolithic approach to a flat, distributed-computing model," according to the Gartner Group.
"Whether deployed by IS or business units, this new approach is driven by inexpensive technology that masks huge labor costs and risks as production and mission-critical systems are rolled out on this fragile, unmanageable infrastructure."
Centralizing computing power does more than just ease the distributed-system-management burden, advocates say. Having fewer servers reduces the number of vendors a company works with, forces a company to implement -- or simply comply with -- enterprisewide standards on software, makes the enterprise easier to secure, and allows a company to engage in what some call predictive maintenance.
"Typically we look at what we have and frequently consolidate servers in order to get economies of scale," explains Nigel Bufton, vice president of business development of worldwide services at Maynard, Mass.-based Digital, which specializes in outsourcing and consolidation projects for large multinational corporations.
"For predictive maintenance this is critical," Bufton says. "When you consolidate you can have a parts inventory. If a disk is failing you can detect it before it crashes if you are monitoring it continuously. You can have on-site engineers. But, the more sites you have, the more this becomes un feasible."
But centralized processing is not solely the province of Unix servers that scale to dozens of processors. Smaller sites can gain efficiencies by taking advantage of more powerful Intel-based servers, users say. For example, one large medical manufacturing company in the Northwest reduced the number of single-CPU servers it managed by 82 percent when it consolidated onto a handful of NetFrame NF9008 quad Intel-CPU machines running NetWare.
"We targeted 33 servers for consolidation into five NetFrames. Our sixth server (and first system) was for our corporate e-mail, which would have taken up to three or four fully configured smaller servers," explains the company's IS manager. "We consolidated to have fewer servers to manage. We also had a [restructuring], so we had fewer administrators to do the job."
Server consolidation can even benefit companies that have always had a centralized schematic. For instance, Burlington Coat Factory recently replaced eight production Sequent Computer SE60 and SE70 class machines with three Sequent NUMA-Q 2000 systems configured in a cluster. Each of the NUMA, or nonuniform memory access, boxes is equipped with three quad-Pentium boards, or quads, but the NUMA architecture allows the 2000 to connect many more quads to support a total of 252 processors.
"There are a couple of things coming back into vogue: thin clients and centralized computers," says Mike Prince, CIO at Burlington Coat Factory, in Burlington, N.J. "We've always had thin clients and very few servers -- all centralized. In some cases [our previous servers] were as big as you could make them. They couldn't be scaled and they had limited bandwidth to get at memory."
"Now we've got some enormous power," Prince adds. "We think this three-node cluster will last forever. We can scale on the inside as opposed to buying more boxes ... it's a cost-of-ownership decision and a practical tactical move."
THE GOTCHAS. For all the compelling reasons to consolidate servers, ultimate success rests on a variety of circumstances. For instance, each server will run only a single operating system, so standardization will be required. This means that IS managers may have to navigate some pretty heavy political waters if the servers targeted for consolidation reside in departments that have been running competing operating systems, such as NetWare, Windows NT, or Solaris. Someone's got to give, so user buy-in is a must. Of course, consolidation can occur around each operating system. But the more operating systems there are, the higher the management cost, so standardization is best whenever possible.
"The toughest things [were]: the newer operating system, data conversion, and the client perception [of data ownership]," one server- consolidation-project veteran describes. "We mitigated most of it by working closely with the vendors and getting our clients to become partners."
Sun Microsystems' StarFire server is a partial exception to the single-OS rule. The StarFire family includes a feature called Dynamic System Domains, which allows resources such as CPUs, memory, I/O, and interconnects to be partitioned into domains without rebooting the system. Because domains can be isolated and managed separately, they can also run different versions of Solaris. One large system can thus function as many smaller ones, or allow one application to grab more power during short bursts of heavy use, such as seasonal upswings.
However, end-users say one of the ultimate benefits of server consolidation is the productivity gained by upgrading older software and older custom applications.
"The issues we ran into were serious issues," Prince says. "The new computers required the latest version of the database and the OS and all the third-party software. We had been running some very old Oracle applications. We had to basically upgrade these applications and fix them. You could manage all of these with middleware, but it was healthy for us to clean up and bring all of these applications up to date. It increased our efficiency even more."
Another important consideration is which server vendor to choose. This is far more critical when consolidating than it was when using a greater number of servers, vendors say.
"If you have 10 servers with 100 users each on them [and] any one goes down, 10 percent of the population goes down," says Marty Miller, product line manager for server vendor NetFrame Systems, in Milpitas, Calif. "Now if you take the 10 servers and consolidate them into one server and it goes down, 100 percent of the population is down. Reliability becomes much more important as you go to consolidation."
IS managers must therefore limit their search to servers that were designed with the server-consolidation market in mind. In addition to being scalable enough to manage growth inside the box (and with clustering), these servers should have redundant systems, hot-swappable components, and management software that automatically switches between failed and backup units. These features cost more. At the high end, some multiprocessor servers are competing in the mainframe space with price tags that are equally steep. For instance, the StarFire family from Sun Microsystems has entry-level systems priced at more than $800,000. Sequent's NUMA-Q 2000 family ranges in price from about $240,000 to $2 million. IBM's RS/6000 SP, also a heavy player in the consolidation market, sells in about the same price range as the NUMA-Q and StarFire.
Numerous players are also now in the low-end space, thanks to Intel's quad-processor boards. Although these don't offer the scalability or level of availability of the premium-priced Sequent NUMA-Q, they will run NetWare and/or NT. They include the NetFrame NF9008, the Compaq ProLiant 7000, and the ALR Revolution 2XL.
IS managers should closely examine benchmark information on these servers as well as the mean-time-between-failure statistics included on vendor-specification sheets. Most importantly, they should talk to other customer references.
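One way to read those vendor-quoted MTBF figures is to translate them into an expected availability percentage by combining them with an assumed mean time to repair (MTTR). The sketch below is a back-of-the-envelope illustration; the MTBF and MTTR values are invented for comparison and are not taken from any vendor's specification sheet.

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability from mean time between failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

HOURS_PER_YEAR = 24 * 365

# Hypothetical numbers for comparison only.
for label, mtbf, mttr in [
    ("commodity server", 8_000, 8.0),              # no hot-swap parts, longer repairs
    ("consolidation-class server", 50_000, 0.5),   # redundant, hot-swappable components
]:
    a = availability(mtbf, mttr)
    downtime = (1 - a) * HOURS_PER_YEAR
    print(f"{label}: {a:.4%} available, about {downtime:.1f} hours of downtime a year")
```

The point of the exercise is that both a longer MTBF and a shorter MTTR -- the latter being what hot-swappable components and automatic failover buy you -- drive the yearly downtime figure down.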
While researching reliability issues, be careful not to get caught up in an old and rather obsolete debate over symmetric multiprocessing (SMP) vs. massively parallel processing (MPP). For the most part, servers competing in this market have adopted the SMP architecture, which, simply speaking, shares memory and resources among CPUs. The exception is the RS/6000 SP, which uses MPP. Some legacy applications perform better on one type of architecture than the other, but because many users upgrade applications when consolidating, such a shopping criterion is often a moot point. However, the RS/6000 SP has long been a popular choice for server-consolidation projects, particularly in IBM shops, because of its extreme scalability and robustness, vendors and analysts say.
LEAVE WELL ENOUGH ALONE. It's worth mentioning that there are times when consolidating a specific server isn't the best idea. Disaster-recovery planning is one instance in which some redundancy of servers in separate locations is a wise idea. Another reason separate may be better is to improve the performance of a popular application by bringing it closer to the users who rely on it most, or to replicate an application (such as Lotus Notes) to several disparate workgroups. But tread carefully here. Replication is one of the factors that contributes to a fragile, chaotically distributed system in the first place, experts say.
One good bit of advice is to start with the premise that every server will be consolidated and then justify every case for a stand-alone server. That means a detailed plan must be created, approved, and anointed with user buy-in before a single server is unplugged.
"Make sure you understand why you are consolidating," Bufton advises. "If you are doing it for totally cost reasons, you'll probably be disappointed."
"Almost everything done for cost only does not fulfill the dream," Bufton says. "You should really do it for the standardization, the flexibility, and to regain control of the future. Have a clear understanding of the vision and make sure you have it timed well."
Done right, server consolidation is a clear case in which less is more.
Julie Bort is a free-lance writer in Dillon, Colo. She is the author of Building an Extranet, published by John Wiley & Sons.
Top 10 List
So, how do you know if your company can see good consolidation returns? Several indicators are clear warnings that your company should undergo a server-consolidation project. In David Letterman style, here are the Top 10 signs that your company is a prime candidate. It's time to consolidate when:
(1) You start losing track of your servers.
(2) Your hardware is having seizures over scalability.
(3) You have more systems administrators than you have users.
(4) You're running 15 different operating systems.
(5) You don't know if you are in compliance with all of your software licenses.
(6) Capacity planning is a synonym for buying more servers.
(7) Department managers routinely purchase and install their own servers (but leave management to you).
(8) Utilization rates for more than half your servers are in the single digits.
(9) Physical security of every server could only be accomplished by Star Trek-like shields (rather than locking the door to a single room).
(10) And the No. 10 sign that your company is a prime candidate for consolidation: You spend more money on server upkeep than the U.S. government owes.
NT: The consolidation buster?
Even a brief foray into the issue of server consolidation will quickly turn up a large monkey wrench: Windows NT. Debate rages on about whether NT is one of the culprits of distributed-system chaos or the eventual platform of choice.
Microsoft, which generally espouses a distributed architecture of many NT servers, claims NT is scalable enough for nearly all user needs. However, users who are consolidating for large enterprises say today's NT just doesn't cut it.
"We're running a little NT for applications that require NT, but for a database we're going forward with Unix," says Mike Prince, CIO at the Burlington Coat Factory, in Burlington, N.J. "It's not a religious issue at all. Right now, it's the opposite of religion. We have NT in-house and it doesn't begin to scale as well as Unix. By far, the most scalable, most reliable operating systems are all based on Unix."
Microsoft officials, however, assert that NT will suffice for the vast majority of today's consolidation projects.
"Typically what companies are looking for with server consolidation is cost savings," says Jeff Price, Microsoft product manager for Windows NT Server, in Redmond, Wash. "We've almost quadrupled in terms of database [scalability] while cutting price and performance."
"With the advent of 8- and 10-CPU servers, were getting into the very high end," Price adds. "We're probably at a point where we cover 90 percent of customers' consolidation needs. The perception in the market is lagging [behind] the reality."
Price points to Transaction Processing Performance Council results (http://www.tpc.org) in which NT fared well on price-per-transaction comparisons. He also cites the growing selection of multiprocessor NT systems offered by ALR, Digital, Unisys, Compaq, Hewlett-Packard, and IBM.
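The TPC's price/performance metric is essentially total system cost divided by measured transaction throughput. A minimal sketch of that calculation follows; the dollar and throughput figures are invented for illustration and are not actual TPC-C results for any of these vendors.

```python
def price_per_transaction(system_cost: float, tpm: float) -> float:
    """Dollars per transaction-per-minute, in the spirit of TPC price/performance."""
    return system_cost / tpm

# Hypothetical systems -- not real benchmark submissions.
candidates = {
    "4-way NT box": (150_000, 7_500),
    "large Unix SMP": (900_000, 30_000),
}
for name, (cost, tpm) in candidates.items():
    print(f"{name}: ${price_per_transaction(cost, tpm):.0f} per tpm")
```

A lower dollars-per-tpm figure is what Microsoft means by favorable price/performance, even when the absolute throughput of the NT box is smaller.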
Vendors, some say, are banking on NT as a platform of the future but will not concede that the OS is ready for it all today.
"NT is going to the data center. Users will push it into the data center," says Steve Wanless, senior marketing manager at Sequent Computer Systems, in Beaverton, Ore. "Don't get worried about NT's scalability. You have to trust Microsoft to recognize that and solve it. The data center tends to take the same view of NT that they did on Unix."
"Departments will use applications on NT if that's what they need, then turn around and say, 'Hey, this is your jurisdiction.' If [IS] isn't careful, they will be in a constant state of server consolidation," Wanless adds. "So, I ask my customers, 'What are you going to do about an NT infrastructure?'"
Sequent has announced plans to implement NT on its NUMA, or nonuniform memory access, family of machines in 1998.
NT remains the question that should be addressed by all IS managers who want to ensure that server control remains neatly and efficiently within their domains. With the growing number of applications for the OS, it surely can't be ignored.
Copyright (c) InfoWorld Publishing Company 1997