Old McDonald had a data farm, EI EI O... This Steve D guy must be in high demand. Any time they need a quote, there he is!
Storage Heads for the Network
Federal Computer Week (April 13, 2001)
Originally published: April 9, 2001.
After years of concerted effort, the data-storage industry has succeeded in generating interest in the notion of networked storage. Networked storage is an umbrella term encompassing several approaches for making stored data accessible to software applications and end users without tethering it directly to the back of a server.
In the view of many vendors and analysts, the federal government is uniquely positioned to advance networked storage technologies by becoming an early adopter of network-attached storage (NAS) appliances and storage-area networks (SANs). With NAS, storage devices are attached directly to a network, not a server. In a SAN, the storage devices and servers are connected by their own dedicated network.
Of course, being an early adopter means contending with certain problems. The network storage solutions the government has used have not been simple cookie-cutter fixes. But they have solved real problems, and the good news is that the solutions are only going to get less expensive and easier to use for those who choose to follow.
It is unlikely that many companies in the private sector have as great a need for highly scalable, accessible and manageable storage solutions as do government agencies and departments.
At NASA's Goddard Space Flight Center, Greenbelt, Md., for example, a two-year study of storage requirements for the Earth and Space Data Computing Division yielded a startling finding: By 2004, the division will need the capacity to store more than 130 million files totaling nearly 5 petabytes of data. One petabyte is 1,024 terabytes and, by some estimations, is equivalent to the text in 100 Libraries of Congress. After 2004, nearly 1.8 terabytes of data will be retrieved from storage at Goddard, and nearly 2.7 terabytes of new data will be added daily.
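Those Goddard figures can be sanity-checked with a few lines of arithmetic. This is a back-of-the-envelope sketch: the 5-petabyte capacity and 2.7-terabyte-per-day growth numbers come from the article, while the binary unit conversion (1 PB = 1,024 TB, as the article states) is the only assumption.

```python
# Back-of-the-envelope check of the Goddard storage projections.
PB_IN_TB = 1024          # 1 petabyte = 1,024 terabytes (binary units)

capacity_pb = 5          # projected total capacity by 2004, in petabytes
daily_new_tb = 2.7       # new data added per day after 2004, in terabytes

capacity_tb = capacity_pb * PB_IN_TB
print(capacity_tb)       # total capacity in terabytes: 5120

# How long would daily growth take to fill that much capacity again?
years_to_double = capacity_tb / daily_new_tb / 365
print(round(years_to_double, 1))   # roughly 5.2 years
```

In other words, at the projected ingest rate the division would accumulate another 5 petabytes in a little over five years, which is why scalability dominates the requirements discussion below.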
Similarly mind-bending numbers turn up in other data-intensive regulatory and operational government activities.
For one market analyst, that leads to one conclusion. "If you don't network storage [now], you will sooner or later," said Steve Duplessie, a senior analyst with the Milford, Mass.-based Enterprise Storage Group Inc. "A networked storage architecture, vs. direct attached [to a server], is better at everything, from scale, to performance, to the real issues - business flexibility and improved management capability. If [an organization's] online requirements are growing, there is no valid reason not to create a storage network."
Networked storage technologies resolve several shortcomings of server-attached storage, according to analysts. Server-attached storage, which includes disk drives installed in the drive bays of general-purpose servers as well as disk arrays tethered to a server by means of a host bus adapter, has been the dominant approach to fielding storage since the advent of distributed computing in the 1970s.
However, the tide is turning in favor of networked storage, according to market researcher IDC, Framingham, Mass. IDC expects networked storage technologies to show a 66 percent compound annual growth rate from 1999 to 2003, accounting for 37 percent of all storage technology sales by 2003. Meanwhile, server-attached storage will decline from a 93 percent market share in 1999 to a 62 percent share by 2003.
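IDC's 66 percent annual rate compounds quickly. A one-line calculation (illustrative only, not IDC's own model) shows what that rate implies over the 1999-2003 window:

```python
# What a 66% compound annual growth rate implies over four years (1999 -> 2003).
cagr = 0.66
years = 4

growth_multiple = (1 + cagr) ** years
print(round(growth_multiple, 1))   # networked storage revenue grows ~7.6x
```

A market segment multiplying roughly 7.6 times in four years, while the incumbent approach slides from 93 to 62 percent share, is the backdrop for the vendor enthusiasm quoted throughout this article.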
Three reasons are frequently cited for the shift to networked storage: scalability, availability and manageability. Server-attached storage is difficult to scale efficiently when more capacity is needed, according to many observers. When one server runs out of space, the only option is to buy a new server with installed disks or another attached array. That increases administrative costs because every additional server must be managed by the IT staff.
Server-attached storage also exposes storage to more downtime and inaccessibility, according to officials with networked storage vendor Procom Technology Inc., Irvine, Calif. They point out that directly attaching storage to a server means that the server and storage must be taken offline to add new disk drives. Additionally, general-purpose servers are complex machines with many potential failure points. Tying the accessibility of mission-critical data to server uptime, they argue, is a bad move.
Finally, many analysts view the proliferation of server-attached storage as inherently unmanageable. Networked storage, by comparison, should be compatible with centralized management approaches similar to those used in monitoring and maintaining complex local- and wide-area networks.
These views are not universal, however. Dean Mericka, regional manager for national defense agency accounts within BMC Software Inc.'s Federal Operations Division, McLean, Va., is not so quick to condemn server-attached storage.
"While government is ahead of commercial accounts in taking a leading role to adopt networked storage technology, they are not pulling out their legacy gear in the process," Mericka said. "They have a lot invested in stovepipe applications with dedicated storage. The addition of networked storage is actually increasing the complexity of storage management, not reducing it."
Mericka thinks that government is becoming an incubator for storage-management software and networked storage. As early adopters of NAS appliances and SANs, government agencies are encountering problems with the technologies - including limitations to their manageability - before commercial adopters do. "Government's storage complexities are unique, but they are where the private sector will eventually be," he said.
Storage Proving Ground?
Portia Dischinger, operations manager at the NASA ADP Consolidation Center at NASA's Marshall Space Flight Center, Huntsville, Ala., doesn't characterize her environment as a proving ground. Technologies are acquired, she said, based on application requirements and after careful consideration of options.
The ADP Consolidation Center was created in 1994 as part of an initiative to consolidate NASA's many mainframe-based administrative data centers. By the time the Office of Management and Budget mandated such consolidations within federal agencies and departments in 1995, NASA's efforts were already well under way, Dischinger noted.
The organization has also led many other agencies in migrating applications off mainframes and onto open-systems platforms. "Nearly all of our administrative applications are being considered for open-systems deployment, and most of the ad hoc query tools we use to access mainframe data are already running on Unix or [Microsoft Corp. Windows] NT servers," she said.
The trend toward deploying applications on open-systems platforms, combined with NASA's experience with stovepipe systems, led to the evaluation and adoption of networked storage technologies before they became "all the rage," Dischinger said.
"We had concerns about proprietary server-attached storage," she said. "We used to buy a server with disk storage from a single vendor, and we sometimes discovered that the storage couldn't be reused when the project ended. Networked storage - NAS and SAN - have a distinct appeal as a way to preserve storage investments against obsolescence."
Dischinger said the center's first foray into SANs was targeted at sharing and making more efficient use of tape storage systems acquired over the years from Storage Technology Corp. (StorageTek), Louisville, Colo. She said the center recently acquired a StorageNet 6000 storage domain manager from StorageTek that enables her to share the expensive tape libraries with numerous open-systems servers.
In the configuration deployed, tape devices are connected via Fibre Channel links to the domain manager, which is a server preloaded with storage-management software. The domain manager presents the attached devices as logical units to servers from Sun Microsystems Inc., Compaq Computer Corp. and IBM Corp. The servers are connected to the domain manager through a Gigabit Ethernet switch. This back-end storage-area network enables data archiving and backups while delivering mainframe-class data protection to open-systems platforms.
In the spring, the center will evaluate a networked storage solution for shared disk-based storage. "We are looking closely at NAS as a less-expensive alternative to SAN for sharing storage," Dischinger said. "We may also consider a SAN, particularly if it can take advantage of a technology like the StorageTek SN 6000, since we use StorageTek Virtual RAID disk systems currently."
Application requirements will help determine the eventual solution. "We originally looked to NAS as a solution because most of our applications require file-system-based storage," Dischinger said. "Even an Oracle database can be handled effectively on NAS appliances. But we cannot say that some of our applications will not require block-level storage access - the kind of access provided on the StorageTek disk arrays or on a SAN."
Noting the growth of data to be stored - more than a terabyte in just a few years - Dischinger said that scalability is critical. "We are watching SANs, especially the development of iSCSI SANs, very closely," she said. "They are attractive from an ease-of-implementation standpoint, and possibly from a cost standpoint." iSCSI (Internet Small Computer Systems Interface) is an evolving SAN technology based on common Ethernet and TCP/IP networking rather than Fibre Channel.
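The core idea behind iSCSI, carrying block-storage read and write commands over ordinary TCP/IP instead of a dedicated Fibre Channel fabric, can be sketched with a toy client/server pair. To be clear, this is not the real iSCSI wire protocol: the 8-byte request framing below is invented purely for illustration of the tunneling concept.

```python
import socket
import struct
import threading

# Toy sketch of iSCSI's core idea: block-storage read commands carried over
# ordinary TCP/IP rather than Fibre Channel. NOT the real iSCSI wire format;
# the request framing (logical block address + block count) is invented.

BLOCK_SIZE = 512
disk = bytearray(BLOCK_SIZE * 8)            # pretend target device: 8 blocks
disk[BLOCK_SIZE:BLOCK_SIZE + 5] = b"hello"  # put recognizable data in block 1

def serve(server):
    """Target side: answer one read request of (LBA, block count)."""
    conn, _ = server.accept()
    lba, count = struct.unpack("!II", conn.recv(8))
    conn.sendall(disk[lba * BLOCK_SIZE:(lba + count) * BLOCK_SIZE])
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))               # ephemeral port on loopback
server.listen(1)
t = threading.Thread(target=serve, args=(server,))
t.start()

# Initiator side: request one block starting at LBA 1, over plain TCP.
client = socket.socket()
client.connect(server.getsockname())
client.sendall(struct.pack("!II", 1, 1))

block = b""
while len(block) < BLOCK_SIZE:              # TCP may deliver in pieces
    chunk = client.recv(BLOCK_SIZE - len(block))
    if not chunk:
        break
    block += chunk

t.join()
client.close()
server.close()
print(block[:5])   # b'hello'
```

Because the transport is plain TCP over Ethernet, the same switches, network cards and administrator skills that run an agency's LAN can in principle carry its storage traffic, which is exactly the cost and staffing appeal the analysts below describe.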
For now, it seems, large-scale SANs may be an option only for government organizations with the need and resources to justify the big investment. "Fibre Channel SANs cost, as a general rule of thumb, seven to 10 times the cost of the storage equipment itself," said Joe Butt, senior analyst with Forrester Research, Cambridge, Mass. "I refer to them as a multiplier of storage costs. Plus, you've got to have a lot of Joes out there to deploy them."
In smaller settings, Butt said, "homogeneous SANs that utilize components from a single vendor may be quite competitive. But if a small- to medium-size organization already has heterogeneous servers and storage platforms, they confront many challenges with current Fibre Channel SANs, including interoperability issues, lack of skills and knowledge, and high deployment costs."
Butt said SAN-adoption prospects may improve when IP-based SANs begin to appear in the market. Those SANs will leverage existing staff skills and knowledge, and widely installed network components, lowering some of the most significant hurdles to SANs.