META Group notes on the IBM Shark
Bottom line: Use it as a competitive hammer. The product will not be ready for prime time until the end of 2000. It is light on cache, does not yet support RAID-1, has a slow PCI bus, and does not yet support FICON, Fibre Channel, FlashCopy, or PPRC.
Bob Gauthier
In late July 1999, IBM introduced its road map for the long-awaited "Shark" Enterprise Storage Server (ESS), its best hope for recapturing significant share of the exploding market for heterogeneous enterprise storage (growing 70% annually through 2004). Much of the ESS hardware foundation will be delivered in 4Q99 (base hardware, ESCON and Ultra SCSI connectivity, and S/390, Unix, NT, and AS/400 attach - see Figure 1 in the Addendum). However, users must wait until 1Q00 and beyond for the more interesting software-driven functionality (Layers 2-5 of META Group's Enterprise Storage Architecture Blueprint - see EDCS Delta 846, 18 Aug 1999), which promises a more competitive platform. While Shark brings IBM into solid hardware competition on scalability (up to 11TB) and performance, FICON (Fibre Connection) and FC-AL (Fibre Channel-Arbitrated Loop) connectivity, the new FlashCopy (IBM's volume-level SnapShot), and Peer-to-Peer Remote Copy (PPRC) will not arrive until 1Q00. Moreover, its virtual implementation will be a homegrown RAID-5 log-structured file architecture not available until mid-2000.
Indeed, we believe Shark will need a 12- to 18-month gestation period before gaining the demonstrated credibility required for pervasive deployment in a global enterprise storage architecture. Mapping IBM's ESS road map against META Group's Enterprise Storage Blueprint, we find that the ESS will be (by mid-2000) strongest in the two physical component layers and in many elements of the first functional component layer (Layer 2's shared services: remote mirroring, point-in-time copy, data movement, replication, protection, etc.). However, we believe IBM will have difficulty delivering robust functionality in the critically important top three layers (adapter, application, and management - Layers 3-5) before 2001.
Longer term (2002-04), as IBM rolls out improved enterprise connectivity, better heterogeneous management disciplines, and a virtual, truly shared data architecture, we project the ESS will have modest success in regaining some of IBM's lost market share. Specifically, of a projected 140PB to be shipped in 2004, we expect IBM to account for about 18%, with EMC still dominant at 60%.
Near term (2H99), we believe Shark will have only a minimal effect on global market share (we project IBM will end 1999 with 12% of the 10,000TB of high-end storage shipped worldwide, versus EMC's 63% and Hitachi/HP's 20% - see EDCS Delta 814, 20 April 1999). This will be due in part to the global 4Q99/1H00 Y2K procurement freeze; in addition, the unavailability of much of Shark's more interesting software functionality until 1H00 will drive users to a wait-and-see approach.
On the pricing front, IBM is hungry to hit the ground running and does not intend to lose (to EMC) on price alone. Specifically, initial bids for Shark have ranged from free trials (30-90 days with no procurement "strings" attached) to $0.25-$0.40/MB usable (versus EMC's $0.50+). We believe the ESS can be an effective tactical (4Q99) competitive threat for large users in search of leverage against an increasingly dominant (and therefore costly) EMC. Longer term (2002-04), we believe IBM's ESS will sell for the traditional PCM discount of 15%+ versus EMC.
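To put those per-megabyte bids in perspective, the following sketch works through the deal-size arithmetic for a hypothetical 5TB usable purchase. The purchase size and the decimal MB-per-TB convention are our assumptions for illustration; only the $/MB figures come from this note.

```python
# Illustrative only: rough deal-size arithmetic for the $/MB-usable bids
# cited above. The 5TB purchase size and the decimal MB-per-TB convention
# are assumptions for the sake of the example; only the $/MB figures come
# from this note.

MB_PER_TB = 1_000_000  # decimal convention typical of storage pricing

def deal_cost(usable_tb: float, price_per_mb: float) -> float:
    """Total procurement cost for a given usable capacity."""
    return usable_tb * MB_PER_TB * price_per_mb

usable_tb = 5.0  # hypothetical mid-size enterprise buy
for vendor, price in [("IBM ESS (low bid)", 0.25),
                      ("IBM ESS (high bid)", 0.40),
                      ("EMC (floor)", 0.50)]:
    print(f"{vendor}: ${deal_cost(usable_tb, price):,.0f}")
```

On such a buy, the bid spread amounts to roughly $0.5M-$1.25M of savings versus EMC's floor - real leverage for a large user, whatever the product's maturity.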
The Shark Needs Some Work
We believe Shark's initial iteration will require quick enhancements in several areas to qualify as a world-class enterprise storage solution:
- RAID-1 mirroring: We believe a significant ESS shortcoming will be its lack of a RAID-1 mirroring option, which IBM will fix by YE00. Our research indicates more than 90% of mission-critical disk subsystems are currently mirrored for protection and performance. As HDA (head-disk assembly) capacities grow exponentially (IBM will double the current 36GB drive by YE00), we question the ability of Shark's striped RAID-5 architecture to deliver the performance required in high-end, mission-critical environments. Indeed, we project that by YE00 IBM will be forced to introduce RAID-1 mirrored ESS capacity to satisfy the performance requirements of users in both 36GB and 72GB configurations.
- Light on cache: Shark's 6GB/11TB maximum cache/storage capacity ratio appears light for the many less-than-cache-friendly workloads users often encounter in mission-critical environments (see the sketch after this list). We urge caution in putting large-capacity (>8TB) ESS configurations into high-volume transaction environments until IBM either demonstrates solid performance (4Q99 scheduled availability of initial performance data) or increases the maximum cache size (to 12GB in mid-2000).
- PCI bus: One of the primary performance bottlenecks of Tarpon (the ESS's predecessor) was its relatively slow (132MB/sec) PCI bus, which has been carried over to Shark. Although the ESS's multiple-PCI-bus architecture has a theoretical 800MB/sec internal bandwidth, we still believe its overhead poses a potentially serious performance issue. We project IBM will largely eliminate the problem with the enhanced PCI bus of its next (4Q00-1Q01) RS/6000 generation.
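The cache concern above is ultimately a ratio argument, which the sketch below makes concrete. The 6GB cache ceiling and the 420GB-11.4TB capacity range come from this note; the sample configuration points are assumptions chosen for illustration.

```python
# A minimal sketch of the cache-to-capacity arithmetic behind the "light
# on cache" concern. The 6GB cache ceiling and the 420GB-11.4TB capacity
# range come from this note; the sample configuration points are assumptions.

CACHE_GB = 6.0   # Shark maximum cache (12GB not due until mid-2000)
GB_PER_TB = 1_000

def cache_ratio_pct(cache_gb: float, capacity_tb: float) -> float:
    """Cache expressed as a percentage of backing storage."""
    return 100.0 * cache_gb / (capacity_tb * GB_PER_TB)

for capacity_tb in (0.42, 4.0, 8.0, 11.4):
    print(f"{capacity_tb:5.2f}TB -> cache covers "
          f"{cache_ratio_pct(CACHE_GB, capacity_tb):.3f}% of capacity")
```

Cache coverage falls from roughly 1.4% of capacity at the minimum configuration to about 0.05% at 11.4TB, which is why we single out the >8TB configurations above.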
________________________________________________________________________
Addendum
Figure 1 - Shark Detail

Hardware
- Two 4-way SMP RS/6000 processors, with standard 132MB/sec PCI bus
- Usable capacity: 420GB-11.4TB
- Cache capacity:
  - 3GB per frame = 6GB total (potentially a performance bottleneck in high-capacity, high-transaction-volume environments)
  - Nonvolatile storage (NVS) write cache: 192MB per frame = 384MB total
- SSA drives: 9GB and 18GB @ 10K rpm; 36GB @ 7.2K rpm (also supports most 7133-20 & -D40 drawers)
- RAID-5 only:
  - Base frame: Striped 6+Parity+Spare
  - Expansion frame: Striped 7+Parity
- Connectivity:
  - Sept. 1999: S/390, Unix, NT, AS/400
  - Sept. 1999: ESCON, Ultra SCSI (4 to 32 intermixed server connection ports)
  - 1Q00: FICON, FC-AL (S/390 only)
  - 2Q00: Native Switched Fibre Fabric (Switched Fabric via OEM [Brocade] switch: 4Q99)
- Performance (see the bandwidth sketch following this figure):
  - Up to 30K cached I/Os/sec
  - 350MB/sec top throughput (185MB/sec sequential throughput)
  - 800MB/sec usable internal bus bandwidth
  - 16MB/sec maximum ESCON bandwidth/channel

Software
- S/390-exclusive Performance Accelerators (1Q00 availability):
  - Parallel Access Volumes (PAVs) and Multiple Allegiance - Enable multiple concurrent I/Os to the same (logical) volume via UCB alias, significantly reducing pend times; work dynamically with Workload Manager (WLM) across multiple systems within a Sysplex.
  - I/O Priority Queuing - Enables the ESS or the user to prioritize I/O queuing (can work with WLM in Goal Mode), reducing "device busy" status and improving performance balance.
- S/390 and Open: FlashCopy (1Q00 availability) - Similar to StorageTek's volume-level SnapShot copy, but not (initially) implemented via log-structured files (we expect a log-structured deployment in 2H00). An "instant" (two-second) point-in-time copy available for backup, data mining, etc.
- S/390 and Open: Peer-to-Peer Remote Copy (PPRC - 1Q00 availability) - Extension of IBM's volume-level PPRC synchronous copy via ESCON links between ESS systems. With less than 5% as many PPRC or XRC installations worldwide as EMC's SRDF, IBM has a significant challenge in proving their competitive worth.
- S/390 only: Extended Remote Copy (XRC - 1Q00 availability) - Equivalent to IBM's existing XRC; open systems access only via XPE copy to 3390-3-type volumes under OS/390.
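As a sanity check on the figure's connectivity and bandwidth numbers, the following sketch estimates how many 16MB/sec ESCON channels it would take to sustain the quoted throughput. All rates and the 32-port maximum come from Figure 1; the ceiling-division logic is the only thing added here.

```python
# Back-of-the-envelope check on the bandwidth numbers in Figure 1: how
# many 16MB/sec ESCON channels it would take to sustain the quoted
# throughput. All rates come from the figure; the ceiling division is
# the only logic added here.

import math

ESCON_MB_PER_SEC = 16   # per-channel ESCON maximum
PORT_MAX = 32           # maximum intermixed server connection ports

for label, rate in [("top throughput", 350), ("sequential", 185)]:
    channels = math.ceil(rate / ESCON_MB_PER_SEC)
    print(f"{label}: {rate}MB/sec needs {channels} ESCON channels "
          f"(of {PORT_MAX} ports maximum)")
```

Saturating the box over ESCON alone would consume 22 of the 32 ports, which underscores why the 1Q00 FICON attach matters for S/390 users.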