Buck's take on the FC market - Part I
Why do I feel like I did the first time I made a pitch to a customer?
At the request of DownSouth, I've done a Q&D look at what I think of the Fibre Channel / SAN market. In that market are eight companies that are publicly traded: ANCR, BRCD, CRDS, EMLX, JNIC, QLGC, VIXL, and ZOOX. The natural division of these into sub-markets is:

HBA mfrs.: EMLX, JNIC, QLGC
Fabric mfrs.: ANCR, BRCD, VIXL, and ZOOX
Router mfr.: CRDS
Explanation of terms:

HBA = Host Bus Adapter; equivalent to a NIC in a LAN; attaches a host platform to the SAN.
Fabric = a FC switch or hub; the core of a SAN; what HBAs typically plug into (yes, I know "fabric" is a FC standard term for switches only, but I can't think of a better word to use).
Router = a FC inter-connect device; allows native SCSI hosts or devices to connect to a SAN; plugs into a fabric device.
I believe that FC is a discontinuous innovation in and of itself for the SCSI market. FC "beats" SCSI by doing the following:

- longer distances between hosts and peripherals (10km for FC vs. 25m for SCSI)
- higher bandwidth (100MBps vs. 80MBps, plus FC is introducing 200MBps, w/ 400MBps in the standard, and upward)
- more addressability (~2MM devices per FC network vs. 15 per SCSI bus)
- better sharing of devices (unlimited hosts accessing the same device vs. 1-2)

Therefore, I believe we have the start of a Gorilla Game with FC.
First, though, some FC/SAN 101:
FC is the backbone of Storage Area Networks, or SANs. SAN is a discontinuous innovation to the SCSI storage market. Simply put, SANs break the linkage between a server and its storage. That storage is now on a network, where it can be shared amongst multiple servers. A customer can now scale his servers and storage separately. Prior to FC and SANs, a customer would run out of slots on his server for storage interconnects (remember the 15 devices per SCSI bus, which was rarely reached, due to performance limitations on the bus itself). Once that happened, a new server would be installed, applications would be shifted to the new server, and data would be moved to the new server's storage. With FC and SANs, a customer can simply plug in more storage on the SAN and boost the amount available to the server (and any other server on the SAN). Companies like EMC, Hitachi, IBM, and countless others like this effect, due to the unbelievable growth in storage requirements (but that's another market).
Another advantage of SANs is the ability to share expensive tape libraries amongst multiple servers. Tape is the most economical medium for backup and archival, but prior to SANs, a tape library would be attached to one server. Backing up a server farm required that the server with the tape library attached to it (the backup server) pull all the data from the other servers across the LAN and write it to tape. This makes for long backups, because of the inherent bottleneck at the backup server. Another bottleneck is the LAN itself. A LAN is designed for passing lots of small messages amongst many hosts; backups consist of very LARGE messages flowing in one direction. With a SAN, all servers can access the tape library and perform their own backups, anytime. SANs use native I/O and are designed for those LARGE messages, with low overhead, so you get better throughput on the same piece of fiber. This is called LAN-free backup.
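To put rough numbers on the bottleneck argument, here's a toy calculation. All the throughput figures below are illustrative assumptions of mine, not vendor specs, but they show why the backup window shrinks when each server writes to the shared library itself:

```python
# Toy comparison of LAN-based backup vs. LAN-free (SAN) backup.
# All numbers here are illustrative assumptions, not measured figures.

def lan_backup_hours(total_gb, lan_mbps=12.5):
    """All data funnels through one backup server over the LAN.
    100Mb Ethernet is ~12.5 MB/s theoretical; real-world is lower."""
    return (total_gb * 1024) / lan_mbps / 3600

def san_backup_hours(gb_per_server, san_mbps=100.0):
    """Each server streams its own data to the shared tape library
    over FC at ~100 MB/s, in parallel, so the window is set by the
    biggest single server, not by the farm total."""
    return (gb_per_server * 1024) / san_mbps / 3600

servers = [200, 150, 100, 50]  # GB of data on each of four servers

print(f"LAN backup window: {lan_backup_hours(sum(servers)):.1f} h")
print(f"SAN backup window: {san_backup_hours(max(servers)):.1f} h")
```

With those assumed rates, the centralized LAN backup takes about 11.4 hours while the LAN-free version finishes in about 0.6 — and that's before counting the LAN congestion everyone else feels while the backup runs.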
Next on the backup front is Server-free backup. In this model, a server would send a command to a SAN device to move Data X from Unit A to Unit B. The SAN device would perform the data movement, and report back to the server when it was finished. This frees up the server from read and write overhead, and allows those freed-up cycles to go do some business, hopefully making money.
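In rough terms, the flow looks like the sketch below. This is a toy Python illustration — the class and method names are my own invention, not anyone's actual API; the real mechanism is generally a third-party copy command along the lines of SCSI EXTENDED COPY:

```python
# Hypothetical sketch of server-free backup. All names are invented
# for illustration; in practice this kind of third-party copy is done
# with something like the SCSI EXTENDED COPY command.

class SanCopyEngine:
    """Stands in for a SAN device that can move data between two
    attached storage units on the server's behalf."""

    def copy(self, src_blocks, dest):
        # The data path is disk -> SAN device -> tape; the blocks
        # never cross the requesting server's bus or CPU.
        dest.extend(src_blocks)
        return len(src_blocks)  # completion report back to the server

disk = ["block0", "block1", "block2"]  # Unit A
tape = []                              # Unit B

# The server's entire role: issue one command, then check the status.
engine = SanCopyEngine()
status = engine.copy(disk, tape)
print(f"moved {status} blocks; tape now holds {len(tape)}")
```

The point of the sketch is where the work happens: the server's involvement is one command and one status check, while the bulk data movement runs entirely inside the SAN.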
Therefore, I believe that we have the killer app for SANs in server-free backup, and other server-free data movement applications. More likely, it will be called server-freeING, as it does allow a server to be freed up from the tedious task of moving bulk data around, which is pure overhead and doesn't make a company any money. (That data could save the entire company if a disaster struck, but that's another market.)
Now, to the market itself. The three sub-categories that I see are HBAs, fabrics, and routers. Other SAN components, and hence potential games, are hosts and devices. I am not addressing them here, simply because they are too large to quantify in one post. I will try to address them next weekend. Besides, you know them already as EMC, IBM, HP, SUNW, etc., etc., etc...all the platform vendors and the storage vendors. I believe that we have the potential for mini-tornados within the FC space itself.
I do not believe that the HBA market will produce a Gorilla, but will produce a King and/or a bunch of Princes. HBAs are to SANs as Ethernet cards are to LANs. It is hard to differentiate or add value in that space, because of the need to adhere to standards. Barriers to entry are not high...FC chips are readily available, and the standard is published. OTOH, engineers who write the drivers for the HBAs are rare. This, to me, is a non-proprietary open architecture.
I do believe that the FC fabric market will produce either a Gorilla or a Very Strong King. I'm wishy-washy on this for a few reasons, not least of which is my inexperience at GGing. I take most of my cues from what I have seen in the LAN and WAN markets over the last 10 years, and that means switching MUST be standardized to sell to the customer. Switches must inter-operate to be accepted by the market. To me, that means they must have a non-proprietary open architecture. However, by building software into the switch that forces other mfrs. to adapt to it because their potential customers demand it, it then becomes proprietary. This is a function of market share, which is essentially doing what your customer wants you to do. On the washy side, though, switches are switches...the fastest one is the preferred one at the core of a network. Value-added software can only slow down a switch...if a switch is looking at more than the source and destination of a frame while moving that frame from point A to point B, it's gonna be slower. Does that matter today? Tomorrow? It depends on how fast this SAN stuff ramps. This is where I see the BRCD v. ANCR battle right now.

VIXL and ZOOX are starting from a bad post position. They both specialize in hubs, which are, yes, analogous to Ethernet hubs. They share bandwidth amongst devices, and don't have the performance of switches. Both cos. have announced switch plans, with VIXL actually delivering on them today. ZOOX has a new switching hub called Capellix that is making a lot of noise today, and I expect ZOOX to be the bigger of the two.
I see now that I really need to do the financial number crunching on this, a task at which I am pitifully inept. Fortunately, I've got a good guide in TRFM, so watch this space.
I have broken my post in two; I'll have the router sub-market out soon. To me, that's the exciting one, so stay tuned.
buck, who feels quite inadequate at this point... |