Server I/O Report
Didn't get to talk with everybody. Did manage to cop several sessions before getting busted, and was able to glean some pertinent information and a little bit of knowledge. Some of it may appear later in relevant context.
Tech speak was rife, as the majority of the 300 or so conference attendees were engineers or represented engineering at companies whose futures hinge upon this future server I/O architecture. Engineer speak is great if you're an engineer, but it's another language. What does it mean for the individual investor? To quote one of the conference attendees, "We want to insure legacy systems that are shipping now." Well, this shareholder wants to insure his legacy within the tumult of technological change, and he needs to understand how those changes are going to affect the tomorrows of the companies he has invested in (and he wants to do it on the cheap ;^).
The following is my attempt to explain the information to myself, in terms that I can grasp in relation to my investments. I am no engineer, so all I am looking for is any remotely decipherable writing on the wall. If I exhibit any glaring misconceptions or oversimplify to the point of fallacy, I stand to be corrected by others in the know.
First off, I understand that Fibre Channel Arbitrated Loop (FCAL) is a dead end as far as both NGIO and Future I/O are concerned, because FCAL is a bus architecture and the bus is a dead end in server architecture. All the engineers I spoke with said so ("but the immediate market is FCAL, it's here now and it's spiffy and it won't go away soon" is what all the marketing reps say). The engineers think it can't go away soon enough. They all shuddered when I suggested putting servers on a loop: "You don't want to do that!" Hey, what do I know?
This is where PCWeek's "Adaptec Axes Fibre Channel" chop piece went awry. ( zdnet.com ) As opposed to FCAL, Fibre Channel Switching (FCSW) is very much alive. Modified with the future architecture's "primitives," FCSW will be a prevalent networked-storage and cluster interconnect (and PCI-X will remain backwards compatible through bridging primitives on either NGIO or Future I/O), according to the presentations of Compaq's Kenneth Jansen and Intel's Mitch Shults. It's true that both Brocade and Vixel are beginning preliminary R&D with Intel's NGIO primitives for FCSW, but others will be commencing work soon enough with both Future I/O and NGIO primitives as they develop. They have to. Of course, the industry would like to see one standard, and one may evolve, but as Intel's Mitch Shults joked, "Remember what won out in the VESA-EISA battle - ISA." Mitch and Kenneth were real cards and real smart to boot.
Predictable FCAL-versus-SCSI performance comparison data were presented, showing that FC achieved performance gains over SCSI only as more drives were added to the RAID. DUH! You can see where ADPT is setting its scuzzy sights. With the proliferation of SCSI RAIDs, CrossRoads' FC/SCSI bridge was predominant in the exhibits and may do well in this intermediate FCAL market.
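To see why that "DUH!" was coming, here's my own back-of-the-envelope. The numbers are ballpark figures I've assumed (1 Gb/s FC at roughly 100 MB/s, Ultra2 Wide SCSI at 80 MB/s, a single drive sustaining maybe 20 MB/s), not data from the presentations:

# Ballpark circa-1999 numbers (mine, not from the presentations).
# A single drive can't saturate either interconnect, so FC's fatter
# pipe only shows up once enough spindles are ganged behind it.
fc_link = 100e6      # 1 Gb/s Fibre Channel ~= 100 MB/s
scsi_bus = 80e6      # Ultra2 Wide SCSI = 80 MB/s
drive_rate = 20e6    # one drive's sustained rate, ~20 MB/s

for n_drives in (1, 2, 4, 8):
    fc = min(n_drives * drive_rate, fc_link)
    scsi = min(n_drives * drive_rate, scsi_bus)
    print(f"{n_drives} drives: FC {fc / 1e6:.0f} MB/s vs SCSI {scsi / 1e6:.0f} MB/s")

One or two drives and the interconnect doesn't matter; only past the SCSI bus ceiling does FC pull ahead.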
About Future Server Architecture:
The "future" architecture is actually an adaptation of old mainframe switched channel technology, as is the paradigm of Storage Area Networks. Using Channels instead of a Bus frees up the CPU from managing I/O traffic because the flow is controlled by a switch-on-a-chip(s), perhaps an ASIC(s). This switch(es) communicates with a 4-pin (as opposed to 100-pin PCI) NGIO/FIO compliant device and manages the I/O into and out of Memory (SMP, ccNUMA). This allows the server processor(s) to utilize the advances in high bandwidth technology, something they don't do now. The pipe is fatter than the pump, the raw data stream would blast a hole through current silicon. They call this future server plumbing to control and increase the rate of flow a "Fabric" and although it may be based on the Fibre Channel Layer (channels), as is Gigabit Ethernet, it will be as different from the FC Switch (FCSW) Fabric as GE networking is from FC, but the underlying foundation is common. To paraphrase Sun's ad, "The server will BE the network." in the future architecture.
It's the rules that are being debated by the Intel NGIO camp and the IBM/CPQ/HP Future I/O camp, and the actualization of whatever new architecture emerges will depend on the outcomes of those debates. If rapid standardization can occur, expect to see rapid deployment, as the base technology is already fundamentally familiar. By some estimates presented, 2002-2003 could see the first server deployments with compliant peripherals. There is a host of issues expected to be addressed and resolved incrementally through successive generations. In any event, this move to a new architecture bodes well for the server companies, because there is a need for faster servers to utilize the bandwidth being built now. 60 GB/s DWDM is a bit more than any current server can handle.
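For scale, here's my own back-of-the-envelope on "the pipe is fatter than the pump," assuming a 64-bit/66 MHz PCI bus (about the fastest common server bus today) against that 60 GB/s figure. The numbers are mine, not the conference's:

# My arithmetic, not conference data: how badly would a 60 GB/s DWDM
# pipe swamp a 64-bit/66 MHz PCI bus?
pci_peak = 8 * 66e6    # 8 bytes wide * 66 MHz = 528 MB/s theoretical peak
dwdm_pipe = 60e9       # the 60 GB/s figure quoted above
print(f"PCI 64/66 peak: {pci_peak / 1e6:.0f} MB/s")
print(f"pipe/pump ratio: ~{dwdm_pipe / pci_peak:.0f}x")

Even at its theoretical peak, the fastest PCI pump today moves about half a gigabyte a second; that pipe is over a hundred times fatter.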
For FC companies aiming to serve these future servers' future pipes, it's switch to Fabric or die.
Some Roaming Impressions:
ANCR displayed the beautifully trim MarkII and MarkII-8. They weren't plugged into anything.
This is in marked contrast to Vixel's operating setup, though Vixel did fork over extra for a private demo room. Walked into the Vixel room and saw their 8-port hub, their 16-port Rapport 4000, and their new 8001 8-port switch (bigger than a tank, must be crammed with STUFF, $8,100 retail) all looped up with some tape drives and a management console. I asked Tom Clark, Director of Technical Marketing, "So how about we bring down your loop and let's watch the LIPs go crazy," which he then promptly did. Their management software has the look and feel of HP's OpenView and is fully compatible with it, but its drill-down capability and graphics are much superior. So he popped a GBIC and I watched the loop crash, locate the port, and reinitialize in about one second. We did it again 'cause I missed the first run-through. Then I had him remove a tape drive unit and watched the loop rediscover and self-heal. Pretty impressive stuff, this software, and could be a seller. They are porting it to CPQ's Insight Manager and Java.
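For anyone who, like me, had only a hazy idea of what all those LIPs were about, here's my own toy rendition of loop initialization. It's a gross simplification (real loops assign specific AL_PA address values, and this is nothing like Vixel's firmware), but it shows the point: any insertion or removal halts the whole loop until every port is re-addressed.

# Toy FC-AL loop initialization (LIP) -- my simplification, not Vixel's firmware.
class Loop:
    def __init__(self, ports):
        self.ports = list(ports)
        self.lip("initial bring-up")

    def lip(self, reason):
        # Sequential numbers stand in for real AL_PA address values.
        print(f"LIP ({reason}): traffic stopped, re-initializing...")
        self.addresses = {port: al_pa for al_pa, port in enumerate(self.ports)}
        print(f"loop up, {len(self.ports)} ports: {self.addresses}")

    def remove(self, port):
        self.ports.remove(port)  # e.g. popping a GBIC or pulling a tape drive
        self.lip(f"{port} removed")

loop = Loop(["switch", "hub", "tape1", "tape2", "console"])
loop.remove("tape1")  # the loop crashes, rediscovers, and self-heals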
Meanwhile, back at the Ancor booth, Tom Raeuchle, VP of Engineering, told me to put my screwdriver away; no way was he going to let me take a look inside the MKII-8. Tom is a smart guy: before Ancor he was Director of Application Products at Cray, by way of a research-scientist post at Honeywell and a doctorate from Cornell's CS program. When I asked him about zoning, he proceeded to go over the talk he was scheduled to give on hard zoning. That was fortunate, as that was the session I had gotten (gracefully) busted prior to. Will expound further later (a toy sketch of the idea follows below). We also talked about public and private loops and the FL_Port. When we talked about future developments in Fibre Channel, he mentioned that 2 Gb/s FC will be seen very soon and that the future for FCSW looked very positive: "Let's just say that I'm not selling any of my stock options." And then he reminded me not to put servers on a loop. Just wish the Ancor products were in action. The Ancor booth was well frequented, though.
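Until I can expound on that hard zoning talk, here's my own toy rendition of the idea -- my simplification, not Ancor's implementation. The gist: the switch enforces, in hardware and per port, which devices may see each other at all, so frames between un-zoned ports are simply dropped.

# Toy hard zoning -- my simplification, not Ancor's implementation.
class ZonedSwitch:
    def __init__(self, zones):
        self.zones = [set(zone) for zone in zones]  # sets of mutually visible ports

    def permitted(self, src, dst):
        return any(src in zone and dst in zone for zone in self.zones)

    def route(self, src, dst, frame):
        if not self.permitted(src, dst):
            return None  # dropped at the port; dst never sees the frame
        return frame

switch = ZonedSwitch(zones=[{"serverA", "raid1"}, {"serverB", "tape1"}])
assert switch.route("serverA", "raid1", "frame") == "frame"
assert switch.route("serverA", "tape1", "frame") is None  # zoned out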
CMNT had ugly baseball hats and glossy literature. There were plenty of hats left when they packed up.
EMLX was tucked away in a corner and I didn't see any hardware.
QLGC was displaying their adapters and had uni-chassied Dual Dells "Clustered" through a Gadzoox Hub, providing server mirroring and failover capabilities. I thought of the admonishment to not put servers on a loop and shuddered.
EMC wasn't exhibiting, but the company was out in force. When I asked Ancor's Tom Raeuchle about the FibreAlliance and EMC, he said "The FibreAlliance is about MIBs," twice. I opined that EMC simply declared its intent to jump-start the market with its MIBs, and the alliance partners said "sure, why not, we'll write to your MIBs if it will get us to market," and Tom said "That's it exactly, that's exactly right." Ain't I smart? It seemed to me that most of the EMC crew were marketing, and the two McData guys were the engineers. EMC circulated a lot.
LSI didn't have any Fibre Channel cards displayed yet, just SCSI, but I spoke with one of their reps (I lost his card) about their plans for FC. They are spinning their own FC ASIC, following on the heels of their Symbios acquisition. I asked him about possible conflict-of-interest concerns, since they fab ASICs both for their own competitors and for rivals competing with one another. His response was that divisions were in place under NDA to protect the technologies; otherwise they couldn't do business. We also talked about public and private loops. More later.
Between stuffing my face with free food and pumping the reps for information at the open exhibit, I managed to visit almost all of the exhibitors; some more observations will follow.
That's all for now.
Douglas