Storage Review:
Thanks DownSouth!
Comments on the June 2000 Gilder Tech Report:
1. "As Dave Hitz, VP of Network Appliance, which invented Network Attached Storage..."
I do not believe that NetApp has ever claimed to be the inventor of NAS. That distinction probably belongs to Auspex, where Dave Hitz, James Law, and several other key founders/execs of NetApp came from.
2. "To achieve the PACS goals, SNI integrates gear and supporting services from such companies as EMC and Sun to Network Appliance and LuxN, and uses bandwidth from Telecosm stars Global Crossing, Metromedia Fiber, and Yipes."
I conclude from this that Storage Networks (SNI) is not a product company, but a services or data hosting company. They are building infrastructure using products from others. This makes them consumers of technology products, not competitors to EMC, NetApp, et al.
3. "Taking a climactic step in the hollowing out of the computer, NetApp's NAS pulled the filer operating sub-system...out of the main server operating system...EMC is now making similar NAS appliances called Celerra."
Celerra is very dissimilar from NetApp's NAS product. Celerra is diskless; it requires an EMC Symmetrix on the back end to store data. Basically, Celerra is a network interface to Symmetrix, not a storage system. It does not replace the native Unix or Windows file system, and it does not provide for secure sharing of a single file between Unix and Windows hosts.
4. "Procom uses the Linux operating system, which is already optimized for multiprocessors. NetApp uses a proprietary OS optimized for serial processors. Procom uses conventional Pentium processors. NetApp still employs leading edge Alpha chips to extort the utmost in execution speeds from a single processor and faces problems in adapting to the next Pentium generation."
GG is assuming that there is an advantage to multiprocessing in NAS OS architecture. Old mainframe war horses know that multiprocessing has very high overhead and delivers about 60% incremental program execution speed for every 100% increase in processor count. The incremental speed goes down with each new processor. This is because the management overhead in a multiprocessor OS is very high.
NetApp's clustered architecture, which currently supports only two filers operating as failover partners, allows the two filers to run independently of one another, giving 100% more throughput than a single filer. When one filer gets into trouble, the other filer takes over, without interruption, that filer's data and communications services.
I believe that clustered architecture will expand to multiple partners in future products.
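The contrast between SMP overhead and independent clustering can be sketched with a toy calculation. The ~60%-per-doubling figure is the rough rule of thumb quoted above, not a NetApp benchmark, and the linear cluster scaling assumes fully independent workloads:

```python
# Toy model: SMP scaling with heavy coordination overhead vs. independent
# clustered filers. Numbers are illustrative assumptions, not measurements.

def smp_throughput(cpus, gain_per_doubling=0.60):
    """Relative throughput of one SMP box, assuming each doubling of the
    CPU count adds only `gain_per_doubling` (diminishing returns).
    Non-power-of-two counts are credited only for completed doublings."""
    throughput = 1.0
    n = 1
    while n * 2 <= cpus:
        throughput *= 1.0 + gain_per_doubling
        n *= 2
    return throughput

def cluster_throughput(nodes):
    """Independent filers serving separate workloads scale linearly."""
    return float(nodes)

for count in (1, 2, 4, 8):
    print(count, round(smp_throughput(count), 3), cluster_throughput(count))
```

Under these assumptions, eight SMP processors deliver roughly 4.1x the throughput of one, while eight independent filers deliver 8x, which is the gist of the argument for clustering over multiprocessing.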
But there is a more fundamental flaw in GG's facts. He doesn't seem to realize that NetApp's OS started out on Intel 80486 CPUs, and for several years NetApp's product line was Pentium-based on the low end and Alpha-based on the high end. The migration of the ONTAP OS kernel from Intel to Alpha took one engineer a few days.
NTAP has preserved its OS portability so that moving to new Intel CPUs will be best described as "trivial".
Also, if NTAP saw an advantage to a multiprocessing (MP) architecture, upgrading ONTAP to support MP could be done within the confines of NetApp's engineering organization. I doubt, though, that MP would offer any real advantage over NetApp's current direction: more flexible clustered configurations with faster CPUs from INTC, CPQ, or whomever.
5. "NetApp achieves fault tolerance through Compaq-Tandem's proprietary ServerNet. Procom obtains similar results from Ethernet links, while at the same time allowing for interoperability, an issue that has plagued EMC with its "all-EMC" installation of SANs."
NTAP's fault tolerance, except for CPU failure, is achieved through redundant components on every filer: disk drives, controllers, power supplies, and NVRAM batteries. ServerNet, aka "NUMA", is used to communicate between two clustered failover filer partners, keeping the contents of nonvolatile RAM (NVRAM) duplicated on the partner. If a catastrophic error, like CPU failure, occurs, the partner can perform a "soft failover" of the cached data, storage arrays, and network interfaces.
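The NVRAM-mirroring idea described above can be sketched in a few lines. This is purely an illustrative toy, not NetApp's actual implementation or protocol; the class and method names are my own inventions:

```python
# Illustrative sketch of partner NVRAM mirroring: each filer logs
# unflushed writes locally AND mirrors them to its cluster partner,
# so the survivor can replay them after a "soft failover".

class Filer:
    def __init__(self, name):
        self.name = name
        self.nvram = []          # this filer's unflushed writes
        self.partner_nvram = []  # mirror of the partner's unflushed writes
        self.disk = {}           # data safely on the storage array
        self.partner = None

    def write(self, path, data):
        entry = (path, data)
        self.nvram.append(entry)                      # log locally
        if self.partner:
            self.partner.partner_nvram.append(entry)  # mirror to partner

    def flush(self):
        for path, data in self.nvram:
            self.disk[path] = data
        self.nvram.clear()
        if self.partner:
            self.partner.partner_nvram.clear()        # mirror no longer needed

    def take_over(self):
        """Soft failover: replay the failed partner's mirrored log."""
        for path, data in self.partner_nvram:
            self.disk[path] = data
        self.partner_nvram.clear()

a, b = Filer("filer-a"), Filer("filer-b")
a.partner, b.partner = b, a
a.write("/vol/home/f1", "data1")  # logged on A, mirrored to B
# ... filer A fails before flushing ...
b.take_over()                     # B replays A's mirrored log
print(b.disk)                     # {'/vol/home/f1': 'data1'}
```

The point of the sketch is why a dedicated low-latency interconnect matters: every client write is held up until the mirror on the partner completes, so the link's latency sits directly in the write path.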
If such a soft failover architecture could be achieved via an ethernet interface, NTAP would/could/woulda/shoulda done that. I suspect that Procom's fault tolerance is far less reliable than NTAP's if ethernet is the medium. I also suspect that GG has taken a giant leap of faith in implying that, because Procom uses ethernet, their systems are interoperable (for redundancy's sake) with other brands.
6. "NetApp now faces the challenge of upgrading its system for multiprocessing and new applications."
The truth of this statement relies on the premise that NetApp requires multiprocessing. I don't think that premise is valid. I am also not at all sure what GG is referring to with the "new applications" phrase. Client-written applications do not run on NTAP filers; that's part of the disintegration of the computer that GG discusses. Any new applications related to NAS can be written by NTAP and/or third parties in Java for execution on a Java virtual machine running on the filer. Or applications may run on application servers accessing filers through filer APIs, such as the NDMP or (future) virtual interface (VI) APIs.
All in all, GG is just getting his feet wet in the storage world. His analysis is valuable, but it is a work in progress; I encourage all to follow it, but with a critical eye.