ts, that post on TMF was brilliant! I am taking the liberty of pasting the key part of stocksure's thesis here:
Required reading:
A new technology, set to appear next year, is going to eliminate both of the weaknesses I've mentioned [NAS over an IP network; NAS OS overhead], and, combined with the rate at which Netapp filers are scaling in terms of capacity, should allow the NAS/SAN battles to reach Stage Four. Allow me to present DAFS:
dafs.netapp.com
A standard pioneered by our favorite company (well, one of my favorite companies), DAFS can be implemented in both NAS and SAN environments and can run over Ethernet, Fibre Channel, or any other network protocol. It allows for three major benefits:
1. It allows data requested by an application on a general-purpose server to be sent from a storage device to the application while bypassing the buffers within the general-purpose server's operating system (e.g. Unix, NT, Linux). (See the sketch after this list for a rough local analogy.)
NAS/SAN implications: This feature should benefit both camps equally, with the only real winners being us consumers, who might have to wait a few seconds less than usual while an online purchase order gets processed, and thus might have the order go through before the e-tailer handling it goes bankrupt.
2. It allows a server handling a data request to bypass the IP protocol stack while accessing a filer hooked up to an IP network.
NAS/SAN implications: Since Fibre Channel SANs don't make use of IP right now (although they might in the future), this development is meaningless for such networks at the moment. On the other hand, for block data transfers meant only to reach an application server, it goes a long way toward equalizing the performance difference between NAS and SAN systems.
3. It allows a server accessing a NAS device to bypass the latter's operating system completely when making a file request.
NAS/SAN implications: While obviously irrelevant to SANs, this is absolutely huge for the NAS market. First, as one might guess, when combined with benefit #2, it puts NAS systems on completely equal footing with SANs with regard to application-server requests for block data transfers; but it doesn't stop there. The ability to bypass a filer's operating system also allows NAS systems to scale virtually without limit from a processing-resource perspective.
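Before moving past the benefits list, here's a quick, hedged illustration of what benefits #1 and #2 are really buying. DAFS itself was designed to ride a memory-to-memory transport (VI-style RDMA), so nothing purely local reproduces it; but a rough analogy for "skip the extra buffer copy" is the difference between a plain read(), which copies file data from the kernel's page cache into a user buffer, and a memory map, which lets the application address the cached pages in place. The file name below is made up, and the file is assumed to exist and be non-empty:

```python
# Rough local analogy only -- DAFS does memory-to-memory transfers over a
# VI/RDMA transport, bypassing the server OS and IP stack entirely; the
# payoff it targets is the same kind of eliminated copy shown here.
import mmap

path = "catalog.db"  # hypothetical file standing in for filer-hosted data

# Conventional path: the kernel copies data from its page cache into a
# newly allocated user-space buffer -- the copy benefit #1 removes.
with open(path, "rb") as f:
    buf = f.read()

# Direct path: map the file and address the page-cache pages in place,
# with no intermediate copy into a private buffer.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        first_byte = m[0]
        size = len(m)

print(f"buffered read: {len(buf)} bytes; mapped view: {size} bytes, "
      f"first byte 0x{first_byte:02x}")
```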
The reason for this scaling claim is simple: just as DAFS benefit #3 lets a Unix/NT server directly access the disks on a Netapp filer, it also lets a Netapp filer directly access the disks of another Netapp filer. For example, suppose Continental Airlines had a configuration of 20 F840 filers managing its databases, and a request came in for a piece of data sitting on filer #18 while that filer's CPU was busy with a number of other requests. The request could be sent instead to, say, the CPU on filer #4, which could access content on filer #18 just as quickly as it could access the disks directly attached to it. And if that isn't good enough, Continental could buy ten more Netapp filers, only with no disks installed, and use them purely as additional CPU resources for handling requests made to the other filers. Or twenty more, or fifty, for that matter; as many as they consider necessary. Just like that, there goes any scalability advantage that the Celerra/Symmetrix or any other NAS/SAN hybrid might have held had Netapp filers continued to exist as standalone devices; and we can't forget that those Netapp filers would still carry a considerable cost advantage.
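To make that scale-out idea concrete, here's a toy sketch of the dispatching logic, purely hypothetical and borrowing the numbers from the Continental example above. It models nothing of Netapp's actual interfaces, only the scheduling idea that any CPU in the cluster can serve any filer's disks:

```python
# Toy model: with direct filer-to-filer access, any CPU can serve blocks
# homed on any filer's disks, so compute and disks decouple. Filer counts
# and IDs mirror the hypothetical Continental example; nothing here
# reflects real Netapp/DAFS interfaces.
class Filer:
    def __init__(self, ident, has_disks=True):
        self.ident = ident
        self.has_disks = has_disks      # diskless units contribute CPU only
        self.active_requests = 0        # crude stand-in for CPU load

    def serve(self, data_home):
        # With DAFS-style direct access, remote disks cost (roughly) the
        # same to reach as locally attached ones.
        self.active_requests += 1
        return f"filer #{self.ident:02d} served data homed on filer #{data_home}"

# 20 F840-style filers with disks, plus 10 diskless "CPU-only" heads.
cluster = [Filer(i) for i in range(1, 21)]
cluster += [Filer(i, has_disks=False) for i in range(21, 31)]

def handle(data_home):
    # Route to the least-loaded CPU, not to the filer that owns the disks.
    cpu = min(cluster, key=lambda f: f.active_requests)
    return cpu.serve(data_home)

for _ in range(3):
    print(handle(data_home=18))   # filer #18's own CPU may never be touched
```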
Of course, one problem could arise from implementing the kind of distributed architecture I just outlined: the difficulty of managing all those processing resources efficiently. After all, if the resources of such a system, with so many individual processing units, aren't properly utilized, spending on CPU resources will quickly run out of control, and the entire system will turn into a mismanaged, convoluted computational bureaucracy, one that would make Joseph Heller roll over in his grave.
But fortunately, Network Appliance has already thought this problem through, as shown by its purchase of WebManage (http://biz.yahoo.com/bw/000905/ca_network.html), a company whose software can channel individual data requests to the device best fit to handle each request. Traditionally, such software (Akamai's FreeFlow is a good example) has only been used to manage requests made to internet servers carrying redundant content, whether housed in a single data farm or dispersed over a worldwide content delivery network. Enterprise implementations, and implementations involving file servers that don't all carry the same content, have been rare; but implementations of exactly that sort would be essential if the kind of DAFS architecture I outlined earlier were to take flight. With this in mind, notice how the press release announcing the WebManage buyout made reference to the use of the company's software in enterprise as well as internet environments. It all seems quite interesting, IMO.
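As a guess at what that rarer, enterprise flavor of request routing would have to look like, here's a short sketch. Unlike a CDN mirror farm, where every server carries the same content, the dispatcher here has to know which devices can reach which data, and only then pick the least-loaded eligible one. The placement table and filer names are invented; this is not a description of WebManage's actual software:

```python
# Hypothetical content-aware dispatcher: route each request to the
# least-loaded device among those that can actually reach the data.
# Placement map and filer names are invented for illustration.
from collections import defaultdict

placement = {                      # dataset -> filers able to serve it
    "bookings": ["f01", "f04", "f18"],
    "crew":     ["f02", "f18"],
}
load = defaultdict(int)            # outstanding requests per filer

def route(dataset):
    eligible = placement[dataset]
    target = min(eligible, key=lambda f: load[f])   # best fit = least busy
    load[target] += 1
    return target

for ds in ["bookings", "bookings", "crew", "bookings"]:
    print(f"{ds} -> {route(ds)}")
```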
Whenever I try to analyze the quality of the management team of a company that I'm invested in, or am considering investing in, one of the things I do is look at the moves the company's management has made in the recent past and ask myself whether, if I were in their position, I would have made the same moves or done something different. It becomes a test of sorts; and of the numerous companies I've put through this "test" of mine, only two, Broadcom and Qualcomm, have passed with flying colors so far. It seems it's time for me to add a third company to that list.
Eric