Technology Stocks : Network Appliance

To: im a survivor who wrote (9188)9/24/2001 1:02:03 PM
From: riposte
 
Q&A: Cutting in on cutting edge technology

searchstorage.techtarget.com

Source: searchStorage
Date: 21 Sep 2001
Author: Michele Hope, Site Editor


As the CEO of Network Appliance, Dan Warmenhoven is a leading expert in NAS technology. During the recent Networld+Interop show in Atlanta, Site Editor Michele Hope talked with Warmenhoven about his company's position on NAS and how he thinks the technology will play out in a world dominated by talk of SANs. Warmenhoven is also the keynote speaker at the Storage Decisions 2001 conference being held next week in Chicago.



searchStorage: Snap Manager for Exchange, your new product, was originally designed as a workaround to Microsoft's stance against NAS devices, correct?
Warmenhoven: When we started it, we thought of it as a workaround. As we got deeper into it, we decided that this was a great strategic driver. As I told the team sometime ago, "Congratulations. You've just gone from tactical to strategic." And, it's true. Over time, you're going to see us provide an interface for upstream Fibre Channel connections (SAN type), as well as, ultimately, iSCSI connections.



"In almost every benchmark I've seen, the issue is not the network, either the Fibre Channel or Gigabit Ethernet. The issue is typically that the storage subsystem is generally the bottleneck, and it gets right down to the drives." -- Dan Warmenhoven


searchStorage: So, are you hoping that Snap Manager will be iSCSI compliant?
Warmenhoven: Over time. It is not now. The standard isn't there yet. But, it's conceptually very similar. Take the SCSI command. Put it inside a TCP/IP transport. Take the covers off the command when it gets to the file system.
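The "take the SCSI command, put it inside a TCP/IP transport, take the covers off at the far end" idea can be sketched in a few lines. This is a deliberately simplified illustration, not the real iSCSI PDU format (the actual standard defines a 48-byte basic header segment and much more); the field layout and function names here are hypothetical.

```python
import struct

READ_10 = 0x28  # real SCSI READ(10) opcode

def wrap_scsi_command(lba: int, num_blocks: int) -> bytes:
    """Build a READ(10)-style CDB and prefix a tiny, made-up transport header."""
    cdb = struct.pack(">BBIBHB", READ_10, 0, lba, 0, num_blocks, 0)  # 10 bytes
    header = struct.pack(">HH", len(cdb), 0)  # length + reserved (illustrative)
    return header + cdb  # this byte string is what would ride inside TCP/IP

def unwrap_scsi_command(payload: bytes):
    """'Take the covers off': strip the header, decode opcode, LBA, block count."""
    (cdb_len, _) = struct.unpack(">HH", payload[:4])
    cdb = payload[4:4 + cdb_len]
    opcode, _, lba, _, blocks, _ = struct.unpack(">BBIBHB", cdb)
    return opcode, lba, blocks

payload = wrap_scsi_command(lba=2048, num_blocks=8)
print(unwrap_scsi_command(payload))
```

The point of the sketch is the layering: the block command itself is unchanged; only the envelope it travels in is TCP/IP instead of a Fibre Channel frame.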

searchStorage: At some point, you want the filer to handle both block- and file-level data?
Warmenhoven: Yes, absolutely. In a single device. In certain examples, you can even treat the same data object as a file or a block. This is kind of what we do in database environments today. When you set up a filer today for a database, it uses NFS or whatever is the access vehicle. The entire database looks like one large file. So, the database engine -- whether that would be Oracle or whatever -- is actually managing its content as blocks. So, it's a block over file, or whatever you want to call it. You can still use those same kinds of techniques. You can access the system via block, yet you can still pick up the whole database for purposes of replication or backup using a filer.
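The "block over file" arrangement he describes can be sketched as follows: the database engine does fixed-size block I/O at offsets inside one large file, while the filer side still sees a single file it can snapshot or replicate whole. The class and block size here are hypothetical stand-ins, not how any particular database lays out its storage.

```python
import os, tempfile

BLOCK_SIZE = 4096  # illustrative; real engines choose their own block size

class FileBackedBlocks:
    """One large file treated as a block device by the application,
    yet still a single object for file-level backup or replication."""

    def __init__(self, path: str, num_blocks: int):
        self.path = path
        with open(path, "wb") as f:
            f.truncate(num_blocks * BLOCK_SIZE)  # preallocate the "device"

    def write_block(self, n: int, data: bytes) -> None:
        with open(self.path, "r+b") as f:
            f.seek(n * BLOCK_SIZE)                      # block-level addressing
            f.write(data.ljust(BLOCK_SIZE, b"\x00"))    # pad to a full block

    def read_block(self, n: int) -> bytes:
        with open(self.path, "rb") as f:
            f.seek(n * BLOCK_SIZE)
            return f.read(BLOCK_SIZE)

path = os.path.join(tempfile.mkdtemp(), "tablespace.dat")
dev = FileBackedBlocks(path, num_blocks=16)
dev.write_block(3, b"row data")
print(dev.read_block(3)[:8])  # b'row data'
```

Backing up the database is then just copying `tablespace.dat`, exactly the "pick up the whole database" property described above.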

searchStorage: How do you believe DAFS can help NAS devices?
Warmenhoven: I think DAFS is going to wind up being an application specific solution. Oracle 9i is structured in such a way that you could put a DAFS component right in it. So, we in conjunction with Oracle, developed a 9i/DAFS implementation. I'm hopeful -- although it's largely up to Oracle -- as to whether that gets productized and taken to market. But, I hope that will be the case. That is, essentially, a memory-mapped approach, which eliminates a lot of the overhead on the server system.

The issue today with databases on NAS is that, in certain environments, you add load to the application server. The TCP/IP overhead consumes CPU cycles, and if your server was already at the top of its CPU capacity, that creates a problem. DAFS is a way of eliminating that overhead. So, for applications like database or Exchange Server, or whatever, a more efficient technique is to use DAFS. I think over time DAFS is going to become, I hope, part of the operating system structures in the area of InfiniBand, which is designed to be a memory-mapped structure anyway. So, that's kind of a convergence point for all this stuff later on.
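DAFS itself is a network protocol with no standard-library client, so as a stand-in, a local memory map illustrates the memory-mapped idea he is pointing at: the application addresses file contents directly through its address space, instead of paying per-call copy overhead through a read()/TCP path. This is an analogy for the access model only, not a DAFS implementation.

```python
import mmap, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"x" * 1_000_000)  # a 1 MB file to map

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:   # map the whole file
        m[0:5] = b"hello"                 # write through the mapping, no write() call
        first = bytes(m[0:5])             # read through the mapping, no read() call

print(first)  # b'hello'
```

The copy-avoidance is the point: with a mapping, bytes move between the page cache and the application without an explicit per-operation system call and buffer copy, which is the class of overhead DAFS aimed to eliminate over the network.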


searchStorage: We hear a lot about centralizing the physical storage all in one place. Are you moving away from that approach?
Warmenhoven: Yes. We feel as though it's not bad to have some centralized data. We're not saying you should have everything fully distributed. But the idea is that the data has more value the closer you get to the person who needs it. Issues of data access latencies and a variety of other things make it difficult to exploit the true value of the data otherwise. So, our idea is: Okay, fine, have a data center (or many), then take the content that people need from that, move it closer to them and provide facilities to be able to distribute it and recollect or recentralize it.

searchStorage: Can you provide some examples?
Warmenhoven: For instance, in branch office environments, one of the popular applications that customers seriously look at is eliminating the need to do backup remotely. Mirror that content back to a central location where you've got IT staff and you have high-performance tape devices, and then run the backup centrally. So, it's not just moving center-to-edge, it's moving from edge-to-center.

So, we think of it as being data on the move. And there are lots of other examples of pushing content out. It's not just raw data. It could be applications. Banks are a classical case with teleterminals. They'll run their own application set. Most of those terminals are virtually diskless. As soon as they boot up, they go get the application code off the server. How do you distribute to the server? This is a nice way of doing it. Create a directory and just push it out there. You want an update? Here you go. So, it's not just data in the sense of transaction data. It could be data of any type, video objects, software, almost anything.


searchStorage: What's your definition of SAN/NAS hybrids?
Warmenhoven: You can have the best of both worlds by blending the two into a single solution. So, we've got a very robust NAS solution. We're going to extend that to include block services. I think block services are fundamentally what defines a SAN. I don't think it's Fibre Channel necessarily. They do block-level access. So, I think a hybrid is in fact, storage systems which can accommodate both [file and block access] and can accommodate both very efficiently, and allow you to go back and forth between the two as application demands require it.

searchStorage: Why would people be interested in a NAS device versus a SAN?
Warmenhoven: There are really two driving factors. One is certainly the cost of ownership. The second is the simplicity of the entire NAS operation, and the familiarity of it.

According to a study conducted by Input, a research firm, the cost of ownership for a Network Appliance solution was one-quarter that of the EMC solution. The price of the system was half EMC's price. And when you looked at simplicity of operation, the number of manual tasks came way down. One of the things they found was that database administrators who had been spending more than half of their time just tuning databases got up to 90% of their time back as discretionary time, so they can invest in their infrastructure. The strategy there is to have the system do all the work: try to automate the tasks that people do manually.

The second [factor] is the simplicity of the entire NAS operation, and the familiarity of it. We're using technologies in some ways that are 15 years old: Ethernet, TCP/IP, NFS, CIFS. These are architectures that IT staffs have been familiar with for years. So, it's not as if you need some special training in general. You're using infrastructure components where all of your staff is familiar with it. I had a CIO tell me that he was very, very frustrated by his SAN deployments and it had nothing to do with the hardware. The skills aren't available in widespread supply in the industry. They are when you're using a NAS approach. It's very straightforward. It's a network and system administrator. Most of the command interfaces are either Windows-like or Unix-like. So, they are skill sets that are easily leveraged in this kind of environment.


searchStorage: What do you say to people who talk about what they view as the limitations of Gigabit Ethernet in relationship to Fibre Channel, getting data to/from, etc.?
Warmenhoven: In almost every benchmark I've seen, the issue is not the network, either the Fibre Channel or Gigabit Ethernet. The issue is typically that the storage subsystem is generally the bottleneck, and it gets right down to the drives. Drives have gotten denser, but they haven't gotten commensurately faster. You can only spin the platters so fast, and the heads still have to move. It's not electronic. It's mechanical.

When I started at Network Appliance, we were shipping one-gigabyte drives at the high end, and they had about 25-millisecond access. Now, we're shipping 72-gigabyte drives, and they have about a 5-millisecond access. So, you have 72 times the data under one set of heads, and only 5 times the performance. Clearly, over time, you become what's referred to as "spindle-bound" or "head-bound." A terabyte now is 14 drives, so I only have 14 of those little access arms to go get it. The issue is how you optimize head seeks. It has nothing to do with the network. That's why, as you get into performance benchmarks, customers are typically quite surprised. They really are. Their expectation is that if they run it over a network, it's going to go slower. We have the file system, which is maximizing the utilization of the heads. That's where the bottleneck is. That's the thing we really address. The network never really was the bottleneck.
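The spindle-bound arithmetic above can be made concrete with the interview's own numbers. The calculation is a simple worked example: access-time improvement versus capacity growth, and the resulting seeks per second available per gigabyte stored.

```python
# Numbers quoted in the interview: early high-end drive vs. circa-2001 drive.
old_gb, old_ms = 1, 25
new_gb, new_ms = 72, 5

capacity_growth = new_gb / old_gb     # 72x more data under one set of heads
seek_speedup = old_ms / new_ms        # only 5x faster access

# Seeks per second each drive can deliver, per gigabyte it stores.
seeks_per_gb_old = (1000 / old_ms) / old_gb   # 40.0
seeks_per_gb_new = (1000 / new_ms) / new_gb   # ~2.78

print(capacity_growth, seek_speedup)          # 72.0 5.0
print(round(seeks_per_gb_old, 2))             # 40.0
print(round(seeks_per_gb_new, 2))             # 2.78
```

Seek capability per stored gigabyte fell by roughly 72/5 = 14x, which is why the file system's ability to minimize and schedule head movement, not the network, becomes the bottleneck.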


searchStorage: Do SAN and NAS have to be an either/or scenario?
Warmenhoven: SAN will ultimately become block over IP, in iSCSI form. I do think it moves away from Fibre Channel toward GigE or 10 GigE. I believe it's also going to be facilitated by what are referred to as TOEs, or TCP offload engines on the NICs, which I think will remove all the performance issues from the servers. Then, I think the simplicity argument is going to be pretty compelling.

searchStorage: How are you staying ahead of your competitors, even vendors that may not have been very involved in the NAS space but are trying to move into that?
Warmenhoven: Most of the vendors who've talked about getting into the NAS market don't really have a clue what they are talking about. I would argue there is no such thing as a NAS market. We created it to give ourselves an identity.

The one application that we know of that absolutely requires a NAS solution is one that we call home directories. An example is your network drive, like your Disk G, that's a virtual drive. That's actually a server somewhere. That one requires the server to be a NAS server. That's the only application I know of that requires NAS. Everything else you can address by either NAS or SAN. So, NAS is nothing more than a technology alternative for interconnect. That, to me, doesn't define a market. There's got to be something distinctive about the application, the customer, whatever, that defines the market, right? So, I really view this whole analysis of markets to be somewhat flawed. We think what we're competing for is a piece of the NT and Unix open storage market, external storage.