To: DownSouth who wrote (25191 ) 5/23/2000 11:36:00 AM From: buck Read Replies (2) | Respond to of 54805
Buck, you remind me of the main problem that NTAP reps have had in selling their product: the prospect just can't quite accept the story on faith. It's a rare customer that will accept such claims on faith alone. Oh, lordy, would that there were more of them. I would be happily raising cattle by now!

I read the whole test case you presented, and it is bulletproof. I have no doubt that NTAP does what they say it does, in the configuration they tested against, with the protocol they used for testing. Remember, I believe the NTAP story, and I believe it enough to invest in them heavily. What that paper describes is a set of IP network clients doing things on an IP network. This is a perfectly valid test. Since not a single client on that network could ever generate more than 4K per second in a transaction-oriented test, the NetApp filer shines. It should, because it is designed for just that. And, as the paper shows, it scales up to a large number of clients very well. And we all know what scalability means in our little Game here. To bottom-line it, as Tekboy asked: the NetApp filer is an excellent file system for servers that are used to do transaction processing. Believe me, I get it.

My contention is that once you get past the need for 4KB blocks generated by 30-100 clients, this model does not scale. Here I am talking about all the other processing that goes on besides transaction processing. Data warehousing...data mining...database replication...backup and restore...seismic processing...CAD/CAM design...pre-press...account reconciliation...$$$ movement from bank to bank...video editing...all of the un-sexy apps that feed into the transaction processing model. These applications absolutely need to move data by the gigabyte, and they can't do it in 4K chunks or even 32K chunks. Those small chunks just waste too much processor time. They need 128K and 256K chunks streaming for minutes and hours at a time, just moving data from point A to point B.
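A quick back-of-envelope sketch of that chunk-size argument. Note that the 50-microsecond fixed cost per I/O request below is my own illustrative assumption, not a number from the test paper; the point is only that a fixed per-request cost multiplied by millions of requests adds up:

```python
# Back-of-envelope: fixed per-request cost dominates when chunks are small.
# PER_REQUEST_OVERHEAD_S is an assumed, illustrative figure.

DATA_BYTES = 10 * 1024**3          # 10 GB to move from point A to point B
PER_REQUEST_OVERHEAD_S = 50e-6     # assumed fixed cost per I/O request

def overhead_seconds(chunk_bytes):
    """Total time spent on per-request overhead alone for the transfer."""
    requests = DATA_BYTES // chunk_bytes
    return requests * PER_REQUEST_OVERHEAD_S

for size in (4 * 1024, 32 * 1024, 256 * 1024):
    print(f"{size // 1024:>3} KB chunks: "
          f"{DATA_BYTES // size:>9,} requests, "
          f"{overhead_seconds(size):7.3f} s of pure overhead")
```

Under those assumptions, moving the same 10 GB takes about 2.6 million requests in 4 KB chunks but only about 41 thousand in 256 KB chunks, so the time burned on per-request bookkeeping shrinks by the same factor of 64.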
Tekboy Bottom-Line (hereafter referred to as TBL): imagine a pallet of shoe boxes that needs to be moved from the assembly line to the loading dock. Does it make more sense to carry each shoe box by hand to a new pallet at the loading dock, one by one, or to pick up the whole pallet with a forklift and take it to the loading dock?

These applications are where the EMCs of the world shine. And, somewhat contrary to our common sense, they are growing in need, not slowing or dropping. We tend to view the world as becoming more transaction-oriented, what with our web usage, our WAP usage, our ATM usage, our IPG usage, etc. The fallacy lies in thinking that only transactions are needed to keep the e-world turning. The actuality is that there are TONS of back-end, back-office processes that go on to support the e-world. And those processes are just like the forklift: they look slow and bulky, until you actually try to carry each shoe box by hand to the loading dock, one at a time. You certainly wouldn't want to use the forklift for moving each shoe box to the correct spot on a retail shelf, either. That is a job best left to small, quick-moving stockers.

TBL: both of these products are essential to the growth of the e-world, and by extension, constitute a piece of the Gorilla Game. Both companies use very specialized software running on very specialized hardware to meet and beat customer expectations. Both companies have a vision of where the storage world is going, and surprise! It's basically the same vision. NetApp is coming at the world-wide explosion of data from the client-oriented, transaction-oriented view of the game, and EMC (and others) are coming at it from the big honkin' data-set view of the game. Each is trying to reach down into the other's markets. Each will be eminently successful (my WAG, based on the technology) during my investing time horizon (~5 years).
And I don't believe one will supplant the other for a LOOOOOONG time, even though each will have wins in the other's camp. So I still believe in both companies, and will remain invested in both, with no particular preference toward one or the other.

buck

PS I assumed that NetApp used UDP (User Datagram Protocol) instead of TCP (Transmission Control Protocol) for two reasons: 1) I think NFS is, or was originally, based on UDP, because 2) UDP consumes fewer processor cycles for overhead than TCP. This was last confirmed, by me personally, at least four years ago, so it could have changed. TBL: UDP had less impact on a server's processor and hence was more efficient for some applications.
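To make the UDP point concrete, here is a minimal loopback sketch (the port choice, payload, and "READ" request string are all hypothetical, just to show the shape of the exchange): an NFS-style operation over UDP is one datagram out and one datagram back, with no connection setup, teardown, or acknowledgment traffic for the server's processor to handle.

```python
import socket

# A pretend NFS-style request/response over UDP on the loopback interface.
# No connect/accept handshake, no ack stream: one datagram each way.

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))       # kernel picks a free port
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5)
client.sendto(b"READ /export/file offset=0 len=4096", addr)   # one datagram out

request, client_addr = server.recvfrom(65535)
server.sendto(b"\x00" * 4096, client_addr)                    # one datagram back

reply, _ = client.recvfrom(65535)
print(f"request={request!r} reply_len={len(reply)}")

client.close()
server.close()
```

By contrast, a TCP version of the same exchange would need a three-way handshake before the request and acknowledgments for every segment afterward, which is exactly the per-connection processor cost the PS is talking about.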