To: Thomas Mercer-Hursh who wrote (25901) 6/6/2000 8:00:00 AM From: DownSouth
> assuming that network bandwidth is a significant factor in this benchmark, is it surprising that one might need 3X the number of NICs when the NICs themselves are 10X?

Good question. In past benchmarks, before GbE, when everyone was using 100Base-T, the number of NICs used by NTAP and its competitors was about the same, so network bandwidth was about the same across the board. The comparative results were much like the sample I provided. (The first sketch at the end of this post spells out the NICs-times-speed arithmetic.)

> Then I have to ask myself about this one disk controller versus three when we are talking about 58 vs 116 disks. Can we be talking at all about controller = controller? In the Sun case, are we talking SCSI? We sure aren't talking 38 disks per channel! So how many of what kind of channels were there in each of these controllers?

They are both running Fibre Channel controllers, so "controller = controller". (Please see the URLs that I provided for more detail about how each of the Sun controllers was configured.) Basically, one controller was configured with 4 drives, probably for the OS; the rest of the drives were on the other controllers.

> Depending on the requirements, this is one of the reasons that a specialized appliance can certainly be cost effective and performant. But take a slightly different mix of requirements, ones where the specialized appliance has no functionality at all, and the balance can tip the other way.

This third-party, industry-standard, industry-audited benchmark is designed to measure NFS throughput. Of course NTAP would do very poorly in a compute-bound application benchmark, especially since you cannot run apps on a filer. <g>

> but I've spent a lot of years configuring systems with one filesystem per disk, exactly because that was the best way *I* could spread the load intelligently and not depend on some average good algorithm. All of the best databases today provide some version of "storage areas" in which one can assign tables and distribute these areas across filesystems, *exactly* because, if one knows something about the frequency of access and where the hot spots are, one can beat any averaging algorithm.

You have just pointed out one of the benefits of NTAP's technology. WAFL does not require any load balancing across the media; that is one more sysadmin task made completely obsolete. (The second sketch at the end of this post shows the kind of manual placement you are describing.) As for the lack of RAID on the Sun system: who would run a mission-critical app without RAID? My point is that NTAP is RAID out of the box, and the performance you are seeing is with RAID. The Sun system is not running RAID, and the results would have been significantly different if it were.

> be careful about knocking more conventional architectures, especially when the system has to do more than store data. Not to mention, of course, that a system that is highly performant in a read-only environment may be exactly the *wrong* choice in a transactional, high-write environment.

But an NTAP filer does nothing more than store ("serve") data. That's the point! It's an appliance. I suggest you study the SPEC SFS benchmark. It is designed to measure performance across a balanced mix of NFS operations (reads, writes, lookups, creates, and so on), not reads alone, so it is not biased toward what NTAP does best when it comes to file serving. (The last sketch at the end of this post shows how a weighted operation mix works.)
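For anyone following along, here is the NICs-times-speed arithmetic as a quick sketch. It is illustrative only; the NIC counts below are hypothetical and not taken from either vendor's benchmark disclosure.

    # Illustrative only: aggregate network bandwidth is NIC count times per-NIC
    # speed. The counts below are hypothetical, not from either benchmark report.

    def aggregate_gbps(nic_count, per_nic_mbps):
        """Total theoretical network bandwidth in Gb/s."""
        return nic_count * per_nic_mbps / 1000.0

    # Old benchmarks: everyone on 100Base-T with similar NIC counts,
    # so similar aggregate bandwidth.
    print(aggregate_gbps(8, 100), aggregate_gbps(8, 100))    # 0.8 vs 0.8 Gb/s

    # GbE era: 3x the NICs really does mean 3x the network bandwidth.
    print(aggregate_gbps(3, 1000), aggregate_gbps(1, 1000))  # 3.0 vs 1.0 Gb/s

The interesting number is count times per-NIC speed, which is why equal counts of 100Base-T NICs meant roughly equal network bandwidth, and unequal counts of GbE NICs do not.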
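Second, a toy sketch of the manual "storage area" balancing Thomas describes: one filesystem per disk (or disk group) and tables assigned by hand, hottest first. Table names, access rates, and mount points are made up for illustration, and this is not any particular database's syntax.

    # Illustrative only: manual placement of tables across filesystems based on
    # access rates the admin has to know (and keep re-estimating) in advance.

    table_load = {          # estimated operations per second per table
        "orders":     900,
        "order_line": 700,
        "customer":   300,
        "inventory":  250,
        "audit_log":   50,
    }

    filesystems = ["/db/area1", "/db/area2", "/db/area3"]  # one per disk group

    # Greedy placement: assign each table, hottest first, to the least-loaded area.
    placement = {fs: [] for fs in filesystems}
    load = {fs: 0 for fs in filesystems}

    for table, ops in sorted(table_load.items(), key=lambda kv: -kv[1]):
        target = min(filesystems, key=lambda fs: load[fs])
        placement[target].append(table)
        load[target] += ops

    for fs in filesystems:
        print(f"{fs}: {placement[fs]} (~{load[fs]} ops/s)")

The point about WAFL is simply that this planning step, and the re-planning when the hot spots move, goes away because the filer spreads blocks across the whole RAID group on its own.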
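Last, a sketch of why a benchmark built on a balanced operation mix cannot be won on reads alone. The percentages and latencies below are made-up illustrative numbers, not the published SPEC SFS operation mix; see the SPEC documentation for the real figures.

    # Illustrative only: a benchmark score built from a *mix* of NFS operations.
    # Both the mix and the per-op latencies are invented for this example.

    op_mix = {            # fraction of all requests
        "lookup":  0.30,
        "read":    0.20,
        "getattr": 0.20,
        "write":   0.15,
        "create":  0.05,
        "other":   0.10,
    }

    latency_ms = {        # hypothetical per-op response times for some server
        "lookup":  1.0,
        "read":    2.0,
        "getattr": 0.5,
        "write":   4.0,
        "create":  5.0,
        "other":   1.5,
    }

    # The overall figure is weighted by how often each op occurs, so a server
    # that is only fast at reads still pays for slow writes and creates.
    avg = sum(op_mix[op] * latency_ms[op] for op in op_mix)
    print(f"weighted average response time: {avg:.2f} ms")

Because every operation type is weighted in, a box that is slow on writes, creates, or metadata work drags its overall number down no matter how fast its reads are.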