To: DownSouth who wrote (25897) 6/6/2000 12:33:00 AM From: Thomas Mercer-Hursh
I am not going to pretend that this is my area of expertise, but being something of an OF (Old F**t) in computing, there were a couple of things in the comparison of these two systems that caught my attention, other than those which you pointed out.

For starters we have:

Network: Gigabit E-net vs Network: 100Mbit E-net

Well gosh, assuming that network bandwidth is a significant factor in this benchmark, is it surprising that one side might need 3X the number of NICs when the individual NICs differ in speed by 10X? (Some back-of-the-envelope arithmetic on this is sketched at the end of this post.)

Then I have to ask myself about this one disk controller versus three when we are talking about 58 vs 116 disks. Can we be talking at all about controller = controller? In the Sun case, are we talking SCSI? We sure aren't talking 38 disks per channel! So how many of what kind of channels were there in each of these controllers?

As for:

As you can see, Sun configured a very fat 3500 to hit these numbers

One of my reactions is to think, well yes, I would expect them to have to deliver a fairly beefy system, since one is a specialized appliance and the other a general-purpose system. Depending on the requirements, this is one of the reasons that a specialized appliance can certainly be cost-effective and performant. But take a slightly different mix of requirements, one where the specialized appliance has no functionality at all, and the balance can tip the other way.

As for:

and an unrealistic number of file systems (1 per drive, more or less). Sun's config did not include any RAID, as that would have penalized their performance significantly.

My reaction tends to be "huh?". True, for some purposes, like large databases, hardware-level striping can be highly performant and can reduce the number of filesystems, but I've spent a lot of years configuring systems with one filesystem per disk, exactly because that was the best way *I* could spread the load intelligently rather than depend on some algorithm that is merely good on average. All of the best databases today provide some version of "storage areas" in which one can assign tables and then distribute those areas across filesystems, *exactly* because, if one knows something about the frequency of access and where the hot spots are, one can beat any averaging algorithm (a toy illustration of this also follows at the end of the post).

I'm not knocking NTAP at all ... but be careful about knocking more conventional architectures, especially when the system has to do more than store data. Not to mention, of course, that a system that is highly performant in a read-only environment may be exactly the *wrong* choice in a transactional, high-write environment.
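
To put rough numbers on the NIC and controller questions above, here is a minimal Python sketch. The 10X speed gap (Gigabit vs 100Mbit) and the 3X NIC-count ratio come from the comparison itself; the absolute NIC counts, the assumption of plain wide SCSI, and the 15-devices-per-channel limit are my own illustrative assumptions, not figures from the published configs.

# Back-of-the-envelope arithmetic for the NIC and disk-controller questions.
# Assumed figures (NOT from the published benchmark configs): 3 NICs on the
# 100Mbit side vs 1 Gigabit NIC on the other, and wide SCSI's 15-device limit.

GIGABIT_MBIT = 1000        # one Gigabit E-net NIC, in Mbit/s
FAST_ENET_MBIT = 100       # one 100Mbit E-net NIC, in Mbit/s

slow_side_nics = 3         # assumed: the 3X NIC count sits on the 100Mbit side
fast_side_nics = 1         # assumed: a single Gigabit NIC on the other side

print("100Mbit side aggregate:", slow_side_nics * FAST_ENET_MBIT, "Mbit/s")
print("Gigabit side aggregate:", fast_side_nics * GIGABIT_MBIT, "Mbit/s")
# Even with 3X the NICs, the 100Mbit side has well under a third of the
# aggregate bandwidth, so counting NICs alone tells you very little.

# Controllers: 116 disks on 3 controllers is ~38-39 disks per controller,
# far more than one SCSI channel can address, so each "controller" must
# present several channels (or be something other than plain SCSI).
disks, controllers = 116, 3
DEVICES_PER_SCSI_CHANNEL = 15          # wide SCSI bus limit, excluding the HBA
per_controller = disks / controllers
channels_each = -(-disks // (controllers * DEVICES_PER_SCSI_CHANNEL))  # ceiling
print("Disks per controller:", round(per_controller, 1))
print("Minimum SCSI channels per controller:", channels_each)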
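
And to illustrate the "storage areas" point: the table names and access rates below are invented, and a real database does this with storage areas or tablespaces rather than a Python dictionary, but the sketch shows why placement that knows where the hot spots are ends up more balanced than a blind round-robin spread.

# Toy illustration (made-up workload): spreading tables across filesystems
# with knowledge of access frequency vs spreading them blindly.

tables = {              # table -> accesses per second (assumed workload)
    "orders": 900, "order_lines": 850, "customers": 120,
    "inventory": 400, "history": 30, "audit_log": 25,
}
filesystems = ["/fs0", "/fs1", "/fs2"]

def load(assignment):
    """Total accesses landing on each filesystem."""
    totals = {fs: 0 for fs in filesystems}
    for table, fs in assignment.items():
        totals[fs] += tables[table]
    return totals

# "Averaging" placement: round-robin, ignoring access frequency.
round_robin = {t: filesystems[i % len(filesystems)]
               for i, t in enumerate(tables)}

# Hot-spot-aware placement: busiest table goes on the least-loaded filesystem.
aware = {}
running = {fs: 0 for fs in filesystems}
for t in sorted(tables, key=tables.get, reverse=True):
    fs = min(running, key=running.get)
    aware[t] = fs
    running[fs] += tables[t]

print("round robin   :", load(round_robin))   # one filesystem carries over half the load
print("hot-spot aware:", load(aware))         # load is spread much more evenly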