Curtis,
As you know, some of the points presented by the poster that you took exception to, concerning LAN utilization metrics, are among the most hotly debated in all of networking.
Before commenting on the validity of his/her observations, I would first want to know the specific configuration of the network in question, and then have an opportunity to view the application transaction profiles that traversed this LAN. And equally important, I'd want to know how many stations the LAN supported.
One of the things that exacerbates this matter is the confusion caused by some otherwise believable individuals who make their living in the rags (and in texts, and studies), whose names many of us would recognize here in a flash, if I printed them. But I'm not paid to make enemies, so we'll leave their names blank.
Depending on whom you listen to, some of these blanks will cite the 30% to 40% utilization mark as the threshold of doom. Others will cite something in the area of 75% to 85% utilization before there is cause to start seriously thinking about implementing additional segments.
Keep in mind, we're talking shared media here, consistent with the study that you cited.
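[Aside, for anyone keeping score at home: by "utilization" I mean roughly the fraction of the channel's capacity occupied over some sample window. A minimal sketch of the arithmetic in C follows; the byte count and the 60-second window are my own hypothetical inputs, and a real analyzer accounts for framing overhead more carefully than this does:]

    #include <stdio.h>

    /* Rough utilization estimate for a shared 10 Mb/s segment.
     * bytes_seen: total bytes observed on the wire during the interval
     * (a hypothetical counter -- pull it from your analyzer of choice).
     */
    double utilization_pct(double bytes_seen, double interval_sec)
    {
        const double capacity_bps = 10.0e6;   /* 10 Mb/s shared Ethernet */
        double bits_seen = bytes_seen * 8.0;
        return 100.0 * bits_seen / (capacity_bps * interval_sec);
    }

    int main(void)
    {
        /* e.g., 45 MB observed over a 60-second sample window -> 60% */
        printf("utilization: %.1f%%\n", utilization_pct(45.0e6, 60.0));
        return 0;
    }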
I suspect that each of these guesstimates concerning shared media performance represents a throwback to its author's own personal experience, since, what the heck, they saw what they saw with their own two eyes. Some of them, granted, have done legitimate investigation. But here, too, depending on what their experimental models look like and what they are trying to prove, they each yield different results. More on this point shortly.
Problem is, very few shared Ethernets of the past -- of any appreciable size -- were configured exactly alike, nor did they perform alike. The number and complexity of the variables are just too many and too nitty, respectively, to get into here. [Many of those parameters were covered very nicely in the study you cited upthread, for anyone wanting to know what they are.]
Further exacerbating this matter is the persistent obfuscation surrounding the improvements that have taken place in the past nine years, namely, switching (as opposed to shared media) and full-duplex modes of operation.
But it is useful to point out here that, outside of the largest enterprises, there are still a good number of half-duplex shared Ethernets in existence today, where folks have not found the need or the resources to upgrade, or where the Staples variety of harmonica hub does them just fine. I know of one school library where the librarian is still marveling over their 10Base2 thinnet installation, viewing it as their validation into the Gore Paradigm.
Citing the now-popular 1988 study by Boggs, Mogul and Kent, however, is, IMO, equally dubious in debating this matter, when you look at its intent, its place in history, the number of nodes they used in their model to make their point, and the fact that they admittedly were not seeking to exactly replicate real-world conditions.
Also, to bring the "real" numbers and improvements that you cited (i.e., full-duplex, switched modes of Ethernet) into proper focus, a more recent study demonstrating those improvements would have been more useful. [If you or anyone else here knows of a more recent one of equal credibility, please post.]
Twenty-four stations in the first test by Boggs et al., and then twenty-three stations in the second test? What were they thinking? I was at least glad to see that the second test reduced the segment lengths, which accounted for reduced collision resolution times. But they didn't go far enough to explore what takes place in real environments, where real station counts and segment lengths were, at the time, being pushed to their limits in large organizations (sometimes over the acceptable limits), across multiple bridged segments.
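[To put some rough numbers on the segment-length point, here's my own back-of-the-envelope in C, not theirs; the 0.77c velocity factor for thick coax and the 100 ns bit time at 10 Mb/s are the usual textbook figures, so treat this as illustrative only:]

    #include <stdio.h>

    /* Back-of-the-envelope: round-trip propagation delay of a coax
     * segment, expressed in 10 Mb/s bit times. A shorter segment means
     * a transmitter learns of a collision sooner, so collisions resolve
     * faster. Figures are the usual textbook ones, not measured values.
     */
    int main(void)
    {
        const double c = 3.0e8;            /* speed of light, m/s */
        const double vf = 0.77;            /* velocity factor, thick coax */
        const double bit_time_s = 100e-9;  /* one bit time at 10 Mb/s */
        double lengths_m[] = { 500.0, 250.0, 100.0 };  /* 10Base5 max, then shorter */

        for (int i = 0; i < 3; i++) {
            double round_trip_s = 2.0 * lengths_m[i] / (c * vf);
            printf("%4.0f m segment: round trip ~ %.1f bit times\n",
                   lengths_m[i], round_trip_s / bit_time_s);
        }
        return 0;
    }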
One must wonder why they limited their experimental model to 24 nodes when, during the same year, 1988, I recall Goldman Sachs and Merrill Lynch were each pushing over a thousand nodes on similarly shared, bridged thicknet (10Base5) backbones in multi-segment configurations.
Let me cite some additional findings from a couple of more recent studies that have taken place with respect to, and since, the 1988 Boggs et al. study you pointed to.
From "Ethernet: The Definitive Guide," by Charles E. Spurgeon, p. 331, re: additional Ethernet performance testing during the 1990s:
"In 1992, Speros Armyros published a paper showing the results of a new simulator for Ethernet that could accurately duplicate the real-world results reported by the Boggs, Mogul and Kent paper. This simulator made it possible to try out some more stress tests of the Ethernet system.
"These new tests replicated the results of the Boggs, Mogul and Kent paper for the 24 stations. The new tests also showed that under worst-case overload conditions, a single Ethenet channel with over 200 stations continually sending data would behave rather poorly, and access times would rapidly increase. "Access time" is the time it takes for a station to transmit a packet onto the channel, including any delays caused by collisions and by multiple packtes backing up in the stations buffers due to congestion of the channel.
"Further analysis of an Ethernet channel using the improved simulator was published by Mart Molle in 1994. Molle's analysis showed that the Ethernet binary exponential backoff algorithm was stable under conditions of constant overload on Ethernet channels with station populations under 200. However, once the set of stations increased much beyond 200, the algorithm begins to respond poorly. In this situation, the access time delays encountered when sending packets can become extremely unpredictable, with some packets encountering rather large delays."
<deleted: section on load definitions>
"The lessons learned in these studies make it clear that users trying to get work done on a constantly overloaded LAN system will perceive unacceptable delays in their network service. Although constant high network loads may feel like a network "collapse" to the users, the network system itself is in fact still working as designed; it's just that the load is too high to rapidly accommodate all of the people who wish to use the network. The delays caused by congestion in shared communication channels are somewhat like the delays caused by congested highways during commute times. Despite the fact that you are forced to endure long delays due to a traffic overload of commuters, the highway system isn't broken; it's just too busy. An overloaded Ethernet channel has the same problem."
Well, I have some nit issues with this last paragraph, but in the broader sense it's okay.
In reading the above, keep in mind that the author is referring to a shared bus-topology network (as did Boggs et al.), of a size much larger than the one Boggs et al. actually tested.
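[For anyone who wants to see why access times go unpredictable the way Molle describes, here's a bare-bones C sketch of the truncated binary exponential backoff that CSMA/CD stations run. Simplified, and mine, not Molle's: it just shows how the backoff window balloons with consecutive collisions, and it ignores the queueing and capture effects the studies also dealt with:]

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Truncated binary exponential backoff, per classic CSMA/CD:
     * after the nth collision on a frame, wait a random number of slot
     * times drawn from 0 .. 2^min(n,10) - 1; give up after 16 attempts.
     * One slot time on 10 Mb/s Ethernet is 512 bit times (51.2 us).
     */
    int backoff_slots(int attempt)
    {
        int exp = attempt < 10 ? attempt : 10;  /* cap the window at 2^10 */
        return rand() % (1 << exp);             /* 0 .. 2^exp - 1 slots */
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        /* Show how the worst-case wait balloons as collisions pile up --
         * which is exactly the unpredictable access time the studies saw. */
        for (int attempt = 1; attempt <= 15; attempt++) {
            int exp = attempt < 10 ? attempt : 10;
            printf("collision %2d: window is 0..%4d slots (picked %d)\n",
                   attempt, (1 << exp) - 1, backoff_slots(attempt));
        }
        printf("collision 16: excessive collisions, frame is dropped\n");
        return 0;
    }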
On a personal note, I've seen Ethernets fall into dragass mode, too, with my own two eyes, in our own offices when multiple databases were being rebuilt during periods of heavy CADD activity. Yes, you get to learn what's going on when all goes silent. Earlier, we even saw this occur when using a 4/16 Mb/s Token Ring ;-)
Net-Net: Bandwidth hogs will kill ya every time, theoretical predictions to the contrary or not.
Comments and corrections welcome.
FAC |