Hi Jim, I know you weren't being pretentious; I was paying you a compliment.
"In an ethernet-type network, which I understand is [relatively] unmanaged..."
Management covers an awful lot of territory. Today's Ethernets are highly managed, if you consider the degree of surveillance and monitoring that is available, and the capabilities that vendors have introduced to do remote provisioning and problem isolation.
I think that we need to differentiate between "Ethernet" and the products that have been produced to "manage" Ethernet.
Ethernet is a set of protocols and standard industry practices that are spelled out by the standards bodies. The "management" of Ethernet is often derived from proprietary vendor embellishments to, or extensions of, those standards. Except for SNMP, which I'll address in a moment.
Actual network management platforms are products unto themselves. They measure how well an Ethernet (or any other network platform) is doing, and support monitoring, configuration management, service provisioning and fault management.
Furthermore, overall network utilization (as well as individual user traffic profiling) can be closely tracked, and traps can be set to allow for fault isolation during abnormal conditions. All of this, of course, assumes you invest in the appropriate test and monitoring wares, and strategically distribute these devices (probes) throughout your network. Is this Ethernet, per se? Or is this management introduced by third-party solutions?
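To make that concrete, here's a rough sketch (in Python, with a simulated byte counter standing in for whatever a real probe or agent would report; all numbers are made up) of the kind of utilization check those monitoring tools perform:

    import random
    import time

    LINK_SPEED_BPS = 10_000_000   # a 10 Mb/s Ethernet segment, for illustration
    ALARM_THRESHOLD = 0.70        # flag anything above 70% utilization

    _counter = 0

    def read_byte_counter():
        """Stand-in for polling an interface byte counter from a probe or agent.
        Simulated here with a random traffic increment."""
        global _counter
        _counter += random.randint(0, LINK_SPEED_BPS // 8)
        return _counter

    def utilization_sample(interval_s=1.0):
        """Sample the counter twice and convert the byte delta to link utilization."""
        before = read_byte_counter()
        time.sleep(interval_s)
        after = read_byte_counter()
        bits_per_second = (after - before) * 8 / interval_s
        return bits_per_second / LINK_SPEED_BPS

    u = utilization_sample()
    print("utilization %.0f%% (%s)" % (u * 100, "ALARM" if u > ALARM_THRESHOLD else "ok"))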
The statement in your question was generally regarded as true during the earlier iterations of Ethernet, when thick coax (10BASE5) was first introduced. However, the manageability of Ethernet as we know it today has undergone many enhancements. To repeat, however, fundamental Ethernet is but a set of protocols. It's been left to individual vendors to build on those protocols in order to introduce their own flavors of "management."
There are some additional standards, however, that were written to help this process along. Most notable in the LAN and internetworking space is the Simple Network Management Protocol, or SNMP. And to a far lesser extent, and mostly in ITU-sanctioned areas, there is the Common Management Information Protocol, or CMIP. CMIP is an OSI standard, which is virtually non-existent in today's Ethernet LANs.
Again, each vendor is free to adopt those aspects of SNMP that they deem to be appropriate, or economically feasible, for them.
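For a flavor of what adopting SNMP looks like from the management-station side, here is a minimal sketch assuming the open-source pysnmp library and a hypothetical agent at 192.0.2.10 with a "public" read community (both placeholders); it simply reads the standard sysDescr object:

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    # Poll one standard MIB-II object from a hypothetical device.
    error_indication, error_status, error_index, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData('public'),                   # read community (placeholder)
               UdpTransportTarget(('192.0.2.10', 161)),   # agent address (placeholder)
               ContextData(),
               ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)))
    )

    if error_indication:
        print(error_indication)
    else:
        for name, value in var_binds:
            print(name.prettyPrint(), '=', value.prettyPrint())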
Vendors have introduced port-level management, allowing administrators to remotely shut down and "turn up" individual user and backbone ports under software control, whereas this was a manual task during Ethernet's earlier stages. Here again, these features are not spelled out in the "Ethernet" standards. They are vendor-specific "implementations."
As long as they can still deliver the basic Ethernet frame, with all of its framing parameters in place, these additional features don't interfere with their products' interoperability, at least at the level of the lowest common denominator.
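As a sketch of that least-common-denominator management in action: the standard IF-MIB defines an ifAdminStatus object that many SNMP agents expose, where writing a 2 takes a port administratively down and a 1 brings it back up. The sketch below again assumes pysnmp, a placeholder agent address and write community, and an interface index of 3 chosen arbitrarily:

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, Integer, setCmd)

    error_indication, error_status, error_index, var_binds = next(
        setCmd(SnmpEngine(),
               CommunityData('private'),                  # write community (placeholder)
               UdpTransportTarget(('192.0.2.10', 161)),   # agent address (placeholder)
               ContextData(),
               # IF-MIB::ifAdminStatus.3 = 2 (down); use 1 to "turn up" the port again
               ObjectType(ObjectIdentity('IF-MIB', 'ifAdminStatus', 3), Integer(2)))
    )

    print(error_indication or 'port with ifIndex 3 is now administratively down')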
That interoperability caveat is an important one. For example, if I am using Cabletron hubs and their attendant SECURITY features, those features will not work with hubs from Cisco or Nortel, and vice versa. Ditto for virtual LAN extensions, and for enhancements to VoIP and other m-m applications.
That's why it's important to keep in mind, and why more and more shops are going with single-vendor solutions today. Shades of Big Blue are upon us.
Vendors have also enabled configuration management capabilities, allowing administrators to group users together on, or segregate them away from, segments shared with other users, by community. This, too, is achieved under software control by the network admin, by mapping users' port addresses on the backplane.
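Conceptually, that grouping is just a table the management software keeps against the backplane. A toy sketch, with invented port numbers and community names:

    # Hypothetical mapping of backplane port numbers to user communities.
    port_to_community = {
        1: "engineering", 2: "engineering",
        3: "accounting",  4: "accounting",
        5: "guest",
    }

    def move_port(port, community):
        """Re-home a port to a different community, under software control."""
        port_to_community[port] = community

    def members(community):
        """List the ports currently grouped into a given community."""
        return sorted(p for p, c in port_to_community.items() if c == community)

    move_port(5, "engineering")
    print(members("engineering"))   # [1, 2, 5]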
Some other enhancements, such as prioritization and queuing, have actually made their way into the standards. Others remain proprietary, and are useless unless the user decides to go with a single vendor. Some vendors are now proposing to implement quality of service attributes in their suites, while still others have proposed adding proprietary optical extensions to their physical layer interfaces in order to extend their overall reach. [The latter, incidentally, has been true for a long time.]
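To illustrate what prioritization and queuing buy you, here's a toy strict-priority output queue: lower numbers are served first in this sketch, so the voice frame jumps ahead of the bulk transfer. The frames and priority values are invented:

    import heapq
    from itertools import count

    _sequence = count()   # tie-breaker so frames of equal priority stay in FIFO order
    _queue = []

    def enqueue(frame, priority):
        """Queue a frame for transmission; lower numbers go out first."""
        heapq.heappush(_queue, (priority, next(_sequence), frame))

    def transmit_next():
        """Pop the highest-priority frame, if any."""
        return heapq.heappop(_queue)[2] if _queue else None

    enqueue(b"bulk file transfer", priority=5)
    enqueue(b"voice packet", priority=1)
    print(transmit_next())   # b'voice packet' is transmitted first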
"... is it possible for you to describe the mechanism by which the intended recipient "grabs" the appropriate data?"
Nothing fancy here: each Ethernet node is assigned a unique address at the media access control (MAC) layer, found on the network interface card (NIC). These are usually burned-in, permanent addresses, although addresses can sometimes be "written in" to satisfy certain applications.
A message intended for a given user will specify their MAC address. Alternatively, a sender could send a message as a "broadcast," which is often the way an administrator sends out network alerts and bulletins. In that case, all users within the targeted "broadcast domain" will receive the broadcast message, irrespective of their unique MAC addresses. Granted, this is oversimplified, but I think it explains the principles.
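A toy sketch of that "grab" decision, as a NIC might make it on each incoming frame: accept the frame if the destination MAC matches our own burned-in address, or if it is the all-ones broadcast address (multicast and promiscuous mode are left out for simplicity, and the addresses are made up):

    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def should_accept(frame_dest_mac, my_mac):
        """Decide whether this NIC should pass the frame up the stack."""
        dest = frame_dest_mac.lower()
        return dest == my_mac.lower() or dest == BROADCAST

    # Hypothetical addresses, for illustration only.
    print(should_accept("00:a0:c9:14:c8:29", "00:a0:c9:14:c8:29"))  # True: unicast to us
    print(should_accept("ff:ff:ff:ff:ff:ff", "00:a0:c9:14:c8:29"))  # True: broadcast
    print(should_accept("00:a0:c9:aa:bb:cc", "00:a0:c9:14:c8:29"))  # False: someone else's frame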
"... is it true, in a general sense, that overcapacity is the only means by which an ethernet can guarantee transmission/reception?"
Within certain constraints, especially where service types are not differentiated (i.e., in the absence of prioritization and other forms of traffic policy setting). We recently discussed this over in the FCTF. See Curtis' contribution to this topic at, and beyond:
Message 14393595
And here, too, "overcapacity," like the term "over-subscription," is a general term that needs to be more clearly defined. BTW, some shop terms used to connote overcapacity are: "head room," "breathing space," and "insurance," and they usually refer to the arithmetic difference between traffic levels at any point in time, and the maximum rated speed of the channel (or, the switching matrix, if we are discussing switching/routing node elements, as well).
So that, if a 10 Mb/s backbone were running at 30% utilization (or 3 Mb/s, on average), then the head room on that backbone could be rated at 70%, or 7 Mb/s.
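The arithmetic in that example, spelled out as a sketch:

    def head_room_mbps(link_speed_mbps, utilization):
        """Head room is simply the unused capacity on the link, in Mb/s."""
        return link_speed_mbps * (1.0 - utilization)

    print(head_room_mbps(10, 0.30))   # 7.0 Mb/s of breathing space on that 10 Mb/s backbone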
As more Ethernets turn to the switched mode, it is highly likely that you could receive very close to the full rated speed of the link, if all other engineering has been properly attended to.
Such matters include providing adequate uplink capacity (this usually refers to the rated speed of the backbone or other links to switches and servers), and ensuring that the switch itself can handle traffic in a non-blocking manner once you have configured all of the ports and tended to other network sizing considerations. Not the least important of these is the actual throughput and transaction profile of each user, and the aggregate traffic produced by all users on the switch.
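One back-of-the-envelope check that falls out of those sizing considerations: compare the aggregate of the per-port loads against the uplink's rated speed. The port count and traffic figures below are invented for illustration:

    def oversubscription_ratio(port_loads_mbps, uplink_speed_mbps):
        """Ratio of aggregate offered load to uplink capacity.
        A ratio above 1.0 means the uplink is oversubscribed at these traffic levels."""
        return sum(port_loads_mbps) / uplink_speed_mbps

    # 24 switched 10 Mb/s ports, each averaging 2 Mb/s, feeding a 100 Mb/s uplink.
    ports = [2.0] * 24
    ratio = oversubscription_ratio(ports, 100.0)
    print("offered load is %.2fx the uplink capacity" % ratio)   # 0.48x: head room remains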
The statistical mix of all of these parameters begins to speak to the essence of traffic engineering. HTH.
FAC