To: Frank A. Coluccio who wrote (832) 9/16/2000 11:13:07 AM
From: justone

Frank: My understanding of the art of network simulation is that queuing theory cannot be used when you have Internet-like network traffic: a large number of servers, a still larger number of clients, varying traffic demand types, and nomadic user behavior. So all you can do, I suppose, is build a highly scalable network architecture that responds rapidly to change and find what works in the field. While that is fine inside a building, I'm worried about the expense of running out to the neighborhood and putting up routers on telephone poles (or whatever) to re-balance network loads caused by high-traffic users: and you still haven't solved the jitter problem.

An observation: it seems from your notes on LANs, and from what I've heard elsewhere, that ~200 clients is a rough maximum on a shared medium. Maybe this is some sort of heuristic constant that can be used for neighborhood WANs as well as LANs? Maybe 200 is the point at which you go get more bandwidth or break the network into shorter segments? We could call it the 200 rule. There is some indication that HFC (cable) should likewise serve no more than 50-200 homes per segment, not the 500-5000 sometimes claimed.

Of course, the big difference between cable modems and 10G Ethernet (other than the downstream video bit) is that cable modems use a one-hop managed network and thus have a chance to control real-time packets. Cable (DOCSIS 1.1) supports collision-detection Ethernet / IEEE 802.3 upstream, but it also has differentiated services (diff-serv; they don't call it DS yet, for some reason!) capabilities, which define a set of per-hop behaviors (PHBs), supported by diff-serv-capable routers, that are superior, in my opinion, to RSVP. See microsoft.com, from Microsoft of all people, for a better explanation.
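For what it's worth, the diff-serv mechanism is just a 6-bit codepoint (the DSCP) carried in the IP header's old TOS byte; each router along the path maps the codepoint to a PHB. A minimal sketch in Python of how an endpoint would mark its traffic (the codepoint values are the standard EF and AF41 assignments; the `mark_socket` helper and the PHB names used as dictionary keys are my own illustration, not anything DOCSIS- or Microsoft-specific):

```python
import socket

# Standard DSCP codepoints (6 bits). The socket API expects the full
# 8-bit TOS byte, so the DSCP is shifted left past the two low-order
# bits (unused by diff-serv).
DSCP = {
    "default": 0,   # best-effort PHB
    "AF41": 34,     # assured forwarding, class 4, low drop precedence
    "EF": 46,       # expedited forwarding: low loss, latency, jitter
}

def tos_byte(phb: str) -> int:
    """Return the TOS byte value that encodes the given PHB's DSCP."""
    return DSCP[phb] << 2

def mark_socket(sock: socket.socket, phb: str) -> None:
    """Ask the local stack to mark this socket's outgoing packets.
    Routers honor the marking only if they are diff-serv capable."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte(phb))
```

The point of the per-hop model is visible even in this sketch: the sender sets one byte once, and every diff-serv router applies its configured behavior to the packet, with no per-flow reservation state to set up or refresh the way RSVP requires.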
Of course, the best heuristic is your own simple profound bit of poetry: "Bandwidth hogs will kill ya every time, theoretical predictions to the contrary, or not. "