Technology Stocks : The *NEW* Frank Coluccio Technology Forum


To: Frank A. Coluccio who wrote (832) | 9/16/2000 9:34:53 AM
From: Curtis E. Bemis
The classic paper by Boggs, Mogul and Kent still stands. At the time, in the mid-eighties, there were many naysayers, and there were many erroneous chip-set implementations of the algorithms. In CSMA/CD media, i.e. shared media, there are still unresolved problems; one is known as the "capture effect," though there are methods to resolve it completely. Constructing CSMA/CD networks today with hundreds of stations is quite foolhardy, especially since hardware implementations can now keep up with the speed of the media. In 1988, not many chip sets could do that at even 10 Mbps; today, 1GE chip sets can. All 1GE networks I have ever seen or heard about are point-to-point, two-station networks, and most modern 10 and 100 Mbps networks are built that way as well.

I fail to see the need to rehash a decade of technology implementation and development in the IEEE 802.3 suite, nor to discuss and analyze poor deployment practices such as buying $10 multi-port repeaters at Staples and ganging them together to support hundreds of stations. Point-to-point, full-duplex 10GE implementations will be used in the LAN, MAN and WAN, just as 100 Mbps and 1GE are used today, all with great success, low cost and simplicity.
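[Editor's aside: the "capture effect" mentioned above, where the station that wins a collision resets its backoff window while the loser's window keeps growing, letting the winner monopolize the channel, can be illustrated with a toy two-station model. This is a sketch, not a faithful 802.3 simulation; slot draws stand in for the truncated binary exponential backoff, and the 16-attempt frame discard is simplified to a counter reset.]

```python
import random

def contend(att_a, att_b, rng):
    """One contention round in a two-station CSMA/CD toy model.
    Both stations transmit into an idle channel, collide, and back off;
    the smaller slot draw wins. Returns (winner, att_a, att_b)."""
    while True:
        att_a += 1                    # collision: both attempt counters grow
        att_b += 1
        if att_a > 16: att_a = 1      # "excessive collisions": frame dropped (simplified)
        if att_b > 16: att_b = 1
        a = rng.randrange(2 ** min(att_a, 10))   # window 0 .. 2^min(k,10)-1
        b = rng.randrange(2 ** min(att_b, 10))
        if a != b:
            break                     # equal draws collide again, so redraw
    if a < b:
        return "A", 0, att_b          # winner resets its counter; loser keeps its window
    return "B", att_a, 0

rng = random.Random(42)
att_a = att_b = 0
wins = []
for _ in range(20000):
    w, att_a, att_b = contend(att_a, att_b, rng)
    wins.append(w)

# Capture effect in one number: how often does last round's winner win again?
repeats = sum(prev == cur for prev, cur in zip(wins, wins[1:]))
print(f"P(last winner wins again) = {repeats / (len(wins) - 1):.2f}")
```

The asymmetry is the whole story: the winner's next window is {0, 1} while the loser's keeps doubling, so the winner tends to hold the channel for long streaks.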



To: Frank A. Coluccio who wrote (832) | 9/16/2000 11:13:07 AM
From: justone
Frank:

My understanding of the art of network simulation is that queuing theory cannot be used when you have internet-like network traffic: a large number of servers, an even larger number of clients, varying traffic demand types, and nomadic user behavior. So all you can do, I suppose, is build a highly scalable network architecture that responds rapidly to change, and find what works in the field. While that is fine inside a building, I'm worried about the expense of running out to the neighborhood and putting up routers on telephone poles (or whatever) to re-balance network loads caused by high-traffic users; and you still haven't solved the jitter problem.
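[Editor's aside: why classical queuing theory misleads here can be seen in a small simulation. Textbook results such as M/M/1 assume well-behaved exponential distributions, while internet-like traffic has heavy-tailed transfer sizes. The sketch below compares waiting times under the two assumptions using the Lindley recursion; the Pareto service distribution with alpha = 1.5 (infinite variance) stands in for heavy-tailed transfers, with both systems tuned to roughly 80% utilization.]

```python
import random

def sim_waits(service_draw, n=50_000, seed=1):
    """Single-server FIFO queue via the Lindley recursion:
    W[k+1] = max(0, W[k] + S[k] - A[k])."""
    rng = random.Random(seed)
    w, waits = 0.0, []
    for _ in range(n):
        s = service_draw(rng)            # service time of this customer
        a = rng.expovariate(1.0)         # Poisson arrivals, rate 1
        w = max(0.0, w + s - a)
        waits.append(w)
    return waits

MEAN_S = 0.8                             # mean service time -> ~80% utilization

# Case 1: exponential service times (the textbook M/M/1 assumption)
exp_waits = sim_waits(lambda r: r.expovariate(1 / MEAN_S))

# Case 2: Pareto service times, alpha = 1.5 (infinite variance), same mean
ALPHA = 1.5
XM = MEAN_S * (ALPHA - 1) / ALPHA        # scale so the mean equals MEAN_S
par_waits = sim_waits(lambda r: XM * r.paretovariate(ALPHA))

for name, ws in [("exponential", exp_waits), ("pareto", par_waits)]:
    print(f"{name:12s} mean wait {sum(ws)/len(ws):8.1f}  max wait {max(ws):8.1f}")
```

Same average load, wildly different tails: a single huge heavy-tailed transfer backs the queue up for everyone behind it, which is exactly the behavior closed-form queuing results fail to capture.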

An observation: it seems from your notes on LANs, and from what I've heard elsewhere, that ~200 clients is a rough maximum on a shared medium. Maybe this is some sort of heuristic constant that can be used for neighborhood WANs as well as LANs? Maybe 200 is the point at which you go get more bandwidth or break the network into shorter segments? We could call it the 200 rule.
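[Editor's aside: a back-of-envelope check on the 200 rule. The fair per-station share below ignores contention overhead entirely, which only makes the shared case worse; on 10 Mbps shared Ethernet, 200 stations already means dial-up-class throughput each.]

```python
def per_station_kbps(mbps, stations):
    """Fair (equal) share of a shared medium, ignoring contention overhead."""
    return mbps * 1000 / stations

for mbps in (10, 100, 1000):
    for stations in (50, 200, 500):
        print(f"{mbps:5d} Mbps / {stations:4d} stations -> "
              f"{per_station_kbps(mbps, stations):8.1f} kbps each")
```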

There is some indication that HFC (cable) should handle no more than 50-200 homes as well, not the 500-5000 sometimes claimed. Of course, the big difference between cable modems and 10G Ethernet (other than the downstream video bit) is that cable modems use a one-hop managed network and thus have a chance to control real-time packets. Cable (DOCSIS 1.1) supports collision-detection Ethernet / IEEE 802.3 upstream, but it also has differentiated services (diff-serv; they don't call it DS yet, for some reason!) capabilities, which define a set of per-hop behaviors (PHBs), supported by diff-serv-capable routers, that are superior, in my opinion, to RSVP.
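[Editor's aside: from the application side, asking for a diff-serv PHB amounts to marking the DSCP field in outgoing packets; the routers' per-hop behaviors do the rest. A minimal sketch for Unix-like systems follows; the code points come from RFC 2474/2597/3246, while the helper name is this editor's invention. Whether the network honors the marking is entirely up to the operator.]

```python
import socket

# DSCP code points (RFC 2474/2597/3246); the old TOS octet carries DSCP
# in its top six bits.
DSCP_EF   = 46        # Expedited Forwarding PHB: low delay/jitter (e.g. voice)
DSCP_AF41 = 34        # Assured Forwarding class 4, low drop precedence
DSCP_BE   = 0         # default / best effort

def dscp_to_tos(dscp):
    """DSCP occupies bits 7..2 of the IPv4 TOS octet."""
    return dscp << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Mark all packets sent on this socket as Expedited Forwarding.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(DSCP_EF))
print(f"TOS byte set to 0x{dscp_to_tos(DSCP_EF):02x}")
sock.close()
```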

See microsoft.com (from Microsoft, of all people) for a better explanation.

Of course, the best heuristic is your own simple profound bit of poetry:

"Bandwidth hogs
will kill ya every time,
theoretical predictions to the contrary,
or not."