Interesting chitchat on the xDSL reflector re QoS and the Stupid Network ... Doc Stafford and Jim Southworth ringing in ... lead-in below ... posted here as a tutorial.
IS QUALITY OF SERVICE NECESSARY? AT&T drives to control net via technology but AT&T Labs researcher finds simpler is better.
By David S. Isenberg
Box: [Simply adding bandwidth could turn out to be the cheapest approach.]
AT&T carries the burdens of incumbency in a world exploding with disruptive technology. My concept of a Stupid Network tries to explain the disruptions that telcos must face, but AT&T still doesn't seem to get it.
Recently Dan Sheinbein, AT&T's vice president of network architecture & development, told the Star-Ledger (Newark, NJ) that the Stupid Network has "not been a particularly active area of discussion" at AT&T lately. Even more recently, Sheinbein told me, "On balance, AT&T's network is getting smarter."
To me, the Stupid Network - the dumb transport component of people's applications, designed simply to "deliver the bits, stupid" - is a consequence of the new abundance created by technology's headlong spurt. It's enabled by the Internet Protocol (IP). (See isen.com for details.)
AT&T Chairman Mike Armstrong says he's embraced IP, but his strategy is clearly "intelligent." He says that if AT&T controls the interfaces, specifications, protocols, standards and platforms of the network, it can weave them into a set of seamless services. If AT&T could pull this off, it would be able to hold back the rising tide of commoditization and reglue the delaminating value proposition. But to do that, somehow Armstrong would have to get AT&T back into the equipment game, stamp out IP, and repeal Moore's law.
THE APPARENT NEED FOR QUALITY OF SERVICE
At the edge of Armstrong's awareness, AT&T Labs mathematician Andrew Odlyzko is researching the economics of networks. He is no Stupid Network ideologue. In fact, he used to believe that the Internet needed such "intelligent" complications as Quality of Service (QoS) and differential pricing. Both of these make networks treat different kinds of data differently.
But now Odlyzko's research has led him to the conclusion that simpler is better. Odlyzko, who came to Bell Labs Research 23 years ago straight from his MIT doctorate, has convinced himself that simply adding bandwidth could "turn out to be the cheapest approach when one considers the costs of QoS solutions for the entire information technologies industry."
Internet telephony, introduced in 1995, made the apparent need for QoS acute. Until then, Internet traffic consisted of email and file transfers, later joined by web pages. For these applications, fast transmission is nice, but delay does not make them unusable. Not so with Internet telephony - people simply can't hold a conversation when there's more than a few hundred milliseconds of delay.
Differential pricing is the first cousin of QoS. If you have different levels of service, you need some motivation for people to use the lower-grade service. Otherwise, the argument goes, people will always use the best service whether they need to or not.
But now Odlyzko thinks that even simple QoS schemes may be too complex. Two years ago, he proposed a very simple QoS plan. It used only differential pricing. He called it Paris Metro Pricing (PMP), after the Parisian subway's practice of letting people who pay more ride in "first class" cars. These cars are physically identical to the others, and less crowded only because they cost more. In Odlyzko's vision, a PMP Internet would have two identical, parallel channels, and one would be designated "first class." It would cost more, so it'd have less traffic and provide better service. But Odlyzko now says that administering parallel channels would add more complexity than users or service providers desire.
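A back-of-the-envelope sketch of the PMP idea, with assumed numbers (the total load, the load split, and the M/M/1 delay model are illustrative, not from Odlyzko's paper): when pricing pushes most traffic onto the cheaper channel, the physically identical first-class channel runs lightly loaded and gives much lower queueing delay.

```python
# Two identical channels; price splits the traffic unevenly. Delay is
# modeled with the M/M/1 formula 1/(1 - load), in units of service
# time. All numbers here are illustrative assumptions.

def mm1_delay(load):
    """Mean M/M/1 sojourn time at the given load (0 <= load < 1)."""
    return 1.0 / (1.0 - load)

total_load = 1.2            # total offered load, shared by two channels
first_class_share = 0.25    # fraction willing to pay the premium

first_delay = mm1_delay(total_load * first_class_share)          # load 0.3
economy_delay = mm1_delay(total_load * (1 - first_class_share))  # load 0.9

print(f"first class: {first_delay:.2f}, economy: {economy_delay:.2f}")
```

With these assumed numbers the first-class channel runs at 30% load and the economy channel at 90%, so the first-class delay is roughly seven times lower even though the channels are physically identical.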
100-LANE HIGHWAY, A FEW FAST CARS
Lightly loaded networks don't need QoS. They're adequate even for Internet telephony. Odlyzko found that on most data nets, traffic is surprisingly light. (His analogy for the typical corporate Intranet is "a 100-lane highway [for] a few fast cars.") Also, he says, other work showed only 40% of Internet congestion is due to transmission bottlenecks, and only a very few choke points are to blame.
As intelligence migrates to the edges of the Internet, so does network administration, Odlyzko says, "where it is wastefully duplicated," at great expense because it requires human expertise. He concludes that, "The complexity of the entire Internet is so great, that the greatest imperative should be to keep the system as simple as possible. The costs of QoS or pricing schemes are high, and should be avoided . . . we should seek the simplest scheme that works . . . "
And that simplest scheme, Odlyzko says, involves flat rate pricing and over-provisioned, lightly loaded networks with a single grade of best-effort service. This scheme takes advantage of rapidly improving routing and transmission technologies, and it doesn't mess with any of the properties that made the Internet great. But it'll be a hard one for AT&T to control.
[The article above appeared as Intelligence at the Edge #7, which is Isenberg's monthly column in America's Network, on March 1, 1999. Odlyzko's work can be found at research.att.com. Isenberg (http://isen.com/) thanks Jock Gill (http://www.penfield-gill.com) for comments on an earlier draft. Copyright 1999 Advanstar.]
[the general rebuttal]
The "more bandwidth" versus QoS debate has been going on for a while now. It especially heated up when Gigabit Ethernet challenged ATM in the LAN a couple of years ago. Contrary to what proponents of adding "more bandwidth" say, continually increasing shared bandwidth to meet application performance requirements is neither cost-effective nor easy. With the "more bandwidth" approach, deterministic bandwidth allocation is the only way to guarantee a performance level for an application and ensure service degradation (excessive packet/cell congestion and discard) does not occur when multiple applications transmit data at the same time. With deterministic bandwidth allocation, the available bandwidth must equal the sum of the peak transmission rates of all applications.
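That requirement can be put in numbers. The sketch below uses the figures from the DSL example discussed further down (200 subscribers, 1 Mbps peak each, an OC-3 uplink) to show why peak-rate provisioning outgrows the link:

```python
# Deterministic (peak-rate) allocation: the uplink must be provisioned
# for the sum of every subscriber's peak rate. Numbers are taken from
# the DSL example in this rebuttal: 200 VCs at 1 Mbps peak, OC-3 uplink.

peak_rates_mbps = [1.0] * 200     # 200 subscribers, 1 Mbps peak each
oc3_mbps = 155                    # nominal OC-3 capacity

required_mbps = sum(peak_rates_mbps)
print(f"capacity required: {required_mbps:.0f} Mbps")  # 200 Mbps
print(required_mbps > oc3_mbps)                        # True: one OC-3 is not enough
```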
The following DSL example is illustrative: DSLAMs cross-connect ATM virtual circuits (VCs) terminating at customer premises to ATM switches in a transport network. Multiple VCs are statistically multiplexed onto a single, high-speed uplink (typically an ATM DS3 or OC-3) to an ATM switch. In the typical case, VCs receive no QoS guarantees, and traffic is transported using the UBR service class, which provides best-effort service only. UBR connections have no QoS or traffic contracts within a network and are not subject to connection admission control (CAC) or usage parameter control (UPC). The only limitation on a UBR connection is its peak transmission rate. UBR sources can transmit cells at any rate up to their peak rates. If there is no resource (bandwidth) available to transport UBR cells, they are discarded. Because DSLAMs provide limited or no QoS, oversubscription of the uplink trunk results in degraded service for users, with the severity depending on end-user traffic patterns. Since UBR is the most common class of service supported on DSLAM uplink trunks, deterministic bandwidth allocation is the only way to ensure service degradation (excessive cell congestion and discard) does not occur when multiple users transmit data at the same time.
Deterministic bandwidth allocation does not take advantage of ATM's inherent statistical multiplexing capabilities, and restricts utilization of network resources. For example, an OC-3 (155 Mbps) uplink can support 155 users (VCs) at an access rate of 1 Mbps per user. Each user can transmit data at a rate of 1 Mbps without contending for bandwidth with other users. If the number of users (VCs) assigned to the single OC-3 uplink is increased to 200, and each user transmits at a rate of 1 Mbps, users will receive bandwidth on a first-come, first-served basis. Some users will be unable to transmit (their cells will be discarded) until users who cease transmitting make bandwidth available. For bursty data traffic, like Internet browsing, deterministic bandwidth allocation is an inefficient way to guarantee bandwidth availability for users. Large amounts of bandwidth sit idle except when users are downloading at their peak rates.
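The inefficiency for bursty traffic is easy to quantify. In the sketch below the 10% duty cycle (users averaging a tenth of their peak rate while browsing) is an assumption for illustration, not a figure from the text:

```python
# Utilization under peak-rate reservation for bursty traffic: reserve
# 1 Mbps per user, but assume users average only a tenth of that while
# browsing. The 10% duty cycle is an illustrative assumption.

users = 155
peak_mbps = 1.0
avg_mbps = 0.1      # assumed average rate for bursty web traffic

reserved_mbps = users * peak_mbps   # bandwidth set aside on the uplink
carried_mbps = users * avg_mbps     # bandwidth actually used on average

print(f"average utilization: {carried_mbps / reserved_mbps:.0%}")  # 10%
```

Under these assumptions, nine-tenths of the reserved uplink capacity sits idle at any given moment.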
Deterministic bandwidth allocation is also costly. In the example above, in order to guarantee 1 Mbps service to all 200 users (VCs), the service provider would need to provision another DSLAM with an OC-3 uplink and an additional OC-3 port on the destination ATM switch. Another DSLAM is required because DSLAMs typically do not support more than one active DS3 or OC-3 uplink. During initial deployment of DSL access networks, where the number of users (VCs) is small, oversubscription of DSLAM uplinks may not cause performance degradation, depending on user traffic patterns. However, as more and more users are brought onto the network, contention for uplink bandwidth on DSLAMs will increase, resulting in rising cell loss ratios (CLRs) and declining performance.
In order to take advantage of the statistical multiplexing capability of ATM, multiple QoS parameters must be supported. Compared to increasing network capacity by provisioning additional DSLAMs with DS3 and OC-3 uplinks in access locations and adding corresponding DS3 and OC-3 ports on ATM switches in the transport network, implementing QoS is a cost-effective way to maximize the amount of users supported on network resources.
With QoS, each VC can be assigned a traffic contract that specifies the characteristics of a connection between a subscriber and the ATM network. Under the contract each VC can be allocated an amount of bandwidth that is less than the VC's peak transmission rate, but more than its average transmission rate (hereinafter referred to as the guaranteed bandwidth of the connection). As long as the sum of the guaranteed bandwidths of connections is equal to or less than the uplink bandwidth, it is possible for the sum of the peak transmission rates of the VCs to be greater than the uplink bandwidth. Bandwidth efficiency derived from statistical multiplexing increases as the guaranteed bandwidths of the VCs approach their average transmission rates, and decreases when they approach their peak rates. Using QoS in conjunction with statistical multiplexing enables service providers to guarantee bandwidth availability on uplink ports to more users (VCs) without proportionally increasing capacity, i.e., the number of DSLAMs and DS3 and OC-3 uplink ports required to connect to backbone ATM switches.
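The admission rule described above can be sketched as a small check: a new VC is admitted only if the sum of guaranteed bandwidths still fits the uplink, even though the sum of peak rates may exceed it. The function and numbers below are illustrative, not an actual DSLAM or switch API:

```python
# Sketch of a connection admission check based on guaranteed bandwidth:
# admit a new VC only if all guarantees still fit the uplink. Peak
# rates are deliberately ignored; they may oversubscribe the link.

def admit(guarantees_kbps, new_vc_kbps, uplink_kbps):
    """Return True if the new VC's guarantee fits alongside the others."""
    return sum(guarantees_kbps) + new_vc_kbps <= uplink_kbps

uplink_kbps = 155_000                        # OC-3, in kbps
vcs = [768] * 200                            # 200 VCs, 768 kbps guaranteed each

print(admit(vcs, 768, uplink_kbps))          # 201 * 768 = 154,368 kbps -> True
print(admit(vcs + [768], 768, uplink_kbps))  # 202 * 768 = 155,136 kbps -> False
```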
Applying QoS to the example above enables a single OC-3 uplink to provide guaranteed bandwidth to more than 155 users (VCs). If the average bit rate of 155 users is 512 kbps and their peak rate is 1 Mbps, QoS can be used to provide guaranteed bandwidth of 768 kbps to 155 users. An additional 45 users, each receiving guaranteed bandwidth of 768 kbps, can be added to the OC-3 uplink. All 200 users are guaranteed 768 kbps, and are able to burst to 1 Mbps provided resources are available. Even if users one through 155 simultaneously burst to their peak rate of 1 Mbps, QoS ensures resources are made available for users 156 through 200 to transmit at any rate up to their guaranteed bandwidth of 768 kbps.
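The arithmetic in that example checks out, as a quick computation shows (the kbps values are taken directly from the paragraph above):

```python
# Verifying the example: 200 VCs on one OC-3, each guaranteed 768 kbps
# with a 1 Mbps peak. Guarantees fit the uplink; peaks oversubscribe it.

users, guaranteed_kbps, peak_kbps, oc3_kbps = 200, 768, 1000, 155_000

total_guaranteed = users * guaranteed_kbps   # 153,600 kbps <= 155,000
total_peak = users * peak_kbps               # 200,000 kbps >  155,000

print(total_guaranteed <= oc3_kbps)   # True: all guarantees fit
print(total_peak > oc3_kbps)          # True: peaks rely on statistical muxing
```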
Tom Mitchell Dir., Product Management Promatory Communications |