Network Neutrality Debate: A Case for Non-Neutrality
January 2007 | Network Performance Daily
networkperformancedaily.com
Christopher Yoo, Vanderbilt University School of Law
This article continues our series examining the issue of Network Neutrality.
Professor Christopher Yoo joined the faculty of the Vanderbilt University School of Law in 1999, and his research focuses primarily on how technological innovation and economic theories of imperfect competition are transforming the regulation of electronic communications.
In addition to clerking for Justice Anthony M. Kennedy and working at the law firm of Hogan & Hartson under the supervision of now-Chief Justice John G. Roberts, Jr., he has published “Network Neutrality and the Economics of Congestion” [PDF] in the Georgetown Law Journal and “Beyond Network Neutrality” [PDF] in the Harvard Journal of Law & Technology.
We asked him to share his thoughts on Net Neutrality with us.
I think that the Internet, as it exists today, is in many ways a reflection of its history and origins as primarily a platform for academics to communicate with one another. It grew up around the NSFnet and was primarily a means for academics to exchange e-mail and text files. What has happened since the Internet became commercialized, with the privatization of the backbone and the lifting of the restrictions on commercial use, is that we've seen an explosion in the number of users. The number of possible connections grows quadratically with the total number of users (with n users there are n(n-1)/2 possible pairwise connections), so the Internet has become much more complex.
Second, the ways that people use the Internet have become much more heterogeneous. People want different services from it. Applications such as e-mail and Web pages are not particularly sensitive to delay; a delay of 500ms is almost imperceptible when you're loading an e-mail. But when you're talking about VoIP, streaming media, and online gaming, a 500ms delay can render the service completely unusable.
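To make the point about delay sensitivity concrete, here is a small illustrative Python sketch. The delay budgets are rough assumptions, not figures from the interview (the 150ms voice budget reflects the commonly cited ITU-T G.114 guidance for one-way delay).

```python
# Illustrative sketch (not from the article): rough one-way delay budgets
# for different application classes. The thresholds are assumptions chosen
# for illustration; real targets vary by deployment.

DELAY_BUDGET_MS = {
    "e-mail": 5000,        # essentially delay-tolerant
    "web page": 1000,      # noticeable but usable well past 500 ms
    "voip": 150,           # interactive voice degrades quickly beyond this
    "online gaming": 100,  # fast-paced games want very low latency
}

def usable(application: str, observed_delay_ms: float) -> bool:
    """Return True if the observed one-way delay fits the app's rough budget."""
    return observed_delay_ms <= DELAY_BUDGET_MS[application]

if __name__ == "__main__":
    for app in DELAY_BUDGET_MS:
        print(f"{app}: 500 ms delay usable? {usable(app, 500)}")
```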
The broader comment is that the architecture of the Internet is based on a thirty-year-old technology, TCP/IP. If you talk to most technologists, they believe TCP/IP is now obsolete. In fact, some degree of change in the way the Internet functions is probably inevitable - not just as a function of what users demand from the Internet, but also because the network needs to grow to reflect new technical capabilities.
My concern is that most of the Network Neutrality proposals would freeze this old world of TCP/IP in place. They would standardize the network on one set of protocols to guarantee some sort of interoperability, and standardization by itself runs the risk of locking the network into place and becoming an obstruction to technological progress. What we'll find is that protocols need to change and grow, and if different people want radically different things from the Internet, the optimal solution might be to have different solutions for different populations.
For example, as I wrote in my published work on this subject, you could see a world with three different networks optimized in three different ways - one provider running the traditional Internet aimed at e-mail and Web browsing, another provider giving priority to time-sensitive applications such as Internet telephony and streaming media, and a third providing enhanced security features. That would potentially allow multiple networks to survive, each targeting a different market segment that places a particularly high value on a particular set of features. All of these networks would continue to be interoperable with each other, but they would operate in slightly different ways by optimizing for slightly different types of services.
[NPD: How exactly would ISPs choose to give priority? Are we talking about a QoS policy, or more direct routing with fewer hops?]
There are a couple of ways that a network operator could choose to deviate from network neutrality. The first is to give a higher priority to time-sensitive applications. The IP header, for example, contains a field near the beginning - the Type of Service field - that allows you to signal the kind of application associated with a packet. This is generally not used, but it's possible for a network operator to use that field to differentiate between different kinds of traffic. There are some problems - you have to find a way to guarantee that people aren't misrepresenting the kind of traffic associated with each packet - but it's quite possible that operators could find ways to do that, or simply charge for priority routing regardless of the nature of the application associated with it.
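As an illustration of that first approach, here is a minimal Python sketch of how a sender might mark its own packets via the Type of Service/DSCP byte. The Expedited Forwarding code point (46) and the destination address are assumptions chosen for the example, and nothing obliges routers along the path to honor the marking.

```python
import socket

# Minimal sketch: mark outgoing UDP packets with a DSCP value so that a
# network willing to differentiate traffic could give them priority.
# DSCP "Expedited Forwarding" (46) is commonly used for voice; shifting it
# left by 2 bits places it in the former IP Type of Service byte.
# Whether any router honors the marking is up to the operator, and the
# IP_TOS option may not be available on every platform.

EF_DSCP = 46
TOS_VALUE = EF_DSCP << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Hypothetical destination, for illustration only (TEST-NET address).
sock.sendto(b"voice payload", ("192.0.2.10", 5060))
sock.close()
```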
Second, you could set up a completely different architecture. The best example of this that I know of is Akamai. Under the current Internet, which maintains what's often called an end-to-end architecture, the easiest way for me to explain this is: imagine you're sitting in L.A. and you put a query to CNN.com [in Atlanta]. The normal thing that would happen is that the query would go out over the public Internet, and it would go all the way to Atlanta and back. It would have to pass through several thousand miles and several possible points of congestion, and the server in Atlanta may or may not be congested itself.
Akamai avoids this problem by maintaining 14,000 servers and caching content locally at 14,000 locations around the world. They claim in public reports that they serve as much as 15% of all worldwide Internet traffic. In this case, you would put out a query [to CNN], and a local facility in Los Angeles would rewrite the query and direct it to the Akamai server located closest to you. If the Los Angeles server was congested, it might send it to the Orange County server, or Pasadena, or Santa Barbara, or one located topologically much closer. In any event, you could reroute the query in a way that avoids both the network congestion and the server congestion, and optimizes network performance.
That's not giving a higher priority to the bits. It's a fundamentally different architecture, where you're changing the way things are working.
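As a rough illustration of the redirection idea described above (not Akamai's actual system), here is a minimal Python sketch that picks the closest replica that isn't overloaded. The cache names, distances, and load figures are invented for the example.

```python
# Rough sketch of the redirection idea: instead of sending every request to
# the origin server in Atlanta, pick the closest replica that isn't
# overloaded. The caches, distances, and load figures are hypothetical.

CACHES = [
    {"name": "los-angeles", "distance_km": 10, "load": 0.95},    # congested
    {"name": "orange-county", "distance_km": 60, "load": 0.40},
    {"name": "santa-barbara", "distance_km": 150, "load": 0.20},
    {"name": "origin-atlanta", "distance_km": 3100, "load": 0.50},
]

def pick_cache(caches, max_load=0.8):
    """Choose the nearest cache whose load is acceptable; fall back to any."""
    candidates = [c for c in caches if c["load"] <= max_load]
    return min(candidates or caches, key=lambda c: c["distance_km"])

if __name__ == "__main__":
    chosen = pick_cache(CACHES)
    print(f"Serve the query from: {chosen['name']}")
```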
The last way could be some form of deep packet inspection, where you provide enhanced security or other functions inside the network. Again, that isn't priority-based routing, but it would allow a degree of non-neutrality, because under the most typical conceptions of Net Neutrality those sorts of functions aren't supposed to happen in the network.
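For a concrete picture, the sketch below is a toy illustration of payload inspection in Python. The signatures and packets are invented for illustration; real deep packet inspection systems use far richer rule sets and protocol parsers.

```python
# Toy illustration of deep packet inspection: a middlebox looks inside the
# payload (not just the headers) and drops packets matching a known-bad
# signature. Signatures and packets here are made up for the example.

BAD_SIGNATURES = [b"\xde\xad\xbe\xef", b"exploit-string"]

def inspect(payload: bytes) -> str:
    """Return 'drop' if the payload matches a known signature, else 'forward'."""
    if any(sig in payload for sig in BAD_SIGNATURES):
        return "drop"
    return "forward"

if __name__ == "__main__":
    print(inspect(b"GET /index.html HTTP/1.1"))  # forward
    print(inspect(b"...exploit-string..."))      # drop
```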
[NPD: Regarding your idea of differentiated services: Imagine - and this is true for many rural networks - a provider where there's only one set of infrastructure - or perhaps two - capable of delivering broadband. When the company that controls the infrastructure and the Internet account provider are different, which one would control the neutrality policies for the end-user's network?]
The traditional policy analysis is to look for the level of the chain of production that is most concentrated. That's where you have the problem. The most concentrated link in this chain of production is last-mile transmission. I would say that ISP services - e-mail hosting and those sorts of things - are relatively competitive and the barriers to entry are fairly low; there's no reason we couldn't have multiple services. There is also a much larger number of backbone providers than last-mile providers. The real chokepoint is last-mile transmission. It's not just who provides the accounts; it's who transmits the packets over the last leg, from the cable head-end or the telephone company's central office to residences or businesses.
[NPD: You mentioned that TCP/IP was, in your opinion, an “obsolete” protocol. We've recently covered a story about Windows Vista's Compound TCP, which reworks much of TCP's standard congestion-control behavior. There are many different versions of TCP available. Do you think there's still room for TCP improvements, or do we need a new protocol?]
I don't think anyone knows what the optimal protocol should be. What I can tell you is that the optimal protocol will change as technology changes. Consider what's happened in our lifetime. There have been times when memory and storage were the constraining factors. Network bandwidth was not the constraint at all.
We've now moved to a world where we can cache things locally, active and storage memory have become very cheap, and we have a different set of optimization parameters. In a real sense, these are complements of each other: functions that used to be carried out by the network can now be handled by local processing or remote processing, and that balance is constantly changing.
No matter what protocol you pick, it inevitably will favor certain applications and disfavor others. If you think about classic TCP/IP, it routes on a best-effort, first-come, first-served basis. That necessarily hurts time-sensitive applications and applications that require QoS. We could standardize on another protocol, but that in turn would have its own biases.
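To make that bias concrete, here is a small Python sketch contrasting first-come, first-served forwarding with a scheduler that favors time-sensitive packets. The packet labels and priority mapping are illustrative assumptions, not part of any real protocol.

```python
import heapq
from collections import deque

# Sketch contrasting the two scheduling ideas discussed above. A FIFO
# (best-effort) queue serves packets in arrival order, so a voice packet can
# sit behind bulk transfers; a priority queue serves time-sensitive traffic
# first at the cost of delaying everything else.

arrivals = [("bulk", "file chunk 1"), ("bulk", "file chunk 2"),
            ("voice", "voip frame"), ("bulk", "file chunk 3")]

PRIORITY = {"voice": 0, "bulk": 1}  # lower number = served sooner

def fifo_order(packets):
    q = deque(packets)
    return [q.popleft() for _ in range(len(q))]

def priority_order(packets):
    heap = [(PRIORITY[kind], i, (kind, data))
            for i, (kind, data) in enumerate(packets)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

if __name__ == "__main__":
    print("FIFO:    ", fifo_order(arrivals))
    print("Priority:", priority_order(arrivals))
```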
The concern I have is that it's not clear to me that we should be requiring the network to standardize on any one protocol. There's a tremendous incentive for networks to be interoperable with each other, but the fact that, in the details, some of them might operate on slightly different optimization or routing principles is probably a good thing. I don't have a great deal of confidence that any set of experts, whether in government or in industry, would be able to anticipate correctly what the ideal protocol should be for today. It's also not clear to me, if people want different things from the Internet, that it should all run on a single protocol. Even if the experts were somehow able to get it all perfectly right today, they'd have to keep pace with every subsequent change: the protocol that's right today would be obsolete tomorrow, and they'd have to make the necessary adjustments quickly. That's very hard to do.
The better question, to me, is: “Do we have any reason to believe non-neutrality is a huge problem right now?” The worry behind that question is that there isn't a lot of pressure on network operators to adapt in exactly the ways we would want them to.
My read on it is that there is a lot of pressure on network operators to deliver value to their consumers. To put it in a different context, an example that someone gave to me: another aspect of network neutrality is device attachment, and people often mention the Treo.
The Treo had a hard time finding a carrier willing to take it on, and many people cite that as an example of the kind of problem network neutrality rules are meant to address. I think it was T-Mobile, the weakest of the carriers, that took a chance, hooked up the Treo, and started supporting it. Guess what? It took off and was extremely successful, and now Verizon and all the other providers are supporting it as well.
That makes total sense to me. What we have is a world in which new devices are coming on all the time. I'm sure every device manufacturer thinks its new device is a wonderful and great thing. But not all of them are, and it takes the test of the market to find out.
[The Treo] found a provider that took a chance on them and realized, “This is our chance [as a market-trailer] to invest in a strategic partnership and shake something loose to increase our market position.” They found their market, they found a partner, they proved themselves in the market, and now other carriers are picking them up. That process of experimentation is healthy, because no one knew the Treo was going to be the successful device it turned out to be. The Treo's makers went to all the different wireless providers and finally convinced one to take a chance on them on the merits of what they were doing. Now you have a very successful technology. I think that's a tremendous success story about how decentralized decision making can help promote technological progress.
[NPD: Is there anything else you wanted to add?]
I forgot to mention this when talking about the Akamai example - Akamai is able to provide service with lower latency and higher quality because it distributes the content, and that also provides greater protection against DoS attacks. It's a local-storage solution rather than one that adds bandwidth, and it's a really interesting solution. Here's the rub: Akamai is a commercial service and is only available to people who are willing to pay for it. If CNN.com pays for it and MSNBC.com does not, CNN.com will get better service.
Establishing and maintaining 14,000 local caches all over the Internet takes money. There are a lot of costs there, and we would expect that whoever benefits from creating those caches is going to have to finance them. In a real sense, I don't see that as a huge problem; I see the ability to charge for those services as creating a new technological solution that takes a lot of pressure off the Internet. It may not be the right solution for today, and it may not be the right solution for every Web site, but it's certainly a very interesting one we should explore.
And that's a classic example of non-neutrality, where one Web site is able to get faster service because it's willing to pay more money for a slightly different architecture - one that violates the end-to-end principle, because it requires rewriting queries locally, putting intelligence in the network in a way that network neutrality proponents traditionally abhor.
This is exactly the kind of experiment that I think is extremely promising. Some people think it may displace the conventional Internet as we know it altogether, as a new form of overlay network - much as the Internet was an overlay network on top of the phone system, has now largely replaced the phone system, and has made telephony just an application riding on a multipurpose network. It could be that Akamai, certainly for Web-page browsing and some other functions, might provide a vastly superior solution. At a minimum, it creates a different set of tradeoffs between maintaining server infrastructure and maintaining long-haul networking infrastructure.
As the relative costs of those two components change, you would expect those two solutions to change as well.
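To illustrate that last point, here is a back-of-the-envelope Python sketch comparing serving all traffic from a distant origin with deploying local caches. Every number in it is a made-up placeholder; the only purpose is to show how the break-even point moves as the relative costs shift.

```python
# Back-of-the-envelope sketch of the tradeoff just described. All numbers
# are hypothetical placeholders; as the relative cost of long-haul transit
# versus local caching shifts, the preferred solution shifts with it.

def cost_origin_only(tb_served: float, transit_cost_per_tb: float) -> float:
    """All traffic crosses the long-haul network from the origin."""
    return tb_served * transit_cost_per_tb

def cost_with_caches(tb_served: float, transit_cost_per_tb: float,
                     cache_hit_rate: float, cache_fixed_cost: float) -> float:
    """Cache hits are served locally; only misses cross the long-haul network."""
    misses_tb = tb_served * (1.0 - cache_hit_rate)
    return misses_tb * transit_cost_per_tb + cache_fixed_cost

if __name__ == "__main__":
    tb = 500.0  # hypothetical monthly traffic in terabytes
    for transit in (5.0, 20.0, 80.0):  # hypothetical $ per TB of transit
        a = cost_origin_only(tb, transit)
        b = cost_with_caches(tb, transit, cache_hit_rate=0.9,
                             cache_fixed_cost=3000.0)
        better = "caches" if b < a else "origin only"
        print(f"transit ${transit}/TB: origin ${a:,.0f} vs caches ${b:,.0f} -> {better}")
```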
Network Performance Daily plans to continue coverage of this issue on Monday, when we will publish remarks from Art Brodsky, Communications Director at Public Knowledge.
Other articles in this series:
* Network Neutrality Debate: An Introduction and Discussion networkperformancedaily.com