Technology Stocks : Wind River going up, up, up!


To: Allen Benn who wrote (2900), 3/15/1998 11:41:00 PM
From: Allen Benn
 
The Evolution of the Internet - Part 2: Conflicting Theories

DARPA believes that Moore's Law mandates that sufficient processing power soon will be available to handle smart packets effectively. Since processors continue to double in capacity every 18 months, they claim that worries about the processing load associated with handling smart packets are misplaced, and surely only temporary. Today's nodal processors may choke on the extra processing requirements, but not tomorrow's.

Unfortunately for DARPA, invoking Moore's Law applies logically only in a static communication environment. As George Gilder is now fond of saying, bandwidth is expanding even faster than Moore's Law. This suggests that processors would fall farther and farther behind over time, completely overwhelmed by an ever-increasing onrush of Internet traffic. If correct, George Gilder's newfound law of bandwidth growth conceivably is the kiss of death for smart packets.
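To make the growth-rate argument concrete, here is a small sketch. The doubling periods are illustrative assumptions, not figures from this post: processing is assumed to double every 18 months (Moore's Law), and backbone bandwidth every 9 months (standing in for Gilder's claim that bandwidth outgrows processing).

```python
# Illustrative comparison of processing power vs. backbone bandwidth growth.
# Assumed doubling periods: 18 months for processing, 9 months for bandwidth.

def growth(months, doubling_period_months):
    """Relative capacity after `months`, starting from 1.0."""
    return 2 ** (months / doubling_period_months)

for years in (1, 3, 6):
    months = years * 12
    cpu = growth(months, 18)   # processing power
    bw = growth(months, 9)     # backbone bandwidth
    print(f"after {years} yr: processing x{cpu:.1f}, "
          f"bandwidth x{bw:.1f}, gap x{bw / cpu:.1f}")
```

Under these assumptions the gap itself doubles every 18 months: after 3 years processing is up 4x but bandwidth is up 16x, which is the sense in which processors "fall farther and farther behind."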

Or is it? It turns out that life is not quite as simple as Gilder represents. Backbone communication bandwidth may well continue to outgrow Moore's Law, but that growth probably will not extend to the extremities of the Internet, or so says Dr. Tennenhouse, the MIT professor on loan to direct DARPA/ITO. The outlook for expanding bandwidth is not so sanguine for thin clients operating on the extremities. The backbone may be traveling at light speed, but hand-held, portable devices will be limited to relatively snail-paced modems struggling to download multimedia data in human time. In terms of sheer numbers, the vast majority of future devices will operate on the low-bandwidth extremities of the Internet. The cumulative effect of billions of low-bandwidth devices added to broadband users will place huge demands on backbone communications, which will need all the extra horsepower promised by Gilder.

And there is another theory that can't be ignored. Some claim that the Internet has succeeded mainly because its architecture is simplicity personified. Simplicity may be the Internet's most important feature, and it is the reason the Internet survives without centralized organization and control. Add smart packets, and the Internet may suffer blackouts as packets run amok and add confusion. If the Internet fails to continually improve reliability and speed, users will be forced to seek expensive proprietary substitutes, as they did before the emergence of the Internet.

So, who is right, DARPA or the critics? Are smart packets a useful, even necessary, addition to the Internet, or would they just get in the way, slow things down, and maybe even make the Internet fragile and subject to purposeful or accidental sabotage?

The answer is that both are correct. There is a big role for smart packets in the Internet's future, but those packets must stay on the extremities and off the backbone, so as not to impede the growth of bandwidth or make the Internet less trustworthy.

How will this be accomplished? Smart packets will be implemented in such a way that, after leaving a source on the extremity, an individual smart packet will swim upstream, sticking to intelligent processing nodes and leaving a trail to be followed by subsequent packets from the same source. When the packet reaches a gateway to the high-bandwidth backbone, or the end of the line of intelligent processing nodes, it could do one of two things, the choice depending on expectations about the packet-processing capability at or near the server. First, the smart packet could simply transform itself into a standard IP packet and enter the backbone for delivery. Second, the smart packet could choose to remain in program format, but enter the backbone by wrapping itself in the cloth of a standard IP packet. In the second case, the true nature of the packet will be discovered on the other end, when another intelligent processing node realizes that the packet needs to be processed, not just passed along.
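The two choices described above can be sketched in code. This is a hypothetical illustration of the idea, not any actual protocol: the class names, fields, and the `smart_node_near_dest` flag are all invented for the example.

```python
# Sketch of the two ways a smart packet might cross the "dumb" backbone.
# All names are hypothetical; this illustrates the idea, not a real protocol.

from dataclasses import dataclass

@dataclass
class IPPacket:
    dest: str
    payload: bytes          # opaque bytes as far as the backbone is concerned

@dataclass
class SmartPacket:
    dest: str
    program: bytes          # mobile code, run at intelligent processing nodes
    result: bytes           # data computed so far along the trail

    def enter_backbone(self, smart_node_near_dest: bool) -> IPPacket:
        if smart_node_near_dest:
            # Option 2: remain a program, but tunnel inside a plain IP
            # packet; a processing node on the far side unwraps and runs it.
            return IPPacket(self.dest, b"SMART" + self.program)
        # Option 1: give up intelligence and transform into a standard
        # IP packet carrying only the computed result.
        return IPPacket(self.dest, self.result)

pkt = SmartPacket("server.example", program=b"filter()", result=b"data")
plain = pkt.enter_backbone(smart_node_near_dest=False)
print(type(plain).__name__, plain.payload)   # IPPacket b'data'
```

Either way, the backbone sees only an ordinary IP packet, which is the crux of the argument in the next paragraph.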

Since smart packets will always enter the backbone of the Internet as benign IP packets, they will not weigh it down with any requirement for extra processing, nor can smart packets make the Internet any less reliable or trustworthy. This will be enforced because intelligent processing nodes effectively will be prohibited on the backbone. The mechanism policing this prohibition will be Gilder's observation that backbone bandwidth is outgrowing processing speeds, rather than anything formal. If Gilder is correct, economics and physics will force the Internet to abstain from promulgating intelligent processing nodes, but only on the backbone. Where processing capacity outgrows bandwidth, smart packets will become pervasive.

Remarkably, smart packets can be implemented in sub-nets on the extremities of the Internet today without anyone being the wiser, and with no possibility of harming traditional Internet communications. Even though they represent a revolution for the Internet, they can and will develop evolutionarily without any possibility of negatively impacting its traditional operation.

Now let's look at implementation considerations.

Allen