

To: Raymond Duray who wrote (4011), 9/29/2001
From: Frank A. Coluccio
 
Hello Ray,

re: "Now I don't know what is happening in your world, but ..."

I hope your question was rhetorical. I don't find myself aligning with this initiative; actually, I'm very uncomfortable with it. Nor do I fully understand its implications.

IMO, many of us (I raise my hand here) have strolled through the formative years of the commercial Internet fat, dumb and happy, surfing and e-mailing without much thought about its underlying philosophical implications, and with a sense of immunity from ever having to think about such matters.

The time may have arrived when we have to revisit and re-examine the basic tenets of the Internet, this time a lot more closely, before some band of thieves walks off with it while holding us prisoner to our own ignorance.

But alas, is there a forum, or a referendum, in which to voice those concerns and grievances? Ironically, because of its unregulated status, the Internet doesn't lend itself to citing the usual kinds of precedent. There is a major-league conundrum unfolding here, as you can see: in effect, regulated entities are seeking to enter an unregulated domain with the goal of controlling it.

Re: your friend's identity crisis resulting from cyber fraud

We, too, were recently victimized by a credit card defrauder. In my case, however, I've been able to ascertain the method that was used: a crooked checkout clerk at a cosmetics counter. Nothing fancy, no high tech, easy to trace to its roots, but extremely frustrating to clear up nonetheless.

You have bozos in OR; we have bimbos in NY. This bimbo was stupid enough to botch the delivery of certain items she had purchased, having them sent to "my" home [hence the difficulty in proving that those were illicit purchases, because one of my family members signed for them when UPS rang the bell!]. Other items she was dumb enough to have sent to "her" home, telephone numbers validated and all. Enough time wasted on that, though.
---

Getting back to basics,

I've recently been looking more closely at the basic issues concerning "trust" on the Internet. I found the treatment below, by Marjory S. Blumenthal and David Clark, extremely helpful in sorting out many issues that had gone uncataloged in my earlier thinking: anonymity, intermediation, the end to end argument, hybridization of network models, walled gardens, and the changing dynamics of the Internet under new pressures and influences. I think you and others will find it interesting, and I hope helpful, as well:

ana.lcs.mit.edu

"Rethinking the design of the Internet:
The end to end arguments vs. the brave new world"


Marjory S. Blumenthal
Computer Science & Telecommunications Board, NRC
mblument@nas.edu

David D. Clark
M.I.T. Lab for Computer Science
ddc@lcs.mit.edu

A version of this paper to appear in the ACM Transactions on Internet Technology
A version also to appear in Communications Policy in Transition: The Internet and Beyond,
edited by Benjamin Compaine and Shane Greenstein, MIT Press, Sept. 2001

Abstract

This paper looks at the Internet and the changing set of requirements for the Internet that are
emerging as it becomes more commercial, more oriented towards the consumer, and used for a
wider set of purposes. We discuss a set of principles that have guided the design of the Internet,
called the end to end arguments, and we conclude that there is a risk that the range of new
requirements now emerging could have the consequence of compromising the Internet’s original
design principles. Were this to happen, the Internet might lose some of its key features, in
particular its ability to support new and unanticipated applications. We link this possible
outcome to a number of trends: the rise of new stakeholders in the Internet, in particular Internet
Service Providers; new government interests; the changing motivations of the growing user base;
and the tension between the demand for trustworthy overall operation and the inability to trust
the behavior of individual users.

Introduction

The end to end arguments are a set of design principles that characterize (among other things)
how the Internet has been designed. These principles were first articulated in the early 1980s,
and they have served as an architectural model in countless design debates for almost 20 years.

The end to end arguments concern how application requirements should be met in a system.
When a general purpose system (for example, a network or an operating system) is built, and
specific applications are then built using this system (for example, e-mail or the World Wide
Web over the Internet), there is a question of how these specific applications and their required
supporting services should be designed. The end to end arguments suggest that specific
application-level functions usually cannot, and preferably should not, be built into the lower
levels of the system—the core of the network. The reason why was stated as follows in the
original paper:

"The function in question can completely and correctly be implemented only with the
knowledge and help of the application standing at the endpoints of the communications system.
Therefore, providing that questioned function as a feature of the communications systems itself is
not possible."


In the original paper, the primary example of this end to end reasoning about application
functions is the assurance of accurate and reliable transfer of information across the network.
Even if any one lower level subsystem, such as a network, tries hard to ensure reliability, data
can be lost or corrupted after it leaves that subsystem. The ultimate check of correct execution
has to be at the application level, at the endpoints of the transfer. There are many examples of
this observation in practice.
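
To make that observation concrete (a minimal sketch of my own, not from the paper, with hypothetical file paths), an application can confirm a correct transfer only by comparing checksums computed at the two endpoints, regardless of how reliable each hop inside the network claims to be:

import hashlib

def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# The sender computes a digest before the transfer and the receiver
# recomputes it afterward. Only this endpoint-to-endpoint comparison
# confirms a correct transfer; per-hop reliability inside the network
# cannot, because data can still be corrupted after leaving any one hop.
sent = file_digest("outgoing/report.dat")      # hypothetical paths
received = file_digest("incoming/report.dat")
if sent != received:
    raise IOError("transfer corrupted; retry at the application level")

No amount of per-hop care inside the network makes this final check unnecessary, which is exactly the paper's point.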

Even if parts of an application-level function can potentially be implemented in the core of the
network, the end to end arguments state that one should resist this approach if possible. There
are a number of advantages of moving application-specific functions up out of the core of the
network and providing only general-purpose system services there.

• The complexity of the core network is reduced, which reduces costs and facilitates future
upgrades to the network.

• Generality in the network increases the chances that a new application can be added
without having to change the core of the network.

• Applications do not have to depend on the successful implementation and operation of
application-specific services in the network, which may increase their reliability.

Of course, the end to end arguments are not offered as an absolute. There are functions that
can only be implemented in the core of the network, and issues of efficiency and performance
may motivate core-located features. Features that enhance popular applications can be added to
the core of the network in such a way that they do not prevent other applications from
functioning. But the bias toward movement of function “up” from the core and “out” to the edge
node has served very well as a central Internet design principle.

As a consequence of the end to end arguments, the Internet has evolved to have certain
characteristics. The functions implemented “in” the Internet—by the routers that forward
packets—have remained rather simple and general. The bulk of the functions that implement
specific applications, such as e-mail, the World Wide Web, multi-player games, and so on, have
been implemented in software on the computers attached to the “edge” of the Net. The edge-orientation
for applications and comparative simplicity within the Internet together have
facilitated the creation of new applications, and they are part of the context for innovation on the
Internet.
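
As a toy illustration of that edge-orientation (again my own sketch, not the authors'; the port number and the trivial upper-casing "protocol" are arbitrary assumptions), a brand-new application can be deployed entirely in endpoint software, while the routers in between do nothing but forward packets:

import socket
import threading

# A made-up application protocol: the server upper-cases whatever it
# receives. Nothing in the network core knows or cares about it; the
# new "application" exists entirely in endpoint software.
srv = socket.create_server(("127.0.0.1", 9099))  # hypothetical port

def serve() -> None:
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024).upper())

threading.Thread(target=serve, daemon=True).start()

# The client side of the same new application.
with socket.create_connection(("127.0.0.1", 9099)) as client:
    client.sendall(b"hello, end to end")
    print(client.recv(1024))  # b'HELLO, END TO END'

srv.close()

Nothing in the core had to change to make this new application work; that is the property the end to end arguments are meant to preserve.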

Moving away from end to end

For its first 20 years, much of the Internet’s design has been shaped by the end to end
arguments. To a large extent, the core of the network provides a very general data transfer
service, which is used by all the different applications running over it. The individual
applications have been designed in different ways, but mostly in ways that are sensitive to the
advantages of the end to end design approach. However, over the last few years, a number of
new requirements have emerged for the Internet and its applications. To certain stakeholders,
these various new requirements might best be met through the addition of new mechanism in the
core of the network. This perspective has, in turn, raised concerns among those who wish to
preserve the benefits of the original Internet design.

Continued at:

ana.lcs.mit.edu
====

You asked,

"Does it seem criminally stupid to you, as it does to me, that this group of bozoes proposing a Project Liberty should be more concerned about selling us videos on demand than on securing the integrity of the system?"

I frankly don't know what grants them any level of governance to do 'anything,' one way or the other, when it comes to securing 'the system.' Who are these people anyway, what are their true motives, and what right do they have to assume such authority?

I find myself mildly amused, and a little curious, about the scope and reach of their aspirations, which, collectively, would be no mean feat to pull off.

"Just wondering, Ray"

Yes, me too. Just wondering, FAC