Technology Stocks : George Gilder - Forbes ASAP

To: Thomas who wrote (1521) 5/14/1999 11:05:00 AM
From: Scott C. Lemon
 
Hello Thomas,

> You seem to be heading down a treacherous and slippery slope. Why
> do you think that the deployment of more (proprietary) intelligence
> into the core of the network is the way to "improve" the
> infrastructure?

Hmmm ... I'm wondering which comment of mine indicated the association that you describe above?

I see these as two inevitable, but unrelated, things. First, all of the proxy/cache work, and the object routing investigation and research, is based on industry-standard research going on all over the world. The best place for links etc. is NLANR ( nlanr.net ). This is where much of the research is going on and where the IETF specifications (RFCs) are being written. This is the work which will do the second part of what you state above ... "'improve' the infrastructure" ...

As for the proprietary nature of vendors ... I don't know if there is anything we can do about this. Today, if you connect a whole bunch of Cisco equipment together in a network, there are tricks and techniques that are used to optimize the data flow. As soon as you add a "third-party" router, you have to let the Cisco equipment know so that it will fall back to more "standard" methods. So aren't these systems proprietary today?

I believe that the evolutionary process is fed by the constant attempts by vendors to "one-up" each other, and that they will continue to support standards, but then look for "proprietary" ways to gain an advantage. If they are successful, then others will look to support the new technique, and most probably the vendor (after a while) will release the specs as a proposed standard and run it through the IETF process ... the whole time selling their "proprietary" solution.

I don't believe that I'm heading down any slope ... I'm simply stating what is already happening, and what is "inevitable" from my perspective.

> It seems to me that the more proprietary stuff you jam into
> the core of the network the less useful the network will become (as
> among the key drivers today are ubiquity and openness).

This is a common misunderstanding of layered architectures. If the overall architecture of the network is built properly, the "edges" are abstracted from anything happening in the "core". So even if the core of the network is doing sneaky things to optimize the transport of information, the edges never notice.

Think of this like your home telephone. You plug your phone into the wall, and operate it using the same methods, voltages, and "protocols" that have been around since the first phones ... but the "infrastructure" or "core" of the telephone network has massively changed into digital switching and various means of moving the information (fiber, satellite, etc.).
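
In software terms the same idea looks something like this ... a rough sketch (again, the names are purely my own illustration) of a layered design where the edge only ever sees a stable interface, and the "core" behind it can be swapped out without the edge noticing:

class Transport:
    def send(self, data):
        raise NotImplementedError

class StandardCore(Transport):
    def send(self, data):
        print("standard path carrying:", data)

class OptimizedCore(Transport):
    def send(self, data):
        # imagine vendor-specific optimizations happening in here
        print("optimized (proprietary) path carrying:", data)

def edge_application(transport):
    # The edge code is identical no matter which core is underneath it.
    transport.send("hello")

edge_application(StandardCore())
edge_application(OptimizedCore())

The two cores can do whatever they want internally ... the edge_application code never changes.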

I guess the issue I see is that if some carrier decided to implement a completely proprietary infrastructure "inside" their network that provides phenomenal performance and value propositions, then it's only at the edges of their network that they would have to be "compatible" and "standard".

What's the difference between the backplane in a high-speed router which is routing between dozens of connected networks, and two routers which are connected over a high-speed fiber that are routing between dozens of connected networks?

> Of course it is tempting for the equipment suppliers to move toward
> more proprietary equipment and protocols to build up and protect
> their own market position, but this is a real peril for the
> Internet.

This is another common discussion. I'm not sure where people think that standards in the Internet come from. They come from *anyone* who has a good "non-standard" idea that is proposed, discussed, edited, and "standardized". So yes, I agree that there is a group of vendors who use proprietary technologies to "lock in" customers. But the rest of us struggle to find a "better solution" and then push these into standards so that we can interoperate. I think that if you step back and look at what is happening in the Internet, it is this evolutionary process that builds standards.

> I do believe that the caching and object routing will be big.

I agree completely! ;-)

> That will raise big issues, though. I would much prefer to have
> more people thinking about the end-points (i.e. the *edges*). . .

This is the reason that I believe that caches will help. They will start from the edges (near sources and destinations) and then grow into the infrastructure ...
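
As a rough sketch of that growth (again, just my own illustration, not any particular product or RFC), an edge cache that forwards its misses to a "parent" cache closer to the core already gives you a small hierarchy:

class HierarchicalCache:
    def __init__(self, name, parent=None, origin_fetch=None):
        self.name = name
        self.parent = parent              # the next cache toward the core
        self.origin_fetch = origin_fetch
        self.store = {}

    def fetch(self, url):
        if url in self.store:
            return self.store[url]          # hit at this level
        if self.parent is not None:
            obj = self.parent.fetch(url)    # miss: ask the cache one level "in"
        else:
            obj = self.origin_fetch(url)    # innermost cache: go to the origin
        self.store[url] = obj
        return obj

origin = lambda url: "<object for %s>" % url
core_cache = HierarchicalCache("core", origin_fetch=origin)
edge_cache = HierarchicalCache("edge", parent=core_cache)
print(edge_cache.fetch("http://www.example.com/page"))   # first time: filled from the origin
print(edge_cache.fetch("http://www.example.com/page"))   # second time: served from the edge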

> Cheers,
> Thomas

Scott C. Lemon