To: Ted Schnur who wrote (10999) 6/11/1999 4:35:00 AM
From: ahhaha
 
If the ISP connects directly to ATHM's backbone, there are no "last mile" considerations.

ATHM's network is private and beyond the public domain. The public has control over the last mile because the MSO, apparently, is in the public domain. Since ATHM's private network ends where the MSO's public domain begins, a copper ISP has no open-access power over that segment of distribution. So the ISP must cut a superhosting agreement with ATHM for access to their ATM backbone.

(In the last mile ISP) Subscriber access...should not take up any more bandwidth than a typical ATHM subscriber.

I have argued that 10 ISPs could be easily accommodated, but 1000 couldn't, not within the spectrum currently allocated for this transmission mode. The spectrum would have to be reorganized and the equipment in the headend would have to be modified: new switching would be required there, and the trunk to the headend from a regional access point like a data center would have to increase its bandwidth capability. A 2 gig feed is not enough to support the breadth of demand even if ATHM were the exclusive provider. ATHM's network is wholly inadequate to meet the coming demand. That's why ATHM is scheming to encourage a lot of people to pull the same data, since then the caching-server strategy will be of some avail. That possibility evaporates with many ISPs: few of the pulls will be the same.
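
To put rough numbers on the trunk arithmetic (all figures are my own illustrative assumptions, not ATHM's actual engineering), here is a quick Python sketch of how the per-ISP slice of a 2 gig feed collapses as ISPs multiply:

# Back-of-envelope sketch. All figures are illustrative assumptions,
# not ATHM's actual numbers.
FEED_MBPS = 2000.0               # 2 gig trunk from the regional data center

def per_isp_share(num_isps, overhead_mbps_per_isp=1.0):
    """Downstream Mbps left per ISP after an assumed fixed
    switching/management overhead is set aside for each ISP."""
    usable = FEED_MBPS - num_isps * overhead_mbps_per_isp
    return max(usable, 0.0) / num_isps

for n in (1, 10, 100, 1000):
    print(f"{n:5d} ISPs -> {per_isp_share(n):8.1f} Mbps each")

# 10 ISPs still see ~199 Mbps apiece; at 1000 the slice is ~1 Mbps
# before any subscriber traffic moves, so the spectrum and headend
# switching would have to be reorganized, not merely subdivided.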

The MSOs are currently restricted by their exclusive agreements with ATHM, so an agreement between the MSOs and the copper ISPs is not an option.

The agreements are far looser than you think. There are many escape clauses. The Cable Partners stayed together not because of exclusivity requirements, but because they saw it was in their interest. It is partly because communities fear the exclusivity connection of ATHM and the MSOs that they are demanding government declare the exclusion a restraint of trade. An FCC declaration that exclusion is equivalent to discrimination would dissolve the Cable Partnership. The FCC isn't inclined to do that because they are still trying to use cable as a means to open local telephony, but the events in Portland may force their hand, since the matter invokes the power of Congress or of the Supreme Court. The FCC has said they prefer a market solution, but the people don't want that, so the decision will go to one of the other branches.

I would argue the caching servers are superfluous smoke and mirrors functions that make the network appear faster than it might be if there is a cache hit,...

Milo Medin wouldn't agree with you, but I tend to agree; it is a function of the character of the load. The cache strategy works when you have many users pulling the same data, as when 10,000 people are downloading the same movie. The segment call is shunted; the upper segment isn't accessed. The servers in the headend can handle the load without an upstream packet call. But is this the way we use the 'Net now? You get 5000 users on a cluster interacting and you're moving data at 33.6!
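
A small simulation makes the load-character point concrete (my own toy model with invented workloads): the same cache is nearly perfect when everyone pulls one movie and nearly useless when users each pull unique pages.

import random
from collections import deque

def hit_fraction(requests, cache_size=1000):
    """Fraction of requests served from a simple FIFO cache of the
    most recent distinct objects. Toy model, invented parameters."""
    cache, order, hits = set(), deque(), 0
    for obj in requests:
        if obj in cache:
            hits += 1
        else:
            cache.add(obj)
            order.append(obj)
            if len(order) > cache_size:
                cache.discard(order.popleft())
    return hits / len(requests)

random.seed(1)
movie = ["movie.mpg"] * 10_000                      # everyone pulls one file
unique = [f"page-{random.randrange(10**6)}" for _ in range(10_000)]

print("shared pulls:", hit_fraction(movie))         # ~1.0 after the first miss
print("unique pulls:", hit_fraction(unique))        # ~0.0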

In the environment we are talking about, I am not convinced that the money spent on these systems should not be going into infrastructure, or other services such as web hosting, mirror sites, offsite data backup services, and so on.

It isn't going into that kind of thing because there's no money in it, but there's plenty of money saved by keeping the network flow brisk. Web hosting? ATHM wants to do that but they can't even support a sparse network.

caching servers are worth the investment? Either way, it's a great marketing pitch.

That's dangerous to say here. It's almost heresy.

Spend some extra money on a faster trunk between the first router and the ATHM backbone. Then use very large, very fast caching servers on the fast backbone.

That is a necessity regardless of open access.

Advantage ATHM!

Well, you and I agree that ATHM is in a good position, but there are a few others on this thread who aren't so convinced.

I agree that the major problems with open access via the MSOs are legal and physical in nature, whereas, IMHO, open access via ATHM's network is not.

But you were supposed to be doing more disagreein'.



To: Ted Schnur who wrote (10999) 6/11/1999 5:01:00 PM
From: E. Davies
 
I would argue the caching servers are superfluous smoke and mirrors functions that make the network appear faster than it might be if there is a cache hit, but slow the user down when there is a miss (or is it just a slow web site?).

Just to express an alternate view:

Do not forget we are talking broadband here. The *theory* is that the biggest loads on the system will come from things like audio and video clips, downloads, etc., not people scrolling through SI posts. There is a lot more commonality among people when you are talking major resource hogs at critical times. When IE5.0 came out *everyone* wanted to download it. In times of a major event *everyone* is going to the same news sites to check it out.
It is analogous to how it is not worth using DoubleSpace to compress your hard drive: everything that really uses space is already compressed.
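
To put invented numbers on that commonality argument: even when most requests are one-off pages, the few giant shared downloads dominate the bytes, so the cache's byte hit rate stays high.

# Illustrative workload (invented figures): most requests are unique
# small pages, but a few huge shared downloads (an IE5.0-style release,
# a breaking-news page) dominate the bytes moved.
workload = [
    # (description,            requests, MB each, cacheable?)
    ("unique small pages",      90_000,   0.02,   False),
    ("shared news front page",   8_000,   0.05,   True),
    ("shared 20 MB download",    2_000,  20.0,    True),
]

total_bytes = sum(n * mb for _, n, mb, _ in workload)
cached_bytes = sum(n * mb for _, n, mb, c in workload if c)
total_reqs = sum(n for _, n, _, _ in workload)
cached_reqs = sum(n for _, n, _, c in workload if c)

print(f"request hit rate: {cached_reqs / total_reqs:.0%}")    # ~10%
print(f"byte hit rate   : {cached_bytes / total_bytes:.0%}")  # ~96%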

Also, I believe the extra time used to look in a cache is small enough that it does nothing to affect the user's perception of performance, even in the case of a cache miss.
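
Rough arithmetic with assumed latencies (not measurements) shows why: the lookup adds milliseconds while an origin fetch costs hundreds.

# Assumed figures for illustration only.
LOOKUP_MS = 5        # check the cache index
HIT_FETCH_MS = 40    # serve the object from the headend cache
ORIGIN_MS = 600      # fetch from the origin web site over the backbone

def expected_latency(hit_rate):
    hit = LOOKUP_MS + HIT_FETCH_MS
    miss = LOOKUP_MS + ORIGIN_MS
    return hit_rate * hit + (1 - hit_rate) * miss

for hr in (0.0, 0.3, 0.6, 0.9):
    print(f"hit rate {hr:.0%}: {expected_latency(hr):6.1f} ms")

# Even at a 0% hit rate the user pays only the 5 ms lookup over a
# cacheless fetch -- far below what anyone can perceive.
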
Eric



To: Ted Schnur who wrote (10999) 6/12/1999 11:47:00 AM
From: DownSouth
 
I would argue the caching servers are superfluous smoke and mirrors functions that make the network appear faster than it might be if there is a cache hit, but slow the user down when there is a miss (or is it just a slow web site?). In a corporate environment where a lot of the intranet data is being served from well-known servers and applications, the hit rate is high enough to justify the investment. In the environment we are talking about, I am not convinced that the money spent on these systems should not be going into infrastructure, or other services such as web hosting, mirror sites, offsite data backup services, and so on.

Your opinion seems contrary to the decisions being made by those whose responsibility it is to improve their customers' web "experience" in the most effective manner. Caching is very appropriate, less expensive, and very effective in providing an overall better experience (response time) for most types of resource requests. Certainly it does not overcome all of the problems with all of the resource requests. Different types of infrastructure, as you mentioned, are required for improving things like e-mail.

There were doubters way back when memory caching and disk caching were introduced. The theory and practice are the same: it works, it's economical, and it is reliable.
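
The analogy carries over directly: the least-recently-used policy that made memory and disk caches pay off is the same one a web cache can run. A minimal sketch in Python (a generic illustration, not any vendor's implementation):

from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache -- the same policy behind
    memory and disk caching, applied to web objects."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # key -> object, oldest first

    def get(self, key):
        if key not in self.store:
            return None              # miss: caller fetches from origin
        self.store.move_to_end(key)  # hit: mark as most recently used
        return self.store[key]

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used

cache = LRUCache(capacity=2)
cache.put("/index.html", "<html>...</html>")
cache.put("/news.html", "<html>...</html>")
cache.get("/index.html")             # touch index.html
cache.put("/movie.mpg", b"...")      # evicts /news.html, the LRU entry
print(list(cache.store))             # ['/index.html', '/movie.mpg']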