To: GraceZ who wrote (3800) 12/10/2001 12:34:06 AM
From: ahhaha

Yes, but AT&T's net hasn't had enough time to be refined. After various tweaks you can get about 1 meg down, and everyone is capped at 125 KB/s unless they have some special arrangement.

AT&T's DNS servers are set up in such a way that the Windows 2000 and XP client-side caches retain error states, and those cached errors block your computer from reaching the addresses it wants. You get "page not available" errors. The client OS and the server OS are both trying to achieve stability with enhanced capability, but they hit a pocket of instability when the load on the server-side caches goes through the roof.

What happens is this: the client-side DNS caching software gets a notification that the server's name resolution has failed. The client OS stores that failure in its local cache, so the next time you try to reach that address, the lookup is blocked, you get the dreaded non-progressing hourglass, and the browser reports "page not available". The page is most likely available; your computer is just replaying an error state that exists locally as nothing more than a record of one failed lookup. If you clear the client cache (the ipconfig commands at the bottom of this post do it) and then query a bad server, i.e., one that answers that it can't fulfill its name-resolution task, the same error is written right back into the cache you just emptied.

It seems that until AT&T beefs up its system with new, more powerful DNS servers to take over the load that @Home was managing and offloading, the error state's retention in the client cache must be reduced from its lifetime of 30 seconds to 0 seconds (the registry tweak at the bottom is one way to do that). The error is still sent by a server when it's bogged down, but your client no longer retains it for future local reference. Hence a new call to the server later may find it unencumbered, so you get through without an error state being re-embedded in your cache, waiting to confound your next surfing call.
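
For anyone who wants to look at this directly: on Windows 2000 and XP the client cache is held by the DNS Client ("Dnscache") service, and the standard ipconfig switches will show it and flush it. If memory serves, the cached failures show up in the display as "Name does not exist" entries.

  C:\> ipconfig /displaydns    <- dumps the resolver cache, error states included
  C:\> ipconfig /flushdns      <- empties the cache, good entries and error states alike

Flushing buys you one clean shot at the name; if AT&T's server is still bogged down when you retry, the error state goes right back in, which is why the retention change below is the more durable fix.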
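As for cutting the retention to 0: the setting lives under the Dnscache service's Parameters key in the registry. To the best of my recollection the value is named NegativeCacheTime on Windows 2000 and MaxNegativeCacheTtl on XP (both DWORDs, in seconds), so take the names below as my best guess rather than gospel. A .reg file like this sets both to zero:

  Windows Registry Editor Version 5.00

  [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters]
  ; Win2000 name for the error-state lifetime, in seconds
  "NegativeCacheTime"=dword:00000000
  ; XP name for the same setting
  "MaxNegativeCacheTtl"=dword:00000000

Double-click the file to merge it, then restart the service (or reboot) so it takes hold:

  C:\> net stop dnscache
  C:\> net start dnscache

With the lifetime at zero the client still receives the error whenever a server is overloaded; it just never writes it into the local cache for your next call to trip over.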