To: olivier asser who wrote (7973) 4/14/2005 9:57:47 AM
From: StockDung

Crappy, I have looked at his old site and it is blocked by robots.txt. Nothing has been destroyed, as you stated, and a robots.txt exclusion can also be removed. A court order could do such a thing.

web.archive.org: */http://trading-places.net

Sites are not available because of Robots.txt or other exclusions. What does that mean?

The Standard for Robot Exclusion (SRE) is a means by which web site owners can instruct automated systems not to crawl their sites. Web site owners can specify files or directories that are allowed or disallowed from a crawl, and they can even create specific rules for different automated crawlers. All of this information is contained in a file called robots.txt.

While robots.txt has been adopted as the universal standard for robot exclusion, compliance with it is strictly voluntary. In fact, most web sites do not have a robots.txt file, and many web crawlers are not programmed to obey its instructions anyway. However, Alexa, the company that crawls the web for the Internet Archive, does respect robots.txt instructions, and even does so retroactively: if a web site owner ever decides he or she prefers not to have a web crawler visiting his or her files and sets up robots.txt on the site, the Alexa crawlers will stop visiting those files and will mark all files previously gathered as unavailable. This means that sometimes, while using the Internet Archive Wayback Machine, you may find a site that is unavailable due to robots.txt or other exclusions.

Other exclusions? Yes, sometimes a web site owner will contact us directly and ask us to stop crawling or archiving a site. We comply with these requests.
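For reference, a minimal robots.txt that triggers this kind of Wayback Machine exclusion could look something like the sketch below. It assumes ia_archiver is the user-agent name the Alexa crawler answers to, and the trading-places.net path is only an example location:

    # Hypothetical robots.txt served from the site root,
    # e.g. http://trading-places.net/robots.txt
    # Blocks only the assumed Alexa / Internet Archive crawler (ia_archiver)
    # from every path on the site.
    User-agent: ia_archiver
    Disallow: /

Deleting those lines (or the whole robots.txt file) lifts the exclusion, which is the point above: the archived pages are not destroyed, only hidden for as long as the exclusion is in place.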