Hey, I know, I'll write a WWW-exploring robot! Why not?

Programs that automatically traverse the web can be quite useful, but they also have the potential to make a serious mess of things. Robots have been written that do a "breadth-first" search of the web, exploring many sites gradually instead of aggressively "rooting out" all the pages of one site at a time. Some of these robots now produce excellent indexes of the information available on the web.
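
To make the idea concrete, here is a minimal sketch of a breadth-first crawler in Python. The seed URL, page limit, and per-request delay are placeholders you would tune, not part of any standard; the point is the FIFO queue, which visits pages level by level instead of plunging down one site.

    # A minimal sketch of a breadth-first crawler; seed URL, page limit,
    # and politeness delay are assumed placeholders, not standard values.
    import time
    import urllib.request
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse

    class LinkParser(HTMLParser):
        """Collect href values from anchor tags."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl_breadth_first(seed, max_pages=100, delay=1.0):
        queue = deque([seed])   # FIFO queue: this is what makes it breadth-first
        seen = {seed}           # never enqueue the same URL twice
        fetched = 0
        while queue and fetched < max_pages:
            url = queue.popleft()
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    if "text/html" not in resp.headers.get("Content-Type", ""):
                        continue   # skip movies, images, and other non-HTML data
                    html = resp.read().decode("utf-8", errors="replace")
            except OSError:
                continue           # unreachable page: move on
            fetched += 1
            parser = LinkParser()
            parser.feed(html)
            for link in parser.links:
                absolute = urljoin(url, link)
                if urlparse(absolute).scheme in ("http", "https") \
                        and absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)   # back of queue: siblings before children
            time.sleep(delay)   # pause between requests so no one server is hammered
        return seen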

But others have written simple depth-first searches which, at worst, can bring servers to their knees in minutes by recursively downloading information from CGI script-based pages that contain an infinite number of possible links. (Often the robot has no way to detect this!) Imagine what happens when a robot decides to "index" the CONTENTS of several hundred MPEG movies. Shudder.
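
For contrast, here is a sketch of a depth-first crawl with the safeguard a naive robot lacks. The fetch_links helper is hypothetical (a stand-in for fetching a page and extracting its links); the important part is the depth cap, since a visited-URL set alone cannot save you when a CGI script mints a brand-new URL on every request.

    # Sketch of the safeguard a naive depth-first robot lacks. The
    # fetch_links(url) helper is hypothetical. A CGI script can mint a
    # fresh URL on every request, so a visited set alone never
    # terminates; an explicit depth cap does.
    def crawl_depth_first(url, fetch_links, seen=None, depth=0, max_depth=5):
        if seen is None:
            seen = set()
        if depth > max_depth or url in seen:   # the depth cap is the real safety net
            return seen
        seen.add(url)
        for link in fetch_links(url):          # recursing immediately = depth-first
            crawl_depth_first(link, fetch_links, seen, depth + 1, max_depth)
        return seen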

The moral: a robot that does what you want may already exist; if it doesn't, please study the document World Wide Web Robots, Wanderers and Spiders (URL: http://web.nexor.co.uk/mak/doc/robots/robots.html) and learn about the emerging standards for excluding robots from areas in which they are not wanted. You can also read about existing robots there.
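
If you do write a robot, checking the exclusion standard before each fetch is straightforward. The sketch below uses Python's standard urllib.robotparser module; the user-agent string and example URL are placeholders.

    # A minimal sketch of honoring the robots exclusion standard before
    # fetching, using Python's standard urllib.robotparser; the
    # user-agent string and example URL are placeholders.
    from urllib import robotparser
    from urllib.parse import urljoin, urlparse

    def allowed_to_fetch(url, user_agent="MyRobot/0.1"):
        root = "{0.scheme}://{0.netloc}".format(urlparse(url))
        rp = robotparser.RobotFileParser()
        rp.set_url(urljoin(root, "/robots.txt"))
        try:
            rp.read()        # fetch and parse the site's /robots.txt
        except OSError:
            return False     # be conservative if robots.txt is unreachable
        return rp.can_fetch(user_agent, url)

    if allowed_to_fetch("http://example.com/some/page.html"):
        print("fetch permitted")
    else:
        print("excluded by robots.txt")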

