A web crawler (also known as a web spider or web robot) is a program or automated script which browses the World Wide Web in a methodical, automated manner.
This process is called Web crawling or spidering.
Many legitimate sites, in particular search engines, use spidering as a means of providing up-to-date data.
Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which indexes the downloaded pages to provide fast searches.
Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code.
They can also be used to gather specific types of information from Web pages, such as e-mail addresses (usually harvested for spam).
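To make the crawling process concrete, here is a minimal sketch in Python using only the standard library. The seed URL and page limit are placeholders, and a real crawler would also honor robots.txt, rate limits, and duplicate-content rules; this is only meant to illustrate the fetch-parse-enqueue loop described above.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=10):
    """Breadth-first crawl from seed_url, staying on the same host."""
    seen = {seed_url}
    queue = deque([seed_url])
    host = urlparse(seed_url).netloc
    pages = {}

    while queue and len(pages) < max_pages:
        url = queue.popleft()
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip pages that fail to download
        pages[url] = html  # keep a copy of the page for later indexing

        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            # follow only links on the same host that we have not seen yet
            if urlparse(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return pages


if __name__ == "__main__":
    # example.com is a placeholder seed; substitute any site you are allowed to crawl
    downloaded = crawl("https://example.com")
    print(f"Downloaded {len(downloaded)} pages")
```

The breadth-first queue is what gives the crawl its "methodical" character: pages closer to the seed are fetched first, and the `seen` set prevents the crawler from revisiting the same URL.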