+ 6

What is the function of a webcrawler?

4th May 2017, 1:00 PM
Shubh Saxena
3 Answers
+ 30
A web crawler, sometimes called a spider, is an Internet bot that systematically browses the World Wide Web, typically for the purpose of web indexing (web spidering). Web search engines and some other sites use web crawling or spidering software to update their own web content or their indices of other sites' web content. Web crawlers can copy all the pages they visit for later processing by a search engine, which indexes the downloaded pages so that users can search them much more efficiently.

Crawlers consume resources on the systems they visit and often visit sites without approval. Issues of scheduling, load, and "politeness" come into play when large collections of pages are accessed. Mechanisms exist for public sites that do not wish to be crawled to make this known to the crawling agent. For instance, a robots.txt file can request that bots crawl only parts of a website, or nothing at all.
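Python's standard library can parse robots.txt, so a polite crawler can check a site's rules before fetching anything. A minimal sketch, assuming a hypothetical user-agent name "MyCrawler" and using example.com as a placeholder:

from urllib.robotparser import RobotFileParser

# Download and parse the site's robots.txt (example.com is a placeholder).
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# Ask whether our user agent may fetch a given page before requesting it.
if rp.can_fetch("MyCrawler", "https://example.com/some/page.html"):
    print("allowed to crawl this page")
else:
    print("robots.txt asks us not to crawl this page")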
4th May 2017, 1:44 PM
JΞΜΔ đŸ‡šđŸ‡©đŸ‘‘
+ 6
Data collection. Such scripts crawl through the web and collect information on websites: their links and content, directory structure, visit statistics, and so on. Google uses web crawlers to map the web and to index and score websites, for example.
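As an illustration, a few lines of standard-library Python can collect the links from a single page, which is the core of such a data-collection script. A rough sketch (example.com stands in for a real site):

from html.parser import HTMLParser
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    # Gathers the href value of every <a> tag on the page.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = urlopen("https://example.com").read().decode("utf-8", errors="replace")
collector = LinkCollector()
collector.feed(html)
print(collector.links)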
4th May 2017, 1:06 PM
Kuba SiekierzyƄski
+ 4
Fun! I made two one day. They're simple: one just goes to websites, grabs the links, then goes to those links. The other saves all the images from a page to a folder.
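A link-following crawler like the first one can be sketched with just the Python standard library. This is only an illustration, not LordHill's actual code; the start URL and the ten-page limit are arbitrary choices:

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkParser(HTMLParser):
    # Collects href values from <a> tags.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href" and v]

def crawl(start_url, max_pages=10):
    # Breadth-first crawl: fetch a page, queue its links, repeat.
    seen, queue = set(), deque([start_url])
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            page = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
        except Exception:
            continue  # skip pages that fail to load or decode
        parser = LinkParser()
        parser.feed(page)
        for link in parser.links:
            absolute = urljoin(url, link)  # resolve relative links
            if urlparse(absolute).scheme in ("http", "https"):
                queue.append(absolute)
    return seen

print(crawl("https://example.com"))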
4th May 2017, 1:09 PM
LordHill