
Crawlers and Spiders


Presentation Transcript


  1. Crawlers and Spiders [Diagram: search-engine architecture showing the Web, web crawler, indexer, indexes, query engine, search interface, and user]

  2. Crawling picture (Sec. 20.2) [Diagram: seed pages feed the URL frontier; URLs are crawled and parsed; the rest is the unseen Web]

  3. Basic crawler operation • Create queue with “seed” pages • Repeat • Fetch each URL on the queue • Parse fetched pages • Extract URLs they point to • Place the extracted URLs on a queue • Until empty queue or out of time • Q: What type of search is this? DFS? BFS? Other?
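Because the queue here is FIFO, this is a breadth-first search (BFS) of the link graph. Below is a minimal Python sketch of the loop on this slide; the use of the requests library, the regex link extractor, and the page budget are illustrative assumptions rather than anything prescribed by the slides.

```python
import re
from collections import deque

import requests  # assumed HTTP client; any fetcher would do


def basic_crawl(seeds, max_pages=100):
    """BFS crawl: fetch each URL on the queue, parse it, enqueue its links."""
    frontier = deque(seeds)              # queue seeded with the "seed" pages
    seen = set(seeds)
    while frontier and max_pages > 0:    # until empty queue or out of budget
        url = frontier.popleft()         # fetch each URL on the queue
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue                     # skip unreachable pages
        max_pages -= 1
        # crude link extraction; a real crawler would use an HTML parser
        for link in re.findall(r'href="(http[^"]+)"', html):
            if link not in seen:
                seen.add(link)
                frontier.append(link)    # place extracted URLs on the queue
    return seen
```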

  4. Simple picture – complications • Web crawling isn’t feasible with one machine • All of the above steps should be distributed • Malicious pages • Spam pages • Spider traps – including dynamically generated • Even non-malicious pages pose challenges • Latency/bandwidth to remote servers vary • Webmasters’ stipulations • How “deep” should you crawl a site’s URL hierarchy? • Site mirrors and duplicate pages • Politeness – don’t hit a server too often

  5. What any crawler must do • Be Polite: • Respect implicit and explicit politeness considerations for a website • Only crawl pages you’re allowed to • Respect robots.txt (more on this shortly) • Be Robust: • Be immune to spider traps and other malicious behavior from web servers

  6. What any crawler should do • Be capable of distributed operation: designed to run on multiple distributed machines • Be scalable: designed to increase the crawl rate by adding more machines • Be efficient: permit full use of available processing and network resources • Fetch pages of “higher quality” first • Operate continuously: keep fetching fresh copies of previously fetched pages • Be extensible: adapt to new data formats and protocols

  7. Distributed crawling picture [Diagram: seed pages feed the URL frontier; multiple crawling threads fetch URLs, which are crawled and parsed; the rest is the unseen Web]

  8. URL frontier • Can include multiple pages from the same host • Must avoid trying to fetch them all at the same time • Must try to keep all crawling threads busy

  9. Explicit and implicit politeness • Explicit politeness: specifications from webmasters on what portions of site can be crawled • robots.txt • Implicit politeness: even with no specification, avoid hitting any site too often

  10. Robots.txt • Protocol for giving spiders (“robots”) limited access to a website, originally from 1994 • www.robotstxt.org/wc/norobots.html • Website announces its request on what can(not) be crawled • For a site, place a file named robots.txt at the server root • This file specifies access restrictions

  11. Robots.txt example • No robot should visit any URL starting with "/yoursite/temp/", except the robot called "searchengine":

      User-agent: *
      Disallow: /yoursite/temp/

      User-agent: searchengine
      Disallow:
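Python's standard urllib.robotparser evaluates exactly these rules; here is a small sketch feeding it the example file above (the example.com host and the GenericBot name are illustrative assumptions):

```python
from urllib import robotparser

rules = """\
User-agent: *
Disallow: /yoursite/temp/

User-agent: searchengine
Disallow:
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Any other robot is blocked from /yoursite/temp/ ...
print(rp.can_fetch("GenericBot", "http://example.com/yoursite/temp/page.html"))    # False
# ... but the robot called "searchengine" may crawl everything.
print(rp.can_fetch("searchengine", "http://example.com/yoursite/temp/page.html"))  # True
```

Against a live site one would instead call rp.set_url("http://<site>/robots.txt") followed by rp.read().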

  12. Processing steps in crawling • Pick a URL from the frontier (which one? see the frontier discussion below) • Fetch the document at the URL • Parse the URL • Extract links from it to other docs (URLs) • Check if URL has content already seen • If not, add to indexes • For each extracted URL • Ensure it passes certain URL filter tests (e.g., only crawl .edu, obey robots.txt, etc.) • Check if it is already in the frontier (duplicate URL elimination)

  13. Basic crawl architecture [Diagram: the URL frontier feeds a fetch module (using DNS to reach the WWW); fetched documents are parsed; a “content seen?” test uses document fingerprints (Doc FP’s); extracted URLs pass a URL filter (using robots filters) and duplicate URL elimination (against the URL set) before re-entering the frontier]

  14. Why you need a DNS • Every server is identified by an IP address • IPv4: 32 bits written as 4 decimal numbers, each between 0 and 255, e.g. 149.130.12.213 (Wellesley College) • How many addresses can it represent? • IPv6: 128 bits written as 8 blocks of 4 hex digits each, e.g. AF43:23BC:CAA1:0045:A5B2:90AC:FFEE:8080 • How many addresses are in IPv6? • Client translates host names in URLs to IP addresses, e.g. cs.wellesley.edu → 149.130.136.19 • Uses authoritative sites for address translation, a.k.a. the “Domain Name Server” (DNS)
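To answer the two counting questions: 32 bits give 2^32 ≈ 4.3 billion IPv4 addresses, and 128 bits give 2^128 ≈ 3.4 × 10^38 IPv6 addresses. A quick check, plus a DNS lookup through Python's standard socket module (the hostname is the one from the slide):

```python
import socket

print(2 ** 32)    # IPv4 address space: 4,294,967,296 (about 4.3 billion)
print(2 ** 128)   # IPv6 address space: about 3.4e38

# Translate a host name to an IPv4 address via the OS resolver (a DNS lookup).
print(socket.gethostbyname("cs.wellesley.edu"))   # e.g. 149.130.136.19
```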

  15. DNS (Domain Name Server) • A lookup service on the Internet • Given a host name (from a URL), retrieve its IP address • Service provided by a distributed set of servers – thus, lookup latencies can be high (even seconds) • Common OS implementations of DNS lookup are blocking: only one outstanding request at a time • Solutions • DNS caching • Batch DNS resolver – collects requests and sends them out together
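A minimal sketch of the DNS caching idea, assuming the blocking resolver from the standard socket module; a production crawler would pair this with an asynchronous or batched resolver, which is out of scope here.

```python
import socket
from functools import lru_cache


@lru_cache(maxsize=100_000)
def resolve(host):
    """Cache lookups so repeated URLs from the same host hit DNS only once."""
    return socket.gethostbyname(host)   # blocking call in most OS resolvers


resolve("en.wikipedia.org")   # first call pays the (possibly slow) DNS lookup
resolve("en.wikipedia.org")   # served from the cache
```

A real cache would also honour DNS time-to-live values and expire entries; lru_cache is used here only to illustrate the idea.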

  16. Parsing: URL normalization • When a fetched document is parsed, some of the extracted links are relative URLs • E.g., at http://en.wikipedia.org/wiki/Main_Page we have a relative link to /wiki/Wikipedia:General_disclaimer, which is the same as the absolute URL http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer • During parsing, must normalize (expand) such relative URLs
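The standard library's urljoin performs exactly this expansion, using the page's own URL as the base:

```python
from urllib.parse import urljoin

base = "http://en.wikipedia.org/wiki/Main_Page"
print(urljoin(base, "/wiki/Wikipedia:General_disclaimer"))
# -> http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer
```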

  17. Content seen? • Duplication is widespread on the web • If the page just fetched is already in the index, do not further process it • This is verified using document fingerprints (Doc FP’s) or shingles
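A hedged sketch of shingle-based near-duplicate detection: real systems fingerprint the shingle sets (e.g. with hashing) rather than comparing them directly, but the Jaccard overlap below conveys the idea. The two sample strings and the 4-word shingle size are illustrative assumptions.

```python
def shingles(text, k=4):
    """Return the set of k-word shingles of a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}


def resemblance(a, b):
    """Jaccard overlap of two shingle sets: 1.0 means identical content."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)


doc1 = "a rose is a rose is a rose"
doc2 = "a rose is a rose is a flower"
print(resemblance(doc1, doc2))   # high overlap, so likely near-duplicates
```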

  18. Filters and robots.txt • Filters – regular expressions for URLs to be crawled or not • Once a robots.txt file is fetched from a site, need not fetch it repeatedly • Doing so burns bandwidth, hits the web server • Cache robots.txt files
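A sketch combining a whitelist-style URL filter with a per-host robots.txt cache; the .edu pattern, the crawler name, and the plain dictionary cache are illustrative assumptions.

```python
import re
from urllib import robotparser
from urllib.parse import urlparse

# Illustrative filter: only crawl .edu sites.
ALLOWED = [re.compile(r"^https?://[^/]+\.edu(/|$)")]

robots_cache = {}   # host -> parsed robots.txt, fetched at most once per host


def passes_filter(url, agent="MyCrawler"):
    """Apply the regex filter, then the (cached) robots.txt rules."""
    if not any(pattern.match(url) for pattern in ALLOWED):
        return False
    host = urlparse(url).netloc
    if host not in robots_cache:
        rp = robotparser.RobotFileParser(f"http://{host}/robots.txt")
        try:
            rp.read()            # one fetch per host; the result is cached
        except OSError:
            pass                 # on failure, can_fetch() conservatively denies
        robots_cache[host] = rp
    return robots_cache[host].can_fetch(agent, url)
```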

  19. Duplicate URL elimination • For a non-continuous (one-shot) crawl, test to see if an extracted + filtered URL has already been passed to the frontier • For a continuous crawl – see details of frontier implementation

  20. Distributing the crawler • Run multiple crawl threads, under different processes – potentially at different nodes • Geographically distributed nodes • Partition hosts being crawled into nodes • Hash used for partition • How do these nodes communicate?
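A sketch of the hash partition: each host, and therefore every URL on it, is assigned to one crawler node. A stable hash such as MD5 is used rather than Python's built-in hash, which is randomized per process; the four-node cluster is an illustrative assumption.

```python
import hashlib
from urllib.parse import urlparse

NUM_NODES = 4   # illustrative cluster size


def node_for(url):
    """Assign every URL of a given host to the same crawler node."""
    host = urlparse(url).netloc
    digest = hashlib.md5(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_NODES


print(node_for("http://en.wikipedia.org/wiki/Main_Page"))
print(node_for("http://en.wikipedia.org/wiki/Web_crawler"))   # same node
```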

  21. Communication between nodes • The output of the URL filter at each node is sent to the Duplicate URL Eliminator at all nodes [Diagram: the same crawl architecture, with a host splitter between the URL filter and duplicate URL elimination that routes URLs to and from the other hosts’ nodes]

  22. URL frontier: two main considerations • Politeness: do not hit a web server too frequently • Freshness: crawl some pages more often than others • E.g., pages (such as news sites) whose content changes often • These goals may conflict with each other (e.g., a simple priority queue fails: many links out of a page go to its own site, creating a burst of accesses to that site)

  23. Politeness – challenges • Even if we restrict only one thread to fetch from a host, it can still hit that host repeatedly • Common heuristic: insert a time gap between successive requests to a host that is >> the time taken by the most recent fetch from that host
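A hedged sketch of that heuristic: remember when each host was last fetched and how long that fetch took, and do not contact the host again until a generous multiple of that time has passed. The factor of 10 is an illustrative stand-in for ">>".

```python
import time
from urllib.parse import urlparse

last_fetch = {}   # host -> (finish time, duration of the most recent fetch)


def wait_politely(url, factor=10):
    """Sleep until >> (here: factor x) the last fetch time has elapsed."""
    host = urlparse(url).netloc
    if host in last_fetch:
        finished, duration = last_fetch[host]
        delay = finished + factor * duration - time.monotonic()
        if delay > 0:
            time.sleep(delay)


def record_fetch(url, started):
    """Call right after a fetch completes, passing the time it started."""
    host = urlparse(url).netloc
    now = time.monotonic()
    last_fetch[host] = (now, now - started)
```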

  24. URL frontier: Mercator scheme [Diagram: incoming URLs pass through a prioritizer into K front queues; a biased front queue selector and back queue router move URLs into B back queues, with a single host on each; a back queue selector hands URLs to the crawl thread requesting a URL]

  25. Mercator URL frontier • URLs flow in from the top into the frontier • Front queues manage prioritization • Refresh rate sampled from previous crawls • Application-specific (e.g., “crawl news sites more often”) • Back queues enforce politeness • Each queue is FIFO [Diagram: prioritizer over front queues 1..K, biased front queue selector, back queue router over back queues 1..B, and a back queue selector driven by a heap]
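A compact sketch of the two-tier frontier, assuming small fixed numbers of front (priority) and back (per-host politeness) queues; the heap holds, for each back queue, the earliest time its host may be contacted again. The queue counts, the best-first bias, and the fixed politeness gap are illustrative simplifications.

```python
import heapq
import time
from collections import deque
from urllib.parse import urlparse

K, B = 3, 4        # number of front (priority) and back (per-host) queues
GAP = 2.0          # illustrative politeness delay per host, in seconds

front = [deque() for _ in range(K)]       # front[0] holds the highest priority
back = [deque() for _ in range(B)]        # each back queue serves a single host
queue_of_host = {}                        # host -> back queue index
heap = [(0.0, i) for i in range(B)]       # (earliest contact time, queue index)
heapq.heapify(heap)


def add_url(url, priority=K - 1):
    """Prioritizer: place an incoming URL on one of the K front queues."""
    front[priority].append(url)


def route_one(preferred_queue):
    """Back queue router: move one URL from the front queues to a back queue."""
    for f in front:                       # biased selector: best priority first
        if f:
            url = f.popleft()
            host = urlparse(url).netloc
            qi = queue_of_host.setdefault(host, preferred_queue)
            back[qi].append(url)
            return


def next_url():
    """Back queue selector: pop from the host that may be contacted soonest."""
    ready_at, qi = heapq.heappop(heap)
    time.sleep(max(0.0, ready_at - time.monotonic()))   # enforce politeness
    while not back[qi]:                                 # refill an empty queue
        if not any(front):
            heapq.heappush(heap, (time.monotonic(), qi))
            return None              # nothing routable to this back queue now
        route_one(qi)
    url = back[qi].popleft()
    heapq.heappush(heap, (time.monotonic() + GAP, qi))  # earliest recontact
    return url
```

In the full Mercator design, an emptied back queue is reassigned to a new host and the host-to-queue mapping is updated; that bookkeeping is omitted here.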
