
Crawling


Presentation Transcript


  1. Crawling Ida Mele

  2. Nutch • Apache Nutch is an open-source Java implementation of a search engine • We can use Nutch for crawling a portion of the Web • Useful links: • http://nutch.apache.org/ • http://wiki.apache.org/nutch/ • http://wiki.apache.org/nutch/NutchTutorial

  3. Nutch: advantages • Understanding • We have the source code, so we can use it to see how a large search engine works • Nutch was built using ideas from academia and industry, and it is useful for researchers who want to try out new search algorithms

  4. Nutch: advantages • Transparency • The details of the ranking algorithms used by commercial search engines are secret, and there are often economic interests behind the ranked list of results • Nutch's implementation is transparent: we know how the ranking algorithms work, and we can trust the fairness of the final rankings

  5. Nutch: advantages • Extensibility • Nutch is a platform for adding search to heterogeneous collections of information • It allows us to customize the search interface • We can extend the out-of-the-box functionality through the plugin mechanism

  6. Nutch vs. Lucene • Nutch is built on top of Lucene • Apache Lucene is a Java library for text indexing and searching • It provides high-performance, full-featured text search • It can support any application that requires full-text search • Lucene is used only for indexing and searching, not for crawling

  7. Architecture • Nutch can be divided into two pieces: • the crawler, which fetches pages and turns them into an inverted index • the searcher, which answers users' search queries • The index is the interface between the crawler and the searcher • The crawler and searcher systems can run on separate hardware platforms

  8. Architecture • The crawler and searcher systems can be scaled independently • For example, for a highly trafficked search page that covers a relatively modest set of sites, we may use a modest crawler infrastructure and invest more substantial resources in the searcher

  9. Crawler system • The crawler system is driven by the Nutch crawl tool and by related lower-level tools that build and maintain its data structures • The data structures are: • the web database • a set of segments • the index

  10. WebDB • The web database (WebDB) is a data structure that mirrors the structure and properties of the web graph being crawled • It stores two types of entities: • Page: indexed by its URL and the MD5 hash of its contents; it also stores the number of outlinks, fetch information, and the score of the page • Link: represents a connection from a source page to a target page

  11. Segment • A segment is the collection of pages fetched and indexed by the crawler in a single run • The fetchlist is the list of URLs to fetch, and it is generated from the WebDB • The fetcher output is the data retrieved from the pages in the fetchlist • Each segment has a limited lifespan (the default re-fetch interval is 30 days)

  12. Index • The index is the inverted index of all pages retrieved by the system • It is created by merging all of the individual segment indexes • Nutch uses Lucene to build the index. Note that Lucene also has a concept of segment, but it is different from a Nutch segment: • in Lucene, an index segment is a portion of the index • in Nutch, a segment is a fetched and indexed portion of the WebDB

  13. Crawling • Nutch can operate at one of three different scales: • local filesystem • intranet • Web • The scales have different characteristics; for example, crawling the local filesystem is more reliable than crawling an intranet or the Web

  14. Crawling • To crawl billions of pages from the Web, we must: • define the seed set (i.e., the set of pages we want to start with) • decide how many crawlers to use and how to partition the work among them • decide how often to re-crawl • cope with broken links, unresponsive sites, and unintelligible or duplicate content

  15. Crawling • The crawling process is essentially a cycle of three steps: • the crawler generates a set of fetchlists from the WebDB (generate) • a set of fetchers downloads the content from the Web (fetch) • the crawler updates the WebDB with the new links that were found (update)

  16. Crawling • Nutch observes: • Politeness: URLs with the same host are always assigned to the same fetchlist, so that a web site is not overloaded with requests from multiple fetchers in rapid succession • the Robots Exclusion Protocol, which allows site owners to control which parts of their site may be crawled
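For reference, the Robots Exclusion Protocol is implemented by a robots.txt file placed at the root of a site; a minimal example (the paths are invented for illustration) that disallows two directories for all crawlers:
  User-agent: *
  Disallow: /private/
  Disallow: /cgi-bin/
Nutch's fetcher reads this file and skips the disallowed URLs.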

  17. Crawling: low-level tools • Crawling is done with Nutch's crawl tool, which is a front-end to lower-level tools • The crawl tool is enough to get started with crawling websites, but we then need the lower-level tools to perform re-crawls and other maintenance on the data structures built during the initial crawl

  18. Crawling: low-level tools • We can use the lower-level tools in sequence: • Create a new WebDB (admin db-create) • Inject root URLs into the WebDB (inject) • Generate a fetchlist from the WebDB in a new segment (generate) • Fetch content from the URLs in the fetchlist (fetch) • Update the WebDB with the links from the fetched pages (updatedb) • Repeat steps 3-5 until the required depth is reached

  19. Crawling: low-level tools • Update the segments with scores and links from the WebDB (updatesegs) • Index the fetched pages (index) • Eliminate duplicate content and duplicate URLs from the indexes (dedup) • Merge the indexes into a single index for searching (merge)

  20. Crawling: low-level tools • We create a new WebDB (step 1) and populate it with some seed URLs (step 2) • Then we run the generate/fetch/update cycle (steps 3-6) • After the cycle, the crawler creates an index (steps 7-10). In particular, • each segment is indexed independently (step 8) • duplicate pages are removed (step 9) • the individual indexes are combined into a single index (step 10); a command-level sketch of this sequence follows below
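As an illustration, here is the same sequence written as shell commands in the style of the Nutch 1.x tutorial. Command names and arguments differ slightly between Nutch versions (for example, some versions need a separate parse step between fetch and updatedb), so treat this as a sketch rather than an exact recipe:
  bin/nutch inject crawl/crawldb urls                 # create the web database and inject the seed URLs
  bin/nutch generate crawl/crawldb crawl/segments     # generate a fetchlist in a new segment
  s1=`ls -d crawl/segments/2* | tail -1`              # pick up the segment that was just created
  bin/nutch fetch $s1                                 # fetch the content of the URLs in the fetchlist
  bin/nutch updatedb crawl/crawldb $s1                # update the web database with the newly found links
  # repeat generate / fetch / updatedb until the required depth is reached, then:
  bin/nutch invertlinks crawl/linkdb -dir crawl/segments                      # build the link database
  bin/nutch index crawl/indexes crawl/crawldb crawl/linkdb crawl/segments/*   # index the fetched pages
  bin/nutch dedup crawl/indexes                       # remove duplicates from the indexes
  bin/nutch merge crawl/index crawl/indexes           # merge the indexes into a single index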

  21. Running a crawl with Nutch • Download and unpack a Nutch distribution (for example, apache-nutch-1.1-bin.zip) • Make sure that the environment variable NUTCH_JAVA_HOME or JAVA_HOME is set to the Java home path • Run the following command or add it to the .bashrc file: export NUTCH_JAVA_HOME=<path-to-your-Java-home>
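A quick way to check the setup is to run bin/nutch from the Nutch home with no arguments: it should print the usage message with the list of available commands (crawl, readdb, inject, generate, fetch, and so on). For example, assuming Nutch was unpacked in ~/apache-nutch-1.1 and an OpenJDK install (both paths are only examples):
  cd ~/apache-nutch-1.1
  export NUTCH_JAVA_HOME=/usr/lib/jvm/java-6-openjdk   # use your own Java home here
  bin/nutch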

  22. Nutch configuration • All of Nutch's configuration files are in the conf subdirectory • The main configuration file is conf/nutch-default.xml. It contains the default settings and should not be modified • To change a setting, we create or update the conf/nutch-site.xml file, which overrides the defaults

  23. Nutch configuration • Add your agent name in the value field of the http.agent.name property in conf/nutch-site.xml; for example, we can use the name Sapienza University:
<property>
  <name>http.agent.name</name>
  <value>Sapienza University</value>
  <description>HTTP 'User-Agent' request header. MUST NOT be empty - please set this to a single word uniquely related to your organization.</description>
</property>
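conf/nutch-site.xml is an ordinary Hadoop-style configuration file, so the property above must sit inside a configuration element; a minimal sketch of the whole file:
  <?xml version="1.0"?>
  <configuration>
    <property>
      <name>http.agent.name</name>
      <value>Sapienza University</value>
    </property>
  </configuration>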

  24. URL filter • The crawl tool uses a filter to decide which URLs can go into the WebDB (steps 2 and 5) • The filter can be used to restrict the crawl to the URLs that match a given pattern, specified by regular expressions • For example, to restrict the crawl to the DIS domain, we update the configuration file conf/crawl-urlfilter.txt

  25. URL filter • Open the file conf/crawl-urlfilter.txt and replace the line:
+^http://([a-z0-9]*\.)*MY.DOMAIN.NAME/
with:
+^http://([a-z0-9]*\.)*dis.uniroma1.it/
• The file conf/crawl-urlfilter.txt will then contain:
# accept hosts in MY.DOMAIN.NAME
#+^http://([a-z0-9]*\.)*MY.DOMAIN.NAME/
+^http://([a-z0-9]*\.)*dis.uniroma1.it/
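Note that the filter rules are applied in order and the first matching pattern wins, so the accept rule has to come before the catch-all reject rule. A sketch of how the surrounding rules in the stock crawl-urlfilter.txt typically look (the exact skip patterns shipped with Nutch vary across versions):
  # skip file:, ftp:, and mailto: URLs
  -^(file|ftp|mailto):
  # skip image and other binary suffixes (the stock list is longer than this)
  -\.(gif|jpg|png|css|zip|exe)$
  # skip URLs that look like queries
  -[?*!@=]
  # accept hosts in the DIS domain
  +^http://([a-z0-9]*\.)*dis.uniroma1.it/
  # skip everything else
  -.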

  26. Example • Create a file called urls that contains the root URLs • These URLs will be used to populate the initial fetchlist • For example, if we want to start from the home page of the department, we can use: echo 'http://www.dis.uniroma1.it' > urls
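The seed file may list several root URLs, one per line; for example (the second host is taken from a later slide and is only an illustration):
  echo 'http://www.dis.uniroma1.it/' > urls
  echo 'http://cclii.dis.uniroma1.it/' >> urls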

  27. Example • We run the crawler with: bin/nutch crawl urls -dir mycrawl -depth 5 > mycrawl.log where: • urls is the file with the seed URLs • mycrawl is the output directory • 5 is the depth of the crawl • mycrawl.log is the log file
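The crawl tool also accepts -threads (the number of fetcher threads) and -topN (the maximum number of pages fetched at each level); the values below are only examples:
  bin/nutch crawl urls -dir mycrawl -depth 5 -topN 1000 -threads 10 > mycrawl.log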

  28. Results of the crawl • The directory mycrawl contains the following subdirectories: • crawldb • linkdb • segments • index • indexes

  29. Results of the crawl: readdb • The readdb tool parses the WebDB and displays portions of it in human-readable form • The stats option displays the number of pages and links: bin/nutch readdb mycrawl/crawldb -stats > stats.txt • Then we can read the output with: more stats.txt

  30. Results of the crawl: readdb • The dump option dumps the pages. Each page appears in a separate block, with one field per line. The ID field is the MD5 hash of the page contents. There is also information about when the page should next be fetched (the default is 30 days) and about the page scores • We issue the command: bin/nutch readdb mycrawl/crawldb -dump mydump and then we use: more mydump/part-00000

  31. Results of the crawl: readdb • The readdb tool also supports extraction of an individual page or link by URL or MD5 hash • For example, to examine the info of the page http://cclii.dis.uniroma1.it/airo/index.php we use the url option by issuing the command: bin/nutch readdb mycrawl/crawldb -url http://cclii.dis.uniroma1.it/airo/index.php

  32. Results of the crawl: readlinkdb • The readlinkdb tool can be used to dump the link structure (the graph) with the dump option: bin/nutch readlinkdb mycrawl/linkdb/ -dump mylinks • We can read the in-links with: more mylinks/part-00000 • Note that this gives only the list of in-links; for the out-links we have to merge the segments and read the result

  33. Results of the crawl: readseg • The crawl creates a few segments in timestamped subdirectories, one for each generate/fetch/update cycle • The readseg tool is the segment reader • The list option gives a summary of all of the generated segments: bin/nutch readseg -list -dir mycrawl/segments/

  34. Results of the crawl: readseg • The dump option gives a dump of a given segment: bin/nutch readseg -dump mycrawl/segments/YYYYMMDDhhmmss dump_seg1 where YYYYMMDDhhmmss is the name of the segment, given by the date and time at which the segment was created • Then we can use: more dump_seg1/dump

  35. Results of the crawl: mergesegs • We have seen that the readlinkdb tool gives the list of in-links • To obtain the out-links we need to merge the segments and read the result • We use the mergesegs tool: bin/nutch mergesegs whole-segments -dir mycrawl/segments/* • Then we can use the dump option of the readseg tool on the result of the merge: bin/nutch readseg -dump whole-segments/YYYYMMDDhhmmss dump-outlinks

  36. Exercise • We want to create the webgraph of a portion of the Web • First of all, install and configure Nutch • For the crawling: • Create the file with the seed set (e.g., urls) • Update the conf/crawl-urlfilter.txt file • Decide the depth of the crawl and crawl a portion of the web using the crawl tool. For example, for depth 5 we issue: bin/nutch crawl urls -dir mycrawl -depth 5 > mycrawl.log

  37. Exercise • Once the crawling is completed, you can create the webgraph • Download the directory with the libraries, lib.zip, available at: http://www.dis.uniroma1.it/~mele/WebIR.html • Download the file set-classpath.sh available at the same address, and update it with the path to your lib directory • Put the set-classpath.sh file in the Nutch home, open the terminal, and set the classpath with: source set-classpath.sh

  38. Exercise • Create the file with the in-links using the following commands (the egrep call removes the blank lines from the dump): • bin/nutch readlinkdb mycrawl/linkdb/ -dump mylinks • egrep -v $'^$' mylinks/part-00000 > inlinks.txt

  39. Exercise • Create the file with the out-links 1) Merge the segments: • bin/nutch mergesegs whole-segments -dir mycrawl/segments/* 2) Use readseg to dump the merged segment, and then create the file with the out-links: • bin/nutch readseg -dump whole-segments/YYYYMMDDhhmmss dump-outlinks • cat dump-outlinks/dump | egrep 'URL|toUrl' > outlinks.txt

  40. Exercise • Print the in-links and out-links to the links.txt file by issuing the following commands: • java nutchGraph.PrintInlinks inlinks.txt > links.txt • java nutchGraph.PrintOutlinks outlinks.txt >> links.txt • Remove the duplicates: • LANG=C sort links.txt | uniq > cleaned-links.txt

  41. Exercise • Create the map of URLs with the following commands: • cut -f1 links.txt > url-list.txt • cut -f2 links.txt >> url-list.txt • LANG=C sort url-list.txt | uniq > sorted-url-list.txt • java -Xmx2G it.unimi.dsi.util.FrontCodedStringList -u -r 32 umap.fcl < sorted-url-list.txt • java -Xmx2G it.unimi.dsi.sux4j.mph.MWHCFunction umap.mph sorted-url-list.txt

  42. Exercise • Create the graph: • java -Xmx2G nutchGraph.PrintEdges cleaned-links.txt umap.mph > webgraph.dat • numNodes=$(wc -l < sorted-url-list.txt) • java -Xmx2G nutchGraph.IncidenceList2Webgraph $numNodes webgraph • java -Xmx2G it.unimi.dsi.webgraph.BVGraph -g ASCIIGraph webgraph webgraph

  43. Indexing • Once the crawling operation is completed, we have the graph and the indexed pages • Remember that Nutch uses Lucene for the indexing phase • If we want to use MG4J to build the inverted index, we can collect the pages fetched during the crawl with: wget -i sorted-url-list.txt • Then we can use MG4J for indexing and querying the resulting collection of web pages
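Since this re-downloads every page in the list, it is worth saving the pages in their own directory and being polite to the hosts; a sketch using standard wget options (the directory name and delay are arbitrary choices):
  wget -i sorted-url-list.txt -P pages --wait=1 --tries=2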

  44. [Overview diagram: the Web is crawled with Nutch; readdb/readlinkdb extract the link structure into graph.txt, which is converted via ASCIIGraph into a BVGraph used to compute PageRank; the fetched files are indexed with MG4J to answer queries]

  45. Homework • Repeat the exercise using a different seed set and/or depth • Create the corresponding webgraph • Compute the PageRank for the nodes of the webgraph • Plot the distribution of the PageRank values
