
The Buddhist who understood (your) Desire


Presentation Transcript


  1. The Buddhist who understood (your) Desire It’s not the consumers’ job to know what they want.

  2. Engineering Issues • Crawling • Connectivity Serving • Web-scale infrastructure • Commodity Computing/Server Farms • Map-Reduce Architecture • How to exploit it for • Efficient Indexing • Efficient Link analysis

  3. SPIDER CASE STUDY

  4. Mercator’s way of maintaining the URL frontier • Extracted URLs enter a front queue • Each URL goes into a front queue based on its priority (priority assigned based on page importance and change rate) • URLs are shifted from the front queues to the back queues • Each back queue corresponds to a single host • Each back queue has a time te at which the host can be hit again • URLs are removed from a back queue when the crawler wants a page to crawl • How to prioritize? -- by change rate and by page importance
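
As a concrete illustration of the front-queue/back-queue discipline above, here is a minimal Python sketch of a Mercator-style frontier. The class and method names, the number of front queues, and the fixed politeness delay are illustrative assumptions, not details of Mercator itself.

    # Minimal sketch of a Mercator-style URL frontier (names and the
    # politeness policy are illustrative assumptions, not the original design).
    import heapq
    import time
    from collections import deque
    from urllib.parse import urlparse

    class Frontier:
        def __init__(self, num_front_queues=3, politeness_delay=2.0):
            # Front queues: one FIFO per priority level (0 = highest priority).
            self.front = [deque() for _ in range(num_front_queues)]
            # Back queues: one FIFO per host, plus a heap of (next_allowed_time, host).
            self.back = {}
            self.heap = []
            self.delay = politeness_delay

        def add_url(self, url, priority):
            # Priority would come from page importance and change rate.
            self.front[priority].append(url)

        def _refill(self):
            # Shift URLs from the front queues (highest priority first) to back queues.
            for q in self.front:
                while q:
                    url = q.popleft()
                    host = urlparse(url).netloc
                    if host not in self.back:
                        self.back[host] = deque()
                        heapq.heappush(self.heap, (time.time(), host))
                    self.back[host].append(url)

        def next_url(self):
            # Pop the host whose politeness timer expires earliest.
            self._refill()
            if not self.heap:
                return None
            ready_at, host = heapq.heappop(self.heap)
            time.sleep(max(0.0, ready_at - time.time()))
            url = self.back[host].popleft()
            if self.back[host]:
                heapq.heappush(self.heap, (time.time() + self.delay, host))
            else:
                del self.back[host]
            return url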

  5. Robot (4) • How to extract URLs from a web page? Need to identify all possible tags and attributes that hold URLs. • Anchor tag: <a href=“URL” … > … </a> • Option tag: <option value=“URL”…> … </option> • Map: <area href=“URL” …> • Frame: <frame src=“URL” …> • Link to an image: <img src=“URL” …> • Relative path vs. absolute path: <base href= …> • “Path-ascending crawlers” ascend the path of the URL to see if there is anything else higher up the URL
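
A small sketch of this extraction step, pulling URLs out of exactly the tags listed above with Python's standard-library HTML parser and resolving relative paths against any <base href=…> that is seen. A production crawler would use a more forgiving parser; the tag-to-attribute map is just the slide's list.

    # Sketch of URL extraction from the tags listed above, using only the
    # standard library (a real crawler would use a more robust parser).
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    URL_ATTRS = {
        "a": "href", "option": "value", "area": "href",
        "frame": "src", "img": "src",
    }

    class LinkExtractor(HTMLParser):
        def __init__(self, page_url):
            super().__init__()
            self.base = page_url       # overridden if a <base href=...> is seen
            self.urls = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "base" and "href" in attrs:
                self.base = attrs["href"]
            elif tag in URL_ATTRS and URL_ATTRS[tag] in attrs:
                # urljoin resolves relative paths against the (possibly <base>-set) base.
                self.urls.append(urljoin(self.base, attrs[URL_ATTRS[tag]]))

    extractor = LinkExtractor("http://example.com/dir/page.html")
    extractor.feed('<a href="../other.html">x</a> <img src="/logo.png">')
    print(extractor.urls)   # ['http://example.com/other.html', 'http://example.com/logo.png']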

  6. Focused Crawling • Classifier: is crawled page P relevant to the topic? • Algorithm that maps a page to relevant/irrelevant • Semi-automatic • Based on page vicinity • Distiller: is crawled page P likely to lead to relevant pages? • Algorithm that maps a page to likely/unlikely • Could be just an A/H computation, taking the HUBS • The distiller determines the priority of following links off of P
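
The classifier/distiller split can be sketched as two scoring functions feeding a priority queue. Both functions below are toy stand-ins (term overlap for the classifier, links-to-known-relevant-pages for the distiller) rather than the trained models a real focused crawler would use, and the 0.5/0.5 weighting is arbitrary.

    # Illustrative sketch only: toy stand-ins for a topic classifier and an
    # A/H-style distiller, combined into a crawl priority.
    import heapq

    def classifier_relevance(page_text, topic_terms):
        # Toy classifier: fraction of topic terms that appear on the page.
        words = set(page_text.lower().split())
        return sum(term in words for term in topic_terms) / len(topic_terms)

    def distiller_score(out_links, known_relevant):
        # Toy distiller: fraction of out-links pointing to already-relevant pages.
        return sum(url in known_relevant for url in out_links) / max(len(out_links), 1)

    def enqueue_links(frontier, page_text, out_links, topic_terms, known_relevant):
        # The priority of following links off P combines both signals;
        # heapq is a min-heap, so the score is negated.
        score = 0.5 * classifier_relevance(page_text, topic_terms) + \
                0.5 * distiller_score(out_links, known_relevant)
        for url in out_links:
            heapq.heappush(frontier, (-score, url))

    frontier = []
    enqueue_links(frontier, "map reduce indexing tutorial",
                  ["http://a.example/x", "http://b.example/y"],
                  ["indexing", "crawling"], {"http://a.example/x"})
    print(heapq.heappop(frontier))   # (-0.5, 'http://a.example/x')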

  7. Connectivity Server • All the link-analysis techniques need information on who is pointing to whom • In particular, they need the back-link information • The connectivity server provides this. It can be seen as an inverted index • Forward: page id → ids of the pages it links to • Inverted: page id → ids of the pages linking to it
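
The forward/inverted distinction amounts to transposing an adjacency list. A minimal in-memory sketch (a real connectivity server stores these lists compressed on disk):

    # Minimal sketch of the forward/inverted link indexes; plain dicts stand in
    # for the compressed on-disk structures of a real connectivity server.
    from collections import defaultdict

    forward = {                      # page id -> ids of pages it links to
        0: [1, 2],
        1: [2],
        2: [0],
    }

    inverted = defaultdict(list)     # page id -> ids of pages linking to it (back-links)
    for src, dests in forward.items():
        for d in dests:
            inverted[d].append(src)

    print(dict(inverted))            # {1: [0], 2: [0, 1], 0: [2]}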

  8. Large Scale Indexing

  9. What is the best way to exploit all these machines? • What kind of parallelism? • Can’t be fine-grained • Can’t depend on shared-memory (which could fail) • Worker machines should be largely allowed to do their work independently • We may not even know how many (and which) machines may be available…

  10. Map-Reduce Parallelism • Named after the Lisp constructs map and reduce • (reduce #'fn2 (map #'fn1 list)) • Run function fn1 on every item of the list, then reduce the resulting list using fn2 • (reduce #'* (map #'1+ '(4 5 6 7 8 9))) • (reduce #'* '(5 6 7 8 9 10)) • 151200 (= 5*6*7*8*9*10) • (reduce #'+ (map #'primality-test '(num1 num2 …))) • So where is the parallelism? • All the map operations can be done in parallel (e.g. you can test the primality of each of the numbers in parallel). • The overall reduce operation has to be done after the map operation (but can also be parallelized; e.g. assuming the primality-test returns 0 or 1, the reduce operation can partition the list into k smaller lists, add the elements of each list in parallel, and then add the partial results). • Note that the parallelism in both of the above examples depends on the length of the input (the larger the input list, the more parallel operations you can do in theory). • Map-reduce on clusters of computers involves writing your task in map-reduce form • The cluster computing infrastructure will then “parallelize” the map and reduce parts using the available pool of machines (you don’t need to think, while writing the program, about how many machines or which specific machines are used for the parallel tasks) • An open-source environment that provides such an infrastructure is Hadoop • http://hadoop.apache.org/core/ • Qn: Can we bring map-reduce parallelism to indexing?
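
The same pattern can be written in Python, with the map step spread across processes by multiprocessing.Pool and the reduce step done with functools.reduce. This is only a single-machine illustration of where the parallelism sits, not of how Hadoop schedules work.

    # The Lisp example above, restated in Python: Pool.map parallelizes the
    # map step across processes; reduce combines the results afterwards.
    from functools import reduce
    from multiprocessing import Pool
    from operator import mul

    def add_one(x):
        return x + 1

    if __name__ == "__main__":
        with Pool() as pool:
            mapped = pool.map(add_one, [4, 5, 6, 7, 8, 9])   # each element independently
        print(reduce(mul, mapped))                           # 5*6*7*8*9*10 = 151200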

  11. MapReduce These slides are from Rajaraman & Ullman

  12. Single-node architecture [Figure: one node with CPU, memory, and disk; machine learning and statistics work against memory, while “classical” data mining works against the disk]

  13. Commodity Clusters • Web data sets can be very large • Tens to hundreds of terabytes • Cannot mine on a single server (why?) • Standard architecture emerging: • Cluster of commodity Linux nodes • Gigabit ethernet interconnect • How to organize computations on this architecture? • Mask issues such as hardware failure

  14. Cluster Architecture [Figure: racks of 16-64 commodity nodes, each with its own CPU, memory, and disk, connected by a switch per rack; 1 Gbps between any pair of nodes in a rack, 2-10 Gbps backbone between racks]

  15. Stable storage • First order problem: if nodes can fail, how can we store data persistently? • Answer: Distributed File System • Provides global file namespace • Google GFS; Hadoop HDFS; Kosmix KFS • Typical usage pattern • Huge files (100s of GB to TB) • Data is rarely updated in place • Reads and appends are common

  16. Distributed File System • Chunk servers • File is split into contiguous chunks • Typically each chunk is 16-64 MB • Each chunk is replicated (usually 2x or 3x) • Try to keep replicas in different racks • Master node • a.k.a. Name Node in HDFS • Stores metadata • Might be replicated • Client library for file access • Talks to the master to find chunk servers • Connects directly to chunk servers to access data
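
A toy sketch of the read path implied above: the client asks the master only for chunk locations, then fetches bytes directly from a chunk server. All class names, server names, and the 64 MB chunk size below are illustrative assumptions.

    # Toy sketch of the client-side read path: metadata comes from the master,
    # data comes straight from a chunk server (all names are hypothetical).
    CHUNK_SIZE = 64 * 1024 * 1024   # 64 MB chunks

    class ChunkServer:
        def __init__(self, chunks):
            self.chunks = chunks                      # chunk_id -> bytes

        def read(self, chunk_id, offset, length):
            return self.chunks[chunk_id][offset:offset + length]

    class Master:
        def __init__(self, metadata):
            # filename -> list of (chunk_id, [replica chunk-server names])
            self.metadata = metadata

        def locate(self, filename, offset):
            return self.metadata[filename][offset // CHUNK_SIZE]

    def dfs_read(master, servers, filename, offset, length):
        chunk_id, replicas = master.locate(filename, offset)        # metadata only
        return servers[replicas[0]].read(chunk_id, offset % CHUNK_SIZE, length)  # data path bypasses master

    servers = {"cs1": ChunkServer({"c0": b"hello world"})}
    master = Master({"/demo.txt": [("c0", ["cs1", "cs2"])]})
    print(dfs_read(master, servers, "/demo.txt", 0, 5))              # b'hello'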

  17. Warm up: Word Count • We have a large file of words, one word to a line • Count the number of times each distinct word appears in the file • Sample application: analyze web server logs to find popular URLs

  18. Word Count (2) • Case 1: Entire file fits in memory • Case 2: File too large for memory, but all <word, count> pairs fit in memory • Case 3: File on disk, too many distinct words to fit in memory • sort datafile | uniq -c

  19. Word Count (3) • To make it slightly harder, suppose we have a large corpus of documents • Count the number of times each distinct word occurs in the corpus • words(docs/*) | sort | uniq -c • where words takes a file and outputs the words in it, one to a line • The above captures the essence of MapReduce • The great thing is that it is naturally parallelizable

  20. MapReduce: The Map Step [Figure: input key-value pairs are fed through map tasks, each emitting intermediate key-value pairs]

  21. MapReduce: The Reduce Step [Figure: intermediate key-value pairs are grouped by key into key-value groups; reduce turns each group into output key-value pairs]

  22. MapReduce • Input: a set of key/value pairs • User supplies two functions: • map(k,v) → list(k1,v1) • reduce(k1, list(v1)) → v2 • (k1,v1) is an intermediate key/value pair • Output is the set of (k1,v2) pairs

  23. Word Count using MapReduce

      map(key, value):
        // key: document name; value: text of the document
        for each word w in value:
          emit(w, 1)

      reduce(key, values):
        // key: a word; values: an iterator over counts
        result = 0
        for each count v in values:
          result += v
        emit(key, result)
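
The same word count as a single-machine simulation, with an explicit shuffle/group step between map and reduce (a real system would instead partition the intermediate pairs across R reduce workers):

    # Single-machine simulation of the word-count map/shuffle/reduce above.
    from collections import defaultdict

    def map_fn(doc_name, text):
        for word in text.split():
            yield word, 1

    def reduce_fn(word, counts):
        yield word, sum(counts)

    docs = {"d1": "to be or not to be", "d2": "to index the web"}

    # Map phase
    intermediate = []
    for name, text in docs.items():
        intermediate.extend(map_fn(name, text))

    # Shuffle/group phase: collect all values for the same key
    groups = defaultdict(list)
    for word, count in intermediate:
        groups[word].append(count)

    # Reduce phase
    output = dict(pair for word, counts in groups.items() for pair in reduce_fn(word, counts))
    print(output)   # {'to': 3, 'be': 2, 'or': 1, 'not': 1, 'index': 1, 'the': 1, 'web': 1}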

  24. Distributed Execution Overview [Figure: the user program forks a master and worker processes; the master assigns map and reduce tasks; map workers read input splits (Split 0, Split 1, Split 2) and write intermediate results to local disk; reduce workers do remote reads and sorts, then write the output files (Output File 0, Output File 1)]

  25. Data flow • Input and final output are stored on a distributed file system • The scheduler tries to schedule map tasks “close” to the physical storage location of the input data • Intermediate results are stored on the local FS of the map and reduce workers • Output is often the input to another MapReduce task

  26. Coordination • Master data structures • Task status: (idle, in-progress, completed) • Idle tasks get scheduled as workers become available • When a map task completes, it sends the master the location and sizes of its R intermediate files, one for each reducer • Master pushes this info to reducers • Master pings workers periodically to detect failures

  27. Failures • Map worker failure • Map tasks completed or in-progress at worker are reset to idle • Reduce workers are notified when task is rescheduled on another worker • Reduce worker failure • Only in-progress tasks are reset to idle • Master failure • MapReduce task is aborted and client is notified

  28. How many Map and Reduce jobs? • M map tasks, R reduce tasks • Rule of thumb: • Make M and R much larger than the number of nodes in cluster • One DFS chunk per map is common • Improves dynamic load balancing and speeds recovery from worker failure • Usually R is smaller than M, because output is spread across R files

  29. Reading • Jeffrey Dean and Sanjay Ghemawat, MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html • Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung, The Google File System http://labs.google.com/papers/gfs.html

  30. Partition the set of documents into “blocks”; construct an index for each block separately; then merge the indexes
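
A small sketch of this block-then-merge idea in the same map/reduce spirit: each block of documents yields a partial inverted index (the map side), and the partial indexes are merged by concatenating posting lists per term (the reduce side). The documents and ids below are made up.

    # Sketch of blocked index construction followed by a merge of posting lists.
    from collections import defaultdict

    def index_block(block):
        # "map" side: one partial inverted index per block of (doc_id, text) pairs
        partial = defaultdict(list)
        for doc_id, text in block:
            for word in set(text.split()):
                partial[word].append(doc_id)
        return partial

    def merge_indexes(partials):
        # "reduce" side: concatenate posting lists for the same term
        merged = defaultdict(list)
        for partial in partials:
            for word, postings in partial.items():
                merged[word].extend(postings)
        return merged

    blocks = [[(1, "web scale indexing"), (2, "map reduce indexing")],
              [(3, "web crawling")]]
    index = merge_indexes(index_block(b) for b in blocks)
    print(index["web"], index["indexing"])   # [1, 3] [1, 2]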

  31. [Figure: distributed index construction laid out as MapReduce, with the work split between map workers and reduce workers]

  32. Dynamic Indexing “simplest” approach

  33. Efficient Computation of PageRank • How to power-iterate on the web-scale matrix?

  34. Representing the ‘Links’ Table • Stored on disk in binary format • Size for the Stanford WebBase: 1.01 GB • Assumed to exceed main memory • How do we split this?

      Source node (32-bit int)   Outdegree (16-bit int)   Destination nodes (32-bit int)
      0                          4                        12, 26, 58, 94
      1                          3                        5, 56, 69
      2                          5                        1, 9, 10, 36, 78
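
A sketch of packing and reading such records with the field widths shown (32-bit source, 16-bit outdegree, 32-bit destinations). The exact on-disk layout used for the Stanford WebBase is an assumption here; the point is that a sequential binary format lets each pass over Links cost |Links| in I/O.

    # Sketch of 'Links' records in binary form; the layout is an assumption.
    import struct
    from io import BytesIO

    def write_record(out, source, dests):
        out.write(struct.pack("<IH", source, len(dests)))    # source node, outdegree
        out.write(struct.pack(f"<{len(dests)}I", *dests))    # destination nodes

    def read_record(inp):
        header = inp.read(6)
        if len(header) < 6:
            return None                                      # end of file
        source, n = struct.unpack("<IH", header)
        dests = struct.unpack(f"<{n}I", inp.read(4 * n))
        return source, n, list(dests)

    buf = BytesIO()
    write_record(buf, 0, [12, 26, 58, 94])
    write_record(buf, 1, [5, 56, 69])
    buf.seek(0)
    print(read_record(buf))    # (0, 4, [12, 26, 58, 94])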

  35. Algorithm 1 [Figure: the sparse Links table (rows keyed by source node, columns by dest node) is streamed from disk while the Source and Dest rank vectors are held in memory]

      ∀s: Source[s] = 1/N
      while residual > ε {
        ∀d: Dest[d] = 0
        while not Links.eof() {
          Links.read(source, n, dest1, …, destn)
          for j = 1 … n
            Dest[destj] = Dest[destj] + Source[source]/n
        }
        ∀d: Dest[d] = c * Dest[d] + (1-c)/N    /* dampening */
        residual = ||Source − Dest||           /* recompute every few iterations */
        Source = Dest
      }
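
A runnable miniature of Algorithm 1, with an in-memory list standing in for the sequential scan of the on-disk Links table; c and ε match the pseudocode above, and, like the pseudocode as written, it does not redistribute mass from dangling pages.

    # Runnable sketch of Algorithm 1: Source and Dest fit in memory, and the
    # Links table is read in one sequential pass per iteration.
    N = 4
    links = [(0, 2, [1, 2]),      # (source, outdegree, destinations)
             (1, 1, [2]),
             (2, 1, [0]),
             (3, 2, [0, 2])]
    c, epsilon = 0.85, 1e-6

    source = [1.0 / N] * N
    residual = float("inf")
    while residual > epsilon:
        dest = [0.0] * N
        for s, n, dests in links:                    # one sequential pass over Links
            for d in dests:
                dest[d] += source[s] / n
        dest = [c * x + (1 - c) / N for x in dest]   # dampening
        residual = sum(abs(a - b) for a, b in zip(source, dest))
        source = dest

    print([round(x, 4) for x in source])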

  36. Analysis of Algorithm 1 • If memory is big enough to hold Source & Dest • IO cost per iteration is |Links| • Fine for a crawl of 24 M pages • But web ~ 800 M pages in 2/99 [NEC study] • Increase from 320 M pages in 1997 [same authors] • If memory is big enough to hold just Dest • Sort Links on the source field • Read Source sequentially during the rank propagation step • Write Dest to disk to serve as Source for the next iteration • IO cost per iteration is |Source| + |Dest| + |Links| • If memory can’t hold Dest • Random access pattern will make the working set = |Dest| • Thrash!!!
