
Lecture 12: MapReduce : Simplified Data Processing on Large Clusters



  1. Lecture 12: MapReduce: Simplified Data Processing on Large Clusters Xiaowei Yang (Duke University)

  2. Review • What is cloud computing? • Novel cloud applications • Inner workings of a cloud • MapReduce: how to process large datasets using a large cluster • Datacenter networking

  3. Roadmap • Introduction • Examples • How it works • Fault tolerance • Debugging • Performance

  4. What is MapReduce? • An automated parallel programming model for large clusters • The user implements Map() and Reduce() • A framework • The library takes care of the rest: • Data partition and distribution • Parallel computation • Fault tolerance • Load balancing • Useful: widely used inside Google

  5. Map and Reduce • Functions borrowed from functional programming languages (e.g., Lisp) • Map() • Processes a key/value pair to generate intermediate key/value pairs • map (in_key, in_value) -> (out_key, intermediate_value) list • Reduce() • Merges all intermediate values associated with the same key • reduce (out_key, intermediate_value list) -> out_value list
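  A minimal sketch of the functional-programming analogy, using Python's built-in map() and functools.reduce(); this is only an analogy, not Google's API:

     # map() turns each input element into an intermediate (key, value) pair;
     # reduce() then merges the values collected for one key.
     from functools import reduce

     words = ["hello", "world", "hello"]
     pairs = list(map(lambda w: (w, 1), words))        # [("hello", 1), ("world", 1), ("hello", 1)]

     hello_values = [v for k, v in pairs if k == "hello"]
     total = reduce(lambda a, b: a + b, hello_values)  # 2

     print(pairs, total)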

  6. Example: word counting • Map() • Input: <filename, file text> • Parses the file and emits <word, count> pairs • e.g., <"hello", 1> • Reduce() • Sums all values for the same key and emits <word, TotalCount> • e.g., <"hello", (1 1 1 1)> => <"hello", 4>

  7. Example: word counting

     map(String key, String value):
       // key: document name
       // value: document contents
       for each word w in value:
         EmitIntermediate(w, "1");

     reduce(String key, Iterator values):
       // key: a word
       // values: a list of counts
       int result = 0;
       for each v in values:
         result += ParseInt(v);
       Emit(AsString(result));
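  A runnable, single-process sketch of the word-count example above; the real library distributes these steps over many machines, and the function names here are illustrative:

     from collections import defaultdict

     def map_fn(doc_name, doc_text):
         # map(String key, String value): emit a <word, "1"> pair for every word
         for word in doc_text.split():
             yield word, "1"

     def reduce_fn(word, values):
         # reduce(String key, Iterator values): sum the counts for one word
         return str(sum(int(v) for v in values))

     def run_word_count(documents):
         intermediate = defaultdict(list)      # the "shuffle": group values by key
         for name, text in documents.items():
             for key, value in map_fn(name, text):
                 intermediate[key].append(value)
         return {key: reduce_fn(key, values) for key, values in intermediate.items()}

     print(run_word_count({"doc1": "hello world", "doc2": "hello hello"}))
     # {'hello': '3', 'world': '1'}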

  8. Google Computing Environment • Typical clusters contain 1000s of machines • Dual-processor x86s running Linux with 2-4 GB of memory • Commodity networking • Typically 100 Mbps or 1 Gbps • IDE drives attached to individual machines • Distributed file system

  9. How does it work? • From user: • Input/output files • M: number of map tasks • M >> # of worker machines for load balancing • R: number of reduce tasks • W: number of machines • Write map and reduce functions • Submit the job • Requires no knowledge of parallel or distributed systems • What about everything else?
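  A hypothetical sketch of what the user-visible job description might contain; the type and field names are made up for illustration and are not the real API:

     from dataclasses import dataclass
     from typing import Callable, List

     @dataclass
     class MapReduceSpec:
         input_files: List[str]     # input on the distributed file system
         output_dir: str            # where the reduce output files go
         num_map_tasks: int         # M, chosen >> number of worker machines
         num_reduce_tasks: int      # R
         map_fn: Callable           # user-defined map()
         reduce_fn: Callable        # user-defined reduce()

     # Everything else -- partitioning, scheduling, fault tolerance, load
     # balancing -- is handled by the library once the job is submitted.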

  10. Step 1: Data Partition and Distribution • Split an input file into M pieces on distributed file system • Typically ~ 64 MB blocks • Intermediate files created from map tasks are written to local disk • Output files are written to distributed file system
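  A sketch of the split step, assuming an ordinary local file stands in for the distributed file system (the file name is made up):

     import os

     SPLIT_SIZE = 64 * 1024 * 1024             # ~64 MB, the GFS chunk size

     def split_input(path, split_size=SPLIT_SIZE):
         # Yield (offset, length) byte ranges, one per map task.
         total = os.path.getsize(path)
         offset = 0
         while offset < total:
             length = min(split_size, total - offset)
             yield offset, length
             offset += length

     # M is then the number of pieces, e.g.:
     # M = len(list(split_input("big_input.txt")))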

  11. Step 2: Parallel computation • Many copies of user program are started • One instance becomes the Master • Master finds idle machines and assigns them tasks • M map tasks • R reduce tasks

  12. Locality • Tries to exploit data locality by running map tasks on machines that already hold the data • map() task inputs are divided into 64 MB blocks: the same size as Google File System chunks

  13. Step 3: Map Execution • Map workers read the contents of the corresponding input partition • Perform the user-defined map computation to create intermediate <key, value> pairs

  14. Step 4: Output intermediate data • Buffered output pairs are periodically written to local disk • Partitioned into R regions by a partitioning function • The locations of these buffered pairs on local disk are sent to the master, which forwards them to the reduce workers

  15. Partition Function • Partition on the intermediate key • Example partition function: hash(key) mod R • Question: why do we need this? • Example Scenario: • Want to do word counting on 10 documents • 5 map tasks, 2 reduce tasks
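  A sketch of the example partition function, using a stable hash (zlib.crc32) so the assignment is deterministic on every worker. This is also the answer to the question above: every map task routes a given key to the same reduce task, so all counts for one word end up at exactly one reducer.

     import zlib

     R = 2                                     # two reduce tasks, as in the scenario

     def partition(key, num_reduce_tasks=R):
         # hash(key) mod R
         return zlib.crc32(key.encode("utf-8")) % num_reduce_tasks

     for word in ["hello", "world", "the", "map", "reduce"]:
         print(word, "-> reduce task", partition(word))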

  16. Step 5: Reduce Execution • The master notifies a reduce worker of the intermediate data locations • Reduce workers iterate over ordered intermediate data • Data is sorted by the intermediate keys • Why is sorting needed? • For each unique key encountered, the values are passed to the user's reduce function • e.g., <key, [value1, value2, ..., valueN]> • The output of the user's reduce function is written to an output file on the global file system • When all tasks have completed, the master wakes up the user program
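  A sketch of what one reduce worker does with its region of intermediate data; sorting puts equal keys next to each other, so grouping needs only one sequential pass:

     from itertools import groupby
     from operator import itemgetter

     def reduce_fn(word, values):              # as in the word-count sketch
         return str(sum(int(v) for v in values))

     intermediate = [("world", "1"), ("hello", "1"), ("hello", "1"), ("the", "1")]
     intermediate.sort(key=itemgetter(0))      # sort by intermediate key

     for key, group in groupby(intermediate, key=itemgetter(0)):
         values = [v for _, v in group]
         print(key, reduce_fn(key, values))    # one reduce call per unique key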

  17. Observations • No reduce can begin until the map phase is complete • Why? • Tasks are scheduled based on the location of data • If a map worker fails any time before the reduce phase finishes, its task must be completely rerun • The master must communicate the locations of intermediate files • The MapReduce library does most of the hard work

  18. Fault Tolerance • Workers are periodically pinged by the master • No response = failed worker • Tasks of dead workers are reassigned • Input file blocks are stored on multiple machines
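  A rough sketch (not the real master) of the re-execution rule: if a worker misses its pings, mark it failed and put its tasks back on the queue for other machines.

     import time

     PING_TIMEOUT = 60.0   # seconds without a reply before a worker is presumed dead

     def check_workers(workers, task_queue, now=None):
         # workers: dict worker_id -> {"last_pong": float, "tasks": list}
         now = time.time() if now is None else now
         for worker_id, state in list(workers.items()):
             if now - state["last_pong"] > PING_TIMEOUT:
                 task_queue.extend(state["tasks"])   # reschedule the work elsewhere
                 del workers[worker_id]              # treat the worker as failed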

  19. Backup tasks • When the computation is almost done, reschedule in-progress tasks on other machines • Avoids "stragglers" • Reasons for stragglers • Bad disks, background competition, bugs

  20. Refinements • User-specified partition function • e.g., hash(Hostname(urlkey)) mod R • Ordering guarantees • Combiner function • Partial merging before a map worker sends the data over the network • A local reduce • e.g., many <the, 1> pairs can be merged locally (see the sketch below)
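  A sketch of a combiner for word counting: a local reduce run inside the map worker, so thousands of <the, 1> pairs become a single pair before crossing the network.

     from collections import Counter

     def combine(pairs):
         # pairs: (word, "1") tuples emitted by the map function on one worker
         local_counts = Counter()
         for word, one in pairs:
             local_counts[word] += int(one)
         return [(word, str(count)) for word, count in local_counts.items()]

     print(combine([("the", "1")] * 3 + [("a", "1")]))   # [('the', '3'), ('a', '1')]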

  21. Skipping Bad Records • The MapReduce library detects which records cause deterministic crashes • Each worker process installs a signal handler that catches segmentation violations and bus errors • Before dying, it sends a "last gasp" UDP packet to the MapReduce master reporting the record it was processing • When the master has seen more than one failure on a record, it tells workers to skip that record on the next re-execution
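  A rough sketch of the "last gasp" idea. A Python exception stands in for the signal handler that catches segmentation violations in the real workers, and the master address and packet format are invented for illustration:

     import socket
     import struct

     MASTER_ADDR = ("master.example.com", 9999)   # hypothetical

     def process_with_skip(records, user_fn, bad_records):
         # bad_records: sequence numbers the master has told this worker to skip
         sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
         for seq_no, record in enumerate(records):
             if seq_no in bad_records:
                 continue                          # skip the record
             try:
                 user_fn(record)
             except Exception:
                 # Report which record we were on before dying.
                 sock.sendto(struct.pack("!I", seq_no), MASTER_ADDR)
                 raise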

  22. Debugging • Offers human-readable status information via an HTTP server • Users can see completed and in-progress jobs, processing rates, etc.

  23. Performance • Tests run on 1800 machines • 4 GB of memory each • Dual-processor 2 GHz Xeons with Hyper-Threading • Dual 160 GB IDE disks • Gigabit Ethernet per machine • Run over a weekend, when the machines were mostly idle • Benchmark: Sort • Sort 10^10 100-byte records (~1 TB)

  24. Grep

  25. Sort performance: normal execution vs. no backup tasks vs. 200 tasks killed (figure)

  26. Google usage

  27. More examples • Distributed Grep • Count of URL Access Frequency: the total number of accesses to each URL in web logs • Inverted Index: the list of documents containing each word
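  A sketch of the inverted-index example, in the same single-process style as the word-count sketch:

     from collections import defaultdict

     def index_map(doc_id, text):
         for word in set(text.split()):
             yield word, doc_id                    # emit <word, document ID>

     def index_reduce(word, doc_ids):
         return sorted(doc_ids)                    # the documents containing the word

     docs = {"d1": "map and reduce", "d2": "reduce only"}
     intermediate = defaultdict(list)
     for doc_id, text in docs.items():
         for word, value in index_map(doc_id, text):
             intermediate[word].append(value)

     print({word: index_reduce(word, ids) for word, ids in intermediate.items()})
     # e.g. 'reduce' maps to ['d1', 'd2']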

  28. Conclusions • Simplifies large-scale computations that fit this model • Allows the user to focus on the problem without worrying about parallelization details • The underlying computer architecture matters little to the programmer • Portable model

  29. Project proposal

  30. Count of URL Access Frequency • The map function processes logs of webpage requests and outputs <URL, 1>. • The reduce function adds together all values for the same URL and emits a <URL, total count> pair.
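  Sketched in the same style, with a made-up log format in which the URL is the first whitespace-separated field of each line:

     def url_map(log_name, log_text):
         for line in log_text.splitlines():
             if line.strip():
                 yield line.split()[0], 1          # emit <URL, 1>

     def url_reduce(url, counts):
         return url, sum(counts)                   # emit <URL, total count>

     # e.g. a log with two requests for /index.html and one for /about yields
     # ("/index.html", 2) and ("/about", 1) after the reduce step.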
