
Large-Scale Data Processing with MapReduce

AAAI 2011 Tutorial. Large-Scale Data Processing with MapReduce. Jimmy Lin, University of Maryland. Sunday, August 7, 2011. These slides are available on my homepage at http://www.umiacs.umd.edu/~jimmylin/.


Presentation Transcript


  1. AAAI 2011 Tutorial Large-Scale Data Processing with MapReduce Jimmy Lin University of Maryland Sunday, August 7, 2011 These slides are available on my homepage at http://www.umiacs.umd.edu/~jimmylin/ This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States license. See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details

  2. First things first… • About me • Course history • Audience survey

  3. Agenda • Setting the stage: Why large data? Why is this different? • Introduction to MapReduce • MapReduce algorithm design • Text retrieval • Managing relational data • Graph algorithms • Beyond MapReduce

  4. Expectations • Focus on “thinking at scale” • Deconstruction into “design patterns” • Basic intuitions, not fancy math • Mapping well-known algorithms to MapReduce • Not a tutorial on programming Hadoop • Entry point to book

  5. Setting the stage Introduction to MapReduce MapReduce algorithm design Text retrieval Managing relational data Graph algorithms Beyond MapReduce Setting the Stage:Why large data?

  6. Source: Wikipedia (Everest)

  7. How much data? [Organization logos omitted] 20 PB processed a day (2008); Wayback Machine: 3 PB + 100 TB/month (3/2009); 6.5 PB of user data + 50 TB/day (5/2009); LHC: 15 PB a year (any day now); 36 PB of user data + 80-90 TB/day (6/2010); LSST: 6-10 PB a year (~2015). “640K ought to be enough for anybody.”

  8. No data like more data! s/knowledge/data/g; [Charts from (Banko and Brill, ACL 2001) and (Brants et al., EMNLP 2007)] How do we get here if we’re not Google?

  9. cheap commodity clusters + simple, distributed programming models = data-intensive computing for the masses!

  10. Setting the stage Introduction to MapReduce MapReduce algorithm design Text retrieval Managing relational data Graph algorithms Beyond MapReduce Setting the Stage:Why is this different?

  11. Parallel computing is hard! • Fundamental issues: scheduling, data distribution, synchronization, inter-process communication, robustness, fault tolerance, … • Different programming models: message passing vs. shared memory • Architectural issues: Flynn’s taxonomy (SIMD, MIMD, etc.), network topology, bisection bandwidth, UMA vs. NUMA, cache coherence • Common problems: livelock, deadlock, data starvation, priority inversion, … dining philosophers, sleeping barbers, cigarette smokers, … • Different programming constructs: mutexes, condition variables, barriers, … masters/slaves, producers/consumers, work queues, … The reality: the programmer shoulders the burden of managing concurrency… (I want my students developing new algorithms, not debugging race conditions)

  12. Where the rubber meets the road • Concurrency is difficult to reason about • At the scale of datacenters (even across datacenters) • In the presence of failures • In terms of multiple interacting services • The reality: • Lots of one-off solutions, custom code • Write your own dedicated library, then program with it • Burden on the programmer to explicitly manage everything

  13. Source: Ricardo Guimarães Herrmann

  14. I think there is a world market for about five computers. The datacenter is the computer! Source: NY Times (6/14/2006)

  15. What’s the point? • It’s all about the right level of abstraction • Hide system-level details from the developers • No more race conditions, lock contention, etc. • Separating the what from how • Developer specifies the computation that needs to be performed • Execution framework (“runtime”) handles actual execution The datacenter is the computer!

  16. “Big Ideas” • Scale “out”, not “up” • Limits of SMP and large shared-memory machines • Move processing to the data • Clusters have limited bandwidth • Process data sequentially, avoid random access • Seeks are expensive, disk throughput is reasonable • Seamless scalability • From the mythical man-month to the tradable machine-hour

  17. Setting the stage Introduction to MapReduce MapReduce algorithm design Text retrieval Managing relational data Graph algorithms Beyond MapReduce Introduction to MapReduce

  18. Typical Large-Data Problem • Iterate over a large number of records • Extract something of interest from each • Shuffle and sort intermediate results • Aggregate intermediate results • Generate final output [In the slide, a Map bracket spans the first two steps and a Reduce bracket spans the last two.] Key idea: provide a functional abstraction for these two operations (Dean and Ghemawat, OSDI 2004)

  19. Roots in Functional Programming [Diagram: Map applies a function f to every element of a list independently; Fold combines the elements sequentially using a function g and an accumulator.]
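To make the functional-programming connection concrete, here is a minimal sketch in Java (not part of the original slides) that expresses the same two building blocks with the streams API: map applies a function f to each element independently, and reduce plays the role of fold, combining elements with a function g starting from an initial accumulator.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MapFoldDemo {
  public static void main(String[] args) {
    List<Integer> input = Arrays.asList(1, 2, 3, 4, 5);

    // Map: apply f (here, squaring) to each element independently.
    // No element depends on any other, so the calls could run in any order.
    List<Integer> squared = input.stream()
        .map(x -> x * x)
        .collect(Collectors.toList());   // [1, 4, 9, 16, 25]

    // Fold: combine the elements with g (here, addition) and an identity value.
    int sum = squared.stream()
        .reduce(0, Integer::sum);        // 55

    System.out.println(squared + " folds to " + sum);
  }
}
```

The statelessness of map is exactly the property MapReduce exploits to parallelize the first phase across a cluster, while fold corresponds to the aggregation performed in the reducers.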

  20. MapReduce • Programmers specify two functions: map (k, v) → <k’, v’>* reduce (k’, v’) → <k’, v’>* • All values with the same key are sent to the same reducer • The execution framework handles everything else…

  21. [Dataflow diagram: input key-value pairs (k1,v1) through (k6,v6) are processed by four mappers, which emit intermediate pairs such as (a,1), (b,2), (c,3), (c,6), (a,5), (c,2), (b,7), (c,8); shuffle and sort aggregates values by key, giving a→(1,5), b→(2,7), c→(2,3,6,8); three reducers then produce the final outputs (r1,s1), (r2,s2), (r3,s3).]

  22. MapReduce • Programmers specify two functions: map (k, v) → <k’, v’>* reduce (k’, v’) → <k’, v’>* • All values with the same key are sent to the same reducer • The execution framework handles everything else… What’s “everything else”?

  23. MapReduce “Runtime” • Handles scheduling • Assigns workers to map and reduce tasks • Handles “data distribution” • Moves processes to data • Handles synchronization • Gathers, sorts, and shuffles intermediate data • Handles errors and faults • Detects worker failures and restarts • Everything happens on top of a distributed FS

  24. MapReduce • Programmers specify two functions: map (k, v) → <k’, v’>* reduce (k’, v’) → <k’, v’>* • All values with the same key are reduced together • The execution framework handles everything else… • Not quite…usually, programmers also specify: partition (k’, number of partitions) → partition for k’ • Often a simple hash of the key, e.g., hash(k’) mod n • Divides up key space for parallel reduce operations combine (k’, v’) → <k’, v’>* • Mini-reducers that run in memory after the map phase • Used as an optimization to reduce network traffic
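For the partition function just described, the following sketch mirrors the default hash(k') mod n behavior; the class name HashKeyPartitioner is mine, not the tutorial's, and Hadoop already ships an equivalent HashPartitioner.

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Assigns each intermediate key to a reducer: a simple hash of the key, mod n.
public class HashKeyPartitioner extends Partitioner<Text, IntWritable> {
  @Override
  public int getPartition(Text key, IntWritable value, int numPartitions) {
    // Mask off the sign bit so the result of the modulus is never negative.
    return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
  }
}
```

A custom partitioner only becomes necessary when default hashing would break an algorithm's assumptions, for example when related keys must reach the same reducer.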

  25. [Dataflow diagram: the same pipeline as slide 21, now with a combiner and a partitioner after each mapper; the combiner pre-aggregates locally (for example, (c,3) and (c,6) from one mapper become (c,9)), the partitioner assigns each intermediate key to a reducer, and shuffle and sort then delivers a→(1,5), b→(2,7), c→(2,9,8) to the reducers, which emit (r1,s1), (r2,s2), (r3,s3).]

  26. Two more details… • Barrier between map and reduce phases • But we can begin copying intermediate data earlier • Keys arrive at each reducer in sorted order • No enforced ordering across reducers

  27. “Hello World”: Word Count
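The code listing shown on this slide does not survive in the transcript. As a stand-in, here is a minimal word count in Hadoop's newer org.apache.hadoop.mapreduce API; the class names are placeholders rather than the tutorial's own.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

  // map (k, v) -> <word, 1>* : emit a count of one for every token in the line.
  public static class TokenizerMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // reduce (word, [counts]) -> <word, sum> : all counts for a word arrive together.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }
}
```

The mapper reuses the word and ONE objects across calls instead of allocating new ones per token, which anticipates the object-creation advice on slide 45.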

  28. MapReduce can refer to… • The programming model • The execution framework (aka “runtime”) • The specific implementation Usage is usually clear from context!

  29. MapReduce Implementations • Google has a proprietary implementation in C++ • Bindings in Java, Python • Hadoop is an open-source implementation in Java • Original development led by Yahoo • Now an Apache open source project • Emerging as the de facto big data stack • Rapidly expanding software ecosystem • Lots of custom research implementations • For GPUs, cell processors, etc. • Includes variations of the basic programming model Most of these slides are focused on Hadoop

  30. [Architecture diagram, adapted from (Dean and Ghemawat, OSDI 2004): (1) the user program submits the job to the master; (2) the master schedules map and reduce tasks onto workers; (3) map workers read their input splits (split 0 through split 4); (4) map output is written to local disk as intermediate files; (5) reduce workers remote-read the intermediate data; (6) reduce workers write the output files (output file 0, output file 1).]

  31. How do we get data to the workers? [Diagram: compute nodes pulling data over the network from SAN/NAS storage.] What’s the problem here?

  32. Distributed File System • Don’t move data to workers… move workers to the data! • Store data on the local disks of nodes in the cluster • Start up the workers on the node that has the data local • A distributed file system is the answer • GFS (Google File System) for Google’s MapReduce • HDFS (Hadoop Distributed File System) for Hadoop

  33. GFS: Assumptions • Commodity hardware over “exotic” hardware • Scale “out”, not “up” • High component failure rates • Inexpensive commodity components fail all the time • “Modest” number of huge files • Multi-gigabyte files are common, if not encouraged • Files are write-once, mostly appended to • Perhaps concurrently • Large streaming reads over random access • High sustained throughput over low latency GFS slides adapted from material by (Ghemawat et al., SOSP 2003)

  34. GFS: Design Decisions • Files stored as chunks • Fixed size (64MB) • Reliability through replication • Each chunk replicated across 3+ chunkservers • Single master to coordinate access, keep metadata • Simple centralized management • No data caching • Little benefit due to large datasets, streaming reads • Simplify the API • Push some of the issues onto the client (e.g., data layout) HDFS = GFS clone (same basic ideas)

  35. From GFS to HDFS • Terminology differences: • GFS master = Hadoop namenode • GFS chunkservers = Hadoop datanodes • Functional differences: • File appends in HDFS are relatively new • HDFS performance is (likely) slower For the most part, we’ll use the Hadoop terminology…

  36. HDFS Architecture [Diagram, adapted from (Ghemawat et al., SOSP 2003): the application’s HDFS client sends (file name, block id) requests to the HDFS namenode, which holds the file namespace (e.g., /foo/bar) and answers with (block id, block location); the client then requests (block id, byte range) directly from the HDFS datanodes and receives the block data; the namenode also sends instructions to the datanodes and receives datanode state, and each datanode stores blocks (e.g., block 3df2) on its local Linux file system.]

  37. Namenode Responsibilities • Managing the file system namespace: • Holds file/directory structure, metadata, file-to-block mapping, access permissions, etc. • Coordinating file operations: • Directs clients to datanodes for reads and writes • No data is moved through the namenode • Maintaining overall health: • Periodic communication with the datanodes • Block re-replication and rebalancing • Garbage collection
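As an illustration of this read path (not part of the original slides), the sketch below uses Hadoop's FileSystem client API. The open() call involves the namenode only for metadata and block locations; the bytes then stream directly from the datanodes. The path /foo/bar simply reuses the example name from the architecture diagram.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);       // client handle backed by the namenode
    Path path = new Path("/foo/bar");           // example path from the slide

    // The namenode supplies block locations; the data itself comes from datanodes.
    try (FSDataInputStream in = fs.open(path)) {
      byte[] buffer = new byte[4096];
      int n;
      while ((n = in.read(buffer)) != -1) {
        System.out.write(buffer, 0, n);
      }
      System.out.flush();
    }
  }
}
```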

  38. Putting everything together… [Diagram: the namenode (running the namenode daemon) and the job submission node (running the jobtracker) sit above a set of slave nodes; each slave node runs a tasktracker and a datanode daemon on top of its local Linux file system.]

  39. Setting the stage Introduction to MapReduce MapReduce algorithm design Text retrieval Managing relational data Graph algorithms Beyond MapReduce MapReduce Algorithm Design

  40. MapReduce: Recap • Programmers must specify: map (k, v) → <k’, v’>* reduce (k’, v’) → <k’, v’>* • All values with the same key are reduced together • Optionally, also: partition (k’, number of partitions) → partition for k’ • Often a simple hash of the key, e.g., hash(k’) mod n • Divides up key space for parallel reduce operations combine (k’, v’) → <k’, v’>* • Mini-reducers that run in memory after the map phase • Used as an optimization to reduce network traffic • The execution framework handles everything else…

  41. [Dataflow diagram: the complete pipeline from slide 25 again: mappers, combiners, and partitioners feed shuffle and sort, which aggregates values by key (a→(1,5), b→(2,7), c→(2,9,8)) for the reducers, which emit (r1,s1), (r2,s2), (r3,s3).]

  42. “Everything Else” • The execution framework handles everything else… • Scheduling: assigns workers to map and reduce tasks • “Data distribution”: moves processes to data • Synchronization: gathers, sorts, and shuffles intermediate data • Errors and faults: detects worker failures and restarts • Limited control over data and execution flow • All algorithms must be expressed in m, r, c, p • You don’t know: • Where mappers and reducers run • When a mapper or reducer begins or finishes • Which input a particular mapper is processing • Which intermediate key a particular reducer is processing

  43. Tools for Synchronization • Cleverly-constructed data structures • Bring partial results together • Sort order of intermediate keys • Control order in which reducers process keys • Partitioner • Control which reducer processes which keys • Preserving state in mappers and reducers • Capture dependencies across multiple keys and values

  44. Preserving State [Diagram: one mapper object and one reducer object per task, each holding its own state; the API provides an initialization hook (configure), one call to map per input key-value pair or one call to reduce per intermediate key, and a cleanup hook (close).]

  45. Scalable Hadoop Algorithms: Themes • Avoid object creation • Inherently costly operation • Garbage collection • Avoid buffering • Limited heap size • Works for small datasets, but won’t scale!

  46. Importance of Local Aggregation • Ideal scaling characteristics: • Twice the data, twice the running time • Twice the resources, half the running time • Why can’t we achieve this? • Synchronization requires communication • Communication kills performance • Thus… avoid communication! • Reduce intermediate data via local aggregation • Combiners can help
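One concrete way to get this local aggregation in Hadoop, sketched here assuming the WordCount mapper and reducer classes from the slide-27 example, is to register the reducer as a combiner in the job driver. This is safe for word count because summation is associative and commutative; it is not safe for arbitrary reduce functions.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCountDriver.class);
    job.setMapperClass(WordCount.TokenizerMapper.class);
    // Local aggregation: run the reducer logic as a combiner on map output,
    // cutting the volume of intermediate data shuffled across the network.
    job.setCombinerClass(WordCount.IntSumReducer.class);
    job.setReducerClass(WordCount.IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```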

  47. Shuffle and Sort [Diagram: on the mapper side, map output fills a circular in-memory buffer, spills to disk (optionally through the combiner), and the spills are merged into intermediate files on local disk; the reducer fetches its partition from this mapper and from other mappers, merging the segments (again optionally through the combiner) before reducing, while other reducers fetch their partitions in parallel.]

  48. Word Count: Baseline What’s the impact of combiners?

  49. Word Count: Version 1 Are combiners still needed?
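The listing for this slide is also missing from the transcript. One plausible reading of “Version 1”, consistent with the question it poses, is local aggregation within a single call to map: tally the counts for one input record in an associative array, then emit each distinct word once. A sketch under that assumption (class name mine):

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Aggregates counts within one input record (e.g., one document) before emitting,
// so repeated words in the same record produce a single intermediate pair.
public class PerRecordAggregatingMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    Map<String, Integer> counts = new HashMap<>();
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      counts.merge(itr.nextToken(), 1, Integer::sum);
    }
    for (Map.Entry<String, Integer> e : counts.entrySet()) {
      context.write(new Text(e.getKey()), new IntWritable(e.getValue()));
    }
  }
}
```

Combiners are still useful here: the aggregation only spans one record, so the same word emitted by different map() calls is still shuffled separately.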

  50. Word Count: Version 2 Key: preserve state across input key-value pairs! Are combiners still needed?
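The listing is again absent from the transcript; the pattern the slide title points to, in-mapper combining, can be sketched as follows: the counts map is created in the setup() hook (configure in the old API), accumulated across every call to map(), and emitted once in cleanup() (close in the old API). Class name mine.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// In-mapper combining: state persists across all input key-value pairs of a task.
public class InMapperCombiningMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {

  private Map<String, Integer> counts;

  @Override
  protected void setup(Context context) {
    counts = new HashMap<>();                             // initialization hook
  }

  @Override
  protected void map(LongWritable key, Text value, Context context) {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      counts.merge(itr.nextToken(), 1, Integer::sum);     // aggregate, emit nothing yet
    }
  }

  @Override
  protected void cleanup(Context context)
      throws IOException, InterruptedException {
    // Cleanup hook: emit at most one pair per distinct word seen by this task.
    for (Map.Entry<String, Integer> e : counts.entrySet()) {
      context.write(new Text(e.getKey()), new IntWritable(e.getValue()));
    }
  }
}
```

With this pattern a separate combiner has little left to do, since each mapper already emits at most one pair per distinct word, but the trade-off (as slide 45 warns) is that the counts map must fit in the task's limited heap.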
