
MapReduce Theory and Practice



  1. MapReduce Theory and Practice http://net.pku.edu.cn/~course/cs402/2010/ 彭波 pb@net.pku.edu.cn 北京大学信息科学技术学院 7/15/2010 Some slides borrowed from Jimmy Lin and Aaron Kimball

  2. Outline • Functional Language and MapReduce • MapReduce Basics • MapReduce Algorithm Design • Hadoop and Java Practice

  3. Functional Language and MapReduce

  4. What is Functional Programming? • In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast with the imperative programming style that emphasizes changes in state.[1]

  5. Example Summing the integers 1 to 10 in Java:
total = 0;
for (i = 1; i <= 10; ++i)
  total = total + i;
The computation method is variable assignment.

  6. Example Summing the integers 1 to 10 in Haskell:
sum [1..10]
The computation method is function application.

  7. Why is it Useful? • The abstract nature of functional programming leads to considerably simpler programs; • It also supports a number of powerful new ways to structure and reason about programs.

  8. Functional Programming Review • Functional operations do not modify data structures: • they always create new ones • Original data still exists in unmodified form • Data flows are implicit in program design • Order of operations does not matter

  9. Functional Programming Review fun foo(l: int list) = sum(l) + mul(l) + length(l) • Order of sum(), mul(), etc. does not matter • They do not modify l

  10. Functional Updates Do Not Modify Structures
fun append(x, lst) = let val lst' = reverse lst in reverse (x :: lst') end
The append() function reverses the list, attaches the new element to the front of the reversed list, and reverses the result, which leaves x appended at the end. But it never modifies lst!

  11. Functions Can Be Used As Arguments fun DoDouble(f, x) = f (f x) It does not matter what f does to its argument; DoDouble() will do it twice. A function is called higher-order if it takes a function as an argument or returns a function as a result

  12. Map map f lst: ('a -> 'b) -> ('a list) -> ('b list) Creates a new list by applying f to each element of the input list; returns output in order.

  13. Fold fold f x0 lst: ('a * 'b -> 'b) -> 'b -> ('a list) -> 'b Moves across a list, applying f to each element plus an accumulator. f returns the next accumulator value, which is combined with the next element of the list.

  14. fold left vs. fold right • Order of list elements can be significant • Fold left moves left-to-right across the list • Fold right moves from right-to-left SML Implementation:
fun foldl f a [] = a
  | foldl f a (x::xs) = foldl f (f(x, a)) xs
fun foldr f a [] = a
  | foldr f a (x::xs) = f(x, (foldr f a xs))

  15. Example fun foo(l: int list) = sum(l) + mul(l) + length(l) How can we implement this using map and foldl?

  16. Example (Solved) fun foo(l: int list) = sum(l) + mul(l) + length(l)
fun sum(lst) = foldl (fn (x,a)=>a+x) 0 lst
fun mul(lst) = foldl (fn (x,a)=>a*x) 1 lst
fun length(lst) = foldl (fn (x,a)=>a+1) 0 lst

  17. map Implementation • This implementation moves left-to-right across the list, mapping elements one at a time • … But does it need to?
fun map f [] = []
  | map f (x::xs) = (f x) :: (map f xs)

  18. Implicit Parallelism In map • In a purely functional setting, elements of a list being computed by map cannot see the effects of the computations on other elements • If the order in which f is applied to the elements of the list does not matter, we can reorder or parallelize execution • This is the “secret” that MapReduce exploits

  19. References • http://net.pku.edu.cn/~course/cs501/2008/resource/haskell/functional.ppt • http://net.pku.edu.cn/~course/cs501/2008/resource/haskell/

  20. MapReduce Basics

  21. Typical Large-Data Problem • Iterate over a large number of records • Extract something of interest from each (Map) • Shuffle and sort intermediate results • Aggregate intermediate results (Reduce) • Generate final output Key idea: provide a functional abstraction for these two operations (Dean and Ghemawat, OSDI 2004)

  22. Roots in Functional Programming [Figure: map applies f independently to each element of the input list; fold then sweeps an accumulator across the results, combining them with g]

  23. MapReduce Programmers specify two functions: map (k, v) → <k’, v’>* reduce (k’, v’) → <k’, v’>* All values with the same key are sent to the same reducer The execution framework handles everything else…

  24. [Figure: MapReduce data flow: input pairs (k1,v1) through (k6,v6) are processed by four mappers, which emit the intermediate pairs (a,1), (b,2), (c,3), (c,6), (a,5), (c,2), (b,7), (c,8); shuffle and sort aggregates values by key into a → (1,5), b → (2,7), c → (2,3,6,8); three reducers then produce the final pairs (r1,s1), (r2,s2), (r3,s3)]

  25. MapReduce Programmers specify two functions: map (k, v) → <k’, v’>* reduce (k’, v’) → <k’, v’>* All values with the same key are sent to the same reducer The execution framework handles everything else… What’s “everything else”?

  26. MapReduce “Runtime” • Handles scheduling: assigns workers to map and reduce tasks • Handles “data distribution”: moves processes to data • Handles synchronization: gathers, sorts, and shuffles intermediate data • Handles errors and faults: detects worker failures and restarts • Everything happens on top of a distributed FS (later)

  27. MapReduce Programmers specify two functions: map (k, v) → <k’, v’>* reduce (k’, v’) → <k’, v’>* All values with the same key are reduced together. The execution framework handles everything else… Not quite… usually, programmers also specify: • partition (k’, number of partitions) → partition for k’ • Often a simple hash of the key, e.g., hash(k’) mod n • Divides up the key space for parallel reduce operations • combine (k’, v’) → <k’, v’>* • Mini-reducers that run in memory after the map phase • Used as an optimization to reduce network traffic
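In Hadoop’s Java API, a partitioner of the “hash of the key, mod n” kind looks like the sketch below; the class name is illustrative, and Hadoop’s default HashPartitioner works essentially this way:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class HashKeyPartitioner extends Partitioner<Text, IntWritable> {
  @Override
  public int getPartition(Text key, IntWritable value, int numPartitions) {
    // Mask off the sign bit so the result is non-negative, then mod n.
    return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
  }
}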

  28. [Figure: the same data flow with combiners and partitioners: each mapper’s output first passes through a combiner (for example, one mapper’s (c,3) and (c,6) are combined into (c,9)), then a partitioner assigns each intermediate key to a reducer; shuffle and sort aggregates the combined values by key before the reduce phase]

  29. Two more details… • Barrier between map and reduce phases • But we can begin copying intermediate data earlier • Keys arrive at each reducer in sorted order • No enforced ordering across reducers

  30. “Hello World”: Word Count
Map(String docid, String text):
  for each word w in text:
    Emit(w, 1);
Reduce(String term, Iterable<Int> values):
  int sum = 0;
  for each v in values:
    sum += v;
  Emit(term, sum);

  31. MapReduce can refer to… • The programming model • The execution framework (aka “runtime”) • The specific implementation Usage is usually clear from context!

  32. MapReduce Implementations • Google has a proprietary implementation in C++ • Bindings in Java, Python • Hadoop is an open-source implementation in Java • Development led by Yahoo, used in production • Now an Apache project • Rapidly expanding software ecosystem • Lots of custom research implementations • For GPUs, cell processors, etc.

  33. [Figure: execution overview, adapted from (Dean and Ghemawat, OSDI 2004): (1) the user program submits the job to the master; (2) the master schedules map tasks and reduce tasks onto workers; (3) map workers read their input splits; (4) map output is written to intermediate files on local disk; (5) reduce workers remote-read the intermediate files; (6) reduce workers write the output files]

  34. MapReduce Algorithm Design

  35. “Everything Else” • The execution framework handles everything else… • Scheduling: assigns workers to map and reduce tasks • “Data distribution”: moves processes to data • Synchronization: gathers, sorts, and shuffles intermediate data • Errors and faults: detects worker failures and restarts • Limited control over data and execution flow • All algorithms must be expressed in m, r, c, p • You don’t know: • Where mappers and reducers run • When a mapper or reducer begins or finishes • Which input a particular mapper is processing • Which intermediate key a particular reducer is processing

  36. Tools for Programmer • Cleverly-constructed data structures • Bring partial results together • Sort order of intermediate keys • Control order in which reducers process keys • Partitioner • Control which reducer processes which keys • Preserving state in mappers and reducers • Capture dependencies across multiple keys and values

  37. Preserving State [Figure: mapper and reducer object lifecycles] • The framework creates one mapper object and one reducer object per task, so instance state persists for the life of the task • configure: API initialization hook, called once per task • map / reduce: one call per input key-value pair (mapper) or one call per intermediate key (reducer) • close: API cleanup hook, called once per task A minimal Java sketch of these hooks follows below.
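The sketch below shows these hooks in Hadoop’s 0.20-style Java API (org.apache.hadoop.mapreduce), where the initialization and cleanup hooks are named setup() and cleanup(); the record counting here is purely illustrative:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LifecycleMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
  private int state;                        // instance field: lives for the whole task

  @Override
  protected void setup(Context context) {   // initialization hook, called once per task
    state = 0;
  }

  @Override
  protected void map(LongWritable key, Text value, Context context) {
    state++;                                // one call per input key-value pair
  }

  @Override
  protected void cleanup(Context context)   // cleanup hook, called once per task
      throws IOException, InterruptedException {
    context.write(new Text("records"), new IntWritable(state));
  }
}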

  38. Scalable Hadoop Algorithms: Themes • Avoid object creation • Inherently costly operation • Garbage collection • Avoid buffering • Limited heap size • Works for small datasets, but won’t scale!

  39. Importance of Local Aggregation • Ideal scaling characteristics: • Twice the data, twice the running time • Twice the resources, half the running time • Why can’t we achieve this? • Synchronization requires communication • Communication kills performance • Thus… avoid communication! • Reduce intermediate data via local aggregation • Combiners can help

  40. Shuffle and Sort [Figure: on the map side, output accumulates in an in-memory circular buffer and spills to disk, passing through the combiner; the spills (on disk) are merged, possibly through the combiner again, into intermediate files (on disk), from which other reducers fetch their partitions; on the reduce side, each reducer merges the pieces fetched from all mappers before reducing]

  41. Word Count: Baseline The baseline mapper emits (w, 1) for every word occurrence and the reducer sums the counts per word; a sketch follows below. What’s the impact of combiners?
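The code on this slide is a figure in the original deck; below is a minimal Java sketch of the baseline in Hadoop’s 0.20-style API (class names are illustrative). Without a combiner, every (word, 1) pair is shuffled across the network:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountBaseline {
  public static class TokenMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();   // reused to avoid object creation

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);           // emit (w, 1) for every occurrence
      }
    }
  }

  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      context.write(key, new IntWritable(sum));  // total count for this word
    }
  }
}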

  42. Word Count: Version 1 This version aggregates counts inside each map() call with an associative array, emitting one pair per distinct word per input record; a sketch follows below. Are combiners still needed?
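A sketch of Version 1 under the same assumptions as the baseline above; only the mapper changes, tallying counts in an associative array local to each map() call:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class PerRecordAggregatingMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {
  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    Map<String, Integer> counts = new HashMap<String, Integer>();
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {           // tally within this one record
      String w = itr.nextToken();
      Integer c = counts.get(w);
      counts.put(w, c == null ? 1 : c + 1);
    }
    for (Map.Entry<String, Integer> e : counts.entrySet()) {
      context.write(new Text(e.getKey()), new IntWritable(e.getValue()));
    }
  }
}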

  43. Word Count: Version 2 Key idea: preserve state across input key-value pairs! The tally becomes mapper state that survives across map() calls and is emitted only at cleanup; a sketch follows below. Are combiners still needed?
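A sketch of Version 2, in-mapper combining: the associative array becomes an instance field, so it survives across all map() calls for the task and is flushed once in cleanup():

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class InMapperCombiningMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {
  private final Map<String, Integer> counts = new HashMap<String, Integer>();

  @Override
  protected void map(LongWritable key, Text value, Context context) {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      String w = itr.nextToken();
      Integer c = counts.get(w);
      counts.put(w, c == null ? 1 : c + 1);  // state preserved across calls
    }
  }

  @Override
  protected void cleanup(Context context)
      throws IOException, InterruptedException {
    for (Map.Entry<String, Integer> e : counts.entrySet()) {
      context.write(new Text(e.getKey()), new IntWritable(e.getValue()));
    }
  }
}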

  44. Design Pattern for Local Aggregation • “In-mapper combining” • Fold the functionality of the combiner into the mapper by preserving state across multiple map calls • Advantages • Speed • Why is this faster than actual combiners? • Disadvantages • Explicit memory management required • Potential for order-dependent bugs

  45. Combiner Design • Combiners and reducers share the same method signature • Sometimes, reducers can serve as combiners • Often, not… • Remember: combiners are optional optimizations • Should not affect algorithm correctness • May be run 0, 1, or multiple times • Example: find the average of all integers associated with the same key

  46. Computing the Mean: Version 1 The mapper passes (key, value) pairs through unchanged, and the reducer computes the mean directly; a sketch follows below. Why can’t we use the reducer as the combiner?
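A sketch of the Version 1 reducer. Using it as a combiner would be incorrect because a mean of partial means is not the overall mean: mean(1,2) = 1.5 and mean(3,4,5) = 4, but mean(1.5, 4) = 2.75, while the true mean of 1..5 is 3:

import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MeanReducerV1
    extends Reducer<Text, IntWritable, Text, DoubleWritable> {
  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0, cnt = 0;
    for (IntWritable v : values) { sum += v.get(); cnt++; }
    context.write(key, new DoubleWritable((double) sum / cnt));  // the mean
  }
}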

  47. Computing the Mean: Version 2 Here a combiner emits partial (sum, count) pairs while the mapper still emits raw values. Why doesn’t this work? A combiner’s input and output types must match the mapper’s output type, and a combiner may run zero, one, or multiple times, so correctness cannot depend on it running at all.

  48. Computing the Mean: Version 3 Fixed? Yes: now the mapper itself emits (value, 1) pairs, so the combiner consumes and produces the same (sum, count) type and can safely run any number of times; a sketch follows below.
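A sketch of the Version 3 combiner, assuming partial sums and counts travel as a comma-separated Text pair (stock Hadoop has no built-in pair type, so this encoding is purely illustrative); the mapper emits (key, "v,1") for each value v, and the final reducer accumulates the same way before dividing sum by count:

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SumCountCombiner extends Reducer<Text, Text, Text, Text> {
  @Override
  protected void reduce(Text key, Iterable<Text> values, Context context)
      throws IOException, InterruptedException {
    long sum = 0, cnt = 0;
    for (Text v : values) {
      String[] parts = v.toString().split(",");   // "partialSum,partialCount"
      sum += Long.parseLong(parts[0]);
      cnt += Long.parseLong(parts[1]);
    }
    context.write(key, new Text(sum + "," + cnt)); // same type in and out
  }
}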

  49. Computing the Mean: Version 4 The in-mapper combining version: per-key running sums and counts are kept as mapper state and flushed in the cleanup hook; a sketch follows below. Are combiners still needed?
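A sketch of Version 4 under the same Text-encoded pair assumption as above; the input key/value types (Text keys, LongWritable values) are also assumptions for illustration:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MeanMapperV4 extends Mapper<Text, LongWritable, Text, Text> {
  private final Map<String, long[]> partials = new HashMap<String, long[]>();

  @Override
  protected void map(Text key, LongWritable value, Context context) {
    long[] p = partials.get(key.toString());
    if (p == null) { p = new long[2]; partials.put(key.toString(), p); }
    p[0] += value.get();   // running sum for this key
    p[1] += 1;             // running count for this key
  }

  @Override
  protected void cleanup(Context context)
      throws IOException, InterruptedException {
    for (Map.Entry<String, long[]> e : partials.entrySet()) {
      context.write(new Text(e.getKey()),
                    new Text(e.getValue()[0] + "," + e.getValue()[1]));
    }
  }
}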

  50. Algorithm Design: Running Example • Term co-occurrence matrix for a text collection • M = N x N matrix (N = vocabulary size) • Mij: number of times i and j co-occur in some context (for concreteness, let’s say context = sentence) • Why? • Distributional profiles as a way of measuring semantic distance • Semantic distance useful for many language processing tasks
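One straightforward way to compute Mij, sketched below, is a “pairs”-style mapper that emits ((wi, wj), 1) for every co-occurring pair of terms in a sentence, with the usual summing reducer; the tab-separated composite key is an illustrative stand-in for a proper pair type:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CooccurrencePairsMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);

  @Override
  protected void map(LongWritable key, Text sentence, Context context)
      throws IOException, InterruptedException {
    String[] terms = sentence.toString().split("\\s+");
    for (int i = 0; i < terms.length; i++) {
      for (int j = 0; j < terms.length; j++) {
        if (i != j) {                       // count each ordered co-occurrence
          context.write(new Text(terms[i] + "\t" + terms[j]), ONE);
        }
      }
    }
  }
}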
