
XORing Elephants: Novel Erasure Codes for Big Data

Maheswaran Sathiamoorthy, Megasthenis Asteris, Dimitris Papailiopoulos, Alexandros G. Dimakis, Ramkumar Vadali, Scott Chen, Dhruba Borthakur. USC, UT Austin, and Facebook



Presentation Transcript


  1. XORing Elephants: Novel Erasure Codes for Big Data. Maheswaran Sathiamoorthy, Megasthenis Asteris, Dimitris Papailiopoulos, Alexandros G. Dimakis, Ramkumar Vadali, Scott Chen, Dhruba Borthakur. USC, UT Austin, and Facebook. The 39th International Conference on Very Large Data Bases (VLDB), 2013

  2. A Solution to the Network Challenges of Data Recovery in Erasure-coded Distributed Storage Systems: A Study on the Facebook Warehouse Cluster. K. V. Rashmi, Nihar Shah, D. Gu, H. Kuang, D. Borthakur, K. Ramchandran. UC Berkeley, Facebook. 2013 IEEE ISIT (International Symposium on Information Theory) & the 5th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage 2013)

  3. Outline • Introduction • Locally Repairable Codes (LRC) • Performance evaluation results • Conclusion

  4. Distributed storage systems • Numerous disk failures per day • Failures are the norm rather than the exception • Must introduce redundancy for reliability • Moving from replication to coding

  5. Data distribution • Encode and distribute a data file to n storage nodes. Data File: “INC”

  6. Data collector • Data collector can retrieve the whole file by downloading from any k storage nodes. “INC” 

  7. Three kinds of disk failures • Transient error due to noise corruption • repeat the disk access request • Disk sector error • partial failure • detected and masked by the operating system • Catastrophic error • total failure, due to the disk controller for instance • the whole disk is regarded as erased

  8. Frequency of node failures • Goal: design new codes that have easier repair. • (Figure: number of failed nodes over a single month in a 3000-node production cluster at Facebook.) • 20 node failures/day × 15 TB per node = 300 TB to repair per day; with 8% of the data RS coded, this generates about 588 TB of repair network traffic per day (average total network traffic: 2 PB/day). • So roughly 30% (≈ 588 TB out of 2 PB) of the network traffic on a normal day is repair traffic.

  9. Distributed storage system • Encode a data file and distribute it to n disks • (n,k) recovery property • The data file can be rebuilt from any k disks. • Repair • If a node fails, we regenerate a new node by connecting to and downloading data from any d surviving disks. • Aim at minimizing the repair bandwidth (Dimakis et al. 2007). • A coding scheme with the above properties is called a regenerating code. Dimakis, Godfrey, Wainwright and Ramchandran, Network coding for distributed storage systems, IEEE INFOCOM, 2007.

  10. Repetition scheme • GFS: Replicate data 3 times • Gmail: Replicate data 21 times

  11. 2x Repetition scheme • Divide the data file into 2 parts, A and B (1 GB each), and store two copies of each across four disks. • A data collector rebuilds the file from one copy of A and one copy of B. • Cannot tolerate double disk failures (e.g., losing both copies of A).

  12. Repair is easy for a repetition-based system • A new node simply copies the surviving replica of the lost block (e.g., A). • Repair bandwidth = 1 GB.

  13. Reed-Solomon code • Divide the file into 2 parts, A and B, and store A, B, A+B, A+2B on four disks. • A data collector can rebuild the file from any two disks. • It can tolerate double disk failures.

  14. Repair requires essentially decoding the whole file • To rebuild A, the new node downloads two coded blocks (e.g., B and A+B) of 1 GB each. • Repair bandwidth = 2 GB.
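To make the (4,2) example above concrete, here is a toy Python sketch of the encode/recover logic. It is purely illustrative: it uses arithmetic modulo a small prime rather than the GF(2^8)/GF(2^16) arithmetic of production Reed-Solomon implementations, and the function names are invented for the sketch.

```python
# Toy (4,2) code: store A, B, A+B, A+2B and recover from any two blocks.
P = 257  # small prime modulus for the toy field (real RS codes use GF(2^m))

def encode(A, B):
    """Produce the four stored blocks, symbol-wise mod P."""
    add = [(a + b) % P for a, b in zip(A, B)]
    add2 = [(a + 2 * b) % P for a, b in zip(A, B)]
    return {"A": A, "B": B, "A+B": add, "A+2B": add2}

def recover_from(blocks):
    """Rebuild (A, B) from any two of the four stored blocks."""
    if "A" in blocks and "B" in blocks:
        return blocks["A"], blocks["B"]
    if "A+B" in blocks and "A+2B" in blocks:
        # (A+2B) - (A+B) = B, then A = (A+B) - B
        B = [(y - x) % P for x, y in zip(blocks["A+B"], blocks["A+2B"])]
        A = [(x - b) % P for x, b in zip(blocks["A+B"], B)]
        return A, B
    if "A" in blocks:                      # A plus one parity block
        other = "A+B" if "A+B" in blocks else "A+2B"
        coef = 1 if other == "A+B" else 2
        inv = pow(coef, -1, P)             # modular inverse of the coefficient
        B = [((y - a) * inv) % P for a, y in zip(blocks["A"], blocks[other])]
        return blocks["A"], B
    # B plus one parity block
    other = "A+B" if "A+B" in blocks else "A+2B"
    coef = 1 if other == "A+B" else 2
    A = [(y - coef * b) % P for b, y in zip(blocks["B"], blocks[other])]
    return A, blocks["B"]

stored = encode([1, 2, 3], [4, 5, 6])
assert recover_from({k: stored[k] for k in ("A+B", "A+2B")}) == ([1, 2, 3], [4, 5, 6])
```

Any two stored blocks determine A and B, which is exactly why repairing a single lost block still forces the repairer to fetch two full blocks.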

  15. Storing with an (n,k) MDS code • An (n,k) erasure code provides a way to: • Take k packets and generate n packets of the same size such that any k out of n suffice to reconstruct the original k. • Optimal reliability for the given redundancy. Well known and used frequently, e.g. Reed-Solomon codes, Array codes, LDPC and Turbo codes. • Each packet is stored at a different node, distributed in a network. • Example of an (n=5, k=4) code. Single parity: P1 = X1 + X2 + X3 + X4 (the XOR of the four data packets).
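For the single-parity (n=5, k=4) example on this slide, encoding and single-loss repair are both a byte-wise XOR. A minimal sketch (names are illustrative, not an HDFS API):

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length packets."""
    return bytes(reduce(lambda x, y: x ^ y, column) for column in zip(*blocks))

data = [b"pkt1", b"pkt2", b"pkt3", b"pkt4"]   # k = 4 data packets
p1 = xor_blocks(data)                          # n = 5 with the single parity P1

# Repair: any single lost packet is the XOR of the four survivors.
lost_index = 2
survivors = [p for i, p in enumerate(data) if i != lost_index] + [p1]
assert xor_blocks(survivors) == data[lost_index]
```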

  16. Current Hadoop architecture • 3x replication is HDFS's current default. • Very large storage overhead. • As data grows faster than infrastructure, 3x is too expensive.

  17. Facebook introduced Reed-Solomon (HDFS RAID) • 'cold' (i.e. rarely accessed) files are switched from 3x replication to a (14,10) Reed-Solomon code. • HDFS RAID uses a Reed-Solomon erasure code.
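A back-of-the-envelope comparison of the storage overheads involved in this switch (derived from the parameters above, not a figure quoted on the slide):

```latex
\[
\underbrace{3.0\times}_{\text{3x replication}}
\qquad\text{vs.}\qquad
\underbrace{\frac{n}{k} = \frac{14}{10} = 1.4\times}_{(14,10)\ \text{Reed-Solomon}}
\]
```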

  18. Limitations of classical codes • Currently only 8% of Facebook's data warehouse is RS encoded (still a significant saving). • Our goal: move to 40-50% RS-coded data. • Save petabytes. • Bottleneck: code repair. • Goal: design new codes that have easier repair.

  19. Repair metrics of interest • The number of bits communicated in the network during single node failures (repair bandwidth). • Capacity known for two points only. My 3-year-old conjecture for intermediate points was just disproved. [ISIT13] • The number of bits read from disks during single node repairs (disk I/O). • Capacity unknown. • The only known technique is bounding by the repair bandwidth. • The number of nodes accessed to repair a single node failure (locality). • Capacity computed [ISIT12]. Very few explicit code constructions known. • Real systems started using these codes recently. [USENIX ATC12, VLDB13]

  20. Some experiments • 100 machines on Amazon EC2 • 50 machines running HDFS RAID (Facebook version, (14,10) Reed-Solomon code) • 50 machines running our version, HDFS-Xorbas with the Locally Repairable Code (LRC) • 50 files uploaded to the system, 640 MB per file • Killing nodes and measuring network traffic, disk I/O, CPU, etc. during node repairs

  21. What do we observe? • LRC storage codes reduce bytes read by roughly 2.6x. • Network bandwidth reduced by approximately 2x. • We use 14% more storage. Similar CPU. • In several cases 30-40% faster repairs. A study at larger scale is ongoing. • Gains can be much more significant if larger codes are used (i.e. for archival storage systems).

  22. Erasure coding: MDS • A file or data object is divided into k = 2 blocks, A and B. • (3,2) MDS code (single parity, RAID-5 mode): store A, B, A+B on n = 3 nodes. • (4,2) MDS code (RAID-6 mode): store A, B, A+B, A+2B on n = 4 nodes.

  23. Erasure coding vs. replication • Replication: store multiple full copies of A and B. • (4,2) MDS erasure code: store A, B, A+B, A+2B. • The erasure code has a lower redundancy level and a lower storage cost.

  24. Improvement over MDS • Regenerating codes • Repair lost encoded fragments from existing encoded fragments. • A new class of erasure codes. • Reduce repair bandwidth. • Increase the number of surviving nodes connected during repair.

  25. Reed-Solomon codes • Repair traffic = M. • Conventional repair: reconstruct the whole file and regenerate the lost data at the new node. • Example with n = 4, k = 2: a file of size M is split into A and B, and the four nodes store A, B, A+B, A+2B.

  26. Regenerating codes [Dimakis et al. 2010] • Repair traffic = 0.75M. • Repair in regenerating codes: download one chunk from each surviving node (instead of the whole file). • Repair traffic: save 25% for (n = 4, k = 2), with the same storage size per node. • Uses network coding: the stored chunks are linear combinations of the file chunks (A, B, C, D and combinations such as A+C, B+D, A+D, B+C+D in the figure). • Alexandros G. Dimakis, Brighten Godfrey, Yunnan Wu, Martin J. Wainwright, Kannan Ramchandran: Network coding for distributed storage systems. IEEE Transactions on Information Theory 56(9): 4539-4551, 2010.
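The 0.75M figure can be checked by counting chunks, under the standard assumption that the file is split into four chunks of size M/4 and the newcomer downloads one chunk from each of the three surviving nodes:

```latex
\[
\text{repair traffic} \;=\; 3 \times \frac{M}{4} \;=\; 0.75\,M
\qquad\text{vs.}\qquad
2 \times \frac{M}{2} \;=\; M \ \text{for conventional Reed-Solomon repair.}
\]
```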

  27. MDS and regenerating codes • MDS code: • High complexity. • Uses random linear network coding. • 'Repair-by-transfer' regenerating code: • Lower complexity. • Its repair process is the addition of two packets using bit-wise exclusive OR (XOR).

  28. Regenerating codes • Theorem: it is possible to functionally repair an (n,k) MDS code by communicating γ = (M/k) · d/(d − k + 1) bits from d ≥ k surviving nodes. • Further, there is a tradeoff between the storage per node and the repair communication.
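Evaluating this repair-bandwidth expression for the (14,10) code used at Facebook, with a newcomer contacting all d = 13 surviving nodes (a consistency check using the theorem above, not a number quoted on the slides):

```latex
\[
\gamma \;=\; \frac{M}{k}\cdot\frac{d}{d-k+1}
       \;=\; \frac{M}{10}\cdot\frac{13}{13-10+1}
       \;=\; 0.325\,M ,
\]
```

compared with M for a conventional Reed-Solomon repair that effectively decodes the whole file.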

  29. Minimum distance • Distance: the distance d of a code is the minimum number of erasures after which data is lost. • Locality: the number of blocks r we have to access to reconstruct a single block. • Reed-Solomon (10,14) (n = 14, k = 10): d = n − k + 1 = 5. • Singleton (1964) showed a bound on the best distance possible: d ≤ n − k + 1. • Reed-Solomon codes achieve the Singleton bound (hence they are called MDS). • Easy lemma: any MDS code must have trivial locality r = k.

  30. Definition 1 (Minimum Code Distance). The minimum distance d of a code of length n is equal to the minimum number of erasures of coded blocks after which the file cannot be retrieved. • Definition 2 (Block Locality). A (k, n−k) code has block locality r when each coded block is a function of at most r other coded blocks of the code. • Lemma 1. MDS codes with parameters (k, n−k) cannot have locality smaller than k.

  31. Theorem 1. There exist (k, n−k, r) Locally Repairable Codes with logarithmic block locality r = log(k) and distance d_LRC = n − (1 + δ_k)·k + 1. Hence, any subset of k(1 + δ_k) coded blocks can be used to reconstruct the file, where δ_k is a term that vanishes as k grows. • Corollary 1. For a fixed code rate R = k/n, the distance of LRCs is asymptotically equal to that of (k, n−k) MDS codes. • LRCs are constructed on top of MDS codes (and the most common choice will be a Reed-Solomon code).

  32. Locality-distance tradeoff • Theorem: any (n,k) code with locality r can have distance at most d ≤ n − k − ⌈k/r⌉ + 2. • r = k (trivial locality) recovers the Singleton bound. • Any non-trivial locality will hurt the fault tolerance of the storage system. • Pyramid codes (Huang et al.) achieve this bound for message locality. • Gopalan, Huang, Simitci, and Yekhanin had shown this tradeoff for scalar linear codes; we generalize it to arbitrary codes. • No explicit codes were known for all-symbol locality. [Papailiopoulos, Dimakis, 'Locally Repairable Codes', ISIT 2012]
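As a worked instance of this bound, take n = 16, k = 10, r = 5, i.e. the parameters of the LRC stripe described on the later slides (10 file blocks, 4 RS parities, 2 local parities, each block repairable from 5 others); these numbers are read off that construction, not stated on this slide:

```latex
\[
d \;\le\; n - k - \left\lceil \frac{k}{r} \right\rceil + 2
  \;=\; 16 - 10 - \left\lceil \frac{10}{5} \right\rceil + 2
  \;=\; 6 .
\]
```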

  33. General locality result • Key idea: allow each block to be a little larger than M/k (by a factor of 1 + ε). • For ε = 0, this recovers the previous bound. • For ε = 1/r, we can find explicit simple codes. • Setting r = log n, we get near-MDS codes with logarithmic locality and a simple construction.

  34. Locally Repairable Codes (LRC) • A randomized and an explicit family of codes that have logarithmic locality on all coded blocks and distance that is asymptotically equal to that of an MDS code. We call such codes (k, n−k, r) Locally Repairable Codes (LRC).

  35. Locally Repairable Codes (LRC) Figure 2: Locally repairable code implemented in HDFS-Xorbas. The four parity blocks P1, P2, P3, P4 are constructed with a standard RS code and the local parities provide efficient repair in the case of single block failures. The main theoretical challenge is to choose the coefficients ci to maximize the fault tolerance of the code.

  36. Introduce “locality” to RS • A stripe of 5 file blocks X1, X2, X3, X4, X5 and coefficients C1, ..., C5 produce a local parity block S1: • X1C1 + X2C2 + X3C3 + X4C4 + X5C5 = S1

  37. How locally repairable codes work • Suppose there is a single block failure, for example block X3 is lost. • X3 can be recovered locally from S1 and the other blocks of its group: X3 = (S1 − X1C1 − X2C2 − X4C4 − X5C5) / C3.

  38. How locally repairable codes work • A stripe of 10 file blocks is encoded with an RS code into 4 parity blocks. • Two local parities are formed, one per half of the stripe: • X1C1 + X2C2 + X3C3 + X4C4 + X5C5 = S1 • X6C6 + X7C7 + X8C8 + X9C9 + X10C10 = S2

  39. How to make the parity blocks locally repairable • 1. We set C5' = C6' = 1. • 2. Set S1 + S2 + S3 = 0. • 3. Set P1C1' + P2C2' + P3C3' + P4C4' = S3, an implied local parity on the RS parity blocks; because of step 2 it can be computed from S1 and S2 and never needs to be stored. • If P2 fails: compute S3 from S1 and S2, then solve P1C1' + P2C2' + P3C3' + P4C4' = S3 for P2. A toy sketch of this local-repair idea follows below.
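A minimal Python sketch of the local-repair idea, assuming for simplicity that all coefficients are set to 1 (plain XOR); the actual construction chooses the ci over a finite field to maximize fault tolerance, and none of the names below come from HDFS-Xorbas:

```python
# Minimal sketch of local repair in an LRC stripe, with all coefficients
# set to 1 (plain XOR) purely for illustration.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

# 10 data blocks X1..X10 (toy 4-byte blocks), split into two local groups.
X = [bytes([i] * 4) for i in range(1, 11)]
S1 = xor_blocks(X[0:5])     # local parity of the first group
S2 = xor_blocks(X[5:10])    # local parity of the second group

# Single-block repair: if X3 is lost, read only its local group plus S1
# (5 blocks) instead of the 10 blocks a full Reed-Solomon repair would need.
lost = 2                                # index of X3
survivors = [X[i] for i in range(5) if i != lost] + [S1]
assert xor_blocks(survivors) == X[lost]

# In the paper's construction the coefficients are chosen so that the implied
# parity S3 over the RS parity blocks satisfies S1 + S2 + S3 = 0; hence S3 can
# be computed from the stored local parities and never needs to be stored.
S3 = xor_blocks([S1, S2])
```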

  40. MTTDL (Mean-Time To Data Loss)

  41. Performance evaluation on Amazon EC2 and Facebook's cluster • HDFS Bytes Read: corresponds to the total amount of data read by the jobs initiated for repair. • Network Traffic: represents the total amount of data sent from nodes in the cluster. • Repair Duration: calculated as the time interval between the starting time of the first repair job and the ending time of the last repair job.

  42. Evaluation on Amazon EC2 • Setting • The metrics measured during the 200-file experiment. • During the course of the experiment, we simulated 8 failure events. • A failure event consists of the termination of one or more DataNodes. • In our failure pattern, the first four failure events consisted of single-DataNode terminations, the next two were terminations of triplets of DataNodes, and finally two terminations of pairs of DataNodes. • In total, 3 experiments were performed on the above setup, successively increasing the number of files stored (50, 100, and 200 files). • Fig. 4 depicts the measurements from the last case (200 files).

  43. Evaluation on Amazon EC2 -- HDFS Bytes Read • Each file is encoded into a single stripe: 14 blocks in HDFS-RS and 16 blocks in HDFS-Xorbas.

  44. Evaluation on Amazon EC2 -- Network Traffic • Each file is encoded into a single stripe: 14 blocks in HDFS-RS and 16 blocks in HDFS-Xorbas.

  45. Evaluation on Amazon EC2 -- Repair Duration • Each file is encoded into a single stripe: 14 blocks in HDFS-RS and 16 blocks in HDFS-Xorbas.

  46. Evaluation on Facebook's clusters -- Cluster network traffic • Since block repair depends only on blocks of the same stripe, using larger files that would yield more than one stripe would not affect our results. • An experiment involving arbitrary file sizes was therefore run on Facebook's clusters.

  47. Evaluation on Facebook's clusters-- Cluster Disk Bytes Read

  48. Evaluation on Facebook's clusters-- Cluster average CPU utilization
