
Repairable Fountain Codes

Megasthenis Asteris, Alexandros G. Dimakis, IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, May 2014.


Presentation Transcript


  1. Repairable Fountain Codes Megasthenis Asteris, Alexandros G. Dimakis IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 32, NO. 5, MAY 2014

  2. Outline • Introduction • Problem Description • Prior Work • Repairable Fountain Codes • A Graph Perspective • Simulations • Conclusion

  3. Introduction • Fountain codes [2], [3], [4] form a new family of linear erasure codes with several attractive properties. • For a given set of k input symbols, a Fountain code produces a potentially limitless stream of output symbols. • From a randomly selected subset of (1 + ɛ)k encoded symbols, a decoder should be able to recover the original k input symbols with high probability (w.h.p.), for some small overhead ɛ. [2] J. W. Byers, M. Luby, M. Mitzenmacher, and A. Rege, “A digital fountain approach to reliable distribution of bulk data,” [3] M. Luby, “LT codes,” [4] A. Shokrollahi, “Raptor codes,”

  4. Introduction • This paper designs a new family of Fountain codes that combines multiple properties appealing to distributed storage: • systematic form • efficient repair • locality

  5. Introduction • A key observation is that in a systematic linear code, locality is strongly connected to the sparsity of parity symbols [11] • i.e., the maximum number of input symbols combined in a parity symbol. • A parity symbol along with the systematic symbols covered by it form a local group. • The smaller the size of the local group, the lower the locality of the symbols in it. [11] P. Gopalan, C. Huang, H. Simitci, and S. Yekhanin, “On the locality of codeword symbols,”

  6. Introduction • The availability of a systematic symbol naturally extends the notion of locality, measuring the number of disjoint local groups the symbol belongs to. • We define the availability of a systematic symbol as the number of disjoint sets of encoded symbols which can be used to reconstruct that particular symbol.

  7. Outline • Introduction • Problem Description • Prior Work • Repairable Fountain Codes • A Graph Perspective • Simulations • Conclusion

  8. Problem Description • Given k input symbols, elements of a finite field F_q, we want to encode them into n symbols using a linear code. • An input vector u ∈ F_q^k produces a codeword v = uG ∈ F_q^n • through a k × n generator matrix G.
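As a concrete illustration (not taken from the paper), the encoding v = uG over a small prime field can be sketched in Python; the field size q = 257 and the tiny generator matrix below are illustrative choices:

```python
q = 257  # illustrative small prime; all arithmetic is modulo q

def encode(u, G):
    """Compute the codeword v = uG over F_q for a k-vector u and a k x n matrix G."""
    k, n = len(G), len(G[0])
    return [sum(u[i] * G[i][j] for i in range(k)) % q for j in range(n)]

# Tiny example: k = 2 input symbols encoded into n = 3 symbols.
G = [[1, 0, 1],   # systematic form [I_2 | p]: the first two columns
     [0, 1, 1]]   # reproduce the input, the third generates a parity
u = [5, 7]
v = encode(u, G)  # -> [5, 7, 12]
```

Because G contains the identity block, the first k encoded symbols are the input itself, which is exactly the systematic form required below.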

  9. Problem Description • G to have the following properties: • Systematic form • Rateless property • MDS property • Low locality • High Availability

  10. Problem Description • For any code, any sufficiently large subset of encoded symbols should allow recovery of the original data. • In the case of MDS codes, an information-theoretically minimum subset of k encoded symbols suffices to decode. • When equipped with systematic form, the generator matrix of an MDS code admits no zero coefficients in the parity-generating columns.

  11. Problem Description • This paper requires that for ɛ > 0, a set of k’ = (1 + ɛ)k randomly selected encoded symbols suffices to decode with high probability. • This property is called near-MDS. • It is impossible to recover the original message with high probability if the parities are linear combinations of fewer than Ω(log k) input symbols.

  12. Outline • Introduction • Problem Description • Prior Work • Repairable Fountain Codes • A Graph Perspective • Simulations • Conclusion

  13. Prior Work • In LT codes, the average degree of the output symbols, i.e., the number of input symbols combined into an output symbol, is O(log k). • However, that sparsity in this case does not imply good locality, since LT codes lack systematic form.

  14. Prior Work • Raptor codes are a different class of Fountain codes. • The core idea is to precode the input symbols prior to the application of an appropriate LT code. • The k input symbols can be retrieved in linear time from a set of (1 + ɛ)k encoded symbols. • However, the original Raptor design does not feature the highly desirable systematic form • and does not imply good locality.

  15. Prior Work • In [4], Shokrollahi provided a construction that yields a systematic flavor of Raptor codes. • Encoding is not applied directly on the input symbols. • Complexity O(). • The source symbols appear in the encoded stream, • but the parity symbols are no longer sparse in the original input symbols. [4] A. Shokrollahi, “Raptor codes,”

  16. Prior Work • In [16], Gummadi proposes systematic variants of LT and Raptor codes that feature low (even constant) expected repair complexity. • However, the overhead ɛ required for decoding is suboptimal: it cannot be made arbitrarily small. [16] R. Gummadi, “Coding and scheduling in networks for erasures and broadcast,”

  17. Outline • Introduction • Problem Description • Prior Work • Repairable Fountain Codes • A Graph Perspective • Simulations • Conclusion

  18. Repairable Fountain Codes • A new family of Fountain codes that are systematic and also have sparse parities. • Each parity symbol is a random linear combination of up to d = Ω(log k) randomly chosen input symbols. • A set of k’ = (1 + ɛ)k randomly selected encoded symbols suffices to decode, • with probability of failure vanishing like 1/poly(k).

  19. Repairable Fountain Codes • Given a vector u of k input symbols in F_q, • the code is a linear mapping of u to a vector v of higher dimension n through a k × n matrix G. • A single parity symbol is constructed in a two-step process.

  20. Repairable Fountain Codes • First, d input symbols are successively selected uniformly at random, independently, with replacement. • Then, for each selected symbol, a coefficient is uniformly drawn from F_q. • The parity is the linear combination of the symbols selected in the first step, weighted with the coefficients drawn in the second step.
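A minimal sketch of this two-step parity construction, with illustrative choices of the degree constant c and field size q:

```python
import math
import random

def make_parity_column(k, q=257, c=2, rng=random):
    """Build one parity-generating column of G: select d = ceil(c log k) input
    symbols uniformly at random with replacement (step 1), and attach a
    uniformly drawn coefficient from F_q to each selection (step 2)."""
    d = math.ceil(c * math.log(k))
    col = [0] * k
    for _ in range(d):
        i = rng.randrange(k)                       # step 1: pick a symbol
        col[i] = (col[i] + rng.randrange(q)) % q   # step 2: draw a coefficient
    return col
```

Since selection is with replacement, the same symbol may be picked twice and its coefficients simply add, so each column has at most d nonzero entries.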

  21. Repairable Fountain Codes U is the set of k input symbols; V is the set of n encoded symbols.

  22. Repairable Fountain Codes • Returning to the matrix representation: • generator matrices G = [ I | P ]. • Every encoded symbol corresponds to a column of G. • P, the part responsible for the construction of the parity symbols, • is a random matrix whose columns are sparse, each bearing at most d(k) nonzero entries.

  23. Repairable Fountain Codes • In summary, decreasing d(k) improves the locality of the code. • The question is how small d(k) can be while still ensuring that a randomly selected set of k’ = (1 + ɛ)k symbols is decodable.

  24. Repairable Fountain Codes

  25. Repairable Fountain Codes • From the two theorems, it follows that our codes achieve optimal locality with a logarithmic degree for every parity symbol. • Original data is reconstructed in O() using Maximum Likelihood (ML) decoding. • The Wiedemann algorithm can reduce the complexity to O(polylog(k)) on average.

  26. Repairable Fountain Codes • Theorem 3. Let rk be the total number of parities generated, with each parity symbol constructed as a linear combination of d(k) = c log(k) independently selected symbols, uniformly at random with replacement. • The expected number of parities covering a given systematic symbol u is then Θ(rc log k): the rk parities touch rk · c log(k) symbol slots in total, spread over the k input symbols.
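This scaling can be sanity-checked numerically: an average symbol is covered about r·c·log k times (slightly fewer, since repeated picks inside one parity collapse to a single coverage). A small simulation, with illustrative parameter choices:

```python
import math
import random

def average_coverage(k, r=1.0, c=2, seed=1):
    """Average number of parities whose support contains a given input symbol."""
    rng = random.Random(seed)
    d = math.ceil(c * math.log(k))
    cover = [0] * k
    for _ in range(int(r * k)):                         # generate rk parities
        support = {rng.randrange(k) for _ in range(d)}  # distinct symbols picked
        for i in support:
            cover[i] += 1
    return sum(cover) / k

avg = average_coverage(k=500)   # close to c * log(500) ~ 12.4 when r = 1
```

A logarithmic average coverage is what yields the logarithmic availability discussed on the next slide.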

  27. Repairable Fountain Codes Availability

  28. Outline • Introduction • Problem Description • Prior Work • Repairable Fountain Codes • A Graph Perspective • Simulations • Conclusion

  29. A Graph Perspective • First, consider a balanced random bipartite graph where |U| = |V| = k and each vertex of V is randomly connected to d(k) = c log k nodes in U. • A classical result by Erdős and Rényi [14] shows that these graphs will have perfect matchings with high probability. [14] P. Erdős and A. Rényi, “On random matrices,”
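This can be checked empirically with a standard augmenting-path bipartite matching (Kuhn's algorithm, not from the paper); the random graph below connects each of the k vertices of V to d = ⌈c log k⌉ uniformly chosen vertices of U, with c = 2 as an illustrative choice:

```python
import math
import random

def has_perfect_matching(adj, k):
    """Kuhn's augmenting-path algorithm; adj[v] lists the U-side
    neighbours of V-side vertex v (both sides have k vertices)."""
    match = [-1] * k            # match[u] = V-vertex currently matched to u

    def augment(v, seen):
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                # u is free, or its current partner can be re-routed
                if match[u] == -1 or augment(match[u], seen):
                    match[u] = v
                    return True
        return False

    return all(augment(v, set()) for v in range(k))

rng = random.Random(0)
k = 60
d = math.ceil(2 * math.log(k))
adj = [[rng.randrange(k) for _ in range(d)] for _ in range(k)]
found = has_perfect_matching(adj, k)   # True with high probability
```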

  30. A Graph Perspective • However, here the graphs are unbalanced: • |U| = k and |V| = k’ = (1 + ɛ)k vertices, for ɛ ≥ 0.

  31. A Graph Perspective

  32. Outline • Introduction • Problem Description • Prior Work • Repairable Fountain Codes • A Graph Perspective • Simulations • Conclusion

  33. Simulations • This paper experimentally evaluates the probability that decoding fails when a randomly selected subset of encoded symbols is available at the decoder. • The rate is set equal to 1/2: • the generator matrix comprises the k columns of the identity matrix and k parity-generating columns. • d(k) = c log(k).
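A Monte Carlo version of this experiment can be sketched as follows: decoding succeeds exactly when the k' sampled columns of G span F_q^k, so the failure rate is the fraction of trials in which the sampled k' × k matrix has rank below k. The field size, degree constant c, and trial count are illustrative assumptions, not the paper's settings:

```python
import math
import random

def rank_mod(rows, q):
    """Rank of a matrix over F_q (q prime) by Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], q - 2, q)            # inverse via Fermat
        rows[rank] = [x * inv % q for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % q for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def failure_rate(k, eps, c=2, trials=100, q=257, seed=0):
    """Fraction of trials where k' = (1+eps)k random columns fail to span F_q^k."""
    rng = random.Random(seed)
    d = math.ceil(c * math.log(k))
    cols = [[int(i == j) for i in range(k)] for j in range(k)]  # identity part
    for _ in range(k):                                          # k sparse parities (rate 1/2)
        col = [0] * k
        for _ in range(d):
            col[rng.randrange(k)] = rng.randrange(1, q)
        cols.append(col)
    kp = int((1 + eps) * k)
    fails = sum(rank_mod([cols[j] for j in rng.sample(range(2 * k), kp)], q) < k
                for _ in range(trials))
    return fails / trials
```

For simplicity this sketch fixes one generator matrix and resamples only the decoder's subset; averaging over fresh matrices as well would match the ensemble statement more closely.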

  34. Simulations d(k) = c log(k), with c = 6. k = 100, 300, 500. Expected decoding overhead ɛ = 1 − 2Pe.

  35. Simulations d(k) = c log(k), with c = 4. k = 100, 300, 500.

  36. Outline • Introduction • Problem Description • Prior Work • Repairable Fountain Codes • A Graph Perspective • Simulations • Conclusion

  37. Conclusion • This paper introduces a new family of Fountain codes that are systematic and also have parity symbols with logarithmic sparsity.

  38. Conclusion • The main technical contribution is a novel random matrix result: a family of random matrices with non-independent entries has full rank with high probability. • The analysis builds on the connections of matrix determinants to flows on random bipartite graphs, using techniques from [14], [15]. [14] P. Erdős and A. Rényi, “On random matrices,” [15] A. G. Dimakis, V. Prabhakaran, and K. Ramchandran, “Decentralized erasure codes for distributed networked storage,”

  39. Conclusion • One disadvantage of this construction is the higher decoding complexity O(). • Wiedemann’s algorithm [13] can be used to decode in O(polylog k) time. • LT codes decode in O(k log k) and Raptor codes in O(k), but offer no locality. [13] D. Wiedemann, “Solving sparse linear equations over finite fields,”
