
Rateless Feedback Codes





  1. Rateless Feedback Codes Jesper H. Sørensen, Toshiaki Koike-Akino, and Philip Orlik 2012 IEEE International Symposium on Information Theory Proceedings

  2. Outline • Introduction • Background of rateless codes • Rateless feedback codes • LT feedback codes • Raptor feedback codes • Numerical results

  3. Introduction • On the Internet, when a strong feedback channel is available, automatic repeat-request (ARQ) schemes are used. • It is difficult to employ ARQ in many practical systems. • With no frequent feedback, rateless coding can achieve reliable communications: • LT codes • Raptor codes • Complexity: O(k log k)

  4. Introduction • If the receiver has m feedback opportunities (0 < m < k), what scheme should we apply? • Doped fountain coding [3]: the receiver feeds back information on undecoded symbols, enabling the transmitter to send those input symbols directly and accelerate the decoding process. • Real-time oblivious erasure correcting [4]: the feedback tells how many of the k input symbols have been decoded; the transmitter then chooses a fixed degree that maximizes the probability of decoding new symbols.

  5. In this paper • We propose LT feedback codes and Raptor feedback codes, which can use any amount of feedback opportunities. • The feedback tells the encoder which source symbols have been recovered, helping the encoder to modify the degree distribution. • Our goal: to decrease the coding overhead, especially for short k. • We will show that the proposed feedback scheme can decrease both the coding overhead and the complexity.

  6. Background of rateless codes • LT encoding process: • Randomly choose a degree d by sampling Ω(d). • Choose d of the k input symbols uniformly at random. • Perform the XOR of the d chosen input symbols. • LT decoding process: • Find all degree-1 output symbols and put their neighboring input symbols into the ripple. • Symbols in the ripple are processed one by one. • Processing potentially reduces a buffered symbol to degree one; we call this a symbol release.
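The encoding and peeling-decoding steps above can be sketched as follows (a minimal illustration under assumed interfaces, not the authors' implementation; `omega` is a hypothetical list of (degree, probability) pairs):

```python
import random

def lt_encode_symbol(source, omega, rng):
    """Produce one LT output symbol: sample d ~ Omega(d), pick d of the
    k input symbols uniformly at random, and XOR them together."""
    degrees, probs = zip(*omega)
    d = rng.choices(degrees, weights=probs, k=1)[0]
    neighbors = set(rng.sample(range(len(source)), d))
    value = 0
    for i in neighbors:
        value ^= source[i]
    return neighbors, value

def lt_decode(k, received):
    """Peeling decoder: any output symbol with exactly one undecoded
    neighbor is reduced to degree one (a 'release') and its neighbor
    enters the ripple; repeat until no more progress is made."""
    decoded = {}
    progress = True
    while progress and len(decoded) < k:
        progress = False
        for nb, val in received:
            live = nb - set(decoded)
            if len(live) != 1:
                continue
            red = val
            for i in nb & set(decoded):   # strip already-decoded neighbors
                red ^= decoded[i]
            (idx,) = live
            decoded[idx] = red
            progress = True
    return decoded
```

With enough received symbols the returned dict maps every input index to its value; otherwise decoding stalls with a partial result.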

  7. Rateless feedback codes • The ripple size is an important parameter in the design of LT codes. • The design contains two steps: • Find a suitable ripple evolution to aim for. • Find a degree distribution which achieves that ripple evolution. • We first focus on the case of a single feedback, located when f1k symbols have been decoded.

  8. LT feedback codes • The feedback informs the transmitter which input symbols have been decoded. • The first f1k symbols have no influence on the release of the symbols received after the feedback. • The probability that a symbol of degree d is released when L input symbols remain unprocessed:
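The slide's expression did not survive the transcript. From the standard LT-code release analysis that the paper builds on, the release probability is presumably of the form:

```latex
q(1, k) = 1, \qquad
q(d, L) = \frac{d(d-1)\, L \,\prod_{j=0}^{d-3}\bigl(k - (L+1) - j\bigr)}
               {\prod_{j=0}^{d-1}(k - j)}, \quad d = 2, \dots, k .
```

This is a reconstruction from the cited LT-code literature, not a transcription of the slide.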

  9. LT feedback codes • Symbols received after the feedback are based on the reduced set of input symbols of size (1 − f1)k. • Their releases are independent of the processing of the first f1k input symbols. • Their release probabilities follow analogously. • We can thereby give the ripple a boost at an intermediate point. (Figure: ripple evolution for k = 100, f1 = 0.5)

  10. LT feedback codes • The ripple size should be kept larger than c√L, for some positive constant c. • Every time a symbol is processed, the ripple size is either increased or decreased by one with equal probability. • The ripple evolution can thus be viewed as a simple random walk. • The expected distance from the origin after z steps is √(2z/π).
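The √(2z/π) figure is the standard expected absolute displacement of a symmetric ±1 random walk; a quick Monte-Carlo check (a standalone sketch, not from the paper):

```python
import math
import random

def expected_distance(z, trials=20000, seed=1):
    """Monte-Carlo estimate of E|S_z| for a symmetric +/-1 random walk,
    the slide's model for per-step ripple-size fluctuation."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos = 0
        for _ in range(z):
            pos += 1 if rng.random() < 0.5 else -1
        total += abs(pos)
    return total / trials

# For z = 400 the estimate should be close to sqrt(2 * 400 / pi) ~ 15.96.
```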

  11. LT feedback codes • Due to the feedback, the ripple evolution can be viewed as two random walks. • Ripple evolution:
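The slide's equation is missing from the transcript. Assuming each of the two random walks keeps the ripple near c times the square root of the steps remaining in its phase, a reconstruction consistent with the constants c1 and c2 used on later slides is:

```latex
R(L) \;=\;
\begin{cases}
c_1 \sqrt{L - (1 - f_1)k}, & (1 - f_1)k < L \le k,\\[2pt]
c_2 \sqrt{L}, & 0 < L \le (1 - f_1)k .
\end{cases}
```

This is an assumption based on the random-walk argument, not the slide's exact formula.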

  12. LT feedback codes • The next step in the design is to find a degree distribution which achieves the proposed ripple evolution. • We let Q(L) denote the expected number of releases in the (k − L)-th decoding step. • We want the vector R(L) to map to Q(L).

  13. LT feedback codes • The achieved Q(L) can be expressed as a function of the applied degree distribution Ω through (1). • This is done first for the general case without feedback. • For the case with feedback, we have contributions from two different degree distributions.

  14. LT feedback codes • The solution tells how many symbols of each degree to send before and after the feedback in order to achieve the desired ripple evolution. • n1 and n2 are the free parameters. • Normalizing the solution vectors with n1 and n2 provides the degree distributions. • We propose to use the nonnegative least-squares solution of (7) to come close to the desired ripple evolution.
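A nonnegative least-squares step like the one proposed can be sketched with projected gradient descent (a simple stand-in that assumes nothing about the paper's actual solver; here `A` would hold the release probabilities from (1) and `b` the target releases Q(L)):

```python
import numpy as np

def nnls_pg(A, b, iters=5000, lr=None):
    """Least-squares solution of A x ~ b subject to x >= 0, via
    projected gradient descent on f(x) = 0.5 * ||A x - b||^2."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2   # step <= 1 / Lipschitz const
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = np.maximum(x - lr * grad, 0.0)     # project onto x >= 0
    return x

def normalize(x):
    """Turn a nonnegative solution vector into a degree distribution."""
    s = x.sum()
    return x / s if s > 0 else x
```

Normalizing the resulting nonnegative vector, as the slide describes, then gives a valid degree distribution.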

  15. LT feedback codes

  16. LT feedback codes • Consider m feedback opportunities located at fi, for i = 1, 2, ..., m. • The decoding can then be viewed as m + 1 random walks. • The proposed ripple evolution:

  17. LT feedback codes

  18. LT feedback codes • An important performance metric is the average degree, i.e., the expected number of input symbols XORed into an output symbol. • The results show that LT feedback codes decrease the average degree, and thereby the computational complexity.
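The average degree of a distribution is simply Σ_d d·Ω(d); a trivial helper under the same hypothetical (degree, probability)-pair interface as above:

```python
def average_degree(omega):
    """Mean number of input symbols XORed into an output symbol,
    for omega given as (degree, probability) pairs."""
    return sum(d * p for d, p in omega)
```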

  19. Raptor feedback codes • Raptor coding is a concatenation of an LT code and a high-rate block code. • A weight w is introduced: • to make the ripple constraint stricter, and • to avoid the case where the ripple lacks robustness around the feedback point.

  20. Raptor feedback codes • Fig. 5: The ripple evolution achieved in the Raptor feedback code compared to the one proposed for LT feedback codes (k = 128, f1 = 0.75, c1 = c2 = 1 and d = 2.7). • A slight lack of robustness appears near the feedback point and near the end of decoding.

  21. Numerical results • 5000 runs for c1 = c2 = 1. • The best performance: for k = 32, 64 and 96; for k = 32, 64, 96 and 128.

  22. Numerical results

  23. Numerical results • For Raptor feedback codes with zero, one and two feedback opportunities, we choose d of 5, 3 and 2.5, respectively.

  24. Numerical results • The average overhead and complexity of LT feedback codes for k = 128 as a function of the number of feedback opportunities.

  25. Reference
