
Generalized Communication System: Error Control Coding Occurs in the Right Column.

Presentation Transcript


  1. Generalized Communication System: Error Control Coding Occurs in the Right Column.

  2. Message bits. Codeword: message bits plus parity bits.

  3. For example, the parity-check matrix H here has three parity checks on the 6 codeword bits:
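The matrix on the slide is not reproduced in this transcript. As a stand-in, here is a minimal sketch in Python with a hypothetical 3x6 parity-check matrix (an assumed H, not the slide's): a word c is a codeword exactly when every parity check is satisfied, i.e. H c^T = 0 (mod 2).

    import numpy as np

    # Hypothetical 3x6 parity-check matrix: each row is one parity
    # check on the 6 codeword bits (not the slide's exact H).
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])

    c = np.array([1, 0, 1, 1, 1, 0])   # a codeword of this H
    syndrome = H @ c % 2               # zero in every position <=> all checks pass
    print(syndrome)                    # [0 0 0]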

  4. Robert Gallager, 1963: Low-Density Parity-Check Codes. Introduced all the key concepts: 1.) the low density of H; 2.) tree-based “hard” bit decoding; 3.) probabilistic decoding from soft demodulation, more precisely from the a posteriori probabilities. The amount of computation required led to these codes being forgotten until: D. J. C. MacKay, “Good Error-Correcting Codes Based on Very Sparse Matrices,” IEEE Trans. Inform. Theory, 1999; D. J. C. MacKay and R. M. Neal, “Near Shannon Limit Performance of Low Density Parity Check Codes,” Electronics Letters, 1996.

  5. S.-Y. Chung, G. D. Forney, Jr., T. Richardson, and R. Urbanke: “On the Design of Low-Density Parity-Check Codes within 0.0045 dB of the Shannon Limit,” IEEE Comm. Letters, Feb. 2001. But(!) the block length (N) of the code used was 10^7. The research on LDPC codes since then has worked toward getting results close to this, but with reasonable block lengths and more efficient decoding.


  7. Construction Methods for LDPC Codes: 1.) finite geometry methods (see Lin and Costello’s text); 2.) consult the recent literature!; 3.) Gallager’s original random construction. Start with L many rows, staggered as shown:

  8. Then “stack” column permutations of this block on top of itself, as in the sketch below:
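As a concrete illustration, the following Python sketch implements one reading of this construction; the parameter names L, wr (row weight), and wc (column weight, i.e. the number of stacked blocks) are this transcript's assumptions, not Gallager's notation.

    import numpy as np

    def gallager_H(L, wr, wc, seed=0):
        """One reading of Gallager's random construction: L staggered
        rows of weight wr form a base block; the full H stacks that
        block with wc - 1 random column permutations of it."""
        rng = np.random.default_rng(seed)
        n = L * wr                       # codeword length
        base = np.zeros((L, n), dtype=int)
        for i in range(L):               # row i checks bits i*wr .. (i+1)*wr - 1
            base[i, i * wr:(i + 1) * wr] = 1
        blocks = [base] + [base[:, rng.permutation(n)] for _ in range(wc - 1)]
        return np.vstack(blocks)         # column weight wc, row weight wr

    H = gallager_H(L=4, wr=3, wc=3)      # a 12 x 12 low-density H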

  9. Other methods of modifying H to obtain higher girths exist (the girth is the length of the shortest cycle in the code’s Tanner graph), such as deleting a row or column. See Lin and Costello’s text for further information.

  10. Three decoding methods: 1.) hard bit flip decoding; 2.) soft bit flip decoding; 3.) a modification of the sum-product algorithm for SBG8. 1.’) Hard bit flip decoding can be summarized as “voting on which bit(s) are bad, by those checks which indicate an error,” and then flipping the bits “worst first.” Consider the following part of an H matrix:

  11. If checks C1, C2, and C3 indicate an error (i.e., incorrect parity in the bits they check), then C1 “votes” that bits b1, b2, and b3 are in error, C2 votes that b1, b4, and b5 are in error, etc. After the voting is done, bit b1 will have received 3 votes and the other bits only 1 vote each, so bit b1 is flipped to its opposite. If the estimate is still not a codeword after flipping, the process is iterated for some fixed number of times.
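A minimal Python sketch of this voting scheme (the example matrix and the tie-handling rule are assumptions, since the slides' H fragment is not reproduced here):

    import numpy as np

    def hard_bit_flip(H, y, max_iters=20):
        """Hard bit-flip decoding: every failing check votes against
        each bit it covers; the bit(s) with the most votes are flipped
        ("worst first"), and the process repeats until all checks pass
        or the iteration limit is reached."""
        c = y.copy()
        for _ in range(max_iters):
            syndrome = H @ c % 2            # 1 = this check indicates an error
            if not syndrome.any():
                break                       # c is a codeword
            votes = syndrome @ H            # votes[j] = failing checks covering bit j
            c[votes == votes.max()] ^= 1    # flip the worst bit(s)
        return c

    # Example: the hypothetical H from before, with the first codeword bit corrupted.
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])
    y = np.array([0, 0, 1, 1, 1, 0])        # transmitted [1, 0, 1, 1, 1, 0]
    print(hard_bit_flip(H, y))              # recovers [1 0 1 1 1 0]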

  12. The results gave surprisingly good performance. The sparsity of H matters because it concentrates the votes: if H were dense, most rows (checks) of H would check a large proportion of the bits, so many rows would indicate an error and most bits would receive roughly the same number of votes. 2.’) Soft bit flip decoding was the author’s attempt to extend the previous idea to soft demodulation with 8 output levels. The problems in adapting hard bit flip are: (a) how to assign “votes to be in error” based on a bit’s current estimate, and (b) to what level one should “flip” the worst bits.

  13. The author used the following scheme, “linear weighting”: the votes were averaged over the number of checks (rows of H) used.

  14. The question now is how to flip the worst bit, i.e. the one(s) with the most votes-to-be-in-error. The following scheme was tried: the total number of votes was normalized by the column weight. Then:
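The formulas on these two slides are not reproduced in this transcript, so the author's exact linear weighting is not recoverable here. The Python sketch below is only one plausible reading; the 8-level mapping, the unreliability weight, and the flipping rule are all assumptions, not the author's scheme.

    import numpy as np

    def soft_bit_flip(H, levels, max_iters=20):
        """Hypothetical soft bit-flip for an 8-level demodulator output.
        levels[j] in {0..7}: 0 = confident 0, 7 = confident 1. A failing
        check's vote against a bit is weighted linearly by how unreliable
        the bit's level is, and each total is averaged over the number of
        checks on that bit (its column weight)."""
        lv = levels.astype(int).copy()
        col_weight = H.sum(axis=0)
        for _ in range(max_iters):
            bits = (lv >= 4).astype(int)           # hard decision per bit
            syndrome = H @ bits % 2
            if not syndrome.any():
                break
            # Linear weight: largest at the least reliable levels (3, 4),
            # falling toward 0 at the confident extremes (0, 7).
            unreliability = 1 - np.abs(lv - 3.5) / 3.5
            votes = (syndrome @ H) * unreliability / col_weight
            worst = np.argmax(votes)
            # "Flip" the worst bit one level toward the opposite side.
            lv[worst] += -1 if lv[worst] >= 4 else 1
        return (lv >= 4).astype(int)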

  15. Unfortunately, using the extra information from soft demodulation did not significantly improve the performance. 3.’) Finally, the sum-product algorithm gives the best performance, but has the highest complexity. It is based on estimating the likelihood ratio LR(b_j) = P(b_j = 1 | r) / P(b_j = 0 | r): the ratio of the probabilities that a transmitted bit is a 1 or a 0, given the received estimates of the bits in the codeword.

  16. There are numerous ways to estimate this ratio; often the log of the ratio is used. The sum-product algorithm is an iterative update algorithm: the ratio for each bit is recalculated many times, with the probabilistic estimate (that each transmitted bit was a 1) being updated each time. The author eschewed a common way of calculating this ratio that uses the tanh and arctanh functions, since it was felt that these transcendental functions would not be easily calculated in hardware. However, the author’s way of calculating these ratios (even with adaptations for the SBG8 channel) requires more multiplications. The author’s implementation gave exceptional results, considering that the codes were short and the channel was not the ideal real-valued-output AWGN channel of the literature, but the author’s hypothesized SBG8 channel.
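The author's multiplication-based update is not reproduced in this transcript. For reference, the Python sketch below instead uses the min-sum rule, a standard hardware-friendly approximation to the sum-product check update (it replaces the tanh/arctanh computation with signs and minima); it is explicitly not the author's scheme.

    import numpy as np

    def min_sum_decode(H, llr, max_iters=50):
        """Min-sum approximation to the sum-product algorithm.
        llr[j] = log P(r_j | b_j = 0) / P(r_j | b_j = 1), so a negative
        value means bit j is more likely a 1."""
        m, n = H.shape
        checks = [np.flatnonzero(H[i]) for i in range(m)]
        msg = np.zeros((m, n))                  # check-to-variable messages
        for _ in range(max_iters):
            total = llr + msg.sum(axis=0)       # current per-bit LLR estimate
            hard = (total < 0).astype(int)
            if not (H @ hard % 2).any():
                break                           # all parity checks satisfied
            for i, bits in enumerate(checks):
                v2c = total[bits] - msg[i, bits]    # exclude own contribution
                sgn = np.prod(np.sign(v2c))
                mag = np.abs(v2c)
                for k, j in enumerate(bits):
                    # Outgoing message: product of the other signs times
                    # the minimum of the other magnitudes.
                    msg[i, j] = sgn * np.sign(v2c[k]) * np.delete(mag, k).min()
        return hard

In hardware, the "minimum over all others" per check is typically computed from just the two smallest magnitudes, which is part of why min-sum is popular for implementation.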

  17. The first two examples are taken from Lin and Costello.
