
Near Shannon Limit Performance of Low Density Parity Check Codes


Presentation Transcript


  1. Near Shannon Limit Performance of Low Density Parity Check Codes David J.C. MacKay and Radford M. Neal, Electronics Letters, Vol. 32, No. 18, 29 August 1996.

  2. Outline • Features of LDPC Codes • History of LDPC Codes • Some Fundamentals • A Simple Example • Properties of LDPC Codes • How to construct? • Decoding • Concept of Message Passing • Sum Product Algorithm • Concept of Iterative Decoding • Channel Transmission • Decoding Algorithm • Decoding Example • Performance • Cost

  3. Shannon Limit (1/2) • Shannon Limit: • Describes the theoretical maximum information transfer rate of a communication channel (the channel capacity) for a particular level of noise. • Given: • A noisy channel with channel capacity C • Information transmitted at a rate R

  4. Shannon Limit (2/2) • If R < C, there exist codes that allow the probability of error at the receiver to be made arbitrarily small. • Theoretically, it is possible to transmit information nearly without error at any rate below a limiting rate, C. • If R > C, all codes will have a probability of error greater than a certain positive minimal level. • This level increases as the rate increases. • Information cannot be guaranteed to be transmitted reliably across a channel at rates beyond the channel capacity.
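
To make this concrete: for a real-input AWGN channel the capacity is C = 0.5·log2(1 + SNR), so the smallest Eb/N0 at which rate-R transmission can be reliable follows from setting C = R. A small illustrative sketch (our own code, not from the paper; the function name is ours):

import math

def min_ebn0_db(rate):
    """Minimum Eb/N0 (dB) for reliable rate-R transmission over a real
    AWGN channel: solve 0.5 * log2(1 + SNR) = rate, with SNR = 2*rate*Eb/N0."""
    snr = 2.0 ** (2.0 * rate) - 1.0   # SNR at which capacity equals the rate
    return 10.0 * math.log10(snr / (2.0 * rate))

print(min_ebn0_db(0.5))  # -> 0.0 dB: the Shannon limit for rate-1/2 codes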

  5. Features of LDPC Codes • A Low-Density Parity-Check (LDPC) code is an error-correcting code: • a method of transmitting a message over a noisy transmission channel. • LDPC codes approach Shannon capacity. • For example, 0.3 dB from the Shannon limit (1999). • A closer design (Chung, 2001) is 0.0045 dB away from capacity. • Linear decoding complexity in time. • Suitable for parallel implementation.

  6. History of LDPC Codes • Also known as Gallager codes, after Robert Gallager, who developed the LDPC concept in his doctoral dissertation at MIT in 1960. • Ignored for a long time because of the high-complexity computation their decoding requires. • Rediscovered in the 1990s by MacKay and by Richardson/Urbanke.

  7. Some Fundamentals • The structure of a linear block code is described by the generator matrix G or the parity-check matrix H. • H is sparse: • very few 1's in each row and column. • Regular LDPC codes: • Each column of H has a small weight (e.g. 3), and the weight per row is also uniform. • H is constructed at random subject to these constraints. • Irregular LDPC codes: • The number of 1's per column or row is not constant. • Irregular LDPC codes usually outperform regular LDPC codes.

  8. A Simple Example (1/6) (figure from www.wikipedia.org) • Variable Node: a box with an '=' sign. • Check Node: a box with a '+' sign. • Constraints: • All edges connecting to a variable node carry the same value, and all values connecting to a check node must sum, modulo two, to zero (i.e. they must sum to an even number).

  9. A Simple Example (2/6) • There are 8 possible 6-bit strings which correspond to valid codewords: • 000000, 011001, 110010, 111100, 101011, 100101, 001110, 010111. • This LDPC code fragment represents a 3-bit message encoded as 6 bits. • The redundancy is used to aid in recovering from channel errors.

  10. A Simple Example (3/6) • The parity-check matrix representing this graph fragment is:

H = | 1 1 1 1 0 0 |
    | 0 0 1 1 0 1 |
    | 1 0 0 1 1 0 |
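
As a quick sanity check of this matrix (a sketch of ours, not part of the slides), we can enumerate all 64 six-bit strings and keep those whose syndrome is zero; this recovers exactly the 8 codewords listed on the previous slide.

import itertools

# Parity-check matrix of the example (rows = checks, columns = bits)
H = [[1, 1, 1, 1, 0, 0],
     [0, 0, 1, 1, 0, 1],
     [1, 0, 0, 1, 1, 0]]

def syndrome(H, c):
    """z = H.c mod 2; c is a valid codeword iff z is the zero vector."""
    return [sum(h * b for h, b in zip(row, c)) % 2 for row in H]

codewords = [c for c in itertools.product([0, 1], repeat=6)
             if not any(syndrome(H, c))]
print([''.join(map(str, c)) for c in codewords])
# -> ['000000', '001110', '010111', '011001', '100101', '101011', '110010', '111100']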

  11. A Simple Example (4/6) • Consider the valid codeword: 101011 • It is transmitted across a binary erasure channel and received with the 1st and 4th bits erased: ?01?11 • Belief Propagation is particularly simple for the binary erasure channel: • it consists of iterative constraint satisfaction.

  12. A Simple Example (5/6) • Consider the erased codeword: ?01?11 • In this case: • The first step of belief propagation is to realize that the 4th bit must be 0 to satisfy the middle constraint. • Now that we have decoded the 4th bit, we realize that the 1st bit must be a 1 to satisfy the leftmost constraint.

  13. A Simple Example (6/6) • Thus we are able to iteratively decode the message encoded with our LDPC code. • We can validate this result by multiplying the corrected codeword r by the parity-check matrix H: z = Hr mod 2. • Because the outcome z (the syndrome) is the 3 x 1 zero vector, we have successfully validated the resulting codeword r.
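
The iterative constraint satisfaction of this example can be written as a short peeling decoder for the binary erasure channel. A minimal sketch (our own code and naming, reusing the H above; erased bits are marked None):

H = [[1, 1, 1, 1, 0, 0],
     [0, 0, 1, 1, 0, 1],
     [1, 0, 0, 1, 1, 0]]

def decode_bec(H, r):
    """Repeatedly find a check with exactly one erased bit and solve it."""
    r = list(r)
    progress = True
    while progress and None in r:
        progress = False
        for row in H:
            bits = [n for n, h in enumerate(row) if h]      # bits in this check
            erased = [n for n in bits if r[n] is None]
            if len(erased) == 1:                            # solvable check
                known = sum(r[n] for n in bits if r[n] is not None) % 2
                r[erased[0]] = known                        # parity must be even
                progress = True
    return r

print(decode_bec(H, [None, 0, 1, None, 1, 1]))  # ?01?11 -> [1, 0, 1, 0, 1, 1]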

  14. Properties of LDPC Codes • The structure of a linear block code is completely described by the generator matrix G or the parity-check matrix H. • r = Gm • r: codeword, • G: generator matrix, • m: input message • HG = 0, and hence Hr = HGm = 0 for every message m. • A Low-Density Parity-Check code is a code which has a very sparse, random parity-check matrix H. • Typically, the column weight is around 3 or 4.

  15. How to construct? (1/4) • Construction 1A: • An M by N matrix is created at random with: • weight per column t (e.g., t = 3), • weight per row as uniform as possible, • overlap between any two columns no greater than 1 • (so the bipartite graph contains no cycles of length 4).
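
A minimal sketch of a Construction-1A-style generator (our own code; the paper does not spell out an algorithm in this form). It keeps row weights uniform by preferring the currently lightest rows, and restarts whenever the column-overlap constraint cannot be met:

import random

def construct_1a(M, N, t=3, tries_per_column=200):
    """Random M x N matrix: column weight t, near-uniform row weights,
    any two columns overlapping in at most one row (no length-4 cycles)."""
    while True:                                   # restart on dead ends
        cols, row_weight = [], [0] * M
        while len(cols) < N:
            for _ in range(tries_per_column):
                # jittered sort: mostly prefers the currently lightest rows
                order = sorted(range(M), key=lambda m: row_weight[m] + 2 * random.random())
                col = frozenset(order[:t])
                if all(len(col & c) <= 1 for c in cols):
                    break                         # column is acceptable
            else:
                break                             # dead end: restart from scratch
            cols.append(col)
            for m in col:
                row_weight[m] += 1
        if len(cols) == N:
            return [[int(m in c) for c in cols] for m in range(M)]

H = construct_1a(M=15, N=20, t=3)   # a small rate-1/4 example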

  16. How to construct? (2/4) • Construction 2A: • Up to M/2 of the columns are designated weight-2 columns, • such that there is zero overlap between any pair of them. • The remaining columns are made at random with weight 3. • The weight per row is as uniform as possible. • The overlap between any two columns of the entire matrix is no greater than 1.

  17. How to construct? (3/4) • Constructions 1B and 2B: • A small number of columns are deleted from a matrix produced by Construction 1A or 2A, • so that the bipartite graph has no short cycles of length less than some length l.

  18. How to construct? (4/4) • The above constructions do not ensure that all rows of the matrix are linearly independent. • The M by N matrix created is the parity-check matrix of a linear code with rate at least R = K/N, where K = N - M. • The generator matrix of the code can be created by Gaussian elimination.
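
A sketch of that Gaussian-elimination step over GF(2) (our own code; for simplicity it assumes the last M columns of H can be reduced to the identity, so no column permutation is needed):

def generator_from_H(H):
    """Derive G (N x K, encoding r = G m mod 2) from an M x N matrix H.
    Row-reduce H to [A | I_M]; then G = [I_K stacked on A], since
    H G = A + A = 0 (mod 2). Row operations preserve the code."""
    M, N = len(H), len(H[0])
    K = N - M
    H = [row[:] for row in H]
    for i in range(M):                            # pivot on column K + i
        p = next(r for r in range(i, M) if H[r][K + i])
        H[i], H[p] = H[p], H[i]
        for r in range(M):
            if r != i and H[r][K + i]:
                H[r] = [a ^ b for a, b in zip(H[r], H[i])]
    A = [row[:K] for row in H]                    # H is now [A | I_M]
    return [[int(i == j) for j in range(K)] for i in range(K)] + A

H = [[1, 1, 1, 1, 0, 0],
     [0, 0, 1, 1, 0, 1],
     [1, 0, 0, 1, 1, 0]]
G = generator_from_H(H)
m = [1, 0, 1]
r = [sum(G[i][j] * m[j] for j in range(3)) % 2 for i in range(6)]
print(r)   # -> [1, 0, 1, 0, 1, 1]: the codeword 101011 from the earlier example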

  19. Decoding • The decoding problem is to find the most probable vector x, iteratively, such that: Hx mod 2 = 0 • Gallager's algorithm may be viewed as an approximate Belief Propagation (BP) algorithm, • i.e. a form of Message Passing (MP).

  20. Concept of Message Passing (1/4) • How can soldiers standing in a line count how many they are? • Each soldier adds 1 to the number heard from the neighbor on one side, then passes the result to the neighbor on the opposite side.

  21. Concept of Message Passing (2/4) • In the beginning, every soldier knows that: • There exists at least one soldier (himself). • This is the Intrinsic Information. Figure 1. Each node represents a soldier; local rule: +1.

  22. Concept of Message Passing (3/4) • The counting starts from the leftmost and rightmost soldiers. • This is the Extrinsic Information. Figure 2. Extrinsic Information Flow.

  23. Concept of Message Passing (4/4) • The total number = [left + right] + [oneself] • Overall Information = Extrinsic Information + Intrinsic Information. Figure 3. Overall Information Flow.
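
The soldier-counting scheme takes only a few lines to simulate (our own sketch): a forward and a backward pass carry the extrinsic counts, and each soldier adds his intrinsic count of one.

def count_soldiers(n):
    """Message passing on a line of n soldiers.
    from_left[i]/from_right[i] = soldiers strictly to the left/right of i."""
    from_left, from_right = [0] * n, [0] * n
    for i in range(1, n):                      # pass left-to-right
        from_left[i] = from_left[i - 1] + 1    # neighbor's message + 1
    for i in range(n - 2, -1, -1):             # pass right-to-left
        from_right[i] = from_right[i + 1] + 1
    # overall = extrinsic (left + right) + intrinsic (oneself)
    return [from_left[i] + from_right[i] + 1 for i in range(n)]

print(count_soldiers(5))  # every soldier concludes: [5, 5, 5, 5, 5]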

  24. Sum Product Algorithm (1/4) • During decoding, apply the Sum Product Algorithm to derive: • Extrinsic Information (Extrinsic Probability) • Overall Information (Overall Probability) • In this paper, channel transmission is: • BPSK through AWGN. • Binary Phase-Shift Keying. • Additive White Gaussian Noise.

  25. Sum Product Algorithm (2/4) • A simple example: • If the local rule is: m1 + m2 + m3 = 0 (mod 2)

  26. Sum Product Algorithm (3/4) Table 1. Valid codewords for the rule m1 + m2 + m3 = 0 (mod 2). • Overall Probability with m2 = 0: • P1(0)P2(0)P3(0) + P1(1)P2(0)P3(1) • Extrinsic Probability with m2 = 0: • P1(0)P3(0) + P1(1)P3(1), where Pi(x) denotes the probability that mi = x.

  27. Sum Product Algorithm (4/4) • Likelihood Ratio: λ(m3) = P(m3 = 0) / P(m3 = 1), the ratio of the probability that the bit is 0 to the probability that it is 1 (with m1 or m2 it is similar).
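
For a single even-parity check such as m1 + m2 + m3 = 0, these extrinsic probabilities can be computed with the product-of-differences identity that the horizontal step of the decoding algorithm reuses later. A sketch (names are ours):

def extrinsic(p_others):
    """Extrinsic (P(0), P(1)) for a bit on an even-parity check, given
    p_others = [(P(0), P(1)), ...] for the other bits on the check.
    Identity: P_ext(0) - P_ext(1) = product of (P(0) - P(1)) over others."""
    delta = 1.0
    for p0, p1 in p_others:
        delta *= (p0 - p1)
    return (1 + delta) / 2, (1 - delta) / 2

# Example: P1 = (0.9, 0.1) and P3 = (0.2, 0.8); extrinsic information for m2
e0, e1 = extrinsic([(0.9, 0.1), (0.2, 0.8)])
print(e0, e1, e0 / e1)   # 0.26, 0.74, and the likelihood ratio ~0.351
# Direct check: P1(0)P3(0) + P1(1)P3(1) = 0.18 + 0.08 = 0.26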

  28. Concept of Iterative Decoding (1/6) • A simple example: • Suppose we receive a codeword with the Likelihood Ratios shown ("10110" is an invalid codeword).

  29. Concept of Iterative Decoding (2/6) • A simple example: • Calculate the Extrinsic Probabilities at the Check Nodes:

  30. Concept of Iterative Decoding (3/6) • A simple example: • We then obtain the Overall Probabilities after the 1st round ("10010" is a valid codeword).

  31. Concept of Iterative Decoding (4/6) • A simple example: • 2nd round:

  32. Concept of Iterative Decoding (5/6) • A simple example: • 2nd round:

  33. Concept of Iterative Decoding (6/6) • A simple example: • We then obtain the Overall Probabilities after the 2nd round ("10010" is a valid codeword).

  34. Channel Transmission • BPSK through AWGN: • A Gaussian channel with binary input ±a and additive noise of variance σ² = 1. • t: BPSK-modulated signal, t ∈ {+a, -a} (bit 1 → +a, bit 0 → -a). • AWGN channel: y = t + n, with n ~ N(0, σ²). • Posterior probability: P(bit = 1 | y) = 1 / (1 + exp(-2a·y/σ²)).
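
A sketch of this channel model and posterior computation (our own code; we assume the mapping bit 1 → +a, bit 0 → -a, so the sign inside exp may differ from the paper's convention):

import math, random

def channel_posteriors(bits, a=1.0, sigma=1.0, seed=0):
    """BPSK over AWGN. Returns the received samples y and P(bit = 1 | y)."""
    rng = random.Random(seed)
    t = [a if b else -a for b in bits]                 # BPSK modulation
    y = [ti + rng.gauss(0.0, sigma) for ti in t]       # add Gaussian noise
    post = [1.0 / (1.0 + math.exp(-2.0 * a * yi / sigma ** 2)) for yi in y]
    return y, post

y, post = channel_posteriors([1, 0, 1, 0, 1, 1])
print([round(p, 3) for p in post])   # near 1 for transmitted 1s, near 0 for 0s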

  35. Decoding Algorithm (1/6) • We refer to the elements of x as bits and to the rows of H as checks, and denote: • the set of bits n that participate in check m by N(m) = {n : Hmn = 1}, • the set of checks in which bit n participates by M(n) = {m : Hmn = 1}, • the set N(m) with bit n excluded by N(m)\n.

  36. Decoding Algorithm (2/6) • (The example parity-check matrix H used on the following slides was shown here as a figure.)

  37. Decoding Algorithm (3/6) N(1) = {1, 2, 3, 6, 7, 10}, N(2) = {1, 3, 5, 6, 8, 9}, … etc M(1) = {1, 2, 5}, M(2) = {1, 4, 5}, … etc N(1)\1 = {2, 3, 6, 7, 10}, N(2)\3 = {1, 5, 6, 8, 9}, … etc M(1)\1 = {2, 5}, M(2)\4 = {1, 5}, … etc

  38. Decoding Algorithm (4/6) • The algorithm has two parts, in which quantities qmn and rmn associated with each non-zero element of the H matrix are iteratively updated: • qmn(x): the probability that bit n of x is x, given the information obtained via checks other than check m. • rmn(x): the probability of check m being satisfied if bit n of x is considered fixed at x, and the other bits have a separable distribution given by the probabilities {qmn'(x) : n' ∈ N(m)\n}. • The algorithm would produce the exact posterior probabilities of all the bits if the bipartite graph defined by the matrix H contained no cycles.

  39. Decoding Algorithm (5/6) • Initialization: • qmn(0) and qmn(1) are initialized to the channel values fn(0) and fn(1). • Horizontal Step: • Define: δqmn = qmn(0) - qmn(1) • For each m, n compute: δrmn = product of δqmn' over n' ∈ N(m)\n • Set: rmn(0) = (1 + δrmn) / 2, rmn(1) = (1 - δrmn) / 2

  40. Decoding Algorithm (6/6) • Vertical Step: • For each n and m, and for x = 0, 1, we update: qmn(x) = αmn · fn(x) · product of rm'n(x) over m' ∈ M(n)\m, • where αmn is chosen such that qmn(0) + qmn(1) = 1. • We can also update the "pseudoposterior probabilities" qn(0) and qn(1), given by: qn(x) = αn · fn(x) · product of rmn(x) over m ∈ M(n).

  41. Decoding Example (1/10)

  42. Decoding Example (2/10)

  43. Decoding Example (3/10) • BPSK through AWGN: • Simulated a Gaussian channel with binary input ±a and additive noise of variance σ² = 1. • t: BPSK-modulated signal, t ∈ {+a, -a}. • AWGN channel: y = t + n, with n ~ N(0, σ²).

  44. Decoding Example (4/10) • BPSK through AWGN: • Posterior probability: pn(1) = 1 / (1 + exp(-2a·yn/σ²)), pn(0) = 1 - pn(1).

  45. Recall: Decoding Algorithm • Input: the posterior probabilities pn(x). • Initialization: let qmn(x) = pn(x). • 1. Horizontal Step: • (a) Form the δq matrix from qmn(0) - qmn(1) (at the sparse non-zero locations). • (b) For each non-zero location (m, n), let δrmn be the product of the δq matrix elements along its row, excluding the (m, n) position. • (c) Let rmn(1) = (1 - δrmn) / 2, rmn(0) = (1 + δrmn) / 2. • 2. Vertical Step: For each non-zero location (m, n), let qmn(0) be the product of the rm'n(0) values along its column, excluding the (m, n) position, times pn(0). Similarly for qmn(1). Then normalize.
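
Putting the steps together, here is a compact sketch of the whole decoder (our own code; dense loops rather than sparse storage, for clarity, tracking δq values directly as in the summary above):

def sum_product_decode(H, p1, max_iters=20):
    """Sum-product decoding per the summary above. H: 0/1 parity-check
    matrix; p1[n] = P(bit n = 1 | channel). Returns (bits, success)."""
    M, N = len(H), len(H[0])
    p0 = [1 - p for p in p1]
    # Initialization: qmn(x) = pn(x), stored as deltas dq = q(0) - q(1)
    dq = [[(p0[n] - p1[n]) if H[m][n] else 0.0 for n in range(N)] for m in range(M)]
    for _ in range(max_iters):
        # Horizontal step: dr[m][n] = product of dq along row m, excluding n
        dr = [[0.0] * N for _ in range(M)]
        for m in range(M):
            cols = [n for n in range(N) if H[m][n]]
            for n in cols:
                prod = 1.0
                for n2 in cols:
                    if n2 != n:
                        prod *= dq[m][n2]
                dr[m][n] = prod
        # Vertical step: qmn(x) = alpha * pn(x) * product of rm'n(x) down
        # column n, excluding row m; rmn(0/1) = (1 +/- dr[m][n]) / 2
        posts = [0.0] * N
        for n in range(N):
            rows = [m for m in range(M) if H[m][n]]
            for m in rows:
                q0, q1 = p0[n], p1[n]
                for m2 in rows:
                    if m2 != m:
                        q0 *= (1 + dr[m2][n]) / 2
                        q1 *= (1 - dr[m2][n]) / 2
                dq[m][n] = (q0 - q1) / (q0 + q1)      # normalized delta
            a0, a1 = p0[n], p1[n]                     # pseudoposterior: full column
            for m in rows:
                a0 *= (1 + dr[m][n]) / 2
                a1 *= (1 - dr[m][n]) / 2
            posts[n] = a1 / (a0 + a1)
        c = [1 if q > 0.5 else 0 for q in posts]      # tentative hard decision
        if all(sum(H[m][n] * c[n] for n in range(N)) % 2 == 0 for m in range(M)):
            return c, True                            # H c mod 2 = 0: success
    return c, False

H = [[1, 1, 1, 1, 0, 0],
     [0, 0, 1, 1, 0, 1],
     [1, 0, 0, 1, 1, 0]]
p1 = [0.6, 0.2, 0.9, 0.4, 0.8, 0.9]   # hypothetical channel posteriors P(bit = 1)
print(sum_product_decode(H, p1))       # -> ([1, 0, 1, 0, 1, 1], True)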

  46. Decoding Example (5/10) • Initialization: Let qmn(x) = pn(x).

  47. Decoding Example (6/10) • Iteration 1: Horizontal Step, parts (a) and (b).

  48. Decoding Example (7/10) • Iteration 1: Horizontal Step, part (c).

  49. Decoding Example (8/10) • Iteration 1: Vertical Step, parts (a) and (b).

  50. Decoding Example (9/10) • Iteration 1: After the Vertical Step: Hc mod 2 ≠ 0 • Recall: update the pseudoposterior probabilities qn(1) and qn(0), given by: qn(x) = αn · pn(x) · product of rmn(x) over m ∈ M(n).
