
Introduction to Reed-Solomon Coding (Part I)


Presentation Transcript


  1. Introduction to Reed-Solomon Coding (Part I)

  2. Introduction
• One of the most important error control codes is the Reed-Solomon code.
• These codes were developed by Reed and Solomon in June 1960.
• The original paper is I. S. Reed and G. Solomon, "Polynomial Codes over Certain Finite Fields", Journal of the Society for Industrial and Applied Mathematics.

  3. • Reed-Solomon (RS) codes have many applications, such as compact discs (CD, VCD, DVD), deep-space exploration, HDTV, computer memory, and spread-spectrum systems.
• In the decades since their discovery, RS codes have become the most frequently used digital error control codes in the world.

  4. Effect of Noise

  5. • Without coding: the digital data 0 1 0 1 1 0 0 1 1 0 0 1 0 0 0 is transmitted over a noisy channel and the reconstructed data is 0 1 0 1 1 0 0 0 1 0 0 1 0 1 0, so two bits are received in error.
• With coding: the encoder maps each information bit (k = 1) to a code word of block length n = 3 by adding r = 2 check bits: 0 → 000 and 1 → 111, so the code words are 000 and 111.
• This is an (n, k) code with n = 3, k = 1, r = n - k = 3 - 1 = 2, and code rate p = k/n = 1/3.
• Example: the data 0 1 0 1 1 0 0 are encoded as 000 111 000 111 111 000 000, the receiver gets 000 101 000 111 111 010 001, and the decoder corrects this back to 000 111 000 111 111 000 000.
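A minimal Python sketch of this (3,1) repetition code with a majority-vote decoder; the function names encode/decode are illustrative, not from the slides:

    def encode(bits):
        # Map each information bit to the code word 000 or 111.
        return [b for bit in bits for b in (bit, bit, bit)]

    def decode(received):
        # Majority vote over each block of three bits: any single
        # bit error per block is corrected.
        decoded = []
        for i in range(0, len(received), 3):
            block = received[i:i + 3]
            decoded.append(1 if sum(block) >= 2 else 0)
        return decoded

    message = [0, 1, 0, 1, 1, 0, 0]
    sent = encode(message)                                        # 000 111 000 111 111 000 000
    received = [0,0,0, 1,0,1, 0,0,0, 1,1,1, 1,1,1, 0,1,0, 0,0,1]  # the noisy word from the slide
    print(decode(received) == message)                            # True

Every received block contains at most one flipped bit, so the majority vote recovers the original data exactly.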

  6. A (7,4) Hamming code
• For a (7,4) Hamming code, n = 7, k = 4, r = n - k = 7 - 4 = 3, and p = 4/7.
• Example: the 4-bit messages 0101, 1100, 1001, 0000 (information bits I1 I2 I3 I4) are passed through the encoder, transmitted to the receiver, and recovered by the decoder.

  7. • Let a1, a2, ..., ak be the k binary message digits and let c1, c2, ..., cr be the r parity check bits.
• An n-digit code word is then a1 a2 a3 ... ak c1 c2 c3 ... cr (n bits in total).
• The check bits are chosen to satisfy the r = n - k equations (arithmetic is modulo 2):
0 = h11·a1 + h12·a2 + ... + h1k·ak + c1
0 = h21·a1 + h22·a2 + ... + h2k·ak + c2
...                                                          (1)
0 = hr1·a1 + hr2·a2 + ... + hrk·ak + cr

  8. Equation (1) can be written in matrix notation as H · T = 0, where
H = [ h11 h12 ... h1k | 1 0 ... 0 ]
    [ h21 h22 ... h2k | 0 1 ... 0 ]
    [  .   .       .  | .  .    . ]
    [ hr1 hr2 ... hrk | 0 0 ... 1 ]
is the r×n parity check matrix, T = (a1, a2, ..., ak, c1, c2, ..., cr) is the n×1 code word column vector, and 0 is the r×1 all-zero vector.

  9. • Let E be an n×1 error pattern with at least one error: E = (e1, e2, ..., en), a column vector with ej = 1 in each error position and 0 elsewhere.
• Also let R be the received code word: R = (r1, r2, ..., rn) = T + E, i.e. the transmitted code word (a1, ..., ak, c1, ..., cr) with the erroneous digits flipped.

  10. • Thus S = H·R = H·(T + E) = H·T + H·E = H·E, so S = H·E, where S is an r×1 syndrome pattern.
• Problem: for a given S, find E. Expanding H·E column by column,
(s1, s2, ..., sr) = e1·(h11, h21, ..., hr1) + e2·(h12, h22, ..., hr2) + ... + en·(0, 0, ..., 1)      (2)
i.e. the syndrome is the modulo-2 sum of the columns of H in the error positions.

  11. Assume e1 = 0, e2 = 1, e3 = 0, ..., en = 0. Then
(s1, s2, ..., sr) = (h12, h22, ..., hr2),
so the syndrome is equal to the second column of the parity check matrix H. Thus the second position of the received code word is in error.
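The syndrome-decoding procedure of slides 7-11 can be sketched in Python for the (7,4) Hamming code of slide 6. The particular parity part h of H below is one common choice, assumed here for illustration since the slides do not specify it:

    import numpy as np

    # H = [h | I_r] for a systematic (7,4) Hamming code (assumed parity part h).
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])

    def encode(info):
        # Choose the r = 3 parity bits so that H . T = 0 (mod 2), as in equation (1).
        info = np.array(info)
        parity = H[:, :4].dot(info) % 2
        return np.concatenate([info, parity])

    def decode(received):
        # S = H . R; a nonzero syndrome equals the column of H at the error position.
        r = np.array(received)
        syndrome = H.dot(r) % 2
        if syndrome.any():
            for j in range(7):
                if np.array_equal(H[:, j], syndrome):
                    r[j] ^= 1      # flip the single erroneous bit
                    break
        return r[:4]

    word = encode([0, 1, 0, 1])
    word[2] ^= 1                   # introduce one error in the third position
    print(decode(word).tolist())   # [0, 1, 0, 1]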

  12. • An (n, k) Hamming code has n = r + k = 2^r - 1, where k is the number of message bits and r = n - k is the number of parity check bits.
• The rate of the Hamming code is therefore p = k/n = (2^r - r - 1)/(2^r - 1), as tabulated in the snippet below.
• The Hamming code is a single-error-correcting code.
• In order to correct two or more errors, cyclic binary codes such as BCH codes and Reed-Solomon codes were developed; they can correct t errors, where t ≧ 1.
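A short illustrative snippet evaluating these parameter relations for the first few values of r:

    # n = 2^r - 1, k = n - r, rate = k/n for r = 3, 4, 5.
    for r in (3, 4, 5):
        n = 2 ** r - 1
        k = n - r
        print(f"r={r}: ({n},{k}) Hamming code, rate {k}/{n} = {k / n:.3f}")

This prints the (7,4), (15,11), and (31,26) codes with rates 0.571, 0.733, and 0.839 respectively.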

  13. Single-error-correcting binary BCH code
• In GF(2^4), let p(x) = x^4 + x + 1 be a primitive irreducible polynomial over GF(2).
• Then the elements of GF(2^4) are 0 and the 15 powers 1, α, α^2, ..., α^14 of a root α of p(x), each of which can be written as a polynomial in α of degree less than 4 using α^4 = α + 1.
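The table of field elements referred to above can be generated with a few lines of Python, representing each element of GF(2^4) as a 4-bit integer whose bit i is the coefficient of α^i; the helper name gf16_powers is illustrative, not from the slides:

    def gf16_powers(p=0b10011):          # p(x) = x^4 + x + 1
        elems, a = [], 1
        for _ in range(15):
            elems.append(a)
            a <<= 1                      # multiply by alpha
            if a & 0b10000:              # degree 4 reached: reduce using alpha^4 = alpha + 1
                a ^= p
        return elems

    for i, e in enumerate(gf16_powers()):
        print(f"alpha^{i:2d} = {e:04b}") # e.g. alpha^4 = 0011, i.e. alpha + 1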

  14. • The parity check matrix of an (n = 15, k = 11) BCH code for correcting one error is H = [ α^14 α^13 ... α^2 α 1 ], where each power α^i is written as a 4-bit column vector over GF(2).
• Encoder: let the code word of this code be C = (c14, c13, ..., c1, c0), with the 11 information bits in the high-order positions c14, ..., c4 and the 4 parity check bits in the low-order positions c3, ..., c0.

  15. Decoder:
• Let the received word be R = C + E, where C is the code word and E is the error pattern.
• The syndrome is S = H·R = H·(C + E) = H·C + H·E = H·E, since H·C = 0 for every code word.
• Example: let R = C + E = (111001010011001) + (001000000000000) = (110001010011001), i.e. a single error in the third position.

  16. • Let the information polynomial be I(x) = c14·x^14 + c13·x^13 + ... + c4·x^4, carrying the 11 information bits in the high-order positions.
• The code word is C(x) = I(x) + R(x), where I(x) is the information polynomial and R(x) = c3·x^3 + c2·x^2 + c1·x + c0 is the parity check polynomial.

  17. • Note that C(x) = Q(x)·g(x), where g(x) is called the generator polynomial; C(x) is a code word if and only if C(x) is a multiple of g(x).
• For example, to encode the (15,11) BCH code, the generator polynomial is g(x) = x^4 + x + 1, the minimal polynomial of α, where α is an element of order 15 in GF(2^4).

  18. • To encode, one needs to find c3, c2, c1, c0, i.e. R(x) = c3·x^3 + c2·x^2 + c1·x + c0, such that C(x) = I(x) + R(x) is divisible by g(x), i.e. C(α) = 0.
• To obtain R(x), divide I(x) by g(x): I(x) = Q(x)·g(x) + R(x).
• Encoder: C(x) = I(x) + R(x) = Q(x)·g(x), since addition is modulo 2 and adding R(x) cancels the remainder.
• Because C(x) is a multiple of g(x), C(x) = I(x) + R(x) is a code word of the (15,11) BCH code.

  19. Example:
• Dividing the information polynomial by g(x) gives I(x) = Q(x)·g(x) + R(x).
• The code word is C(x) = Q(x)·g(x) = I(x) + R(x), i.e. C = 111001010011001 …
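The encoding steps of slides 18-19 can be reproduced with a short GF(2) polynomial division, working on bit lists with the leftmost bit as the highest-degree coefficient; gf2_remainder is an illustrative helper, not from the slides:

    def gf2_remainder(dividend, divisor):
        # Long division of binary polynomials; returns the remainder bits.
        rem = list(dividend)
        for i in range(len(dividend) - len(divisor) + 1):
            if rem[i]:
                for j, d in enumerate(divisor):
                    rem[i + j] ^= d
        return rem[-(len(divisor) - 1):]

    g = [1, 0, 0, 1, 1]                       # g(x) = x^4 + x + 1
    info = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1]  # the 11 information bits of the example
    ix = info + [0, 0, 0, 0]                  # I(x): information bits in positions x^14 ... x^4
    r = gf2_remainder(ix, g)                  # R(x) = I(x) mod g(x) -> [1, 0, 0, 1]
    print(''.join(map(str, info + r)))        # 111001010011001, the code word C(x) of the slide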

  20. • To decode, let the error polynomial be E(x) = x^i, a single error in position i.
• The received word polynomial is R'(x) = C(x) + E(x).
• The syndrome is S = R'(α) = C(α) + E(α) = E(α) = α^i; the exponent i of α^i is the error location in the received word.
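Continuing the same example, a sketch of this single-error decoder: the syndrome S = R'(α) is evaluated in GF(2^4) using the power table from slide 13, and its discrete logarithm gives the error position (the gf16 helper is the same assumed construction as before):

    def gf16_powers(p=0b10011):               # same GF(2^4) table as before
        elems, a = [], 1
        for _ in range(15):
            elems.append(a)
            a <<= 1
            if a & 0b10000:
                a ^= p
        return elems

    POW = gf16_powers()                        # exponent -> field element
    LOG = {e: i for i, e in enumerate(POW)}    # field element -> exponent

    def syndrome(received):
        # Evaluate R'(alpha): XOR alpha^i for every received bit r_i = 1 (i = degree of x^i).
        s = 0
        for i, bit in enumerate(reversed(received)):
            if bit:
                s ^= POW[i]
        return s

    codeword = [1,1,1,0,0,1,0,1,0,0,1,1,0,0,1]   # C(x) from slide 19
    received = list(codeword)
    received[2] ^= 1                             # single error in the x^12 position
    s = syndrome(received)                       # s = 0 would mean no error detected
    print(LOG[s])                                # 12: the error location alpha^12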

  21. Double-error-correcting binary BCH code
• Goal: to encode an (n = 15, k = 7) BCH code over GF(2^4), which can correct two errors.
• Let C(x) = K(x)·g1(x)·g2(x), where
g1(x) is the minimal polynomial of α, so g1(α) = 0, and
g2(x) is the minimal polynomial of α^3, so g2(α^3) = 0.

  22. • The minimal polynomial of α is g1(x) = x^4 + x + 1.
• The minimal polynomial of α^3 is g2(x) = x^4 + x^3 + x^2 + x + 1.
• The generator polynomial of the (15,7) BCH code is therefore g(x) = g1(x)·g2(x) = x^8 + x^7 + x^6 + x^4 + 1.
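The product g1(x)·g2(x) can be checked with a small carry-less multiplication over GF(2), using integers whose bit i holds the coefficient of x^i; gf2_mul is an illustrative helper:

    def gf2_mul(a, b):
        # Multiply two binary polynomials (coefficients mod 2).
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            b >>= 1
        return result

    g1 = 0b10011                  # x^4 + x + 1
    g2 = 0b11111                  # x^4 + x^3 + x^2 + x + 1
    print(bin(gf2_mul(g1, g2)))   # 0b111010001 = x^8 + x^7 + x^6 + x^4 + 1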

  23. Reed-Solomon (RS) code
• An RS code is a cyclic symbol-error-correcting code.
• An RS code word consists of I information or message symbols together with P parity or check symbols. The word length is N = I + P.
• The symbols in an RS code word are usually not binary, i.e., each symbol is represented by more than one bit. A favorite choice is 8-bit symbols, which matches the fact that most computers have word lengths of 8 bits or a multiple of 8 bits.

  24. • In order to correct t symbol errors, the minimum distance D of the code words must satisfy D = 2t + 1.
• If the minimum distance of an RS code is D and the word length is N, then the number of message symbols in a word is I = N - (D - 1) and the number of parity symbols is P = D - 1.
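A small numerical illustration of these relations, assuming the popular choice of 8-bit symbols (so N = 2^8 - 1 = 255) and t = 8 correctable symbol errors; the specific numbers are an example, not from the slides:

    N = 255            # word length in symbols
    t = 8              # correctable symbol errors
    D = 2 * t + 1      # required minimum distance
    P = D - 1          # parity symbols
    I = N - P          # information symbols
    print(N, D, P, I)  # 255 17 16 239

This gives a (255, 239) Reed-Solomon code.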
