
FEC Linear Block Coding


Presentation Transcript


  1. FEC Linear Block Coding Matthew Pregara & Zachary Saigh Advisors: Dr. In Soo Ahn & Dr. Yufeng Lu Dept. of Electrical and Computer Eng.

  2. Table of Contents • Motivation • Introduction • Hamming Code Example • Tanner Graph/Hard Decision Decoding • Simulink Simulation on Hamming Code • VHDL Simulation Results • Low Density Parity Check Code • Timeline and Division of Labor • Conclusion

  3. Motivation • FEC: Forward Error Correction • Adds redundancy to message. • Detects AND Corrects Errors. • ARQ: Automatic Repeat Request • Detects errors and requests retransmission.

  4. Motivation (taken from [1])

  5. Redundancy • Add extra bits (parity bits) to the message. • Increases the number of bits sent, requiring larger transmission bandwidth. • Decreases the number of errors in the message. • Parity bits carry information derived from multiple message bits.

  6. Linear Block Coding • Block Codes are denoted by (n, k). • k = message bits (message word) • n = message bits + parity bits (codeword) • # of parity bits: m = n - k • Code Rate R = k/n • Example: (7,4) code • 4 message bits • +3 parity bits • = 7 codeword bits • Code rate R = 4/7

  7. Block Coding Diagram

  8. Linear Block Coding cont.

  9. Error Detection

  10. Hamming Code • The G matrix is derived from a primitive polynomial, a factor of x^n + 1. • The systematic G matrix is obtained from the G matrix. • The H matrix is determined from the systematic G matrix. • G = [I_k | P], where P = parity matrix • H = [P^T | I_(n-k)]; H^T = [P ; I_(n-k)] (P stacked above I_(n-k))
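
The structure above can be sketched directly in MATLAB. The specific parity matrix P below is an assumption that follows from the factor x^3 + x^2 + 1 used on slides 12-13, not a matrix copied from the slides:

```matlab
% (7,4) Hamming code in systematic form: G = [I_k | P], H = [P^T | I_(n-k)].
% P is assumed to come from g(x) = x^3 + x^2 + 1 (see slides 12-13).
k = 4;  n = 7;
P = [1 1 0;
     0 1 1;
     1 1 1;
     1 0 1];
G = [eye(k), P];            % 4x7 generator matrix
H = [P', eye(n-k)];         % 3x7 parity-check matrix
% Every codeword satisfies c*H' = 0, so G*H' must be all zeros (mod 2).
assert(all(all(mod(G*H', 2) == 0)));
```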

  11. Encoding • Message m, made of k bits, is post-multiplied by the G matrix using Modulo-2 arithmetic.
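
As a minimal sketch (continuing the G defined above; the message bits are an arbitrary example), the encoding step is a single mod-2 matrix product:

```matlab
% Encode a k-bit message word by post-multiplying with G (mod-2 arithmetic).
m = [1 0 1 1];              % example 4-bit message (assumed for illustration)
c = mod(m * G, 2);          % 7-bit codeword; first k bits equal m (systematic code)
```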

  12. Constructing G • Start with x^n + 1, n = 7. • Factorize x^7 + 1 = (x + 1)(x^3 + x^2 + 1)(x^3 + x + 1). • Take the coefficients of one of the degree-3 factors: (x^3 + x^2 + 1) => [1 1 0 1]

  13. • Fill the matrix with the coefficient vector, shifting it by one entry in each successive row. • Fill the empty entries with 0's. • Find the RREF (using mod-2 arithmetic). • The result is the systematic G matrix [I_4 | P].
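
A sketch of this shift-and-reduce procedure in MATLAB follows; the mod-2 row-reduction loop is a generic Gauss-Jordan implementation written for illustration, not code taken from the project:

```matlab
% Build the non-systematic G by shifting the coefficients of x^3 + x^2 + 1,
% then row-reduce over GF(2) to reach the systematic form [I_4 | P].
g = [1 1 0 1];                          % coefficients of x^3 + x^2 + 1
k = 4;  n = 7;
Gns = zeros(k, n);
for i = 1:k
    Gns(i, i:i+length(g)-1) = g;        % each row shifted right by one entry
end
Gsys = Gns;                             % Gauss-Jordan elimination with mod-2 row ops
for col = 1:k
    piv = find(Gsys(col:end, col), 1) + col - 1;   % pick a pivot row
    Gsys([col piv], :) = Gsys([piv col], :);       % swap it into position
    rows = find(Gsys(:, col));  rows(rows == col) = [];
    if ~isempty(rows)                   % clear the other 1s in this column (XOR rows)
        Gsys(rows, :) = mod(Gsys(rows, :) + repmat(Gsys(col, :), numel(rows), 1), 2);
    end
end
% Gsys is now [I_4 | P]; its right half is the parity matrix P (cf. G = [I_k | P] on slide 10).
```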

  14. Encoding

  15. Encoding Example

  16. Decoding • The received codeword of n bits is post-multiplied by H^T to obtain the syndrome.

  17. Decoding Recall: • G = [I_k | P]; P = parity matrix • H = [P^T | I_(n-k)] • S = syndrome
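
Continuing the earlier sketch (G, H, k, and the codeword c defined above), syndrome computation is another mod-2 product; the injected error position is chosen to match the example on slide 20:

```matlab
% Corrupt one bit of the transmitted codeword and compute the syndrome.
r = c;
r(2) = 1 - r(2);            % flip the 2nd bit (the error assumed in slide 20's example)
s = mod(r * H', 2);         % 1x3 syndrome; all zeros would mean "no detectable error"
```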

  18. Syndrome Table

  19. Example continued

  20. Correcting Errors • In this case, the 2nd bit is corrupted. • Invert the corrupted bit at the location indicated by the syndrome table.
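
A sketch of the lookup-and-flip step, continuing the variables above. For a single-bit error the syndrome equals the column of H at the error position, so the syndrome table can be read directly off H:

```matlab
% The syndrome of a single-bit error in position i is the i-th column of H,
% so matching s against the rows of H' locates the corrupted bit.
[~, pos] = ismember(s, H', 'rows');   % pos = 0 if s is all zeros (no error located)
if any(s) && pos > 0
    r(pos) = 1 - r(pos);              % invert the corrupted bit
end
m_hat = r(1:k);                       % systematic code: recovered message = first k bits
```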

  21. Tanner Graph and Hard Decision Decoding • (8,4) example • Check nodes connect variable nodes {2,4,5,8}, {1,2,3,6}, {3,6,7,8}, and {1,4,5,7} (a bit-flipping decoding sketch follows slide 23).

  22. Hard Decision Decoding

  23. Variable Node Decisions
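
The following is a minimal hard-decision (bit-flipping) sketch for the (8,4) Tanner graph of slide 21; the received word and the iteration cap are assumptions added for illustration:

```matlab
% Hard-decision (bit-flipping) decoding on the (8,4) Tanner graph.
% Each row lists the variable nodes connected to one check node (slide 21).
checks = [2 4 5 8;
          1 2 3 6;
          3 6 7 8;
          1 4 5 7];
r = [1 1 0 1 0 1 0 1];                  % received hard decisions (assumed example)
for iter = 1:10                         % small iteration cap (assumption)
    fails = mod(sum(r(checks), 2), 2);  % 1 where a parity check is unsatisfied
    if ~any(fails), break; end          % all checks satisfied: decoding done
    votes = zeros(1, 8);                % count failed checks touching each bit
    for c = 1:4
        votes(checks(c, :)) = votes(checks(c, :)) + fails(c);
    end
    [~, bit] = max(votes);              % bit involved in the most failed checks
    r(bit) = 1 - r(bit);                % flip it and re-check
end
```

Each check node "votes" against the bits it touches whenever its parity fails, and the most-implicated bit is flipped before the checks are evaluated again.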

  24. Simulink Model of (7,4) Hamming Code

  25. Simulation Results: Field Programmable Gate Array (FPGA)

  26. Encoding and Decoding

  27. Simulation Results: Hardware Description Language (VHDL)

  28. Low Density Parity Check Code • Offers performance close to Shannon’s channel capacity limit. • Can correct multiple bit errors. • Low decoder complexity.

  29. (taken from [1])

  30. LDPC Code • Start with the H matrix first. • The 1 entries in H are sparse (low density). • From the H matrix, find the systematic G matrix. • The 1 entries in G are not sparse, which increases encoder complexity.
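
As a toy sketch of that relationship (the H below is an assumed small example, not a matrix from the slides; real LDPC matrices are far larger and sparser):

```matlab
% LDPC design starts from H. If H is brought into the systematic form
% H = [P^T | I_(n-k)] (e.g. by mod-2 row reduction as sketched earlier),
% the generator follows as G = [I_k | P], which is generally much denser than H.
H = [0 1 0 1 1 0 0 0;       % assumed (8,4) toy example
     1 0 1 0 0 1 0 0;
     1 1 0 0 0 0 1 0;
     0 0 1 1 0 0 0 1];
k = 4;
P = H(:, 1:k)';             % left block of H is P^T
G = [eye(k), P];            % systematic generator matrix
assert(all(all(mod(G*H', 2) == 0)));   % rows of G orthogonal to rows of H (mod 2)
```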

  31. LDPC Code • Decoding is done iteratively. • To operate close to the Shannon limit at very low SNR, the dimensions of the H matrix must be very large. • For real-time operation, the dimensions of the H matrix are constrained by hardware resources, the allowed decoding time, and other factors.

  32. Division of Labor • Zack: MATLAB encoder; Simulink design of encoder/channel; implementation of VHDL on FPGA; performance analysis of FPGA implementation. • Matt: MATLAB decoder; Simulink design of decoder; implementation with Xilinx System Generator; performance analysis of MATLAB/Simulink implementation.

  33. Timeline

  34. Conclusion • LDPC coding is to be implemented. • Preliminary investigations have been performed: a Hamming encoder/decoder was examined in the MATLAB and Simulink environment, and Tanner graph representations of LDPC examples were researched. • The plan is to choose an LDPC coding scheme and implement it on an FPGA.

  35. References [1] M. Valenti, Iterative Solutions Coded Modulation Library: Theory of Operation, West Virginia University, Oct. 3, 2005. Web. Accessed Oct. 23, 2012. <www.wvu.edu>. [2] B. Sklar, Digital Communications: Fundamentals and Applications, 2nd ed., Prentice-Hall, 2000. [3] Xilinx System Generator Manual, Xilinx Inc., 2011.

  36. Q and A’s • Thank you for listening. • Any questions or suggestions are welcome.
