
Department of Electrical Engineering École Polytechnique de Montréal




  1. Department of Electrical Engineering, École Polytechnique de Montréal. David Haccoun, Eng., Ph.D., Professor of Electrical Engineering; Life Fellow, IEEE; Fellow, Engineering Institute of Canada

  2. Engineering training in Canada: 36 schools/faculties. [Map slide: counts of engineering schools per region across Canada, with Vancouver, Toronto, and Montréal marked.] Undergraduate students: Canada 55,000; Québec 14,600

  3. École Polytechnique, cradle of engineering in Québec • The oldest engineering school in Canada • The third-largest in Canada for teaching and research • The first in Québec for student body size • Operating budget: $85 million Canadian dollars (C$) • Annual research budget: $60.5 million C$ • Annual grants and research contracts: $38 million C$ • 15 Industrial Research Chairs • 24 Canada Research Chairs • 7,863 scientific publications over the last decade • 220 professors and 1,100 employees • 1,000 graduates per year, and 30,000 since 1873

  4. 11 engineering programs • Biomedical • Civil • Chemical • Electrical • Geological • Industrial • Computer • Software • Mechanical • Mining • Engineering physics

  5. Our Polytechnique campus [photo slide]

  6. Novel Iterative Decoding Using Convolutional Doubly Orthogonal Codes: A simple approach to capacity. David Haccoun, Éric Roy, Christian Cardinal

  7. Modern Error Control Coding Techniques Based on Difference Families • A new class of threshold-decodable codes leading to simple and efficient error control schemes • No interleaver at either encoding or decoding • Far less complex to implement than turbo coding schemes; attractive alternatives to turbo coding at moderate Eb/N0 values • High-rate codes readily obtained by a puncturing technique • Low-complexity, high-speed FPGA-based prototypes at bit rates > 100 Mbps • Extensions to recursive codes → capacity • Rate-adaptive schemes → punctured codes • Reduced latency, reduced complexity → simplified codes
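The puncturing idea mentioned above can be illustrated with a minimal sketch: starting from a rate-1/2 systematic stream of (information, parity) pairs, parity bits are deleted according to a periodic pattern to raise the rate. The pattern and bit values below are illustrative, not the actual puncturing patterns of these codes.

```python
# Sketch of rate increase by puncturing (illustrative pattern and data).
# A rate-1/2 systematic stream (u_i, p_i) is punctured by deleting parity
# bits per a periodic pattern; keeping 1 parity bit in 3 yields rate 3/4.

def puncture(info, parity, pattern):
    """Keep parity[i] only where pattern[i % len(pattern)] == 1."""
    kept = [p for i, p in enumerate(parity) if pattern[i % len(pattern)] == 1]
    return info + kept  # transmitted symbols: all info bits + surviving parity

info = [1, 0, 1, 1, 0, 1]
parity = [0, 1, 1, 0, 0, 1]
tx = puncture(info, parity, pattern=[1, 0, 0])  # 1 of every 3 parity bits kept
print(len(info) / len(tx))  # 6 info bits / 8 transmitted -> 0.75
```

Since the information bits are never punctured, the decoder can reinsert neutral (zero-LLR) values at the deleted parity positions and run unchanged.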

  8. [figure-only slide]

  9. One-Dimensional NCDO Codes • Nonrecursive systematic convolutional (NSC) encoder, R = 1/2 • Notation: A = set of connection positions; J = number of connection positions; m = memory length (coding span) • [Figure: information sequence fed to a shift register of length m with taps at positions 0, 1, 2, …, m; the tapped delays D^α, α = 0 … m, are summed to form the parity sequence; information and parity sequences are transmitted over an AWGN channel]
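The NSC encoder of this slide can be sketched in a few lines: for each information bit u_i, the parity bit is the XOR of the delayed bits u_{i-α} over the connection set A, with the register assumed zero-initialized. The connection set and input bits below are illustrative (the set matches the J = 4, m = 15 example that follows).

```python
# Minimal sketch of the R = 1/2 nonrecursive systematic convolutional (NSC)
# encoder: parity p_i = XOR of u_{i-a} over the connection positions a in A.
# Register assumed zero-initialized; input sequence is illustrative.

def nsc_encode(u, A):
    m = max(A)  # memory length (coding span); unused taps read zeros
    p = []
    for i in range(len(u)):
        bit = 0
        for a in A:
            if i - a >= 0:      # taps reaching before the sequence read 0
                bit ^= u[i - a]
        p.append(bit)
    return p  # systematic output is the pair sequence (u_i, p_i)

u = [1, 0, 1, 1]
print(nsc_encode(u, A=[0, 3, 13, 15]))  # -> [1, 0, 1, 0]
```

At i = 3 the tap at position 3 reaches back to u_0, so p_3 = u_3 XOR u_0 = 0; the longer taps (13, 15) only become active for longer input sequences.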

  10. Example of a Convolutional Self-Orthogonal Code (CSOC), R = 1/2, J = 4, m = 15 • Simple orthogonality property: all the simple differences are distinct • [Figure: encoder connection positions and the resulting differences]

  11. Example of CSOC, J = 4: Distinct Simple Differences • All the simple differences are distinct • CSOC codes are suitable for threshold decoding
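The distinctness condition on the simple differences is easy to verify mechanically. The connection set A = {0, 3, 13, 15} used below is consistent with the J = 4, m = 15 example and with the difference values tabulated on slide 15 (α1 − α0 = 3, α2 − α0 = 13, α3 − α0 = 15); {0, 1, 2, 3} is a deliberately failing counterexample.

```python
# Check of the CSOC simple-orthogonality condition: all ordered simple
# differences a_j - a_k (j != k) of the connection set must be distinct.

from itertools import permutations

def is_simply_orthogonal(A):
    diffs = [j - k for j, k in permutations(A, 2)]
    return len(diffs) == len(set(diffs))

print(is_simply_orthogonal([0, 3, 13, 15]))  # True: 12 distinct differences
print(is_simply_orthogonal([0, 1, 2, 3]))    # False: 1-0 == 2-1
```

Minimizing the span m for a given J while keeping all differences distinct is exactly the Golomb-ruler problem mentioned on slide 14.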

  12. Threshold (TH) Decoding of CSOCs • CSOCs are non-iterative, systematic, and non-recursive • A well-known symbol decoding technique that exploits the simply-orthogonal properties of CSOCs • Either hard-input or soft-input soft-output (SISO) decoding • Very simple implementation of the majority-logic procedure

  13. Example of a One-Step Threshold Decoder: J = 3, A = {0, 1, 3}, dmin = 4 • Soft outputs in LLR form; the check values in the figure evaluate to (2 - 1) = 1, (3 - 0) = 3, and (3 - 1) = 2, yielding the decoded bits ûi • The check combination uses the tanh/tanh⁻¹ (sum-product) operator or the add-min (min-sum) operator • The decoder inputs are LLR values representing the received symbols • [Figure: shift-register decoder with syndrome checks feeding a threshold comparison]
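The two check-combination operators named on the slide can be sketched directly for a pair of LLRs; the numerical inputs below are illustrative. The add-min (min-sum) rule is the standard low-complexity approximation of the tanh/tanh⁻¹ (sum-product) rule.

```python
# Sketch of the two LLR-combination operators named on the slide,
# applied to a pair of LLRs (illustrative values).

import math

def boxplus_exact(a, b):
    # sum-product rule: 2 * atanh(tanh(a/2) * tanh(b/2))
    return 2.0 * math.atanh(math.tanh(a / 2.0) * math.tanh(b / 2.0))

def boxplus_minsum(a, b):
    # add-min rule: product of signs times the minimum magnitude
    s = (1 if a >= 0 else -1) * (1 if b >= 0 else -1)
    return s * min(abs(a), abs(b))

print(boxplus_minsum(3.0, -1.0))  # -1.0: min magnitude, sign product
print(boxplus_exact(3.0, -1.0))   # slightly smaller magnitude than add-min
```

The min-sum value always upper-bounds the sum-product magnitude, which is why min-sum decoders trade a small loss in reliability estimation for much cheaper hardware.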

  14. Novel Iterative Error Control Coding Schemes • Extension to iterative threshold decoding: Convolutional Self-Doubly-Orthogonal Codes (CSO2C) • Conditions on the connection positions: all the differences (αj - αk) are distinct; the differences of differences (αj - αk) - (αl - αn), j ≠ k, k ≠ n, n ≠ l, l ≠ j, must be distinct from all the differences (αr - αs), r ≠ s; the differences of differences are distinct except for the unavoidable repetitions • The decoder exploits the doubly-orthogonal properties of the CSO2C • Asymptotic error performance (dmin = J + 1) at moderate Eb/N0 • Issues: search and determination of new CSO2Cs, an extension of the (unsolved) Golomb-ruler problem

  15. Example of CSO2C, J = 4: Differences of Differences (consistent with the tabulated values: α1 - α0 = 3, α2 - α0 = 13, α3 - α0 = 15, i.e., A = {0, 3, 13, 15})
  (0,1,0,1)=((-3)-(3))=-6; (0,2,0,1)=((-13)-(3))=-16; (0,2,0,2)=((-13)-(13))=-26; (0,3,0,1)=((-15)-(3))=-18
  (0,3,0,2)=((-15)-(13))=-28; (0,3,0,3)=((-15)-(15))=-30; (1,0,1,0)=((3)-(-3))=6; (1,2,0,2)=((-10)-(13))=-23
  (1,2,1,0)=((-10)-(-3))=-7; (1,2,1,2)=((-10)-(10))=-20; (1,3,0,2)=((-12)-(13))=-25; (1,3,0,3)=((-12)-(15))=-27
  (1,3,1,0)=((-12)-(-3))=-9; (1,3,1,2)=((-12)-(10))=-22; (1,3,1,3)=((-12)-(12))=-24; (2,0,1,0)=((13)-(-3))=16
  (2,0,2,0)=((13)-(-13))=26; (2,1,0,1)=((10)-(3))=7; (2,1,2,0)=((10)-(-13))=23; (2,1,2,1)=((10)-(-10))=20
  (2,3,0,1)=((-2)-(3))=-5; (2,3,0,3)=((-2)-(15))=-17; (2,3,1,0)=((-2)-(-3))=1; (2,3,1,3)=((-2)-(12))=-14
  (2,3,2,0)=((-2)-(-13))=11; (2,3,2,1)=((-2)-(-10))=8; (2,3,2,3)=((-2)-(2))=-4; (3,0,1,0)=((15)-(-3))=18
  (3,0,2,0)=((15)-(-13))=28; (3,0,3,0)=((15)-(-15))=30; (3,1,0,1)=((12)-(3))=9; (3,1,2,0)=((12)-(-13))=25
  (3,1,2,1)=((12)-(-10))=22; (3,1,3,0)=((12)-(-15))=27; (3,1,3,1)=((12)-(-12))=24; (3,2,0,1)=((2)-(3))=-1
  (3,2,0,2)=((2)-(13))=-11; (3,2,1,0)=((2)-(-3))=5; (3,2,1,2)=((2)-(10))=-8; (3,2,3,0)=((2)-(-15))=17
  (3,2,3,1)=((2)-(-12))=14; (3,2,3,2)=((2)-(-2))=4
  • All the differences of differences are distinct • These codes are suitable for iterative threshold or belief-propagation decoding
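The slide's claims can be verified mechanically from the tabulated values: the 42 listed differences of differences must all be distinct, and none may coincide with a simple difference of the connection set (conditions 1 and 2 of slide 14). The set A = {0, 3, 13, 15} used here is the one recovered from the tabulated values.

```python
# Verification of the slide-15 listing against the slide-14 conditions,
# using the 42 tabulated differences-of-differences values.

from itertools import permutations

A = [0, 3, 13, 15]
simple = {j - k for j, k in permutations(A, 2)}  # simple differences

dod = [-6, -16, -26, -18, -28, -30, 6, -23, -7, -20, -25, -27, -9, -22,
       -24, 16, 26, 7, 23, 20, -5, -17, 1, -14, 11, 8, -4, 18,
       28, 30, 9, 25, 22, 27, 24, -1, -11, 5, -8, 17, 14, 4]  # from the table

print(len(dod) == len(set(dod)))  # True: all listed values are distinct
print(simple.isdisjoint(dod))     # True: disjoint from the simple differences
```

Both checks pass, confirming that A = {0, 3, 13, 15} satisfies the doubly-orthogonal conditions up to the unavoidable repetitions excluded from the table.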

  16. Spans of Some Best Known CSO2C Encoders • [Table: best known encoder spans m versus the number of connections J] • Issue: minimization of the memory length (span) m of the encoders • A lower bound on the span

  17. Non-Iterative Threshold Decoding of CSOCs • Approximate MAP value λi: λi = (received information symbol) + (extrinsic information), where the extrinsic term is formed with the add-min operator over the J orthogonal check equations • Each check equation is an equation of independent variables • Decision rule: ûi is set to 1 or 0 by comparing λi with the threshold 0

  18. Iterative Threshold Decoding of CSO2Cs • General expression: the estimate of each symbol at iteration λ combines feedforward terms for future symbols and feedback terms for past symbols • Iterative expressions: iteration 1 depends on the simple differences being distinct; iteration 2 depends on the differences of differences being distinct, and on the differences of differences being distinct from the simple differences

  19. Iterative Threshold Decoder Structure for CSO2Cs • [Figure: forward-only pipeline of one-step threshold decoders, iterations λ = 1, 2, …, M; the information and parity-check symbols from the channel are delayed by m bits at each stage; soft outputs pass from stage to stage, and a hard decision after the last iteration produces the decoded information symbols] • Features: no interleaver; one (identical) decoder per iteration; forward-only operation

  20. Block Diagram of the Iterative Threshold Decoder (CSO2Cs) • One-step TH decoding per iteration • Iterative TH decoder: M iterations → M one-step decoders • Each one-step decoder decodes a distinct bit • Latency of m bits per stage → total latency M·m bits

  21. Iterative Belief Propagation (BP) Decoder for CSO2Cs • [Figure: threshold (TH) decoder and BP decoder, each a chain of M stages DEC 1, DEC 2, …, DEC M with a latency of m bits per stage, producing soft outputs λ(1), λ(2), …, λ(M) and the decision û at delay t - Mm] • M(BP) ≈ ½ M(TH) → BP latency ≈ ½ TH latency • One-step BP complexity ≈ J × one-step TH complexity

  22. Error Performance Behavior of CSO2Cs • Code: J = 9, A = {0, 9, 21, 395, 584, 767, 871, 899, 912} • [Figure: BER curves for TH with 8 iterations and BP with 4 and 8 iterations, each showing a waterfall region followed by an error-floor region] • Both BP and TH decoding approach the asymptotic error performance in the error-floor region

  23. Analysis Results for CSO2Cs • Effect of the code structure on error performance: with iterative decoding, error performance depends essentially on the number of connections rather than on the memory length (span) • Shortcomings of CSO2Cs: for the best known codes, the encoding span increases rapidly with J; optimal (minimum-span m) codes are unknown • Improvement, span reduction: relaxing the conditions on the double orthogonality reduces the span at a small degradation of the error performance → simplified S-CSO2C • Search and determination of new S-CSO2Cs with minimal spans

  24. Definition of S-CSO2Cs • The set of connection positions A satisfies: all the differences (αj - αk) are distinct; the differences of differences (αj - αk) - (αl - αn), j ≠ k, k ≠ n, n ≠ l, l ≠ j, are distinct from all the differences (αr - αs), r ≠ s; the differences of differences are distinct except for the unavoidable repetitions and a number of avoidable repetitions • Normalized simplification factor: the number of repeated differences of differences (excluding the unavoidable repetitions) relative to the maximal number of distinct differences of differences (excluding the unavoidable repetitions) • Search and determination of new short-span S-CSO2Cs yielding small values of this factor
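The normalized simplification factor described above can be sketched as a simple ratio. The exact normalization on the slide was lost in transcription, so the helper below is an assumption: it counts the avoidable repetitions in a multiset of representative differences-of-differences values and divides by the multiset size; the input values are illustrative, not an actual code.

```python
# Hypothetical sketch of the normalized simplification factor: the count of
# repeated differences-of-differences values (avoidable repetitions) over
# the maximal number of such values. Normalization is an assumption.

from collections import Counter

def simplification_factor(dod_values):
    counts = Counter(dod_values)
    n_max = len(dod_values)                      # all distinct would give 0
    n_rep = sum(c - 1 for c in counts.values())  # avoidable repetitions
    return n_rep / n_max

print(simplification_factor([5, -5, 7, 7, 9]))  # one repeated value -> 0.2
print(simplification_factor([5, -5, 7, 9]))     # no repetition -> 0.0
```

A factor of 0 corresponds to a full CSO2C; allowing a small positive factor is what buys the substantial span reduction of the S-CSO2C.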

  25. Comparison of the Spans of CSO2Cs and S-CSO2Cs • [Table: encoder spans m of CSO2Cs versus S-CSO2Cs for increasing J]

  26. Performance Comparison for J = 10 • [Figure: BER versus Eb/N0 for the S-CSO2C, with the uncoded BPSK curve, the coding gain, and the asymptotic coding gain indicated]

  27. Performance Comparison for J = 8 Codes (BP Decoding) • CSO2C: A = {0, 43, 139, 322, 422, 430, 441, 459} • S-CSO2C: A = {0, 9, 22, 55, 95, 124, 127, 129}

  28. Performance Comparison of CSO2Cs and S-CSO2Cs (TH Decoding) • [Figure: BER at the 8th iteration versus latency (×10⁴ bits) at Eb/No = 3.5 dB for the CSO2C and the S-CSO2C; latency markers at 3,000 and 14,000]

  29. Analysis of Orthogonality Properties (span) • Orthogonal properties of the set A • Convolutional Self-Orthogonal Codes (CSOC): simple orthogonality, small span • Extension → Convolutional Self-Doubly-Orthogonal Codes (CSO2C): double orthogonality, large span • Relaxed conditions → Simplified CSO2C (S-CSO2C): relaxed double orthogonality, substantial span reduction

  30. Analysis of Orthogonality Properties (computational tree) • The computational tree represents the symbols used by the decoder, over iterations λ - 1, λ - 2, …, to estimate each information symbol in the iterative decoding process; its root carries the LLR for the final hard decision on the decoded symbol • Error performance is a function of the trade-off between independence of the inputs and short cycles • Simple orthogonality → independence of the inputs over ONE iteration • Double orthogonality → independence of the inputs over TWO iterations • The analysis shows that the parity symbols limit the decoding performance of the iterative decoder because of their degree 1 in the computational tree (no descendant nodes) • Impact: the decoder does not update these values over the iterative decoding process, limiting the error performance

  31. Analysis of Orthogonality Properties (cycles)
  Cycles on graphs | Conditions on the associated sets | Codes
  No 4-cycles | Distinct differences | CSOC
  Minimized number of 6-cycles, uniformly distributed | Differences distinct from differences of differences | CSO2C, S-CSO2C
  Minimized number of 8-cycles, uniformly distributed | Distinct differences of differences | CSO2C
  A number of additional 8-cycles, approximately uniformly distributed | A number of repetitions of differences of differences | S-CSO2C

  32. Summary of Single-Register CSO2Cs • Structure of the Tanner graphs for iterative decoding: no 4-cycles; a minimal number of 6-cycles, which are due to the unavoidable repetitions; a minimal number of 8-cycles; uniform distribution of the 6- and 8-cycles • Relaxing the doubly-orthogonal conditions of the CSO2C adds some 8-cycles, leading to codes with substantially reduced coding spans → S-CSO2C • Error performance: the asymptotic coding gain corresponds to the minimum Hamming distance at moderate Eb/N0 values

  33. Extension: Recursive Convolutional Doubly-Orthogonal Codes (RCDO) • In order to improve the error performance of the iterative decoding algorithm, the degree of the parity symbols must be increased • Solution: use recursive convolutional encoders (RCDO)

  34. RCDO Codes • RCDOs are systematic recursive convolutional encoders • RCDOs can be represented by their sparse parity-check matrix HT(D), with forward and feedback connections • [Figure: example RCDO encoder with three shift registers, R = 3/6, 3 inputs and 6 outputs]

  35. RCDO Protograph Structure • The parity-check matrix HT(D) completely defines an RCDO code • The memory m of the RCDO encoder is defined by the largest shift register of the encoder • Each row of HT(D) represents one output symbol of the encoder; each column represents one constraint equation • The protograph representation of an RCDO code is defined by HT(D) • The degree distributions of the nodes in the protograph become important for the convergence behavior of the decoding algorithm • Regular RCDO (dv, dc): dv = degree of the variable nodes (rows), dc = degree of the constraint nodes (columns), i.e., the same numbers of nonzero elements of HT(D) in every row and in every column • Irregular RCDO protographs are also possible
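The regularity check described above can be sketched with a sparse representation of HT(D): each entry stores the set of delay exponents present in that polynomial, and a regular (dv, dc) code has dv nonzero terms in every row (output symbol / variable node) and dc in every column (constraint). The 2×2 matrix below is a toy illustration, not an actual RCDO code (the slide's example is a 6×3 matrix for R = 3/6).

```python
# Sketch of the (dv, dc) regularity check on a sparse HT(D):
# entry (i, j) is the tuple of powers of D present in that polynomial.
# Toy matrix for illustration only.

def is_regular(HT, dv, dc):
    rows = [sum(len(poly) for poly in row) for row in HT]
    cols = [sum(len(HT[i][j]) for i in range(len(HT)))
            for j in range(len(HT[0]))]
    return all(r == dv for r in rows) and all(c == dc for c in cols)

HT = [
    [(0, 2), (5,)],   # row (output symbol) of degree 3
    [(1,),   (0, 4)], # row of degree 3
]
print(is_regular(HT, dv=3, dc=3))  # True: every row and column has 3 terms

# The encoder memory m is the largest delay exponent appearing in HT(D):
m = max(e for row in HT for poly in row for e in poly)
print(m)  # -> 5
```

The same representation makes it easy to enumerate the connection-position differences needed for the doubly-orthogonal conditions of the next slide.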

  36. RCDO Doubly-Orthogonal Conditions • The analysis of the computational tree of RCDO codes shows that, as for the CSO2C, three conditions based on the differences must be respected by the connection positions of the encoder • For RCDO codes the decoding equations are completely independent over 2 decoding iterations • The estimates of the parity symbols are now improved from iteration to iteration, improving the error performance

  37. RCDO Codes: Error Performance • Error performance of RCDO (3,6) codes, R = 1/2, 25th iteration • [Figure: BER curves; annotations: LDPC n = 1008 at its 50th iteration, decoder limit, RCDO (3,6), 1.10 dB, increasing number of shift registers] • Characteristics: small shift registers; error performance versus the number of shift registers; low number of iterations compared to LDPC • The complexity per decoded symbol of all the RCDO decoders in this figure is smaller than that of the LDPC decoder of block length 1008 → attractive for VLSI implementation

  38. RCDO Codes: Error Performance • Asymptotic error performance of RCDO close to the BP decoder limit • Characteristics: coding rate 15/30; 15 registers; m = 149; regular (3,6) HT(D); 40th iteration • Close-to-optimal convergence behavior of the iterative decoder: after 40 iterations, within about 0.4 dB • Low error floor

  39. Comparisons • Error performance comparison with other existing techniques at Pb = 10⁻⁵ • CSO2C: good error performance at moderate SNR • RCDO: good error performance at low SNR • Figure from: C. Schlegel and L. Perez, Trellis and Turbo Coding, Wiley, 2004

  40. Comparison of the Techniques • [Table: block length N and number of iterations M for each technique]

  41. Conclusion • New iterative decoding techniques based on systematic doubly-orthogonal convolutional codes: CSO2C, RCDO • CSO2C: good error performance at moderate Eb/No; single-shift-register encoder; J dominant • Recursive doubly-orthogonal convolutional codes (RCDO): error performance improvement at low Eb/No; multiple-shift-register encoder; m dominant; error performance comparable to that of LDPC block codes • Simpler encoding and decoding processes • Attractive for high-speed VLSI implementations • Searching for optimal CSO2C and RCDO codes: an open problem

  42. Merci / Thank you
