
Code design: Computer search



  1. Code design: Computer search • Low rate: • Represent code by its generator matrix • Find one representative for each equivalence class of codes • Permutation equivalences? • Do not try several generator matrices for the same code? • Avoid nonminimal matrices (and of course catastrophic ones)
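A search like this needs, as its inner loop, a routine that scores each candidate generator matrix, typically by its free distance. Below is a minimal sketch for rate-1/2 polynomial generators given as bit masks; the routine and its names are my own illustration, and the (7,5)_8 code used as the example is a standard textbook code, not one from the slides.

```python
import heapq

def free_distance(g1, g2, m):
    """Free distance of the rate-1/2 convolutional code with generator
    polynomials g1, g2 (bit masks with m+1 taps) and encoder memory m:
    the minimum Hamming weight of a path that leaves the all-zero
    state and later remerges with it."""
    def step(state, u):
        x = (u << m) | state                  # taps see input bit + register
        w = bin(g1 & x).count("1") % 2 + bin(g2 & x).count("1") % 2
        return x >> 1, w                      # next state, branch weight

    # Dijkstra over the state graph, starting from the state reached
    # by the first nonzero input bit (the path must leave state zero)
    start, w0 = step(0, 1)
    dist = {start: w0}
    heap = [(w0, start)]
    while heap:
        d, s = heapq.heappop(heap)
        if d > dist.get(s, float("inf")):
            continue
        if s == 0:                            # remerged: first pop is minimal
            return d
        for u in (0, 1):
            ns, w = step(s, u)
            if d + w < dist.get(ns, float("inf")):
                dist[ns] = d + w
                heapq.heappush(heap, (d + w, ns))
    return float("inf")
```

For instance, `free_distance(0o7, 0o5, 2)` evaluates the classic (7,5)_8 memory-2 code and returns its free distance 5.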

  2. Code design: Computer search (2)* • High rate codes • Represent code by its parity check matrix • Find one representative for each equivalence class of codes • Avoid nonminimal matrices • Problem: Branch complexity of rate k/n codes • One solution: • Use a bit-oriented trellis • Can limit complexity of bit-level trellis to ν + σ, 0 ≤ σ ≤ min(n-k, k) • Reference: E. Rosnes and Ø. Ytrehus, Maximum length convolutional codes under a trellis complexity constraint, J. of Complexity 20 (2004) 372-403.

  3. Code design / Bounds on codes* • Heller bound: dfree ≤ best minimum distance in any block code of the same parameters as any truncated/terminated code • McEliece: For rate (n-1)/n codes of dfree = 5, n ≤ 2^((ν+1)/2) • Sphere packing bounds • Applies to codes of odd dfree • Similar to the Hamming/sphere packing bound for block codes • Corollary: For rate (n-1)/n codes of dfree = 5, n ≤ 2^(ν/2) • Reference: E. Rosnes and Ø. Ytrehus, Sphere packing bounds for convolutional codes, IEEE Trans. on Information Theory, to appear (2004)
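The two dfree = 5 bounds above are easy to tabulate. A small sketch, assuming the bounds read n ≤ 2^((ν+1)/2) (McEliece) and n ≤ 2^(ν/2) (sphere packing corollary), with ν the encoder's total memory; the function names are mine:

```python
import math

def mceliece_bound(nu):
    """McEliece: a rate (n-1)/n code with dfree = 5 needs n <= 2^((nu+1)/2)."""
    return math.floor(2 ** ((nu + 1) / 2))

def sphere_packing_bound(nu):
    """Rosnes-Ytrehus corollary: n <= 2^(nu/2), never weaker than McEliece's."""
    return math.floor(2 ** (nu / 2))
```

For example, with ν = 4 the McEliece bound allows n ≤ 5 while the sphere packing corollary tightens this to n ≤ 4.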

  4. QLI codes Skip this.

  5. Code design: Construction from block codes • Justesen: • Constr 12.1 …skip • Constr 12.2 …skip • Other attempts • Wyner code d=3 • Generalizations d=3 and 4. Can be shown to be optimum • Reference: E. Rosnes and Ø. Ytrehus, Maximum length convolutional codes under a trellis complexity constraint, J. of Complexity 20 (2004) 372-403. • Other algebraic ("BCH-like") constructions (not in book) • Warning: Messy

  6. Implementation issues • Decoder memory • Metrics dynamic range (normalization)* • Path memory • Decoder synchronization • Receiver quantization

  7. Decoder memory • Limit to how large ν can be • In practice ν is seldom more than 4 (16 states) • There exists an implementation with ν = 14 (BVD, the "Big Viterbi Decoder")

  8. Path memory • For large information length: • Impractical to store complete path before backtracing • Requires a large amount of memory • Imposes delay • Practical solution: • Select an integer τ • After τ blocks: Start backtracing by either • starting from the survivor in an arbitrary state, or from the survivor in the state with the best metric, or • trying backtracing from all states, and selecting the k-bit block that occurs most frequently in these backtracings • This determines the first k-bit information block. Subsequent information blocks are decoded successively in the same way
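A minimal sketch of the first variant above (backtrace from the best-metric state once τ branches are stored), for a hard-decision rate-1/2 decoder. The (7,5)_8 code and all function names are my choices, not the slides':

```python
def encode(bits, g1=0o7, g2=0o5, m=2):
    """Rate-1/2 feedforward encoder, used here only to produce test input."""
    state, out = 0, []
    for u in bits:
        x = (u << m) | state
        out.append((bin(g1 & x).count("1") % 2, bin(g2 & x).count("1") % 2))
        state = x >> 1
    return out

def viterbi_truncated(received, tau, g1=0o7, g2=0o5, m=2):
    """Viterbi decoding with a path memory of tau branches: after tau
    branches, backtrace from the best-metric state and release the
    oldest information bit; repeat for each new branch."""
    INF = float("inf")
    n_states = 1 << m

    def branch(state, u):
        x = (u << m) | state
        return x >> 1, (bin(g1 & x).count("1") % 2, bin(g2 & x).count("1") % 2)

    metrics = [0] + [INF] * (n_states - 1)    # encoder starts in state zero
    history, decoded = [], []                 # history[t][s] = (prev, bit)

    def release():                            # backtrace to the oldest bit
        s = min(range(n_states), key=metrics.__getitem__)
        for back in reversed(history):
            s, u = back[s]
        return u

    for r0, r1 in received:
        new, back = [INF] * n_states, [None] * n_states
        for s in range(n_states):
            if metrics[s] == INF:
                continue
            for u in (0, 1):
                ns, (c0, c1) = branch(s, u)
                d = metrics[s] + (c0 != r0) + (c1 != r1)   # Hamming metric
                if d < new[ns]:
                    new[ns], back[ns] = d, (s, u)
        metrics = new
        history.append(back)
        if len(history) >= tau:               # path memory full: emit one bit
            decoded.append(release())
            history.pop(0)
    while history:                            # flush the remaining bits
        decoded.append(release())
        history.pop(0)
    return decoded
```

On a noiseless channel the decoder recovers the input exactly, e.g. `viterbi_truncated(encode(bits), tau=6)` returns `bits`.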

  9. Truncation of the decoder • Not maximum likelihood • The truncated decoder can make two kinds of errors • EML = the kind of error that an ML decoder would also make • Associated with low-weight paths (paths of weight dfree, dfree+1, …) • P(EML) ≈ Adfree · Pdfree, where Pd depends on the channel and decreases rapidly as the argument d increases • ET = the kind of error that is due to truncation, and which an ML decoder would not make • Why would an ML decoder not make those errors? • Because if the decoding decision is allowed to "mature", the erroneous codewords will not be preferred over the correct one. The truncated decoder, however, enforces an early decision and thus makes mistakes

  10. Truncation length • Assume the all-zero word is the correct word • P(ET) ≈ Ad(τ) · Pd(τ), where d(τ) is the minimum weight of a path that leaves state zero at time 1 and that is still unmerged at time τ • Want to select τ such that P(EML) >> P(ET) • The minimum length τ such that d(τ) > dfree is called the truncation length. Often 4-5 times the memory m. Determined e.g. by a modified Viterbi algorithm
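For small codes, d(τ) can be computed directly by extending all unmerged paths one branch at a time, which yields the truncation length without a full modified Viterbi algorithm. A sketch under the slide's definition; the (7,5)_8 example in the usage note is mine:

```python
def truncation_length(g1, g2, m, dfree):
    """Smallest tau with d(tau) > dfree, where d(tau) is the least weight
    of a path that leaves state zero at time 1 and has not returned to
    state zero (is still unmerged) after tau branches."""
    def step(state, u):
        x = (u << m) | state
        return x >> 1, bin(g1 & x).count("1") % 2 + bin(g2 & x).count("1") % 2

    start, w = step(0, 1)          # leave the zero state with input 1
    d = {start: w}                 # d[s] = min weight of unmerged path in s
    tau = 1
    while min(d.values()) <= dfree:
        nd = {}
        for s, w in d.items():
            for u in (0, 1):
                ns, bw = step(s, u)
                if ns == 0:        # this extension remerges: not counted
                    continue
                nd[ns] = min(nd.get(ns, float("inf")), w + bw)
        d, tau = nd, tau + 1
    return tau
```

For the (7,5)_8 code (dfree = 5, m = 2) this gives 8, i.e. four times the memory, matching the 4-5 times rule of thumb above.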

  11. Example

  12. R=1/2, ν=4 code, varying path memory

  13. Synchronization • Symbol synchronization: Which n bits belong together as a branch label? • Metrics evaluation: If metrics are consistently bad, this indicates loss of synchronization • May be enhanced by code construction* • May embed special synchronization patterns in the transmitted sequence • What if the noise is bad? • Decoder: Do we know that the encoder starts in the zero state? • Not always • Solution: Discard the first (5m) branches

  14. Quantization Skip this

  15. Other complexity issues • Branch complexity (of high rate encoders) • Punctured codes • Bit level encoders (equivalence to punctured codes)* • Syndrome trellis decoding (S. Riedel)* • Speedup: • Parallel processing • One processor per node • Several time instants at a time / pipelining (Fettweis)* • Differential Viterbi decoding

  16. SISO decoding algorithms • Two new elements • Soft-in Soft-out (SISO) • Allows non-uniform input probability • The a priori information bit probabilities P(ul) may be different • Applications: Turbo decoding • Two most used algorithms: • The Soft Output Viterbi Algorithm • The BCJR algorithm (APP, MAP)

  17. SOVA

  18. SOVA (continued)

  19. SOVA (continued)

  20. SOVA (continued)

  21. SOVA Update

  22. SOVA • Finite path memory: Straightforward • Complexity vs. the Viterbi algorithm: modest increase • Storage complexity: Need to store reliability vectors in addition to metrics and survivor pointers • Need to update the reliability vector • Provides ML decoding (minimizes WER) and a reliability measure • Not MAP: Does not minimize BER
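The reliability update that the storage bullet refers to boils down to one rule: when the survivor into a state beats a competitor path by metric difference Δ, every past bit decision on which the two paths disagree can be trusted by at most Δ. A sketch of that single merge step, with names of my own choosing:

```python
import math

def sova_update(reliability, survivor_bits, competitor_bits, delta):
    """One SOVA merge step: cap the reliability of every decision on
    which the survivor and the discarded competitor disagree at delta,
    the metric difference between the two merging paths."""
    for i, (a, b) in enumerate(zip(survivor_bits, competitor_bits)):
        if a != b:
            reliability[i] = min(reliability[i], delta)
    return reliability

# A fresh survivor starts fully trusted (infinite reliability) and is
# weakened only where a close competitor disagrees with it.
r = sova_update([math.inf, math.inf, math.inf], [1, 0, 1], [1, 1, 0], 2.5)
```

Positions where the paths agree keep their old reliability, which is why the stored vectors only ever shrink as decoding proceeds.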

  23. Suggested exercises • 12.6-12.26
