
Local optimality in Tanner codes



Presentation Transcript


  1. Local optimality in Tanner codes. Guy Even, Nissim Halabi. June 2011.

  2. Error Correcting Codes for Memoryless Binary-Input Output-Symmetric Channels (1)
  • Memoryless Binary-Input Output-Symmetric (MBIOS) channel:
  • Binary input, but the output may be real-valued
  • Stochastic: characterized by the conditional probability function P(y | c)
  • Memoryless: errors are independent from bit to bit
  • Output-symmetric: errors affect '0's and '1's symmetrically
  • Example: the Binary Symmetric Channel (BSC), which flips each bit independently with crossover probability p
  [Figure: block diagram "Channel Encoding → Noisy Channel → Channel Decoding" (codeword c, noisy codeword y), and the BSC transition diagram: 0→0 and 1→1 with probability 1-p, 0→1 and 1→0 with probability p.]

  3. Error Correcting Codes for Memoryless Binary-Input Output-Symmetric Channels (2)
  • Log-Likelihood Ratio (LLR) $\lambda_i$ for a received observation $y_i$: $\lambda_i = \ln\frac{P(y_i \mid c_i = 0)}{P(y_i \mid c_i = 1)}$
  • $\lambda_i > 0$ ⇒ $y_i$ is more likely to be '0'
  • $\lambda_i < 0$ ⇒ $y_i$ is more likely to be '1'
  • $\lambda : y \mapsto \lambda(y)$ is an injective mapping ⇒ replace the received word $y$ by the LLR vector $\lambda(y)$
  [Figure: the same channel block diagram as on slide 2.]
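To make slides 2-3 concrete, here is a minimal Python sketch that simulates a BSC and maps the received word to its LLR vector; the function names are illustrative, not from the talk.

```python
# Minimal sketch: BSC simulation and per-bit LLRs (slides 2-3).
# For the BSC, lambda_i = ln((1-p)/p) if y_i = 0 and ln(p/(1-p)) if y_i = 1.
import numpy as np

def bsc(codeword, p, rng):
    """Flip each bit independently with crossover probability p."""
    return codeword ^ (rng.random(codeword.shape) < p).astype(int)

def bsc_llr(y, p):
    """LLR vector: positive entries favor '0', negative entries favor '1'."""
    return np.where(y == 0, np.log((1 - p) / p), np.log(p / (1 - p)))

rng = np.random.default_rng(0)
c = np.zeros(7, dtype=int)       # transmitted codeword
y = bsc(c, p=0.1, rng=rng)       # noisy codeword
print(y, bsc_llr(y, p=0.1))
```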

  4. Maximum-Likelihood (ML) Decoding
  • For any binary-input memoryless channel, maximum-likelihood (ML) decoding can be formulated as: $\hat{x}^{ML}(\lambda) = \arg\min_{x \in \mathcal{C}} \langle \lambda, x \rangle$
  • Equivalently, as a linear program: minimize $\langle \lambda, x \rangle$ subject to $x \in conv(\mathcal{C}) \subseteq [0,1]^N$
  • However, $conv(\mathcal{C})$ of a code $\mathcal{C} \subseteq \{0,1\}^N$ has no efficient representation in general.
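Since slide 4 casts ML decoding as cost minimization over the codebook, a brute-force sketch makes the formulation concrete (enumerating the codebook is exponential in general, so this is for illustration only; the explicit codebook argument is an assumption):

```python
# Brute-force ML decoding as argmin over the codebook of <llr, x> (slide 4).
import numpy as np

def ml_decode(codewords, llr):
    return min(codewords, key=lambda x: float(np.dot(llr, x)))

# Example: length-3 repetition code; one flipped bit is corrected.
codebook = [np.array([0, 0, 0]), np.array([1, 1, 1])]
llr = np.array([2.2, 2.2, -2.2])   # third bit received as '1'
print(ml_decode(codebook, llr))    # -> [0 0 0]
```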

  5. Linear Programming (LP) Decoding
  • Linear Programming (LP) decoding [Fel03, FWK05] – relax $conv(\mathcal{C})$ to a polytope $\mathcal{P}$ such that:
  (1) All codewords $x \in \mathcal{C}$ are vertices of $\mathcal{P}$
  (2) All new vertices are fractional (therefore, the new vertices are not in $\mathcal{C}$)
  (3) $\mathcal{P}$ has an efficient representation
  • LP decoding: solve the LP $\min \langle \lambda, x \rangle$ subject to $x \in \mathcal{P}$
  [Figure: $\mathcal{C} \subseteq \{0,1\}^N$ and $conv(\mathcal{C}) \subseteq \mathcal{P} \subseteq [0,1]^N$; the fractional vertices of $\mathcal{P}$ lie outside $conv(\mathcal{C})$.]

  6. Linear Programming (LP) Decoding (cont.)
  • Same relaxation $\mathcal{P}$ as on slide 5. If the LP optimum is attained at an integral vertex, the LP decoder finds the ML codeword.
  [Figure: same polytope picture; the objective is minimized at a vertex of $conv(\mathcal{C})$.]

  7. Linear Programming (LP) Decoding (cont.)
  • If the LP optimum is attained at a fractional vertex of $\mathcal{P}$, the LP decoder fails (it outputs a non-codeword).
  [Figure: same polytope picture; the objective is minimized at a fractional vertex.]
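As a concrete instance of slides 5-7, here is a sketch of Feldman's relaxation for codes with single parity checks, solved with SciPy. The forbidden-set inequalities and the Hamming-code example are standard, but the function names are illustrative, and this is not the talk's formulation (the talk treats general local codes; see slide 9).

```python
# Sketch of Feldman's LP decoder for single parity checks (slides 5-7).
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, llr):
    """Minimize <llr, x> over the forbidden-set relaxation of the checks in H."""
    n = H.shape[1]
    A_ub, b_ub = [], []
    for row in H:
        nbrs = list(np.flatnonzero(row))         # variables in this check
        for k in range(1, len(nbrs) + 1, 2):     # all odd-sized subsets S
            for S in combinations(nbrs, k):
                a = np.zeros(n)
                a[nbrs] = -1.0                   # -x_i for i in N(j) \ S
                a[list(S)] = 1.0                 # +x_i for i in S
                A_ub.append(a)                   # sum_S x - sum_rest x <= |S| - 1
                b_ub.append(len(S) - 1)
    res = linprog(llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x  # integral vertex => ML codeword; fractional => LP decoder fails

# Example: (7,4) Hamming code, all-zeros transmitted over a BSC, one bit flipped.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
p = 0.1
y = np.zeros(7, dtype=int); y[2] = 1
llr = np.where(y == 0, np.log((1 - p) / p), np.log(p / (1 - p)))
print(np.round(lp_decode(H, llr), 3))            # expect the all-zeros codeword
```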

  8. Tanner Codes [Tan81]
  • Factor graph representation of Tanner codes: a bipartite graph $G = (I \cup J, E)$ with variable nodes $I$ and local-code nodes $J$.
  • Every local-code node $j$ is associated with a linear code of length $deg_G(j)$.
  • The Tanner code $\mathcal{C}(G)$ consists of the assignments $x$ to the variable nodes whose projection onto the neighborhood of every local-code node $j$ is a codeword of the local code of $j$.
  • Extended local-code $j \subseteq \{0,1\}^N$: extend the local codewords to the bits outside the local code by allowing arbitrary values there.
  • Example: Expander codes [SS'96] – the Tanner graph is an expander; decodable by a simple bit-flipping algorithm.
  [Figure: bipartite Tanner graph with variable nodes on one side and local-code nodes on the other.]
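A minimal Python sketch of the Tanner-code membership condition on slide 8; the class and attribute names (`TannerCode`, `neighborhoods`, `local_codes`) are illustrative assumptions, and each local code is given explicitly as a set of local codewords.

```python
# Sketch: x is in C(G) iff its projection onto every local-code
# neighborhood is a codeword of that node's local code (slide 8).
from itertools import product

class TannerCode:
    def __init__(self, neighborhoods, local_codes):
        self.neighborhoods = neighborhoods  # ordered variable nbrs of each local-code node
        self.local_codes = local_codes      # admissible local codewords (sets of tuples)

    def contains(self, x):
        return all(tuple(x[i] for i in nbrs) in self.local_codes[j]
                   for j, nbrs in enumerate(self.neighborhoods))

# Example: 5 variables, two length-3 parity local codes sharing variable 2.
parity3 = {c for c in product((0, 1), repeat=3) if sum(c) % 2 == 0}
code = TannerCode([(0, 1, 2), (2, 3, 4)], [parity3, parity3])
print(code.contains([1, 1, 0, 1, 1]))   # True: both local checks are satisfied
print(code.contains([1, 0, 0, 1, 1]))   # False: the first local check fails
```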

  9. LP Decoding of Tanner Codes
  • Maximum-likelihood (ML) decoding: $\hat{x}^{ML}(\lambda) = \arg\min_{x \in \mathcal{C}(G)} \langle \lambda, x \rangle$
  • Linear Programming (LP) decoding [following Fel03, FWK05]: $\hat{x}^{LP}(\lambda) = \arg\min_{x \in \mathcal{P}} \langle \lambda, x \rangle$, where $\mathcal{P} = \bigcap_{j \in J} conv(\text{extended local-code } j)$
  • $\mathcal{P}$ satisfies the requirements (1)-(3) of slide 5.
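A hedged sketch of the Tanner-code relaxation on slide 9: each $conv(\text{extended local-code } j)$ is encoded with auxiliary convex-combination variables over the local codewords. This explicit (and not especially compact) formulation is an illustrative assumption, not the paper's representation.

```python
# Sketch: LP decoding over the intersection of local-code hulls (slide 9).
from itertools import product
import numpy as np
from scipy.optimize import linprog

def lp_decode_tanner(neighborhoods, local_codes, llr):
    n = len(llr)
    cols = n + sum(len(W) for W in local_codes)   # x plus all u_{j,w} variables
    c = np.concatenate([llr, np.zeros(cols - n)])
    A_eq, b_eq = [], []
    off = n
    for nbrs, W in zip(neighborhoods, local_codes):
        W = list(W)
        row = np.zeros(cols); row[off:off + len(W)] = 1.0
        A_eq.append(row); b_eq.append(1.0)        # sum_w u_{j,w} = 1
        for t, i in enumerate(nbrs):              # x_i = sum_w u_{j,w} * w[t]
            row = np.zeros(cols); row[i] = 1.0
            for k, w in enumerate(W):
                row[off + k] = -w[t]
            A_eq.append(row); b_eq.append(0.0)
        off += len(W)
    res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * cols, method="highs")
    return res.x[:n]

# Example: the two-local-parity-code toy Tanner code from the previous sketch.
parity3 = [c for c in product((0, 1), repeat=3) if sum(c) % 2 == 0]
llr = np.array([2.2, 2.2, -2.2, 2.2, 2.2])        # bit 2 received as '1'
print(np.round(lp_decode_tanner([(0, 1, 2), (2, 3, 4)],
                                [parity3, parity3], llr), 3))  # all-zeros
```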

  10. Why study (irregular) Tanner Codes?
  • Tighter LP relaxation, because the local codes are not single parity checks.
  • Irregularity of the Tanner graph ⇒ codes with better performance, in the case of LDPC codes [Luby et al., 98].
  • Regular low-density Tanner code with non-trivial local codes ⇒ irregular LDPC code.
  • Irregular Tanner code with non-trivial local codes ⇒ might yield improved designs.

  11. Criteria of Interest
  • Let $\lambda \in \mathbb{R}^N$ denote an LLR vector received from the channel, and let $x \in \mathcal{C}(G)$ denote a codeword.
  • Consider the question: is $x$ the ML codeword for $\lambda$? Deciding this exactly is NP-hard in general.
  • We therefore seek an efficient test with one-sided error ("definitely yes" / "maybe no"), e.g., an efficient test via local computations ⇒ the "Local Optimality" criterion.

  12. Combinatorial Characterization of Local Optimality (1)
  • Let $x \in \mathcal{C}(G) \subseteq \{0,1\}^N$ and $f \in [0,1]^N$. Following [Fel03], define the relative point $x \oplus f$ by $(x \oplus f)_i = |x_i - f_i|$.
  • Consider a finite set $\mathcal{B} \subseteq [0,1]^N$ of deviations.
  • Definition: A codeword $x$ is locally optimal for $\lambda \in \mathbb{R}^N$ if for all vectors $\beta \in \mathcal{B}$: $\langle \lambda, x \oplus \beta \rangle > \langle \lambda, x \rangle$.
  • Goal: find a set $\mathcal{B}$ of deviations such that:
  (1) $x \in LO(\lambda)$ ⇒ $x = ML(\lambda)$ and $ML(\lambda)$ unique
  (2) $x \in LO(\lambda)$ ⇒ $x = LP(\lambda)$ and $LP(\lambda)$ unique
  (3) $\Pr\{x \in LO(\lambda)\} = 1 - o(1)$ (under the all-zeros assumption)
  [Figure: the event $LO(\lambda)$ is contained in the event that $ML(\lambda) = LP(\lambda)$ and the LP solution is integral.]
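The relative point and the local-optimality test of slide 12 translate directly into code; a minimal sketch, assuming the deviation set is given explicitly as vectors in $[0,1]^N$ (function names are illustrative):

```python
# Sketch: Feldman's relative point and the local-optimality test (slide 12).
import numpy as np

def relative_point(x, f):
    """(x ⊕ f)_i = |x_i - f_i|."""
    return np.abs(np.asarray(x) - np.asarray(f))

def locally_optimal(x, llr, deviations):
    """True iff every deviation strictly increases the cost <llr, .> from x."""
    base = np.dot(llr, x)
    return all(np.dot(llr, relative_point(x, b)) > base for b in deviations)

# Toy demo with the all-zeros codeword and a single fractional deviation.
x = np.zeros(3)
llr = np.array([1.0, 1.0, -0.5])
print(locally_optimal(x, llr, [np.array([0.5, 0.5, 0.5])]))  # True: 0.75 > 0
```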

  13. Combinatorial Characterization of Local Optimality (2)
  • Goal: find a set $\mathcal{B}$ satisfying properties (1)-(3) above.
  • Suppose we have properties (1), (2). Deviations $\beta$ with large $support(\beta)$ help establish property (3) (e.g., via Chernoff-like bounds).
  • If $\mathcal{B} = \mathcal{C}$, then: $x \in LO(\lambda)$ ⇒ $x = ML(\lambda)$ and $ML(\lambda)$ unique. However, how would one analyze property (3)? The deviations $\beta$ would have a GLOBAL structure.

  14. Combinatorial Characterization of Local Optimality (3)
  • Goal: find a set $\mathcal{B}$ satisfying properties (1)-(3) above.
  • For analysis purposes, consider structures with a local nature: $\mathcal{B}$ is a set of TREES [following KV'06] (height < ½ girth).
  • Strengthen the analysis by introducing layer weights [following ADS'09] ⇒ better bounds on the probability of local optimality.
  • However, height(subtrees of G) < ½ girth(G) = O(log N). ⇒ Take path-prefix trees instead – their height is not bounded by the girth!

  15. Path-Prefix Tree
  • Captures the notion of a computation tree.
  • Consider a graph G = (V, E) and a node r ∈ V:
  • The set of all backtrackless paths in G emanating from node r with length at most h.
  • The path-prefix tree of G rooted at node r with height h: its nodes are these paths, and each path is the child of its longest proper prefix.
  [Figure: a small graph G and its unrolled path-prefix tree.]
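A minimal sketch of building the path-prefix tree of slide 15: nodes are backtrackless paths starting at r of length at most h, and a path p is the parent of p extended by one vertex. The adjacency-dict graph representation is an assumption for illustration.

```python
# Sketch: path-prefix tree as a dict from each path to its child paths (slide 15).
def path_prefix_tree(adj, r, h):
    tree = {(r,): []}                    # path -> list of child paths
    frontier = [(r,)]
    for _ in range(h):
        nxt = []
        for p in frontier:
            for v in adj[p[-1]]:
                if len(p) >= 2 and v == p[-2]:
                    continue             # backtrackless: don't reverse the last edge
                child = p + (v,)
                tree[p].append(child)
                tree[child] = []
                nxt.append(child)
        frontier = nxt
    return tree

# Example: a 4-cycle; the tree unrolls the cycle without immediate backtracking.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
t = path_prefix_tree(adj, 0, 3)
print(len(t))   # 7 backtrackless paths of length <= 3 from node 0
```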

  16. d-Tree
  • Consider a path-prefix tree of a Tanner graph G = (I ∪ J, E), rooted at a variable node $v_0$.
  • A d-tree T[r, h, d] is a subtree of the path-prefix tree such that:
  • root = r
  • ∀ v ∈ T ∩ I: $deg_T(v) = deg_G(v)$ (variable nodes keep their full degree)
  • ∀ c ∈ T ∩ J: $deg_T(c) = d$ (local-code nodes keep d of their edges)
  • A d-tree is not necessarily a valid configuration!
  • Special case: a 2-tree is a skinny tree, i.e., a minimal deviation.
  [Figure: a 2-tree, a 3-tree, and a 4-tree rooted at $v_0$.]

  17. Cost of a Projected Weighted Subtree
  • Consider layer weights $w : \{1, \dots, h\} \to \mathbb{R}$ and a subtree $\mathcal{T}$ of a path-prefix tree.
  • Define a weight function for the subtree induced by $w$: each copy of a variable node in layer $t$ of $\mathcal{T}$ receives a weight determined by $w(t)$.
  • The projection $\pi \in [0,1]^N$ to the Tanner graph G: each variable node of G accumulates the weights of all of its copies in $\mathcal{T}$.
  [Figure: a weighted subtree of the path-prefix tree and its projection onto G.]
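The cost of a projected weighted subtree then takes a simple form; a minimal sketch, with the paper's exact per-copy normalization omitted (an assumption for illustration), where a subtree is given as (variable index in G, layer) pairs:

```python
# Sketch: cost <llr, pi> of a projected w-weighted subtree (slide 17).
def projected_cost(subtree_vars, llr, w):
    """Each copy of variable v in layer t contributes w[t] * llr[v]."""
    return sum(w[t] * llr[v] for v, t in subtree_vars)

# Toy demo: two copies of variable 0 (layers 1 and 2) and one copy of variable 3.
print(projected_cost([(0, 1), (0, 2), (3, 1)],
                     llr=[1.0, -2.0, 0.5, -1.0], w={1: 1.0, 2: 0.5}))  # 0.5
```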

  18. Combinatorial Characterization of Local Optimality
  • Setting:
  • $\mathcal{C}(G) \subseteq \{0,1\}^N$ – a Tanner code with minimal local distance $d^*$
  • $2 \le d \le d^*$
  • $w \in [0,1]^h \setminus \{0^h\}$ – layer weights
  • $\mathcal{B}$ – the set of all vectors corresponding to projections to G of w-weighted d-trees of height 2h rooted at variable nodes
  • Definition: A codeword $x$ is (h, w, d)-locally optimal for $\lambda \in \mathbb{R}^N$ if for all vectors $\beta \in \mathcal{B}$: $\langle \lambda, x \oplus \beta \rangle > \langle \lambda, x \rangle$.

  19. Thm: Local Opt ⇒ ML Opt / LP Opt
  • Theorem: If $x \in \mathcal{C}(G)$ is (h, w, d)-locally optimal for $\lambda$, then:
  (1) $x$ is the unique ML codeword for $\lambda$.
  (2) $x$ is the unique optimal LP solution given $\lambda$.
  • Proof sketch – two steps. STEP 1: Local Opt. ⇒ ML Opt.
  • Key structural lemma: every codeword equals a convex combination of projected w-weighted d-trees in G (up to a scaling factor $\alpha$).
  • For every codeword $x' \neq x$, the key structural lemma and the local optimality of $x$ together imply $\langle \lambda, x' \rangle > \langle \lambda, x \rangle$.
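The final implication on slide 19 can be spelled out; a LaTeX sketch of the standard convex-combination step, under the all-zeros assumption invoked on slides 12 and 21 (so $x = 0^N$ and $x \oplus \beta = \beta$), with the distribution $\rho$ and scaling factor $\alpha$ as supplied by the lemma:

```latex
% STEP 1 under the all-zeros assumption. Lemma: for every codeword x' \neq 0
% there is a distribution \rho over deviations \mathcal{B} with
% \mathbb{E}_{\beta \sim \rho}[\beta] = \alpha x', for some 0 < \alpha \le 1.
\begin{align*}
\langle \lambda, x' \rangle
  &= \tfrac{1}{\alpha}\,\big\langle \lambda,\ \mathbb{E}_{\beta \sim \rho}[\beta] \big\rangle
   = \tfrac{1}{\alpha}\,\mathbb{E}_{\beta \sim \rho}\big[\langle \lambda, \beta \rangle\big] \\
  &> 0 = \langle \lambda, x \rangle
  \qquad \text{(local optimality: } \langle \lambda, \beta \rangle > 0
  \text{ for every } \beta \in \mathcal{B}\text{)},
\end{align*}
% hence x is the unique ML codeword for \lambda.
```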

  20. Thm: Local Opt ⇒ ML Opt / LP Opt (cont.)
  • Proof sketch – STEP 2: Local Opt. ⇒ LP Opt.
  • The implication Local Opt. ⇒ ML Opt. is used as a "black box".
  • Reduction using graph covers and graph-cover decoding [VK'05].

  21. Thm: Local Opt ⇒ ML Opt / LP Opt (cont.)
  • Interesting outcomes:
  • Characterizes the event of LP decoding failure (under the all-zeros assumption).
  • Work in progress: design an iterative message-passing decoding algorithm that computes an (h, w, d)-locally-optimal codeword after h iterations.
  • If x is a locally optimal codeword for $\lambda$, then a weighted message-passing algorithm finds x after h iterations, with a guarantee that x is the ML codeword.

  22. Thm: Local Opt ⇒ ML Opt / LP Opt (cont.)
  • Goals achieved:
  (1) $x \in LO(\lambda)$ ⇒ $x = ML(\lambda)$ and $ML(\lambda)$ unique
  (2) $x \in LO(\lambda)$ ⇒ $x = LP(\lambda)$ and $LP(\lambda)$ unique
  • Left to show:
  (3) $\Pr\{x \in LO(\lambda)\} = 1 - o(1)$
  [Figure: the event $LO(\lambda)$ inside the event that $ML(\lambda) = LP(\lambda)$ is integral.]

  23. Bounds for LP Decoding Based on d-Trees (analysis for regular factor graphs)
  • $(d_L, d_R)$-regular Tanner codes with minimum local distance $d^*$.
  • Form of the finite-length bounds: there exist c > 1 and a threshold t such that for every noise level below t: Pr(LP decoder success) ≥ 1 - exp(-c^girth).
  • If girth = Θ(log n), then Pr(LP decoder success) ≥ 1 - exp(-n^γ), for some 0 < γ < 1.
  • As n → ∞: t is a lower bound on the threshold of LP decoding.
  • Results for the BSC using an analysis of uniformly weighted d-trees: [Table: noise thresholds obtained from the uniform weighted d-tree analysis.]

  24. Message Passing Decoding for Irregular LDPC codes
  • Dynamic programming on coupled computation trees of the Tanner graph.
  • Input: channel observations for the variable nodes v ∈ V.
  • Output: an assignment to the variable nodes v ∈ V, such that the assignment to each v agrees with the "tree" codeword that minimizes an objective function on the computation tree rooted at v.
  • Algorithm principle – messages are sent iteratively: variable nodes → check nodes, then check nodes → variable nodes.
  • Goal: if a locally optimal codeword x exists, then the message-passing algorithm finds x.

  25. w-Weighted Min-Sum: Message Rules
  • Variable node → check node message (iteration l): the channel LLR at the variable node plus the incoming check-to-variable messages from the other adjacent check nodes.
  • Check node → variable node message (iteration l): a minimization over the admissible local configurations of the incoming variable-to-check messages, attenuated by the layer weight. (See the sketch after slide 26 for one concrete instantiation.)

  26. w-Weighted Min-Sum: Decision Rule
  • Decision at variable node v at iteration h: decide $x_v$ from the sign of the channel LLR at v combined with the final incoming check-to-variable messages.
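A hedged sketch of attenuated ("weighted") min-sum in the spirit of slides 24-26. The talk's algorithm handles general local codes and per-layer weights; the restriction here to single parity checks and a single uniform attenuation factor w is an assumption for illustration, and all names are illustrative.

```python
# Sketch: attenuated min-sum on a parity-check matrix H (slides 24-26).
import numpy as np

def weighted_min_sum(H, llr, h, w=0.7):
    """h iterations of min-sum with uniform attenuation factor w."""
    m, n = H.shape
    checks = [np.flatnonzero(r) for r in H]                 # N(c) for each check
    var_nbrs = [np.flatnonzero(H[:, i]) for i in range(n)]  # N(v) for each var
    msg_cv = np.zeros((m, n))                               # check -> var messages
    for _ in range(h):
        # Variable -> check: channel LLR plus the other incoming check messages.
        msg_vc = np.zeros((m, n))
        for i in range(n):
            total = llr[i] + msg_cv[var_nbrs[i], i].sum()
            for c in var_nbrs[i]:
                msg_vc[c, i] = total - msg_cv[c, i]
        # Check -> variable: product of signs, attenuated minimum of magnitudes.
        for c in range(m):
            for i in checks[c]:
                others = [j for j in checks[c] if j != i]
                sgn = np.prod(np.sign(msg_vc[c, others]))
                msg_cv[c, i] = w * sgn * np.abs(msg_vc[c, others]).min()
    # Decision rule: the sign of the accumulated LLR at each variable node.
    return (llr + msg_cv.sum(axis=0) < 0).astype(int)

# Example: (7,4) Hamming code over a BSC with one flipped bit.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
p = 0.1
y = np.zeros(7, dtype=int); y[2] = 1
llr = np.where(y == 0, np.log((1 - p) / p), np.log(p / (1 - p)))
print(weighted_min_sum(H, llr, h=5))   # expect the all-zeros codeword
```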

  27. Thm: Local Opt. ⇒ w-Weighted Min-Sum Opt.
  • Theorem: If $x \in \mathcal{C}(G)$ is (h, w, 2)-locally optimal for $\lambda$, then:
  (1) the w-weighted min-sum algorithm computes x in h iterations.
  (2) x is the unique ML codeword given $\lambda$.
  • The w-weighted min-sum algorithm generalizes the attenuated max-product algorithm [Frey and Koetter '00], [Jian and Pfister '10].
  • Open question: probabilistic analysis of local optimality in the irregular case.

  28. Local Optimality Certificates for LP decoding – Previous Results
  • Koetter and Vontobel '06 – characterized LP solutions and provided a criterion for certifying the optimality of a codeword in LP decoding of regular LDPC codes. The characterization is based on the combinatorial structure of skinny trees of height h in the factor graph (h < girth(G)).
  • Arora, Daskalakis and Steurer '09 – certificate for LP decoding of regular LDPC codes over the BSC; extended to MBIOS channels in [Halabi-Even '10]. The certificate characterization is based on weighted skinny trees of height h (h < girth(G)).
  • Note: bounds on the word error probability of LP decoding are computed by analyzing the certificates. By introducing layer weights, the ADS certificate occurs with high probability at much larger noise rates.
  • Vontobel '10 – implies a certificate for LP decoding of Tanner codes based on weighted skinny subtrees of graph covers.

  29.–31. Local Optimality Certificates for LP decoding – Summary of Main Techniques
  [Figures and a table comparing the main techniques across the works above.]

  32. Conclusions
  • Combinatorial, graph-theoretic, and algorithmic methods for analyzing the decoding of (modern) error correcting codes over any memoryless channel:
  • A new local-optimality combinatorial certificate for LP decoding of irregular Tanner codes, i.e., Local Opt. ⇒ LP Opt. / ML Opt. The degree of the local-code nodes is not limited to 2.
  • The certificate is based on weighted "fat" subtrees of computation trees of height h; h is not bounded by the girth.
  • Proof techniques: combinatorial decompositions and graph-cover arguments.
  • An efficient dynamic-programming algorithm, running in O(|E| · h) time, for computing the local-optimality certificate of a codeword given an LLR vector.

  33. Future Work
  • Work in progress: design an iterative message-passing decoding algorithm for Tanner codes that computes an (h, w, d)-locally-optimal codeword after h iterations.
  • If x is a locally optimal codeword for $\lambda$, then the weighted message-passing algorithm computes x in h iterations, with a guarantee that x is the ML codeword.
  • Asymptotic analysis of LP decoding and weighted min-sum decoding for ensembles of irregular Tanner codes: apply density evolution techniques together with the combinatorial characterization of local optimality.

  34. Thank You!
