
Linear network code for erasure broadcast channel with feedback


Presentation Transcript


  1. Linear network code for erasure broadcast channel with feedback. Presented by Kenneth Shum. Joint work with Linyu Huang, Ho Yuet Kwan and Albert Sung.

  2. Erasure broadcast channel. A source node broadcasts data packets P1, P2, …, PN to users 1, 2, …, K. Each transmitted packet is erased with a certain probability. We want to send all source data packets to each user.

  3. Erasure broadcast channel with feedback. Same setting as before, but the users can send acknowledgements back to the source node.

  4. Linear Network Code • The source node broadcasts encoded packets. • A packet is considered as a vector over a finite field F. • An encoded packet is obtained by taking a linear combination of the N source packets, with coefficients drawn from F. • The vector formed by the N coefficients is called the encoding vector of the encoded packet.
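To make the encoding step concrete, here is a minimal Python sketch (not from the slides): packets are modelled as lists of symbols over a prime field GF(Q), and the function and variable names are illustrative.

```python
import random

Q = 2  # prime field size; the same mod-Q arithmetic works for any prime

def encode(source_packets, coeffs):
    """Component-wise linear combination of the N source packets over
    GF(Q).  `coeffs` is the encoding vector carried in the header."""
    out = [0] * len(source_packets[0])
    for c, pkt in zip(coeffs, source_packets):
        for j, symbol in enumerate(pkt):
            out[j] = (out[j] + c * symbol) % Q
    return out

# N = 3 source packets of 4 symbols each, random encoding vector.
packets = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
v = [random.randrange(Q) for _ in packets]
print(v, encode(packets, v))
```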

  5. Erasure broadcast channel. The source node broadcasts linear combinations of P1, P2, …, PN to users 1, 2, …, K. The packet header contains the encoding vector of the encoded packet.

  6. The received packets are cached. The source packets can be interpreted as the standard basis e1, e2, …, eN of the vector space F^N. Each user stores the received packets and the corresponding encoding vectors: user 1 holds v1, v2, …, user 2 holds v'1, v'2, …, and so on.

  7. Synopsis • Objectives: • Minimize the completion time of each user. • Minimize encoding and decoding complexity. • Decoding complexity can be reduced if the encoding vectors are sparse. • Apply a version of Gaussian elimination which exploits sparsity. • The problem of generating sparse encoding vectors is related to some NP-complete problems. • Heuristic algorithms and comparison.

  8. Complexity Issues in Network Coding • Deciding whether there exists a linear network code with prescribed alphabet size is NP-hard. • Lehman and Lehman, Complexity classification of network information flow problems, SODA, 2004. • The minimization of the number of encoding nodes is NP-hard. • Langberg, Sprintson and Bruck, The encoding complexity of network coding, IEEE Trans. Inf. Theory, 2006. • Langberg and Sprintson, On the hardness of approximating the network coding capacity, IEEE Trans. Inf. Theory, 2011. • For the noiseless broadcast channel with binary alphabet, minimizing the number of packet transmissions in the index coding problem is NP-hard. • El Rouayheb, Chaudhry and Sprintson, On the minimum number of transmissions in single-hop wireless coding networks, ITW, 2007.

  9. Innovative Packet • An encoded packet is said to be innovative to a user if the corresponding encoding vector is linearly independent of the encoding vectors that the user received previously. • If an encoded packet is innovative to all users, then we say that it is innovative. • It is known that innovative packets always exist if the finite field size is larger than or equal to the number of users. • Keller, Drinea and Fragouli, Online broadcasting with network coding, NetCod, 2008.
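As an illustration of the definition, the following Python sketch tests innovativeness by a rank computation over a prime field GF(q); the helper names are ours, not the paper's.

```python
def rank_gf(rows, q):
    """Rank of a list of length-N vectors over GF(q), q prime,
    by Gaussian elimination (all arithmetic mod q)."""
    m = [[x % q for x in row] for row in rows]
    rank = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], q - 2, q)      # inverse mod q (Fermat)
        m[rank] = [(x * inv) % q for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % q for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def is_innovative(v, user_matrices, q):
    """v is innovative iff it is linearly independent of every user's
    previously received encoding vectors, i.e. appending it increases
    the rank of each C_i."""
    return all(rank_gf(C + [v], q) > rank_gf(C, q) for C in user_matrices)
```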

  10. Notation: encoding matrix. The source packets are interpreted as the standard basis e1, e2, …, eN of the vector space F^N. For each user i, the received encoding vectors are stacked into a matrix Ci: the rows of Ci are the encoding vectors of the packets received by user i.

  11. The set of all innovative encoding vectors. Given $C_i$ for all $i$, the set of all innovative encoding vectors is defined as $V := \{ \mathbf{v} \in F^N : \mathbf{v} \notin \mathrm{rowspace}(C_i) \text{ for } i = 1, \ldots, K \}$.

  12. Hamming weight and sparsity. Given an encoding vector $\mathbf{v}$, the support of $\mathbf{v}$ is defined as $\mathrm{supp}(\mathbf{v}) := \{ j : v_j \neq 0 \}$. The Hamming weight of $\mathbf{v}$ is defined as the cardinality of $\mathrm{supp}(\mathbf{v})$. A vector $\mathbf{v}$ with Hamming weight at most $w$ is said to be $w$-sparse.
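In code form, the two definitions read as follows (an illustrative Python sketch):

```python
def support(v):
    """supp(v): indices of the non-zero components of v."""
    return {j for j, x in enumerate(v) if x != 0}

def hamming_weight(v):
    """wt(v) = |supp(v)|; v is w-sparse when wt(v) <= w."""
    return len(support(v))
```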

  13. SPARSITY Problem. Considering both the sparsity and the innovativeness of an encoding vector, we formulate the problem below. Problem: SPARSITY. Instance: $K$ matrices $C_1, \ldots, C_K$ over GF(q), each with $N$ columns and rank less than $N$; a positive integer $n$. Question: Is there a vector $\mathbf{v} \in V$ with Hamming weight less than or equal to $n$?

  14. Example: Let q = 2, K = 2, N = 4 and n = 2, and consider two matrices $C_1$ and $C_2$ of received encoding vectors. There are three vectors in $V$ with Hamming weight less than or equal to n = 2.

  15. Theorem. SPARSITY is NP-complete. Now define the optimization version of SPARSITY as follows. Question: Find a vector $\mathbf{v} \in V$ with minimum Hamming weight. It can be shown that the optimization version of SPARSITY is NP-hard. However, for fixed $K$ and $q$, it can be solved by brute-force search in time polynomial in $N$.

  16. Orthogonal complement. Let $U_i$ be the row space of $C_i$. Denote the orthogonal complement of $U_i$ in $F^N$ by $U_i^\perp$. Let $Z_i$ be an $(N - \mathrm{rank}(C_i)) \times N$ matrix whose rows form a basis of $U_i^\perp$. $Z_i$ can be obtained from the reduced row echelon form (RREF) of $C_i$.
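A sketch of how $Z_i$ can be computed, assuming a prime field: the orthogonal complement of the row space of $C_i$ is the null space of $C_i$, and a basis for it can be read off the RREF, one basis vector per free (non-pivot) column. Function names are illustrative.

```python
def rref(M, q):
    """Reduced row echelon form over GF(q), q prime; returns the
    non-zero rows and the list of pivot columns."""
    m = [[x % q for x in row] for row in M]
    pivots, r = [], 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        inv = pow(m[r][col], q - 2, q)            # inverse mod q
        m[r] = [(x * inv) % q for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][col]:
                f = m[i][col]
                m[i] = [(a - f * b) % q for a, b in zip(m[i], m[r])]
        pivots.append(col)
        r += 1
    return m[:r], pivots

def orthogonal_complement(C, q, N):
    """Basis of rowspace(C)^perp, i.e. of the null space of C: one
    basis vector per free column of the RREF of C."""
    R, pivots = rref(C, q)
    basis = []
    for f in (j for j in range(N) if j not in pivots):
        v = [0] * N
        v[f] = 1
        for i, p in enumerate(pivots):
            v[p] = (-R[i][f]) % q
        basis.append(v)
    return basis
```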

  17. To check whether an encoding vector is innovative, we use the following fact. Theorem. Given $C_1, \ldots, C_K$, an encoding vector $\mathbf{v}$ belongs to $V$ if and only if $Z_i \mathbf{v}^T \neq \mathbf{0}$ for all $i$.
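The theorem translates directly into a test (illustrative Python sketch; `Z_list` collects the matrices $Z_1, \ldots, Z_K$ produced above):

```python
def is_innovative_via_Z(v, Z_list, q):
    """v lies in V iff Z_i v^T is non-zero for every user i, where the
    rows of Z_i span the orthogonal complement of rowspace(C_i)."""
    def nonzero_product(Z):
        return any(sum(z * x for z, x in zip(row, v)) % q for row in Z)
    return all(nonzero_product(Z) for Z in Z_list)
```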

  18. Minimizing the Hamming Weight. Given all the $C_i$'s, we obtain the matrices $Z_i$ by RREF. Let $\mathbf{z}_{i,j}$ be the $j$-th row of $Z_i$. Define $\mathbf{b}_i := \mathbf{z}_{i,1} \vee \mathbf{z}_{i,2} \vee \cdots$, where $\vee$ denotes the logical-OR operator applied component-wise to vectors, with each non-zero component treated as a "1".
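A sketch of this step in Python, together with the matrix B defined on slide 20 (our own helper names; any non-zero field element counts as a 1):

```python
def or_of_rows(Z):
    """b_i: component-wise logical OR of the rows of Z_i, with every
    non-zero field element treated as a '1'."""
    return [int(any(row[j] for row in Z)) for j in range(len(Z[0]))]

def build_B(Z_list):
    """Stack b_1, ..., b_K into the K x N binary matrix B of slide 20."""
    return [or_of_rows(Z) for Z in Z_list]
```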

  19. Example: Let q = 3, K = 3, N = 4, and let the orthogonal complements of the row spaces of $C_1$, $C_2$, $C_3$ be given by the row spaces of matrices $Z_1$, $Z_2$, $Z_3$. From these we compute the vectors $\mathbf{b}_k$ for k = 1, 2, 3.

  20. Define $B$ as the $K \times N$ matrix whose $k$-th row is $\mathbf{b}_k$. Note that $B$ is a binary matrix and has no zero rows. Given a subset $S$ of column indices of $B$, let $B_S$ be the submatrix of $B$ whose columns are chosen according to $S$.

  21. Lemma 3. Let $S$ be an index set, and suppose the field size $q$ is at least $K$ (so that innovative packets exist). There exists an encoding vector $\mathbf{v} \in V$ with support inside $S$ (i.e., $v_j = 0$ for $j \notin S$) if and only if $B_S$ has no zero rows.
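Lemma 3 gives a cheap feasibility test for a candidate support S. A sketch, using a hypothetical 3 × 4 matrix B of our own (0-based column indices):

```python
def support_is_feasible(B, S):
    """Lemma 3 as a test: the submatrix B_S (columns of B indexed by S)
    has no zero row iff S intersects supp(b_k) for every user k."""
    return all(any(row[j] for j in S) for row in B)

# Hypothetical 3 x 4 binary matrix B for illustration:
B = [[0, 1, 1, 0],
     [1, 1, 0, 1],
     [1, 0, 1, 1]]
print(support_is_feasible(B, {0, 1}))  # True: first two columns hit all rows
print(support_is_feasible(B, {0, 3}))  # False: the first row is zero there
```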

  22. Example (cont'd): For the three users, choose a 3 × w submatrix of B with minimal w, such that the submatrix has no zero rows. We may choose the first two columns; then we can find an encoding vector with two non-zero components.

  23. By reducing HITTING SET to SPARSITY, NP-completeness of SPARSITY can be shown. Problem: HITTING SET. Instance: A finite set $U$, a collection $\mathcal{T}$ of subsets of $U$, and an integer $n$. Question: Is there a subset $W \subseteq U$ with cardinality at most $n$, such that for each $T \in \mathcal{T}$ we have $W \cap T \neq \emptyset$?

  24. Example (cont'd): Choosing a sparsest support is exactly a hitting set problem: the chosen column set must intersect $\mathrm{supp}(\mathbf{b}_k)$ for each of the three users. The minimal hitting sets are {1,2}, {1,3}, {2,3}, {2,4} and {3,4}.

  25. Optimal Hitting method • Solve the hitting set problem optimally by reducing it to binary integer programming. • Minimum sparsity at each iteration is guaranteed. • After the support of the encoding vector is determined, find the coefficients which make the vector innovative.
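The slides solve this step with binary integer programming; as a dependency-free stand-in, the sketch below finds a minimum hitting set by exhaustive search over candidate supports of increasing size, which is exact but exponential in the worst case. The example sets are hypothetical.

```python
from itertools import combinations

def min_hitting_set(sets, n_elements):
    """Exact minimum hitting set: search candidate index sets of
    increasing size w and return the first one that intersects every
    set in `sets` (elements are 0..n_elements-1), or None."""
    for w in range(1, n_elements + 1):
        for S in combinations(range(n_elements), w):
            if all(set(S) & T for T in sets):
                return set(S)
    return None

# Hypothetical supports supp(b_k) for three users, 0-based indices:
sets = [{1, 2}, {0, 1, 3}, {0, 2, 3}]
print(min_hitting_set(sets, 4))   # {0, 1}: a minimum hitting set of size 2
```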

  26. Greedy Hitting method • Solve the hitting set problem heuristically by greedy method. • Sequentially pick an element which hits the largest number of sets. • Minimum sparsity is not guaranteed. • After the support of the encoding vector is determined, find the coefficients which make the vector innovative.
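A sketch of the greedy heuristic (illustrative Python, our own function names):

```python
def greedy_hitting_set(sets):
    """Greedy heuristic: repeatedly pick the element hitting the most
    still-unhit sets.  Fast, but minimality is not guaranteed."""
    remaining = [set(T) for T in sets]
    chosen = set()
    while remaining:
        counts = {}
        for T in remaining:
            for e in T:
                counts[e] = counts.get(e, 0) + 1
        best = max(counts, key=counts.get)   # element in the most sets
        chosen.add(best)
        remaining = [T for T in remaining if best not in T]
    return chosen
```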

  27. Existing encoding schemes (I) • Random linear network codes. • Encoding: • Phase 1: The source node first broadcasts each packet once. • Phase 2: It sends encoded packets with randomly generated coefficients. • Decode by Gaussian elimination. • No feedback is required.
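A self-contained sketch of the Gaussian-elimination decoder, assuming a prime field GF(q): the rows of A are the received encoding vectors and the rows of Y the corresponding payloads; decoding succeeds once A has full column rank N.

```python
def gaussian_decode(A, Y, q):
    """Solve A X = Y over GF(q), q prime: row i of A is the encoding
    vector of the i-th received packet, row i of Y its payload.
    Returns the N source packets once A has full column rank N."""
    A = [row[:] for row in A]
    Y = [row[:] for row in Y]
    N = len(A[0])
    for col in range(N):
        piv = next((i for i in range(col, len(A)) if A[i][col] % q), None)
        if piv is None:
            raise ValueError("not decodable yet: fewer than N innovative packets")
        A[col], A[piv] = A[piv], A[col]
        Y[col], Y[piv] = Y[piv], Y[col]
        inv = pow(A[col][col], q - 2, q)         # pivot inverse mod q
        A[col] = [(x * inv) % q for x in A[col]]
        Y[col] = [(x * inv) % q for x in Y[col]]
        for i in range(len(A)):
            if i != col and A[i][col]:
                f = A[i][col]
                A[i] = [(a - f * b) % q for a, b in zip(A[i], A[col])]
                Y[i] = [(a - f * b) % q for a, b in zip(Y[i], Y[col])]
    return Y[:N]   # row i is the recovered source packet P(i+1)
```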

  28. Existing encoding schemes (II) • Chunked code • An extension of random linear network coding. • Divide the source packets into chunks; each chunk contains c packets. • Apply random linear network coding within each chunk. • The resulting encoding vectors are c-sparse. • Feedback is not required.
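A sketch of how a c-sparse encoding vector arises in chunked coding (illustrative Python; chunks are assumed to start at multiples of c):

```python
import random

def chunked_encoding_vector(n_packets, c, q):
    """Random encoding vector for chunked coding: pick one chunk of c
    consecutive packets and draw random GF(q) coefficients for it only,
    so the vector is c-sparse.  (An all-zero draw, though unlikely,
    would simply be a wasted transmission.)"""
    start = random.randrange(0, n_packets, c)    # random chunk boundary
    v = [0] * n_packets
    for j in range(start, min(start + c, n_packets)):
        v[j] = random.randrange(q)
    return v

print(chunked_encoding_vector(n_packets=8, c=2, q=2))
```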

  29. Existing encoding schemes (III) • Instantly decodable network code • Encoding: • Phase 1: The source packets are first broadcast once. • Phase 2: Find a subset of users such that, by transmitting one encoded packet, each user in the subset can decode a source packet. • Decoding: A user in the target set can decode one packet immediately if the encoded packet is received successfully. • Feedback is required.

  30. Existing encoding schemes (IV) • LT code • Use the robust soliton degree distribution in encoding • No feedback is required.

  31. Comparison of complexity

  32. Completion time vs number of users (perfect feedback)

  33. Binary alphabet

  34. Completion time vs number of users (lossy feedback)

  35. Decoding time vs no. of users

  36. Encoding time vs no. of users

  37. Hamming weight vs no. of users

  38. Conclusion We investigated the generation of sparsest innovative encoding vectors, a problem which is proven to be NP-hard. A systematic way to generate the sparsest innovative encoding vectors is given. There is a tradeoff between encoding complexity, decoding complexity, and completion time.
