  1. Chapter 8 Channel Capacity

  2. Channel Capacity Define C = max { I(A; B) : p(a) } = the amount of useful information per bit actually sent = the drop in uncertainty produced by going through the channel. In symbols, I(A; B) = H(A) − H(A | B): H(A) is the average uncertainty about the symbol being sent before receiving, and H(A | B) is the average uncertainty remaining after receiving. 8.1
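
A minimal sketch, not from the slides, of the quantity being maximized in C = max I(A; B); the function name, the example channel matrix, and the input distribution are my own choices for illustration.

import numpy as np

def mutual_information(p_a, channel):
    # I(A; B) = H(B) - H(B|A) in bits, for input distribution p_a
    # and channel[i][j] = P(b = j | a = i)
    p_a = np.asarray(p_a, dtype=float)
    channel = np.asarray(channel, dtype=float)
    p_b = p_a @ channel                      # output distribution p(b)
    def H(p):                                # entropy in bits, skipping zero entries
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())
    H_B = H(p_b)
    H_B_given_A = sum(p_a[i] * H(channel[i]) for i in range(len(p_a)))
    return H_B - H_B_given_A

# illustration: binary symmetric channel with P = 0.9 and equiprobable input
bsc = [[0.9, 0.1], [0.1, 0.9]]
print(mutual_information([0.5, 0.5], bsc))   # about 0.531 bits per symbol

The capacity is then the maximum of this quantity over all input distributions p(a), which later slides compute in closed form for the binary symmetric channel.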

  3. Uniform Channel The channel probabilities do not change from symbol to symbol, i.e. the rows of the channel probability matrix are permutations of each other. So the conditional entropy W = − Σb P(b | a) log2 P(b | a) is independent of a. Consider no noise: P(b | a) = 1 for some b and 0 for all others  W = 0  I(A; B) = H(B) = H(A) (conforms to intuition only if the matrix is a permutation matrix). All noise implies H(B) = W, so I(A; B) = 0. 8.2
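
As a quick check of the uniform-channel claim, a sketch (mine, not from the slides) that computes the row entropy W for each row of a channel matrix whose rows are permutations of each other; the 3×3 matrix below is an arbitrary example.

import numpy as np

# a uniform channel: every row is a permutation of (0.7, 0.2, 0.1)
channel = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.7, 0.2],
                    [0.2, 0.1, 0.7]])

def row_entropy(row):
    # W = -sum_b P(b|a) log2 P(b|a) for one input symbol a
    row = row[row > 0]
    return float(-(row * np.log2(row)).sum())

for a, row in enumerate(channel):
    print(f"a = {a}: W = {row_entropy(row):.4f} bits")   # the same value for every row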

  4. Capacity of Binary Symmetric Channel Inputs 0 and 1, outputs 0 and 1; each bit is received correctly with probability P and flipped with probability Q = 1 − P. Let p = p(a = 0), so x = p(b = 0) = pP + (1 − p)Q. Then C = max {I(A; B) : p(a)} = max {H(B) − W : p(a)} = max {H2(x) − H2(P) : p}. The maximum occurs when x = ½, and then p = ½ also (unless all noise),  C = 1 − H2(P). 8.5
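
A sketch, assuming nothing beyond the slide's formulas: it evaluates the closed form C = 1 − H2(P) and cross-checks it by brute-force maximization of H2(x) − H2(P) over p = p(a = 0). The value P = 0.9 and the grid size are arbitrary.

import numpy as np

def H2(p):
    # binary entropy function in bits
    if p <= 0 or p >= 1:
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def bsc_capacity(P):
    return 1.0 - H2(P)                      # closed form C = 1 - H2(P)

def bsc_capacity_bruteforce(P, steps=100001):
    # maximize H2(x) - H2(P) over p = p(a = 0), with x = pP + (1 - p)Q
    Q = 1 - P
    return max(H2(p * P + (1 - p) * Q) - H2(P) for p in np.linspace(0, 1, steps))

P = 0.9
print(bsc_capacity(P), bsc_capacity_bruteforce(P))   # both about 0.531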

  5. Numerical Examples If P = ½ + ε, then C(P) ≈ 3ε² is a good approximation: expanding 1 − H2(½ + ε) about ε = 0 gives C(P) ≈ (2 / ln 2)ε² ≈ 2.885ε². 8.5
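
A quick numerical comparison (my own, not from the slides) of the exact C(½ + ε) against both the 3ε² rule and the (2 / ln 2)ε² expansion:

import numpy as np

def H2(p):
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

for eps in (0.01, 0.05, 0.1, 0.2):
    exact = 1 - H2(0.5 + eps)                    # C(1/2 + eps)
    print(f"eps = {eps}: exact = {exact:.5f}, "
          f"(2/ln 2) eps^2 = {2 / np.log(2) * eps ** 2:.5f}, "
          f"3 eps^2 = {3 * eps ** 2:.5f}")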

  6. Error Detecting Code A uniform channel (per-bit matrix rows (P, Q) and (Q, P)) with equiprobable input: p(a1) = … = p(aq). Apply this to n-bit single error detection, with one parity bit among the ci  {0, 1}: |A| = q = 2^(n−1), a = c1 … cn (even parity), so H2(A) = n − 1; |B| = 2^n = 2q, b = c1 … cn (any parity), H2(B) = ?? For blocks of size n, the probability of exactly k errors is C(n, k) P^(n−k) Q^k, and every b  B can be obtained from any a  A by k = 0 … n errors. 8.3
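
To make the bookkeeping concrete, a small sketch (block size n = 8 and P = 0.9 are made-up values) of the quantities on this slide: |A|, H2(A) = n − 1, and the binomial probability of k errors in a block.

import math

n, P = 8, 0.9        # made-up block size and per-bit success probability
Q = 1 - P

q = 2 ** (n - 1)                  # number of even-parity codewords, |A|
print(f"|A| = {q}, H2(A) = {math.log2(q)} bits")   # = n - 1 for equiprobable input

# probability of exactly k errors in a block of n bits
for k in range(n + 1):
    p_k = math.comb(n, k) * P ** (n - k) * Q ** k
    print(f"k = {k}: {p_k:.6f}")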

  7. W for the block channel: W = − Σk C(n, k) P^(n−k) Q^k log2(P^(n−k) Q^k) = − Σk C(n, k) P^(n−k) Q^k [(n − k) log2 P + k log2 Q]. In the first of the two resulting sums the nth term = 0, and in the second the 0th term = 0; factoring out nP and nQ respectively leaves binomial sums that each equal 1, so W = n·(− P log2 P − Q log2 Q) = n·H2(P), i.e. n times the W for one bit. (|B| = 2^n.) 8.3
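
A numerical check (mine, not from the slides) that the block conditional entropy really collapses to n times the one-bit value: the direct sum over k is compared with n·H2(P); the values of n and P below are arbitrary.

import math

def H2(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def W_block(n, P):
    # W = -sum_k C(n,k) P^(n-k) Q^k log2(P^(n-k) Q^k), summed directly
    Q = 1 - P
    total = 0.0
    for k in range(n + 1):
        prob = math.comb(n, k) * P ** (n - k) * Q ** k
        total -= prob * ((n - k) * math.log2(P) + k * math.log2(Q))
    return total

n, P = 8, 0.9
print(W_block(n, P), n * H2(P))     # the two values agree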

  8. Error Correcting Code Triplicate each bit and decode by majority vote: encode  noisy channel (× 3)  decode, and think of this whole chain as the channel. Uncoded per-bit matrix rows: (P, Q) and (Q, P). Coded (end-to-end) matrix rows: (P³ + 3P²Q, 3PQ² + Q³) and (3PQ² + Q³, P³ + 3P²Q). Let P′ = P³ + 3P²Q = P²·(P + 3Q). 8.4
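
A sketch (P = 0.9 is a made-up value) that computes P′ for the triplicated channel, confirms the factored form P²(P + 3Q), and compares the uncoded and coded error probabilities:

P = 0.9              # made-up per-bit success probability
Q = 1 - P

# probability the majority vote over 3 independent transmissions is correct
P_prime = P ** 3 + 3 * P ** 2 * Q
assert abs(P_prime - P ** 2 * (P + 3 * Q)) < 1e-12   # same factored form as the slide

print(f"uncoded error probability Q    = {Q:.4f}")
print(f"coded error probability 1 - P' = {1 - P_prime:.4f}")   # much smaller, at 1/3 the rate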

  9. Shannon’s Theorem will say that, as the block length n (here n = 3) → ∞, there are codes that take P′ → 1 while the capacity per bit actually sent, C(P′)/n, approaches C(P). 8.4
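
To see why Shannon’s theorem is the interesting claim, a small sketch (mine, not from the slides) of what plain repetition does as n grows: the probability of correct majority decoding tends to 1, but the rate falls as 1/n; Shannon’s theorem promises reliability without the rate collapsing.

import math

def majority_correct(n, P):
    # probability that fewer than n/2 of the n independent bits are flipped (n odd)
    Q = 1 - P
    return sum(math.comb(n, k) * P ** (n - k) * Q ** k for k in range((n + 1) // 2))

P = 0.9
for n in (1, 3, 5, 9, 17):
    print(f"n = {n:2d}: P(correct) = {majority_correct(n, P):.6f}, rate = 1/{n}")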
