
Channel capacity


Presentation Transcript


  1. Channel capacity. C = max I(xy), i.e. the maximum information transfer.
Binary Symmetric Channel: the noise in the system is random, so the probabilities of error in '0' and '1' are the same. The channel is characterised by a single value p, the binary error probability.
[Channel diagram: input x (transmit), with probabilities p(0) and p(1) = 1 - p(0); output y (receive); 0→0 and 1→1 pass undisturbed, while 0→1 and 1→0 occur with probability p.]

  2. Channel capacity of this channel. Mutual information increases as the error rate decreases:
I(xy) = H(y) - H(y|x) = H(y) - H(p), where H(p) = -p log2 p - (1 - p) log2(1 - p) is the backward equivocation (error entropy).
Since p is fixed, I(xy) is maximum when H(y) is maximum. This occurs when p(0) = p(1) = 1/2 at the receiver (output), giving H(y) = 1. Hence
C = 1 - H(p).
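A minimal sketch of this formula in Python (the function names binary_entropy and bsc_capacity are mine, not from the slides):

    import math

    def binary_entropy(p: float) -> float:
        # H(p) = -p*log2(p) - (1 - p)*log2(1 - p)
        if p in (0.0, 1.0):
            return 0.0  # limiting value: no uncertainty
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def bsc_capacity(p: float) -> float:
        # Capacity of a binary symmetric channel with bit error probability p
        return 1.0 - binary_entropy(p)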

  3. Example. Find the capacity of a binary symmetric channel with a binary error probability of 0.125.
[Figures: (a) variation of information transfer with output probability; (b) variation of capacity with error probability.]
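Working the example with the formula above (arithmetic mine): H(0.125) = 0.125 log2(8) + 0.875 log2(8/7) ≈ 0.375 + 0.169 = 0.544, so C = 1 - H(0.125) ≈ 0.456 bits per binary digit. The sketch above gives the same value: bsc_capacity(0.125) ≈ 0.456.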

  4. How to overcome the problem of information loss in a noisy channel?
(a) A physical solution?
(b) A system solution: channel coding.
Source coding: the task of source coding is to represent the source information with the minimum number of symbols, under the assumption that the channel is noise-free. When a code is transmitted over a channel in the presence of noise, errors will occur.
Channel coding: the task of channel coding is to represent the source information in a manner that minimises the error probability in decoding.
Redundancy: add an extra amount of information to compensate for the information loss (compare the temperature control of a room in winter under different outdoor temperatures).

  5. Symbol error is an error under some decision rule: a received code word (in which some bits may be in error) is classified as a wrong symbol (different from the symbol originally meant).
The binomial distribution plays an important role in channel coding. A binomial experiment consists of n identical trials (think of coding a symbol by a binary digit sequence, i.e. a code word, so n is the length of the code word). Each trial has two possible outcomes, S or F, with probabilities p and 1 - p respectively. Here S can be defined as a transmission error (1→0 or 0→1), and p is the bit error rate. The probability of exactly r bit errors in a code word is then
P(r) = C(n, r) p^r (1 - p)^(n - r), where C(n, r) is the binomial coefficient.
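A short sketch of this probability in Python (math.comb is the binomial coefficient; the function name p_r_errors is mine):

    import math

    def p_r_errors(n: int, r: int, p: float) -> float:
        # Probability of exactly r bit errors in an n-digit code word,
        # assuming independent bit errors with probability p each.
        return math.comb(n, r) * p**r * (1 - p)**(n - r)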

  6. Coding in a noisy channel. Error protection: improve the tolerance of errors, either by error detection (indicate the occurrence of errors) or by error correction.
Binary coding for error protection. Example: assume a binary symmetric channel with p = 0.01 (error probability).
1) Coding by repetition. Code A = 00000, B = 11111, and use a majority decision rule: if there are more 0's than 1's, decide A. Up to 2 errors are then tolerated without producing a symbol error. Use the binomial probability distribution to find the symbol error probability p(e), as in the sketch below.
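With the majority rule, a symbol error occurs only when 3 or more of the 5 digits are in error, so p(e) = P(3) + P(4) + P(5). A quick check using the p_r_errors sketch above (parameters from the example; code mine):

    p = 0.01
    p_e = sum(p_r_errors(5, r, p) for r in range(3, 6))
    print(p_e)  # ~9.85e-06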

  7. Information rate. Let M be the number of equiprobable code words and n the number of binary digits per code word; the information rate is then
R = (log2 M) / n bits per digit.
Shannon's channel coding theorem: P(e) can be made arbitrarily small, provided that R < C.

  8. 2) Coding by selection of code words. (Using 5 digits there are 32 possible code words, but we do not have to use them all.)
• Two selections (i.e. repetition): A = 00000, B = 11111. This gives R = (log2 2) / 5 = 0.2, with the small p(e) found above.
• Thirty-two selections: R = (log2 32) / 5 = 1, but any single bit error now produces a symbol error. The two extremes are compared in the sketch below.
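A sketch comparing the two extremes, reusing p_r_errors from above (p = 0.01 and n = 5 as in the example; code mine):

    import math

    p, n = 0.01, 5
    # Repetition code: M = 2; a symbol error needs 3 or more bit errors.
    R_rep = math.log2(2) / n                                # 0.2 bit/digit
    pe_rep = sum(p_r_errors(n, r, p) for r in range(3, 6))  # ~9.85e-06
    # All 32 code words used: M = 32; any bit error is a symbol error.
    R_all = math.log2(32) / n                               # 1.0 bit/digit
    pe_all = 1 - (1 - p)**n                                 # ~0.049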

  9. 4 selections: a compromise between the two extremes.
• Enough code words to give a reasonable R.
• Code words that are as different as possible, to reduce p(e): e.g. each code word differs from all the others in at least three digit positions.
Hamming distance is the number of digit positions in which a pair of code words differ (see the sketch below).
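A minimal Hamming-distance check in Python; the four code words below are one possible choice with pairwise distance of at least 3 (giving R = (log2 4)/5 = 0.4), not necessarily the set used in the original slides:

    from itertools import combinations

    def hamming(a: str, b: str) -> int:
        # Number of digit positions in which two equal-length words differ
        return sum(x != y for x, y in zip(a, b))

    codebook = ["00000", "00111", "11001", "11110"]  # assumed example set
    mhd = min(hamming(a, b) for a, b in combinations(codebook, 2))
    print(mhd)  # 3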

  10. Minimum Hamming distance (MHD) is the smallest Hamming distance over the set of code words. Here MHD = 3, so one error can be tolerated: in general, a code can correct up to (MHD - 1)/2 errors.
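A sketch of minimum-distance decoding over the example code book above (construction mine; with MHD = 3 any single bit error is corrected):

    def decode(received: str, codebook: list[str]) -> str:
        # Pick the code word closest to the received word in Hamming distance
        return min(codebook, key=lambda c: hamming(received, c))

    print(decode("00101", codebook))  # "00111": one flipped digit corrected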
