
Information Theory


Presentation Transcript


  1. Information Theory Introduction to Channel Coding Jalal Al Roumy

  2. Introduction Error control for data integrity may be exercised by means of either forward error correction (FEC) or automatic repeat request (ARQ). For FEC, redundant data is added to the code for the purposes of error detection and correction, whereas ARQ utilises redundancy for the sole purpose of error detection: upon detecting an error, the receiver requests a repeat transmission. Channel coding is shown in the following diagram; modulation and channel coding may be combined. [Diagram: information to be transmitted → source coding → channel coding → modulation → channel → demodulation → channel decoding → source decoding → information received.]

  3. Coding - basic concepts Without coding theory and error-correcting codes there would be no deep-space travel and pictures, no satellite TV, no mobile communications, no e-business, no e-commerce, no control systems, etc. Error-correcting codes are used to correct messages when they are transmitted through noisy channels. [Diagram: error-correcting framework.] Example A code C over an alphabet S is a subset of S* (C ⊆ S*). A q-nary code is a code over an alphabet of q symbols. A binary code is a code over the alphabet {0,1}. Examples of codes: C1 = {00, 01, 10, 11}, C2 = {000, 010, 101, 100}, C3 = {00000, 01101, 10111, 11011}.

  4. Background Historically, FECs are divided into linear block codes (e.g. Hamming, cyclic, Reed-Solomon, etc.) and codes such as convolutional codes. From a purist viewpoint the division is drawn between linear block codes and convolutional codes. The next sessions will examine particular examples of FECs covering: • an overview of linear codes • Hamming codes • cyclic codes • convolutional codes. The session will conclude with a brief excursion into ARQs. Firstly, a reminder of the most basic form of error detection – the parity check. As usual we need some basic theory to begin!

  5. Parity Codes Single-bit parity: detects single-bit errors. Two-dimensional bit parity: detects and corrects single-bit errors. [Diagram: single-bit and two-dimensional parity examples.]
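
To make the mechanics concrete, here is a minimal Python sketch of even single-bit and two-dimensional parity; the names parity_bit, encode_2d and locate_single_error are illustrative, not from the slides.

def parity_bit(bits):
    # Even parity: the check bit makes the total number of 1s even.
    return sum(bits) % 2

def encode_2d(rows):
    # Append a parity bit to each row, then a parity row over the columns.
    with_row_parity = [row + [parity_bit(row)] for row in rows]
    parity_row = [parity_bit(col) for col in zip(*with_row_parity)]
    return with_row_parity + [parity_row]

def locate_single_error(block):
    # Intersect the failing row check with the failing column check;
    # this intersection is what turns detection into correction.
    bad_rows = [i for i, row in enumerate(block) if parity_bit(row) != 0]
    bad_cols = [j for j, col in enumerate(zip(*block)) if parity_bit(col) != 0]
    if bad_rows and bad_cols:
        return bad_rows[0], bad_cols[0]
    return None  # no single-bit error found

block = encode_2d([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
block[1][2] ^= 1                   # inject a single-bit error
print(locate_single_error(block))  # -> (1, 2): located, hence correctable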

  6. Example code

  7. The ISBN-code; an example Each book has an International Standard Book Number, a 10-digit codeword x1 … x10 produced by the publisher with the following structure: l (language), p (publisher), m (number), w (weighted check sum), e.g. 0 13 061814 4, such that Σ(i=1..10) i·xi ≡ 0 (mod 11). For the example: 1·0 + 2·1 + 3·3 + 4·0 + 5·6 + 6·1 + 7·8 + 8·1 + 9·4 + 10·4 = 187 ≡ 0 (mod 11). The publisher has to put the check digit (in this case 4; the symbol X is used when the required value is 10) into the 10-th position. The ISBN code is designed to detect: (a) any single error, (b) any double error created by a transposition. Single error detection Let X = x1 … x10 be a correct code and let Y = x1 … xj-1 yj xj+1 … x10 with yj = xj + a, a ≠ 0. In such a case the weighted check sum of Y differs from that of X by j·a ≢ 0 (mod 11), because 11 is prime; hence Y is not a valid ISBN and the single error is detected.
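
As a sketch, the ISBN-10 check above fits in a few lines of Python (isbn10_checksum is an illustrative name; 'X' stands for the digit 10):

def isbn10_checksum(isbn):
    # Weighted check sum: sum of i * x_i, taken modulo 11.
    digits = [10 if c == 'X' else int(c) for c in isbn if c not in ' -']
    return sum(i * x for i, x in enumerate(digits, start=1)) % 11

print(isbn10_checksum("0 13 061814 4"))  # -> 0: valid, since 187 = 11 * 17
print(isbn10_checksum("0 13 061815 4"))  # -> non-zero: single error detected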

  8. Binary symmetric channel Consider a transmission of binary symbols such that each symbol independently has probability of error p < 1/2. If n symbols are transmitted, then the probability of exactly t errors is C(n,t) · p^t · (1-p)^(n-t). In the case of binary symmetric channels the "nearest neighbour decoding strategy" is also the "maximum likelihood decoding strategy". Example Consider C = {000, 111} and the nearest neighbour decoding strategy. The probability that the received word is decoded correctly as 000 is (1-p)^3 + 3p(1-p)^2, and as 111 it is likewise (1-p)^3 + 3p(1-p)^2. Therefore Perr(C) = 1 - ((1-p)^3 + 3p(1-p)^2) is the so-called word error probability. Example If p = 0.01, then Perr(C) = 0.000298 and only about one word in 3356 will reach the user with an error.
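
These numbers are easy to verify; a minimal Python sketch, assuming the binomial formula above:

from math import comb

def prob_t_errors(n, t, p):
    # P(exactly t of n symbols flipped) = C(n,t) * p^t * (1-p)^(n-t)
    return comb(n, t) * p**t * (1 - p)**(n - t)

# For C = {000, 111} with nearest-neighbour decoding, a codeword is
# decoded correctly iff at most one of its three bits is flipped.
p = 0.01
p_err = 1 - (prob_t_errors(3, 0, p) + prob_t_errors(3, 1, p))
print(round(p_err, 6))  # -> 0.000298
print(1 / p_err)        # -> about 3356: one erroneous word per 3356 words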

  9. Hamming distance The intuitive concept of "closeness" of two words is formalized through the Hamming distance d(x,y) of words x, y. For two words (or vectors) x, y: d(x,y) = the number of positions in which x and y differ. Example: d(10101, 01100) = 3, d(first, fifth) = 3. Properties of Hamming distance: (1) d(x,y) = 0 iff x = y; (2) d(x,y) = d(y,x); (3) d(x,z) ≤ d(x,y) + d(y,z) (triangle inequality). An important parameter of a code C is its minimal distance d(C) = min{d(x,y) | x, y ∈ C, x ≠ y}, because it gives the smallest number of errors needed to change one codeword into another. Theorem (basic error-correcting theorem): (1) A code C can detect up to s errors if d(C) ≥ s + 1. (2) A code C can correct up to t errors if d(C) ≥ 2t + 1. Note – for binary linear codes, d(C) equals the smallest weight w(C) of a non-zero codeword.
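
A one-function Python sketch reproducing the slide's two examples:

def hamming(x, y):
    # Number of positions in which the equal-length words x and y differ.
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

print(hamming("10101", "01100"))  # -> 3
print(hamming("first", "fifth"))  # -> 3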

  10. Some notation Notation: An (n,M,d)-code C is a code such that • n is the length of codewords, • M is the number of codewords, • d is the minimum distance in C. Example: C1 = {00, 01, 10, 11} is a (2,4,1)-code; C2 = {000, 011, 101, 110} is a (3,4,2)-code; C3 = {00000, 01101, 10110, 11011} is a (5,4,3)-code. Comment: A good (n,M,d)-code has small n and large M and d.
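
The stated parameters can be checked mechanically; a minimal sketch (min_distance is an illustrative name):

from itertools import combinations

def min_distance(C):
    # d(C): smallest Hamming distance over all pairs of distinct codewords.
    return min(sum(a != b for a, b in zip(x, y)) for x, y in combinations(C, 2))

for C in [["00", "01", "10", "11"],
          ["000", "011", "101", "110"],
          ["00000", "01101", "10110", "11011"]]:
    print((len(C[0]), len(C), min_distance(C)))  # -> (2,4,1), (3,4,2), (5,4,3)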

  11. Code Rate For a q-nary (n,M,d)-code we define the code rate, or information rate, R, by R = logq(M) / n. The code rate represents the ratio of the number of input data symbols to the number of transmitted code symbols. For a Hadamard code, for example, this is an important parameter for real implementations, because it shows what fraction of the bandwidth is being used to transmit actual data. Recall that log2(n) = ln(n)/ln(2).
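
For example, a small sketch of the rate computation (code_rate is an illustrative name):

from math import log

def code_rate(n, M, q=2):
    # R = log_q(M) / n for a q-nary (n, M, d)-code.
    return log(M, q) / n

print(code_rate(3, 4))  # C2 above: R = 2/3
print(code_rate(5, 4))  # C3 above: R = 2/5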

  12. The main coding theory problem A good (n,M,d)-code has small n, large M and large d. The main coding theory problem is to optimize one of the parameters n, M, d for given values of the other two. Notation: Aq(n,d) is the largest M such that there is a q-nary (n,M,d)-code.

  13. A general upper bound on Aq(n,d) Notation: Fq^n is the set of all words of length n over the alphabet {0, 1, 2, …, q-1}. Definition For any word u ∈ Fq^n and any integer r ≥ 0, the sphere of radius r and centre u is denoted by S(u,r) = {v ∈ Fq^n | d(u,v) ≤ r}. Theorem A sphere of radius r in Fq^n, 0 ≤ r ≤ n, contains exactly C(n,0) + C(n,1)(q-1) + C(n,2)(q-1)^2 + … + C(n,r)(q-1)^r words. Theorem (the sphere-packing or Hamming bound) If C is a q-nary (n,M,2t+1)-code, then M · (C(n,0) + C(n,1)(q-1) + … + C(n,t)(q-1)^t) ≤ q^n. A code which achieves the sphere-packing bound with equality is called a perfect code. The Singleton upper bound is Aq(n,d) ≤ q^(n-d+1).
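
A sketch evaluating the sphere size and the two upper bounds for a small case (all function names are illustrative):

from math import comb

def sphere_size(n, r, q=2):
    # |S(u,r)| = sum over i = 0..r of C(n,i) * (q-1)^i
    return sum(comb(n, i) * (q - 1)**i for i in range(r + 1))

def hamming_bound(n, t, q=2):
    # Largest M permitted by the sphere-packing bound for an (n,M,2t+1)-code.
    return q**n // sphere_size(n, t, q)

def singleton_bound(n, d, q=2):
    return q**(n - d + 1)

print(hamming_bound(7, 1))    # -> 16: attained with equality two slides below
print(singleton_bound(7, 3))  # -> 32: weaker than the Hamming bound here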

  14. A general upper bound on Aq(n,d) For binary codes (q = 2) the Singleton result becomes A2(n,d) ≤ 2^(n-d+1).

  15. A general upper bound on Aq(n,d) Example A (7,M,3)-code is perfect if M · (C(7,0) + C(7,1)) = 2^7, i.e. 8M = 128, i.e. M = 16. An example of such a code: C4 = {0000000, 1111111, 1000101, 1100010, 0110001, 1011000, 0101100, 0010110, 0001011, 0111010, 0011101, 1001110, 0100111, 1010011, 1101001, 1110100}.
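
One can confirm mechanically that C4 attains the sphere-packing bound with equality; a minimal sketch:

from itertools import combinations
from math import comb

C4 = ["0000000", "1111111", "1000101", "1100010", "0110001", "1011000",
      "0101100", "0010110", "0001011", "0111010", "0011101", "1001110",
      "0100111", "1010011", "1101001", "1110100"]

d = min(sum(a != b for a, b in zip(x, y)) for x, y in combinations(C4, 2))
print(d)                                            # -> 3, so t = 1
print(len(C4) * sum(comb(7, i) for i in range(2)))  # -> 16 * 8 = 128 = 2^7:
                                                    #    equality, C4 is perfect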

  16. Lower Bound for Aq(n,d) The following lower bound for Aq(n,d) is known as the Gilbert–Varshamov bound: Theorem Given d ≤ n, there exists a q-ary (n,M,d)-code with M ≥ q^n / Σ(i=0..d-1) C(n,i)(q-1)^i. For binary codes: M ≥ 2^n / Σ(i=0..d-1) C(n,i).
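
A sketch of the bound as stated (gv_lower_bound is an illustrative name):

from math import comb

def gv_lower_bound(n, d, q=2):
    # Gilbert-Varshamov: A_q(n,d) >= q^n / sum_{i=0}^{d-1} C(n,i)(q-1)^i
    return q**n / sum(comb(n, i) * (q - 1)**i for i in range(d))

print(gv_lower_bound(10, 3))  # -> 18.29..., so a binary (10,19,3)-code exists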

  17. Examples using bounds • Does there exist a linear code of length n = 9, dimension k = 2, and distance d = 5? Yes, because the Gilbert–Varshamov condition for binary linear codes (an [n,k,d]-code exists if Σ(i=0..d-2) C(n-1,i) < 2^(n-k)) holds: C(8,0) + C(8,1) + C(8,2) + C(8,3) = 1 + 8 + 28 + 56 = 93 < 128 = 2^7. • What is a lower and an upper bound on the size, or the dimension k, of a linear code with n = 9 and d = 5? The computation above gives the lower bound k ≥ 2. • Hamming upper bound: M ≤ 2^9 / (C(9,0) + C(9,1) + C(9,2)) = 512/46 ≈ 11.1, so M ≤ 11 and hence k ≤ 3. Both computations are spelled out in the sketch below.
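
The two computations, as a Python sketch under the stated conditions:

from math import comb

n, d = 9, 5

# G-V condition for a binary linear [n, k, d] code:
# such a code exists if sum_{i=0}^{d-2} C(n-1, i) < 2^(n-k).
lhs = sum(comb(n - 1, i) for i in range(d - 1))
print(lhs, 2**(n - 2))  # -> 93 < 128: a [9,2,5] code exists, so k >= 2

# Hamming upper bound with t = (d - 1) // 2 = 2:
M_max = 2**n // sum(comb(n, i) for i in range((d - 1) // 2 + 1))
print(M_max)            # -> 11, so 2^k <= 11, i.e. k <= 3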

  18. Further example Does there exist a (15, 7, 5) linear code? Check the G-V condition: Σ(i=0..3) C(14,i) = 1 + 14 + 91 + 364 = 470, which is not less than 2^(15-7) = 256. The G-V condition does not hold, so the G-V bound does not tell us whether or not such a code exists.
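
The same check in code form, a minimal sketch:

from math import comb

# G-V condition for a binary linear [15, 7, 5] code:
lhs = sum(comb(14, i) for i in range(4))  # sum_{i=0}^{3} C(14, i)
print(lhs, 2**(15 - 7))                   # -> 470 vs 256: 470 >= 256, the
                                          #    condition fails, so G-V is silent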

  19. Another example
