  1. S.72-227 Digital Communication Systems Cyclic Codes and Convolutional Codes

  2. Topics today • Cyclic codes • presenting codes: code polynomials • systematic and non-systematic codes • generating codes: generator polynomials, feedback shift registers • decoding: syndrome decoding • Convolutional codes • presenting codes • convolutional encoder • code trees and state diagram • generator sequences • structural properties • code weight, path gain, and generating function • code gain • decoding: maximum likelihood detection • Mod-2 arithmetic

  3. Linear block codes: Cyclic codes • For practical applications rather large n and k must be used. This is because, in order to correct up to t errors, the number of syndromes must be at least the number of error patterns in the encoded word: 2^(n-k) >= sum_{i=0..t} C(n,i). Hence large n and k must be used • Advantages of cyclic codes: • Encoding and syndrome computation are easy to implement with shift registers • Handy decoding schemes exist due to the inherent algebraic structure • Many important linear codes are cyclic, as for instance the Hamming, BCH and Golay codes
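
The bound is easy to check numerically. A minimal sketch (the helper function below is ours, not from the slides):

```python
# A minimal check of the bound 2^(n-k) >= sum_{i=0}^{t} C(n, i),
# i.e. there must be at least as many syndromes as correctable
# error patterns. Values shown for the (7,4) Hamming code.
from math import comb

def bound_holds(n: int, k: int, t: int) -> bool:
    """True if an (n, k) code can in principle correct t errors."""
    syndromes = 2 ** (n - k)
    error_patterns = sum(comb(n, i) for i in range(t + 1))
    return syndromes >= error_patterns

print(bound_holds(7, 4, 1))   # True:  8 syndromes, 8 error patterns
print(bound_holds(7, 4, 2))   # False: 8 syndromes, 29 error patterns
```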

  4. Defining cyclic codes: code polynomial • An (n,k) linear code X is called a cyclic code when every cyclic shift of a code word X = (x_{n-1}, x_{n-2}, ..., x_0), as for instance X' = (x_{n-2}, ..., x_0, x_{n-1}), is also a code word • Each cyclic code word has the associated code polynomial X(p) = x_{n-1} p^(n-1) + x_{n-2} p^(n-2) + ... + x_1 p + x_0 • Note that the (n,k) code vector has a polynomial of degree n-1 or less. The mapping between code vector and code polynomial is one-to-one, i.e. they specify each other uniquely • Manipulation of the associated polynomial is done in the Galois field GF(2), having elements {0,1}, where operations are performed mod-2 • For each cyclic code there exists only one generator polynomial, whose degree q = n - k equals the number of check bits in the encoded word

  5. An example of a (7,4) cyclic code with generator polynomial G(p) = 1 + p + p^3 (the slide tabulates the 16 code words)
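
The code-word table can be reproduced with a short sketch that builds all 16 non-systematic code words X(p) = M(p)G(p) and checks the defining cyclic-shift property; representing polynomials as bit masks (LSB = coefficient of p^0) is our choice, not the slide's:

```python
# Generate the (7,4) cyclic code with G(p) = 1 + p + p^3 and verify
# that the set of code words is closed under cyclic shifts.
n, k = 7, 4
G = 0b1011          # coefficients of 1 + p + p^3, LSB = p^0

def gf2_mul(a: int, b: int) -> int:
    """Multiply two GF(2) polynomials given as bit masks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

code = {gf2_mul(m, G) for m in range(2 ** k)}

def cyclic_shift(x: int) -> int:
    """One cyclic shift of an n-bit code vector."""
    return ((x << 1) | (x >> (n - 1))) & ((1 << n) - 1)

assert all(cyclic_shift(x) in code for x in code)
print(f"{len(code)} codewords, closed under cyclic shift")
```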

  6. The common factor of cyclic codes • GF(2) operations: addition is XOR and multiplication is AND • Cyclic code polynomials share the common factor p^n + 1. In order to see this, consider a code word X(p) = x_{n-1} p^(n-1) + ... + x_1 p + x_0 and its unity cyclic shift X'(p) = x_{n-2} p^(n-1) + ... + x_0 p + x_{n-1}, which is also a code word • Adding p X(p) and X'(p) together yields the expression that shows the common factor: p X(p) + X'(p) = x_{n-1}(p^n + 1), i.e. X'(p) = p X(p) mod (p^n + 1)

  7. Factoring the cyclic code generator polynomial • Any factor of p^n + 1 with degree q = n - k generates an (n,k) cyclic code • Example: Consider the polynomial p^7 + 1. This can be factored as p^7 + 1 = (1 + p)(1 + p + p^3)(1 + p^2 + p^3) • For instance the factor 1 + p + p^3, or 1 + p^2 + p^3, can be used to generate a unique cyclic code. For the message polynomial M(p) = 1 + p^2 and G(p) = 1 + p + p^3 the encoded word X(p) = M(p)G(p) = (1 + p^2)(1 + p + p^3) = 1 + p + p^2 + p^5 is generated, and the respective length-7 code vector (listing the coefficients x_0 ... x_6) is X = (1 1 1 0 0 1 0)
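
Both the factorization and the worked example can be verified numerically (polynomials again as bit masks, LSB = coefficient of p^0):

```python
# Verify p^7 + 1 = (1+p)(1+p+p^3)(1+p^2+p^3) over GF(2) and encode
# the message M(p) = 1 + p^2 with G(p) = 1 + p + p^3.
def gf2_mul(a: int, b: int) -> int:
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

f1, f2, f3 = 0b11, 0b1011, 0b1101      # 1+p, 1+p+p^3, 1+p^2+p^3
assert gf2_mul(gf2_mul(f1, f2), f3) == (1 << 7) | 1   # p^7 + 1

M, G = 0b101, 0b1011                   # 1+p^2 and 1+p+p^3
X = gf2_mul(M, G)                      # 1 + p + p^2 + p^5
print(f"{X:07b}"[::-1])                # coefficients x0..x6 -> 1110010
```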

  8. More about the generator polynomial • The generator polynomial of an (n,k) cyclic code is defined by G(p) = p^q + g_{q-1} p^(q-1) + ... + g_1 p + 1, where q = n - k, and G(p) is a factor of p^n + 1. Any factor of p^n + 1 that has the degree q may serve as the generator polynomial. We noticed that a code is generated by the multiplication X(p) = M(p)G(p), where M(p) is a block of k message bits. Hence this gives a criterion for selecting the generator polynomial: it must be a factor of p^n + 1 • Only a few of the possible generator polynomials yield high-quality codes (in terms of their minimum Hamming distance). One example is the (7,4) Hamming code with G(p) = 1 + p + p^3 (the slide lists further example cyclic codes in a table)

  9. Systematic cyclic codes • Define the length-q (q = n - k) check vector C = (c_{q-1}, ..., c_1, c_0) and the length-k message vector M = (m_{k-1}, ..., m_1, m_0), with the polynomials C(p) = c_{q-1} p^(q-1) + ... + c_1 p + c_0 and M(p) = m_{k-1} p^(k-1) + ... + m_1 p + m_0 • Thus the systematic codeword polynomial, of degree n - 1 or less, is X(p) = p^q M(p) + C(p), where p^q M(p) carries the message bits and C(p) the check bits • Question: why do the terms of p^q M(p) still denote the message bits, when the message bits are M(p)? (Multiplying by p^q merely shifts the message bits into the k highest-order coefficient positions; it does not change them.)

  10. Determining check bits • Prove that the check bits can be calculated from the message bits M(p) by C(p) = rem[ p^q M(p) / G(p) ], the remainder of dividing the shifted message by the generator polynomial. Writing p^q M(p) = Q(p)G(p) + C(p) gives X(p) = p^q M(p) + C(p) = Q(p)G(p) in GF(2), so X(p) is divisible by G(p) and must be a systematic code word based on the definition on the previous slide • Example: the (7,4) cyclic code with G(p) = 1 + p + p^3 (worked division shown on the slide)
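
A sketch of this computation in Python (the helper names are ours; polynomials are bit masks with LSB = p^0):

```python
# Check bits via C(p) = rem[ p^q M(p) / G(p) ] for the (7,4) code
# with G(p) = 1 + p + p^3, q = n - k = 3.
def gf2_mod(a: int, b: int) -> int:
    """Remainder of GF(2) polynomial division a / b."""
    db = b.bit_length() - 1            # degree of the divisor
    while a.bit_length() > db:         # while deg(a) >= deg(b)
        a ^= b << (a.bit_length() - 1 - db)
    return a

n, k, G = 7, 4, 0b1011

def encode_systematic(m: int) -> int:
    """X(p) = p^q M(p) + C(p): message in high bits, checks low."""
    q = n - k
    shifted = m << q
    return shifted | gf2_mod(shifted, G)

x = encode_systematic(0b101)           # M(p) = 1 + p^2
assert gf2_mod(x, G) == 0              # every codeword is divisible by G(p)
print(f"{x:07b}")                      # 0101100, i.e. C(p) = p^2
```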

  11. Generating systematic cyclic codes • Both systematic and non-systematic cyclic codes can be generated by using shift registers whose feedback coefficients are determined directly by the generator polynomial • It can be shown that for cyclic codes the generator polynomial is always of the form G(p) = p^q + g_{q-1} p^(q-1) + ... + g_1 p + 1 • In the circuit, first the message flows to the transmitter with the feedback switch set to '1'; after that the check-bit switch is turned on and the feedback switch is set to '0', enabling the check bits to be shifted out

  12. A practical realization for the (7,4) Hamming code (circuit diagram and encoding table on the slide). Use the register equations to check the table entries. Where do the equations come from?
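
The circuit itself appears only on the slide, so the following is a software model of the standard divider register for G(p) = 1 + p + p^3, assumed to correspond to the drawn realization; it reproduces the check bits computed above:

```python
# A q-stage shift register with feedback taps at the nonzero
# coefficients of G(p) = 1 + p + p^3. After clocking in the k
# message bits, the register holds the check bits C(p).
def lfsr_check_bits(msg_bits):            # message MSB (m_{k-1}) first
    reg = [0, 0, 0]                       # stages for p^0, p^1, p^2
    for m in msg_bits:
        fb = m ^ reg[2]                   # feedback = input + high stage
        reg[2] = reg[1]                   # no tap: G has no p^2 term
        reg[1] = reg[0] ^ fb              # tap for the p^1 term of G
        reg[0] = fb                       # tap for the p^0 term of G
    return reg                            # [c0, c1, c2]

print(lfsr_check_bits([0, 1, 0, 1]))      # M(p) = 1 + p^2 -> C(p) = p^2
```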

  13. Decoding cyclic codes • Every valid received code word R(p) must be a multiple of G(p); otherwise an error has occurred. (Assume that the probability for noise to convert code words into other valid code words is very small.) • Therefore dividing R(p) by G(p) and considering the remainder as a syndrome can reveal whether an error has happened, and sometimes also in which bit (depending on code strength) • The division can be accomplished by shift registers • The syndrome, of degree n - k - 1 or less, is therefore S(p) = rem[ R(p) / G(p) ] • This can also be expressed in terms of the error pattern E(p) and the code word X(p): since R(p) = X(p) + E(p) and X(p) is divisible by G(p), it follows that S(p) = rem[ E(p) / G(p) ], hence the syndrome depends only on the error pattern
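
A sketch of the resulting table-lookup decoder for the (7,4) example; since the code corrects t = 1, the table is built from single-bit error patterns:

```python
# Syndrome decoding for the (7,4) code, G(p) = 1 + p + p^3.
# S(p) depends only on E(p), so one table entry per 1-bit error.
def gf2_mod(a, b):
    db = b.bit_length() - 1
    while a.bit_length() > db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

n, G = 7, 0b1011
table = {gf2_mod(1 << i, G): 1 << i for i in range(n)}  # syndrome -> E

def correct(r: int) -> int:
    s = gf2_mod(r, G)
    return r ^ table[s] if s else r      # flip the indicated bit

x = 0b0101100                            # a valid codeword (see above)
assert correct(x ^ (1 << 4)) == x        # a single error is repaired
```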

  14. Decoding cyclic codes: example (shift-register divider and its step-by-step table on the slide, using the notation of this example)

  15. Decoding cyclic codes (cont.) (Table 16.6 reproduced on the slide)

  16. Convolutional coding • Block codes are memoryless • Convolutional codes have memory: previous bits are utilized to encode or decode the following bits • Convolutional codes are specified by n, k and the constraint length, the maximum number of information symbols upon which an output symbol may depend • Thus they are denoted by (n,k,L), where L is the code memory depth • Convolutional codes are commonly used in applications that require relatively good performance with low implementation cost • Convolutional codes are encoded by circuits based on shift registers and decoded by several methods such as • Viterbi decoding, which is a maximum likelihood method • Sequential decoding (performance depends on decoder complexity) • Feedback decoding (simplified hardware, lower performance)

  17. Example: convolutional encoder, (n,k,L) = (2,1,2) • A convolutional encoder is a finite state machine processing information bits in a serial manner • Thus the generated code word is a function of the input and of the state of the machine at that time instant • In this (n,k,L) = (2,1,2) encoder each message bit influences a span of n(L+1) = 6 successive output bits, which is the code constraint length • Thus the (n,k,L) convolutional code is produced by a finite-state machine with 2^(kL) states (here 2^2 = 4); a software sketch of such an encoder follows below
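
The transcript does not carry the register taps of the drawn circuit, so the following finite-state sketch assumes the common (2,1,2) generators g(1) = (1,1,1) and g(2) = (1,0,1); the slide's encoder may differ:

```python
# A finite-state model of a (2,1,2) convolutional encoder with the
# assumed taps g(1) = (1,1,1), g(2) = (1,0,1).
def encode(bits):
    s1 = s2 = 0                        # two register stages = state
    out = []
    for u in bits:
        out += [u ^ s1 ^ s2, u ^ s2]   # two output bits per input bit
        s1, s2 = u, s1                 # shift the register
    return out

u = [1, 1, 1, 0, 1] + [0, 0]           # message + L zeros to flush
print(encode(u))                       # 14 output bits, tail included
```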

  18. A (3,2,1) convolutional encoder. Here each message bit influences a span of n(L+1) = 3(1+1) = 6 successive output bits

  19. Generator sequences • An (n,k,L) convolutional code can be described by the generator sequences g^(1), g^(2), ..., g^(n) that are the impulse responses of each encoder output branch; note that the generator sequence length L + 1 exceeds the register depth by 1 • The generator sequences specify the convolutional code completely, via the generator matrix built from them • The encoded convolutional code is produced by mod-2 matrix multiplication of the input and the generator matrix

  20. Encoding equations • Encoder outputs are formed by modulo-2 discrete convolutions v^(j) = u * g^(j) • Therefore the l:th bit of the j:th output branch is v_l^(j) = sum_{i=0..L} u_{l-i} g_i^(j) (mod 2) • The input for extraction of the generator sequences is the unit impulse u = (1 0 0 ...) • Hence for the circuit on the slide the corresponding encoding equations result; the encoder output interleaves the branch outputs bit by bit
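
The same encoding written as a discrete convolution, with the same assumed generator sequences as in the sketch above; feeding in a unit impulse returns the generator sequences themselves, as stated on the slide:

```python
# Encoding as a mod-2 convolution: v_l^(j) = sum_i u_{l-i} g_i^(j).
g = [(1, 1, 1), (1, 0, 1)]             # assumed generator sequences

def conv_encode(u, g):
    L = len(g[0]) - 1                  # register depth = L
    v = []
    for l in range(len(u) + L):        # run until the register empties
        for gj in g:                   # one output bit per branch
            bit = 0
            for i, gi in enumerate(gj):
                if gi and 0 <= l - i < len(u):
                    bit ^= u[l - i]
            v.append(bit)
    return v

print(conv_encode([1, 0, 0, 0, 0], g)) # impulse in: 11 10 11 00 00 ...
print(conv_encode([1, 1, 1, 0, 1], g)) # same result as the FSM model
```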

  21. Example of using the generator matrix (worked multiplication on the slide). Verify that you can obtain the result shown!

  22. Representing convolutional codes: the code tree. The tree tells how one input bit is transformed into two output bits (initially the register is all zeros)

  23. Representing convolutional codes compactly: code trellis and state diagram. The shift register contents define the states; an input bit '1' is indicated by a dashed line

  24. Structural properties of convolutional codes • Each new block of k input bits causes a transition into a new state • Hence there are 2^k branches leaving each state • Assuming the encoder starts from the all-zero state, the encoded word for any input sequence can thus be read from the state diagram. For instance, below for u = (1 1 1 0 1) the encoded word v = (1 1, 1 0, 0 1, 0 1, 1 1, 1 0, 1 1, 1 1) is produced (encoder state diagram for the (n,k,L) = (2,1,2) coder on the slide). Verify that you obtain the same result!

  25. Code weight, path gain, and generating function • The state diagram can be modified to yield information on code Hamming distance • Rules: • (1) Split S0 into an initial and a final state, and remove the self-loop • (2) Label each branch by the branch gain X^i, where i is the weight of the n encoded bits on that branch • (3) Each path connecting the initial state and the final state represents a nonzero code word that diverges from and re-emerges with S0 only once • The path gain is the product of the branch gains along a path, and the weight of the associated code word is the power of X in the path gain • The code weight distribution is obtained from the 'generating function' (input-output equation) T(X) = sum_i A_i X^i, where A_i is the number of encoded words of weight i

  26. The path representing the state sequence S0 S1 S3 S7 S6 S5 S2 S4 S0 has path gain X^2 X^1 X^1 X^1 X^2 X^1 X^2 X^2 = X^12, and the corresponding code word has weight 12. Where do these terms come from?

  27. Distance properties of convolutional codes • Code strength is measured by the minimum free distance d_free = min{ d(v', v'') : u' != u'' }, where v' and v'' are the encoded words corresponding to the information sequences u' and u'' • The minimum free distance denotes: • The minimum weight of all the paths in the state diagram that diverge from and remerge with the all-zero state S0 • The lowest power of X in the code-generating function T(X) • The achievable code gain grows with the product of the code rate and the free distance, R_c d_free
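
With an encoder model, d_free can be estimated by brute force; for the assumed (1,1,1), (1,0,1) encoder the well-known value 5 results:

```python
# Brute-force estimate of d_free: encode all short nonzero messages
# (zero-padded to return to S0) and take the minimum codeword weight.
# By linearity, distance to the all-zero word equals codeword weight.
from itertools import product

def encode(bits):                       # same assumed taps as before
    s1 = s2 = 0
    out = []
    for u in bits:
        out += [u ^ s1 ^ s2, u ^ s2]
        s1, s2 = u, s1
    return out

d_free = min(
    sum(encode(list(u) + [0, 0]))
    for length in range(1, 8)
    for u in product([0, 1], repeat=length)
    if any(u)
)
print(d_free)                           # 5 for this encoder
```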

  28. Decoding convolutional codes • Maximum likelihood decoding of a convolutional code means finding the path through the code trellis that was most likely transmitted • Therefore maximum likelihood decoding is based on calculating the Hamming distance between the received word and each candidate code path • Assume that the information symbols applied to an AWGN channel are equally likely and independent • Denote by x the transmitted bits and by y the received, to-be-decoded bits: the probability of receiving the sequence y is then p(y|x) = prod_j p(y_j | x_j) • The most likely path through the trellis maximizes this metric • Equivalently the log-metric ln p(y|x) = sum_j ln p(y_j | x_j) is maximized (all probabilities are < 1), which can alleviate computations
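
A minimal hard-decision Viterbi sketch for the assumed (2,1,2) encoder: it keeps one survivor path per state and reads out the path ending in the all-zero state of the flushed encoder:

```python
# Hard-decision Viterbi decoding over the 4-state trellis of the
# assumed g(1) = (1,1,1), g(2) = (1,0,1) encoder.
def viterbi(received):                  # received = list of bit pairs
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    INF = float("inf")
    dist = {s: (0 if s == (0, 0) else INF) for s in states}
    paths = {s: [] for s in states}
    for r in received:
        new_dist = {s: INF for s in states}
        new_paths = {s: [] for s in states}
        for (s1, s2), d in dist.items():
            if d == INF:
                continue                # state not yet reachable
            for u in (0, 1):            # one branch per input bit
                out = (u ^ s1 ^ s2, u ^ s2)
                nxt = (u, s1)
                m = d + (out[0] ^ r[0]) + (out[1] ^ r[1])
                if m < new_dist[nxt]:   # keep the survivor only
                    new_dist[nxt] = m
                    new_paths[nxt] = paths[(s1, s2)] + [u]
        dist, paths = new_dist, new_paths
    return paths[(0, 0)]                # flushed encoder ends in S0

rx = [(1, 0), (0, 1), (1, 0), (1, 1), (0, 0)]   # slide 29's channel output
print(viterbi(rx))                      # [0, 1, 0, 0, 0] for this encoder
```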

  29. Example of exhaustive maximum likelihood detection • Assume a three-bit message is to be transmitted. To clear the encoder, two zero bits are appended after the message. Thus 5 bits are inserted into the encoder and 10 bits are produced. Assume the channel error probability is p = 0.1 and that 10, 01, 10, 11, 00 is received after the channel. What comes out of the decoder, i.e. what was most likely the transmitted sequence?
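
The exhaustive search itself is tiny; note that because the encoder taps are assumed rather than taken from the slide, the winning message may differ from the slide's answer:

```python
# Exhaustive ML decoding: try all 2^3 messages, append two flush
# zeros, encode, and pick the codeword closest in Hamming distance
# to the received word (for p < 0.5 this is the ML decision).
from itertools import product

def encode(bits):                       # same assumed taps as before
    s1 = s2 = 0
    out = []
    for u in bits:
        out += [u ^ s1 ^ s2, u ^ s2]
        s1, s2 = u, s1
    return out

rx = [1, 0, 0, 1, 1, 0, 1, 1, 0, 0]     # 10 01 10 11 00
best = min(
    (list(m) for m in product([0, 1], repeat=3)),
    key=lambda m: sum(a ^ b for a, b in zip(encode(m + [0, 0]), rx)),
)
print(best)                             # most likely 3-bit message
```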

  30. For a binary symmetric channel with error probability p, a received word at Hamming distance d from a transmitted n-bit code word has probability p(y|x) = p^d (1 - p)^(n-d): the factor p^d is the probability of receiving the d bits in error, and (1 - p)^(n-d) that of receiving the remaining bits correctly (metric table on the slide)

  31. Metric comparison of the candidate paths (table on the slide). The largest metric wins; verify that you get the same result! Note also the Hamming distances!

  32. Soft and hard decoding • Regardless of whether the channel outputs hard or soft decisions, the decoding rule remains the same: maximize the probability p(y|x) • However, in soft decoding the energies of the decision regions must be accounted for, and hence a Euclidean metric d_E, rather than the Hamming metric, is used • (Figure: the transition for Pr[3|0] is indicated by the arrow)

  33. Decision regions • Decoding can be realized by the soft-decoding or the hard-decoding principle • For soft decoding, the reliability (measured by bit energy) of each decision region must be known • Example: decoding a BPSK signal: the matched filter output is a continuous number, and in AWGN it is Gaussian • For soft decoding several decision-region partitions are used • (Figure: the transition probability Pr[3|0], i.e. the probability that a transmitted '0' falls into region number 3)
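
A sketch of how such region (transition) probabilities can be computed for BPSK in AWGN; the amplitudes, noise level, thresholds and region numbering below are illustrative assumptions, not values from the slide:

```python
# Soft-decision transition probabilities, assuming BPSK with
# '0' -> +1, '1' -> -1, AWGN of standard deviation sigma, and a
# 4-region quantizer with thresholds (-t, 0, +t).
from math import erf, sqrt, inf

def phi(x, mean, sigma):
    """Gaussian CDF at x."""
    return 0.5 * (1 + erf((x - mean) / (sigma * sqrt(2))))

def region_probs(mean, sigma, thresholds=(-0.5, 0.0, 0.5)):
    edges = (-inf,) + thresholds + (inf,)
    return [phi(hi, mean, sigma) - phi(lo, mean, sigma)
            for lo, hi in zip(edges, edges[1:])]

# Pr[region j | '0' sent]: with this numbering, region 3 lies
# nearest to +1 and region 0 nearest to -1 (a likely bit error).
for j, pr in enumerate(region_probs(mean=+1.0, sigma=0.5)):
    print(f"Pr[{j}|0] = {pr:.4f}")
```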
