
ECED 4504 Digital Transmission Theory



Presentation Transcript


  1. ECED 4504 Digital Transmission Theory Jacek Ilow (based on the slides by Matthew Valenti) Performance of Block Codes

  2. Error Probability: Hard Decision Decoding
  • Assumptions:
  • p is the probability that any single code symbol is in error.
  • p = Ps for the type of modulation used (e.g. BPSK), with Eb/No replaced by rEb/No.
  • Errors occur independently.
  • All combinations of t or fewer errors are correctable.
  • Most combinations of more than t errors are not correctable; “perfect” codes cannot correct any combination of more than t errors.
  • Then the code word error probability is bounded by
  P(E) ≤ Σ_{i=t+1}^{n} C(n,i) p^i (1−p)^(n−i),
  with equality for “perfect” codes (Golay, Hamming).

  3. Example: Performance of (7,3) Code
  • Compute Pc for the example code if p = 0.01.
  • Since dmin = 4, t = 1, so
  Pc ≥ (1−p)^7 + 7p(1−p)^6 ≈ 0.99797, i.e. P(E) ≈ 2.0 × 10^−3.
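The binomial-tail bound above can be checked numerically. A minimal Python sketch (the helper name is illustrative, not from the slides):

```python
from math import comb

def codeword_error_prob(n: int, t: int, p: float) -> float:
    """Upper bound on the code word error probability for hard-decision
    bounded-distance decoding: sum over i = t+1 .. n of C(n,i) p^i (1-p)^(n-i).
    Equality holds for perfect codes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

# (7,3) example code: dmin = 4, so t = 1; channel symbol error rate p = 0.01
pe = codeword_error_prob(7, 1, 0.01)
pc = 1 - pe
print(f"P(E) <= {pe:.2e}, Pc >= {pc:.5f}")  # P(E) ≈ 2.0e-03
```

The same helper applies to any (n,k) code once t is known from dmin.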

  4. Error Correction of Digitally Modulated Signals (“hard decision” decoding)
  • Consider the following system:
  k data bits → block encoder (rate r = k/n) → n code bits → BPSK modulator → s(t) → [+ AWGN n(t)] → r(t) → BPSK detector → estimates of code bits → block decoder → estimates of data bits

  5. Error Correction with Digital Modulation
  • The bit error probability is found by using the appropriate equation for the modulation that is being used.
  • However, we want performance as a function of Eb/No, where Eb is energy per data bit (not code bit).
  • The energy per code symbol is Es = rEb.
  • Therefore, we must replace Eb/No with rEb/No in all our error formulas for the different modulation types.
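The Eb/No → rEb/No substitution can be sketched for BPSK, where Pb = Q(√(2Eb/No)). A minimal example, assuming the (7,3) code (the function names and the 6 dB operating point are illustrative):

```python
from math import erfc, sqrt

def qfunc(x: float) -> float:
    """Gaussian Q-function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

def bpsk_symbol_error(ebno_db: float, r: float = 1.0) -> float:
    """BPSK error probability with Eb/No replaced by r*Eb/No:
    p = Q(sqrt(2 * r * Eb/No))."""
    ebno = 10 ** (ebno_db / 10)
    return qfunc(sqrt(2 * r * ebno))

# Code symbols carry only r*Eb each, so the raw channel error rate p
# is worse than the uncoded BPSK bit error rate at the same Eb/No.
p_uncoded = bpsk_symbol_error(6.0)        # r = 1 (uncoded)
p_coded = bpsk_symbol_error(6.0, r=3/7)   # (7,3) code
```

The coded system accepts a higher raw p in exchange for the decoder's ability to correct up to t errors per word.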

  6. Example: Performance of (7,3) Code with BPSK Modulation
  • For uncoded BPSK: Pb = Q(√(2Eb/No)).
  • For our coded system: p = Q(√(2rEb/No)), with r = 3/7.
  • Therefore the code word error probability is:
  P(E) ≤ Σ_{i=2}^{7} C(7,i) p^i (1−p)^(7−i).

  7. A More Interesting Example
  • Find the error probability for a (63,45) BCH code, which has t = 3 (see table 8-1-6).
  • First, compute p: p = Q(√(2(45/63)Eb/No)).
  • Then compute the code word error probability:
  P(E) ≤ Σ_{i=4}^{63} C(63,i) p^i (1−p)^(63−i).
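The two-step chain for the (63,45) BCH code can be sketched end to end; the Eb/No values below are chosen only for illustration:

```python
from math import comb, erfc, sqrt

def qfunc(x: float) -> float:
    """Gaussian Q-function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

def bch_63_45_word_error(ebno_db: float) -> float:
    """Code word error bound for the (63,45) BCH code with t = 3:
    first p = Q(sqrt(2 * (45/63) * Eb/No)), then the binomial tail."""
    n, k, t = 63, 45, 3
    p = qfunc(sqrt(2 * (k / n) * 10 ** (ebno_db / 10)))
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

# Word error probability falls quickly with Eb/No (points are illustrative)
for ebno_db in (4.0, 6.0, 8.0):
    print(f"{ebno_db} dB: P(E) <= {bch_63_45_word_error(ebno_db):.2e}")
```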

  8. Performance of Golay Code: Hard-decision Decoding
  • (23,12), t = 3 Golay code, BPSK modulation.
  • Hard decision decoding:
  P(E) = Σ_{i=4}^{23} C(23,i) p^i (1−p)^(23−i),
  where p = Q(√(2(12/23)Eb/No)).
  • The bound holds with equality because the Golay code is perfect.

  9. Codeword Error Probability and Bit Error Probability
  • If a code word is received correctly, then all data bits will be correctly decoded.
  • If a code word is received incorrectly, then between 1 and k bits will be incorrect at the output of the decoder, so:
  (1/k) P(E) ≤ Pb ≤ P(E).
  • To find the exact BER, we need to know how many errors there are at the output of the decoder whenever it makes an error:
  • Let β(i) be the average number of bit errors when i code bits are in error.
  • Then: Pb = (1/k) Σ_{i=t+1}^{n} β(i) C(n,i) p^i (1−p)^(n−i).
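The sandwich bound on Pb is easy to evaluate without knowing β(i). A minimal sketch for the (7,3) example code (function names are illustrative):

```python
from math import comb

def word_error_bound(n: int, t: int, p: float) -> float:
    """Binomial-tail bound on P(E) for hard-decision decoding."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

def bit_error_bounds(n: int, k: int, t: int, p: float) -> tuple:
    """A decoding failure corrupts between 1 and k data bits, so
    P(E)/k <= Pb <= P(E)."""
    pe = word_error_bound(n, t, p)
    return pe / k, pe

lower, upper = bit_error_bounds(7, 3, 1, 0.01)  # (7,3) example code, p = 0.01
```

The exact Pb, using β(i), always falls between these two values.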

  10. BER of Golay Code
  • To find the BER, we need to know β(i).
  • First find the distance spectrum; at high SNR, the most common error events involve the minimum-weight code words.

  11. Coding Gain
  • The coding gain is the difference between the uncoded and coded Eb/No required to achieve a desired Pb.
  • Usually we use a reference of Pb = 10^−5.
  • The stronger the code, the higher the coding gain.

  12. Performance Curve for (23,12) Code
  [Figure: BER vs. Eb/No (in dB) from 0 to 10 for uncoded BPSK and the (23,12) Golay code with hard-decision decoding. The Golay code shows a 2.1 dB coding gain and is 7.4 dB away from capacity; the capacity curve is at Eb/No = (2^(2r) − 1)/(2r) = 0.07 dB.]

  13. Soft Decision Decoding
  • With hard-decision decoding, a hard decision is made on the bits before decoding takes place: the input to the decoder is hard bit decisions {0,1}.
  • Whenever a hard decision is made, valuable information is lost.
  • We are interested not only in whether the receiver thinks the received code bit was a 0 or a 1, but in how confident it was about that decision.
  • The decoder should rely more on strong signals, and less on weaker signals.
  • Any type of decoder that uses soft information about the confidence of the bit decisions is called a soft-decision decoder.

  14. Hard Decision Decoder for BPSK
  • Assume bits are equally likely.
  [Diagram: r(t) → correlator with f1(t) → hard decision → r → block decoder. The hard decision is where information is lost: it is essentially a 1-bit (2-level) quantizer.]

  15. Softer Decision Decoder for BPSK
  • Replace the 1-bit quantizer with a p-bit quantizer.
  • Note that this requires a more complicated decoder: it must be able to work with more finely quantized samples from the output of the correlator.
  • The benefit is that the decoder can place more confidence in strong signals and less confidence in weak signals.
  • Large (~2 dB) performance gain even for p = 3 bits.
  [Diagram: r(t) → correlator with f1(t) → p-bit quantizer → rQ → block decoder]
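The 1-bit vs. p-bit distinction can be illustrated with a toy quantizer. A sketch, assuming a uniform quantizer over [−1, 1] (the slides do not specify the quantizer shape):

```python
def quantize(sample: float, bits: int, vmax: float = 1.0) -> int:
    """Uniform quantizer with 2**bits levels over [-vmax, vmax].
    bits = 1 reproduces the hard-decision (2-level) detector."""
    levels = 2 ** bits
    step = 2 * vmax / levels
    # Clip to the input range, then map to a level index 0 .. levels-1
    clipped = min(max(sample, -vmax), vmax - 1e-12)
    return int((clipped + vmax) / step)

# A weak positive sample: 1 bit keeps only the sign, 3 bits keep confidence
print(quantize(0.1, 1))  # 1  (just "positive")
print(quantize(0.1, 3))  # 4  (a middle level: low confidence)
print(quantize(0.9, 3))  # 7  (top level: high confidence)
```

With 3 bits the decoder can tell a marginal +0.1 from a confident +0.9; with 1 bit both collapse to the same symbol.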

  16. Soft Decision Decoder for BPSK
  • To achieve a fully soft decoder, simply pass the output of the correlator to the decoder. This is equivalent to letting p → ∞.
  • The vector r contains more information than the quantized vector rQ (by the “data processing theorem”).
  • Therefore we can obtain better performance with soft-decision decoding.
  [Diagram: r(t) → correlator with f1(t) → r → block decoder]

  17. Comments on Soft Decision Decoding
  • Hard decision decoding chooses the code word with the smallest Hamming distance from the received code word.
  • Soft decision decoding chooses the code word with the smallest Euclidean distance from the received code word.
  • For block codes, soft-decision decoders are usually much more complex than hard-decision decoders. Exception: errors-and-erasures decoding of RS codes.
  • However, soft-decision decoding is easy for convolutional codes.
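The Hamming-vs-Euclidean distinction can be seen on a toy (3,1) repetition code under BPSK (the received samples are made up for illustration):

```python
# Toy (3,1) repetition code: bit 1 -> (+1,+1,+1), bit 0 -> (-1,-1,-1)
received = [0.9, -0.2, -0.3]  # one strong positive, two weak negatives

# Hard decision: quantize each sample to a bit, then pick the code word
# at the smallest Hamming distance (majority vote for a repetition code)
hard_bits = [1 if x > 0 else 0 for x in received]
hard_decision = 1 if sum(hard_bits) >= 2 else 0

# Soft decision: pick the code word at the smallest Euclidean distance
def sq_dist(r, c):
    return sum((ri - ci) ** 2 for ri, ci in zip(r, c))

d_one = sq_dist(received, [+1, +1, +1])   # code word for bit 1
d_zero = sq_dist(received, [-1, -1, -1])  # code word for bit 0
soft_decision = 1 if d_one < d_zero else 0

print(hard_decision, soft_decision)  # 0 1: only soft decoding trusts the strong sample
```

The hard decoder sees two "0" votes and loses; the soft decoder weighs the confident +0.9 against the two marginal negatives and decides "1".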

  18. Performance of Soft Decision Decoding
  • Calculate the pairwise error probability between all pairs of code words i ≠ j:
  P(ci → cj) = Q(dE(i,j)/√(2No)),
  where dE(i,j) is the Euclidean distance between modulated code words ci and cj.
  • Euclidean distance is related to Hamming distance; the relationship depends on the type of modulation.
  • For BPSK: dE²(i,j) = 4rEb·dH(i,j), so P(ci → cj) = Q(√(2·dH(i,j)·rEb/No)).

  19. Performance of Soft Decision Decoding
  • Apply the union bound to compute the overall code word error probability:
  P(E) ≤ Σ_{d=dmin}^{n} a_d Q(√(2·d·rEb/No)),
  where a_d is the number of code words of weight w = d.
  • Assume the 2^k code words are equally likely.
  • Assume a linear code: the conditional probability of error is the same for all possible transmitted code words (the uniform error property), so just assume that the all-zeros code word was sent.
  • For high SNR, performance is dominated by the code words of weight w = dmin, giving the “free distance asymptote”:
  P(E) ≈ a_dmin Q(√(2·dmin·rEb/No)).

  20. Bit Error Rate
  • Now use the total information weight B_d (the total number of nonzero information bits summed over all code words of weight d):
  Pb ≤ (1/k) Σ_{d=dmin}^{n} B_d Q(√(2·d·rEb/No)).

  21. Weight Distribution
  • The weight distribution or distance spectrum is the number of code words for each possible weight.
  • Example: Golay code (table 8-1-1):
  d:    0    7    8    11    12    15    16    23
  a_d:  1  253  506  1288  1288  506  253    1
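Given the distance spectrum, the soft-decision union bound of the previous slides can be evaluated directly. A sketch for the (23,12) Golay code (the function name and SNR points are illustrative):

```python
from math import erfc, sqrt

def qfunc(x: float) -> float:
    """Gaussian Q-function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

# Weight distribution of the (23,12) Golay code: {weight d: a_d}
GOLAY_SPECTRUM = {7: 253, 8: 506, 11: 1288, 12: 1288, 15: 506, 16: 253, 23: 1}

def golay_soft_union_bound(ebno_db: float) -> float:
    """Union bound for soft-decision decoding with BPSK:
    P(E) <= sum over d of a_d * Q(sqrt(2 * d * r * Eb/No)), r = 12/23."""
    r, ebno = 12 / 23, 10 ** (ebno_db / 10)
    return sum(a_d * qfunc(sqrt(2 * d * r * ebno))
               for d, a_d in GOLAY_SPECTRUM.items())
```

At high Eb/No the d = 7 term dominates the sum, which is exactly the free distance asymptote.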

  22. Performance of Golay Code: Soft Decision Decoding
  • Soft decision decoding, BPSK modulation:
  P(E) ≤ Σ_d a_d Q(√(2·d·(12/23)·Eb/No)), summing over d ∈ {7, 8, 11, 12, 15, 16, 23}.

  23. Performance of Golay Code: BER of Soft Decision Decoding

  24. Soft-decision vs. Hard-decision Decoding of Golay Code
  [Figure: BER vs. Eb/No (in dB) from 0 to 10, showing the uncoded BER, the soft-decision union bound, the minimum distance asymptote, soft-decision decoding, and hard-decision decoding of the (23,12) Golay code. There is about a 2 dB difference between soft- and hard-decision decoding at high Eb/No.]
