
Noise-Resistant Coding: Concepts and Technologies for Error Detection and Correction

This lecture explores the importance of noise-resistant coding in electronic devices and communication systems. It discusses various error detection and correction schemes used in human communication and electronic devices, including forward error correction techniques. The lecture also introduces block codes, convolutional codes, and interleaving as methods of noise-resistant coding.





Presentation Transcript


  1. Module 2. “Basics of Transmission Theory and Message Coding” Lecture 2.1. NOISE-RESISTANT CODING

  2. NOISE-RESISTANT CODING ● Coding Concepts and Technologies of Noise-Resistant Coding. It is hard to work in a noisy room, because noise makes it harder to think, and work done in such an environment is likely to contain more mistakes. Electronic devices suffer much the same fate: signals, too, are apt to be misinterpreted in an environment that is "noisy".

  3. During a conversation how do you determine if you heard the other person correctly? We human receivers employ various forms of error detection and correction as we try to make sense of what we are hearing. First of all, we use context and if a word or phrase does not seem to fit the topic of conversation, we automatically replace the confusing section with what we expect to hear. If contextual confusion still exists, we ask the person to repeat what he has just said or we may ask the other person to speak louder. All of these are forms of error correction schemes employed during human communication.

  4. Error detection and correction (EDC) to achieve good communication is also employed in electronic devices, and in much the same conceptual way. In fact, without EDC most electronic communication would not be possible: the levels of noise and interference are simply too high in electronic media. The amount of EDC required and its effectiveness depend on the signal-to-noise ratio, or SNR.

  5. EDC can be implemented in the following five ways: increasing signal power, decreasing signal noise, introducing diversity, retransmission, and forward error correction.

  6. Forward Error Correction. When a duplex line is not available or is not practical, a form of error correction called Forward Error Correction (FEC) is used. The receiver has no real-time contact with the transmitter and cannot verify whether a block was received correctly. It must make a decision about the received data and do whatever it can to either fix it or declare an alarm.

  7. FEC techniques place a heavy burden on the link, both by adding redundant data and by adding delay. Another big disadvantage is the lower information rate. At the same time, however, these techniques reduce the need to vary power: for the same power, we can now achieve a lower error rate. The communication in this case remains simplex, and all the burden of detecting and correcting errors falls on the receiver. Complexity at the transmitter is avoided, but it is now placed on the receiver instead.

  8. Digital signals use FEC in three well-known methods: block codes, convolutional codes, and interleaving.

  9. Block codes operate on a block of bits. Using a preset algorithm, we take a group of bits and add a coded part to make a larger block. This block is checked at the receiver. The receiver then makes a decision about the validity of the received sequence. Convolutional codes are referred to as continuous codes, as they operate on a certain number of bits continuously.

  10. Interleaving has mitigating properties for fading channels and works well in conjunction with these two types of coding. In fact all three techniques are used together in many EDC suites such as Digital Video Broadcast, satellite communications, radio and cell phones and baseband systems such as PCs and CD players.

  11. Block Codes. Block codes are referred to as (n, k) codes: a block of k information bits is coded to become a block of n bits. Formal definition: a block code is a code which encodes strings formed from an alphabet set S into code words by encoding each letter of S separately.

  12. But before we go any further with the details, let's look at an important concept in coding called Hamming distance. Let's say that we want to code the 10 integers, 0 to 9, by a digital sequence. Sixteen unique sequences can be obtained from four-bit words. We assign the first ten of these, one to each integer. Each integer is now identified by its own unique sequence of bits, as in the table below. The remaining six possible sequences are unassigned.

  13. The universe of all possible sequences is called the code space. For 4-bit sequences, the code space consists of 16 sequences. Of these we have used only ten. These are the valid words. The ones not used or unassigned are invalid code words. They would never be sent, so if one of these is received then the receiver assumes that an error has occurred.
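
A minimal sketch of this receiver-side check (assuming, for illustration, that the ten valid codewords are simply the 4-bit binary values for 0 through 9; the slide's own table is not reproduced in this transcript):

    valid = {format(i, "04b") for i in range(10)}   # the ten assigned 4-bit codewords

    def detect(received: str) -> bool:
        """Return True if the received word is not a valid codeword (error detected)."""
        return received not in valid

    print(detect("1001"))  # False: a valid codeword, no error apparent
    print(detect("1100"))  # True: an unassigned word, so an error is detected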

  14. Hamming Weight: the Hamming weight of this code scheme is the largest number of 1's in a valid codeword. This number is 3 among the 10 codewords we have chosen (the codewords shown in the table). Concept of Hamming Distance. For continuous variables, we measure distance by Euclidean concepts such as lengths, angles and vectors. In the binary world, the distance between two binary words is measured by something called the Hamming distance. The Hamming distance is the number of disagreements between two binary sequences of the same size. It is a measure of how far apart binary objects are.

  15. The Hamming distance between the sequences 001 and 101 is 1. The Hamming distance between the sequences 0011001 and 1010100 is 4. Hamming distance and weight are very important and useful concepts in coding. Knowledge of the Hamming distance is used to determine the capability of a code to detect and correct errors.
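
As a small illustration (a minimal Python sketch, not part of the original slides), the Hamming distance is simply the count of positions where two equal-length sequences differ:

    def hamming_distance(a: str, b: str) -> int:
        """Number of positions at which two equal-length binary strings differ."""
        if len(a) != len(b):
            raise ValueError("sequences must have the same length")
        return sum(x != y for x, y in zip(a, b))

    print(hamming_distance("001", "101"))          # 1
    print(hamming_distance("0011001", "1010100"))  # 4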

  16. Although the Hamming weight of our chosen code set is 3, the minimum Hamming distance is 1. The codeword 1011 and the codeword 1010 differ by only one bit, so a single bit error can turn 1011 into 1010. The receiver interpreting the received sequence 1010 will see it as a valid codeword and will never know that the sent sequence was actually 1011. So it takes only one bit error to turn one valid codeword into another valid codeword such that no error is apparent. Number 2 can then turn into Number 8 with one bit error in the first bit.

  17. Now extend the chosen code set by one parity bit. Here is what we do: if the number of 1's is even, we extend the sequence by appending a 1, and if the number of 1's is odd, we add a zero at the end of the sequence. The code space now doubles: it goes from 16 possible codewords to 32 codewords, since the code space is equal to 2^N, where N is the length of the sequence. We can extend the sequence yet again using the same process, and the code space increases to 64. In the first case the number of valid codewords is 10 out of 32, and in the second case it is still 10, but out of 64, as shown in Column 4.

  18. We now have three code choices, with 4, 5 or 6-bit words. The Hamming weight of the first set is 3, 4 for the second set and 5 for the last set. The minimum Hamming distance for the first set is 1. The minimum Hamming distance for the second set is 2 bits and 3 for the last set.

  19. In the first set, even a single bit error cannot be tolerated (because it cannot be detected). In the second set, a single error can be detected, because a single bit error does not turn one codeword into another one; its error detection ability is therefore 1 bit. For the last set, we can detect up to 2 errors, so its error detection capability is 2 bits. We can generalize this to say that the maximum number of error bits that can be detected is t = dmin − 1, where dmin is the minimum Hamming distance of the codewords. For a code with dmin = 3, we can detect both 1- and 2-bit errors.

  20. So we want a code set with as large a Hamming distance as possible, since this directly affects our ability to detect errors. It also means that in order to have a large Hamming distance, we need to make our codewords longer. This adds overhead, reduces the information bit rate for a given bandwidth, and proves the adage that you cannot get something for nothing, even in signal processing.

  21. Number of errors we can correct. Detecting is one thing, but correcting is another. For a sequence of 6 bits, there are 6 different ways to get a 1-bit error and 15 different ways to get a 2-bit error: No. of 2-bit combinations = 6! / (2! 4!) = 15.
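
These counts can be checked with the binomial coefficient (a one-line sketch using Python's standard library):

    import math

    print(math.comb(6, 1))  # 6 ways to place a single bit error in 6 bits
    print(math.comb(6, 2))  # 15 ways to place a 2-bit error pattern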

  22. What is the most likely error event, given that this is a Gaussian channel? The most likely event is a 1-bit error, then 2-bit errors, and so on. There is also a possibility that 3 errors can occur, but for a code of 6 bits the maximum we can detect is two bits. This is an acceptable risk, since 3-bit errors are highly unlikely: if the transition probability p is small (<< 1), the probability of getting three errors is the cube of the channel error rate, PE(N) = p^N.

  23. Of course it is indeed possible to get as many as 6 errors in a 6-bit sequence, but since we have no way of detecting this scenario, we get what is called a decoding failure. This is also termed the probability of undetected errors and is the limitation on what can be corrected. The number of errors that we can correct is given by t = (dmin − 1) / 2, rounded down to an integer. For a code with dmin = 3, the correction capability is just 1 bit.
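
The two capability formulas can be written as a small helper (a sketch; integer floor division stands in for the rounding-down on the slide):

    def detect_capability(d_min: int) -> int:
        """Maximum number of bit errors guaranteed to be detected."""
        return d_min - 1

    def correct_capability(d_min: int) -> int:
        """Maximum number of bit errors guaranteed to be corrected."""
        return (d_min - 1) // 2

    print(detect_capability(3), correct_capability(3))  # 2 1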

  24. Creating block codes. Block codes are specified by (n, k). The code takes k information bits and computes (n − k) parity bits from the code generator matrix. Most block codes are systematic, in that the information bits remain unchanged, with the parity bits attached either to the front or to the back of the information sequence. An example is the Hamming code, a simple linear block code.

  25. Hamming codes are the most widely used linear block codes. A Hamming code is generally specified as (2^n − 1, 2^n − n − 1): the size of the block is 2^n − 1, the number of information bits in the block is 2^n − n − 1, and the number of overhead (parity) bits is n. All Hamming codes have the same minimum Hamming distance of 3, so they can detect two errors and correct one. Common sizes of Hamming codes are (7, 4), (15, 11) and (31, 26).
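
The family of code sizes mentioned above follows directly from the (2^n − 1, 2^n − n − 1) rule; a quick sketch, with n being the number of parity bits as on the slide:

    for n in (3, 4, 5):
        block = 2**n - 1       # total bits in the block
        info = 2**n - n - 1    # information bits
        print(f"({block}, {info}) with {n} parity bits")
    # (7, 4) with 3 parity bits
    # (15, 11) with 4 parity bits
    # (31, 26) with 5 parity bits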

  26. Repetition schemes. Variations on this theme exist. Given a stream of data that is to be sent, the data is broken up into blocks of bits, and each block is sent some predetermined number of times. For example, if we want to send "1011", we may repeat this block three times. Suppose we send "1011 1011 1011", and this is received as "1010 1011 1011". As one group is not the same as the other two, we can determine that an error has occurred. This scheme is not very efficient, and it is susceptible to problems if the error occurs in exactly the same place in each group (e.g. "1010 1010 1010" in the example above would be accepted as correct by this scheme).
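
A minimal sketch of this repetition scheme, assuming three repetitions and whole-block comparison as in the example:

    def encode_repetition(block: str, copies: int = 3) -> str:
        return " ".join([block] * copies)

    def check_repetition(received: str) -> bool:
        """Return True if all received copies agree (no error detected)."""
        copies = received.split()
        return all(c == copies[0] for c in copies)

    print(encode_repetition("1011"))           # '1011 1011 1011'
    print(check_repetition("1010 1011 1011"))  # False -> error detected
    print(check_repetition("1010 1010 1010"))  # True  -> the error goes undetected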

  27. Parity schemes. A parity bit is an error detection mechanism that can only detect an odd number of errors. The stream of data is broken up into blocks of bits, and the number of 1 bits is counted. Then, a "parity bit" is set (or cleared) if the number of one bits is odd (or even). (This scheme is called even parity; odd parity can also be used.) If the tested blocks overlap, then the parity bits can be used to isolate the error, and even correct it if the error affects a single bit: this is the principle behind the Hamming code. There is a limitation to parity schemes. A parity bit is only guaranteed to detect an odd number of bit errors (one, three, five, and so on). If an even number of bits (two, four, six and so on) are flipped, the parity bit appears to be correct, even though the data is corrupt.
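
A sketch of the even-parity rule described here (the appended bit makes the total number of 1s even):

    def add_even_parity(block: str) -> str:
        """Append a parity bit so that the total number of 1s is even."""
        parity = block.count("1") % 2
        return block + str(parity)

    def parity_ok(word: str) -> bool:
        """True if the word (data plus parity bit) has an even number of 1s."""
        return word.count("1") % 2 == 0

    word = add_even_parity("1011")   # '10111'
    print(parity_ok(word))           # True
    print(parity_ok("10101"))        # False -> an odd number of bit errors is detected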

  28. Checksum. A checksum of a message is an arithmetic sum of message code words of a certain word length, for example byte values, and their carry value. The sum is negated by means of ones-complement and stored or transferred as an extra code word extending the message. On the receiver side, a new checksum may be calculated from the extended message. If the new checksum is not 0, an error has been detected. Checksum schemes include parity bits, check digits, and the longitudinal redundancy check.
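
A sketch of this ones'-complement checksum idea, assuming a word length of one byte (the message bytes below are purely illustrative):

    def ones_complement_checksum(data: bytes) -> int:
        """Ones'-complement sum of the bytes, folded to 8 bits and then negated."""
        total = 0
        for b in data:
            total += b
            total = (total & 0xFF) + (total >> 8)   # fold any carry back in
        return (~total) & 0xFF                       # negate (ones' complement)

    msg = bytes([0x10, 0x23, 0xF5])
    chk = ones_complement_checksum(msg)              # transmitted along with the message

    # Receiver side: recompute over the extended message; 0 means no error detected.
    print(ones_complement_checksum(msg + bytes([chk])))  # 0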

  29. Cyclic redundancy checks. More complex error detection (and correction) methods make use of the properties of finite fields and polynomials over such fields. The cyclic redundancy check treats a block of data as the coefficients of a polynomial and then divides by a fixed, predetermined polynomial. The coefficients of the remainder of the division are taken as the redundant data bits, the CRC. On reception, one can recompute the CRC from the payload bits and compare this with the CRC that was received. A mismatch indicates that an error occurred.
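
A bit-by-bit sketch of the CRC idea; the generator polynomial x^3 + x + 1 (bits 1011) and the data string are illustrative assumptions, not taken from the slides:

    def crc_remainder(bits: str, poly: str = "1011") -> str:
        """Binary polynomial division: return the remainder (the CRC bits)."""
        n = len(poly) - 1                      # degree of the generator polynomial
        padded = list(bits + "0" * n)          # append n zero bits to the message
        for i in range(len(bits)):
            if padded[i] == "1":
                for j, p in enumerate(poly):
                    padded[i + j] = str(int(padded[i + j]) ^ int(p))
        return "".join(padded[-n:])

    data = "11010011101100"
    crc = crc_remainder(data)                  # '100'
    # Receiver: dividing the transmitted data + CRC by the same polynomial
    # leaves a zero remainder, so any nonzero remainder signals an error.
    print(crc_remainder(data + crc) == "000")  # True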

  30. Hamming distance based checks. If we want to detect d bit errors in an n-bit word, we can map every n-bit word into a bigger (n+d+1)-bit word so that the minimum Hamming distance between valid mappings is d+1. This way, if one receives an (n+d+1)-bit word that doesn't match any word in the mapping, it can successfully be detected as an erroneous word. Moreover, d or fewer errors will never transform a valid word into another, because the Hamming distance between valid words is at least d+1, and such errors only lead to invalid words, which are detected correctly. Given a stream of m*n bits, we can detect x <= d bit errors successfully using the above method on every n-bit word. In fact, we can detect a maximum of m*d errors if every n-bit word is transmitted with at most d errors.

  31. Hash function. Any hash function can be used as an integrity check. Horizontal and vertical redundancy check. Other types of redundancy check include the horizontal redundancy check, the vertical redundancy check, and "double", "dual" or "diagonal" parity (used in RAID-DP).

  32. Contents: Introduction (credit card error checking; what is a code; purpose of error-correction codes); Encoding (naïve approach; Hamming codes); Minimum Weight Theorem (definitions; proof of single error-correction); Decoding (list all possible messages; using vectors; syndrome); Conclusion (perfect codes).

  33. Detect Error On Credit Card

  34. Formula for detecting an error. Let d2, d4, d6, d8, d10, d12, d14, d16 be the digits in the even positions of the credit card number, and let d1, d3, d5, d7, d9, d11, d13, d15 be the digits in the odd positions. Let n be the number of odd-position digits whose value exceeds four. The credit card number has an error if the following is true: (d1 + d3 + d5 + d7 + d9 + d11 + d13 + d15) × 2 + n + (d2 + d4 + d6 + d8 + d10 + d12 + d14 + d16) ≠ 0 (mod 10).

  35. Detect Error On Credit Card: a sample 16-digit card number with its digits labelled d1 d2 d3 … d15 d16; for this number, n = 3.

  36. Now the test: (4 + 4 + 8 + 1 + 3 + 5 + 7 + 9) = 41; (5 + 2 + 1 + 0 + 3 + 4 + 6 + 8) × 2 + 3 = 61; 41 + 61 = 102; 102 mod 10 = 2 ≠ 0, so the number fails the check and an error is detected.
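
A sketch of this parity-check equation in code; the 16-digit number below is reconstructed from the worked example above and is purely illustrative:

    def card_has_error(number: str) -> bool:
        """Apply the slide's parity-check equation to a 16-digit card number."""
        digits = [int(c) for c in number]
        odd = digits[0::2]                   # d1, d3, ..., d15 (odd positions)
        even = digits[1::2]                  # d2, d4, ..., d16 (even positions)
        n = sum(1 for d in odd if d > 4)     # odd-position digits exceeding four
        total = sum(odd) * 2 + n + sum(even)
        return total % 10 != 0               # nonzero remainder -> error detected

    print(card_has_error("5424180133456789"))  # True: 102 mod 10 = 2, error detected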

  37. Credit Card Summary. The test performed on the credit card number is called a parity check equation. The last digit is a function of the other digits in the credit card. This is how credit card numbers are generated by Visa and Mastercard: they start with an account number that is 15 digits long and use the parity check equation to find the value of the 16th digit. “This method allows computers to detect 100% of single-position errors and about 98% of other common errors” (For All Practical Purposes, p. 354).

  38. What is a code? A code word is defined as an n-tuple of elements of q, where q is any alphabet. Ex. 1001: n = 4, q = {1, 0}. Ex. 2389047298738904: n = 16, q = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}. Ex. (a, b, c, d, e): n = 5, q = {a, b, c, d, e, …, y, z}. The most common case is q = {1, 0}, which is known as a binary code.

  39. The purpose. A message can become distorted through a wide range of unpredictable errors: humans, equipment failure, lightning interference, scratches in a magnetic tape.

  40. Why error-correcting code? To add redundancy to a message so the original message can be recovered if it has been garbled. e.g. message = 10 code = 1010101010

  41. Send a message: Message (10) → Encoder (101010) → Channel (noise: 001010) → Decoder → Message (10).

  42. Encoding: naïve approach; Hamming codes.

  43. The naïve approach: append the same message multiple times, then take the value with the highest average in each position. Message: 1001. Encode: 1001100110011001. Channel: 1001100100011001. Decode: a1 = Average(1, 1, 0, 1) = 1; a2 = Average(0, 0, 0, 0) = 0; … giving (a1, a2, a3, a4). Message: 1001.
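
A sketch of this naïve scheme, assuming four repetitions and bit-wise majority voting (which is what the per-position average amounts to; ties are broken toward 1 here, a detail the slide does not specify):

    def encode_naive(message: str, copies: int = 4) -> str:
        return message * copies

    def decode_naive(received: str, copies: int = 4) -> str:
        """Majority-vote each bit position across the repeated copies."""
        length = len(received) // copies
        blocks = [received[i * length:(i + 1) * length] for i in range(copies)]
        decoded = ""
        for pos in range(length):
            ones = sum(block[pos] == "1" for block in blocks)
            decoded += "1" if ones * 2 >= copies else "0"
        return decoded

    sent = encode_naive("1001")      # '1001100110011001'
    received = "1001100100011001"    # one corrupted copy, as in the slide
    print(decode_naive(received))    # '1001'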

  44. Hamming [7,4] Code. The seven is the number of digits that make up a codeword, e.g. 0100101. The four is the number of information digits carried in the codeword; e.g. in 0100101, four of the seven digits carry the information.

  45. Hamming [7,4] Encoding. The message is encoded with a generator matrix; all codewords can be formed from combinations of the rows of this matrix. The code generator matrix for this presentation is the 4×7 matrix whose rows are 1000011, 0100101, 0010110 and 0001111.

  46. Hamming [7,4] Codes. The 16 valid codewords are: 1000011, 0100101, 0010110, 0001111, 1100110, 1010101, 1001100, 0110011, 0101010, 0011001, 1101001, 1011010, 1111111, 0111100, 1110000, 0000000. These are the only valid words among the 2^7 = 128 possible 7-bit sequences.
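
The full codeword list can be regenerated programmatically. Below is a sketch assuming the systematic generator matrix whose rows are the codewords 1000011, 0100101, 0010110 and 0001111 (an inference from the list above, since the matrix image itself is not reproduced in this transcript):

    # Rows of the assumed generator matrix G = [I4 | P] for the Hamming [7,4] code.
    G = ["1000011", "0100101", "0010110", "0001111"]

    def encode(info: str) -> str:
        """XOR together the generator rows selected by the four information bits."""
        word = [0] * 7
        for bit, row in zip(info, G):
            if bit == "1":
                word = [w ^ int(r) for w, r in zip(word, row)]
        return "".join(map(str, word))

    codewords = sorted(encode(format(i, "04b")) for i in range(16))
    print(len(codewords), codewords)   # 16 codewords out of 2**7 = 128 possible words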
