This document presents an in-depth examination of orthogonal signaling within a 2-dimensional transmission model. It discusses the implementation of orthogonal binary signaling with two signals, S1(t) and S2(t), and explores Quadrature Amplitude Modulation (QAM) while considering error probabilities and Gaussian noise effects. Key insights include decision regions for maximum likelihood receivers, performance metrics for various modulation schemes, and coding strategies. The analysis aims to improve understanding of efficient data communication techniques and error management in digital transmission systems.
DATA COMMUNICATION: 2-dimensional transmission. A.J. Han Vinck, May 1, 2003
Content: we describe orthogonal signaling and the 2-dimensional transmission model.
"Orthogonal" binary signaling: two signals S1(t) and S2(t), each of duration T. Property: the signals are orthogonal, i.e. the integral of S1(t)S2(t) over [0, T] is zero, and each signal carries the same energy E, the integral of Si(t)² over [0, T].
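To make the orthogonality property concrete, a minimal numerical check in Python; the specific choice of S1(t) and S2(t) as one full period of a sine and a cosine is an illustrative assumption, not taken from the slides:

    import numpy as np

    T = 1.0                        # symbol duration (assumed)
    fs = 10_000                    # samples per second for the numerical integrals
    t = np.arange(0, T, 1 / fs)

    # Two example signals that are orthogonal over [0, T): a full period of sin and cos
    s1 = np.sin(2 * np.pi * t / T)
    s2 = np.cos(2 * np.pi * t / T)

    inner = np.sum(s1 * s2) / fs   # approximates the integral of S1(t)*S2(t) over [0, T]
    E1 = np.sum(s1 ** 2) / fs      # energy of S1
    E2 = np.sum(s2 ** 2) / fs      # energy of S2

    print(f"inner product ~ {inner:.3e}")   # ~ 0  -> orthogonal
    print(f"E1 ~ {E1:.3f}, E2 ~ {E2:.3f}")  # equal energies, here E = T/2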
Quadrature Amplitude Modulation (QAM): S(t) carries two binary data streams at the same time, one on a sine carrier and one on a cosine carrier. (Figure: example waveform S(t) for the bit patterns 1 0 0 and 1 1 0 on the two channels.)
QAM receiver: the received signal r(t) = S(t) + n(t) is multiplied by the sine and cosine carriers and integrated over T; the sign (+/-) of each correlator output gives the 1/0 decision for that channel. Note: sin(x)sin(x) = ½(1 - cos(2x)) and sin(x)cos(x) = ½ sin(2x), so after integration over T the two channels separate.
About the noise. Conclusion: the correlator noise outputs n1 and n2 are Gaussian random variables, zero mean, uncorrelated (and thus statistically independent: f(x,y) = f(x)f(y)), with variance σ².
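A minimal sketch of this correlator receiver in Python; the carrier frequency, noise level and data bits are illustrative assumptions, not values from the slides:

    import numpy as np

    rng = np.random.default_rng(1)
    T, fs, fc = 1.0, 10_000, 5.0             # symbol time, sample rate, carrier frequency (assumed)
    t = np.arange(0, T, 1 / fs)

    a, b = +1, -1                             # data bits mapped to +/-1 on the two channels
    s = a * np.sin(2 * np.pi * fc * t) + b * np.cos(2 * np.pi * fc * t)
    n = rng.normal(0, 0.5, t.size)            # additive Gaussian noise (assumed std 0.5)
    r = s + n                                 # r(t) = S(t) + n(t)

    # Correlate with each carrier; the double-frequency terms from sin*sin and sin*cos
    # integrate to ~0 over a whole number of carrier periods.
    y_sin = 2 / T * np.sum(r * np.sin(2 * np.pi * fc * t)) / fs
    y_cos = 2 / T * np.sum(r * np.cos(2 * np.pi * fc * t)) / fs

    a_hat = 1 if y_sin > 0 else -1            # sign decision per channel (+/- -> 1/0)
    b_hat = 1 if y_cos > 0 else -1
    print(a_hat == a, b_hat == b)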
Geometric presentation (2): the four signal points are labeled 11, 10, 00, 01. ML receiver: find the signal that maximizes p(r|s), i.e. the one with the most probable noise p(n); this partitions the plane into decision regions.
Performance. From Chapter 1: P(error) = Q(d / 2σ), with d the distance between neighboring signal points and σ² the noise variance per dimension.
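A small helper that evaluates this expression numerically (a sketch; the values of d and sigma below are only for illustration):

    from math import erfc, sqrt

    def q_func(x: float) -> float:
        """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
        return 0.5 * erfc(x / sqrt(2))

    d, sigma = 2.0, 0.5          # example distance between signal points and noise std (assumed)
    p_error = q_func(d / (2 * sigma))
    print(p_error)               # ~ Q(2) ~ 2.3e-2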
Extension: 4-QAM carries 2 bits per symbol; 16-QAM carries 4 bits per symbol (four amplitude levels on Channel 1 and four on Channel 2).
Geometric presentation (2): the received point is the transmitted point plus the noise vector n = (n1, n2). The noise vector has length |n| = (n1² + n2²)^½, and since n1 and n2 are independent Gaussians with equal variance, n has a spherically symmetric distribution: equal density on circles around the transmitted point!
Geometric presentation (1): Prob(error) = Prob(length of the noise vector > d/2), with d the distance between the two signal points. (Figure: two signal points r and r' at distance d, decision threshold at d/2.)
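A quick Monte Carlo check of this picture, assuming i.i.d. Gaussian noise components with standard deviation sigma; the values of d and sigma are illustrative:

    import numpy as np
    from math import erfc, sqrt

    rng = np.random.default_rng(0)
    d, sigma, trials = 2.0, 0.5, 200_000   # assumed: distance between points, noise std, trials

    # For the pairwise decision, the noise component along the line joining the two signal
    # points decides the outcome: an error occurs when that component exceeds d/2.
    n_parallel = rng.normal(0, sigma, trials)
    p_sim = np.mean(n_parallel > d / 2)

    p_theory = 0.5 * erfc((d / (2 * sigma)) / sqrt(2))   # Q(d / (2*sigma)) from the previous slide
    print(p_sim, p_theory)                               # the two agree up to simulation noise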
Error probability for coded transmission. The error probability is similar to the 1-dimensional situation: we have to determine the minimum squared Euclidean distance d²_Euclidean between any two codewords. Example (figure): two codewords C and C' and their squared Euclidean distance d²_Euclidean(C, C').
Error probability. The two-codeword error probability is then given by P(C -> C') = Q(d_Euclidean(C, C') / 2σ).
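A sketch of this computation in Python; the tiny three-codeword code and the mapping of bits to +/-1 signal points are purely illustrative assumptions:

    import numpy as np
    from itertools import combinations
    from math import erfc, sqrt

    def q_func(x):
        return 0.5 * erfc(x / sqrt(2))

    # Illustrative code: three codewords, bits mapped to +/-1 signal points per symbol
    codewords = np.array([
        [0, 0, 0],
        [1, 1, 0],
        [1, 0, 1],
    ])
    signals = 2 * codewords - 1          # 0 -> -1, 1 -> +1

    # Minimum squared Euclidean distance between any two codeword signal sequences
    d2_min = min(np.sum((a - b) ** 2) for a, b in combinations(signals, 2))

    sigma = 0.5                          # assumed noise standard deviation per dimension
    p_pairwise = q_func(sqrt(d2_min) / (2 * sigma))
    print(d2_min, p_pairwise)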
Modulation schemes: on-off keying: 1 bit/symbol; FSK: 1 bit/symbol; 8-PSK: 3 bits/symbol; 4-QAM: 2 bits/symbol; 16-QAM: 4 bits/symbol.
Transmitted symbol energy: for a fair comparison, the energy per information bit must be the same for all schemes. (Figure: FSK signal points.)
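A small sketch of this normalization; the bits-per-symbol values repeat the table above and Eb is an arbitrary reference. For an uncoded scheme the symbol energy is the per-bit energy times the number of bits the symbol carries:

    bits_per_symbol = {"on-off": 1, "FSK": 1, "4-QAM": 2, "8-PSK": 3, "16-QAM": 4}

    Eb = 1.0  # energy per information bit, kept equal for all schemes (reference value)
    for scheme, k in bits_per_symbol.items():
        Es = k * Eb   # a symbol carrying k information bits gets k times the per-bit energy
        print(f"{scheme:7s}: {k} bit(s)/symbol, symbol energy Es = {Es:.1f}")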
Performance of FSK. From Chapter 1: P(error) = Q(d / 2σ); for orthogonal FSK with symbol energy E the two signal points lie a distance d = (2E)^½ apart. (Figure: decision boundary at d/2.)
Coding with the same symbol speed. Uncoded: in k symbol transmissions, we transmit k information bits. If we use a rate ½ code and put two code digits on each symbol, then in k symbol transmissions we still transmit k information bits. ML receiver: select the codeword whose signal sequence is closest, in Euclidean distance, to the received sequence.
Famous Ungerböck coding. Uncoded, in k symbol transmissions we transmit 2k digits (2 information bits per symbol). With an expanded constellation of 3 digits per symbol, in the same k symbol transmissions we can transmit 2k information bits and k redundant digits. Hence, we can use a code with rate 2/3 with the same energy per information bit!
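The rate accounting behind this claim, spelled out with the numbers already on the slide: coded, 2k information bits plus k redundant digits make 3k digits in k symbols, i.e. 3 digits per symbol (hence the 8-point signal set in the next slide), and the code rate is 2k / 3k = 2/3 while the information rate stays at 2 bits per symbol.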
Modulator (block diagram): the information bits enter a rate 2/3 encoder; the encoder output ci, with ci in {000, 001, 010, ..., 111}, drives a signal mapper that selects one of the 8 signal points.
Example: the candidate transmit sequences are split into a parity-even set and a parity-odd set (the slide lists example sequences such as 00 00 00, 10 10 01, 11 11 11 and 00 01 00 01, 10 11 10 11). Decoder: 1) first detect whether the parity is even or odd; 2) do ML decoding given the parity from step 1). Homework: estimate the coding gain.
Example: Frequency Shift Keying (FSK). Transmit one of two sinusoids at different frequencies: s(1) := (2E/T)^½ sin(2πf1 t) and s(0) := (2E/T)^½ sin(2πf2 t), for 0 ≤ t ≤ T. Note: the two frequencies are chosen such that s(1) and s(0) are orthogonal.
Modulator/demodulator (block diagram): the message m is mapped by the modulator onto S(t); the demodulator correlates r(t) with each candidate signal and selects the largest correlator output to produce the estimate of m.
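A minimal sketch of such a select-largest demodulator for binary FSK; the tone frequencies, noise level and message value are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(2)
    T, fs = 1.0, 10_000
    f1, f2 = 5.0, 6.0                      # assumed tone frequencies; whole periods in T -> orthogonal
    t = np.arange(0, T, 1 / fs)
    tones = [np.sin(2 * np.pi * f1 * t),   # candidate signal for bit 1
             np.sin(2 * np.pi * f2 * t)]   # candidate signal for bit 0

    bit = 1                                 # message m (assumed)
    s = tones[0] if bit == 1 else tones[1]
    r = s + rng.normal(0, 1.0, t.size)      # received r(t) = S(t) + n(t)

    # Correlate r(t) with each candidate tone and select the largest output
    outputs = [np.sum(r * tone) / fs for tone in tones]
    m_hat = 1 if outputs[0] > outputs[1] else 0
    print(outputs, "->", m_hat)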
Example: Binary Phase Shift Keying (BPSK). Transmit antipodal signals: s(1) := (2E/T)^½ sin(2πf t) and s(0) := -(2E/T)^½ sin(2πf t). The receiver correlates r(t) with the carrier and decides from the sign of the output m': is m' > or < 0?
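And the corresponding sign decision for BPSK, again a sketch with assumed carrier frequency and noise level:

    import numpy as np

    rng = np.random.default_rng(3)
    T, fs, f = 1.0, 10_000, 5.0            # assumed symbol time, sample rate, carrier frequency
    t = np.arange(0, T, 1 / fs)
    carrier = np.sin(2 * np.pi * f * t)

    bit = 0                                 # transmit s(1) = +carrier or s(0) = -carrier
    s = carrier if bit == 1 else -carrier
    r = s + rng.normal(0, 1.0, t.size)      # r(t) = S(t) + n(t)

    m_prime = np.sum(r * carrier) / fs      # single correlator output
    decision = 1 if m_prime > 0 else 0      # decide by the sign: > 0 -> 1, < 0 -> 0
    print(m_prime, "->", decision)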
Modulation formats (signal-space diagrams): on-off, BFSK, BPSK.
PERFORMANCE (figure): error rate from 10^-1 down to 10^-7 versus Eb/N0 in dB (5 to 15 dB), with curves for on-off signaling and for BPSK/QPSK.
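The curves in this figure can be reproduced from the standard AWGN expressions: with Gray mapping, BPSK and QPSK share the bit error rate Q(sqrt(2 Eb/N0)), while on-off and orthogonal FSK signaling with the same average energy per information bit are 3 dB worse at Q(sqrt(Eb/N0)). A sketch of the computation:

    from math import erfc, sqrt

    def q_func(x):
        return 0.5 * erfc(x / sqrt(2))

    print(f"{'Eb/N0 [dB]':>10s} {'BPSK/QPSK':>12s} {'on-off/FSK':>12s}")
    for db in range(5, 16):
        snr = 10 ** (db / 10)                 # Eb/N0 as a linear ratio
        ber_bpsk = q_func(sqrt(2 * snr))      # antipodal signaling
        ber_onoff = q_func(sqrt(snr))         # on-off / orthogonal signaling, same average Eb
        print(f"{db:10d} {ber_bpsk:12.2e} {ber_onoff:12.2e}")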