Lecture 06 - Optimal Receiver Design
Modulation
A signal space representation is a convenient framework for viewing modulation which allows us to:
- design energy- and bandwidth-efficient signal constellations
- determine the form of the optimal receiver for a given signal set
Problem Statement
We transmit a signal $s(t) \in \{s_1(t), s_2(t), \ldots, s_M(t)\}$, where $s(t)$ is nonzero only on $t \in [0,T]$. Let the various signals be transmitted with probabilities:

$p_1 = \Pr[s_1(t)], \ldots, p_M = \Pr[s_M(t)]$

Given $r(t)$, the receiver forms an estimate $\hat{s}(t)$ of the signal $s(t)$ with the goal of minimizing the symbol error probability

$P_s = \Pr[\hat{s}(t) \neq s(t)]$
Noise Model

The channel adds noise to the transmitted signal: $r(t) = s(t) + n(t)$, where $n(t)$ is white Gaussian noise.

Expanding the noise onto the orthonormal basis $\{f_k(t)\}$:

$n(t) = \sum_{k=1}^{K} n_k f_k(t) + \tilde{n}(t), \quad \text{where } n_k = \int_0^T n(t) f_k(t)\,dt$

and $\tilde{n}(t)$ is the portion of the noise lying outside the signal space. The received signal is then

$r(t) = \sum_{k=1}^{K} s_{m,k} f_k(t) + \sum_{k=1}^{K} n_k f_k(t) + \tilde{n}(t) = \sum_{k=1}^{K} r_k f_k(t) + \tilde{n}(t)$

where $r_k = s_{m,k} + n_k$. Since $\tilde{n}(t)$ is uncorrelated with the in-space noise components and carries no signal component, it is irrelevant to the decision and can be discarded.
We transmit a $K$-dimensional signal vector $\mathbf{s} = [s_1, s_2, \ldots, s_K] \in \{\mathbf{s}_1, \ldots, \mathbf{s}_M\}$. We receive a vector $\mathbf{r} = [r_1, \ldots, r_K] = \mathbf{s} + \mathbf{n}$, the sum of the signal vector and the noise vector $\mathbf{n} = [n_1, \ldots, n_K]$. Given $\mathbf{r}$, we wish to form an estimate $\hat{\mathbf{s}}$ of the transmitted signal vector which minimizes $P_s = \Pr[\hat{\mathbf{s}} \neq \mathbf{s}]$.
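The vector channel model is easy to simulate numerically. Below is a minimal sketch in Python with NumPy (the function name `awgn_channel` and the parameter values are illustrative, not from the lecture) that adds independent Gaussian noise of variance $N_0/2$ to each dimension:

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn_channel(s, N0, rng=rng):
    """Return r = s + n, where each noise component n_k is Gaussian
    with zero mean and variance N0/2 (per-dimension noise power)."""
    s = np.asarray(s, dtype=float)
    return s + rng.normal(0.0, np.sqrt(N0 / 2), size=s.shape)

# Hypothetical 2-D signal vector transmitted at N0 = 0.5
r = awgn_channel([1.0, 1.0], N0=0.5)
```

Averaged over many trials, the empirical per-dimension noise variance converges to $N_0/2$, matching the noise model derived below.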
Suppose that the signal vectors $\{\mathbf{s}_1, \ldots, \mathbf{s}_M\}$ are transmitted with probabilities $\{p_1, \ldots, p_M\}$ respectively, and the vector $\mathbf{r}$ is received. We minimize the symbol error probability by choosing the signal $\mathbf{s}_m$ which satisfies:

$\Pr(\mathbf{s}_m \mid \mathbf{r}) \geq \Pr(\mathbf{s}_i \mid \mathbf{r}), \quad \forall i \neq m$

Equivalently, by Bayes' rule (since $p(\mathbf{r})$ is common to all hypotheses):

$p_m\, p(\mathbf{r} \mid \mathbf{s}_m) \geq p_i\, p(\mathbf{r} \mid \mathbf{s}_i), \quad \forall i \neq m$

This is the MAP (maximum a posteriori) rule.
If $p_1 = \cdots = p_M$, or the a priori probabilities are unknown, the MAP rule simplifies to the ML (maximum likelihood) rule: we minimize the symbol error probability by choosing the signal $\mathbf{s}_m$ which satisfies $p(\mathbf{r} \mid \mathbf{s}_m) \geq p(\mathbf{r} \mid \mathbf{s}_i), \forall i \neq m$.
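As a sketch of how the two rules differ in practice, the following Python fragment implements both for Gaussian noise (the log-likelihood form used here anticipates the density derived in the next section; the function names are mine) and shows a case where a strong prior overrides the nearest-signal decision:

```python
import numpy as np

def map_decide(r, signals, priors, N0):
    """MAP rule: pick m maximizing p_m * p(r|s_m), i.e. the log-metric
    ln p_m - ||r - s_m||^2 / N0 for Gaussian noise of variance N0/2."""
    metrics = [np.log(p) - np.sum((r - s) ** 2) / N0
               for s, p in zip(signals, priors)]
    return int(np.argmax(metrics))

def ml_decide(r, signals):
    """ML rule: ignore priors, pick the nearest signal vector."""
    return int(np.argmin([np.sum((r - s) ** 2) for s in signals]))

signals = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
r = np.array([-0.1, 0.0])
# ML picks the nearer signal s_2; MAP with priors (0.99, 0.01) and
# large N0 overrides it in favor of the far more probable s_1.
```

With equal priors the two rules always agree, which is exactly the simplification stated above.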
Evaluation of Probabilities
In order to apply either the MAP or ML rule, we need to evaluate $p(\mathbf{r} \mid \mathbf{s}_m)$. Since $\mathbf{r} = \mathbf{s}_m + \mathbf{n}$ where $\mathbf{s}_m$ is constant, it is equivalent to evaluate $p(\mathbf{n}) = p(n_1, \ldots, n_K)$.

The noise projections have correlation

$E[n_i n_k] = \frac{N_0}{2} \int_0^T f_i(t) f_k(t)\,dt = \begin{cases} N_0/2, & i = k \\ 0, & i \neq k \end{cases}$

Since $E[n_i n_k] = 0$ for $i \neq k$, the individual noise components are uncorrelated, and (being jointly Gaussian) therefore independent. Since $E[n_k^2] = N_0/2$, each noise component has variance $N_0/2$. Hence

$p(n_1, \ldots, n_K) = p(n_1) \cdots p(n_K)$
The transmitted signal values in each dimension are the mean values of the corresponding received components, so

$p(\mathbf{r} \mid \mathbf{s}_m) = (\pi N_0)^{-K/2} \exp\left(-\frac{1}{N_0} \sum_{k=1}^{K} (r_k - s_{m,k})^2\right)$
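This density can be evaluated directly; the short Python sketch below (the function name is mine) computes $p(\mathbf{r} \mid \mathbf{s}_m)$ for a given received vector:

```python
import numpy as np

def likelihood(r, s_m, N0):
    """Gaussian likelihood p(r|s_m) = (pi*N0)^(-K/2) * exp(-||r - s_m||^2 / N0),
    i.e. K independent Gaussians each with mean s_mk and variance N0/2."""
    r = np.asarray(r, dtype=float)
    s_m = np.asarray(s_m, dtype=float)
    K = r.size
    return (np.pi * N0) ** (-K / 2) * np.exp(-np.sum((r - s_m) ** 2) / N0)
```

For $K = 1$ the function integrates to 1 over the real line, confirming the normalization constant $(\pi N_0)^{-K/2}$.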
The MAP detector is therefore

$\hat{\mathbf{s}} = \arg\max_{\mathbf{s}_m \in \{\mathbf{s}_1, \ldots, \mathbf{s}_M\}} p_m (\pi N_0)^{-K/2} \exp\left(-\frac{1}{N_0} \sum_{k=1}^{K} (r_k - s_{m,k})^2\right)$

Taking the logarithm (a monotonically increasing function, so the arg max is unchanged):

$\hat{\mathbf{s}} = \arg\max_{\mathbf{s}_m \in \{\mathbf{s}_1, \ldots, \mathbf{s}_M\}} \left[ \ln p_m - \frac{K}{2} \ln(\pi N_0) - \frac{1}{N_0} \sum_{k=1}^{K} (r_k - s_{m,k})^2 \right]$

Expanding the square:

$\sum_{k=1}^{K} (r_k - s_{m,k})^2 = \sum_{k=1}^{K} r_k^2 - 2 \sum_{k=1}^{K} r_k s_{m,k} + \sum_{k=1}^{K} s_{m,k}^2$

Eliminating the terms which are identical for all choices of $m$:

$\hat{\mathbf{s}} = \arg\max_{\mathbf{s}_m \in \{\mathbf{s}_1, \ldots, \mathbf{s}_M\}} \left[ \ln p_m + \frac{2}{N_0} \sum_{k=1}^{K} r_k s_{m,k} - \frac{1}{N_0} \sum_{k=1}^{K} s_{m,k}^2 \right]$
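Since the discarded terms ($\frac{K}{2}\ln(\pi N_0)$ and $\sum_k r_k^2$) are the same for every $m$, the simplified metric yields exactly the same decisions as the full log-posterior. A quick Python check (function names are mine):

```python
import numpy as np

def map_metric(r, s_m, p_m, N0):
    """Simplified MAP metric: ln p_m + (2/N0) sum_k r_k s_mk - (1/N0) sum_k s_mk^2."""
    r, s_m = np.asarray(r, float), np.asarray(s_m, float)
    return np.log(p_m) + (2.0 / N0) * (r @ s_m) - (s_m @ s_m) / N0

def full_metric(r, s_m, p_m, N0):
    """Unsimplified log-posterior metric: ln p_m - ||r - s_m||^2 / N0."""
    r, s_m = np.asarray(r, float), np.asarray(s_m, float)
    return np.log(p_m) - np.sum((r - s_m) ** 2) / N0
```

The two metrics differ only by $\sum_k r_k^2 / N_0$, which does not depend on $m$, so their arg max over the signal set is always identical.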
Note that the prior term $\ln p_m$ carries more weight relative to the correlation terms as $N_0$ grows: the noisier the channel, the more the a priori probabilities influence the decision.
Multiplying the metric through by $N_0/2$ (which does not change the arg max) and writing $E_m = \sum_k s_{m,k}^2$, the decision rule can be implemented as a bank of $M$ signal correlators:

[Figure: correlation receiver. $r(t)$ is correlated with each signal, $\int_0^T r(t)\, s_m(t)\,dt$; a bias of $\frac{N_0}{2} \ln(p_m) - \frac{E_m}{2}$ is added to the $m$-th branch; the receiver chooses the largest result.]
ML case: when all signals are equally likely ($p_1 = \cdots = p_M$), the a priori terms can be ignored; when all signals have equal energy ($E_1 = \cdots = E_M$), the energy terms can be ignored as well. We can also reduce the number of correlations (from $M$ to $K$) by directly implementing:

$\hat{\mathbf{s}} = \arg\max_{\mathbf{s}_m \in \{\mathbf{s}_1, \ldots, \mathbf{s}_M\}} \left[ \frac{N_0}{2} \ln p_m + \sum_{k=1}^{K} r_k s_{m,k} - \frac{1}{2} \sum_{k=1}^{K} s_{m,k}^2 \right]$
[Figure: correlation receiver using basis functions. $r(t)$ is correlated with each basis function, $r_k = \int_0^T r(t) f_k(t)\,dt$, to form $\mathbf{r} = [r_1, \ldots, r_K]$. For each signal the receiver computes $\sum_{k=1}^{K} s_{m,k} r_k$, adds the bias $\frac{N_0}{2} \ln(p_m) - \frac{E_m}{2}$, and chooses the largest.]
We can implement each correlation by passing $r(t)$ through a filter with impulse response $h_k(t) = f_k(T - t)$ and sampling its output at time $T$:

$r_k = \int_0^T r(\tau) f_k(\tau)\,d\tau = \left. r(t) * h_k(t) \right|_{t=T}$

where $r(t) * h_k(t)|_{t=T}$ denotes the convolution of the signals $r(t)$ and $h_k(t)$ evaluated at time $T$. This is the matched filter implementation.
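The equivalence is easy to verify in discrete time: correlating a sampled $r(t)$ with a pulse gives the same number as convolving with the time-reversed pulse and sampling at $t = T$. A sketch (the pulse shape and noise level here are illustrative):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 2.0, dt)           # T = 2, matching the example that follows
f = np.where(t < 1.0, 1.0, -1.0)      # hypothetical pulse on [0, 2]

rng = np.random.default_rng(1)
r = f + rng.normal(0.0, 0.3, t.size)  # noisy received waveform (sampled)

# Direct correlation: integral of r(t) f(t) over [0, T]
corr = np.sum(r * f) * dt

# Matched filter: h(t) = f(T - t); convolve, then sample the output at t = T
h = f[::-1]
matched = np.convolve(r, h)[t.size - 1] * dt
```

Sampling the convolution at the final index corresponds to $t = T$, where the time-reversed filter exactly overlays the pulse, so `corr` and `matched` agree to numerical precision.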
[Figure: matched filter receiver. $r(t)$ is passed through filters $h_1(t), \ldots, h_K(t)$; each output is sampled at $t = T$ to produce $\mathbf{r} = [r_1, \ldots, r_K]$.]
Example: consider the signal set below, built from two orthonormal basis functions $f_1(t)$ and $f_2(t)$ on $[0, 2]$.

[Figure: waveforms of $s_1(t), s_2(t), s_3(t), s_4(t)$ (each taking values $\pm 1$ on $0 \leq t \leq 2$) and the basis functions $f_1(t), f_2(t)$.]

$s_1(t) = +1 f_1(t) + 1 f_2(t) \qquad s_3(t) = -1 f_1(t) + 1 f_2(t)$
$s_2(t) = +1 f_1(t) - 1 f_2(t) \qquad s_4(t) = -1 f_1(t) - 1 f_2(t)$

with $T = 2$ and $E_1 = E_2 = E_3 = E_4 = 2$.
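In vector form the example set is the four corners of a square in the $\{f_1, f_2\}$ plane; a small Python check confirms the energies $E_m = 2$ quoted above:

```python
import numpy as np

# Signal vectors for the example, expressed in the {f1, f2} basis
signals = np.array([[ 1.0,  1.0],   # s1
                    [ 1.0, -1.0],   # s2
                    [-1.0,  1.0],   # s3
                    [-1.0, -1.0]])  # s4

# E_m = sum_k s_mk^2 (valid because the basis is orthonormal)
energies = np.sum(signals ** 2, axis=1)
```

Because the basis is orthonormal, the waveform energy equals the squared length of the signal vector, so all four energies come out to 2.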
[Figure: signal-correlator implementation for the example. $r(t)$ is correlated with each of $s_1(t), \ldots, s_4(t)$ over $[0, 2]$; since the signal energies are equal, only the bias $\frac{N_0}{2} \ln(p_m)$ is added to each branch before choosing the largest.]
[Figure: basis-correlator implementation for the example. $r(t)$ is correlated with $f_1(t)$ and $f_2(t)$ over $[0, 2]$ to form $r_1$ and $r_2$; the receiver then computes $\pm r_1 \pm r_2 + \frac{N_0}{2} \ln(p_m)$ for $m = 1, \ldots, 4$ and chooses the largest. Only two correlators are needed instead of four.]
[Figure: matched filter implementation for the example. $r(t)$ is passed through $h_1(t) = f_1(2 - t)$ and $h_2(t) = f_2(2 - t)$, and the outputs are sampled at $t = 2$ to form $\mathbf{r} = [r_1, r_2]$.]
The bias term added to each branch serves two roles:
- $-E_m/2$ normalizes the correlation to account for differences in signal energy
- $\frac{N_0}{2} \ln(p_m)$ weights the a priori probabilities according to the noise power

This receiver is completely general for any signal set; simplifications are possible under many circumstances.
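To see the general receiver at work, the following Monte Carlo sketch (Python; the priors and the $N_0$ value are illustrative choices, not from the lecture) runs the metric $\frac{N_0}{2}\ln p_m + \sum_k r_k s_{m,k} - \frac{E_m}{2}$ on the example signal set and compares its symbol error rate against the ML rule that ignores the priors:

```python
import numpy as np

rng = np.random.default_rng(2)
signals = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
priors = np.array([0.7, 0.1, 0.1, 0.1])   # hypothetical unequal priors
N0 = 2.0
trials = 20000

# Draw symbols according to the priors and pass them through the vector channel
m = rng.choice(4, size=trials, p=priors)
r = signals[m] + rng.normal(0.0, np.sqrt(N0 / 2), size=(trials, 2))

E = np.sum(signals ** 2, axis=1)

# General MAP receiver: (N0/2) ln p_m + sum_k r_k s_mk - E_m/2
map_hat = np.argmax((N0 / 2) * np.log(priors) + r @ signals.T - E / 2, axis=1)

# ML receiver: same metric with the prior term dropped
ml_hat = np.argmax(r @ signals.T - E / 2, axis=1)

Ps_map = np.mean(map_hat != m)
Ps_ml = np.mean(ml_hat != m)
```

With unequal priors the MAP receiver's measured error rate is no worse than the ML receiver's, illustrating that exploiting the a priori probabilities can only help.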
Decision Regions
The Matlab script file sigspace.m (on the course web page) can be used to visualize two-dimensional signal spaces and decision regions. The function is called with the following syntax:

sigspace( [x1 y1 p1; x2 y2 p2; ... ; xM yM pM], Eb/N0 )
- $E_b$ is the average energy per bit; it has units of Joules = Watts·sec
- $N_0/2$ is the (two-sided) power spectral density of the background noise; $N_0$ has units of Watts/Hz = Watts·sec
- The ratio $E_b/N_0$ is dimensionless and measures the relative strength of signal and noise at the receiver
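When simulating, it is convenient to convert an $E_b/N_0$ specification into the per-dimension noise standard deviation $\sqrt{N_0/2}$. A small helper (this sketch assumes $E_b/N_0$ is given in dB, a common convention; check sigspace.m itself for the convention it uses):

```python
import numpy as np

def noise_sigma(Eb, EbN0_dB):
    """Per-dimension noise std dev sqrt(N0/2), given Eb (Joules) and Eb/N0 in dB.
    N0 = Eb / 10^(EbN0_dB / 10), so larger Eb/N0 means smaller noise."""
    N0 = Eb / 10.0 ** (EbN0_dB / 10.0)
    return np.sqrt(N0 / 2.0)
```

At 0 dB the noise variance per dimension equals $E_b/2$; every additional 10 dB shrinks it by a factor of 10.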
sigspace( [1.5 -1.5; 1.5 -0.5; 1.5 0.5; 1.5 1.5; 0.5 -1.5; 0.5 -0.5; 0.5 0.5; 0.5 1.5; -1.5 -1.5; -1.5 -0.5; -1.5 0.5; -1.5 1.5; -0.5 -1.5; -0.5 -0.5; -0.5 0.5; -0.5 1.5], 10 )
- Decision boundaries are perpendicular to the line drawn between two signal points.
- If the signal probabilities are equal, the boundaries lie exactly halfway between signal points.
- If the signal probabilities are unequal, the decision region of the less probable signal shrinks.
- For low $E_b/N_0$ and unequal probabilities, signal points need not lie within their own decision regions.
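The boundary shift for unequal priors can be made concrete in one dimension. For two antipodal signals $\pm a$, setting the two MAP metrics equal gives the threshold $x^* = \frac{N_0}{4a} \ln(p_2/p_1)$: it sits at the midpoint for equal priors and moves toward the less probable signal otherwise. A sketch (Python; the function name is mine):

```python
import numpy as np

def map_threshold(a, p1, p2, N0):
    """MAP decision boundary between s1 = +a and s2 = -a in AWGN with
    per-dimension variance N0/2: decide s1 when x > (N0/(4a)) ln(p2/p1).
    Follows from equating ln p1 - (x-a)^2/N0 with ln p2 - (x+a)^2/N0."""
    return (N0 / (4.0 * a)) * np.log(p2 / p1)
```

For $p_1 > p_2$ the threshold is negative, so the region for the more probable $s_1 = +a$ grows at the expense of $s_2$, exactly as the bullet points above describe; the shift also scales with $N_0$, so at low $E_b/N_0$ it can push a signal point outside its own region.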