1 Random Processes: An Important Background To Communication Study
A random process is a family of mappings X(t, ·): S → R, one for every t.
Define or view a random process via its inherited probability system: each sample point s_j yields one sample waveform, so across the sample space we obtain X(t, s_1), X(t, s_2), …, X(t, s_n).
1.2 Random Process
X(t, sj) is called a sample function (or a
realization) of the random process for sample
point sj.
For a fixed sample point sj, X(t, sj) is a deterministic function of t.
Notably, with a probability system (S, F, P) over which the random process is defined, any finite-dimensional joint cdf is well-defined. For example,
Pr[X(t1) ≤ x1 and X(t2) ≤ x2 and X(t3) ≤ x3]
= P({s ∈ S : X(t1, s) ≤ x1} ∩ {s ∈ S : X(t2, s) ≤ x2} ∩ {s ∈ S : X(t3, s) ≤ x3})
Summary
A random variable
maps s to a real number X
A random vector
maps s to a multi-dimensional real vector.
A random process
maps s to a real-valued deterministic function.
s is where the probability is truly defined; yet the
image of the mapping is what we can observe and
experiment over.
Question: Can we map s to two or more real-valued deterministic functions?
Answer: Yes, such as (X(t), Y(t)).
Then, we can discuss any finite-dimensional joint distribution of X(t) and Y(t), such as the joint distribution of
(X(t1), X(t2), X(t3), Y(t1), Y(t4))
1.3 (Strictly) Stationary
The statistical properties of a random process encountered in the real world are often independent of the time at which the observation (or experiment) is initiated. Strict stationarity formulates this mathematically: for any t1, t2, …, tk and any shift τ,
F_{X(t1), X(t2), …, X(tk)}(x1, x2, …, xk) = F_{X(t1+τ), X(t2+τ), …, X(tk+τ)}(x1, x2, …, xk)
Example 1.1. Suppose that every finite-dimensional cdf of a random process X(t) is given. Find the probability of the joint event A = {a1 < X(t1) ≤ b1 and a2 < X(t2) ≤ b2}:
P(A) = F_{X(t1),X(t2)}(b1, b2) − F_{X(t1),X(t2)}(b1, a2) − F_{X(t1),X(t2)}(a1, b2) + F_{X(t1),X(t2)}(a1, a2)
Example 1.1. Further assume that X(t) is strictly stationary. Then, for any shift τ, P(A) is also equal to:
P(A) = F_{X(t1+τ),X(t2+τ)}(b1, b2) − F_{X(t1+τ),X(t2+τ)}(b1, a2) − F_{X(t1+τ),X(t2+τ)}(a1, b2) + F_{X(t1+τ),X(t2+τ)}(a1, a2)
Why introduce “stationarity?”
With stationarity, we can be certain that observations made at different time instants have the same distribution!
For example, X(0), X(T), X(2T), X(3T), ….
In particular,
E[X(t1) X(t2)] = ∫∫ x1 x2 f_{X(0),X(t2−t1)}(x1, x2) dx1 dx2 = E[X(t1 − t2) X(0)] = R_X(t1 − t2, 0)
A short-hand for this is R_X(t1 − t2), the autocorrelation function of a stationary process.
1.4 Autocorrelation
Conceptually, the autocorrelation function R_X(t1, t2) = E[X(t1) X(t2)] is the “power correlation” between the two time instants t1 and t2.
1.4 Autocovariance
“Variance” is the degree of variation about a reference value (i.e., the mean).
The autocovariance function C_X(t1, t2) is given by:
C_X(t1, t2) = E[(X(t1) − μ_X(t1))(X(t2) − μ_X(t2))]
= E[X(t1) X(t2) − μ_X(t1) X(t2) − X(t1) μ_X(t2) + μ_X(t1) μ_X(t2)]
= E[X(t1) X(t2)] − μ_X(t1) μ_X(t2)
= R_X(t1, t2) − μ_X(t1) μ_X(t2)
If X(t) is stationary, the autocovariance function C_X(t1, t2) becomes
C_X(t1, t2) = R_X(t1, t2) − μ_X(t1) μ_X(t2) = R_X(t1 − t2, 0) − μ_X² = C_X(t1 − t2, 0), written in short-hand as C_X(t1 − t2).
1.4 Wide-Sense Stationary (WSS)
Since in most cases of practical interest only the first two moments (i.e., μ_X(t) and C_X(t1, t2)) are of concern, an alternative definition of stationarity is introduced.
Definition (Wide-Sense Stationarity) A random process X(t) is WSS if
μ_X(t) = constant, and
C_X(t1, t2) = C_X(t1 − t2), or equivalently R_X(t1, t2) = R_X(t1 − t2).
Alternative names for WSS include
weakly stationary
stationary in the weak sense
second-order stationary
Example 1.2 Signal with Random Phase
Let X(t) = A cos(2πf_c t + Θ), where Θ is uniformly distributed over [−π, π). Then
μ_X(t) = E[A cos(2πf_c t + Θ)] = (A/2π) ∫_{−π}^{π} cos(2πf_c t + θ) dθ = (A/2π)[sin(2πf_c t + π) − sin(2πf_c t − π)] = 0.
R_X(t1, t2) = E[A cos(2πf_c t1 + Θ) · A cos(2πf_c t2 + Θ)]
= A² ∫_{−π}^{π} (1/2π) cos(2πf_c t1 + θ) cos(2πf_c t2 + θ) dθ
= (A²/2)(1/2π) ∫_{−π}^{π} [cos(2πf_c (t1 − t2)) + cos(2πf_c (t1 + t2) + 2θ)] dθ
= (A²/2) cos(2πf_c (t1 − t2)). Hence, X(t) is WSS.
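A quick Monte Carlo cross-check of the two results above is easy to write. This is a sketch, not part of the original example: the amplitude A, carrier frequency f_c, probe instants t1 and t2, the seed, and the trial count are all arbitrary illustrative choices.

```python
import numpy as np

# Estimate the ensemble mean and autocorrelation of X(t) = A cos(2*pi*fc*t + Theta),
# Theta uniform over [-pi, pi), and compare with mu_X = 0 and
# R_X(t1, t2) = (A^2 / 2) cos(2*pi*fc*(t1 - t2)).
rng = np.random.default_rng(0)
A, fc = 2.0, 5.0
n_trials = 200_000
theta = rng.uniform(-np.pi, np.pi, size=n_trials)   # one phase per sample point s

t1, t2 = 0.13, 0.05
x1 = A * np.cos(2 * np.pi * fc * t1 + theta)        # X(t1) across the ensemble
x2 = A * np.cos(2 * np.pi * fc * t2 + theta)        # X(t2) across the ensemble

emp_mean = x1.mean()
emp_R = (x1 * x2).mean()
theory_R = (A**2 / 2) * np.cos(2 * np.pi * fc * (t1 - t2))

assert abs(emp_mean) < 0.02          # mu_X(t1) = 0
assert abs(emp_R - theory_R) < 0.02  # R_X depends only on t1 - t2
```

Repeating with other (t1, t2) pairs sharing the same difference t1 − t2 gives the same estimate, which is the WSS property in action.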
Example 1.3 Random Binary Wave
Let
X(t) = A Σ_n I_n p(t − nT − t_d)
where the I_n are equiprobable ±1 data symbols, p(t) is the unit rectangular pulse on [0, T), and t_d is a random timing delay.
(Figure: a 0110… bit stream drives a modulator; the received signal passes through a correlator receiver. One realization of a single pulse is X(t) = A p(t − t_d).)
Example 1.3 Random Binary Wave
Then, by assuming that t_d is uniformly distributed over [0, T) and independent of the I_n, we obtain:
μ_X(t) = E[A Σ_n I_n p(t − nT − t_d)]
= A Σ_n E[I_n] E[p(t − nT − t_d)]
= A · 0 · Σ_n E[p(t − nT − t_d)]
= 0
Example 1.3 Random Binary Wave
A useful probabilistic rule: E[X] = E[E[X|Y]].
So we have:
E[X(t1) X(t2) | t_d] = A² Σ_n p(t1 − nT − t_d) p(t2 − nT − t_d),
since E[I_n I_m] = E[I_n] E[I_m] = 0 for n ≠ m.
Three conditions must be simultaneously satisfied in order to obtain a non-zero term p(t1 − nT − t_d) p(t2 − nT − t_d):
p(t1 − nT − t_d) = 1 iff 0 ≤ t1 − nT − t_d < T, or equivalently (t1 − t_d)/T − 1 < n ≤ (t1 − t_d)/T;
p(t2 − nT − t_d) = 1 iff 0 ≤ t2 − nT − t_d < T, or equivalently (t2 − t_d)/T − 1 < n ≤ (t2 − t_d)/T;
n must be an integer.
Notably, 0 ≤ t_d < T.
For an easier understanding, we let t2 = 0 and t1 ≥ 0.
The condition for t2 then reduces to:
−t_d/T − 1 < n ≤ −t_d/T, together with n integer, which gives n = −1 if 0 < t_d < T, and n = 0 if t_d = 0.
Combining with (t1 − t_d)/T − 1 < n ≤ (t1 − t_d)/T gives:
for n = −1 and 0 < t_d < T: t_d − T ≤ t1 < t_d, i.e., 0 ≤ t1 < t_d < T;
for n = 0 and t_d = 0: 0 ≤ t1 < T.
Example 1.3 Random Binary Wave
By symmetry in t1 and t2, therefore:
E[X(t1) X(t2) | t_d] = A² Σ_n p(t1 − nT − t_d) p(t2 − nT − t_d)
= A², if (0 ≤ |t1 − t2| < t_d < T) or (0 ≤ |t1 − t2| < T and t_d = 0)
= 0, otherwise
Hence
E[X(t1) X(t2)] = E[E[X(t1) X(t2) | t_d]] = (1/T) ∫_{|t1−t2|}^{T} A² dt_d, for 0 ≤ |t1 − t2| < T; 0, otherwise
= A² (1 − |t1 − t2|/T), for 0 ≤ |t1 − t2| < T; 0, otherwise
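The triangular autocorrelation just derived can be checked by simulation. A sketch under stated assumptions: I_n = ±1 equiprobable, A = T = 1, and a long time average over one realization standing in for the average over t_d (grid resolution and record length are arbitrary choices).

```python
import numpy as np

# Empirical autocorrelation of the random binary wave of Example 1.3.
rng = np.random.default_rng(1)
A, T = 1.0, 1.0
ns = 50                 # samples per bit interval
n_bits = 50_000
dt = T / ns

bits = rng.choice([-1.0, 1.0], size=n_bits)   # I_n = +/-1, equally likely
x = A * np.repeat(bits, ns)                   # X(t) on a fine time grid

def R_hat(lag):
    """Time-average estimate of E[X(t+tau)X(t)] at tau = lag*dt."""
    return np.mean(x[lag:] * x[: len(x) - lag])

for lag in (0, 10, 25, 40, 60):
    tau = lag * dt
    theory = A**2 * max(0.0, 1 - abs(tau) / T)   # triangle of base 2T
    assert abs(R_hat(lag) - theory) < 0.02
```

Lags beyond one bit interval (e.g., lag 60 above) fall on the flat zero part of the triangle, since the two samples then always come from independent symbols.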
1.4 Cross-Correlation
The cross-correlation between two processes X(t) and Y(t) is:
R_{X,Y}(t, u) = E[X(t) Y(u)]
1.5 Ergodicity
Definition. A stationary process X(t) is ergodic in the mean if
Pr[ lim_{T→∞} x̄(T) = μ_X ] = 1, where x̄(T) = (1/2T) ∫_{−T}^{T} X(t) dt.
Definition. A stationary process X(t) is ergodic in the autocorrelation function if
Pr[ lim_{T→∞} R̄_X(τ; T) = R_X(τ) ] = 1, where R̄_X(τ; T) = (1/2T) ∫_{−T}^{T} X(t + τ) X(t) dt.
1.5 Ergodic Process
Experiments or observations on the same process can only be performed at different times.
“Stationarity” only guarantees that observations made at different times come from the same distribution; ergodicity guarantees that time averages converge to the corresponding ensemble averages.
A1.3 Statistical Average
Alternative names for the ensemble average:
Mean
Expectation value
Sample average. (Recall that the sample space consists of all possible outcomes!)
How about the ensemble average of a function g(·) of a random variable X?
E[g(X)] = ∫ g(x) f_X(x) dx
A1.3 Statistical Average
nth moment of a random variable X: E[Xⁿ] = ∫ xⁿ f_X(x) dx
Example of LTI filter: Mobile Radio Channel
Linear: Y(t) is a linear function of X(t); specifically, Y(t) is a weighted sum of X(t).
Time-invariant: the weights are time-independent.
Stable: ∫ |h(τ)| dτ < ∞.
(Figure: transmitter → multipath channel with path gains and delays (α_i, τ_i) → receiver.)
Y(t) = α1 s(t − τ1) + α2 s(t − τ2) + α3 s(t − τ3) = Σ_{i=1}^{3} h(τ_i) s(t − τ_i), where h(τ_i) = α_i.
More generally,
X(t) → h(τ) → Y(t) = ∫ h(τ) X(t − τ) dτ
1.6 Transmission of a Random Process
Through a Linear Time-Invariant Filter
What are the mean and autocorrelation functions of the LTI filter output Y(t)?
Suppose X(t) is stationary and has finite mean, and suppose ∫ |h(τ)| dτ < ∞ (stability). Then
μ_Y(t) = E[Y(t)] = E[∫ h(τ) X(t − τ) dτ] = ∫ h(τ) E[X(t − τ)] dτ = μ_X ∫ h(τ) dτ
1.6 Zero-Frequency (DC) Response
μ_Y = μ_X ∫ h(τ) dτ = μ_X H(0), where H(0) is the filter's zero-frequency (DC) response.
Similarly, for the output autocorrelation,
R_Y(t, u) = ∫∫ h(τ1) h(τ2) E[X(t − τ1) X(u − τ2)] dτ2 dτ1 = ∫∫ h(τ1) h(τ2) R_X(t − τ1, u − τ2) dτ2 dτ1
If X(t) is WSS,
then R_Y(τ) = ∫∫ h(τ1) h(τ2) R_X(τ − τ1 + τ2) dτ2 dτ1
1.6 WSS input induces WSS output
From the above derivations, we conclude:
For a stable LTI filter, a WSS input induces a WSS
output.
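The mean relation μ_Y = μ_X ∫ h(τ) dτ from the derivation above can be sanity-checked numerically. A sketch: the exponential impulse response, the input's mean and variance, the grid, and the record length are all illustrative choices, and the output integral is approximated by a Riemann sum.

```python
import numpy as np

# A WSS input through a stable LTI filter: check mu_Y = mu_X * integral(h).
rng = np.random.default_rng(2)
dt = 0.01
tau = np.arange(0, 5, dt)
h = np.exp(-tau)                       # stable: integral of |h| is finite
mu_x = 3.0
x = mu_x + rng.normal(0.0, 1.0, size=200_000)   # WSS input samples

y = np.convolve(x, h)[: len(x)] * dt   # Y(t) = int h(s) X(t - s) ds (Riemann sum)
mu_y_emp = y[len(tau):].mean()         # discard the filter start-up transient
mu_y_theory = mu_x * h.sum() * dt      # mu_X * integral of h
assert abs(mu_y_emp - mu_y_theory) < 0.01
```

The same experiment with the mean estimated over disjoint windows also illustrates that the output mean does not drift in time, consistent with the WSS-in/WSS-out conclusion.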
In general (not necessarily for WSS processes), the following Dirac delta facts are useful.
Replication property:
∫ g(t) δ(t − t0) dt = g(t0)
since, viewing δ as the limit of a unit-area pulse of width 1/n and height n centered at t0:
∫ g(t) δ(t − t0) dt = lim_{n→∞} ∫_{t0−1/(2n)}^{t0+1/(2n)} n g(t) dt = g(t0)
Constant spectrum:
∫ δ(t) exp(−j2πft) dt = exp(−j2πf · 0) = 1.
A2.1 Properties of Dirac Delta Function
Scaling after integration: multiplying δ by a constant scales its integral. Although
δ(t) and 2δ(t) are both “∞ at t = 0 and 0 for t ≠ 0,”
their integrals are different:
∫ δ(t) dt = 1 while ∫ 2δ(t) dt = 2.
A2.1 Fourier Series
The Fourier transform of a periodic function
does not exist!
E.g., for integer k, the square wave
g(t) = 1 for 2k ≤ t < 2k + 1; 0, otherwise
has no ordinary Fourier transform.
A2.1 Fourier Series
Theorem: If g_T(t) is a bounded periodic function of period T satisfying the Dirichlet conditions, then
g_T(t) = Σ_n c_n exp(j 2πn t / T)
Define the generating function
g(t) = g_T(t), for −T/2 ≤ t ≤ T/2; 0, otherwise.
Then
g_T(t) = Σ_m g(t − mT)
A2.1 Relation between a Periodic Function
and its Generating Function
Let G(f) be the spectrum of g(t) (which should exist).
Then, from the Theorem above,
c_n = (1/T) ∫_{−T/2}^{T/2} g_T(t) exp(−j 2πn t / T) dt
= (1/T) ∫_{−T/2}^{T/2} Σ_m g(t − mT) exp(−j 2πn t / T) dt
= (1/T) Σ_m ∫_{−T/2}^{T/2} g(t − mT) exp(−j 2πn t / T) dt      (let s = t − mT)
= (1/T) Σ_m ∫_{−T/2−mT}^{T/2−mT} g(s) exp(−j 2πn (s + mT) / T) ds
= (1/T) Σ_m ∫_{−T/2−mT}^{T/2−mT} g(s) exp(−j 2πn s / T) ds      (since exp(−j 2πnm) = 1)
= (1/T) ∫_{−∞}^{∞} g(s) exp(−j 2π (n/T) s) ds
= (1/T) G(n/T)
A2.1 Relation between a Periodic Function
and its Generating Function
This yields Poisson's sum formula:
g_T(t) = (1/T) Σ_n G(n/T) exp(j 2π (n/T) t)
That is, the Fourier coefficients of the periodic extension g_T(t) are (1/T)-scaled samples of the spectrum G(f) of the generating function g(t), taken at multiples of 1/T.
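Poisson's sum formula can be verified numerically for a pulse whose transform is known in closed form. A sketch using the transform pair g(t) = exp(−πt²) ↔ G(f) = exp(−πf²); the period T, evaluation point t, and truncation of the sums are arbitrary choices.

```python
import numpy as np

T = 1.5
t = 0.3

# Left side: the periodic extension g_T(t) = sum_m g(t - mT)
lhs = sum(np.exp(-np.pi * (t - m * T) ** 2) for m in range(-50, 51))

# Right side: (1/T) * sum_n G(n/T) * exp(j 2 pi n t / T)
rhs = sum(
    np.exp(-np.pi * (n / T) ** 2) * np.exp(2j * np.pi * n * t / T)
    for n in range(-50, 51)
).real / T

assert abs(lhs - rhs) < 1e-10
```

The Gaussian makes both sums converge extremely fast, so the two sides agree to near machine precision.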
A2.1 Spectrums through LTI filter
An LTI filter h(·) maps excitation x1(t) to response y1(t) and excitation x2(t) to response y2(t); by linearity, the excitation a·x1(t) + b·x2(t) maps to the response a·y1(t) + b·y2(t).
(Figure: a tapped-delay-line example filter.)
A2.1 Linearity and Convolution
Time-Convolution = Spectrum-Multiplication
Write
y(t) = ∫ h(τ) x(t − τ) dτ, x(t) = ∫ X(f) exp(j2πft) df, and h(τ) = ∫ H(f) exp(j2πfτ) df.
Then
Y(f) = ∫ h(τ) [∫ x(s) exp(−j2πf(s + τ)) ds] dτ      (let s = t − τ)
= X(f) ∫ h(τ) exp(−j2πfτ) dτ
= X(f) H(f)
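The discrete-time analogue of this result — circular convolution of two length-N sequences equals the inverse DFT of the product of their DFTs — can be checked directly. A sketch with arbitrary random sequences:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 256
x = rng.normal(size=N)
h = rng.normal(size=N)

# Circular convolution computed directly: y[n] = sum_k h[k] x[(n - k) mod N]
y_direct = np.array(
    [sum(h[k] * x[(n - k) % N] for k in range(N)) for n in range(N)]
)
# ... and via the DFT: spectrum multiplication term by term
y_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

assert np.allclose(y_direct, y_fft, atol=1e-9)
```

For the continuous-time statement in the slide, the FFT route is how the product X(f)H(f) is evaluated in practice on sampled data.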
A2.1 Impulse Response of LTI Filter
Impulse response = the filter's response to a Dirac delta function (an application of the replication property):
With x(t) = δ(t),
y(t) = ∫ h(τ) x(t − τ) dτ = ∫ h(τ) δ(t − τ) dτ = h(t).
A2.1 Frequency Response of LTI Filter
Frequency response = the filter's response to a complex exponential input of unit amplitude and frequency f:
With x(t) = exp(j2πft),
y(t) = ∫ h(τ) exp(j2πf(t − τ)) dτ = exp(j2πft) ∫ h(τ) exp(−j2πfτ) dτ = exp(j2πft) H(f).
A2.1 Measures for Frequency Response
Expression 1:
H(f) = |H(f)| exp[jφ(f)], where |H(f)| is the amplitude response and φ(f) is the phase response.
Expression 2:
ln H(f) = ln |H(f)| + jφ(f) = α(f) + jφ(f), where α(f) is the gain and φ(f) the phase response:
α(f) = ln |H(f)| in nepers, or α(f) = 20 log10 |H(f)| in dB.
A2.1 Fourier Analysis
Remember to self-study Tables
A6.2 and A6.3.
1.7 Power Spectral Density
Deterministic: x(t) → LTI h(·) → y(t) = ∫ h(τ) x(t − τ) dτ
WSS: X(t) → LTI h(·) → Y(t) = ∫ h(τ) X(t − τ) dτ
1.7 Power Spectral Density
How about the spectrum relation between filter
input and filter output?
An apparent relation is:
X(f) → LTI H(f) → Y(f) = H(f) X(f)
1.7 Power Spectral Density
This is, however, not adequate for a random process. For a random process, what concerns us is the relation between the input statistics and the output statistics.
1.7 Power Spectral Density
How about the relation of the first two moments between filter input and output?
Spectrum relation of the mean processes:
μ_Y(t) = E[Y(t)] = E[∫ h(τ) X(t − τ) dτ] = ∫ h(τ) μ_X(t − τ) dτ ⇒ M_Y(f) = M_X(f) H(f)
where M_X(f) and M_Y(f) denote the spectra of the (deterministic) mean functions μ_X(t) and μ_Y(t).
補充 : Time-Average Autocorrelation
Function
For a non-stationary process, we can use the time-average autocorrelation function to define the average power correlation for a given time difference:
R̄_X(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} E[X(t + τ) X(t)] dt
It is implicitly assumed that R̄_X(τ) is independent of the location of the integration window. Hence,
R̄_X(τ) = lim_{T→∞} (1/2T) ∫_{−T/2}^{3T/2} E[X(t + τ) X(t)] dt
補充 : Time-Average Autocorrelation
Function
E.g., for a WSS process,
R̄_X(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} E[X(t + τ) X(t)] dt = E[X(t + τ) X(t)] = R_X(τ)
and, if the process is also ergodic, this equals the per-realization time average lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t + τ) x(t) dt.
補充 : Time-Average Autocorrelation
Function
E.g., for a cyclostationary process,
R̄_X(τ) = (1/2T) ∫_{−T}^{T} E[X(t + τ) X(t)] dt, with 2T a multiple of the period.
The time-average PSD is defined as the Fourier transform of R̄_X(τ):
S̄_X(f) = ∫ R̄_X(τ) e^{−j2πfτ} dτ = lim_{T→∞} (1/2T) E[X_{2T}(f) X*_{2T}(f)], where X_{2T}(t) = X(t) · 1{|t| ≤ T}
and X_{2T}(f) is its Fourier transform.
補充 : Time-Average Autocorrelation
Function
For a WSS process, S̄_X(f) = S_X(f).
For a deterministic signal x(t),
S̄_x(f) = lim_{T→∞} (1/2T) x_{2T}(f) x*_{2T}(f).
1.7 Power Spectral Density
Relation of time-average PSDs
1.7 Power Spectral Density under WSS input
For a WSS filter input,
Observation: the output power is
E[Y²(t)] = R_Y(0) = ∫ S_Y(f) df = ∫ |H(f)|² S_X(f) df
For an ideal narrowband band-pass filter of width Δf centered at ±f_c,
E[Y²(t)] = ∫_{−f_c−Δf/2}^{−f_c+Δf/2} S_X(f) df + ∫_{f_c−Δf/2}^{f_c+Δf/2} S_X(f) df ≈ Δf [S_X(f_c) + S_X(−f_c)]
The filter passes only those frequency components of the input random process X(t) that lie inside a narrow frequency band of width Δf centered about the frequencies f_c and −f_c.
1.7 Properties of PSD
Property 0. Einstein-Wiener-Khintchine relation
Relation between autocorrelation function and PSD
of a WSS process X(t)
S_X(f) = ∫ R_X(τ) exp(−j2πfτ) dτ
R_X(τ) = ∫ S_X(f) exp(j2πfτ) df
1.7 Properties of PSD
Property 1. Power density at zero frequency:
S_X(0) = ∫ R_X(τ) dτ
with units S_X(0) in [Watt/Hz] = [Watt·Second], R_X(τ) in [Watt], and dτ in [Second].
Next, pass X(t) through a filter with squared magnitude response |H(f)|² = (1/4)[δ(f − f_c) + δ(f + f_c)]. Then
E[Y²(t)] = ∫ |H(f)|² S_X(f) df = (1/4)[S_X(f_c) + S_X(−f_c)] = (1/2) S_X(f_c), since S_X(−f) = S_X(f) from Property 4.
1.7 Properties of PSD
Therefore, by passing X(t) through a proper filter,
S_X(f_c) = 2 E[Y²(t)] ≥ 0 for every f_c; the PSD is nonnegative.
1.7 Properties of PSD
Property 5: The PSD is real.
Proof. R_X(τ) real ⇒ S_X(−f) = S*_X(f); combined with S_X(−f) = S_X(f) (Property 4), this gives S_X(f) = S*_X(f), i.e., S_X(f) is real.
Example 1.5 (continued from Example 1.2) Signal with Random Phase
From R_X(τ) = (A²/2) cos(2πf_c τ) = (A²/4)(e^{j2πf_c τ} + e^{−j2πf_c τ}),
S_X(f) = (A²/4) ∫ (e^{−j2π(f−f_c)τ} + e^{−j2π(f+f_c)τ}) dτ = (A²/4)[δ(f − f_c) + δ(f + f_c)]
Example 1.6 (continued from Example 1.3) Random Binary Wave
Let
X(t) = A Σ_n I_n p(t − nT − t_d)
as in Example 1.3, so that R_X(τ) = A²(1 − |τ|/T) for |τ| ≤ T and 0 otherwise. Then, integrating by parts (∫ u dv = uv − ∫ v du, with u = 1 − |τ|/T and dv = e^{−j2πfτ} dτ):
S_X(f) = ∫_{−T}^{T} A² (1 − |τ|/T) e^{−j2πfτ} dτ
= −(A²/(j2πfT)) ∫_{−T}^{T} sgn(τ) e^{−j2πfτ} dτ      (the boundary term vanishes, since 1 − |τ|/T = 0 at τ = ±T)
= −(A²/(j2πfT)) [∫_0^T e^{−j2πfτ} dτ − ∫_{−T}^0 e^{−j2πfτ} dτ]
= (A²/(4π²f²T)) (2 − e^{−j2πfT} − e^{j2πfT})
= (A²/(4π²f²T)) (2 − 2 cos(2πfT))
= (A²/(π²f²T)) sin²(πfT)
= A²T sinc²(fT)
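The closed form S_X(f) = A²T sinc²(fT) can be cross-checked by numerically Fourier-transforming the triangular autocorrelation from Example 1.3. A sketch (A, T, the grid resolution, and the probe frequencies are arbitrary choices):

```python
import numpy as np

A, T = 2.0, 0.5
dtau = 1e-4
tau = np.arange(-T, T, dtau)
R = A**2 * (1 - np.abs(tau) / T)      # triangular autocorrelation, base 2T

for f in (0.0, 0.7, 1.3, 2.0):        # f = 2.0 = 1/T sits on a spectral null
    S_num = (np.sum(R * np.exp(-2j * np.pi * f * tau)) * dtau).real
    S_theory = A**2 * T * np.sinc(f * T) ** 2   # np.sinc(x) = sin(pi x)/(pi x)
    assert abs(S_num - S_theory) < 1e-3
```

Note that NumPy's `sinc` is the normalized sinc, matching the convention sinc(fT) = sin(πfT)/(πfT) used in the derivation.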
1.7 Energy Spectral Density
The energy of a (deterministic) signal p(t) is given by ∫ |p(t)|² dt.
Recall that the average power of p(t) is given by
lim_{T→∞} (1/2T) ∫_{−T}^{T} |p(t)|² dt.
Observe that (Rayleigh's energy theorem)
∫ |p(t)|² dt = ∫ [∫ P(f) e^{j2πft} df] [∫ P(f′) e^{j2πf′t} df′]* dt
= ∫∫ P(f) P*(f′) δ(f′ − f) df df′
= ∫ P(f) P*(f) df = ∫ |P(f)|² df
Hence |P(f)|² is called the energy spectral density (ESD) of p(t).
Example
The ESD of a rectangular pulse of amplitude A and duration T is given by
E_g(f) = |∫_0^T A e^{−j2πft} dt|² = A²T² sinc²(fT)
0
Example 1.7 Mixing of a Random Process
with a Sinusoidal Process
Let Y(t) = X(t) cos(2πf_c t + Θ), where Θ is uniformly distributed over [−π, π), and X(t) is WSS and independent of Θ.
R_Y(t, u) = E[X(t) X(u) cos(2πf_c t + Θ) cos(2πf_c u + Θ)]
= E[X(t) X(u)] E[cos(2πf_c t + Θ) cos(2πf_c u + Θ)]
= R_X(t − u) · (1/2) cos(2πf_c (t − u))
⇒ S_Y(f) = (1/4)[S_X(f − f_c) + S_X(f + f_c)]
1.7 How to measure PSD?
If X(t) is not only (strictly) stationary but also ergodic, then any (deterministic) observed sample path x(t) on [−T, T) satisfies:
lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) dt = E[X(t)] = μ_X
Similarly,
lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t + τ) x(t) dt = R_X(τ)
1.7 How to measure PSD?
Hence, we may take the Fourier transform of the time-averaged autocorrelation function computed from the observation:
∫ [(1/2T) ∫_{−T}^{T} x(t + τ) x(t) dt] e^{−j2πfτ} dτ
= (1/2T) ∫_{−T}^{T} x(t) [∫_{−T}^{T} x(s) exp(−j2πf(s − t)) ds] dt      (let s = t + τ)
= (1/2T) ∫_{−T}^{T} x(t) exp(j2πft) dt · ∫_{−T}^{T} x(s) exp(−j2πfs) ds
= (1/2T) x_{2T}(−f) x_{2T}(f) = (1/2T) x*_{2T}(f) x_{2T}(f)
1.7 How to measure PSD?
The estimate
Ŝ_X(f) = (1/2T) x*_{2T}(f) x_{2T}(f) = (1/2T) |x_{2T}(f)|²
is named the periodogram.
To summarize:
1. Observe x(t) for duration [−T, T).
2. Calculate x_{2T}(f) = ∫_{−T}^{T} x(t) exp(−j2πft) dt.
3. Then Ŝ_X(f) = (1/2T) |x_{2T}(f)|².
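The three steps above can be sketched for sampled data. Assume discrete white noise of variance σ² sampled every dt seconds, whose two-sided PSD level is σ²·dt; the FFT plays the role of the time-limited Fourier transform, and averaging several periodograms tames the estimator's variance. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, dt, N, n_rec = 1.5, 0.01, 4096, 400

acc = np.zeros(N)
for _ in range(n_rec):
    x = rng.normal(0.0, sigma, size=N)
    X = np.fft.fft(x) * dt                 # step 2: x_2T(f) on the FFT grid
    acc += np.abs(X) ** 2 / (N * dt)       # step 3: |x_2T(f)|^2 / (2T), 2T = N*dt
S_hat = acc / n_rec                        # averaged periodogram

flat_level = sigma**2 * dt                 # two-sided PSD of the sampled white noise
assert abs(S_hat.mean() - flat_level) < 0.01 * flat_level
```

A single periodogram has standard deviation comparable to its mean at every frequency; that is why the average over records (or over neighboring bins) is what is reported in practice.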
1.7 Cross Spectral Density
Definition: For two (jointly WSS) random processes X(t) and Y(t), the cross spectral densities are given by:
S_{X,Y}(f) = ∫ R_{X,Y}(τ) exp(−j2πfτ) dτ
S_{Y,X}(f) = ∫ R_{Y,X}(τ) exp(−j2πfτ) dτ
where R_{X,Y}(t, u) = E[X(t) Y(u)] with R_{X,Y}(τ) = R_{X,Y}(t − u), and R_{Y,X}(t, u) = E[Y(t) X(u)] with R_{Y,X}(τ) = R_{Y,X}(t − u) (for τ = t − u).
1.7 Cross Spectral Density
Property:
R_{X,Y}(τ) = R_{Y,X}(−τ) ⇒ S_{X,Y}(f) = ∫ R_{X,Y}(τ) e^{−j2πfτ} dτ = ∫ R_{Y,X}(−τ) e^{−j2πfτ} dτ = ∫ R_{Y,X}(τ) e^{j2πfτ} dτ = S_{Y,X}(−f)
Example 1.8 PSD of Sum Process
Determine the PSD of sum process Z(t) = X(t) +
Y(t) of two zero-mean WSS processes X(t) and
Y(t).
Answer:
R_Z(t, u) = E[Z(t) Z(u)]
= E[(X(t) + Y(t))(X(u) + Y(u))]
= E[X(t) X(u)] + E[X(t) Y(u)] + E[Y(t) X(u)] + E[Y(t) Y(u)]
= R_X(t, u) + R_{X,Y}(t, u) + R_{Y,X}(t, u) + R_Y(t, u).
Joint WSS implies that
R_Z(τ) = R_X(τ) + R_{X,Y}(τ) + R_{Y,X}(τ) + R_Y(τ).
Hence,
S_Z(f) = S_X(f) + S_{X,Y}(f) + S_{Y,X}(f) + S_Y(f).
Q.E.D.
Example 1.9
Determine the CSD of the output processes induced by separately passing jointly WSS inputs X1(t) and X2(t) through a pair of (real, stable) LTI filters H1(f) and H2(f).
Answer: R_{Y1,Y2}(τ) = ∫∫ h1(τ1) h2(τ2) R_{X1,X2}(τ − τ1 + τ2) dτ1 dτ2, hence S_{Y1,Y2}(f) = H1(f) H2*(f) S_{X1,X2}(f).
1.8 Gaussian Process
Definition. A random variable Y is Gaussian distributed if its pdf has the form
f_Y(y) = (1/(√(2π) σ_Y)) exp(−(y − μ_Y)²/(2σ_Y²))
1.8 Gaussian Process
Definition. An n-dimensional random vector X is Gaussian distributed if its pdf has the form
f_X(x) = (1/((2π)^{n/2} |Λ|^{1/2})) exp(−(1/2)(x − μ)ᵀ Λ⁻¹ (x − μ))
where μ = [E[X1], E[X2], …, E[Xn]]ᵀ is the mean vector, and Λ = [Cov{Xi, Xj}]_{n×n} is the covariance matrix.
1.8 Gaussian Process
For a Gaussian random vector, “uncorrelatedness” implies “independence”:
if Λ is diagonal (Cov{Xi, Xj} = 0 for i ≠ j), then
f_X(x) = Π_{i=1}^{n} f_{X_i}(x_i)
1.8 Gaussian Process
Definition. A random process X(t) is said to be Gaussian distributed if, for every function g(t) satisfying
∫_0^T ∫_0^T g(t) g(u) R_X(t, u) dt du < ∞,
the random variable
Y = ∫_0^T g(t) X(t) dt
is Gaussian.
Notably, E[Y²] = ∫_0^T ∫_0^T g(t) g(u) R_X(t, u) dt du.
1.8 Central Limit Theorem
Theorem (Central Limit Theorem) For a sequence of independent and identically distributed (i.i.d.) random variables X1, X2, X3, …,
lim_{n→∞} Pr[ ((X1 − μ_X) + … + (Xn − μ_X)) / (σ_X √n) ≤ y ] = ∫_{−∞}^{y} (1/√(2π)) exp(−x²/2) dx
where μ_X = E[Xj] and σ_X² = E[(Xj − μ_X)²].
1.8 Properties of Gaussian processes
Property 1. The output of a stable linear filter is a Gaussian process if the input is a Gaussian process. (This is self-justified by the definition of Gaussian processes.)
1.9 Shot Noise (for your reference)
Shot noise is modeled as X_Shot(t) = Σ_k p(t − τ_k), where the arrival times τ_k form a Poisson process of rate λ. Campbell's theorem gives the autocovariance function
C_{X_Shot}(τ) = λ ∫ p(t) p(t + τ) dt
Example. p(t) is a rectangular pulse of amplitude A and duration T:
μ_{X_Shot} = λ ∫_0^T p(t) dt = λAT
C_{X_Shot}(τ) = λA²(T − |τ|), |τ| ≤ T; 0, otherwise
1.9 Thermal Noise
Thermal noise arises from the random motion of electrons in a conductor.
Mathematical model: the thermal noise voltage V_TN appearing across the terminals of a resistor, measured in a bandwidth of Δf Hertz, is zero-mean Gaussian distributed with variance
E[V²_TN] = 4kTRΔf [volts²]
where k ≈ 1.38 × 10⁻²³ joules per kelvin is Boltzmann's constant, R is the resistance in ohms, and T is the absolute temperature in kelvins.
1.9 Thermal Noise
Model of a noisy resistor: the equivalent noise voltage and current sources satisfy E[V²_TN] = E[I²_TN] R².
1.9 White Noise
A noise process is white if its PSD is constant over all frequencies. It is often defined as:
S_W(f) = N₀/2
Impracticability: such noise would have infinite power:
E[W²(t)] = ∫ S_W(f) df = ∫ (N₀/2) df = ∞
1.9 White Noise
Another impracticability: no matter how close in time two samples are, they are uncorrelated!
If white noise is so impractical, why is it so popular in the analysis of communication systems?
There do exist noise sources whose power spectral density is flat over a range of frequencies much larger than the bandwidths of subsequent filters or measurement devices.
1.9 White Noise
Physical measurements have shown that the PSD of (a certain kind of) noise has the form
S_W(f) = 2kTR α²/(α² + (2πf)²)
where k is Boltzmann's constant, T is the absolute temperature, and α and R are parameters of the physical medium.
When f ≪ α,
S_W(f) ≈ 2kTR ≜ N₀/2
Example 1.10 Ideal Low-Pass Filtered White
Noise
After the filter, the PSD of the zero-mean white noise becomes:
S_{FW}(f) = |H(f)|² S_W(f) = N₀/2 for |f| ≤ B; 0, otherwise
R_{FW}(τ) = ∫_{−B}^{B} (N₀/2) exp(j2πfτ) df = N₀B sinc(2Bτ)
τ = k/(2B) for nonzero integer k implies R_{FW}(τ) = 0, i.e., such samples are uncorrelated.
Example 1.10 Ideal Low-Pass Filtered White
Noise
So if we sample the noise at a rate of 2B samples per second, the resulting noise samples are uncorrelated!
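The decorrelation at lag 1/(2B) can be demonstrated numerically. A sketch: an ideal brick-wall low-pass filter applied in the FFT domain to (approximately) white noise with N₀/2 = 1; the sampling step, record length, and B are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, N, B = 1e-3, 2**18, 50.0
w = rng.normal(0.0, 1.0, size=N) / np.sqrt(dt)           # approx white, N0/2 = 1
f = np.fft.fftfreq(N, dt)
x = np.fft.ifft(np.fft.fft(w) * (np.abs(f) <= B)).real   # ideal low-pass filter

step = int(round(1 / (2 * B) / dt))       # lag of 1/(2B) seconds = 10 samples here
var = np.mean(x * x)                      # R(0) = N0 * B = 100 for these values
corr = np.mean(x[step:] * x[: len(x) - step]) / var
assert abs(corr) < 0.05                   # samples 1/(2B) apart: uncorrelated
assert abs(var - 100.0) < 10.0
```

Sampling at any other spacing that is not a multiple of 1/(2B) leaves a visible nonzero correlation, exactly as R(τ) = N₀B sinc(2Bτ) predicts.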
Example 1.11
(Figure: a 0110… bit stream is encoded into ±m(t), modulated onto a carrier, corrupted by channel noise w(t), and detected by a correlator receiver.)
The transmitted pulse √(2/T) cos(2πf_c t) has unit energy over [0, T]:
∫_0^T (2/T) cos²(2πf_c t) dt = ∫_0^T (1 + cos(4πf_c t))/T dt = 1 (assuming f_c T is an integer).
Example 1.11
The correlator output noise is
N = ∫_0^T w(t) √(2/T) cos(2πf_c t) dt
Its mean:
E[N] = E[∫_0^T w(t) √(2/T) cos(2πf_c t) dt] = ∫_0^T E[w(t)] √(2/T) cos(2πf_c t) dt = 0.
Its variance:
E[N²] = E[∫_0^T √(2/T) w(t) cos(2πf_c t) dt ∫_0^T √(2/T) w(s) cos(2πf_c s) ds]
= (2/T) ∫_0^T ∫_0^T E[w(t) w(s)] cos(2πf_c t) cos(2πf_c s) ds dt
= (2/T) ∫_0^T ∫_0^T (N₀/2) δ(t − s) cos(2πf_c t) cos(2πf_c s) ds dt
= (N₀/T) ∫_0^T cos²(2πf_c s) ds
= N₀/2.
1.10 Narrowband Noise
In general, the receiver of a communication
system includes a narrowband filter whose
bandwidth is just large enough to pass the
modulated component of the received signal.
The noise is therefore also filtered by this
narrowband filter.
So the noise’s PSD after being filtered may
look like the figures in the next slide.
1.10 Narrowband Noise
(Figure: the PSD of narrowband noise, concentrated in a band around ±f_c.)
A2.2 Null-to-Null Bandwidth
For a signal with PSD S_X(f) = A²T sinc²(fT) (the random binary wave), the first spectral nulls fall at f = ±1/T, so the null-to-null bandwidth in this case is 2/T.
A2.2 3-dB Bandwidth
The 3-dB bandwidth is the displacement between the (positive) frequency at which the magnitude spectrum of the signal reaches its maximum and the (positive) frequency at which the magnitude spectrum drops to 1/√2 of that peak value.
Drawback: a small 3-dB bandwidth does not necessarily indicate that most of the power is confined to a small frequency range (e.g., the spectrum may have a slowly decaying tail).
A2.2 3-dB Bandwidth
For S_X(f) = A²T sinc²(fT), the peak is S_X(0) = A²T, and S_X(f) = (1/√2) S_X(0) at f ≈ 0.32/T; the 3-dB bandwidth is therefore about 0.32/T.
A2.2 Root-Mean-Square Bandwidth
rms bandwidth:
B_rms = [ ∫ f² S_X(f) df / ∫ S_X(f) df ]^{1/2}
Disadvantage: sometimes ∫ f² S_X(f) df = ∞ even if ∫ S_X(f) df < ∞.
A2.2 Bandwidth of Deterministic Signals
The previous definitions can also be applied to deterministic signals, with the PSD replaced by the ESD.
For example, a deterministic signal with spectrum G(f) has rms bandwidth:
B_rms = [ ∫ f² |G(f)|² df / ∫ |G(f)|² df ]^{1/2}
A2.2 Noise Equivalent Bandwidth
An important consideration in a communication system is the noise power at a linear filter's output due to a white-noise input.
We can characterize the noise-resisting ability of the filter by its noise equivalent bandwidth: the bandwidth of an ideal low-pass filter (with the same DC gain) through which the same output noise power results.
Output noise power for a general linear filter:
∫ S_W(f) |H(f)|² df = (N₀/2) ∫ |H(f)|² df
Output noise power for an ideal low-pass filter of bandwidth B and gain |H(0)|:
(N₀/2) ∫_{−B}^{B} |H(0)|² df = N₀ B |H(0)|²
Equating the two:
B_NE = ∫ |H(f)|² df / (2 |H(0)|²)
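As a worked check of the formula, consider a first-order low-pass filter H(f) = 1/(1 + jf/f₀) (an RC filter; f₀ is an arbitrary choice, not from the text). Analytically ∫|H(f)|² df = πf₀ and |H(0)| = 1, so B_NE = πf₀/2. The numerical integral agrees:

```python
import numpy as np

f0 = 1.0
f = np.linspace(-1e4, 1e4, 2_000_001)     # truncated frequency axis
H2 = 1.0 / (1.0 + (f / f0) ** 2)          # |H(f)|^2 for H(f) = 1/(1 + j f/f0)
df = f[1] - f[0]

B_NE = H2.sum() * df / (2 * H2[f.size // 2])   # int |H|^2 df / (2 |H(0)|^2)
assert abs(B_NE - np.pi * f0 / 2) < 0.01
```

Note that B_NE ≈ 1.57·f₀ exceeds the filter's 3-dB bandwidth f₀, because the slow |H(f)|² ∝ 1/f² roll-off lets noise through well past cutoff.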
A2.2 Time-Bandwidth Product
Time-scaling property of the Fourier transform: compressing the time scale by a factor of a expands the bandwidth by the same factor:
g(t) ↔ G(f)  ⇒  g(at) ↔ (1/|a|) G(f/a)
This hints that the product of a time parameter and a frequency parameter should remain constant; this product is named the time-bandwidth product or bandwidth-duration product.
A2.2 Time-Bandwidth Product
Since there are various definitions of the time parameter (e.g., duration of a signal) and the frequency parameter (e.g., bandwidth), the time-bandwidth product constant may change with the definitions used.
E.g., with the rms duration and rms bandwidth of a pulse g(t),
T_rms = [ ∫ t² |g(t)|² dt / ∫ |g(t)|² dt ]^{1/2},  B_rms = [ ∫ f² |G(f)|² df / ∫ |G(f)|² df ]^{1/2}
one can show that T_rms B_rms ≥ 1/(4π).
A2.2 Time-Bandwidth Product
Example: g(t) = exp(−πt²), for which G(f) = exp(−πf²). Then
T_rms = [ ∫ t² e^{−2πt²} dt / ∫ e^{−2πt²} dt ]^{1/2} = B_rms, and T_rms B_rms = 1/(4π): the Gaussian pulse achieves the bound.
The pre-envelope g₊(t) of a signal g(t) corresponds to the spectrum 2u(f)G(f):
G(f) → G₊(f) = 2u(f)G(f)
A2.3 Hilbert Transform
How to obtain g+(t)?
Answer: Hilbert Transformer.
Proof: Observe that
2u(f) = 1 + sgn(f), where sgn(f) = 1 for f > 0; 0 for f = 0; −1 for f < 0.
By the extended Fourier transform,
sgn(f) has inverse Fourier transform lim_{a→0} j4πt/(a² + 4π²t²) = j/(πt) for t ≠ 0; 0 for t = 0.
Hence
2u(f) = 1 + sgn(f) has inverse Fourier transform δ(t) + j (1/(πt)) 1{t ≠ 0}
A2.3 Hilbert Transform
g₊(t) = F⁻¹{2u(f) G(f)}
= F⁻¹{2u(f)} * F⁻¹{G(f)}
= [δ(t) + j (1/(πt)) 1{t ≠ 0}] * g(t)
= g(t) + j ĝ(t),
where ĝ(t) = (1/π) ∫ g(τ)/(t − τ) dτ is named the Hilbert transform of g(t).
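The Hilbert transform is convenient to compute in the frequency domain, multiplying the spectrum by −j sgn(f). A sketch on a periodic sampling grid (N and f₀ are arbitrary choices), checking the classic pair cos → sin:

```python
import numpy as np

N = 1024
t = np.arange(N) / N
f0 = 8.0
g = np.cos(2 * np.pi * f0 * t)

f = np.fft.fftfreq(N, 1 / N)                       # integer frequency bins
g_hat = np.fft.ifft(np.fft.fft(g) * (-1j) * np.sign(f)).real   # H(f) = -j sgn(f)

assert np.allclose(g_hat, np.sin(2 * np.pi * f0 * t), atol=1e-9)
```

Adding j·g_hat to g then gives the pre-envelope g₊(t) = exp(j2πf₀t) for this example, whose spectrum indeed lives only at positive frequency.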
A2.3 Hilbert Transform
The Hilbert transform is realized by the Hilbert transformer, a filter with impulse response h(τ) = 1/(πτ):
g(t) → h(τ) = 1/(πτ) → ĝ(t)
with frequency response
H(f) = −j sgn(f), where sgn(f) = 1 for f > 0; 0 for f = 0; −1 for f < 0.
Applying the transformer twice returns the negated signal, since H²(f) = −sgn²(f) = −1 for f ≠ 0:
ĝ(t) → h(τ) = 1/(πτ) → −g(t)
A2.3 Hilbert Transform
An important property of the Hilbert transform is that it preserves the amplitude spectrum: |Ĝ(f)| = |G(f)| for f ≠ 0, so g(t) and ĝ(t) have the same energy.
A2.4 Complex Representation of Signals
and Systems
g₊(t) is named the pre-envelope or analytic signal of g(t).
We can similarly define
g₋(t) = g(t) − j ĝ(t) ↔ 2(1 − u(f)) G(f)
A2.4 Canonical Representation
of Band-Pass Signals
Now let G(f) be a narrowband signal spectrum, confined to a band of width 2W about ±f_c with 2W ≪ f_c.
Then we can obtain its pre-envelope G₊(f) = 2u(f)G(f).
Afterwards, we can shift the pre-envelope down to its low-pass (baseband) equivalent:
G̃(f) = G₊(f + f_c)
A2.4 Canonical Representation of Band-Pass
Signal
These steps give the relation between the
complex lowpass signal (baseband signal) and
the real bandpass signal (passband signal).
A2.4 Canonical Representation of Band-Pass
Signal
Canonical transmitter
g(t) = g_I(t) cos(2πf_c t) − g_Q(t) sin(2πf_c t)
A2.4 Canonical Representation of Band-Pass
Signal
Canonical
receiver
A2.4 More Terminology
g₊(t): pre-envelope
g̃(t): complex envelope
g_I(t): in-phase component of the band-pass signal g(t)
g_Q(t): quadrature component of the band-pass signal g(t)
a(t) = |g₊(t)| = |g̃(t)| = √(g_I²(t) + g_Q²(t)): natural envelope (or simply envelope) of g(t)
φ(t) = tan⁻¹(g_Q(t)/g_I(t)): phase of g(t)
A2.4 Bandpass System
Consider the case of passing a band-pass signal x(t) through a real LTI filter h(τ) to yield an output y(t):
x(t) → h(τ) → y(t) = ∫ h(τ) x(t − τ) dτ
Define the complex low-pass equivalents
x̃(t) = x_I(t) + j x_Q(t)
h̃(τ) = h_I(τ) + j h_Q(τ)   (the complex impulse response)
Now, is the filter output y(t) also a band-pass signal? Since Y(f) = H(f) X(f), and both X(f) and H(f) are concentrated around ±f_c, Y(f) is as well.
A2.4 Bandpass System
Question: Is the following system valid?
x̃(t) → h̃(τ) → ỹ(t) = ∫ h̃(τ) x̃(t − τ) dτ ?
For narrowband noise N(t) = N_I(t) cos(2πf_c t) − N_Q(t) sin(2πf_c t), expanding R_N(τ) = E[N(t + τ) N(t)] with product-to-sum identities gives
R_N(τ) = (1/2)[R_{N_I}(τ) + R_{N_Q}(τ)] cos(2πf_c τ)
+ (1/2)[R_{N_I,N_Q}(τ) − R_{N_Q,N_I}(τ)] sin(2πf_c τ)
+ (1/2)[R_{N_I}(τ) − R_{N_Q}(τ)] cos(2πf_c (τ + 2t))
− (1/2)[R_{N_I,N_Q}(τ) + R_{N_Q,N_I}(τ)] sin(2πf_c (τ + 2t))
The last two terms must equal zero, since R_N(τ) is not a function of t. Therefore:
R_{N_I}(τ) = R_{N_Q}(τ) and R_{N_I,N_Q}(τ) = −R_{N_Q,N_I}(τ)      (Property 1)
R_N(τ) = R_{N_I}(τ) cos(2πf_c τ) − R_{N_Q,N_I}(τ) sin(2πf_c τ)      (Property 2)
1.11 PSD of N_I(t) and N_Q(t)
For real N(t), R_N(τ) = E[N(t + τ) N(t)].
For complex Ñ(t), R_Ñ(τ) = (1/2) E[Ñ(t + τ) Ñ*(t)].
The factor 1/2 is added so that N(t) and its low-pass isomorphism Ñ(t) reasonably have the same power. (Cf. the spectrum relation below.)
Some other properties:
R_Ñ(τ) = (1/2) E[Ñ(t + τ) Ñ*(t)]
= (1/2) E[(N_I(t + τ) + jN_Q(t + τ))(N_I(t) − jN_Q(t))]
= (1/2){R_{N_I}(τ) + R_{N_Q}(τ) + j[R_{N_Q,N_I}(τ) − R_{N_I,N_Q}(τ)]}
= R_{N_I}(τ) + j R_{N_Q,N_I}(τ)      (Property 3)
1.11 PSD of N_I(t) and N_Q(t)
S_{N₊}(f) = 2u(f) S_N(f) and S_Ñ(f) = S_{N₊}(f + f_c)      (Property 4)
In fact, without the factor 1/2, the desired spectrum relation S_Ñ(f) = S_{N₊}(f + f_c) would fail to hold.
1.11 Summary of Spectrum Properties
1. R_{N_I}(τ) = R_{N_Q}(τ) and R_{N_I,N_Q}(τ) = −R_{N_Q,N_I}(τ)
2. R_N(τ) = R_{N_I}(τ) cos(2πf_c τ) − R_{N_Q,N_I}(τ) sin(2πf_c τ); in particular, R_N(0) = R_{N_I}(0) = R_{N_Q}(0)
3. R_Ñ(τ) = R_{N_I}(τ) + j R_{N_Q,N_I}(τ)
6. S_{N_I}(f) = S_{N_Q}(f) and S_{N_I,N_Q}(f) = −S_{N_Q,N_I}(f)      [From 1.]
7. S_N(f) = (1/2)[S_Ñ(f − f_c) + S_Ñ(−f − f_c)]      [From 4; see below.]
Proof of 7. Since R_N(τ) = Re{R_Ñ(τ) e^{j2πf_c τ}} = (1/2)[R_Ñ(τ) e^{j2πf_c τ} + R*_Ñ(τ) e^{−j2πf_c τ}],
S_N(f) = ∫ R_N(τ) e^{−j2πfτ} dτ
= (1/2) ∫ R_Ñ(τ) e^{−j2π(f−f_c)τ} dτ + (1/2) [∫ R_Ñ(τ) e^{−j2π(−f−f_c)τ} dτ]*
= (1/2)[S_Ñ(f − f_c) + S*_Ñ(−f − f_c)]
= (1/2)[S_Ñ(f − f_c) + S_Ñ(−f − f_c)], since S_Ñ(f) is real-valued.
When the band-pass spectrum is symmetric about ±f_c, S_Ñ is even and this equals (1/2)[S_Ñ(f − f_c) + S_Ñ(f + f_c)].
8. N_I(t) and N_Q(t), sampled at the same time instant, are uncorrelated (equivalently, since they have zero means, orthogonal):
R_{N_I,N_Q}(τ) = −R_{N_Q,N_I}(τ)      [Property 1]
R_{N_I,N_Q}(τ) = R_{N_Q,N_I}(−τ)      [general cross-correlation identity]
⇒ R_{N_I,N_Q}(τ) = −R_{N_I,N_Q}(−τ), i.e., R_{N_I,N_Q} is an odd function
⇒ R_{N_I,N_Q}(0) = E[N_I(t) N_Q(t)] = 0.
To find S_{N_I}(f), demodulate: with an ideal low-pass filter H(f) of bandwidth B,
V_I(t) = 2 N(t) cos(2πf_c t), N_I(t) = LPF{V_I(t)}
V_Q(t) = −2 N(t) sin(2πf_c t), N_Q(t) = LPF{V_Q(t)}
R_{V_I}(t + τ, t) = E[V_I(t + τ) V_I(t)] = 4 R_N(τ) cos(2πf_c (t + τ)) cos(2πf_c t)
Time-averaging over t:
R̄_{V_I}(τ) = R_N(τ) lim_{T→∞} (1/T) ∫_{−T}^{T} [cos(2πf_c (τ + 2t)) + cos(2πf_c τ)] dt = 2 R_N(τ) cos(2πf_c τ)
⇒ S_{V_I}(f) = S_N(f − f_c) + S_N(f + f_c)
Then
S_{N_I}(f) = |H(f)|² S_{V_I}(f), where |H(f)|² = 1 for |f| ≤ B; 0 for |f| > B
= S_N(f − f_c) + S_N(f + f_c), |f| ≤ B; 0, otherwise
Similarly,
S_{N_Q}(f) = S_N(f − f_c) + S_N(f + f_c), |f| ≤ B; 0, otherwise
Next, we turn to R_{N_I,N_Q}(τ). With V_I(t) = 2N(t) cos(2πf_c t) and V_Q(t) = −2N(t) sin(2πf_c t) as above:
R_{V_I,V_Q}(t + τ, t) = E[V_I(t + τ) V_Q(t)] = −4 R_N(τ) cos(2πf_c (t + τ)) sin(2πf_c t)
Time-averaging over t:
R̄_{V_I,V_Q}(τ) = R_N(τ) lim_{T→∞} (1/T) ∫_{−T}^{T} [sin(2πf_c τ) − sin(2πf_c (τ + 2t))] dt = 2 R_N(τ) sin(2πf_c τ)
⇒ S_{V_I,V_Q}(f) = j[S_N(f + f_c) − S_N(f − f_c)]
Passing V_I and V_Q through the (same) low-pass filters:
R_{N_I,N_Q}(t, u) = E[N_I(t) N_Q(u)] = ∫∫ h(τ1) h(τ2) R_{V_I,V_Q}(t − τ1, u − τ2) dτ1 dτ2
⇒ S_{N_I,N_Q}(f) = H(f) H*(f) S_{V_I,V_Q}(f) = |H(f)|² S_{V_I,V_Q}(f)
= j[S_N(f + f_c) − S_N(f − f_c)], |f| ≤ B; 0, otherwise
Example 1.12 Ideal Band-Pass Filtered
White Noise
1.12 Representation of Narrowband Noise in
Terms of Envelope and Phase Components
Now we turn to the envelope R(t) and phase Ψ(t) components of a random process of the form
N(t) = N_I(t) cos(2πf_c t) − N_Q(t) sin(2πf_c t) = R(t) cos[2πf_c t + Ψ(t)]
where R(t) = √(N_I² + N_Q²) and Ψ(t) = tan⁻¹(N_Q/N_I). For N_I and N_Q independent zero-mean Gaussians with common variance σ², changing variables gives
f_{R,Ψ}(r, ψ) = (r/(2πσ²)) exp(−r²/(2σ²)) = [(r/σ²) exp(−r²/(2σ²))] · (1/2π)
1.12 Pdf of R(t) and (t)
R and Ψ are therefore independent:
f_R(r) = (r/σ²) exp(−r²/(2σ²)) for r ≥ 0 (the Rayleigh distribution)
f_Ψ(ψ) = 1/(2π) for 0 ≤ ψ < 2π
(Figure: the normalized Rayleigh density, i.e., σ² = 1.)
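The Rayleigh law is easy to confirm by Monte Carlo. A sketch: draw N_I and N_Q as independent zero-mean Gaussians with common variance σ², and compare the envelope's first two moments with the Rayleigh values E[R] = σ√(π/2) and E[R²] = 2σ² (σ and the sample count are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(6)
sigma = 2.0
n = 1_000_000
ni = rng.normal(0.0, sigma, size=n)       # in-phase component samples
nq = rng.normal(0.0, sigma, size=n)       # quadrature component samples
r = np.hypot(ni, nq)                      # envelope R = sqrt(NI^2 + NQ^2)

assert abs(r.mean() - sigma * np.sqrt(np.pi / 2)) < 0.01
assert abs(np.mean(r**2) - 2 * sigma**2) < 0.05
```

A histogram of `r` against (r/σ²)exp(−r²/(2σ²)) and of the angle `np.arctan2(nq, ni)` against the uniform density makes the independence claim visible as well.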
1.13 Sine Wave Plus Narrowband Noise
Now suppose the previous narrowband Gaussian noise is added to a sinusoid of amplitude A. Then
x(t) = A cos(2πf_c t) + n(t)
= A cos(2πf_c t) + n_I(t) cos(2πf_c t) − n_Q(t) sin(2πf_c t)
= x_I(t) cos(2πf_c t) − x_Q(t) sin(2πf_c t)
with x_I(t) = A + n_I(t) and x_Q(t) = n_Q(t). The envelope density follows by integrating the joint density over the phase:
f_R(r) = ∫_0^{2π} (r/(2πσ²)) exp(−(r² + A² − 2Ar cos ψ)/(2σ²)) dψ
1.13 Sine Wave Plus Narrowband Noise
f_R(r) = (r/(2πσ²)) exp(−(r² + A²)/(2σ²)) ∫_0^{2π} exp(Ar cos(ψ)/σ²) dψ
= (r/σ²) exp(−(r² + A²)/(2σ²)) I₀(Ar/σ²)
where I₀(x) = (1/2π) ∫_0^{2π} exp(x cos ψ) dψ is the modified Bessel function of the first kind of zero order. This is the Rician distribution; with v = r/σ and a = A/σ, the normalized form is
f_V(v) = v exp(−(v² + a²)/2) I₀(av)
A3.1 Bessel Functions
Bessel's equation of order n:
x² d²y/dx² + x dy/dx + (x² − n²) y = 0
Its solution J_n(x) (the Bessel function of the first kind of order n) admits the representations
J_n(x) = (1/2π) ∫_{−π}^{π} exp(jx sin θ − jnθ) dθ = Σ_{m=0}^{∞} ((−1)^m / (m!(n + m)!)) (x/2)^{n+2m}
A3.2 Properties of the Bessel Function
1. J_n(x) = (−1)ⁿ J_{−n}(x) = (−1)ⁿ J_n(−x)
2. J_{n+1}(x) + J_{n−1}(x) = (2n/x) J_n(x)
3. When x is small, J_n(x) ≈ xⁿ/(2ⁿ n!).
4. When x is large, J_n(x) ≈ √(2/(πx)) cos(x − nπ/2 − π/4).
5. lim_{n→∞} J_n(x) = 0.
6. Σ_n J_n(x) exp(jnθ) = exp(jx sin θ)
7. Σ_n J_n²(x) = 1.
A3.3 Modified Bessel Function
Modified Bessel's equation of order n:
x² d²y/dx² + x dy/dx − (x² + n²) y = 0
with solution I_n(x), the modified Bessel function of the first kind of order n.
(Figure: a transmitted carrier A cos(2πf_c t) received as a sum of attenuated, phase-shifted replicas Σ_k A_k cos(2πf_c t + Θ_k).)
1.14 Computer Experiments: Flat-Fading
Channel
Y(t) = Σ_{k=1}^{N} A_k cos(2πf_c t + Θ_k) = Y_I cos(2πf_c t) − Y_Q sin(2πf_c t)
where Y_I = Σ_{k=1}^{N} A_k cos(Θ_k) and Y_Q = Σ_{k=1}^{N} A_k sin(Θ_k).
The input X̃(t) = A induces the output Ỹ(t) = Y_I + jY_Q, which is independent of t.
The corresponding Rayleigh channel: for large N, the pair (Y_I, Y_Q) is approximately jointly Gaussian by the central limit theorem, so the output envelope is approximately Rayleigh distributed.
1.15 Summary and Discussion
Definition of Probability System and Probability
Measure
Random variable, vector and process
Autocorrelation and cross-correlation
Definition of WSS
Why ergodicity?
Time average as a good “estimate” of ensemble average
Characteristic function and Fourier transform
Dirichlet’s condition
Dirac delta function
Fourier series and its relation to Fourier transform
1.15 Summary and Discussion
Power spectral density and its properties
Energy spectral density
Cross spectral density
Stable LTI filter
Linearity and convolution
Narrowband process
Canonical low-pass isomorphism
In-phase and quadrature components
Hilbert transform
Bandpass system
1.15 Summary and Discussion
Bandwidth
Null-to-null, 3-dB, rms, noise-equivalent
Time-bandwidth product
Noise
Shot noise, thermal noise and white noise
Gaussian, Rayleigh and Rician
Central limit theorem