Assign - 3 - Solution
MKEL 1143
Advanced DSP
Solutions for Assignment 3
Problem 3.1
y_n = x_n + v_n

It is given that

S_xx(z) = 1 / [(1 − 0.5z^{−1})(1 − 0.5z)],   S_vv(z) = 5,   S_xv(z) = 0
(a) Determine the optimal realizable Wiener filter for estimating the signal 𝑥𝑛 on the basis of
the observation 𝑌𝑛 = {𝑦𝑖 , 𝑖 ≤ 𝑛}. Write the difference equation of this filter and compute
the mean-square estimation error.
S_yy(z) = S_xx(z) + S_vv(z) = 1 / [(1 − 0.5z^{−1})(1 − 0.5z)] + 5
        = 6.25 · (1 − 0.4z^{−1})(1 − 0.4z) / [(1 − 0.5z^{−1})(1 − 0.5z)]

so that the spectral factorization S_yy(z) = σ_ε² B(z)B(z^{−1}) holds with

σ_ε² = 6.25,   B(z) = (1 − 0.4z^{−1}) / (1 − 0.5z^{−1})

Since S_xv(z) = 0, we have S_xy(z) = S_xx(z), and the causal part required by the Wiener solution is

[ S_xy(z) / B(z^{−1}) ]_+ = [ 1 / ((1 − 0.5z^{−1})(1 − 0.5z)) · (1 − 0.5z)/(1 − 0.4z) ]_+
                          = [ 1 / ((1 − 0.5z^{−1})(1 − 0.4z)) ]_+ = 1.25 / (1 − 0.5z^{−1})

Therefore, the optimal realizable Wiener filter is

H(z) = [S_xy(z)/B(z^{−1})]_+ / (σ_ε² B(z)) = (1/6.25) · (1 − 0.5z^{−1})/(1 − 0.4z^{−1}) · 1.25/(1 − 0.5z^{−1}) = 0.2 / (1 − 0.4z^{−1})

with difference equation x̂_{n/n} = 0.4 x̂_{n−1/n−1} + 0.2 y_n. Evaluating the error integral by residues (the only contributing pole inside the unit circle is z = 0.4) gives the mean-square estimation error ε = 1.
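As a quick numerical sanity check (a Python sketch, not part of the original assignment), the two sides of the spectral factorization of S_yy(z) can be compared on the unit circle:

```python
import numpy as np

# Compare S_xx(z) + 5 with the claimed factored form
# 6.25*(1-0.4/z)(1-0.4z)/((1-0.5/z)(1-0.5z)) on the unit circle z = e^{jw}.
w = np.linspace(-np.pi, np.pi, 2001)
z = np.exp(1j * w)

Syy = 1.0 / ((1 - 0.5 / z) * (1 - 0.5 * z)) + 5.0
Syy_fact = 6.25 * (1 - 0.4 / z) * (1 - 0.4 * z) / ((1 - 0.5 / z) * (1 - 0.5 * z))

# Both sides are real on the unit circle and should agree to machine precision.
err = np.max(np.abs(Syy - Syy_fact))
print(err)
```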
(b) Determine the optimal realizable Wiener filter for predicting one step into the future: that
is, estimate 𝑥𝑛+1 on the basis of 𝑌𝑛 .
The prediction part is handled by defining 𝑥1 (𝑛) = 𝑥𝑛+1 . The problem of predicting 𝑥𝑛+1
on the basis of 𝑌𝑛 is equivalent to the problem of estimating 𝑥1 (𝑛) on the basis of 𝑌𝑛 .
Using the filtering equation 𝑋1 (𝑧) = 𝑧𝑋(𝑧), we find
S_{x1 y}(z) = z S_xy(z) = z / [(1 − 0.5z^{−1})(1 − 0.5z)]
The causal part of G(z) = S_{x1 y}(z)/B(z^{−1}) = z / [(1 − 0.5z^{−1})(1 − 0.4z)] is found by first doing the partial fraction expansion and keeping only the poles inside the unit circle:

[G(z)]_+ = 0.625 / (1 − 0.5z^{−1})

Dividing by σ_ε² B(z) then gives

H₁(z) = 0.1 / (1 − 0.4z^{−1}),   or,   x̂_{n+1/n} = 0.4 x̂_{n/n−1} + 0.1 y_n
where we denoted x̂_{n+1/n} = x̂₁(n). We note that the prediction filter H₁(z) is related to the estimation filter H(z) by H₁(z) = 0.5H(z). Since both filters have the same input, namely y_n, it follows that

x̂_{n+1/n} = 0.5 x̂_{n/n}
ε₁ = E[e²_{n+1/n}] = ∮_{u.c.} [S_{x1 x1}(z) − H₁(z) S_{y x1}(z)] dz/(2πjz) = ∮_{u.c.} dz / [2πj (z − 0.4)(1 − 0.5z)] = 1.25
This is slightly worse than the estimation error ε = 1, because the estimation filter uses one more observation than the prediction filter and therefore makes a better estimate.
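Both error values can be verified numerically (a Python sketch, using the generic mean-square-error integral E[e²] = ∫ (S_dd − 2 Re{S_dy H*} + |H|² S_yy) dω/2π for an estimate x̂ = H y of a desired signal d):

```python
import numpy as np

# MSE of the Wiener estimator H and the one-step predictor H1,
# obtained by integrating the error spectrum over one period.
N = 1 << 16
w = np.linspace(-np.pi, np.pi, N, endpoint=False)
ejw = np.exp(1j * w)

Sxx = 1.0 / np.abs(1 - 0.5 * np.conj(ejw))**2   # = 1/((1-0.5z^-1)(1-0.5z))
Syy = Sxx + 5.0

H  = 0.2 / (1 - 0.4 * np.conj(ejw))             # estimation filter
H1 = 0.1 / (1 - 0.4 * np.conj(ejw))             # one-step prediction filter

def mse(Hf, Sdy):
    # E[e^2] = integral of S_dd - 2 Re{S_dy H*} + |H|^2 S_yy over dw/2pi
    See = Sxx - 2 * np.real(Sdy * np.conj(Hf)) + np.abs(Hf)**2 * Syy
    return See.mean()                           # grid mean = dw/2pi integral

eps  = mse(H, Sxx)         # filtering:  S_dy = S_xy = S_xx
eps1 = mse(H1, ejw * Sxx)  # prediction: S_dy(w) = e^{jw} S_xx(w)
print(round(eps, 4), round(eps1, 4))  # expect about 1.0 and 1.25
```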
Problem 3.2
Show that the average power of the output e_n can be expressed in the two alternative forms:

E[e_n²] = ∫_{−π}^{π} S_yy(ω) |A(ω)|² dω/2π = aᵀ R_yy a
Solution
e_n = Σ_{m=0}^{M} a_m y_{n−m} = [a₀, a₁, …, a_M] [y_n, y_{n−1}, …, y_{n−M}]ᵀ = aᵀ y(n)

so that

E[e_n²] = E[aᵀ y(n) y(n)ᵀ a] = aᵀ E[y(n) y(n)ᵀ] a = aᵀ R_yy a

where R_yy = E[y(n)y(n)ᵀ]. Its matrix elements are the autocorrelation lags, (R_yy)_{ij} = R_yy(i − j). Alternatively, in the frequency domain:
E[e_n²] = R_ee(0) = ∫_{−π}^{π} S_ee(ω) dω/2π = ∫_{−π}^{π} |A(ω)|² S_yy(ω) dω/2π
where we used eqn (10) from Modul 2a – Random Process as well as 𝑆𝑒𝑒 (𝜔) = |𝐴(𝜔)|2 𝑆𝑦𝑦 (𝜔),
which follows from the fact that 𝑒𝑛 is the output of the linear filter 𝐴(𝑧) when the input is 𝑦𝑛 .
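For a concrete numerical check (a Python sketch; the AR(1) model y_n = 0.5y_{n−1} + ε_n and the coefficient vector a below are made-up examples, not part of the problem), both forms can be evaluated and compared:

```python
import numpy as np

# Example process y_n = 0.5 y_{n-1} + eps_n with unit-variance white eps_n:
# R_yy(k) = 0.5^|k| / (1 - 0.25).  The coefficient vector a is arbitrary.
a = np.array([1.0, -0.5, 0.25])
M = len(a) - 1

k = np.arange(M + 1)
Ryy = 0.5**np.abs(k[:, None] - k[None, :]) / (1 - 0.25)   # Toeplitz autocorrelation matrix
quad = a @ Ryy @ a                                        # a^T R_yy a

N = 1 << 14
w = np.linspace(-np.pi, np.pi, N, endpoint=False)
A = sum(a[m] * np.exp(-1j * w * m) for m in range(M + 1)) # A(w)
Syy = 1.0 / np.abs(1 - 0.5 * np.exp(-1j * w))**2          # power spectrum of y
integral = np.mean(np.abs(A)**2 * Syy)                    # dw/2pi integral

print(round(quad, 6), round(integral, 6))                 # the two forms agree
```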
Problem 3.3
Consider the two autoregressive random signals 𝑦𝑛 and 𝑦𝑛′ generated by the two signal models:
ε_n → [1/A(z)] → y_n,   ε′_n → [1/A′(z)] → y′_n
(a) Suppose 𝑦𝑛 is filtered through the analysis filter 𝐴′ (𝑧) of 𝑦𝑛′ producing the output signal
𝑒𝑛 ; that is,
e_n = Σ_{m=0}^{M} a′_m y_{n−m}        (block diagram: y_n → [A′(z)] → e_n)
If 𝑦𝑛 were to be filtered through its own analysis filter 𝐴(𝑧), it would produce the
innovations sequence 𝜖𝑛 . Show that the average power of 𝑒𝑛 compared to the average
power of 𝜖𝑛 is given by:
σ_e² / σ_ε² = a′ᵀ R_yy a′ / (aᵀ R_yy a) = ∫_{−π}^{π} |A′(ω)/A(ω)|² dω/2π ≡ ‖A′/A‖²
where a, a′, and R_yy have the same meaning as in Problem 3.2 above. This ratio can be taken as a measure of similarity between the two signal models; its logarithm is commonly used as a spectral distance measure between them.
Solution
S_yy(ω) = σ_ε² |B(ω)|² = σ_ε² / |A(ω)|²
Using the result of Problem 3.2 above, applied to the filter A′(z), we find
σ_e² = E[e_n²] = ∫_{−π}^{π} |A′(ω)|² S_yy(ω) dω/2π = a′ᵀ R_yy a′,   or,

σ_e² = ∫_{−π}^{π} |A′(ω)|² (σ_ε² / |A(ω)|²) dω/2π

Dividing by σ_ε² = aᵀ R_yy a (the innovations variance, obtained when y_n is filtered through its own analysis filter A(z)) then gives

σ_e² / σ_ε² = ∫_{−π}^{π} |A′(ω)/A(ω)|² dω/2π = a′ᵀ R_yy a′ / (aᵀ R_yy a)
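This identity can be checked numerically (a Python sketch; the two first-order models A(z) = 1 − 0.5z^{−1} and A′(z) = 1 − 0.7z^{−1} are illustrative choices, not from the problem):

```python
import numpy as np

# Two example analysis filters; y_n is generated by A(z) = 1 - 0.5 z^-1
a  = np.array([1.0, -0.5])
ap = np.array([1.0, -0.7])

# R_yy for the model A with unit-variance innovations: R_yy(k) = 0.5^|k|/(1-0.25)
k = np.arange(2)
Ryy = 0.5**np.abs(k[:, None] - k[None, :]) / (1 - 0.25)

sig_eps = a @ Ryy @ a                          # a^T R_yy a = innovations variance = 1
ratio_quad = (ap @ Ryy @ ap) / sig_eps         # a'^T R a' / a^T R a

N = 1 << 14
w = np.linspace(-np.pi, np.pi, N, endpoint=False)
A  = 1 - 0.5 * np.exp(-1j * w)
Ap = 1 - 0.7 * np.exp(-1j * w)
ratio_int = np.mean(np.abs(Ap / A)**2)         # ||A'/A||^2 as a dw/2pi integral

print(round(ratio_quad, 6), round(ratio_int, 6))
```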
(b) Alternatively, show that if y′_n were to be filtered through y_n's analysis filter A(z), resulting in e′_n = Σ_{m=0}^{M} a_m y′_{n−m}, then

σ_e′² / σ_ε′² = aᵀ R′_yy a / (a′ᵀ R′_yy a′) = ∫_{−π}^{π} |A(ω)/A′(ω)|² dω/2π ≡ ‖A/A′‖²

The proof is identical to that of part (a), with the roles of the two models interchanged.
Solution

You can use the Scilab code given in the System Identification example.
(a) Generate 1000 samples of a zero-mean, unit-variance, white Gaussian noise sequence x_n, n = 0, 1, …, 999, and filter them through the filter defined by the difference equation

y_n = a y_{n−1} + (1 − a) x_n

with a = 0.95. To avoid the transient effects introduced by the filter, discard the first 900 output samples and save the last 100 samples of y_n. Compute the sample autocorrelation of y_n from this length-100 block of samples.
(b) Determine the theoretical autocorrelation 𝑅𝑦𝑦 (𝑘), and on the same graph, plot the
theoretical and sample autocorrelations versus k. Do they agree?
Hence R_yy(k) = (1 − a)² a^k / (1 − a²) = 0.05² (0.95)^k / (1 − 0.95²) = 0.0256 (0.95)^k
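The experiment can also be sketched in Python instead of Scilab (assuming, as above, the model y_n = 0.95 y_{n−1} + 0.05 x_n; with only 100 samples the agreement between sample and theoretical autocorrelations is rough):

```python
import numpy as np

rng = np.random.default_rng(0)          # fixed seed for repeatability
a = 0.95
x = rng.standard_normal(1000)           # zero-mean, unit-variance white Gaussian noise

# y_n = a*y_{n-1} + (1-a)*x_n, zero initial condition
y = np.zeros(1000)
for n in range(1000):
    y[n] = a * (y[n - 1] if n > 0 else 0.0) + (1 - a) * x[n]

yb = y[-100:]                           # discard the transient, keep the last 100 samples
lags = np.arange(8)
R_hat = np.array([np.mean(yb[:100 - k] * yb[k:]) for k in lags])  # sample autocorrelation
R_th  = (1 - a)**2 * a**lags / (1 - a**2)                         # = 0.0256 * 0.95^k

print(np.round(R_hat, 4))
print(np.round(R_th, 4))
```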
Problem 3.6
(a) Suppose the input is zero-mean, unit-variance, white noise. Compute the output spectral
density 𝑆𝑦𝑦 (𝑧) and power spectrum 𝑆𝑦𝑦 (𝜔) and plot it roughly versus frequency.
(b) Compute the output autocorrelation 𝑅𝑦𝑦 (𝑘) for all lags k.
(d) What signal 𝑠[𝑛] can pass through this filter and remain entirely unaffected (at least in the
steady-state region)?
(e) How can the filter coefficients be changed so that (i) the noise reduction capability of the
filter is improved, while at the same time (ii) the above signal 𝑠[𝑛] still goes through
unchanged? Explain any tradeoffs.
(a) The filter is

H(z) = 0.36 / (1 + 0.64z^{−2}) = 0.36 / [(1 − 0.8j z^{−1})(1 + 0.8j z^{−1})]

S_yy(z) = σ_x² H(z) H(z^{−1}) = 0.36² / [(1 + 0.64z^{−2})(1 + 0.64z²)],   with σ_x² = 1

Setting z = e^{jω}:

S_yy(ω) = 0.1296 / [(1 + 0.64e^{−2jω})(1 + 0.64e^{2jω})] = 0.1296 / (1.4096 + 1.28 cos 2ω)

The spectrum peaks at ω = π/2, where S_yy(π/2) = 0.1296/0.1296 = 1, and is smallest at ω = 0 and ω = π, where S_yy ≈ 0.048.
(b) The autocorrelation is obtained by performing the partial fraction expansion of S_yy(z) and keeping only the poles inside the unit circle. Doing this, we get

R_yy(k) = 0.2195 (0.8)^k cos(πk/2),   for k ≥ 0
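The closed form above can be cross-checked against a numerical inverse DTFT of S_yy(ω) (a Python sketch):

```python
import numpy as np

# Sample S_yy(w) on a dense uniform grid over [0, 2pi); the inverse FFT then
# computes R[k] = integral of S(w) e^{jwk} dw/2pi (aliasing ~0.8^N, negligible).
N = 4096
w = 2 * np.pi * np.arange(N) / N
Syy = 0.1296 / (1.4096 + 1.28 * np.cos(2 * w))

R = np.real(np.fft.ifft(Syy))
k = np.arange(8)
R_th = (0.36**2 / (1 - 0.64**2)) * 0.8**k * np.cos(np.pi * k / 2)

print(np.round(R[:8], 4))
print(np.round(R_th, 4))
```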
(c) Since σ_y² = R_yy(0), we find for the noise reduction ratio: NRR = σ_y²/σ_x² = R_yy(0) = 0.2195.
(d) The frequency response at ω = π/2 is H(π/2) = 0.36/(1 − 0.64) = 1. Hence any sinusoid of the form s[n] = A cos(πn/2 + φ) passes through the filter completely unaffected, at least in the steady-state region.
(e) By moving the poles closer to the unit circle, say replacing 0.64 by a value ρ nearer to 1 while scaling the numerator to 1 − ρ so that H(π/2) = 1 is preserved, the noise reduction ratio becomes smaller, but the steady state is reached more slowly. This is the basic tradeoff between speed of response and effective noise reduction.
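This tradeoff can be illustrated numerically (a sketch; the parametrization H(z) = (1 − ρ)/(1 + ρz^{−2}) is our own generalization of the given filter, chosen so that H(π/2) = 1 for every ρ):

```python
import numpy as np

def nrr(rho, N=4096):
    # Noise reduction ratio NRR = integral of |H(w)|^2 dw/2pi, which for
    # H(z) = (1-rho)/(1 + rho z^-2) equals (1-rho)/(1+rho) in closed form.
    w = 2 * np.pi * np.arange(N) / N
    H = (1 - rho) / (1 + rho * np.exp(-2j * w))
    return np.mean(np.abs(H)**2)

for rho in (0.64, 0.8, 0.9, 0.95):
    # Gain at w = pi/2 stays exactly 1; NRR shrinks as the poles approach the circle.
    print(rho, round(nrr(rho), 4), round((1 - rho) / (1 + rho), 4))
```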