DTS Tutorial
Frequency-Domain
Eugenio Schuster
Lehigh University
Mechanical Engineering and Mechanics
[email protected]
For a linear time-invariant (LTI) system with impulse response h[n], the output sequence y[n] is
related to the input sequence u[n] through the convolution sum,
$$ y[n] = h[n] * u[n] = \sum_{k=-\infty}^{\infty} h[k]\, u[n-k]. \qquad (1) $$
When the input is the complex exponential $u[n] = e^{j\omega n}$, the output becomes
$$ y[n] = \sum_{k=-\infty}^{\infty} h[k]\, e^{j\omega (n-k)} = \left( \sum_{k=-\infty}^{\infty} h[k]\, e^{-j\omega k} \right) e^{j\omega n}. \qquad (2) $$
Defining
$$ H(e^{j\omega}) = \sum_{k=-\infty}^{\infty} h[k]\, e^{-j\omega k}, \qquad (3) $$
we can write
$$ y[n] = H(e^{j\omega})\, e^{j\omega n}. \qquad (4) $$
As a result, the complex exponential $e^{j\omega n}$ is an eigenfunction of the LTI system, with associated eigenvalue $H(e^{j\omega})$. The eigenvalue $H(e^{j\omega})$ is called the Frequency Response of the system and describes the change in amplitude and phase experienced by the complex exponential input.
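As a quick numerical check of eqs. (1)-(4), the following sketch (Python/NumPy, not part of the original tutorial) filters a complex exponential through an arbitrary example FIR impulse response and verifies the eigenfunction property; the particular $h[n]$ and $\omega$ are illustrative assumptions only.

import numpy as np

h = np.array([0.5, 1.0, -0.25])          # example impulse response h[n], n = 0, 1, 2
omega = 0.3 * np.pi                      # test frequency (arbitrary)
n = np.arange(200)
u = np.exp(1j * omega * n)               # complex-exponential input e^{j omega n}

# Frequency response H(e^{j omega}) = sum_k h[k] e^{-j omega k}, eq. (3)
H = np.sum(h * np.exp(-1j * omega * np.arange(len(h))))

# Output by direct convolution, eq. (1), truncated to the input length
y = np.convolve(h, u)[:len(n)]

# Away from the start-up transient, y[n] = H(e^{j omega}) e^{j omega n}, eq. (4)
print(np.allclose(y[len(h):], H * u[len(h):]))   # True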
An important distinction exists between continuous-time and discrete-time LTI systems. While in the continuous-time domain we need to specify the frequency response $H(\Omega)$ over the interval $-\infty < \Omega < \infty$, in the discrete-time domain we only need to specify the frequency response $H(e^{j\omega})$ over an interval of length $2\pi$, e.g., $-\pi < \omega \le \pi$. This property is based on the periodicity of the complex exponential. Using the fact that $e^{j 2\pi r} = 1$ for any integer $r$, we can show that
$$ e^{j(\omega + 2\pi r) n} = e^{j\omega n}\, e^{j 2\pi r n} = e^{j\omega n}. \qquad (5) $$
As we will show later, a broad class of input signals can be represented by a linear combination
of complex exponentials. In this case, the knowledge of the frequency response allows us to find
the output of the LTI system.
A broad class of sequences can indeed be represented by the Fourier integral
$$ u[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} U(e^{j\omega})\, e^{j\omega n}\, d\omega, \qquad (6) $$
where
$$ U(e^{j\omega}) = \sum_{n=-\infty}^{\infty} u[n]\, e^{-j\omega n}. \qquad (7) $$
The Inverse Fourier Transform (6) represents $u[n]$ as a superposition of infinitesimal complex exponentials over the interval $-\pi < \omega \le \pi$. The Discrete-Time Fourier Transform (7), or simply the Fourier Transform in this tutorial, determines how much of each frequency component over the interval $-\pi < \omega \le \pi$ is required to synthesize $u[n]$ using eq. (6). The Fourier Transform is usually referred to as the Spectrum. Comparing eqs. (3) and (7), note that the frequency response of an LTI system is the Fourier Transform of its impulse response $h[n]$. As we stated above, the frequency response is periodic. Likewise, the Fourier Transform is periodic with period $2\pi$.
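The sketch below (Python/NumPy, with an arbitrary short test sequence) evaluates the Fourier Transform (7) on a dense frequency grid by the direct sum and checks its $2\pi$-periodicity; it is only an illustration.

import numpy as np

u = np.array([1.0, 2.0, 0.5, -1.0])                 # arbitrary finite-length u[n]
n = np.arange(len(u))
omega = np.linspace(-np.pi, np.pi, 1001)            # grid over -pi < omega <= pi

# U(e^{j omega}) = sum_n u[n] e^{-j omega n}, eq. (7)
U = np.array([np.sum(u * np.exp(-1j * w * n)) for w in omega])

# Periodicity with period 2*pi: U(e^{j(omega + 2*pi)}) = U(e^{j omega})
U_shift = np.array([np.sum(u * np.exp(-1j * (w + 2 * np.pi) * n)) for w in omega])
print(np.allclose(U, U_shift))                      # True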
Z-Transform
The Z-Transform of a sequence $u[n]$ is defined as
$$ U(z) = \sum_{n=-\infty}^{\infty} u[n]\, z^{-n}. \qquad (8) $$
Comparing eqs. (7) and (8), we note that the Fourier Transform can be obtained by evaluating the Z-Transform on the unit circle ($z = e^{j\omega}$). Based on this property, the frequency response $H(e^{j\omega})$ of a discrete-time LTI system with impulse response $h[n]$ can be obtained by evaluating the Z-Transform $H(z)$ at $z = e^{j\omega}$.
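A minimal sketch of this property, assuming an arbitrary FIR example and using scipy.signal.freqz purely as a convenience for evaluating $H(z)$ on the unit circle:

import numpy as np
from scipy.signal import freqz

h = np.array([0.5, 1.0, -0.25])                     # example impulse response h[n]
omega = np.linspace(0, np.pi, 512)

# Direct substitution z = e^{j omega} in H(z) = sum_k h[k] z^{-k}, eq. (8)
z = np.exp(1j * omega)
H_direct = np.array([np.sum(h * zi ** (-np.arange(len(h)))) for zi in z])

# Same result with scipy.signal.freqz (numerator h, denominator 1)
w, H_freqz = freqz(h, 1, worN=omega)
print(np.allclose(H_direct, H_freqz))               # True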
A sequence $u[n]$ is generally a representation of a sampled signal. Given a continuous signal $u(t)$, its sampled version $u_s(t)$ can be written as $u_s(t) = u(t)\, s(t)$ with
$$ s(t) = \sum_{n=-\infty}^{\infty} \delta(t - nT_s), \qquad (9) $$
where $\delta(t)$ is the Dirac delta function and $T_s$ is the sampling period. In this case we write
$$ u_s(t) = u(t) \sum_{n=-\infty}^{\infty} \delta(t - nT_s) = \sum_{n=-\infty}^{\infty} u(nT_s)\, \delta(t - nT_s), \qquad (10) $$
and
$$ u[n] = u(nT_s). \qquad (11) $$
In the continuous-time domain, the signal $u(t)$ and its Fourier Transform $U(\Omega)$ are related by
$$ u(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} U(\Omega)\, e^{j\Omega t}\, d\Omega, \qquad (12) $$
$$ U(\Omega) = \int_{-\infty}^{\infty} u(t)\, e^{-j\Omega t}\, dt. \qquad (13) $$
The Fourier Transform of the sampled signal $u_s(t)$ is then
$$ U_s(\Omega) = \int_{-\infty}^{\infty} u_s(t)\, e^{-j\Omega t}\, dt = \sum_{n=-\infty}^{\infty} u(nT_s) \int_{-\infty}^{\infty} \delta(t - nT_s)\, e^{-j\Omega t}\, dt = \sum_{n=-\infty}^{\infty} u(nT_s)\, e^{-j\Omega n T_s}, \qquad (14) $$
while the Fourier Transform of the sequence $u[n]$ is
$$ U(e^{j\omega}) = \sum_{n=-\infty}^{\infty} u[n]\, e^{-j\omega n}. \qquad (15) $$
Comparing eqs. (14) and (15), and taking into account eq. (11), we conclude that
$$ U(e^{j\omega})\big|_{\omega = \Omega T_s} = U_s(\Omega). \qquad (16) $$
The Discrete-Time Fourier Transform $U(e^{j\omega})$ is simply a frequency-scaled version of the Continuous-Time Fourier Transform $U_s(\Omega)$, where the scale factor is given by
$$ \omega = \Omega T_s = \frac{\Omega}{f_s} = 2\pi \frac{f}{f_s}. \qquad (17) $$
The Nyquist theorem relates the sampling frequency $f_s = 1/T_s$ to the maximum frequency $f_{max}$ of the signal before sampling. In order to avoid aliasing distortion, it is required that
$$ f_s > 2 f_{max}. \qquad (18) $$
Therefore, every time we sample with frequency $f_s$ we are assuming that the maximum frequency of the signal to be sampled is less than $f_s/2$. In other words, we are assuming that
$$ U(\Omega) \;\begin{cases} \neq 0, & |\Omega| < \pi f_s, \\ = 0, & \text{otherwise}, \end{cases} \qquad (19) $$
$$ U(e^{j\omega}) \;\begin{cases} \neq 0, & |\omega| < \pi, \\ = 0, & \text{otherwise}, \end{cases} \qquad (20) $$
implying that the interval $-\pi < \omega \le \pi$ in the discrete-time domain corresponds to the interval $-\pi f_s < \Omega \le \pi f_s$ ($-f_s/2 < f \le f_s/2$) in the continuous-time domain.
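A small numerical illustration of the mapping (17) and the Nyquist condition (18); the sampling rate and signal frequency below are arbitrary assumptions.

import numpy as np

fs = 1000.0                  # sampling frequency [Hz], Ts = 1/fs (assumed)
f  = 120.0                   # frequency of a continuous-time sinusoid [Hz] (assumed)

assert fs > 2 * f            # Nyquist condition (18): no aliasing

Ts    = 1.0 / fs
Omega = 2 * np.pi * f        # continuous-time frequency [rad/s]
omega = Omega * Ts           # discrete-time frequency [rad/sample], eq. (17)

print(omega, 2 * np.pi * f / fs)     # identical values, and |omega| < pi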
We come back now to the idea of representing signals by a linear combination of complex exponentials, and we consider at this time the periodic sequence $\tilde{u}[n]$ with period $N$, i.e., $\tilde{u}[n] = \tilde{u}[n + rN]$ for any integer $r$. In this case, as in the continuous case, we can represent $\tilde{u}[n]$ by its Fourier Series,
$$ \tilde{u}[n] = \frac{1}{N} \sum_{k} \tilde{U}[k]\, e^{j \frac{2\pi}{N} k n}. \qquad (21) $$
By the Fourier Series, the periodic sequence is represented as a sum of complex exponentials with frequencies that are integer multiples of the fundamental frequency $2\pi/N$. We say that these are harmonically related complex exponentials. The Fourier Series representing a continuous-time periodic signal requires, in general, an infinite number of harmonically related complex exponentials, whereas the Fourier Series for any discrete-time periodic signal requires only $N$ harmonically related complex exponentials. This is explained by the periodicity of the complex exponential,
$$ e^{j \frac{2\pi}{N}(k + rN) n} = e^{j \frac{2\pi}{N} k n}\, e^{j 2\pi r n} = e^{j \frac{2\pi}{N} k n} \qquad (22) $$
for any integer $r$. Thus, the Discrete Fourier Series of the periodic sequence $\tilde{u}[n]$ with period $N$ can be written as
$$ \tilde{u}[n] = \frac{1}{N} \sum_{k=0}^{N-1} \tilde{U}[k]\, e^{j \frac{2\pi}{N} k n}, \qquad (23) $$
where the Discrete Fourier Series coefficients are given by
$$ \tilde{U}[k] = \sum_{n=0}^{N-1} \tilde{u}[n]\, e^{-j \frac{2\pi}{N} k n}. \qquad (24) $$
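The following sketch computes the Discrete Fourier Series coefficients (24) of an arbitrary period-8 example and verifies the synthesis equation (23); it is illustrative only (tildes are dropped in the variable names).

import numpy as np

N = 8
u_tilde = np.tile(np.array([1.0, 3.0, -2.0, 0.5, 0.0, 1.5, -1.0, 2.0]), 3)  # periodic sequence
n = np.arange(N)
k = np.arange(N)

# One period is enough for the analysis equation (24)
U_tilde = np.array([np.sum(u_tilde[:N] * np.exp(-2j * np.pi * kk * n / N)) for kk in k])

# Synthesis equation (23) recovers the sequence (checked over one period)
u_rec = np.array([np.sum(U_tilde * np.exp(2j * np.pi * k * nn / N)) / N for nn in n])
print(np.allclose(u_rec, u_tilde[:N]))      # True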
We wonder now how we can represent periodic sequences by the Fourier Transform. To answer this question, we must study the convergence of the infinite sum (7). A sufficient condition for convergence can be found as follows:
$$ \left| U(e^{j\omega}) \right| = \left| \sum_{n=-\infty}^{\infty} u[n]\, e^{-j\omega n} \right| \le \sum_{n=-\infty}^{\infty} |u[n]| \left| e^{-j\omega n} \right| \le \sum_{n=-\infty}^{\infty} |u[n]| < \infty. \qquad (25) $$
Sequences for which
$$ \sum_{n=-\infty}^{\infty} |u[n]| < \infty \qquad (26) $$
are called absolutely summable. When the sequence $u[n]$ is absolutely summable, the Fourier Transform $U(e^{j\omega})$ not only exists but converges uniformly to a continuous function of $\omega$. For those sequences that are not absolutely summable but are square summable, i.e.,
$$ \sum_{n=-\infty}^{\infty} |u[n]|^2 < \infty, \qquad (27) $$
the Fourier Transform $U(e^{j\omega})$ also exists, but the convergence condition is relaxed to mean-square convergence. This means that, given
$$ U(e^{j\omega}) = \sum_{n=-\infty}^{\infty} u[n]\, e^{-j\omega n} \qquad (28) $$
and
$$ U_M(e^{j\omega}) = \sum_{n=-M}^{M} u[n]\, e^{-j\omega n}, \qquad (29) $$
it follows that
$$ \lim_{M \to \infty} \int_{-\pi}^{\pi} \left| U(e^{j\omega}) - U_M(e^{j\omega}) \right|^2 d\omega = 0, \qquad (30) $$
which means that the total energy of the error approaches zero as $M \to \infty$. In summary,
Uniform convergence of the Fourier Transform $\Longleftrightarrow$ the sequence is absolutely summable.
Mean-square convergence of the Fourier Transform $\Longleftrightarrow$ the sequence is square summable.
The periodic sequence $\tilde{u}[n]$ satisfies neither (26) nor (27). However, sequences that can be expressed as a sum of complex exponentials, as is the case for all periodic sequences, can be considered to have Fourier Transforms in the form of impulse trains. For instance, the impulse train
$$ U(e^{j\omega}) = \sum_{r=-\infty}^{\infty} 2\pi\, \delta(\omega - \omega_o + 2\pi r), \qquad (31) $$
where we assume $-\pi < \omega_o \le \pi$, corresponds to the Fourier Transform of the complex exponential sequence $e^{j\omega_o n}$. To show this, we replace this expression in (6) to obtain
$$ u[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} U(e^{j\omega})\, e^{j\omega n}\, d\omega = \frac{1}{2\pi} \int_{-\pi}^{\pi} 2\pi\, \delta(\omega - \omega_o)\, e^{j\omega n}\, d\omega = e^{j\omega_o n}. \qquad (32) $$
More generally, the Fourier Transform of a sum of complex exponentials
$$ u[n] = \sum_{k} a_k\, e^{j\omega_k n} \qquad (33) $$
is the impulse train
$$ U(e^{j\omega}) = \sum_{r=-\infty}^{\infty} \sum_{k} 2\pi\, a_k\, \delta(\omega - \omega_k + 2\pi r). \qquad (34) $$
This means that if $\tilde{u}[n]$ is periodic with period $N$ and has Discrete Fourier Series coefficients $\tilde{U}[k]$, we can write
$$ \tilde{u}[n] = \frac{1}{N} \sum_{k=0}^{N-1} \tilde{U}[k]\, e^{j \frac{2\pi}{N} k n}, \qquad (35) $$
and the Fourier Transform $\tilde{U}(e^{j\omega})$ is defined to be the impulse train
$$ \tilde{U}(e^{j\omega}) = \sum_{r=-\infty}^{\infty} \sum_{k=0}^{N-1} \frac{2\pi}{N}\, \tilde{U}[k]\, \delta\!\left(\omega - \frac{2\pi k}{N} + 2\pi r\right) = \sum_{k=-\infty}^{\infty} \frac{2\pi}{N}\, \tilde{U}[k]\, \delta\!\left(\omega - \frac{2\pi k}{N}\right). \qquad (36) $$
In the last sum, $\tilde{U}[k]$ is interpreted as a periodic sequence with period $N$.
Consider now the periodic impulse train
$$ p[n] = \sum_{r=-\infty}^{\infty} \delta[n - rN] = \begin{cases} 1, & n = rN, \\ 0, & \text{otherwise}, \end{cases} \qquad (37) $$
where $r$ is an integer and $N$ is the period. We can compute first its Discrete Fourier Series coefficients,
$$ \tilde{P}[k] = \sum_{n=0}^{N-1} p[n]\, e^{-j \frac{2\pi}{N} k n} = \sum_{n=0}^{N-1} \delta[n]\, e^{-j \frac{2\pi}{N} k n} = 1. \qquad (38) $$
Using (36), the Fourier Transform of the periodic impulse train is then
$$ P(e^{j\omega}) = \sum_{k=-\infty}^{\infty} \frac{2\pi}{N}\, \delta\!\left(\omega - \frac{2\pi k}{N}\right). \qquad (39) $$
The Fourier Transform of the periodic impulse train becomes important when we want to relate
finite-length and periodic sequences.
Consider now a finite-length sequence $u[n]$ ($u[n] = 0$ everywhere except over the interval $0 \le n \le N-1$). We can construct an associated periodic sequence $\tilde{u}[n]$ as the convolution of the finite-length sequence with the impulse train (37) of period $N$:
$$ \tilde{u}[n] = u[n] * p[n] = u[n] * \sum_{r=-\infty}^{\infty} \delta[n - rN] = \sum_{r=-\infty}^{\infty} u[n - rN]. \qquad (40) $$
The periodic sequence $\tilde{u}[n]$ is a set of periodically repeated copies of the finite-length sequence $u[n]$. Assuming that the Fourier Transform of $u[n]$ is $U(e^{j\omega})$, and recalling that the Fourier Transform of a convolution is the product of the Fourier Transforms, we can obtain the Fourier Transform of $\tilde{u}[n]$ as
$$ \tilde{U}(e^{j\omega}) = U(e^{j\omega})\, P(e^{j\omega}) = U(e^{j\omega}) \sum_{k=-\infty}^{\infty} \frac{2\pi}{N}\, \delta\!\left(\omega - \frac{2\pi k}{N}\right) = \sum_{k=-\infty}^{\infty} \frac{2\pi}{N}\, U\!\left(e^{j \frac{2\pi k}{N}}\right) \delta\!\left(\omega - \frac{2\pi k}{N}\right), \qquad (41) $$
where we have used (39). This result must coincide with our definition (36), and therefore it must be that
$$ \tilde{U}[k] = U(e^{j\omega})\big|_{\omega = \frac{2\pi k}{N}} = U\!\left(e^{j \frac{2\pi k}{N}}\right). \qquad (42) $$
This very important result implies that the period-$N$ sequence of Discrete Fourier Series coefficients $\tilde{U}[k]$ consists of equally spaced samples of the Fourier Transform of the finite-length sequence $u[n]$ obtained by extracting one period of $\tilde{u}[n]$. This corresponds to sampling the Fourier Transform at $N$ equally spaced frequencies over the interval $-\pi < \omega \le \pi$ with spacing $2\pi/N$.
As we defined
$$ u[n] = \begin{cases} \tilde{u}[n], & 0 \le n \le N-1, \\ 0, & \text{otherwise}, \end{cases} \qquad (43) $$
$$ \tilde{u}[n] = u[(n \bmod N)], \qquad (44) $$
we define now, for consistency (and to maintain the duality between time and frequency),
$$ U[k] = \begin{cases} \tilde{U}[k], & 0 \le k \le N-1, \\ 0, & \text{otherwise}, \end{cases} \qquad (45) $$
$$ \tilde{U}[k] = U[(k \bmod N)]. \qquad (46) $$
We have used the fact that the Discrete Fourier Series sequence $\tilde{U}[k]$ is itself a sequence with period $N$. The sequence $U[k]$ is named the Discrete Fourier Transform (DFT) and is written as
$$ U[k] = \sum_{n=0}^{N-1} u[n]\, e^{-j \frac{2\pi}{N} k n}, \qquad (47) $$
$$ u[n] = \frac{1}{N} \sum_{k=0}^{N-1} U[k]\, e^{j \frac{2\pi}{N} k n}. \qquad (48) $$
The Discrete Fourier Transform (47) gives us the Discrete-Time Fourier Transform (or simply the Fourier Transform) (7) at $N$ equally spaced frequencies over the interval $0 \le \omega < 2\pi$ (or $-\pi < \omega \le \pi$):
$$ U[k] = U(e^{j\omega})\big|_{\omega = \frac{2\pi k}{N}}, \qquad 0 \le k \le N-1. \qquad (49) $$
The Discrete Fourier Transform pair can thus be summarized as
$$ U[k] = \sum_{n=0}^{N-1} u[n]\, e^{-j \frac{2\pi}{N} k n}, \qquad 0 \le k \le N-1, \qquad (50) $$
$$ u[n] = \frac{1}{N} \sum_{k=0}^{N-1} U[k]\, e^{j \frac{2\pi}{N} k n}, \qquad 0 \le n \le N-1. \qquad (51) $$
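The sketch below (Python/NumPy, with an arbitrary test sequence) verifies numerically that np.fft.fft implements the sum (50) and that, per eq. (49), the DFT samples the Fourier Transform at $\omega = 2\pi k/N$.

import numpy as np

u = np.array([1.0, -0.5, 2.0, 0.25, -1.0, 0.75])     # arbitrary finite-length u[n]
N = len(u)
n = np.arange(N)
k = np.arange(N)

U_dft = np.fft.fft(u)                                 # eq. (50)

# Fourier Transform (7) sampled at omega = 2*pi*k/N, eq. (49)
U_dtft_samples = np.array([np.sum(u * np.exp(-1j * (2 * np.pi * kk / N) * n)) for kk in k])

print(np.allclose(U_dft, U_dtft_samples))             # True
print(np.allclose(np.fft.ifft(U_dft), u))             # inverse DFT, eq. (51)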
Considering the relationship (49) between DFT and FT, we can write the spectrum as
$$ U(\omega) = U(e^{j\omega}) = \sum_{n=0}^{N-1} u[n]\, e^{-j\omega n}, \qquad \omega = \frac{2\pi}{N} k \quad (0 \le k \le N-1;\; 0 \le \omega < 2\pi). \qquad (52) $$
Considering the scaling (17) between sequences and sampled signals, we can write the spectrum as
$$ U(\Omega) = U(\omega)\big|_{\omega = \Omega T_s} = \sum_{n=0}^{N-1} u[n]\, e^{-j\Omega T_s n}, \qquad \Omega = \frac{2\pi k}{N T_s} \quad \left(0 \le k \le N-1;\; 0 \le \Omega < \frac{2\pi}{T_s} = 2\,\Omega_{max}\right), \qquad (53) $$
$$ U(f) = U(\omega)\big|_{\omega = 2\pi f / f_s} = \sum_{n=0}^{N-1} u[n]\, e^{-j \frac{2\pi f}{f_s} n}, \qquad f = \frac{k f_s}{N} \quad (0 \le k \le N-1;\; 0 \le f < f_s = 2 f_{max}). \qquad (54) $$
In addition to the periodicity of the DFT ($U(\omega + 2\pi) = U(\omega)$), we have that $U(-\omega) = U^*(\omega)$ for real $u[n]$. Therefore, the function $U(\omega)$ is uniquely defined by its values over the interval $[0, \pi]$. We associate high frequencies with frequencies close to $\pi$ and low frequencies with frequencies close to $0$. As a consequence of these properties, it is equivalent to define $U(\omega)$ over the interval $0 \le \omega < 2\pi$ or over the interval $-\pi < \omega \le \pi$. The DFT gives the values of the FT over either of these intervals with a frequency spacing equal to $2\pi/N$.
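A short illustration of the frequency axes (52)-(54) and of the conjugate symmetry $U(-\omega) = U^*(\omega)$ for a real sequence; the sampling rate and random test sequence are arbitrary assumptions.

import numpy as np

fs = 500.0                                          # assumed sampling frequency [Hz]
N  = 16
u  = np.random.default_rng(0).standard_normal(N)    # real test sequence u[n]

U = np.fft.fft(u)
k = np.arange(N)
f_axis = k * fs / N                                 # f = k*fs/N, eq. (54), 0 <= f < fs
omega_axis = 2 * np.pi * k / N                      # omega = 2*pi*k/N, eq. (52)

# Conjugate symmetry: bin N-k is the conjugate of bin k for real u[n]
print(np.allclose(U[1:], np.conj(U[-1:0:-1])))      # True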
Until now, we have assumed that the signals are deterministic. Sometimes the mechanism of signal generation is so complex that it is very difficult, if not impossible, to represent the signal as deterministic. In these cases, modeling the signal as an outcome of a random process is extremely useful. Each individual sample $u[n]$ of a particular signal is assumed to be an outcome of a random variable $\mathbf{u}_n$. The entire signal is represented by a collection of such random variables, one for each sample time, $-\infty < n < \infty$. The collection of these random variables is called a random process. We assume that the sequence $u[n]$ for $-\infty < n < \infty$ is generated by a random process with a specific probability distribution that underlies the signal.
An individual random variable $\mathbf{u}_n$ is described by the probability distribution function
$$ F_{u_n}(u_n, n) = \text{Probability}[\mathbf{u}_n \le u_n], \qquad (55) $$
where $\mathbf{u}_n$ denotes the random variable and $u_n$ is a particular value of $\mathbf{u}_n$. If $\mathbf{u}_n$ takes on a continuous range of values, it can be specified by the probability density function
$$ f_{u_n}(u_n, n) = \frac{\partial F_{u_n}(u_n, n)}{\partial u_n}, \qquad (56) $$
or, equivalently,
$$ F_{u_n}(u_n, n) = \int_{-\infty}^{u_n} f_{u_n}(x, n)\, dx. \qquad (57) $$
When we have two stochastic processes $\mathbf{u}_n$ and $\mathbf{v}_n$, the interdependence is described by the joint probability distribution function
$$ F_{u_n, v_m}(u_n, n, v_m, m) = \text{Probability}[\mathbf{u}_n \le u_n \text{ and } \mathbf{v}_m \le v_m] \qquad (58) $$
and the corresponding joint probability density function
$$ f_{u_n, v_m}(u_n, n, v_m, m) = \frac{\partial^2 F_{u_n, v_m}(u_n, n, v_m, m)}{\partial u_n\, \partial v_m}. \qquad (59) $$
When $F_{u_n, v_m}(u_n, n, v_m, m) = F_{u_n}(u_n, n)\, F_{v_m}(v_m, m)$ we say that the processes are independent.
It is often useful to characterize a random process in terms of its mean, variance, and autocorrelation. The mean of a random process $\mathbf{u}_n$ is defined as
$$ m_u[n] = m_{u_n} = E\{\mathbf{u}_n\} = \int_{-\infty}^{\infty} u_n\, f_{u_n}(u_n, n)\, du_n, \qquad (60) $$
where $E$ denotes the operator called mathematical expectation. The variance of $\mathbf{u}_n$ is defined as
$$ \sigma_u^2[n] = \sigma_{u_n}^2 = E\{(\mathbf{u}_n - m_{u_n})^2\} = \int_{-\infty}^{\infty} (u_n - m_{u_n})^2\, f_{u_n}(u_n, n)\, du_n, \qquad (61) $$
while the autocorrelation and autocovariance are defined as
$$ R_{uu}[n, m] = E\{\mathbf{u}_n \mathbf{u}_m\}, \qquad (62) $$
$$ C_{uu}[n, m] = E\{(\mathbf{u}_n - m_u[n])(\mathbf{u}_m - m_u[m])\}. \qquad (63) $$
In the same way, given two stochastic processes $\mathbf{u}_n$ and $\mathbf{v}_n$, we can define the cross-correlation as
$$ R_{uv}[n, m] = E\{\mathbf{u}_n \mathbf{v}_m\} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} u_n v_m\, f_{u_n, v_m}(u_n, n, v_m, m)\, du_n\, dv_m \qquad (64) $$
and the cross-covariance as
$$ C_{uv}[n, m] = E\{(\mathbf{u}_n - m_u[n])(\mathbf{v}_m - m_v[m])\}. \qquad (65) $$
In general, the statistical properties of a random process may depend on $n$. For a stationary process the statistical properties are invariant to a shift of the time origin. This means that the first-order averages, such as the mean and variance, are independent of time, and the second-order averages, such as the autocorrelation, depend only on the time difference. Thus, for a stationary process we can write
$$ m_u[n] = m_u = E\{u_n\}, \qquad (66) $$
$$ \sigma_u^2[n] = \sigma_u^2 = E\{(u_n - m_u)^2\}, \qquad (67) $$
$$ R_{uu}[n+m, n] = R_{uu}[m] = E\{u_{n+m}\, u_n\}. \qquad (68) $$
In many cases, the random processes are not stationary in the strict sense because their probability distributions are not time invariant, but eqs. (66)-(68) still hold. We call such random processes wide-sense stationary.
For a stationary random process, the essential characteristics of the process are represented by averages such as the mean, variance, and autocorrelation. Therefore, it is essential to be able to estimate these quantities from finite-length segments of data. An estimator for the mean value is the sample mean, defined as
$$ \hat{m}_u = \frac{1}{N} \sum_{n=0}^{N-1} u[n], \qquad (69) $$
which is unbiased ($E\{\hat{m}_u\} = m_u$). An estimator for the variance is the sample variance, defined as
$$ \hat{\sigma}_u^2 = \frac{1}{N} \sum_{n=0}^{N-1} (u[n] - \hat{m}_u)^2, \qquad (70) $$
while estimators for the autocovariance and the cross-covariance are the sample covariances, defined as
$$ \hat{C}_{uu}[\tau] = \frac{1}{N} \sum_{n=0}^{N-1} (u[n] - \hat{m}_u)(u[n+\tau] - \hat{m}_u), \qquad (71) $$
$$ \hat{C}_{uv}[\tau] = \frac{1}{N} \sum_{n=0}^{N-1} (u[n] - \hat{m}_u)(v[n+\tau] - \hat{m}_v). \qquad (72) $$
Both covariance estimators are asymptotically unbiased ($\lim_{N \to \infty} E\{\hat{C}_{uu}[\tau]\} = C_{uu}[\tau]$, $\lim_{N \to \infty} E\{\hat{C}_{uv}[\tau]\} = C_{uv}[\tau]$) and, in addition, it can be shown that
$$ E\{\hat{C}[\tau]\} = \frac{N - |\tau|}{N}\, C[\tau]. \qquad (73) $$
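A minimal sketch of the sample estimators (69)-(72) applied to synthetic white noise; the helper function is a direct transcription of the formulas, with products falling outside the record simply dropped, which gives the biased estimator discussed in (73).

import numpy as np

rng = np.random.default_rng(1)
N = 5000
u = rng.standard_normal(N)                          # synthetic zero-mean, unit-variance realization

m_hat = np.mean(u)                                  # sample mean, eq. (69)
var_hat = np.mean((u - m_hat) ** 2)                 # sample variance, eq. (70)

def sample_autocov(x, tau):
    # Biased sample autocovariance, eq. (71), for lag tau >= 0
    m = np.mean(x)
    return np.sum((x[:len(x) - tau] - m) * (x[tau:] - m)) / len(x)

print(m_hat, var_hat)                               # close to 0 and 1
print(sample_autocov(u, 0), sample_autocov(u, 3))   # ~1 and ~0 for white noise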
While stochastic signals are not absolutely summable or square summable, and consequently do not have Fourier Transforms, many of the properties of such signals can be summarized in terms of the autocorrelation or autocovariance sequence, for which the Fourier Transform often exists. We define the Power Spectral Density (PSD) as the Fourier Transform of the autocovariance sequence,
$$ \Phi_{uu}(\omega) = \Phi_{uu}(e^{j\omega}) = \sum_{n=-\infty}^{\infty} C_{uu}[n]\, e^{-j\omega n}, \qquad (74) $$
and the Cross Spectral Density (CSD) as the Fourier Transform of the cross-covariance sequence,
$$ \Phi_{uv}(\omega) = \Phi_{uv}(e^{j\omega}) = \sum_{n=-\infty}^{\infty} C_{uv}[n]\, e^{-j\omega n}. \qquad (75) $$
By the definition of the autocovariance and the Inverse Fourier Transform, we can write
$$ \sigma_u^2 = C_{uu}[0] = \frac{1}{2\pi} \int_{-\pi}^{\pi} \Phi_{uu}(\omega)\, d\omega. \qquad (76) $$
Estimating the PSD and CSD from the sample covariance sequences (71)-(72) and using the Discrete Fourier Transforms of the finite-length records, we obtain
$$ \hat{\Phi}_{uu}(\omega) = \sum_{n} \hat{C}_{uu}[n]\, e^{-j\omega n} = \frac{1}{N}\, |U(\omega)|^2 = P_{uu}(\omega) \qquad (77) $$
and
$$ \hat{\Phi}_{uv}(\omega) = \sum_{n} \hat{C}_{uv}[n]\, e^{-j\omega n} = \frac{1}{N}\, U^*(\omega)\, V(\omega), \qquad (78) $$
where $U(\omega)$ and $V(\omega)$ are the Discrete Fourier Transforms of $u[n]$ and $v[n]$, respectively, and $P_{uu}(\omega)$ is the Periodogram of the sequence $u[n]$. Assuming that the sequence $u[n]$ is the sampled version of a stationary random signal $s(t)$ whose PSD $\Phi_{ss}(\Omega)$ is bandlimited by the antialiasing lowpass filter ($-2\pi \frac{f_s}{2} < \Omega < 2\pi \frac{f_s}{2}$), its PSD $\Phi_{uu}(\omega)$ is proportional to $\Phi_{ss}(\Omega)$ over the bandwidth of the antialiasing filter, i.e.,
$$ \Phi_{uu}(\omega) = \frac{1}{T_s}\, \Phi_{ss}\!\left(\frac{\omega}{T_s}\right), \quad |\omega| < \pi, \qquad \Phi_{uu}(f) = \frac{1}{T_s}\, \Phi_{ss}(2\pi f), \quad |f| < \frac{f_s}{2}, \quad f = \frac{\omega}{2\pi}\, f_s. \qquad (79) $$
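A sketch of the periodogram (77) used as a PSD estimate for a synthetic sinusoid in white noise; the signal parameters are arbitrary, and the peak location is only expected to be close to the true frequency because of spectral leakage.

import numpy as np

rng = np.random.default_rng(2)
fs, N = 1000.0, 1024
n = np.arange(N)
u = np.sin(2 * np.pi * 100.0 * n / fs) + 0.5 * rng.standard_normal(N)

U = np.fft.fft(u - np.mean(u))          # remove the sample mean first
P_uu = np.abs(U) ** 2 / N               # periodogram, eq. (77)
f = np.arange(N) * fs / N               # frequency axis in Hz, eq. (54)

k_peak = np.argmax(P_uu[:N // 2])       # look only at 0 <= f < fs/2
print(f[k_peak])                        # close to 100 Hz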
For a linear time-invariant (LTI) system with impulse response $h[n]$, we know that the output sequence $y[n]$ is related to the input sequence $u[n]$ through the convolution sum,
$$ y[n] = h[n] * u[n] = \sum_{k=-\infty}^{\infty} h[k]\, u[n-k]. \qquad (80) $$
If the input $u[n]$ is a zero-mean stationary random process ($m_u = 0$), the mean of the output is
$$ m_y = E\{y[n]\} = \sum_{k=-\infty}^{\infty} h[k]\, m_u = 0. \qquad (81) $$
The autocorrelation of the output is
$$ R_{yy}[\tau] = E\{y[n+\tau]\, y[n]\} = E\left\{ \sum_{k=-\infty}^{\infty} \sum_{r=-\infty}^{\infty} h[k]\, h[r]\, u[n+\tau-k]\, u[n-r] \right\} \qquad (82) $$
$$ = \sum_{k=-\infty}^{\infty} \sum_{r=-\infty}^{\infty} h[k]\, h[r]\, E\{u[n+\tau-k]\, u[n-r]\} \qquad (83) $$
$$ = \sum_{k=-\infty}^{\infty} h[k] \sum_{r=-\infty}^{\infty} h[r]\, R_{uu}[\tau + r - k] \qquad (84) $$
$$ = \sum_{l=-\infty}^{\infty} R_{uu}[\tau - l] \sum_{r=-\infty}^{\infty} h[l+r]\, h[r] \qquad (85) $$
$$ = R_{uu}[\tau] * R_{hh}[\tau], \qquad (86) $$
where $l = k - r$ and $R_{hh}[l] = \sum_{r} h[l+r]\, h[r] = h[l] * h[-l]$. Taking into account that the Fourier Transform of $R_{hh}[l] = h[l] * h[-l]$ is equal to $H(e^{j\omega})\, H^*(e^{j\omega}) = |H(e^{j\omega})|^2$, and applying the Fourier Transform to the last equation (recall that $R_{uu}[\tau] = C_{uu}[\tau]$ since $m_u = 0$), we obtain the relationship
$$ \Phi_{yy}(e^{j\omega}) = |H(e^{j\omega})|^2\, \Phi_{uu}(e^{j\omega}). \qquad (87) $$
The cross-correlation between the input $u[n]$ and the output $y[n]$ is given by
$$ R_{uy}[\tau] = E\{u[n]\, y[n+\tau]\} = E\left\{ u[n] \sum_{k=-\infty}^{\infty} h[k]\, u[n+\tau-k] \right\} \qquad (88) $$
$$ = \sum_{k=-\infty}^{\infty} h[k]\, E\{u[n]\, u[n+\tau-k]\} \qquad (89) $$
$$ = \sum_{k=-\infty}^{\infty} h[k]\, R_{uu}[\tau - k] \qquad (90) $$
$$ = h[\tau] * R_{uu}[\tau] \qquad (91) $$
$$ = h[\tau] * C_{uu}[\tau]. \qquad (92) $$
Applying the Fourier Transform to the last equation, we obtain the relationship
$$ \Phi_{uy}(e^{j\omega}) = H(e^{j\omega})\, \Phi_{uu}(e^{j\omega}). \qquad (93) $$
Consider now a system with frequency response $G(e^{j\omega})$ whose output is corrupted by additive noise,
$$ y[n] = g[n] * u[n] + v[n], \qquad (94) $$
where $v[n]$ represents a noise sequence. If the input sequence $u[n]$ is given by
$$ u[n] = A \cos(\omega_o n), \qquad (95) $$
we already showed (recall that $\cos(\omega_o n) = (e^{j\omega_o n} + e^{-j\omega_o n})/2$) that the output will be
$$ y[n] = A\, |G(e^{j\omega_o})| \cos\!\left(\omega_o n + \arg[G(e^{j\omega_o})]\right) + v[n]. $$
Consider now the sums
$$ I_c(N) = \frac{1}{N} \sum_{n=0}^{N-1} y[n] \cos(\omega_o n), \qquad (96) $$
$$ I_s(N) = \frac{1}{N} \sum_{n=0}^{N-1} y[n] \sin(\omega_o n). \qquad (97) $$
Replacing the expression for $y[n]$ in (96), we obtain
$$ I_c(N) = \frac{1}{N} \sum_{n=0}^{N-1} A\, |G(e^{j\omega_o})| \cos\!\left(\omega_o n + \arg[G(e^{j\omega_o})]\right) \cos(\omega_o n) + \frac{1}{N} \sum_{n=0}^{N-1} v[n] \cos(\omega_o n) \qquad (98) $$
$$ = A\, |G(e^{j\omega_o})|\, \frac{1}{2}\, \frac{1}{N} \sum_{n=0}^{N-1} \left[ \cos\!\left(2\omega_o n + \arg[G(e^{j\omega_o})]\right) + \cos\!\left(\arg[G(e^{j\omega_o})]\right) \right] + \frac{1}{N} \sum_{n=0}^{N-1} v[n] \cos(\omega_o n) \qquad (99) $$
$$ = \frac{A}{2}\, |G(e^{j\omega_o})| \cos\!\left(\arg[G(e^{j\omega_o})]\right) + \frac{A}{2}\, |G(e^{j\omega_o})|\, \frac{1}{N} \sum_{n=0}^{N-1} \cos\!\left(2\omega_o n + \arg[G(e^{j\omega_o})]\right) + \frac{1}{N} \sum_{n=0}^{N-1} v[n] \cos(\omega_o n), \qquad (100) $$
and similarly
$$ I_s(N) = -\frac{A}{2}\, |G(e^{j\omega_o})| \sin\!\left(\arg[G(e^{j\omega_o})]\right) + \frac{A}{2}\, |G(e^{j\omega_o})|\, \frac{1}{N} \sum_{n=0}^{N-1} \sin\!\left(2\omega_o n + \arg[G(e^{j\omega_o})]\right) + \frac{1}{N} \sum_{n=0}^{N-1} v[n] \sin(\omega_o n). \qquad (101) $$
When $N \to \infty$, the second and third terms in the expressions for $I_c(N)$ and $I_s(N)$ tend to zero. We finally obtain
$$ I_c(N) \to \frac{A}{2}\, |G(e^{j\omega_o})| \cos\!\left(\arg[G(e^{j\omega_o})]\right), \qquad (102) $$
$$ I_s(N) \to -\frac{A}{2}\, |G(e^{j\omega_o})| \sin\!\left(\arg[G(e^{j\omega_o})]\right). \qquad (103) $$
These two expressions suggest the following estimators for the Frequency Response:
$$ |\hat{G}(e^{j\omega_o})| = \frac{\sqrt{I_c(N)^2 + I_s(N)^2}}{A/2}, \qquad (104) $$
$$ \arg[\hat{G}(e^{j\omega_o})] = -\arctan\frac{I_s(N)}{I_c(N)}. \qquad (105) $$
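The following sketch implements the correlation-method estimators (96)-(105) for an arbitrary example filter excited by a sinusoid plus noise; the filter coefficients, amplitude, frequency, and noise level are assumptions made only for illustration, and np.arctan2 is used in place of arctan to resolve the quadrant.

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(3)
N = 20000
A, omega_o = 2.0, 0.35 * np.pi
n = np.arange(N)

b, a = [0.2, 0.3], [1.0, -0.5]                         # example system G (assumed)
u = A * np.cos(omega_o * n)                            # input, eq. (95)
y = lfilter(b, a, u) + 0.1 * rng.standard_normal(N)    # output plus noise, eq. (94)

Ic = np.mean(y * np.cos(omega_o * n))                  # eq. (96)
Is = np.mean(y * np.sin(omega_o * n))                  # eq. (97)

G_mag_hat = np.sqrt(Ic**2 + Is**2) / (A / 2)           # eq. (104)
G_arg_hat = -np.arctan2(Is, Ic)                        # eq. (105), quadrant-safe

# True frequency response of the example system, for comparison
G_true = (b[0] + b[1] * np.exp(-1j * omega_o)) / (a[0] + a[1] * np.exp(-1j * omega_o))
print(G_mag_hat, np.abs(G_true))                       # close
print(G_arg_hat, np.angle(G_true))                     # close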
Note that, defining the Discrete Fourier Transform of the output sequence as
$$ Y(\omega) = \sum_{n=0}^{N-1} y[n]\, e^{-j\omega n}, \qquad \omega = \frac{2\pi}{N} k \quad (0 \le k \le N-1;\; 0 \le \omega < 2\pi), \qquad (106) $$
we can write
$$ I_c(N) - j\, I_s(N) = \frac{1}{N} \sum_{n=0}^{N-1} y[n]\, e^{-j\omega_o n} = \frac{1}{N}\, Y(\omega_o). \qquad (107) $$
In the same way, the Discrete Fourier Transform of the input sequence (95) is
$$ U(\omega) = \sum_{n=0}^{N-1} u[n]\, e^{-j\omega n} = A \sum_{n=0}^{N-1} \frac{e^{j\omega_o n} + e^{-j\omega_o n}}{2}\, e^{-j\omega n}, \qquad \omega = \frac{2\pi}{N} k \quad (0 \le k \le N-1;\; 0 \le \omega < 2\pi), \qquad (108) $$
resulting in
$$ U(\omega) = \begin{cases} N \dfrac{A}{2}, & \omega = \omega_o, \\ 0, & \text{otherwise}, \end{cases} \qquad \text{if } \omega_o = \frac{2\pi}{N} k \text{ for some integer } k \qquad (109) $$
(by conjugate symmetry, the bin at $\omega = 2\pi - \omega_o$ also takes the value $N A/2$).
Replacing (107) and (109) in (104) and (105), we obtain
$$ |\hat{G}(e^{j\omega_o})| = \left| \frac{Y(\omega_o)}{U(\omega_o)} \right|, \qquad (110) $$
$$ \arg[\hat{G}(e^{j\omega_o})] = \arg\!\left[ \frac{Y(\omega_o)}{U(\omega_o)} \right], \qquad (111) $$
which means that an estimate of the Frequency Response at the frequency of the input signal can be computed from the Discrete Fourier Transforms of the input and output sequences:
$$ \hat{G}(e^{j\omega_o}) = \frac{Y(\omega_o)}{U(\omega_o)}. \qquad (112) $$
In addition to this estimator, eq. (93) suggests that the Frequency Response at the frequency of the input signal can also be estimated as
$$ \hat{G}(e^{j\omega_o}) = \frac{\hat{\Phi}_{uy}(e^{j\omega_o})}{\hat{\Phi}_{uu}(e^{j\omega_o})}, \qquad (113) $$
which reduces to (112) when we take into account (77) and (78).
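Finally, a sketch of the DFT-ratio estimator (112) and of the spectral estimator (113) on the same kind of synthetic experiment, with the excitation placed exactly on a DFT bin so that (109) applies; the system and noise level are again arbitrary assumptions.

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)
N = 4096
k_o = 300
omega_o = 2 * np.pi * k_o / N                     # on-bin frequency, per eq. (109)
A = 1.5
n = np.arange(N)

b, a = [0.2, 0.3], [1.0, -0.5]                    # example system G (assumed)
u = A * np.cos(omega_o * n)
y = lfilter(b, a, u) + 0.05 * rng.standard_normal(N)

U, Y = np.fft.fft(u), np.fft.fft(y)

G_hat_dft = Y[k_o] / U[k_o]                       # eq. (112)

Phi_uu_hat = np.abs(U) ** 2 / N                   # eq. (77)
Phi_uy_hat = np.conj(U) * Y / N                   # eq. (78) with v = y
G_hat_spec = Phi_uy_hat[k_o] / Phi_uu_hat[k_o]    # eq. (113)

G_true = (b[0] + b[1] * np.exp(-1j * omega_o)) / (a[0] + a[1] * np.exp(-1j * omega_o))
print(G_hat_dft, G_hat_spec, G_true)

At the excitation bin the two estimates coincide exactly, as anticipated from (77)-(78), and both are close to the true frequency response of the example system.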