1. Introduction
In [1], Pearl gave an upper bound for the rate distortion function (RDF) of finite-length data blocks of Gaussian wide sense stationary (WSS) sources and proved that such a bound tends to the RDF of the source as the size of the data block grows. However, he did not give a coding strategy that achieves his bound for a given block length.
In this paper, we present two new upper bounds for the RDF of finite-length data blocks of Gaussian WSS sources, and we propose coding strategies that achieve these two bounds for a given block length. Since our bounds are tighter than the one given by Pearl, they also tend to the RDF of the source as the size of the data block grows. In order to obtain our bounds, we first derive new results on the discrete Fourier transform (DFT) of WSS processes.
It should be mentioned that our coding strategies allow us to deal with Gaussian WSS sources as if they were memoryless. This fact can be used, for instance, to consider Gaussian WSS sources in [2].
The paper is organized as follows. In Section 2, we set up notation and review the mathematical definitions and results used in the rest of the paper. In Section 3, we obtain several results on the DFT of WSS processes which are applied in Section 4. Finally, in Section 4, we present two new upper bounds for the RDF of finite-length data blocks of Gaussian WSS sources and propose coding strategies that achieve such bounds. In this section, we also present a numerical example to illustrate the difference between Pearl's bound and our bounds.
2. Preliminaries
2.1. Notation
In this paper, $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$, and $\mathbb{C}$ denote the set of natural numbers (i.e., the set of positive integers), the set of integer numbers, the set of (finite) real numbers, and the set of (finite) complex numbers, respectively. $\mathbb{R}^n$ is the set of all real $n$-dimensional (column) vectors. $I_n$ denotes the $n\times n$ identity matrix, $*$ stands for conjugate transpose, $\top$ denotes transpose, and $\lambda_k(A)$, $k\in\{1,\ldots,n\}$, are the eigenvalues of an $n\times n$ Hermitian matrix $A$ arranged in decreasing order. $E$ stands for expectation, $\mathrm{i}$ is the imaginary unit, and $\operatorname{Re}$ and $\operatorname{Im}$ denote real and imaginary parts, respectively. If $z\in\mathbb{C}$, then $\overline{z}$ denotes its complex conjugate and, if $z_k\in\mathbb{C}$ for all $k\in\{1,\ldots,n\}$, then we denote by $z_{1:n}$ the $n$-dimensional (column) vector given by $z_{1:n}=(z_1,\ldots,z_n)^{\top}$.
If $x_k$ is a random variable for all $k\in\mathbb{N}$, we denote by $\{x_k\}_{k\in\mathbb{N}}$ the corresponding random process.
We finish this subsection by reviewing the concept of a square Toeplitz matrix.
Definition 1. An $n\times n$ Toeplitz matrix is an $n\times n$ matrix of the form
$$\begin{pmatrix}
t_0 & t_{-1} & t_{-2} & \cdots & t_{-(n-1)}\\
t_1 & t_0 & t_{-1} & \cdots & t_{-(n-2)}\\
t_2 & t_1 & t_0 & \cdots & t_{-(n-3)}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
t_{n-1} & t_{n-2} & t_{n-3} & \cdots & t_0
\end{pmatrix},$$
where $t_k\in\mathbb{C}$ with $k\in\{-(n-1),\ldots,n-1\}$.
Consider a function $f:\mathbb{R}\to\mathbb{C}$ that is continuous and $2\pi$-periodic. For every $n\in\mathbb{N}$, we denote by $T_n(f)$ the $n\times n$ Toeplitz matrix given by $T_n(f)=(t_{j-k})_{j,k=1}^{n}$, where $\{t_k\}_{k\in\mathbb{Z}}$ is the sequence of Fourier coefficients of $f$:
$$t_k=\frac{1}{2\pi}\int_{0}^{2\pi}f(\omega)\,e^{-k\omega\mathrm{i}}\,d\omega,\qquad k\in\mathbb{Z}.$$
It should be mentioned that $T_n(f)$ is Hermitian for all $n\in\mathbb{N}$ if and only if $f$ is a real function (see [3] (Theorem 4.4.1)). Furthermore, in this case, from [3] (Theorem 4.4.2), we have
$$\min_{\omega\in\mathbb{R}}f(\omega)\leq\lambda_k(T_n(f))\leq\max_{\omega\in\mathbb{R}}f(\omega),\qquad k\in\{1,\ldots,n\},\ n\in\mathbb{N}.$$
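As a quick illustration of Definition 1 and the eigenvalue bounds above, the following Python sketch builds $T_n(f)$ from numerically computed Fourier coefficients and checks that its eigenvalues lie between the minimum and the maximum of $f$. The particular PSD, block length, and grid size are arbitrary choices made only for this illustration.

```python
import numpy as np

# Example PSD (chosen only for illustration): f(w) = 1 + 0.5*cos(w),
# which is continuous, 2*pi-periodic, real, and strictly positive.
def psd(w):
    return 1.0 + 0.5 * np.cos(w)

def fourier_coefficient(k, n_grid=4096):
    # t_k = (1/(2*pi)) * int_0^{2*pi} f(w) e^{-k*w*i} dw, approximated by a Riemann sum.
    w = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    return np.mean(psd(w) * np.exp(-1j * k * w))

def toeplitz_matrix(n):
    # (T_n(f))_{j,k} = t_{j-k}
    t = {k: fourier_coefficient(k) for k in range(-(n - 1), n)}
    return np.array([[t[j - k] for k in range(n)] for j in range(n)])

n = 16
T_n = toeplitz_matrix(n)
eigs = np.linalg.eigvalsh(T_n)   # T_n(f) is Hermitian because f is real
print(eigs.min(), eigs.max())    # both values lie in [min f, max f] = [0.5, 1.5]
```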
2.2. DFT of Real Vectors
In this subsection, we recall a well-known property of the DFT of real vectors.
Lemma 1. Let $n\in\mathbb{N}$. Consider $x_k\in\mathbb{C}$ for all $k\in\{1,\ldots,n\}$. Suppose that $\widehat{x}_{1:n}$ is the DFT of $x_{1:n}$, i.e.,
$$\widehat{x}_{1:n}=V_n^{*}x_{1:n},$$
where $V_n$ is the Fourier unitary matrix
$$[V_n]_{j,k}=\frac{1}{\sqrt{n}}\,e^{\frac{2\pi(j-1)(k-1)\mathrm{i}}{n}},\qquad j,k\in\{1,\ldots,n\}.$$
Then, the two following assertions are equivalent:
- (1)
$x_{1:n}\in\mathbb{R}^{n}$.
- (2)
$[\widehat{x}_{1:n}]_{1}\in\mathbb{R}$ and $[\widehat{x}_{1:n}]_{n+2-k}=\overline{[\widehat{x}_{1:n}]_{k}}$ for all $k\in\{2,\ldots,n\}$.
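The equivalence in Lemma 1 can be checked numerically. The following sketch uses the unitary DFT convention written above and a randomly generated real vector; the block length, seed, and tolerance are illustrative choices.

```python
import numpy as np

n = 8
row, col = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
V_n = np.exp(2j * np.pi * row * col / n) / np.sqrt(n)    # unitary Fourier matrix

x = np.random.default_rng(0).standard_normal(n)          # a real vector x_{1:n}
x_hat = V_n.conj().T @ x                                  # its DFT

assert abs(x_hat[0].imag) < 1e-12                         # first coefficient is real
for m in range(1, n):                                     # m = k - 1 with k in {2, ..., n}
    assert abs(x_hat[n - m] - x_hat[m].conj()) < 1e-12    # conjugate symmetry
print("assertion (2) of Lemma 1 holds for this real vector")
```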
2.3. RDF of Real Gaussian WSS Processes
Kolmogorov gave in [4] the following formula for the rate distortion function (RDF) of a real zero-mean Gaussian $n$-dimensional vector $x_{1:n}$:
$$R_{x_{1:n}}(D)=\frac{1}{n}\sum_{k=1}^{n}\max\left(0,\frac{1}{2}\ln\frac{\lambda_k\left(E\left(x_{1:n}x_{1:n}^{\top}\right)\right)}{\theta}\right),\qquad(1)$$
where $\theta$ is a real number satisfying
$$D=\frac{1}{n}\sum_{k=1}^{n}\min\left(\theta,\lambda_k\left(E\left(x_{1:n}x_{1:n}^{\top}\right)\right)\right).\qquad(2)$$
$R_{x_{1:n}}(D)$ can be thought of as the minimum rate (measured in nats) at which one must encode (compress) $x_{1:n}$ in order to be able to recover it with a mean square error (MSE) per dimension not larger than $D$, that is,
$$\frac{1}{n}E\left(\left\|x_{1:n}-\widetilde{x}_{1:n}\right\|_{2}^{2}\right)\leq D,$$
where $\widetilde{x}_{1:n}$ denotes the estimation of $x_{1:n}$ and $\|\cdot\|_{2}$ is the spectral norm.
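As an aside, Equations (1) and (2) can be evaluated numerically by a simple search over $\theta$ (reverse water-filling). The sketch below is only illustrative: the bisection tolerance and the example eigenvalues are arbitrary choices, and it assumes $D$ does not exceed the average of the eigenvalues (otherwise the RDF is zero).

```python
import numpy as np

def gaussian_vector_rdf(eigenvalues, D, tol=1e-12):
    """Evaluate Equations (1) and (2): reverse water-filling over the
    eigenvalues of E(x x^T). Returns the RDF in nats per dimension."""
    lam = np.asarray(eigenvalues, dtype=float)

    def distortion(theta):
        # Right-hand side of Equation (2) as a function of theta.
        return np.minimum(theta, lam).mean()

    # distortion(theta) is nondecreasing in theta, so bisection finds theta.
    lo, hi = 0.0, lam.max()
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if distortion(mid) < D:
            lo = mid
        else:
            hi = mid
    theta = (lo + hi) / 2
    return np.maximum(0.0, 0.5 * np.log(lam / theta)).mean()

# Illustrative example: eigenvalues 4, 2, 1, 0.5 and distortion D = 0.4.
print(gaussian_vector_rdf([4.0, 2.0, 1.0, 0.5], D=0.4))
```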
We now review the definition of a WSS process with continuous power spectral density (PSD).
Definition 2. Let $f:\mathbb{R}\to\mathbb{R}$ be continuous and $2\pi$-periodic. A random process $\{x_k\}_{k\in\mathbb{N}}$ is said to be WSS with PSD $f$ if it has constant mean (i.e., $E(x_k)=E(x_1)$ for all $k\in\mathbb{N}$) and $E\left(x_{1:n}x_{1:n}^{*}\right)=T_n(f)$ for all $n\in\mathbb{N}$.
If $\{x_k\}_{k\in\mathbb{N}}$ is a real zero-mean Gaussian WSS process with continuous PSD $f$ satisfying $\min_{\omega\in\mathbb{R}}f(\omega)>0$ and $D\in\left(0,\min_{\omega\in\mathbb{R}}f(\omega)\right]$, then, since $D\leq\min_{\omega\in\mathbb{R}}f(\omega)\leq\lambda_k(T_n(f))$ for all $k\in\{1,\ldots,n\}$, we can take $\theta=D$, and from Equations (1) and (2), we obtain
$$R_{x_{1:n}}(D)=\frac{1}{2n}\sum_{k=1}^{n}\ln\frac{\lambda_k(T_n(f))}{D}.\qquad(3)$$
We recall that the RDF of the source (process) $\{x_k\}_{k\in\mathbb{N}}$ is given by $R(D)=\lim_{n\to\infty}R_{x_{1:n}}(D)$.
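The following sketch illustrates this limit numerically: it evaluates Equation (3) for increasing block lengths and compares the result with the classical closed-form limit $\frac{1}{4\pi}\int_{0}^{2\pi}\ln\frac{f(\omega)}{D}\,d\omega$, which is a standard expression for the RDF of a Gaussian WSS source when $D\leq\min_{\omega}f(\omega)$ (it is not restated in the text above). The PSD and the distortion value are arbitrary choices.

```python
import numpy as np

# PSD chosen only for illustration: f(w) = 1 + 0.5*cos(w), so min f = 0.5.
def psd(w):
    return 1.0 + 0.5 * np.cos(w)

def toeplitz_from_psd(n, n_grid=4096):
    w = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    f = psd(w)
    t = {k: np.mean(f * np.exp(-1j * k * w)) for k in range(-(n - 1), n)}
    return np.array([[t[j - k] for k in range(n)] for j in range(n)])

D = 0.25   # distortion with 0 < D <= min f

# Finite-length RDF from Equation (3): (1/(2n)) * sum_k ln(lambda_k(T_n(f)) / D).
for n in (4, 16, 64, 256):
    lam = np.linalg.eigvalsh(toeplitz_from_psd(n))
    print(n, 0.5 * np.mean(np.log(lam / D)))

# Limit value (classical formula for D <= min f).
w = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
print("limit", np.mean(0.5 * np.log(psd(w) / D)))
```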
3. DFT of WSS Processes
In this section, we present, in a single theorem, several new results on the DFT of WSS processes.
Theorem 1. Consider a WSS process $\{x_k\}_{k\in\mathbb{N}}$ with continuous PSD $f$. Let $n\in\mathbb{N}$ and let $\widehat{x}_{1:n}=V_n^{*}x_{1:n}$ be the DFT of $x_{1:n}$.
- (1)
If $k\in\{1,\ldots,n\}$, then Equations (4) and (5) hold.
- (2)
If the process is real and $k\in\{2,\ldots,n\}$ with $k\neq\frac{n}{2}+1$, then Equations (6) and (7) hold.
Proof. (1) Since $E\left(\widehat{x}_{1:n}\widehat{x}_{1:n}^{*}\right)=V_n^{*}E\left(x_{1:n}x_{1:n}^{*}\right)V_n=V_n^{*}T_n(f)V_n$, from Equation (1), we obtain Equation (4).
The matrix appearing in Equation (8), whose entries are written in terms of the Kronecker delta $\delta$, satisfies Equation (9), and Equation (5) now follows by applying [5] (Lemma 6).
(2) Fix $k\in\{2,\ldots,n\}$ with $k\neq\frac{n}{2}+1$. We first obtain Equation (10).
We begin by proving Equation (6). Applying Equation (10), we see that, to finish the proof of Equation (6), we only need to show Equation (11). From the formula for the partial sums of the geometric series (see, e.g., [6] (p. 388)), we have Equation (14). Applying Equation (14) together with a basic trigonometric identity (see, e.g., [6] (p. 97)), and then using Equation (13), we obtain Equation (11), and, therefore, Equation (6) holds.
Finally, we prove Equation (7). From Equation (10), applying Equations (12) and (15) yields Equation (7). ☐
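The only external ingredients used in the proof of part (2) are the partial sums of the geometric series and a basic trigonometric identity. As a small sanity check (an illustration, with an arbitrary angle and number of terms), the following snippet evaluates the geometric-series formula at a point on the unit circle, which is the setting in which it is used above.

```python
import cmath

# Partial sums of the geometric series: sum_{j=0}^{m-1} z^j = (1 - z^m) / (1 - z) for z != 1.
# Here z = e^{i*w} with illustrative values of w and m.
w, m = 0.7, 10
z = cmath.exp(1j * w)
lhs = sum(z ** j for j in range(m))
rhs = (1 - z ** m) / (1 - z)
print(abs(lhs - rhs))   # on the order of machine precision
```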
4. Upper Bounds for the RDF of Finite-Length Data Blocks of Gaussian WSS Sources
Let $\{x_k\}_{k\in\mathbb{N}}$ be a real zero-mean Gaussian WSS process with continuous PSD $f$ and $\min_{\omega\in\mathbb{R}}f(\omega)>0$. For a given block length $n\in\mathbb{N}$ and a given distortion $D\in\left(0,\min_{\omega\in\mathbb{R}}f(\omega)\right]$, Pearl presented in [1] an upper bound of $R_{x_{1:n}}(D)$, written in terms of the matrix defined in Equation (8). In the following theorem, we give two new upper bounds of $R_{x_{1:n}}(D)$ that are tighter than the one given by Pearl.
Theorem 2. Consider a real zero-mean Gaussian WSS process $\{x_k\}_{k\in\mathbb{N}}$ with continuous PSD $f$ and $\min_{\omega\in\mathbb{R}}f(\omega)>0$. Let $n\in\mathbb{N}$. If $D\in\left(0,\min_{\omega\in\mathbb{R}}f(\omega)\right]$ and $\widehat{x}_{1:n}$ is the DFT of $x_{1:n}$, then the chain of inequalities in Equation (16) holds, where the two new upper bounds of $R_{x_{1:n}}(D)$ are the quantities defined in Equation (17). Furthermore, Equation (18) holds.
Proof. We divide the proof into four steps:
Step 1: We show the first inequality in Equation (16). We encode the DFT coefficients of $x_{1:n}$ separately, with the per-coefficient distortions given in Equation (19), where $\lceil\cdot\rceil$ denotes the smallest integer greater than or equal to its argument. From Lemma 1, the coefficients of $\widehat{x}_{1:n}$ satisfy the conjugate-symmetry relations in assertion (2), so the coefficients that are not encoded can be recovered from the encoded ones by conjugation. Let $y_{1:n}$ be the vector obtained by applying the inverse DFT to the reconstructed coefficients; applying Lemma 1 yields $y_{1:n}\in\mathbb{R}^{n}$. As $V_n$ is unitary and the spectral norm is unitarily invariant, the MSE per dimension between $x_{1:n}$ and $y_{1:n}$ equals the MSE per dimension between their DFTs, and the first inequality in Equation (16) follows.
Step 2: We show the second inequality in Equation (16). To do that, we only need to prove the corresponding inequality coefficient by coefficient, that is, for all $k\in\{2,\ldots,n\}$ with $k\neq\frac{n}{2}+1$. Fix such a $k$. We encode $\operatorname{Re}\left([\widehat{x}_{1:n}]_k\right)$ and $\operatorname{Im}\left([\widehat{x}_{1:n}]_k\right)$ separately instead of encoding $[\widehat{x}_{1:n}]_k$ as a single complex quantity, and the inequality for this coefficient follows.
Step 3: We show the last inequality in Equation (16). From Equations (2) and (5), and then applying Equations (2), (6), and (7), the arithmetic mean-geometric mean (AM-GM) inequality, and Lemma 1, we obtain the required bound for all $k\in\{2,\ldots,n\}$ with $k\neq\frac{n}{2}+1$. Hence, from Equation (9), the inequality follows both when $n$ is even and when $n$ is odd.
Step 4: We show Equation (18). Applying Equation (3) together with a unitary diagonalization of $T_n(f)$, and using that the matrices involved are Hermitian and positive definite (see [5] (Lemma 5)), the AM-GM inequality yields an upper bound in terms of the trace, denoted by $\operatorname{tr}$, and the Frobenius norm, denoted by $\|\cdot\|_{F}$. From [5] (Lemma 4), this upper bound can be bounded further, and, consequently, applying [7] (Theorem 5), we conclude that Equation (18) holds. ☐
As an example, Figure 1 shows Equation (16) for a particular choice of the PSD $f$, the block length $n$, and the distortion $D$.
Finally, observe that Theorem 2 also provides coding strategies that achieve the two new bounds of $R_{x_{1:n}}(D)$ presented. Specifically, Theorem 2 shows that the first bound can be achieved by encoding the DFT coefficients of $x_{1:n}$ separately instead of encoding $x_{1:n}$ jointly, and that the second bound can be achieved by encoding separately the real part and the imaginary part of $[\widehat{x}_{1:n}]_k$ instead of encoding $[\widehat{x}_{1:n}]_k$ when $k\in\{2,\ldots,n\}$ with $k\neq\frac{n}{2}+1$. Therefore, although the first bound is tighter, the coding strategy associated with the second bound is simpler. It should be mentioned that, in order to achieve these bounds, an optimal coding method for Gaussian random variables is required.
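To make the discussion concrete, the following simulation sketch checks the property that underlies the coefficient-by-coefficient strategies: the covariance of the DFT coefficients of a Gaussian WSS block is approximately diagonal, with diagonal entries close to the PSD sampled at the Fourier frequencies, so the coefficients can essentially be treated as independent (memoryless) Gaussian quantities. The PSD, block length, and sample size are arbitrary choices, and the code does not compute the bounds of Theorem 2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Illustrative PSD: f(w) = 1 + 0.5*cos(w) (real, continuous, 2*pi-periodic, positive).
w_grid = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
f_grid = 1.0 + 0.5 * np.cos(w_grid)
t = {k: np.mean(f_grid * np.exp(-1j * k * w_grid)) for k in range(-(n - 1), n)}
T_n = np.array([[t[j - k] for k in range(n)] for j in range(n)]).real  # real since f is real and even

# Draw zero-mean Gaussian WSS blocks x_{1:n} with covariance T_n(f).
blocks = rng.multivariate_normal(np.zeros(n), T_n, size=20000)

# Unitary DFT of each block (rows of x_hat are the transposed DFTs V_n^* x).
row, col = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
V_n = np.exp(2j * np.pi * row * col / n) / np.sqrt(n)
x_hat = blocks @ V_n.conj()

# Sample covariance E(x_hat x_hat^*): approximately diagonal, with diagonal
# entries close to f(2*pi*(k-1)/n), up to finite-n and sampling error.
C = (x_hat.T @ x_hat.conj()) / x_hat.shape[0]
freqs = 2.0 * np.pi * np.arange(n) / n
print("largest off-diagonal magnitude:", np.abs(C - np.diag(np.diag(C))).max())
print("max |diagonal - f(2*pi*(k-1)/n)|:", np.abs(np.diag(C).real - (1.0 + 0.5 * np.cos(freqs))).max())
```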