
An Introduction to

Digital Communications
Costas N. Georghiades
Electrical Engineering Department
Texas A&M University

These notes are made available for students of EE 455, and they are to be used to enhance understanding of the course. Any unauthorized copy and distribution of these notes is prohibited.
Course Outline
• Introduction
  - Analog vs. Digital Communication Systems
  - A General Communication System
• Some Probability Theory
  - Probability space, random variables, density functions, independence
  - Expectation, conditional expectation, Bayes' rule
  - Stochastic processes, autocorrelation function, stationarity, spectral density
Outline (cont’d)
• Analog-to-digital conversion
  - Sampling (ideal, natural, sample-and-hold)
  - Quantization, PCM
• Source coding (data compression)
  - Measuring information, entropy, the source coding theorem
  - Huffman coding, run-length coding, Lempel-Ziv
• Communication channels
  - Bandlimited channels
  - The AWGN channel, fading channels
Outline (cont’d)
• Receiver design
  - General binary and M-ary signaling
  - Maximum-likelihood receivers
  - Performance in an AWGN channel
    * The Chernoff and union/Chernoff bounds
    * Simulation techniques
  - Signal spaces
  - Modulation: PAM, QAM, PSK, DPSK, coherent FSK, incoherent FSK
Outline (cont’d)
• Channel coding
  - Block codes, hard and soft-decision decoding, performance
  - Convolutional codes, the Viterbi algorithm, performance bounds
  - Trellis-coded modulation (TCM)
• Signaling through bandlimited channels
  - ISI, Nyquist pulses, sequence estimation, partial response signaling
  - Equalization
Outline (cont’d)
• Signaling through fading channels
  - Rayleigh fading, optimum receiver, performance
  - Interleaving
• Synchronization
  - Symbol synchronization
  - Frame synchronization
  - Carrier synchronization
Introduction
A General Communication System

Source → Transmitter → Channel → Receiver → User

• Source: Speech, Video, etc.


• Transmitter: Conveys information
• Channel: Invariably distorts signals
• Receiver: Extracts information signal
• User: Utilizes information

Digital vs. Analog Communication
• Analog systems have an alphabet which is uncountably infinite.
  - Example: Analog Amplitude Modulation (AM)

[Figure: AM receiver — the received signal is multiplied by the output of an RF oscillator]

Analog vs. Digital (cont’d)
• Digital systems transmit signals from a discrete alphabet
  - Example: Binary digital communication systems

[Figure: a transmitter maps the bit stream ...0110010... to pulses of duration T, one waveform for a 1 and another for a 0; the data rate is 1/T bits/s]

Digital systems are resistant to noise...

[Figure: binary signaling over a noisy channel — the transmitted waveform s1(t) (for a 1) or s2(t) (for a 0) is corrupted by additive noise, and the receiver must decide from r(t) which bit was sent]

Optimum (correlation) receiver: multiply r(t) by s1(t), integrate the product over [0, T], and at t = T compare the integrator output to a threshold of 0; decide 1 if it is positive and 0 otherwise.
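To make the correlation receiver concrete, here is a minimal simulation sketch, assuming antipodal rectangular pulses s1(t) = +A for a 1 and s2(t) = −s1(t) for a 0; the amplitude, symbol duration, noise level, and sampling rate are arbitrary illustration values, not parameters from the notes.

```python
# Minimal sketch of the correlation receiver for antipodal rectangular pulses.
import numpy as np

rng = np.random.default_rng(0)
A, T, fs = 1.0, 1e-3, 100_000       # amplitude, symbol duration, sampling rate (arbitrary)
n = int(T * fs)                     # samples per symbol
s1 = A * np.ones(n)                 # reference waveform for bit 1

bits = rng.integers(0, 2, size=10_000)
noise_std = 5.0                     # per-sample noise standard deviation (arbitrary)

errors = 0
for b in bits:
    tx = s1 if b == 1 else -s1                      # transmitted waveform
    r = tx + noise_std * rng.standard_normal(n)     # additive-noise channel
    # Correlate: multiply r(t) by s1(t), integrate over [0, T], compare to 0 at t = T
    metric = np.sum(r * s1) / fs                    # Riemann-sum approximation of the integral
    errors += int((1 if metric > 0 else 0) != b)

print("estimated bit error rate:", errors / len(bits))
```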
Advantages of Digital Systems
• Error correction/detection
• Better encryption algorithms
• More reliable data processing
• Easily reproducible designs
  - Reduced cost
• Easier data multiplexing
• Facilitate data compression

A General Digital Communication System

[Block diagram] Transmit chain: Source → A/D Conversion → Source Encoder → Channel Encoder → Modulator → Channel.
Receive chain: Channel → Demodulator → Channel Decoder → Source Decoder → D/A Conversion → User.
A Synchronization block is also shown at the receiver.

Some Probability Theory
Definition: A non-empty collection of subsets α = {A_1, A_2, ...} of a set Ω, i.e., α = {A_i; A_i ⊆ Ω}, is called an algebra of sets if:
1) A_i ∈ α and A_j ∈ α ⇒ A_i ∪ A_j ∈ α
2) A_i ∈ α ⇒ Ā_i ∈ α

Example: Let Ω = {0, 1, 2}.
1) α = {Ω, ∅} — an algebra
2) α = {Ω, ∅, {1}, {2}, {0}, {1,2}, {1,0}, {0,2}} — an algebra
3) α = {Ω, ∅, {0}, {1}, {2}} — not an algebra

Probability Measure
Definition: A class of subsets, α, of a space Ω is a σ-algebra (or a Borel algebra) if:
1) A_i ∈ α ⇒ Ā_i ∈ α.
2) A_i ∈ α, i = 1, 2, 3, ... ⇒ ∪_{i=1}^{∞} A_i ∈ α.

Definition: Let α be a σ-algebra of a space Ω. A function P that maps α into [0, 1] is called a probability measure if:
1) P[Ω] = 1
2) P[A] ≥ 0 for all A ∈ α.
3) P[∪_{i=1}^{∞} A_i] = Σ_{i=1}^{∞} P[A_i] whenever A_i ∩ A_j = ∅ for i ≠ j.
Probability Measure
Let Ω = ℜ (the real line) and α be the set of all intervals (x1, x2] in ℜ.
Also, define a real valued function f which maps ℜ → ℜ such that:
1) f(x) ≥ 0 for all x ∈ ℜ.
2) ∫_{−∞}^{∞} f(x) dx = 1.

Then:

P[{x ∈ ℜ; x_1 < x ≤ x_2}] = P[(x_1, x_2]] = ∫_{x_1}^{x_2} f(x) dx

is a valid probability measure.

Probability Space
The following conclusions can be drawn from the above definition:
1) P[∅] = 0
2) P[Ā] = 1 − P[A]   (since P(A ∪ Ā) = P(Ω) = 1 = P(A) + P(Ā))
3) If A_1 ⊂ A_2 ⇒ P(A_1) ≤ P(A_2)
4) P[A_1 ∪ A_2] = P[A_1] + P[A_2] − P[A_1 ∩ A_2].

Definition: Let Ω be a space, α be a σ-algebra of subsets of Ω , and


P a probability measure on α . Then the ordered triple ( Ω , α , P ) is a
probability space.
Ω sample space
α event space
P probability measure
Random Variables and Density Functions
Definition: A real valued function X(ω) that maps ω ∈ Ω into the real line ℜ
is a random variable.

Notation: For simplicity, in the future we will refer to X(ω) by X.

Definition: The distribution function of a random variable X is defined by


F_X(x) = P[X ≤ x] = P[−∞ < X ≤ x].

From the previous discussion, we can express the above probability in terms of a non-negative function f_X(·) such that ∫_{−∞}^{∞} f_X(x) dx = 1, as follows:

F_X(x) = P[X ≤ x] = ∫_{−∞}^{x} f_X(α) dα.

We will refer to f_X(·) as the density function of the random variable X.


Density Functions
We have the following observations based on the above definitions:
1) F_X(−∞) = ∫_{−∞}^{−∞} f_X(x) dx = 0
2) F_X(∞) = ∫_{−∞}^{∞} f_X(x) dx = 1
3) If x_1 ≥ x_2 ⇒ F_X(x_1) ≥ F_X(x_2)   (F_X(x) is non-decreasing)

Examples of density functions:

a) The Gaussian (Normal) density function:

f_X(x) = (1/√(2πσ²)) · exp(−(x − µ)²/(2σ²))

Example Density Functions
b) Uniform on [0, 1]:

f_X(x) = 1 for x ∈ [0, 1], and 0 otherwise.

c) The Laplacian density function:

f_X(x) = (a/2) · exp(−a|x|)

Conditional Probability
Let A and B be two events from the event space α. Then the probability of event A, given that event B has occurred, P[A | B], is given by

P[A | B] = P[A ∩ B] / P[B].

Example: Consider the tossing of a die:
P[{2} | "even outcome"] = 1/3,   P[{2} | "odd outcome"] = 0.
Thus, conditioning can increase or decrease the probability of an event, compared to its unconditioned value.

The Law of Total Probability

Let A_1, A_2, ..., A_M be a partition of Ω, i.e., ∪_{i=1}^{M} A_i = Ω and A_i ∩ A_j = ∅ for all i ≠ j. Then the probability of occurrence of an event B ∈ α can be expressed as

P[B] = Σ_{i=1}^{M} P[B | A_i] · P[A_i],   for all B ∈ α.

Illustration, Law of Total Probability

[Figure: a partition A_1, A_2, A_3 of the sample space; an event B overlaps the partition cells, and the conditional probabilities P(B | A_i) weight each cell's contribution to P[B]]

Example, Conditional Probability
[Figure: binary channel with inputs 0 and 1, each sent with probability 1/2, and transition probabilities P00, P01, P10, P11]

P00 = P[receive 0 | 0 sent],   P01 = P[receive 1 | 0 sent]
P10 = P[receive 0 | 1 sent],   P11 = P[receive 1 | 1 sent]
Pr(0) = 1/2,   Pr(1) = 1/2

P01 = 0.01 ⇒ P00 = 1 − P01 = 0.99
P10 = 0.01 ⇒ P11 = 1 − P10 = 0.99

Pr(e) = Pr(0)·P01 + Pr(1)·P10 = (1/2)·0.01 + (1/2)·0.01 = 0.01

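A quick numeric check of this example; the priors and crossover probabilities are the ones on the slide, and the last two lines preview Bayes' law from the next slide.

```python
# Numeric check of the binary-channel example via the law of total probability.
p0, p1 = 0.5, 0.5          # Pr(0), Pr(1)
P01, P10 = 0.01, 0.01      # P[receive 1 | 0 sent], P[receive 0 | 1 sent]
P00, P11 = 1 - P01, 1 - P10

# Law of total probability: condition on which bit was sent
Pe = p0 * P01 + p1 * P10
print("Pr(e) =", Pe)                          # 0.01

# Preview of Bayes' law (next slide): posterior probability that 0 was sent given 0 received
post0 = P00 * p0 / (P00 * p0 + P10 * p1)
print("P[0 sent | 0 received] =", post0)      # 0.99
```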
Bayes' Law

Let A_i, i = 1, 2, ..., M be a partition of Ω and B an event in α. Then

P[A_j | B] = P[B | A_j]·P[A_j] / Σ_{i=1}^{M} P[B | A_i]·P[A_i]

Proof:

P[A_j | B] = P[A_j ∩ B] / P[B]   ⇒   P[A_j ∩ B] = P[A_j | B]·P[B] = P[B | A_j]·P[A_j]

⇒ P[A_j | B] = P[B | A_j]·P[A_j] / P[B] = P[B | A_j]·P[A_j] / Σ_{i=1}^{M} P[B | A_i]·P[A_i]

Statistical Independence of Events
Two events A and B are said to be statistically independent if
P[ A ∩ B] = P[ A] ⋅ P[ B] .
In intuitive terms, two events are independent if the occurrence of one does not affect the
occurrence of the other, i.e.,
P ( A| B ) = P ( A )
when A and B are independent.

Example: Consider tossing a fair coin twice. Let


A={heads occurs in first tossing}
B={heads occurs in second tossing}.
Then

P[A ∩ B] = P[A]·P[B] = 1/4.

The assumption we made (which is reasonable in this case) is that the outcome of one coin toss does not affect the other.

Expectation
Consider a random variable X with density fX (x). The expected (or mean) value of X is
given by

E[X] = ∫_{−∞}^{∞} x f_X(x) dx.

In general, the expected value of some function g(X) of a random variable X is given by

E[g(X)] = ∫_{−∞}^{∞} g(x) f_X(x) dx.

When g(X) = X^n for n = 0, 1, 2, ..., the corresponding expectations are referred to as the n-th moments of the random variable X. The variance of a random variable X is given by

var(X) = ∫_{−∞}^{∞} [x − E(X)]² f_X(x) dx
       = ∫_{−∞}^{∞} x² f_X(x) dx − E²(X)
       = E(X²) − E²(X).
Example, Expectation
Example: Let X be Gaussian with

f_X(x) = (1/√(2πσ²)) · exp(−(x − µ)²/(2σ²)).

Then:

E(X) = (1/√(2πσ²)) ∫_{−∞}^{∞} x e^{−(x−µ)²/(2σ²)} dx = µ,

Var(X) = E[X²] − E²(X) = ∫_{−∞}^{∞} x² f_X(x) dx − µ² = σ².

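A Monte Carlo sanity check of these two results; the values µ = 2 and σ = 3 are arbitrary, chosen only for illustration.

```python
# Monte Carlo check that E(X) = mu and var(X) = sigma^2 for a Gaussian X.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 2.0, 3.0
x = rng.normal(mu, sigma, size=1_000_000)

print("sample mean              :", round(x.mean(), 3))                        # close to mu = 2
print("sample E[X^2] - (E[X])^2 :", round((x**2).mean() - x.mean()**2, 3))     # close to sigma^2 = 9
```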
Random Vectors
Definition: A random vector is a vector whose elements are random variables, i.e., if X1,
X2, ..., Xn are random variables, then
X = (X_1, X_2, ..., X_n)

is a random vector.

Random vectors can be described statistically by their joint density function

f_X(x) = f_{X_1 X_2 ... X_n}(x_1, x_2, ..., x_n).

Example: Consider tossing a coin twice. Let X_1 be the random variable associated with the outcome of the first toss, defined by X_1 = 1 if heads and X_1 = 0 if tails. Similarly, let X_2 be the random variable associated with the second toss, defined by X_2 = 1 if heads and X_2 = 0 if tails.

The vector X = (X_1, X_2) is a random vector.


Independence of Random Variables
Definition: Two random variables X and Y are independent if
f X ,Y ( x , y ) = f X ( x ) ⋅ f Y ( y ) .
The definition can be extended to independence among an arbitrary number of random
variables, in which case their joint density function is the product of their marginal
density functions.

Definition: Two random variables X and Y are uncorrelated if


E [ XY ] = E [ X ] ⋅ E [ Y ] .
It is easily seen that independence implies uncorrelatedness, but not necessarily the other
way around. Thus, independence is the stronger property.

The Characteristic Function
Definition: Let X be a random variable with density f_X(x). Then the characteristic function of X is

Ψ_X(jω) = E[e^{jωX}] = ∫_{−∞}^{∞} e^{jωx} f_X(x) dx.

Example: The characteristic function of a Gaussian random variable X having mean µ and variance σ² is

Ψ_X(jω) = (1/√(2πσ²)) ∫_{−∞}^{∞} e^{jωx} · e^{−(x−µ)²/(2σ²)} dx = e^{jωµ − ω²σ²/2}.

Definition: The moment-generating function of a random variable X is defined by

Φ_X(s) = E[e^{sX}] = ∫_{−∞}^{∞} e^{sx} f_X(x) dx.

Fact: The moment-generating function of a random variable X can be used to obtain its moments according to:

E[X^n] = (d^n/ds^n) Φ_X(s) |_{s=0}.
Stochastic Processes
A stochastic process {X (t ); − ∞ < t < ∞} is an ensemble of signals, each of which can be
realized (i.e. it can be observed) with a certain statistical probability. The value of a
stochastic process at any given time, say t1, (i.e., X(t1)) is a random variable.

Definition: A Gaussian stochastic process is one for which X(t) is a Gaussian random
variable for every time t.
[Figure: sample realizations of a fast-varying and a slow-varying stochastic process over 0 ≤ t ≤ 1]
Characterization of Stochastic Processes
Consider a stochastic process {X(τ); −∞ < τ < ∞}. The random variable X(t), t ∈ ℜ, has a density function f_{X(t)}(x; t). The mean and variance of X(t) are

E[X(t)] = µ_X(t) = ∫_{−∞}^{∞} x f_{X(t)}(x; t) dx,

VAR[X(t)] = E[(X(t) − µ_X(t))²].

Example: Consider the Gaussian random process whose value X(t) at time t is a Gaussian random variable having density

f_X(x; t) = (1/√(2πt)) · exp(−x²/(2t)).

We have E[X(t)] = 0 (a zero-mean process), and var[X(t)] = t.

[Figure: the density of X(t) for t = 1 and t = 2; the density spreads out as t increases]
Autocovariance and Autocorrelation
Definition: The autocovariance function of a random process X(t) is:

C_XX(t_1, t_2) = E[(X(t_1) − µ_X(t_1))(X(t_2) − µ_X(t_2))],   t_1, t_2 ∈ ℜ.

Definition: The autocorrelation function of a random process X(t) is defined by

R_XX(t_1, t_2) = E[X(t_1)·X(t_2)],   t_1, t_2 ∈ ℜ.

Definition: A random process X is uncorrelated if for every pair (t_1, t_2)

E[X(t_1)X(t_2)] = E[X(t_1)]·E[X(t_2)].

Definition: A process X is mean-value stationary if its mean is not a function of time.

Definition: A random process X is correlation stationary if the autocorrelation function R_XX(t_1, t_2) is a function only of τ = t_1 − t_2.

Definition: A random process X is wide-sense stationary (W.S.S.) if it is both mean-value stationary and correlation stationary.
Spectral Density
Example (correlation-stationary process):

µ_X(t) = a,
R_XX(t_1, t_2) = exp(−|t_1 − t_2|) = exp(−|τ|),   τ = t_1 − t_2.

Definition: For a wide-sense stationary process we can define a spectral density, which is the Fourier transform of the stochastic process's autocorrelation function:

S_X(f) = ∫_{−∞}^{∞} R_XX(τ) e^{−j2πfτ} dτ.

The autocorrelation function is the inverse Fourier transform of the spectral density:

R_XX(τ) = ∫_{−∞}^{∞} S_X(f) e^{j2πfτ} df.

Fact: For a zero-mean process X,

var(X) = R_XX(0) = ∫_{−∞}^{∞} S_X(f) df.

Linear Filtering of Stochastic Signals

x(t) → H(f) → y(t)

S_Y(f) = S_X(f) · |H(f)|²

The spectral density at the output of a linear filter is the product of the spectral density of the input process and the magnitude squared of the filter transfer function.

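A numerical illustration of S_Y(f) = S_X(f)·|H(f)|², assuming a white input sequence and an arbitrary two-tap moving-average filter (neither comes from the notes); the spectral densities are estimated with Welch's method.

```python
# Estimate S_X(f) and S_Y(f) for white noise through a simple FIR filter
# and compare their ratio with |H(f)|^2. Filter and rates are illustrative only.
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
fs = 1000.0                           # sampling rate in Hz (arbitrary)
x = rng.standard_normal(200_000)      # approximately white input: S_X(f) ~ constant

b = np.array([0.5, 0.5])              # two-tap moving-average filter h[n]
y = signal.lfilter(b, [1.0], x)       # filtered output process

f, Sx = signal.welch(x, fs=fs, nperseg=1024)    # estimated input spectral density
_, Sy = signal.welch(y, fs=fs, nperseg=1024)    # estimated output spectral density
_, H = signal.freqz(b, 1, worN=f, fs=fs)        # filter frequency response on the same grid

for k in (10, 100, 200):              # compare at a few frequencies
    print(f"f = {f[k]:6.1f} Hz   Sy/Sx = {Sy[k]/Sx[k]:.3f}   |H(f)|^2 = {abs(H[k])**2:.3f}")
```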
White Gaussian Noise
Definition: A stochastic process X is white Gaussian if:

a) µ_X(t) = µ (constant)
b) R_XX(τ) = (N_0/2)·δ(τ),   τ = t_1 − t_2
c) X(t) is a Gaussian random variable and X(t_i) is independent of X(t_j) for all t_i ≠ t_j.

Note: 1) A white Gaussian process is wide-sense stationary.
2) S_X(f) = N_0/2 is not a function of f.

[Figure: the flat spectral density S_X(f) = N_0/2 for all f]
Analog-to-Digital Conversion
• Two steps:
  - Discretize time: Sampling
  - Discretize amplitude: Quantization

[Figure: an analog signal — continuous in time and continuous in amplitude]

Sampling
• Signals are characterized by their frequency content
• The Fourier transform of a signal describes its frequency content and determines its bandwidth

X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt   ⇔   x(t) = ∫_{−∞}^{∞} X(f) e^{j2πft} df

[Figure: a time-domain pulse x(t) and the magnitude of its Fourier transform X(f) versus frequency in Hz]

Ideal Sampling
Mathematically, the sampled version, x_s(t), of signal x(t) is:

x_s(t) = h(t)·x(t)   ⇔   X_s(f) = H(f) ∗ X(f),

where the sampling function is

h(t) = Σ_{k=−∞}^{∞} δ(t − kT_s) = (1/T_s) Σ_{k=−∞}^{∞} e^{j2πkt/T_s}.

[Figure: the impulse-train sampling function h(t) and the resulting sampled signal x_s(t)]

Ideal Sampling
H(f) = ℑ{(1/T_s) Σ_{k=−∞}^{∞} e^{j2πkt/T_s}} = (1/T_s) Σ_{k=−∞}^{∞} δ(f − k/T_s).

Then:

X_s(f) = H(f) ∗ X(f) = (1/T_s) Σ_{k=−∞}^{∞} X(f − k/T_s).

[Figure: the spectrum X_s(f) consists of copies of X(f) centered at multiples of f_s; (a) when f_s > 2W the copies do not overlap (no aliasing), (b) when f_s < 2W they overlap (aliasing)]

Ideal Sampling
If f_s > 2W, the original signal x(t) can be obtained from x_s(t) through simple low-pass filtering. In the frequency domain, we have

X(f) = X_s(f)·G(f),

where

G(f) = T_s for |f| ≤ B, and 0 otherwise,   with W ≤ B ≤ f_s − W.

The impulse response of the low-pass filter, g(t), is then

g(t) = ℑ^{−1}[G(f)] = ∫_{−B}^{B} G(f)·e^{j2πft} df = 2BT_s · sin(2πBt)/(2πBt).

From the convolution property of the Fourier transform we have:

x(t) = ∫_{−∞}^{∞} x_s(a)·g(t − a) da = Σ_k x(kT_s)·∫_{−∞}^{∞} δ(a − kT_s) g(t − a) da = Σ_k x(kT_s)·g(t − kT_s).

Thus, we have the following interpolation formula:

x(t) = Σ_k x(kT_s)·g(t − kT_s)

Ideal Sampling

[Figure: the ideal low-pass filter G(f), flat over |f| ≤ B, and its sinc-shaped impulse response g(t)]

The Sampling Theorem:
A bandlimited signal with no spectral components above W Hz can be recovered uniquely from its samples taken every T_s seconds, provided that

T_s ≤ 1/(2W),   or, equivalently,   f_s ≥ 2W   (the Nyquist rate).

Extraction of x(t) from its samples can be done by passing the sampled signal through a low-pass filter. Mathematically, x(t) can be expressed in terms of its samples by:

x(t) = Σ_k x(kT_s)·g(t − kT_s)

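A small sketch of the interpolation formula, using g(t) = sinc(t/T_s), i.e. the ideal low-pass filter with B = f_s/2; the 50 Hz test tone and 400 Hz sampling rate are arbitrary choices, with f_s well above the Nyquist rate.

```python
# Reconstruct x(t) = sum_k x(k*Ts) * g(t - k*Ts) with g(t) = sinc(t/Ts).
import numpy as np

f0, fs = 50.0, 400.0            # test-tone frequency and sampling rate (2W = 100 Hz)
Ts = 1.0 / fs

k = np.arange(200)                              # sample indices k = 0, ..., 199
samples = np.sin(2 * np.pi * f0 * k * Ts)       # x(k*Ts)

t = np.linspace(0.1, 0.4, 1000)                 # evaluate away from the ends to limit truncation error
# np.sinc(u) = sin(pi*u)/(pi*u), so g(t - k*Ts) = sinc((t - k*Ts)/Ts)
x_rec = samples @ np.sinc((t[None, :] - k[:, None] * Ts) / Ts)

print("max reconstruction error:", np.max(np.abs(x_rec - np.sin(2 * np.pi * f0 * t))))
```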
Natural Sampling
A delta function can be approximated by a rectangular pulse p(t):

p(t) = 1/T for −T/2 ≤ t ≤ T/2, and 0 elsewhere,

h_p(t) = Σ_{k=−∞}^{∞} p(t − kT_s).

It can be shown that in this case as well the original signal can be reconstructed from its samples taken at or above the Nyquist rate through simple low-pass filtering.

Zero-Order-Hold Sampling
x_s(t) = p(t) ∗ [x(t)·h(t)]

[Figure: the zero-order-hold (staircase) version of x(t), held constant over each sampling interval T_s]

Reconstruction is possible, but an equalizer may be needed: x_s(t) is passed through an equalizer 1/P(f) followed by a low-pass filter G(f) to recover x(t).
Practical Considerations of Sampling
Since practical low-pass filters are not ideal and have a finitely steep roll-off, the sampling frequency f_s is in practice chosen about 10% higher than the Nyquist rate:

f_s ≥ 2.2W

Example:
Music in general has a spectrum with frequency components in the range ~20kHz. The
ideal, smallest sampling frequency fs is then 40 Ksamples/sec. The smallest practical
sampling frequency is 44Ksamples/sec. In compact disc players, the sampling frequency
is 44.1Ksamples/sec.

Summary of Sampling (Nyquist) Theorem
• An analog signal of bandwidth W Hz can be reconstructed exactly from its samples taken at a rate at or above 2W samples/s (known as the Nyquist rate)

[Figure: x(t) and its sampled version x_s(t), with sampling rate f_s = 1/T_s > 2W]
Summary of Sampling Theorem (cont’d)

Signal reconstruction

[Figure: the sampled signal x_s(t) is passed through a low-pass filter to recover x(t)]

Amplitude still takes values on a continuum => Infinite number of bits


Need to have a finite number of possible amplitudes => Quantization

Quantization
• Quantization is the process of discretizing the amplitude axis. It involves mapping an infinite number of possible amplitudes to a finite set of values.

N bits can represent L = 2^N amplitudes. This corresponds to N-bit quantization.

Quantization can be:
  - Uniform vs. nonuniform
  - Scalar vs. vector
Example (Quantization)

Let N = 3 bits. This corresponds to L = 8 quantization levels:

[Figure: a waveform x(t) quantized by a 3-bit uniform quantizer; the eight levels are labeled 111, 110, 101, 100, 000, 001, 010, 011 from top to bottom]

Quantization (cont’d)
• There is an irrecoverable error due to quantization. It can be made small through appropriate design.
• Examples:
  - Telephone speech signals: 8-bit quantization
  - CD digital audio: 16-bit quantization

Input-Output Characteristic
[Figure: input–output characteristic x̂ = Q(x) of a 3-bit (8-level) uniform quantizer with step size ∆; the output levels are ±∆/2, ±3∆/2, ±5∆/2, ±7∆/2 and the decision boundaries are 0, ±∆, ±2∆, ±3∆]

Quantization error: d = (x − x̂) = (x − Q(x))
Signal-to-Quantization Noise Ratio (SQNR)

For a random variable X:

SQNR = P_X / D,   where P_X = E[X²] and D = E[(X − Q(X))²].

For a stochastic signal X(t):

SQNR = P_X / D,   where
P_X = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} E[X²(t)] dt,
D = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} E[(X(t) − Q(X(t)))²] dt.

For stationary processes, the random-variable expressions can be used directly.
SQNR for Uniform Scalar Quantizers
Let the input x(t) be a sinusoid of amplitude V volts. It can
be argued that all amplitudes in [-V,V] are equally likely.

Then, if the step size is , the quantization error is
uniformly distributed in the interval
p(e)
1/ ∆
⎡ ∆ ∆⎤
⎢⎣ − 2 , 2 ⎥⎦
− ∆/2 ∆/2 e

1 ∆2 2 ∆2 1 T2 2 2 V2
D = ∫ ∆ e de = PX = lim ⋅ ∫ T V sin (ωt )dt =
∆ −2 12 T →∞ T −
2 2
For an N-bit quantizer: ∆ = 2V / 2 N

⎛P ⎞
∴ SQNR = 10 ⋅ log10 ⎜ X ⎟ = 6.02 ⋅ N + 1.76 dB
⎝D ⎠
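A numeric check of the 6.02N + 1.76 dB rule, using a mid-rise uniform quantizer and a full-scale sinusoid (V = 1); the test frequency and bit widths are arbitrary illustration choices.

```python
# Measure the SQNR of an N-bit mid-rise uniform quantizer driven by a
# full-scale sinusoid and compare with 6.02*N + 1.76 dB.
import numpy as np

def uniform_quantize(x, n_bits, v=1.0):
    """Mid-rise uniform quantizer with 2**n_bits levels and step 2*v/2**n_bits."""
    delta = 2 * v / 2**n_bits
    return delta * (np.floor(x / delta) + 0.5)

t = np.linspace(0, 1, 1_000_000, endpoint=False)
x = np.sin(2 * np.pi * 7 * t)                 # full-scale sinusoid, V = 1

for n_bits in (4, 8, 12):
    xq = uniform_quantize(x, n_bits)
    sqnr = 10 * np.log10(np.mean(x**2) / np.mean((x - xq)**2))
    print(f"{n_bits:2d} bits: measured {sqnr:5.2f} dB   formula {6.02*n_bits + 1.76:5.2f} dB")
```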
Example
A zero-mean, stationary Gaussian source X(t) having the spectral density given below is to be quantized using a 2-bit quantizer. The quantization intervals and levels are as indicated below. Find the resulting SQNR.

S_X(f) = 200 / (1 + (2πf)²)   ⇔   R_XX(τ) = 100·e^{−|τ|}

P_X = R_XX(0) = 100,   f_X(x) = (1/√(200π))·e^{−x²/200}

[Figure: quantizer with decision boundaries −10, 0, 10 and output levels −15, −5, 5, 15]

D = E[(X − Q(X))²] = 2·∫_{0}^{10} (x − 5)² f_X(x) dx + 2·∫_{10}^{∞} (x − 15)² f_X(x) dx = 11.885

SQNR = 10·log10(100 / 11.885) = 9.25 dB
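The distortion integral above can be checked numerically; the short sketch below reproduces D ≈ 11.885 and SQNR ≈ 9.25 dB using SciPy's quadrature.

```python
# Numeric check of D and the SQNR for the 2-bit quantizer above:
# X ~ N(0, 100), decision boundaries -10, 0, 10 and output levels -15, -5, 5, 15.
import numpy as np
from scipy import integrate

sigma2 = 100.0
f = lambda x: np.exp(-x**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

# By symmetry, integrate over the positive half-line and double the result
D_inner, _ = integrate.quad(lambda x: (x - 5)**2 * f(x), 0, 10)
D_outer, _ = integrate.quad(lambda x: (x - 15)**2 * f(x), 10, np.inf)
D = 2 * (D_inner + D_outer)

print("D    =", round(D, 3))                                  # about 11.885
print("SQNR =", round(10 * np.log10(sigma2 / D), 2), "dB")    # about 9.25 dB
```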
Non-uniform Quantization
• In general, the optimum quantizer is non-uniform.
• Optimality conditions (Lloyd-Max):
  - The boundaries of the quantization intervals are the mid-points of the corresponding quantized values.
  - The quantized values are the centroids of the quantization regions.
  - Optimum quantizers are designed iteratively using the above rules (see the sketch below).
• We can also talk about optimal uniform quantizers. These have equal-length quantization intervals (except possibly the two at the boundaries), and the quantized values are at the centroids of the quantization intervals.

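A minimal sketch of the Lloyd-Max iteration for a zero-mean, unit-variance Gaussian source with 4 levels; the source, number of levels, initial guess, and tolerance are illustrative choices, not values from the notes.

```python
# Lloyd-Max iteration for a 4-level quantizer of a standard Gaussian source.
import numpy as np
from scipy import integrate

f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)    # source density, N(0, 1)

levels = np.array([-1.5, -0.5, 0.5, 1.5])               # initial quantized values
for _ in range(200):
    # Rule 1: boundaries are the mid-points of adjacent quantized values
    bounds = np.concatenate(([-np.inf], (levels[:-1] + levels[1:]) / 2, [np.inf]))
    # Rule 2: quantized values are the centroids of the quantization regions
    new_levels = np.array([
        integrate.quad(lambda x: x * f(x), a, b)[0] / integrate.quad(f, a, b)[0]
        for a, b in zip(bounds[:-1], bounds[1:])
    ])
    if np.max(np.abs(new_levels - levels)) < 1e-8:
        break
    levels = new_levels

print("boundaries :", np.round(bounds[1:-1], 4))   # roughly -0.98, 0, 0.98
print("levels     :", np.round(levels, 4))         # roughly -1.51, -0.45, 0.45, 1.51
```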
Optimal Quantizers for a Gaussian Source

Companding (compressing-expanding)
[Block diagram: Compressor → Uniform Quantizer → ... → Low-pass Filter → Expander]

µ-law companding:

g(x) = [ln(1 + µ·|x|) / ln(1 + µ)] · sgn(x),   −1 ≤ x ≤ 1

g^{−1}(x) = (1/µ)·[(1 + µ)^{|x|} − 1] · sgn(x),   −1 ≤ x ≤ 1

[Figure: the compressor characteristic g(x) and the expander characteristic g^{−1}(x) for µ = 0, 10, 255]
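A small implementation of the µ-law compressor and expander above (µ = 255 is the value used in North American telephony); the test inputs are arbitrary.

```python
# mu-law compressor and expander, following the formulas on the previous slide.
import numpy as np

def mu_compress(x, mu=255.0):
    return np.log1p(mu * np.abs(x)) / np.log1p(mu) * np.sign(x)

def mu_expand(y, mu=255.0):
    return ((1 + mu) ** np.abs(y) - 1) / mu * np.sign(y)

x = np.linspace(-1, 1, 11)
y = mu_compress(x)
print(np.allclose(mu_expand(y), x))       # True: the expander inverts the compressor
print(np.round(mu_compress(np.array([0.01, 0.1, 1.0])), 3))   # small inputs are boosted
```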
Examples: Sampling, Quantization
• Speech signals have a bandwidth of about 3.4 kHz. The sampling rate in telephone channels is 8 kHz. With an 8-bit quantization, this results in a bit rate of 64,000 bits/s to represent speech.
• In CDs, the sampling rate is 44.1 kHz. With a 16-bit quantization, the bit rate to represent the audio (for each channel) is 705,600 bits/s (without coding).

Data Compression
[Block diagram] Analog source: Analog Source → Sampler → Quantizer (A/D converter) → Source Encoder → 001011001...
Discrete source: Discrete Source → Source Encoder → 01101001...

The job of the source encoder is to represent the digitized source efficiently, i.e., using the smallest number of bits.
Discrete Memoryless Sources
Definition: A discrete source is memoryless if successive
symbols produced by it are independent.

For a memoryless source, the probability of a


sequence of symbols being produced equals the
product of the probabilities of the individual
symbols.

Measuring “Information”
• Not all sources are created equal:
  - Example:

Discrete Source 1: P(0) = 1, P(1) = 0 — no information provided
Discrete Source 2: P(0) = 0.99, P(1) = 0.01 — little information is provided
Discrete Source 3: P(0) = 0.5, P(1) = 0.5 — much information is provided

Measuring “Information” (cont’d)
• The amount of information provided is a function of the probabilities of occurrence of the symbols.
• Definition: The self-information of a symbol x which has probability of occurrence p is

I(x) = −log2(p) bits

• Definition: The average amount of information in bits/symbol provided by a binary source with P(0) = p is

H(x) = −p·log2(p) − (1 − p)·log2(1 − p)

H(x) is known as the entropy of the binary source


The Binary Entropy Function
Maximum information is conveyed when the probabilities of the two symbols are equal.

[Figure: the binary entropy function H(x) versus p, equal to 0 at p = 0 and p = 1 and peaking at 1 bit at p = 0.5]
Non-Binary Sources
In general, the entropy of a source that produces L symbols with probabilities p_1, p_2, ..., p_L is

H(X) = −Σ_{i=1}^{L} p_i·log2(p_i) bits

Property: The entropy function satisfies

0 ≤ H(X) ≤ log2(L),

with equality on the right if and only if the source probabilities are equal.
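A short helper that evaluates H(X) and illustrates the bounds 0 ≤ H(X) ≤ log2(L); the example distributions are arbitrary.

```python
# Entropy of an L-symbol source, illustrating 0 <= H(X) <= log2(L).
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                              # 0*log2(0) is taken as 0
    return -np.sum(p * np.log2(p))

print(entropy([0.25, 0.25, 0.25, 0.25]))      # 2.0 = log2(4): equiprobable symbols
print(entropy([0.7, 0.1, 0.1, 0.1]))          # about 1.357 < log2(4)
print(entropy([1.0, 0.0, 0.0, 0.0]))          # 0.0: a deterministic source
```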
Encoding of Discrete Sources
• Fixed-length coding: assigns source symbols binary sequences of the same length.
• Variable-length coding: assigns source symbols binary sequences of different lengths.

Example: Variable-length code

Symbol   Probability   Codeword   Length
a        3/8           0          1
b        3/8           11         2
c        1/8           100        3
d        1/8           101        3

M̄ = Σ_{i=1}^{4} m_i·p_i = 1×(3/8) + 2×(3/8) + 3×(1/8) + 3×(1/8) = 1.875 bits/symbol

H(X) = −Σ_{i=1}^{4} p_i·log2(p_i) = 1.811 bits/symbol

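A quick check of the numbers in this example:

```python
# Average codeword length and entropy for the variable-length code above.
import numpy as np

probs   = [3/8, 3/8, 1/8, 1/8]        # symbols a, b, c, d
lengths = [1, 2, 3, 3]                # codewords 0, 11, 100, 101

M_bar = sum(m * p for m, p in zip(lengths, probs))
H = -sum(p * np.log2(p) for p in probs)
print("average length:", M_bar, "bits/symbol")        # 1.875
print("entropy       :", round(H, 3), "bits/symbol")  # 1.811
```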
Theorem (Source Coding)
• The smallest possible average number of bits/symbol needed to exactly represent a source equals the entropy of that source.
• Example 1: A binary file of length 1,000,000 bits contains 100,000 "1"s. This file can be compressed by more than a factor of 2:

H(x) = −0.9·log2(0.9) − 0.1·log2(0.1) = 0.47 bits
S = 10^6 × H(x) = 4.7×10^5 bits
Compression ratio = 2.13
Some Data Compression Algorithms
• Huffman coding
• Run-length coding
• Lempel-Ziv
• There are also lossy compression algorithms that do not exactly represent the source, but do a good job. These provide much better compression ratios (more than a factor of 10, depending on reproduction quality).

Huffman Coding (by example)
A binary source produces bits with P(0) = 0.1. Design a Huffman code that encodes 3-bit sequences from the source.

Source bits   Probability   Codeword   Length
111           0.729         1          1
110           0.081         011        3
101           0.081         010        3
011           0.081         001        3
100           0.009         00011      5
010           0.009         00010      5
001           0.009         00001      5
000           0.001         00000      5

[Figure: the Huffman tree, repeatedly merging the two smallest probabilities (0.001, 0.009, ...) until the root probability 1.0 is reached]

H(x) = −0.1·log2(0.1) − 0.9·log2(0.9) = 0.469

M̄ = (1/3)·(1×0.729 + 3×3×0.081 + 5×3×0.009 + 5×0.001) = 0.53 code bits per source bit
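For reference, a compact heap-based Huffman construction for the eight 3-bit blocks above; the tie-breaking counter is an implementation detail, but the resulting codeword lengths (1, 3, 3, 3, 5, 5, 5, 5) and the average of about 0.53 code bits per source bit match the table (the individual 0/1 labels may differ).

```python
# Heap-based Huffman code for 3-bit blocks from a source with P(0) = 0.1.
import heapq
from itertools import product

p1 = 0.9                                      # P(1); P(0) = 1 - p1 = 0.1
blocks = {}
for bits in product('01', repeat=3):
    p = 1.0
    for b in bits:
        p *= p1 if b == '1' else 1 - p1
    blocks[''.join(bits)] = p

# Heap entries: (probability, tie-break id, {block: codeword-so-far})
heap = [(p, i, {s: ''}) for i, (s, p) in enumerate(blocks.items())]
heapq.heapify(heap)
next_id = len(heap)
while len(heap) > 1:
    p_a, _, code_a = heapq.heappop(heap)      # two least probable subtrees
    p_b, _, code_b = heapq.heappop(heap)
    merged = {s: '0' + c for s, c in code_a.items()}
    merged.update({s: '1' + c for s, c in code_b.items()})
    heapq.heappush(heap, (p_a + p_b, next_id, merged))
    next_id += 1

code = heap[0][2]
avg = sum(blocks[s] * len(c) for s, c in code.items()) / 3
for s in sorted(code, key=lambda s: (len(code[s]), s)):
    print(s, code[s], round(blocks[s], 3))
print("average code bits per source bit:", round(avg, 4))   # about 0.533
```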
Run-length Coding (by example)
A binary source produces binary digits with P(0) = 0.9. Design a run-length code to compress the source.

Source bits   Run length   Probability   Codeword
1             0            0.100         1000
01            1            0.090         1001
001           2            0.081         1010
0001          3            0.073         1011
00001         4            0.066         1100
000001        5            0.059         1101
0000001       6            0.053         1110
00000001      7            0.048         1111
00000000      8            0.430         0

M̄_1 = 4×(1 − 0.43) + 1×0.43 = 2.71   (average number of code bits per codeword)
M̄_2 = 1×0.1 + 2×0.09 + 3×0.081 + 4×0.073 + 5×0.066 + 6×0.059 + 7×0.053 + 8×0.048 + 8×0.430 = 5.710   (average number of source bits per codeword)

M̄ = M̄_1/M̄_2 = 2.710/5.710 = 0.475 code bits per source bit

H(X) = −0.9·log2(0.9) − 0.1·log2(0.1) = 0.469 < 0.475

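A sketch of the run-length code in the table: each run of r < 8 zeros terminated by a 1 is encoded as '1' followed by r in three bits, and a full run of eight zeros as the single bit '0'. The simulated source length is arbitrary, and a trailing partial run is simply ignored in this sketch.

```python
# Run-length encoder matching the table above, applied to a simulated source.
import numpy as np

def rle_encode(bits):
    out, run = [], 0
    for b in bits:
        if b == 0:
            run += 1
            if run == 8:                  # complete run of eight zeros
                out.append('0')
                run = 0
        else:                             # a 1 terminates the current run
            out.append('1' + format(run, '03b'))
            run = 0
    return ''.join(out)                   # any trailing partial run is ignored here

rng = np.random.default_rng(3)
bits = (rng.random(100_000) < 0.1).astype(int)        # P(1) = 0.1, P(0) = 0.9
encoded = rle_encode(bits)
print("code bits per source bit:", round(len(encoded) / len(bits), 3))   # about 0.475
```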
Examples: Speech Compression
• Toll-quality speech signals can be produced at 8 Kbps (a factor of 8 compression compared to uncompressed telephone signals).
• Algorithms that produce speech at 4.8 Kbps or even 2.4 Kbps are available, but have reduced quality and require complex processing.

Example: Video Compression
• Uncompressed video of a 640×480-pixel image at 8 bits/pixel and 30 frames/s requires a data rate of 72 Mbps.
• Video-conference systems operate at 384 Kbps.
• MPEG2 (standard) operates at 3 Mbps.

