An Introduction To Digital Communications
Costas N. Georghiades
Electrical Engineering Department
Texas A&M University
These notes are made available for students of EE 455, and they are to be used to enhance understanding of the course. Any unauthorized copying and distribution of these notes is prohibited.
Course Outline
Introduction
- Analog vs. digital communication systems
- A general communication system
- Signal spaces
- Modulation: PAM, QAM, PSK, DPSK, coherent FSK, incoherent FSK
Outline (cont’d)
Channel coding
- Block codes, hard and soft-decision decoding, performance
- Convolutional codes, the Viterbi algorithm, performance bounds
- Trellis-coded modulation (TCM)
Signaling through bandlimited channels
- ISI, Nyquist pulses, sequence estimation, partial response signaling
- Equalization
Outline (cont’d)
Signaling through fading channels
- Rayleigh fading, optimum receiver, performance
- Interleaving
Synchronization
- Symbol synchronization
- Frame synchronization
- Carrier synchronization
Introduction
A General Communication System
Digital vs. Analog Communication
Analog systems have an alphabet which is
uncountably infinite.
- Example: Analog Amplitude Modulation (AM)
[Block diagram: AM receiver, mixing the received signal with an RF oscillator]
Analog vs. Digital (cont’d)
- Digital systems transmit signals from a discrete alphabet
  - Example: Binary digital communication systems
[Diagram: a bit stream ...0110010... drives a transmitter that maps each bit (1 or 0) to a waveform]
Digital systems are resistant to noise...
[Diagram: bit 1 is sent as s1(t) and bit 0 as s2(t); the channel adds noise, giving r(t). The receiver multiplies r(t) by s1(t), integrates over [0, T], and at t = T compares the result to a threshold of 0 to decide between 1 and 0.]
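To make the noise-resistance argument concrete, the following is a minimal simulation sketch (not from the notes; the rectangular signal shapes, noise level, and variable names are assumptions of mine) of the correlate-and-threshold receiver in the diagram:

import numpy as np

# Binary antipodal signaling: s1(t) = +A and s2(t) = -A over [0, T], AWGN channel,
# correlate with s1(t) and compare to a zero threshold at t = T (assumed parameters).
rng = np.random.default_rng(0)
A, T, samples_per_bit = 1.0, 1.0, 100
sigma = 0.5                                     # noise standard deviation per sample
n_bits = 10_000

bits = rng.integers(0, 2, n_bits)               # 1 -> s1(t), 0 -> s2(t)
s = np.where(bits[:, None] == 1, A, -A) * np.ones((n_bits, samples_per_bit))
r = s + sigma * rng.standard_normal(s.shape)    # received waveform r(t)

stats = (r * A).sum(axis=1) * (T / samples_per_bit)   # integrate r(t)*s1(t) over [0, T]
decisions = (stats > 0).astype(int)
print("bit error rate:", np.mean(decisions != bits))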
Advantages of Digital Systems
Error correction/detection
Better encryption algorithms
More reliable data processing
Easily reproducible designs
  - Reduced cost
Easier data multiplexing
Facilitate data compression
A General Digital Communication System
[Block diagram: a general digital communication system with transmitter, channel, receiver, and a synchronization unit]
Some Probability Theory
Definition: A non-empty collection of subsets α = {A1, A2, ...} of a set Ω, i.e., α = {Ai ; Ai ⊆ Ω}, is called an algebra of sets if:
1) Ai ∈ α and Aj ∈ α ⇒ Ai ∪ Aj ∈ α
2) Ai ∈ α ⇒ Ai^c ∈ α   (closure under complementation)
Probability Measure
Definition: A class of subsets α of a space Ω is a σ-algebra (or a Borel algebra) if:
1) Ai ∈ α ⇒ Ai^c ∈ α.
2) Ai ∈ α, i = 1, 2, 3, ... ⇒ ∪(i=1 to ∞) Ai ∈ α.

Definition: A probability measure P assigns a number to each event in α such that:
1) P[Ω] = 1.
2) P[A] ≥ 0 for all A ∈ α.
3) P[∪(i=1 to ∞) Ai] = Σ(i=1 to ∞) P[Ai] whenever Ai ∩ Aj = ∅ for i ≠ j.
Probability Measure
Let Ω = ℜ (the real line) and α be the set of all intervals (x1, x2] in ℜ.
Also, define a real valued function f which maps ℜ → ℜ such that:
1) f(x) ≥ 0 for all x ∈ ℜ.
2) ∫(−∞ to ∞) f(x) dx = 1.
Then:
P[{x ∈ ℜ; x1 < x ≤ x2}] = P[(x1, x2]] = ∫(x1 to x2) f(x) dx
Probability Space
The following conclusions can be drawn from the above definition:
1) P[∅] = 0.
2) P[A^c] = 1 − P[A]   (since P(A ∪ A^c) = P(Ω) = 1 = P(A) + P(A^c)).
3) If A1 ⊂ A2, then P(A1) ≤ P(A2).
4) P[A1 ∪ A2] = P[A1] + P[A2] − P[A1 ∩ A2].

The cumulative distribution function of a random variable X with density fX(x) is
FX(x) = P[X ≤ x] = ∫(−∞ to x) fX(α) dα,
which is non-decreasing: x1 ≥ x2 ⇒ FX(x1) ≥ FX(x2).
Example Density Functions
b) Uniform on [0, 1]:
fX(x) = 1 for x ∈ [0, 1], and 0 otherwise.
[Plot: fX(x) equal to 1 on the interval 0 ≤ x ≤ 1]
Conditional Probability
Let A and B be two events from the event space α. Then, the probability of event A,
given that event B has occurred, P[A | B ] , is given by
P[A | B] = P[A ∩ B] / P[B].

Example: Consider the toss of a fair die:
P[{2} | "even outcome"] = 1/3,   P[{2} | "odd outcome"] = 0.
Thus, conditioning can increase or decrease the probability of an event, compared to its unconditioned value.
Illustration, Law of Total Probability
P[B] = Σ(i) P[B | Ai] · P[Ai], where {Ai} is a partition of Ω.
[Diagram: Ω partitioned into A1, A2, A3, with event B overlapping each region; P(B | A2) and P(B | A3) indicated]
Example, Conditional Probability
Binary symmetric channel:
Pr(0) = 1/2,   Pr(1) = 1/2
P00 = P[receive 0 | 0 sent]     P01 = P[receive 1 | 0 sent]
P10 = P[receive 0 | 1 sent]     P11 = P[receive 1 | 1 sent]

With P01 = P10 = 0.01,
Pr(e) = Pr(0)·P01 + Pr(1)·P10 = (1/2)(0.01) + (1/2)(0.01) = 0.01
Bayes' Law
Bayes' Law:
Let Ai, i = 1, 2, ..., M be a partition of Ω and B an event in α. Then

P[Aj | B] = P[B | Aj] · P[Aj] / Σ(i=1 to M) P[B | Ai] · P[Ai]

Proof:
P[Aj | B] = P[Aj ∩ B] / P[B]  ⇒  P[Aj ∩ B] = P[Aj | B] · P[B] = P[B | Aj] · P[Aj]

⇒  P[Aj | B] = P[B | Aj] · P[Aj] / P[B] = P[B | Aj] · P[Aj] / Σ(i=1 to M) P[B | Ai] · P[Ai]
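As a numeric illustration (my own sketch, reusing the binary channel of the previous example), Bayes' law gives the posterior probability that a 0 was sent given that a 0 was received:

# Priors and crossover probabilities from the earlier channel example (assumed values).
p0, p1 = 0.5, 0.5
p_r0_given_s0 = 0.99        # P[receive 0 | 0 sent]
p_r0_given_s1 = 0.01        # P[receive 0 | 1 sent]

p_r0 = p_r0_given_s0 * p0 + p_r0_given_s1 * p1      # law of total probability
p_s0_given_r0 = p_r0_given_s0 * p0 / p_r0           # Bayes' law
print(p_s0_given_r0)                                # 0.99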
Statistical Independence of Events
Two events A and B are said to be statistically independent if
P[ A ∩ B] = P[ A] ⋅ P[ B] .
In intuitive terms, two events are independent if the occurrence of one does not affect the
occurrence of the other, i.e.,
P ( A| B ) = P ( A )
when A and B are independent.
Expectation
Consider a random variable X with density fX (x). The expected (or mean) value of X is
given by
E[X] = ∫(−∞ to ∞) x fX(x) dx.

The expected value of a function g(X) of X is
E[g(X)] = ∫(−∞ to ∞) g(x) fX(x) dx.

The variance of X is
var(X) = ∫(−∞ to ∞) [x − E(X)]² fX(x) dx
       = ∫(−∞ to ∞) x² fX(x) dx − E²(X)
       = E(X²) − E²(X).
Example, Expectation
Example: Let X be Gaussian with
fX(x) = (1/√(2πσ²)) · exp(−(x − µ)² / (2σ²))

Then:
E(X) = (1/√(2πσ²)) · ∫(−∞ to ∞) x e^(−(x − µ)²/(2σ²)) dx = µ,

Var(X) = E[X²] − E²(X) = ∫(−∞ to ∞) x² fX(x) dx − µ² = σ².
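A quick numerical check of these two results (a sketch with arbitrarily chosen µ and σ, using scipy quadrature):

import numpy as np
from scipy.integrate import quad

mu, sigma = 1.5, 2.0        # arbitrary example values
f = lambda x: np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

mean, _ = quad(lambda x: x * f(x), -np.inf, np.inf)
second_moment, _ = quad(lambda x: x**2 * f(x), -np.inf, np.inf)
print(mean, second_moment - mean**2)    # approximately mu and sigma^2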
Random Vectors
Definition: A random vector is a vector whose elements are random variables, i.e., if X1,
X2, ..., Xn are random variables, then
X = ( X 1 , X 2 ,..., X n )
is a random vector.
The joint density of X is
fX(x) = fX1X2...Xn(x1, x2, ..., xn).

Example: Consider tossing a coin twice. Let X1 be the random variable associated with the outcome of the first toss, defined by
X1 = 1 if heads, 0 if tails.
Similarly, let X2 be the random variable associated with the second toss, defined as
X2 = 1 if heads, 0 if tails.
The Characteristic Function
Definition: Let X be a random variable with density f X ( x ) . Then the characteristic
function of X is
ΨX(jω) = E[e^(jωX)] = ∫(−∞ to ∞) e^(jωx) fX(x) dx.

Definition: The moment-generating function of X is
ΦX(s) = E[e^(sX)] = ∫(−∞ to ∞) e^(sx) fX(x) dx.

Fact: The moment-generating function of a random variable X can be used to obtain its moments according to:
E[X^n] = d^n ΦX(s) / ds^n |_(s=0)
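For example, differentiating the moment-generating function of a Gaussian random variable recovers its moments. A symbolic sketch (the closed-form Gaussian MGF exp(µs + σ²s²/2) is standard; the variable names are mine):

import sympy as sp

s, mu, sigma = sp.symbols('s mu sigma', positive=True)
Phi = sp.exp(mu * s + sigma**2 * s**2 / 2)     # MGF of a Gaussian random variable

m1 = sp.diff(Phi, s, 1).subs(s, 0)             # E[X]
m2 = sp.diff(Phi, s, 2).subs(s, 0)             # E[X^2]
print(sp.simplify(m1), sp.simplify(m2 - m1**2))    # mu, sigma**2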
Stochastic Processes
A stochastic process {X (t ); − ∞ < t < ∞} is an ensemble of signals, each of which can be
realized (i.e. it can be observed) with a certain statistical probability. The value of a
stochastic process at any given time, say t1, (i.e., X(t1)) is a random variable.
Definition: A Gaussian stochastic process is one for which X(t) is a Gaussian random
variable for every time t.
[Plots: two sample realizations of a stochastic process, one fast-varying and one slow-varying, shown as amplitude vs. time]
Characterization of Stochastic Processes
Consider a stochastic process { X (τ );−∞ < τ < ∞} . The random variable X(t), t ∈ℜ ,
has a density function f X ( t ) ( x; t ) . The mean and variance of X(t) are
E[X(t)] = µX(t) = ∫(−∞ to ∞) x fX(t)(x; t) dx,
VAR[X(t)] = E[(X(t) − µX(t))²].

Example: Consider the Gaussian random process whose value X(t) at time t is a Gaussian random variable having density
fX(x; t) = (1/√(2πt)) · exp(−x²/(2t)).
We have E [ X ( t )] = 0 (zero-mean process), and var[ X ( t ) ] = t .
[Plot: the density fX(x; t) for t = 1 and t = 2; the spread increases with t]
Autocovariance and Autocorrelation
Definition: The autocovariance function of a random process X(t) is:
CXX(t1, t2) = E[(X(t1) − µX(t1)) · (X(t2) − µX(t2))],   t1, t2 ∈ ℜ.

Definition: The autocorrelation function is:
RXX(t1, t2) = E[X(t1) · X(t2)],   t1, t2 ∈ ℜ.

Definition: For a wide-sense stationary process we can define a spectral density, which is the Fourier transform of the process's autocorrelation function:
SX(f) = ∫(−∞ to ∞) RXX(τ) e^(−j2πfτ) dτ.

The autocorrelation function is the inverse Fourier transform of the spectral density:
RXX(τ) = ∫(−∞ to ∞) SX(f) e^(j2πfτ) df.

In particular (for a zero-mean process),
var(X) = RXX(0) = ∫(−∞ to ∞) SX(f) df.
Linear Filtering of Stochastic Signals
[Block diagram: x(t) → H(f) → y(t)]

SY(f) = SX(f) · |H(f)|²
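A numerical sketch of this relation (my own example: approximately white noise through a Butterworth low-pass filter, with spectral densities estimated by Welch's method):

import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs = 1000.0
x = rng.standard_normal(200_000)               # approximately white input, flat S_X(f)

b, a = signal.butter(4, 100, fs=fs)            # 4th-order low-pass filter, 100 Hz cutoff
y = signal.lfilter(b, a, x)

f, Sx = signal.welch(x, fs=fs, nperseg=4096)   # estimate S_X(f)
_, Sy = signal.welch(y, fs=fs, nperseg=4096)   # estimate S_Y(f)
_, H = signal.freqz(b, a, worN=f, fs=fs)       # filter frequency response H(f)

# S_Y(f) should track S_X(f)*|H(f)|^2, up to estimation noise.
ratio = Sy[1:200] / (Sx[1:200] * np.abs(H[1:200])**2)
print(np.median(ratio))                        # close to 1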
White Gaussian Noise
Definition: A stochastic process X is white Gaussian if:
a) µX(t) = µ (constant)
b) RXX(τ) = (N0/2) · δ(τ),   where τ = t1 − t2
c) X(t) is a Gaussian random variable, and X(ti) is independent of X(tj) for all ti ≠ tj.
Analog-to-Digital Conversion
- Two steps:
  - Discretize time: Sampling
  - Discretize amplitude: Quantization
[Plot: an analog signal, continuous in time and continuous in amplitude]
Sampling
Signals are characterized by their frequency
content
The Fourier transform of a signal describes its
frequency content and determines its bandwidth
X(f) = ∫(−∞ to ∞) x(t) e^(−j2πft) dt   ⇔   x(t) = ∫(−∞ to ∞) X(f) e^(j2πft) df
[Plots: a pulse x(t) vs. time (sec) and its Fourier transform X(f) vs. frequency (Hz)]
Ideal Sampling
Mathematically, the sampled version, xs(t), of signal x(t) is:
xs(t) = h(t) · x(t)   ⇔   Xs(f) = H(f) ∗ X(f),
where the sampling function is the impulse train
h(t) = Σ(k=−∞ to ∞) δ(t − kTs) = (1/Ts) Σ(k=−∞ to ∞) e^(j2πkt/Ts).
[Plots: the impulse train h(t), with impulses every Ts, and the resulting sampled signal xs(t)]
Ideal Sampling
H(f) = ℑ{ (1/Ts) Σ(k=−∞ to ∞) e^(j2πkt/Ts) } = (1/Ts) Σ(k=−∞ to ∞) δ(f − k/Ts).
Then:
Xs(f) = H(f) ∗ X(f) = (1/Ts) Σ(k=−∞ to ∞) X(f − k/Ts).
[Spectra of Xs(f): (a) no aliasing when fs > 2W, the shifted copies of X(f) do not overlap; (b) aliasing when fs < 2W, the copies overlap]
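A small numerical illustration of aliasing (example frequencies are my own): a 7 Hz sinusoid sampled at fs = 10 Hz < 2·7 Hz produces exactly the same samples as a 3 Hz sinusoid.

import numpy as np

f0, fs = 7.0, 10.0                    # signal frequency and (too low) sampling rate
n = np.arange(20)                     # sample indices
samples_7Hz = np.cos(2 * np.pi * f0 * n / fs)
samples_3Hz = np.cos(2 * np.pi * 3.0 * n / fs)   # alias at |f0 - fs| = 3 Hz
print(np.allclose(samples_7Hz, samples_3Hz))     # True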
Ideal Sampling
If fs>2W, the original signal x(t) can be obtained from xs(t) through simple low-pass
filtering. In the frequency domain, we have
X(f) = Xs(f) · G(f),
where
G(f) = Ts for |f| ≤ B, and 0 otherwise,   for any W ≤ B ≤ fs − W.

The impulse response of the low-pass filter, g(t), is then
g(t) = ℑ⁻¹[G(f)] = ∫(−B to B) Ts · e^(j2πft) df = 2BTs · sin(2πBt)/(2πBt).
[Plot: the ideal low-pass filter G(f), of height Ts, for −B ≤ f ≤ B]
Ideal Sampling
[Plots: the reconstruction filter G(f), of height Ts over −B ≤ f ≤ B, and its impulse response g(t)]
The Sampling Theorem:
A bandlimited signal with no spectral components above W Hz can be recovered
uniquely from its samples taken every Ts seconds, provided that
Ts ≤ 1/(2W), or, equivalently, fs ≥ 2W (the Nyquist rate).
Extraction of x(t) from its samples can be done by passing the sampled signal through a
low-pass filter. Mathematically, x(t) can be expressed in terms of its samples by:
x(t) = Σ(k) x(kTs) · g(t − kTs)
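A sketch of this reconstruction formula in action (my own test signal and parameters; g(t) is the ideal low-pass kernel with B = fs/2, so 2BTs·sin(2πBt)/(2πBt) reduces to sinc(t/Ts)):

import numpy as np

fs = 10.0                                   # sampling rate, above 2W for the test signal
Ts = 1.0 / fs
x = lambda t: np.cos(2 * np.pi * 2.0 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)   # W = 3 Hz

k = np.arange(-200, 201)                    # sample indices (wide enough to ignore edge effects)
t = np.linspace(-1.0, 1.0, 500)             # dense reconstruction grid
g = np.sinc((t[:, None] - k * Ts) / Ts)     # np.sinc(u) = sin(pi*u)/(pi*u)
x_rec = g @ x(k * Ts)                       # x(t) = sum_k x(kTs) g(t - kTs)
print(np.max(np.abs(x_rec - x(t))))         # small reconstruction error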
Natural Sampling
A delta function can be approximated by a rectangular pulse p(t):
p(t) = 1/T for −T/2 ≤ t ≤ T/2, and 0 elsewhere,
so the sampling function becomes
hp(t) = Σ(k=−∞ to ∞) p(t − kTs).
It can be shown that in this case as well the original signal can be reconstructed from its samples at or above the Nyquist rate through simple low-pass filtering.
Zero-Order-Hold Sampling
xs(t) = p(t) ∗ [x(t) · h(t)]
[Diagram: x(t) and its zero-order-hold version, which holds each sample for Ts; reconstruction chain: xs(t) → equalizer 1/P(f) → low-pass filter G(f) → x(t)]
Reconstruction is possible, but an equalizer may be needed.
Practical Considerations of Sampling
Since practical low-pass filters are not ideal and have a finitely steep roll-off, the sampling frequency fs is usually chosen somewhat higher (about 10%) than the Nyquist rate:
fs ≥ 2.2W
Example:
Music in general has spectral components up to about 20 kHz. The ideal, smallest sampling frequency fs is then 40 Ksamples/sec. The smallest practical sampling frequency is 44 Ksamples/sec. In compact disc players, the sampling frequency is 44.1 Ksamples/sec.
Summary of Sampling (Nyquist) Theorem
An analog signal of bandwidth W Hz can
be reconstructed exactly from its
samples taken at a rate at or above 2W
samples/s (known as the Nyquist rate)
[Plots: x(t) and its sampled version xs(t), taken at fs = 1/Ts > 2W]
Summary of Sampling Theorem (cont’d)
Signal reconstruction
[Diagram: xs(t) → low-pass filter → x(t)]
Quantization
Quantization is the process of discretizing the
amplitude axis. It involves mapping an infinite
number of possible amplitudes to a finite set
of values.
Quantization (cont’d)
There is an irrecoverable error due to
quantization. It can be made small through
appropriate design.
Examples:
- Telephone speech signals: 8-bit quantization
- CD digital audio: 16-bit quantization
Input-Output Characteristic
[Figure: input-output characteristic x̂ = Q(x) of a 3-bit (8-level) uniform quantizer. The output levels are ±∆/2, ±3∆/2, ±5∆/2, ±7∆/2; the decision boundaries are at 0, ±∆, ±2∆, ±3∆, over an input range of −4∆ to 4∆.]

Quantization error:  d = (x − x̂) = (x − Q(x))
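A minimal sketch of this 3-bit mid-rise uniform quantizer (step size and clipping behavior at the outermost levels are my own choices):

import numpy as np

def uniform_quantize(x, n_bits=3, delta=1.0):
    """Mid-rise uniform quantizer: outputs are odd multiples of delta/2."""
    levels = 2 ** n_bits
    idx = np.floor(x / delta)                          # which interval x falls into
    idx = np.clip(idx, -levels // 2, levels // 2 - 1)  # clip to the 2^n allowed outputs
    return (idx + 0.5) * delta

x = np.array([-3.7, -0.2, 0.1, 1.4, 10.0])
print(uniform_quantize(x))    # [-3.5 -0.5  0.5  1.5  3.5] for delta = 1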
Signal-to-Quantization Noise Ratio (SQNR)
Signal power:
PX = E[X²], or, for a time-varying (non-stationary) process,
PX = lim(T→∞) (1/T) ∫(−T/2 to T/2) E[X²(t)] dt.

Quantization distortion:
D = E[(X − Q(X))²], or, more generally,
D = lim(T→∞) (1/T) ∫(−T/2 to T/2) E{[X(t) − Q(X(t))]²} dt.

The simpler expressions can be used for stationary processes; the SQNR is the ratio PX/D.
SQNR for Uniform Scalar Quantizers
Let the input x(t) be a sinusoid of amplitude V volts. It can be argued that all amplitudes in [−V, V] are equally likely. Then, if the step size is ∆, the quantization error is uniformly distributed in the interval [−∆/2, ∆/2]:
p(e) = 1/∆,   −∆/2 ≤ e ≤ ∆/2.

D = (1/∆) ∫(−∆/2 to ∆/2) e² de = ∆²/12,
PX = lim(T→∞) (1/T) ∫(−T/2 to T/2) V² sin²(ωt) dt = V²/2.

For an N-bit quantizer, ∆ = 2V/2^N, so
SQNR = 10·log10(PX/D) = 6.02·N + 1.76 dB
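A quick simulation check of the 6.02·N + 1.76 dB rule (my own sketch: a full-scale sinusoid through an N-bit mid-rise quantizer):

import numpy as np

def sqnr_db(n_bits, V=1.0, n_samples=1_000_000):
    t = np.arange(n_samples)
    x = V * np.sin(2 * np.pi * 0.01234 * t)       # full-scale sinusoid
    delta = 2 * V / 2 ** n_bits                   # step size of an N-bit quantizer
    xq = (np.floor(x / delta) + 0.5) * delta      # mid-rise uniform quantization
    return 10 * np.log10(np.mean(x**2) / np.mean((x - xq)**2))

for n in (3, 8, 16):
    print(n, round(sqnr_db(n), 2), round(6.02 * n + 1.76, 2))   # simulated vs. formula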
Example
A zero-mean, stationary Gaussian source X(t) having spectral density as given below is to be quantized using a 2-bit quantizer. The quantization intervals and levels are as indicated below. Find the resulting SQNR.

SX(f) = 200 / (1 + (2πf)²)   ⇔   RXX(τ) = 100·e^(−|τ|)

PX = RXX(0) = 100,    fX(x) = (1/√(200π)) · e^(−x²/200)

Quantizer: decision boundaries at −10, 0, 10; output levels −15, −5, 5, 15.

D = E[(X − Q(X))²] = 2·∫(0 to 10) (x − 5)² fX(x) dx + 2·∫(10 to ∞) (x − 15)² fX(x) dx = 11.885

SQNR = 10·log10(100 / 11.885) = 9.25 dB
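The distortion integral above can be verified numerically (a sketch using scipy; the density and the quantizer boundaries/levels are those of the example):

import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x**2 / 200) / np.sqrt(200 * np.pi)    # zero-mean Gaussian, variance 100

# Quantizer of the example: boundaries -10, 0, 10; output levels -15, -5, 5, 15.
d1, _ = quad(lambda x: (x - 5)**2 * f(x), 0, 10)
d2, _ = quad(lambda x: (x - 15)**2 * f(x), 10, np.inf)
D = 2 * (d1 + d2)
print(D, 10 * np.log10(100 / D))    # approximately 11.885 and 9.25 dB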
Non-uniform Quantization
In general, the optimum quantizer is non-uniform
Optimality conditions (Lloyd-Max):
- The boundaries of the quantization intervals are the midpoints of the corresponding quantized values
- The quantized values are the centroids of the quantization regions
- Optimum quantizers are designed iteratively using the above rules, as in the sketch below
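A compact sketch of the iteration (my own implementation of the two conditions, for a 4-level quantizer and a zero-mean, unit-variance Gaussian source):

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def lloyd_max(n_levels=4, n_iter=50):
    levels = np.linspace(-2, 2, n_levels)          # initial guess for the output levels
    for _ in range(n_iter):
        # Condition 1: boundaries are midpoints of adjacent output levels.
        b = np.concatenate(([-np.inf], (levels[:-1] + levels[1:]) / 2, [np.inf]))
        # Condition 2: each output level is the centroid of its region.
        for i in range(n_levels):
            num, _ = quad(lambda x: x * norm.pdf(x), b[i], b[i + 1])
            den = norm.cdf(b[i + 1]) - norm.cdf(b[i])
            levels[i] = num / den
    return levels

print(lloyd_max())    # approximately [-1.51, -0.45, 0.45, 1.51]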
Optimal Quantizers for a Gaussian Source
Companding (compressing-expanding)
[Block diagram: compressor → uniform quantizer → ... → low-pass filter → expander]
µ-law companding:
g(x) = [ln(1 + µ·|x|) / ln(1 + µ)] · sgn(x),   −1 ≤ x ≤ 1
[Plot: g(x) for µ = 0, µ = 10, and µ = 255]
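A direct implementation sketch of this compressor characteristic and its inverse (the expander):

import numpy as np

def mu_law_compress(x, mu=255.0):
    """g(x) = ln(1 + mu*|x|) / ln(1 + mu) * sgn(x), for -1 <= x <= 1."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    """Inverse of the compressor."""
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

x = np.linspace(-1, 1, 5)
print(mu_law_compress(x))
print(np.allclose(mu_law_expand(mu_law_compress(x)), x))    # True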
Data Compression
[Block diagram: analog source → A/D converter (sampler followed by quantizer) → discrete source → source encoder → 001011001...]
Measuring “Information”
Not all sources are created equal:
- Example:
  Discrete source 1: P(0) = 1, P(1) = 0  →  no information provided
  Discrete source 2: P(0) = 0.99, P(1) = 0.01  →  little information is provided
  Discrete source 3: P(0) = 0.5, P(1) = 0.5  →  much information is provided
Measuring “Information” (cont’d)
The amount of information provided is a function of
the probabilities of occurrence of the symbols.
Definition: The self-information of a symbol x which
has probability of occurrence p is
I(x)=-log2(p) bits
Definition: The average amount of information in
bits/symbol provided by a binary source with P(0)=p
is
H ( x ) = − p log 2 ( p ) − (1 − p ) log 2 (1 − p )
[Plot: the binary entropy function H(x) vs. p, equal to 0 at p = 0 and p = 1 and reaching its maximum of 1 bit at p = 0.5]
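A small sketch computing the entropy of a discrete source (it covers the binary sources above and the general L-symbol case on the next slide):

import numpy as np

def entropy(probs):
    """H(X) = -sum_i p_i log2(p_i), ignoring zero-probability symbols."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

print(entropy([1.0, 0.0]))       # source 1: 0 bits
print(entropy([0.99, 0.01]))     # source 2: about 0.08 bits
print(entropy([0.5, 0.5]))       # source 3: 1 bit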
Non-Binary Sources
In general, the entropy of a source that produces L symbols with
probabilities p1, p2, …,pL, is
H(X) = −Σ(i=1 to L) pi · log2(pi)  bits,

0 ≤ H(X) ≤ log2(L)
Theorem (Source Coding)
The smallest possible average number of
bits/symbol needed to exactly represent
a source equals the entropy of that
source.
Example 1: A binary file of length 1,000,000
bits contains 100,000 “1”s. This file can be
compressed by more than a factor of 2:
H(x) = −0.9·log2(0.9) − 0.1·log2(0.1) = 0.47 bits
S = 10^6 × H(x) ≈ 4.7 × 10^5 bits
Compression ratio ≈ 2.13
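The numbers in Example 1 can be reproduced directly (a short sketch):

import numpy as np

p1 = 100_000 / 1_000_000                   # fraction of "1"s in the file
H = -(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))
smallest_size = 1_000_000 * H              # bits, by the source coding theorem
print(H, smallest_size, 1_000_000 / smallest_size)    # ~0.469, ~4.69e5, ~2.13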
Some Data Compression Algorithms
Huffman coding
Run-length coding
Lempel-Ziv
There are also lossy compression algorithms that do not represent the source exactly but approximate it closely. These provide much better compression ratios (more than a factor of 10, depending on reproduction quality).
Huffman Coding (by example)
A binary source produces bits with P(0) = 0.1. Design a Huffman code that encodes 3-bit sequences from the source.

Sequence   Probability   Codeword
111        0.729         1
110        0.081         011
101        0.081         010
011        0.081         001
100        0.009         00011
010        0.009         00010
001        0.009         00001
000        0.001         00000

(Huffman tree: repeatedly merge the two smallest probabilities: 0.001 + 0.009 = 0.01, 0.009 + 0.009 = 0.018, 0.01 + 0.018 = 0.028, 0.028 + 0.081 = 0.109, 0.081 + 0.081 = 0.162, 0.109 + 0.162 = 0.271, 0.271 + 0.729 = 1.0.)

Average codeword length = 1(0.729) + 3(3)(0.081) + 5(3)(0.009) + 5(0.001) = 1.598 bits per 3 source bits, i.e., about 0.533 bits per source bit.

A related scheme Huffman-codes run-length patterns: a run of the bit value that occurs with probability 0.9 per bit, terminated by the other bit value, with runs truncated at eight bits.

Pattern    Length  Probability  Codeword
01         2       0.090        1001
001        3       0.081        1010
0001       4       0.073        1011
00001      5       0.066        1100
000001     6       0.059        1101
0000001    7       0.053        1110
00000001   8       0.048        1111
00000000   8       0.430        0

(The length-1 pattern, of probability 0.1, also receives a 4-bit codeword.)

M2 = average source bits per pattern
   = 1(0.1) + 2(0.09) + 3(0.081) + 4(0.073) + 5(0.066) + 6(0.059) + 7(0.053) + 8(0.048) + 8(0.430) = 5.710
M1 = average code bits per pattern = 2.710
M = M1/M2 = 2.710/5.710 = 0.475 bits per source bit

H(X) = −0.9·log2(0.9) − 0.1·log2(0.1) = 0.469 < 0.475
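A compact sketch of the Huffman construction itself (my own heap-based implementation; applied to the 3-bit-sequence probabilities of the example it reproduces codeword lengths 1, 3, 3, 3, 5, 5, 5, 5):

import heapq

def huffman_lengths(probs):
    """Return Huffman codeword lengths for the given symbol probabilities."""
    # Heap items: (probability, tie-breaker, symbols contained in this subtree).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)        # merge the two least likely subtrees
        p2, t, s2 = heapq.heappop(heap)
        for sym in s1 + s2:
            lengths[sym] += 1                  # each symbol in the merge gains one bit
        heapq.heappush(heap, (p1 + p2, t, s1 + s2))
    return lengths

# 3-bit sequences from a source with P(0) = 0.1, P(1) = 0.9.
probs = [0.729, 0.081, 0.081, 0.081, 0.009, 0.009, 0.009, 0.001]
L = huffman_lengths(probs)
print(L)                                           # [1, 3, 3, 3, 5, 5, 5, 5]
print(sum(p * l for p, l in zip(probs, L)) / 3)    # about 0.533 bits per source bit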
Examples: Speech Compression
Toll-quality speech signals can be produced at
8Kbps (a factor of 8 compression compared to
uncompressed telephone signals).
Algorithms that produce speech at 4.8Kbps or
even 2.4Kbps are available, but have reduced
quality and require complex processing.
Example: Video Compression
Uncompressed video of a 640x480-pixel image at 8 bits/pixel and 30 frames/s requires a data rate of about 74 Mbps.
Video-conference systems operate at 384 Kbps.
The MPEG-2 standard operates at about 3 Mbps.