Principles of Digital Communications
CONNEXIONS
Rice University, Houston, Texas
This selection and arrangement of content is licensed under the Creative Commons Attribution License: http://creativecommons.org/licenses/by/2.0/
Table of Contents

1 Syllabus
1.1 Letter to Student
1.2 Contact Information
1.3 Resources
1.4 Purpose of the Course
1.5 Course Description
1.6 Calendar
1.7 Grading Procedures

2 Chapter 1: Signals and Systems
2.1 Signal Classifications and Properties
2.2 System Classifications and Properties
2.3 The Fourier Series
2.4 The Fourier Transform
2.5 Review of Probability and Random Variables
2.6 Introduction to Stochastic Processes
2.7 Second-order Description of Stochastic Processes
2.8 Gaussian Processes
2.9 White and Coloured Processes
2.10 Transmission of Stationary Process Through a Linear Filter

3 Chapter 2: Source Coding
3.1 Information Theory and Coding
3.2 Entropy
3.3 Source Coding
3.4 Huffman Coding

4 Chapter 3: Communication over AWGN Channels
4.1 Data Transmission and Reception
4.2 Signalling
4.3 Geometric Representation of Modulation Signals
4.4 Demodulation and Detection
4.5 Demodulation
4.6 Detection by Correlation
4.7 Examples of Correlation Detection
4.8 Matched Filters
4.9 Examples with Matched Filters
4.10 Performance Analysis of Binary Orthogonal Signals with Correlation
4.11 Performance Analysis of Orthogonal Binary Signals with Matched Filters
4.12 Carrier Phase Modulation
4.13 Carrier Frequency Modulation
4.14 Differential Phase Shift Keying

5 Chapter 4: Communication over Band-limited AWGN Channel
5.1 Digital Transmission over Baseband Channels
5.2 Introduction to ISI
5.3 Pulse Amplitude Modulation Through Bandlimited Channel
5.4 Precoding and Bandlimited Signals
5.5 Pulse Shaping to Reduce ISI
5.6 Two Types of Error-Performance Degradation
5.7 Eye Pattern
5.8 Transversal Equalizer
5.9 Decision Feedback Equalizer
5.10 Adaptive Equalization

6 Chapter 5: Channel Coding
6.1 Channel Capacity
6.2 Mutual Information
6.3 Typical Sequences
6.4 Shannon's Noisy Channel Coding Theorem
6.5 Channel Coding
6.6 Convolutional Codes
Solutions

7 Chapter 6: Communication over Fading Channels
7.1 Fading Channel
7.2 Characterizing Mobile-Radio Propagation
7.3 Large-Scale Fading
7.4 Small-Scale Fading
7.5 Signal Time-Spreading
7.6 Mitigating the Degradation Effects of Fading
7.7 Mitigation to Combat Frequency-Selective Distortion
7.8 Mitigation to Combat Fast-Fading Distortion
7.9 Mitigation to Combat Loss in SNR
7.10 Diversity Techniques
7.11 Diversity-Combining Techniques
7.12 Modulation Types for Fading Channels
7.13 The Role of an Interleaver
7.14 Application of Viterbi Equalizer in GSM System
7.15 Application of Rake Receiver in CDMA System

Glossary
Bibliography
Index
Attributions
Chapter 1
Syllabus
1.1 Letter to Student

This course and this Student Manual reflect a collective effort by your instructor, the Vietnam Education Foundation, the Vietnam Open Courseware (VOCW) Project, and faculty colleagues within Vietnam and the United States who served as reviewers of drafts of this Student Manual. This course is an important component of our academic program. Although it has been offered for many years, this latest version represents an attempt to expand the range of sources of information and instruction so that the course continues to be up-to-date and the methods well suited to what is to be learned. This Student Manual is designed to assist you through the course by providing specific information about student responsibilities, including requirements, timelines, and evaluations. You will be asked from time to time to offer feedback on how the Student Manual is working and how the course is progressing. Your comments will inform the development team about what is working and what requires attention. Our goal is to help you learn what is important about this particular field and to eventually succeed as a professional applying what you learn in this course. Thank you for your cooperation. Tuan Do-Hong.
1.2 Contact Information

Instructor: Dr.-Ing. Tuan Do-Hong
Office Location: Ground floor, B3 Building
Phone: +84 (0) 8 8654184
Email: [email protected]
Office Hours: 9:00 am to 5:00 pm

Assistants:
Office Location: Ground floor, B3 Building
Phone: +84 (0) 8 8654184
Email:
Office Hours: 9:00 am to 5:00 pm

Lab sections/support:
1.3 Resources
Connexions: http://cnx.org/
MIT's OpenCourseWare: http://ocw.mit.edu/index.html
Computer resource: Matlab and Simulink
Textbook(s):
Required: [1] Bernard Sklar, Digital Communications: Fundamentals and Applications, 2nd edition, 2001, Prentice Hall.
Recommended: [2] John Proakis, Digital Communications, 4th edition, 2001, McGraw-Hill. [3] Communication Systems: An Introduction to Signals and Noise in Electrical Communication, 4th edition, McGraw-Hill.
Pre-requisites: Communication Systems. Thorough knowledge of Signals and Systems, Linear Algebra, Digital Signal Processing, and Probability Theory and Stochastic Processes is essential.
This course explores elements of the theory and practice of digital communications. The course will: 1) model and study the effects of channel impairments such as distortion, noise, interference, and fading on the performance of communication systems; 2) introduce signal processing, modulation, and coding techniques that are used in digital communication systems. The following concepts/tools are acquired in this course:
Source Coding
Elements of compression, Huffman coding
Elements of quantization theory
Pulse code modulation (PCM) and variations
Rate/bandwidth calculations in communication systems
Channel Coding
Block codes
Types of error control
Error detection and correction
Convolutional codes and the Viterbi algorithm
Application of Viterbi equalizer in GSM system
Application of Rake receiver in CDMA system
1.6 Calendar
Week 1: Overview of signals and spectra
Week 2: Source coding
Week 3: Receiver structure, demodulation and detection
Week 4: Correlation receiver and matched filter. Detection of binary signals in AWGN
Week 5: Optimal detection for general modulation. Coherent and non-coherent detection (I)
Week 6: Coherent and non-coherent detection (II)
Week 7: ISI in band-limited channels. Zero-ISI condition: the Nyquist criterion
Week 8: Mid-term exam
Week 9: Raised cosine filters. Partial response signals
Week 10: Channel equalization
Week 11: Channel coding. Block codes
Week 12: Convolutional codes
Week 13: Viterbi algorithm
Week 14: Fading channel. Characterizing mobile-radio propagation
Week 15: Mitigating the effects of fading
Week 16: Applications of Viterbi equalizer and Rake receiver in GSM and CDMA systems
Week 17: Final exam
Homework/Participation/Exams:
Homework and Programming Assignments
Homework and programming assignments will be given to test students' knowledge and understanding of the covered topics. They will be assigned frequently throughout the course and will be due at the time and place indicated on the assignment. Homework and programming assignments must be done individually by each student, without collaboration with others. No late homework will be allowed. There will be in-class mid-term and final exams. The mid-term exam and the final exam will be time-limited to 60 minutes and 120 minutes, respectively. They will be closed book and closed notes. It is recommended that students practice working problems from the book, example problems, and homework problems. Participation: Questions and discussion in class are encouraged, and participation will be noted.
Chapter 2
Chapter 1: Signals and Systems
2.1 Signal Classifications and Properties

This module will lay out some of the fundamentals of signal classification. This is basically a list of definitions and properties that are fundamental to the discussion of signals and systems. It should be noted that some discussions, such as the energy signals vs. power signals discussion, have their own modules and will not be included here.
2.1.2.1 Continuous-Time vs. Discrete-Time

As the names suggest, this classification is determined by whether or not the time axis (x-axis) is discrete (countable) or continuous. A continuous-time signal will contain a value for all real numbers along the time axis. In contrast, a discrete-time signal is often created by using the sampling theorem to sample a continuous signal, so it will only have values at equally spaced intervals along the time axis (Figure 2.1).
Figure 2.1
2.1.2.2 Analog vs. Digital

The difference between analog and digital is similar to the difference between continuous-time and discrete-time. In this case, however, the difference is with respect to the value of the function (y-axis) (Figure 2.2). Analog corresponds to a continuous y-axis, while digital corresponds to a discrete y-axis. An easy example of a digital signal is a binary sequence, where the values of the function can only be one or zero.
Figure 2.2
2.1.2.3 Periodic vs. Aperiodic

We can define a periodic function through the following mathematical expression, where T is a positive constant:

f(t) = f(T + t)

The fundamental period of our function, f(t), is the smallest value of T that still allows this equation to be true.
Figure 2.3
2.1.2.4 Causal vs. Anticausal vs. Noncausal

Causal signals are signals that are zero for all negative time, while anticausal signals are zero for all positive time. Noncausal signals are signals that have nonzero values in both positive and negative time (Figure 2.4).
Figure 2.4
2.1.2.5 Even vs. Odd

An even signal is any signal f such that f(t) = f(−t). Even signals can be easily spotted as they are symmetric around the vertical axis. An odd signal, on the other hand, is a signal f such that f(t) = −f(−t) (Figure 2.5).
Figure 2.5
Using the definitions of even and odd signals, we can show that any signal can be written as a combination of an even and odd signal. That is, every signal has an odd-even decomposition. To demonstrate this, we have to look no further than a single equation:

f(t) = (1/2)(f(t) + f(−t)) + (1/2)(f(t) − f(−t))
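As a numerical sanity check of this decomposition, the Python sketch below (the sample values are arbitrary, chosen only for illustration) splits a sampled signal into its even and odd parts and verifies that they add back to the original.

```python
# Even-odd decomposition of a sampled signal f(t) for integer t in [-N, N].
# The signal values here are arbitrary, chosen only for illustration.

def decompose(f):
    """Split f into even part e(t) = (f(t) + f(-t))/2
    and odd part o(t) = (f(t) - f(-t))/2."""
    e = {t: (f[t] + f[-t]) / 2 for t in f}
    o = {t: (f[t] - f[-t]) / 2 for t in f}
    return e, o

f = {-2: 1.0, -1: 3.0, 0: 2.0, 1: -1.0, 2: 4.0}
e, o = decompose(f)

# e is symmetric, o is antisymmetric, and e + o recovers f exactly.
assert all(e[t] == e[-t] for t in f)
assert all(o[t] == -o[-t] for t in f)
assert all(e[t] + o[t] == f[t] for t in f)
```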
Example 2.1
Figure 2.6: (a) The signal we will decompose using odd-even decomposition. (b) Even part: e(t) = (1/2)(f(t) + f(−t)). (c) Odd part: o(t) = (1/2)(f(t) − f(−t)). (d) Check: e(t) + o(t) = f(t).
2.1.2.6 Deterministic vs. Random

A deterministic signal is a signal in which each value of the signal is fixed and can be determined by a mathematical expression, rule, or table. Because of this, the future values of the signal can be calculated from past values with complete confidence. On the other hand, a random signal has a lot of uncertainty about its behavior. The future values of a random signal cannot be accurately predicted and can usually only be guessed based on the averages of sets of signals (Figure 2.7).
Figure 2.7
2.1.2.7 Right-Handed vs. Left-Handed

A right-handed signal and left-handed signal are those signals whose value is zero between a given variable and positive or negative infinity. Mathematically speaking, a right-handed signal is defined as any signal where f(t) = 0 for t < t1 < ∞, and a left-handed signal is defined as any signal where f(t) = 0 for t > t1 > −∞. See Figure 2.8 for an example. Both figures begin at t1 and then extend to positive or negative infinity with mainly nonzero values.
Figure 2.8
2.1.2.8 Finite vs. Infinite Length

As the name implies, signals can be characterized as to whether they have a finite or infinite length set of values. Mathematically speaking, f(t) is a finite-length signal if it is nonzero over a finite interval t1 < t < t2, where t1 > −∞ and t2 < ∞ (Figure 2.9). Similarly, an infinite-length signal, f(t), is defined as nonzero over all real numbers.
Figure 2.9: Finite-Length Signal. Note that it only has nonzero values on a set, finite interval.
2.2 System Classifications and Properties

In this module some of the basic classifications of systems will be briefly introduced and the most important properties of these systems are explained. As can be seen, the properties of a system provide an easy way to separate one system from another. Understanding these basic differences between systems, and their properties, will be a fundamental concept used in all signal and system courses, such as digital signal processing (DSP). Once a set of systems can be identified as sharing particular properties, one no longer has to deal with proving a certain characteristic of a system each time, but it can simply be accepted due to the system's classification. Also remember that this classification presented here is neither exclusive (systems can belong to several different classifications) nor is it unique (there are other methods of classification). Examples of simple systems can be found here.

2.2.2.1 Continuous vs. Discrete

A system where the input and output signals are continuous is a continuous system, and one where the input and output signals are discrete is a discrete system.
2.2.2.2 Linear vs. Nonlinear

A linear system is any system that obeys the properties of scaling (homogeneity) and superposition (additivity), while a nonlinear system is any system that does not obey at least one of these.
To show that a system H obeys the scaling property is to show that

H(kf(t)) = kH(f(t))    (2.3)

Figure 2.10:

To demonstrate that a system H obeys the superposition property of linearity is to show that

H(f1(t) + f2(t)) = H(f1(t)) + H(f2(t))    (2.4)

Figure 2.11:

It is possible to check a system for linearity in a single (though larger) step. To do this, simply combine the first two steps to get

H(k1 f1(t) + k2 f2(t)) = k1 H(f1(t)) + k2 H(f2(t))    (2.5)
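The combined scaling-plus-superposition test can be run numerically on sampled signals. In the sketch below, both example systems and both test signals are illustrative choices of our own, not taken from the text: a constant-gain system passes the test while a squaring system fails it.

```python
# Numerical linearity check: a system H is linear iff
# H(k1*f1 + k2*f2) == k1*H(f1) + k2*H(f2).
# Systems act sample-by-sample on lists of values.

def is_linear(H, f1, f2, k1=2.0, k2=-3.0, tol=1e-9):
    combined = H([k1 * a + k2 * b for a, b in zip(f1, f2)])
    separate = [k1 * a + k2 * b for a, b in zip(H(f1), H(f2))]
    return all(abs(a - b) < tol for a, b in zip(combined, separate))

scale_by_5 = lambda x: [5 * v for v in x]   # obeys scaling and superposition
square = lambda x: [v * v for v in x]       # violates both

f1, f2 = [1.0, 2.0, 3.0], [0.5, -1.0, 4.0]
assert is_linear(scale_by_5, f1, f2)
assert not is_linear(square, f1, f2)
```

Note that a single numerical test can only disprove linearity; passing it for one pair of inputs is evidence, not proof.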
2.2.2.3 Time Invariant vs. Time Variant

A time invariant system is one that does not depend on when it occurs: the shape of the output does not change with a delay of the input. That is to say that for a system H where H(f(t)) = y(t), H is time invariant if for all T

H(f(t − T)) = y(t − T)    (2.6)
Figure 2.12: This block diagram shows the condition for time invariance. The output is the same whether the delay is put on the input or the output.
When this property does not hold for a system, then it is said to be time variant, or time varying.
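Condition (2.6) can be checked on finite sequences by comparing "delay then system" with "system then delay". Both example systems below are illustrative choices of our own: a two-tap moving sum (time invariant) and a gain that grows with the time index (time variant).

```python
# Time-invariance check on finite sequences: delaying the input by T
# samples should delay the output by the same T samples.

def delay(x, T):
    """Shift x right by T samples, padding with zeros (length preserved)."""
    return [0.0] * T + x[:len(x) - T]

def is_time_invariant(H, x, T=2, tol=1e-9):
    a = H(delay(x, T))    # delay, then system
    b = delay(H(x), T)    # system, then delay
    return all(abs(u - v) < tol for u, v in zip(a, b))

moving_sum = lambda x: [x[n] + (x[n-1] if n > 0 else 0.0) for n in range(len(x))]
ramp_gain  = lambda x: [n * x[n] for n in range(len(x))]   # gain depends on time n

x = [1.0, -2.0, 3.0, 0.5, 4.0, -1.0]
assert is_time_invariant(moving_sum, x)
assert not is_time_invariant(ramp_gain, x)
```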
2.2.2.4 Causal vs. Noncausal

A causal system is one that is nonanticipative; that is, the output may depend on current and past inputs, but not future inputs. All "real-time" systems must be causal, since they cannot have future inputs available to them. One may think the idea of future inputs does not seem to make much physical sense; however, we have only been dealing with time as our dependent variable so far, which is not always the case. Imagine rather that we wanted to do image processing. Then the dependent variable might represent pixels to the left and right (the "future") of the current position on the image, and we would have a noncausal system.
Figure 2.13: (a) For a typical system to be causal... (b) ...the output at time t0, y(t0), can only depend on the portion of the input signal before t0.
2.2.2.5 Stable vs. Unstable

A stable system is one where the output does not diverge as long as the input does not diverge. There are many ways to say that a signal "diverges"; for example, it could have infinite energy. One particularly useful definition of divergence relates to whether the signal is bounded or not. A system is then referred to as bounded input-bounded output (BIBO) stable if every possible bounded input produces a bounded output.
Representing this in a mathematical way, a stable system must have the following property, where x(t) is the input and y(t) is the output. The output must satisfy the condition

|y(t)| ≤ My < ∞    (2.7)

when we have an input to the system that can be described as

|x(t)| ≤ Mx < ∞    (2.8)

Mx and My both represent a set of finite positive numbers, and these relationships hold for all t. If these conditions are not met, i.e., the system's output grows without limit (diverges) from a bounded input, then the system is unstable.
The stability of an LTI system can be neatly described in terms of whether or not its impulse response is absolutely integrable.

2.3 The Fourier Series
If f(x) has bounded variation in the interval (−π, π), the Fourier series corresponding to f(x) converges to the value f(x) at any point within the interval at which the function is continuous; it converges to the value (1/2)[f(x + 0) + f(x − 0)] at any such point at which the function is discontinuous. At the points ±π, it converges to the value (1/2)[f(−π + 0) + f(π − 0)]. [6]

If f(x) is of bounded variation in (−π, π), the Fourier series converges to f(x), uniformly in any interval (a, b) in which f(x) is continuous, the continuity at a and b being on both sides. [6]

If f(x) is of bounded variation in (−π, π), the Fourier series converges to (1/2)[f(x + 0) + f(x − 0)], bounded throughout the interval (−π, π). [6]

If f(x) is bounded and if it is continuous in its domain at every point, with the exception of a finite number of points at which it may have ordinary discontinuities, and if the domain may be divided into a finite number of parts, such that in any one of them the function is monotone; or, in other words, the function has only a finite number of maxima and minima in its domain, the Fourier series of f(x) converges to f(x) at points of continuity and to the value (1/2)[f(x + 0) + f(x − 0)] at points of discontinuity. [6][3]

If f(x) is such that, when the arbitrarily small neighborhoods of a finite number of points in whose neighborhood |f(x)| has no upper bound have been excluded, f(x) becomes a function with bounded variation, then the Fourier series converges to the value (1/2)[f(x + 0) + f(x − 0)] at every point in (−π, π), except the points of infinite discontinuity of the function, provided the improper integral of f(x) exists and is absolutely convergent. [6]

If f is of bounded variation, the Fourier series of f converges at every point x to the value [f(x + 0) + f(x − 0)]/2. If f is, in addition, continuous at every point of an interval I = (a, b), its Fourier series is uniformly convergent in I. [10]
If the coefficients a(k) and b(k) are absolutely summable, the Fourier series converges uniformly to f(x), which is continuous. [7]

Suppose f is defined on [0, X] and that at least one of the following four conditions is satisfied: (i) f is piecewise monotonic on [0, X], (ii) f has a finite number of maxima and minima and a finite number of discontinuities on [0, X], (iii) f is of bounded variation on [0, X], (iv) f is piecewise smooth on [0, X]. Then it will follow that the Fourier series coefficients may be defined through the defining integral, using proper Riemann integrals, and that the Fourier series converges to f(x) at almost all x, to f(x) at each point of continuity of f, and to the value (1/2)[f(x−) + f(x+)] at all x. [4]

For any 1 ≤ p < ∞ and any f ∈ C^p(S¹), the partial sums

Sn = Sn(f) = Σ_{|k|≤n} f̂(k) e_k    (2.9)

converge to f, uniformly as n → ∞; in fact, ||Sn − f||_∞ is bounded by a constant times n^(−p+1/2). [5]
The Fourier series expansion results in transforming a periodic, continuous-time function, x(t), to two discrete indexed frequency functions, a(k) and b(k).
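The convergence statements above can be illustrated numerically. The sketch below uses a square wave, a standard example not taken from this text: f(x) = 1 on (0, π) and f(x) = −1 on (−π, 0), whose Fourier series is (4/π) Σ sin(kx)/k over odd k. The partial sums approach f(x) at a point of continuity, and at the jump x = 0 they equal the midpoint (1/2)[f(x + 0) + f(x − 0)] = 0, as the theorems state.

```python
import math

# Partial Fourier sums of a square wave with period 2*pi:
# f(x) = 1 on (0, pi), f(x) = -1 on (-pi, 0).
# Series: (4/pi) * sum over odd k of sin(k x)/k.

def square_partial_sum(x, n):
    """Sum the series through harmonic n (odd k only)."""
    return (4 / math.pi) * sum(math.sin(k * x) / k for k in range(1, n + 1, 2))

# At a continuity point (x = pi/2) the sums approach f(x) = 1.
assert abs(square_partial_sum(math.pi / 2, 2001) - 1.0) < 1e-3
# At the jump (x = 0) every partial sum equals the midpoint value 0.
assert square_partial_sum(0.0, 2001) == 0.0
```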
2.4 The Fourier Transform

Many practical problems in signal analysis involve either infinitely long or very long signals for which the Fourier series is not appropriate. For these cases, the Fourier transform (FT) and its inverse (IFT) have been developed. This transform has been used with great success in virtually all quantitative areas of science and technology where the concept of frequency is important. While the Fourier series was used before Fourier worked on it, the Fourier transform seems to be his original idea. It can be derived as an extension of the Fourier series by letting the length increase to infinity, or the Fourier transform can be independently defined and then the Fourier series shown to be a special case of it. The latter approach is the more general of the two, but the former is more intuitive [9][2].
The Fourier transform is defined by

X(ω) = ∫_{−∞}^{∞} x(t) e^(−jωt) dt    (2.10)

and the inverse Fourier transform by

x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^(jωt) dω    (2.11)
Because of the innite limits on both integrals, the question of convergence is important. There are useful practical signals that do not have Fourier transforms if only classical functions are allowed because of problems with convergence. The use of delta functions (distributions) in both the time and frequency domains allows a much larger class of signals to be represented [9].
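For well-behaved signals the defining integral (2.10) can be approximated by a Riemann sum. The sketch below uses a rectangular pulse, an illustrative choice of our own: x(t) = 1 for |t| ≤ 1 and 0 elsewhere, whose transform has the closed form X(ω) = 2 sin(ω)/ω.

```python
import cmath
import math

# Midpoint-rule approximation of X(w) = integral of x(t) e^{-jwt} dt
# for the rectangular pulse x(t) = 1 on [-1, 1], 0 elsewhere.

def ft_rect(w, dt=1e-4):
    """Numerically integrate e^{-jwt} over [-1, 1] with the midpoint rule."""
    n = int(2 / dt)
    ts = (-1 + (i + 0.5) * dt for i in range(n))
    return sum(cmath.exp(-1j * w * t) * dt for t in ts)

# The sum should match the closed form 2 sin(w)/w at each frequency.
for w in (0.5, 1.0, 3.0):
    exact = 2 * math.sin(w) / w
    assert abs(ft_rect(w) - exact) < 1e-6
```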
Note the Fourier transform takes a function of continuous time into a function of continuous frequency, neither function being periodic. If "distributions" or "delta functions" are allowed, the Fourier transform of a periodic function will be an infinitely long string of delta functions, spaced the fundamental frequency apart, with weights that are the Fourier series coefficients.
2.5 Review of Probability and Random Variables

The focus of this course is on digital communication, which involves transmission of information, in its most general sense, from source to destination using digital technology. Engineering such a system requires modeling both the information and the transmission media. Interestingly, modeling both digital or analog information and many physical media requires a probabilistic setting. In this chapter and in the next one we will review the theory of probability, model random signals, and characterize their behavior as they traverse through deterministic systems disturbed by noise and interference. In order to develop practical models for random phenomena we start with carrying out a random experiment. We then introduce definitions, rules, and axioms for modeling within the context of the experiment. The outcome of a random experiment is denoted by ω. Such outcomes could be an abstract description in words. A scientific experiment should indeed be repeatable, where each outcome could naturally have an associated probability of occurrence. This is defined formally as the ratio of the number of times the outcome occurs to the total number of times the experiment is repeated.
Figure 2.14
Example 2.2
Roll a fair die. The outcomes are Ω = {ω1, ω2, ω3, ω4, ω5, ω6}, where ωi denotes the outcome with i dots showing. Define the random variable X(ωi) = i.
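The relative-frequency interpretation of probability mentioned above can be illustrated by simulation; the sample size and seed below are arbitrary choices.

```python
import random

# Relative-frequency interpretation for the die example: the fraction
# of rolls showing i dots approaches Pr[X = i] = 1/6 as the number of
# repetitions grows.

random.seed(1)
rolls = [random.randint(1, 6) for _ in range(60000)]

for i in range(1, 7):
    freq = rolls.count(i) / len(rolls)
    assert abs(freq - 1 / 6) < 0.01   # within 1% of 1/6 for this sample size
```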
2.5.2 Distributions
Probability assignments are made on intervals a < X ≤ b.

Definition: The cumulative distribution function (cdf) of a random variable X is a function FX: R → R such that

FX(b) = Pr[X ≤ b] = Pr[{ω ∈ Ω | X(ω) ≤ b}]    (2.12)
Figure 2.15
A continuous random variable X has a probability density function (pdf) fX(x) such that

FX(b) = ∫_{−∞}^{b} fX(x) dx    (2.13)

i.e., fX(x) = (d/dx) FX(x); the pdf is the derivative of the cdf. A discrete random variable X has a probability mass function (pmf)

pX(xk) = Pr[X = xk] = FX(xk) − lim_{x→xk, x<xk} FX(x)    (2.14)
The joint cdf of two random variables X and Y is

FX,Y(a, b) = Pr[X ≤ a, Y ≤ b] = Pr[{ω | X(ω) ≤ a and Y(ω) ≤ b}]    (2.15)
Figure 2.16
If X and Y are jointly continuous with joint pdf fX,Y(x, y), then

FX,Y(a, b) = ∫_{−∞}^{a} ∫_{−∞}^{b} fX,Y(x, y) dy dx    (2.16)

For discrete random variables, the joint pmf is

pX,Y(xk, yl) = Pr[X = xk, Y = yl]    (2.17)

Conditional density function:

fY|X(y|x) = fX,Y(x, y) / fX(x)    (2.18)

for all x with fX(x) > 0; otherwise the conditional density is not defined for those values of x with fX(x) = 0.

Two random variables are independent if

fX,Y(x, y) = fX(x) fY(y)    (2.19)

for all x ∈ R and y ∈ R. For discrete random variables,

pX,Y(xk, yl) = pX(xk) pY(yl)    (2.20)

for all k and l.
2.5.3 Moments
Moments are statistical quantities that represent some of the characteristics of a random variable. The expected value of a function g of X is

E[g(X)] = ∫_{−∞}^{∞} g(x) fX(x) dx if X is continuous, and E[g(X)] = Σ_k g(xk) pX(xk) if X is discrete    (2.21)
The mean (first moment) is

μX = E[X]    (2.22)

The second moment is

E[X²]    (2.23)

The variance is

Var(X) = σX² = E[(X − μX)²] = E[X²] − μX²    (2.24)

The characteristic function is

ΦX(u) = E[e^(iuX)] for u ∈ R, where i = √(−1)    (2.25)
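A quick sample-based illustration of (2.24): for any data set, the sample variance equals the sample second moment minus the squared sample mean. The discrete distribution used below is an arbitrary illustrative choice.

```python
import random

# Check Var(X) = E[X^2] - (E[X])^2 on sample moments: for a fixed data
# set the identity holds exactly, up to floating-point roundoff.

random.seed(7)
xs = [random.choice([0.0, 1.0, 3.0]) for _ in range(10000)]

mean = sum(xs) / len(xs)                    # sample first moment
second = sum(x * x for x in xs) / len(xs)   # sample second moment
var = sum((x - mean) ** 2 for x in xs) / len(xs)

assert abs(var - (second - mean ** 2)) < 1e-9
```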
The correlation between two random variables X and Y is

RXY = E[XY] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy fX,Y(x, y) dx dy    (2.26)

The covariance is

CXY = Cov(X, Y) = E[(X − μX)(Y − μY)] = RXY − μX μY    (2.27)

The correlation coefficient is

ρXY = Cov(X, Y) / (σX σY)    (2.28)

X and Y are uncorrelated if ρXY = 0.
2.6 Introduction to Stochastic Processes

Definition: A random (stochastic) process is a collection of random variables indexed by time:

{Xt(ω) | t ∈ R}    (2.29)
Example 2.3
Received signal at an antenna as in Figure 2.17.
Figure 2.17
For a given t, Xt(ω) is a random variable with first-order distribution

FXt(b) = Pr[Xt ≤ b]    (2.30)

Second-order distribution:

FXt1,Xt2(b1, b2) = Pr[Xt1 ≤ b1, Xt2 ≤ b2]    (2.31)

for all t1 ∈ R, t2 ∈ R, b1 ∈ R, b2 ∈ R. The Nth-order distribution is defined analogously:

FXt1,...,XtN(b1, ..., bN) = Pr[Xt1 ≤ b1, ..., XtN ≤ bN]    (2.32)

A process is stationary of order N if

FXt1,...,XtN(b1, ..., bN) = FXt1+T,...,XtN+T(b1, ..., bN)    (2.33)

for all T ∈ R. A process is strictly stationary if it is stationary of order N for all N.
Example 2.4
Xt = cos(2πf0 t + Θ(ω)), where f0 is the deterministic carrier frequency and Θ(ω): Ω → R is a random variable uniformly distributed over [−π, π], i.e.,

fΘ(θ) = 1/(2π) for θ ∈ [−π, π]    (2.34)

The first-order distribution of Xt is

FXt(b) = Pr[Xt ≤ b] = Pr[−π ≤ 2πf0 t + Θ ≤ −arccos(b)] + Pr[arccos(b) ≤ 2πf0 t + Θ ≤ π]    (2.35)

FXt(b) = ∫_{−π−2πf0 t}^{−arccos(b)−2πf0 t} (1/2π) dθ + ∫_{arccos(b)−2πf0 t}^{π−2πf0 t} (1/2π) dθ = (1/2π)(2π − 2 arccos(b))    (2.36)

and the first-order density function is

fXt(x) = (d/dx) FXt(x) = 1/(π √(1 − x²)) for |x| ≤ 1, and 0 otherwise    (2.37)

Since FXt(b) does not depend on t, Xt is stationary of order 1.
Figure 2.18
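The distribution (2.36), which simplifies to FXt(b) = 1 − arccos(b)/π, can be checked by simulation. The sketch below draws Θ uniformly, forms Xt, and compares the empirical fraction of samples at or below b with the closed form; the values of f0, t, and the sample size are arbitrary illustrative choices.

```python
import math
import random

# Empirical check of (2.36): for X_t = cos(2*pi*f0*t + Theta), Theta
# uniform on [-pi, pi], Pr[X_t <= b] = 1 - arccos(b)/pi for any t.

random.seed(3)
f0, t, n = 100.0, 0.37, 200000
samples = [math.cos(2 * math.pi * f0 * t + random.uniform(-math.pi, math.pi))
           for _ in range(n)]

for b in (-0.5, 0.0, 0.5):
    empirical = sum(x <= b for x in samples) / n
    exact = 1 - math.acos(b) / math.pi
    assert abs(empirical - exact) < 0.01
```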
The second-order stationarity can be determined by first considering conditional densities and the joint density. Recall that

Xt = cos(2πf0 t + Θ)    (2.38)

Then the relevant step is to find

Pr[Xt2 ≤ b2 | Xt1 = x1]    (2.39)

Note that

Xt1 = x1 = cos(2πf0 t1 + Θ) implies Θ = ±arccos(x1) − 2πf0 t1    (2.40)

Xt2 = cos(2πf0 t2 ± arccos(x1) − 2πf0 t1) = cos(2πf0 (t2 − t1) ± arccos(x1))    (2.41)
Figure 2.19
b1
(2.42)
t2 t1 . Xt = 1
for
Example 2.5
Every T seconds, a fair coin is tossed. If heads, then X_t = 1 for nT \le t < (n+1)T. If tails, then X_t = -1 for nT \le t < (n+1)T.
Figure 2.20
p_{X_t}(x) = 1/2 if x = 1 and 1/2 if x = -1
(2.43)
for all t \in R. X_t is stationary of order 1.
The conditional pmf is
p_{X_{t_2}|X_{t_1}}(x_2|x_1) = 0 if x_2 \ne x_1 and 1 if x_2 = x_1
(2.44)
when nT \le t_1 < (n+1)T and nT \le t_2 < (n+1)T for some n.
(2.45)
p_{X_{t_2}|X_{t_1}}(x_2|x_1) = p_{X_{t_2}}(x_2) = 1/2
(2.46)
with any x_1 and x_2, when nT \le t_1 < (n+1)T and mT \le t_2 < (m+1)T with n \ne m, since the outcomes of different coin tosses are independent. In contrast,
x_2 = x_1 for nT \le t_1, t_2 < (n+1)T
(2.47)
so the second-order behavior depends on whether or not t_1 and t_2 fall in the same interval.
Definition 7: Mean
The mean function of a random process X_t is defined as the expected value of X_t for all t's:
\mu_X(t) = E[X_t]
(2.48)
Definition 8: Autocorrelation
The autocorrelation function of the random process X_t is defined as
R_X(t_2, t_1) = E[X_{t_2} X_{t_1}] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_2 x_1 f_{X_{t_2}, X_{t_1}}(x_2, x_1)\, dx_2\, dx_1
(2.49)
Fact: If X_t is second-order stationary, then R_X(t_2, t_1) only depends on t_2 - t_1:
(2.50)
R_X(t_2, t_1) = E[X_{t_2} X_{t_1}] = R_X(t_2 - t_1, 0)
(2.51)
If R_X(t_2, t_1) depends on t_2 - t_1 only, then we will represent the autocorrelation with only one variable \tau = t_2 - t_1:
R_X(\tau) = R_X(t_2 - t_1) = R_X(t_2, t_1)
(2.52)
Properties
1. R_X(0) \ge 0
2. R_X(\tau) = R_X(-\tau)
3. |R_X(\tau)| \le R_X(0)
16 This content is available online at <http://cnx.org/content/m10236/2.13/>.
Example 2.6
X_t = cos(2\pi f_0 t + \Theta) with \Theta uniform over [-\pi, \pi], as in Example 2.4. Then \mu_X(t) = 0 and
(2.53)
R_X(t + \tau, t) = E[X_{t+\tau} X_t]
= E[cos(2\pi f_0 (t + \tau) + \Theta) cos(2\pi f_0 t + \Theta)]
= 1/2\, E[cos(2\pi f_0 \tau)] + 1/2\, E[cos(2\pi f_0 (2t + \tau) + 2\Theta)]
= 1/2\, cos(2\pi f_0 \tau) + 1/2 \int_{-\pi}^{\pi} cos(2\pi f_0 (2t + \tau) + 2\theta)\, \frac{1}{2\pi}\, d\theta
= 1/2\, cos(2\pi f_0 \tau)
(2.54)
This is not a function of t since the second term on the right-hand side of the equality in (2.54) is zero.
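The closed form in (2.54) can be checked by averaging over realizations of \Theta. A sketch (f_0, t, \tau are arbitrary values for illustration):

```python
import math, random

# Monte Carlo check of (2.54): for X_t = cos(2*pi*f0*t + Theta), Theta uniform
# on [-pi, pi], R_X(t+tau, t) = (1/2)*cos(2*pi*f0*tau) for every t.
random.seed(2)
f0, t, tau, n = 2.0, 1.23, 0.1, 300_000
acc = 0.0
for _ in range(n):
    theta = random.uniform(-math.pi, math.pi)
    acc += (math.cos(2 * math.pi * f0 * (t + tau) + theta)
            * math.cos(2 * math.pi * f0 * t + theta))
estimate = acc / n
theory = 0.5 * math.cos(2 * math.pi * f0 * tau)
print(round(estimate, 2), round(theory, 2))
```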
Example 2.7
Toss a fair coin every T seconds, as in Example 2.5. Since X_t takes only the values +1 and -1 with equal probability, its characteristics can be captured by the pmf and the mean function is written as
\mu_X(t) = E[X_t] = 1 \times 1/2 + (-1) \times 1/2 = 0
(2.55)
R_X(t_2, t_1) = \sum_k \sum_l x_k x_l\, p_{X_{t_2}, X_{t_1}}(x_k, x_l)
(2.56)
R_X(t_2, t_1) = 1 \times 1 \times 1/2 + (-1) \times (-1) \times 1/2 = 1
when nT \le t_1 < (n+1)T and nT \le t_2 < (n+1)T
(2.57)
R_X(t_2, t_1) = 0
when nT \le t_1 < (n+1)T and mT \le t_2 < (m+1)T with n \ne m, since coin tosses in different intervals are independent and zero mean.
(2.58)
Thus the mean function is constant, but R_X(t_2, t_1) depends on the positions of t_1 and t_2 relative to the interval boundaries, not only on t_2 - t_1.
Definition: Wide Sense Stationary
A process X_t is wide sense stationary (WSS) if its mean function \mu_X(t) is constant and R_X(t_2, t_1) is only a function of t_2 - t_1.
Fact 2.1:
If X_t is strictly stationary, then it is wide sense stationary. The converse is not necessarily true.
The autocovariance of a random process is defined as
C_X(t_2, t_1) = E[(X_{t_2} - \mu_X(t_2))(X_{t_1} - \mu_X(t_1))] = R_X(t_2, t_1) - \mu_X(t_2)\mu_X(t_1)
(2.59)
The variance of X_t is Var(X_t) = C_X(t, t).
Figure 2.21
The crosscorrelation of two random processes X_t and Y_t is defined as
R_{XY}(t_2, t_1) = E[X_{t_2} Y_{t_1}] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x\, y\, f_{X_{t_2}, Y_{t_1}}(x, y)\, dx\, dy
(2.60)
The processes X_t and Y_t are jointly wide sense stationary if R_{XY}(t_2, t_1) is a function of t_2 - t_1 only and \mu_X(t) and \mu_Y(t) are constant.
(2.61)
A process X_t is Gaussian if for any N and any choice of sample times t_1, \ldots, t_N, the vector X = (X(t_1), \ldots, X(t_N))^T is jointly Gaussian with density
f_X(x) = \frac{1}{(2\pi)^{N/2}(det\, \Sigma_X)^{1/2}} e^{-\frac{1}{2}(x - \mu_X)^T \Sigma_X^{-1}(x - \mu_X)}
(2.62)
for all x \in R^N, where
\mu_X = (\mu_X(t_1), \ldots, \mu_X(t_N))^T
and \Sigma_X is the covariance matrix with entries
\Sigma_X = [C_X(t_i, t_j)], i.e., C_X(t_1, t_1) \ldots C_X(t_1, t_N) in the first row through C_X(t_N, t_1) \ldots C_X(t_N, t_N) in the last row.
Properties
1. If a Gaussian process is WSS, then it is strictly stationary. 2. If two Gaussian processes are uncorrelated, then they are also statistically independent. 3. Any linear processing of a Gaussian process results in a Gaussian process.
Example 2.8
X and Y are independent zero-mean Gaussian random variables with variances \sigma_X^2 and \sigma_Y^2. Then Z = X + Y is also Gaussian.
\Phi_X(u) = E[e^{iuX}] = e^{-\frac{u^2}{2}\sigma_X^2}
(2.63)
for all u \in R, and
\Phi_Z(u) = E[e^{iu(X+Y)}] = E[e^{iuX}] E[e^{iuY}] = e^{-\frac{u^2}{2}\sigma_X^2} e^{-\frac{u^2}{2}\sigma_Y^2} = e^{-\frac{u^2}{2}(\sigma_X^2 + \sigma_Y^2)}
(2.64)
therefore Z is also Gaussian, zero mean, with variance \sigma_X^2 + \sigma_Y^2.
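Example 2.8 is easy to confirm empirically: the variance of the sum of independent Gaussians should be the sum of the variances. A minimal sketch:

```python
import random, statistics

# Empirical check of Example 2.8: if X ~ N(0, sigma_X^2) and Y ~ N(0, sigma_Y^2)
# are independent, then Z = X + Y is Gaussian with variance sigma_X^2 + sigma_Y^2.
random.seed(3)
sigma_x, sigma_y, n = 1.0, 2.0, 200_000
zs = [random.gauss(0.0, sigma_x) + random.gauss(0.0, sigma_y) for _ in range(n)]

var_z = statistics.pvariance(zs)
print(round(var_z, 1))  # close to sigma_x^2 + sigma_y^2 = 5
```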
If the autocorrelation function of a zero-mean WSS process X is a delta function, i.e., \mu_X = 0 and
r_{XX}(\tau) = P_X \delta(\tau)
(2.65)
then X is called a white noise process, where P_X is a constant. The PSD of X is then given by
S_X(\omega) = \int_{-\infty}^{\infty} P_X \delta(\tau) e^{-i\omega\tau}\, d\tau = P_X e^{-i\omega \cdot 0} = P_X
(2.66)
Hence the PSD of X is constant over all frequencies, with value P_X. But:
Power of X = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_X(\omega)\, d\omega = \infty
(2.67)
so the white noise process is unrealizable in practice, because of its infinite bandwidth. However, it is very useful as a conceptual entity and as an approximation to 'nearly white' processes which have finite bandwidth, but which are 'white' over all frequencies of practical interest. For 'nearly white' processes, r_{XX}(\tau) is a narrow pulse of non-zero width, and S_X(\omega) is flat up to some relatively high cutoff frequency and then decays to zero above that.
For a strictly white process, samples taken at any choice of times t_1, \ldots, t_N are independent:
f_{X(t_1), X(t_2), \ldots, X(t_N)}(x_1, x_2, \ldots, x_N) = \prod_{i=1}^{N} f_{X(t_i)}(x_i)
(2.68)
where the marginal densities f_{X(t_i)} are identical for all t_i.
A white Gaussian noise process is a process which has a Gaussian pdf, a white PSD, and is linearly added to whatever signal we are analysing. Note that although 'white' and 'Gaussian' often go together, this is not necessary (especially for 'nearly white' processes). E.g. a very high speed random bit stream has an ACF which is approximately a delta function, and hence is a nearly white process, but its pdf is clearly not Gaussian - it is a pair of delta functions at +V and -V, the two signal levels. Conversely a nearly white Gaussian process which has been passed through a lowpass filter (see next section) will still have a Gaussian pdf (as it is a summation of Gaussians) but will no longer be white.
A coloured noise process Y(t) with PSD S_Y(\omega) may be generated by passing a white (or nearly white) process X(t) with PSD P_X through a filter with frequency response H(\omega). From our discussion of Spectral Properties of Random Signals,
S_Y(\omega) = S_X(\omega)(|H(\omega)|)^2 = P_X (|H(\omega)|)^2
(2.70)
Hence if we choose
|H(\omega)| = \sqrt{S_Y(\omega)/P_X}
(2.71)
then Y(t) has the desired PSD. S_X(\omega) need only be constant (white) over the passband of the filter, so a nearly white process which satisfies this criterion is quite satisfactory and realizable. Using this equation from our discussion of Spectral Properties of Random Signals and (2.65), the ACF of the coloured noise is given by
r_{YY}(\tau) = r_{XX}(\tau) * h(\tau) * h(-\tau) = P_X \delta(\tau) * h(\tau) * h(-\tau) = P_X\, h(\tau) * h(-\tau)
(2.72)
where h(\tau) is the impulse response of the filter. This Figure from previous discussion shows two examples of coloured noise, although the upper waveform is more 'nearly white' than the lower one, as can be seen in part c of this figure from previous discussion in which the upper PSD is flatter than the lower PSD. In these cases, the coloured waveforms were produced by passing uncorrelated random noise samples (white up to half the sampling frequency) through half-sine filters (as in this equation) with T_b = 10 and 50 samples respectively.
Operations on random processes:
Integration
Z = \int_a^b X_t(\omega)\, dt
(2.73)
Linear Processing
Y_t = \int_{-\infty}^{\infty} h(t, \tau) X_\tau\, d\tau
(2.74)
Differentiation
X'_t = \frac{d}{dt}(X_t)
(2.75)
19 "Spectral Properties of Random Signals", (11) <http://cnx.org/content/m11104/latest/#eq27> 20 "Spectral Properties of Random Signals", (9) <http://cnx.org/content/m11104/latest/#eq25> 21 "Spectral Properties of Random Signals", Figure 1 <http://cnx.org/content/m11104/latest/#gure1> 22 "Spectral Properties of Random Signals", Figure 1.3 <http://cnx.org/content/m11104/latest/#gure1c> 23 "Random Signals", (1) <http://cnx.org/content/m10989/latest/#eq9> 24 This content is available online at <http://cnx.org/content/m10237/2.10/>.
Properties
1. E[Z] = E[\int_a^b X_t(\omega)\, dt] = \int_a^b \mu_X(t)\, dt
2. E[Z^2] = E[\int_a^b X_{t_2}\, dt_2 \int_a^b X_{t_1}\, dt_1] = \int_a^b \int_a^b R_X(t_2, t_1)\, dt_1\, dt_2
Figure 2.22
Y(t) = \int_{-\infty}^{\infty} h(t, \tau) X_\tau\, d\tau = \int_{-\infty}^{\infty} h(t, \tau) X(\tau)\, d\tau
(2.76)
If X_t is wide sense stationary and the linear system is time invariant, then
Y(t) = \int_{-\infty}^{\infty} h(t - \tau) X_\tau\, d\tau
(2.77)
\mu_Y(t) = \int_{-\infty}^{\infty} h(t - \tau)\mu_X\, d\tau = \mu_X \int_{-\infty}^{\infty} h(t')\, dt'
(2.78)
so the output mean is constant. The crosscorrelation between output and input is
R_{YX}(t_2, t_1) = \int_{-\infty}^{\infty} h(t_2 - t_1 - \tau') R_X(\tau')\, d\tau' = h * R_X(t_2 - t_1)
(2.79)
and the output autocorrelation is
R_Y(t_2, t_1) = E[Y_{t_2} Y_{t_1}], \quad Y_{t_1} = \int_{-\infty}^{\infty} h(t_1, \tau) X_\tau\, d\tau
(2.80)
R_Y(t_2, t_1) = \int_{-\infty}^{\infty} \tilde{h}(\tau - (t_2 - t_1)) R_{YX}(\tau)\, d\tau
(2.81)
= R_Y(t_2 - t_1) = \tilde{h} * R_{YX}(t_2 - t_1)
where \tilde{h}(\tau) = h(-\tau) for all \tau \in R. Y_t is WSS if X_t is WSS and the system is linear and time-invariant.
Figure 2.23
Example 2.9
X_t is a wide sense stationary white Gaussian process with \mu_X = 0 and R_X(\tau) = \frac{N_0}{2}\delta(\tau), passed through a linear time-invariant filter with impulse response h(t) = e^{-at}u(t), a > 0, producing Y_t. Then \mu_Y(t) = 0 for all t, and
R_Y(\tau) = \frac{N_0}{2} \int_{-\infty}^{\infty} h(s) h(s + \tau)\, ds = \frac{N_0}{2}\, \frac{e^{-a|\tau|}}{2a}
(2.82)
Since Y_t is Gaussian with exponential autocorrelation, Y_t is a Markov process.
The power spectral density function of a wide sense stationary (WSS) process X_t is defined to be the Fourier transform of the autocorrelation function of X_t:
S_X(f) = \int_{-\infty}^{\infty} R_X(\tau) e^{-i2\pi f\tau}\, d\tau
(2.83)
if X_t is WSS with autocorrelation function R_X(\tau).
Properties
1. S_X(f) = S_X(-f) since R_X is even and real.
2. Var(X_t) = R_X(0) = \int_{-\infty}^{\infty} S_X(f)\, df (for a zero-mean process).
3. S_X(f) is real and nonnegative: S_X(f) \ge 0 for all f.
If Y_t = \int_{-\infty}^{\infty} h(t - \tau) X_\tau\, d\tau, then
S_Y(f) = F(R_Y(\tau)) = F(h * \tilde{h} * R_X(\tau)) = H(f) H^*(f) S_X(f) = (|H(f)|)^2 S_X(f)
(2.84)
since
H^*(f) = \int_{-\infty}^{\infty} h(t) e^{i2\pi ft}\, dt = H(-f)
(2.85)
for real h(t).
Example 2.10
X_t is a white process with S_X(f) = \frac{N_0}{2}, and Y_t is the output of a filter with impulse response h(t) = e^{-at}u(t), so that (|H(f)|)^2 = \frac{1}{a^2 + 4\pi^2 f^2}. Then
S_Y(f) = \frac{N_0/2}{a^2 + 4\pi^2 f^2}
(2.86)
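The relation S_Y(f) = S_X(f)|H(f)|^2 in (2.84) can be checked numerically: compute H(f) by direct numerical integration of the Fourier transform of h(t) = e^{-at}u(t) and compare (N0/2)|H(f)|^2 against the closed form (2.86). A sketch (N0 and a are illustrative values):

```python
import math, cmath

# Numerical check of (2.86): white noise with S_X(f) = N0/2 through
# h(t) = e^{-a t} u(t) gives S_Y(f) = (N0/2)/(a^2 + 4*pi^2*f^2).
N0, a = 2.0, 3.0

def H_numeric(f, T=10.0, steps=100_000):
    """Riemann-sum approximation of H(f) = int_0^inf e^{-at} e^{-i 2 pi f t} dt."""
    dt = T / steps
    return sum(cmath.exp(-(a + 1j * 2 * math.pi * f) * k * dt)
               for k in range(steps)) * dt

for f in (0.0, 0.5, 2.0):
    s_y = (N0 / 2) * abs(H_numeric(f)) ** 2
    closed_form = (N0 / 2) / (a ** 2 + 4 * math.pi ** 2 * f ** 2)
    print(f, round(s_y, 4), round(closed_form, 4))
```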
Chapter 3
Chapter 2: Source Coding
In the previous chapters, we considered the problem of digital transmission over different channels. Information sources are not often digital, and in fact, many sources are analog. Although many channels are also analog, it is still more efficient to convert analog sources into digital data and transmit over analog channels using digital transmission techniques. There are two reasons why digital transmission could be more efficient and more reliable than analog transmission:
1. Analog sources can be compressed to digital form efficiently.
2. Digital data can be transmitted over noisy channels reliably.
There are several key questions that need to be addressed:
1. How can one model information?
2. How can one quantify information?
3. If information can be measured, does its information quantity relate to how much it can be compressed?
4. Is it possible to determine if a particular channel can handle transmission of a source with a particular information quantity?
Figure 3.1
Example 3.1
The information content of the following sentences: "Hello, hello, hello." and "There is an exam today." are not the same. Clearly the second one carries more information. The first one can be compressed to "Hello" without much loss of information. In other modules, we will quantify information and find efficient representation of information (Entropy (Section 3.2)). We will also quantify how much (Section 6.1) information can be transmitted through channels, reliably. Channel coding (Section 6.5) can be used to reduce information rate and increase reliability.
3.2 Entropy
Information sources take very different forms. Since the information is not known to the destination, it is then best modeled as a random process, discrete-time or continuous-time. Here are a few examples: digital data (e.g., a text) can be modeled as a discrete-time and discrete-valued random process with a particular alphabet and a specific probability distribution. Video signals can be modeled as a continuous-time random process bandlimited to around 5 MHz (the value depends on the standards used to raster the frames of image). Audio signals can be modeled as a continuous-time random process. It has been demonstrated that the power spectral density of speech signals is bandlimited between 300 Hz and 3400 Hz. For example, the speech signal can be modeled as a Gaussian process with the shown (Figure 3.2) power spectral density over a small observation period.
Figure 3.2
These analog information signals are bandlimited. Therefore, if sampled faster than the Nyquist rate, they can be reconstructed from their sample values.
Example 3.2
A speech signal with bandwidth of 3100 Hz can be sampled at the rate of 6.2 kHz. If the samples are quantized with an 8-level quantizer then the speech signal can be represented with a binary sequence with the rate of
6200 \times log_2 8 = 18600 \text{ bits/s}
(3.1)
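The rate computation in (3.1) is simply sampling rate times bits per sample:

```python
import math

# The bit-rate computation in (3.1): sampling rate times bits per sample.
bandwidth_hz = 3100
sampling_rate = 2 * bandwidth_hz            # Nyquist rate: 6200 samples/s
levels = 8
bits_per_sample = math.log2(levels)         # 3 bits for an 8-level quantizer
rate = sampling_rate * bits_per_sample
print(rate)  # 18600.0 bits/s
```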
Figure 3.3
The sampled real values can be quantized to create a discrete-time discrete-valued random process. Since any bandlimited analog information signal can be converted to a sequence of discrete random variables, we will continue the discussion only for discrete random variables.
Example 3.3
The random variable x takes the value of 0 with probability 0.9 and the value of 1 with probability 0.1. The statement that x = 1 carries more information than the statement that x = 0. The reason is that x is expected to be 0; therefore, knowing that x = 1 is more surprising news!! An intuitive measure of information should therefore be larger for less probable events.
Example 3.4
The information content in the statement about the temperature and pollution level on July 15th in Chicago should be the sum of the information in the statement that July 15th in Chicago was hot and the information in the statement that it was highly polluted, since pollution and temperature could be independent:
I(\text{hot, polluted}) = I(\text{hot}) + I(\text{polluted})
(3.2)
An intuitive and meaningful measure of information should have the following properties: 1. Self information should decrease with increasing probability. 2. Self information of two independent events should be their sum. 3. Self information should be a continuous function of the probability. The only function satisfying the above conditions is the -log of the probability.
Thus the self-information of the event X = x_i is a function of its probability: I(x_i) = -log(p_X(x_i)).
Definition: Entropy
The entropy of a discrete random variable X is the average self-information:
H(X) = \sum_{i=1}^{N} -(p_X(x_i)\, log(p_X(x_i)))
(3.3)
where N is the size of the alphabet of X and p_X(x_i) = Pr[X = x_i]. When log is base 2, the unit of entropy is bits. Entropy is a measure of uncertainty in a random variable and a measure of information it can reveal. A more basic explanation of entropy is provided in another module.
Example 3.5
A binary source produces symbols from the alphabet {0, 1} with probabilities p and 1 - p.
<http://cnx.org/content/m0070/latest/>
H(X) = -(p\, log_2 p) - (1 - p)\, log_2(1 - p)
(3.4)
If p = 0 then H(X) = 0; if p = 1 then H(X) = 0; if p = 1/2 then H(X) = 1. The source provides no new information if p = 0 or p = 1, and the source has maximum uncertainty if p = 1/2.
Figure 3.4
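The binary entropy function sketched in Figure 3.4 is easy to compute directly:

```python
import math

def binary_entropy(p):
    """H(X) = -p*log2(p) - (1-p)*log2(1-p), with H = 0 at p = 0 or 1."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.0), binary_entropy(0.5), binary_entropy(1.0))
# maximum uncertainty (1 bit) at p = 1/2, no information at p = 0 or 1
```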
Example 3.6
An analog source is modeled as a continuous-time random process with power spectral density bandlimited to the band between 0 and 4000 Hz. The signal is sampled at the Nyquist rate. The sequence of random variables, as a result of sampling, are assumed to be independent. The samples are quantized to 5 levels {-2, -1, 0, 1, 2} with probabilities { 1/2, 1/4, 1/8, 1/16, 1/16 }. The entropy of each sample is
H(X) = \frac{1}{2} log_2 2 + \frac{1}{4} log_2 4 + \frac{1}{8} log_2 8 + \frac{1}{16} log_2 16 + \frac{1}{16} log_2 16 = \frac{15}{8} \text{ bits/sample}
(3.5)
There are 8000 samples per second. Therefore, the source produces 8000 \times \frac{15}{8} = 15000 bits/sec of information.
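The entropy and rate numbers in Example 3.6 can be reproduced directly:

```python
import math

# Entropy and information rate for Example 3.6: 5 quantizer levels with
# probabilities {1/2, 1/4, 1/8, 1/16, 1/16}, sampled at 8000 samples/s.
probs = [1/2, 1/4, 1/8, 1/16, 1/16]
H = -sum(p * math.log2(p) for p in probs)
rate = 8000 * H
print(H, rate)  # 1.875 bits/sample -> 15000.0 bits/s
```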
The joint entropy of two discrete random variables (X, Y) is defined by
H(X, Y) = -\sum_i \sum_j p_{X,Y}(x_i, y_j)\, log(p_{X,Y}(x_i, y_j))
(3.6)
The joint entropy for a random vector X = (X_1, X_2, \ldots, X_n) is defined as
H(X) = -\sum_{x_1}\sum_{x_2}\cdots\sum_{x_n} p_X(x_1, x_2, \ldots, x_n)\, log(p_X(x_1, x_2, \ldots, x_n))
(3.7)
The conditional entropy of the random variable X given the random variable Y is defined by
H(X|Y) = -\sum_i \sum_j p_{X,Y}(x_i, y_j)\, log(p_{X|Y}(x_i|y_j))
(3.8)
It is easy to show that
H(X, Y) = H(Y) + H(X|Y) = H(X) + H(Y|X)
(3.9)
and, if X and Y are independent,
H(X, Y) = H(X) + H(Y)
(3.10)
If X_1, X_2, \ldots, X_n are mutually independent, then
H(X) = \sum_{i=1}^{n} H(X_i)
(3.11)
Definition: Entropy Rate
The entropy rate of a stationary discrete-time random process is defined by
H = \lim_{n \to \infty} H(X_n | X_1, X_2, \ldots, X_{n-1})
(3.12)
and, equivalently for stationary processes,
H = \lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \ldots, X_n)
(3.13)
The entropy rate is a measure of the uncertainty of information content per output symbol of the source. Entropy is closely tied to source coding (Section 3.3). The extent to which a source can be compressed is related to its entropy. In 1948, Claude E. Shannon introduced a theorem which related the entropy to the number of bits per second required to represent a source without much loss.
As mentioned earlier, how much a source can be compressed should be related to its entropy (Section 3.2). In 1948, Claude E. Shannon introduced three theorems and developed very rigorous mathematics for digital communications. In one of the three theorems, Shannon relates entropy to the minimum number of bits per second required to represent a source without much loss (or distortion). Consider a source that is modeled by a discrete-time and discrete-valued random process
X_1, X_2, \ldots, X_n, \ldots where x_i \in \{a_1, a_2, \ldots, a_N\}, and define p_{X_i}(x_i = a_j) = p_j for j = 1, 2, \ldots, N, where it is assumed that X_1, X_2, \ldots, X_n are mutually independent and identically distributed. Consider a sequence of length n:
X = (X_1, X_2, \ldots, X_n)^T
(3.14)
In a sequence of length n, on the average, the symbol a_1 will appear np_1 times, a_2 will appear np_2 times, and in general a_i will appear np_i times.
(3.15)
Therefore, a sequence in which each symbol a_i appears approximately np_i times, for all i, is called a typical sequence:
\#\{j : x_j = a_i\} \approx np_i
(3.16)
A typical sequence might look like
x = (a_2, a_1, a_N, a_2, a_5, \ldots, a_1, \ldots, a_N, a_6, \ldots)^T
(3.17)
where each a_i appears approximately np_i times. This is referred to as a typical sequence.
The probability of a typical sequence is
P(X = x) = \prod_{i=1}^{N} p_i^{np_i} = \prod_{i=1}^{N} 2^{np_i log_2 p_i} = 2^{n\sum_{i=1}^{N} p_i log_2 p_i} = 2^{-nH(X)}
(3.18)
where H(X) is the entropy of the random variables X_1, X_2, \ldots, X_n. For large n, almost all the output sequences of length n of the source are equally probable, with probability \approx 2^{-nH(X)}. These are typical sequences. The probability of nontypical sequences is negligible. There are N^n different sequences of length n with alphabet of size N. The probability of typical sequences is almost 1:
(\#\text{ of typical sequences}) \times 2^{-nH(X)} \approx 1, \text{ hence } \#\text{ of typical sequences} \approx 2^{nH(X)}
(3.19)
Figure 3.5
Example 3.7
probabilities.
1 1 1 1 2 , 4 , 8 , 8 }. Assume X1 , X2 ,. . ., X8 is an independent and identically distributed sequence with Xi {A, B, C, D} with the above
Consider a source with alphabet {A,B,C,D} with probabilities {
H (X)
= = = =
1 4
1 log 2 8
1 8
1 log 2 8
1 8
(3.20)
28 8 = 214
The number of nontypical sequences times,
14
(3.21)
48 214 = 216 214 = 214 (4 1) = 3 214 1 1 Examples of typical sequences include those with A appearing 8 = 4 times, B appearing 8 = 2 2 4
etc.
Examples of nontypical sequences of length 8: {D,D,B,C,C,A,B,D}, {C,C,C,C,C,B,C,C} and many more. Indeed, these definitions and arguments are valid when n is very large. The probability of a source output being in the set of typical sequences approaches 1 as n \to \infty, and the probability of a source output being in the set of nontypical sequences approaches 0 as n \to \infty. The essence of source coding or data compression is that, as n \to \infty, one only needs to be able to represent typical sequences as binary codes and ignore nontypical sequences. Since there are only about 2^{nH(X)} typical sequences of length n, it takes nH(X) bits to represent them; on the average, H(X) bits per source output symbol suffice.
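The counts in Example 3.7 follow directly from (3.18)-(3.19):

```python
import math

# Counts for Example 3.7: alphabet {A,B,C,D}, probabilities {1/2,1/4,1/8,1/8}, n = 8.
probs = {"A": 1/2, "B": 1/4, "C": 1/8, "D": 1/8}
n = 8
H = -sum(p * math.log2(p) for p in probs.values())       # 14/8 = 1.75 bits

num_typical = 2 ** (n * H)                                # ~ 2^14
total = len(probs) ** n                                   # 4^8 = 2^16
print(H, num_typical, total - num_typical)                # 1.75 16384.0 49152.0

# A sequence with A four times, B twice, C once, D once has the typical
# composition n*p_i and probability exactly 2^{-n H} = 2^{-14}:
p_seq = (1/2) ** 4 * (1/4) ** 2 * (1/8) * (1/8)
print(p_seq == 2.0 ** (-n * H))  # True
```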
Theorem 3.1: Shannon's Source-Coding Theorem
A source that produces independent and identically distributed random variables with entropy H can be encoded with arbitrarily small error probability at any rate R (bits per source output) as long as R \ge H. Conversely, if R < H, the error probability is bounded away from zero, independent of the complexity of the encoder and decoder employed.
The theorem relates the achievable compression rate to the entropy but does not provide any algorithms or ways to construct such codes. If the source is not i.i.d. (independent and identically distributed), but it is stationary with memory, then a similar theorem applies with the entropy H(X) replaced with the entropy rate
H = \lim_{n \to \infty} H(X_n | X_1, \ldots, X_{n-1})
In the case of a source with memory, the more the source produces outputs, the more one knows about the source and the more one can compress.
Example 3.8
The English language has 26 letters; with space it becomes an alphabet of size 27. If modeled as a memoryless source (no dependency between letters in a word), then the entropy is H(X) = 4.03 bits/letter. If the dependency between letters in a text is captured in a model, the entropy rate can be derived to be H = 1.3 bits/letter: there may be a compression algorithm with the rate of 1.3 bits/letter.
Although Shannon's results are not constructive, there are a number of source coding algorithms for discrete-time discrete-valued sources that come close to Shannon's bound. One such algorithm is the Huffman source coding algorithm (Section 3.4). Another is the Lempel and Ziv algorithm. Huffman codes and Lempel and Ziv apply to compression problems where the source produces discrete-time and discrete-valued outputs. For cases where the source is analog there are powerful compression algorithms that specify all the steps from sampling, quantization, and binary representation. These are referred to as waveform coders. JPEG, MPEG, and vocoders are a few examples for image, video, and voice, respectively.
One particular source coding (Section 3.3) algorithm is the Huffman encoding algorithm. It is a source coding algorithm which approaches, and sometimes achieves, Shannon's bound for source compression. A brief discussion of the algorithm is also given in another module.
Example 3.9
X \in \{A, B, C, D\} with probabilities \{ \frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{8} \}.
5 This content is available online at <http://cnx.org/content/m10176/2.10/>. 6 "Compression and the Human Code" <http://cnx.org/content/m0092/latest/>
Figure 3.6
Average length = \frac{1}{2} \times 1 + \frac{1}{4} \times 2 + \frac{1}{8} \times 3 + \frac{1}{8} \times 3 = \frac{14}{8}. As you may recall, the entropy of the source was also H(X) = \frac{14}{8}. In this case, the Huffman code achieves the lower bound of \frac{14}{8} bits per output symbol.
In general, we can define the average code length as
\bar{\ell} = \sum_{x \in X} p_X(x)\, \ell(x)
(3.22)
where \ell(x) is the length of the codeword assigned to symbol x.
For compressing single source outputs at a time, Huffman codes provide nearly optimum code lengths. The drawbacks of Huffman coding are:
1. Codes are variable length.
2. The algorithm requires the knowledge of the probabilities, p_X(x) for all x \in X.
Another powerful source coder that does not have the above shortcomings is Lempel and Ziv.
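The Huffman merge procedure itself is short. A sketch that computes codeword lengths for Example 3.9 and verifies the average length \frac{14}{8}:

```python
import heapq

def huffman_lengths(probs):
    """Return codeword lengths from the classic Huffman merge procedure."""
    # Each heap entry: (probability, tie-breaker, list of symbols in subtree)
    heap = [(p, i, [s]) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    lengths = {s: 0 for s in probs}
    tie = len(heap)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:      # every symbol in the merged subtree
            lengths[s] += 1          # moves one level deeper in the code tree
        heapq.heappush(heap, (p1 + p2, tie, syms1 + syms2))
        tie += 1
    return lengths

# Example 3.9: probabilities {1/2, 1/4, 1/8, 1/8}
probs = {"A": 1/2, "B": 1/4, "C": 1/8, "D": 1/8}
lengths = huffman_lengths(probs)
avg = sum(probs[s] * lengths[s] for s in probs)   # average code length (3.22)
print(lengths, avg)  # lengths 1, 2, 3, 3 -> avg = 14/8 = 1.75
```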
Chapter 4
Chapter 3: Communication over AWGN Channels
In this chapter we consider the problem of data transmission by first considering simple channels: additive white Gaussian noise channels without time or bandwidth constraints. In additional modules, baseband channels with bandwidth constraints and passband channels are considered.
Figure 4.1: Simple additive white Gaussian noise channel
The concept of using different types of modulation for transmission of data is introduced in the module Signalling (Section 4.2). The problem of demodulation and detection of signals is discussed in Demodulation and Detection (Section 4.4).
4.2 Signalling
Example 4.1
Data is transmitted at the rate of 1/T Hertz.
Figure 4.2
Figure 4.3
2 T Hertz.
Figure 4.4
T 2.
Figure 4.5
The energy of signal s_m(t) is
E_m = \int_0^T s_m^2(t)\, dt, \quad m \in \{1, 2, \ldots, M\}
(4.1)
The signals are characterized by their energies and how different they are in terms of inner products. The inner product is
<s_m, s_n> = \int_0^T s_m(t) s_n^*(t)\, dt
(4.2)
for m, n \in \{1, 2, \ldots, M\}.
Signals s_1(t) and s_2(t) are antipodal if s_2(t) = -s_1(t) for t \in [0, T].
Signals s_1(t), \ldots, s_M(t) are orthogonal if <s_m, s_n> = 0 for m \ne n.
Signals s_1(t), \ldots, s_M(t) are biorthogonal if s_1(t), \ldots, s_{M/2}(t) are orthogonal and
s_{m + M/2}(t) = -s_m(t) for m \in \{1, 2, \ldots, \frac{M}{2}\}
It is quite intuitive to expect that the smaller (the more negative) the inner products <s_m, s_n> for all m \ne n, the better the signal set performs.
Let \{s_1(t), s_2(t), \ldots, s_M(t)\} be a set of orthogonal signals with equal energy. The signals s'_1(t), \ldots, s'_M(t) are simplex signals if
s'_m(t) = s_m(t) - \frac{1}{M}\sum_{k=1}^{M} s_k(t)
(4.3)
If the energy of the orthogonal signals is denoted by
E_s = \int_0^T s_m^2(t)\, dt, \quad m \in \{1, 2, \ldots, M\}
(4.4)
then the energy of the simplex signals is
E'_s = \left(1 - \frac{1}{M}\right) E_s
(4.5)
and, for m \ne n,
<s'_m, s'_n> = -\frac{1}{M - 1} E'_s
(4.6)
It is conjectured that among all possible M-ary signal sets with equal energy, the simplex signal set results in the smallest probability of error when used to transmit information through an additive white Gaussian noise channel.
The geometric representation of signals (Section 4.3) can provide a compact description of signals and can simplify performance analysis of communication systems using the signals. Once signals have been modulated, the receiver must detect and demodulate (Section 4.4) the signals despite interference and noise and decide which of the set of possible transmitted signals was sent.
Geometric representation of signals can provide a compact characterization of signals and can simplify analysis of their performance as modulation signals.
Let \{s_1(t), s_2(t), \ldots, s_M(t)\} be a set of signals. Define
\psi_1(t) = \frac{s_1(t)}{\sqrt{E_1}}, \quad E_1 = \int_0^T s_1^2(t)\, dt
(4.7)
Define s_{21} = <s_2, \psi_1> = \int_0^T s_2(t)\psi_1(t)\, dt and
\psi_2(t) = \frac{s_2(t) - s_{21}\psi_1(t)}{\sqrt{E'_2}}, \quad E'_2 = \int_0^T (s_2(t) - s_{21}\psi_1(t))^2\, dt
In general,
\psi_k(t) = \frac{s_k(t) - \sum_{j=1}^{k-1} s_{kj}\psi_j(t)}{\sqrt{E'_k}}
where s_{kj} = \int_0^T s_k(t)\psi_j(t)\, dt and E'_k = \int_0^T (s_k(t) - \sum_{j=1}^{k-1} s_{kj}\psi_j(t))^2\, dt.
The process continues until all of the M signals are exhausted. The results are N orthogonal signals with unit energy, \{\psi_1(t), \psi_2(t), \ldots, \psi_N(t)\}, where N \le M. If the signals \{s_1(t), \ldots, s_M(t)\} are linearly independent, then N = M.
Each signal can then be written as
s_m(t) = \sum_{n=1}^{N} s_{mn}\psi_n(t)
(4.8)
with s_{mn} = <s_m, \psi_n>, and the energy is given by E_m = \sum_{n=1}^{N} s_{mn}^2.
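The Gram-Schmidt procedure in (4.7) can be sketched in discrete time, with integrals replaced by sums over samples (the two signals below are hypothetical):

```python
import math

# Discrete-time Gram-Schmidt on sampled signals (lists of samples), mirroring
# (4.7): inner products approximate integrals over [0, T] with spacing dt.
def gram_schmidt(signals, dt, tol=1e-9):
    def inner(u, v):
        return sum(a * b for a, b in zip(u, v)) * dt
    basis = []
    for s in signals:
        residual = list(s)
        for phi in basis:                       # subtract projections s_kj * psi_j
            c = inner(s, phi)
            residual = [r - c * p for r, p in zip(residual, phi)]
        energy = inner(residual, residual)
        if energy > tol:                        # skip linearly dependent signals
            basis.append([r / math.sqrt(energy) for r in residual])
    return basis

# Two sample signals on [0, 1]: a constant and a ramp (hypothetical example)
n, dt = 1000, 1e-3
s1 = [1.0] * n
s2 = [k * dt for k in range(n)]
basis = gram_schmidt([s1, s2], dt)

dot = sum(a * b for a, b in zip(basis[0], basis[1])) * dt
e1 = sum(a * a for a in basis[0]) * dt
print(len(basis), round(e1, 3), abs(dot) < 1e-9)  # 2 unit-energy, orthogonal bases
```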
Example 4.3
Figure 4.6
(4.9)
(4.10)
(4.11)
E2
(4.12)
E2
Figure 4.7
E_1 = s_{11}^2 and E_2 = s_{21}^2.
Example 4.4
Figure 4.8
and
0 0 0 Es
s4 =
The distance between any two signal vectors is
d_{mn} = |s_m - s_n| = \sqrt{\sum_{j=1}^{N} (s_{mj} - s_{nj})^2}
(4.13)
For orthogonal equal-energy signals, d_{mn} = \sqrt{2E_s} for all m \ne n.
Example 4.5
Set of 4 equal energy biorthogonal signals.
s (t)
Es =
T 0
Signal constellation
Figure 4.9
d21
(4.14)
d12
(4.15)
d13
(4.16)
(4.17)
Minimum distance
d_{min} = \sqrt{2E_s}
A signal set \{s_1, s_2, \ldots, s_M\}, defined for t \in [0, T], is used to transmit log_2 M bits per signal interval.
Figure 4.10:
The transmitted signal is
s_m(t) = \sum_{n=1}^{N} s_{mn}\psi_n(t) for m \in \{1, 2, \ldots, M\}
and the noise can be expanded as
N_t = \sum_{n=1}^{N} \eta_n\psi_n(t) + \tilde{N}_t
(4.18)
where \eta_n = \int_0^T N_t\psi_n(t)\, dt is the projection of the noise onto the nth basis signal and \tilde{N}_t is the component of the noise orthogonal to the signal space. The task of the receiver is to observe r_t for t \in [0, T] and decide which one of the M signals was transmitted. Demodulation is covered here (Section 4.5). A discussion about detection can be found in Section 4.6.
4.5 Demodulation
4.5.1 Demodulation
Convert the continuous time received signal into a vector without loss of information (or performance).
r_t = s_m(t) + N_t
(4.19)
r_t = \sum_{n=1}^{N} s_{mn}\psi_n(t) + \sum_{n=1}^{N} \eta_n\psi_n(t) + \tilde{N}_t
(4.20)
r_t = \sum_{n=1}^{N} (s_{mn} + \eta_n)\psi_n(t) + \tilde{N}_t
(4.21)
r_t = \sum_{n=1}^{N} r_n\psi_n(t) + \tilde{N}_t
(4.22)
where r_n = s_{mn} + \eta_n.
Proposition 4.1:
The noise projection coefficients \eta_n's are zero-mean Gaussian random variables, and they are mutually independent if N_t is a white Gaussian process.
Proof:
E[\eta_n] = E\left[\int_0^T N_t\psi_n(t)\, dt\right]
(4.23)
= \int_0^T E[N_t]\psi_n(t)\, dt = 0
(4.24)
E[\eta_k\eta_n] = E\left[\int_0^T N_t\psi_k(t)\, dt \int_0^T N_{t'}\psi_n(t')\, dt'\right]
(4.25)
E[\eta_k\eta_n] = \int_0^T\int_0^T E[N_t N_{t'}]\psi_k(t)\psi_n(t')\, dt\, dt'
(4.26)
E[\eta_k\eta_n] = \int_0^T\int_0^T \frac{N_0}{2}\delta(t - t')\psi_k(t)\psi_n(t')\, dt\, dt' = \frac{N_0}{2}\int_0^T \psi_k(t)\psi_n(t)\, dt = \frac{N_0}{2}\delta_{kn}
(4.27)
E[\eta_k\eta_n] = \frac{N_0}{2} if k = n and 0 otherwise.
(4.28)
Since the \eta_n's are Gaussian and uncorrelated, they are independent.
Proposition 4.2:
The r_n's, conditioned on the transmitted signal, are independent Gaussian random variables, and the residual noise \tilde{N}_t is irrelevant to the decision process on r_t.
Proof:
r_n = s_{mn} + \eta_n, given s_m(t) was transmitted. Therefore,
\mu_{r_n} = E[s_{mn} + \eta_n] = s_{mn} and Var(r_n) = Var(\eta_n) = \frac{N_0}{2}
(4.29)
The r_n's are Gaussian (each is a Gaussian random variable shifted by a constant) and mutually independent since the \eta_n's are independent. To show that \tilde{N}_t is independent of the r_n's, by Gaussianity it suffices to show that they are uncorrelated:
(4.30)
E[\tilde{N}_t r_n] = E\left[\left(N_t - \sum_{k=1}^{N}\eta_k\psi_k(t)\right)(s_{mn} + \eta_n)\right]
(4.31)
E[\tilde{N}_t r_n] = E\left[N_t - \sum_{k=1}^{N}\eta_k\psi_k(t)\right] s_{mn} + E[N_t\eta_n] - \sum_{k=1}^{N} E[\eta_k\eta_n]\psi_k(t)
(4.32)
E[\tilde{N}_t r_n] = E\left[N_t\int_0^T N_{t'}\psi_n(t')\, dt'\right] - \sum_{k=1}^{N}\frac{N_0}{2}\delta_{kn}\psi_k(t)
(4.33)
E[\tilde{N}_t r_n] = \int_0^T \frac{N_0}{2}\delta(t - t')\psi_n(t')\, dt' - \frac{N_0}{2}\psi_n(t)
(4.34)
E[\tilde{N}_t r_n] = \frac{N_0}{2}\psi_n(t) - \frac{N_0}{2}\psi_n(t) = 0
(4.35)
Since both \tilde{N}_t and r_n are Gaussian and uncorrelated, \tilde{N}_t and r_n are also independent. The received signal can therefore be summarized by the vector r = (r_1, r_2, \ldots, r_N)^T; knowing the vector r, the residual \tilde{N}_t carries no additional information:
r_t = s_m(t) + N_t = \sum_{n=1}^{N} r_n\psi_n(t) + \tilde{N}_t
(4.36)
Figure 4.11
Figure 4.12
Once the received signal has been converted to a vector, the correct transmitted signal must be detected based upon observations of the input vector. Detection is covered elsewhere (Section 4.6).
Figure 4.13
4.6.1 Detection
Decide which s_m(t) was transmitted based on the observation of r = (r_1, r_2, \ldots, r_N)^T, the vector composed of the demodulated (Section 4.5) received signal, that is, the vector of projections onto the N bases. The optimum decision rule is
\hat{m} = \arg\max_{1 \le m \le M} Pr[s_m(t) \text{ was transmitted} \mid r \text{ was observed}]
(4.37)
By Bayes' rule,
Pr[s_m(t) \text{ was transmitted} \mid r \text{ was observed}] = \frac{f_{r|s_m} Pr[s_m]}{f_r}
(4.38)
When the signals are equally likely, Pr[s_m \text{ was transmitted}] = \frac{1}{M} and the rule reduces to maximum likelihood.
(4.39)
Since r(t) = s_m(t) + N_t for 0 \le t \le T and for some m = \{1, 2, \ldots, M\}, then r = s_m + \eta where \eta = (\eta_1, \ldots, \eta_N)^T and the \eta_n's are Gaussian and independent:
f_{r|s_m} = \frac{1}{(\pi N_0)^{N/2}} e^{-\frac{\sum_{n=1}^{N}(r_n - s_{mn})^2}{N_0}}, \quad r_n \in R
(4.40)
\hat{m} = \arg\max_{1 \le m \le M} f_{r|s_m} = \arg\max_{1 \le m \le M}\left(-\frac{N}{2}\ln(\pi N_0) - \frac{1}{N_0}\sum_{n=1}^{N}(r_n - s_{mn})^2\right) = \arg\min_{1 \le m \le M} \sum_{n=1}^{N}(r_n - s_{mn})^2
(4.41)
The rule minimizes the Euclidean distance between r and s_m, defined as
D(r, s_m) = \sum_{n=1}^{N}(r_n - s_{mn})^2
(4.42)
Expanding, D(r, s_m) = \sum_{n=1}^{N} r_n^2 - 2\sum_{n=1}^{N} r_n s_{mn} + \sum_{n=1}^{N} s_{mn}^2; since \sum_n r_n^2 is common to all m, the rule is equivalent to maximizing the correlation \sum_n r_n s_{mn} - \frac{E_m}{2}. The performance of such a system is found here (Section 4.7). Another type of receiver involves linear, time-invariant filters and is known as a matched filter (Section 4.8) receiver. An analysis of the performance of a correlator-type receiver using antipodal and orthogonal binary signals can be found in Performance Analysis.
7
7 "Performance
Analysis" <http://cnx.org/content/m10106/latest/>
The implementation and theory of correlator-type receivers can be found in Detection (Section 4.6).
Figure 4.14
m=2
since
or
( s1 ) = ( s2 )
and
Figure 4.15
Example 4.7
Data symbols "0" or "1" occur with equal probability. The modulator maps "0" to s_1(t) = s(t) for 0 \le t \le T and "1" to s_2(t) = -s(t) for 0 \le t \le T, where s(t) = A for 0 \le t \le T.
Figure 4.16
The single basis function and signal coordinates are
\psi_1(t) = \frac{s(t)}{\sqrt{A^2 T}}, \quad s_{11} = A\sqrt{T}, \quad s_{21} = -A\sqrt{T}
(4.44)
Figure 4.17
The demodulator output is
r_1 = A\sqrt{T} + \eta_1 or r_1 = -A\sqrt{T} + \eta_1
(4.45)
where \eta_1 is Gaussian with zero mean and variance
(4.46)
\frac{N_0}{2}.
Figure 4.18
Assuming A\sqrt{T} > 0 and Pr[s_1] = Pr[s_2], the optimum receiver decides s_1(t) was transmitted if r_1 \ge 0 and s_2(t) if r_1 < 0.
(4.47)
An alternate demodulator replaces the correlators with filters: r_t = s_m(t) + N_t is filtered and sampled to produce r = s_m + \eta.
The signal-to-noise ratio (SNR) at the output of the demodulator is a measure of the quality of the demodulator. In the correlator described earlier, the signal component has energy E_s = (|s_m|)^2 and the noise variance is E[\eta_n^2] = \frac{N_0}{2}, so
SNR = \frac{E_s}{N_0/2} = \frac{2E_s}{N_0}
(4.48)
Is it possible to design a demodulator based on linear time-invariant filters with maximum signal-to-noise ratio?
Figure 4.19
If s_m(t) is transmitted, the output of the k-th filter is given as
y_k(t) = \int_{-\infty}^{\infty} r_\tau h_k(t - \tau)\, d\tau = \int_{-\infty}^{\infty}(s_m(\tau) + N_\tau) h_k(t - \tau)\, d\tau = \int_{-\infty}^{\infty} s_m(\tau) h_k(t - \tau)\, d\tau + \int_{-\infty}^{\infty} N_\tau h_k(t - \tau)\, d\tau
(4.49)
Sampling at t = T yields
y_k(T) = \int_{-\infty}^{\infty} s_m(\tau) h_k(T - \tau)\, d\tau + \int_{-\infty}^{\infty} N_\tau h_k(T - \tau)\, d\tau
(4.50)
The noise component is
\nu_k = \int_{-\infty}^{\infty} N_\tau h_k(T - \tau)\, d\tau
(4.51)
E[\nu_k] = E\left[\int_{-\infty}^{\infty} N_\tau h_k(T - \tau)\, d\tau\right] = 0
(4.52)
The variance of the noise component is the second moment since the mean is zero and is given as
Var(\nu_k) = E[\nu_k^2] = E\left[\int_{-\infty}^{\infty} N_\tau h_k(T - \tau)\, d\tau \int_{-\infty}^{\infty} N_{\tau'} h_k(T - \tau')\, d\tau'\right]
(4.53)
E[\nu_k^2] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{N_0}{2}\delta(\tau - \tau') h_k(T - \tau) h_k(T - \tau')\, d\tau\, d\tau' = \frac{N_0}{2}\int_{-\infty}^{\infty} (|h_k(T - \tau)|)^2\, d\tau
(4.54)
The signal component at the sampling time is
\int_{-\infty}^{\infty} s_m(\tau) h_k(T - \tau)\, d\tau
(4.55)
SNR = \frac{\left(\int_{-\infty}^{\infty} s_m(\tau) h_k(T - \tau)\, d\tau\right)^2}{\frac{N_0}{2}\int_{-\infty}^{\infty}(|h_k(T - \tau)|)^2\, d\tau}
(4.56)
The signal-to-noise ratio can be maximized considering the well-known Cauchy-Schwarz inequality
\left(\int_{-\infty}^{\infty} g_1(x) g_2(x)\, dx\right)^2 \le \int_{-\infty}^{\infty}(|g_1(x)|)^2\, dx \int_{-\infty}^{\infty}(|g_2(x)|)^2\, dx
(4.57)
with equality when g_1(x) = \alpha g_2(x) for a constant \alpha. Applying this with g_1(\tau) = h_k(T - \tau) and g_2(\tau) = s_m(\tau),
\frac{\left(\int_{-\infty}^{\infty} s_m(\tau) h_k(T - \tau)\, d\tau\right)^2}{\frac{N_0}{2}\int_{-\infty}^{\infty}(|h_k(T - \tau)|)^2\, d\tau} \le \frac{2}{N_0}\int_{-\infty}^{\infty}(|s_m(\tau)|)^2\, d\tau
(4.58)
The maximum is achieved when h_k(T - \tau) is proportional to s_m(\tau):
Matched Filter: h_m^{opt}(T - \tau) = s_m(\tau), i.e., h_m^{opt}(\tau) = s_m(T - \tau)
(4.59)
The constant factor is not relevant when one considers the signal-to-noise ratio, which is unchanged when both the numerator and denominator are scaled. The maximum SNR is
SNR^{max} = \frac{2}{N_0}\int_{-\infty}^{\infty}(|s_m(\tau)|)^2\, d\tau = \frac{2E_s}{N_0}
(4.60)
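The matched-filter optimality in (4.56)-(4.60) can be illustrated in discrete time, with integrals replaced by sums (the pulse samples below are hypothetical):

```python
import math

# Discrete-time illustration of (4.56)-(4.60): among candidate receive filters,
# the one matched to the pulse, h[k] = s[T-k], maximizes output SNR, which
# reaches 2*Es/N0 (sums approximate the integrals; N0/2 is the noise level).
N0 = 2.0
s = [0.2, 0.7, 1.0, 0.7, 0.2]               # hypothetical pulse samples
Es = sum(x * x for x in s)

def snr(h):
    signal = sum(si * hi for si, hi in zip(s, reversed(h))) ** 2
    noise = (N0 / 2) * sum(hi * hi for hi in h)
    return signal / noise

matched = list(reversed(s))                  # h[k] = s[T - k]
mismatched = [1.0, 1.0, 1.0, 1.0, 1.0]       # a plain integrate-and-dump filter
print(round(snr(matched), 4), round(snr(mismatched), 4), round(2 * Es / N0, 4))
```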
Examples involving matched filter receivers can be found here (Section 4.9). An analysis in the frequency domain is contained in Matched Filters in the Frequency Domain.
Another type of receiver system is the correlation (Section 4.6) receiver. A performance analysis of both matched filters and correlator-type receivers can be found in Performance Analysis.
The theory and rationale behind matched filter receivers can be found in Matched Filters (Section 4.8).
Figure 4.20
Figure 4.21
10 "Matched Filters in the Frequency Domain" <http://cnx.org/content/m10151/latest/> 11 "Performance Analysis" <http://cnx.org/content/m10106/latest/> 12 This content is available online at <http://cnx.org/content/m10150/2.10/>.
Consider s_1(t) = t for 0 \le t \le T, with matched filter h_1(t) = s_1(T - t) = T - t for 0 \le t \le T; the filter output occupies 0 \le t \le 2T. For 0 \le t \le T,
y_1(t) = \int_{-\infty}^{\infty} s_1(\tau) h_1(t - \tau)\, d\tau
(4.61)
= \int_0^t \tau(T - t + \tau)\, d\tau = \frac{1}{2}t^2(T - t) + \frac{t^3}{3} = \frac{t^2 T}{2} - \frac{t^3}{6}
(4.62)
At the sampling instant,
y_1(T) = \frac{T^3}{3}
(4.63)
Compared to the correlator-type demodulation with
\psi_1(t) = \frac{s_1(t)}{\sqrt{E_s}}
(4.64)
the correlator output at time t is
\int_0^t s_1(\tau)\psi_1(\tau)\, d\tau = \frac{1}{\sqrt{E_s}}\int_0^t \tau^2\, d\tau = \frac{1}{\sqrt{E_s}}\frac{t^3}{3}
(4.65)
so at t = T, with E_s = \frac{T^3}{3},
s_{11} = \int_0^T s_1(\tau)\psi_1(\tau)\, d\tau = \frac{T^3/3}{\sqrt{E_s}} = \sqrt{E_s}
(4.66)
and the matched filter and correlator outputs agree at the sampling time up to the normalization \sqrt{E_s}.
Figure 4.22
Example 4.9
Assume binary data is transmitted at the rate of
1 T Hertz.
X_t = \sum_i b_i s(t - iT)
(4.67)
Figure 4.23
rt = sm (t) + Nt
for
Figure 4.24
Decide s_1(t) was transmitted if r_1 \ge r_2.
P_e = Pr[\hat{m} \ne m]
(4.68)
P_e = 1/2\, Pr[r \in R_2 \mid s_1(t) \text{ transmitted}] + 1/2\, Pr[r \in R_1 \mid s_2(t) \text{ transmitted}]
= 1/2 \iint_{R_2} f_{r|s_1(t)}(r)\, dr_1\, dr_2 + 1/2 \iint_{R_1} f_{r|s_2(t)}(r)\, dr_1\, dr_2
(4.69)
= 1/2 \iint_{R_2} \frac{1}{\sqrt{\pi N_0}} e^{-\frac{(|r_1 - \sqrt{E_s}|)^2}{N_0}} \frac{1}{\sqrt{\pi N_0}} e^{-\frac{(|r_2|)^2}{N_0}}\, dr_1\, dr_2 + 1/2 \iint_{R_1} \frac{1}{\sqrt{\pi N_0}} e^{-\frac{(|r_1|)^2}{N_0}} \frac{1}{\sqrt{\pi N_0}} e^{-\frac{(|r_2 - \sqrt{E_s}|)^2}{N_0}}\, dr_1\, dr_2
Alternatively, if s_1(t) is transmitted, an error occurs when r_2 > r_1, i.e., when \eta_2 > \eta_1 + \sqrt{E_s}. The random variable \eta_2 - \eta_1 is zero-mean Gaussian with variance N_0, so
P_e = \int_{\sqrt{E_s}}^{\infty} \frac{1}{\sqrt{2\pi N_0}} e^{-\frac{z^2}{2N_0}}\, dz = Q\left(\sqrt{\frac{E_s}{N_0}}\right)
(4.70)
Note that the distance between s_1 and s_2 is d_{12} = \sqrt{2E_s}, so P_e = Q\left(\frac{d_{12}}{\sqrt{2N_0}}\right). Note also that the bit-error probability is the same as for the matched filter receiver.
The received signal r_t is passed through the two matched filters, and the outputs are sampled at t = T:
Y = (Y_1(T), Y_2(T))^T
(4.71)
If s_1(t) is transmitted,
Y_1(T) = \int_{-\infty}^{\infty} s_1(\tau) h_1^{opt}(T - \tau)\, d\tau + \nu_1(T) = \int_{-\infty}^{\infty} s_1(\tau) s_1(\tau)\, d\tau + \nu_1(T) = E_s + \nu_1(T)
(4.72)
Y_2(T) = \int_{-\infty}^{\infty} s_1(\tau) s_2(\tau)\, d\tau + \nu_2(T) = \nu_2(T)
(4.73)
If s_2(t) is transmitted, Y_1(T) = \nu_1(T) and Y_2(T) = E_s + \nu_2(T).
Figure 4.25
H_0: Y = (E_s, 0)^T + (\nu_1, \nu_2)^T
(4.74)
H_1: Y = (0, E_s)^T + (\nu_1, \nu_2)^T
(4.75)
where \nu_1 and \nu_2 are independent zero-mean Gaussian random variables with variance \frac{N_0}{2}E_s. The probability of error is
P_e = Q\left(\sqrt{\frac{E_s}{N_0}}\right)
(4.76)
Note that the maximum likelihood detector decides based on comparing Y_1 and Y_2: if Y_1 \ge Y_2 then s_1 was sent; otherwise s_2 was transmitted. For a similar analysis for binary antipodal signals, refer here.
Figure 4.26
Figure 4.27
Information is impressed on the phase of the carrier. As data changes from symbol period to symbol period, the phase of the carrier shifts among the values
\theta_m = \frac{2\pi(m - 1)}{M}, \quad m \in \{1, 2, \ldots, M\}
(4.77)
Example 4.10
Binary PSK: the transmitted signal is s_1(t) or s_2(t), where
s_m(t) = A P_T(t) cos\left(2\pi f_c t + \frac{2\pi(m - 1)}{M}\right)
(4.78)
\psi_1(t) = \frac{1}{\sqrt{E_s}} A P_T(t) cos(2\pi f_c t)
(4.79)
\psi_2(t) = -\frac{1}{\sqrt{E_s}} A P_T(t) sin(2\pi f_c t)
(4.80)
The signal energy is
(4.81)
E_s = \int_0^T A^2 cos^2\left(2\pi f_c t + \frac{2\pi(m - 1)}{M}\right) dt
(4.82)
= \frac{A^2 T}{2} + \frac{A^2}{2}\int_0^T cos\left(4\pi f_c t + \frac{4\pi(m - 1)}{M}\right) dt \approx \frac{A^2 T}{2}
(4.83)
(Note that in the above equation, the integral in the last step before the approximation is very small.) Therefore,
\psi_1(t) = \sqrt{\frac{2}{T}} P_T(t) cos(2\pi f_c t)
(4.84)
\psi_2(t) = -\sqrt{\frac{2}{T}} P_T(t) sin(2\pi f_c t)
(4.85)
In general, for m \in \{1, 2, \ldots, M\},
s_m(t) = A P_T(t) cos\left(2\pi f_c t + \frac{2\pi(m - 1)}{M}\right)
(4.86)
(4.87)
with vector representation
s_m = \left(\sqrt{E_s} cos\left(\frac{2\pi(m - 1)}{M}\right), \sqrt{E_s} sin\left(\frac{2\pi(m - 1)}{M}\right)\right)^T
(4.88)
(4.89)
for m \in \{1, 2, \ldots, M\}.
(4.90)
We must note that due to the phase offset \phi of the oscillator at the transmitter, the received signal is
r_t = A P_T(t) cos\left(2\pi f_c t + \frac{2\pi(m - 1)}{M} + \phi\right) + N_t
(4.91)
For binary PSK, the modulation is antipodal, and the optimum receiver in AWGN has average bit-error probability
P_e = Q\left(\sqrt{\frac{2E_s}{N_0}}\right)
(4.92)
= Q\left(A\sqrt{\frac{T}{N_0}}\right)
(4.93)
In the correlation receiver, the carrier is generated with a local phase reference \hat{\phi}:
r_1 = \int_0^T r_t cos(2\pi f_c t + \hat{\phi})\, dt = \int_0^T A cos(2\pi f_c t + \phi) cos(2\pi f_c t + \hat{\phi})\, dt + \int_0^T cos(2\pi f_c t + \hat{\phi}) N_t\, dt
(4.94)
r_1 = \frac{AT}{2} cos(\phi - \hat{\phi}) + \frac{A}{2}\int_0^T cos(4\pi f_c t + \phi + \hat{\phi})\, dt + \eta_1
(4.95)
r_1 \approx \frac{AT}{2} cos(\phi - \hat{\phi}) + \eta_1
(4.96)
where \eta_1 = \int_0^T N_t cos(2\pi f_c t + \hat{\phi})\, dt has variance \frac{N_0 T}{4}. Therefore,
P_e = Q\left(\frac{\frac{AT}{2} cos(\phi - \hat{\phi})}{\sqrt{\frac{N_0 T}{4}}}\right) = Q\left(cos(\phi - \hat{\phi}) A\sqrt{\frac{T}{N_0}}\right)
(4.97)
which is not a function of the amplitude of the local oscillator, i.e.,
P_e = Q\left(cos(\phi - \hat{\phi})\sqrt{\frac{2E_s}{N_0}}\right)
(4.98)
The above result implies that the amplitude of the local oscillator in the correlator structure does not play a role in the performance of the correlation receiver. However, the accuracy of the phase does indeed play a major role. This point can be seen in the following example:
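The sensitivity to the phase error in (4.98) can be quantified numerically with the standard Q function. A sketch (the Es/N0 value is illustrative):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Bit-error probability of BPSK with a phase error, as in (4.98):
# Pe = Q(cos(phase_error) * sqrt(2*Es/N0)).
EsN0 = 4.0   # Es/N0 (linear), an illustrative value
for deg in (0, 30, 60, 90):
    phi = math.radians(deg)
    pe = Q(math.cos(phi) * math.sqrt(2 * EsN0))
    print(deg, f"{pe:.3e}")
```

The error probability degrades monotonically as the phase error grows, reaching Pe = 1/2 (useless detection) at 90 degrees.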
Example 4.11
(4.99)
(4.100)
In frequency shift keying, the data is impressed on the carrier frequency. Since the transmitter and receiver may not have a common timing reference, the generators of these carrier signals may be different. The carriers are
f_1 = f_c, \quad f_2 = f_c + \Delta f, \quad \ldots, \quad f_M = f_c + (M - 1)\Delta f
(4.102)
Thus, the inner product of two signals is
<s_m, s_n> = \int_0^T A^2 cos(2\pi f_c t + 2\pi(m - 1)\Delta f t + \phi_m) cos(2\pi f_c t + 2\pi(n - 1)\Delta f t + \phi_n)\, dt
(4.103)
= \frac{A^2}{2}\int_0^T cos(2\pi(m - n)\Delta f t + \phi_m - \phi_n)\, dt + \frac{A^2}{2}\int_0^T cos(4\pi f_c t + 2\pi(n + m - 2)\Delta f t + \phi_m + \phi_n)\, dt
= \frac{A^2}{2}\frac{sin(2\pi(m - n)\Delta f T + \phi_m - \phi_n) - sin(\phi_m - \phi_n)}{2\pi(m - n)\Delta f} + \frac{A^2}{2}\frac{sin(4\pi f_c T + 2\pi(n + m - 2)\Delta f T + \phi_m + \phi_n) - sin(\phi_m + \phi_n)}{4\pi f_c + 2\pi(n + m - 2)\Delta f}
If \Delta f T is an integer, and if 2f_c T + (n + m - 2)\Delta f T is also an integer, then <s_m, s_n> = 0. The second term is negligible when f_c is much larger than \frac{1}{T}, and in case \phi_m = \phi_n = 0,
<s_m, s_n> \approx \frac{A^2 T}{2} sinc(2(m - n)\Delta f T)
(4.104)
Therefore, the frequency spacing could be as small as \Delta f = \frac{1}{2T}, since sinc(x) = 0 if x = 1 or 2. If the signals are designed to be orthogonal, then the average probability of error for binary FSK with the optimum receiver is
P_e = Q\left(\sqrt{\frac{E_s}{N_0}}\right)
(4.105)
in AWGN. Note that sinc(x) takes its minimum value not at x = 1 but at x \approx 1.4, and the minimum value is -0.216. Therefore, if \Delta f = \frac{0.7}{T} then
P_e = Q\left(\sqrt{\frac{1.216 E_s}{N_0}}\right)
(4.106)
which is a gain of 10\, log_{10} 1.216 \approx 0.85 dB relative to orthogonal signalling.
19
The phase lock loop provides estimates of the phase of the incoming modulated signal. A phase ambiguity of π is possible: if b = 1 then the receiver may decide b̂ = 0, and if b = 0 then it may decide b̂ = 1, without being able to resolve the ambiguity. In the presence of noise, an incorrect decision due to noise may then result in a correct final decision (in the binary case, when there is a π phase ambiguity), with probability

Pe = 1 − Q( √(2Es/N0) )  (4.107)

Differential encoding removes this ambiguity. Consider a stream of bits

an ∈ {0, 1}  (4.108)

and differentially encoded bits

bn = an ⊕ bn−1  (4.109)

(with some arbitrary initial bit, e.g., b0). Then

bn−1 ⊕ bn = bn−1 ⊕ an ⊕ bn−1 = 0 ⊕ an = an  (4.110)

If two consecutive bits are detected correctly, that is, if b̂n = bn and b̂n−1 = bn−1, then

ân = b̂n ⊕ b̂n−1 = bn ⊕ bn−1 = an  (4.111)

If both bits are detected in error, that is, b̂n = bn ⊕ 1 and b̂n−1 = bn−1 ⊕ 1, then

ân = b̂n ⊕ b̂n−1 = (bn ⊕ 1) ⊕ (bn−1 ⊕ 1) = bn ⊕ bn−1 = an  (4.112)

If b̂n ≠ bn and b̂n−1 = bn−1, that is, exactly one of two consecutive bits is detected in error, then ân ≠ an. In this case the error probability is

Pe = Pr[ân ≠ an]
   = Pr[b̂n ≠ bn, b̂n−1 = bn−1] + Pr[b̂n = bn, b̂n−1 ≠ bn−1]  (4.113)
   = 2Q( √(2Es/N0) ) (1 − Q( √(2Es/N0) ))
   ≈ 2Q( √(2Es/N0) )

when Q( √(2Es/N0) ) is small.
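The decoding identity above can be exercised with a short sketch. The bit values and the initial bit b0 below are arbitrary illustrations; the point is that inverting every detected bit (a phase ambiguity of π) leaves the decoded data unchanged.

```python
import numpy as np

# Differential encoding/decoding sketch. The data bits and the initial bit
# b0 are arbitrary; inverting every detected bit (a phase ambiguity of pi)
# leaves the decoded bits unchanged.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 20)              # information bits a_n

b = np.zeros(len(a) + 1, dtype=int)
b[0] = 1                                # arbitrary initial bit b_0
for n in range(len(a)):
    b[n + 1] = a[n] ^ b[n]              # b_n = a_n XOR b_{n-1}

def decode(b_hat):
    return b_hat[1:] ^ b_hat[:-1]       # a_hat_n = b_hat_n XOR b_hat_{n-1}

print(np.array_equal(decode(b), a))       # True: correct detection
print(np.array_equal(decode(1 - b), a))   # True: all bits inverted, still correct
```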
Chapter 5
Chapter 4: Communication over Band-limited AWGN Channel
Until this point, we have considered data transmission over simple additive Gaussian channels that are not time or band limited. In this module we will consider channels that do have bandwidth constraints, modeled by a channel impulse response g(t). The transmitted signal is

xt = sm(t) for 0 ≤ t ≤ T, for some m ∈ {1, 2, ..., M}

and the received signal is

rt = ∫ x(τ) g(t − τ) dτ + Nt = ∫ sm(τ) g(t − τ) dτ + Nt  (5.1)

In the frequency domain, the filtered signal is

∀f : S̃m(f) = Sm(f) G(f)  (5.2)

The optimum matched filter should match to the filtered signal:

∀f : Hm_opt(f) = S̃m*(f) e^(−i2πfT) = Sm*(f) G*(f) e^(−i2πfT)  (5.3)

This filter is indeed matched to the output of the channel, whose energy is

Ẽs = ∫ |S̃m(f)|² df  (5.4)

The band-limited nature of the channel and the stream of time-limited modulated signals cause successive symbols to overlap, which is referred to as intersymbol interference.
A typical baseband digital system is described in Figure 5.1(a). At the transmitter, the modulated pulses are filtered to comply with some bandwidth constraint. These pulses are distorted by the reactances of the cable or by fading in wireless systems. Figure 5.1(b) illustrates a convenient model, lumping all the filtering into one overall equivalent system transfer function

H(f) = Ht(f) · Hc(f) · Hr(f)
Figure 5.1: Intersymbol interference in the detection process. (a) Typical baseband digital system. (b) Equivalent model
Due to the effects of system filtering, the received pulses can overlap one another as shown in Figure 5.1(b). Such interference is termed intersymbol interference (ISI). Even in the absence of noise, the effects of filtering and channel-induced distortion lead to ISI. Nyquist investigated this problem and showed that the theoretical minimum system bandwidth needed in order to detect Rs symbols/s without ISI is Rs/2 or 1/2T hertz. For baseband systems, when H(f) is an ideal filter with single-sided bandwidth 1/2T (the ideal Nyquist filter), as shown in figure 5.2a, its impulse response is of the form h(t) = sinc(t/T), shown in figure 5.2b. This sinc(t/T)-shaped pulse is called the ideal Nyquist pulse. Even though two successive pulses h(t) and h(t − T) have long tails, the figure shows every tail of h(t) passing through zero amplitude at the instant when h(t − T) is to be sampled. Therefore, assuming that the synchronization is perfect, there will be no ISI.
Figure 5.2: Nyquist channels for zero ISI. (a) Rectangular system transfer function H(f). (b) Received pulse shape h(t) = sinc(t/T)
The names "Nyquist filter" and "Nyquist pulse" are often used to describe the general class of filtering and pulse-shaping that satisfy zero ISI at the sampling points. Among the class of Nyquist filters, the most popular ones are the raised cosine and root raised cosine. A fundamental parameter for a communication system is bandwidth efficiency, R/W bits/s/Hz. With Nyquist filtering, the theoretical maximum symbol-rate packing without ISI is 2 symbols/s/Hz; with 64-ary PAM, for example, this corresponds to 12 bits/s/Hz without ISI.
Consider a stream of data bits ..., b1, b0, b−1, ... mapped into a sequence of amplitudes an, where an ∈ {M levels of amplitude}. The received signal is

xt = Σn=−∞..∞ an s(t − nT)  (5.5)

and after the channel

rt = Σn=−∞..∞ an s̃(t − nT) + Nt  (5.6)

where s̃(t) = (s * g)(t) is the channel-filtered pulse. Since the modulation is linear, a single filter matched to s̃(t) is sufficient.
∀t : hopt(t) = s̃(T − t)  (5.7)

where s̃(t) = (s * g)(t) is the channel-filtered pulse. The matched filter output is

y(t) = Σn=−∞..∞ an u(t − nT) + ν(t)  (5.8)

where u(t) is the pulse shape at the output of the matched filter and ν(t) is the filtered noise. The output is sampled at the kth symbol instant kT:

y(kT) = Σn=−∞..∞ an u(kT − nT) + ν(kT)  (5.9)

The kth symbol is of interest:

y(kT) = ak u(0) + Σn≠k an u(kT − nT) + ν(kT)  (5.10)

where the sum is over all n ≠ k. The effect of old symbols (possibly even future symbols, relative to the sampling instant) lingers and affects the performance of the receiver. Since the channel is bandlimited, it provides memory for the transmission system. ISI can be eliminated or controlled by proper design of the transmitter and receiver filters.
(5.11)
Xt = Σn=−∞..∞ an s(t − nT)  (5.12)

We can design s(t) such that

u(nT) = { large, if n = 0 ; zero or small, if n ≠ 0 }  (5.13)

where u is the pulse at the output of the matched filter. Then

y(kT) = ak u(0) + Σn≠k an u(kT − nT) + ν(kT)

(ISI is the sum term, and once again, the signal s(t) can be designed to have reduced ISI.)
The output yt is sampled at the symbol instants:

y(kT) = Σn=−∞..∞ an u(kT − nT) + ν(kT)  (5.14)

By observing y(T), y(2T), ..., each data symbol is observed repeatedly across several samples. Therefore, ISI can be viewed as a form of memory introduced by the channel, which must be controlled or equalized.
A transfer function belonging to the Nyquist class (zero ISI at the sampling times) is the raised cosine, given by

H(f) = 1,  for |f| < 2W0 − W
H(f) = cos²( (π/4) (|f| + W − 2W0) / (W − W0) ),  for 2W0 − W < |f| < W  (1a)
H(f) = 0,  for |f| > W

with impulse response

h(t) = 2W0 sinc(2W0 t) cos(2π(W − W0)t) / (1 − [4(W − W0)t]²)  (1b)

where W is the absolute bandwidth and W0 = 1/2T represents the minimum Nyquist bandwidth for the rectangular spectrum and the −6 dB bandwidth (or half-amplitude point) for the raised-cosine spectrum. W − W0 is termed the "excess bandwidth". The roll-off factor is defined to be r = (W − W0)/W0, where 0 ≤ r ≤ 1. With the Nyquist constraint W0 = Rs/2, the required bandwidth is

W = (1/2)(1 + r) Rs
Figure 5.3: Raised-cosine filter characteristics. (a) System transfer function. (b) System impulse response

When r = 1, the excess bandwidth is 100%, and the system can provide a symbol rate of Rs symbols/s within a bandwidth of Rs hertz (twice the Nyquist minimum bandwidth), thus yielding a symbol-rate packing of 1 symbol/s/Hz.
The larger the filter roll-off, the shorter will be the pulse tail. Small tails exhibit less sensitivity to timing errors and thus make for small degradation due to ISI. The smaller the filter roll-off, the smaller will be the excess bandwidth. The cost is longer pulse tails, larger pulse amplitudes, and thus greater sensitivity to timing errors.
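The zero-ISI property of the raised-cosine pulse of equation (1b) can be verified numerically. The values T = 1 and r = 0.35 below are illustrative assumptions; note that h(t) has a removable singularity where 4(W − W0)t = ±1, which the sample points here do not hit.

```python
import numpy as np

# Raised-cosine impulse response of equation (1b), with W0 = 1/(2T) and
# W = W0 (1 + r). T = 1 and r = 0.35 are illustrative assumptions.
T = 1.0
W0 = 1 / (2 * T)

def raised_cosine(t, r):
    W = W0 * (1 + r)
    num = np.cos(2 * np.pi * (W - W0) * t)
    den = 1 - (4 * (W - W0) * t) ** 2
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(2 W0 t) in the text's notation
    # is np.sinc(2 * W0 * t)
    return 2 * W0 * np.sinc(2 * W0 * t) * num / den

k = np.arange(1, 6) * T
print(raised_cosine(k, 0.35))                # ~0 at every nonzero multiple of T
print(raised_cosine(np.array([0.0]), 0.35))  # peak value 2 W0 = 1/T
```

The zeros at t = ±T, ±2T, ... hold for any roll-off r, which is exactly the zero-ISI condition (5.13).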
Recall that the raised-cosine frequency transfer function H(f) describes the composite filtering, including the transmitting filter, channel filter and receiving filter. The filtering at the receiver is chosen so that the overall transfer function is a form of raised cosine. Often this is accomplished by choosing both the receiving filter and the transmitting filter so that each has a transfer function known as a root raised cosine. Neglecting any channel-induced ISI, the product of these root-raised-cosine functions yields the composite raised-cosine system transfer function.
Error-performance degradation can be classified into two groups. The first one is due to a decrease in received signal power or an increase in noise or interference power, giving rise to a loss in signal-to-noise ratio Eb/N0.
Figure 5.4: PB versus Eb/N0 characteristic
Figure 5.4 shows the ideal theoretical performance corresponding to the solid-line curve. Suppose that after the system is configured, the performance does not follow the theoretical curve, but in fact follows the dashed-line plot (1). This is a loss in Eb/N0 due to some signal losses or an increased level of noise or interference. This loss in Eb/N0 is not so terrible when compared with the possible effects of degradation caused by a distortion mechanism, corresponding to the dashed-line plot (2). Instead of suffering a simple loss in signal-to-noise ratio, there is a degradation effect brought about by ISI: the curve levels off at an irreducible error floor, and if there is no solution to the distortion problem, no amount of additional Eb/N0 will improve it.
An eye pattern is the display that results from measuring a system's response to baseband signals in a prescribed way.
Figure 5.5:
Eye pattern
Figure 5.5 describes the eye pattern that results for binary pulse signalling. The width of the eye opening indicates the time over which sampling for detection might be performed. The optimum sampling time corresponds to the maximum eye opening, yielding the greatest protection against noise. If there were no filtering in the system, then the pattern would look like a box rather than an eye. In the figure, DA, the range of amplitude differences at the sampling time, is a measure of distortion caused by ISI; JT, the range of time differences of the zero crossings, is a measure of the timing jitter; MN is a measure of the noise margin; and ST is a measure of the sensitivity to timing error. In general, the most frequent use of the eye pattern is for qualitatively assessing the extent of the ISI. As the eye closes, ISI increases; as the eye opens, ISI decreases.
A training sequence used for equalization is often chosen to be a noise-like sequence, which is needed to estimate the channel frequency response. In the simplest sense, the training sequence might be a single narrow pulse, but a pseudonoise (PN) signal is preferred in practice because the PN signal has larger average power and hence larger SNR for the same peak transmitted power.
Figure 5.6:
Consider that a single pulse was transmitted over a system designed to have a raised-cosine transfer function. Also consider that the channel induces ISI, so that the received demodulated pulse exhibits distortion, as shown in figure 5.6, such that the pulse sidelobes do not go through zero at sample times. To achieve the desired raised-cosine transfer function, the equalizing filter should have a frequency response

He(f) = 1/Hc(f) = (1/|Hc(f)|) e^(−jθc(f))  (1)

In other words, we would like the equalizing filter to generate a set of canceling echoes. The transversal filter, illustrated in figure 5.7, is the most popular form of an easily adjustable equalizing filter. It consists of a delay line with T-second taps (where T is the symbol duration). The tap weights could be chosen to force the system impulse response to zero at all but one of the sampling times, thus making He(f) correspond exactly to the inverse of the channel transfer function Hc(f).
Figure 5.7: Transversal filter
Consider a transversal filter with 2N + 1 taps and weights c−N, c−N+1, ..., cN. The output samples z(k) are the convolution of the input samples x(k) and the tap weights cn:

z(k) = Σn=−N..N x(k − n) cn,  k = −2N, ..., 2N  (2)

Defining the vector z of output samples z(−2N), ..., z(2N), the vector c of tap weights c−N, ..., cN, and the (4N + 1) × (2N + 1) matrix x whose rows are shifted copies of the received samples x(−N), ..., x(N) padded with zeros, we can describe the relationship among z(k), x(k) and cn more compactly as

z = x c  (3a)

Whenever the matrix x is square, we can find c by solving

c = x⁻¹ z  (3b)

Notice that the index k was arbitrarily chosen to allow for 4N + 1 sample points, so the vectors z and c have dimensions 4N + 1 and 2N + 1, respectively. Such equations can be solved in a deterministic way known as the zero-forcing solution, or in a statistical way known as the minimum mean-square error (MSE) solution.
Zero-Forcing Solution
At first, by disposing of the top N rows and bottom N rows, the matrix x is transformed into a square matrix of dimension 2N + 1 by 2N + 1. Then the equation c = x⁻¹ z is used to solve for the set of 2N + 1 weights cn. This solution minimizes the peak ISI distortion by choosing the weights so that the equalizer output is forced to zero at N sample points on either side of the desired pulse:

z(k) = { 1, for k = 0 ; 0, for k = ±1, ±2, ..., ±N }  (4)
For such an equalizer with finite length, the peak distortion is guaranteed to be minimized only if the eye pattern is initially open. However, for high-speed transmission and channels introducing much ISI, the eye is often closed before equalization. Since the zero-forcing equalizer neglects the effect of noise, it is not always the best system solution.
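The zero-forcing computation above can be sketched in a few lines. The received-pulse samples x(−1), x(0), x(1) below are made-up illustration values, with N = 1 (three taps).

```python
import numpy as np

# Zero-forcing transversal equalizer sketch with N = 1 (2N+1 = 3 taps).
# The received-pulse samples x(-1), x(0), x(1) are made-up illustration values.
N = 1
x = {-1: 0.1, 0: 1.0, 1: -0.2}

# Square (2N+1) x (2N+1) convolution matrix, rows k = -N..N, cols n = -N..N,
# with entries x(k - n), so that z = X c.
X = np.array([[x.get(k - n, 0.0) for n in range(-N, N + 1)]
              for k in range(-N, N + 1)])

z = np.zeros(2 * N + 1)
z[N] = 1.0                        # force z(0) = 1 and z(+-1) = 0, as in (4)

c = np.linalg.solve(X, z)         # tap weights c = X^{-1} z
print(X @ c)                      # -> [0, 1, 0] up to round-off
```

The solve step is exactly the c = x⁻¹ z of equation (3b), restricted to the square middle portion of the convolution matrix.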
The minimum-MSE solution instead chooses the tap weights cn to minimize the mean-square error (MSE) of all the ISI terms plus the noise power at the output of the equalizer. MSE is defined as the expected value of the squared difference between the desired data symbol and the estimated data symbol. By multiplying both sides of equation (3a) by x^T, we have

x^T z = x^T x c  (5)

and

R_xz = R_xx c  (6)

where R_xz = x^T z is called the cross-correlation vector and R_xx = x^T x is called the autocorrelation matrix of the input noisy signal. In practice, R_xz and R_xx are unknown, but they can be approximated by transmitting a test signal and using time-average estimates to solve for the tap weights from equation (6) as follows:

c = R_xx⁻¹ R_xz
Most high-speed telephone-line modems use an MSE weight criterion because it is superior to a zero-forcing criterion; it is more robust in the presence of noise and large ISI.
The basic limitation of a linear equalizer, such as the transversal filter, is its poor performance on channels having spectral nulls. A decision feedback equalizer (DFE) is a nonlinear equalizer that uses previous detector decisions to eliminate the ISI on pulses that are currently being demodulated. In other words, the distortion on a current pulse that was caused by previous pulses is subtracted.
Figure 5.8:
Figure 5.8 shows a simplified block diagram of a DFE, where the forward filter and the feedback filter can each be a linear filter, such as a transversal filter. The nonlinearity of the DFE stems from the nonlinear characteristic of the detector that provides an input to the feedback filter. The basic idea of a DFE is that if the values of the symbols previously detected are known, then the ISI contributed by these symbols can be canceled out exactly at the output of the forward filter by subtracting past symbol values with appropriate weighting. The forward and feedback tap weights can be adjusted simultaneously to fulfill a criterion such as minimizing the MSE. The advantage of a DFE implementation is that the feedback filter, which is additionally working to remove ISI, operates on noiseless quantized levels, and thus its output is free of channel noise.
Another type of equalization, capable of tracking a slowly time-varying channel response, is known as adaptive equalization. It can be implemented to perform tap-weight adjustments periodically or continually. Periodic adjustments are accomplished by periodically transmitting a preamble or short training sequence of digital data known by the receiver. Continual adjustments are accomplished by replacing the known training sequence with a sequence of data symbols estimated from the equalizer output and treated as known data. When performed continually and automatically in this way, the adaptive procedure is referred to as decision directed. If the probability of error exceeds one percent, the decision-directed equalizer might not converge. A common solution to this problem is to initialize the equalizer with an alternate process, such as a preamble,
to provide good channel-error performance, and then switch to decision-directed mode.

The simultaneous equations described in equation (3) of the module Transversal Equalizer do not include the effects of channel noise. To obtain a stable solution for the filter weights, it is necessary that the data be averaged to obtain stable signal statistics, or the noisy solutions obtained from the noisy data must be averaged. The most robust algorithm that averages noisy solutions is the least-mean-square (LMS) algorithm. Each iteration of this algorithm uses a noisy estimate of the error gradient to adjust the weights in the direction that reduces the average mean-square error. The noisy gradient is simply the product e(k) rx of an error scalar e(k) and the received signal vector rx, where

e(k) = z(k) − ẑ(k)  (1)

z(k) and ẑ(k) are the desired output signal (a sample free of ISI) and the estimate at time k, and

ẑ(k) = c^T rx = Σn=−N..N x(k − n) cn  (2)

where c^T is the transpose of the weight vector at time k. The iterative process that updates the set of weights is

c(k + 1) = c(k) + Δ e(k) rx

where Δ is the step size, which controls the rate of convergence of the algorithm as well as the variance of the steady-state solution. Stability is assured if the parameter Δ is chosen small enough.
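The LMS iteration can be sketched directly from the update c ← c + Δ e(k) rx. The channel taps, training length, decision delay, and step size below are illustrative assumptions, not values from the text.

```python
import numpy as np

# LMS adaptive-equalizer sketch: each iteration updates the tap weights as
# c <- c + delta * e(k) * r_x. The channel taps, training length, decision
# delay, and step size are illustrative assumptions.
rng = np.random.default_rng(1)
N = 3                                   # 2N+1 = 7 equalizer taps
channel = np.array([0.2, 1.0, -0.3])    # assumed mild-ISI channel
delta = 0.02                            # step size

a = rng.choice([-1.0, 1.0], size=5000)       # known training symbols
x = np.convolve(a, channel)[: len(a)]        # received (noiseless) samples

c = np.zeros(2 * N + 1)
c[N] = 1.0                                   # start as a pass-through filter
D = N + 1                                    # overall delay: equalizer center + channel main tap

for k in range(2 * N, len(a)):
    r_x = x[k - 2 * N : k + 1][::-1]         # current tap contents
    e = a[k - D] - c @ r_x                   # error scalar e(k)
    c = c + delta * e * r_x                  # LMS update
```

After training, hard decisions on the equalizer output c^T rx recover the transmitted symbols with small residual error.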
Chapter 6
Chapter 5: Channel Coding
In the previous section, we discussed information sources and quantified information. We also discussed how to represent (and compress) information sources in binary symbols in an efficient manner. In this section, we consider channels and will find out how much information can be sent through the channel reliably. We will first consider simple channels where the input is a discrete random variable and the output is also a discrete random variable. These discrete channels could represent analog channels with modulation, demodulation and detection.
Figure 6.1
The channel input is a vector of discrete random variables

X = (X1, X2, ..., Xn)^T  (6.1)

where Xi ∈ X, the input alphabet.
The channel output is

Y = (Y1, Y2, Y3, ..., Yn)^T  (6.2)

where Yi ∈ Y, the output alphabet. A discrete channel is memoryless if, for all x ∈ Xⁿ and for all y ∈ Yⁿ,

pY|X(y|x) = Πi=1..n pYi|Xi(yi|xi)  (6.3)
Example 6.1
A binary symmetric channel (BSC) is a discrete memoryless channel with binary input and binary output, and pY|X(0|1) = pY|X(1|0) = ε. As an example, a channel with antipodal signaling and a matched filter receiver has probability of error Q(√(2Es/N0)). Since the error is symmetric with respect to the transmitted bit,

pY|X(0|1) = pY|X(1|0) = Q( √(2Es/N0) ) = ε  (6.4)
Figure 6.2
It is interesting to note that every time a BSC is used, one bit is sent across the channel with probability of error ε. The question is how much information, or how many bits, can be sent per channel use reliably. Before we consider the above question, a few definitions are essential. These are discussed in Mutual Information (Section 6.2).
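The equivalence between antipodal signaling in AWGN and a BSC can be illustrated with a quick Monte Carlo sketch. The choice Es = N0 = 1 is an arbitrary illustration.

```python
import math
import random

# Monte Carlo sketch: antipodal signaling over AWGN with a matched-filter
# receiver behaves as a BSC with crossover probability eps = Q(sqrt(2Es/N0)).
# Es = N0 = 1 is an arbitrary illustrative choice.
random.seed(0)
Es, N0 = 1.0, 1.0
sigma = math.sqrt(N0 / 2)                    # noise standard deviation
Q = lambda v: 0.5 * math.erfc(v / math.sqrt(2))

trials, errors = 200_000, 0
for _ in range(trials):
    bit = random.randint(0, 1)
    s = math.sqrt(Es) if bit else -math.sqrt(Es)
    r = s + random.gauss(0.0, sigma)         # matched-filter output
    errors += (r > 0) != (bit == 1)          # threshold detector error

eps_hat = errors / trials
print(eps_hat, Q(math.sqrt(2 * Es / N0)))    # both near 0.079
```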
The joint entropy of two discrete random variables is

H(X, Y) = −Σx Σy pXY(x, y) log2 pXY(x, y)  (6.5)

and the mutual information between them is

I(X; Y) = H(Y) − H(Y|X)  (6.6)

Mutual information is a useful concept to measure the amount of information shared between the input and the output of a noisy channel.
In our previous discussions it became clear that when the channel is noisy there may not be reliable communications. Therefore, the limiting factor could very well be reliability when one considers noisy channels. Claude E. Shannon in 1948 changed this paradigm and stated a theorem that presents the rate (speed of communication) as the limiting factor as opposed to reliability.
Example 6.2
Consider a discrete memoryless channel with four possible inputs and outputs.
Figure 6.3
Every time the channel is used, one of the four symbols will be transmitted. Therefore, 2 bits are sent per channel use. The system, however, is very unreliable. For example, if "a" is received, the receiver cannot determine, reliably, if "a" was transmitted or "d". However, if the transmitter and receiver agree to only use symbols "a" and "c" and never use "b" and "d", then the transmission will always be reliable, but 1 bit is sent per channel use. Therefore, the rate of transmission was the limiting factor and not reliability. This is the essence of Shannon's noisy channel coding theorem: use only those inputs whose corresponding outputs are disjoint (i.e., far apart). The concept is appealing, but does not seem possible with binary channels, since the input is either zero or one. It may work if one considers a vector of binary inputs, referred to as the extension channel.
Figure 6.4
This module provides a description of the basic information necessary to understand Shannon's Noisy Channel Coding Theorem (Section 6.4). However, for additional information on typical sequences, please refer to Typical Sequences (Section 6.3).
Consider binary sequences of length n. If x is different from y in αn places, then the Hamming distance between them is dH(x, y) = αn. The number of sequences of length n that differ from a given sequence at αn places is

(n choose αn) = n! / ((αn)! (n − αn)!)  (6.9)
Example 6.3
x = (0, 0, 0)^T, α = 1/3 and n = 3, so αn = 3 · (1/3) = 1. The number of output sequences different from x by one element is 3!/(1! 2!) = (3 · 2 · 1)/(1 · 2) = 3, given by (1, 0, 0)^T, (0, 1, 0)^T, and (0, 0, 1)^T.

Using Stirling's approximation

n! ≈ nⁿ e⁻ⁿ √(2πn)  (6.10)
(6.10)
we can approximate

(n choose αn) ≈ 2^(n((−α log2 α) − (1 − α) log2 (1 − α))) = 2^(n Hb(α))  (6.11)

where Hb(α) ≡ (−α log2 α) − (1 − α) log2 (1 − α) is the entropy of a binary memoryless source. For any x there are 2^(n Hb(ε)) highly probable outputs that correspond to this input. Consider the output vector Y as a very long random vector with entropy nH(Y). As discussed earlier (Example 3.1), the number of typical (highly probable) sequences is roughly 2^(nH(Y)). Therefore, 2ⁿ is the total number of binary sequences, 2^(nH(Y)) is the number of typical sequences, and 2^(n Hb(ε)) is the number of elements in a group of possible outputs for one input vector. The maximum number of input sequences that produce nonoverlapping output sequences is

M = 2^(nH(Y)) / 2^(n Hb(ε))  (6.12)
Figure 6.5
= 2^(n(H(Y) − Hb(ε)))  (6.13)

The number of information bits that can be sent across the channel reliably per n channel uses is therefore n(H(Y) − Hb(ε)), and the maximum reliable transmission rate per channel use is

R = n(H(Y) − Hb(ε)) / n = H(Y) − Hb(ε)  (6.14)
Note that Hb(ε) is fixed by the channel crossover probability and cannot be minimized any further; the rate is maximized by maximizing H(Y). The entropy of the channel output is the entropy of a binary random variable. If the input is chosen to be uniformly distributed with pX(0) = pX(1) = 1/2, then
pY(0) = (1 − ε) pX(0) + ε pX(1) = 1/2  (6.15)

and

pY(1) = (1 − ε) pX(1) + ε pX(0) = 1/2  (6.16)

Then H(Y) takes its maximum value of 1, resulting in a maximum rate R = 1 − Hb(ε) when pX(0) = pX(1) = 1/2. This result says that, ordinarily, if one needs the probability of error to reach zero, then one should reduce the transmission of information to 1 − Hb(ε) bits per channel use. The conditional entropy of the output given the input is
H(Y|X) = pX(0) H(Y|X = 0) + pX(1) H(Y|X = 1)
       = pX(0) (−(1 − ε) log2 (1 − ε) − ε log2 ε) + pX(1) (−(1 − ε) log2 (1 − ε) − ε log2 ε)
       = −(1 − ε) log2 (1 − ε) − ε log2 ε
       = Hb(ε)  (6.17)

so the maximum rate can be written as

R = H(Y) − H(Y|X) = I(X; Y)  (6.18)
Example 6.4
The maximum reliable rate for a BSC is 1 − Hb(ε). The rate is 1 when ε = 0 or ε = 1, and the rate is 0 when ε = 1/2.
Figure 6.6
This module provides background information necessary for an understanding of Shannon's Noisy Channel Coding Theorem (Section 6.4). It is also closely related to material presented in Mutual Information (Section 6.2).
It is highly recommended that the information presented in Mutual Information (Section 6.2) and in Typical Sequences (Section 6.3) be reviewed before proceeding with this document. An introductory module on the .
Theorem 6.1: Shannon's Noisy Channel Coding Theorem
The capacity of a discrete memoryless channel is

C = max over pX(x) of I(X; Y)  (6.19)

where the maximization is over all input distributions, X is the input and Y is the output of the channel. If the transmission rate R is less than C, then for any ε > 0 there exists a code with block length n large enough whose error probability is less than ε. If R > C, the error probability of any code with any block length is bounded away from zero.
Example 6.5
If we have a binary symmetric channel with crossover probability 0.1, then the capacity is C ≈ 0.5 bits per transmission. Therefore, it is possible to send 0.4 bits per channel use through the channel reliably. This means that we can take 400 information bits and map them into a code of length 1000 bits. Then the whole code can be transmitted over the channel. One hundred of those bits may be detected incorrectly, but the 400 information bits may still be decoded correctly.

Before we consider continuous-time additive white Gaussian channels, let's concentrate on discrete-time Gaussian channels
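The capacity quoted in Example 6.5 follows directly from C = 1 − Hb(ε); a small sketch:

```python
import math

# Capacity of a BSC, C = 1 - Hb(eps). The eps = 0.1 case matches Example 6.5.
def Hb(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(eps):
    return 1.0 - Hb(eps)

print(bsc_capacity(0.1))   # ~0.531 bits per channel use
print(bsc_capacity(0.0))   # 1.0: noiseless channel
print(bsc_capacity(0.5))   # 0.0: useless channel
```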
Yi = Xi + ηi

where the ηi's are i.i.d. Gaussian random variables with variance σ². The input Xi's are information-bearing random variables constrained to have power less than P:

(1/n) Σi=1..n Xi² ≤ P  (6.21)

Consider an n-dimensional view of the problem, with

Y = X + η  (6.22)
For large n, by the law of large numbers,

(1/n) Σi=1..n ηi² = (1/n) Σi=1..n (|yi − xi|)² ≈ σ²  (6.23)

This means that, as n approaches infinity, Y will be located in an n-dimensional sphere of radius √(nσ²) centered about X, since (|y − x|)² ≈ nσ². Since the Xi's are power constrained and the ηi's and Xi's are independent,

(1/n) Σi=1..n yi² ≤ P + σ²  (6.24)

|Y| ≤ √(n(P + σ²))  (6.25)

This means Y is in a sphere of radius √(n(P + σ²)) centered around the origin.
How many X's can we transmit so that the corresponding Y spheres do not overlap? Equivalently, how many noise spheres of radius √(nσ²) fit in a sphere of radius √(n(P + σ²))?

M = (√(n(σ² + P)))ⁿ / (√(nσ²))ⁿ = (1 + P/σ²)^(n/2)  (6.26)
Figure 6.7
Exercise 6.1  (Solution on p. 99.)
How many bits of information can one send in n uses of the channel?

The capacity of a discrete-time Gaussian channel is

C = (1/2) log2 (1 + P/σ²)

bits per channel use.
When the channel is a continuous-time, bandlimited, additive white Gaussian channel with noise power spectral density N0/2, input power constraint P and bandwidth W, one can sample the received signal at the Nyquist rate 2W samples/sec, providing power per sample P and noise power

σ² = ∫ from −W to W of (N0/2) df = W N0  (6.27)

The channel capacity per sample is then

C = (1/2) log2 (1 + P/(N0 W))  (6.28)

bits per sample. Since there are 2W samples per second, the capacity is

C = (2W/2) log2 (1 + P/(N0 W)) = W log2 (1 + P/(N0 W))  (6.29)

bits/sec.
Example 6.6
The capacity of the voice band of a telephone channel can be determined using the Gaussian model. The bandwidth is 3000 Hz and the signal-to-noise ratio is often 30 dB. Therefore,

C = 3000 log2 (1 + 1000) ≈ 30000 bits/sec  (6.30)

One should not expect to design modems faster than 30 kbits/s using this model of telephone channels. It is also interesting to note that since the signal-to-noise ratio is large, we are able to transmit nearly 10 bits/second/Hertz across telephone channels.
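The number in equation (6.30) can be reproduced directly:

```python
import math

# Bandlimited AWGN capacity C = W log2(1 + P/(N0 W)) applied to the
# telephone-channel numbers of Example 6.6: W = 3000 Hz, SNR = 30 dB.
W = 3000.0
snr = 10 ** (30 / 10)              # 30 dB -> 1000
C = W * math.log2(1 + snr)
print(C)                           # ~29900 bits/sec
print(C / W)                       # ~10 bits/sec/Hz
```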
Channel coding is a viable method to reduce the information rate through the channel and increase reliability. This goal is achieved by adding redundancy to the information symbol vector, resulting in a longer coded vector of symbols that are distinguishable at the output of the channel. Another brief explanation of channel coding is offered in Channel Coding and the Repetition Code. We consider only two classes of codes: block codes (Section 6.5.1: Block codes) and convolutional codes (Section 6.6).
A block code maps blocks of information bits of length k into codewords of length n. The mapping is independent from previous blocks, that is, there is no memory from one block to another.
Example 6.7
k = 2 and n = 5.
(6.32)
(6.33)
(6.34)
There are 2^k information sequences of length k, each mapped to one of 2^k codewords of length n. A code is linear if ci ∈ C and cj ∈ C implies (ci ⊕ cj) ∈ C, where ⊕ is an elementwise modulo-2 addition. The Hamming distance between two codewords is

dH(ci, cj) = # of places in which ci and cj are different  (6.37)
Denote the codeword for the information sequence e1 = (1, 0, 0, ..., 0, 0)^T by g1, the codeword for e2 = (0, 1, 0, ..., 0, 0)^T by g2, ..., and the codeword for ek = (0, 0, 0, ..., 0, 1)^T by gk. Then any information sequence can be expressed as

u = (u1, ..., uk)^T = Σi=1..k ui ei  (6.38)

and the corresponding codeword could be

c = Σi=1..k ui gi  (6.39)

Therefore

c = uG  (6.40)

with c ∈ {0, 1}ⁿ and u ∈ {0, 1}^k, where

G = (g1 ; g2 ; ... ; gk)

is a k × n generator matrix whose rows are the gi's.
Example 6.8
In Example 6.7, with

g1 = (0, 1, 1, 1, 1)^T and g2 = (1, 0, 1, 0, 0)^T

the generator matrix is

G = ( 0 1 1 1 1 ; 1 0 1 0 0 )

and the four codewords are obtained as c = uG for u ∈ {(0,0), (0,1), (1,0), (1,1)}.
Additional information about coding efficiency and error-detection properties is provided in Block Channel Coding. Examples of good linear codes include Hamming codes, BCH codes, Reed-Solomon codes, and many more. The rate of these codes is defined as R = k/n.
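The encoding c = uG of Example 6.8 can be carried out directly; the sketch below enumerates all four codewords and checks linearity.

```python
import numpy as np

# Encoding c = u G (mod 2) with the generator matrix of Example 6.8.
G = np.array([[0, 1, 1, 1, 1],
              [1, 0, 1, 0, 0]])

codewords = {}
for u in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    codewords[u] = np.array(u) @ G % 2
    print(u, codewords[u])

# Linearity: the modulo-2 sum of any two codewords is again a codeword.
assert np.array_equal((codewords[(0, 1)] + codewords[(1, 0)]) % 2,
                      codewords[(1, 1)])
```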
Convolutional codes are one type of code used for channel coding (Section 6.5). Another type of code used is block coding (Section 6.5.1: Block codes). In convolutional codes, each block of n coded bits is determined not only by the present k information bits but also by the previous information bits. This dependence gives the code memory.

Example 6.9
A rate 1/2 convolutional coder with k = 1, n = 2 and a shift register of length 2.
Figure 6.8
Since the length of the shift register is 2, there are 4 different states, and the behavior of the convolutional coder can be captured by a 4-state machine whose transitions are driven by the arrival of each information bit. The encoding and the decoding process can be realized in a trellis structure.
Figure 6.9
If the input sequence is 1 1 0 0, the output sequence would be 11 10 10 11, and the transmitted codeword is then 11 10 10 11. Suppose the received sequence is 11 00 10 11, that is, the second pair contains an error.
Figure 6.10
Starting from state 00, the decoder computes the Hamming distance between the possible trellis paths and the received sequence. At the end, the path with minimum distance to the received sequence is chosen as the correct trellis path, and the information sequence is then determined. Convolutional coding lends itself to very efficient trellis-based encoding and decoding; convolutional codes are very practical and powerful codes.
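A sketch of a rate-1/2 encoder that reproduces the output sequence above. The generator taps g1 = 1 + D² and g2 = 1 + D + D² ((5, 7) in octal) are an assumption about the coder of figure 6.8; they are consistent with the quoted input/output pair.

```python
# Rate-1/2 convolutional encoder sketch. The generator taps g1 = 1 + D^2 and
# g2 = 1 + D + D^2 ((5, 7) in octal) are an assumption about the coder of
# figure 6.8; they reproduce the quoted output 11 10 10 11 for input 1 1 0 0.
def conv_encode(bits):
    s1 = s2 = 0                      # two-stage shift register -> 4 states
    out = []
    for u in bits:
        v1 = u ^ s2                  # g1 = 1 + D^2
        v2 = u ^ s1 ^ s2             # g2 = 1 + D + D^2
        out.append(f"{v1}{v2}")
        s1, s2 = u, s1               # shift the register
    return " ".join(out)

print(conv_encode([1, 1, 0, 0]))     # -> 11 10 10 11
```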
Solution to Exercise 6.1: one can send log2 M = (n/2) log2 (1 + P/σ²) bits in n uses of the channel.  (6.45)
Chapter 7
Chapter 6: Communication over Fading Channels
For most channels, where signals propagate in the atmosphere and near the ground, the free-space propagation model is inadequate to describe the channel behavior and predict system performance. In a wireless system, a signal can travel from transmitter to receiver over multiple reflective paths. This phenomenon can cause fluctuations in the received signal's amplitude, phase, and angle of arrival, giving rise to the terminology multipath fading. Another name, scintillation, is used to describe the fading caused by physical changes in the propagating medium, such as variations in the electron density of the ionospheric layers that reflect high-frequency radio signals. Both fading and scintillation refer to a signal's random fluctuations.
Figure 7.1:
Large-scale fading represents the average signal-power attenuation or path loss due to motion over large areas. This phenomenon is affected by prominent terrain contours (e.g. hills, forests, billboards, clumps of buildings, etc.) between the transmitter and receiver. Small-scale fading refers to the dramatic changes in signal amplitude and phase as a result of small changes (as small as half a wavelength) in the spatial positioning between a receiver and transmitter. Small-scale fading is called Rayleigh fading if there are multiple reflective paths and no line-of-sight signal component; otherwise it is called Rician. When a mobile radio roams over a large area, it must process signals that experience both types of fading: small-scale fading superimposed on large-scale fading. Large-scale fading (attenuation or path loss) can be considered as a spatial average over the small-scale fluctuations of the signal. There are three basic mechanisms that impact signal propagation in a mobile communication system: 1. Reflection occurs when a propagating electromagnetic wave impinges upon a smooth surface with very large dimensions relative to the RF signal wavelength. 2. Diffraction occurs when the propagation path between the transmitter and receiver is obstructed by a dense body with dimensions that are large relative to the RF signal wavelength. Diffraction accounts for RF energy traveling from transmitter to receiver without a line-of-sight path. It is often termed shadowing because the diffracted field can reach the receiver even when shadowed by an impenetrable obstruction. 3. Scattering occurs when a radio wave impinges on either a large, rough surface or any surface whose dimensions are on the order of the RF signal wavelength or less, causing the energy to be spread out or reflected in all directions.
Figure 7.2:
Figure 7.2 is a convenient pictorial showing the various contributions that must be considered when estimating path loss for link budget analysis in a mobile radio application: (1) mean path loss as a function of distance, due to large-scale fading, (2) near-worst-case variations about the mean path loss, or large-scale fading margin (typically 6-10 dB), and (3) near-worst-case Rayleigh or small-scale fading margin (typically 20-30 dB).

Using complex notation, the transmitted signal is

s(t) = Re{ g(t) e^(j2πfc t) }  (1)

where Re{.} denotes the real part of {.} and fc is the carrier frequency. The baseband waveform g(t) is called the complex envelope of s(t) and can be expressed as

g(t) = |g(t)| e^(jφ(t)) = R(t) e^(jφ(t))  (2)

where R(t) = |g(t)| is the envelope magnitude and φ(t) is its phase.
In a fading environment, g(t) is modified by a complex dimensionless multiplicative factor α(t) e^(jθ(t)). The envelope of the modified signal is

α(t) R(t) = m(t) r0(t) R(t)  (3)

where m(t) and r0(t) are called the large-scale-fading component and the small-scale-fading component of the envelope, respectively. Sometimes m(t) is referred to as the local mean or log-normal fading, and r0(t) is referred to as multipath or Rayleigh fading. For the case of mobile radio, figure 7.3 illustrates the relationship between α(t) and m(t). In figure 3a, the signal power received is a function of the multiplicative factor α(t); small-scale fading superimposed on large-scale fading can be readily identified. The typical antenna displacement between adjacent signal-strength nulls due to small-scale fading is approximately half of a wavelength. In figure 3b, the large-scale fading or local mean m(t) has been removed in order to view the small-scale fading r0(t). The log-normal fading is a relatively slow varying function of position, while the Rayleigh fading is a relatively fast varying function of position.
Figure 7.3:
In general, propagation models for both indoor and outdoor radio channels indicate that the mean path loss, L̄p(d), as a function of the distance d between transmitter and receiver is proportional to (d/d0)^n,

L̄p(d) ∝ (d/d0)^n  (2)

where the reference distance d0 corresponds to a point located in the far field of the transmit antenna. Typically, d0 is taken as 1 km for large cells, 100 m for micro cells, and 1 m for indoor channels. Moreover, L̄p(d0) is evaluated using the free-space relation

Ls(d0) = (4π d0 / λ)²  (3)

or by conducting measurements. The value of the path-loss exponent n depends on the frequency, antenna height and propagation environment. In free space, n is equal to 2. In the presence of a very strong guided wave phenomenon (like urban streets), n can be lower than 2. When obstructions are present, n is larger.

Measurements have shown that the path loss Lp(d) is a random variable having a log-normal distribution about the mean distant-dependent value:

Lp(d) (dB) = Ls(d0) (dB) + 10 n log10 (d/d0) + Xσ (dB)  (4)

where Xσ denotes a zero-mean Gaussian random variable (in dB) with standard deviation σ (in dB). Xσ is site and distance dependent.

As can be seen from the equation, the parameters needed to statistically describe path loss due to large-scale fading, for an arbitrary location with a specific transmitter-receiver separation, are (1) the reference distance d0, (2) the path-loss exponent n, and (3) the standard deviation σ of Xσ.
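Equations (3) and (4) can be sketched directly. The carrier frequency, reference distance, path-loss exponent n = 3, and shadowing deviation σ = 8 dB below are illustrative assumptions, not values from the text.

```python
import math
import random

# Sketch of the log-distance path-loss model of equation (4):
# Lp(d) [dB] = Ls(d0) [dB] + 10 n log10(d / d0) + X_sigma [dB].
# Frequency, d0, n and sigma are illustrative assumptions.
random.seed(0)
c_light = 3e8
f = 900e6                         # assumed carrier frequency
lam = c_light / f                 # wavelength
d0 = 100.0                        # reference distance (micro cell), meters
n = 3.0                          # assumed path-loss exponent
sigma = 8.0                      # assumed shadowing std dev, dB

# Free-space reference loss at d0, equation (3): Ls(d0) = (4 pi d0 / lambda)^2
Ls_d0 = 20 * math.log10(4 * math.pi * d0 / lam)

def path_loss_dB(d, shadowing=True):
    X = random.gauss(0.0, sigma) if shadowing else 0.0
    return Ls_d0 + 10 * n * math.log10(d / d0) + X

print(round(Ls_d0, 1))                                   # ~71.5 dB at d0
print(round(path_loss_dB(1000.0, shadowing=False), 1))   # ~101.5 dB mean at 1 km
```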
Small-scale fading refers to the dramatic changes in signal amplitude and phase that can be experienced as a result of small changes (as small as half a wavelength) in the spatial position between transmitter and receiver. In this section, we will develop the small-scale-fading component r0(t), under the assumption that the antenna remains within a limited trajectory so that the effect of large-scale fading m(t) is constant. Assume that the antenna is traveling and that there are multiple scatterer paths, each associated with a time-variant propagation delay τn(t) and a time-variant multiplicative factor αn(t). Neglecting noise, the received signal can be written
r (t) =
n (t) s (t n (t))(1)
Substituting Equation (1, module Characterizing Mobile-Radio Propagation) into Equation (1), we can write the received bandpass signal as follows:

r(t) = Re{ Σn αn(t) g(t − τn(t)) e^(j2πfc(t − τn(t))) }
     = Re{ [ Σn αn(t) e^(−j2πfc τn(t)) g(t − τn(t)) ] e^(j2πfc t) }   (2)

so that the equivalent received baseband signal is

z(t) = Σn αn(t) e^(−j2πfc τn(t)) g(t − τn(t))   (3)

Consider the transmission of an unmodulated carrier at frequency fc, that is, g(t) = 1 for all t. The received baseband signal then reduces to

z(t) = Σn αn(t) e^(−j2πfc τn(t)) = Σn αn(t) e^(jθn(t))   (4)

The baseband signal z(t) consists of a sum of time-variant components having amplitudes αn(t) and phases θn(t). Notice that θn(t) changes by 2π radians whenever τn(t) changes by 1/fc (a very small delay). These multipath components combine either constructively or destructively, resulting in amplitude variations of the received signal. Equation (4) is very important because it tells us that even though the bandpass signal s(t) is the signal that experiences the fading effects and gives rise to the received signal r(t), these effects can be described by analyzing r(t) at the baseband level.
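The sum of time-variant phasors in equation (4) can be simulated directly. The sketch below (with assumed, illustrative values for the number of paths, carrier frequency, and receiver speed) shows the envelope fluctuating as the components combine constructively and destructively:

```python
import cmath
import math
import random

random.seed(1)

N = 20          # number of scatterer paths (assumption)
fc = 900e6      # carrier frequency, Hz (assumption)
c = 3e8
v = 20.0        # receiver speed, m/s (assumption)

# Equal-power paths with random initial phases and random arrival angles,
# each angle producing a different Doppler shift of the path phase theta_n(t)
alphas = [1 / math.sqrt(N)] * N
thetas = [random.uniform(0, 2 * math.pi) for _ in range(N)]
dopplers = [(v * fc / c) * math.cos(random.uniform(0, 2 * math.pi)) for _ in range(N)]

def baseband(t):
    """z(t) of Eq. (4): a sum of phasors alpha_n * exp(j * theta_n(t))."""
    return sum(a * cmath.exp(1j * (th + 2 * math.pi * fd * t))
               for a, th, fd in zip(alphas, thetas, dopplers))

# Envelope |z(t)| over 200 ms: constructive/destructive combining causes fades
env = [abs(baseband(k * 1e-3)) for k in range(200)]
print(min(env), max(env))
```

With many independent paths and no dominant component, the envelope samples approach the Rayleigh behavior discussed next.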
Figure 7.4
When the received signal is made up of multiple reflective rays plus a significant line-of-sight (nonfaded) component, the received envelope amplitude has a Rician pdf as below, and the fading is referred to as Rician fading:

p(r0) = (r0 / σ²) exp( −(r0² + A²) / (2σ²) ) I0(r0 A / σ²)   for r0 ≥ 0, A ≥ 0
p(r0) = 0   otherwise   (5)
The parameter σ² is the pre-detection mean power of the multipath signal. A denotes the peak magnitude of the nonfaded (specular) signal component, and I0 is the modified Bessel function of the first kind and zero order. The Rician pdf is often described in terms of a parameter K, which is defined as the ratio of the power in the specular component to the power in the multipath signal, K = A²/(2σ²). When the magnitude of the specular component A approaches zero, the Rician pdf approaches a Rayleigh pdf, shown as

p(r0) = (r0 / σ²) exp( −r0² / (2σ²) )   for r0 ≥ 0
p(r0) = 0   otherwise   (6)
The Rayleigh pdf results from having no specular signal component; it represents the pdf associated with the worst case of fading per mean received signal power. Small-scale fading manifests itself in two mechanisms: time spreading of the signal (or signal dispersion) and time-variant behavior of the channel (Figure 2). It is important to distinguish between two different time references: delay time τ and transmission time t. Delay time refers to the time-spreading effect resulting from the fading channel's non-optimum impulse response. The transmission time, however, is related to the motion of the antenna or spatial changes, accounting for propagation-path changes that are perceived as the channel's time-variant behavior.
Figure 7.5
scattering. The model treats signals arriving at a receive antenna with different delays as uncorrelated. The multipath intensity profile S(τ) describes how the average received power varies as a function of time delay τ. Here, τ represents a signal's propagation delay in excess of the delay of the first signal arrival at the receiver. In a wireless channel, the received signal usually consists of several discrete multipath components, giving rise to S(τ). For a single transmitted impulse, the time Tm between the first and last received components represents the maximum excess delay.
Figure 7.6
The relationship between the maximum excess delay time Tm and the symbol time Ts can be viewed in terms of two different degradation categories: frequency-selective fading and frequency-nonselective or flat fading. A channel is said to exhibit frequency-selective fading if Tm > Ts. This condition occurs whenever the received multipath components of a symbol extend beyond the symbol's time duration. In fact, another name for this category of fading degradation is channel-induced ISI. In the case of frequency-selective fading, mitigating the distortion is possible because many of the multipath components are resolvable by the receiver. A channel is said to exhibit frequency-nonselective or flat fading if Tm < Ts. In this case, all of the received multipath components of a symbol arrive within the symbol time duration; hence, the components are not resolvable. There is no channel-induced ISI distortion because the signal time spreading does not result in significant overlap among neighboring received symbols.
The spaced-frequency correlation function |R(Δf)| is the Fourier transform of S(τ). It represents the correlation between the channel's response to two signals as a function of the frequency difference between the two signals. The function |R(Δf)| helps answer the question: what is the correlation between received signals that are spaced in frequency by Δf? It can be measured by transmitting a pair of sinusoids separated in frequency by Δf, cross-correlating the complex spectra of the two separately received signals, and repeating the process many times with ever-larger separation Δf. The coherence bandwidth f0 is a statistical measure of the range of frequencies over which the channel passes all spectral components with approximately equal gain and linear phase. Spectral components in that range are affected by the channel in a similar manner. Note that f0 and the maximum excess delay time Tm are related as the approximation below:

f0 ≈ 1 / Tm   (1)

A more useful parameter is the delay spread, most often characterized in terms of its root-mean-square (rms) value:

στ = ( mean(τ²) − (mean τ)² )^(1/2)   (2)

where στ is the square root of the second central moment of S(τ). An exact relationship between coherence bandwidth and delay spread does not exist. However, using Fourier transform techniques, an approximation can be derived from actual signal-dispersion measurements in various channels. Several approximate relationships have been developed. If the coherence bandwidth is defined as the frequency interval over which the channel's complex frequency transfer function has a correlation of at least 0.9, the coherence bandwidth is approximately

f0 ≈ 1 / (50 στ)   (3)
With the dense-scatterer channel model, the coherence bandwidth, defined as the frequency interval over which the channel's complex frequency transfer function has a correlation of at least 0.5, is found to be

f0 ≈ 1 / (2π στ)   (4)

Studies involving ionospheric effects often employ the following definition:

f0 ≈ 1 / (5 στ)   (5)

The delay spread and coherence bandwidth are related to a channel's multipath characteristics, differing for different propagation paths. It is important to note that all of the parameters in the last equations are independent of signaling speed; a system's signaling speed only influences its transmission bandwidth W. A channel is referred to as frequency-selective if f0 < W (where the signaling rate 1/Ts is nominally taken to be equal to the signal bandwidth W). Frequency-selective fading distortion occurs whenever a signal's spectral components are not all affected equally by the channel. Some of the signal's spectral components falling outside the coherence bandwidth will be affected differently, compared with those components contained within the coherence bandwidth (Figure 2(a)). Frequency-nonselective or flat-fading degradation occurs whenever f0 > W. In this case, all of the signal's spectral components will be affected by the channel in a similar manner (fading or no fading) (Figure 2(b)). Flat fading does not introduce channel-induced ISI distortion, but performance degradation can still be expected due to the loss in SNR whenever the signal is fading. In order to avoid channel-induced ISI distortion, the channel is required to exhibit flat fading. This occurs provided that

f0 > W ≈ 1/Ts   (6)

Hence, the channel coherence bandwidth f0 sets an upper limit on the transmission rate that can be used without incorporating an equalizer in the receiver. However, as a mobile radio changes its position, there will be times when the received signal experiences frequency-selective distortion even though f0 > W. When this occurs, the baseband pulse can be especially mutilated by deprivation of its low-frequency components. Thus, even though a channel is categorized as flat-fading, it still occasionally manifests frequency-selective fading.
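The coherence-bandwidth approximations and the flat-versus-selective criterion above can be captured in a few lines. This is a minimal sketch; the 2 μs rms delay spread used in the example call is an assumed, typical urban value:

```python
def coherence_bandwidth_hz(sigma_tau, correlation=0.9):
    """Approximate coherence bandwidth from the rms delay spread sigma_tau (s):
    f0 ~ 1/(50*sigma_tau) for 0.9 correlation (Eq. 3),
    f0 ~ 1/(5*sigma_tau)  for 0.5 correlation (Eq. 5)."""
    factor = 50 if correlation >= 0.9 else 5
    return 1.0 / (factor * sigma_tau)

def fading_category(f0, w):
    """Flat fading if f0 > W, frequency-selective otherwise (Eq. 6)."""
    return "flat" if f0 > w else "frequency-selective"

sigma_tau = 2e-6  # assumed rms delay spread: 2 microseconds
f0 = coherence_bandwidth_hz(sigma_tau, correlation=0.5)
print(f0, fading_category(f0, w=200e3))  # GSM-like transmission bandwidth
```

For a 200 kHz signal this channel is frequency-selective (f0 = 100 kHz < W), which anticipates the GSM equalizer discussion later in the section.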
Figure 7.7
A useful analogy for these fading categories is the manner in which an impulse response characterizes an electronic filter. Figure 3(a) depicts a wideband filter (narrow impulse response) and its effect on a signal in both the time domain and the frequency domain. This filter resembles a flat-fading channel, yielding an output that is relatively free of dispersion. Figure 3(b) shows a narrowband filter (wide impulse response). The output signal suffers much distortion, as shown in both the time domain and the frequency domain; this process resembles a frequency-selective channel.
Figure 7.8
Figure 1 highlights three major performance categories in terms of bit-error probability PB versus Eb /N0
Figure 7.9
The leftmost, exponentially shaped curve highlights the performance that can be expected when using any nominal modulation scheme in AWGN. Observe that at a reasonable Eb/N0 level, good performance can be expected. The middle curve, referred to as the Rayleigh limit, shows the performance degradation resulting from a loss in Eb/N0 that is characteristic of flat fading or slow fading when there is no line-of-sight signal component present. The curve is a function of the reciprocal of Eb/N0, so for practical values of Eb/N0, performance will generally be bad. The curve that reaches an irreducible error-rate level, sometimes called an error floor, represents awful performance, where the bit-error probability can level off at values nearly equal to 0.5. This shows the severe performance-degrading effects that are possible with frequency-selective fading or fast fading. If the channel introduces signal distortion as a result of fading, the system performance can exhibit an irreducible error rate at a level higher than the desired error rate. In such cases, the only approach available for improving performance is to use some form of mitigation to remove or reduce the signal distortion. The mitigation method depends on whether the distortion is caused by frequency-selective fading or fast fading. Once the signal distortion has been mitigated, the PB versus Eb/N0 performance can transition from the awful category to the merely bad Rayleigh-limit curve. Next, it is possible to further ameliorate the effects of fading and strive to approach AWGN system performance by using some form of diversity to provide the receiver with a collection of uncorrelated replicas of the signal, and by using a powerful error-correction code.
Figure 2 lists several mitigation techniques for combating the effects of both signal distortion and loss in SNR. The mitigation approaches to be used when designing a system should be considered in two basic steps: 1) choose the type of mitigation to reduce or remove any distortion degradation; 2) choose a diversity type that can best approach AWGN system performance.
Figure 7.10
Equalization can mitigate the effects of channel-induced ISI brought on by frequency-selective fading. It can help move the system performance from the curve that is awful to the one that is merely bad. The process of equalizing to mitigate ISI effects involves gathering the dispersed symbol energy back into its original time interval. An equalizer is in effect an inverse filter of the channel: if the channel is frequency-selective, the equalizer enhances the frequency components with small amplitudes and attenuates those with large amplitudes. The goal is for the combination of channel and equalizer filter to provide a flat composite received frequency response and linear phase. Because the channel response varies with time, the equalizer filters must track these variations; they must be adaptive equalizers.

The decision-feedback equalizer (DFE) has: 1) a feedforward section that is a linear transversal filter whose stage length and tap weights are selected to coherently combine virtually all of the current symbol's energy; and 2) a feedback section that removes energy remaining from previously detected symbols. The basic idea behind the DFE is that once an information symbol has been detected, the ISI that it induces on future symbols can be estimated and subtracted before the detection of subsequent symbols.
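The tap-adaptation idea behind an adaptive equalizer can be sketched with a small LMS-trained linear transversal filter (essentially the feedforward portion described above, without the feedback section). The channel taps, filter length, and step size below are illustrative assumptions:

```python
import random

random.seed(0)

channel = [1.0, 0.5, 0.2]   # assumed dispersive channel taps (causes ISI)
n_taps, mu = 7, 0.02        # equalizer length and LMS step size (assumptions)

def convolve(x, h):
    """Causal convolution, truncated to the length of x."""
    y = [0.0] * len(x)
    for i in range(len(x)):
        for k, hk in enumerate(h):
            if i - k >= 0:
                y[i] += hk * x[i - k]
    return y

# Known training symbols (playing the role of a midamble): random +/-1 sequence
train = [random.choice((-1.0, 1.0)) for _ in range(2000)]
received = convolve(train, channel)

# LMS adaptation: w <- w + mu * error * input_vector
w = [0.0] * n_taps
for i in range(n_taps, len(received)):
    window = received[i - n_taps + 1:i + 1][::-1]   # most recent sample first
    y = sum(wk * xk for wk, xk in zip(w, window))
    err = train[i] - y                              # desired minus equalized output
    w = [wk + mu * err * xk for wk, xk in zip(w, window)]

# After convergence, the equalized output should track the training symbols
final_err = abs(train[-1] - sum(wk * xk
                                for wk, xk in zip(w, received[-n_taps:][::-1])))
print(final_err)
```

As the channel varies, a real receiver would keep running this update so the taps track the time-variant response, which is what "adaptive" means here.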
A maximum-likelihood sequence estimation (MLSE) equalizer tests all possible data sequences and chooses the data sequence that is the most probable of all the candidates. The MLSE is optimal in the sense that it minimizes the probability of a sequence error. Since the MLSE equalizer is implemented by using the Viterbi decoding algorithm, it is often referred to as the Viterbi equalizer.
Direct-sequence spread-spectrum (DS/SS) techniques can be used to mitigate frequency-selective ISI distortion because the hallmark of spread-spectrum systems is their capability of rejecting interference, and ISI is a type of interference. Consider a DS/SS binary communication channel comprising one direct path and one reflected path, where the multipath wave is delayed by τ relative to the direct wave. Neglecting noise, the received signal r(t) can be expressed as follows:

r(t) = A x(t) g(t) cos(2π fc t) + α A x(t − τ) g(t − τ) cos(2π fc t + θ)

where x(t) is the data signal, g(t) is the pseudonoise (PN) spreading code, and τ is the differential time delay between the two paths. The angle θ is a random phase, assumed to be uniformly distributed in the range (0, 2π), and α is the attenuation of the multipath signal relative to the direct-path signal. The receiver multiplies the incoming r(t) by the code g(t). If the receiver is synchronized to the direct-path signal, multiplication by the code signal yields the following:

r(t) g(t) = A x(t) g²(t) cos(2π fc t) + α A x(t − τ) g(t) g(t − τ) cos(2π fc t + θ)

where g²(t) = 1. If τ is greater than the chip duration, then

| ∫ g(t) g(t − τ) dt |  <<  | ∫ g²(t) dt |

over some appropriate interval of integration (correlation). Thus, the spread-spectrum system effectively eliminates the multipath interference by virtue of its code-correlation receiver. Even though channel-induced ISI is typically transparent to DS/SS systems, such systems suffer from the loss in energy contained in the multipath components rejected by the receiver. The need to gather this lost energy belonging to a received chip was the motivation for developing the Rake receiver.
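The correlation inequality above is easy to demonstrate numerically. The sketch below uses a random ±1 chip sequence as a stand-in for a true PN code (which would normally come from a shift-register generator); the sharp correlation contrast is the same:

```python
import random

random.seed(7)

# A +/-1 pseudonoise-like chip sequence. A real PN code would come from an
# LFSR; a long random +/-1 sequence exhibits the same correlation behavior.
n = 1023
g = [random.choice((-1, 1)) for _ in range(n)]

def correlation(code, shift):
    """Normalized circular correlation of the code with a shifted copy of itself."""
    return sum(code[i] * code[(i + shift) % len(code)]
               for i in range(len(code))) / len(code)

aligned = correlation(g, 0)   # synchronized path: correlation is exactly 1
delayed = correlation(g, 1)   # path delayed by one chip: correlation near zero
print(aligned, abs(delayed))
```

The one-chip-delayed copy correlates to nearly zero, which is exactly why a code-correlation receiver suppresses multipath arriving more than a chip late.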
A channel that is classified as flat fading can occasionally exhibit frequency-selective distortion when a null of the channel's frequency-transfer function occurs at the center of the signal band. The use of DS/SS is a practical way of mitigating such distortion because the wideband SS signal can span many lobes of the selectively faded channel frequency response. This requires the spread-spectrum bandwidth Wss (or the chip rate Rch) to be greater than the coherence bandwidth f0.
Frequency-hopping spread-spectrum (FH/SS) can be used to mitigate the distortion caused by frequency-selective fading, provided that the hopping rate is at least equal to the symbol rate. FH receivers avoid the degradation effects due to multipath by rapidly changing the transmitter carrier-frequency band, thus avoiding the interference by changing the receiver band position before the arrival of the multipath signal.

Orthogonal frequency-division multiplexing (OFDM) can be used for signal transmission in frequency-selective fading channels to avoid the use of an equalizer by lengthening the symbol duration. The approach is to partition (demultiplex) a high symbol-rate sequence into N groups, so that each group contains a sequence of a lower symbol rate (by the factor 1/N) than the original sequence. The signal band is made up of N orthogonal carrier waves, and each one is modulated by a different symbol group. The goal is to reduce the signaling rate W ≈ 1/Ts on each carrier to be less than the channel's coherence bandwidth f0.

A pilot signal is the name given to a signal intended to facilitate the coherent detection of waveforms.
Pilot signals can be implemented in the frequency domain as in-band tones, or in the time domain as digital sequences that can also provide information about the channel state and thus improve performance in fading conditions.
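The OFDM partitioning idea can be sketched with a toy transmitter/receiver built from a plain DFT pair (a small N = 8 block is assumed purely for illustration):

```python
import cmath

N = 8                                    # number of subcarriers (small, for illustration)
symbols = [1, -1, 1, 1, -1, 1, -1, -1]   # one BPSK symbol per subcarrier

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# Transmitter: map the symbol group onto N orthogonal carriers (inverse DFT),
# lengthening each symbol's duration by the factor N.
tx = idft(symbols)

# Receiver: the DFT recovers each subcarrier's symbol independently, because
# the carriers remain orthogonal over the block.
rx = [round(z.real) for z in dft(tx)]
print(rx)  # -> [1, -1, 1, 1, -1, 1, -1, -1]
```

Each subcarrier signals N times more slowly than the original stream, which is how OFDM keeps the per-carrier bandwidth below f0 without an equalizer.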
For fast-fading distortion, use a robust modulation (non-coherent or differentially coherent) that does not require phase tracking, and reduce the detector integration time. Another approach is to increase the symbol rate W ≈ 1/Ts so that it is greater than the fading rate fd ≈ 1/T0, by adding signal redundancy.
Error-correction coding and interleaving can provide mitigation because, instead of providing more signal energy, a code reduces the required Eb/N0. For a given Eb/N0, with coding present, the error floor will be lowered compared to the uncoded case. When fast-fading distortion and frequency-selective distortion occur simultaneously, the frequency-selective distortion can be mitigated by the use of an OFDM signal set. Fast fading, however, will typically degrade conventional OFDM because the Doppler spreading corrupts the orthogonality of the OFDM subcarriers. A polyphase filtering technique can be used to provide time-domain shaping and partial-response coding to reduce the spectral sidelobes of the signal set, and thus help preserve its orthogonality. The process introduces known ISI and adjacent-channel interference (ACI), which are then removed by a post-processing equalizer and canceling filter.
Until this point, we have considered the mitigation to combat frequency-selective and fast-fading distortions. The next step is to use diversity methods to move the system operating point from the error-performance curve labeled as bad to a curve that approaches AWGN performance. The term diversity is used to denote the various methods available for providing the receiver with uncorrelated renditions of the signal of interest. Some of the ways in which diversity methods can be implemented are:
Time diversity: transmit the signal on M different time slots with time separation of at least the coherence time T0. When used along with error-correction coding, interleaving is a form of time diversity.
Frequency diversity: transmit the signal on M different carriers with frequency separation of at least f0. Bandwidth expansion is a form of frequency diversity in which the signal bandwidth W is made larger than f0, thus providing the receiver with several independently-fading signal replicas. This achieves frequency diversity of order L = W/f0. Whenever W is made larger than f0, there is the potential for frequency-selective distortion unless mitigation in the form of equalization is provided. Thus, an expanded bandwidth can improve system performance (via diversity) only if the frequency-selective distortion that the diversity may have introduced is mitigated.
Spread spectrum: spread spectrum is a bandwidth-expansion technique that excels at rejecting interfering signals; with DS/SS the delayed signals contribute not to ISI but to interchip interference. In the case of DS/SS, multipath components are rejected if they are time-delayed by more than the duration of one chip. However, in order to approach AWGN performance, it is necessary to compensate for the loss in energy contained in those rejected components. The Rake receiver makes it possible to coherently combine the energy from several of the multipath components. The GSM system uses slow FH (217 hops/s) to compensate for cases in which the mobile unit is moving very slowly (or not at all) and experiences deep fading due to a spectral null.
Spatial diversity is usually accomplished through the use of multiple receive antennas, separated by
a distance of at least 10 wavelengths when located at a base station (and less when located at a mobile unit). Signal-processing techniques must be employed to choose the best antenna output or to coherently combine all the outputs. Systems have also been implemented with multiple transmitters, each at a dierent location.
Polarization diversity is yet another way to achieve additional uncorrelated samples of the signal. Some techniques for combating the loss in SNR in a fading channel are more efficient and more powerful than repetition coding. Error-correction coding represents a unique mitigation technique because, instead of providing more signal energy, it reduces the required Eb/N0. Error-correction coding coupled with interleaving is probably the most prevalent of the mitigation schemes used to provide improved system performance in a fading environment.
This section shows the error-performance improvements that can be obtained with the use of diversity techniques. The bit-error probability PB, averaged through all the ups and downs of the fading experience in a slow-fading channel, is

PB = ∫ PB(x) p(x) dx

where PB(x) is the bit-error probability for a given modulation scheme at a specific value of SNR x, with x = α² Eb/N0, and p(x) is the pdf of x due to the fading conditions. With Eb and N0 constant, α is used to represent the amplitude variations due to fading.

For Rayleigh fading, α has a Rayleigh distribution, so that α², and consequently x, have a chi-squared distribution:

p(x) = (1/Γ) exp(−x/Γ)   for x ≥ 0

where Γ = mean(α²) Eb/N0 is the SNR averaged through the ups and downs of fading. If each diversity (signal) branch, i = 1, 2, ..., M, has an instantaneous SNR γi, and we assume that each branch has the same average SNR given by Γ, then

p(γi) = (1/Γ) exp(−γi/Γ)   for γi ≥ 0

The probability that a single branch has SNR less than some threshold γ is

P(γi ≤ γ) = ∫ from 0 to γ of p(γi) dγi = 1 − exp(−γ/Γ)

The probability that all M independent signal-diversity branches are received simultaneously with an SNR less than some threshold value γ is

P(γ1, ..., γM ≤ γ) = (1 − exp(−γ/Γ))^M

The probability that any single branch achieves SNR > γ is

P(γi > γ) = 1 − (1 − exp(−γ/Γ))^M
This is the probability of exceeding a threshold when selection diversity is used.
Example: Assume that four-branch diversity is used and that each branch receives an independently Rayleigh-faded signal. If the average SNR is Γ = 20 dB, determine the probability that all four branches are received simultaneously with an SNR less than 10 dB (and also the probability that this threshold will be exceeded). Compare the results to the case when no diversity is used.

Solution: With γ = 10 dB and Γ = 20 dB, the ratio γ/Γ = 0.1. The probability that all four branches are received simultaneously with an SNR less than 10 dB is

P = (1 − exp(−0.1))^4 ≈ (0.095)^4 ≈ 8.2 × 10^−5

and the probability that this threshold will be exceeded is 1 − 8.2 × 10^−5 ≈ 0.9999. Without diversity, the corresponding probabilities are 1 − exp(−0.1) ≈ 0.095 and 0.905.
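The selection-diversity formula and the worked numbers above can be checked with a short script:

```python
import math

def prob_all_below(gamma_db, avg_db, m):
    """P that all M independent Rayleigh-faded branches have SNR below
    the threshold gamma: (1 - exp(-gamma/Gamma))**M."""
    ratio = 10 ** (gamma_db / 10) / 10 ** (avg_db / 10)   # gamma / Gamma, linear
    return (1 - math.exp(-ratio)) ** m

p4 = prob_all_below(10, 20, 4)   # four-branch selection diversity
p1 = prob_all_below(10, 20, 1)   # no diversity
print(p4, 1 - p4, p1)
```

The script reproduces the example: about 8.2 × 10⁻⁵ with four branches versus about 0.095 with a single branch.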
There are four principal types of diversity combining: selection, feedback, maximal-ratio, and equal-gain combining. Selection combining, used in spatial diversity systems, involves sampling the M antenna signals and sending the largest one to the demodulator. Selection-diversity combining is relatively easy to implement but not optimal, because it does not make use of all the received signals simultaneously. With feedback or scanning diversity, the M signals are scanned in a fixed sequence until one is found that exceeds a given threshold. This one becomes the chosen signal until it falls below the established threshold, and the scanning process starts again. The error performance of this technique is somewhat inferior to that of the other methods, but feedback is quite simple to implement. In maximal-ratio combining, the signals from all of the M branches are weighted according to their individual SNRs and then summed. The individual signals must be cophased before being summed. Maximal-ratio combining produces an average SNR as shown below:
mean(γM) = Σ from i=1 to M of mean(γi) = Σ from i=1 to M of Γ = M Γ

where we assume that each branch has the same average SNR, mean(γi) = Γ. Thus, maximal-ratio combining can produce an acceptable average SNR even when none of the individual branch SNRs is acceptable.
Equal-gain combining is similar to maximal-ratio combining except that the weights are all set to unity.
The possibility of achieving an acceptable output SNR from a number of unacceptable inputs is still retained. The performance is marginally inferior to maximal ratio combining.
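A quick Monte Carlo comparison of selection and maximal-ratio combining illustrates the M Γ result above (the branch count, average SNR, and trial count are illustrative assumptions):

```python
import random

random.seed(3)

M = 4           # diversity branches (assumption)
avg_snr = 1.0   # average per-branch SNR, Gamma, in linear units (assumption)
trials = 20000

sel_sum = mrc_sum = 0.0
for _ in range(trials):
    # Instantaneous branch SNRs under Rayleigh fading are exponentially distributed
    branches = [random.expovariate(1 / avg_snr) for _ in range(M)]
    sel_sum += max(branches)   # selection combining: keep only the strongest branch
    mrc_sum += sum(branches)   # maximal-ratio combining: branch SNRs add

sel_avg = sel_sum / trials     # below M * Gamma
mrc_avg = mrc_sum / trials     # close to M * Gamma = 4.0
print(sel_avg, mrc_avg)
```

The simulated MRC average lands near M Γ = 4, while selection combining averages noticeably less (it converges toward Γ times the harmonic sum 1 + 1/2 + ... + 1/M, a known property of order statistics), confirming that using all branches beats picking one.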
A signaling scheme whose detection depends on amplitude, such as amplitude-shift keying (ASK) or quadrature amplitude modulation (QAM), is inherently vulnerable to performance degradation in a fading environment. Thus, for fading channels, the preferred choice for a signaling scheme is a frequency- or phase-based modulation type. In considering orthogonal FSK modulation for fading channels, MFSK with a large alphabet is useful because its error performance is better than binary signaling; in slow Rayleigh fading, binary DPSK and 8-FSK perform within 0.1 dB of each other. In considering PSK modulation for fading channels, higher-order alphabets perform poorly.
The Doppler spread fd = V/λ shows that the fading rate is a direct function of velocity. Table 1 shows Doppler spread versus vehicle speed at carrier frequencies of 900 MHz and 1800 MHz. Example: Calculate the phase variation per symbol for the case of signaling with QPSK modulation at the rate of 24.3 kilosymbols/s. Assume that the carrier frequency is 1800 MHz and that the velocity of the vehicle is 50 miles/hr (80 km/hr). Repeat for a vehicle speed of 100 miles/hr.
Table 1
Solution: At 1800 MHz the wavelength is λ = c/fc ≈ 0.167 m. At a velocity of 50 miles/hr (about 22.4 m/s), the Doppler spread is fd = V/λ ≈ 134 Hz, so the phase variation per symbol is Δθ = 360° × fd/Rs = 360° × 134/24300 ≈ 2°. At a velocity of 100 miles/hr, fd ≈ 268 Hz and Δθ ≈ 4° per symbol.
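The arithmetic of this example generalizes to any speed, carrier, and symbol rate, as in the sketch below:

```python
def phase_per_symbol_deg(v_mph, fc_hz, symbol_rate):
    """Phase change per symbol due to Doppler: 360 * fd / Rs degrees,
    with fd = V / lambda = V * fc / c."""
    v = v_mph * 0.44704      # miles/hr -> m/s
    fd = v * fc_hz / 3e8     # Doppler spread, Hz
    return 360.0 * fd / symbol_rate

print(phase_per_symbol_deg(50, 1800e6, 24.3e3))    # ~2 degrees per symbol
print(phase_per_symbol_deg(100, 1800e6, 24.3e3))   # ~4 degrees per symbol
```

Doubling either the vehicle speed or the carrier frequency doubles the per-symbol phase rotation, which is the sense in which the fading rate is "a direct function of velocity."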
The primary benefit of an interleaver for transmission in a fading environment is to provide time diversity (when used along with error-correction coding). The benefit depends on the interleaver time span, TIL, relative to the channel coherence time T0.
Figure 7.11
It should be apparent that an interleaver having the largest possible ratio TIL/T0 would provide the best diversity (a small demodulated BER leading to a small decoded BER). This seems to lead to the conclusion that TIL/T0 should be some large number, say 1,000 or 10,000. For a real-time communication system, however, this is not practical, because the inherent time delay associated with such an interleaver would be excessive.
The previous section shows that for a cellular telephone system with a carrier frequency of 900 MHz, a
TIL /T0
ratio of 10 is about as large as one can implement without suering excessive delay.
Note that the interleaver provides no benet against multipath unless there is motion between the transmitter and receiver (or motion of objects within the signal-propagating paths). The system error-performance over a fading channel typically degrades with increased speed because of the increase in Doppler spread or fading rapidity. However, the action of an interleaver in the system provides mitigation, which becomes more eective at higher speeds
Figure 2 shows that while communications degrade with increased speed of the mobile unit (the fading rate increases), the benefit of an interleaver is enhanced with increased speed. These are the results of field testing performed on a CDMA system meeting the Interim Specification 95 (IS-95) over a link comprising a moving vehicle and a base station.
Figure 7.12
Typical Eb/N0 performance versus vehicle speed for 850 MHz links to achieve a frame-error rate of 1 percent.
The GSM time-division multiple access (TDMA) frame in Figure 1 has a duration of 4.615 ms and comprises 8 slots, one assigned to each active mobile user. A normal transmission burst occupying one time slot contains a midamble, called a training or sounding sequence. The slot-time duration is 0.577 ms (or the slot rate is 1733 slots/s). The purpose of the midamble is to assist the receiver in estimating the impulse response of the channel adaptively (during the time duration of each 0.577 ms slot). For the technique to be effective, the fading characteristics of the channel must not change appreciably during the time interval of one slot.
Figure 7.13
Consider a GSM receiver used aboard a high-speed train, traveling at a constant velocity of 200 km/hr (55.56 m/s). Assume the carrier frequency to be 900 MHz (the wavelength is λ = 0.33 m). The distance corresponding to a half-wavelength is traversed in approximately

T0 ≈ (λ/2) / V ≈ 3 ms

which corresponds approximately to the coherence time. Therefore, the channel coherence time is more than five times greater than the slot time of 0.577 ms; the time needed for a significant change in channel fading characteristics is relatively long compared to the time duration of one slot. The GSM symbol rate (or bit rate, since the modulation is binary) is 271 kilosymbols/s; the bandwidth, W, is 200 kHz. For a typical rms delay spread στ of about 2 μs, the resulting coherence bandwidth is

f0 ≈ 1/(5 στ) ≈ 100 kHz

Since f0 < W, the GSM receiver must utilize some form of mitigation to combat frequency-selective distortion. To accomplish this goal, the Viterbi equalizer is typically implemented.
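Both checks in this GSM example (slow fading relative to the slot, but a frequency-selective bandwidth) can be verified in a few lines; the 2 μs rms delay spread is the assumed urban value used above:

```python
c = 3e8
fc = 900e6            # carrier frequency, Hz
v = 200 / 3.6         # 200 km/hr in m/s
slot_time = 0.577e-3  # GSM slot duration, s
sigma_tau = 2e-6      # assumed urban rms delay spread, s
W = 200e3             # GSM transmission bandwidth, Hz

wavelength = c / fc               # ~0.33 m
t0 = (wavelength / 2) / v         # ~3 ms coherence-time estimate
f0 = 1 / (5 * sigma_tau)          # 100 kHz coherence bandwidth, Eq. (5)

print(t0 / slot_time > 5)         # coherence time comfortably exceeds the slot time
print(f0 < W)                     # frequency-selective: an equalizer is needed
```

The two printed conditions are exactly the two conclusions of the example: the midamble-based estimate stays valid over a slot, yet the 200 kHz signal still needs equalization.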
Figure 2 shows the basic functional blocks used in a GSM receiver for estimating the channel impulse
response.
Figure 7.14
This estimate is used to provide the detector (the MLSE) with channel-corrected reference waveforms, as explained below. Let str(t) be the transmitted midamble training sequence and rtr(t) the corresponding received midamble training sequence. At the receiver, since rtr(t) is part of the received normal burst, it is extracted and sent to a filter having impulse response hmf(t) that is matched to str(t). The received training sequence is

rtr(t) = str(t) * hc(t)

where hc(t) is the channel impulse response and * denotes convolution. The matched filter yields at its output an estimate of hc(t), denoted he(t):

he(t) = rtr(t) * hmf(t) = str(t) * hc(t) * hmf(t) = Rs(t) * hc(t)

where Rs(t) = str(t) * hmf(t) is the autocorrelation function of str(t). If str(t) is designed to have a highly-peaked (impulse-like) autocorrelation function Rs(t), then he(t) ≈ hc(t). Next, we use a windowing function, w(t), to truncate he(t) to form a computationally affordable function, hw(t). The time duration of w(t), denoted L0, must be large enough to compensate for the effect of typical channel-induced ISI. The term L0 consists of the sum of two contributions: LCISI, corresponding to the controlled ISI caused by Gaussian filtering of the baseband waveform (which then modulates the carrier using MSK), and LC, corresponding to the channel-induced ISI, so that

L0 = LCISI + LC
The GSM system is required to provide mitigation for distortion caused by signal dispersion having delay spreads of approximately 15-20 μs. Since in GSM the bit duration is 3.69 μs, we can express L0 in units of bit intervals. For each L0-bit interval in the message, the function of the Viterbi equalizer is to find the most likely L0-bit sequence out of the 2^L0 possible sequences that might have been transmitted. Determining the most likely transmitted L0-bit sequence requires creating 2^L0 meaningful reference waveforms by disturbing the 2^L0 ideal waveforms (generated at the receiver) in the same way that the channel has disturbed the transmitted slot. Therefore, the reference waveforms are convolved with the windowed estimate of the channel impulse response, hw(t), in order to derive the disturbed, or channel-corrected, reference waveforms.
Next, the channel-corrected reference waveforms are compared against the received data waveforms to yield metric calculations. However, before the comparison takes place, the received data waveforms are convolved with the known windowed autocorrelation function w(t) Rs(t), transforming them in a manner comparable to the transformation applied to the reference waveforms. This filtered message signal is compared to all possible 2^L0 channel-corrected reference signals, and metrics are computed in a manner similar to that used in the Viterbi decoding algorithm. It yields the maximum-likelihood estimate of the transmitted data sequence.
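The matched-filter channel-estimation step (he(t) = rtr(t) * hmf(t) = Rs(t) * hc(t)) can be illustrated in discrete time. The sketch below uses a length-7 Barker code as an impulse-like training sequence and an assumed three-tap multipath channel; it is a stand-in for the actual GSM midamble, not the real sequence:

```python
def convolve(a, b):
    """Full discrete convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# Training sequence with an impulse-like autocorrelation (Barker-7, an
# illustrative stand-in for the GSM midamble), and its matched filter
s_tr = [1, 1, 1, -1, -1, 1, -1]
h_mf = s_tr[::-1]                 # matched filter = time-reversed training sequence

h_c = [1.0, 0.0, 0.6, 0.0, 0.3]   # assumed multipath channel impulse response

r_tr = convolve(s_tr, h_c)        # received training part: s_tr * h_c
h_e = convolve(r_tr, h_mf)        # estimate: r_tr * h_mf = Rs * h_c

# Rs peaks at lag len(s_tr)-1 with value sum(s^2) = 7; reading h_e from that
# peak and scaling by 1/7 roughly recovers the tap structure of h_c
# (the Barker autocorrelation sidelobes bias the smaller taps somewhat).
peak = len(s_tr) - 1
est = [round(x / 7, 2) for x in h_e[peak:peak + len(h_c)]]
print(est)
```

The recovered taps approximate [1, 0, 0.6, 0, 0.3]: the sharper the training sequence's autocorrelation, the closer he(t) comes to hc(t), which is exactly the design requirement stated for str(t).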
Interim Specification 95 (IS-95) describes a Direct-Sequence Spread-Spectrum (DS/SS) cellular system that uses a Rake receiver to provide path diversity for mitigating the effects of frequency-selective fading. The Rake receiver searches through the different multipath delays for code correlation and thus recovers delayed signals, which are then optimally combined with the outputs of the other independent correlators.

Figure 1 shows the power profiles associated with the five chip transmissions of the code sequence 1 0 1 1 1. Each abscissa shows three components arriving with delays τ1, τ2, and τ3. The component arriving at the receiver at time t4, with delay τ3, is time-coincident with two others, namely the components arriving at times t3 and t2 with delays τ2 and τ1, respectively. Since in this example the delayed components are separated by at least one chip time, they can be resolved.
Figure 7.15
At the receiver, there must be a sounding device dedicated to estimating the τi delays. Note that the fading rate in a mobile radio system is relatively slow (on the order of milliseconds); equivalently, the channel coherence time is large compared to the chip time duration (T0 > Tch), so the delays change slowly enough that the receiver can readily adapt to them. Once the τi delays are estimated, a separate correlator is dedicated to recovering each resolvable multipath component. In this example, there would be three such dedicated correlators, each one processing a delayed version of the same chip sequence 1 0 1 1 1. Each correlator receives chips with power profiles represented by the sequence of components shown along a diagonal line. For simplicity, the chips are all shown as positive signaling elements. In reality, these chips form a pseudonoise (PN) sequence, which of course contains both positive and negative pulses. Each correlator attempts to correlate these arriving chips with the same appropriately synchronized PN code. At the end of a symbol interval (typically there may be hundreds or thousands of chips per symbol), the outputs of the correlators are coherently combined, and a symbol detection is made. The interference-suppression capability of DS/SS systems stems from the fact that a code sequence arriving at the receiver time-shifted by merely one chip will have very low correlation with the particular PN code with which the sequence is correlated. Therefore, any code chips that are delayed by one or more chip times will be suppressed by the correlator. The delayed chips only contribute to raising the interference level (correlation sidelobes). The mitigation provided by the Rake receiver can be termed path diversity, since it allows the energy of a chip that arrives via multiple paths to be combined coherently. Without the Rake receiver, this energy would be transparent and therefore lost to the DS/SS receiver.
Glossary
A

antipodal: Signals s1(t) and s2(t) are antipodal if s2(t) = −s1(t).

Autocorrelation: The autocorrelation function of the random process Xt is defined as

RX(t2, t1) = E[Xt2 Xt1]   (2.49)
Autocovariance: The autocovariance of a random process is defined as

CX(t2, t1) = E[ (Xt2 − μX(t2)) (Xt1 − μX(t1)) ] = RX(t2, t1) − μX(t2) μX(t1)   (2.59)
B

biorthogonal: Signals s1(t), ..., sM(t) are biorthogonal if s1(t), ..., s(M/2)(t) are orthogonal and sm(t) = −s(M/2+m)(t) for some m ∈ {1, 2, ..., M/2}.
C

Conditional Entropy: The conditional entropy of the random variable X given the random variable Y is defined by

H(X|Y) = −Σi Σj pX,Y(xi, yj) log pX|Y(xi | yj)   (3.8)
Continuous Random Variable: A random variable X is continuous if its cumulative distribution function can be written as

F X(b) = ∫ from −∞ to b of f X(x) dx

for some probability density function f X(x) (e.g., F X(x) is differentiable).

Crosscorrelation: The crosscorrelation function of a pair of random processes is defined as

RXY(t2, t1) = E[Xt2 Yt1]   (2.60)
Cumulative distribution: The cumulative distribution function of a random variable X is a function F X : R → R such that

F X(b) = Pr[X ≤ b] = Pr[{ω | X(ω) ≤ b}]   (2.12)

D

Discrete Random Variable: A random variable X is discrete if it takes at most countably many values (i.e., F X(·) is piecewise constant). The probability mass function is defined as

p X(xk) = Pr[X = xk] = F X(xk) − lim as x → xk, x < xk of F X(x)   (2.14)
E

Entropy Rate: The entropy rate of a stationary discrete-time random process is defined by

H = lim as n → ∞ of H(Xn | X1, X2, ..., Xn−1)   (3.12)

which, by stationarity, also equals

H = lim as n → ∞ of (1/n) H(X1, X2, ..., Xn)   (3.13)

The entropy rate is a measure of the uncertainty of information content per output symbol of the source.
Entropy: 1. The entropy (average self-information) of a discrete random variable X is a function of its probability mass function and is defined as

H(X) = −Σ from i=1 to N of p X(xi) log(p X(xi))   (3.3)

where N is the number of possible values of X and p X(xi) = Pr[X = xi]. If log is base 2 then the unit of entropy is bits. Entropy is a measure of uncertainty in a random variable and a measure of the information it can reveal. 2. A more basic explanation of entropy is provided in another module ("Entropy", <http://cnx.org/content/m0070/latest/>).
First-order stationary process: If FXt(b) is not a function of time, then Xt is called a first-order stationary process.
G

Gaussian process: A process with mean μX(t) and covariance function CX(t2, t1) is said to be a Gaussian process if any X = (X(t1), X(t2), ..., X(tN))^T formed by any sampling of the process is a Gaussian random vector, that is,

f X(x) = 1 / ( (2π)^(N/2) (det ΣX)^(1/2) ) · exp( −(1/2) (x − μX)^T ΣX^(−1) (x − μX) )   (2.62)

for all x ∈ R^N, where

μX = ( μX(t1), ..., μX(tN) )^T

and

ΣX =
[ CX(t1, t1)  ...  CX(t1, tN) ]
[    ...      ...     ...     ]
[ CX(tN, t1)  ...  CX(tN, tN) ]

The complete statistical properties of Xt can be obtained from its second-order statistics.
J

Joint Entropy: The joint entropy of two discrete random variables (X, Y) is defined by

H(X, Y) = −Σi Σj pX,Y(xi, yj) log pX,Y(xi, yj)
Jointly Wide-Sense Stationary: The random processes Xt and Yt are jointly wide-sense stationary if RXY(t2, t1) is a function of t2 − t1 only and the mean functions of Xt and Yt are constant.
M

Mean: The mean function of a random process Xt is defined as the expected value of Xt for all t's:

μX(t) = E[Xt] = ∫ x f Xt(x) dx   (2.48)
Mutual Information: The mutual information between two discrete random variables is denoted by I(X; Y) and defined as

I(X; Y) = H(X) − H(X|Y)   (6.7)

Mutual information is a useful concept to measure the amount of information shared between the input and output of noisy channels.
O

orthogonal: Signals s1(t), ..., sM(t) are orthogonal if <sm, sn> = 0 for m ≠ n.
P Power Spectral Density
The power spectral density of a wide sense stationary process X_t is defined to be the Fourier transform of the autocorrelation function of X_t:
S_X(f) = ∫_{−∞}^{∞} R_X(τ) e^{−(i2πfτ)} dτ   (2.83)
if X_t is wide sense stationary with autocorrelation function R_X(τ).
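A discrete-time sketch of this transform pair, replacing the integral with a sum over lags; the white-noise autocorrelation used here is an assumed example:

```python
import numpy as np

# For discrete-time white noise with variance sigma^2, the
# autocorrelation is R_X[k] = sigma^2 * delta[k], so the power
# spectral density S_X(f) = sum_k R_X[k] e^{-i 2 pi f k} is flat.
sigma2 = 2.0
k = np.arange(-8, 9)                      # lags
R = np.where(k == 0, sigma2, 0.0)         # autocorrelation of white noise

f = np.linspace(-0.5, 0.5, 101)           # normalized frequency grid
S = np.real(np.exp(-2j * np.pi * np.outer(f, k)) @ R)
assert np.allclose(S, sigma2)             # flat spectrum at sigma^2
```

A flat spectrum is exactly why such a process is called "white": every frequency carries equal power.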
S Simplex signals
Let {s_1(t), s_2(t), ..., s_M(t)} be a set of M orthogonal signals with equal energy. The signals s̃_1(t), ..., s̃_M(t) are simplex signals if
s̃_m(t) = s_m(t) − (1/M) Σ_{k=1}^{M} s_k(t)   (4.3)
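A small numerical sketch of this construction using an assumed orthonormal set of discrete-time signals (the standard basis vectors, chosen for illustration):

```python
import numpy as np

# Build simplex signals from an orthonormal set by subtracting the
# average signal, following s'_m = s_m - (1/M) sum_k s_k.
M = 3
s = np.eye(M)                              # rows s_1, ..., s_M
assert np.allclose(s @ s.T, np.eye(M))     # <s_m, s_n> = 0 for m != n

simplex = s - s.mean(axis=0)               # subtract the average signal

g = simplex @ simplex.T                    # Gram matrix of the simplex set
assert np.allclose(np.diag(g), 1 - 1 / M)  # each energy drops to 1 - 1/M
assert np.allclose(g[0, 1], -1 / M)        # equal pairwise crosscorrelation
```

The simplex set sums to zero, uses less energy per signal than the orthogonal set it came from, and achieves the same error performance.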
Stochastic Process
Given a sample space, a stochastic process is an indexed collection of random variables defined for each ω ∈ Ω:
∀t, t ∈ R : X_t(ω)   (2.29)

U uncorrelated
Two random variables X and Y are uncorrelated if ρ_XY = 0.

W Wide Sense Stationary
A process is wide sense stationary if its mean is constant and R_X(t2, t1) is only a function of t2 − t1.
Bibliography
[1] R. N. Bracewell. The Fourier Transform and Its Applications. McGraw-Hill, New York, third edition, 1985.
[2] H. S. Carslaw. Theory of Fourier's Series and Integrals. Dover, New York, third edition, 1906, 1930.
[3] D. C. Champeney. A Handbook of Fourier Theorems. Cambridge University Press, Cambridge, 1987.
[4] H. Dym and H. P. McKean. Fourier Series and Integrals. Academic Press, New York, 1972.
[5] E. W. Hobson. The Theory of Functions of a Real Variable and the Theory of Fourier's Series, volume 2. Dover, New York, second edition, 1926.
[6] A. V. Oppenheim and R. W. Schafer. Discrete-Time Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, 1989.
[7] A. Papoulis. The Fourier Integral and Its Applications. McGraw-Hill, 1962.
[8] A. Zygmund. Trigonometrical Series. Dover, New York, 1935, 1955.
INDEX
Adaptive Equalization, 5.10(84) Additive White Gaussian Noise (AWGN), 30 analog, 2.1(5), 6 analysis, 4.10(64), 4.11(66) anticausal, 2.1(5), 7 antipodal, 4.2(45), 48, 4.10(64), 4.11(66) aperiodic, 2.1(5) Assistants, 1 autocorrelation, 2.7(26), 26 autocovariance, 2.7(26), 28 AWGN, 4.1(45), 4.5(53)
Computer resource, 2 conditional entropy, 3.2(36), 38 Connexions, 2 continuous, 5 Continuous Random Variable, 20 continuous system, 13 continuous time, 2.1(5) convolutional code, 6.6(97) correlation, 4.6(56), 57, 4.7(58) correlator, 4.7(58), 4.8(60), 4.9(62), 4.10(64), 4.11(66) correlator-type, 4.6(56) Course Rationale, 2 Credits, 2 crosscorrelation, 2.7(26), 28 CSI, 4.8(60), 4.9(62) Cumulative distribution, 20
bandwidth, 4.1(45), 45 bandwidth constraints, 5.1(73) baseband, 4.1(45), 45 bases, 4.3(49) basis, 4.3(49) binary symbol, 4.10(64), 4.11(66) biorthogonal, 4.2(45), 48 bit-error, 4.10(64), 4.11(66) bit-error probability, 4.10(64), 4.11(66) block code, 6.5(95) bounded input-bounded output (BIBO), 16
data transmission, 4.1(45), 45 Decision Feedback Equalizer, 5.9(83) decision feedback equalizer (DFE), 116 Degradation, 5.6(78) Degradation Categories due to Signal Time-Spreading Viewed in the Frequency Domain, 110 Degradation Categories due to Signal Time-Spreading Viewed in the Time-Delay Domain, 109 demodulation, 4.4(52), 4.5(53), 4.6(56), 4.7(58) detection, 4.4(52), 4.5(53), 4.6(56), 4.7(58) deterministic signal, 11 differential phase shift keying, 4.14(71) digital, 2.1(5), 6 Direct-sequence spread-spectrum (DS/SS), 117, 118 discrete, 5 discrete memoryless channel, 88 Discrete Random Variable, 20 discrete system, 13 discrete time, 2.1(5)
capacity, 6.4(93) carrier phase, 4.12(67) Cauchy-Schwarz inequality, 4.8(60), 4.9(62) causal, 2.1(5), 7, 2.2(13), 15 channel, 3.1(35), 4.1(45), 6.1(87), 6.2(89), 6.4(93), 7.1(101) channel capacity, 6.1(87) Channel Coding, 3, 3.1(35), 6.5(95), 6.6(97) Characterizing, 7.2(101) coding, 3.1(35) Colored Processes, 2.9(29) coloured noise, 31 communication, 6.2(89) Communication over AWGN Channels, 3 Communication over Band-limited AWGN Channel, 3 Communication over Fading Channel, 3
distribution, 2.6(22) DPSK, 4.14(71) (i.i.d), 30 infinite-length signal, 12 information, 3.2(36), 3.3(39) information theory, 3.1(35), 3.3(39), 3.4(42), 6.1(87), 6.3(90), 6.4(93), 6.5(95), 6.6(97) Instructor, 1 intersymbol interference, 73, 5.3(75), 5.4(76) ISI, 5.3(75), 5.4(76), 5.5(77)
Email, 1, 1 entropy, 3.1(35), 3.2(36), 37, 3.3(39), 6.3(90) entropy rate, 3.2(36), 39 equal gain, 120 Equal-gain, 120 equalizers, 76 error, 4.10(64), 4.11(66) Error-Performance, 5.6(78) even signal, 2.1(5), 8 Example: Benefits of Diversity, 119 Example: Phase Variations in a Mobile Communication System, 120 Examples of Flat Fading and Frequency-Selective Fading, 111 Eye Pattern, 5.7(79)
joint entropy, 3.2(36), 38 jointly independent, 30 Jointly Wide Sense Stationary, 28 Lab sections/support, 1 Large-Scale, 7.3(105) left-handed, 11 linear, 2.2(13), 14, 4.8(60), 4.9(62)
Faculty Information, 1 Fading, 7.1(101), 7.3(105) feedback, 120, 120 Figure 1, 113, 121, 124, 126 Figure 2, 115, 123, 125 finite-length signal, 12 First-order stationary process, 23 Fourier, 2.3(17), 2.4(18) Fourier Integral, 2.4(18) Fourier series, 2.3(17) Fourier Transform, 2.4(18) frequency, 2.4(18) Frequency diversity, 118 frequency modulation, 4.13(70) frequency shift keying, 4.13(70) Frequency-hopping spread-spectrum (FH/SS), 117, 118 FSK, 4.13(70) fundamental period, 6
near White Processes, 2.9(29) nearly white, 31 nonanticipative, 15 noncausal, 2.1(5), 7, 2.2(13), 15 nonlinear, 2.2(13), 14
Gaussian, 4.1(45), 4.5(53) Gaussian channels, 5.1(73) Gaussian process, 2.8(29), 29 geometric representation, 4.3(49) Grades for this course will be based on the following weighting, 4
odd signal, 2.1(5), 8 Office Hours, 1, 1 Office Location, 1, 1 optimum, 73 orthogonal, 4.2(45), 48, 4.3(49), 4.10(64), 4.11(66) Orthogonal frequency-division multiplexing (OFDM), 117
Hamming distance, 6.5(95) Homework/Participation/Exams, 4 Huffman, 3.4(42) independent, 21 Independent and Identically Distributed
PAM, 4.2(45), 5.3(75) passband, 4.1(45), 45 period, 6 periodic, 2.1(5) phase changes, 69 phase jitter, 69 phase lock loop, 4.14(71) phase shift keying, 4.12(67) Phone, 1, 1 Pilot, 117 Polarization diversity, 119 power spectral density, 2.10(31), 33 PPM, 4.2(45) Pre-requisites, 2 precoding, 76, 5.4(76) probability theory, 2.5(19) Propagation, 7.2(101) PSK, 4.12(67) pulse amplitude modulation, 4.2(45), 5.3(75) pulse position modulation, 4.2(45) Pulse Shaping, 5.5(77) Signal Time-Spreading Viewed in the Time-Delay Domain, 108 Signal to Noise Ratio, 60 signal-to-noise ratio, 4.8(60), 4.9(62) signals, 2.2(13) Signals and Systems, 3, 2.1(5) simplex, 4.2(45) Simplex signals, 49 SNR, 4.8(60), 4.9(62) Solution, 119, 121 Source Coding, 3, 3.1(35), 3.3(39), 3.4(42) Spatial diversity, 118 Spread spectrum, 118 stable, 16 stationary, 2.6(22) stochastic process, 2.6(22), 22, 2.7(26), 2.8(29), 2.10(31) Strictly White Noise Process, 30
Table 1, 120 Textbook(s), 2 theorems, 2.3(17) Time diversity, 118 time invariant, 2.2(13), 15 time variant, 15 time varying, 2.2(13) time-invariant, 4.8(60), 4.9(62) Title, 2 transmission, 6.1(87) Transversal Equalizer, 5.8(80) typical sequence, 3.3(39), 40, 6.3(90)
Radio, 7.2(101) Rake receiver, 126 random process, 2.6(22), 2.7(26), 2.8(29), 2.10(31), 4.1(45) random signal, 11 Rayleigh fading, 119 receiver, 4.6(56), 4.7(58), 4.10(64), 4.11(66) right-handed, 11
selection, 120, 120 sequence detectors, 76 Shannon, 3.1(35) Shannon's noisy channel coding theorem, 6.4(93) signal, 4.2(45), 4.3(49), 4.10(64), 4.11(66) Signal Time-Spreading Viewed in the Frequency Domain, 109
Attributions
Collection: Principles of Digital Communications
Edited by: Tuan Do-Hong
URL: http://cnx.org/content/col10474/1.7/
License: http://creativecommons.org/licenses/by/2.0/

Module: "Letter to Student" By: Tuan Do-Hong URL: http://cnx.org/content/m15429/1.1/ Pages: 1-1 Copyright: Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Contact Information" By: Tuan Do-Hong URL: http://cnx.org/content/m15431/1.3/ Pages: 1-1 Copyright: Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Resources" By: Tuan Do-Hong URL: http://cnx.org/content/m15438/1.3/ Pages: 2-2 Copyright: Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Purpose of the Course" By: Tuan Do-Hong URL: http://cnx.org/content/m15433/1.2/ Pages: 2-2 Copyright: Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Course Description" By: Tuan Do-Hong URL: http://cnx.org/content/m15435/1.4/ Pages: 2-3 Copyright: Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Calendar" By: Tuan Do-Hong URL: http://cnx.org/content/m15448/1.3/ Pages: 3-4 Copyright: Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Grading Procedures" By: Tuan Do-Hong URL: http://cnx.org/content/m15437/1.2/ Pages: 4-4 Copyright: Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Signal Classifications and Properties" By: Melissa Selik, Richard Baraniuk, Michael Haag URL: http://cnx.org/content/m10057/2.17/ Pages: 5-13 Copyright: Melissa Selik, Richard Baraniuk, Michael Haag License: http://creativecommons.org/licenses/by/1.0
Module: "System Classifications and Properties" By: Melissa Selik, Richard Baraniuk URL: http://cnx.org/content/m10084/2.19/ Pages: 13-17 Copyright: Melissa Selik, Richard Baraniuk License: http://creativecommons.org/licenses/by/1.0
Module: "m04 - Theorems on the Fourier Series" Used here as: "The Fourier Series" By: C. Sidney Burrus URL: http://cnx.org/content/m13873/1.1/ Pages: 17-18 Copyright: C. Sidney Burrus License: http://creativecommons.org/licenses/by/2.0/
Module: "m05 - The Fourier Transform" Used here as: "The Fourier Transform" By: C. Sidney Burrus URL: http://cnx.org/content/m13874/1.2/ Pages: 18-19 Copyright: C. Sidney Burrus License: http://creativecommons.org/licenses/by/2.0/
Module: "Review of Probability Theory" Used here as: "Review of Probability and Random Variables" By: Behnaam Aazhang URL: http://cnx.org/content/m10224/2.16/ Pages: 19-22 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Introduction to Stochastic Processes" By: Behnaam Aazhang URL: http://cnx.org/content/m10235/2.15/ Pages: 22-26 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Second-order Description" Used here as: "Second-order Description of Stochastic Processes" By: Behnaam Aazhang URL: http://cnx.org/content/m10236/2.13/ Pages: 26-28 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Gaussian Processes" By: Behnaam Aazhang URL: http://cnx.org/content/m10238/2.7/ Pages: 29-29 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "White and Coloured Processes" By: Nick Kingsbury URL: http://cnx.org/content/m11105/2.4/ Pages: 29-31 Copyright: Nick Kingsbury License: http://creativecommons.org/licenses/by/1.0
Module: "Linear Filtering" Used here as: "Transmission of Stationary Process Through a Linear Filter" By: Behnaam Aazhang URL: http://cnx.org/content/m10237/2.10/ Pages: 31-34 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Information Theory and Coding" By: Behnaam Aazhang URL: http://cnx.org/content/m10162/2.10/ Pages: 35-35 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Entropy" By: Behnaam Aazhang URL: http://cnx.org/content/m10164/2.16/ Pages: 36-39 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Source Coding" By: Behnaam Aazhang URL: http://cnx.org/content/m10175/2.10/ Pages: 39-42 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Huffman Coding" By: Behnaam Aazhang URL: http://cnx.org/content/m10176/2.10/ Pages: 42-43 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Data Transmission and Reception" By: Behnaam Aazhang URL: http://cnx.org/content/m10115/2.9/ Pages: 45-45 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Signalling" By: Behnaam Aazhang URL: http://cnx.org/content/m10116/2.11/ Pages: 45-49 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Geometric Representation of Modulation Signals" By: Behnaam Aazhang URL: http://cnx.org/content/m10035/2.13/ Pages: 49-52 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Demodulation and Detection" By: Behnaam Aazhang URL: http://cnx.org/content/m10054/2.14/ Pages: 52-53 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Demodulation" By: Behnaam Aazhang URL: http://cnx.org/content/m10141/2.13/ Pages: 53-56 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Detection by Correlation" By: Behnaam Aazhang URL: http://cnx.org/content/m10091/2.15/ Pages: 56-57 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Examples of Correlation Detection" By: Behnaam Aazhang URL: http://cnx.org/content/m10149/2.10/ Pages: 58-60 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Matched Filters" By: Behnaam Aazhang URL: http://cnx.org/content/m10101/2.14/ Pages: 60-62 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Examples with Matched Filters" By: Behnaam Aazhang URL: http://cnx.org/content/m10150/2.10/ Pages: 62-64 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Performance Analysis of Binary Orthogonal Signals with Correlation" By: Behnaam Aazhang URL: http://cnx.org/content/m10154/2.11/ Pages: 64-65 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Performance Analysis of Orthogonal Binary Signals with Matched Filters" By: Behnaam Aazhang URL: http://cnx.org/content/m10155/2.9/ Pages: 66-67 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Carrier Phase Modulation" By: Behnaam Aazhang URL: http://cnx.org/content/m10128/2.10/ Pages: 67-69 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Carrier Frequency Modulation" By: Behnaam Aazhang URL: http://cnx.org/content/m10163/2.10/ Pages: 70-70 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Differential Phase Shift Keying" By: Behnaam Aazhang URL: http://cnx.org/content/m10156/2.7/ Pages: 71-72 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Digital Transmission over Baseband Channels" By: Behnaam Aazhang URL: http://cnx.org/content/m10056/2.12/ Pages: 73-73 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Introduction to ISI" By: Ha Ta-Hong, Tuan Do-Hong URL: http://cnx.org/content/m15519/1.5/ Pages: 73-75 Copyright: Ha Ta-Hong, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Pulse Amplitude Modulation Through Bandlimited Channel" By: Behnaam Aazhang URL: http://cnx.org/content/m10094/2.7/ Pages: 75-76 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Precoding and Bandlimited Signals" By: Behnaam Aazhang URL: http://cnx.org/content/m10118/2.6/ Pages: 76-77 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Pulse Shaping to Reduce ISI" By: Ha Ta-Hong, Tuan Do-Hong URL: http://cnx.org/content/m15520/1.2/ Pages: 77-78 Copyright: Ha Ta-Hong, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Two Types of Error-Performance Degradation" By: Ha Ta-Hong, Tuan Do-Hong URL: http://cnx.org/content/m15527/1.2/ Pages: 78-79 Copyright: Ha Ta-Hong, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Eye Pattern" By: Ha Ta-Hong, Tuan Do-Hong URL: http://cnx.org/content/m15521/1.2/ Pages: 79-80 Copyright: Ha Ta-Hong, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Transversal Equalizer" By: Ha Ta-Hong, Tuan Do-Hong URL: http://cnx.org/content/m15522/1.4/ Pages: 80-83 Copyright: Ha Ta-Hong, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Decision Feedback Equalizer" By: Ha Ta-Hong, Tuan Do-Hong URL: http://cnx.org/content/m15524/1.4/ Pages: 83-84 Copyright: Ha Ta-Hong, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Adaptive Equalization" By: Ha Ta-Hong, Tuan Do-Hong URL: http://cnx.org/content/m15523/1.2/ Pages: 84-85 Copyright: Ha Ta-Hong, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Channel Capacity" By: Behnaam Aazhang URL: http://cnx.org/content/m10173/2.8/ Pages: 87-88 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Mutual Information" By: Behnaam Aazhang URL: http://cnx.org/content/m10178/2.9/ Pages: 89-90 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Typical Sequences" By: Behnaam Aazhang URL: http://cnx.org/content/m10179/2.10/ Pages: 90-92 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Shannon's Noisy Channel Coding Theorem" By: Behnaam Aazhang URL: http://cnx.org/content/m10180/2.10/ Pages: 93-94 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Channel Coding" By: Behnaam Aazhang URL: http://cnx.org/content/m10174/2.11/ Pages: 95-97 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Convolutional Codes" By: Behnaam Aazhang URL: http://cnx.org/content/m10181/2.7/ Pages: 97-98 Copyright: Behnaam Aazhang License: http://creativecommons.org/licenses/by/1.0
Module: "Fading Channel" By: Ha Ta-Hong, Tuan Do-Hong URL: http://cnx.org/content/m15525/1.2/ Pages: 101-101 Copyright: Ha Ta-Hong, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Characterizing Mobile-Radio Propagation" By: Ha Ta-Hong, Tuan Do-Hong URL: http://cnx.org/content/m15528/1.2/ Pages: 101-105 Copyright: Ha Ta-Hong, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Large-Scale Fading" By: Ha Ta-Hong, Tuan Do-Hong URL: http://cnx.org/content/m15526/1.2/ Pages: 105-106 Copyright: Ha Ta-Hong, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Small-Scale Fading" By: Thanh Do-Ngoc, Tuan Do-Hong URL: http://cnx.org/content/m15531/1.1/ Pages: 106-108 Copyright: Thanh Do-Ngoc, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Signal Time-Spreading" By: Thanh Do-Ngoc, Tuan Do-Hong URL: http://cnx.org/content/m15533/1.3/ Pages: 108-112 Copyright: Thanh Do-Ngoc, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Mitigating the Degradation Effects of Fading" By: Sinh Nguyen-Le, Tuan Do-Hong URL: http://cnx.org/content/m15535/1.1/ Pages: 113-116 Copyright: Sinh Nguyen-Le, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Mitigation to Combat Frequency-Selective Distortion" By: Sinh Nguyen-Le, Tuan Do-Hong URL: http://cnx.org/content/m15537/1.1/ Pages: 116-117 Copyright: Sinh Nguyen-Le, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Mitigation to Combat Fast-Fading Distortion" By: Sinh Nguyen-Le, Tuan Do-Hong URL: http://cnx.org/content/m15536/1.1/ Pages: 118-118 Copyright: Sinh Nguyen-Le, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Mitigation to Combat Loss in SNR" By: Sinh Nguyen-Le, Tuan Do-Hong URL: http://cnx.org/content/m15538/1.1/ Pages: 118-119 Copyright: Sinh Nguyen-Le, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Diversity Techniques" By: Sinh Nguyen-Le, Tuan Do-Hong URL: http://cnx.org/content/m15540/1.1/ Pages: 119-119 Copyright: Sinh Nguyen-Le, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Diversity-Combining Techniques" By: Sinh Nguyen-Le, Tuan Do-Hong URL: http://cnx.org/content/m15541/1.1/ Pages: 120-120 Copyright: Sinh Nguyen-Le, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "Modulation Types for Fading Channels" By: Sinh Nguyen-Le, Tuan Do-Hong URL: http://cnx.org/content/m15539/1.1/ Pages: 120-121 Copyright: Sinh Nguyen-Le, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "The Role of an Interleaver" By: Sinh Nguyen-Le, Tuan Do-Hong URL: http://cnx.org/content/m15542/1.1/ Pages: 121-123 Copyright: Sinh Nguyen-Le, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "The Viterbi Equalizer as Applied to GSM" Used here as: "Application of Viterbi Equalizer in GSM System" By: Sinh Nguyen-Le, Tuan Do-Hong URL: http://cnx.org/content/m15544/1.1/ Pages: 124-126 Copyright: Sinh Nguyen-Le, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
Module: "The Rake Receiver Applied to Direct-Sequence Spread-Spectrum (DS/SS) Systems" Used here as: "Application of Rake Receiver in CDMA System" By: Sinh Nguyen-Le, Tuan Do-Hong URL: http://cnx.org/content/m15534/1.2/ Pages: 126-127 Copyright: Sinh Nguyen-Le, Tuan Do-Hong License: http://creativecommons.org/licenses/by/2.0/
About Connexions
Since 1999, Connexions has been pioneering a global system where anyone can create course materials and make them fully accessible and easily reusable free of charge. We are a Web-based authoring, teaching and learning environment open to anyone interested in education, including students, teachers, professors and lifelong learners. We connect ideas and facilitate educational communities.

Connexions's modular, interactive courses are in use worldwide by universities, community colleges, K-12 schools, distance learners, and lifelong learners. Connexions materials are in many languages, including English, Spanish, Chinese, Japanese, Italian, Vietnamese, French, Portuguese, and Thai. Connexions is part of an exciting new information distribution system that allows for print-on-demand publishing: Connexions has partnered with innovative on-demand publisher QOOP to accelerate the delivery of printed course materials and textbooks into classrooms worldwide at lower prices than traditional academic publishers.