Unit 2
3. Twisted-pair cabling comes in several varieties. The garden variety deployed in many office buildings
is called Category 5 cabling, or "Cat 5." A Category 5 twisted pair consists of two insulated wires
gently twisted together. Four such pairs are typically grouped in a plastic sheath to protect the wires
and keep them together. This arrangement is shown in Fig. 2.1.
When a light ray passes from one medium to another, for example from fused silica to air, the ray is
refracted (bent) at the silica/air boundary, as shown in Fig. 2.2(a). Here a light ray incident on the
boundary at an angle α1 emerges at an angle β1. The amount of refraction depends on the properties of the two media.
4. For angles of incidence above a certain critical value, the light is refracted back into the silica; none of
it escapes into the air. Thus, a light ray incident at or above the critical angle is trapped inside the
fiber, as shown in Fig. 2.2(b), and can propagate for many kilometers with virtually no loss. (A small
numeric sketch of the critical angle follows this list.)
Fig 2.2 a) Three examples of a light ray from inside a silica fiber impinging on the air/silica boundary at different angles. (b) Light
trapped by total internal reflection.
5. The sketch of Fig. 2.2(b) shows only one trapped ray, but since any light ray incident on the boundary
above the critical angle will be reflected internally, many different rays will be bouncing around at
different angles. Each ray is said to have a different mode, so a fiber having this property is called a
multimode fiber.
6. However, if the fiber's diameter is reduced to a few wavelengths of light, the fiber acts like a
waveguide and the light can propagate only in a straight line, without bouncing, yielding a single-mode
fiber. Single-mode fibers are more expensive but are widely used for longer distances. Currently
available single-mode fibers can transmit data at 100 Gbps for 100 km without amplification.
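As a minimal numeric sketch of the critical angle mentioned in point 4: the refractive indices below are typical textbook values (roughly 1.46 for fused silica and 1.00 for air) assumed for illustration, not given in the text.

```python
import math

n_silica = 1.46   # assumed refractive index of the fused-silica core
n_air = 1.00      # assumed refractive index of air

# Snell's law: total internal reflection occurs for incidence angles
# (measured from the normal) above arcsin(n_air / n_silica).
critical_angle = math.degrees(math.asin(n_air / n_silica))
print(f"critical angle = {critical_angle:.1f} degrees")   # about 43 degrees
```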
Transmission of Light through Fiber:
Optical fibers are made of glass, which, in turn, is made from sand. Glass transparent enough to be useful
for windows was developed during the Renaissance. The attenuation of light through glass depends on the
wavelength of the light. It is defined as the ratio of input to output signal power. For the kind of glass used
in fibers, the attenuation is shown in Fig. 2.3 in units of decibels per linear kilometer of fiber.
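Since attenuation is the ratio of input to output signal power expressed in decibels per kilometer of fiber, the power remaining after a given length follows directly. A minimal sketch, assuming an illustrative attenuation of 0.2 dB/km (a value chosen for the example, not read off Fig. 2.3):

```python
def power_out(power_in_mw, atten_db_per_km, length_km):
    """Remaining optical power after a fiber run.

    Attenuation in dB is 10*log10(P_in / P_out), so
    P_out = P_in * 10 ** (-(dB/km * km) / 10).
    """
    total_db = atten_db_per_km * length_km
    return power_in_mw * 10 ** (-total_db / 10)

print(power_out(1.0, 0.2, 50))   # 1 mW in, 0.2 dB/km, 50 km -> 0.1 mW out
```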
Fig 2.4 (a) Side view of a single fiber, (b) End view of a sheath with three fibers
The core is surrounded by a glass cladding with a lower index of refraction than the core, to keep all the
light in the core. Next comes a thin plastic jacket to protect the cladding. Fibers are typically grouped in
bundles, protected by an outer sheath. Figure 2.4(b) shows a sheath with three fibers.
Fourier Analysis:
Any reasonably behaved periodic function, g(t), with period T can be constructed as the sum of a (possibly infinite) number of sines and cosines:

$$g(t) = \frac{1}{2}c + \sum_{n=1}^{\infty} a_n \sin(2\pi n f t) + \sum_{n=1}^{\infty} b_n \cos(2\pi n f t) \qquad (2\text{-}1)$$
where f = 1/T is the fundamental frequency, a_n and b_n are the sine and cosine amplitudes of the nth
harmonics (terms), and c is a constant. Such a decomposition is called a Fourier series. From the Fourier
series, the function can be reconstructed. That is, if the period, T, is known and the amplitudes are given,
the original function of time can be found by performing the sums of Eq. (2-1).
A data signal that has a finite duration, which all of them do, can be handled by just imagining that it
repeats the entire pattern over and over forever (i.e., the interval from T to 2T is the same as from 0 to T,
etc.).
The a_n amplitudes can be computed for any given g(t) by multiplying both sides of Eq. (2-1) by sin(2πkft)
and then integrating from 0 to T. Since

$$\int_{0}^{T} \sin(2\pi k f t)\,\sin(2\pi n f t)\,dt = \begin{cases} 0 & \text{for } k \ne n \\ T/2 & \text{for } k = n, \end{cases}$$

only one term of the summation survives: a_n. The b_n summation vanishes completely. Similarly, by
multiplying Eq. (2-1) by cos(2πkft) and integrating between 0 and T, we can derive b_n. By just integrating
both sides of the equation as it stands, we can find c. The results of performing these operations are as
follows:

$$a_n = \frac{2}{T}\int_{0}^{T} g(t)\sin(2\pi n f t)\,dt \qquad b_n = \frac{2}{T}\int_{0}^{T} g(t)\cos(2\pi n f t)\,dt \qquad c = \frac{2}{T}\int_{0}^{T} g(t)\,dt$$
The root-mean-square amplitudes, $\sqrt{a_n^2 + b_n^2}$, for the first few terms are shown on the
right-hand side of the figure. The squares of these values are proportional to the energy transmitted at the
corresponding frequency.
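To make the computation of the amplitudes concrete, here is a small numerical sketch. The pulse shape, period, and number of harmonics are illustrative choices, not the example used in the figure; the integrals of Eq. (2-1) are approximated by averages over uniformly spaced samples.

```python
import numpy as np

T = 1.0                       # assumed period (illustrative)
f = 1.0 / T                   # fundamental frequency
t = np.linspace(0, T, 10000, endpoint=False)

# Illustrative signal: a rectangular pulse occupying the first quarter of
# the period (standing in for "a data signal repeated forever").
g = (t < T / 4).astype(float)

# (2/T) * integral over [0, T] equals 2 * mean for uniform samples.
def a(n):                     # sine amplitude of the nth harmonic
    return 2.0 * np.mean(g * np.sin(2 * np.pi * n * f * t))

def b(n):                     # cosine amplitude of the nth harmonic
    return 2.0 * np.mean(g * np.cos(2 * np.pi * n * f * t))

c = 2.0 * np.mean(g)          # constant term of Eq. (2-1)

for n in range(1, 6):
    rms = np.sqrt(a(n) ** 2 + b(n) ** 2)   # root-mean-square amplitude
    print(f"harmonic {n}: a={a(n):+.3f}  b={b(n):+.3f}  rms={rms:.3f}")
```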
No transmission facility can transmit signals without losing some power in the process. If all the Fourier
components were equally diminished, the resulting signal would be reduced in amplitude but not distorted.
Unfortunately, all transmission facilities diminish different Fourier components by different amounts, thus
introducing distortion. Usually, for a wire, the amplitudes are transmitted mostly undiminished from 0 up
to some frequency fc [measured in cycles/sec or Hertz (Hz)], with all frequencies above this cutoff
frequency attenuated. The width of the frequency range transmitted without being strongly attenuated is
called the bandwidth.
The bandwidth is a physical property of the transmission medium that depends on the construction,
thickness, and length of a wire or fiber. Filters are often used to further limit the bandwidth of a signal.
802.11 wireless channels are allowed to use up to roughly 20 MHz.
Signals that run from 0 up to a maximum frequency are called baseband signals. Signals that are shifted
to occupy a higher range of frequencies are called pass-band signals.
The Maximum Data Rate of a Channel:
In 1924, an AT&T engineer, Henry Nyquist, realized that even a perfect channel has a finite transmission
capacity. He derived an equation expressing the maximum data rate for a finite-bandwidth noiseless channel.
In 1948, Claude Shannon carried Nyquist's work further and extended it to the case of a channel subject to
random (thermal) noise.
Nyquist proved that if an arbitrary signal has been run through a low-pass filter of bandwidth B, the
filtered signal can be completely reconstructed by making only 2B (exact) samples per second. Sampling
the line faster than 2B times per second is pointless because the higher-frequency components that such
sampling could recover have already been filtered out. If the signal consists of V discrete levels, Nyquist's
theorem states:

$$\text{maximum data rate} = 2B \log_2 V \ \text{bits/sec}$$
If random noise is present, the situation deteriorates rapidly. The amount of thermal noise present is
measured by the ratio of the signal power to the noise power, called the SNR (Signal-to-Noise Ratio). If
we denote the signal power by S and the noise power by N, then the signal-to-noise ratio is S/N. The ratio
is expressed on a log scale as the quantity 10 log10 S/N because it can vary over a tremendous range. The
units of this log scale are called decibels (dB), with "deci" meaning 10 and "bel" chosen to honor
Alexander Graham Bell, who invented the telephone. An S/N ratio of 10 is 10 dB, a ratio of 100 is 20 dB,
a ratio of 1000 is 30 dB, and so on.
Shannon's major result is that the maximum data rate or capacity of a noisy channel whose bandwidth is
B Hz and whose signal-to-noise ratio is S/N is given by:

$$\text{maximum number of bits/sec} = B \log_2 (1 + S/N)$$
Shannon’s result was derived from information-theory arguments and applies to any channel subject to
thermal noise.
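A small sketch putting the two limits side by side; the 3000-Hz bandwidth and 30-dB signal-to-noise ratio are illustrative values chosen for the example, not taken from the text.

```python
import math

def nyquist_limit(bandwidth_hz, levels):
    """Noiseless channel: maximum data rate = 2B log2(V) bits/sec."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_limit(bandwidth_hz, snr_db):
    """Noisy channel: capacity = B log2(1 + S/N) bits/sec."""
    snr = 10 ** (snr_db / 10)          # convert dB back to a plain power ratio
    return bandwidth_hz * math.log2(1 + snr)

# Illustrative 3000-Hz channel
print(nyquist_limit(3000, 2))          # binary signal -> 6000 bits/sec
print(shannon_limit(3000, 30))         # 30 dB (S/N = 1000) -> roughly 30,000 bits/sec
```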
DIGITAL MODULATION AND MULTIPLEXING
Wires and wireless channels carry analog signals such as continuously varying voltage, light intensity, or
sound intensity. To send digital information, we must devise analog signals to represent bits. The process
of converting between bits and signals that represent them is called digital modulation.
In general, there are two types of schemes in use:
Baseband Transmission: The signal occupies frequencies from zero up to a maximum
that depends on the signaling rate. It is common for wires.
Passband Transmission: Schemes that regulate the amplitude, phase, or frequency of
a carrier signal are used to convey bits. The signal occupies a band of frequencies around
the frequency of the carrier signal.
Baseband Transmission:
The most straightforward form of digital modulation is to use a positive voltage to represent a 1 and a
negative voltage to represent a 0. For an optical fiber, the presence of light might represent a 1 and the
absence of light might represent a 0. This scheme is called NRZ (Non-Return-to-Zero).
Once sent, the NRZ signal propagates down the wire. At the other end, the receiver converts it into bits by
sampling the signal at regular intervals of time. This signal will not look exactly like the signal that was
sent. It will be attenuated and distorted by the channel and noise at the receiver. To decode the bits, the
receiver maps the signal samples to the closest symbols. For NRZ, a positive voltage will be taken to
indicate that a 1 was sent and a negative voltage will be taken to indicate that a 0 was sent.
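A minimal sketch of NRZ encoding and of the receiver's nearest-symbol decision; the voltage levels and the noisy sample values are illustrative, not from any particular standard.

```python
# NRZ sketch: +1 V represents a 1 bit, -1 V represents a 0 bit.
def nrz_encode(bits):
    return [+1.0 if b else -1.0 for b in bits]

def nrz_decode(samples):
    # The receiver maps each sample to the closest symbol:
    # positive -> 1, negative -> 0.
    return [1 if s > 0 else 0 for s in samples]

sent     = [1, 0, 0, 1, 1, 0]
received = [0.8, -0.6, -1.1, 0.4, 0.9, -0.3]   # attenuated, noisy samples
assert nrz_decode(received) == sent
```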
More complex schemes can convert bits to signals. These schemes are called line codes. The diagram
below shows several line codes:
Fig 2.6: Line codes: (a) Bits, (b) NRZ, (c) NRZI, (d) Manchester, (e) Bipolar or AMI
Bandwidth Efficiency:
With NRZ, the signal may cycle between the positive and negative levels up to every 2 bits (in the case of
alternating 1s and 0s). This means that we need a bandwidth of at least B/2 Hz when the bit rate is B
bits/sec. Bandwidth is often a limited resource, even for wired channels. Higher-frequency signals are
increasingly attenuated, making them less useful, and higher-frequency signals also require faster
electronics.
The strategy for using limited bandwidth more efficiently is to use more than two signaling levels. By
using four voltages, for instance, we can send 2 bits at once as a single symbol. The rate at which the
signal changes is then half the bit rate, so the required bandwidth is reduced. Some definitions are:
Symbol Rate: The rate at which the signal changes is called the symbol rate. The
older name for the symbol rate is the baud rate.
Bit Rate: The bit rate is the symbol rate multiplied by the number of bits per symbol (see the worked example below).
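A quick worked example (numbers chosen for illustration, not taken from the text): with four voltage levels, each symbol carries log2 4 = 2 bits, so

$$\text{bit rate} = \text{symbol rate} \times \log_2 V = 1000\ \text{symbols/sec} \times 2\ \text{bits/symbol} = 2000\ \text{bits/sec}.$$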
Clock Recovery:
For all schemes that encode bits into symbols, the receiver must know when one symbol ends and the next
symbol begins to correctly decode the bits. With NRZ, in which the symbols are simply voltage levels, a
long run of 0s or 1s leaves the signal unchanged. After a while it is hard to tell the bits apart, as 15 zeros
look much like 16 zeros unless you have a very accurate clock.
One strategy is to send a separate clock signal to the receiver, but it is wasteful for most network links
since if we had another line to send a signal we could use it to send data. The alternative here is to mix the
clock signal with the data signal by XORing them together so that no extra line is needed. The results are
shown in Fig. 2.6(d). When the clock is XORed with the 0 level it makes a low-to-high transition that is
simply the clock. This transition is a logical 0. When it is XORed with the 1 level it is inverted and makes
a high-to-low transition. This transition is a logical 1. This scheme is called Manchester encoding and
was used for classic Ethernet.
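A minimal sketch of Manchester encoding as described above, with each bit period shown as two half-period levels. The 0 → low-to-high and 1 → high-to-low mapping follows the XOR-with-clock description in the text (some other descriptions use the opposite convention).

```python
# Manchester encoding sketch: each bit is XORed with the clock, so every
# bit period contains a mid-bit transition the receiver can lock onto.
# A bit period is represented here by two half-period samples (low=0, high=1).
def manchester_encode(bits):
    signal = []
    for b in bits:
        if b == 0:
            signal += [0, 1]   # low-to-high transition encodes a 0
        else:
            signal += [1, 0]   # high-to-low transition encodes a 1
    return signal

print(manchester_encode([1, 0, 0, 0, 1]))
```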
Drawback of Manchester Encoding:
The downside of Manchester encoding is that it requires twice as much bandwidth as NRZ because of the
clock. To overcome this, the well-known 4B/5B code is used. Every 4 bits is mapped into a 5-bit pattern
using a fixed translation table. The 5-bit patterns are chosen so that there will never be a run of more than
three consecutive 0s. The mapping is shown in Fig. 2.6.1. This scheme adds 25% overhead, which is
better than the 100% overhead of Manchester encoding. Since there are 16 input combinations and 32
output combinations, some of the output combinations are not used.
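The translation table itself is in Fig. 2.6.1 and is not reproduced here; the sketch below only shows how one could check the claimed property, namely that no concatenation of codewords ever contains a run of more than three consecutive 0s, for whatever table is supplied. The two entries shown are for illustration only and should be checked against Fig. 2.6.1.

```python
from itertools import product

# Partial table for illustration only -- the authoritative 4-bit -> 5-bit
# mapping is the one in Fig. 2.6.1.
TABLE = {
    "0000": "11110",
    "1111": "11101",
}

def max_zero_run(bits):
    # Length of the longest run of consecutive 0s in a bit string.
    return max((len(run) for run in bits.split("1")), default=0)

def table_ok(table):
    # The guarantee must hold across every pair of adjacent codewords,
    # since runs of 0s can span a codeword boundary.
    return all(max_zero_run(a + b) <= 3
               for a, b in product(table.values(), repeat=2))

print(table_ok(TABLE))   # True for the entries above
```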
FSK (Frequency Shift Keying): In this, two or more different tones are used. The example in
Fig. 2.7(c) uses just two frequencies.
PSK (Phase Shift Keying): In the simplest form of PSK (Phase Shift Keying), the carrier wave
is systematically shifted 0 or 180 degrees at each symbol period. Because there are two phases, it
is called BPSK (Binary Phase Shift Keying). ‘‘Binary’’ here refers to the two symbols, not that
the symbols represent 2 bits. An example is shown in Fig. 2.7(d).
QPSK (Quadrature Phase Shift Keying): A better scheme that uses the channel bandwidth more
efficiently is to use four shifts, e.g., 45, 135, 225, or 315 degrees, to transmit 2 bits of
information per symbol. This version is called QPSK (Quadrature Phase Shift Keying).
Fig 2.7: (a) Binary Signal, (b) Amplitude Shift keying, (c) Frequency Shift keying, (d) Phase Shift keying
The above schemes can be combined, and more levels can be used to transmit more bits per symbol. Only one of
frequency and phase can be modulated at a time because they are related, with the frequency being the rate
of change of phase over time. Usually, amplitude and phase are modulated in combination. Three
examples are shown in Fig. 2.7.1, which gives the legal amplitude and phase
combinations of each symbol. In Fig. 2.7.1(a), equidistant dots are shown at 45, 135, 225, and 315
degrees. The phase of a dot is indicated by the angle a line from it to the origin makes with the positive x-axis.
This figure is a representation of QPSK.
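A minimal sketch of the QPSK mapping (2 bits per symbol onto the four phases named above). The particular bit-pair-to-phase assignment is an illustrative choice, not taken from any standard.

```python
import cmath, math

# QPSK sketch: each pair of bits is sent as one of four phases
# (45, 135, 225, 315 degrees), so every symbol carries 2 bits.
PHASES = {(0, 0): 45, (0, 1): 135, (1, 1): 225, (1, 0): 315}

def qpsk_symbols(bits):
    pairs = zip(bits[0::2], bits[1::2])
    # Represent each symbol as a unit-amplitude point in the
    # constellation diagram (a complex number).
    return [cmath.exp(1j * math.radians(PHASES[p])) for p in pairs]

print(qpsk_symbols([0, 0, 1, 1, 1, 0]))
```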
TDM is used widely as part of the telephone and cellular networks. To avoid one point of confusion, let us
be clear that it is quite different from the alternative STDM (Statistical Time Division Multiplexing).
The prefix ‘‘statistical’’ is added to indicate that the individual streams contribute to the multiplexed
stream not on a fixed schedule, but according to the statistics of their demand. STDM is packet switching
by another name.
Synchronous Time Division Multiplexing:
In synchronous TDM, each input connection has an allotment in the output even if it is not sending data.
In synchronous TDM, the data flow of each input connection is divided into units, where each input
occupies one input time slot. A unit can be 1 bit, one character, or one block of data. Each input unit
becomes one output unit and occupies one output time slot. The duration of an output time slot is n times
shorter than the duration of an input time slot. It is shown in the following figure 2.8.1:
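A small sketch of the slot interleaving described above, assuming character-sized data units and three input connections (the stream contents are illustrative).

```python
# Synchronous TDM sketch: each of the n inputs gets one slot per output
# frame, whether or not it has data to send (empty slots are still sent).
def tdm_frames(inputs, fill="-"):
    # inputs: one list of data units per connection
    n_frames = max(len(s) for s in inputs)
    frames = []
    for i in range(n_frames):
        frame = [s[i] if i < len(s) else fill for s in inputs]
        frames.append(frame)
    return frames

streams = [["A1", "A2", "A3"], ["B1"], ["C1", "C2"]]
for frame in tdm_frames(streams):
    print(frame)   # ['A1','B1','C1'], then ['A2','-','C2'], then ['A3','-','-']
```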
Slot Size:
Since a slot carries both data and an address in statistical TDM, the ratio of the data size to address size
must be reasonable to make transmission efficient.
No Synchronization Bit:
There is another difference between synchronous and statistical TDM, but this time it is at the frame level.
The frames in statistical TDM need not be synchronized, so we do not need synchronization bits.
Bandwidth:
In statistical TDM, the capacity of the link is normally less than the sum of the capacities of each channel.
The designers of statistical TDM define the capacity of the link based on the statistics of the load for each
channel.
Frequency Division Multiplexing:
FDM (Frequency Division Multiplexing) is an analog technique that can be applied when the bandwidth
of a link (in hertz) is greater than the combined bandwidths of the signals to be transmitted. The signals do
not overlap with each other's bandwidth range, because the signals are modulated onto different carrier
frequencies. The channels can be separated by strips of unused bandwidth (guard bands) to prevent signals
from overlapping. Based on the channel bandwidth, the number of subchannels is limited. FDM is used in
radio and TV broadcasting. FDM is an analog multiplexing technique; however, this does not mean
that FDM cannot be used to combine sources sending digital signals. A digital signal can be converted to
an analog signal before FDM is used to multiplex them.
Multiplexing Process:
Each source generates a signal of a similar frequency range.
Inside the multiplexer, these similar signals modulate different carrier frequencies (F1, F2, and
F3).
The resulting modulated signals are then combined into a single composite signal that is sent out
over a media link that has enough bandwidth to accommodate it.
Advantages of FDM:
FDM does not need synchronization between its transmitter and receiver for proper operation.
Demodulation of FDM is easy.
It supports full-duplex information flow, which is required by most applications.
Disadvantages of FDM:
The communication channel must have a very large bandwidth.
Intermodulation distortion takes place, so a problem for one user can sometimes affect others.
A large number of modulators and filters are required.
FDM suffers from the problem of crosstalk.
The initial cost is high.
Implementation:
FDM can be implemented very easily. In many cases, such as radio and television broadcasting,
there is no need for a physical multiplexer or de-multiplexer. As long as the stations agree to
send their broadcasts to the air using different carrier frequencies, multiplexing is achieved.
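A minimal sketch of the multiplexing process described above, using NumPy to shift three illustrative baseband tones onto three assumed carrier frequencies (F1, F2, F3) and sum them into one composite signal; all frequencies are example values.

```python
import numpy as np

fs = 100_000                              # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)            # 10 ms of signal

# Three baseband signals with a similar frequency range (illustrative tones).
basebands = [np.sin(2 * np.pi * f * t) for f in (300, 500, 700)]

# Assumed carrier frequencies F1, F2, F3, spaced far enough apart that the
# modulated signals do not overlap (guard space between them).
carriers_hz = (10_000, 20_000, 30_000)

# Each baseband modulates its carrier; the results add into one composite.
composite = sum(m * np.cos(2 * np.pi * fc * t)
                for m, fc in zip(basebands, carriers_hz))
print(composite[:5])
```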
Code Division Multiplexing:
This is a third kind of multiplexing that works in a completely different way from FDM and TDM. CDM
(Code Division Multiplexing) is a form of spread spectrum communication in which a narrowband
signal is spread out over a wider frequency band. This can make it more tolerant of interference, as well as
allowing multiple signals from different users to share the same frequency band. Because code division
multiplexing is mostly used for the latter purpose it is commonly called CDMA (Code Division Multiple
Access).
CDMA allows each station to transmit over the entire frequency spectrum all the time. Multiple
simultaneous transmissions are separated using coding theory. In CDMA, each bit time is subdivided into
“m” short intervals called chips. Typically, there are 64 or 128 chips per bit. Each station is assigned a
unique m-bit code called a chip sequence. For pedagogical purposes, it is convenient to use a bipolar
notation to write these codes as sequences of −1 and +1. We will show chip sequences in parentheses.
To transmit a 1 bit, a station sends its chip sequence. To transmit a 0 bit, it sends the negation of its chip
sequence. No other patterns are permitted. Thus, for m = 8, if station A is assigned the chip sequence (−1
−1 −1 +1 +1 −1 +1 +1), it can send a 1 bit by transmitting the chip sequence and a 0 by transmitting (+1
+1 +1 −1 −1 +1 −1 −1). It is really signals with these voltage levels that are sent, but it is sufficient for us
to think in terms of the sequences.
In Fig. 2.10(a) and (b), the chip sequences assigned to four example stations and the signals that they
represent are shown. Each station has its own unique chip sequence.
Fig 2.10 (a) Chip Sequence for four stations. (b) Signals the sequence represents (c) Six examples of transmissions (d)
Recovery of station C’s signal
Let S denote the m-chip vector for station S, and $\bar{S}$ its negation. All chip sequences
are pairwise orthogonal, by which we mean that the normalized inner product of any two distinct chip
sequences, S and T (written as $S \cdot T$), is 0. It is known how to generate such orthogonal chip sequences
using a method known as Walsh codes. In mathematical terms, orthogonality of the chip sequences can be
expressed as follows:

$$S \cdot T \equiv \frac{1}{m}\sum_{i=1}^{m} S_i T_i = 0$$

Note that if $S \cdot T = 0$, then $S \cdot \bar{T}$ is also 0. The normalized inner product of any chip sequence with itself is 1:

$$S \cdot S = \frac{1}{m}\sum_{i=1}^{m} S_i S_i = \frac{1}{m}\sum_{i=1}^{m} S_i^{2} = \frac{1}{m}\sum_{i=1}^{m} (\pm 1)^{2} = 1$$

This follows because each of the m terms in the inner product is 1, so the sum is m. Also note that $S \cdot \bar{S} = -1$.
During each bit time, a station can transmit a 1 (by sending its chip sequence), it can transmit a 0 (by
sending the negative of its chip sequence), or it can be silent and transmit nothing. When two or more
stations transmit simultaneously, their bipolar sequences add linearly. For example, if in one chip period
three stations output +1 and one station outputs −1, +2 will be received. One can think of this as signals
that add as voltages superimposed on the channel: three stations output +1 V and one station outputs −1 V,
so that 2 V is received. For instance, in Fig. 2.10(c) there are six examples of one or more stations
transmitting 1 bit at the same time. In the first example, C transmits a 1 bit, so we just get C's chip
sequence. In the second example, both B and C transmit 1 bits, so we get the sum of their bipolar chip
sequences, namely the element-wise sum of B's and C's sequences from Fig. 2.10(a).
To recover the bit stream of an individual station, the receiver must know that station’s chip sequence in
advance. It does the recovery by computing the normalized inner product of the received chip sequence
and the chip sequence of the station whose bit stream it is trying to recover. If the received chip sequence
is S and the receiver is trying to listen to a station whose chip sequence is C, it just computes the
normalized inner product, $S \cdot C$.
To make the decoding process more concrete, six examples are shown in Fig. 2.10(d). Suppose that the
receiver is interested in extracting the bit sent by station C from each of the six signals S1 through S6. It
calculates the bit by summing the pair-wise products of the received S and the C vector of Fig. 2.10(a) and
then taking 1/8 of the result (since m = 8 here). The examples include cases where C is silent, sends a 1
bit, and sends a 0 bit, individually and in combination with other transmissions.
In principle, given enough computing capacity, the receiver can listen to all the senders at once by running
the decoding algorithm for each of them in parallel.
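A small end-to-end sketch of the encoding and decoding just described. The chip sequences of Fig. 2.10(a) are not reproduced here; instead the code generates length-8 Walsh codes, which are pairwise orthogonal, and assigns one hypothetical sequence per station.

```python
import numpy as np

# Walsh codes of length m are pairwise orthogonal, so they serve here as
# stand-in chip sequences for stations A-D (hypothetical, not the ones in
# Fig. 2.10(a)).
def walsh(n):                       # n must be a power of two
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

m = 8
codes = walsh(m)
chips = {"A": codes[1], "B": codes[2], "C": codes[3], "D": codes[4]}

# What each station does during one bit time (None = silent).
sending = {"A": None, "B": 1, "C": 0, "D": 1}

# A 1 bit is sent as the chip sequence, a 0 bit as its negation;
# simultaneous transmissions add linearly on the channel.
received = np.zeros(m)
for name, bit in sending.items():
    if bit is not None:
        received += chips[name] if bit == 1 else -chips[name]

# The receiver recovers station C's bit with the normalized inner product:
# +1 means C sent a 1, -1 means C sent a 0, 0 means C was silent.
print(np.dot(received, chips["C"]) / m)   # -1.0 here, since C sent a 0 bit
```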