3D153 - Volume 1


CDC 3D153

RF Transmission Systems
Journeyman

Volume 1. General Subjects

___________

Air Force Career Development Academy
The Air University
Air Education and Training Command

3D153 01 1206, Edit Code 02


AFSC 3D153
Author: MSgt Charles A. (Tony) Sargent
338th Training Squadron
RF Transmission Systems Training Flight (AETC)
338 TRS/VEAA
809 Hercules St, Room 114
Keesler Air Force Base, Mississippi 39534–2025
DSN: 597–3870
E-mail address: [email protected]

Instructional Systems
Specialist: Linda J. Matthews

Editor: Elizabeth S. Melton

Air Force Career Development Academy (AFCDA)


The Air University (AETC)
Maxwell-Gunter Air Force Base, Alabama 36118–5643

Material in this volume is reviewed annually for technical accuracy, adequacy, and currency. For SKT purposes
the examinee should check the Weighted Airman Promotion System Catalog to determine the correct references
to study.
Preface

CONGRATULATIONS on your enrollment in the 3D153, RF Transmission Systems Journeyman,
Career Development Course (CDC). As a Radio Frequency Transmission Systems technician, you
will have the opportunity to maintain many types of communications systems used in various
missions.
The 3D153 CDC provides you with the general knowledge needed to understand and perform your duties
as an RF Transmission Systems technician. This CDC will provide a portion of the needed
requirements for upgrade from the 3-skill level to the 5-skill level. The specialty training standard
(STS) is the tool used to identify the knowledge requirements for upgrade to the 5-skill level. Ask
your supervisor to go over it with you. Upon successful completion of the 3DX5X and 3D153 CDCs,
the CDC portion of your upgrade training will be fulfilled. This course alone will not qualify you for
the award of the AFSC 3D153.
The 3D153 CDC is divided into three volumes:
Volume 1, General Subjects, discusses communication principles, protecting our equipment from
electromagnetic interference and pulses, and various types of test equipment.
Volume 2, Advanced Communications Concepts, discusses antenna and electromagnetic wave
propagation theories, linking and jam-resistant communications, and theater deployed
communications.
Volume 3, Wideband and Satellite Communications, discusses wideband and satellite
communications principles, satellite and space systems, and bandwidth management.
Volume 1 contains five units, covering basic subjects:
Unit 1 contains basic communications principles such as amplitude, frequency and phase modulation.
It also discusses digital transmission as well as lightwave communications.
Unit 2 covers protecting electronic equipment from electromagnetic interferences and pulses that can
damage equipment and/or inhibit communications.
Unit 3 and unit 4 cover various types of test equipment that RF transmission systems technicians will
use throughout their careers.
Unit 5 discusses installation practices. It also discusses grounding of equipment and powering of
equipment.
A glossary of abbreviations and acronyms is included for your use.
Code numbers on figures are for preparing agency identification only.
The use of a name of any specific manufacturer, commercial product, commodity, or service in this
publication does not imply endorsement by the Air Force.
To get a response to your questions concerning subject matter in this course, or to point out technical
errors in the text, unit review exercises, or course examination, call or write the author using the
contact information on the inside front cover of this volume.
NOTE: Do not use the IDEA Program to submit corrections for printing or typographical errors.
Consult your education officer, training officer, or NCOIC if you have questions on course
enrollment, administration, or irregularities (possible scoring errors, printing errors, etc.) on unit
review exercises or course examination. For these and other administrative issues, you may email the
Air University e-Campus Support (helpdesk) at [email protected]. You should
receive a response in four days or less.
This volume is valued at 21 hours and 7 points.

Acknowledgement
GRATEFUL acknowledgement is made to the members of the 338 TRS RF Transmission Systems
flight. This CDC was aided through the cooperation of many people; their contributions to this CDC
are greatly appreciated.
Preparation of this volume was aided through the cooperation and courtesy of Fluke Corporation,
Tektronix, Inc., Hewlett Packard, Bird Electronic Corporation and Liebert Corporation who furnished
their commercial technical materials on Fluke 8025A Multimeter, TDS744A Digital Oscilloscope, HP
436 Power Meter, Thruline Wattmeter RF Power Analyst Model 4391M and Roles of UPS Topology.
Permission to use this information is gratefully acknowledged.
Fluke Corporation granted permission to reproduce the illustration of their 8025A Multimeter.
Tektronix Incorporated furnished information on their TDS 744A Digital Oscilloscopes.
Hewlett Packard granted permission to reproduce the illustration of their 436 Power Meter.
Bird Electronic Corporation granted us permission to use information and illustrations on their
Thruline Wattmeter RF Power Analyst Model 4391M.
Liebert Corporation granted permission to reproduce illustrations from “Protecting Critical Systems
During Utility Outages: The Role of UPS Topology.”
Thank you all for your cooperation.

NOTE:
In this volume, the subject matter is divided into self-contained units. A unit menu begins each unit,
identifying the lesson headings and numbers. After reading the unit menu page and unit introduction,
study the section, answer the self-test questions, and compare your answers with those given at the
end of the unit. Then do the unit review exercises.
Contents

Page
Unit 1. Communications Principles ....................................................................................1–1 
1–1. Modulation ................................................................................................................. 1–1 
1–2. Digital Transmission Techniques ............................................................................. 1–18 
1–3. Lightwave Communications .................................................................................... 1–28 
Unit 2. Protecting Electronic Equipment...........................................................................2–1 
2–1. Electromagnetic Pulse ................................................................................................ 2–1 
2–2. Electromagnetic Interference ..................................................................................... 2–6 
Unit 3. General Purpose Instruments ................................................................................3–1 
3–1. Multimeter .................................................................................................................. 3–1 
3–2. Oscilloscopes ............................................................................................................. 3–6 
3–3. Bit Error Rate Test Set (BERTS) ............................................................................. 3–21 
3–4. Earth Ground Tester ................................................................................................. 3–31 
3–5. Protocol Analyzer ...................................................................................... 3–34 
Unit 4. Radio Frequency Test Equipment .........................................................................4–1 
4–1. Radio Frequency Signal Generators ........................................................................... 4–1 
4–2. Measuring Power ....................................................................................................... 4–3 
4–3. Measuring Frequency ............................................................................................... 4–14 
4–4. Communication Analyzer ........................................................................................ 4–17
Unit 5. Standard Practices ..................................................................................................5–1 
5–1. Installation and Inspection ......................................................................................... 5–1 
5–2. Grounding, Bonding, and Shielding Systems ............................................................ 5–7 
5–3. Powering Electrical Equipment................................................................................ 5–13 

Glossary............................................................................................................................................. G–1
Unit 1. Communications Principles
1–1. Modulation ................................................................................................................................ 1–1 
001. Reasons for modulation ..................................................................................................................... 1–1 
002. Amplitude modulation ....................................................................................................................... 1–2 
003. Frequency modulation ....................................................................................................................... 1–5 
004. Phase modulation ............................................................................................................................. 1–13 
1–2. Digital Transmission Techniques .......................................................................................... 1–18 
005. Pulse code modulation ..................................................................................................................... 1–19 
006. Asynchronous and synchronous transmission ................................................................................. 1–23 
007. Error detection and correction ......................................................................................................... 1–24 
1–3. Lightwave Communications .................................................................................................. 1–28 
008. Optical transmitter ........................................................................................................................... 1–28 
009. Optical receiver................................................................................................................................ 1–32 
010. Repeaters, regenerators, and optical amplifiers ............................................................................... 1–34 
011. Fiber optic connectors...................................................................................................................... 1–35 

INFORMATION-BEARING SIGNALS travel through a medium that separates the transmit
equipment from the receive equipment. In radios, the medium is air. In some digital applications,
the medium could be coaxial or fiber optic cable. For efficient transmission, whether we make our
measurement in terms of bandwidth, power required or complexity of circuits, the information must
be processed in some manner before transmitting it. This signal processing is called “modulation.”
This unit will cover various fundamentals of communication principles to include modulation. Digital
transmission techniques and lightwave communications will also be discussed.

1–1. Modulation
Modulation is defined as the process by which the characteristics of one signal are varied by a second
signal. The higher radio frequency (RF) signal, which is modulated, is called the carrier, while the
lower frequency signal, such as audio, which varies the carrier, is called the modulating signal. In
radio communication systems, the intelligence to be transmitted is modulated onto an RF
carrier. The type of modulation employed varies the amplitude, frequency or phase of the modulated
wave. As an RF transmission systems technician, you will need to know about modulation as it
applies to RF communications.

001. Reasons for modulation


Ease of radiation is one reason to modulate a signal. An efficient radiator (antenna) of RF energy
must be no smaller than one-quarter wavelength of the radiated signal. Since one wavelength of a 1
kHz tone is a little more than 186 miles and one quarter of this is about 46.5 miles, this makes
transmissions over long distances impractical at frequencies this low. Modulation allows us to
transmit low frequency signals such as audio at higher radio frequencies, allowing antenna sizes to be
reasonable. Additionally, to radiate digital signals over long distances, we must first change them to
an analog format by using a modulation technique.
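The antenna-size arithmetic above can be sketched in a few lines of Python. The only assumptions are the round-number propagation speed (3.0 × 10⁸ m/s) and the meters-per-mile conversion; the function name is ours, chosen for illustration:

```python
# Quarter-wavelength antenna size: why radiating low frequencies directly
# is impractical. Wavelength = c / f.

C = 3.0e8               # propagation speed of RF energy, in m/s
METERS_PER_MILE = 1609.344

def quarter_wave_miles(freq_hz: float) -> float:
    """Return one quarter-wavelength, in miles, for the given frequency."""
    wavelength_m = C / freq_hz
    return (wavelength_m / 4) / METERS_PER_MILE

# A 1 kHz audio tone would need an antenna tens of miles long...
print(f"1 kHz quarter-wave: {quarter_wave_miles(1e3):.1f} miles")

# ...while a 100 MHz FM carrier needs well under a meter.
print(f"100 MHz quarter-wave: {quarter_wave_miles(100e6) * METERS_PER_MILE:.2f} m")
```

This is why translating audio onto a high-frequency carrier makes antenna sizes reasonable.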
Channel allocation is another reason to modulate a signal. All audio signals occupy the same
frequency band between 20 Hz (hertz) and 20 kHz (kilohertz). Before an audio signal can be
broadcast, it must be translated to a specific frequency range in order to use the available frequency
spectrum. To do this, the audio signal (modulating signal) modulates a much higher radio
frequency (carrier frequency). Each audio signal is assigned to a carrier, a defined channel, so that it
is possible for the receiver to distinguish one signal from the other. Only through modulation can we
transmit many audio signals simultaneously by shifting their spectra using different carrier
frequencies.
In the world of RF transmissions today, there are literally dozens of different types of modulation
used. However, they all fall into three general categories: amplitude modulation (AM), frequency
modulation (FM), and phase modulation (PM). AM changes the height of the waveform as time goes
by; FM changes the frequency of the carrier wave, leaving the amplitude unchanged; and PM changes
the phase of successive waves. These changes contain the information. We will first look at the
simplest form of modulation—AM.

002. Amplitude modulation


In AM, the carrier itself does not fluctuate in amplitude. Instead, the modulating signal appears in the
form of signal components at frequencies slightly higher and lower than that of the carrier. Figure 1–1
illustrates how the amplitude of the composite waveform varies instant by instant as the modulating
signal level varies. In other words, information is contained in the variation of the modulated
waveform.

Figure 1–1. Amplitude modulation.

Amplitude modulation theory


Figure 1–1 shows the modulating signal (A) and the carrier signal (B) as inputs to the modulator and
the modulated waveform (C) is the output signal. Notice that the modulating signal is riding on top of
the carrier wave. The picture of the output waveform gives no clue that the modulating signal was
translated along the frequency scale. Early experimenters discovered this by examining the
frequencies of components that were present in the output waveform. They found that the output
signal contained not only the carrier wave frequency but also two new frequencies. One of these was
equal to the sum of the carrier frequency and the modulating signal frequency, called an upper
sideband (USB). The other frequency was equal to the difference between the carrier frequency and
the modulating frequency, called the lower sideband (LSB). Therefore, the three frequencies are the
unmodulated carrier, which conveys no information; the USB; and the LSB. The information we’re
trying to transfer from one point to another is contained in the sidebands. For example, assume a
carrier frequency of 1 MHz (megahertz) and a modulating tone of 10 kHz goes to the modulator. The
outputs include 1 MHz (carrier), 1.01 MHz (USB), and 0.99 MHz (LSB). Remember, the USB
and LSB carry the information. The information carrying components (sidebands) are on either side
of the carrier frequency and are spaced from it by an amount equal to the modulating signal
frequency.

Bandwidth of an AM signal
Bandwidth (BW) refers to the space the transmitted signal will occupy in the frequency spectrum.
Recall the transmitted frequencies in AM are the carrier signal and the upper and lower sidebands.
Referring to figure 1–2, we see the USB and LSB are each 10 kHz wide. Therefore, the BW of the
transmitted signal is 20 kHz. We can now state, from the preceding analysis, that the BW of an AM
signal is two times the modulating signal frequency. This holds true only when the carrier signal is
modulated up to, but not more than, 100 percent.
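As a quick check of the sideband and bandwidth arithmetic, the following Python sketch reproduces the 1 MHz carrier and 10 kHz tone example (the function name is ours, chosen for illustration):

```python
# AM sideband frequencies and occupied bandwidth:
# USB = carrier + modulating frequency, LSB = carrier - modulating frequency,
# and bandwidth = 2 x modulating frequency (at up to 100 percent modulation).

def am_spectrum(carrier_hz: float, modulating_hz: float):
    """Return (LSB, carrier, USB) frequencies and the occupied bandwidth."""
    usb = carrier_hz + modulating_hz   # upper sideband = sum
    lsb = carrier_hz - modulating_hz   # lower sideband = difference
    bandwidth = 2 * modulating_hz      # space occupied in the spectrum
    return lsb, carrier_hz, usb, bandwidth

lsb, fc, usb, bw = am_spectrum(1e6, 10e3)
print(f"LSB = {lsb/1e6} MHz, carrier = {fc/1e6} MHz, USB = {usb/1e6} MHz")
print(f"Bandwidth = {bw/1e3} kHz")   # 20 kHz for a 10 kHz tone
```

The information-carrying sidebands sit on either side of the carrier, spaced from it by the modulating frequency.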

Figure 1–2. Development of sideband.

Calculating percent of modulation


Modulation is the result of mixing an RF signal with an intelligence signal. The amount of effect or
change that the intelligence has on the carrier is expressed as the “percent of modulation.” The
“%mod,” as it is called, can be found using this formula:

%mod = (A ÷ R) × 100

Where:
A = the modulating signal amplitude (audio).
R = the carrier signal amplitude (RF).

Figure 1–3. AM wave form values.

Ordinarily, you don’t know the individual values of the audio and the carrier amplitudes. But you can
use an oscilloscope to display the modulation envelope. Referring to figure 1–3, you can see a
modulation envelope depicted in figure 1–3, C. Using this display, you can calculate the
modulation percentage using the following formula:

%mod = ((Emax − Emin) ÷ (Emax + Emin)) × 100

Where:
Emax = the maximum peak-to-peak voltage of the modulated envelope.
Emin = the minimum peak-to-peak voltage of the modulated envelope.

Using this formula and the values given in figure 1–3, C, let’s calculate the modulation percentage:

%mod = ((200 − 0) ÷ (200 + 0)) × 100 = 100 percent

Close observation of the wave form in figure 1–3, C shows the unmodulated carrier signal to be
100Vpk-pk (volts, peak to peak), while the measured maximum pk-pk voltage after modulation is 200
Vpk-pk. The minimum carrier voltage after modulation is 0Vpk-pk. Remember that in the composite
waveform there were two new frequencies produced during the modulation process. These two
frequencies contain the same intelligence and are equal in amplitude. If the unmodulated carrier in
figure 1–3, C is 100 Vpk-pk and the modulated carrier is 200 Vpk-pk, then the sum of the sidebands must
be 100 Vpk-pk. Since these two signals are equal, each sideband must be 50 Vpk-pk. This is demonstrated
graphically in figure 1–3, A. Any percent of modulation, which is less than that for which your
system is designed to use, is termed undermodulation. Anything greater than 100 percent is termed
overmodulation. With overmodulation, the BW of an AM signal will be increased considerably
because of the severe distortion produced. This distortion produces new frequencies (harmonics) that
were not in the modulating signal and, thus, causes interference with nearby channels. The recovered
information will no longer accurately reflect the transmitted intelligence. This is why we
usually run our radios at slightly less than 100 percent modulation.
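The envelope calculation can be checked with a short Python sketch using %mod = (Emax − Emin) ÷ (Emax + Emin) × 100. The 200/0 V values come from figure 1–3, C; the 150/50 V case is an added illustration of undermodulation:

```python
# Percent of modulation from an oscilloscope envelope display.

def percent_modulation(e_max: float, e_min: float) -> float:
    """Modulation percentage from the envelope's max/min pk-pk voltages."""
    return (e_max - e_min) / (e_max + e_min) * 100

# Figure 1-3, C: 200 Vpk-pk maximum, 0 Vpk-pk minimum -> 100 percent.
print(percent_modulation(200, 0))

# Halving the modulating signal: 150 V max, 50 V min -> 50 percent.
print(percent_modulation(150, 50))
```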
Power distribution
The total power radiated in the modulated carrier wave is equal to the sum of the power contained in
the separate components of the modulated wave. The best-case scenario is 100 percent modulation
because we get the maximum amount of power into the information-carrying sidebands.
Undermodulation reduces total power by reducing the power in the sidebands.
100 percent modulation
Figure 1–3, B shows the voltage levels of figure 1–3, A converted to power levels (based on a
characteristic impedance of 50 ohms). The power in the modulated signal breaks down this way:
 Total power is 300 watts.
 200 watts in the carrier.
 50 watts in each sideband.
Notice that the amount of power in the information-carrying sidebands is one-third of the total power.
That means that the AM process produces a signal where two-thirds of the output power is wasted! A
100 percent modulated signal is the best case scenario for maximum power transfer to the
information-carrying sidebands.
Undermodulation
If we decrease the amount of the modulating signal in the example, the amount of modulation will
decrease. If we cut our modulating signal by one-half, the modulation percentage would decrease by
the same amount. In figure 1–3,A the voltage in the carrier would still be 100 volts. The sideband
voltages would both drop to 25 volts. The power in the carrier is still 200 watts. Computing the power
for each sideband, using the values given, results in 12.5 watts per sideband, or 25 watts of total
sideband power. Total power in the modulated waveform is then found by adding carrier power to
sideband power: 200 + 25 = 225 watts. From this analysis of power distribution, it’s clear that at
100 percent modulation the following applies:
 Total sideband power is one-half the carrier power.
 The carrier contains two-thirds of the total power.
 Total sideband power is one-third of the total transmitted power.
 The sideband power is distributed equally in two sidebands.
 Each sideband contains one-sixth of the total transmitted power.
Reducing modulation to less than 100 percent (undermodulation) gives the following:
 No reduction in carrier power.
 Less power in the information carrying sidebands.
 Less total power.
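The power bookkeeping above can be verified with a short sketch, following the course's convention of P = V²/R applied directly to the voltages given in figure 1–3 with R = 50 ohms:

```python
# AM power distribution at 100 percent modulation and at 50 percent
# (undermodulation), using P = V^2 / R with the figure 1-3 voltages.

R = 50  # characteristic impedance in ohms

def power(volts: float) -> float:
    """Power dissipated in R for the given voltage, per the course's convention."""
    return volts ** 2 / R

# 100 percent modulation: 100 V carrier, 50 V in each of two sidebands.
carrier = power(100)          # 200 W
sidebands = 2 * power(50)     # 50 W each, 100 W total
total = carrier + sidebands   # 300 W
print(carrier, sidebands, total)
print(f"sideband share of total power: {sidebands/total:.2%}")  # one-third

# 50 percent modulation: sidebands drop to 25 V each, carrier unchanged.
under_total = power(100) + 2 * power(25)   # 200 + 12.5 + 12.5 = 225 W
print(under_total)
```

Note how undermodulation leaves the carrier power untouched and only shrinks the information-carrying sideband power.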
Although AM has the advantages of an inexpensive, simple design and suitability for long-haul
communications, it is highly susceptible to noise and wastes power in the carrier. To overcome these
disadvantages, a technique called frequency modulation (FM) was developed.

003. Frequency modulation


The other way of modulating a carrier wave is FM. In FM the modulating signal causes the carrier
frequency to vary. These variations are controlled by both the frequency and amplitude of the
modulating wave. Let’s take a closer look at FM.
Frequency modulation theory
In FM, an audio signal is used to control the frequency of the RF carrier signal, while the amplitude of
the RF signal remains constant. The carrier frequency oscillator changes frequency above and below
the carrier frequency with each cycle of the intelligence waveform to make the modulated wave. In a
FM modulator operating on a carrier frequency of 10 MHz, the oscillator deviates above the carrier
frequency during the positive half cycle of the modulating signal and below the carrier during the
negative half. For example, a 10 MHz oscillator might deviate to 10.1 MHz during the positive half of
the modulating cycle and to 9.9 MHz during the negative half of the modulating cycle as shown in
figure 1–4.
For each cycle of the modulating signal, the oscillator deviates one complete cycle (fig. 1–5).
Therefore, the oscillator deviation rate is directly proportional to the modulating signal frequency. A
1 kHz audio tone makes the oscillator frequency deviate above and below the carrier frequency 1000
times per second. This is the rate of deviation.

Figure 1–4. Effect of intelligence waveform on the carrier frequency.



Figure 1–5. Frequency oscillator deviation caused by a modulating signal.

How will the amplitude of the modulating signal affect this action? In FM, the modulating signal
causes the oscillator to change frequency. The amount it changes is directly proportional to the
amplitude of the modulating signal. The larger the modulating signal voltage, the greater is the
deviation (fig. 1–6). The term for this is amount of deviation.

Figure 1–6. Frequency oscillator deviation in proportion to a modulating signal.

The two important rules for FM we have established so far are: (1) rate of deviation is proportional
to the frequency of the modulating signal; (2) amount of deviation is proportional to the amplitude
of the modulating signal.
In figure 1–7 we see that four cycles of an audio tone produce four cycles of deviation. Notice the
increase in amplitude of each cycle causes an increase in the frequency deviation. Look at figure 1–8
for a further explanation. Notice in figure 1–8,A that an audio signal with 2 volts peak amplitude
causes the oscillator to vary 1 kHz above and below the carrier frequency of 100 kHz and establishes
the amount of deviation for the output waveshape.

Figure 1–7. Frequency deviation cycles equal number of cycles of audio tone.

Figure 1–8. Peak signal voltage determines amount of deviation for output waveshape.

If we increase the amplitude to 4 volts peak (fig. 1–8,B), the oscillator frequency varies 2 kHz above
and below the carrier frequency. If we increase the amplitude of the audio signal to 40 volts, the
oscillator varies 20 kHz above and below the carrier frequency (fig. 1–8,C). The greater the amplitude
of the intelligence, the more the oscillator varies. This is acceptable until the range extends into the
frequencies assigned to another station and causes interference.
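The proportionality between modulating-signal amplitude and amount of deviation can be illustrated with a small sketch. The 0.5 kHz-per-volt modulator sensitivity is not stated in the text; it is inferred from figure 1–8, where 2 volts peak produces 1 kHz of deviation:

```python
# Amount of deviation is proportional to modulating-signal amplitude.
# Sensitivity of 500 Hz per volt is an assumption inferred from figure 1-8.

SENSITIVITY_HZ_PER_VOLT = 500.0

def deviation_hz(peak_volts: float) -> float:
    """Frequency deviation above and below the carrier for a given peak amplitude."""
    return SENSITIVITY_HZ_PER_VOLT * peak_volts

# Reproduces the figure 1-8 cases: 2 V -> 1 kHz, 4 V -> 2 kHz, 40 V -> 20 kHz.
for volts in (2, 4, 40):
    print(f"{volts:>3} V peak -> +/- {deviation_hz(volts)/1e3:g} kHz deviation")
```

The rate of deviation, by contrast, tracks the modulating frequency: a 1 kHz tone sweeps the oscillator above and below the carrier 1,000 times per second regardless of amplitude.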
You should take note in figure 1–9,A that the FM broadcast band is 88 to 108 MHz. Assume that two
FM stations have assigned carrier frequencies of 90 and 100 MHz. If the modulating signal causes an
oscillator to swing too far, it interferes with the adjacent station, as shown in figure 1–9,B. To
prevent this, the Federal Communications Commission (FCC) established limits. In the FM broadcast
band, each station can deviate its oscillator frequency no more than 75 kHz above or below the center
frequency.

Figure 1–9. Effect of excessive modulation on adjacent FM signal bands.



In FM, the percentage of modulation is not as concrete as in AM. In AM, 100 percent modulation is
the maximum amount we can modulate without distorting the intelligence. In FM, percentage of
modulation is an imposed limitation: the maximum allowable deviation from the carrier frequency is
a regulatory limit, not a maximum capability. In FM, 100 percent modulation is merely the point
where the actual modulated frequency swing equals the maximum amount allowable.
Modulation index and significant sidebands
Consider the 100 MHz carrier frequency again. If the oscillator deviates to a maximum frequency of
100.075 MHz and to a minimum frequency of 99.925 MHz, we have a total swing of 150 kHz. This is
the maximum allowable deviation and we call it 100 percent modulation. A smaller audio voltage
causes less deviation and the percentage of modulation would be smaller. Figure 1–10 shows a station
operating on 100 MHz with a frequency deviation of 25 kHz. Since this is only one-third the
allowable deviation, we would be modulating only 33.3 percent. You’ll seldom hear “percent of
modulation.” The term “modulation index” is more useful for FM. Modulation index is the amount of
frequency deviation divided by the frequency of the modulating signal. For example, if a 5 kHz
modulating signal has enough peak voltage amplitude to cause a deviation of 25 kHz, the modulation
index would be 25 kHz ÷ 5 kHz = 5.

Figure 1–10. FM station frequency deviation.

We use the modulation index to calculate two other important aspects of FM: the number of
significant sidebands and the bandwidth requirements. Before we begin any calculations, let us look
at the difference between AM and FM signals produced by the same audio tone (fig. 1–11).

Figure 1–11. Difference between AM and FM signals using the same audio tone.

Neither of the two waveshapes is sinusoidal. Characteristics were changed in the carrier to produce a
modulated wave. In both cases, the waveform produces a combination of frequencies called
sidebands. Therefore, in FM (just as in AM), modulation produces sidebands (fig. 1–12).

Figure 1–12. Representation of FM sidebands.

As you see, the FM wave has many sideband frequencies above and below the carrier. This is true
even when a single tone modulates the carrier. Look at the sideband distribution of an FM wave where
the modulating frequency is 5 kHz and the carrier is 500 kHz (fig.1–13). The distance of each
sideband from the carrier (and each other) is 5 kHz. These sidebands continue far beyond the limits of
the illustration, but notice that as the distance from the carrier increases, the sidebands’ amplitude
diminishes.

Figure 1–13. Sideband amplitude in relation to distance from the carrier.

These extreme sidebands contain very little of the total transmitted power and therefore are not
significant. Only sidebands containing at least 1 percent of the total transmitted power are considered
significant. You can find the number of significant sidebands for any given case of FM by using a
mathematical ratio called modulation index (MI). The formula is: MI = deviation/frequency of
modulation.

To make things easier, there is a table that lists modulation indexes against the corresponding
numbers of significant sidebands. Here is a partial table:

    Modulation Index    Significant Sidebands
    0.01                 2
    0.4                  2
    0.5                  4
    1                    6
    2                    8
    3                   12
    4                   14
    5                   16
    6                   18
    7                   22
    8                   24
    9                   26
    10                  28
    11                  32
    12                  32
    13                  36
    14                  38
    15                  38

To find the bandwidth of an FM signal that has 15 kHz deviation at a 5 kHz modulating signal rate,
you must follow these steps:
1. Calculate the MI: 15 kHz ÷ 5 kHz = 3.
2. Using the modulation index table, find the number of significant sidebands: MI of 3 = 12
significant sidebands.
3. Multiply the total number of significant sidebands by the frequency of the modulation signal to
find the bandwidth of the FM signal: bandwidth = 12 sidebands × 5 kHz modulation frequency = 60 kHz.
This procedure establishes the needed bandwidth of an FM signal that has a low modulation index
(10 or less).
Figure 1–14 shows the composite wave with a 15 kHz deviation at a 5 kHz rate. The resulting
frequency swing of 30 kHz may lead to the erroneous belief that a bandwidth of 30 kHz is wide
enough to pass the FM signal. This reasoning is wrong because the composite wave is the result of the
instantaneous additions and cancellations of ALL the significant sidebands shown in the spectrum
plot of figure 1–14. These significant sidebands occupy 60 kHz of spectrum space. Therefore, to
produce the composite wave, the bandwidth of the circuits passing the FM signal must be 60 kHz.
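The three-step bandwidth procedure can be expressed directly in Python; the dictionary below simply encodes the partial table of significant sidebands given in the text:

```python
# FM bandwidth from the modulation-index table:
# MI = deviation / modulating frequency, then look up the number of
# significant sidebands and multiply by the modulating frequency.

SIGNIFICANT_SIDEBANDS = {
    0.01: 2, 0.4: 2, 0.5: 4, 1: 6, 2: 8, 3: 12, 4: 14, 5: 16,
    6: 18, 7: 22, 8: 24, 9: 26, 10: 28, 11: 32, 12: 32,
    13: 36, 14: 38, 15: 38,
}

def fm_bandwidth_hz(deviation_hz: float, mod_freq_hz: float) -> float:
    """Bandwidth needed to pass an FM signal with a low modulation index."""
    mi = deviation_hz / mod_freq_hz          # step 1: modulation index
    sidebands = SIGNIFICANT_SIDEBANDS[mi]    # step 2: table lookup
    return sidebands * mod_freq_hz           # step 3: sidebands x mod frequency

# 15 kHz deviation at a 5 kHz rate: MI = 3 -> 12 sidebands -> 60 kHz.
print(fm_bandwidth_hz(15e3, 5e3) / 1e3, "kHz")
```

For the earlier 25 kHz deviation, 5 kHz tone example, the same lookup gives MI = 5, 16 significant sidebands, and an 80 kHz bandwidth.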

Figure 1–14. Bandwidth of a composite wave.



Sideband power distribution


Sideband power is an important consideration in FM, but the distribution is much more complicated
than in AM. In FM, the total output power of the modulated wave never changes. Even when the
carrier is unmodulated, the transmitter is transmitting at maximum power. When modulation does
occur, the total power remains the same. Therefore, the sidebands must take their power from the
unmodulated carrier. A higher modulation index means more power in the sidebands. It is even
possible to have all the power in the sidebands and none in the carrier. At this point, any further
increase in modulation would start taking power from the sidebands and placing it back in the carrier,
resulting in a redistribution of power.
Figure 1–15,A shows a low amplitude modulating signal and its narrow bandwidth sideband
distribution. An increase in the amplitude of the modulating signal causes greater frequency deviation
of the oscillator, wider bandwidth and more power distributed in the sidebands. Notice that when this
happens, the carrier power decreases as shown in figure 1–15,B. Increasing the amplitude still further
results in the power being taken away from the sidebands and placed back in the carrier as shown in
figure 1–15,C.

Figure 1–15. Effect of modulating signal on carrier frequency deviation.

In FM, the factors that determine the bandwidth are oscillator deviation and frequency of the
modulating signal. What are the limits? The FCC has set up frequency assignments in such a way that
it prevents interference from adjacent stations. Referring to figure 1–16, let’s assume we have two
stations transmitting near each other. Notice that carrier 1 has a frequency of 100 MHz and carrier 2
has a frequency of 100.2 MHz. Now, let’s see why frequencies need to be assigned in this manner by
expanding the range to show bandwidth power distribution. Remember, each station can deviate by
75 kHz above and below the carrier frequency. In addition, both stations have a 25 kHz guard band
above and below the deviation bandwidth. Although these guard bands are a part of the station
bandwidth, transmitter deviation should not take place within them. The guard bands exist only to
accommodate sidebands.

Figure 1–16. Use of transmitter guard bands to prevent adjacent station interference.

Now, looking at figure 1–17 and expanding the diagram even further we see the carrier and sideband
distribution for each station. Notice some spillover of sidebands does occur into the guard band area
of the adjacent station. This can happen as long as no sideband containing more than 1 percent of the
power spills over. Do not assume that the guard band for the adjacent channel is a part of the station
bandwidth. Allowable transmitter swing is still 150 kHz. The guard band allows an actual bandwidth
of 200 kHz. Some sidebands may extend over a 250 kHz area including adjacent station guard bands,
and still not interfere with adjacent station transmission.

Figure 1–17. Station carrier and sideband distribution.

To develop a better understanding of frequency modulation, remember the following:


1. Percent of modulation is arbitrary (it is an imposed limit rather than an inherent property).
2. FM produces an infinite number of sidebands.
3. The spacing between sideband frequencies is equal to the frequency of the modulating signal.
4. To be considered significant, sidebands must contain at least 1 percent of the total transmitted
power.
5. Frequency deviation is the amount the transmitted frequency shifts away from the carrier
frequency.
6. The rate the transmitter varies above and below the carrier frequency is the frequency of the
modulating signal.
7. How much the transmitter frequency varies from the carrier frequency is a function of
modulating signal amplitude.

Another type of modulation technique is phase modulation (PM). It is closely related to FM and is
often used for a variety of radio communications applications. In the next section, PM will be
discussed.
004. Phase modulation
Another form of modulation that is widely used, especially for data transmissions, is PM. As its name
implies, PM uses the modulating signal to control the phase of the RF carrier signal, while the RF
amplitude remains constant. It offers high noise and interference rejection. This type of modulation is
similar to FM, but in this case the concern is the variation in phase rather than the change in
frequency.
Phase modulation theory
The easiest and fastest way of understanding PM is by comparing it to FM. In FM, an oscillator’s
output varies above and below a carrier frequency. During this process, the peaks of each successive
cycle in the modulated waveform occur at times other than in the unmodulated carrier. Therefore, an
incidental phase shift takes place along with the frequency shift. In PM, just the opposite takes place.
Figure 1–18 shows that when an audio signal goes to a phase modulator, the resultant wave shifts in
phase. The time period of the carrier changes for each successive cycle when this occurs. Because
frequency is a function of time period per cycle, a phase shift in the carrier causes a frequency
change. The frequency change in FM is vital but in PM it is merely incidental. The amount of
frequency change has nothing to do with the resultant modulated waveshape.

Figure 1–18. PM wave shift.

To get a better understanding of PM, we need to explore phase relationships.


Phase relationship
Look at figure 1–19 and compare the three voltages. Since voltage A begins its cycle and reaches its
peak before voltage B, it is “leading” B. Voltage C “lags” voltage B. We use this principle to help us
understand phase modulation.

Figure 1–19. Phase relationships.

Figure 1–20 compares a phase-modulated carrier to an unmodulated carrier. The unmodulated carrier
remains at a constant phase, amplitude and frequency. Reading from left to right; the first two peaks
of the modulated carrier start to lag behind the unmodulated carrier. On the third peak, the shift is
even greater. On the fourth peak, they start coming back in phase with each other. This shows the
effects of the positive part of a modulating signal. When the modulating signal is negative, the
carrier’s phase starts leading and the peaks occur at times earlier than the unmodulated carrier. In
phase modulation, the carrier's phase shifts at the rate of the modulating signal. The two diagrams are
shown together to compare a modulated carrier to an unmodulated carrier.

Figure 1–20. Comparison of a phase-modulated carrier to an unmodulated carrier.

PM vs FM
You have seen how the phase and frequency of a carrier shift in FM and PM. Now, let’s see how they
differ. First, only the phase shift is important in PM and it is proportional to the modulating signal.
Refer to the waveshapes in figure 1–21 to see this principle. The FM carrier deviates when the
modulating waveshape changes polarity (the amplitude of each polarity determines the amount of frequency
deviation). For each alternation, the carrier advances or retards in frequency and remains at the new
frequency for the duration of the cycle. In PM, between one alternation and the next, the carrier’s
phase must change and the frequency shift occurs only during the transition time. Then the frequency
returns to its normal rate. In the PM signal, a frequency shift occurs only when the modulating
waveshape is changing polarity. The frequency during the constant amplitude part of each alternation
is the rest frequency.

Figure 1–21. Differentiating PM and FM carrier signals.

The relationship of the modulating waveshape to the change in the phase shift in PM is easy to see.
Understand and remember these two rules of phase modulation:
1. Amount of phase shift is proportional to the amplitude of the modulating signal (i.e., if a 10
volt signal causes a phase shift of 20°, then a 20 volt signal causes a phase shift of 40°).
2. Rate of phase shift is proportional to the frequency of the modulating signal (i.e., if the carrier
was modulated with a 1 kHz tone, then its phase would advance and retard 1,000
times each second).
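These two rules can be expressed directly in code. In the sketch below, the values are illustrative; in particular, the 2-degrees-per-volt sensitivity is an assumed figure matching the example in rule 1, not a value from the text:

```python
import math

PHASE_PER_VOLT = math.radians(2.0)   # assumed modulator sensitivity: 2 degrees per volt

def peak_phase_shift(mod_amplitude_volts):
    # rule 1: amount of phase shift is proportional to modulating-signal amplitude
    return PHASE_PER_VOLT * mod_amplitude_volts

def pm_sample(t, mod_amplitude_volts, mod_freq_hz, carrier_freq_hz):
    # rule 2: the phase deviation swings at the rate of the modulating frequency
    deviation = peak_phase_shift(mod_amplitude_volts) * math.sin(2 * math.pi * mod_freq_hz * t)
    return math.cos(2 * math.pi * carrier_freq_hz * t + deviation)
```

With this sensitivity, a 10-volt signal gives a 20° peak shift and a 20-volt signal gives 40°, mirroring the example in rule 1.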
There are many different types of PM used today in digital wireless communications. So many exist
because different modulation techniques evolved as technology grew. The two most common PM
techniques are binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK). There are
also higher orders of phase shift keying (PSK) you should be aware of.
Binary phase shift keying
The simplest phase shift keying (PSK) technique is binary phase shift keying (BPSK). BPSK uses
two possible phase shifts: 0° and 180°. The digital signal is separated time-wise into individual bits,
and the state of each bit is compared with the state of the preceding one. If consecutive bits are the
same, the phase of the modulated signal stays the same. If the bit changes from 0 to 1 or from 1 to 0,
the phase of the wave shifts 180° (fig. 1–22). While BPSK modulates one
digital bit onto one sine wave, there are other sophisticated forms of PSK.
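The bit-to-phase behavior described here (phase unchanged when the bit repeats, a 180° shift when it changes) can be sketched as a small differential encoder. This is an illustrative model only, assuming a starting phase of 0°:

```python
def bpsk_phases(bits):
    # differential view of BPSK: the phase flips 180 degrees whenever the
    # current bit differs from the preceding one; otherwise it stays the same
    phases = [0.0]                      # assumed starting phase for the first bit
    for prev, cur in zip(bits, bits[1:]):
        step = 180.0 if cur != prev else 0.0
        phases.append((phases[-1] + step) % 360.0)
    return phases

phases = bpsk_phases([0, 0, 1, 1, 0])   # -> [0.0, 0.0, 180.0, 180.0, 0.0]
```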

Figure 1–22. Binary Phase Shift Keying.

Quadrature phase shift keying


Quadrature phase shift keying (QPSK), or 4 PSK, has four different phase shifts. In QPSK, the four
angles are usually out of phase by 90°. Since QPSK has four possibilities but a digital bit has only
two (0 or 1), each of these possible phases can represent two bits of information. A 0° phase shift
represents the bits 00, 90° represents 01, 180° represents 10, and 270° represents 11. This multi-bit
representation per phase is the key to faster data rates, doubling the data carrying capability over
BPSK. Figure 1–23 is a signal representation of the QPSK process. Because of its excellent
performance in a noisy environment, QPSK is widely used in satellite communications (SATCOM).
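The dibit-to-phase mapping given above (00 → 0°, 01 → 90°, 10 → 180°, 11 → 270°) can be sketched as:

```python
QPSK_MAP = {(0, 0): 0, (0, 1): 90, (1, 0): 180, (1, 1): 270}   # phase in degrees

def qpsk_phases(bits):
    # group the bit stream into pairs; each pair selects one of four phases,
    # so every transmitted phase carries two bits (twice the BPSK rate)
    if len(bits) % 2:
        raise ValueError("QPSK needs an even number of bits")
    return [QPSK_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

qpsk_phases([0, 0, 0, 1, 1, 0, 1, 1])   # -> [0, 90, 180, 270]
```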

Figure 1–23. Signal representation of formation of QPSK signal.

Higher order of phase shift keying


Higher levels of PSK, such as 8- or 16-PSK, use smaller phase shifts to allow more phase states and
increased capacity, since each state represents more bits in combination. Again, the advantage
allows higher data rates to be carried within a given bandwidth. However, the downside is that the
higher data rates require a better signal-to-noise ratio. In a noisy channel, the narrow phase difference
is harder to detect which counteracts any data rate performance improvements.

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.

001. Reasons for modulation


1. What are the two primary purposes for modulating a signal?

2. What must we do first to radiate a signal over long distances?

002. Amplitude modulation theory


1. Name the part of the AM signal whose amplitude is not varied by the modulating signal.

2. What frequencies does the amplitude modulated waveform contain?

3. What part of the modulated carrier wave contains the information carrying component?

4. In AM, what is the relation between the bandwidth required to transmit the signal and the
frequency of the modulating signal?

5. Define “percent of modulation.”

6. Find the % mod if a 4 Vpk-pk RF signal is modulated by a 2.5 Vpk-pk audio signal.

7. If the signal in figure 1–3,C, has an Emax of 50 mVrms and an Emin of 5 mVrms, what is its %
mod?

8. What are two results of overmodulation?

003. Frequency modulation


1. What determines the amount of deviation of a FM carrier?

2. How does the rate of deviation relate to the frequency of the modulating signal?

3. To prevent interference with other FM stations, who establishes limits on the maximum amount
of deviation in FM?

4. What is a significant sideband?

5. How do you determine the modulation index in FM?

6. Where do FM sidebands get their power?

7. What is the relationship between modulation index and sideband power?



004. Phase modulation


1. In PM, what effect does the change in carrier frequency have?

2. Describe the effect the positive and negative alternations of a modulating signal have on the phase
of the carrier in PM.

3. When is the carrier at its rest frequency in PM?

4. What part of the modulating signal controls the amount of phase shift in PM?

5. What part of the modulating signal controls the rate of phase shift?

6. What is the advantage for using QPSK over BPSK?

7. How can higher levels of PSK be achieved?

1–2. Digital Transmission Techniques


An analog signal must undergo analog-to-digital conversion for digital transmission. Digitization is
the process of converting an analog signal into a digital signal. Digitization usually takes place in the
multiplex equipment in a digital system. Some communication systems, particularly tactical voice
systems, use another conversion technique called continuously variable slope delta modulation. We
discuss these techniques in the following paragraphs.
You may select from a number of transmission modes and data codes for transmitting signals from a
sending device to a receiving device. In fact, many networks use a variety of transmission modes and
data codes for signaling. The majority of computers communicate with digital signals.
Digital signals are discrete “on” and “off” pulses of electricity that most computer systems use to
create the bits that make up bytes, or characters. For example, a computer may create an “on” bit by
placing a short positive signal of 5 volts (V) on a channel. Conversely, it may create an “off” bit by
placing a short negative signal of 5 volts on the channel. Sending data with digital signals we call
digital transmission.
Many data channels use digital signaling because computers use digital signals. However, today’s
technology allows both voice and data to be transmitted by either analog or digital signaling. Also,
most new telephone lines that are installed are digital.
Digital transmission offers advantages over analog transmission. With digital signaling, it is usually
easier to reduce and clean up noise and errors during transmission, especially when messages must be sent over long
distances. Another advantage is that digital transmission is compatible with digital computer systems.

Thus, it is not necessary to convert data messages to and from analog to digital when computer
systems use digital transmission channels.

005. Pulse code modulation


Pulse code modulation (PCM) is a complete digitization process. It converts an analog signal that is
continuous in time and amplitude into one that is distinct in both time and amplitude (i.e., a true
digital signal). The four steps in the PCM process are band limiting, sampling, quantizing, and
encoding (fig. 1–24).

Figure 1–24. The PCM process.

PCM block diagram


In figure 1–24 you see that the process begins by band limiting the analog input signal using a low-
pass filter. The sampler processes the signal, changing it to a discrete (specific) time and continuous
amplitude signal called a pulse. The sampling pulses go to a quantizer that assigns discrete amplitudes
to the pulses. The final step in the PCM process is encoding. In this step, the encoder assigns a binary
code word to each sample corresponding to its amplitude. The code words go out as a series of pulses and empty
intervals (during which no pulse is present). For convenience, the pulses are 1s and intervals are 0s.
On the receive side, the 1s and 0s translate back into amplitude pulses to reconstruct the original
waveform.
Band limiting
Band limiting in PCM establishes the highest and lowest frequency encoded. A 3 kHz filter limits the
signal before it goes to the sampler. The primary function of the band-limiting filter is to ensure the
input to the sampler never exceeds a maximum frequency—in this case about 3.5 kHz. This is
important for the next step.
Sampling
Sampling is converting a continuous time signal into a discrete time signal. The basis for both PCM
and continuously variable slope delta (CVSD) is signal sampling at regular intervals. A sample is a
pulse that represents the amplitude of the signal at a particular time (fig. 1–25). In PCM, the input
analog signal is directly sampled (therefore the name direct PCM). CVSD samples a signal related to
the input signal. We call these sampled pulses (shown in the figure) pulse amplitude modulated
(PAM) samples. While we are on the subject of sampling, let us look at other methods of pulse
modulation.

Figure 1–25. Sample pulse representing the amplitude of a signal.


Pulse modulation
Pulse modulation is the process of varying the pulses of a pulse train in some way to represent the
analog input signal. If you compare pulse modulation to analog modulation, the pulse train would
function as a carrier and the analog input signal would function as the modulating frequency. See
figure 1–26 for the following definitions.
1. PAM causes the amplitude of the pulse train to vary according to the amplitude of the input
signal.
2. Pulse width modulation (PWM), also known as pulse duration modulation (PDM), varies the
width or duration of the pulses in the pulse train according to the amplitude of the input
signal.
3. Pulse position modulation (PPM) changes the position of the pulses in the pulse train
according to the amplitude of the input signal.

Figure 1–26. Pulse modulation.



Nyquist sampling theorem


You could construct the waveform of an analog signal by electrically combining a particular set of
different sine waves. The importance of this idea lies in a characteristic of all sine waves. You can
completely define a full cycle of a sine wave by any two points on the wave. So, if you take regular
samples frequently enough, they will contain all the information of the original signal.
A distinguishing feature of analog signals is they are band limited. Frequencies of a sine wave fall
within a limited portion of the frequency spectrum. The bandwidth determines the upper frequency
limit of the sine waves that comprise an analog signal waveform. Sampling the highest-frequency
sine wave twice during a cycle completely defines it, because you need a minimum of two samples
per cycle to define a sine wave. At this rate, you sample every other component sine wave in the
waveform more than twice per cycle, which is more than adequate to define it. You can also
completely define a complex waveform by taking samples at twice the highest frequency in the
bandlimited signal. We call this rate the Nyquist sampling rate, named for H. Nyquist who did
research on the analog-to-digital conversion process.
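As a concrete illustration, the sketch below assumes the common telephony figures (a roughly 3.4 kHz band-limited voice signal and the standard 8 kHz PCM sampling rate, which are assumptions of this example rather than values given in the text). Sampling at twice the highest frequency satisfies the Nyquist rate, and the standard voice rate comfortably exceeds it:

```python
import math

VOICE_MAX_HZ = 3400          # assumed band-limited top frequency (telephony)
SAMPLE_RATE_HZ = 8000        # standard PCM voice sampling rate

def nyquist_rate(f_max_hz):
    # minimum sampling rate that completely defines a band-limited signal
    return 2 * f_max_hz

def sample_tone(freq_hz, n_samples, rate_hz=SAMPLE_RATE_HZ):
    # take amplitude readings at regular intervals (the PAM samples)
    return [math.sin(2 * math.pi * freq_hz * i / rate_hz) for i in range(n_samples)]

assert SAMPLE_RATE_HZ >= nyquist_rate(VOICE_MAX_HZ)   # 8000 >= 6800
```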
Quantization and encoding
Quantization and encoding steps work hand in hand and will be explained together. Since an analog
signal is continuous, the number of amplitude values it contains is infinite. The number of values
available for coding, however, is finite. Consequently, a finite number of values replace an
infinite number of amplitude values. To
do this, we assign discrete amplitude values. All sampled amplitude values are rounded to the nearest
assigned value. This process is quantization.
The final step is to code the quantized samples into a binary word. The general rule for binary coding
is the formula N = 2^H, where “N” is the number of possible amplitude values to assign to binary
numbers and “H” is the number of bits in a word (usually seven or eight). For example, a 7-bit binary
word, or 2^7, equals 128 possible amplitude values or quantum steps. This is the maximum number of
steps allowed to define the amplitude of the analog signal. The number of steps available usually is
not enough to exactly represent the analog signal; it is approximated or rounded off. Rounded off
values are not true values (fig. 1–27). When compared to the original waveform, the reconstructed
one from a quantized signal is distorted and resembles the stepped signal shown in the figure. This
distortion shows up as noise and is called quantizing noise.

Figure 1–27. Rounded off values representing an analog signal.
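The quantize-and-encode steps can be sketched as follows, assuming a uniform 7-bit quantizer over a ±1-volt input range (the range is illustrative, not from the text):

```python
def quantize_encode(sample, v_min=-1.0, v_max=1.0, bits=7):
    # N = 2**bits quantum steps of equal size span the input range
    levels = 2 ** bits                       # 128 steps for a 7-bit word
    step = (v_max - v_min) / levels
    index = int((sample - v_min) / step)     # round down to the nearest step
    index = max(0, min(levels - 1, index))   # clip samples at the range edges
    return format(index, f"0{bits}b")        # the binary code word

quantize_encode(0.0)    # -> "1000000" (mid-range)
```

The rounding in the third line is exactly the source of quantizing noise described above: the reconstructed value is the step value, not the true sample value.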


Uniform quantizing
Quantizing can be either uniform or non-uniform. Quantizing is uniform when we define the coded
amplitude values as steps of equal amplitude range, as in figure 1–28. Uniform quantization has
drawbacks when applied to voice signals. In human speech, the lower amplitudes are prevalent, but
the quantizing steps are the same size at the lower amplitudes as at the higher amplitudes. Therefore,
most of the energy in a voice signal, being concentrated at the lower amplitude levels, is quantized
into relatively few amplitude levels. The result is a high level of quantizing noise.

Non-uniform quantizing
The purpose of non-uniform quantizing is to provide more coded amplitude levels at the low signal
levels and fewer at the high signal levels. This reduces quantizing noise from the range where most of
the signal energy is located. Figure 1–29 illustrates quantizing with non-uniform amplitude level
spacing. The closest spaced levels are concentrated at the low-signal amplitude levels.

Figure 1–28. Uniform quantizing.

Figure 1–29. Non-uniform quantizing.



Companding
Companding is a process that compresses the high-amplitude parts of a signal before transmission,
decreasing its dynamic range. On reception, it expands the range of the signal to its original values,
restoring the dynamic range. Thus, you need fewer coded amplitude levels. Using a combination of
non-uniform quantizing and companding, it is possible to decrease the code word from 11 to 7 bits
and still achieve the same quality of the reproduced received signal.
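One standard compress/expand curve is µ-law companding with µ = 255 (the North American telephony characteristic, used here as an illustrative sketch rather than the text's specific method). Small amplitudes are boosted before quantizing, and the inverse curve restores them at the receiver:

```python
import math

MU = 255.0   # mu-law constant (North American telephony standard)

def compress(x):
    # compress the dynamic range before quantizing: small |x| values are
    # boosted so they spread across more quantizing steps (|x| <= 1)
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    # exact inverse applied on reception, restoring the original dynamic range
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```

Round-tripping a sample through compress() and expand() returns the original value, while a small input such as 0.01 is lifted to roughly 0.23 before quantizing, which is why fewer code levels are wasted on the rarely used high amplitudes.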

006. Asynchronous and synchronous transmission


This lesson briefly discusses the differences between two different methods of serial transmission,
namely, asynchronous and synchronous. A major challenge in data transmission is making sure there
is synchronization between the transmitter and the receiver, that is, that the receiver is accurately
reading what the transmitter is sending. Careful synchronization assures that the transmitted data is
received properly and helps reduce the possibility of transmission errors. As long as the sender and
receiver use the same protocol, information can be reliably exchanged between them. Asynchronous
and synchronous transmissions are both very common data transmission methods.
Asynchronous transmission
In asynchronous transmission, each information character is individually synchronized, usually by
using start and stop bits. Usually the stop bits are the same polarity as the channel idle state (for
example, 1) and the start bits are of the opposite polarity (for example, 0). The range of frequencies
that can be transmitted over a given circuit is called bandwidth. Frequencies are expressed in hertz
(cycles per second). The type of media (cabling) and the method of transmission used can limit the
frequency range. Generally, the greater the bandwidth, the more information can be sent through the
circuit in a given amount of time. Asynchronous Transfer Mode (ATM) is viable for both local area
networks (LAN) and wide-area networks (WAN), and it supports many types of traffic including
voice, data, facsimile, real-time video, compact-disk-quality audio, and imaging.
In asynchronous transmission, each character is sent down a channel separately; that is, each
transmission unit is only one character in length. The character is headed with one or more start bits
and ended with one or more stop bits. The start and stop bits tell the receiving device that a character
is coming and that the character has been sent. The character usually contains a parity bit, which is
used by the receiving device to verify that the transmission was received correctly. In a system using
even parity, the number of 1 bits in all characters sent must equal an even amount. The sending
device places a 1 bit in the parity slot when it sends a character whose 1 bits do not add up to an even
number.
The receiving device checks each character it receives by summing the 1 bits. If the character arrives
with an even number of 1 bits, the device assumes that it has received a correct character. If the
number of 1 bits is odd, the device assumes that an error at transmission has occurred. On the other
hand, some systems verify transmissions using an odd-parity procedure. In that case, the example
above is reversed.
Many terminals send messages asynchronously in American Standard Code for Information
Interchange (ASCII) code (fig. 1–30); that is, they send messages by transmitting a series of separate,
single characters composed of the following elements:
 A start bit.
 Eight bits that represent an ASCII-coded character.
 A parity bit.
 A stop bit.
This means that to send one character, the terminal actually must send a total of 11 bits. Thus, the
overhead to send a character is 28 percent of the total bits sent or 3 bits out of 11. This means that 28
percent of the transmission costs paid to the common carrier are for nonmessage bits.
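The 11-bit character frame and the even-parity rule described above can be sketched as follows (the start-low/stop-high polarities and bit ordering are illustrative assumptions):

```python
def frame_character(data_bits):
    # build one asynchronous character: start bit + 8 data bits + parity + stop bit
    assert len(data_bits) == 8
    parity = sum(data_bits) % 2               # even parity: force the 1s count even
    return [0] + data_bits + [parity] + [1]   # assumed polarities: start = 0, stop = 1

def parity_ok(frame):
    # receiver check: the data bits plus parity bit must hold an even number of 1s
    return sum(frame[1:10]) % 2 == 0

frame = frame_character([1, 0, 0, 0, 0, 0, 1, 1])   # 11 bits total; 3 of 11 are overhead
```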

Figure 1–30. Message format.

Asynchronous transmission does have some advantages. Each individual character is complete in
itself, therefore, if a character is corrupted during transmission, its successor and predecessor will be
unaffected. Asynchronous transmission is particularly suited for applications where the characters are
generated at irregular intervals, such as data entry from a keyboard. There are also some serious
disadvantages.
For successful transmission using asynchronous transmission, the receiver must accurately recognize
the start bits. Line interference may inadvertently send a false start bit to the receiver or the receiver
may just miss a start bit. Also, as we said earlier, 28 percent of sent bits of data are for control
purposes and carry no useful information. Lastly, because of line distortion, the speed of transmission
is limited.
Synchronous transmission
To reduce the overhead costs of data transmission (the 28 percent control bits we discussed under
asynchronous transmission), some networks send messages using synchronous transmission.
Synchronous transmission blocks many characters together for transmission. It is important to note
that there is no control information associated with the INDIVIDUAL characters. Instead, the
characters are grouped together in blocks of some fixed size and each transmitted block is preceded
by one or more special synchronization characters that can be recognized by the receiver. These
characters are identified as SYN, for synchronization. They precede the message block so the receiving device
knows what is coming and can prepare for the message.
Several bits also follow the block so the receiving device can verify what it received. However, the
beginning and ending bits are a small percentage of the total number of message bits sent.
Synchronous transmission reduces the overhead costs of communications and is not as prone to
distortion so it can handle much higher transmission speeds.
There are a few disadvantages to synchronous transmission. For one, the cost of synchronous
equipment is usually higher than that of asynchronous equipment. If an error occurs, an entire
block of data (possibly 100 characters or more) is lost rather than just one character. Also, the sender
cannot transmit characters simply as they occur; it must store them until it has built up a complete
block. This makes the system unsuitable for applications where characters are generated at
irregular intervals.
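A simplified picture of synchronous framing, assuming two leading SYN characters and a one-byte XOR block check (real formats vary; this is only a sketch):

```python
SYN = 0x16   # the ASCII synchronous-idle character

def sync_frame(block):
    # control information wraps the whole block, not each individual character:
    # two SYN characters lead in, and a block check character (BCC) trails
    bcc = 0
    for ch in block:
        bcc ^= ch            # simple XOR check over the block (illustrative)
    return [SYN, SYN] + list(block) + [bcc]

frame = sync_frame([0x41, 0x42, 0x43])   # block "ABC": 3 control bytes total
```

Because the SYN and BCC overhead is fixed per block, growing the block shrinks the overhead percentage, which is the cost advantage over the per-character overhead of asynchronous transmission.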

007. Error detection and correction


Errors, or noise, in electronic transmissions or circuits are unavoidable because there are forces of
nature acting on everything in our world, including our communications systems. Measures can be
taken, however, to minimize or in some cases eliminate errors. The best we can hope to do is to
minimize noise so that information arrives at a readable level at the receiving equipment. Fortunately, there
are ways to effectively minimize the effect imperfect transmission processes have on our
communications abilities. This lesson discusses ways to detect and correct errors that occur in
communication circuits.

Error-detection techniques
Error detection is an important measure in providing reliable communication. Errors inevitably slip
through, but by using certain techniques, errors can be identified nearly every time. Error detection is
usually established by using one of four methods: (1) vertical redundancy check (VRC), (2)
longitudinal redundancy check (LRC), (3) checksum, and (4) cyclic redundancy check (CRC). We
explain each in the following paragraphs.
Vertical redundancy check (VRC)
As each incoming character is received, vertical redundancy check (VRC) checks it for odd or even
parity. Odd parity means that there is always an odd number of “1” bits in the bit pattern for each
character. Even parity indicates an even number of ones. Outgoing characters are checked before
transmission for the bit parity count. For example, if you are using odd parity and the count is even,
the transmitter inserts a “1” bit in the parity bit position to make the count odd. Thus, all characters
transmitted have an odd bit count. A receiver detects an error when a character contains an even
number of “1” bits.
Longitudinal redundancy check (LRC)
Where VRC checks characters for odd parity, LRC checks an entire horizontal line within a block for
odd or even parity. The count is performed at both the transmitter and receiver. This method is
usually done in addition to VRC to improve error control.
The receiver checks the block horizontally and vertically and creates its own block check character
(BCC) and compares it with the transmitter’s BCC count. When they are equal, the previous block
was error-free. When they are unequal, the receiver requests a retransmission of the previous block.
VRC/LRC is only available with the ASCII character set. VRC when combined with LRC is about 98
percent effective.
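The combined VRC/LRC check amounts to two-dimensional parity: VRC adds a parity bit per character (row), and LRC computes a block check character down each bit column. An illustrative even-parity version:

```python
def vrc_bit(char_bits):
    # vertical check: one parity bit computed per character (even parity here)
    return sum(char_bits) % 2

def lrc_bcc(block):
    # longitudinal check: parity computed down each bit column of the block;
    # the receiver builds its own BCC and compares it with the transmitted one
    return [sum(col) % 2 for col in zip(*block)]

block = [[1, 0, 1, 1],
         [0, 1, 1, 0],
         [1, 1, 0, 0]]
bcc = lrc_bcc(block)        # -> [0, 0, 0, 1]
```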
Checksum
If the system is using extended ASCII code, there is no parity bit to use for error control. The extra bit
is part of the code. This caused the development of another error detection method called checksum.
This method stacks characters one on top of the other and adds them just like any addition problem.
The sum is then divided by the binary number 255. Now the transmitter does something that may
seem a little strange. After doing all of this division, it disregards the quotient. However, it keeps the
remainder, which then becomes the checksum.
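Dividing and keeping only the remainder is a modulo operation, so the whole procedure reduces to one line (assuming the characters are summed as their 8-bit code values):

```python
def checksum(char_codes):
    # add the character codes together, divide by 255, keep only the remainder
    return sum(char_codes) % 255

checksum([100, 200, 56])   # 356 % 255 -> 101
```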
Cyclic redundancy check (CRC)
The CRC treats the binary ones and zeros in the frame address, control and information fields as a
single long binary number. The CRC uses that number as a dividend in a division problem. The
dividend is divided by the binary divisor represented by a standard binary sequence. This digital
operation results in a binary quotient and a remainder; which, because of the size of the divisor, may
contain up to 16 bits.
This division process takes place in the transmitter at the sending end of the link. The 16-bit
remainder attaches to the frame (packet) as the frame sequence check. When the frame arrives at the
distant end, the BCC is divided by the same divisor used by the transmitting end. The remainder is
computed and the receiver compares it to the BCC received with the data over the link. If they are the
same, the receiver presumes the data is correct. If they differ in any bit, the receiver assumes there
was a transmission error and asks the transmitter to repeat the frame (packet) containing the error.
CRC is more effective than parity or checksum for a data block greater than 512 characters. It applies
to either the hardware or software and is about 99 percent effective in most applications.
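The modulo-2 long division behind a CRC can be sketched as below. A 4-bit generator is used to keep the example small; a CRC-16 works the same way with a 17-bit generator producing a 16-bit remainder:

```python
def crc_remainder(bits, generator):
    # treat the frame as one long binary number, append zeros for the check
    # field, then divide modulo-2 (XOR) by the generator; keep the remainder
    width = len(generator) - 1
    reg = list(bits) + [0] * width
    for i in range(len(bits)):
        if reg[i]:
            for j, g in enumerate(generator):
                reg[i + j] ^= g
    return reg[-width:]

# classic textbook example: dividing 11010011101100 by x^3 + x + 1 (1011)
crc_remainder([1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0], [1, 0, 1, 1])   # -> [1, 0, 0]
```

The transmitter appends this remainder to the frame; the receiver repeats the division and compares remainders, exactly as the paragraphs above describe.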

Error-correction techniques
Once an error is detected in a frame, there must be a method for correcting it. There are two methods
available for resolving frame errors: automatic retransmit on request (ARQ) and forward error control
(FEC).
Automatic retransmit on request (ARQ)
The receiver automatically sends a retransmittal request to the sender if it finds an error in a received
frame. This is the most common method.
Forward error control (FEC)
This error-correction method corrects errors at the receiving end. Forward error control
relies on the transmitter to send multiple copies of the same message to the distant end. The intent is
for the receiver to compare all the copies and then reconstruct a message by using the good portions
of the message copies received. FEC is normally used for one-way broadcast circuits or on circuits
with extremely high error rates. It is not nearly as efficient as ARQ when the line has a low error rate.
It is also expensive, since FEC requires storage and processing equipment to contain and process all
the message copies for comparison.
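One simple FEC scheme along these lines is majority voting over the repeated copies (an illustrative sketch only; practical FEC codes are considerably more efficient than plain repetition):

```python
def fec_majority(copies):
    # reconstruct each bit position from the value most of the copies agree on;
    # any single corrupted copy is outvoted by the good ones
    return [1 if sum(col) * 2 > len(col) else 0 for col in zip(*copies)]

# three received copies of the same message, two of them with one bad bit each
received = [[1, 0, 1, 1],
            [1, 1, 1, 1],
            [0, 0, 1, 1]]
fec_majority(received)   # -> [1, 0, 1, 1]
```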

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.

005. Pulse code modulation (PCM)


1. Define the term digitization.

2. Name the four steps of PCM.

3. State the main purpose of the band-limiting filter.

4. Define sampling.

5. What part of the pulse train is varied using PAM, PWM, and PPM?

6. What is the Nyquist sampling rate?

7. Which step of PCM assigns discrete amplitude values to the sampled amplitude values?

8. Name the two methods of quantization.

9. Which method assigns amplitude values based on an equal amplitude range?



006. Asynchronous and synchronous transmission


1. In asynchronous transmission, what is the length of each transmission unit?

2. What part of an asynchronous transmission tells the receiving device that a character is coming and
that the character has been sent?

3. How does the receiving device determine whether it has received a correct character?

4. What is an advantage of synchronous transmission over asynchronous transmission?

5. How does synchronous transmission differ from asynchronous transmission?

007. Error detection and correction


1. What does VRC check each incoming character for?

2. In LRC, what transmitted character does the receiver use to determine if a transmission was error-
free?

3. When using the checksum method of error detection, what binary number is used to divide the
sum of all the characters in order to derive the checksum?

4. How does the CRC method of error detection determine the dividend when computing the BCC?

5. How effective is CRC at detecting errors in most applications?

6. How does ARQ work?

7. Using forward error control, at which end of the transmission link are errors corrected?

8. Using forward error control, what is the transmitter’s function in error correction?

9. Using forward error control, what is the receiver’s function in error correction?

1–3. Lightwave Communications


Fiber optic communication systems (FOCS) require at least three basic components: an optical
transmitter, a transmission medium, and a detector or optical receiver (fig. 1–31). The transmitter
includes an electronic drive circuit (modulator) and a light source, such as a light-emitting diode
(LED) or laser diode (LD). The receiver includes a light detector and an output circuit. The
transmitter and detectors are installed in a unit called the modem. The modem interchanges data and
control signals between a piece of electronic equipment and an optical fiber cable. Most modems can
continuously monitor the operational status of the link and perform analog and digital loopback tests. If
the cable is extremely long, the link may have a repeater to convert the light signal back to an
electrical pulse, regenerate it, change it back to a light pulse, and then retransmit it.
Because FOCS are commonly used today for long-haul, tactical, fixed base, and area networks, we
will start with the optical transmitter in FOCS links.

Figure 1–31. Basic Fiber Optic System.

008. Optical transmitter


The transmitter’s job is to take the two-level digital data signal, vary the intensity of the optical
source according to that data, and deliver the optical signals to the optical fiber. Normally, a
transmitter is comprised of a driver, an optical source, and a FO pigtail. The driver is an electronic
circuit that provides the signal current to operate the optical source. An optical source converts
electrical energy (current) into optical energy (light). Light emitted by an optical source is launched,
or coupled, into an optical fiber for transmission. Fiber optic data link performance depends on the
amount of optical power (light) launched into the optical fiber. Optical sources are typically
semiconductor devices, such as LEDs and LDs. A fiber pigtail provides coupling from the optical
source to an FO cable. The pigtail is a section of optical fiber with one end permanently connected to
the optical device. The other end is either bare fiber or is terminated with an FO connector which we
will discuss later.
Three fundamental types of sources are applicable to fiber-optic waveguides: LED, semiconductor
LDs, and non-semiconductor LDs. We discuss only the first two common types. Non-semiconductors
will be used in the future, but for now, they’re not well suited for drive sources.

A source has three main requirements. First, its operating speed, or rise time, must be fast enough to
meet the application’s bandwidth requirements. LED rise times range from around 3 nanoseconds (ns)
to over 100 ns, while a laser’s rise time is 1 ns or less. Second, it must provide enough optical power
through a fiber to operate the detector. LED output power is in the microwatt to low-milliwatt range,
while lasers run to several milliwatts. We want to couple as much of this output power as possible
into the fiber to overcome attenuation and loss in the link. We get the most efficient coupling when
the emitter’s diameter matches the fiber’s core diameter and its emission pattern falls within the
fiber’s numerical aperture (NA). Third, it must produce a
wavelength that takes advantage of the low-loss propagation characteristics of the fibers. By this, we
mean the wavelength emitted at peak power should coincide with low-loss bands in the fiber. Glass
fiber propagates light better at 820 nanometers (nm) than at 950 nm, so the source should emit at 820
nm also. Plastic fibers work better at 650 nm—red light. The source should also emit a narrow band
of wavelength, ideally, a single wavelength, to cut down spectral dispersion. Other requirements
include reliability and easy use.
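The link between rise time and usable bandwidth can be sketched with the common engineering rule of thumb BW ≈ 0.35 / rise time. The 0.35 factor is a general approximation assumed here, not a figure from this text.

```python
# Rough sketch: estimate usable bandwidth from a source's rise time using
# the common approximation BW ~= 0.35 / t_r (an assumed rule of thumb).

def bandwidth_hz(rise_time_s):
    return 0.35 / rise_time_s

print(f"3 ns LED:   {bandwidth_hz(3e-9) / 1e6:.0f} MHz")   # about 117 MHz
print(f"1 ns laser: {bandwidth_hz(1e-9) / 1e6:.0f} MHz")   # 350 MHz
```

This is why the laser’s faster rise time translates directly into higher data-rate capability.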
Light-emitting diodes
LEDs are semiconductor devices made of gallium aluminum arsenide (GaAlAs). The spectral range is in
the 700–900 nm wavelength band. The power output of a typical LED is lower than that of lasers:
from –15 dBm for multimode devices to –27 dBm for single-mode devices. An LED’s modulation
rate is usually limited to about 100 megabits per second (Mb/s) because of the switching
characteristics of the device. LEDs can be used in digital applications at low-to-medium data rates.
They’re ideal for short-to-medium-length spans of 10 to 15 km. The advantages of these sources
include their small size, ruggedness, capability of single and direct modulation, reduced expense and
spectral match with both fiber waveguides and silicon photodetectors.
There are two basic designs of LEDs: the edge emitter (fig. 1–32) and the surface emitter (fig. 1–33).
The surface emitter has an “etched well” created in the n-type GaAs material over the primary light-
emitting area. The optical fiber for coupling is physically inserted into this well and epoxied to the
device, forming a pigtail (fig. 1–34) that’s inserted into the fiber optic modem. The edge-emitter is
based on an architecture known as stripe geometry and is embedded in a structure that has an elliptical
reflecting surface formed in a cathode material.
A surface-emitting LED emits light at a very wide angle, or high NA, so only a small part of the total
light is actually emitted into the fiber. Lasers and, to a lesser degree, edge-emitting LEDs have high
directional outputs that are more closely matched to the fiber. But, how do LEDs work?
When LEDs are made, dopants are added to the crystal to create an area of N-material and an area of
P-material. The N-material has extra (negatively charged) free electrons and the P-material has the
positively charged holes (fig. 1–35). The two dissimilar materials meet at the PN junction, which is
void of free electrons and holes. This PN junction is called the depletion region.
When a bias current is passed in the right direction through the crystal, the depletion region breaks
down. The free electrons and holes move toward the junction and across it. When a hole and electron
meet, they recombine and the electrons move from the conduction band to the valence band. In doing
this, energy is emanated from the depletion region as a photon of light. The LED’s electrons give up
differing amounts of energy, so the light emitted is of several wavelengths. An actual LED is, of
course, much more complex than we’ve indicated here, but its operation is essentially the same.
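The tie between the energy an electron gives up and the wavelength emitted can be sketched with λ = hc/E. The 1.42 eV band-gap figure below is a representative value assumed for illustration, not a number from this text.

```python
# Sketch: emitted wavelength from the energy released when an electron and
# hole recombine, lambda = h*c / E. The 1.42 eV band-gap energy is a
# representative value assumed for illustration.

HC_EV_NM = 1239.84          # h*c expressed in eV*nm

def wavelength_nm(energy_ev):
    return HC_EV_NM / energy_ev

print(f"{wavelength_nm(1.42):.0f} nm")   # falls in the 700-900 nm LED band
```

Because the electrons give up slightly differing energies, the real device emits a spread of wavelengths around this value.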

Figure 1–32. Edge-emitter diode.

Figure 1–33. Surface-emitter diode.

Figure 1–34. Pigtail source.



Figure 1–35. How an LED works.

LEDs differ from lasers because their light output is incoherent and the laser’s light output is
coherent. An LED’s incoherent output contains light at many wavelengths (a broad spectral width),
whereas the laser’s coherent output is either concentrated at a single wavelength or at a group of
discrete wavelengths.
Laser diode source
This class of optical sources is limited to the semiconductor injection laser diode (ILD) (fig. 1–36).
LDs are generally the optimum source for dispersion-limited and attenuation-limited systems.
They’re not applicable to fiber-bundle technology as LEDs are. LDs operate at higher data rates,
provide a higher optical power output, require slightly more complex drive and temperature control
circuits, and are available only at a somewhat higher cost. Also, even though the predicted life of a
laser source is moving toward 100,000 hours, it’s still appreciably less than 1 million hours—
commonly considered the life of an LED source.

Figure 1–36. Laser diode.

The laser structure, unlike the structure of an LED, has a resonant optical cavity. The cavity generates
light energy that’s highly monochromatic, of very narrow spectral width (1 to 5 nanometers) and
highly directional. Typical power coupling is in the milliwatt range. Lasers are temperature sensitive
and must be controlled. LDs are described as coherent, or singular in frequency, and they don’t
exhibit the same material dispersion as LEDs.
The modulation speed of an ILD greatly exceeds that of an LED. It’s not uncommon for
semiconductor lasers to have modulation capabilities well into the gigabit per second (Gb/s) range.
The combination of high modulation speed and the wide bandwidth of a single fiber results in system
information capacities that can exceed 10,000 voice channels per fiber.
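That capacity figure can be sanity-checked against the standard 64 kb/s PCM voice channel rate covered earlier in this unit. The 1 Gb/s link rate below is an assumed example value, not a specific system’s rate.

```python
# Sketch: how many 64 kb/s PCM voice channels fit in a given link rate.
# The 1 Gb/s link rate is an assumed example figure.

VOICE_RATE_BPS = 64_000      # standard PCM voice channel rate

def voice_channels(link_rate_bps):
    return link_rate_bps // VOICE_RATE_BPS

print(voice_channels(1_000_000_000))   # -> 15625, comfortably over 10,000
```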
Like the LED, the laser (fig. 1–37) has an area of P and N, as well as a PN junction. However, it has a
center P-area containing the active region where the laser action occurs. As current passes through the
diode, holes and electrons move into the active region. Some recombine, giving off photons of light in
the process. This photon in turn stimulates the emission of a second photon through a high reflective
process in the active region. This takes a strong current. In the laser the energy gap between the
valence band and the conduction band is precisely controlled, so the stimulated photons all have the
same wavelength. Let’s now take a look at how the light from LEDs or the ILDs is received at the far
end of the link.

Figure 1–37. How a laser diode works.

009. Optical receiver


An optical receiver is a device that accepts optical signals from an optical fiber and converts them
into electrical signals. An optical detector’s performance is limited either by dispersion or attenuation.
They must have high sensitivity over the range of wavelength in use, their response time must be fast
enough for the transmission bit rate (BR), their physical dimensions must be compatible with the fiber
core dimension, and their performance variation with temperature must be minimal.
The weakest light they can receive depends on two things: the noise floor (noise equivalent power) and
the signal-to-noise ratio (SNR) produced by the circuit devices. Noise seriously limits a photodiode’s
operation. The two main sources of noise are shot, or quantum, noise and thermal, or Johnson, noise.
Shot noise is related to dark current and arises from the quantum nature of current. Dark current, as it
is called, is the small current a photodiode produces even with no light present. Sometimes the noise in
a receiver isn’t constant; the extra noise may interfere with the signal (fig. 1–38). Two terms are used
for this relationship: signal-to-noise ratio (SNR) and bit error rate (BER). SNR is measured in decibels (dB).
SNR expresses how much stronger the signal is than the noise; the higher the SNR, the better. An
SNR of 20 dB means that the signal level is 100 times greater than the noise level. BER, used with
digital systems, relates to the number of bits transmitted to the number of errors that occur. A BER of
10–9 means that only one error occurs for every billion bits transmitted.
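These two figures of merit follow directly from their definitions; the sketch below simply reproduces the examples in the text (a 100:1 power ratio and one error per billion bits).

```python
# Sketch of the two receiver figures of merit described above.
import math

def snr_db(signal_power, noise_power):
    # SNR in dB: 10 * log10 of the signal-to-noise power ratio
    return 10 * math.log10(signal_power / noise_power)

def bit_error_rate(errors, bits):
    # BER: errors per bits transmitted
    return errors / bits

print(snr_db(100, 1))             # signal 100 times the noise -> 20.0 dB
print(bit_error_rate(1, 10**9))   # one error per billion bits -> 1e-09
```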

Figure 1–38. Noise in a signal.

Silicon is the most widely used material for semiconductor photodiodes since its responsivity is suitable
for the 800–900 nm range. A detector’s responsivity is the ratio of the current it produces to the light
striking it. A typical photodiode has a responsivity of about 0.6 A/W. In practical terms, that’s 60 μA
of current resulting from 100 μW of optical energy striking the diode. Responsivity of the
photodetector depends on the wavelength of light.
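The responsivity relationship works out as a simple multiplication, I = R × P, as this sketch of the 0.6 A/W example above shows.

```python
# Sketch: photocurrent from responsivity, I = R * P.

def photocurrent_amps(responsivity_a_per_w, optical_power_w):
    return responsivity_a_per_w * optical_power_w

i = photocurrent_amps(0.6, 100e-6)   # 0.6 A/W with 100 uW of light
print(f"{i * 1e6:.0f} uA")           # -> 60 uA
```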
The two main types of photodetectors with widespread application in lightwave systems are the
positive intrinsic negative (PIN) diode and the avalanche photodiode (APD). In the PIN detector, the
diode junction is composed of three layers: a P-region and an N-region separated by a lightly doped
intrinsic I-region. No current flows in the load resistor because the PIN photodiode junction (fig. 1–
39) is normally reverse biased. Incident light with photon energy equal to or greater than the
electron band-gap energy causes the semiconductor material in the intrinsic region to generate
photocurrent in the load resistor.

Figure 1–39. PIN photodiode.



The PIN diode operates with a lower bias voltage, needs no high voltage power supply, is not
sensitive to temperature changes, and is relatively inexpensive to use. However, it does suffer from a
lower performance in terms of SNR and gain.
APD construction is similar to PINs except that an APD’s junction includes a region of very high
electric field called an avalanche region. The basic difference between APD and PIN photodetectors
is the number of electrons emitted when they’re activated by a similar light source.
The PIN diode converts one photon to one electron. The PIN diode usually needs an amplifier to
produce enough output voltage. An APD internally multiplies the resulting photo current before it
leaves the device. This process is known as the avalanche effect. Because of this multiplication, APD
receivers are usually more sensitive than PINs.
APDs are very fast, turning on and off much faster than a PIN, so they’re ideally suited for high-speed,
long-distance applications. APDs do fall down in some areas though. They’re much harder to apply in
fiber optic systems than PIN diodes. Some APDs require bias voltage over 1,000 volts, making it hard
to control temperature. The PIN and APD detectors are part of the modem receivers used in optic
links.
Other detectors include photo transistors, hybrid photodiode/preamplifiers and integrated
photodiode/preamplifiers (IDP). In the IDP, the signal is amplified before it meets the load resistor
noise, producing a usable voltage output.
The final technological breakthrough needed for optical fibers is the perfection of optical processing
chips that can amplify, modulate and switch light without converting it to an electrical signal first. At
present, the only optical processors are “hybrid” in that they’re still partly electrical.

010. Repeaters, regenerators, and optical amplifiers


When a communications link must span a greater distance than a single fiber optic span can cover, the
signal must be boosted or regenerated at intermediate points in the link. Opto-electronic repeaters,
regenerators, and optical amplifiers overcome this distance limitation. This lesson briefly looks at
each of these devices that extend our fiber optic communications link.
Repeaters
To extend the distance and to preserve signal integrity, in some instances, repeaters are required to
regenerate incoming signals. Repeaters are devices that take incoming low-level optical signals,
convert them to the electrical domain, amplify them, then convert them back to optical signals and
pass them on. Repeaters are used for analog signaling. These devices are placed along long
fiber runs to extend the usable distance beyond that possible with only an optical transmitter and
receiver.
Regenerators
Regenerators (regenerative repeaters) are used for digital signaling. They receive incoming low-level
dispersed (distorted) digital pulses. After conversion to the electrical domain, the pulses are
regenerated (the leading and falling edges are sharpened and the pulses are amplified). Finally, the
regenerated pulses are retransmitted optically. The signal output from a regenerator is accurately
timed to reestablish the original pulse width and spacing.
Optical amplifiers
Optical amplifiers, such as the external optically pumped erbium doped fiber amplifiers (EDFA), can
provide approximately 20 dB of direct optical gain. These optical amplifiers perform as direct
repeaters without converting to and from the electrical domain for amplification. The EDFA requires
a laser optical source acting as a “pump” signal which provides the amplification mechanism. Now,
let’s look at the fiber optic cable itself.

011. Fiber optic connectors


Connectors normally terminate optical fibers. They allow you to connect and disconnect fibers from
patch panels, transmitters (sources) or receivers (detectors). Don’t confuse a connector with a splice.
A splice is generally not designed to be repeatedly connected or disconnected. Connectors may be
installed on cables in three different configurations: patch cords, pigtails, and breakout cables.
Patch cords
Patch cords are short sections of single fiber cables that are used to connect circuits within a patch
panel together. A patch cord has a connector at each end.
Pigtails
Pigtails are similar to patch cords except they only have a connector on one end. The other end of the
pigtail is spliced to the fiber optic cable. They are used to connect a fiber cable to a patch panel or end
equipment.
Breakout cables
Breakout cables (fig. 1–40) are similar to pigtails except they are multi-fiber cables. Like the pigtail,
each fiber has a connector installed on one end of it. The connector end of the breakout cable is
connected to a patch panel or end equipment.

Figure 1–40. Simple breakout cable.

Types of connectors
There are many different kinds of connectors used with fiber optic cables. Currently, there are five
basic types that you may encounter within the Air Force. They are SMA, Biconic, ST, FC, and SC
connectors. Let’s take a look at each of them.
SMA connectors
SMA (Sub-Miniature, Type A) connectors (fig. 1–41), originally designed by Amphenol, use a
threaded coupling nut without a keying device. The two basic types are the 905 style and 906 style.
The 905 uses a straight ferrule and the 906 has a step-down nose to allow use of plastic alignment
bushings for maximum alignment. Originally designed with a steel ferrule for multimode
applications, they are now available with ceramic ferrules for single-mode applications. The primary
problems that arise with the use of SMA connectors are crashing due to over-tightening the threads,
and repeatability of alignment because of the lack of a keying device.
Biconic connectors
These connectors are named for their conical shape. Many items of equipment installed during the
1980’s still require interfacing using biconic connectors. These were the first connectors used on
single-mode fibers, although, they are available for single-mode or multimode applications. Biconic
connectors were originally not keyed and early problems developed with repeatability and crashing
due to over-tightening. Keyed models were developed later, but more reliable connectors emerged for
use on new equipment (fig. 1–42).
ST connectors
ST (straight tip) connectors (fig. 1–43) were designed by AT&T Bell Laboratories for use with
single-mode or multimode fibers. They use quick-release, keyed, bayonet couplings that are preferred
in situations where severe vibrations are not expected. The ST is probably the most popular and
widely used connector in local area networks, premise wiring, test equipment and similar
applications. The keying feature ensures that the fiber is always inserted to the mating bushing with
the same orientation. The bayonet coupling prevents crashing due to over-tightening.
FC connectors
Named FC for “field connector,” it was originally devised by Nippon Telephone and Telegraph
(NTT) for telecommunications. It was used by MCI in its fiber optic telephone network in the 1980’s.
The connector has a threaded coupling feature similar to the SMA for use in high-vibration
environments. The threads would be difficult to over tighten because stops have been installed to
obtain repeatable torque. It also offers a keying feature similar to the ST, except that some FC
connectors are “tunable.” The term “tunable” means the keying slot can be rotated to find optimal
alignment and remains in that alignment until moved again. The FC connector is available for single-
mode and multimode applications (fig. 1–44).

Figure 1–41. SMA connector.

Figure 1–42. Biconic connector.

Figure 1–43. ST connector.

Figure 1–44. FC connector.

SC connectors
Named SC from “subscriber connector” (fig. 1–45), it was also developed by NTT and gained
popularity throughout the 1990’s for both single-mode and multimode applications. They use a push-
pull engagement for mating and are designed to be pull-proof so a slight pull on the cable will not
disengage the connection. The SC connector is a strong competitor to the FC and ST connectors due
to the ease in constructing multifiber connectors for duplex configurations. Connectors such as the
FC, ST, and SMA that require twisting are not readily adaptable to multifiber connections in high
density applications because of the space required to allow rotation. Amphenol has already developed
a mini SC for even higher density applications.

Figure 1–45. SC connector.

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.

008. Optical transmitter


1. What are the three basic components that make up a fiber optics link?

2. Name the three components that make up the optical transmitter.

3. What are the three main types of light sources for optic waveguides?

4. Give the three main requirements of a light source.

5. What are the advantages of LEDs over lasers?

6. What are the two basic designs of LED?

7. How are lasers different from LEDs?

8. How is a laser’s output power measured?

9. What are lasers sensitive to?

009. Optical receiver


1. What factors limit a detector’s performance?

2. What two factors control the light signal that’s received by the detector?

3. What terms refer to the relationship between the signal and noise?

4. Upon what does a detector’s responsivity depend?

5. What are the two main types of photodetectors?

6. For what type of application are APDs ideally suited?

010. Repeaters, regenerators, and optical amplifiers


1. What are the purposes of repeaters in a fiber optic communications link?

2. How do regenerators work?

3. Which repeater device does not require converting to and from the electrical domain for
amplification?

011. Fiber optic connectors


1. Describe the differences between a fiber optic patch cord, pigtail and breakout cable.

2. Match the type of fiber optic connector in column B with their characteristics in column A. Items
in column B may be used once, more than once, or not at all.

Column A

____ (1) Has a conical shape.
____ (2) Uses a push-pull arrangement for mating.
____ (3) Used in high-vibration environments.
____ (4) Most widely used fiber optic connector for local area networks and premise wiring.
____ (5) Uses a threaded coupling nut without a keying device.
____ (6) Uses quick-release, keyed bayonet couplings.
____ (7) First connector used on single-mode fiber optic cable.

Column B

a. SMA connector.
b. Biconic connector.
c. ST connector.
d. FC connector.
e. SC connector.

3. What are “tunable” fiber optic connectors?

4. Why are SC fiber optic connectors better for use in high-density applications than other types of
connectors?

Answers to Self-Test Questions


001
1. Ease of radiation and channel allocation.
2. The signal must first be converted to analog format.
002
1. The carrier.
2. Carrier frequency, carrier plus the modulating frequency (USB), and carrier minus the modulating
frequency (LSB).
3. The sidebands.
4. The bandwidth required is two times the modulating signal frequency.
5. It refers to the amount of effect or change that the intelligence has on the carrier.
6. The percent of modulation is 62.5 percent—found by dividing 2.5 by 4 and multiplying the result by 100.
7. 81.8 percent.
8. Severe distortion and increases bandwidth of an AM signal.
003
1. The amplitude of the modulating signal.
2. They are directly proportional.
3. The FCC establishes the maximum amount of deviation.
4. A sideband that contains at least 1 percent of the total transmitted power.
5. By dividing the amount of frequency deviation by the frequency of the modulating signal.
6. From the unmodulated carrier.
7. A higher modulation index means more power in the sidebands. It is even possible to have all the power in
the sidebands and none in the carrier. At this point, any further increase in modulation would start taking
power from the sidebands and placing it back in the carrier resulting in a redistribution of power.
004
1. None. The frequency change in PM is incidental.
2. During the positive alternation of the modulating signal, the phase of the carrier lags behind the
unmodulated carrier. During the negative alternation it leads the unmodulated carrier.
3. During the constant amplitude part of the modulating frequency.
4. Amplitude.
5. Frequency.
6. QPSK has faster data rates, doubling the data carrying capability over BPSK.
7. By using smaller phase shifts to allow for more phase shifts and increased capacity as each shift represents
more bits in combination.
005
1. The process of converting analog signals into digital signals.
2. Band limiting, sampling, quantizing, and encoding.
3. It ensures the input to the sampler never exceeds a maximum frequency.
4. Converting a continuous time signal into a discrete time signal.
5. PAM = amplitude; PWM = width or duration; PPM = position.
6. Taking samples at twice the highest frequency in the bandlimited signal.
7. Quantization.
8. Uniform quantizing, non-uniform quantizing.
9. Uniform quantizing.

006
1. One character in length.
2. The start and stop bits.
3. By summing the 1 bits. If the character arrives with an even number of 1 bits, the device assumes that it has
received a correct character.
4. To reduce the overhead costs of data transmission.
5. Synchronous transmission blocks many characters together for transmission.
007
1. Odd or even parity.
2. BCC.
3. 255.
4. CRC treats the binary ones and zeros in the frame address, control, and information fields as one long
binary number.
5. 99 percent in most applications.
6. The receiver automatically sends a retransmittal request to the sender if it finds an error in a received frame.
7. Receiving end.
8. To transmit multiple copies of the same message to the distant end.
9. To compare all copies of the transmitted message, then reconstruct the message using the good portions of the
message copies received.
008
1. Fiber optic communication systems require at least three basic components: an optical transmitter, a
transmission medium, and a detector or optical receiver.
2. Normally, a transmitter is comprised of a driver, an optical source, and an FO pigtail.
3. LED, semiconductor laser diodes (LD), and non-semiconductor laser diodes.
4. (1) Operating speed or rise time must be fast enough to meet the application’s bandwidth requirements.
(2) Must provide enough optical power through the fiber to operate the detector.
(3) It must produce a wavelength that takes advantage of the fiber’s low-loss propagation characteristics.
5. Small size, ruggedness, capability of single and direct modulation, reduced expense, and spectral match
with both fiber waveguides and silicon photodetectors.
6. The edge emitter and the surface emitter.
7. LEDs differ from lasers in that their light output is incoherent and the laser’s is coherent.
8. Milliwatt.
9. Temperature.
009
1. Dispersion or attenuation.
2. Noise floor (noise equivalent power) and the signal-to-noise ratio.
3. Signal-to-noise ratio (SNR) and bit error rate.
4. Wavelength of light.
5. Positive intrinsic negative (PIN) diode and the avalanche photodiode (APD).
6. High-speed, long-distance applications.
010
1. To extend the distance and to preserve signal integrity.
2. Regenerators receive incoming low-level, dispersed (distorted) digital pulses. After conversion to the
electrical domain, the pulses are regenerated (the leading and falling edges are sharpened and the pulses are
amplified). Finally the regenerated pulses are retransmitted optically.
3. Optical amplifier.

011
1. A fiber optic patch cord is a short section of single fiber cable with a connector on each end. Each end is
connected to a patch panel to connect circuits together. Pigtails are similar to patch cords but they only have
a connector on one end—the other end is spliced to the fiber optic cable. Breakout cables are similar to
pigtails but they are spliced to multifiber cables.
2. (1) b.
(2) e.
(3) d.
(4) c.
(5) a.
(6) c.
(7) b.
3. It means the keying slot in the connector can be rotated to find the optimal alignment.
4. The other types of connectors require room to twist them for alignment. SC connectors use a push-pull
arrangement and do not need this extra room.

Do the unit review exercises before going to the next unit.



Unit Review Exercises


Note to Student: Consider all choices carefully, select the best answer to each question, and circle
the corresponding letter. When you have completed all unit review exercises, transfer your answers to
the Field-Scoring Answer Sheet.
Do not return your answer sheet to the Air Force Career Development Academy (AFCDA).
1. (001) What is one reason for modulation that involves modulating low frequency signals for
transmission over long distances?
a. Spectrum conservation.
b. Channel allocation.
c. Ease of radiation.
d. Companding.
2. (001) What are the three general categories used to produce modulation in radio frequency (RF)
transmission today?
a. Binary amplitude shift keying, frequency shift keying, and phase shift keying.
b. Pulse amplitude, pulse width, and pulse position.
c. Amplitude, frequency, and phase.
d. Analog, digital and shift keying.
3. (002) When the modulating signal and carrier signal are combined within a modulator, the output
signal contains
a. modulating signal, upper sideband, and lower sideband.
b. carrier, upper sideband, and lower sideband.
c. upper sideband and modulating signal.
d. carrier and modulating signal.
4. (002) If a carrier frequency of 1 MHz and a modulating tone of 10 kHz go to the modulator, the
output signal includes
a. 1.01 MHz, 1 MHz, and 0.99 MHz.
b. 1.1 MHz, 1 MHz, and 0.9 MHz.
c. 1.11 MHz, 1 MHz, 0.99 MHz.
d. 1.111 MHz, 1 MHz, 0.999 MHz.
5. (002) The bandwidth of an amplitude modulated signal is
a. two times the modulating signal.
b. the same as the modulating signal.
c. determined by the modulation index.
d. defined in terms of maximum amount of modulation.
6. (002) Use the formula on page 1–3. If an oscilloscope displays a modulation envelope having an
Emax of 200 volts and an Emin of 20 volts, what is the modulation percentage?
a. 81.8.
b. 85.5.
c. 88.
d. 122.
7. (002) Which statement concerning bandwidth is true?
a. Overmodulating increases bandwidth because the distortion produces harmonics.
b. Undermodulating increases bandwidth because the distortion produces harmonics.
c. Overmodulating increases bandwidth because of the output’s increased amplitude.
d. Undermodulating increases bandwidth because of the output’s decreased amplitude.

8. (003) The output of the oscillator in a frequency modulation (FM) modulator increases in
frequency with each
a. positive half cycle of the carrier.
b. negative half cycle of the carrier.
c. positive half cycle of the modulating signal.
d. negative half cycle of the modulating signal.
9. (003) In frequency modulation, a significant sideband contains at least what percentage of the total
transmitted power?
a. 1.
b. 3.
c. 10.
d. 0.1.
10. (003) What is the formula to find the modulation index?
a. Deviation divided by frequency of modulation.
b. Deviation times frequency of modulation.
c. Sideband divided by carrier frequency.
d. Sideband times carrier frequency.
11. (003) A 5 kHz modulating signal has enough peak voltage to cause a deviation of 15 kHz. What is
the modulation index?
a. 5.
b. 0.5.
c. 0.3.
d. 3.
12. (003) What is it called when a narrow frequency band between adjacent stations prevents
frequency modulated sidebands from overlapping?
a. Space.
b. Guard band.
c. Buffer zone.
d. Prevention field.
13. (004) In phase modulation, what part of the carrier signal is varied?
a. Frequency.
b. Amplitude.
c. Height.
d. Phase.
14. (004) In phase modulation, the carrier’s
a. phase is shifted at the rate of the modulating signal.
b. phase shifted with the phase of the modulating signal.
c. amplitude is shifted at the rate of the modulating signal.
d. amplitude is shifted with the phase of the modulating signal.
15. (005) What is the first step in the pulse code modulation process?
a. Discrete amplitudes are assigned to the sampling pulses.
b. A binary code number is assigned to the sample.
c. The quantizer limits the amplitude of the pulses.
d. The analog signal is band limited.
16. (005) What part of the pulse code modulation process converts a continuous time signal into a
discrete time signal?
a. Sampling.
b. Rectifying.
c. Oscillating.
d. Band limiting.
17. (006) In asynchronous transmissions, what bit is used by the receiving device to verify that the
transmission was received correctly?
a. Control.
b. Parity.
c. Start.
d. Stop.
18. (006) If an error should occur, what transmission is lost in a synchronous transmission?
a. One character.
b. Block of data.
c. A parity bit.
d. A stop bit.
19. (007) When using vertical redundancy check, what significance does the number of ones have in a
data bit pattern?
a. Determines parity.
b. Determines transmission rate.
c. Determines whether transmission is in ASCII format.
d. Determines whether transmission is synchronous or asynchronous.
20. (007) What error detection method is used for checking a data block greater than 512 and is about
99 percent effective in most applications?
a. Checksum.
b. Cyclic redundancy check.
c. Vertical redundancy check.
d. Longitudinal redundancy check.
21. (007) When using forward error control as a method of error correction, where does error
correction take place?
a. Receiving end.
b. In the oscillator.
c. Transmitting end.
d. In the primary buffers.
22. (008) Light sources that are applicable to fiber optic waveguide are light-emitting diodes and
a. photo transistors.
b. hybrid photodiodes.
c. semiconductor laser diodes.
d. integrated photodiode/preamplifiers.
23. (008) What type of light is emitted from a laser?
a. Incoherent.
b. Coherent.
c. Ordinary.
d. Invisible.
24. (009) What are the two main types of photodetectors?
a. Positive intrinsic negative and avalanche photodiode.
b. Light-emitting diode and avalanche photodiode.
c. Light emitting diode and semiconductor laser.
d. Tunnel and zener diodes.
25. (009) What photo detector converts one photon to one electron?
a. Light emitting diode.
b. Avalanche photodiode.
c. Positive intrinsic negative diode.
d. Integrated photodiode/preamplifier.
26. (010) Which repeater amplifies an optical signal without converting to and from the electrical
domain?
a. Receiver.
b. Repeaters.
c. Regenerators.
d. Optical amplifiers.
27. (011) A short section of single fiber cable that has a connector at each end is called a
a. pigtail.
b. jumper.
c. patch cord.
d. breakout cables.
28. (011) Which fiber-optic connector uses quick-release, keyed bayonet couplings that are preferred
in situations where severe vibrations are not expected?
a. Biconic.
b. Field (FC).
c. Straight tip (ST).
d. Sub-miniature, type A (SMA).

Please read the unit menu for unit 2 and continue 


Unit 2. Protecting Electronic Equipment
2–1. Electromagnetic Pulse .............................................................................................................. 2–1 
012. Electromagnetic pulse principles ....................................................................................................... 2–1 
013. Electromagnetic pulse protection....................................................................................................... 2–3 
2–2. Electromagnetic Interference .................................................................................................. 2–6 
014. Identifying electromagnetic interferences.......................................................................................... 2–6 
015. Spectrum interference reporting ...................................................................................................... 2–10 

THROUGHOUT your career in the Air Force you will be expected to become an expert on
specific systems while keeping up-to-date on issues that affect the entire career field. In this
unit, we discuss fundamental topics pertinent to the entire career field. This unit also discusses
the threat and principles of electromagnetic pulse (EMP) and electromagnetic interference (EMI).
Understanding these topics will help you safeguard yourself and your people from unnecessary injury
while protecting your systems from preventable signal interference or damage.

2–1. Electromagnetic Pulse


EMP is a possible threat to almost all sophisticated military electronic systems. The subject of this
section is the threat of EMP—how it came about and how it affects your equipment. Hopefully you
will never have any encounters with EMP, but it is still important that you have a working
understanding of its effects. Let’s begin with a review of EMP before we tackle equipment problems
caused by EMP.

012. Electromagnetic pulse principles


Electromagnetic pulse (EMP) is a product of a nuclear explosion. Initial nuclear radiation, in the form
of gamma rays and neutrons, generates electromagnetic pulse effects. While the EMP lasts only
millionths of a second, it is devastating to all types of electronic equipment. In this lesson, we will
look at origins of EMP and how it affects our equipment. This will give you the background
information you need to understand EMP generation and effects on your equipment systems.
Electromagnetic pulse origins
The initial nuclear radiation, through gamma rays and secondary neutron radiation, produces the
electromagnetic pulse effects. These pulses can easily span continent-sized areas and this radiation
can affect systems on land, sea, and air. The EMP forms are created by detonations at different
altitudes and are experienced by systems at different locations with respect to the burst. The signal
from such an event extends to the visual horizon as seen from the burst point. These forms are
high-altitude electromagnetic pulse (HEMP), system-generated electromagnetic pulse (SGEMP), and
source region electromagnetic pulse (SREMP).
High-altitude electromagnetic pulse
The nuclear device detonation in or above the earth’s atmosphere (exoatmospheric) produces an
intense, time-varying electromagnetic field. The EMP environment produced by the exoatmospheric
event results from the device energy—chiefly gamma rays—in the atmosphere at altitudes between
20 and 40 kilometers (km). This high-altitude electromagnetic pulse (HEMP) environment then
propagates to all points in the air or the ground within line-of-sight of the burst. A single high altitude
burst can produce high amplitude HEMP fields over millions of square kilometers. For example, a
large device detonated at 400–500 km over Kansas would affect all of CONUS. The HEMP field can
rise to a peak value of approximately 50 kilovolts per meter within a few nanoseconds, then decay
gradually over a period lasting hundreds of seconds.
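To see why one high-altitude burst covers so much ground, consider the line-of-sight geometry. The sketch below is a simplified spherical-earth approximation (not a formula from this course) that computes the ground radius out to the visual horizon for a given burst altitude:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean earth radius (spherical approximation)

def horizon_ground_radius_km(burst_altitude_km: float) -> float:
    """Ground distance from the point below the burst out to the visual
    horizon, for a burst at the given altitude above a spherical earth."""
    r = EARTH_RADIUS_KM
    central_angle = math.acos(r / (r + burst_altitude_km))  # radians
    return r * central_angle

# A burst at 400-500 km altitude sees a horizon more than 2,000 km away
# in every direction, consistent with the Kansas example above.
for altitude in (400.0, 500.0):
    print(f"{altitude:.0f} km burst: horizon ~{horizon_ground_radius_km(altitude):.0f} km")
```

For a 400 km burst the radius works out to roughly 2,200 km, so a circle centered over Kansas reaches well past both coasts of the continental United States.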
High-altitude electromagnetic pulse is a wideband phenomenon involving frequencies in the range
from below 1 hertz through 1 gigahertz (GHz). This wideband threat extends to electronic systems’
operation as the HEMP field induces electrical current and voltage stresses. These HEMP stresses
cause system malfunctions and can ignite electro-explosives or stored fuel. High-altitude
electromagnetic pulse is the only nuclear environment that can impose on many systems
simultaneously with the expenditure of very few weapons. Because of these unique features, we must
take special measures to protect systems against HEMP.
System-generated electromagnetic pulse
SGEMP is a problem for electronic components in satellites and reentry vehicles because they are
directly exposed to nuclear radiation from a high-altitude burst. The interaction of nuclear
radiation—particularly gamma rays and x-rays—with solid materials in electronic systems frees
electrons from atoms, and these free electrons generate electric fields. The free electrons interact
again with the same or other solid materials to release more electrons by ionization; we call these
secondary electrons. Both surfaces of the material emit electrons, so either surface can generate an
electric field that creates an internal electromagnetic pulse.
electromagnetic fields generated near the surface walls can induce electric currents in components,
cables, ground wires, and antennas. Large currents and voltages that are capable of causing damage or
disruption can be developed just as with external EMP.
Source region electromagnetic pulse
A nuclear burst at or near the earth’s surface produces a SREMP. There are two distinct regions of
importance for SREMP: the source region where EMP is generated and the radiated region. The
source region is the area within three to five kilometers of the explosion. We call it the source region
because the EMP is produced in this area by the interactions between the weapon products and the
earth’s atmosphere. This area has very intense radial electrical fields, azimuthal magnetic fields, air
and ground currents, and time-varying air conductivity. EMP environments inside the source region
can affect systems such as ICBMs or command centers that have been hardened to withstand nuclear
blasts, thermal energy, and radiation within the source region.
SREMP can also affect systems outside the source region that are connected to any cables passing
through the affected area. The EMP energy couples to conducting components by electrical and
magnetic induction and by direct charge deposition or resistive coupling. A surface burst has fields
radiating outside the source region, with field amplitudes greater than 5 kilovolts per meter out to
ranges of 10 kilometers and more. In this range, the radiated EMP is a principal threat to systems that
respond to very low frequencies or have very large energy collectors such as cable. EMP effects for
these systems are such that special attention must be given to surge protection to make sure the high
currents can be dissipated. Also, the source region contains an electric dipole that creates an
azimuthal magnetic field and a predominately vertical electric field. In the radiated region, the EMP
electric field is primarily vertical. Thus, the fields interact well with vertical structures such as
antennas and boresight feedhorns.
Effects on communications
As with EMP, the initial nuclear radiation creates atmospheric disturbances that lead to
communications blackout and scintillation effects. These effects would interfere with
communications between the ground station and the spacecraft.
Scintillation effects on RF are the disruptions of the signal. Scintillation is the signal fading or
waveform distortion effects caused by the multiple path propagation through the structured plasma
field. This plasma field is created by the interaction of the earth’s magnetic field with the high-
altitude ionized region caused by the nuclear burst. As for absorption or blackout, layers of charged
electrons are trapped and prevent transmissions through the layer. The large attenuation of the
transmitted signal causes the received signal-to-noise ratio to fall below unity. However, these effects
are frequency dependent. Disruption can last from just seconds to days. The lower radio frequencies
are affected by absorption and the higher frequencies are affected by scintillation effects. Since EMP
severely disrupts the ionosphere, high-frequency communications may also be disrupted for hours to
days depending upon the angle of reflection of the atmosphere.
There are systems employed to mitigate the adverse condition of a nuclear environment. One such
system is Military Strategic and Tactical Relay (MILSTAR). MILSTAR integrates satellite
communications with an electronic counter-countermeasure (ECCM) and balanced nuclear hardening
into ground terminals for installation and operations in communications shelters. It provides
antijamming, scintillation-protected communications to support critical C2 communications. A
crosslink capability allows the constellation to communicate with a satellite that is within the
contaminated areas by way of another satellite.

013. Electromagnetic pulse protection


The purpose of the EMP protection system is to protect critical electrical systems from the intense
electromagnetic (EM) effects of a nuclear explosion. The frequency spectrum may
extend from below 1 Hz to above 300 MHz. The high-altitude EMP produced by an exoatmospheric
nuclear explosion is the form of EMP commonly of most interest because of the large area covered by
a single bomb. This high-intensity EMP can disrupt or damage critical electronic facilities over an
area as large as the continental United States unless protective measures are taken in the facilities.
The development of such protective measures involves grounding, bonding, and shielding. These will
be mentioned here but more thoroughly discussed later in this volume.
Effects of Electromagnetic pulse on facility systems
EMP interaction with systems may be separated into long-line effects and local effects. Long-line
effects are the currents and voltages induced on long power lines, communication cable links or other
conductors, such as pipelines. Some of these EMP effects may be induced far away and guided to the
facility along the conductor. Local effects are the currents and voltages induced directly on the
facility shield, building structure, wiring, equipment cabinets, and so forth. These local effects are
very difficult to evaluate analytically because of the complexity of the facility structure, the lack of
information on the broadband electrical properties of many of the structural materials and the
extremely large number of interaction paths, facility states, and other complicating factors.
Typical components and installation detail
Shielding a facility is required for EMP survivability. The ideal shielding would be made of steel
and completely enclose the facility. However, this is an unrealistic method because operations
would not be possible. To create an EMP-survivable facility, it should be shielded as much as
possible. Furthermore, all openings to the facility need to be filtered and protected. The facility
also needs to be isolated from any external electric EM propagation in the earth. Typical
components and installation details for EMP protection systems are described below.
Shielding
An EMP event can induce a magnetic or electric field in an electrical circuit, causing an error in the
circuit’s response. An effective means for the reduction of this
interfering coupling is the use of shields around the circuits and around interconnecting lines
penetrating the EMP barrier and internally within the barrier.
1. The sensitive equipment will typically be located within the shielded barrier provided for
EMP protection. Shielding should be provided for the cabling servicing this equipment by
installing it in rigid metal conduit that is bonded to the entry plate of the shield.
Instrumentation and coaxial cable should be of shielded construction with connectors
providing additional protection as the cable passes through the entry plate into the protected
area.
2. Ensure that shield continuity is maintained at points of entry of signal cables, power
conductors, utility lines, and ground conductors.
3. Make sure that windows, doors, and ventilation ports are shielded along with the walls. Use
well-bonded screen wire for windows, use metal doors, and apply honeycomb ducts or
appropriate screening over ventilation ports.
4. Equip all power lines supplying shielded areas with power line filters.
5. Securely ground all metal shields.
Bonding
The bonding requirements for EMP protection ensure a mechanically strong, low-impedance
interconnection between metal objects and prevent the path thus established from subsequent
deterioration through corrosion or mechanical looseness.
Entry plates
All metallic penetrations entering the protected area should enter at a common location and all
shielded cables, conduits, and pipes should be bonded to an entry plate.
1. Effective transient protection can be provided by peripherally bonding each RF coaxial cable
to a metal bulkhead connector that in turn is peripherally bonded to the building entry plate
and grounded to the earth electrode subsystem. This scheme will route transient currents from
cable shields to earth ground instead of through terminating equipment to ground. Also,
transient surge currents will be shunted to ground before transient energy is cross-coupled to
other equipment lines in the facility. The entry plate must contain the required number of
appropriate coaxial feed through connectors to terminate all applicable incoming lines. The
connectors must also provide a path to ground for connected cable shields. If external and
internal coaxial cables are of a different physical size, the changeover in connector size
should be accomplished by the feed through connectors of the entry plate.
2. The shields of all telephone cables entering a C-E facility must be bonded to each other and
to the earth electrode subsystem through the steel entry plate. This measure eliminates
harmful differences of potential between the various telephone cables entering the facility.
3. Configuration control must be considered during the design phase. Conducting penetrations
must be bonded carefully around the penetration perimeter (360°) to the shield entry plate to
prevent aperture coupling to the facility interior or to inner conductors of shielded cables.
Nonconducting penetrations must be treated as apertures in the shield.
Filters
The majority of the interfering signals will be conductively coupled into the susceptible circuit. The
proper application of filters to both the signal and power lines will reduce this coupling.
1. It is essential to avoid signal penetration by way of power and signal wiring. This demands
that filters achieving adequate insertion loss be installed in all incoming cables; it is fairly
normal to have three-phase power circuits and several hundred signal lines going into a large
enclosure. It is essential that the filters provide the specified attenuation under full-load
conditions at all frequencies. Unless the filter attenuation is maintained at all frequencies and
load currents, the overall shield attenuation will be degraded by the signal penetration by way
of the filters.
2. All power line filter cases shall be directly bonded to the equipment case or enclosure. Filters
shall be bonded to any subassembly enclosure used to maintain shield effectiveness.
3. Filters on power, control, and signal lines shall be installed in a manner that maintains the
integrity of the shield. Power line filters shall be completely shielded with the filter case.
Filters on power control and signal lines shall be placed as close as possible to the point of
penetration of the case in order to avoid long, unprotected paths inside the equipment. Mount
filters for power and control cables inside the shield and extend the filter input terminals
through the shield.
4. DC power equipment has been found to be a significant electrical noise source that can be
minimized through proper configuration of the facility, the physical and electrical isolation of
the dc power equipment from communications equipment and filtering of the output. This
noise can be minimized by the use of decentralizing filters at or in the equipment.
5. AC line filters can cause currents in the ground system when distributed in various areas of
the facility. This is due to AC current passing through capacitors in the AC line filters when
the lines are filtered to ground. Power line filters should not induce more than 30
milliamperes of current to the fault protection subsystem.
6. Linear filters may also be used as barrier elements on penetrating wires, but at the outer
(facility-level) barrier, filters are always used in combination with surge arresters. On power
lines, for example, the line filter usually cannot tolerate the peak voltages, so a spark-gap
surge arrester is used to limit the voltage and the filter isolates the interior circuits from the
negative dynamic resistance and shock excitation of the spark-gap discharge. The shunt input
capacitance of the filter may also be used to reduce the rate-of-rise of the voltage, so that the
firing voltage of the surge arrester will be lower. A variety of low-pass, bandpass and high-
pass filters are available for power and signal line protection.
All systems of any importance should incorporate these features and techniques to counter
electromagnetic pulse effects. Today there are an increasing number of systems with EMP protection
and many more are becoming EMP-hardened. How can you as a maintenance technician ensure your
equipment can meet this threat? Each time you perform maintenance, keep in mind the effect on your
facility shield. Also, be aware of the hardness maintenance and hardness surveillance plan for your
HEMP-hardened facility.

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.

012. Electromagnetic pulse principles


1. What activity must occur to produce an EMP?

2. Which two elements produce the EMP effects?

3. What are the electromagnetic pulse forms produced by the derived environment?

4. Why is high-altitude electromagnetic pulse environment protection important for critical, time
urgent mission systems?

5. Why is system-generated electromagnetic pulse a problem for satellites and reentry vehicles?

6. What are the two distinct regions of importance for SREMP?


7. What effects on communications are there after a nuclear detonation?

8. What capabilities can be employed by MILSTAR to mitigate communications loss within a
contaminated area?

013. Electromagnetic pulse protection


1. Describe the difference between the long-line and local effects of EMP on facilities.

2. How can a facility be survivable against EMP?

3. What happens in an EMP induced event?

4. Why is bonding important in EMP protection?

5. Where should all metallic penetrations entering the protected area be located?

6. To reduce coupling into susceptible circuits, what element is installed on both signal and power
lines?

2–2. Electromagnetic Interference


Anyone who has ever tried to hold a conversation in a crowded room knows it can be difficult. If
background conversations become loud enough, you won’t get your message across. Although we
can’t hear most of the frequencies that permeate the airwaves around us, the concept remains the
same. Radio receivers are typically exposed to thousands of radiated signals at different frequencies.
Whether any of these extraneous signals result in electromagnetic interference (EMI) depends on their
amplitude and frequency. This section introduces you to the sources of interference on
communications and non-communications devices. Lastly, we will review the Air Force and joint
spectrum interference resolution program to resolve EMI. This section is intended to provide
operations and maintenance personnel with a description of some types of interference and basic
procedures to follow in attempting to identify an electromagnetic interference source. The
information included here is not intended to be a comprehensive analysis of all the possible
interference conditions. Let’s start with a discussion on identifying EMI sources.

014. Identifying electromagnetic interferences


What is electromagnetic interference (EMI)? It is any electromagnetic disturbance that interrupts,
obstructs or otherwise degrades or limits the effective performance of electronics and electrical
equipment. It can be induced intentionally, as in some forms of electronic warfare, or unintentionally,
as a result of spurious emissions and responses, intermodulation products, and the like.
Jamming
RF Transmission Systems technicians must be able to recognize jamming. This is not always an easy
task. In the next few paragraphs, we will introduce simple steps to distinguish jamming from internal
and external interference sources.
Internal or external interference
The two sources of interference are internal and external. If the interference or suspected jamming
can be eliminated or substantially reduced by grounding the radio equipment or disconnecting the
receiver antenna, the source of the disturbance is most likely external to the radio. If the interference
or suspected jamming remains after grounding or disconnecting the antenna, the disturbance is most
likely internal and is caused by a malfunction of the radio. External interference must be checked
further for enemy jamming or unintentional interference.
Jamming or unintentional interference
Unintentional interference may be caused by other radios, some other type of electronic or electric/
electromechanical equipment, or atmospheric conditions. The battlefield is so crowded with radios
and other electronic equipment that some unintentional interference is virtually unavoidable. Also, the
static electricity produced by atmospheric conditions can negatively affect radio communications.
Unintentional interference normally travels only a short distance; a search of the immediate area may
reveal the source of this type of interference. Moving the receiving antenna for short distances may
cause noticeable variations in the strength of the interfering signal. These variations normally indicate
unintentional interference. Conversely, little or no variation normally indicates enemy jamming.
Jamming recognition
The fact that the enemy has intentionally interfered with your communications is important command
and control (C2) intelligence information. Therefore, when you have unusual interference, take
immediate steps to find out if the interference is hostile jamming or if it is local electrical
interference, interference from a friendly station, or perhaps trouble in your own receiver.
Co-channel
Co-channel interference occurs when the center frequency or carrier of an undesired signal falls
within the pass band of a receiver. These signals follow the normal signal path through the affected
receiver and become apparent as interference. The co-channel signal is often from another authorized
user of the affected frequency. Normally, frequency management limits the number of users of the
same frequency assignment within a geographic area so as to preclude electromagnetic interference.
However, during certain atmospheric conditions, it is possible for two distant users who are
occupying the same frequency to interfere with each other. Co-channel interference can be eliminated
through frequency management action. However, the use of continuous tone coded squelch at the
affected receiver can eliminate the effects of the interference.
Adjacent signal or adjacent channel
Adjacent signal interference can occur in one of two ways:
1. When sidebands or broadband transmitter noise from a transmitter (but not the center
frequency or carrier) fall within the pass band of a receiver and are of sufficient amplitude,
the undesired signals are detected by the receiver and become apparent as interference. This
type of interference we also refer to as broadband transmitter noise. Placing a band-pass filter
on the offending transmitter can often eliminate broadband transmitter noise.
2. Adjacent signal interference can also occur when a receiver’s attenuation to the fundamental
frequency of an off frequency transmitter is not sufficient to prevent the receiver from
detecting the off frequency energy; this results in interference. If the transmitter and receiver
frequencies are not too closely spaced, installing a band-pass filter between the affected
receiver and its antenna can eliminate this type of interference.
Spurious responses
Spurious responses occur when a receiver responds to off-frequency signals. Spurious responses are
actually intermodulation responses that occur between one external signal and an internally generated
signal, usually the fundamental or harmonics of the local oscillator. A spurious response can occur
whenever the mix between the external input signal and an integer multiple of the local oscillator
create an intermodulation product within the bandwidth of the receiver’s intermediate frequency
amplifier and filter.
Spurious responses are normally related to the frequencies of signals generated within the receiver.
The most common spurious response occurs at a receiver’s image frequency. Interference due to an
image response can occur when an undesired signal at a frequency above or below the affected
receiver’s local oscillator mixes with the local oscillator to produce a signal at the affected receiver’s
intermediate frequency. Installing a bandpass filter on the affected receiver can eliminate spurious
response interference.
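The image response described above can be illustrated with a short sketch (the frequencies are assumed example values, not taken from this text). With high-side injection the local oscillator runs above the tuned frequency, so the image lies two intermediate frequencies above the tuned frequency:

```python
def image_frequency_mhz(tuned_mhz, if_mhz, high_side_lo=True):
    """Image frequency of a superheterodyne receiver: the undesired input
    on the far side of the local oscillator that also mixes down to the IF."""
    if high_side_lo:
        return tuned_mhz + 2 * if_mhz  # LO above the tuned frequency
    return tuned_mhz - 2 * if_mhz      # LO below the tuned frequency

# Example: a receiver tuned to 1.0 MHz with a 455 kHz IF and a high-side
# LO (1.455 MHz) also responds to an undesired signal near 1.91 MHz.
print(image_frequency_mhz(1.0, 0.455))
```

A bandpass filter ahead of the mixer works here precisely because the image sits a full 2 × IF away from the desired signal, well outside a reasonable preselector passband.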
Spurious emissions
Spurious emissions are discrete, narrow-band signals generated by the oscillators or power amplifiers
within a transmitter. Spurious emissions may be caused by intermodulation products generated in
nonlinear components or from signals generated within the transmitter. The most significant spurious
emissions are generally the harmonics of the transmitter’s tuned frequency or harmonics of the crystal
frequency(s) used to derive the transmitted signal. Transmitters that use rapid tuning methods tend to
generate many spurious emissions. There are methods to determine the frequencies of transmitter
spurious emissions. However, internally generated spurious emissions can be numerous and difficult
to predict. Although less often a problem, receivers can emit spurious emissions that can cause
interference to another nearby receiver. All superheterodyne receivers emit some electromagnetic
energy at their local oscillator frequencies and possibly at harmonics of the local oscillator. Installing
a bandpass filter on the source of the emissions can normally eliminate spurious emissions.
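A desktop study for harmonic spurious emissions can be sketched as follows (all frequencies are illustrative assumptions): list a transmitter's harmonics and flag any that land inside a victim receiver's passband.

```python
def harmonics_in_passband(tx_mhz, rx_center_mhz, rx_bw_mhz, max_harmonic=10):
    """Return (harmonic number, frequency) pairs for transmitter harmonics
    landing inside the receiver passband rx_center +/- rx_bw/2."""
    lo = rx_center_mhz - rx_bw_mhz / 2
    hi = rx_center_mhz + rx_bw_mhz / 2
    return [(n, n * tx_mhz) for n in range(2, max_harmonic + 1)
            if lo <= n * tx_mhz <= hi]

# Example: the 3rd harmonic of a 50 MHz transmitter lands squarely in a
# receiver passband centered at 150 MHz with 200 kHz of bandwidth.
print(harmonics_in_passband(50.0, 150.0, 0.2))  # [(3, 150.0)]
```

This only predicts the harmonics of the tuned frequency; as the text notes, internally generated spurious emissions are far less predictable and may require measurement.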
Intermodulation
Intermodulation interference generally occurs in one of three ways. Once intermodulation has been
identified as the interference mechanism, identifying the appropriate corrective action depends on
where the intermodulation products are being generated. We can often determine the most likely
source of the intermodulation interference through a desktop study.
Receiver intermodulation
First, two or more transmitter signals can enter a receiver and combine in the input circuitry, thus
creating signals at other frequencies that we call intermodulation products. The intermodulation
products generated are the sum and difference of integer multiples of the component frequencies. The
new signals created, which fall on or near the tuned frequency of the affected receiver, are of primary
concern because they pass through the receiver with minimum attenuation and result in interference.
Installing a bandpass filter on the affected receiver or installing a notch (band stop) filter on the
receiver can eliminate intermodulation interference generated in a receiver. Tune the notch filter to
the frequency of one of the transmitters involved in creating the intermodulation products.
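The sum-and-difference mixing described above can be checked numerically in a desktop study (the frequencies below are illustrative assumptions). The third-order products 2f1 − f2 and 2f2 − f1 are usually the troublesome ones because they fall close to the two original signals:

```python
def third_order_products_mhz(f1_mhz, f2_mhz):
    """The two third-order intermodulation products of a pair of signals."""
    return (2 * f1_mhz - f2_mhz, 2 * f2_mhz - f1_mhz)

def products_in_passband(products, rx_center_mhz, rx_bw_mhz):
    """Filter products down to those inside rx_center +/- rx_bw/2."""
    lo = rx_center_mhz - rx_bw_mhz / 2
    hi = rx_center_mhz + rx_bw_mhz / 2
    return [p for p in products if lo <= p <= hi]

# Example: transmitters at 155.0 and 155.6 MHz create third-order
# products near 154.4 and 156.2 MHz; a receiver tuned to 154.4 MHz
# with a 25 kHz passband would see the lower product as interference.
products = third_order_products_mhz(155.0, 155.6)
print(products_in_passband(products, 154.4, 0.025))
```

This also shows why notching out just one of the contributing transmitters at the receiver input, as suggested above, removes the interference: each product depends on both component frequencies.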
Transmitter intermodulation
Second, one or more undesired external transmitter signals could enter another transmitter and mix
with the transmitter fundamental frequency. The signal mixing occurs in the final power amplifier of
what we often refer to as the victim transmitter. The intermodulation products generated are the sum
and difference of integer multiples of the component frequencies. This type of intermodulation is
sometimes called transmitter intermodulation, or back or reverse transmitter intermodulation. Filters
between the transmitter’s final output stage and the antenna can significantly reduce transmitter
intermodulation products: the external off-tune source signals are attenuated as they pass through the
filter into the transmitter, and the intermodulation products generated there must pass back through
the filter toward the antenna, where they are attenuated again.
“Rusty bolt” intermodulation
The third type of intermodulation interference we refer to as the “rusty bolt” phenomenon. Rusty bolt
interference is caused by the nonlinear hysteresis (change of magnetic property) effects in
ferromagnetic metals and by nonlinear impedance caused by oxidized metallic mating surfaces. The
non-linearity in the presence of high amplitude radiated signals generates intermodulation products.
The rusty bolt bandwidth can be very large and cause more severe interference problems than
intermodulation products created in either transmitters or receivers. The severity of this type of
interference can vary significantly with temperature and weather conditions. Unfortunately, we
cannot eliminate this problem with a filter. The most effective way to eliminate this type of interference
is to first locate the source, using test equipment, and then to eliminate the problem by removing the
corrosion and bonding together the metal surfaces causing the interference.
Brute force interference
This type of interference occurs when high level radiated signals penetrate and induce a radio
frequency voltage in the internal circuits of a system or equipment. The signal is then detected by
circuit components and passed through the system. This type of interference can occur in many types
of receiving systems as well as audio devices (public address, telephones, intercoms, etc.) and
computers. Brute force interference can be difficult to eliminate and often requires case shielding,
internal suppression devices or relocation of the affected systems to areas where the interference
signal strength is lower.
Man-made noise from non-intentionally radiating devices
Everyone has heard the hum of a ceiling fan or the snap, crackle and pop of a light bulb that is on the
verge of burning out. These sounds only occupy the audio frequency spectrum. Malfunctioning
motors, relays and even power lines can generate many unwanted radio frequencies. Here is some
information to help you isolate non-intentional EMI.
Power line noise
Power line interference can result from voltage surges, arcing, or corona discharges. The source of
power line interference can be difficult to determine because the power lines act as antennas that
radiate the interference. Typical sources of power line interference are insulators that are cracked or
covered with dust or other airborne material, loose tie wires or connectors, insulation breakdown in
transformers or capacitor banks, intermittent contact between any two metallic objects on a power
pole, poor ground connections and any sharp points on objects connected to the power lines. The
intensity of power line interference can vary significantly with changes in weather, such as
precipitation, wind, etc. High frequency and very high frequency amplitude modulation receivers are
particularly susceptible to power line noise. Power line noise and noise from electric arcs are
broadband in nature. Broadband noise sources release energy all across the spectrum. However, the
level of the energy varies with frequency and is generally stronger at lower frequencies. Eliminating
power line noise EMI requires that we identify the specific source and eliminate the arcing
component.
Switching equipment and relays
Switches and relays may cause interference inside receivers because broadband radiated noise occurs
during the opening and closing of the contacts. Motors and generators with commutators and slip
rings produce interference similar to switches. We can minimize the interference when we maintain
brushes and slip rings in good condition. Welders can cause EMI because their arcs are sources of
broadband noise energy. High frequency stabilized arc welders can cause severe interference because
they use high frequency energy to ionize an inert gas between the electrode and the work to start the
arc.
Lighting
Sodium and mercury vapor lights do not normally cause EMI problems. However, bursts of noise can
be radiated during a short interval after power is applied because of the electrical arc used to start the
bulbs. When the bulbs are no longer working, but power is still applied to the fixture, the fixture’s
starting mechanism repeatedly attempts to start the bulb, resulting in a burst of noise with each
attempt. Installing new bulbs normally eliminates the interference.

015. Spectrum interference reporting


EMI regularly hampers the command and control (C2) of military operators by degrading essential
systems that use the electromagnetic spectrum. EMI impedes operations and hinders mission
accomplishment. Effective EMI management plays a crucial role in ensuring critical information is
exchanged accurately and in a timely manner in times of war, during operations other than war and during
peacetime. Effective EMI management is also crucial to obtaining and maintaining information
superiority, an essential foundation of information operations. Timely and accurate identification,
verification, characterization, reporting, geolocation of the source, analysis and resolution of EMI
during military operations is essential to maintaining C2 of US forces and responding to adversary
electronic warfare (EW) actions.
The formats for reporting this information are the joint spectrum interference report (JSIR) and the
Air Force spectrum interference report (AFSIR). The information provided to higher headquarters in
the JSIR or AFSIR report can be used to counter enemy jamming efforts or to take other action to
our benefit. Since there are exceptions to every rule, make sure you follow the appropriate guidance
outlined in CJCSM 3320.02, AFI 10–707, Spectrum Interference Resolution Program, and local
operating procedures. This lesson provides guidance that operations and maintenance personnel can
follow in an attempt to identify the source of EMI.
Air Force spectrum interference resolution
The Air Force spectrum interference resolution (AFSIR) program is the official Air Force program
for documenting and resolving spectrum interference problems. Because the focus of the Air Force
spectrum interference resolution program is to resolve EMI at the lowest level, a unit affected by an
EMI incident must begin an investigation to identify the source.
Test your equipment
Again, confirm that the problem is electromagnetic interference and not a malfunction of the affected
system. Remove the receiver’s antenna; if the problem continues, an equipment malfunction may be
indicated. Remove telephone and audio lines, if any are installed. If the problem ceases, you need to
contact telephone maintenance (check for cross talk and improper wiring). If the interference
continues when the antenna and audio lines are removed and internal equipment problems have been
eliminated, check for nearby high powered transmitters which may cause brute force interference due
to case penetration. Check to make sure all equipment covers and shields are properly installed and
that a proper ground has been installed and connected. Check out the affected receiver to make sure it
meets all technical order specifications: proper tuned frequency, sensitivity, selectivity, image
rejection and so forth.
Compare notes
Check with other users of the same frequency and determine whether they receive the
electromagnetic interference (also determine whether they have caused the EMI). For example, if the
air traffic control tower is experiencing EMI then determine if it is also being received at the radar
approach control and ground-to-air transmit and receive site.
Gather data about electromagnetic interference
If the problem still appears to be EMI, the next step is to gather data systematically about the
interference for analysis.
1. Maintain a log of the EMI. Record the start and stop times, severity, weather conditions, any
conversations, music, call signs, code names or modulation characteristics.
2. Listen to the interference on the affected receiver and note its characteristics. Is it possible to
categorize the EMI as random noise, buzzing sound, tones, music, clear, broken or distorted
voice, or repetitious noise? If possible, use a tape recorder to record the EMI modulation.
When the EMI is not occurring, open the receiver’s squelch to determine if it really is still
present but at a lower level. When the EMI is present, use a tunable receiver to tune above
and below the affected frequency until the EMI is no longer received and then annotate the
frequency limits. If the affected receiver is not tunable, contact other users with nearby
frequency assignments and determine whether they also receive the EMI.
3. Note the intensity of the EMI in reference to desired signals. Does the EMI override most
desired transmissions or does it break the affected receiver’s squelch only when desired
transmissions are not present?
4. If there are any identifying call signs, report them to the Base Spectrum Manager or Area
Frequency Coordinator.
5. If a directional antenna is available, connect it to the affected receiver (or a portable receiver)
and try to determine the azimuth of highest intensity.
6. If the electromagnetic interference sounds like noise, the source may be power lines, welding
equipment, and so forth. Careful attention to the frequency of occurrence and to the operation
of other nearby equipment will help identify the source. If the noise occurs when heater,
furnaces or air conditioners cycle, the problem may be burned or faulty switch or relay
contacts.
7. If the electromagnetic interference appears to be coming from a source inside a building,
locating the source can be severely hampered by the building’s electrical wiring and metallic
structural members. Walking through the building with a portable receiver may isolate the
area where the EMI is the most severe. However, too often, the EMI appears to be severe
near all the building’s ground cables and other metal components. Thermostats, computers,
local area networks and furnace ignition systems are a few of the devices found in buildings
that can cause interference to receivers. An effective first step to isolating noise-like
electromagnetic interference originating from within a building is to monitor the EMI at the
affected receiver, or with a portable receiver tuned to the affected frequency, while
sequentially turning the building’s circuit breakers on and off. Before attempting this,
coordinate with the facility or building manager and base civil engineer. When you locate the
circuit that appears to be the source of the EMI, work with base civil engineer to isolate and
disconnect the individual loads connected to the circuit until the EMI source is identified.
8. Cable television systems use frequencies that are also used by radio services. If the cable
system’s cable shields, connectors and junction boxes are in good condition the cable system
signals will not radiate to the point where they result in interference to the radio services.
However, where cable shields become broken, interference can occur. Most cable systems
use a specific frequency in addition to the television channels that can be detected by portable
radio receivers. If you suspect that the source of interference is the cable distribution system,
contact the local cable company. The cable system engineer can confirm whether the system is
distributing signals on the affected frequency. We can sometimes detect leakage from a cable
distribution system by using a portable battery powered television receiver tuned to a channel
that is not broadcast in the local area. A directional antenna is desirable to isolate the point(s)
of leakage.
9. Computers and other digital devices radiate broadband noise. The Federal Communications
Commission (FCC) specifies the maximum allowable level of these emissions. The FCC
radiated emissions criteria are intended to keep electronic devices from causing widespread
interference to radio receivers. Unfortunately, devices that meet the FCC levels can still cause
interference to nearby receivers. If the EMI is radiating from the device’s power cord, an
inexpensive power line filter or ferrite choke can provide an effective solution. Unless you
suspect that a device is radiating excessive emission levels, interference to radio receivers
from digital devices is best eliminated or reduced by increasing the separation distance.
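The circuit-breaker walk-through in step 7 is essentially a search over the building's circuits. The
bookkeeping can be sketched as follows; the circuit names and the EMI check are hypothetical
stand-ins for the real listening test performed at the monitoring receiver, coordinated with the
facility manager and base civil engineer.

```python
# Hypothetical sketch of the step-7 breaker-isolation bookkeeping.
# In practice, the "listening test" is your observation at the affected or
# portable receiver after each breaker change.

def isolate_circuits(circuits, emi_present_with_only):
    """Return the circuits whose connected loads appear to produce the EMI.

    `emi_present_with_only(circuit)` answers: with every breaker off except
    this one, is the EMI still heard? (stand-in for the real listening test)
    """
    return [c for c in circuits if emi_present_with_only(c)]

# Example: pretend the EMI rides on the 'furnace' circuit.
circuits = ["lighting", "furnace", "receptacles-east", "receptacles-west"]
suspects = isolate_circuits(circuits, lambda c: c == "furnace")
print(suspects)  # the short list to pursue with base civil engineer
```

Once a circuit is flagged, the same idea applies one level down: disconnect the individual loads
on that circuit one at a time until the EMI source is identified.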
If your examination indicates that the EMI results from external sources, take proper
electromagnetic protection (EP) steps and submit an EMI report as directed by AFI 10–707.
Specific reporting information
You, as the operator, must record and report specific information concerning the EMI when it first
occurs and as it continues. Submit an EMI report by electronic message with the following
information:
1. The frequencies of the system experiencing the EMI.
2. The location of the system.
3. The system function, name, nomenclature and manufacturer with model number or other
system description.
4. The operating mode of the system, if applicable (frequency agile, pulse Doppler, search, etc.).
5. The description of the interference (noise, pulsed, continuous, intermittent and so forth).
6. The effect of interference on performance (reduced range, false targets, reduced
intelligibility, data errors, etc.).
7. The dates and times of interference.
8. The location of the interference source (coordinates or line of bearing, if known, otherwise
state as unknown.)
9. A list of other units also receiving the interference (if known) and their location or distance
and bearing from your location. A clear, concise narrative summary on what you know about
the EMI and what local actions you have taken to resolve the problem.
10. Whether you want or expect technical assistance. If you need technical assistance, say what
the security clearance requirements for the team are and what the impact is on the mission.
11. A point of contact, giving name, Defense Switched Network (DSN) and commercial
telephone numbers and duty hours.
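The eleven required items above can be organized as a simple checklist record before drafting the
message. The sketch below is only an illustration; the field names and sample values are
hypothetical, and AFI 10–707 and local procedures govern the actual report format.

```python
# Minimal sketch of an EMI report checklist. Field names and sample values
# are hypothetical; they mirror the eleven required items, not an official
# message format.
REQUIRED_FIELDS = [
    "frequencies", "location", "system_description", "operating_mode",
    "interference_description", "performance_effect", "dates_times",
    "source_location", "other_affected_units", "assistance_needed",
    "point_of_contact",
]

def missing_fields(report: dict) -> list:
    """Return the required items not yet filled in (absent or empty)."""
    return [f for f in REQUIRED_FIELDS if not report.get(f)]

draft = {
    "frequencies": "225.500 MHz",                         # hypothetical
    "location": "example site",                           # hypothetical
    "interference_description": "intermittent broadband noise",
    "source_location": "unknown",
}
print(missing_fields(draft))  # items still to be gathered before submitting
```

A quick completeness check like this helps ensure the initial report, due as soon as possible after
the incident, goes out with all available data.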
Air Force spectrum interference report exempt reporting
The AFSIR program pertains to USAF units experiencing interference at any time except when the
unit is under the operational control of a combatant command (COCOM). Report any USAF
interference experienced while supporting joint operations according to joint task force (JTF) or
COCOM procedures. Do not report an incident when any of the following occurs:
1. The interference is transient EMI from natural sources such as rain or lightning.
2. If space weather (e.g., HF fades, Polar Cap Absorption (PCA) events, geomagnetic storming,
etc.) is the suspected or confirmed cause of the interference, do not report it in an AFSIR. For
assistance, notify the local weather unit of the EMI to include date, time, location, system and
frequency impacts. The local weather unit can contact the Air Force Weather Agency, as
appropriate.
3. Do not submit an AFSIR if interference is only experienced aboard intelligence, surveillance
and reconnaissance (ISR) aircraft aloft and is localized when mission equipment or payload
equipment is turned on/off or used. In these cases, report the interference to the flying unit’s
director of maintenance/maintenance officer and director of operations. The unit’s director of
maintenance/maintenance officer may report the EMI to AFMC; the flying unit’s director of
operations must characterize the operational impact. The unit’s director of maintenance or
maintenance officer must determine if unit maintenance or calibration will resolve the EMI
before notifying AFMC. If notified, AFMC must determine if any field or depot upgrades
meant to resolve the EMI are already scheduled. AFMC is responsible for reporting the EMI
to Air Force Spectrum Management Office (AFSMO) if the EMI can be detected at a distance
of more than 50 meters from the aircraft.
4. If EMI is only affecting frequencies assigned on a noninterference basis for training purposes,
do not submit an AFSIR. Do, however, notify the MAJCOM frequency manager, as that
individual may be able to resolve the problem internally.
Types of reports
There are three types of EMI reports: initial, supplemental or follow-on, and closing. Units submit
all reports through their chain of command, then to their MAJCOM and, finally, to the AFSMO.
Initial report
File an EMI report as soon as possible after an EMI incident occurs, preferably within 24 hours of the
occurrence. Include all available data, and send it to the appropriate agencies. You may ask for
frequency management or technical assistance in the initial report.
Supplemental or follow-on reports
Submit supplemental reports when you need to add to or modify information you previously
submitted. Include the date time group of the initial report and any previous supplemental reports and
send them to the same addressees to which you sent the initial report. You may use a supplemental
report to request 85 Engineering and Installation Squadron (EIS) on-site technical assistance.
Closing reports
Issue a closing report when the EMI incident is resolved or requires no further action. The
engineering service agency may complete the closing report when it performs a requested on-site
investigation, including the preliminary results of the investigation.
Security classification of electromagnetic interference reports
Units must evaluate the security sensitivity of the EMI on the affected system and classify the report
accordingly. Security classification of interference incidents and reports is determined principally by
nationality and location of the implied or stated source of the interference and the security sensitivity
of the affected military system. Stations located in combat areas or having a sensitive military mission
generally must classify all interference reports. Guidelines for classifying interference incidents are
contained in CJCSI 3320.02B, Joint Spectrum Interference Resolution (JSIR).
Request assistance
Your actions can be effective in identifying and resolving EMI problems. However, operations and
maintenance personnel often do not have the training or test equipment needed to resolve EMI
problems. When local investigation efforts are not effective in identifying the EMI source, assistance
from highly trained personnel with specialized test equipment is available and should be requested
when filing your report.
The quick fix interference reduction capability (QFIRC) is a fast action service to reduce or eliminate
unintentional (“nonhostile”) EMI associated with Air Force operational equipment. The 85 EIS,
located at Keesler AFB, Mississippi, provides QFIRC to all Air Force units. This unit can provide
technicians and equipment to perform on-site direction finding and interference problem analysis. The
85 EIS documents corrective actions and gives recommendations for solving EMI problems in a
formal report that is forwarded to the requesting unit and to its parent MAJCOM and AFSMO.
Joint spectrum interference resolution program
The joint spectrum interference resolution (JSIR) program serves the same purpose as the AFSIR
program. The JSIR program was established to address persistent and recurring interference problems
in joint operations, those between civil and DOD systems and those involving space systems. The
program is coordinated and managed for the Joint Staff Command, Control, Communications and
Computer (C4) Systems Directorate (J–6) by the JSC, Annapolis, Maryland. The program itself is
centrally managed; however, the execution process is highly decentralized. Each of the DOD
components shares responsibility for successful execution of the JSIR program.
The objective of the JSIR program is to report and assist with the resolution of EW and
recurring EMI. The three-step resolution process for EMI events includes:
1. Identification, verification, characterization and reporting.
2. Geolocation, analysis, developing courses of action and recommendations.
3. Implementation and notification to user(s) and final closure reporting. Resolution includes
but is not limited to implementation of EMI corrective actions needed to regain use of the
affected spectrum. However, some EMI events cease before corrective action is taken and, in
other cases, the EMI corrections may not be feasible or affordable, or may not result in regaining the
use of the spectrum.
The JSIR program resolves EMI at the lowest possible level using organic and/or other assets
available to the command. If an EMI event cannot be resolved locally, it must be elevated up the
chain of command with each higher level attempting resolution. If the event cannot be resolved at the
combatant command, JTF, service, defense agency HQ or joint staff level, then each may request
joint spectrum center (JSC) JSIR support.
The JSC serves as the center for EMI reporting and resolution and is tasked to provide interference
resolution support to the unified and JTF commanders, as well as to DOD components. The JSC
maintains a central database of EMI cases, resolutions and lessons-learned for all DOD EMI and
provides analytical and on-site assistance in resolving EMI problems. JSIR assistance will be
requested by AFSMO after all Air Force resources are exhausted.

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.

014. Identifying electromagnetic interference


1. What is electromagnetic interference?

2. How do you differentiate unintentional interference from a jamming signal?

3. What type of EMI occurs when an unwanted carrier frequency follows the normal signal path
through the affected receiver?

4. How can you eliminate broadband transmitter noise?


5. When do spurious responses occur?

6. What type of equipment often generates many spurious emissions?

7. What intermodulation frequency components can cause EMI in the receiver/transmitter?

8. What type of intermodulation can have a very large bandwidth and cause more severe
interference problems than intermodulation products created in either transmitters or receivers?

9. What type of EMI occurs when high level radiated signals penetrate and induce a radio frequency
voltage in the internal circuits of a system or equipment?

10. What amplitude modulation (AM) receivers are particularly susceptible to power line noise?

11. Relays may cause interference inside receivers because of broadband radiated noise originating
from what?

12. How would you normally eliminate electromagnetic interference caused by sodium and mercury
vapor lights?

015. Spectrum interference reporting


1. What is the purpose of the AFSIR program?

2. Under the AFSIR program, what organizational level must begin an investigation to identify the
source of EMI?

3. Why must you ensure maintenance personnel responsible for the affected equipment check it
before submitting an EMI report?

4. How do you confirm an EMI problem?

5. If your EMI sounds like noise, what are the possible sources of these interferences?
6. Who is responsible for recording specific information concerning EMI?

7. What types of EMIs are exempt from AFSIR reporting?

8. How should an AFSIR report be submitted?

9. What document contains guidelines for classifying EMI reports?

10. What service provided by the 85 EIS analyzes and recommends corrective actions for reported
EMI problems?

11. What is the objective of the JSIR program?

12. Who maintains a central database of EMI cases, resolutions and lessons-learned for all DOD
EMI?

Answers to Self-Test Questions


012
1. Nuclear explosion.
2. Gamma rays and secondary neutron radiation.
3. HEMP, SGEMP, SREMP.
4. It can affect many systems simultaneously.
5. They are directly exposed to the nuclear radiation from a high-altitude burst.
6. Source and radiated.
7. Scintillation and blackout.
8. Crosslink.
013
1. Long-line effects are the currents and voltages induced on long power lines, communication cable links, or
even other conductors, such as pipelines. Some of these EMP effects may be induced far away and guided
to the facility along the conductor. Local effects are the currents and voltages induced directly on the
facility shield, building structure, wiring, equipment cabinets, and so forth.
2. To create an EMP survivability facility, it should be shielded as much as possible. Furthermore, all
openings to the facility need to be filtered and protected. The facility also needs to be isolated from any
external electric EM propagation in the earth.
3. An EMP-induced event can cause a magnetic or electric field to be induced in an electrical circuit that can
cause an error in the response of the circuit.
4. Bonding requirements for EMP protection are necessary to ensure a mechanically strong, low impedance
interconnection between metal objects.
5. At a common location.
6. Filters.
014
1. It is any electromagnetic disturbance that interrupts, obstructs, or otherwise degrades or limits the effective
performance of electronics/electrical equipment.
2. Unintentional interference normally travels only a short distance; a search of the immediate area may reveal
the source of this type of interference. Moving the receiving antenna for short distances may cause
noticeable variations in the strength of the interfering signal. These variations normally indicate
unintentional interference. Conversely, little or no variation normally indicates enemy jamming.
3. Co-channel.
4. By using a band pass filter.
5. Receiver responses to off frequency signals.
6. Rapid tuning transmitter.
7. Sum and difference.
8. Rusty bolt.
9. Brute force.
10. High frequency and very-high frequency.
11. The opening and closing of the contacts.
12. Installing a new bulb.
015
1. Document and resolve spectrum interference problems.
2. Lowest unit level.
3. To ensure the equipment is not causing the problem.
4. Check with other users of the same frequency and determine whether they receive the electromagnetic
interference.
5. If the electromagnetic interference sounds like noise, the source may be power lines, welding equipment,
etc.
6. The operator.
7. Interference experienced while the unit is under the operational control of a combatant command;
transient EMI from natural sources; interference whose suspected or confirmed cause is space weather;
interference experienced only aboard intelligence, surveillance and reconnaissance (ISR) aircraft aloft
that is localized to mission equipment or payload equipment being turned on/off or used; and EMI
affecting only frequencies assigned on a noninterference basis for training purposes.
8. All reports are submitted through their chain of command, then to their MAJCOM, and finally, to the
AFSMO.
9. CJCSI 3320.02B, Joint Spectrum Interference Resolution (JSIR).
10. Quick fix interference reduction capability (QFIRC).
11. Report and assist with the resolution of EW and recurring EMI.
12. Joint spectrum center.

Do the unit review exercises before going to the next unit.


Unit Review Exercises


Note to Student: Consider all choices carefully, select the best answer to each question, and circle
the corresponding letter. When you have completed all unit review exercises, transfer your answers to
the Field-Scoring Answer Sheet.
30. (012) High-altitude electromagnetic pulse is a wideband phenomenon that involves which
frequency range?
a. 1 hertz to 1 kilohertz.
b. 1 hertz to 1 gigahertz.
c. 10 hertz to 10 kilohertz.
d. 10 hertz to 10 gigahertz.
31. (012) High-altitude electromagnetic pulse environment protection is important for critical, time
urgent mission systems because it
a. is a problem for electronic components in satellites and reentry vehicles.
b. has a vertical amplitude that decreases inversely with distance, but remains significant for
many kilometers.
c. has very intense radial electrical fields, azimuthal magnetic fields, and time-varying air
conductivity.
d. is the only nuclear environment that may be imposed on many systems simultaneously with the
expenditure of very few weapons.
32. (012) A system-generated electromagnetic pulse is a problem for satellites and reentry vehicles
that are
a. susceptible to the wideband frequency phenomenon.
b. directly exposed to nuclear radiations from a high-altitude burst.
c. affected by the azimuthal magnetic fields and time-varying air conductivity.
d. influenced by the signal fading or waveform distortion caused by the structured plasma field.
33. (012) Which distinct region of the source region electromagnetic pulse is produced by the
interactions between the weapon products and the earth’s atmosphere?
a. Secondary.
b. Radiated.
c. Plasma.
d. Source.
34. (012) Atmospheric disturbances from initial nuclear radiation on higher frequencies affect
communications by
a. blackout effects.
b. absorption effects.
c. scintillation effects.
d. attenuation effects.
35. (013) Currents and voltages induced on power lines, cable links, and pipelines by electromagnetic
pulse are called
a. scintillation.
b. local effects.
c. source region.
d. long-line effects.
36. (013) Which type of filters are always used in combination with surge arresters?
a. Linear.
b. AC line.
c. DC line.
d. Power line.
37. (014) If the interference can be eliminated by disconnecting the receiving antenna, the source of
the disturbance is most likely
a. line noise.
b. internal to the radio.
c. external to the radio.
d. radio receiver trouble.
38. (014) Little or no variations in the strength of the interfering signal when moving the receiving
antenna for short distance normally indicates
a. enemy jamming.
b. internal interference.
c. radio receiver trouble.
d. unintentional interference.
39. (014) What type of electromagnetic interference occurs when a receiver responds to off-frequency
signals?
a. Rusty bolt.
b. Co-channel.
c. Brute force.
d. Spurious responses.
40. (014) Which type of intermodulation interference varies significantly with temperature and
weather conditions?
a. Transmitter.
b. Rusty-bolt.
c. Spurious.
d. Receiver.
41. (014) What man-made noise results from voltage surges, arcing, or corona discharges?
a. Relays.
b. Lighting.
c. Power line.
d. Switching equipment.
42. (014) How can you normally eliminate electromagnetic interference noise bursts caused by
sodium and mercury vapor lights that have stopped working while the power is still applied?
a. Change the bulb.
b. Ground out the light fixture.
c. Install a noise reject filter on the light.
d. Install a noise reject filter on the receiver.
43. (015) What Air Force program is used to resolve an electromagnetic interference incident?
a. Mishap prevention.
b. Frequency management.
c. Electrostatic discharge control.
d. Spectrum interference resolution.
44. (015) At what level does the Air Force want electromagnetic interference problems resolved?
a. Affected unit.
b. Affected major command.
c. Electromagnetic environmental effects office.
d. Spectrum interference resolution office.
45. (015) What do you do when you are reasonably sure electromagnetic interference exists?
a. Request engineering assistance.
b. Contact the base frequency manager.
c. Systematically gather data for analysis.
d. Contact the spectrum interference resolution office.
46. (015) Who is initially responsible for reporting an electromagnetic interference?
a. The operator.
b. Major command.
c. The frequency manager.
d. Federal Communications Commission.
47. (015) Who is tasked to provide interference resolution support to the unified commanders?
a. The operator.
b. Joint spectrum center.
c. Installation spectrum manager.
d. Federal Communications Commission.

Please read the unit menu for unit 3 and continue 


Unit 3. General Purpose Instruments
3–1. Multimeter ................................................................................................................................ 3–1 
016. Characteristics of the Fluke 8025A Multimeter .................................................................................. 3–1 
017. Using the Fluke 8025A Multimeter ................................................................................................... 3–3 
3–2. Oscilloscopes ............................................................................................................................. 3–6 
018. Oscilloscope basics ............................................................................................................................ 3–6 
019. Basic oscilloscope probe considerations.......................................................................................... 3–11 
020. Characteristics of a digital oscilloscope........................................................................................... 3–13 
3–3. Bit Error Rate Test Set (BERTS).......................................................................................... 3–21 
021. Purpose and major sections of the bit error rate test set................................................................... 3–21 
022. Error distribution and bit error rate testing ...................................................................................... 3–25 
3–4. Earth Ground Tester .............................................................................................................. 3–31 
023. Why do we test ground? .................................................................................................................. 3–31 
024. Basics of earth ground testing.......................................................................................................... 3–32 
3–5. Protocol analyzer .................................................................................................................... 3–34 
025. Operational characteristics of a protocol analyzer ........................................................................... 3–34 

THROUGHOUT your career as a radio frequency (RF) transmission systems technician you
will find yourself becoming very familiar with certain pieces of test equipment. Although most
of these instruments are used for specific tests and measurements, you will find similar pieces
of test equipment in nearly every workcenter to which you are assigned. The sooner you learn how to
use them, the sooner you can start thinking of yourself as a C-E technician. Let’s begin with the
multimeter.

3–1. Multimeter
A multimeter is an electronic measuring instrument that you will likely use throughout your Air Force
career. The multimeter combines several measurement functions that include the ability to measure
voltage, current, and resistance. As a technician, you will use the multimeter to troubleshoot
communication systems, isolate cable faults and maintain radios. Though analog multimeters are still
preferred for some applications, digital instruments are now used for most purposes. This section
relates functions and the operation of the digital multimeter through the operating characteristics of
the 8025A. Keep in mind that methods for measuring voltage, current, and resistance are the same for
digital multimeters as for analog multimeters.

016. Characteristics of the Fluke 8025A Multimeter


The digital multimeter has many more capabilities and features than the analog meter. One example
of a digital multimeter in use today is the Fluke 8025A. In this lesson we will cover characteristics of
the Fluke 8025A multimeter and you will be expected to know basic facts and terms about this
subject.
The Fluke 8025A is a rugged, water- and chemical-resistant multimeter that is suitable for use in
harsh industrial and military environments. Its capabilities include continuity, resistance, current,
voltage, diode, and temperature checks. It combines the performance of a digital meter and higher
input impedance with the speed and dynamic measurement capability of an analog meter.
Physical Features
The Fluke 8025A digital multimeter’s controls and jacks are located on the front panel beneath the
display, as shown in figure 3–1. Use the rotary switch to select various measurement functions. Use
the RANGE and HOLD push buttons (located above the rotary switch) to select the manual range
mode and the touch-hold mode. The jacks located below the rotary switch provide input connections
for the various types of measurements.
Display
The display section presents four types of meter information.
1. Digital Display—This display shows input data with automatic decimal point positioning.
2. Visual Annunciators—This portion of the display shows which of the meter's
functions are in use, as well as the condition of the battery. Examples include
resistance, touch-hold mode, manual range activation, and low-battery indicators.
3. Analog Bar Graph—The analog bar graph is located just below the digital display. It shows a
31-segment bar graph, which indicates the absolute value of the input.
4. Range Indicator—The range indicators are located between the analog bar graph and the
digital display. The number displayed indicates the range in use for each of the decimal point
positions. No decimal point is displayed in the 1,000V or 3,200 μA range.
Range push button
The range button, which is located above the rotary switch, lets you select the measurement range
manually. Push it once to enter the manual range mode, press again to increment range, and hold it
depressed for 2 seconds to return to the auto range. In auto range, the meter automatically selects the
correct range for the reading taken.
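The auto range behavior just described can be sketched in a few lines of Python. The range values below are illustrative placeholders, not the 8025A's published range steps; the logic simply picks the lowest full-scale range that covers the reading.

```python
# Sketch of an auto-ranging decision: choose the lowest full-scale
# range that can display the reading. Range values are illustrative
# placeholders, not the 8025A's actual range steps.
DC_VOLT_RANGES = [0.32, 3.2, 32.0, 320.0, 1000.0]  # full-scale volts

def auto_range(reading_volts, ranges=DC_VOLT_RANGES):
    """Return the smallest full-scale range that covers the reading."""
    magnitude = abs(reading_volts)
    for full_scale in ranges:
        if magnitude <= full_scale:
            return full_scale
    raise ValueError("reading exceeds the highest range (overload)")

print(auto_range(1.5))   # a 1.5 V reading lands on the 3.2 V range
print(auto_range(48.0))  # a 48 V reading lands on the 320 V range
```

Pressing the RANGE button, by contrast, fixes one of these ranges manually instead of letting the meter choose.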
Touch-hold push button
The touch-hold mode lets you watch the probes and the circuit during critical measurements. The
touch-hold mode locks the measurement into the display for viewing and automatically updates the
display when you take a new measurement. Press this button momentarily to select the touch-hold
mode, press it again to manually update the indication and hold it depressed for 2 seconds to exit the
touch-hold mode.
Input jacks
The four input jacks (amperes, milliamps/microamps, volts/ohms/diode, common) below the rotary
switch are lead connections for the various types of measurements.
Audible indicator
The audible indicator is a beeper that can produce beeps, clicks, or continuous tones for audible
indication when you operate a push button in the diode-test mode and when a new reading is
displayed in the touch-hold mode.
Power-up self-test
When you move the function selector rotary switch to any position from the OFF position, the 8025A
performs a power-up self-test. All liquid crystal display segments are switched on during the test
(about 1 second) before the unit commences normal operation. In addition to the power-up self-test, a
battery test cycle is performed at power-up and each time the operator selects a different function
with the rotary switch. If battery voltage is low, a low-battery symbol (annunciator) appears on the
display.
Figure 3–1. Fluke multimeter.

Reproduced with permission by the Fluke Corporation.

017. Using the Fluke 8025A Multimeter


In this lesson we will cover how to use the Fluke 8025A multimeter and you will be expected to
know basic facts and terms about this subject.
As you can see, multimeters are very useful and reasonably simple to operate. In most cases you set
the function, connect the meter leads to the unit under test and look at the digital display for the
indication.
Direct current voltage measurements
Two ranges are available for measuring DC voltages: volts DC and millivolts DC.
Volts direct current


You can measure input voltages between −1,000 and +1,000V DC by using the DC volts function.
Millivolts direct current
In the millivolts DC function, you can measure voltages between −320 and +320 mV DC.
Direct current measurements
There are two selections for current measurements: milliamps/amps DC and microamps DC.
Milliamps/amps direct current
There are two different input jacks we use in conjunction with the milliamps/amps position of the
function selector rotary switch. The input jack in use determines whether the 8025A is measuring
milliamps or amps. If the milliamp jack is in use with the function switch in the milliamp/amp
position, measurements between −320 and +320 mA are possible. If the ampere jack is in use,
intermittent display values up to 20A are possible. The meter's input circuitry limits the maximum
continuous input to 10 amps.
Microamps direct current
Turning the function selector rotary switch to the microamp position selects the microamps DC
function. You then can measure input current between −3,200 and +3,200μA by using the mA/μA
input jack.
Measuring alternating current voltage
You can measure two ranges of AC voltages with the 8025A: volts and millivolts.
Volts alternating current
With the function selector in this position, take measurements between 0 and 1,000V AC. The red
probe must be in the volts/ohm/diode input jack. The minus sign is disabled in this mode and the
decimal point/range indicators in the display indicate the range in use.
Millivolts alternating current
Only one range is available with the 8025A in this function: 0 to 320.0 millivolts. The minus sign is
disabled in this function also.
Measuring alternating current
Just as when you measure DC current, there are two functions for measuring AC current:
milliamps/amps AC and microamps AC.
Milliamps/amps alternating current
You can use two different input jacks in conjunction with this position of the function selector rotary
switch. The input jack in use determines whether the 8025A is measuring milliamps or amps. With
the milliamp jack in use, measurements between 0 and 320.0 mA are possible. If you use the ampere
jack, intermittent values up to 20A are possible. As with the DC current measurements, the meter's
input circuitry limits the maximum continuous input to 10 amps.
Microamps alternating current
You can make measurements up to 3,200 μA in this function by using the mA/μA input jack.
Measuring resistance
Turning the function selector rotary switch to the Ω position selects the resistance function. The Ω
annunciator is displayed when the 8025A enters the resistance function and either the k or M
annunciator is displayed when appropriate for the range in use.
Testing diodes
Turn the function selector rotary switch to the diode test position to measure diodes or continuity.
When testing a diode, placing the leads across a “good” diode produces an “OL” (open) on the
display in one direction while reversing the leads produces a continuous audible tone.

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.

016. Characteristics of the Fluke 8025A Multimeter


1. Which control do you use to select the meter’s different measurement functions?

2. List the four types of information presented in the multimeter’s display section.

3. Which information in the display section shows the absolute value of the input?

4. Describe the function of the range push button.

5. What is the purpose of the touch-hold push button?

6. List the multimeter’s four input jacks.

7. When does the 8025A perform a power-up self test?

017. Using the Fluke 8025A Multimeter


1. What two ranges of DC voltages can you measure using the 8025A multimeter?

2. What is the maximum amount of continuous AC current that you can measure with the
multimeter?

3. When using the Fluke 8025A, how do you know you’re in the resistance measurement function?

4. How do you determine if you’re testing a “good” diode with the 8025A?
3–2. Oscilloscopes
Figure 3–2 is a representative oscilloscope. The oscilloscope is one of the most versatile
troubleshooting instruments you have on the bench. You can use the oscilloscope to measure voltage
levels (from DC to microwave), phase differences, signal presence (or absence), logic highs and lows,
frequency response, distortion, and complex waveform analysis, to name a few. To help you measure,
most manufacturers try to group similar controls together and separate different groups by color or
lines on the front panel, as illustrated in figure 3–3, view A.

018. Oscilloscope basics


In this lesson we will cover oscilloscope basics and you will be expected to know basic facts and
terms about this subject.
The oscilloscope presents a voltage (amplitude) versus (vs) time display of the waveform on a
cathode ray tube. It shows you an actual graphing of the signal’s amplitude as it relates to time
(amplitude vs time). With this display you can tell its waveshape, its frequency, and its amplitude and
whether the signal is relatively pure. Inside the cathode ray tube, an electron beam draws the
waveform on a phosphor-coated screen. For instance, a 1 kHz signal's cycle takes 1 ms to alternate
from the starting reference, swing to its maximum positive value, swing to its maximum negative
value and then return to its starting reference value. On the screen of the oscilloscope, you’d see the
trace start at some reference value, go up as it moves to the right until it reaches its maximum positive
value, swing down through the reference value to its maximum negative value and finally return to
the reference value. This screen can represent three types of information.
1. Voltage information on vertical, or Y, axis (up-down).
2. Time information on horizontal, or X, axis (left-right).
3. Depth information on the Z axis represented by variations in intensity (in-out).
All oscilloscopes have the X and Y axes; the Z axis is sometimes available. This unit won’t get into Z
axis as deep as it does the X and Y axes. All oscilloscopes have controls to adjust the voltage, time
and intensity information in order to present a meaningful picture of the signal on the cathode ray
tube. Figure 3–4 shows a block diagram of the basic circuits that these controls operate.
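The earlier 1 kHz example follows from simple arithmetic: the time one cycle occupies on the screen is the reciprocal of the signal's frequency. A minimal Python sketch:

```python
# One cycle's duration (its period) is the reciprocal of the frequency,
# so a 1 kHz signal traces one complete cycle in 1 ms.
def period_seconds(freq_hz):
    return 1.0 / freq_hz

print(period_seconds(1_000))  # 0.001 s, i.e., 1 ms per cycle at 1 kHz
```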

Figure 3–2. Oscilloscope.


Figure 3–3. Oscilloscope panels.


Figure 3–4. Oscilloscope block diagram.

The vertical (amplitude) axis


As shown in figure 3–4, the input signal to be measured (Y axis, vertical input) is connected to the
vertical input amplifier. The vertical amplifier either attenuates or amplifies the signal for convenient
viewing. The vertical input amplifier and attenuator also produce a “trigger” that’s applied to the
horizontal circuits. This trigger may or may not be used, depending on the technician’s switch
settings.
The next block the incoming signal encounters is the delay line. The delay line allows the sweep
generator circuitry time to start a sweep before the signal reaches the cathode ray tube vertical
deflection plates. This coordination of vertical and horizontal timing by the delay line enables
viewing the leading edge of the signal.
The vertical output amplifier provides the additional amplification that is required by the cathode ray
tube vertical deflection plates. If the input signal is positive or positive-going, the electron beam is
deflected up; if the input signal is negative or negative-going, the electron beam is deflected down. If
the input signal is an AC signal, the electron beam is deflected up and down alternately.
The horizontal (time) axis
The horizontal circuits cause the electron beam to be drawn across the screen from left to right (the
trace is blanked on the return, right to left). This produces a trace on the screen. The horizontal
circuits determine when and how fast the trace is drawn. The when is determined by the triggering
circuits.
Triggering
Triggering determines whether there is a display or not. If the horizontal circuits are triggered, they
produce a sweep voltage that causes the electron beam to be drawn from left to right on the screen.
The trigger controls determine which mode of triggering is used: auto or norm.
If norm is selected, the trace is blanked until a signal of sufficient strength is received. This is
confusing for the beginning technician because no display or trace is seen on the screen. Once
mastered, this can be the most versatile setting. In the auto setting, the trace is generated whether
there is a signal or not. The trigger controls also control the source of the triggering signal.
• Internal.
• External.
• Line.
In the internal mode, the circuits are triggered from an internally generated trigger that is developed
from the signal under test. In this position, the signal and the trace are synchronized. In the external
mode, the trace can be linked to some related or unrelated event. In the line mode, the sweep is
triggered by a sample of the line voltage. This is useful for troubleshooting problems related to the
power supply or power line “hum.”
Sweep speed
The time/division (sec/div.) control affects the sweep voltage’s waveshape. This controls how fast the
trace is drawn across the screen. If you adjust this control to a setting close to 1 sec/div, you can
actually see the dot moving across the screen. As you decrease the time per division to the
millisecond range, the dot moves so fast it appears as a continuous line.
Intensity
The intensity of the display is controlled by varying the bias of the cathode ray tube grid, which
controls the density of the electron beam. As the voltage applied to the grid goes less negative, more
electrons strike the screen, resulting in a brighter trace or display. As the voltage applied to the grid
goes more negative, fewer electrons strike the screen, resulting in a dimmer trace or display. If the
negative bias is large enough, the cathode ray tube is cut off, eliminating the trace. Intensity is
determined by the intensity control on the front panel.
Vertical controls
The vertical input controls generally consist of an input coupling switch, calibrated attenuator and
position control. A dual trace scope also has switches to select single channel, dual channel or various
combinations.
The input coupling switch
Input coupling switches usually have at least three positions.
1. AC.
2. GND.
3. DC.
The AC and DC positions have high impedance, typically 1 megohm shunted by about 20 picofarad
(pF). A standard 10 to 1 divider probe increases the input impedance to 10 megohms. Some
oscilloscopes include a 50 Ω input impedance input coupling option. The 50 Ω input coupling option
is ideal for monitoring pulse and signal generators or other low-impedance sources.
AC position
The AC position couples the input signal through a DC blocking capacitor, allowing only the AC
component to be viewed. AC coupling can be very useful when you want to measure a small AC
signal superimposed on a large DC voltage. For example, to measure the small AC ripple voltage
from a power supply, you select AC coupling. The blocking capacitor blocks the large DC component
and passes the AC ripple. Do not use the AC position to measure low-frequency digital-type signals.
The internal DC blocking capacitor distorts the waveform, as shown in figure 3–5.
Figure 3–5. Distorted waveform.

Ground (GND) position


The ground position is useful when you want to set a ground or zero-volts reference level on the
cathode ray tube screen without disturbing the input signal connection. The input signal is internally
disconnected and the vertical amplifier’s input is grounded. This means that you can leave the input
signal connected to your scope. You won’t short it out when you switch to the ground position.
Direct current position
The DC position allows you to view both AC and DC components of the input signal. For example, if
you have set the zero-volts reference level at the center of the screen (using the GND position) and
then switch to DC, the waveform appears, showing the AC component and the signal is offset either
up or down depending on whether the DC component is positive or negative. DC coupling is also
used when you are measuring digital-type signals or square waves.
VOLTS/DIV (VOLTS/CM)
Most modern scopes use a combination of variable attenuation and adjustable vertical amplifier gain
to control input signal levels. High-level signals require more attenuation and less gain so that the
trace is not deflected off the screen. Low-level signals need less attenuation and more gain so the
display is large enough to interpret. The setting of the VOLTS/DIV control indicates the volts that
each vertical division represents. For instance, suppose the display shows a signal that is 4 divisions tall. If the
VOLTS/DIV control was set at 5 mV per division, the measured signal would be 20 mV. If the
VOLTS/DIV control was set at 5 V per division, the measured signal would be 20 V. The vernier
portion of the input attenuator provides continuous sensitivity control between the calibrated volts per
division ranges. Whenever you move the vernier out of its detent position, the UNCAL light is on,
letting you know that the steps marked on the VOLTS/DIV dial are not calibrated.
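The graticule arithmetic described above is simple multiplication: divisions occupied times the VOLTS/DIV setting. A short Python check of the 4-division example from the text:

```python
# Amplitude read from the graticule: divisions occupied times the
# VOLTS/DIV setting, as in the 4-division example from the text.
def measured_voltage(divisions, volts_per_div):
    return divisions * volts_per_div

print(measured_voltage(4, 0.005))  # 0.02 V (20 mV) at 5 mV per division
print(measured_voltage(4, 5))      # 20 V at 5 V per division
```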
Horizontal controls
The sweep generator, sometimes called the time base generator, produces the sawtooth waveform
that controls the rate the beam is drawn horizontally across the face of the cathode ray tube. The
generator’s most important function is to ensure linear beam movement, meaning the beam moves at
the same rate from start to finish. Without this precise rate, accurate time measurements are not
possible. Another factor of accuracy depends on the delay time. Its function is to delay the vertical
input signal just enough so that the trace being displayed is the signal that started the sweep (fig. 3–6).
Figure 3–6. Sawtooth sweep.

Sweep speed control


The sweep generator’s sawtooth waveform is controlled by a front panel control called TIME/DIV
(SEC/DIV, TIME/CM, SEC/CM). This calibrated control lets the operator select many different
sweep speeds in order to view waveforms that vary from a few hertz up to the bandwidth limit of the
scope. The control is usually divided into steps covering the ranges of seconds, milliseconds,
microseconds, and nanoseconds. These ranges correlate to how fast the beam is drawn across the
cathode ray tube. The faster the beam is drawn across the cathode ray tube, the faster the time
reference is (i.e., the shorter the scale). For example, if the TIME/DIV control is set for 0.5 seconds
per division, the time reference over all 10 major divisions (vertical graticule on the cathode ray tube
face) is 5 seconds. If it’s set at 5 milliseconds per division, the full-scale time reference is 50
milliseconds. Figure 3–6 shows how the sawtooth waveform produced by the sweep circuit develops
a sine wave pattern on the cathode ray tube.
Part of the TIME/DIV control is a sweep vernier control that provides continuous adjustment of the
sweep speed between the fixed TIME/DIV steps. Whenever you move the vernier out of its detent
CAL position, the UNCAL light illuminates, letting you know that the steps marked on the
TIME/DIV dial are not calibrated.
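The full-scale time reference works the same way on the horizontal axis: the TIME/DIV setting multiplied by the 10 major divisions. A quick Python check of the examples above:

```python
# Full-screen time window: TIME/DIV setting times the 10 major
# horizontal divisions on the graticule. Units pass straight through,
# so the second example works in milliseconds.
def full_scale_time(per_div, divisions=10):
    return per_div * divisions

print(full_scale_time(0.5))  # 5.0 s of signal on screen at 0.5 s/div
print(full_scale_time(5))    # 50 ms full scale at 5 ms/div
```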

019. Basic oscilloscope probe considerations


This lesson covers basic oscilloscope probe considerations. You will be expected to know basic facts
and terms about this subject.
An oscilloscope probe is used to connect the oscilloscope to the circuit under test. Why use a probe at
all? You can’t use simple test leads because they would pick up extraneous noise and signals. Probes
are shielded so that all that is applied into the oscilloscope is the signal that is at the probe tip. Probes
also help to prevent circuit loading. So probes are basically an isolation device for the oscilloscope
input. They allow only the desired signal in.
If the scope is being used for signal tracing or circuit analysis, some type of an isolating device must
be used to prevent the scope from loading the circuit and attenuating the signal. Any voltage probe
loads the circuit you attempt to measure. Choosing the correct probe minimizes this loading. To keep
resistive loading errors below 1 percent, select a probe/scope combination that has an input resistance
(Rin) at least 100 times greater than the source impedance.
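The 100-to-1 rule of thumb comes straight from the voltage divider the probe's input resistance forms with the source impedance. A short Python sketch (the 10 kΩ source value is just an example figure):

```python
# The probe's input resistance forms a voltage divider with the source
# impedance, so the scope sees slightly less than the true voltage.
# The 10 kOhm source below is an arbitrary example value.
def loading_error_pct(r_source, r_in):
    """Percent of the true voltage lost to resistive probe loading."""
    return 100.0 * r_source / (r_source + r_in)

# With Rin at least 100 times the source impedance, the error stays
# just under 1 percent, matching the rule of thumb in the text.
print(loading_error_pct(10e3, 100 * 10e3))  # about 0.99 percent
```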
Type of probes
The voltage and frequency of the signal you are measuring, along with the source impedance at the
point of measurement, determine which probe to use. The most common types of probes available for
circuit analysis today are as follows:
• Passive probe, 1:1.
• Passive divider, 10:1 or 100:1.
• Active (field effect transistor).
• Current probes.
Passive 1:1 probe
The 1:1 probe is the simplest of all probes. Your oscilloscope may have come with a 1:1 probe (and a
10:1). The 1:1 probe is essentially a shielded piece of wire. It’s often the best choice for
measurements at low frequencies with low- to medium-impedance circuits. Since the typical
oscilloscope has an input impedance of 1 MΩ shunted by 20 pF of capacitance, circuit impedances
less than 10 kΩ can be safely measured with a simple passive 1:1 probe.
Since the 1:1 probe has no multiplication factor, it allows you to use the maximum sensitivity of the
oscilloscope. A scope with 2 mV/div maximum sensitivity can use 2 mV/div, whereas with a 10:1
probe it would be able to use only 20 mV/div.
Passive 10:1 and 100:1 divider probe
Your oscilloscope probably came with a 10:1 probe as standard equipment (and maybe a 1:1). A 10:1
divider probe increases input impedance to 10 MΩ. The 10:1 probe obviously causes less loading
than a 1:1. Another value of the divider probe is that it attenuates noise coming in.
The divider probe also allows higher voltages to be measured. If your scope’s highest range is 5 volts
per division, the highest voltage you can measure with it would be 40 volts (there are 8 divisions on
most scopes—8 × 5 = 40). With a 10:1 probe on the same scope, you may be able to measure up to
400 volts (40 × 10 = 400). Check the technical order (TO) for maximum measurement levels.
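The maximum-voltage arithmetic above reduces to one multiplication. This sketches only the math in the text; the TO remains the authority for the real measurement limits.

```python
# Highest voltage the screen can show: divisions times the highest
# VOLTS/DIV setting, scaled by the probe's division factor. The 8
# divisions and 5 V/div figures come from the example in the text;
# check the TO for actual maximum measurement levels.
def max_displayable_volts(volts_per_div, divisions=8, probe_factor=1):
    return volts_per_div * divisions * probe_factor

print(max_displayable_volts(5))                   # 40 V with a 1:1 probe
print(max_displayable_volts(5, probe_factor=10))  # 400 V with a 10:1 probe
```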
Active probe
Active probes amplify or in some other way process the signal before it is applied to the oscilloscope.
Many active probes use a field effect transistor amplifier circuit and therefore require a bias voltage
for the amplifier circuit. The field effect transistor probe has very low input capacitance, which allows
it to be used at very high frequencies. Many field effect transistor amplifier probes are good to
frequencies greater than 500 MHz. Most field effect transistor probes are 1:1 type probes.
Current probe
The current probe shows current instead of voltage. The current probe is the only one mentioned that
does not need to actually make contact with the circuit under test. It’s simply held near the circuit
under test. When using a current probe, the display is in amps per division instead of volts per
division. The probe is a current-sensing transformer with the circuit under test being the primary and
the probe being the secondary. The current probe is useful in those situations where touching the
circuit with any voltage probe at all changes the circuit’s operation.
Probe rules
When you need to make amplitude measurements, the following rules apply:
1. If you have a choice, select a minimum impedance source. For example, emitter-to-base
impedance of a transistor is generally lower than the collector-to-base impedance.
2. If you have a choice, select a probe with the highest possible input impedance (Zin) at the
frequency of interest.
3. When source impedance is unknown, the probe with the highest Zin usually yields the
greatest accuracy.
4. If the source voltage is totally unknown, it’s wise to start with a 100:1 divider probe to reduce
the possibility of damaging the probe. This will also indicate whether or not there is enough
signal available to capitalize on the relatively low capacitance of a 100:1 divider probe. More
often than not, a 100:1 divider probe is not available. If this is the case, use your standard
10:1 divider probe.
020. Characteristics of a digital oscilloscope
This lesson covers characteristics of a digital oscilloscope. You will be expected to know basic facts
and terms about this subject.
A digital storage oscilloscope is a common piece of test equipment found in C-E work centers. This
scope offers features not easily attained with an analog scope. A couple of these capabilities include
the ability to digitize analog input signals, and to store them indefinitely on a variety of electronic
media. How can you take advantage of these capabilities? Let’s examine some common operating
characteristics of the digital storage oscilloscope to find out.
A digital storage oscilloscope performs quite a few functions that are similar to its analog counterpart.
The biggest difference is that a digital storage oscilloscope digitizes the input signal. This allows you
to collect digitized data of an entire waveform for storage or display. Figure 3–7 shows a block
diagram of a digital storage oscilloscope.

Figure 3–7. Digital storage oscilloscope block diagram.

(Copyright © Tektronix. All Rights Reserved. Reprinted with permission.)


Input section
As with the analog oscilloscope, the input signal is fed to a vertical section that attenuates the signal
to an acceptable level for the digital storage oscilloscope.
Acquisition section
After the input section attenuates the signal, the digital storage oscilloscope’s analog-to-digital
converter samples the signal and converts it to a digital value. A clock in the horizontal section
determines the rate at which the analog-to-digital converter samples the signal. The sample rate can
be as high as the gigasamples per second (GS/s) range. There are two digital storage oscilloscope
sampling techniques: real-time and repetitive. We discuss these areas a little later.
There are times when high-frequency data must be sampled at a very high rate. Most digital storage
oscilloscopes are unable to directly sample high frequencies because analog-to-digital-converters are
unable to keep up with the demanding sampling rate. Charge-coupled devices are used to
compensate for this dilemma. Charge-coupled devices accept fast-changing data and send it to the
analog-to-digital converter at a slower rate—they're somewhat like a buffer. This allows a slower,
high-resolution analog-to-digital converter to be used, providing more accurate data collection.
Shortfalls of charge-coupled devices are cell storage size and cell leakage.
waveform is stored in memory, it can be held for a longer period of time without deterioration.
Memory
Sampled data points are stored in the digital storage oscilloscope’s memory as waveform points.
These stored waveform points make up a waveform record. The number of waveform points used to
make up a record is called the record length. The Tektronix TDS 744A has a record length capacity
of 500,000 points. The digital storage oscilloscope’s triggering system sets the start and stop points of
a record. After storage in memory, the record points can be sent to the cathode ray tube (through a
digital-to-analog converter) for display, or even stored on floppy disk.
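The record length and sample rate together fix how much time one record can capture. A Python sketch; the 1 GS/s rate below is an assumed example figure, not a quoted TDS 744A specification.

```python
# Capture window of a digital storage oscilloscope: record length
# divided by sample rate. The 1 GS/s figure is an assumed example
# rate, not a quoted TDS 744A specification.
def capture_time_seconds(record_length, sample_rate_hz):
    return record_length / sample_rate_hz

print(capture_time_seconds(500_000, 1e9))  # 0.0005 s (500 microseconds)
```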
Microprocessor
A microprocessor greatly expands the digital storage oscilloscope’s versatility. The microprocessor is
able to take digitized samples and perform numerous manipulations on the data. With the help of the
microprocessor, you can automatically measure rise and fall times, periods, time intervals—and you
can perform math computations. Digitized data is very easy to store on floppy disk or send to a printer
or plotter.
Cathode ray tube
The cathode ray tube receives data from memory for display. Retrieving data from memory allows
the cathode ray tube to receive data at a slower rate, rather than having to keep up with the high-
frequency signals applied directly to the cathode ray tube in an analog scope. Some cathode ray tubes also
have an option for displaying signals and other data in a variety of colors.
Sampling techniques
The digital storage oscilloscope employs two different methods for sampling analog signals: real-time
and repetitive sampling. Although related to undersampling, aliasing will also be addressed here as
a type of sampling.
Real-time sampling
Real-time sampling is normally used on single-shot or seldom-occurring signals. This technique
samples points on an input signal and fills in the spaces between using a technique known as
interpolation. Interpolation is a process that estimates what the signal looks like between samples,
and “fills in the blanks” between data points. There are two types of interpolation: linear and sine.
Linear interpolation is used when measuring pulse waves, while sine interpolation is used when
measuring sine waves. An example of sine and linear interpolation on a sine wave is shown in figure
3–8.

Figure 3–8. Sine and linear interpolation.

(Copyright © Tektronix. All Rights Reserved. Reprinted with permission.)
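Interpolation can be sketched numerically. In the hypothetical example below, linear interpolation draws a straight line between two adjacent samples, while a sin(x)/x weighting of the surrounding samples illustrates the idea behind sine interpolation; the function names and parameters are illustrative, not any scope's actual algorithm:

```python
import math

def linear_interp(x0, y0, x1, y1, x):
    """Straight-line estimate of the signal between two sample points."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def sinc_interp(samples, ts, t):
    """sin(x)/x reconstruction from samples spaced `ts` seconds apart --
    the idea behind sine interpolation of band-limited signals."""
    total = 0.0
    for n, y in enumerate(samples):
        x = (t - n * ts) / ts
        total += y * (math.sin(math.pi * x) / (math.pi * x) if x else 1.0)
    return total

# Midpoint of a ramp: linear interpolation recovers it exactly.
print(linear_interp(0.0, 0.0, 1.0, 2.0, 0.5))   # -> 1.0
```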

Repetitive sampling
This sampling technique is used for measuring repetitive signals. This method samples a repeating
signal numerous times, capturing a portion of the signal with each sample.

Figure 3–9. Equivalent-time sampling processes.

(Copyright © Tektronix. All Rights Reserved. Reprinted with permission.)

Figure 3–9 shows two types of equivalent-time sampling processes: sequential and random.
Sequential sampling involves taking a sample at a set point in time. After the first trigger, the
oscilloscope takes one sample each period. This sequence continues until there are enough data points to
reconstruct several periods of the input signal. The manufacturer's specifications determine the number
of samples per period. As always, the more sample points collected, the more accurate the reconstructed
signal. Random sampling takes samples at an undetermined point on a signal, after the trigger, and
then stores them. The digital storage oscilloscope's clock keeps track of the time between each
trigger and its sample. The samples are taken at fixed internal intervals and then reconstructed to
match the input signal. The samples are displayed in what looks like a random order until the entire
data record is completed.
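The idea behind sequential equivalent-time sampling can be sketched as follows: one sample per trigger, with the sample delay advanced a small fixed step each period, so samples gathered across many periods line up into a single reconstructed period. The signal and parameters below are hypothetical stand-ins:

```python
import math

def repetitive_signal(t):
    """Stand-in for the repeating input signal (period = 1.0)."""
    return math.sin(2 * math.pi * t)

# One sample per trigger; the delay advances one step each period.
period, n_points = 1.0, 16
delays = [k * period / n_points for k in range(n_points)]
reconstruction = [repetitive_signal(k * period + d)   # k-th trigger + delay
                  for k, d in enumerate(delays)]

# Because the signal repeats, samples spread over many periods line up
# into a single reconstructed period.
expected = [repetitive_signal(d) for d in delays]
print(all(abs(a - b) < 1e-9 for a, b in zip(reconstruction, expected)))  # -> True
```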

Aliasing
Aliasing occurs when an analog signal is undersampled: the displayed signal appears to be at a lower
frequency than the signal actually measured (fig. 3–10). Most of the time, aliasing results from setting
the time base too slow, which reduces the sample rate below what the input frequency requires.

Figure 3–10. Aliasing.

(Copyright © Tektronix. All Rights Reserved. Reprinted with permission.)
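The effect is easy to reproduce numerically. In the hypothetical sketch below, a 9 Hz sine sampled at only 10 samples per second produces exactly the sample values of an inverted 1 Hz sine, so the display would show a 1 Hz signal:

```python
import math

fs = 10.0     # sample rate in samples per second -- too slow for f_in
f_in = 9.0    # input frequency in Hz, well above fs / 2

# Undersampling: the 9 Hz input aliases down to fs - f_in = 1 Hz.
samples_9hz = [math.sin(2 * math.pi * f_in * n / fs) for n in range(20)]
samples_1hz = [math.sin(2 * math.pi * 1.0 * n / fs) for n in range(20)]

# The 9 Hz samples match a phase-inverted 1 Hz sine point for point.
worst_mismatch = max(abs(a + b) for a, b in zip(samples_9hz, samples_1hz))
print(worst_mismatch < 1e-9)   # -> True
```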

Digital storage oscilloscope resolution


The accurate recreation of a signal not only depends on the number of bits used to represent a sample,
but also the amount of noise in the system. True, the more bits the digital storage oscilloscope uses
when digitizing an analog signal, and the higher the sampling rate, the more accurate the data sample.
However, you also must contend with noise generation from other components within the digital
storage oscilloscope. One source of noise is from the charged coupled devices. Charged coupled
devices can induce noise into the signal being sampled, slightly altering the digitized value. This can
affect the scope’s overall resolution.
Acquisition modes
The acquisition mode determines how the oscilloscope combines many sample points into one record
point. Below we will discuss three of the acquisition modes.
Sample
The digital storage oscilloscope creates a record point by saving the first sample in a collection of
sample points. This is a standard operating mode for digital storage oscilloscopes.
Peak detect
In this mode the digital storage oscilloscope saves the highest and lowest points in a collection of
samples. This mode can be used to detect glitches.
Averaging
The digital storage oscilloscope shows a record that is a collection of several acquisitions of a
repeated signal that is averaged over time. This mode is useful for eliminating random noise from
signals since the average value of random noise is zero.
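The three modes can be sketched as simple operations on collections of sample points. This is a hypothetical illustration (real scopes do this in acquisition hardware), but it shows why averaging suppresses zero-mean random noise:

```python
import random

def sample_mode(points):
    """SAMPLE: keep the first sample in the collection."""
    return points[0]

def peak_detect(points):
    """PEAK DETECT: keep the highest and lowest samples -- catches glitches."""
    return max(points), min(points)

def average_mode(acquisitions):
    """AVERAGING: average several acquisitions of a repetitive signal;
    random noise with a zero mean cancels as acquisitions accumulate."""
    return [sum(col) / len(col) for col in zip(*acquisitions)]

random.seed(0)                       # deterministic noise for the demo
clean = [1.0, 2.0, 3.0]
noisy_acqs = [[v + random.uniform(-0.5, 0.5) for v in clean]
              for _ in range(5000)]
averaged = average_mode(noisy_acqs)
print(max(abs(a - c) for a, c in zip(averaged, clean)) < 0.05)   # -> True
```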
Digital storage oscilloscope controls
A number of the controls on a digital storage oscilloscope perform the same function as on an analog
scope. But there are a few storage controls unique to a digital storage oscilloscope. A few of these
particular functions on the Tektronix 2230 digital storage oscilloscope are acquisition, cursor, and
memory and menu controls (fig. 3–11).

Figure 3–11. Tektronix digital storage oscilloscope storage controls.

(Copyright © Tektronix. All Rights Reserved. Reprinted with permission.)

Acquisition controls
The acquisition controls set the digital storage oscilloscope’s sampling rate. They also determine what
type of processing occurs and the number of record points to be shown.
1K/4K switch
This switch allows you to adjust the acquisition record length (the number of record points in a waveform
record) between 1,000 (1K) and 4,000 (4K) points. Keep in mind that a digital storage oscilloscope has
a finite memory capacity. This means you can acquire a “detailed” view of a signal for a short period
of time, or a “less detailed” view for a longer period of time. Some scopes allow you to add memory
to increase the record length for specialized applications.
PRETRIG/POST TRIG switch
This control sets up the acquisition of a waveform either before (PRETRIG) or at the beginning
(POST TRIG) of the waveform. It is possible to choose other trigger positions with other scope
controls.
ROLL/SCAN switch
This control selects either the ROLL or SCAN acquisition and display mode. In the ROLL mode,
SEC/DIV switch settings are from 0.1 s to 5 s per division. The triggers are disabled for the NORM and
P-P AUTO trigger modes, so the scope continuously acquires and displays signals.
At SEC/DIV switch settings 0.05s per division and faster, the ROLL/SCAN switch is disabled, and
waveform samples require a trigger to complete the acquisition before the display is updated.
When the ROLL/SCAN switch is in the SCAN mode, the A TRIGGER mode controls are functional.
For NORM trigger mode, the pretrigger waveform is updated by the trigger and post trigger scan
from the trigger position to the right. For SGL SWP, the SCAN mode is overridden. Triggers are
disabled in the P-P AUTO and TV field trigger modes.
SAVE/CONTINUE switch
When in the SAVE mode, the acquisition and display update in progress stops. In the CONTINUE
mode, the acquisition process restarts. If the SEC/DIV switch setting is set to 0.1s per division or
slower, the SAVE state is entered immediately upon pressing the button. If the SEC/DIV setting is 50
ms per division or faster, and an acquisition has already been triggered, the acquisition is completed
before the SAVE state is entered.
Cursors control
These buttons control all waveforms displayed in the STORE mode. These controls make delta volts,
delta time, one over delta time and delay time measurements in the STORE displays. You can also
adjust the display window during a 4K record length acquisition with the CURSORS position control.
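The arithmetic behind those readouts is straightforward. With two cursors at made-up example positions, the delta and one-over-delta values work out as:

```python
# Cursor readout math (the positions below are illustrative values).
c1_time, c1_volts = 0.0012, 0.40    # cursor 1: time (s), voltage (V)
c2_time, c2_volts = 0.0032, 2.40    # cursor 2: time (s), voltage (V)

delta_v = c2_volts - c1_volts       # delta volts, about 2.0 V
delta_t = c2_time - c1_time         # delta time, about 2 ms
freq = 1.0 / delta_t                # one over delta time, about 500 Hz
```

One over delta time is the readout you would use to measure the frequency of a periodic signal by placing the two cursors one period apart.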
POSITION/CURS/SELECT WAVEFORM switch
This switch establishes the function of the CURSORS position control. When depressed (POSITION
CURSORS mode), the control positions the cursors along the horizontal axis. When the pushbutton is
released (SELECT WAVEFORM mode), the CURSORS position control or the C1/C2 switch may
be used to position the cursor to the desired waveform(s).
CURSORS position control
This control allows for either horizontal positioning of the active cursor (or active cursors when there
are two waveforms displayed in a display set) or for switching the cursors between waveform display
sets. When cursors are positioned to a new waveform set, they return to the position they had when
last on that signal set. Cursor positioning continues to function during SAVE mode and
measurements can be made on any waveform displayed. When you change an acquisition control, the
cursors return to the acquisition waveform set.
Cursors are placed on all waveforms in a display set. A display set is one or both waveforms from the
following areas: acquisition, channels 1 and 2 and references 1, 2 or 3. Cursors move from the
acquisition waveform if they were on a SAVE REF waveform that is turned off. The acquisition
parameters of the waveform set, in which the cursors are located, are displayed in the cathode ray
tube readout.
When the displayable acquisition length is more than one screen, one screen of the record is
displayed. A 4-bar graph on the screen indicates which portion of the record you are viewing. For
example, a 4K record length displays 1K samples at a time. You would have to look at 4 different
screens to see the entire record length. You then would adjust the cursor control to position between
the different segments of the record.
SELECT C1/C2 (cursor-select) switch
In the CURS mode this switch selects the cursor(s) that can be positioned by the CURSORS position
control. Each press of the C1/C2 button activates the cursors alternately. In SELECT WAVEFORM
mode, pressing the C1/C2 switch moves the cursor set between displayed waveforms.
Memory and menu controls
These switches control the MENU operation while the MENU is displayed. They also control the
storage and display of the SAVE reference waveforms when the MENU is not displayed.
SAVE REF MEMORY CONTROL
When the WAVEFORM REFERENCE/MENU SELECT switch is in the WAVEFORM
REFERENCE position, the MEMORY switches control the SAVE REF memory.
MENU CONTROL
When the WAVEFORM REFERENCE/MENU SELECT switch is in this mode, the MEMORY
switches control menu operation. Waveforms are only displayed with menus when a menu choice
requires a waveform to be displayed in order to perform the selected change. The MENU mode allows
choices of alternate parameters and functions that override front-panel settings.

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.

018. Oscilloscope basics


1. What type of display does the oscilloscope present?

2. On which axis of an oscilloscope is voltage, time, and depth information presented?



3. What is the purpose of the delay line at the input of the vertical amplifier?

4. What position of the input coupling switch is used to view digital-type or square wave signals?

019. Basic oscilloscope probe considerations


1. What is the basic purpose of the oscilloscope probe?

2. List four types of probes.

3. What is the typical input impedance of most oscilloscopes?

4. When making an amplitude measurement and the source impedance is unknown, greatest
accuracy is achieved when the probe’s Zin is what?

020. Characteristics of a digital storage oscilloscope


1. What is the biggest difference between digital storage oscilloscopes and analog scopes?

2. What is the function of charged coupled devices in the digital storage oscilloscope’s signal
sampling process?

3. Define the following waveform storage terms:

a) Waveform points.

b) Waveform record.

c) Record length.

4. What advantage does retrieving data from memory have on the cathode ray tube?

5. List two digital storage oscilloscope sampling techniques.

6. Define interpolation.

7. What are the two types of interpolation?

8. What is the difference between sequential and random sampling?

9. What is a common cause of aliasing?

10. Describe the three types of acquisition modes.

11. What is the function of the acquisition controls?

12. What is the range of record points that the Tektronix 2230 digital storage oscilloscope can
acquire?

13. Which control sets up acquisition either before a trigger or at the beginning of a waveform?

14. What happens to the NORM and P-P AUTO trigger modes while in ROLL mode?

15. In the NORM trigger mode, how are the pretrigger waveform and post trigger scan updated?

16. In the SAVE mode, what happens to the acquisition and display update in progress?

17. What types of measurements in the STORE mode are made with CURSORS controls?

18. Which switch establishes the function of the CURSORS position control?

19. Explain how a 4K record length is shown on the display.

20. In the SELECT WAVEFORM mode using the SELECT C1/C2 (cursor-select) switch, what
happens when the C1/C2 switch is pressed?

21. What is the purpose of the MEMORY and menu controls?

3–3. Bit Error Rate Test Set (BERTS)


Data does not always arrive correctly at the receiving site. Noise, distortion, component errors, and
general interference in the received communication or data can disorient the receiving processor
equipment. Most demodulators and signal processors are designed to detect the level changes from
the one (1) state to the zero (0) state. But since noise is random and can reach levels higher than the
desired signal amplitude, it’s easy to see why the demodulator can become confused and assign the
wrong state. Thus, a transmitted one (1) can become a received zero (0). The bit error rate test set is
key in verifying the digital transmission circuit path. Some bit error rate test sets can verify data
transmission on-line; others must perform off-line. In this section we present the purpose of the bit
error rate test set, its major sections, error distribution, and basic bit error testing.

021. Purpose and major sections of the bit error rate test set
This lesson covers major sections of the BERTS. You will be expected to know basic facts and terms
about this subject.
Bit errors are often random and cannot be predicted. To understand why this is a problem, you must
consider some typical performance measurements between two dial-up modems in the United States.
A transmission speed of 1.2 kilobits per second (kbps) has a bit error rate of 1 in 10^5, moderate quality,
to 1 in 10^3, very poor quality. This means you can expect one bit to be wrong in every 1K to 100K
bits transmitted.
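To put those rates in perspective, here is a short back-of-the-envelope calculation using the figures above:

```python
# Time between errors on the 1.2 kbps link described above.
bit_rate = 1200     # bits per second
ber = 1e-5          # moderate quality: 1 error in 10^5 bits

bits_between_errors = 1 / ber                  # about 100,000 bits
seconds_between_errors = bits_between_errors / bit_rate
print(round(seconds_between_errors, 1))        # -> 83.3 (one error every ~83 s)
```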
One answer to this problem is to use the most error-free circuit available. Fiber optic circuits offer
exceptional performance over standard land lines. However, bit errors will still occur, so you must
know how to handle them. The bit error rate test set is the solution.
Purpose
The purpose of the bit error rate test set is to provide a bit error detection system that can determine
the received data quality after traveling over a communications link or through a bit synchronizer.
The bit error rate test set can perform bit error analysis, timing analysis or delay measurements. Also,
the bit error rate test set is a source of pulse code modulation (PCM) data used to test a
communications data link or a bit synchronizer output. Many bit error rate test sets can generate
command test patterns, telemetry PCM formats and bit coding patterns at telemetry data rates. The bit
error rate test set provides this patterned digital test signal for loop testing and computes the bit error
rate by comparing an identical pattern to determine the data quality passing through the loop under
test. In most systems the bit error rate test set is part of the built-in test equipment or BITE. Often a
patch panel provides the data inputs to the bit error rate test set. In other systems, the bit error rate
testing capability is part of another chassis, such as a modulator.

Major sections
Figure 3–12 shows a typical bit error rate test set simplified block diagram. The three major sections
shown operate independently of each other—except for the 48-bit program and the external data—but
interact at a system level. The three major sections follow:
1. Internal clock generator.
2. Pattern simulator.
3. Pattern synchronizer and error counter.

Figure 3–12. Bit error rate test set block diagram.

Internal clock generator


The internal clock generator provides an internal clock reference usually selectable from the front
panel.
A more detailed view of the internal clock generator section is shown in figure 3–13. The voltage
controlled oscillator (VCO) section is the basic clock generator that provides all clock phases. The
VCO is a free-running oscillator; it is periodically calibrated against a crystal-controlled
oscillator in the automatic calibration (AUTO-CAL) section. The AUTO-CAL section provides an
AUTO-CAL cycle when either of the two most significant digits of the bit rate change.

Figure 3–13. Internal clock generator block diagram.

In most bit error rate test sets, the bit rate thumbwheel setting on the front panel primarily determines
the voltage controlled oscillator frequency. The voltage controlled oscillator output frequency, called
the voltage controlled oscillator clock, is sent to the countdown section. The countdown section
divides the voltage controlled oscillator clock and provides a frequency equal to the bit rate you
select. The digital-to-analog converter and decode range splitter process the binary coded decimal
information from the bit rate switches. Then the digital-to-analog converter and decode range splitter
use the binary coded decimal information to control the voltage controlled oscillator center frequency,
and to control the range the countdown circuits will decode.
Pattern simulator
The pattern simulator accepts either the external clock or the internal clock and generates either a
pseudo random noise pattern or a 48-bit pattern you program from the front panel. Also you may
select the external data input as the pattern source. Then the pattern you select is buffered and
provided as a system compatible output. In addition, you can vary the data peak-to-peak value or
offset the data and add noise from an external source for a more comprehensive test.
Figure 3–14 shows the pattern simulator section. The data/clock select encoder controls the clock
selector and selects either the external clock or the internal clock for further processing. The pseudo
random noise generator receives the internal clock from the clock selector and generates a 2,047 bit
long pseudo-random pattern. The pseudo random noise-blanking generator, under the control of the
blanking selector switch, provides a blanking period at the start of the 2,047-bit pseudo random noise
frame. The blanking period may be 0, 32, 64, 96 or 128 bits long.
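A 2,047-bit pseudo random noise pattern is a maximal-length (2^11 − 1) sequence, typically produced by an 11-stage linear-feedback shift register. The text does not name the feedback polynomial, so the sketch below assumes the common x^11 + x^9 + 1 choice used for PRBS-11 patterns:

```python
def prbs11_bits():
    """Generate one full 2,047-bit pseudo-random pattern with an
    11-stage linear-feedback shift register. The taps (stages 11 and
    9, the x^11 + x^9 + 1 polynomial) are a standard stand-in; the
    text does not specify the exact polynomial used."""
    state = 0b11111111111            # any nonzero 11-bit seed
    bits = []
    for _ in range((1 << 11) - 1):   # period of a maximal LFSR: 2^11 - 1
        new = ((state >> 10) ^ (state >> 8)) & 1   # feedback from taps
        bits.append(state & 1)
        state = ((state << 1) | new) & 0x7FF       # shift, keep 11 bits
    return bits

pattern = prbs11_bits()
print(len(pattern))    # -> 2047
print(sum(pattern))    # -> 1024 (a maximal-length sequence has 2^10 ones)
```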
The 48-bit register and 8-bit word display accept six 8-bit words, one word at a time, from the data
entry switches and stores them in a 48-bit register. Most bit error rate test sets have a display that
indicates either the switch data or the data in a particular word location, determined by the word
select switch. The read/enter switch controls this display. The 48-bit transmitter accepts the 48-bit
parallel word, composed of the six 8-bit words, from the register and generates a serial pattern. This
serial pattern, consisting of the 48-bit words, repeats at a bit rate that the selected clock frequency
controls. The transmitter also generates a synchronization (SYNC) pulse at the start of each 48-bit
pattern.
The data/clock select encoder—controlled by the pattern select and clock select switches—generates
a data select code (sent to the data/sync selector and the error/in sync selector) and a clock select code
(sent to the clock selector). The data/sync selector selects either pseudo random noise data, external
data or 48-bit data—depending on which data select code is received. The data/sync selector also
receives the corresponding SYNC signal.

The code converter, under the control of the code selector switch, converts the non-return zero-level
(NRZ-L) data into the desired coding, such as NRZ-mark (NRZ-M). Many bit error rate test sets have
several different codes available. The clock drivers convert the selected clock signal to a system
compatible level and a buffered transistor-transistor logic (TTL) level. The data drivers convert the
encoded data signal to a system-compatible level and a buffered TTL level. The outputs from the
clock drivers and the data drivers are input to the pattern synchronizer and error counter data/clock
receivers block.
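A minimal sketch of what such a code converter does (the function name and parameters are illustrative): in NRZ-M, a one is sent as a change in level and a zero as no change, so conversion simply toggles the output level on every one bit:

```python
def nrz_l_to_nrz_m(bits, initial_level=0):
    """Convert NRZ-L data to NRZ-M: a one produces a level transition,
    a zero produces no transition. A hypothetical sketch of the code
    converter's job, not any particular test set's implementation."""
    level, out = initial_level, []
    for b in bits:
        if b:                 # a 1 toggles the line level
            level ^= 1
        out.append(level)
    return out

print(nrz_l_to_nrz_m([1, 0, 1, 1, 0]))   # -> [1, 1, 0, 1, 1]
```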
The output amplifier is the last block we cover in the pattern simulator. The output amplifier accepts a
TTL data signal from the code converter, external noise, and external offset. It mixes the three signals
linearly. It can provide a front panel adjustable offset and output level for a low impedance load.

Figure 3–14. Pattern simulator block diagram.

Pattern synchronizer and error counter


This section receives the pattern you selected (one of the two generated patterns), automatically
synchronizes on the pattern and determines the bit error rate induced after transmission through the
system. When you select the external data mode, the data is received and synchronized with the
original external data by varying the delay between the two signals. This allows you to compare the
signals and measure error.
The pattern synchronizer and error counter are shown in figure 3–15. The data/clock receivers accept
either system-compatible data or buffered TTL data from the clock drivers and the data drivers, and
output a TTL level. The data/clock receivers combine (OR) the system-compatible and TTL signals
together so only one signal is present at one time. In most bit error rate test sets, the front panel has an
inverter switch that can invert the data pattern. The data/clock receivers send the output received
clock and received data to the 48-bit synchronizer/comparator, the pseudo random noise
synchronizer/comparator, the external data synchronizer/comparator, as well as to the error rate
counter and display sections.
The 48-bit synchronizer/comparator accepts the received clock and data, and synchronizes on the
pattern when it’s equal to the 48-bit parallel input from the 48-bit register and 8-bit word display. It
generates an error pulse each time an error is found in the incoming pattern. The 48-bit
synchronizer/comparator generates an IN SYNC level when no errors occur in the pattern. The
error/IN SYNC selector section receives the error pulses and IN SYNC level.
The pseudo random noise pattern synchronizer/comparator accepts the received clock and data from
the data/clock receivers and synchronizes with the pattern when it’s equal to its own 2,047-bit pseudo
random noise pattern. This section internally generates its own pseudo random noise pattern. Again,
the pseudo random noise pattern synchronizer/comparator generates an error pulse each time an error
is found. If the incoming pattern is correct, it generates an IN SYNC level. As with the 48-bit
synchronizer/comparator, the error/IN SYNC selector section receives the error pulses and IN SYNC
level.
The external data pattern synchronizer/comparator accepts the external data, delays it under the front
panel control and compares it with the received data from the data/clock receivers. When the
comparison is correct, it generates an IN SYNC level. The external data pattern
synchronizer/comparator generates an error pulse each time an error appears in the incoming pattern.
Once again the error/IN SYNC selector section receives the error pulses and IN SYNC level.
The error/IN SYNC selector, under control of the data select code, selects the correct error and IN
SYNC signals to send to the error rate counter and display. The error rate counter and display
continuously totals the error pulses it receives and provides an error count output to the front panel
display. The output is the sum of the total errors during the count cycle. When you push the reset
switch, the reset pulse resets all sections in the pattern synchronizer and error counter, thereby
stopping the test.

Figure 3–15. Pattern synchronizer and error counter block diagram.

022. Error distribution and bit error rate testing


This lesson covers error distribution and bit error rate testing. You will be expected to know basic
facts and terms about this subject.
Now that you know the purpose and the divisions of the bit error rate test set, you’re ready to move
on to the basic bit error rate test procedure. The primary means of measuring the quality of
transmitted digital information is to compare the number of bits in error to the total number of bits
transmitted. This ratio is called the bit error rate. This measurement is the basis for your tests. But
before we discuss the test itself, let’s look at some problems caused by the distribution of errors.
Distribution of errors
You should be aware that while the bit error rate is the basic quality measurement for a transmission
channel, it doesn’t tell the whole story of the channel’s capability to pass a particular type of digital
information. Where and when the error distribution occurs is also a factor. In certain types of
communications or data transmission, a random distribution of errors (i.e., each bit is equally likely to
be in error) may seriously impede the flow of traffic. On the other hand, the same error rate with a
clustered distribution (i.e., the errors occur in clusters) would impair the flow of traffic very little.
However, sometimes a clustered distribution of errors can impede the flow of traffic more than would
the same number of random errors. Let’s look at an example.
In certain types of secure voice transmission, you usually cannot detect a single digital error. However, a
number of errors occurring in rapid succession, as in a burst or cluster of errors, produce an
objectionable noise burst in your ear. If the error rate is 1 in 10^3 and the distribution is random, you
probably will not notice an occasional error. However, if the errors come in bursts of about 350 bits or
more, you can hear a noise burst. If the transmission rate is 50 kbps, you can hear a noise burst as
frequently as once every 7 seconds.
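The arithmetic behind that figure:

```python
# Reproducing the noise-burst arithmetic from the example above.
bit_rate = 50_000    # bits per second
ber = 1e-3           # 1 error in 10^3 bits
burst_size = 350     # errors bunched together per audible burst

errors_per_second = bit_rate * ber             # about 50 errors each second
seconds_per_burst = burst_size / errors_per_second
print(round(seconds_per_burst))                # -> 7 (one burst every 7 s)
```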
Of course, you want as few errors as possible. The bit error rate test tells you when
there is a degradation of communications or data transmission.
Test procedures and parameters
Although you will see the error parameters for the various circuit types in different terms, the basic
error testing procedure is the same. The five steps follow:
1. A known bit pattern, usually pseudo-random, is generated at the sending end of the data
channel or equipment under test and is transmitted through the channel.
2. At the receiving end, an identical pattern is generated and synchronized with an incoming
signal.
3. The received signal is compared to a locally generated pattern in the receiving equipment.
4. If any bit in the received signal is not the same as the generated bit, it is assumed the received
bit is in error.
5. An error signal is generated and used to operate a counter.
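The five steps can be sketched end to end. Here the "channel" is a hypothetical stand-in that corrupts two known bit positions; everything else follows the compare-and-count procedure above:

```python
def run_ber_test(pattern, received):
    """Steps 3-5: compare the received pattern with the locally
    generated copy, count bit errors, and compute the bit error rate
    (a minimal sketch, not any test set's actual implementation)."""
    errors = sum(1 for tx, rx in zip(pattern, received) if tx != rx)
    return errors, errors / len(pattern)

# Steps 1-2: a known pattern and a hypothetical channel that corrupts
# two bit positions (stand-ins for real channel impairments).
pattern = [n & 1 for n in range(1000)]      # known test pattern
received = list(pattern)
for bad_bit in (17, 404):                   # injected errors
    received[bad_bit] ^= 1

errors, ber = run_ber_test(pattern, received)
print(errors)   # -> 2
print(ber)      # -> 0.002
```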
Although all error measuring sets use the same basic technique for counting errors, they process and
display the information in many different ways. So, you must use and understand the technical
manual for the particular test equipment. Some of the significant differences we discuss here will help
you understand the test procedure and its results. Some significant test equipment parameters are as
follows:
 Bit rate and timing.
 Test patterns.
 Synchronization.
 Interface and data coding.
 Readouts and special features.
Bit rate and timing
Some test sets can operate over a wide range of bit rates but must be supplied with external timing.
Other test sets can operate at a wide range of discrete bit rates using internal timing, but they must be
supplied with external timing at any other bit rate within that range. Still other sets have internal
timing over a range of low or intermediate bit rates and can operate at higher rates with external
timing.
Test patterns
Most bit error rate test sets generate pseudo-random test patterns. Normally, patterns of the same
length are identical. So, a pattern of a given length generated by one bit error rate test set is identical
to a pattern generated by another bit error rate test set. Some common pseudo random noise patterns
are 63 bits, 511 bits, 2,047 bits, 32,767 bits and 1,048,575 bits. The shorter pattern lengths (like 511
bits) are normally used for testing low- to intermediate-speed data circuits, 1,200 to 9,600 bps. Higher
speed circuits, 50 kbps to 64 kbps, use longer pattern lengths (2,047 or 32,767 bits). Very high speed
T–1 circuits, 1.544 Mbps, use the 1,048,575 bit pattern.
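Notice that these common lengths are all of the form 2^n − 1, the period of an n-stage maximal-length shift register:

```python
# Common pseudo random noise pattern lengths are maximal-length
# sequence periods: 2**n - 1 for an n-stage shift register.
lengths = [2**n - 1 for n in (6, 9, 11, 15, 20)]
print(lengths)   # -> [63, 511, 2047, 32767, 1048575]
```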
Many bit error rate test sets also offer a variety of fixed patterns in addition to pseudo random noise
patterns, for instance, repeated characters, alternating ones and zeros, or a programmable bit
sequence. But for general use, pseudo random noise patterns are suitable for routine testing. You must
remember that the bit rate of the data circuit determines the pattern length for the test. However, the
recommended bit rate is the highest available to provide maximum output testing. As an example, you
would not test a 50 kbps or 64 kbps data circuit with a bit rate of 9,600 bps, because it would not test
the maximum output of the equipment. A 64 kbps circuit should be tested at 64 kbps or the closest bit
rate available, with a 2,047 or 32,767 bit test pattern.
Synchronization
The pattern generator in the receiving section of the bit error rate test set must be synchronized with
the received pattern from either a bit synchronizer or a distant test set if testing data circuits. One of
the major problems you may encounter in making error tests involves the detection of an “out-of-
sync” condition by the test set receiver. Some bit error rate test sets have a detection system that
assumes the receiver is out-of-sync when the error rate is high. Other sets have a more elaborate
detection scheme and can operate properly even with a high error rate.
The important factor concerning synchronization is the way the bit error rate test set responds when
detecting an out-of-sync condition. Some sets automatically resynchronize, and may or may not
inhibit the error count during resynchronization. Other sets stop when detecting an out-of-sync
condition and must be manually restarted. These variations and others can affect the data
interpretation. Therefore, it is necessary for you to know how test equipment responds to an out-of-
sync condition in order to interpret the data correctly. For example, you need to know if the error
count continues during the resynchronization period, since that portion of the count is incorrect.
Interface and data coding
Many bit error rate test sets have various interfaces and signal coding options. Select the interface and
format options that match the interface and coding of the circuit you are testing. Many test sets are
designed to operate primarily with data modems and have 25-pin connectors that carry the necessary
signal and handshake connections for modem operations. These connectors are usually the Electronic
Industries Association (EIA) RS–232 type and simplify connecting the test equipment to the modem
under test.
You must check the data circuits or equipment under test to determine the interface at the point of
test. Also, check the test equipment to select the proper interface. There are several different digital
interfaces, but some of these are known by different designators, although the interface is the same.
For example, the EIA RS–422 interface is essentially the same as the Federal Standard 1020 interface.
Again, check technical manuals to determine the actual impedance and voltage levels before
connecting the bit error rate test set to the point of test.

Readouts and special features


There are a variety of readouts and special features available in different bit error rate test sets. The
basic concern is to be able to count the number of errors that occur in a specified number of
transmitted bits. From this information you can calculate the bit error rate. You must be able to use
the readout or special features to obtain this information. Some bit error rate test sets have the ability
to print results, or offer a port to send the results to an external printer you designate. This special
feature not only simplifies the task but in some cases provides information that would be lost. As you
can see, there are a variety of special features that different manufacturers provide. Once again, check
technical manuals to determine how to use the readout and special features properly.
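Calculating the bit error rate from the readout information is a simple division. The sketch below assumes you have read the error count and total bit count from the test set; the 1e-6 threshold is only an example figure, not a standard requirement:

```python
def bit_error_rate(errors, bits):
    # The basic quality figure: errors counted divided by bits transmitted.
    return errors / bits

def meets_threshold(errors, bits, threshold=1e-6):
    # Pass/fail check against a circuit's required error rate; the
    # default threshold here is an illustrative value only.
    return bit_error_rate(errors, bits) <= threshold
```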

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.

021. Purpose and major section of the bit error rate test set
1. What is the purpose of the bit error rate test set?

2. What type of test can the bit error rate test set perform?

3. What is the purpose of the internal clock generator?

4. What is the purpose of the voltage controlled oscillator and how is it calibrated?

5. What is the purpose of the countdown section of the internal clock generator?

6. What is the purpose of the digital-to-analog converter and the decode range splitter and for what
purpose does this section use the binary coded decimal information?

7. What is the purpose of the pattern simulator?

8. Which pattern simulator section controls the clock selector and selects either the external clock or
the internal clock for processing?

9. What is the purpose of the pseudo random noise generator?

10. What is the purpose of the pseudo random noise blanking generator and what are the blanking
periods?

11. What is the purpose of the 48-bit register and 8-bit word display?

12. What controls the data/clock select encoder and what does it generate?

13. Which pattern simulator section selects either pseudo random noise data, external data or 48-bit
data—depending on which data select code is received?

14. What is the purpose of the code converter?

15. What is the purpose of the clock drivers and the data drivers and where are their outputs sent?

16. What is the output from the output amplifier?

17. What is the purpose of the pattern synchronizer and error counter?

18. What are the inputs to the data/clock receivers?

19. Which pattern synchronizer and error counter sections combine (OR) the system compatible and
TTL signals together so only one signal will be present at one time?

20. To what sections are the outputs from the data/clock receivers sent?

21. What is the purpose of the 48-bit synchronizer/comparator?

22. What is the purpose of the pseudo random noise pattern synchronizer/comparator?

23. What is the purpose of the external data pattern synchronizer/comparator?

24. Where does the error/IN SYNC selector send the correct error and IN SYNC signals?

25. When will the pattern synchronizer and error counter error/IN SYNC section provide an output to
the front panel display and what will interrupt this output?

022. Error distribution and bit error rate testing


1. What is the primary means of measuring the quality of transmitted digital information?

2. What is the basic quality measurement for a transmission channel?

3. What is the difference between random distribution of errors and an error rate with a clustered
distribution?

4. What are the steps in the basic bit error rate testing procedure?

5. Although bit error rate test sets use the same basic technique to count errors, how do you interpret
the results and display information?

6. What are some significant bit error rate test equipment parameters?

7. How can the bit error rate test set, with internal timing for a low or intermediate bit rate range, be
used to operate at higher bit rates?

8. What data circuit speed would normally use the shorter generated bit error rate test pattern
lengths and what is the data circuit bit rate?

9. What data circuit speed would normally use the longer pattern lengths and what is the data circuit
bit rate?

10. What data circuit speed would normally use a 1,048,575 bit pattern length and what is the data
circuit bit rate?

11. What bit rate would you use to test a 9,600 bps data circuit?

12. What bit rate would be used to perform a bit error rate test on a 64 kbps data circuit?

13. What subsection in the receiving section of the bit error rate test set must be synchronized with
the received pattern from either the bit synchronizer or the distant end?

14. What is one of the major problems encountered in making bit error rate tests?

15. What should you know about synchronization in order to interpret the bit error rate test data
correctly?

16. What type of connectors do test sets, designed to operate with data modems, usually use?

17. What must you check to determine impedance and voltage levels before connecting the bit error
rate test set to the point of test?

18. What information must be available to you from the bit error rate test set readout or special
features?

3–4. Earth Ground Tester


Over time, corrosive soils with high moisture content, high salt content, and high temperatures can
degrade ground rods and their connections. So although a grounding system may have had low earth
ground resistance values when initially installed, its resistance can increase as the ground rods are
eaten away. That is why it is highly recommended that all grounds and ground connections be
checked at least annually. In this section, we will examine the basics of ground testing.

023. Why do we test ground?


There is generally a ground-resistance specification that the ground rod or grid must meet when
installing a new facility, electrical system, or piece of equipment. An appropriate test provides the
necessary verification. Subsequently, this ground must continue to be periodically tested. Lightning
damage, corrosion, and the rigors of freezing and thawing can disrupt or deteriorate a ground
electrode. Seasonal variations in temperature and moisture can have profound effects upon the
electrical properties of the soil. The water table may fall, so that an electrode that was installed in a
moist, conductive soil may end up dry and isolated. If the system ground has been tied to the water
pipes, repairs that are made with nonconductive materials (plastic pipe and couplers) can render the
system highly resistant to current flow, hence an inadequate ground. Additions and improvements to
the electrical system and equipment in a facility can result in the ground no longer being sufficient.

Sophisticated electronic equipment can suffer immeasurably from noise and power disturbances. All
of these contingencies and more make ground maintenance an integral part of electrical upkeep.
The purpose of a ground, besides the protection of people and equipment, is to provide a safe path for
the dissipation of fault currents, lightning strikes, static discharges, and EMI and RFI interference.
Earth ground testers are indispensable tools that help maintain equipment uptime and resolve
intermittent electrical problems caused by poor grounding.

024. Basics of earth ground testing


Earth ground testing methods differ in the number of points of contact with the soil. The two-point,
three-point, and four-point tests are commonly referred to as the dead earth, fall of potential, and
Wenner method tests, respectively. In the dead earth method, contact is made at just two points: the
ground electrode under test and a convenient reference ground such as the water pipe system or a
metal fence post. In the fall of potential method, a genuine ground tester makes contact using the test
electrode plus the current and potential probes. With the Wenner method, no ground electrode is
involved but rather the independent electrical properties of the soil itself can be measured using a
four-probe setup and a recognized standard procedure. In this lesson, we will limit our scope to the
fall of potential principle.
Most instruments use the fall of potential principle. This test method is used to measure the ability of
an earth ground system or an individual electrode to dissipate energy from a site. It is referred to as
the three-point method because it entails three points of contact with the soil: the test ground plus
voltage and current probes. A fairly large number of resistance readings are taken by moving the
voltage probe at regular intervals (e.g., five or ten feet) between the ground under test and the current
probe. These are plotted on a graph of resistance versus distance and should produce a characteristic
curve. Resistance should rise steadily in the electrical field of the ground under test and also within
the field of the current probe. But in between, there should be a “flat” area in which resistance holds
steady (it may vary by a few digits since soil is not uniform, but the overall graph should be
noticeably horizontal through midrange). This steady value is taken as the sought-after resistance
measurement; the reading taken with the voltage probe at 61.8 percent of the total distance is the
value closest to the theoretical true resistance.
How earth ground tests work
First, the earth electrode under test must be disconnected from the site. Second, the tester is
connected to the earth electrode. Then, for the three-point test, two earth stakes are placed in the soil
in a direct line away from the earth electrode. A spacing of 65 feet is sufficient.
A known current is generated by the earth ground tester between the outer earth stake Z (auxiliary
current probe) and the earth ground test electrode X, while the drop in voltage potential is measured
between the inner earth stake Y (auxiliary potential probe) and the earth ground test electrode X.
Using Ohm’s Law (V = IR), the tester calculates the resistance of the earth electrode. Figure 3–16
illustrates this testing principle.
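The measurement principle above can be sketched as follows. The 61.8 percent figure comes from the earlier discussion of the characteristic curve, and all numeric values are example figures only:

```python
def earth_resistance(measured_volts, injected_amps):
    # Ohm's law rearranged (R = V / I): the tester injects a known
    # current between stake Z and electrode X, measures the potential
    # drop between stake Y and electrode X, and divides.
    return measured_volts / injected_amps

def probe_position(distance_to_z_ft):
    # Place the voltage probe at 61.8 percent of the distance to the
    # current probe for the reading closest to the theoretical true value.
    return 0.618 * distance_to_z_ft

# Example: 0.25 V measured while injecting 50 mA gives 5 ohms, and with
# the outer stake at 80 ft the voltage probe belongs near 49.4 ft.
resistance = earth_resistance(0.25, 0.050)
position = probe_position(80)
```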
To achieve the highest degree of accuracy when performing a three-point ground resistance test, it is
essential that the probes are placed outside the sphere of influence of the ground electrode under test
and the auxiliary earth. If you do not get outside the sphere of influence, the effective areas of
resistance will overlap and invalidate any measurements you are taking. The table below is a guide
for appropriately setting the auxiliary potential electrode (inner stake Y) and the auxiliary current
electrode (outer stake Z). To test the accuracy of the results and to make sure that the ground stakes
are outside the spheres of influence, reposition the inner stake Y three feet in either direction and take
a fresh measurement. If there is a significant change in the reading, say 30 percent, you need to
increase the distance between the ground rod under test X, the inner stake Y, and the outer stake Z
until the measured values remain fairly constant when repositioning the inner stake.
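The recheck described above amounts to a simple comparison. The function name and example readings below are our own; the 30 percent figure follows the text:

```python
def needs_more_separation(first_reading, recheck_reading, tolerance=0.30):
    # A change of more than about 30 percent after moving the inner
    # stake means the effective resistance areas still overlap, so the
    # stake spacing should be increased and the test repeated.
    change = abs(recheck_reading - first_reading) / first_reading
    return change > tolerance

# 4.8 ohms, then 6.5 ohms after repositioning stake Y: roughly a 35
# percent change, so the stakes need more separation.
overlap = needs_more_separation(4.8, 6.5)
```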

The goal in ground resistance testing is to achieve the lowest ground resistance value that makes
sense economically and physically. Ideally, a ground should have zero ohms of resistance. However,
the National Fire Protection Association (NFPA) and the Institute of Electrical and Electronics
Engineers (IEEE) recommend a ground resistance value of 5.0 ohms or less, and it is this value that
the telecommunications industry has often used.

Figure 3–16. Ground resistance testing principle.

Approximate Distance to Auxiliary Electrodes

Depth of ground electrode (X)   Distance to the inner stake (Y)   Distance to the outer stake (Z)
6 ft                            45 ft                             72 ft
8 ft                            50 ft                             80 ft
10 ft                           55 ft                             88 ft
12 ft                           60 ft                             96 ft
18 ft                           71 ft                             115 ft
20 ft                           74 ft                             120 ft
30 ft                           86 ft                             140 ft

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.

023. Why do we test ground?


1. Why do we test ground?

2. What test equipment is used to identify poor ground?

024. Basics of earth ground testing


1. How many points of contacts are used in the fall of potential testing method?

2. What is each point of contact called in the fall of potential testing method?

3. What distance represents the closest value to the theoretical true resistance measurement?

4. How are the earth stakes placed in the soil?

5. Which law is used to calculate the resistance of the earth electrode automatically with the earth
ground tester?

6. Where are the probes placed in order to achieve the highest degree of accuracy?

7. What is the recommended ground resistance value by the NFPA and IEEE?

3–5. Protocol Analyzer


Network test equipment provides a means of checking and controlling the operational characteristics
of equipment that is on a local area network (LAN), metropolitan area network (MAN), or wide area
network (WAN). As an RF Transmission Systems technician, you will most likely use this equipment
to test Prominas. Here, we discuss the general concepts of a protocol/network analyzer. Your
equipment’s operating characteristics may differ somewhat from those presented here.

025. Operational characteristics of a protocol analyzer


A protocol analyzer provides a capability for digital network diagnostics and for developing
communications software. These analyzers have increased in importance for systems control use over
the years and will continue to do so. A protocol analyzer generates, monitors, and captures data traffic
moving across a network connection. Received data is deciphered according to the rules of the
network protocol and the results are displayed.
General description
The protocol analyzer may be a specialized laptop computer, small handheld unit or software
application operated on a common personal computer. In most cases, it is a portable unit that can be
transported to the physical location of individual network connections. The Agilent Technologies
Advisor, Network Associates Sniffer, Fireberd 6000, and Network Communications Network Probe
are a few of the protocol analyzers that are used in system control facilities (SCF) and network
control centers. These analyzers count total frames, collisions, error frames, broadcasts, and other
statistical data. The data may be displayed in a numerical, graph, or speedometer gauge format. Disk
storage provides the capability for the data to be captured and saved for later viewing. We will
discuss the features of the Agilent Technologies Advisor and other Microsoft Windows based
protocol analyzers.

Theory of operation
Network devices are connected to a LAN via an internal network interface card (NIC). As data moves
across the network, the NIC only processes broadcast frames and frames with its media access control
(MAC) address. However, the NIC in a protocol analyzer is configured to process all frames. In serial
circuits, the selection of frames for processing is not a concern. All frames that enter a serial interface
are processed. The frames are then sent to the other components of the protocol analyzer for further
processing. These components are as follows:
1. Counter—this component counts bytes, frames and errors as they are processed by the NIC.
2. Filter—the filter discards frames based on the filter definition or activates a trigger based on
the trigger definition.
3. Buffer—this component stores all frames based on the user configuration inputs.
4. Protocol analyzer CPU—this specialized processor processes the frames based on the
selected test and the user configuration inputs.
5. Personal computer CPU—this processor provides the Microsoft Windows operating system
capabilities, user interface and display information based on the user inputs.
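As a simplified illustration (not any particular analyzer's implementation), the counter, filter, and buffer stages can be modeled like this, with each frame represented as a small dictionary whose field names are made up for the example:

```python
def process_frames(frames, keep=lambda frame: True):
    # Toy model of the counter -> filter -> buffer stages; keep()
    # stands in for the user-defined filter definition.
    counters = {"frames": 0, "bytes": 0, "errors": 0}
    buffer = []
    for frame in frames:
        counters["frames"] += 1              # counter stage
        counters["bytes"] += frame["bytes"]
        counters["errors"] += int(frame["error"])
        if keep(frame):                      # filter stage
            buffer.append(frame)             # buffer stage
    return counters, buffer
```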
The protocol analyzer stores the received frames in a buffer in random-access memory (RAM). The
frames are processed according to the protocol rules that are identified by the data-type field of the
frame. For example, AppleTalk frames are processed using the protocol rules of AppleTalk and High-
level Data Link Control (HDLC) frames are processed using the protocol rules of HDLC.
The operator may configure the protocol analyzer to display the captured frames in a specified
manner using a filter. After the operator defines the filter, unwanted information is suppressed and
only the desired data is displayed. Filtered information remains in the buffer and can be retrieved later
if needed. For ease of use, the hexadecimal data fields may be converted to ASCII text, which is
easier to read. Since network addresses identify the network nodes, the operator may convert them to
the commonly used node names. It is easier to identify the e-mail server by its name than by its
internet protocol (IP) address.
Interfaces
Multiple interfaces are located at the rear or side panel of most protocol analyzers. For some protocol
analyzers, the operator can change the interfaces. The interfaces allow the protocol analyzer to be
connected to different types of packet and serial networks. Depending on the type and model, the
interface for the protocol analyzer may be Ethernet, fiber distributed data interface (FDDI), Token
Ring, Asynchronous Transfer Mode (ATM), Frame Relay or HDLC. These interfaces may have
different types of physical connectors including straight tip (ST) and FDDI for fiber, Registered Jack
type 45 (RJ–45) and 25-pin D-sub size B (DB–25) for twisted pair, and BNC for coaxial cable.
Capture filters
All data that enters the network interface can be analyzed. In most cases, so much data moves across
the network that finding a specific problem is like finding a needle in a haystack. Capture filters
display and store only the data you are interested in analyzing. You may also use a capture filter to
exclude specific types of frames. The types of filters that can be defined in the configuration menu are
as follows:
 Protocol.
 MAC address.
 IP address.
 Virtual local area network (VLAN).
 Frame attributes.

A filter may be defined to capture only a specific type of protocol frame such as World Wide Web
(WWW) traffic. Create a MAC address filter if you only desire to see the traffic from, to or between
specific hosts. Using the network address of one or more hosts, the IP address filter accomplishes the
same tasks as the MAC address filter. The VLAN filter captures only frames that are transmitted
across a specific VLAN. The frame-attribute filter captures only frames with specific attributes, such
as runts, collisions, and frames with a bad frame check sequence (FCS).
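A MAC address filter, for example, is essentially a predicate applied to each frame. The sketch below is illustrative only; the frame field names and the call style are assumptions, not any specific analyzer's interface:

```python
def make_mac_filter(*addresses):
    # Build a capture-filter predicate that keeps only frames whose
    # source or destination MAC address is in the given set.
    wanted = {address.lower() for address in addresses}
    def keep(frame):
        return frame["src"].lower() in wanted or frame["dst"].lower() in wanted
    return keep

keep = make_mac_filter("00:1A:2B:3C:4D:5E")
```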
Each filter that is created can be named and saved to a file for later use. When a filter is created, it is
assigned to one or more buffers. A protocol analyzer normally has 10 to 20 separate buffers. The
captured data in the buffers can be saved to a log file. This file can be reviewed at a later time or e-
mailed to another technician for expert analysis.
Modes of operation
In the transmit mode, the protocol analyzer can be configured to generate a constant stream of data, a
single burst of data, or a repetitive burst of data. The data frames may be configured to include
random data, all zeros, all ones, and other patterns. The frame’s data field may be configured to be of
constant or variable length. VLAN tags are generated to send packets within a VLAN.
In the receive mode, the protocol analyzer monitors and captures data frames across the network.
Triggers are configured to stop on frames that contain specific data patterns. Counters are configured
to display the number of specific occurrences such as the following:
 Packets transmitted.
 Cyclic redundancy check (CRC) errors.
 Undersize packets.
 Oversize packets.
 Address Resolution Protocol (ARP) requests.
 Collisions (Coll).
 Bit errors.
Most protocol analyzers are based on a high-performance personal-computer laptop platform with the
latest version of Microsoft Windows. Specialized protocol analysis application software and data
capture and analysis hardware perform the tests. The hardware is connected to the network by
common LAN connectors such as RJ–45 and Attachment Unit Interface (AUI). When connected to
the network, the analyzer may passively monitor all traffic or act as a node generating data traffic. If
the analyzer is connected to a network switch port, the switch must be configured to mirror all
network traffic to the port to which the analyzer is attached.
Menus
After starting the protocol analysis application, the user interface provides drop-down menus and
toolbars that are similar to Windows. Drop-down menus are above the toolbar. The RUN drop-down
menu allows you to start, stop, or pause the collection or generation of data. The VIEW drop-down
menu allows you to specify the desired display. The available displays include protocols and frames.
The data can be displayed in a pie chart or grid chart. The GO TO drop-down menu allows you to
display a specific record; a record may be located by record number, time stamp, or position in the
capture buffer. The SETUP drop-down menu allows you to configure the protocol
analyzer for the desired operation. The configuration includes the physical interface type, protocols,
line speed, buffer size, filter type and so forth.
After selecting the appropriate toolbar button, a test window appears that lists several available tests:
 Line vital statistics.
 Protocol statistics.
 MAC node statistics.

 Node discovery.
 Connection statistics.
 Active testing.
Line vital statistics
Line vital statistics provide information concerning the network utilization, total number of frames
transmitted, and frame errors on the entire subnet. Selecting the appropriate button on the toolbar
activates the line vital statistics window. You must start the line vital statistics frame capture to
display the statistics. Current and peak values for each statistic are displayed in a table. An
accompanying graph shows the value of each statistic over a period of time. Some of the statistics that
are displayed are as follows:
 Utilization.
 Frames.
 Local collisions.
 Remote collisions.
 Late collisions.
 Bad FCS.
 Runts.
Protocol statistics
Protocol statistics provide information concerning the network utilization and frame errors that are
related to a specific protocol. Selecting the appropriate button on the toolbar activates the protocol
statistics window. You must start the frame capture to display the statistics. Current and average peak
statistics for each protocol are displayed after being selected. Several substatistics for each protocol
type may be displayed. Listed below are some of these substatistics:
 Utilization.
 Fragment count.
 Time-to-live packets.
 Simple Network Management Protocol (SNMP) packets.
 Bad FCS.
 Collisions.
 Broadcasts.
All statistics will be displayed in a table viewed on the screen. The table shows each protocol and
substatistic. Selecting the appropriate button on the toolbar activates the graph feature.
MAC node statistics
MAC node statistics report errors that occur at the physical layer such as bad FCS, short frames, and
jabbers. These types of problems usually indicate a bad cable, connector, or NIC. Selecting the
appropriate button on the toolbar activates the MAC node statistics window. You must start the frame
capture to display the statistics. Statistics for the top 20 nodes are automatically shown. Selecting a
specific node displays detailed information about the frames for that node. You may create a filter so
that only information concerning specific frames or nodes is displayed.
Node discovery
Node discovery automatically runs in the background when the protocol analyzer application is
activated. Selecting the appropriate button on the toolbar activates the MAC discovery window. The
node discovery window shows the MAC address of each node. A drop-down menu allows you to

view nodes by IP address, Domain Name Service (DNS) name, protocol, or error events. You may
manually add nodes and edit node names.
Connection statistics
Connection statistics provide information concerning the bandwidth utilization and the number of
connections that are related to specific nodes. Selecting the appropriate button on the toolbar activates
the connection statistics window. You must start the frame capture to display the statistics. The data is
listed in a statistics table, a connection graph, and a pie chart.
The data for each node in the statistics table includes MAC address, IP address, bytes sent and
received, protocols used, network utilization and number of connections. The connection graph
displays the connections between the top 12 nodes. Selecting the appropriate button on the toolbar
activates the connection graph or pie chart within the connection statistics window. The pie chart
shows the number of frames, utilization, frames per second, and bytes per second that are transmitted
by all nodes or a specific node. The tables and graphs help you determine the following:
1. Which users consume the most bandwidth.
2. The number of connections to a particular network or site.
3. The volume of traffic leaving a network, VLAN, or subnet.
4. Which router or switch interface is the busiest.
Active testing
Most protocol analysis measurements passively monitor the network to diagnose problems and report
statistics. Active tests generate data traffic on the network to analyze network performance. Before
generating any traffic, the protocol analyzer must be configured with a valid IP address on the
network. You must activate the GO TO drop-down menu and select “configure analyzer.” At the
configuration window, an IP address and default router must be entered. Now you may select the
appropriate button on the toolbar to activate the active tests window. In the active tests window, you
may configure a filter or select an active test. Some of the active tests are Packet Internetwork Groper
(PING), trace route, and traffic generator.
PING
The PING utility is used to determine if a network node is connected or responding. It sends a series
of Internet Control Message Protocol (ICMP) echo request messages to the target node. The results
identify the target node’s response and the time (delay in milliseconds) it took for the target to receive
the request and respond.
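A real analyzer reads the delay directly from the ICMP reply, but the idea can be illustrated by parsing the delay out of a familiar echo-reply line. The line format and function name below are our own:

```python
import re

def parse_ping_reply(line):
    # Pull the round-trip delay in milliseconds out of one echo-reply
    # line; returns None when the target did not respond.
    match = re.search(r"time=([\d.]+)\s*ms", line)
    return float(match.group(1)) if match else None

delay = parse_ping_reply("64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=1.23 ms")
```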
Trace route
The trace route test is used to see the path that packets are traveling and the time (delay in
milliseconds) it takes the packets to complete the trip. Each node that processes the packet responds
with the IP address of its network interface. The results list the IP address of the network interface of
each node the packet is routed through until the packet reaches the target node. The response time
(delay in milliseconds) it took for each node to respond is also displayed.
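The result of a trace route test can be thought of as an ordered list of (interface address, delay) pairs. The following sketch only illustrates that data shape, with made-up addresses; it is not a traceroute implementation:

```python
def summarize_trace(hops):
    # hops is an ordered list of (interface_ip, delay_ms) pairs, one per
    # responding node; the last entry is the target.  Returns the hop
    # count and the target's response delay.
    if not hops:
        return 0, None
    return len(hops), hops[-1][1]

hop_count, target_delay = summarize_trace([
    ("192.168.1.1", 0.8),   # first router on the path
    ("10.10.0.1", 4.2),
    ("172.16.5.9", 9.7),    # target node
])
```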
Traffic generator
The traffic generator allows you to send frames to a specific node. You may send frames that have a
default pattern or create your own pattern. Some of the types of frames you may send are ICMP,
ARP, and File Transfer Protocol (FTP). A separate window displays the response frames from the
receiving node. For stress testing, you may set the traffic generator to send enough frames to use a
specific percentage of the network bandwidth. You may also determine the number of frames for the
generator to send.
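The bandwidth-percentage setting above amounts to a simple rate calculation. This sketch, using example figures only, shows how many frames per second would load a link to a requested percentage of its capacity:

```python
def frames_per_second(link_bps, percent, frame_bits):
    # Frames per second the generator must send to load the link to the
    # requested percentage of its bandwidth, rounded down to whole frames.
    return int(link_bps * (percent / 100.0) // frame_bits)

# Example: loading a 10 Mbps link to 25 percent with 512-bit frames
# takes 4,882 frames per second.
rate = frames_per_second(10_000_000, 25, 512)
```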

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.

025. Operational characteristics of a protocol analyzer


1. What are the primary capabilities provided by a protocol analyzer?

2. Explain the difference between a protocol analyzer NIC and other NICs.

3. What function of the protocol analyzer excludes specific types of frames?

4. List four types of specific occurrences that can be displayed by counters.

5. What menu is best used to report errors that occur at the physical layer such as bad FCS, short
frames, and jabbers?

6. What kind of information does the connection statistics menu provide?

7. Identify the three types of active tests that are conducted with a protocol analyzer.

Answers to Self-Test Questions


016
1. The rotary switch.
2. (1) Digital display, (2) visual annunciators, (3) analog bar graph, and (4) range indicator.
3. Analog bar graph.
4. The range push button lets you manually select the measurement range.
5. The touch-hold push button locks the measurement into the display for viewing and automatically updates
the display when you take a new measurement.
6. (1) amperes, (2) milliamps/microamps, (3) volts/ohms/diode, and (4) common.
7. When you move the rotary switch to any position from the OFF position.
017
1. −1,000 to +1,000 volts DC; −320 to +320 millivolts DC.
2. 10 amps.
3. The Ω annunciator is showing in the display.
4. Placing the leads across a “good” diode produces “OL” in the display while reversing the leads produces a
continuous audible tone.

018
1. Amplitude vs time.
2. Voltage = vertical axis, time = horizontal axis, and depth = intensity.
3. It allows the sweep generator circuitry time to start a sweep before the signal reaches the cathode ray tube
vertical deflection plates. This enables you to view the leading edge of the signal waveform.
4. DC coupling.
019
1. To provide isolation for scope inputs and prevent circuit loading.
2. The high resistance, passive divider and x1 probes, active (field effect transistor), and current probes.
3. One megaohm shunted by 20 pF of capacitance.
4. Highest (maximum).
020
1. Digital storage oscilloscopes digitize the input signal for storage or display.
2. Charged coupled devices accept fast-changing data and send it to the analog-to-digital-converter at a slower
rate for more accurate data collection.
3. (a) Sampled data points stored in the digital storage oscilloscope’s memory. (b) Stored waveform points that
make up the waveform record. (c) Number of waveform points used to make the waveform record.
4. It allows the cathode ray tube to receive data at a slower rate as opposed to keeping up with some of the
high frequency signals fired directly on a cathode ray tube in an analog scope.
5. Real-time and repetitive-sampling.
6. This is a process that estimates what the signal will look like between samples and “fills in the blanks”
between data points.
7. Linear and sine.
8. Sequential sampling takes a sample at a predetermined time after the trigger in a progressive order until
there are enough data points to reconstruct several periods of the signal. Random sampling takes samples at
an undetermined point on a signal and stores them.
9. Undersampling due to having the time base turned down too low, thus reducing the sample rate for a
frequency that is too high for a given setting.
10. (1) In sample mode, the digital storage oscilloscope creates a record point by saving the first sample in a
collection of sample points. (2) In peak detect mode, the digital storage oscilloscope saves the highest and
lowest points in a collection of samples. (3) In averaging mode, the digital storage oscilloscope shows a
record that is a collection of several acquisitions of a repeated signal that is averaged over time.
11. To set the digital storage oscilloscope’s sampling rate, determine what type of processing will occur and
how many record points can be shown.
12. 1000 (1K) to 4000 (4K) points.
13. PRETRIG/POST TRIG switch.
14. These triggers are disabled allowing the digital storage oscilloscope to continuously acquire and display
incoming signals.
15. The pretrigger waveform is updated by the trigger and post trigger scan from the trigger position to the
right.
16. It stops.
17. Delta volts, delta time, one over delta time, and delay time measurements.
18. POSITION/CURS/SELECT WAVEFORM switch.
19. A 4-bar graph on the screen indicates which portion of the record you are viewing. A 4K record length
displays 1K samples at a time. You have to look at four different screens to see the entire record length.
20. It moves the cursor set between displayed waveforms.
21. They control the MENU operation while the MENU is displayed, and they control the storage and display
of the SAVE reference waveforms when the MENU is not displayed.

021
1. To provide a bit error detection system that can determine the received data quality.
2. Bit error analysis, timing analysis, or delay measurements. Also, the bit error rate test set is a source of
pulse code modulation (PCM) data used to test a communications data link or a bit synchronizer output.
Many bit error rate test sets can generate command test patterns, telemetry PCM formats, and bit coding
patterns at telemetry data rates.
3. To provide an internal clock reference—usually selectable from the front panel.
4. As the basic clock generator, it provides all clock phases. It is periodically frequency calibrated by a
crystal-controlled oscillator in the AUTO-CAL section.
5. To divide the voltage controlled oscillator clock and provide a frequency equal to the bit rate selected.
6. To use the binary coded decimal information to control the voltage controlled oscillator center frequency,
and to control the range the countdown circuits will decode.
7. To accept either the external or internal clock and generate either a pseudo-random noise pattern or a 48-bit
pattern you program from the front panel.
8. Data/clock select encoder.
9. To receive the internal clock from the clock selector and generate a 2,047 bit long pseudo-random pattern.
10. To provide a blanking period at the start of the 2,047 bit pseudo random noise frame (under the control of
the blanking selector switch). The blanking period can be 0, 32, 64, 96, or 128 bits long.
11. To accept six 8-bit words, one word at a time, from the data entry switches and store them in a 48-bit
register.
12. The pattern select and clock select switches; it generates a data select code and a clock select code.
13. Data/sync selector.
14. To convert the non-return-to-zero level (NRZ-L) data into the desired coding, such as NRZ-mark (NRZ-M).
15. The clock drivers convert the selected clock signal to a system compatible level and a buffered transistor-
transistor logic (TTL) level, and the data drivers convert the encoded data signal to a system compatible
level and a buffered TTL level. The outputs are input to the pattern synchronizer and error counter
data/clock receivers block.
16. After mixing the three input signals linearly, it provides a front panel adjustable offset and output level for
a low impedance load.
17. To receive the selected pattern, automatically synchronize on the pattern, and determine the bit error rate
induced after transmission through the system.
18. Either system-compatible data or buffered TTL data from the clock drivers and the data.
19. Data/clock receivers.
20. To the 48-bit synchronizer/comparator, the pseudo random noise synchronizer/comparator, the external
data synchronizer/comparator, and the error rate counter and display sections.
21. To accept the received clock and data, and synchronize on the pattern when it’s equal to the 48-bit parallel
input.
22. To accept the received clock and data from the data/clock receivers and synchronize with the pattern
when it’s equal to its own 2,047 bit pseudo random noise pattern.
23. To accept the external data, delay it under the front panel control, and compare it with the received data
from the data/clock receivers.
24. To the error rate counter and display.
25. At the end of the count cycle or when you push the reset switch. A reset pulse will interrupt this output.
022
1. To compare the number of bits in error to the total number of bits transmitted. The ratio of the number of
bits in error to total number of bits transmitted.
2. The bit error rate.

3. In a random distribution, each bit is equally likely to be in error. In a clustered distribution, the errors occur
in clusters.
4. (1) A known bit pattern, usually pseudo random, is generated at the sending end of the data channel or
equipment under test, and is transmitted through the channel.
(2) At the receiving end, an identical pattern is generated and synchronized with the incoming signal.
(3) The received signal is compared to the locally generated pattern in the receiving equipment.
(4) If any bit in the received signal is not the same as the generated bit, it is assumed the received bit is in
error.
(5) An error signal is generated and used to operate a counter.
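The comparison described in the answer above can be sketched in a few lines of Python. This is an illustrative model only, not the circuitry of any particular test set; the pattern length matches the 2,047-bit pseudo-random frame mentioned elsewhere in this unit, and the flipped bit positions are made up.

```python
import random

def bit_error_count(sent, received):
    """Compare two equal-length bit streams bit by bit and return
    (number of errors, bit error rate)."""
    errors = sum(1 for s, r in zip(sent, received) if s != r)
    return errors, errors / len(sent)

# A pseudo-random test pattern, with three bits flipped to simulate
# errors introduced by the channel under test.
random.seed(0)
pattern = [random.randint(0, 1) for _ in range(2047)]
received = pattern.copy()
for i in (5, 100, 900):   # hypothetical error positions
    received[i] ^= 1

errors, ber = bit_error_count(pattern, received)
# errors counts each mismatch, just as the error counter in step (5) does.
```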
5. In many different ways. So, you must use and understand the technical manual for the particular test
equipment.
6. (1) Bit rate and timing.
(2) Test patterns.
(3) Synchronization.
(4) Interface and data coding.
(5) Readouts and special features.
7. With external timing.
8. Low to intermediate; 1,200 to 9,600 bps.
9. Higher speed circuits; 50 kbps to 64 kbps.
10. Very high speed T–1 circuits; 1.544 Mbps.
11. A bit rate of 9,600 bps, or the closest bit rate available, to test the maximum output of the equipment.
12. 64 kbps.
13. The pattern generator.
14. The detection of an “out-of-sync” condition by the test set receiver.
15. You need to know how test equipment responds to the out-of-sync condition.
16. 25-pin connectors, usually the Electronic Industries Association (EIA) RS–232 type.
17. The technical manuals.
18. Information to count the number of errors that occur in a specified number of transmitted bits.
023
1. To verify the ground-resistance specification that the ground rod or grid must meet.
2. Earth ground tester.
024
1. 3.
2. The test ground, voltage probe, and current probe.
3. 61.8 percent of the total distance.
4. In a direct line away from the earth electrode.
5. Ohm’s Law.
6. Placed outside the sphere of influence of the ground electrode under test and the auxiliary earth.
7. 5 ohms or less.
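The fall-of-potential arithmetic in the answers above can be sketched as follows. This is a minimal illustration of the 61.8 percent rule and the Ohm's Law computation; the probe distance and meter readings are made-up values.

```python
def potential_probe_distance(current_probe_distance):
    """Fall-of-potential rule: the voltage probe is placed at 61.8 percent
    of the distance from the electrode under test to the current probe."""
    return 0.618 * current_probe_distance

def ground_resistance(voltage_drop, test_current):
    """Ohm's Law: the tester derives resistance from the measured voltage
    drop and the known current it injects into the earth."""
    return voltage_drop / test_current

# With the current probe 100 ft from the electrode, the voltage probe
# goes at 61.8 ft, in a direct line away from the earth electrode.
probe_ft = potential_probe_distance(100.0)

# A 2 V drop at 0.5 A of injected current indicates 4 ohms, which meets
# the 5-ohms-or-less criterion.
r_ohms = ground_resistance(2.0, 0.5)
```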
025
1. Digital network diagnostics and software development.
2. The NIC in a protocol analyzer is configured to process all frames.
3. Capture filters.
4. Any four of the following.
(1) Packets transmitted.
(2) CRC errors.
(3) Undersize packets.
(4) Oversize packets.
(5) ARP requests.
(6) Collisions.
(7) Bit errors.
5. MAC node statistics.
6. Connection statistics provide information concerning the bandwidth utilization and the number of
connections that are related to specific nodes.
7. (1) PING.
(2) Trace route.
(3) Traffic generator.

Do the unit review exercises before going to the next unit.



Unit Review Exercises


Note to Student: Consider all choices carefully, select the best answer to each question, and circle
the corresponding letter. When you have completed all unit review exercises, transfer your answers to
the Field-Scoring Answer Sheet.
48. (016) Which feature of the Fluke 8025A multimeter do you use to select various measurement
functions?
a. Circuit jacks.
b. Rotary switch.
c. Range push button.
d. Touch-hold button.
49. (016) Which part of the Fluke 8025A’s display section indicates the absolute value of the input?
a. Digital.
b. Annunciator.
c. Range indicator.
d. Analog bar graph.
50. (016) Which feature of the Fluke 8025A locks the measurement into the display for viewing and
automatically updates the display when you take a new measurement?
a. Rotary switch.
b. Range push button.
c. Power-up self-test.
d. Touch-hold push button.
51. (017) When you use the Fluke 8025A, which voltage range do you select to measure 50 volts DC?
a. Volts AC.
b. Volts DC.
c. Millivolts AC.
d. Millivolts DC.
52. (017) When testing a diode with the 8025A, placing the leads across a “good” diode produces an
“OL” (open) on the display in one direction, while reversing the leads produces
a. a continuous audible tone.
b. “SL” (short) on the display.
c. a wavering audible tone.
d. three short beeps.
53. (018) How are voltage, time, and depth represented on the oscilloscope display?
a. Voltage = intensity, time = vertical axis, and depth = horizontal axis.
b. Voltage = intensity, time = horizontal axis, and depth = vertical axis.
c. Voltage = horizontal axis, time = vertical axis, and depth = intensity.
d. Voltage = vertical axis, time = horizontal axis, and depth = intensity.
54. (018) Which oscilloscope’s input coupling position allows you to view both AC and DC
components of the input signal?
a. DC.
b. AC.
c. Norm.
d. Ground.

55. (019) Which oscilloscope probe is essentially just a shielded piece of wire?
a. Current.
b. Passive 1:1.
c. Passive divider, 1:10.
d. Active field effect transistor.
56. (019) Which probe lets you use the oscilloscope to measure higher voltage levels, raises the input
impedance, does not require a bias voltage, and attenuates noise?
a. Current.
b. Passive 1:1.
c. Active (FET).
d. Passive divider, 10:1.
57. (019) Which probe does not need to actually make contact with the circuit under test?
a. Current.
b. Passive 1:1.
c. Active (FET).
d. Passive divider, 10:1.
58. (020) Sampled data points are stored in the memory of a digital storage oscilloscope as
a. sequential records.
b. waveform points.
c. record lengths.
d. record points.
59. (020) What area of a digital storage oscilloscope takes digitized samples and performs numerous
manipulations on the data including measuring rise and fall times, periods, time intervals, and
math computations?
a. Microprocessors.
b. Acquisition.
c. Memory.
d. Input.
60. (020) What type of sampling does a digital storage oscilloscope normally use on single-shot or
seldom-occurring signals?
a. Sequential.
b. Repetitive.
c. Real-time.
d. Random.
61. (020) In real-time sampling, what type of interpolation is used when measuring pulse waves?
a. Sine.
b. Linear.
c. Non-uniform.
d. Companding.
62. (020) In acquiring a 4K record length on a digital storage oscilloscope, how many samples can
you view at one time?
a. 1,000.
b. 2,000.
c. 3,000.
d. 4,000.

63. (021) Which pattern simulator section of a bit error rate test set controls the clock selector and
selects either the external clock or the internal clock for processing?
a. Clock drivers.
b. Code converter.
c. Data/sync selector.
d. Data/clock select encoder.
64. (021) What pattern simulator section of the bit error rate test set accepts a 48-bit parallel word and
generates a serial pattern?
a. 48-bit transmitter only.
b. 48-bit register and 8-bit word display.
c. Pseudo random noise (PRN) generator.
d. Pseudo random noise (PRN) comparator.
65. (022) Which is the primary means of measuring the quality of transmitted digital information, and
how is this means derived?
a. Bit error rate (BER), the ratio of the number of bits in error to total number of bits transmitted.
b. Bit error rate (BER), the difference between the number of bits transmitted and the total
number of bits received.
c. Distribution of errors, the ratio of the number of bits in error to total number of bits transmitted.
d. Distribution of errors, the difference between the number of bits transmitted and the total
number of bits received.
66. (022) Which of these bit rates do you use for a bit error rate test on a 64 kbps data circuit?
a. 511 bps.
b. 9600 bps.
c. 64 kbps.
d. 1.544 Mbps.
67. (022) On the bit error rate test set, the 25-pin connectors are designed to operate with data
modems to
a. simplify connection of test equipment to the modem under test and to transmit the test pattern.
b. simplify connection of test equipment to the modem under test and to carry the necessary
signaling and handshake signals for modem operations.
c. isolate test equipment from the data circuit under test and to transmit the test pattern.
d. isolate test equipment from the data circuit under test and to carry the necessary signaling and
handshake signals for modem operations.
68. (023) How often must you continue to test system grounds?
a. Randomly.
b. Periodically.
c. Every 10 years.
d. Just one time only.
69. (023) What type of equipment helps resolve intermittent electrical problems because of poor
grounding?
a. Earth ground tester.
b. Frequency counter.
c. Spectrum analyzer.
d. Oscilloscope.

70. (024) Resistance reading on the ground under test is plotted on a graph of
a. resistance versus distance.
b. current versus distance.
c. frequency versus time.
d. amplitude versus time.
71. (024) A known current is generated by the earth ground tester between the earth ground electrode
and the
a. grounding point.
b. inner earth stake Y.
c. auxiliary current probe.
d. auxiliary potential probe.
72. (025) What component of a protocol analyzer discards frames based on its definition or activates a
trigger based on the trigger definition?
a. Filter.
b. Buffer.
c. Counter.
d. Central processing unit.
73. (025) What protocol analyzer test window provides information concerning the network
utilization and frame error that are related to a specific protocol?
a. Media Access Control (MAC) node statistics.
b. Connection statistics.
c. Protocol statistics.
d. Node discovery.
74. (025) What protocol analyzer test window automatically runs in the background when the
protocol analyzer application is activated?
a. Media Access Control (MAC) node statistics.
b. Connection statistics.
c. Protocol statistics.
d. Node discovery.
75. (025) What protocol analyzer test window provides information concerning the bandwidth
utilization and the number of connections that are related to specific nodes?
a. Media Access Control (MAC) node statistics.
b. Connection statistics.
c. Protocol statistics.
d. Node discovery.

Please read the unit menu for unit 4 and continue.


Unit 4. Radio Frequency Test Equipment
4–1. Radio Frequency Signal Generators ....................................................................................... 4–1 
026. Characteristics and applications of a radio frequency signal generator ............................................. 4–1 
4–2. Measuring Power ...................................................................................................................... 4–3 
027. Defining the terms dB and dBm ........................................................................................................ 4–3 
028. Power meter—features and controls .................................................................................................. 4–5 
029. Wattmeter—features and controls ..................................................................................................... 4–7 
4–3. Measuring Frequency ............................................................................................................ 4–14 
030. Functions of a spectrum analyzer .................................................................................................... 4–14 
031. Functions of an electronic counter ................................................................................................... 4–15 
4–4. Communication Analyzer ...................................................................................................... 4–17 
032. Characteristics of the communication service monitor .................................................................... 4–18 
033. Operational capabilities of the communication service monitor...................................................... 4–19 

THERE IS AN EXTENSIVE NEED in C-E for precision radio frequency generation and
measurement equipment. Test systems, transmitters, receivers, and subcarrier discriminators
rely heavily on radio frequency test equipment. As a skilled systems troubleshooter, you’ll be
required to get accurate and reliable readings from your test equipment. This unit gives you a general
overview of radio frequency test equipment that technicians use.

4–1. Radio Frequency Signal Generators


The specific output of a signal generator can be used to test many different pieces of equipment, such
as a receiver or an antenna feedhorn. You’ll find a signal generator in every electronic maintenance
work center. In this section we discuss the characteristics and applications of a radio frequency (RF)
signal generator. Common names for this piece of equipment include radio frequency generator, RF
frequency generator, signal generator, or any combination of the previously mentioned names.

026. Characteristics and applications of a radio frequency signal generator


This lesson covers characteristics and applications of a radio frequency signal generator. You will be
expected to know basic facts and terms about this subject.
Characteristics
Radio frequency signal generators provide a stable source of frequency ranges much higher than
audio generators. For example, the HP 8640B radio frequency generator covers a range of 500 kHz to
512 MHz (450 kHz to 550 MHz with over-range). With a doubler, the output can extend to 1,100
MHz. With a variable audio oscillator, the 8640B can extend the output range of the generator down
to 20 Hz. This, together with a calibrated output and modulation, permits complete radio frequency
and intermediate frequency performance tests on virtually any type of high frequency, very high
frequency, or ultra high frequency receiver.
Figure 4–1 is a block diagram of a representative radio frequency signal generator. In addition to the
power supply, a typical radio frequency generator contains three sections: an oscillator, a modulator,
and an attenuator.

Figure 4–1. Radio frequency signal generator block diagram.

The oscillator section’s function is to produce a signal that can be set accurately in frequency at any
point within the generator’s range. The modulator’s function is to produce an audio modulating signal
to superimpose on the radio frequency signal produced in the oscillator. An audio oscillator may
provide the modulating signal within the generator or from an external source, known as internal
modulation and external modulation, respectively. Some signal generators use either of these two
modulation methods. In addition, a means of disabling the modulator section is available so the pure
unmodulated signal from the oscillator is available for use.
The type of modulation selected depends on the particular signal generator application. The
modulating signal may be a sine wave, square wave, or pulses of varying duration. Some special
generators make provisions for pulses covering a wide range of repetition rates and widths.
In addition to the internal modulator, most radio frequency generators have connections through which
you can apply an external modulation source of any desirable waveform. Metal shielding
surrounds the unit to keep signals in the oscillator from affecting the circuit under test.
The output circuit of the radio frequency signal generator usually contains a calibrated attenuator and
an output level meter. The attenuator lets you select the amount of output required. The output level
meter provides an indication and permits control of the generator output voltage.
Applications
You can use a radio frequency generator to test and troubleshoot transmitters, receivers, antenna
systems, or ground stations. Here are some applications for the radio frequency generator.
1. Verify transmission within designated frequency ranges by comparing transmitter outputs
with known radio frequencies.
2. Align telemetry receivers by injecting the system with range-standard modulated radio
frequency.
3. Check transmission lines and antenna systems for proper operation.

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.

026. Characteristics and applications of a radio frequency signal generator


1. What is the frequency range of the HP 8640B radio frequency signal generator?

2. What extends the HP 8640B range from 20 Hz to ultra high frequencies?



3. What is the function of the oscillator section?

4. What is the function of the modulator?

5. What types of modulated signals are possible with the radio frequency generator?

6. What does the output circuit of the radio frequency signal generator usually contain and what are
their functions?

7. List some applications of the radio frequency generator.

4–2. Measuring Power


Achieving proper output power level is extremely important to system operation. To operate
effectively, each component in the system must receive the proper signal level from the previous
component and pass the proper level to the succeeding component. If the power level becomes too
low, the signal becomes obscured in noise. Distortion results if the power level gets too high.
Therefore, the measurement of power is critical at every level—from the output of the overall system
down to the fundamental subassemblies.
In this section we’ll discuss the features and controls of a power meter and the terms related to power
measurement. Knowledge and operation of the power meter is very important, because as a technician
you’ll frequently need to measure the gain and/or power of a device. You must also be able to
determine quickly and accurately the power levels of a signal or noise at any required test point. To
learn the power measurement, we’ve chosen the Hewlett Packard 436A Power Meter and the Bird
4391M multi-purpose thruline wattmeter as examples.

027. Defining the terms dB and dBm


In this lesson we will cover dB and dBm terms. You will be expected to know basic facts and terms
about this subject.
Trying to use a power meter without understanding the displayed units would be like driving a car
without first knowing how to read road signs. Sooner or later you’ll realize the importance. One of the
first things you must understand in learning how to use a power meter is the terminology. The two
terms—decibel (dB) and decibels referenced to a 1-milliwatt standard (dBm)—are the most frequently
misunderstood terms associated with power meters and power measurement. In this lesson we’ll
cover these terms by providing practical examples of their use.
Decibel (dB)
You can take power measurements in watts, but that means you must deal with very long numbers
with many decimal places. This is clumsy and time consuming, especially when you’re looking for
quick and accurate results. A simpler method of dealing with signal and power levels and ratios is
needed. The decibel and measurements associated with the decibel are the answer to this problem.
The decibel doesn’t (in itself) indicate power; instead, it indicates a ratio or comparison between two

power levels. Actually, the decibel is part of a larger unit called the bel. As originally used, the bel
represented a power ratio of 10 to 1, or 10:1, between the strength of two sounds. For you to better
understand the bel, consider three sounds of unequal power intensity. If the power intensity of a
second sound is 10 times the power intensity of the first, its power level is said to be 1 bel above the
first. If the third sound has a power intensity that is 10 times the second, its level is 1 bel above the
second. But, since the third sound is 100 times as intense as the first, its level is 2 bels above the first.
Thus, a power ratio of 100:1 is represented by 2 bels; a power ratio of 1,000:1, by 3 bels; a power
ratio of 10,000:1 by 4 bels; etc. You can readily see that the concept of a bel represents a logarithmic
relationship, since the logarithm of 100 to the base 10 equals 2 (corresponding to 2 bels), the
logarithm of 1,000 to the base 10 equals 3 (corresponding to 3 bels), etc.
This logarithmic characteristic of the bel makes it a very convenient means for expressing power
ratios, but it’s still inconvenient, since the bel is a rather large unit of measure. Usually a smaller
unit—the decibel (dB) is used. Ten decibels equal 1 bel. A 10:1 power ratio represented by 1 bel can
also be represented by 10 decibels (10 dB). A 100:1 ratio (2 bels) is represented by 20 dB, and a
1,000:1 ratio (3 bels) is represented by 30 dB, etc.
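The bel-to-decibel relationship above reduces to a single formula, sketched here in Python for the ratios just discussed. The function name is just for illustration.

```python
import math

def db(power_ratio):
    """Decibels: 10 times the base-10 logarithm of a power ratio.
    A dB value compares two powers; it is not a power level itself."""
    return 10 * math.log10(power_ratio)

# The ratios worked out in the text:
ten_to_one = db(10)         # 1 bel  = 10 dB
hundred_to_one = db(100)    # 2 bels = 20 dB
thousand_to_one = db(1000)  # 3 bels = 30 dB
```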
Decibels referenced to a 1-milliwatt standard (dBm)
As we said earlier, you should understand clearly that in itself the term decibel doesn’t indicate
power. Instead, it’s a ratio of, or comparison between, two power values. In C-E it’s very often
desirable to express a single value or quantity of power in decibels. This can be done by using a fixed
power level as a reference. To simplify calculations, several standard reference levels have been
adopted, the most widely used being the 1-milliwatt standard.
When 1 milliwatt is used as a reference level, the ratio between the actual power and the 1 milliwatt
reference is expressed in dBm. The abbreviation dBm indicates decibels relative to a 1-milliwatt
standard.
Correct usage of dB and dBm
Now that the terms dB and dBm have been explained, let’s discuss the correct use of these terms.
Using the term dB
Remember, the decibel is a measure of the ratio of the power levels of two different sources or of two
different signals; thus, it’s a measure of comparison. When we say one signal is 2 or 3 or 10 decibels
stronger or weaker than another we don’t indicate how much power either signal has. Instead, the dB
difference between two signals indicates the degree to which one power level exceeds another. For
example, you could say “This signal is 20 dB stronger today than yesterday,” but you couldn’t say,
“The signal strength is 20 dB.” The phrase “20 dB” correctly compares two signals; that is, the signal
today compared to yesterday. Let’s look at another example of correct usage of the term.
Since the dB is a term of comparative signal power, it can be used to describe an amplifier or another
device that changes signal power. In this case, the power gain of the device, in dB, indicates the ratio
of the output power to the input power. A correct statement could be, “The power gain of my stereo
amplifier is 60 dB.” It would be incorrect to say, “The power output of my stereo amplifier is 60 dB.”
Let’s apply this information to a real world situation. As an example we’ll use a transmitter with three
stages of amplification. During troubleshooting, you can take power measurements at the inputs and
outputs of any or all of the stages to determine the gain. For instance, the gain of the driver must be
between 34.5 and 41 dB. If it isn’t, the driver must be replaced. Figure 4–2 is a quick reference for dB
values and the power ratios they represent. First, remember a 2 to 1 power ratio is about 3 dB.
Therefore, we can say, “when we double our transmitter power, it has increased by 3 dB,” or “if the
input to a device is 1 watt and the output is 2 watts, then the gain of the device is 3 dB.” Another
example is that signal A is 3 dB stronger than signal B. If signal B is 2 watts, then signal A must be

twice that, or 4 watts. Also, signal B is 3 dB weaker than signal A, meaning that it’s half the power of
signal A.

Figure 4–2. Values of dB by powers of 2 and 10.

Using the term dBm


The term dBm indicates a measure of power above or below a standard of 1-milliwatt. The advantage
of this term is that it reflects both gain and power level because it’s referenced to the standard of 1-
milliwatt. The term is used much the same as dB, except that one of the two signals involved always
has a strength of 1 milliwatt and so the term dBm can reflect a power level while the term dB can’t.
For instance, if the output is 2 milliwatts, it would now be correct to say “this device has an output
power level of 3 dBm.” The same device would have an output power level of 6 dBm if the output
power were 4 milliwatts.

028. Power meter—features and controls


This lesson covers features and controls of the power meter. You will be expected to know basic facts
and terms about this subject.
The Hewlett-Packard 436A Power Meter in figure 4–3 is a typical general purpose, average-power,
power meter intended for manual and automatic measurement of radio frequency and microwave
power levels. It’s used to measure either absolute or relative power. It can be operated either
manually or through the use of an interface bus and software, and it can be programmed to operate
fully automated. Let’s look at the meter’s operating features.

Figure 4–3. Hewlett-Packard power meter front panel.

Digital display
This particular power meter has a four-digit, seven-segment, light emitting diode (LED) display, with
large digits that are easy to see, even in a high-glare environment.
Choice of display units
Input power to the 436A can be displayed in any of these units shown in the table below:

UNITS ABBREVIATION
Watts W
Milliwatts mW
Microwatts µW
Nanowatts nW
Decibels referenced to a 1-milliwatt standard dBm
Relative decibels dB (REL)

The meter displays absolute power in either watts or dBm, and relative power in dB.
Relative power measurement
Power measurements relative to a user determined power reference level are made possible by
pressing the spring-loaded dB (REF) reference switch. Pressing dB REF zeros the display of any
input power and causes the digital readout to be stored as a dB reference. After this point, any change
in input power from this reference is displayed in the dB relative (REL) mode as ΔdB, in increments
of 0.001 dB.
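The relative-measurement arithmetic can be illustrated with a short sketch. This models only the subtraction the dB (REL) mode performs, not the 436A's actual firmware, and the power values are made up.

```python
import math

def to_dbm(power_mw):
    """Convert a power in milliwatts to dBm (reference = 1 mW)."""
    return 10 * math.log10(power_mw)

# Pressing dB REF stores the current reading as the reference; any later
# change in input power is displayed as the difference from it.
reference = to_dbm(1.0)          # reading when dB REF was pressed
later = to_dbm(2.0)              # input power has since doubled
relative_db = later - reference  # displayed as about +3.01 dB (REL)
```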
Auxiliary meter
The 436A has an analog meter that complements the digital display by showing fast changes in power
level. It’s ideal for monitoring “peaking” output from a transmitter or other variable power devices.
Frequency and power range
The frequency and power range of the power meter/power sensor system are determined by the
particular power sensor selected for use. With the HP 8480 series power sensors available for the HP
436A, the overall frequency range of the system is 100 kHz to 50 GHz, and the overall power range is
–70 dBm to +44 dBm (100 pW to 25 W).


Autorange
This power meter automatically switches through its 5 ranges to provide “hands free” operation. The
RANGE HOLD switch locks the power meter in the selected range when autoranging isn’t desired.
UNDERRANGE and OVERRANGE indicators
There’s an UNDERRANGE indicator light in the display to tell you if the input power is too small to
be measured on the selected range, if autorange is disabled. Also, there’s an OVERRANGE indicator
light to tell you if the input power is too large to be measured on the selected range, if autorange is
disabled. There’s a 20 percent overrange capability in all ranges.
Power units and mode annunciator
The 436A’s annunciator provides an error-free display by showing power units when it’s in the
WATT mode. The mode annunciator indicates the mode of operation: that is, dBm, dB(REL), ZERO
or REMOTE.
Automatic sensor recognition
The power meter continually decodes the sensitivity of the connected power sensor. This information
is then used to control the digital display decimal point location automatically. Also selecting the
WATT mode operation lights the appropriate power units annunciator.
Auto zero
You zero the meter by merely depressing the SENSOR ZERO switch and waiting until the display
shows all zeros before releasing it. The meter is ready to make measurements as soon as the ZERO
light in the mode annunciator goes off.
Radio frequency blanking output
A low signal from this output constitutes a blanking signal when the SENSOR ZERO switch is
engaged. This is a good way to remove the radio frequency input signal to the power sensor during
the automatic zeroing operation.
Recorder output
This provides a linear output voltage with respect to the input power level. This output can be used to
drive a graphing device for printing a graph of the power levels. It can be connected to a digital
multimeter to give a DC voltage readout corresponding to the power levels or the recorder output can
be returned to the input power source for stabilization.
Calibration accuracy
A 1.00 mW, 50-MHz reference output is available on the front panel for calibrating the power meter
and power sensor as a system. Calibration is done using the calibration adjustment (CAL ADJ) and
CAL FACTOR % controls. The CAL ADJ control compensates for slight sensitivity variations associated with
a particular type of power sensor. The CAL FACTOR % control compensates for mismatch losses
over the frequency range of the power sensor.
Interface connector
This feature provides a computer connection for automated tests. In this configuration, the 436A
Power Meter would be controlled by the computer’s software commands.

029. Wattmeter—features and controls


In this lesson we will cover features and control of the wattmeter. You will be expected to know basic
facts and terms about this subject.
The 4391M RF Power Analyst is an RF Directional Thruline Wattmeter designed to measure RF
power in 50-ohm coaxial transmission lines. When operated in this manner, it is used to determine
power flow, load mismatch, and amplitude modulation. It will measure continuous wave (CW), AM,
FM, single-sideband (SSB), TV and pulse modulation. It reads in peak envelope power (PEP), or CW
power in milliwatts, watts and kilowatts depending on the plug-in elements used. Frequency range is
450 kHz to 2.3 GHz. It has internal NiCad batteries for portable use or AC power can be connected
for in-house use. Let’s take a look at the general description of this instrument.
General description
The 4391M wattmeter, as shown in figure 4–4, is an insertion-type digital wattmeter placed between
the transmitter and dummy load or antenna. A dummy load is a device substituted for the antenna to
provide normal operation while not actually transmitting; transmitting through the antenna might
adversely affect other receivers. The plug-in elements are selected based on the frequency and power
to be measured. The highly sensitive elements contain coupling circuits that sample the traveling RF
waves. It is important that the arrows on the plug-in elements point in the direction of the traveling
RF waves to be measured; RF waves traveling opposite to the arrow's direction cancel within the element.
Two switches on the front panel of the instrument are set by the user to correspond to the power range
of the forward element. In any of the modes of operation described, the instrument can recall from
memory the lowest or highest reading taken or tell the operator whether the newest reading is less
than, equal to, or greater than the previous reading.

Figure 4–4. 4391M RF Power Analyst



Connections
The 4391M wattmeter is normally supplied with two female N-type connectors. Once the wattmeter
is installed in the coaxial line between the transmitter and dummy load or antenna, a plug-in element
must be selected that corresponds to the frequency and power levels to be measured.
In order to take full advantage of the wattmeter’s capabilities, two elements in a 10:1 ratio of power
range should be used. If only one element is used, the other socket should be filled with a dust plug or
a higher power element. Also, for greatest accuracy, the element(s) should be chosen having the
lowest possible power range that will not result in over-ranging. The table below lists elements
required for each mode of operation.

Mode        Forward    Reflected
FWD CW         X
RFL CW                     X
SWR            X           X*
FWD PEP        X
RFL PEP                    X
% MOD          X
FWD dBm        X
RFL dBm                    X
RTN Loss       X           X*

* Reflected element must have a nominal power range one tenth that of the forward element.
The higher power element is placed in the socket marked forward and its arrow pointed in the
direction of forward power flow (toward antenna or load). The lower power element is placed in the
socket marked reflected and is normally pointed in the direction opposite to forward power flow
(toward transmitter). The elements are clamped in place by the hold-down catches on the face of the
line section. These catches must be used to avoid the error that results when an element does not fully
contact the bottom, or seating plane, of the socket. With the element(s) in place, set the range switches to
correspond with the nominal power range of the elements. For example, if the forward element is a 5
watt element, the switches are set at 5 and x1. For a 250 watt element they are set at 2.5 and x100.
Sometimes it is necessary to use milliwatts or kilowatts as the unit of measure. In other words, 1 watt
becomes 10 x 100 milliwatts and 2500 watts becomes 2.5 x 1 kilowatts.
Operating features
The 4391M wattmeter has nine modes of operation which are selected by momentarily pressing the
mode keys. In addition, each mode has three output options selected by pressing the modifier keys
which are the MIN, MAX, and delta keys. See figure 4–5 to illustrate the location of each key.

Figure 4–5. Modes and output options



Reading forward power


Install the meter and correct element for the item under test based on frequency and power and move
the power switch to LINE or BAT depending on the power source. When powered up, the 4391M
wattmeter defaults into the forward CW power mode. If the unit is already operating, the forward CW
power mode is selected by momentarily pressing the FWD CW key. Here, the wattmeter measures
the peak and minimum square roots of power and combines them to compute CW power. If the applied
power exceeds 120 percent of the range, two right-facing arrow heads will be displayed.
Reading reflected CW power
The operation of the reflected CW power mode is identical to that for forward CW power described
earlier with two exceptions: the readings are taken from the element in the socket marked “reflected,”
and the range of the element is assumed to be 1/10 the range indicated by the range switches.
Reading standing wave ratio
Two elements are required for this mode, and they must have a 10 to 1 power range ratio. The
standing wave ratio (SWR) will be displayed if the forward power is between 10 percent and 120
percent of full scale and the reflected power is less than 120 percent of the reflected element
range. Two arrows pointing to the right indicate over-range, while two left-pointing arrows indicate
under-range (too little power).
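The ratio the meter displays follows from the standard transmission-line relationship between forward and reflected power. The sketch below illustrates that math only; the function name and sample values are illustrative and are not taken from the 4391M manual.

```python
import math

def swr_from_power(p_fwd, p_refl):
    """Standing wave ratio from forward and reflected power (watts).

    Uses the standard relationship: reflection coefficient
    rho = sqrt(Pr/Pf), then SWR = (1 + rho) / (1 - rho).
    Assumes 0 <= Pr < Pf.
    """
    rho = math.sqrt(p_refl / p_fwd)
    return (1 + rho) / (1 - rho)

# Example: 100 W forward, 4 W reflected -> rho = 0.2 -> SWR = 1.5
print(round(swr_from_power(100, 4), 2))   # 1.5
```

Note that this is also why the reflected element needs only one tenth the power range of the forward element: on a reasonably matched line, reflected power is a small fraction of forward power.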
Measuring peak envelope power
PEP measurements are made in the same manner as CW power readings described earlier, except that
the FWD PEP and RFL PEP buttons are pressed and the readings are displayed directly as peak
power.
Measuring amplitude modulation
Modulation is displayed directly in percent. It will be displayed as long as the average signal is above
10 percent, and the peak incident power of the signal is below 400 percent, of the element’s range. For
better accuracy, measure the average CW power levels at greater than one third of the element’s full
scale. Over-modulation will be indicated as 99.9 percent.
Measuring power in dBm
Operation of the forward and reflected dBm modes is identical to the forward and reflected CW
power modes except that the resulting reading is converted to dBm. It should be noted that in doing
this conversion, the range set on the slide switches is assumed to be watts rather than kilowatts or
milliwatts. If not, 30 must be added to all dBm readings when the range is in kilowatts or subtracted
from all readings when it is in milliwatts. An error message is displayed if CW power is more than 24
dB below, or peak power is more than 6 dB above the nominal element range.
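The 30 dB correction described above is just the decibel equivalent of the factor of 1,000 between milliwatts, watts, and kilowatts, since dBm = 10 × log10(P / 1 mW). A minimal sketch of that conversion (the helper name is illustrative, not part of the instrument):

```python
import math

def watts_to_dbm(power_watts):
    """Convert power in watts to dBm (dB referenced to 1 milliwatt)."""
    return 10 * math.log10(power_watts / 0.001)

# 1 W = 30 dBm; each x1000 step between units shifts the result by 30 dB,
# which is why 30 is added (kilowatts) or subtracted (milliwatts).
print(watts_to_dbm(0.001))   # 0.0   (1 mW)
print(watts_to_dbm(1))       # 30.0  (1 W)
print(watts_to_dbm(1000))    # 60.0  (1 kW)
```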
Measuring return loss
The measurement of return loss is the same as that of SWR except that the result is displayed in dB.
In other words, a reading of 21.6 indicates that reflected power is 21.6 dB down from forward power.
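The return-loss figure follows directly from the power ratio: RL(dB) = 10 × log10(Pfwd / Prefl). A short sketch with illustrative values (not taken from the manual):

```python
import math

def return_loss_db(p_fwd, p_refl):
    """Return loss in dB: how far reflected power is below forward power."""
    return 10 * math.log10(p_fwd / p_refl)

# 100 W forward with 1 W reflected means the reflection is 20 dB down
print(return_loss_db(100, 1))   # 20.0
```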
Monitoring maximum and minimum readings
While operating in any of the modes described above, the wattmeter will continuously keep track of
the highest and lowest reading obtained. To recall the maximum or minimum reading, hold the MAX or MIN key
down. When these keys are released, the meter goes back to displaying the current value of the
parameter being measured. Recalling maximum or minimum does not stop the meter from continuing
to monitor the current value and updating the minimum and maximum registers. To clear the
minimum and maximum register, the mode key must be pressed again or a new mode key must be
selected. For example, if CW power deviations are to be monitored, the wattmeter is installed, turned
on and allowed to stabilize. Once the system has stabilized, the FWD CW key is pressed to clear the
MAX and MIN registers. At any time during test the MAX and MIN keys can be used to recall the

maximum and minimum values without affecting the test. However, pressing FWD CW or changing
modes will clear the registers.
Using the peaking aid
The peaking aid is useful for making adjustments to optimize any of the parameters which the
wattmeter measures. After the mode is selected, the delta (Δ) key is pressed momentarily. This blanks
the least significant digit of the display, and replaces it with a right facing arrow head if the measured
quantity is increasing and a left facing arrow head if it is decreasing. If there is no change, the digit is
left blank. To find a peak, begin making the adjustment in whichever direction produces a right-facing
arrow head and continue slowly in that direction until the arrow head turns around. At this point
the peak has been reached. To ensure the peak has not been passed, press the MAX key to recall the
highest value read and release it to read the current value. The two should be the same. Desired
minimum levels, such as reflected power or SWR, are found in a similar manner.

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.

027. Defining the terms dB and dBm


1. The decibel is part of what larger unit of measure?

2. A power ratio of 10,000:1 can be represented by how many bels?

3. What’s the most commonly used industry standard power reference level?

4. How is the answer to question number 3 expressed?

5. Rewrite these sentences so they use the term dB correctly.

(a) This TWT has 25 dB of output power.

(b) The final amp is pumping out a 30 dB package.

6. What will every 3 dB increase in gain do to the power level?

7. How many dB of gain does an amplifier have if there is 10 watts in and 80 watts out?

8. You have installed an in-line 3 dB attenuator between a power source and the power sensor of a
power meter. How will this affect the power level displayed by the power meter?

9. What two power characteristics are reflected by the term dBm?

10. How do the terms dB and dBm differ?

11. How much more power is available at 6 dBm than at 3 dBm?

12. What power levels are indicated by these measurements?


(a) 3 dBm.

(b) 9 dBm.

(c) 12 dBm.

(d) 36 dBm.

028. Power meter—features and controls


1. List at least three of the units that can be displayed by the HP 436A power meter.

2. Briefly explain how to use the relative power measurement mode for frequency response testing.

3. Match the power meter characteristics in column A with the HP 436A features in column B. Each
feature in column B can be used once, more than once, or not at all.

Column A Column B
____ (1) For receiving commands from a computer. a. Auxiliary meter.
____ (2) Determines the frequency range. b. Power sensor.
____ (3) Ideal for monitoring peaking power. c. Autorange.
____ (4) Provides hands free operation. d. Auto zero.
____ (5) Press SENSOR ZERO to activate. e. Recorder output.
____ (6) Can be used to stabilize the power source. f. Interface connector.
____ (7) Determines power measurement range. g. Calibrator output.
____ (8) Provides a stable 1.00 mW, 50 MHz output. h. Radio frequency blanking output.
____ (9) Output can be used to print graphs.

029. Wattmeter—features and controls


1. What is the 4391M RF Power Analyst designed to measure?

2. What are the two switches on the front panel of the instrument used for?

3. If only the forward element is used, what is filled in the other socket?

4. Why are the elements clamped into place by the hold-down catches on the face of the line
section?

5. When powered up, which mode does the 4391M wattmeter default to?

6. When reading forward power, what will be displayed if the applied power exceeds 120 percent of
the range?

7. Where are the reflected CW readings taken?

8. Between what power ranges will a SWR be displayed?

9. What is the difference between CW and PEP readings?

10. How is over-modulation displayed on the wattmeter?



11. What must be added to the dBm reading when the range is in kilowatts?

12. When measuring return loss, what does a reading of 21.6 indicate?

13. In order to recall the maximum reading, what key do you press?

14. How do you clear the minimum and maximum register?

15. Which mode is useful for making adjustments to optimize any of the parameters which the
wattmeter measures?

4–3. Measuring Frequency


The traditional way to view electrical signals is in the time domain with an oscilloscope. The time
domain is used to recover relative timing and phase information that is needed to characterize circuit
behavior. However, not all circuits can be characterized uniquely from their time domain information
alone. Circuit elements such as amplifiers, subcarrier oscillators, mixers, modulators, detectors and
filters are best characterized by their frequency response information. This frequency information is
best determined by viewing electrical signals in the frequency domain with a spectrum analyzer.

030. Functions of a spectrum analyzer


This lesson covers the functions of a spectrum analyzer. You will be expected to know basic facts and
terms about this subject.
The spectrum analyzer has a number of uses for the technician working with frequency generating
circuits. Of course, its main function is to display frequency information in the frequency domain.
Display information in the frequency domain
A major difference between the oscilloscope and the spectrum analyzer is that the oscilloscope
displays frequency information in the time domain, whereas the spectrum analyzer displays frequency
information in the frequency domain. The frequency domain is a graphical representation of signal
amplitude as a function of frequency. Information is shown with respect to frequency along the X-
axis of the spectrum analyzer. On the oscilloscope’s time domain display, all frequency components
of a signal are seen summed together. On the spectrum analyzer’s frequency domain display, complex
signals (i.e., signals composed of more than one frequency) are separated into their frequency
components and the power level of each frequency is displayed.
The display of information in the frequency domain requires equipment that can discriminate between
frequencies while still measuring each power level. The spectrum analyzer was designed for this
purpose. This instrument graphically displays a voltage or power as a function of frequency across the
X-axis of its cathode ray tube (CRT). In other words, frequency—not time—is the measuring unit of
the CRT’s X-axis. This is analyzing signals in the frequency domain.
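The distinction between the two domains can be demonstrated with a short sketch: a signal built from two sine waves appears as a single jumbled trace in the time domain, but a discrete Fourier transform (the math a spectrum analyzer effectively performs) separates it into two distinct frequency components. This is a conceptual illustration only; a real swept analyzer tunes a receiver across the band rather than computing a DFT.

```python
import cmath, math

N = 64                               # time-domain samples over a 1-second record
t = [n / N for n in range(N)]
# Time domain: a 5 Hz and a 12 Hz sine summed together, as a scope would show it
signal = [math.sin(2*math.pi*5*x) + 0.5*math.sin(2*math.pi*12*x) for x in t]

def dft_magnitude(samples, k):
    """Magnitude of the k-th DFT bin (energy at k cycles per record)."""
    return abs(sum(s * cmath.exp(-2j*math.pi*k*n/len(samples))
                   for n, s in enumerate(samples)))

# Frequency domain: only the 5 Hz and 12 Hz bins carry significant energy
peaks = [k for k in range(N//2) if dft_magnitude(signal, k) > 1.0]
print(peaks)   # [5, 12]
```

Each component appears at its own point on the frequency axis with its own amplitude, which is exactly the amplitude-versus-frequency plot described above.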

Plot range of frequencies


You’ll be using spectrum analyzers to plot a complete range of frequencies within a desired spectrum
directly on a CRT. The display on the screen is a plot of amplitude versus frequency. A spectrum
analyzer does these things effectively:
1. Locate and identify signals over a wide frequency spectrum.
2. Magnify parts of the spectrum for detailed analysis with stable calibrated sweeps and
resolution.
3. Minimize internally generated spurious responses that would clutter the display.
4. Furnish wide dynamic range and flat frequency response.
Measure swept frequency and distortion
The swept frequency response of a filter or amplifier and the swept distortion measurement of a tuned
oscillator are examples of swept frequency measurements possible with a spectrum analyzer. In
addition, you can easily characterize frequency conversion devices with parameters from the display,
such as conversion loss, isolation, and distortion.
Perform other functions
The frequency domain contains information not found in the time domain and, therefore, the
spectrum analyzer has certain advantages not available with an oscilloscope. The spectrum analyzer is
more sensitive to low-level distortion than an oscilloscope. Its sensitivity and wide dynamic range are
useful for measuring low-level modulation. It can also be used to measure:
 Amplitude modulation (AM)
 Frequency modulation (FM)
 Pulsed radio frequency.
 Long- and short-term frequency stability.
 Parameters such as subcarrier oscillator outputs and complex signals.
 Frequency drift during system warm-up.

031. Functions of an electronic counter


This lesson covers the functions of an electronic counter. You will be expected to know basic facts
and terms about this subject.
The electronic counter combines state-of-the-art processing systems and highly stable quartz crystals
to measure time and frequency. These two quantities can be measured with greater accuracy than any
other electrical parameter. Electronic counters measure frequencies and totalize events
from low audio frequencies up to 550 MHz. Some methods are available to extend the frequency
range of counters to more than 20 GHz.
Measure frequency
An electronic counter measures frequency by totaling (or counting) the number of input cycles, or
events, over a precise period of time. The results of this count are proportional to the unknown
frequency and are computed to indicate the input frequency. A quartz-crystal oscillator inside the
counter provides the time reference for the precise timing.
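Conceptually the scheme is simple: count input cycles during a gate interval set by the crystal reference, then divide the count by the gate time. The sketch below is a hypothetical model of that process (names and values are illustrative, not from any particular counter).

```python
def measure_frequency(edge_times, gate_time):
    """Estimate frequency by counting input edges inside a gate interval.

    edge_times: timestamps (seconds) of the input signal's edges.
    gate_time:  precise gate interval (seconds); in a real counter this
                is derived from the quartz-crystal time reference.
    """
    count = sum(1 for t in edge_times if 0.0 <= t < gate_time)
    return count / gate_time          # cycles per second = hertz

# A 1 kHz input counted over a 0.1 s gate: 100 counts -> 1000.0 Hz
edges = [n * 0.001 for n in range(1000)]   # one edge every 1 ms
print(measure_frequency(edges, 0.1))       # 1000.0
```

Because the count is an integer, this method carries an inherent plus-or-minus one count uncertainty, which is why longer gate times give better resolution at low frequencies.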
Measure period
As you remember, the inverse of frequency provides signal width information. In a digital system, the
period measurement represents the average bit-to-bit time of an input signal.

Totalize
This measurement is like measuring frequency; but the operator controls the time over which the
measurement takes place. This measurement is very important in digital systems to count the number
of bits in a pulse train.
Measure ratio
With some counters, you can measure the ratio between two input frequencies. One application where
this ratio may be measured is with harmonically related signals.
Scaling
Some counters offer this special capability. A digital output signal is converted to a scaled or divided
version of the input frequency.
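Scaling is simply frequency division: one output pulse is produced for every N input pulses, so the output frequency is the input frequency divided by N. A minimal illustrative sketch, assuming an ideal divide-by-N stage:

```python
def prescale(input_pulses, n):
    """Divide-by-n scaler: one output pulse per n input pulses."""
    return input_pulses // n

# 1,000,000 input pulses through a divide-by-10 stage -> 100,000 out,
# i.e., the output frequency is one tenth of the input frequency.
print(prescale(1_000_000, 10))   # 100000
```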
Measure time interval
Time interval measurements are of major importance in many applications. Time interval is the time
measured between two points of a common event, such as the time between two pulses.
Perform other measurements
Although rise and fall time measurements are normally a function of the oscilloscope, some electronic
counters have this feature. Phase measurements are also possible. Some counters also have numerical
keypads for mathematical manipulation of incoming frequencies. Electronic counters fall into four
classes: frequency counters, universal counters, microwave counters, and reciprocal counters. We are
more concerned with the electronic counter as a frequency counter.

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.

030. Functions of a spectrum analyzer


1. What is a major difference between the oscilloscope and the spectrum analyzer?

2. Describe how the spectrum analyzer displays information in the frequency domain.

3. What functions does the spectrum analyzer perform?

4. What advantage do the spectrum analyzer’s sensitivity and wide dynamic range provide?

5. List some uses of the spectrum analyzer.



031. Functions of an electronic counter


1. What is the frequency range of an electronic frequency counter?

2. Match the description in column B with the correct function in column A. Items can only be used
once.
Column A Column B
_____ 1. Measurement totaling the number of input cycles over a period of time. a. Ratio.
_____ 2. The average bit-to-bit time of the input signal in a digital system. b. Period.
_____ 3. Measures the time between two points of a common event. c. Scaling.

_____ 4. Provides a divided version of the input frequency. d. Totalize.

_____ 5. Counts the number of bits in a pulse train. e. Frequency.

_____ 6. Measures harmonically related signals. f. Time interval.

3. What provides the time reference for the precise timing and where does this function take place?

4–4. Communication Analyzer


One of the most versatile pieces of test equipment used today is the communication analyzer. It is a
multi-function instrument designed to meet the challenges of most radio maintenance problems. A
communication analyzer can be used to maintain radios such as SINCGARS and HAVE QUICK. It can
be used to verify radio performance characteristics, to diagnose failures, and to troubleshoot
communications systems. This section covers the functions of a communication analyzer. Since each
communication analyzer is tailored to its own unique models, we will use the Aeroflex
Communication Service Monitor TS–4317, shown in figure 4–6, as our example. You will be expected to
know basic facts and terms about this subject.

Figure 4–6. Communication Service Monitor.



032. Characteristics of the communication service monitor


The communications service monitor (CSM) performs a broad range of testing activities. With this
instrument, you have the capabilities of 16 different pieces of test equipment in one convenient
package. With the CSM, you no longer have to lug multiple pieces of test equipment to maintain,
install, or troubleshoot your radio equipment. The CSM can be used in the field or workcenter. In this
lesson, we will briefly describe the equipment and its capabilities.
Equipment description
The CSM is a microprocessor controlled, digitally synthesized test set which combines the operations
of many different test instruments into a single, compact unit. The CSM uses an alphanumeric
keypad, dedicated function keys, multi-task soft function keys and a high-resolution color display, to
enter and edit functions and data. Testing with the CSM can be done by remote control or manually
by front panel control. Tests performed with the CSM can use pre-programmed setups or formats
determined at the time of the test. Microprocessor controlled memory allows storage and recall of
parameters for each of the testing modes as well as storage and recall of oscilloscope and spectrum
analyzer traces for signal comparison.
Meter
The CSM provides meters for use as independent test functions as well as integrating them into
the major mode functions. As independent functions, the meters provide a bar graph display and
digital data. On the major operations screens, these meters appear as a bar graph with digital data or
as digital data only, depending on the oscilloscope/spectrum analyzer display size. The meters can be
used to monitor both internal and external inputs.
Data entry and display
Unless the CSM is configured for remote testing, all data received by the operator is in the form of
screens and menus. Each major test operation has a dedicated operation screen with subordinate setup
menus. The microprocessor edits operation screens to reflect changes in parameters imposed by the
operator or reflects changes in data delivered by the unit under test.
Individual meters also have dedicated operation screens and subordinate setup menus. The meter
operation screens can be accessed through the mode operation screen that is being supported by that
specific meter operation or through the meter menu.
Once a specific mode of operation is selected, the operation screen appears on the LCD or CRT. The
parameters shown reflect the parameters last entered in that operation. This specific operation screen
is accessed with one of six dedicated MODE keys.
Setup menus on all mode operation screens are accessed from the specific operation screen by
pressing the SET UP key. Operator entry and edit of data is performed on the operation screen or on
the setup menu.
The operation screen and menus have active cursors that the operator can use to access a specific
parameter. Once the parameter is accessed, data can be selected with DATA SCROLL spinner or
DATA SCROLL keys or by using the alphanumeric DATA ENTRY keypad.
Multi-task “soft” function keys are also provided to perform set up, edit and entry. Each operation
screen defines soft function keys to fit the specific needs for that operation. These definitions are
displayed on the screen above the defined key. Each operation screen may have several definitions for
each soft function key or make a definition unavailable depending on the parameters of the operation.
Various functions performed by these multi-task keys include toggling between two values, selecting
connectors for access, entering data or selecting a field for edit.
Setup storage and recall functions are available for each of the MODE screens. This allows the user to
store up to nine setups in each MODE screen for future recall. Selecting the auxiliary function (AUX)

menu and selecting the store function may also save the system configuration. This allows the user to
store and recall up to nine system configurations.
The receiver and generator have a frequency list capability. This allows the user to store up to 100
frequencies into a list for use when performing tasks that require the same instrument setup to be used
on many different frequencies.
Functions
The CSM is a general-purpose communication test set for testing radios and related equipment. The
CSM combines the operations of 16 different test instruments into a single, compact unit. The CSM is
capable of performing the following functions:
 RF generator.
 Audio frequency (AF) generator.
 Oscilloscope.
 Deviation (peak) meter.
 SINAD (signal plus noise and distortion) meter.
 Frequency error meter.
 Modulation meter.
 Bit error rate meter.
 Deviation (RMS) meter.
 Cable fault detector.
 Receiver.
 Spectrum analyzer.
 Digital multimeter (DMM).
 Distortion meter.
 AF meter.
 RF power meter.
 Signal strength meter.
 Phase meter.
 Tracking generator.
 Analog/Digital Radio Paging.

033. Operational capabilities of the communication service monitor


The CSM supports a variety of radio communications systems such as air traffic control, wireless
communications, and tactical radios. It offers a multitude of operations to test your radio equipment.
This lesson examines just a few of the operational capabilities most likely to be used in the average
work center.
Receiver operation
The receiver operation is capable of receiving modulated and unmodulated signals from 250 kHz to
999.9999 MHz (in 100 Hz steps). The CSM receives AM, FM, PM, and SSB modulated signals. The
metering functions include RF power, modulation, deviation, distortion, frequency error, AF
frequency, signal strength and SINAD meters, and counters. The oscilloscope and spectrum analyzer
are also available for use on the receiver operation screen. The receiver is capable of executing user
defined frequency scan and frequency list operations as well as single frequency operations. The
receiver accesses “off the air” signals through the ANTENNA IN connector or higher-powered
signals are accessed through the T/R connector.

Generator operation
The generator is capable of generating modulated or unmodulated carrier signals from 500 kHz to
999.9999 MHz (in 100 Hz steps). The output level is variable from –137 dBm to 0 dBm. Modulation
types include AM, FM and PM. Modulation can also be applied from an external source. The RF
generator also functions as a tracking generator for swept measurements or as an offset generator in
duplex mode. Metering functions available for this operation mode include SINAD, DMM and
distortion, and AF level meters. The oscilloscope and spectrum analyzer are also available for use
with the RF generator operation screen. The generator is capable of executing user defined frequency
scan and frequency list operations as well as single frequency operations.
Duplex operation
The CSM combines the independent RF generator and receiver from 500 kHz to 999.9999 MHz for
duplex operation. The operator has the option of monitoring on screen the transmitter under test,
receiver under test or both. All features available with the RF generator and receiver are available in
duplex as well as the ability to use an offset frequency from –999.7499 to +999.7499 MHz to test
communication equipment capable of generating and receiving simultaneously on different
frequencies.
Function generator operation
There are two audio function generators with a range of 10 Hz through 40 kHz. Available waveforms
include sine, square, triangle, ramp, pulse, and DC levels. The CSM is also capable of creating digital
data bit patterns of 100 to 100,000 bits. Data rates include 75, 150, 300, 600, 1200, 2400, 4800 and
16000 bps. Pattern types include random, fixed, and user defined.
Spectrum analyzer operation
The spectrum analyzer monitors internal and external signals from 250 kHz to 999.9999 MHz. Scan
widths range from 1 kHz/div to 100 MHz/div as well as zero scan capability. Available log scales are
2 and 10 dB/div. Memory functions for the spectrum analyzer include store and recall of a trace,
compare a stored trace to a live trace and peak hold. External signals can be displayed “off-the-air”
through the ANTENNA IN connector or applied through the T/R connector. The spectrum analyzer
also has a tracking generator function with a variable level from –137 to 0 dBm. The spectrum
analyzer is available for display alone or in all major operation modes except where the duplex
transmit and receive functions are shown simultaneously. Size options for views in other modes are
full-size and quarter-size.
Scope operation
The CSM includes a 1 MHz single trace oscilloscope as one of its major test capabilities. The
oscilloscope allows input from 2 mV to 50 V per division with a maximum input voltage of 200
VDC.
Sweep rates range from 1 μs/div to 100 ms/div in a 1–2–5 sequence. The oscilloscope has a store and
recall ability of up to nine signals. The recalled trace can be shown simultaneously with a live trace,
giving the operator the opportunity to compare the two traces. Signals available for use include AC,
DC, GND, receiver IF, demodulated audio, function generators, external audio, SINAD/BER and RF
power. The oscilloscope can be used as a stand-alone instrument or in combination with all major
operation modes except where duplex transmitter and receiver are shown simultaneously. Size
options for views in other modes are full-size and quarter-size.

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.

032. Characteristics of the communication service monitor


1. What is a communication service monitor?

2. How are meters displayed on the major operations screens?

3. What part of the CSM edits the operation screens to reflect changes in parameters imposed by the
operator?

4. How can the meter operation screens be accessed?

5. When a specific mode of operation is selected, what operational parameters appear on the screen?

6. Which screens can an operator use for entry and edit of data?

7. Once the parameter is accessed, how is data selected?

8. What are multi-task soft function keys?

9. How many system configurations can the user store?

10. How many frequencies are allowed to be stored into a list for use when performing tasks that
require the same instrument setup to be utilized on many different frequencies?

11. Name several functions or operations of the communication service monitor.



033. Operational capabilities of the communication service monitor


1. Which receiver access input point would you use for higher-powered signals?

2. Match the type of operation in column B with the capability in column A. Items can only be used
once.

Column A Column B
____ (1) Includes square, triangle, or ramp waveforms. a. Receiver.
____ (2) View demodulated audio. b. Generator.
____ (3) Generates modulation type output level between –137 to 0 dBm. c. Duplex.
____ (4) Receives AM, FM, PM and SSB modulated signals. d. Function generator.
____ (5) Monitors transmit and receive test. e. Spectrum analyzer.
____ (6) Has zero scan capability. f. Scope.

Answers to Self-Test Questions


026
1. 500 kHz to 512 MHz (450 kHz to 550 MHz with over-range).
2. A variable audio oscillator can extend the output range of the generator down to 20 Hz and a doubler can
extend it to 11,000 MHz. This, together with a calibrated output and modulation, permits complete radio
frequency and intermediate frequency performance tests on virtually any type of high frequency, very high
frequency, or ultra high frequency receiver.
3. To produce a signal that can be set accurately in frequency at any point within the range of the generator.
4. To produce an audio modulating signal to superimpose on the radio frequency signal produced in the
oscillator.
5. Either sine waves, square waves, or pulses of varying duration.
6. A calibrated attenuator and an output level meter. The attenuator lets you select the amount of output
required. The output level meter provides an indication and permits control of the generator output voltage.
7. (1) Verify transmission within designated frequency ranges by comparing transmitter outputs with known
radio frequencies.
(2) Align a receiver by injecting the system with range-standard modulated radio frequency.
(3) Check transmission lines and antenna systems for proper operation.
027
1. Bel.
2. Four.
3. The 1-milliwatt standard.
4. As decibels referenced to a 1-milliwatt standard or dBm.
5. (a) This TWT has a power gain of 25 dB; (b) The final amp is pumping out a package that’s 30 dB more
than its input.
6. Roughly double it.
7. There’s a gain of 9 dB.
8. It will cut it in half.
9. Gain and power level.

10. Because dBm is always referenced to a 1-milliwatt standard, it can be used to reflect a power level, in
addition to gain or loss. The term dB can’t reflect a power level; instead, it can only reflect gain or loss
because it’s purely a ratio.
11. Twice as much.
12. (a) 2 mW, (b) 8 mW, (c) 16 mW, (d) 4,096 mW (approx 4.1 watts).
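The decibel arithmetic behind answers 6, 8, 11, and 12 can be checked numerically. The following is a minimal Python sketch; the function names are illustrative, not from the text:

```python
import math

def db_gain(p_out_mw, p_in_mw):
    """Gain or loss in dB: a pure ratio of output to input power."""
    return 10 * math.log10(p_out_mw / p_in_mw)

def to_dbm(p_mw):
    """Absolute power level referenced to the 1-milliwatt standard."""
    return 10 * math.log10(p_mw / 1.0)

# Doubling the power adds roughly 3 dB (answer 6):
print(round(db_gain(2, 1), 2))        # 3.01

# A 3 dB loss cuts the power roughly in half (answer 8):
print(round(10 ** (-3 / 10), 3))      # 0.501

# Answer 12: gains of 3, 9, 12, and 36 dB applied to 1 mW.
# Exact values are 2, 7.9, 15.8, and 3,981.1 mW; the 3-dB doubling
# rule gives the rounded figures 2, 8, 16, and 4,096 mW.
for gain_db in (3, 9, 12, 36):
    print(round(10 ** (gain_db / 10), 1))
```

Note that the familiar "3 dB doubles the power" rule is an approximation; the exact doubling figure is 3.01 dB, which is why the 36 dB case drifts from 4,096 mW to 3,981 mW when computed exactly.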
028
1. Watts, milliwatts, microwatts, nanowatts, dBm, and relative dB.
2. After the input is connected, press dB(REF) to lock in this frequency level as the reference, then tune the
transmitter to other frequencies and observe any changes up or down in the power level.
3. (1) f, (2) b, (3) a, (4) c, (5) d, (6) e, (7) b, (8) g, (9) e.
029
1. The 4391M RF Power Analyst is an RF Directional Thruline Wattmeter designed to measure power flow,
load match and AM in 50 ohm coaxial transmission lines.
2. Two switches on the front panel of the instrument are set by the user to correspond to the power range of
the forward element.
3. If only one element is used, the other socket should be filled with a dust plug or a higher power element.
4. These catches must be used to avoid error due to the element not contacting the bottom or seating plane of
the socket.
5. When powered up, the 4391M wattmeter always goes into the forward CW power mode.
6. If the applied power exceeds 120% of the range, two right-facing arrowheads will be displayed.
7. Readings are taken from the element in the socket marked “reflected.”
8. SWR will be displayed if the average forward power is between 10% and 120% of the full scale and the
average reflected power is less than 120% of the reflected element range.
9. Readings are displayed directly as peak power in PEP. To formulate CW power, the wattmeter measures
peak and minimum square root of power and combines them.
10. Over-modulation will be indicated as 99.9%.
11. 30 must be added to all dBm readings when the range is in kilowatts.
12. The measurement of return loss reading of 21.6 indicates that reflected power is 21.6 dB down from
forward power.
13. To recall the maximum reading, hold the MAX or MIN key down.
14. To clear the minimum and maximum register, the mode key must be pressed again or a new mode selected.
15. Peaking aid.
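The return loss relationship in answer 12, and the forward/reflected power conditions behind answer 8, follow from two standard formulas. This Python sketch reproduces them; the 100 W / 0.69 W power split is an illustrative example chosen to land near the 21.6 dB figure, not a value from the text:

```python
import math

def return_loss_db(fwd_w, refl_w):
    """How many dB the reflected power is down from the forward power."""
    return 10 * math.log10(fwd_w / refl_w)

def swr(fwd_w, refl_w):
    """Standing wave ratio from average forward and reflected power."""
    gamma = math.sqrt(refl_w / fwd_w)  # voltage reflection coefficient
    return (1 + gamma) / (1 - gamma)

# 100 W forward with 0.69 W reflected is about 21.6 dB return loss,
# and a good match (SWR close to 1):
print(round(return_loss_db(100, 0.69), 1))   # 21.6
print(round(swr(100, 0.69), 2))              # 1.18
```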
030
1. The oscilloscope displays frequency information in the time domain, whereas the spectrum analyzer
displays frequency information in the frequency domain.
2. By complex signals (i.e., signals composed of more than one frequency) that are separated into their
frequency components, and the power level of each frequency is displayed.
3. (1) Locate and identify signals over a wide frequency spectrum.
(2) Magnify parts of the spectrum for detailed analysis with stable, calibrated sweeps and resolution.
(3) Minimize display clutter for spurious responses within itself.
(4) Furnish wide dynamic range and flat frequency response.
4. A way to measure low-level modulation.
5. (1) Measure AM, FM and pulsed radio frequency.
(2) Measure long- and short-term frequency stability.
(3) Measure parameters such as subcarrier oscillator outputs and channels of complex signals.
(4) Measure frequency drift during system warm-up.

031
1. Low audio frequencies up to 550 MHz. Some methods are available to extend the frequency range of
counters to more than 20 GHz.
2. (1) e., (2) b., (3) f., (4) c., (5) d., (6) a.
3. A quartz-crystal oscillator inside the counter.
032
1. The CSM is a microprocessor controlled, digitally synthesized test set which combines the operations of
many different test instruments into a single, compact unit.
2. On the major operations screens, these meters display as bar graph display and digital data or just as digital
data, depending on the oscilloscope/spectrum analyzer display size.
3. Microprocessor.
4. The meter operation screens can be accessed through the mode operation screen that is being supported by
that specific meter operation or through the meter menu.
5. Once a specific mode of operation is selected, the parameters shown reflect the parameters last entered in
that operation.
6. Operator entry and edit of data is performed on the operation screen or on the setup menu.
7. Once the parameter is accessed, data can be selected with DATA SCROLL spinner or DATA SCROLL
keys or by using the alphanumeric DATA ENTRY keypad.
8. Multi-task “soft” function keys perform set up, edit and entry. Each operation screen defines soft function
keys to fit the specific needs for that operation.
9. Allows the user to store and recall up to nine system configurations.
10. 100.
11. RF generator, AF generator, oscilloscope, deviation (peak) meter, SINAD meter, frequency error meter,
modulation meter, bit error rate meter, deviation (RMS) meter, cable fault detector, receiver, spectrum
analyzer, digital multimeter, distortion meter, AF meter, RF power meter, signal strength meter, phase
meter and tracking generator.
033
1. T/R connector.
2. (1) d., (2) f., (3) b., (4) a., (5) c., (6) e.

Unit Review Exercises


Note to Student: Consider all choices carefully, select the best answer to each question, and circle
the corresponding letter. When you have completed all unit review exercises, transfer your answers to
the Field-Scoring Answer Sheet.
76. (026) Which one of these is not an application of a radio frequency generator?
a. Verifying transmitter frequencies.
b. Troubleshooting receivers.
c. Checking antenna systems.
d. Aligning galvanometers.
77. (027) A power reading that uses the abbreviation dBm indicates the
a. ratio of decibels relative to a 1-milliwatt standard.
b. ratio of decibels relative to a 1-megawatt standard.
c. actual power level expressed in milliwatts.
d. actual power level expressed in megawatts.
78. (028) The measurable frequency and power ranges of the HP 436A power meter are determined
by the
a. digital display.
b. power sensor.
c. power meter.
d. interface.
79. (028) During what operation is the radio frequency blanking output used on the HP 436A power
meter?
a. While in WATT mode operation.
b. While in REMOTE mode operation.
c. During the automatic zeroing operation.
d. During calibration adjustment operation.
80. (029) The two switches on the front panel of the wattmeter correspond to the
a. power range of the reverse element.
b. power range of the forward element.
c. frequency range of the reverse element.
d. frequency range of the forward element.
81. (029) What feature is useful for making adjustments to optimize any of the parameters which the
wattmeter measures?
a. Peaking aid.
b. Minimum reading.
c. Maximum reading.
d. Standing wave ratio.
82. (030) Which measurement is not made with a spectrum analyzer?
a. Frequency stability.
b. Peak-peak voltage.
c. Amplitude modulation.
d. Subcarrier oscillator outputs.

83. (031) Which electronic counter measurement represents the average bit-to-bit time of an input
signal?
a. Period.
b. Scaling.
c. Frequency.
d. Time interval.
84. (031) Which are the correct four classes of electronic counters?
a. Frequency counters, time counters, microwave counters, and reciprocal counters.
b. Frequency counters, universal counters, microwave counters, and reciprocal counters.
c. Frequency counters, universal counters, period counters, and reciprocal counters.
d. Scaling counters, universal counters, microwave counters, and reciprocal counters.
85. (032) What function is not part of the communication service monitor?
a. Radio frequency generator.
b. Analog multimeter.
c. Oscilloscope.
d. Receiver.
86. (033) What operational capability of the communication service monitor receives amplitude and
frequency modulated signals?
a. Scope.
b. Receiver.
c. Generator.
d. Function generator.
87. (033) What operational capability of the communication service monitor produces sine, square and
triangle waveforms?
a. Scope.
b. Duplex.
c. Receiver.
d. Function generator.

Please read the unit menu for unit 5 and continue 


Unit 5. Standard Practices
5–1. Installation ................................................................................................................................ 5–1 
034. Standard installation practices ........................................................................................................... 5–1 
035. Inspection standards........................................................................................................................... 5–5 
5–2. Grounding, Bonding, and Shielding Systems......................................................................... 5–7 
036. Grounding .......................................................................................................................................... 5–7 
037. Bonding ............................................................................................................................................. 5–9 
038. Shielding systems ............................................................................................................................ 5–10 
5–3. Powering Electrical Equipment ............................................................................................ 5–13 
039. Batteries ........................................................................................................................................... 5–13 
040. Uninterruptible power supply (UPS) ............................................................................................... 5–15 
041. Fixed generators and transfer switches ............................................................................................ 5–18 

STANDARD MAINTENANCE PRACTICES of communications-electronics (C-E) technicians
are the normal procedures used to support information operations. No matter what unit you are
assigned to, what base you are stationed at, or what Air Force specialty code (AFSC) you are in,
there should be a common set of standards that all C-E technicians follow.

5–1. Installation
It is important to make sure the communications we install are as reliable as possible. However, if the
communications do fail after the installation team has left the area, it is imperative that common
standards were followed so that effective troubleshooting can occur. From using the appropriate labels
to ensuring the proper wire color, standards must be adhered to.

034. Standard installation practices


This lesson covers facts about installation as it relates to standard maintenance practices of the C-E
technician. The installation of Air Force (C-E) equipment such as new installations or modifications
of control towers, command posts, local area networks (LAN), antennas, antenna towers,
radio, radar, intrusion detection, CRYPTO, or technical control facilities, to name a few, are covered
by a set of rules or standards. These installation standards are established by Standard Installation
Practices Technical Orders (SIPTO).
The Air Force stresses uniformity and safety. The use of SIPTOs throughout the Air Force ensures
that multimillion-dollar communications systems are installed the same way using the same set of
standards regardless of the location. SIPTO guidance also helps to ensure that installation projects are
accomplished safely with special emphasis on personnel and equipment safety.
The table below shows a partial list of SIPTOs you may use on-the-job.

Electronic SIPTO Technical Orders


TO Title
TO 31–10–2 FANNING AND FORMING CONDUCTORS FOR FIXED C-E EQUIPMENT.
TO 31–10–3 OUTSIDE PLANT INSTALLATION.
TO 31–10–6 CABLE RACKS, TROUGHS, AND THEIR SUPPORTS.
TO 31–10–7 TERMINATING AND SOLDERING ELECTRICAL CONNECTIONS.
TO 31–10–9 MARKING SITE LAYOUT.
TO 31–10–10 ANCHORING DEVICES FOR GROUND C-E EQUIPMENT.
TO 31–10–11 CROSS CONNECTIONS.
TO 31–10–12 METAL DUCTS AND CONDUIT.
TO 31–10–13 CABLING OF FIXED GROUND C-E EQUIPMENT.
TO 31–10–14 RADIO FREQUENCY CABLES AND CONNECTORS.
TO 31–10–16 STRAPPING OF FIXED C-E COMPONENTS.
TO 31–10–24 GROUNDING PRACTICES FOR FIXED C-E EQUIPMENT.
TO 31–10–27 EQUIPMENT DESIGNATIONS.
TO 31–10–29 ERECTION AND ASSEMBLY OF GROUND C-E EQUIPMENT.
TO 31–10–34 FIBER OPTIC COMMUNICATIONS CABLES AND CONNECTIONS.

As you can tell from this partial list of SIPTOs, a wide range of installation guidance is covered. Now
let’s get into cable labeling and installation.
Labeling and documentation
Labeling cables properly is an important part of site documentation not only during the installation
phase but also as an aid to operator and maintenance personnel during troubleshooting and
maintenance. It’s also important for engineering personnel for system upgrade during the engineering
process.
Much of the cabling you will encounter as a C-E technician will not be easily accessible since it will
be running through a plenum or on a cable trough with many other cables. This collection of cables
can make it difficult to troubleshoot an individual cable. This is why cable labeling can aid the
technician in troubleshooting. By having the cable accurately marked, it is easier to determine the
originating and terminating ends of the cable. Since the route of the cable does not have to be traced,
the technician will only need to connect the proper test equipment to verify connectivity and proper
operation of the cable.
Connectorized cables will be marked at both ends with two separate identification markings. The first
marker (label) will be placed nearest to the connector (within 12 inches) and will be marked “TO”
and will indicate the bay, equipment shelf, terminal block, and plug or jack to which the connector
will mate. The second marker will be placed within two inches of the first and will be marked
“FROM.” It will indicate the same information such as the bay, equipment shelf, and jack or plug at
the originating source.
Permanent markings will be applied by either heat transfer method or adhesive labels. Heat transfer
marking is accomplished by using a tool that electrically heats the print head. The heated print head is
applied either to a special foil, which transfers the marking to the cable sheath, or to precut pieces of
heat shrink tubing. Adhesive labels have the markings machine printed with indelible ink.
Installation documentation is of the utmost importance, yet it is most often neglected during an
installation. Installation documentation covers a wide range of paperwork. Make sure all drawings,
sketches and specifications are properly annotated to show the actual, current as installed, condition.
The updated records are critical to maintain and troubleshoot the equipment.
Also, make sure equipment test documentation is annotated, which could include test data, final
facility alignment, data recordings, and oscilloscope photographs.
Color coding
Wires are coated with insulation of different colors to help identify each wire, or pair of wires, within
a cable. Cables often contain many wires. Cables come in many different sizes from 25 pair to
thousands of pairs. The wires within the cable jacket are grouped together by a wrapping called a
binder. The binder is generally a thin, colored plastic tape that spirals around a specific number of
pairs; the pair colors repeat within each separate binder. Within the binders are the individual pairs
(two wires each), which are twisted together. One wire of each pair is called the “tip” or “primary,”
and the other the “ring” or “secondary.”

Now that we know how the wires are grouped, we can establish the color coding standard. Looking at
the table below, you can see the “tip” wire group order is: white, red, black, yellow, and violet.
Additionally, you see the “ring” colors repeat in each tip group in the order of blue, orange, green,
brown and slate. This establishes the order for the first 25 pairs. Looking under the Super Unit
column shows the additional breakdown of binder colors in order to increase your pair count. Note
your binder color scheme’s relationship to your tip and ring colors.

CABLE and WIRE COLOR CODE

Tip (+)   Ring (-)   Pair Number   Binder   Binder Color      Pair Count
White     Blue            1           1     White - Blue        1 - 25
White     Orange          2           2     White - Orange     26 - 50
White     Green           3           3     White - Green      51 - 75
White     Brown           4           4     White - Brown      76 - 100
White     Slate           5           5     White - Slate     101 - 125
Red       Blue            6           6     Red - Blue        126 - 150
Red       Orange          7           7     Red - Orange      151 - 175
Red       Green           8           8     Red - Green       176 - 200
Red       Brown           9           9     Red - Brown       201 - 225
Red       Slate          10          10     Red - Slate       226 - 250
Black     Blue           11          11     Black - Blue      251 - 275
Black     Orange         12          12     Black - Orange    276 - 300
Black     Green          13          13     Black - Green     301 - 325
Black     Brown          14          14     Black - Brown     326 - 350
Black     Slate          15          15     Black - Slate     351 - 375
Yellow    Blue           16          16     Yellow - Blue     376 - 400
Yellow    Orange         17          17     Yellow - Orange   401 - 425
Yellow    Green          18          18     Yellow - Green    426 - 450
Yellow    Brown          19          19     Yellow - Brown    451 - 475
Yellow    Slate          20          20     Yellow - Slate    476 - 500
Violet    Blue           21          21     Violet - Blue     501 - 525
Violet    Orange         22          22     Violet - Orange   526 - 550
Violet    Green          23          23     Violet - Green    551 - 575
Violet    Brown          24          24     Violet - Brown    576 - 600
Violet    Slate          25          25     Violet - Slate    601 - 625

The columns Tip and Ring are holdovers from the original telephone systems. Today they are used to
identify polarity of the wire pair. Although reverse polarity may not affect current telephone voice
communication systems, it can slow down data rates for modem communications.
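The tip/ring ordering in the table is algorithmic (the tip color advances every five pairs while the ring colors cycle), so the colors of any of the first 25 pairs can be derived rather than memorized. A small Python sketch of that lookup:

```python
# Tip colors advance every five pairs; ring colors cycle within each group.
TIP = ("White", "Red", "Black", "Yellow", "Violet")
RING = ("Blue", "Orange", "Green", "Brown", "Slate")

def pair_colors(pair_number):
    """Tip and ring colors for pairs 1-25 (binders repeat the same cycle)."""
    index = pair_number - 1
    return TIP[index // 5], RING[index % 5]

print(pair_colors(1))    # ('White', 'Blue')
print(pair_colors(7))    # ('Red', 'Orange')
print(pair_colors(23))   # ('Violet', 'Green')
```

The same integer-division pattern also gives the binder colors in the Super Unit column, since each binder reuses the 25-pair cycle.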
Figure 5–1 shows two wiring color schemes for RJ45 network (also called Ethernet) connections; the
kind generally used for connecting a computer or printer to a network hub or switch. This type of
connection is referred to as a straight through connection, since both ends of the cable use the same
color code. Notice that the color schemes use the same color codes, but the pin assignments are
different from those on a 25-pair cable. The color schemes shown below are specifically used to
achieve high data rate on twisted pair CAT-5 cable. The T568A and T568B represent two different
color standards. Most newly installed equipment follows the T568B standard. Regardless of what
standard you are using, the key is to keep it consistent throughout your system.

Figure 5–1. Example of a coloring scheme.

BL: blue       W-BL: white – blue
BR: brown      W-BR: white – brown
G: green       W-G: white – green
O: orange      W-O: white – orange

In addition to the straight through cable, you may come across a network crossover cable. These
cables are used to connect two devices in a peer-to-peer fashion without a hub/switch. The table below
depicts the color code standard for each end of a T568B crossover cable. Note the transmit and
receive signals cross at each end, hence the name.

T568B Crossover Cable


RJ45 Pin# (End 1)   Wire Color     Signal      RJ45 Pin# (End 2)   Wire Color     Signal
1 White/Orange Transmit+ 1 White/Green Transmit+
2 Orange Transmit- 2 Green Transmit-
3 White/Green Receive+ 3 White/Orange Receive+
4 Blue Unused 4 White/Brown Unused
5 White/Blue Unused 5 Brown Unused
6 Green Receive- 6 Orange Receive-
7 White/Brown Unused 7 Blue Unused
8 Brown Unused 8 White/Blue Unused
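The pin relationships in the table can be summarized as a mapping from end-1 pins to end-2 pins. The following Python sketch reads that mapping directly off the table above, purely as a sanity check:

```python
# End-1 pin -> end-2 pin for the T568B crossover shown above.
# The transmit pair (1, 2) lands on the receive pair (3, 6), and the
# unused 4/5 pair swaps with the unused 7/8 pair in this variant.
CROSSOVER = {1: 3, 2: 6, 3: 1, 4: 7, 5: 8, 6: 2, 7: 4, 8: 5}

# A straight-through cable, by contrast, maps every pin to itself:
STRAIGHT = {n: n for n in range(1, 9)}

# Two crossover cables in series behave like one straight-through path:
assert all(CROSSOVER[CROSSOVER[n]] == n for n in STRAIGHT)
print(CROSSOVER[1], CROSSOVER[2])   # 3 6
```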

Fiber optic installation


Methods for installing optical fiber cable are essentially the same as used for placing conventional
copper cable. However, optical fiber cable is a high capacity transmission medium, whose qualities
and characteristics can be degraded if it is subjected to excessive pulling, tension, sharp bends, and
crushing bends.
The number of splices in a fiber cable should be controlled to minimize transmission loss and to
reduce the installation time and expenses generated due to splices.
In areas of high potential damage such as congested, highly used computer room false floors, the use
of flexible conduit for additional crush and mechanical protection is highly recommended.

When installing fiber optic cable in cable trays or ladder racks, install the fiber to minimize
potential damage when additional cables are installed. Secure it off to one side with cable ties. Do
not create a bed of cables spanning the width of the cable tray or ladder rack.

035. Inspection standards


Inspections are frequently performed on equipment by communications technicians. It is important to
know what kind of checklists and/or checksheets you must use and how they are organized. This
lesson covers Quality Control Checksheets and Communications Special Instructions. Let’s get a
better understanding of inspections by first looking at Air Force Communications Quality Control
Checksheets (AFCQCC).
Air Force Communications Quality Control Checksheets
AFCQCCs are guides used primarily by Quality Assurance (QA) to help determine equipment
condition, maintenance quality, and maintenance management effectiveness. Work center and staff
functions may use AFCQCCs when performing self-inspections. AFCQCCs have a standardized
format and can be found on the Air Force Network Integration Center’s (AFNIC) Cyberspace Support
Maintenance Arena Community of Practice (CoP). AFCQCCs are not directive. Conflicts are
resolved between AFCQCCs and TOs or other official Air Force publications in favor of the higher-
level publication. You will not use AFCQCCs to operate, maintain (i.e., tuning, aligning, adjusting,
etc.), or troubleshoot equipment. When equipment or management evaluations are done, they are not
bound by the AFCQCCs. Other checks may be added to ensure a thorough evaluation. MAJCOMs
may add any additional checks necessary to make sure unique mission requirements are met.
AFCQCCs are organized into numbered and titled series designator groups. The following table
shows the AFCQCCs series designator numbers and major functional or equipment categories titles.

SERIES TITLE
100 Mandatory Post-upgrade Personnel/Self-Inspection/IG Preparation Evaluations.
200 Other QA Evaluations.
400 Ground Radar Equipment.
500 Ground Radio Equipment.
600 Teletype/Cryptographic Equipment.
700 Telephone/Wire Equipment.
800 Computer/Display Equipment.
900 Television/Imagery/Intrusion Detection Equipment.
1000 Systems Control Equipment.
1100 Instrumentation and Telemetry Equipment.
1200 Other Cyberspace Support Equipment/Systems.
1300 Electric Power Generation Equipment and Vehicles.
1400 Satellite, Wideband, and Telemetry Equipment.
1500 Tactical Communications Equipment.

NOTE: For air traffic control and landing systems (ATCALS), there will be no equipment related
AFCQCCs. ATCALS work centers perform self-inspection evaluations using the Air Traffic Systems
Evaluation Program (ATSEP) checklist published according to AFI 13–204, Volume 2. The ATSEP
checklist can be accessed on the Air Force Flight Standards Agency (AFFSA) ATSEP CoP.
Local communications quality control checksheets (LCQCC)
The flight commander/chief authorizes the use of LCQCCs and they are conspicuously marked as
such. If an AFCQCC or MAJCOM CQCC is published on the same item of equipment, grouping of
equipment, or management function, then do not retain or use your LCQCC.

Air Force Communications-Special Instructions (AFCSI)


AFCSIs provide a means to issue inspection and servicing requirements, operational performance
checks, and special maintenance instructions related to standard communications equipment for
which formal TO procedures are not published. They may also provide the means to issue optional or
temporary modifications on communications equipment. AFCSIs will only be published for
equipment that is applicable to more than one MAJCOM.
AFCSIs are published only after determining that the supporting ALC or contracted function cannot
produce the required technical data or implement a permanent modification.
Preventive maintenance inspection (PMI) type procedures may be formatted as TO–6WC workcards
(to fit six-hole binders) or as full page documents.
AFCSIs are numbered in three series. The 100-series is for instructions of a general nature, the 200-
series for special maintenance instructions and temporary modifications, and the 300-series for
inspection, servicing and lubrication requirements, and operational performance checks.
NOTE: The Air Force is currently in the process of eliminating AFCSIs completely. In the future any
valid material will be incorporated into the appropriate technical order.

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.
034. Standard installation practices
1. What does SIPTO guidance help to ensure?

2. Within how many inches is the first label placed on the cable from the connector and how will it
be marked?

3. Within how many inches from the first label would the second label be placed and how will it be
marked?

4. What is the purpose of the binder in a telecommunications cable?

5. What is the color code for the “tip” or “primary” color groups?

6. What is the color code for the “ring” or “secondary” cable pairs?

7. What colors are the following pairs of wires?


(1) Pair 4.

(2) Pair 7.

(3) Pair 10.



(4) Pair 15.

(5) Pair 23.

8. What are the colors of the two pairs that are reversed in a T568B network crossover cable?

9. Why should you control the number of splices in a fiber cable?

035. Inspection standards


1. What are AFCQCCs?

2. Why are there no ATCALS equipment AFCQCCs?

3. What are AFCSIs?

5–2. Grounding, Bonding, and Shielding Systems


To obtain optimum performance of electronic equipment and personnel safety, good grounding,
bonding, and shielding is essential. A sound grounding system will provide adequate protection
against power system faults, electromagnetic pulse (EMP) and lightning strikes. Bonding of metals
must be carefully considered on grounding systems. Shielding is needed throughout a building for
personnel safety and equipment interference control. For a new facility, requirements in each of these
areas are defined to assure that the necessary measures are incorporated into the final structure and
equipment installation. This section briefly introduces requirements for each.

036. Grounding
This lesson covers the function of grounding and you are expected to know basic facts and terms
about this subject.
Grounding is the provision of a low-impedance conductive path between the earth and the grounded
object. Grounding has three main functions: personnel safety, equipment protection, and noise
reduction.
a. Personnel safety is provided by grounding electrical equipment, buildings, and large metal objects that
could conduct dangerous currents. Grounding such items diverts dangerous currents to earth and
activates protective devices (such as fuses and circuit breakers) that shut down the affected circuit.
b. Equipment is protected by grounding power sources, equipment racks, building structures, and other
conductive objects. This prevents transient voltages and currents from reaching levels that could damage
equipment.
c. Electrical noise is reduced by reducing the difference in voltage potential between equipment and by
connecting and lowering the impedance between the signal reference subsystem and the earth electrode
subsystem. Noise control may also require other measures such as shielding or filtering of local noise
sources.

Facility ground system


The facility ground system provides a low-impedance conductive path between the earth and the
grounded facility. This system is composed of the following four subsystems: earth electrode,
lightning protection, fault protection, and signal reference.
The earth electrode subsystem
This subsystem provides the connection between the earth and the other subsystems in the facility
ground system. All communications facilities are connected to the earth by capacitive coupling
(resulting from physical displacement of equipment in relation to the actual ground plane), accidental
contact, or intentional connection. The intentional connections make up the earth electrode
subsystem. A typical earth electrode subsystem consists of a network of buried conductors (earth
electrodes) such as ground rods, metal straps, wires, tubes, grids, plates, sheets, water pipes, and well
casings.
The lightning protection subsystem
This subsystem provides a safe path to earth for lightning that strikes a facility. A “safe” path means
one that bypasses personnel and equipment or objects that would otherwise be injured or damaged.
The lightning protection subsystem also indirectly protects the facility from transient voltages
propagated on communications and power lines that enter the facility.
Components of the lightning protection subsystem include the following:
 Air terminals (lightning rods), or overhead ground wires, are used to protect structures from
direct strikes.
 Down conductors to provide a low impedance path from air terminals to the earth electrode
subsystem. This minimizes lightning currents in equipment or structures that are vulnerable to
damage from such currents.
 Surge arresters to protect the facility from transient voltages propagated along power and
communications lines. Such transients are caused by lightning, EMP, and switching
operations at power distribution stations.
The fact that tall structures attract lightning is used to protect shorter objects in the immediate area. In
effect, a tall structure establishes “zones of protection” around itself (fig 5–2). At communications
sites with an antenna tower, the equipment and power buildings often fall completely within the
“cone of protection” of the tower and require no additional protection against lightning. Corners of
large buildings, however, may fall outside the cone of protection. In such cases, air terminals 25 to 61
cm (10 to 24 in) high and spaced not more than 6 m (20 ft) apart will be installed on the unprotected
portion of the roof. Power feeds, communications terminals, storage sheds, administrative buildings,
storage tanks, intrusion alarms, and perimeter or security lighting may also fall outside a cone of
protection. Such structures will be protected with air terminals as noted above or with overhead
ground wires.
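As a worked example of the 6 m (20 ft) spacing rule, the minimum number of air terminals along an unprotected roof edge can be estimated. This Python sketch is one illustrative interpretation of the spacing limit, not a procedure from this text:

```python
import math

def terminals_needed(edge_length_m, max_spacing_m=6.0):
    """Minimum terminals along one edge so no gap exceeds the spacing limit."""
    intervals = math.ceil(edge_length_m / max_spacing_m)
    return intervals + 1  # one terminal at each end of every interval

# A 25 m unprotected roof edge needs six terminals (five 5 m gaps):
print(terminals_needed(25))   # 6
```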
The fault protection subsystem
This subsystem protects personnel and equipment from voltages caused by faults in electrical circuits.
It is sometimes called “the safety ground.” The fault protection subsystem provides grounding
conductors that enable protective devices (e.g., fuses, circuit breakers, or ground-fault interrupters) to
operate when a circuit fault occurs. Fault protection conductors will be electrically separate from the
phase and neutral conductors of the power distribution system. They should also be physically
separated from signal conductors and the signal reference subsystem, but should be installed in the
same conduit as other power conductors.
Figure 5–2. Typical zones of protection.

The signal reference subsystem


This subsystem is often called, simply, the “signal ground.” It establishes a voltage reference for
signal circuits and controls electrical noise and static in a facility. Within a piece of equipment, a
signal ground may be a bus bar or any conductor that serves as a reference plane of the signal circuits
within the equipment. Between pieces of equipment, the signal ground will be a grid or a network of
wires, straps, grounded equipment racks, and other conductors. Signal grounds can serve several
circuits within a piece of equipment or several pieces of equipment within a facility, as well as
interconnecting all C-E equipment within a facility. The signal reference network can be grounded at
one point within the facility or at multiple points, depending on the types of equipment and
operational frequencies involved.
In modern facilities, the vast majority of signal reference subsystems are multipoint systems. The
most common method of establishing a multipoint signal reference subsystem is to bond equipment
cabinets, racks, conduit, duct, bus bars, wires and other conductors to a common interior metal sheet
or grid. The resulting network is known as an “equipotential plane” since all points in the network are
at the same potential. The term “equipotential plane” can also be used to refer simply to the sheet or
grid (to which the other conductors are connected). Such sheets or grids are usually installed in or on
the floor or ceiling of a facility and are connected to the earth electrode subsystem at multiple points.

037. Bonding
This lesson covers the functions of bonding and you are expected to know basic facts and terms about
this subject.
Bonding occurs when a low-impedance path between two metal surfaces is established. A good bond
is also mechanically strong and resists corrosion and stress. Good bonding is essential in all circuits—
both communications and power—as well as in the lightning protection subsystem—where
conductors must be joined. Bonds should be made so the mechanical and electrical properties of a
path are characterized by the bonds, not merely the junction of the metal surfaces (the junction would
have a much higher impedance). Bonds can be made in many ways: by welding, brazing, soldering, or
by compression using bolts, nuts, clamps, and special straps.
The term “bond” refers to both the mechanical interface between the joined conductors and to
bonding straps used to interconnect two separate conductors. The mechanical bond must be strong
and the bonding strap must resist deterioration through corrosion or looseness (see fig. 5–3).
Bonds in the earth electrode subsystem normally are direct permanent bonds and do not exceed a
resistance of 1 milliohm. All bonds in the earth electrode subsystem must be protected against
corrosion and mechanical damage.
Bonds in the lightning protection subsystem normally are welded or brazed, although Underwriters
Laboratories (UL) approved high-compression clamping devices may be used. However, exothermic
welds or brazing will be used on all bonds not readily accessible for inspection and maintenance.
Hardware used in the lightning protection subsystem should be made of metals that do not corrode
easily, such as zinc, copper, cadmium, aluminum, and bronze. Do not use soft solder.
Bonds in the fault protection subsystem normally are exothermic welds. Semi-permanent mechanical
bonds such as crimps and bolts or bonding straps may be used if exothermic welds cannot be made
and will not exceed a resistance of 12 milliohms. Soft solder will not be used in the fault protection
subsystem.
Signal reference bonds normally are direct mechanical bonds such as crimps, connectors, bolts, or
soft solder. Exothermic welding will be used for connections to the earth electrode subsystem.
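The resistance limits quoted above (1 milliohm for bonds in the earth electrode subsystem, 12 milliohms for mechanical bonds in the fault protection subsystem) can be captured in a simple pass/fail check. This is only an illustration of the limits in this lesson; the names and structure are assumptions.

```python
# Maximum bond resistance limits from the text, in ohms.
BOND_LIMITS_OHMS = {
    "earth_electrode": 0.001,    # 1 milliohm
    "fault_protection": 0.012,   # 12 milliohms
}

def bond_passes(subsystem, measured_ohms):
    """True if a measured bond resistance meets the subsystem limit."""
    return measured_ohms <= BOND_LIMITS_OHMS[subsystem]

print(bond_passes("earth_electrode", 0.0008))   # 0.8 milliohm: passes
print(bond_passes("fault_protection", 0.015))   # 15 milliohms: fails
```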

Figure 5–3. Example of bonding.

038. Shielding systems


Shielding serves two basic functions: to prevent interference between multiple emanating systems
(electrical/communications) and to minimize compromising emanations as covered in the
EMSEC/TEMPEST discussion in your common core CDCs.
The effectiveness of a shielded enclosure (or cabinet) is a measure of its ability to attenuate
electromagnetic energy. Shielding effectiveness is expressed in decibels (dB) and is determined by
measuring the field strength in volts per meter with and without the shield. The overall effectiveness of a
shielded enclosure is its ability to prevent the unwanted passage of radiated and conducted
electromagnetic signals. Openings such as doors, filters, and air vents must be specially designed and
installed to prevent loss of shielding effectiveness.
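Because shielding effectiveness is the decibel ratio of the field strengths measured with and without the shield, it can be computed directly. For field quantities such as volts per meter, the conventional form is 20·log10 of the ratio; the measurements in the example below are assumed values.

```python
import math

def shielding_effectiveness_db(e_without_vm, e_with_vm):
    """Shielding effectiveness in dB from field strengths (V/m)
    measured without and with the shield in place."""
    return 20 * math.log10(e_without_vm / e_with_vm)

# A shield that reduces a 10 V/m field to 0.01 V/m provides 60 dB
# of attenuation (a factor-of-1000 reduction in field strength).
print(round(shielding_effectiveness_db(10, 0.01)))
```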
Shielding the equipment, room, or facility is an excellent way to contain compromising emanations
within the inspectable space. Because this method is very expensive, its use must be validated by a
certified TEMPEST technical authority. Shielding is considered in hostile environments where it is
known compromising emanations are exploited. Also, consider using shielding in other, less hostile
environments where US Government control is extremely limited. Shielding is applied directly to
equipment (encapsulation), rooms (shielded enclosure), or facilities (global). There are two standards
for shielding effectiveness:
1. The Air Force standard for shielding effectiveness (attenuation) for shielded enclosures and
global shielding is 50 dB (plus an initial 10-dB allowance for lifetime degradation). Do not
install cryptographic equipment or radio equipment in a shielded enclosure.
2. The standard for equipment is National Security Telecommunications and Information
Systems Security Advisory Memorandum (NSTISSAM)/TEMPEST 1–92, Level I.
Cable shields generally consist of braid, foil, metal wrap, or conduit. A wire twisted around an
insulated conductor can be used as a shield. You must be careful during installation to maintain the
integrity of the shield. The effectiveness of a shield is a function of the conductivity of the metal,
strand size, percentage of coverage, and size of openings. Multiple layers of shielding are more
effective than a single layer. Separating the shields with dielectric material or using special strip braid
can further increase effectiveness. Although power leaked by a cable may be a very small percentage
of transmitted power, this percentage could cause significant interference to sensitive circuits.
Containing conducted compromising emanations
Evaluating all conductive paths for the escape of compromising emanations is a difficult task because
there are so many. There are obvious paths like telephone lines and other signal lines. However, less
obvious paths are the power lines (and the electrical metallic tubing [emt] they run in), extension
cords, signal ground wires, alarm signal lines, fire suppression piping, air conditioning and heating
ducts, water pipes, building metallic members, and so forth. Each of these areas must be evaluated
based on a number of factors like the threat level, sensitivity of the information, location of the
facility on the base relative to the boundary of the inspectable space, and so forth.
RED and BLACK concept
The RED and BLACK concept, by definition, establishes areas for the placement of equipment
processing classified information (RED) that are separate and unique from the areas for equipment
processing unclassified information (BLACK). The concept has two parts:
1. Physical Separation—All equipment, wire lines, components, and systems that process
classified information are considered RED. All equipment, wire lines, components, and
systems that process encrypted classified information or unclassified information are
considered BLACK. The RED and BLACK concept is to establish minimum guidance for
physical separation to decrease the probability that electromagnetic emissions from RED
devices will couple to BLACK systems.
2. Electrical Separation—Electrical separation ensures that every signal conductor from a RED
device is routed to another RED device, or is encrypted before connection to a BLACK
device. Electrical separation addresses signal distribution, power distribution, and grounding.
Switches and other devices used to interface between RED and BLACK circuits and
equipment should exhibit the following port-to-port isolation characteristics:
 100 dB over the baseband audio frequency range between 0.3 and 15 kilohertz.
 80 dB over the baseband video frequency range of 0 to 5 megahertz (MHz).
 60 dB over the frequency range from 1 times the basic data rate to 10 times the basic
data rate of the digital signals processed.
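The three port-to-port isolation figures above can be expressed as a minimal compliance check. This sketch is illustrative only; the signal-type labels are assumptions.

```python
# Minimum port-to-port isolation requirements from the list above, in dB.
REQUIRED_ISOLATION_DB = {
    "audio": 100,    # baseband audio, 0.3 to 15 kHz
    "video": 80,     # baseband video, 0 to 5 MHz
    "digital": 60,   # 1x to 10x the basic data rate
}

def isolation_ok(signal_type, measured_db):
    """True if measured isolation meets the requirement."""
    return measured_db >= REQUIRED_ISOLATION_DB[signal_type]

print(isolation_ok("audio", 105))   # meets the 100 dB requirement
print(isolation_ok("digital", 55))  # falls short of 60 dB
```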
RED and BLACK equipment


Equipment is composed of machines that process information in electronic, electrical, or
electromechanical form. As information is processed, current flows in wires and traces in the
equipment, electronic circuits change state, switches open and close, solenoids are energized and de-
energized, images are created on screens with electron streams, and many other similar actions occur.
When these actions take place, emanations are generated. If classified information is processed, then
these are compromising emanations. When processing classified information, there are two types of
equipment:
1. RED equipment processes clear-text classified information.
2. BLACK equipment processes encrypted or unclassified information.
Designating equipment as RED or BLACK further ensures the physical and electrical separation
requirement. All RED equipment is connected only to other RED equipment or the output is
encrypted before connecting to BLACK equipment.

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.

036. Grounding
1. What are the three main functions of grounding?

2. What are the four subsystems of the facility ground system?

3. What facility ground subsystem is also known as the safety ground?

037. Bonding
1. What is bonding?

2. What does the term bond refer to?

3. What is the maximum resistance for bonds in the earth electrode subsystem?

4. What type of bond is not allowed for the lightning protection and fault protection subsystems?

038. Shielding systems


1. What are the two standards for shielding effectiveness?

2. Name at least three less obvious conductive paths for the escape of compromising emanations.

3. What is considered red equipment?

4. What is considered black equipment?

5–3. Powering Electrical Equipment


Have you ever been watching television or listening to the radio when the power failed? Have you
ever come home to find the clock on your microwave and stove flashing? Have you ever stumbled in
the dark because the power failed and the lights were out? If you have, you know what an
inconvenience a power failure is. Now, consider the aftermath of a power failure on a sensitive
command and control system or tactical system. Clearly, such events could have grave consequences
especially if these systems support a critical mission. Batteries and uninterruptible power supplies
offer an effective solution to these problems.

039. Batteries
Batteries are reliable and portable emergency power sources. Batteries today are becoming more and
more specialized. They vary not only in size and shape but in performance characteristics. Batteries
are designed for intermittent or constant use, for light or heavy current drain, for long-term voltage
stability, for hazardous atmosphere use, and for other specific applications. Because batteries are an
expensive source of power, the proper choice depends on intended use. Batteries are divided into two
categories: primary and secondary cells. Each category contains several different types. A general
familiarity with batteries enables you to decide whether the type you are using or choosing is the best
to meet your mission requirement.
Primary cells (non-rechargeable)
Carbon zinc and alkaline are primary cells. This variety is also called a dry cell because of its moist
paste electrolyte. A look at the operational characteristics of these batteries will show the difference.
Carbon zinc
Carbon zinc is the familiar flashlight battery with a nominal output of 1.5 volts. The performance of
this cell is seriously degraded by low temperatures, but its shelf life can be extended by storing it
at temperatures in the 40°F range.
Alkaline
Where high current drain is required, the alkaline battery is the choice. The cell has a nominal voltage
of 1.5 volts, a very low internal resistance and a high service capacity. The ampere-hour capacity is
relatively consistent over a wide range of discharge rates and current drains. Alkaline cells are made
in all the standard flashlight-cell sizes and several others. They cost three times as much as carbon zinc
cells but considerably less than mercury types, and provide 10 times the service of carbon
zinc. Alkaline batteries are used to power many devices such as digital cameras, CD players, motors
in models and toys, and so forth. They easily handle current drains that would strain the capacity of
other types of batteries. Alkaline cells have good high- and low-temperature performance, a flatter
voltage response than carbon zinc and a fairly good shelf life. Unfortunately, alkaline cells contain a
very corrosive electrolyte that will destroy sensitive equipment should the cell leak.
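Ampere-hour capacity, mentioned above, translates directly into service time for a constant current drain. The figures below are illustrative assumptions, not ratings from this text, and real capacity varies with discharge rate and temperature.

```python
def runtime_hours(capacity_ah, drain_a):
    """Rough service time from ampere-hour capacity and a constant
    current drain. Actual capacity falls off at high discharge rates."""
    return capacity_ah / drain_a

# An assumed 10 Ah battery feeding a steady 0.5 A load lasts about 20 hours.
print(runtime_hours(10, 0.5))
```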
Secondary cells (rechargeable)
Secondary cells are rechargeable cells. Lead-acid, nickel cadmium, and lithium are common types.

Lead-acid
The automobile storage battery is probably the best known application of the lead-acid cell. The ratio
of charging power to discharge power is a big advantage of this cell. Discharge power, along with low
initial costs, makes it preferable where economy is important. It comes in a variety of shapes, sizes,
capacities, and qualities. The cost of the best quality can be 10 times that of the cheapest on the
market. Quality is usually commensurate with cost. As a practical matter, the lead-acid battery cannot be sealed.
The recharge cycle generates gases that must be vented. It has a highly corrosive electrolyte. It cannot
be recharged at low temperatures and will freeze if left in the discharged condition at freezing
temperatures. The cell’s nominal voltage is 2.0 volts. Highly reactive sponge lead makes up
the negative electrode, lead dioxide forms the positive material and a sulfuric acid solution forms the
electrolyte.
Nickel cadmium
Nickel cadmium (NiCd) batteries were little known in this country until 30 years ago, although a half
century of successful use in Europe proved them an outstanding electrochemical device. They have
excellent charge retention properties and may be recharged many times without degradation. This
type of battery is noted for its high-rate and short-term discharge capability and good low temperature
performance. Larger sizes are used for emergency lighting, marine power systems, railroad signaling,
diesel engine starting, and similar applications. NiCd batteries are now hermetically sealed (air-tight)
for most applications. Applications previously uneconomical because of high energy
requirements are now practical and possible with this complete line of rechargeable batteries. Portable
communications equipment, ranging from laptop computers to satellites, is adaptable to this
rechargeable sealed cell. The sealed cell is free of all the usual maintenance routines associated with
wet rechargeable batteries. The nominal voltage of either a sealed or unsealed nickel cadmium cell is
1.28 volts. However, there are some significant drawbacks to using nickel cadmium.
There are four disadvantages to using nickel cadmium. First, there is an explosion hazard: short
circuiting the negative and positive terminals can cause an explosion, and so can overcharging.
Second, thin conductive crystals called dendrites may develop, causing internal short circuits and
premature battery failure. Third, disposing of these batteries in a landfill or by incineration can cause
substantial pollution; it is best to recycle NiCd batteries when they are no longer used. Lastly, there
are recharging problems associated with NiCd batteries. The so-called “lazy battery effect” is the
result of overcharging: the battery appears to be fully charged but discharges shortly after it is put
into use. Another problem is the “memory effect.” NiCd batteries tend to remember the point in their
charge cycle where recharging occurs. Upon subsequent use, a sudden drop in voltage happens at that
same point, as if the battery had been discharged. Lithium batteries, however, do not suffer from the
memory effect.
Lithium
Lithium batteries are commonly used in many devices such as laptops, digital cameras, phones, MP3
players, and other consumer electronic devices. The cell’s nominal voltage is about 3.0 volts,
but it can vary from about 2.7 volts (discharged) to about 4.23 volts (fully charged). The lithium
battery is lightweight, has a high power capacity, no memory effect, and a long shelf life.
Because of its ideal features, it can be used in many critical devices such as pace makers or other
implantable electronic medical devices. Unfortunately, there are some disadvantages.
There have been numerous safety recalls for lithium batteries. Lithium batteries have been known to
rupture, ignite, or explode when exposed to high temperatures. Just like the NiCd batteries, if the
terminals are shorted together an explosion may occur. Safety devices are used to monitor this battery
to prevent harm to the user. The safety devices could include a shut-down separator for over
temperature, tear away tab for internal pressure, vent for pressure relief, and thermal interrupt for
overcharging. If used, these safety mechanisms cause irreversible damage to the battery and take up
space.

This sample of primary and secondary batteries is a random choice based on popularity. It by no
means covers the field. What we have shown so far is that each battery has advantages and
disadvantages that you must consider in battery selection. Simply comparing voltage and physical
size will not prove satisfactory to determine a suitable replacement. Final selection is a trade-off of
desirable and undesirable characteristics and is normally an optimum balance of size and weight
against life and operating costs. To choose a battery, you need to know the variety available and the
job each type can do. The search for the best is continuing and the batteries in the laboratory today
will replace some types in present use.
040. Uninterruptible power supply
UPS systems exist for only one reason: to provide an alternate source of clean alternating current
(AC) power in an emergency. The UPS is a combination battery charger and inverter. It is used to
power critical loads that cannot tolerate power fluctuation or loss. Before we introduce the UPS, we’ll
clarify a few “power-problem” definitions that a UPS is designed to handle.
Defining power problems
You can sum power line faults into five basic categories: sags, blackouts, spikes, surges, and noise.
Sag
Sags are short term decreases in voltage levels, also known as brownouts. Sags are the most common
power problem. They are caused by the start-up power demands of many electrical devices (including
motors, compressors, large computer systems, shop tools, etc.). Sags also are an electric company’s
means of coping with extraordinary power demands. In a procedure known as “rolling brownouts,”
the utility will systematically lower voltage levels in certain areas for hours or days at a time. Hot
summer days, when air conditioning requirements are at their peak, will often prompt rolling
brownouts. Sags can “starve” a computer of the power it needs to function, causing frozen keyboards
and unexpected system crashes with the end result being the loss or corruption of data.
Blackout
A blackout signals total loss of utility power. Excessive demand on power grids, lightning storms,
downed power lines, car accidents, earthquakes, and so forth, are primary causes. Suffer a blackout
and you will lose the work you didn’t save to the hard drive.
Spike
A spike is an instantaneous, dramatic increase in voltage, also called an impulse. Akin to the force of
a tidal wave, a spike can enter electronic equipment and damage or completely destroy components.
Major causes of spikes are from nearby lightning strikes. Spikes also can occur if utility power lines
are downed in a storm or as the result of a car accident. A single spike can produce catastrophic
damage to hardware and loss of data.
Surge
Surges are short term increases in voltage, typically lasting at least 1/120 of a second. High-powered
electrical motors such as air conditioners and household appliances produce surges. When this
equipment is switched off, the extra voltage is dissipated through the power line. Computers and
similar sensitive electronic devices are designed to receive power in a certain voltage range. Anything
outside the expected peak and average voltage levels will stress delicate components and cause
premature failure.
Noise
Noise is more technically referred to as electro-magnetic interference (EMI) and radio frequency
interference (RFI). Electrical noise disrupts the smooth sine wave you should expect from utility
power. Electrical noise is caused by many factors and phenomena, including lightning, load
switching, generators, radio transmitters, and industrial equipment. It may be intermittent or chronic.
Noise introduces glitches and errors into executable programs and data files.
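The five fault categories above can be distinguished roughly by voltage level and duration. The classifier below is only a sketch: the duration threshold comes from the definitions in this lesson (a surge lasts at least 1/120 of a second, one half-cycle at 60 Hz), and noise is omitted because it is a waveform distortion rather than a voltage-level event.

```python
def classify_power_event(voltage_ratio, duration_s):
    """Classify a power event by voltage (as a fraction of nominal)
    and duration in seconds. Noise is not covered here."""
    if voltage_ratio == 0:
        return "blackout"           # total loss of utility power
    if voltage_ratio < 1.0:
        return "sag"                # short-term decrease (brownout)
    if duration_s < 1 / 120:
        return "spike"              # instantaneous, dramatic increase
    return "surge"                  # short-term increase, >= 1/120 s

print(classify_power_event(0.0, 2.0))    # blackout
print(classify_power_event(0.85, 1.0))   # sag
print(classify_power_event(1.5, 0.001))  # spike
print(classify_power_event(1.2, 0.05))   # surge
```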
Uninterruptible power supply construction and theory of operation


UPS systems are an important part of protection against power problems. During short-term outages
the UPS system provides backup power to mask the outage. During long-term outages the UPS ensures
systems are shut down properly or, in situations where a generator is available, the UPS ensures the
protected loads continue to receive stable, regulated power throughout the outage. UPS systems also
perform an equally valuable function by protecting equipment from fluctuations in power quality.
How the UPS handles incoming power—the UPS topology—determines how effective the UPS can
be in shielding sensitive electronics from power fluctuations. It also determines how frequently the
UPS has to go to battery to protect the load, impacting battery life and performance.
In 1999, the International Electrotechnical Commission (IEC) released standards to ensure
consistency in how UPS topologies are identified. The IEC defined three types of UPS:
1. Passive standby.
2. Line interactive.
3. Double conversion.
There are substantive differences in how each of these three topologies handle incoming power and
these differences have a direct impact on UPS performance.
Passive standby
Passive Standby is the term the IEC adopted to more accurately describe the topology that has
traditionally been referred to as “offline.” Figure 5–4 shows the topology of the passive standby UPS.
The IEC defines passive standby as follows: In normal mode of operation, the load is supplied with
AC input power via the UPS switch. When the AC input supply is out of UPS preset tolerances, the
unit enters stored energy mode of operation (battery power) by activating the inverter and the load is
transferred to the inverter directly or via the UPS switch. The battery-inverter combination maintains
continuity of load power for the duration of the stored energy time or until an acceptable AC input
supply returns to within UPS preset tolerances and the load is transferred back. This is a relatively
inexpensive UPS to produce, but due to limitations in how it conditions incoming power, it is
typically limited to applications that are not mission critical and are smaller than 3 kilovolt ampere
(kVA).

Figure 5–4. Passive standby topology.


(Reprinted with permission of Liebert Corporation)
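The passive standby decision described by the IEC definition amounts to a simple tolerance check: pass utility power through while it is within preset limits, otherwise transfer the load to the battery/inverter. The voltage window below is an assumed example, not an IEC figure.

```python
def ups_source(input_volts, low=108.0, high=132.0):
    """Passive standby source selection: utility while input is within
    preset tolerances (an assumed +/-10% window), else the inverter."""
    if low <= input_volts <= high:
        return "utility"   # normal mode: load fed via the UPS switch
    return "inverter"      # stored energy mode: load fed from battery

print(ups_source(120.0))   # normal mode
print(ups_source(90.0))    # stored energy mode
```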

Line interactive
Line interactive systems use a transformer or inductor between the utility power source and the load
to correct or filter some variations in input power. Figure 5–5 shows the topology of the line
interactive UPS.
The IEC defines the operation of a line interactive system as follows: In normal mode of operation,
the load is supplied with conditioned power by a parallel connection of the AC input and the UPS
inverter. The inverter or the power interface is operating to provide output voltage conditioning
and/or battery charging. The output frequency is dependent upon the AC input frequency. When the
AC input supply voltage is out of the UPS preset tolerances, the inverter and the battery maintain
continuity of load power in stored energy mode of operation and the power interface disconnects the
AC input supply to prevent back feed from the inverter. The unit runs in stored energy mode for the
duration of the stored energy time or until the AC input supply returns within UPS design tolerances,
whichever occurs sooner. In the event of a UPS functional unit failure, the load may be transferred to
bypass. This topology provides some degree of power conditioning, but does not effectively isolate
the load from major power problems, such as input frequency transients and certain types of input
faults. In addition, it is much more dependent on battery power for conditioning than a double
conversion topology.

Figure 5–5. Line interactive topology.


(Reprinted with permission of Liebert Corporation)

Double conversion
What was traditionally referred to as an “online” topology was redefined by the IEC as a double
conversion UPS to more accurately distinguish between this topology and the line interactive
approach, which some manufacturers tried to label as “online.” Figure 5–6 shows the topology of the
double conversion UPS.
The IEC defines the double conversion topology as follows: In normal mode of operation, the load is
continuously supplied by the rectifier/inverter combination. When the AC input supply is out of UPS
preset tolerances, the unit enters stored energy mode of operation, where the battery/inverter
combination continues to support the load for the duration of the stored energy time or until the AC
input returns to UPS design tolerances, whichever is sooner. In the event of a rectifier/inverter failure
or the load current becoming excessive, either transiently or continuously, the unit enters bypass
mode where the load is temporarily supplied via the bypass line from primary or secondary power.
Because of the isolation and degree of power conditioning this topology delivers, it is preferred for
applications that have low tolerance for downtime and those that utilize redundant UPS systems and a
backup generator. It is sometimes referred to as “online double conversion” because many people still
know this topology by the term that was commonly used for it prior to the creation of the IEC
standards.

Figure 5–6. Double conversion topology.


(Reprinted with permission of Liebert Corporation)

Topology and uninterruptible power supply performance


There are significant differences in performance between each of these topologies. The passive
standby topology provides limited, if any, power conditioning capabilities and is generally used only
in non-critical desktop applications.
The line interactive topology does provide some degree of power conditioning and consequently has
been promoted as a solution for mission critical applications by some UPS manufacturers. However,
the difference between a line interactive and double conversion system is as significant as the
difference between a passive standby and line interactive system. The double conversion system
provides significantly greater protection.
Double conversion systems provide the highest level of protection. The double conversion topology
more effectively isolates the load from power problems, including frequency transients and input AC
source faults, does not have to go to battery as frequently and is better able to handle the sags and
surges that occur during transition to generator power. Consequently, double conversion UPS systems
are better able to handle the extreme conditions that occur before, during, and after a power outage.

041. Fixed generators and transfer switches


Generator sets produce either single or three phase power. Choose a single phase set if you do not
have any motors above five horsepower. Three-phase power is better for motor starting and running.
Most homeowners will require single phase, whereas, industrial or commercial applications usually
require three phase power. Three phase generators are set up to produce 120/208 or 277/480 volts.
Single-phase sets are 120 or 120/240.
In order for critical sites to continue their mission when the primary electrical source has been
disconnected, they must use what is known as an automatic transfer switch (ATS). Automatic transfer
switches are an integral part of the power generation process, allowing smooth and immediate transfer
of electrical current between multiple sources and the load. When the generator is operating, the
transfer switch prevents dangerous feedback of current to the utility’s system. It also ensures that
different power sources are fully synchronized before their power is combined or loads are
transferred, which is imperative for safe operation.
The ATS senses when utility power is interrupted and starts up the generator if the utility power
remains absent. In about five to ten seconds, when the generator is producing full power, the transfer
switch disconnects the load from the utility and connects it to the generator, restoring electricity to the
load. The transfer switch continues to monitor utility power, and when it is restored, switches the load
from the generator back to the utility. Once the generator is disconnected, it goes through a cool-
down routine and is automatically shut down.
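The ATS behavior just described follows a fixed sequence for an outage and for restoration. The sketch below lists those steps; the action names are a paraphrase of the paragraph above, not vendor logic.

```python
def ats_actions(utility_available):
    """Ordered ATS actions when utility power fails or returns."""
    if not utility_available:
        return [
            "start generator",
            "transfer load to generator",  # once at full power, ~5-10 s
        ]
    return [
        "transfer load back to utility",
        "generator cool-down and shutdown",
    ]

print(ats_actions(False))
print(ats_actions(True))
```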

Self-Test Questions
After you complete these questions, you may check your answers at the end of the unit.
039. Batteries
1. What are the two categories of batteries?

2. What is another name for primary cell batteries?



3. What primary battery cell has a nominal voltage of 1.5 volts, a very low internal resistance, and a
high service capacity?

4. What secondary cell battery is known for its light weight, high power capacity, no memory effect
and long shelf life?

040. Uninterruptible power supply (UPS)


1. What is the primary reason for a UPS?

2. What are the five basic categories of power line faults?

3. What power line fault is analogous to a tidal wave and can damage or completely destroy
components?

4. What are the three types of UPS defined by the IEC?

5. What UPS topology has been traditionally referred to as “offline?”

6. What UPS topology has been traditionally referred to as “online?”

041. Fixed generators and transfer switches


1. How many phases are preferred in a generator set for motors and industrial settings?

2. What are the power specification outputs for the two types of generator sets?

3. What is the purpose of an ATS?



Answers to Self-Test Questions


034
1. Installation projects are accomplished safely with special emphasis on personnel and equipment safety.
2. Within 12 inches and will be marked “TO” and will indicate the bay, equipment shelf, terminal block, and
plug or jack to which the connector will mate.
3. The second marker will be placed within two inches of the first and will be marked “FROM.”
4. To group by spiraling around a specific number of pairs that replicate in color multiple times throughout a
cable.
5. White, red, black, yellow, and violet.
6. Blue, orange, green, brown, and slate.
7. (1) White –Brown.
(2) Red–Orange.
(3) Red–Slate.
(4) Black–Slate.
(5) Violet–Green.
8. White/orange and orange; white/green and green.
9. To minimize transmission loss and to reduce the installation time and expenses generated due to splices.
035
1. Guides used primarily by quality assurance (QA) to help determine equipment condition, maintenance
quality, and maintenance management effectiveness.
2. ATCALS work centers perform self-inspection evaluations using the Air Traffic Systems Evaluation
Program (ATSEP) checklist published in accordance with AFI 13-204, Volume 2.
3. Provide a means to issue inspection and servicing requirements, operational performance checks, and
special maintenance instructions related to standard communications equipment for which formal TO
procedures are not published. They may also provide the means to issue optional or temporary
modifications on communications equipment.
036
1. Personnel safety, equipment protection, and noise reduction.
2. The earth electrode subsystem, lightning protection subsystem, fault protection subsystem, and the signal
reference subsystem.
3. The fault protection subsystem.
037
1. The establishment of a low-impedance path between two metal surfaces.
2. Both the mechanical interface between the joined conductors and to bonding straps used to interconnect
two separate conductors.
3. It should not exceed a resistance of 1 milliohm.
4. Soft solder.
038
1. The standard for shielded enclosures and global shielding is 50 dB. The standard for equipment is National
Security Telecommunications and Information Systems Security Advisory Memorandum
(NSTISSAM)/TEMPEST 1–92, Level I.
2. The power lines (and the electrical metallic tubing [emt] they run in), extension cords, signal ground wires,
alarm signal lines, fire suppression piping, air conditioning and heating ducts, water pipes, building metallic
members, and so forth.
3. Equipment that processes clear-text classified information.
4. Equipment that processes encrypted or unclassified information.

039
1. Primary and secondary cells.
2. It is also called a dry cell because of its moist paste electrolyte.
3. Alkaline.
4. Lithium battery.
040
1. To provide an alternate source of clean AC power in an emergency.
2. Sags, blackouts, spikes, surges, and noise.
3. A spike.
4. Passive standby, line interactive, and double conversion.
5. Passive standby.
6. Double conversion.
041
1. Three-phase power.
2. Three-phase generator sets produce 120/208 or 277/480 volts. Single-phase sets produce 120 or
120/240 volts.
3. It allows for smooth and immediate transfer of electrical current between multiple sources and the load.

Do the unit review exercises before going to the next unit.



Unit Review Exercises


Note to Student: Consider all choices carefully, select the best answer to each question, and circle
the corresponding letter. When you have completed all unit review exercises, transfer your answers to
the Field-Scoring Answer Sheet.
Do not return your answer sheet to the Air Force Career Development Academy (AFCDA).
88. (034) Why should the number of splices be controlled during fiber optic installation?
a. Prevent eavesdropping.
b. To aid in troubleshooting.
c. Reduce potential damage.
d. Minimize transmission loss.
89. (035) Which is a function of Air Force Communications Special Instructions?
a. Permanently change technical order procedures.
b. Means to issue optional equipment modifications.
c. Provide significant system repair history for equipment transfers.
d. Paraphrases technical order procedures via quick reference index cards.
90. (036) What are the three main functions of grounding?
a. Personnel safety, equipment protection, and noise reduction.
b. Signal amplification, personnel safety, and emissions security.
c. Equipment protection, signal amplification, and capacitance reaction.
d. Corrosion prevention, emissions security, and over-voltage protection.
91. (036) What facility ground subsystem uses surge arrestors?
a. The earth electrode.
b. The fault protection.
c. The signal reference.
d. The lightning protection.
92. (036) What facility ground subsystem is also called the safety ground?
a. The earth electrode.
b. The fault protection.
c. The signal reference.
d. The lightning protection.
93. (037) Bonds in the earth electrode subsystem normally are direct permanent bonds and do not
exceed a resistance of
a. 1 milliohm.
b. 2 milliohms.
c. 5 milliohms.
d. 10 milliohms.
94. (037) What kind of bonds are normally applied to signal reference bonds?
a. None.
b. Brazing.
c. Welding.
d. Mechanical.

95. (038) What are the two basic functions of shielding?
a. Prevent interference between multiple emanating systems and minimize compromising emanations.
b. Prevent interference between multiple emanating systems and block all emanations from exiting a
secure facility.
c. Equalize all electronic systems to the same potential and minimize compromising emanations.
d. Equalize all electronic systems to the same potential and block all emanations from exiting a
secure facility.
96. (038) What equipment should be designated as RED?
a. Equipment processing unencrypted unclassified information.
b. Equipment processing clear-text unclassified information.
c. Equipment processing encrypted classified information.
d. Equipment processing clear-text classified information.
97. (039) Which of these is considered a primary cell?
a. Nickel cadmium.
b. Carbon zinc.
c. Lead-acid.
d. Lithium.
98. (040) What power line fault category is more typically referred to as electromagnetic interference
and radio frequency interference?
a. Sag.
b. Spike.
c. Noise.
d. Blackout.
99. (040) What type of uninterruptible power supply (UPS) has traditionally been referred to as
“offline” by technicians?
a. Passive standby.
b. Line interactive.
c. Double conversion.
d. Ferro resonant standby.
100. (041) How many phases of generator power are usually required for industrial and commercial
applications?
a. Single.
b. Double.
c. Three.
d. Four.

Student Notes

Glossary of Abbreviations, Acronyms, and Terms


Terms
Air Force spectrum interference resolution (AFSIR) program—Official Air Force program for
documenting and resolving spectrum interference problems. The program is intended to identify
and resolve EMI at the lowest possible level.

amplitude—The magnitude or size of a signal. The amount of change in a quantity from its zero value.
Usually modified by peak, root-mean-square (rms), effective, maximum, and so forth.

amplitude modulation (AM)—The process of varying the amplitude of a radio-frequency (RF) signal at
an “audio rate.” The RF signal is called the carrier, and the audio is called the intelligence. In this
process, two sidebands are developed—one that is equal in frequency to the carrier plus the
intelligence (upper sideband) and one that is equal to the carrier minus the intelligence (lower
sideband). The percentage of modulation is the amount of change in the carrier that takes place as
a result of the modulation process. The percentage of modulation is equal to:

% of modulation = (A ÷ R) × 100

Where:
A = the amplitude of the modulating signal, usually audio.
R = the amplitude of the modulated signal, usually radio frequencies.

To be accurate, we have to consider the amplitude of both the audio and the radio frequency. For
example, if we modulate a 20 Vrms RF signal with a 5 Vrms audio, the percentage of modulation
would be 25 percent. If we modulate a 20 Vrms RF signal with a 20 Vrms audio, the percentage of
modulation would be 100 percent. If we modulate a 10 Vrms RF signal with a 10 Vrms audio, the
percentage of modulation would also be 100 percent. In short, AM is the type of modulation that
changes the height (amplitude) of the waveform as time goes by.
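The worked examples above can be checked with a few lines of Python (an illustrative helper, not part of the CDC text):

```python
def percent_modulation(audio_vrms, rf_vrms):
    """Percentage of modulation: (A / R) x 100, where A is the amplitude of
    the modulating (audio) signal and R is the amplitude of the modulated
    (RF) signal."""
    return (audio_vrms / rf_vrms) * 100

# Worked examples from the definition above
assert percent_modulation(5, 20) == 25.0
assert percent_modulation(20, 20) == 100.0
assert percent_modulation(10, 10) == 100.0
```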

asynchronous transmission—A data transfer method in which signals are individually synchronized,
usually by the use of start and stop bits. Each data character is preceded by a start-bit and is
followed by an end-bit to inform the receiving equipment where the character begins and ends.
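The start/stop framing can be sketched in Python (a hypothetical helper, assuming the common convention of a 0 start bit and a 1 stop bit):

```python
def frame_character(data_bits):
    """Frame one character for asynchronous transmission: a start bit (0)
    tells the receiver a character begins; a stop bit (1) marks its end."""
    return [0] + list(data_bits) + [1]

# 8 data bits become 10 bits on the line: 20 percent framing overhead
framed = frame_character([1, 0, 1, 1, 0, 0, 1, 0])
assert framed == [0, 1, 0, 1, 1, 0, 0, 1, 0, 1]
assert len(framed) == 10
```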

attenuation—The decrease in power of a signal, light beam, or lightwave, either absolutely or as a fraction
of a reference value. The decrease usually occurs as a result of absorption, reflection, diffusion,
scattering, deflection, or dispersion from an original level. In fiber optics, it is normalized and
expressed in decibels per kilometer (dB/km).

audio frequency (AF)—The frequencies between 15 Hz and 20,000 Hz are generally considered to be
the audio frequencies. These are the frequencies that most human ears can hear. The most
important components of the human voice are primarily found between 300 and 3,000 Hz. For
voice communications systems, this is often the frequency range used.

avalanche photodiode—Photodiode in which the photocurrent is amplified with low additional noise by
carrier multiplication in an electric field (avalanche effect).

band limiting—Defines the highest and lowest frequency encoded.



bandwidth (BW)—In communication electronics, the space the transmitted signal will occupy in the
frequency spectrum. In computer networks, bandwidth is the amount of data that can be carried
from one point to another in a given period of time.

bit—An electrical or light pulse whose presence or absence indicates data. The optical waveguide
capacity to transmit information without error is expressed in bits per second per unit length.

bonding—The process of establishing the required degree of electrical continuity between the conductive
surfaces of members to be joined.

carrier frequency— A higher frequency signal of constant frequency, phase, and amplitude that can be
varied to efficiently transmit a lower frequency data or intelligence signal.

checksum—An error detection method that computes a check value from the bits in a transmitted unit
using some algorithm and stores the result with the data. When the data is received, a new checksum
is calculated and compared with the existing checksum to detect errors.
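Since the entry does not name a specific algorithm, a simple additive checksum over bytes can illustrate the idea (an assumed, minimal scheme):

```python
def checksum(data):
    """Additive checksum: sum the bytes of the unit, keep the low 8 bits."""
    return sum(data) % 256

stored = checksum(b"HELLO")          # the sender stores this with the data
assert checksum(b"HELLO") == stored  # receiver recalculates: no error
assert checksum(b"HELLP") != stored  # a corrupted byte changes the checksum
```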

connectors—Hardware installed on cable ends permitting attachment to another cable or to a transmitter
or receiver.

cyclic redundancy check (CRC)—An error checking technique used to ensure the accuracy of
transmitting digital data. The transmitted messages are divided into predetermined lengths which,
used as dividends, are divided by a fixed divisor. The remainder of the calculation is appended onto
and sent with the message. At the receiving end, the computer recalculates the remainder and
compares it to the transmitted remainder for error detection.
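The divide-and-append step can be sketched as binary long division using XOR; the short message and 3-bit divisor here are hypothetical, chosen only to keep the arithmetic visible:

```python
def crc_remainder(message_bits, divisor_bits):
    """Divide the message (padded by the remainder width) by a fixed
    divisor using XOR long division; return the remainder to append."""
    bits = list(message_bits) + [0] * (len(divisor_bits) - 1)
    for i in range(len(message_bits)):
        if bits[i]:  # only divide where the leading bit is 1
            for j, d in enumerate(divisor_bits):
                bits[i + j] ^= d
    return bits[len(message_bits):]

# Message 1011 divided by 101 leaves remainder 01, which is sent appended
assert crc_remainder([1, 0, 1, 1], [1, 0, 1]) == [0, 1]
```

The receiver repeats the same division and compares remainders; a mismatch flags a transmission error.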

dark current—The output current a photodiode emits in the absence of light.

dB/km—Units of attenuation per unit length. Decibels/kilometer.

detector—A device that changes a light signal into an electrical signal—the receiver in an optical system.

digitization—The process of converting an analog signal into a digital signal.

dispersion—A degradation factor in a fiber that limits the bandwidth of the information transmitted. It
causes components of a light pulse to arrive at the end of a fiber at different times, making it
appear spread out. Bandwidth limitation occurs because the pulses spread so far that they are
indistinguishable from each other.

distortion—Any change in the shape of a signal from input to output of a device.

electromagnetic—Anything having both magnetic and electrical properties.

electromagnetic interference (EMI)—The disruption of operation of an electronic device when it is in
the vicinity of an electromagnetic field in the RF spectrum that is caused by another electronic
device.

electromagnetic pulse (EMP)—An extremely strong, short-lived magnetic field resulting from a nuclear
explosion. A high-altitude explosion could cause a damaging magnetic field at 1,500 to 3,000
miles.

exoatmospheric nuclear explosion—The explosion of a nuclear weapon above the sensible atmosphere
(above 120 kilometers) where atmospheric interaction is minimal.

fall of potential principle—A test method that is used to measure the total resistance of an earth ground
system by using two earth electrodes.

fiber—A single discrete optical transmission element usually composed of a core and cladding.

fiber bundle—A group of parallel optical fibers over which a loose-fitting polyvinyl chloride (PVC)
jacket has been extruded.

fiber optic communication system—Converts an electrical signal into light, transmits the light through
the fiber, and converts the light back into an electrical signal.

fiber optics—The technique of converting light or images through a particular configuration of glass or
plastic fibers.

frequency—The number of cycles per unit of time denoted by hertz (Hz). One hertz = one cycle per
second.

frequency deviation—The amount a transmitted signal shifts away from the carrier frequency.

frequency modulation (FM)—Type of modulation that changes frequency of the carrier wave.

gain—How much a signal is increased by circuit action.

ground—The electrical connection to earth primarily through an earth electrode subsystem. This
connection is extended throughout the facility via the facility ground system consisting of the
signal reference subsystem, the fault protection subsystem, the lightning protection subsystem, and
the earth electrode subsystem.

guard band—Unused part of the radio spectrum between radio bands, for the purpose of preventing
interference.

hertz (Hz)—A unit of frequency of one cycle per second.

high-altitude electromagnetic pulse (HEMP)—Electromagnetic pulse that is emanated from the
detonation of a single nuclear warhead several kilometers in the atmosphere. It propagates to all
points in the air or ground within line of sight of the burst.

injection laser diode (ILD)—Sometimes called the semiconductor laser diode. A laser in which the lasing
occurs at the junction of N-type and P-type semiconductor materials.

jamming—Deliberate radiation or reflection of electromagnetic energy for the purpose of disrupting
enemy use of electronic devices or systems.

joint spectrum interference resolution (JSIR) program—Department of Defense program for
documenting and resolving spectrum interference problems that cannot be resolved at the Air
Force level.

kilometer (km)—1,000 meters, 3,281 feet, or 0.621 mile. The most common length measurement in
fiber optics.

laser (light amplification by stimulated emission of radiation)—A coherent light source that can
produce intense radiation in the form of a collimated beam.

light detector—An output device, such as a photodiode, that detects light and converts it into an
electrical output that's a useful facsimile of the original input information.

longitudinal redundancy check (LRC)—An error checking method that generates a parity bit from a
specified string of bits on a horizontal line.
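For byte-oriented data, the column-wise even-parity bits of an LRC are equivalent to XORing the bytes of the block together; a minimal sketch with assumed example values:

```python
def lrc(block):
    """Longitudinal redundancy check: an even-parity bit for each bit
    position (column) across the block, computed by XORing the bytes."""
    check = 0
    for byte in block:
        check ^= byte
    return check

# Columns holding an odd count of 1s get a 1 in the check character
assert lrc([0b1010, 0b0110]) == 0b1100
```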

modulation—A process by which certain characteristics of one wave, the carrier wave, are modulated or
modified in accordance with a characteristic of another wave or signal, the modulating wave.

modulating signal—Lower frequency such as audio which varies the higher frequency carrier.

modulation index (MI)—The amount of frequency deviation divided by the frequency of the modulating
signal.
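As a quick worked example (the numbers are illustrative, not taken from the text):

```python
def modulation_index(deviation_hz, modulating_hz):
    """MI = frequency deviation / frequency of the modulating signal."""
    return deviation_hz / modulating_hz

# 75 kHz of deviation produced by a 15 kHz modulating tone gives MI = 5
assert modulation_index(75_000, 15_000) == 5.0
```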

nanometer (nm)—One billionth of a meter (10⁻⁹ meter).

nanosecond (ns)—One billionth of a second (10⁻⁹ second).

noise—Any unwanted signal in an output from a circuit.

numerical aperture (NA)—An indication of an optical fiber's acceptance of impinging light.

optical amplifier—Device that amplifies an optical signal directly, without the need to first convert it to
an electrical signal.

oscillator—A circuit that generates a varying output. The rate that it varies is determined by the circuit
component values. Any amplifier with enough regenerative feedback will be an oscillator.

overmodulation—Any percentage of modulation greater than 100% that causes the bandwidth of an AM
signal to increase considerably and produces severe distortion.

parity bit—An extra bit that can be added to a group of "0" bits and "1" bits to make the parity of the
group odd or even. The parity bit is discarded when the message is received.
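The even/odd choice can be sketched in Python (an illustrative helper, not from the CDC):

```python
def add_parity(data_bits, even=True):
    """Append a parity bit so the total count of 1s is even (or odd)."""
    ones = sum(data_bits)
    parity = ones % 2 if even else (ones + 1) % 2
    return list(data_bits) + [parity]

assert add_parity([1, 0, 1]) == [1, 0, 1, 0]              # count already even
assert add_parity([1, 0, 1], even=False) == [1, 0, 1, 1]  # force an odd count
```

The receiver recounts the 1s; a single flipped bit changes the count's parity and is detected, after which the parity bit is discarded.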

phase modulation (PM)—Type of modulation that changes the phase of successive waves of the carrier.

phase shift keying (PSK)—Method of digital communication in which the phase of a transmitted signal
is varied to convey information.

photodetector, receiver—Same as light detector; it transforms light into electricity.

pigtail—A short length of optical fiber that's permanently attached to an optical connector or coupler and
serves as a splicing point for a longer optical fiber.

positive intrinsic negative (PIN) photodiode—Photodiode with absorption primarily in a depletion layer
within its P-N junction. It has a high quantum efficiency but no internal gain.

pulse amplitude modulation (PAM)—A modulation technique where the message information is
encoded in the amplitude of the pulse train.

pulse code modulation (PCM)—A complete digitization process.



pulse modulation—The process of varying the pulses of a pulse train in some way to represent the
analog input signal.

pulse position modulation (PPM)—A modulation technique that changes the position of the pulses in
the pulse train according to the amplitude of the input signal.

pulse width modulation (PWM)—A modulation technique that generates variable-width pulses to
represent the amplitude of an analog input signal.

quantizing—The process by which an infinite number of amplitude values is assigned a finite number of
available transmission values.
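A one-line sketch of the idea: snap each sample to the nearest available level (the step size here is an assumed illustration). The leftover difference is the quantization noise defined below.

```python
def quantize(sample, step):
    """Map a continuous sample amplitude to the nearest available level."""
    return round(sample / step) * step

original = 0.237
q = quantize(original, 0.1)   # nearest available level is 0.2
noise = q - original          # quantization noise, about -0.037
assert abs(q - 0.2) < 1e-9
```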

quantization noise—The difference between the original analog signal and the reconstructed quantized
digital value.

radio frequency (RF)—Generally, the frequency spectrum from 15 kHz to 300 GHz.

regenerator—In optical fiber communications, devices that receive incoming distorted digital pulses,
convert them to the electrical domain, sharpen the leading and falling edges, amplify the pulses,
and retransmit the signal optically.

repeaters—In optical fiber communications, devices that take incoming low-level optical signals, convert
them to the electrical domain, amplify them, then convert them back to optical signals and pass
them on.

sampling—Conversion of a continuous time signal into a discrete time signal.

satellite constellation—A group of electronic satellites working in concert. Such a constellation can be
considered to be a number of satellites with coordinated ground coverage, operating together under
shared control, synchronised so that they overlap well in coverage and complement rather than
interfere with other satellites' coverage.

scintillation effect—The signal fading or waveform distortion effects caused by the multiple path
propagation through the structured plasma field created from the interaction of the earth's magnetic
field with the high-altitude ionized region caused by the nuclear blast.

shielding—A housing, screen, or cover which substantially reduces the coupling of electric and magnetic
fields into or out of circuits or prevents the accidental contact of objects or persons with parts or
components operating at hazardous voltage levels.

sideband—Information component of the modulated waveform.

significant sideband—Sidebands that contain at least 1 percent of the total transmitted power in FM.

SINAD—Ratio of total signal power (signal plus noise plus distortion) to unwanted signal power (noise
plus distortion).

source region electromagnetic pulse (SREMP)—EMP that is produced by low-altitude nuclear or
surface bursts.

synchronous transmission—Data is sent in continuous blocks of characters spaced by fixed time
intervals.

system-generated electromagnetic pulse (SGEMP)—EMP that results from the direct interaction of
nuclear weapon gammas and X-rays with the system. SGEMP has special importance for systems
outside the atmosphere, such as satellites.

undermodulation—Any percent of modulation which is less than that for which your system is designed
to use.

vertical redundancy check—An error checking method that generates and tests a parity bit for each byte
of data that is transmitted.

voltage controlled oscillator—A free-running oscillator; its frequency is periodically calibrated by a
crystal-controlled oscillator in the automatic calibration (AUTO-CAL) section.

Abbreviations and Acronyms


%mod percent of modulation
8 PSK 8 phase shift keying
16 PSK 16 phase shift keying
A amp
AC alternating current
AFCQCC Air Force Communications Quality Control Checksheets
AFCSI Air Force Communications Special Instructions
AFFMA Air Force Frequency Management Agency
AFFSA Air Force Flight Standards Agency
AFMC Air Force Materiel Command
AFNIC Air Force Network Integration Center
AFSIR Air Force spectrum interference report
AFSMO Air Force Spectrum Management Office
AM amplitude modulation
APD avalanche photodiode
APWA American Public Works Association
ARP address resolution protocol
ARQ automatic retransmit on request
ASCII American Standard Code for Information Interchange
ATCALS air traffic control and landing systems
ATM Asynchronous Transfer Mode
ATSEP air traffic systems evaluation program
AUI attachment unit interface
BCC block check character
BER bit error rate
BITE built-in test equipment
BPSK binary phase shift keying
BR bit rate
BW bandwidth
C2 command and control
C4 command, control, communications, and computers
C-E communications-electronics
CPU central processing unit

COCOM combatant command


CoP community of practice
CRC cyclic redundancy check
CSM communications service monitor
CTTA Certified TEMPEST Technical Authority
CVSD continuously variable slope delta
CW continuous wave
dB decibels
dBm decibels (referenced to 1 milliwatt)
DC direct current
DOD Department of Defense
DSN Defense Switched Network
ECCM electronic counter-counter measure
EIA Electronic Industries Association
EIS Engineering and Installation Squadron
EM electromagnetic
EMI electromagnetic interference
EMP electromagnetic pulse
EP electronic protection
EW electronic warfare
FCC Federal Communication Commission
FCS frame check sequence
FDDI fiber distributed data interface
FEC forward error control
FM frequency modulation
FTP file transfer protocol
GaAlAs gallium aluminum arsenide
Gb/s gigabits per second
GHz gigahertz
GND ground
GS/s gigasamples per second
HDLC high-level data link control
HEMP high-altitude electromagnetic pulse
HF high frequency
Hz hertz

ICBM intercontinental ballistic missile


ICMP internet control message protocol
IEC International Electrotechnical Commission
IEEE Institute of Electrical and Electronics Engineers
IF intermediate frequency
ILD injection laser diode
IP internet protocol
ISR intelligence, surveillance, and reconnaissance
JSC joint spectrum center
JSIR joint spectrum interference report
JTF joint task force
kHz kilohertz
LAN local area networks
LASER light amplification by stimulated emission of radiation
LCQCC local communications quality control checksheets
LED light-emitting diode
LD laser diode
LRC longitudinal redundancy check
LSB lower sideband
kbps kilobits per second
kΩ kilohm
MΩ megohm
MAC media access control
MAJCOM major command
MAN metropolitan area network
Mb/s megabits per second
MHz megahertz
MI modulation index
MILSTAR Military Strategic and Tactical Relay
ms millisecond
μA microamps
mA milliamp
mV millivolts

NA numerical aperture
NIC network interface card
NiCd Nickel Cadmium
NFPA National Fire Protection Agency
nm nanometer
NRZ non-return zero
ns nanosecond
NSTISSAM National Security Telecommunications and Information Systems Security Advisory
Memorandum
PCM pulse code modulation
PDM pulse duration modulation
PEP peak envelope power
pF picofarads
PIN positive intrinsic negative
PM phase modulation
PMI preventive maintenance inspection
PSK phase shift keying
PAM pulse amplitude modulation
PPM pulse position modulation
PRN Pseudo Random Noise
pW picowatt
PWM pulse width modulation
QA quality assurance
QFIRC Quick Fix Interference Reduction Capability
QPSK quadrature phase shift keying
RAM random access memory
RF radio frequency
SCF system control facility
SGEMP system-generated electromagnetic pulse
SREMP source region electromagnetic pulse
SINAD signal plus noise and distortion
SINCGARS single channel ground to air radio system
SIPTO Standard Installation Practices Technical Orders
SNMP simple network management protocol
SNR signal-to-noise ratio
SSB single-sideband

SWR standing wave ratio


SYNC synchronization
TEMPEST Transient Electromagnetic Pulse Emanation Standard
TTL transistor-transistor logic
TV television
USB upper sideband
V volts
VAC volts of alternating current
VCO voltage controlled oscillator
VDC volts of direct current
VHF very high frequency
VLAN virtual local area network
VRC vertical redundancy check
W watt
WAN wide area network
WWW world wide web
Zin input impedance