ITC-UNIT-4

Uploaded by Era Kaushik

UNIT-4

Burst Error Detecting and correcting codes


Burst errors refer to sequences of errors that occur in a cluster within
a data transmission. Unlike random errors that occur sporadically,
burst errors affect a series of consecutive bits. Detecting and
correcting these errors is crucial for maintaining data integrity,
particularly in communication systems where data is transmitted over
noisy channels.

Burst Error Characteristics


 Definition: A burst error of length B is a contiguous sequence
of B bits in which the first and last bits, and any number of
intermediate bits, are in error.
 Common Causes: These errors often result from interference,
signal fading, or issues in transmission channels.

Error Detection and Correction Codes

1. Error Detecting Codes:


o Cyclic Redundancy Check (CRC): CRC is highly
effective for detecting burst errors. It involves appending a
sequence of redundant bits, called the CRC checksum, to
the end of a data block. Upon reception, the system
performs a polynomial division to detect errors.
o Checksum: Similar to CRC but simpler, checksums are
used to detect errors by summing the binary values in a
data block and appending the result to the end of the block.
This method is less effective for detecting burst errors
compared to CRC.
2. Error Correcting Codes:
o Reed-Solomon Codes: These are non-binary cyclic codes
that are particularly effective in correcting burst errors.
They are used in various applications like CDs, DVDs, and
QR codes. Reed-Solomon codes can correct up to t
symbols, where each symbol consists of multiple bits,
allowing for the correction of burst errors.
o BCH Codes: Bose-Chaudhuri-Hocquenghem (BCH)
codes are a class of cyclic codes that can be designed to
correct multiple burst errors. They are widely used in
communication systems and data storage.
o Convolutional Codes: These codes process data
sequentially and use shift registers to encode the data
stream. Viterbi algorithm is typically used for decoding.
They are robust against burst errors, especially when used
with interleaving.
o Interleaving: Interleaving is a technique used to spread
burst errors over a larger sequence of data, transforming
them into random-like errors that are easier to correct with
standard error-correcting codes. The data is interleaved
before transmission and de-interleaved upon reception.
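The CRC mechanism described above can be sketched in a few lines of Python. This is an illustrative example using a hypothetical 3-bit generator polynomial G(x) = x^3 + x + 1 ("1011"); real systems use standardized generators such as CRC-32, and the function names here are choices for this sketch.

```python
def crc_remainder(bits: str, poly: str = "1011") -> str:
    """Modulo-2 polynomial division; returns the CRC checksum bits."""
    n = len(poly) - 1
    padded = list(bits + "0" * n)              # append n zero bits
    for i in range(len(bits)):
        if padded[i] == "1":                   # XOR the generator in
            for j, p in enumerate(poly):
                padded[i + j] = str(int(padded[i + j]) ^ int(p))
    return "".join(padded[-n:])

def crc_check(data: str, checksum: str, poly: str = "1011") -> bool:
    """Receiver side: recompute the remainder and compare."""
    return crc_remainder(data, poly) == checksum
```

For data "1101" the checksum works out to "001", so the transmitted block is "1101001"; a degree-3 generator of this form detects every burst of length 3 or less, since such a burst cannot be a multiple of the generator polynomial.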

Applications:
 Digital Television: Uses Reed-Solomon codes
 Data Storage: HDDs and SSDs use BCH codes
 Satellite Communication: Employs a combination of
interleaving and convolutional codes to combat burst errors.

Convolutional Codes
Convolutional codes are a type of error-correcting code used
extensively in communications systems to ensure data integrity during
transmission. Unlike block codes that encode data in fixed-size
blocks, convolutional codes operate on data streams of arbitrary
length, making them well-suited for real-time applications.

Key Concepts

1. Encoding Process:
o Input Data Stream: Convolutional codes take a continuous
stream of input bits.
o Shift Registers: The encoder uses a series of shift registers
to store the input bits.
o Generator Polynomials: The shift registers are connected
to modulo-2 adders (XOR gates) based on generator
polynomials, which determine the output bits.
o Output Sequence: For each input bit, the encoder
produces a set of output bits (encoded bits), creating a
convolution of the input sequence.
2. Rate and Constraint Length:
o Code Rate (k/n): The ratio of the number of input bits (k)
to the number of output bits (n). For example, a rate 1/2
code means each input bit results in two output bits.
o Constraint Length (K): The number of bits in the shift
register, determining how many previous input bits affect
the current output. It defines the memory of the encoder.
3. Decoding Process:
o Viterbi Algorithm: A widely used decoding algorithm for
convolutional codes that performs maximum likelihood
decoding. It traces the most likely path through a trellis
diagram, representing state transitions of the encoder.
o Trellis Diagram: A graphical representation of the states
and transitions of the convolutional encoder. Each path
through the trellis corresponds to a possible sequence of
encoded bits.
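The encoding process above can be sketched as follows. This is a minimal illustration assuming a rate-1/2, constraint-length K = 3 encoder with the commonly used generator polynomials 7 and 5 (octal); the generator choice and function name are assumptions for this sketch, not taken from the text.

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder, constraint length K = 3."""
    state = [0, 0]                     # two shift-register memory elements
    out = []
    for b in bits:
        window = [b] + state           # current bit plus register contents
        out.append(sum(x * y for x, y in zip(window, g1)) % 2)  # XOR taps
        out.append(sum(x * y for x, y in zip(window, g2)) % 2)
        state = [b, state[0]]          # shift the register
    return out
```

For example, `conv_encode([1, 0, 1, 1])` yields `[1, 1, 1, 0, 0, 0, 0, 1]`: two output bits per input bit, i.e. a rate-1/2 convolution of the input stream with the generators.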

Applications:

 Satellite Communications: Ensuring data integrity over long


distances with high error rates.
 Mobile Communications: Enhancing the robustness of data in
cellular networks.
 Digital Video Broadcasting: Maintaining the quality of video
streams.
 Deep Space Communications: Handling the noise and signal
degradation in space transmissions.
Advantages:

 Real-Time Encoding
 High Error-Correction Capability
 Low Latency: Minimal delay in encoding and decoding.

Limitations

 Complexity: Higher constraint lengths increase decoding complexity.
 Redundancy: The added redundancy increases the bandwidth requirement.

Code Tree
A code tree (also called a prefix tree or trie) is a data structure used to
represent a set of strings (or sequences) in which each string is
represented by a path from the root to a leaf. Code trees provide a
structured way to represent and manipulate sequences for efficient
data compression and encoding.

Code Tree Basics

1. Structure: A code tree is a rooted tree where each edge is


labelled with a symbol from a given alphabet, and each node
represents a prefix of the sequences in the set. The root
represents the empty prefix.
2. Encoding: To encode a sequence using a code tree, start at the
root and follow the edges labelled with the symbols of the
sequence. The path you take from the root to a leaf represents
the encoded sequence.
3. Decoding: To decode an encoded sequence, start at the root and
follow the edges corresponding to the symbols of the encoded
sequence; each time a leaf is reached, output its symbol and
return to the root. Repeating this reconstructs the original
sequence.
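The encoding and decoding steps above can be sketched with a small hypothetical prefix code (a → 0, b → 10, c → 11), representing the code tree as a nested dictionary; the code itself is an assumption for this illustration.

```python
# Hypothetical prefix code: a -> 0, b -> 10, c -> 11
code = {"a": "0", "b": "10", "c": "11"}
tree = {"0": "a", "1": {"0": "b", "1": "c"}}   # nested dict as the code tree

def encode(text: str) -> str:
    """Concatenate the code word of each symbol."""
    return "".join(code[ch] for ch in text)

def decode(bits: str) -> str:
    """Walk the tree edge by edge; emit a symbol at each leaf, then restart."""
    out, node = [], tree
    for bit in bits:
        node = node[bit]
        if isinstance(node, str):      # reached a leaf
            out.append(node)
            node = tree                # back to the root
    return "".join(out)
```

Here `encode("abc")` gives "01011", and `decode("01011")` recovers "abc" without ambiguity because no code word is a prefix of another.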

Applications

1. Huffman Coding: Huffman coding is a popular application of


code trees. Huffman trees are a type of code tree used to
represent variable-length prefix codes that minimize the average
code length for a given set of symbols based on their
probabilities of occurrence.
2. Data Compression: Code trees are fundamental in data
compression algorithms where sequences are encoded into
shorter representations to save space.
3. Prefix Codes: Code trees are used to generate prefix codes,
where no code word is a prefix of another. This property ensures
that encoded data can be uniquely decoded without ambiguity.

Advantages

 Efficiency: In the form of Huffman trees, code trees allow for
efficient encoding and decoding of information.
 Uniqueness: The prefix-free property ensures unique decodability.
 Optimality: Huffman trees guarantee the shortest average code
length among prefix codes.

Trellis
It is a graphical representation of a finite-state machine, typically used
to model the behaviour of convolutional codes. It is a useful tool for
understanding and analysing the encoding and decoding processes in
error-correcting codes. It is a powerful graphical tool for representing
the states and transitions of a convolutional encoder.

Structure of a Trellis

 Nodes (States): A trellis diagram consists of nodes that represent
the possible states of the encoder at each time step.
 Edges (Transitions): The edges between nodes represent the
possible transitions from one state to another, determined by
the input bits and the encoder's structure.
 Time Steps: The trellis progresses through discrete time steps,
with each step representing the encoding of one input bit (or a
group of bits).
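One time step of a trellis can be tabulated directly from the encoder's state-transition function. The sketch below assumes the common rate-1/2, K = 3 encoder with generators (7, 5) octal (an assumption for illustration); each entry maps a (state, input bit) edge to its next state and output-bit label:

```python
from itertools import product

G1, G2 = (1, 1, 1), (1, 0, 1)    # generator polynomials (7, 5 in octal)

def transition(state, bit):
    """Next state and output-bit label for one trellis edge."""
    window = (bit,) + state
    out = (sum(a * b for a, b in zip(window, G1)) % 2,
           sum(a * b for a, b in zip(window, G2)) % 2)
    return (bit, state[0]), out

# All edges of one trellis time step: (state, input) -> (next state, outputs)
trellis_step = {
    (s, b): transition(s, b)
    for s in product((0, 1), repeat=2)
    for b in (0, 1)
}
```

With 4 states and 2 possible inputs, one time step has 8 edges; stacking copies of this table over successive time steps gives the full trellis diagram.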

Decoding with trellis: Viterbi Algorithm


Key Concepts

1. Convolutional Codes:
o These are a type of error-correcting code where the
output bits depend on both the current input bits and the
previous input bits.
o The encoder can be represented as a finite-state machine,
making the trellis an ideal way to visualize the encoding
process.
2. State Representation:
o At each time step, the state of the encoder is represented
by a node in the trellis.
o Each state corresponds to a possible combination of past
input bits that influence the current output.
3. Transitions:
o Edges between states show how the state of the encoder
changes with each new input bit.
o Each transition is labelled with the corresponding output
bits generated by the encoder.
4. Path:
o A path through the trellis from the initial state to the final
state represents a possible sequence of input bits and the
corresponding output bits.
o The correct path through the trellis corresponds to the
transmitted code sequence.

Applications

 Error Correction: Used in mobile phones, satellite


communications.
 Efficient Decoding: Enable efficient decoding algorithms like the
Viterbi algorithm, critical for real-time communication systems

Decoding of convolutional codes


Decoding convolutional codes is a critical aspect of error correction in
information theory and coding. Convolutional codes are used
extensively in various communication systems, including wireless
networks, satellite communications, and data storage systems, to
improve the reliability of data transmission by correcting errors that
may occur during transmission.

Key Concepts of Convolutional Codes

1. Encoder: Convolutional codes are generated using an encoder


that processes the input data sequence and produces a coded
output sequence. The encoder typically consists of shift registers
and modulo-2 adders.
2. Code Rate: The code rate of a convolutional code is defined as
the ratio of the number of input bits to the number of output bits.
For instance, a rate 1/2 convolutional code means that for every
input bit, two output bits are produced.
3. Constraint Length: The constraint length of a convolutional
code is the number of bits in the encoder's memory, which
affects the code's ability to correct errors. A larger constraint
length generally improves error correction capability but
increases the complexity of the decoder.

Decoding Algorithms

Several algorithms can be used to decode convolutional codes, each


with its own strengths and weaknesses. The most commonly used
algorithms are:

1. Viterbi Algorithm:
o Description: It is a maximum likelihood decoding algorithm
that finds the most likely sequence of states given the
received sequence. It is optimal in terms of minimizing the
probability of error.
o Complexity: The algorithm has a computational complexity
that grows exponentially with the constraint length, making it
suitable only for codes with moderate constraint lengths.
o Steps:
1. Initialization: Set the initial state metrics (usually, set
the metric of the initial state to zero and all others to
infinity).
2. Recursion: For each received symbol, update the
state metrics for all possible transitions between
states, keeping track of the path metrics.
3. Termination: Select the state with the best
(minimum) metric at the end of the sequence.
4. Traceback: Traceback through the state transitions
to determine the most likely transmitted sequence.
2. Sequential Decoding:
o Description: Sequential decoding uses a search strategy,
such as breadth-first or depth-first search, to explore
possible transmitted sequences. It is less computationally
intensive than the Viterbi algorithm for codes with long
constraint lengths but is not guaranteed to find the
optimal solution.
o Strengths: Suitable for codes with long constraint lengths
and less stringent real-time constraints.
o Weaknesses: Performance can degrade significantly under
high noise conditions, and it may fail to find the correct
path in such cases.

Practical Considerations

 Performance: The choice of decoding algorithm impacts the


error correction performance and computational requirements.
The Viterbi algorithm is widely used for its optimal performance
for moderate constraint lengths, while the BCJR algorithm is
essential for iterative decoding techniques.
 Hardware Implementation: Decoders for convolutional codes
can be implemented in hardware for real-time processing. The
Viterbi algorithm, in particular, is well-suited for hardware
implementation due to its regular and parallelizable structure.

Applications:
 Digital Television: To ensure robust transmission of video
signals.
 Mobile Communications (e.g., GSM, LTE): For reliable voice
and data transmission.
 Satellite Communication: For error correction in space
communication channels.
 Deep Space Communication: Used by NASA for communicating
with space probes.

Viterbi’s algorithm
Viterbi’s algorithm, named after Andrew Viterbi, is a dynamic
programming algorithm used for decoding convolutional codes,
which are a type of error-correcting code used in digital
communications. This algorithm finds the most likely sequence of
hidden states (called the Viterbi path) that results in a sequence of
observed events, especially in the context of a Hidden Markov Model
(HMM).

Key Concepts

1. Convolutional Codes: These codes add redundancy to the


transmitted information by convolving the input bits with a
series of generator polynomials. They are widely used in
communication systems, such as in mobile communications and
satellite communications, to ensure data integrity.
2. Trellis Diagram: The algorithm uses a trellis diagram to
represent the state transitions of the convolutional encoder. Each
node in the trellis corresponds to a state of the encoder, and each
edge represents a possible state transition with an associated
input bit.
3. Viterbi Algorithm: This algorithm efficiently finds the most
likely sequence of states (the Viterbi path) in an HMM given a
sequence of observed events. It reduces the complexity of
finding this path from an exponential to a polynomial time
complexity.
4. Path Metric: The path metric is accumulated as the algorithm
progresses through the trellis, representing the cumulative
measure of how well a particular path through the trellis
matches the received sequence.
5. Survivor Paths: At each step in the trellis, the algorithm retains
the path with the lowest metric (best match) for each state.
These paths are called survivor paths.

Steps of Viterbi Algorithm

1. Initialization: Initialize the path metrics for each state.


Typically, the starting state (usually state 0) is initialized to zero,
and all other states are initialized to infinity (or a very large
number).
2. Recursion: For each time step and for each state, calculate the
path metrics for transitioning into that state from all possible
previous states. Update the path metrics and retain the survivor
path for each state.
3. Termination: At the final time step, the algorithm selects the
path with the lowest metric as the most likely transmitted
sequence.
4. Path Backtracking: Trace back through the trellis from the
final state to determine the most likely sequence of transmitted
bits.
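The four steps above can be sketched as a hard-decision Viterbi decoder in Python. This illustration assumes a rate-1/2, K = 3 code with generators (7, 5) octal and uses the Hamming distance as the branch metric; no tail bits are appended, so termination simply picks the best final state. All names here are choices for this sketch.

```python
from itertools import product

G1, G2 = (1, 1, 1), (1, 0, 1)    # generator polynomials (7, 5 in octal)

def step(state, bit):
    """One encoder transition: returns (next_state, output bit pair)."""
    window = (bit,) + state
    o1 = sum(a * b for a, b in zip(window, G1)) % 2
    o2 = sum(a * b for a, b in zip(window, G2)) % 2
    return (bit, state[0]), (o1, o2)

def viterbi(received):
    """Hard-decision Viterbi decoding of a flat list of received code bits."""
    INF = float("inf")
    states = list(product((0, 1), repeat=2))
    # 1. Initialization: start state (0, 0) gets metric 0, all others infinity
    metric = {s: (0 if s == (0, 0) else INF) for s in states}
    paths = {s: [] for s in states}
    # 2. Recursion: update path metrics for every transition at each step
    for i in range(0, len(received), 2):
        r = tuple(received[i:i + 2])
        new_metric = {s: INF for s in states}
        new_paths = {s: [] for s in states}
        for s in states:
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                nxt, out = step(s, bit)
                m = metric[s] + sum(a != b for a, b in zip(out, r))
                if m < new_metric[nxt]:          # keep the survivor path
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    # 3. Termination: pick the best final state;
    # 4. Traceback is implicit, since each survivor path is stored explicitly
    best = min(states, key=lambda s: metric[s])
    return paths[best]
```

Decoding the error-free sequence [1, 1, 1, 0, 0, 0, 0, 1] recovers the input [1, 0, 1, 1] with path metric 0, and the same input is still recovered after a single bit flip, since this code's free distance of 5 allows correction of isolated errors.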

Applications
 Error Correction: Used in decoding convolutional codes in
communication systems like satellite and mobile communications.
 Speech Recognition: Applied in Hidden Markov Models (HMMs)
for recognizing spoken words.
 Bioinformatics: Used for sequence alignment and gene prediction
in computational biology.
Advantages:

 Optimal Decoding: Provides the maximum likelihood estimate


of the transmitted sequence.
 Efficiency: Despite being optimal, it is computationally efficient
due to its use of dynamic programming.

Limitations:

 Computational Complexity: The complexity increases


exponentially with the constraint length of the convolutional
code.
 Memory Usage: Requires significant memory to store path
metrics and survivor paths, especially for codes with large
constraint lengths.

Sequential Decoding
Sequential decoding is an important concept in information theory
and coding, particularly for convolutional codes. It is a technique used
for decoding convolutional codes when the constraint length of the
code is large, which makes maximum likelihood decoding impractical
due to its computational complexity. Sequential decoding provides a
trade-off between performance and complexity, making it useful for
systems with limited computational resources.

Key Concepts in Sequential Decoding

1. Convolutional Codes: These are a type of error-correcting code


where each output bit is a function of the current input bits and a
fixed number of previous input bits. This makes them
particularly suited for sequential decoding.
2. Tree Representation: Sequential decoding can be visualized as
a tree search where each path from the root to a node represents
a possible sequence of input bits. The goal is to find the path
that most likely corresponds to the received sequence.
3. Metrics: The decoder uses a metric to evaluate how well a path
matches the received sequence. Commonly used metrics
include:
o Hamming Distance: For hard-decision decoding (binary
input).
o Euclidean Distance: For soft-decision decoding (real-
valued input).
4. Stack Algorithm: One of the most popular sequential decoding
algorithms is the stack algorithm, which uses a stack (priority
queue) to keep track of potential paths. The stack stores paths
with their associated metrics, and the algorithm explores the
most promising paths first.
5. Error Correction: Sequential decoding corrects errors by
finding the path that has the smallest metric (i.e., is closest to the
received sequence). This path is then decoded to recover the
original input bits.

Advantages and Disadvantages

Advantages:

 Reduced Complexity: Sequential decoding reduces the


computational complexity compared to maximum likelihood
decoding, especially for codes with large constraint lengths.
 Flexibility: The algorithm can be adapted to balance between
performance and complexity by adjusting parameters like the
stack size.

Disadvantages:

 Variable Decoding Time: The time it takes to decode a


sequence can vary depending on the received sequence,
making it less predictable compared to fixed-complexity
algorithms.
 Error Propagation: In some cases, errors in the early part of the
sequence can propagate, leading to incorrect decoding of
subsequent bits.

Practical Applications

Sequential decoding is used in various communication systems where


efficient error correction is needed without excessive computational
resources. Some applications include:
 Deep Space Communication: Where long constraint length
convolutional codes are used due to the high noise
environment.
 Wireless Communication: In systems where computational
power is limited, such as mobile devices.

Algorithms for Sequential Decoding

Several algorithms are used for sequential decoding, with the Fano
algorithm being one of the most well-known:

1. Fano Algorithm: This algorithm dynamically adjusts the


threshold for path metrics and backtracks if the current path
does not meet the threshold, effectively exploring paths in a
depth-first manner.
2. Stack Algorithm: Another approach where paths are stored in a
stack (or priority queue), and the most promising paths are
expanded first. This can be more efficient but may require more
memory to store the stack.
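A minimal sketch of the stack approach, assuming a rate-1/2, K = 3 code with generators (7, 5) octal. For simplicity the path metric here is plain accumulated Hamming distance rather than the Fano metric, so this illustrates the search strategy, not a production decoder; all names and the metric choice are assumptions for this sketch.

```python
import heapq

G1, G2 = (1, 1, 1), (1, 0, 1)    # generator polynomials (7, 5 in octal)

def branch(state, bit):
    """Next state and output bit pair when extending a path by one bit."""
    window = (bit,) + state
    o1 = sum(a * b for a, b in zip(window, G1)) % 2
    o2 = sum(a * b for a, b in zip(window, G2)) % 2
    return (bit, state[0]), (o1, o2)

def stack_decode(received):
    """Stack sequential decoding: always extend the most promising path.
    Metric = accumulated Hamming distance (smaller is better)."""
    n = len(received) // 2
    heap = [(0, 0, (0, 0), [])]          # (metric, depth, state, path)
    while heap:
        m, depth, state, path = heapq.heappop(heap)
        if depth == n:
            return path                  # first full-length path popped wins
        r = tuple(received[2 * depth: 2 * depth + 2])
        for bit in (0, 1):
            nxt, out = branch(state, bit)
            dm = sum(a != b for a, b in zip(out, r))
            heapq.heappush(heap, (m + dm, depth + 1, nxt, path + [bit]))
```

Unlike the Viterbi algorithm, this decoder does not visit every state at every time step: it extends only the best path on the stack, which is why its running time varies with the noise in the received sequence.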

Transfer function and Distance properties of


convolutional codes
Convolutional codes are a type of error-correcting code used in digital
communication systems to improve the reliability of data
transmission. They are characterized by their encoding process, which
uses shift registers and linear operations to transform a sequence of
input bits into a sequence of output bits. Two important properties of
convolutional codes are their transfer function and distance properties.
Let's delve into each:

Transfer Function

The transfer function of a convolutional code characterizes the


relationship between the input and output sequences in the z-domain.
It provides a mathematical representation of how the code transforms
the input data stream. The transfer function is useful for analyzing the
performance and behavior of the convolutional code, particularly in
terms of its error-correcting capabilities.

For a convolutional code with a given encoder, the transfer function
T(D,N) is the generating function that enumerates the code's error
events by their output (Hamming) weight and input weight. It can be
represented as:

T(D,N) = ∑_{d,k} A(d,k) D^d N^k

where:

 A(d,k) is the number of error events whose output sequence has
Hamming weight d and whose input sequence has weight k.
 D is a dummy variable whose exponent tracks the output
(Hamming) weight of an error event.
 N is a dummy variable whose exponent tracks the input weight
of an error event.

The transfer function is derived from the state diagram of the


convolutional encoder, which is then converted to a trellis diagram.
By analysing the trellis, one can determine the weight enumerating
function (WEF) of the code, which helps in calculating the transfer
function.

Distance Properties

The distance properties of a convolutional code are crucial for


understanding its error-correcting performance. The primary distance
measures are:

1. Free Distance (d_free): The minimum Hamming distance
between any two distinct code sequences. It determines the
error-correcting capability of the convolutional code; the larger
the free distance, the better the error correction. For a
convolutional code, d_free can be found from the transfer
function (as the smallest exponent of D) or by analysing the
trellis diagram directly.
2. Weight Enumerating Function (WEF): The function that
enumerates the number of code sequences with different
weights. It provides information on the distribution of code
weights and is closely related to the transfer function. The WEF
is useful for computing the probability of error and analysing the
code's performance over noisy channels.
3. Input-Output Weight Enumerating Function (IOWEF): This
function extends the WEF by considering both the input and
output weights. It gives a more detailed characterization of the
code's performance and is particularly useful for analysing the
error probability in more complex scenarios.
The distance properties are derived from the state and trellis diagrams
of the convolutional encoder. By analysing these diagrams, one can
determine the paths that contribute to different distances and weights.

Bound on bit error rate


The bit error rate (BER) is a critical performance metric that measures
the number of bit errors per unit time or per number of bits
transmitted. It indicates the reliability of the communication system.
Several factors influence the BER, including the signal-to-noise ratio
(SNR), modulation scheme, error-correcting codes, and the
characteristics of the communication channel.

Practical Considerations

1. Modulation Scheme: Different modulation schemes have


different BER performances.
2. Error-Correcting Codes: Codes such as Hamming codes, Reed-
Solomon codes and Low-Density Parity-Check (LDPC) codes
can significantly reduce the BER by adding redundancy.
3. Channel Characteristics: The nature of the communication
channel affects the BER.
4. Signal-to-Noise Ratio (SNR): Higher SNR generally results in
a lower BER.
Bound on Bit Error Rate

1. Shannon Limit: It defines the theoretical maximum rate at
which information can be reliably transmitted over a noisy
channel. It is given by the Shannon-Hartley theorem.
2. Bhattacharyya Bound: The Bhattacharyya bound is another
method to estimate the upper bound on the bit error rate. It is
useful for evaluating the performance.
3. Hamming Bound: In coding theory, the Hamming bound (or
sphere-packing bound) is used to estimate the maximum number
of codewords that can be packed into a given space.
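As a small worked example of the quantities involved, the Bhattacharyya parameter of a binary symmetric channel with crossover probability p is Z = 2·sqrt(p·(1 − p)), and the probability of confusing two codewords at Hamming distance d is bounded by Z^d. The value of p below is illustrative, not taken from the text:

```python
import math

def bhattacharyya_bsc(p: float) -> float:
    """Bhattacharyya parameter Z of a binary symmetric channel with
    crossover probability p: Z = 2 * sqrt(p * (1 - p))."""
    return 2 * math.sqrt(p * (1 - p))

# Illustrative crossover probability (hypothetical)
z = bhattacharyya_bsc(0.01)

# Pairwise error probability for codewords at Hamming distance d = 5 is
# bounded by z**5; the bound shrinks geometrically as distance grows.
bound_d5 = z ** 5
```

Since Z < 1 for any p < 1/2, codes with larger minimum or free distance give exponentially tighter upper bounds on the bit error rate.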

Coding Gain (dB)


It refers to the improvement in performance achieved by using a
particular coding scheme over an uncoded system. This improvement
is typically measured in terms of signal-to-noise ratio (SNR) or bit
error rate (BER) for a given data rate. Coding gain can be understood
as the reduction in SNR required to achieve the same BER when using
error-correcting codes compared to an uncoded system.

Here are key points to understand coding gain:

1. Definition: It is the difference in SNR between an uncoded


system and a coded system to achieve the same BER.
2. Types of Codes:
o Block Codes: These codes work on a fixed-size block of
bits. Examples include Hamming codes, Reed-Solomon
codes.
o Convolutional Codes: These codes work on continuous
streams of data and use memory to introduce
redundancy.
o Turbo Codes and LDPC (Low-Density Parity-Check) Codes:
Modern codes that provide very high coding gain, close to
the theoretical limits defined by Shannon’s theorem.
3. Calculation of Coding Gain:
o Coding gain can be expressed as:
Coding Gain (dB) = SNR_uncoded (dB) − SNR_coded (dB)
o Here, SNR_uncoded is the SNR required to achieve a
certain BER without coding, and SNR_coded is the SNR
required to achieve the same BER with coding.
4. Impact of Coding Gain:
o Reduced Power Requirements: Lower SNR means that
less power is needed for transmission, which is beneficial
for battery-operated devices and reduces interference.
o Increased Range: For wireless communication, a lower
required SNR translates to a greater communication
range.
o Improved Reliability: Error-correcting codes can
significantly lower the BER, enhancing the reliability of the
communication system.
5. Trade-offs:
o Complexity: Implementing advanced error-correcting
codes increases the complexity of the transmitter and
receiver.
o Latency: Encoding and decoding processes can introduce
latency, which might be critical in real-time applications.
o Bandwidth: Some coding schemes require additional
bandwidth for transmitting the redundant bits.
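The coding gain formula can be applied with illustrative numbers; the SNR figures below are hypothetical, not from the text:

```python
# Hypothetical figures: suppose an uncoded link needs 9.6 dB SNR for a
# target BER, while the coded link reaches the same BER at 6.1 dB.
snr_uncoded_db = 9.6
snr_coded_db = 6.1

coding_gain_db = snr_uncoded_db - snr_coded_db     # 3.5 dB of coding gain

# The same gain expressed as a linear power ratio: the coded system needs
# roughly 2.24x less transmit power for the same BER.
power_ratio = 10 ** (coding_gain_db / 10)
```

This directly illustrates the trade-off discussed above: a 3.5 dB gain more than halves the required transmit power, at the cost of the extra complexity and bandwidth of the code.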

Practical Considerations

When designing a communication system, engineers must consider


the trade-offs between coding gain, complexity, and other factors such
as power, bandwidth, and latency. The choice of coding scheme
depends on the specific requirements of the application.
