A shift register and a modulo-2 adder are the two crucial building blocks of cyclic encoding, and using a shift register, encoding can be performed efficiently. The fundamental elements of a shift register are flip-flops (each acting as a one-bit storage unit) together with an input and an output, while the modulo-2 (binary) adder has two inputs and one output.
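Concretely, the divider circuit can be sketched in a few lines of Python (my own illustration, not from the text, assuming the generator G(X) = X^3 + X + 1 used in the examples below): the register bits play the role of flip-flops and the XOR operator plays the role of the modulo-2 adder.

```python
# Sketch of a 3-stage shift-register divider for G(X) = X^3 + X + 1.
# The low-order taps of G(X), namely X + 1, are encoded as 0b011.
def lfsr_remainder(bits, r=3, g_low=0b011):
    """Clock `bits` (highest-degree coefficient first) through the register
    and return its final contents: the remainder modulo G(X)."""
    reg = 0                                       # r flip-flops, initially cleared
    for b in bits:
        msb = (reg >> (r - 1)) & 1                # bit leaving the last flip-flop
        reg = ((reg << 1) | b) & ((1 << r) - 1)   # shift and clock in the input
        if msb:
            reg ^= g_low                          # modulo-2 adders at the taps
    return reg

# X^3 * M(X) for M(X) = 1 + X^2 + X^3, i.e. X^6 + X^5 + X^3, leaves remainder 1
print(lfsr_remainder([1, 1, 0, 1, 0, 0, 0]))      # 1
```

Feeding a valid codeword through the same register leaves it cleared, which is exactly the divisibility property cyclic decoding relies on.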
Encoding
Consider the message signal given as:
m = [1110]
so that M(X) = 1 + X + X^2 (bit i of m giving the coefficient of X^i). Multiplying by the generator polynomial G(X) = X^3 + X + 1 gives C(X) = M(X)G(X), i.e.
C = [1000110]
From the codeword bits we can clearly see that the encoded codeword contains the message and parity bits in an intermixed pattern. Thus it is a non-systematic codeword, and direct determination of the message bits is not possible.
Now consider systematic encoding of the message m = [1011], i.e.
M(X) = 1 + X^2 + X^3
while the generator polynomial is again G(X) = X^3 + X + 1.
The equation for determining the 7-bit codeword for the systematic code is given as:
C(X) = X^(n-k) M(X) + [X^(n-k) M(X) mod G(X)]
Since n = 7 and k = 4, the parity bits are the remainder of X^3 M(X) divided by G(X).
Therefore,
C = [1001011]
So here the first 3 bits are the parity bits and the last four bits are the message bits. We can cross-check: the message bits were [1011], and the parity remainder was 1, i.e. the parity code [100]. Hence, in this way both non-systematic and systematic codewords are encoded.
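Both encodings above can be reproduced with a short sketch (my own, assuming the same convention as the examples: bit i of an integer holds the coefficient of X^i):

```python
G = 0b1011              # G(X) = X^3 + X + 1
N, K = 7, 4             # (7, 4) cyclic code

def poly_mul(a, b):
    """Carry-less (modulo-2) polynomial multiplication."""
    r, i = 0, 0
    while b >> i:
        if (b >> i) & 1:
            r ^= a << i
        i += 1
    return r

def poly_mod(a, g):
    """Remainder of the binary polynomial division a(X) mod g(X)."""
    while a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

def bits(c, n=N):
    """Integer -> bit list [X^0, X^1, ..., X^(n-1)]."""
    return [(c >> i) & 1 for i in range(n)]

# Non-systematic: C(X) = M(X) G(X) with m = [1110], M(X) = 1 + X + X^2
print(bits(poly_mul(0b111, G)))               # [1, 0, 0, 0, 1, 1, 0]

# Systematic: C(X) = X^(n-k) M(X) + [X^(n-k) M(X) mod G(X)] with m = [1011]
shifted = 0b1101 << (N - K)                   # M(X) = 1 + X^2 + X^3, times X^3
print(bits(shifted | poly_mod(shifted, G)))   # [1, 0, 0, 1, 0, 1, 1]
```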
Decoding
To understand how error detection for a cyclic code takes place, consider R(X) as the received polynomial, C(X) as the encoded polynomial, G(X) as the generator polynomial, and E(X) as the error polynomial, so that R(X) = C(X) + E(X).
The syndrome is the remainder of R(X) divided by G(X). If the obtained remainder is 0, there is no detected error; if the remainder is not 0, there is an error that needs to be corrected.
This is because every valid codeword C(X) is exactly divisible by G(X): when the coded bits contain no error, the syndrome is zero and no correction is required.
Now, let us check the error polynomial. Consider the error table given below, where we have assumed a single error in each of the bits of the code; each syndrome is the remainder S(X) = E(X) mod G(X) for G(X) = X^3 + X + 1:

E(X)    Error pattern    Syndrome S(X)
1       [1000000]        1
X       [0100000]        X
X^2     [0010000]        X^2
X^3     [0001000]        X + 1
X^4     [0000100]        X^2 + X
X^5     [0000010]        X^2 + X + 1
X^6     [0000001]        X^2 + 1
Now let us check a received polynomial against this table. Consider
R(X) = 1 + X^2 + X^5
and compute the syndrome by dividing the received polynomial R(X) by the generator polynomial G(X). We will get
S = X ≠ 0
This means the received codeword contains an error. From the tabular representation, it is clear that the syndrome X corresponds to the error pattern [0100000]. Thus, this represents an error in the second bit of the received code, and adding that pattern back corrects it.
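The whole single-error correction loop can be sketched as follows (my own illustration, with bit i of an integer holding the coefficient of X^i):

```python
G, N = 0b1011, 7        # G(X) = X^3 + X + 1, codeword length 7

def poly_mod(a, g):
    """Remainder of the binary polynomial division a(X) mod g(X)."""
    while a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

# Single-error syndrome table: S(X) = X^i mod G(X)  ->  error pattern X^i
table = {poly_mod(1 << i, G): 1 << i for i in range(N)}

def correct(r):
    """Correct at most one bit error in the received word r."""
    s = poly_mod(r, G)                 # syndrome
    return r if s == 0 else r ^ table[s]

r = 0b100101                           # R(X) = 1 + X^2 + X^5
print(bin(poly_mod(r, G)))             # 0b10, i.e. S = X
print(bin(correct(r)))                 # 0b100111: second bit flipped back
```

The corrected word 1 + X + X^2 + X^5 is exactly divisible by G(X), as a valid codeword must be.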
x^3+x+1
x^3+x^2+1
The program supports error control: you can only encode a message that is exactly 3 bits long, and you can only decode a message that has no more than 7 bits.
Technologies
Swing
Maven
Shortened cyclic codes
New optimum binary shortened cyclic codes with redundancy r = 32 and
burst-error correction capability b are presented. The codes are found
by performing an exhaustive computer search using the Kasami
algorithm, and their performance is compared with analytical bounds by
Reiger, Abramson and Campopiano. The true burst-error correction
capability of the [2112, 2080] shortened Fire code selected for 10 Gb/s
Ethernet is determined to be b = 11 and [2112, 2080] shortened cyclic
codes with higher burst-error correction capability b = 13 are given. The
double burst-error detection properties of three cyclic redundancy
check codes used in standards are compared.
BCH Code
In coding theory the BCH codes form a class of cyclic error-correcting
codes that are constructed using finite fields. BCH codes were invented
in 1959 by Hocquenghem, and independently in 1960 by Bose and Ray-
Chaudhuri. The abbreviation BCH comprises the initials of these
inventors' names.
One of the key features of BCH codes is that during code design, there is a
precise control over the number of symbol errors correctable by the
code. In particular, it is possible to design binary BCH codes that can
correct multiple bit errors. Another advantage of BCH codes is the ease
with which they can be decoded, namely, via an algebraic method known
as syndrome decoding. This simplifies the design of the decoder for
these codes, using small low-power electronic hardware.
Decoding
There are many algorithms for decoding BCH codes. The most common ones follow this general outline:
1. Calculate the syndromes for the received vector.
2. Determine the number of errors t and the error locator polynomial from the syndromes.
3. Calculate the roots of the error locator polynomial to find the error locations.
4. Calculate the error values at those error locations (needed for non-binary codes).
5. Correct the errors.
During some of these steps, the decoding algorithm may determine that
the received vector has too many errors and cannot be corrected. For
example, if an appropriate value of t is not found, then the correction
would fail. In a truncated (not primitive) code, an error location may be
out of range. If the received vector has more errors than the code can
correct, the decoder may unknowingly produce an apparently valid
message that is not the one that was sent.
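In step 1 the syndromes are field-element evaluations, S_j = R(α^j), rather than polynomial remainders. The following toy sketch is my own construction (assuming GF(2^4) built from the primitive polynomial x^4 + x + 1, and a single error); it shows how the first syndrome already pinpoints a lone error location:

```python
def gf_mul(a, b):
    """Multiply in GF(2^4), reducing by the primitive polynomial x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= 0b10011
        b >>= 1
    return r

ALPHA = 0b0010                      # alpha = x generates all of GF(16)*
powers = [1]
for _ in range(14):
    powers.append(gf_mul(powers[-1], ALPHA))   # alpha^0 .. alpha^14
log = {p: i for i, p in enumerate(powers)}

def syndrome(received, j):
    """S_j = R(alpha^j) for a length-15 word, bit i = coefficient of x^i."""
    s = 0
    for i, bit in enumerate(received):
        if bit:
            s ^= powers[(i * j) % 15]
    return s

# Zero codeword with a single error at position 5: S_1 = alpha^5,
# so the discrete log of S_1 recovers the error location directly.
received = [0] * 15
received[5] = 1
print(log[syndrome(received, 1)])   # 5 -> the error location
```

With more errors, a full decoder would compute several syndromes and run steps 2 and 3 (for example Berlekamp-Massey followed by a Chien search); the sketch only covers the single-error case.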
Mattson-Solomon polynomials
The Mattson–Solomon polynomial is described and it is shown to be an
inverse discrete Fourier transform based on a primitive root of unity.
The usefulness of the Mattson–Solomon polynomial in the design of
cyclic codes is demonstrated. The relationship between idempotents and
the Mattson–Solomon polynomial of a polynomial that has binary
coefficients is also described. It is shown how binary cyclic codes may be
easily derived from idempotents based on the cyclotomic cosets. It is
demonstrated how useful this can be in the design of high-degree non-
primitive binary cyclic codes. Several code examples using this
construction method are presented. A table listing the complete set of
the best binary cyclic codes, having the highest minimum Hamming
distance, is included for all code lengths from 129 to 189 bits.
Reed-Solomon codes
Reed-Solomon codes were introduced by Irving S. Reed and Gustave Solomon. Reed-Solomon codes are a subclass of non-binary BCH codes. The encoder of Reed-Solomon codes differs from a binary encoder in that it operates on multiple bits rather than individual bits.
So basically, Reed-Solomon codes help in recovering corrupted messages
that are being transferred over a network. In Reed-Solomon codes, we
have:
Encoder and
Decoder
Reed-Solomon codes encoder receives data and before transferring it
over the noisy network it adds some parity bits with our original data
bits.
On the other hand, we have a Reed-Solomon codes decoder that detects
corrupted messages and recovers them from error.
Representation of n-bits Reed-Solomon codes
Parameters of a Reed-Solomon code:
An (n, k) code is used to encode m-bit symbols.
Block length (n) is given by 2^m - 1 symbols.
Message size (k) is given by (n - 2t) symbols, where t = number of correctable symbol errors.
Parity check size is given by (n - k), or 2t symbols.
Minimum distance (d) is given by (2t + 1).
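These relations can be collected in a one-line helper (my own sketch); with m = 8 and t = 16 they give the parameters of the widely used RS(255, 223) code:

```python
def rs_params(m, t):
    """Parameters of a Reed-Solomon code over m-bit symbols correcting t symbol errors."""
    n = 2**m - 1                   # block length (symbols)
    k = n - 2 * t                  # message size (symbols)
    return {"n": n, "k": k, "parity": n - k, "d_min": 2 * t + 1}

print(rs_params(8, 16))   # {'n': 255, 'k': 223, 'parity': 32, 'd_min': 33}
```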
Generator function
In Reed-Solomon codes, the generator function is generated using a
special polynomial. In Reed-Solomon codes, all valid codewords are
exactly divisible by the generator polynomial. The generator function is
given by:
g(x) = (x - α)(x - α^2)(x - α^3)……(x - α^(2t))
Encoding
We perform encoding in Reed-Solomon codes with the following
methods:
Consider a Reed-Solomon code with parameters n(block size),
k(message size), q(symbol size in bits). For encoding, we encode the
message as a polynomial p(x) and then multiply it with a code generator
polynomial g(x)
where g(x) = (x - α)(x - α^2)(x - α^3)……(x - α^(2t))
Then we map the message vector [x_1, x_2, ....., x_k] to a polynomial p(x) of degree < k such that
p(α^i) = x_i for all i = 1, 2, 3, ...., k
This polynomial can be constructed using Lagrange interpolation.
Sender calculates s(x) = p(x)*g(x) and then sends over the coefficients of
s(x)
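The steps above can be sketched over a toy field. The following is my own illustration, not the text's: it uses the prime field GF(7), so plain modular arithmetic stands in for extension-field arithmetic, with α = 3 (primitive mod 7) and t = 1:

```python
Q, ALPHA, T = 7, 3, 1      # toy symbol field GF(7); alpha = 3 is primitive mod 7

def poly_mul(a, b):
    """Multiply polynomials over GF(Q); coefficient lists, lowest degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % Q
    return out

def poly_eval(p, x):
    return sum(c * pow(x, i, Q) for i, c in enumerate(p)) % Q

def generator(t):
    """g(x) = (x - alpha)(x - alpha^2) ... (x - alpha^(2t))."""
    g = [1]
    for i in range(1, 2 * t + 1):
        g = poly_mul(g, [(-pow(ALPHA, i, Q)) % Q, 1])
    return g

g = generator(T)           # [6, 2, 1], i.e. g(x) = x^2 + 2x + 6
p = [2, 5, 1]              # message polynomial p(x) (k = 3 toy symbols)
s = poly_mul(p, g)         # transmitted codeword s(x) = p(x) * g(x)

# every valid codeword vanishes at alpha, ..., alpha^(2t)
print(all(poly_eval(s, pow(ALPHA, i, Q)) == 0 for i in range(1, 2 * T + 1)))  # True
```

A production Reed-Solomon code would use GF(2^m) so that symbols are m-bit groups, but the algebra is the same.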
Decoding
At the receiver end we perform the following steps:
The receiver receives r(x).
If r(x) == s(x), then r(x)/g(x) has no remainder.
If it has a remainder, then r(x) = p(x) * g(x) + e(x), where e(x) is an error polynomial.
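This remainder check can be sketched as follows (my own illustration, over the toy prime field GF(7) with g(x) = (x - 3)(x - 3^2) = x^2 + 2x + 6):

```python
Q = 7                      # toy symbol field GF(7)

def poly_mod(r, g):
    """Remainder of r(x) / g(x) over GF(Q); coefficient lists, lowest degree first."""
    r = r[:]
    inv_lead = pow(g[-1], Q - 2, Q)            # inverse of g's leading coefficient
    while len(r) >= len(g) and any(r):
        if r[-1] == 0:
            r.pop()
            continue
        factor = (r[-1] * inv_lead) % Q
        shift = len(r) - len(g)
        for i, c in enumerate(g):
            r[shift + i] = (r[shift + i] - factor * c) % Q
        r.pop()                                # leading coefficient is now zero
    return r

g = [6, 2, 1]              # g(x) = x^2 + 2x + 6
s = [5, 6, 4, 0, 1]        # a valid codeword s(x) = p(x) * g(x)

print(any(poly_mod(s, g)))     # False: zero remainder, no error detected

r = s[:]
r[2] = (r[2] + 1) % Q          # corrupt one symbol
print(any(poly_mod(r, g)))     # True: nonzero remainder, error detected
```

A nonzero remainder only detects the error; recovering e(x) takes the syndrome-based decoding machinery described for BCH codes.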
Application of Reed-Solomon codes
It is used in storage devices like CDs, DVDs, etc.
It is used in wireless or mobile communication for data transfer.
It is used in satellite communication.
Reed-Solomon codes are also used in digital TV.
It is used in high-speed modems.
It is used in bar codes and QR codes.
Advantages:
Here we will discuss how Reed-Solomon codes are better than binary BCH codes.
They make highly efficient use of redundancy.
Block length and symbol size can be adjusted in Reed-Solomon codes.
They provide a wide range of code rates.
Efficient decoding techniques are available for Reed-Solomon codes.
Disadvantages:
Despite all these advantages, Reed-Solomon codes also have some disadvantages in comparison with BCH codes.
For BPSK modulation schemes, Reed-Solomon codes do not perform well in comparison with BCH codes.
In Reed-Solomon codes, the bit error ratio (BER) is worse than that of comparable BCH codes.
MDS codes
A maximum distance separable code, or MDS code, is a way of encoding
data so that the distance between code words is as large as possible for a
given data capacity. This post will explain what that means and give
examples of MDS codes.
Notation
A linear block code takes a sequence of k symbols and encodes it as a
sequence of n symbols. These symbols come from an alphabet of size q.
For binary codes, q = 2. But for non-trivial MDS codes, q > 2. More on that
below.
The purpose of these codes is to increase the ability to detect and correct
transmission errors while not adding more overhead than necessary.
Clearly n must be bigger than k, but the overhead n-k has to pay for itself
in terms of the error detection and correction capability it provides.
The ability of a code to detect and correct errors is measured by d, the minimum distance between code words. A code has separation distance d if every pair of code words differs in at least d positions. Such a code can detect up to d - 1 errors per block and can correct ⌊(d-1)/2⌋ errors.
Example
The following example is not an MDS code, but it illustrates the notation above: the binary Hamming [7,4] code has n = 7, k = 4, q = 2, and minimum distance d = 3, so it can detect 2 errors per block and correct 1.
Separation bound
There is a theorem that says that for any linear code
k + d ≤ n + 1.
This is known as the Singleton bound. MDS codes are optimal with respect to this bound. That is,
k + d = n + 1.
So MDS codes are optimal with respect to the Singleton bound, analogous to how perfect codes are optimal with respect to the Hamming bound.
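The bound is easy to check numerically; a small sketch of my own, using a few well-known codes as assumed examples:

```python
# (n, k, d) triples for some well-known codes
codes = {
    "Hamming [7,4,3]": (7, 4, 3),
    "Reed-Solomon (255, 223), d = 33": (255, 223, 33),
    "repetition [5,1,5]": (5, 1, 5),
}
for name, (n, k, d) in codes.items():
    assert k + d <= n + 1                          # Singleton bound always holds
    print(name, "-> MDS" if k + d == n + 1 else "-> not MDS")
```

Reed-Solomon codes have d = n - k + 1, so they meet the bound with equality and are MDS; binary Hamming codes are perfect but not MDS.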
There is a classification theorem that says perfect codes are either
Hamming codes or trivial with one exception. There is something similar
for MDS codes.
Definition 4.5.1
(i) A generator matrix for a linear code C is a matrix G whose rows form a
basis for C.
Remark 4.5.2
(i) If C is an [n, k]-linear code, then a generator matrix for C must be a k × n matrix and a parity-check matrix for C must be an (n − k) × n matrix.
(ii) Algorithm 4.3 of Section 4.4 can be used to find generator and parity-
check matrices for a linear code.
(iii) As the number of bases for a vector space usually exceeds one, the
number of generator matrices for a linear code also usually exceeds one.
Moreover, even when the basis is fixed, a permutation (different from
the identity) of the rows of a generator matrix also leads to a different
generator matrix.
(iv) The rows of a generator matrix are linearly independent. The same
holds for the...
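Definition 4.5.1(i) and Remark 4.5.2(iii) can be illustrated with a small sketch (my own, assuming the standard [7,4] binary Hamming code as the example): the rows of G form a basis of C, and permuting those rows yields a different generator matrix for the very same code.

```python
# One common generator matrix for the [7, 4] binary Hamming code
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def span(G):
    """All modulo-2 linear combinations of G's rows: the code C they generate."""
    code = set()
    for m in range(2 ** len(G)):
        word = [0] * len(G[0])
        for bit, row in enumerate(G):
            if (m >> bit) & 1:
                word = [a ^ b for a, b in zip(word, row)]
        code.add(tuple(word))
    return code

code = span(G)
code2 = span([G[3], G[0], G[2], G[1]])     # permuted rows: a different matrix...
print(len(code), code == code2)            # 16 True  ...but the same code C
```

The 2^4 = 16 distinct codewords confirm that the four rows are linearly independent, as Remark 4.5.2(iv) requires.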