
Cyclic Code

A cyclic code is a subclass of linear block codes in which a cyclic
shift of the bits of any codeword yields another codeword. Cyclic codes
are important because they are easy to implement, and so they find
applications in many systems.
Cyclic codes are widely used in satellite communication, where the
digitally transmitted information is encoded and decoded using cyclic
coding. These are error-correcting codes: the actual information is sent
over the channel combined with parity bits.

Cyclic codes are a crucial subcategory of linear coding
techniques because they offer efficient encoding and decoding schemes
using a shift register. They are used in error correction as they can
check for double or burst errors. Many other important codes, such as
Reed-Solomon, Golay, Hamming, and BCH codes, can be represented as
cyclic codes.

Basically, a shift register and a modulo-2 adder are the two crucial
building blocks of a cyclic encoder. Using a shift register, encoding can
be performed efficiently. The fundamental elements of a shift register
are flip-flops (which act as storage units) together with an input and an
output, while the modulo-2 (binary) adder has two inputs and one
output.

Encoding
Consider the message signal given as:
m = [1110]
Thus,

M(X) = 1·X^0 + 1·X^1 + 1·X^2 + 0·X^3

M(X) = X^2 + X + 1

with generator polynomial G(X) = X^3 + X + 1
For a non-systematic code, the codeword is given as:

C(X) = M(X) G(X) = (X^2 + X + 1)(X^3 + X + 1)

C(X) = X^5 + X^3 + X^2 + X^4 + X^2 + X + X^3 + X + 1

Here modulo-2 addition is performed, and in modulo-2 addition the
sum of two identical bits is 0. Thus the pairs X^3, X^2 and X cancel, so

C(X) = 1 + X^4 + X^5
Hence, from the above codeword polynomial, the codeword will be:

C = [1000110]

From the codeword bits we can clearly see that the encoded
codeword contains the message and parity bits in an intermixed pattern.
Thus, this is a non-systematic codeword, and direct reading of the
message bits is not possible.
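The non-systematic encoding above is nothing more than a polynomial multiplication with coefficients reduced modulo 2. A minimal sketch in Python (the helper name is ours, not from the text):

```python
def poly_mul_gf2(a, b):
    """Multiply two polynomials with GF(2) coefficients.

    Polynomials are bit lists, lowest-degree coefficient first.
    """
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj   # modulo-2 addition is XOR
    return out

m = [1, 1, 1, 0]                    # M(X) = 1 + X + X^2
g = [1, 1, 0, 1]                    # G(X) = 1 + X + X^3
print(poly_mul_gf2(m, g))           # [1, 0, 0, 0, 1, 1, 0], i.e. C = [1000110]
```

The XOR in the inner loop is exactly the modulo-2 cancellation performed by hand above.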

Systematic Cyclic Encoding: Consider another message signal:

m = [1011]

So, the message polynomial will be:

M(X) = 1 + X^2 + X^3

while the generator polynomial is G(X) = X^3 + X + 1.
The equation for determining the 7-bit codeword of a systematic code is
given as:

C(X) = X^(n-k) M(X) + P(X)

where P(X) represents the parity polynomial: the remainder left when
X^(n-k) M(X) is divided by G(X).

So, to construct the systematic codeword, we first have to determine
P(X). Since n = 7 and k = 4,

X^(n-k) M(X) = X^3 (1 + X^2 + X^3) = X^3 + X^5 + X^6

Dividing this by G(X) = X^3 + X + 1 using modulo-2 long division leaves
remainder 1. Hence, the obtained value of P(X) = 1.

Now, substituting the values into the codeword polynomial equation:

C(X) = X^3 (X^3 + X^2 + 1) + 1

C(X) = 1 + X^3 + X^5 + X^6

Hence, the codeword for the above code polynomial will be:

C = [1001011]

So, here the first 3 bits are parity bits and the last four bits are message
bits. We can cross-check this: we took [1011] as the message bits, and the
parity remainder was P(X) = 1, i.e., the parity bits [100]. Hence, in this
way the encoding of non-systematic and systematic codewords is
performed.
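The systematic construction above can be sketched the same way: shift the message up by n − k positions, take the remainder modulo G(X) as the parity bits, and prepend them (helper names are ours):

```python
def poly_mod_gf2(dividend, divisor):
    """Remainder of GF(2) polynomial division (bit lists, lowest degree first)."""
    r = dividend[:]
    dd = len(divisor) - 1            # degree of the divisor
    for i in range(len(r) - 1, dd - 1, -1):
        if r[i]:                     # cancel the leading term at position i
            for j in range(len(divisor)):
                r[i - dd + j] ^= divisor[j]
    return r[:dd]

n, k = 7, 4
m = [1, 0, 1, 1]                     # M(X) = 1 + X^2 + X^3
g = [1, 1, 0, 1]                     # G(X) = 1 + X + X^3
shifted = [0] * (n - k) + m          # X^(n-k) * M(X)
p = poly_mod_gf2(shifted, g)         # parity bits, here [1, 0, 0]
codeword = p + m
print(codeword)                      # [1, 0, 0, 1, 0, 1, 1], i.e. C = [1001011]
```

Because the message bits sit untouched in the top positions, they can be read off the codeword directly, which is the whole point of the systematic form.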

Decoding
To understand how error detection with a cyclic code takes place,
consider R(X) as the received polynomial, C(X) as the encoded
polynomial, G(X) as the generator polynomial, and E(X) as the error
polynomial.

The syndrome, which reflects the error introduced during transmission,
is given as:

S(X) = rem[ R(X) / G(X) ]

Since R(X) is the sum of the encoded polynomial and the error
polynomial, R(X) = C(X) + E(X), the above equation can be written as:

S(X) = rem[ (C(X) + E(X)) / G(X) ] = rem[ E(X) / G(X) ]

Here, if the obtained remainder is 0 then there is no error, and if the
obtained remainder is not 0 then there is an error that needs to be
corrected.

This is so because a valid encoded polynomial C(X) is exactly divisible by
G(X), so only the error polynomial contributes to the syndrome.

Now, let us check the error polynomial. Consider the error table given
below, where we have assumed a single error in each bit position of the
code:

Error pattern    E(X)
[1000000]        1
[0100000]        X
[0010000]        X^2
[0001000]        X^3
[0000100]        X^4
[0000010]        X^5
[0000001]        X^6

So, now using the syndrome formula for the error polynomial,

S(X) = rem[ E(X) / G(X) ]

we determine the syndrome of each erroneous bit. Taking the generator
polynomial G(X) = X^3 + X + 1 and dividing each error polynomial by it,
then tabulating:

E(X)     S(X)
1        1
X        X
X^2      X^2
X^3      X + 1
X^4      X^2 + X
X^5      X^2 + X + 1
X^6      X^2 + 1


Consider an example where the transmitted codeword is [1110010] and
the received code is r = [1010010]. Hence,

R(X) = 1 + X^2 + X^5

Now, let us check the syndrome by dividing the received polynomial
R(X) by the generator polynomial G(X). We get

S(X) = X ≠ 0

This means the received codeword contains an error. From the tabular
representation, the syndrome X corresponds to the error pattern
[0100000]. Thus, this represents an error in the second bit of the
received code.

Hence, by using the same approach we can perform encoding and
decoding with cyclic codes.
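The whole decoding procedure above fits in a few lines: build the syndrome table for single-bit errors, compute S(X) for the received word, and flip the indicated bit (a sketch; the division helper is ours):

```python
def poly_mod_gf2(dividend, divisor):
    """Remainder of GF(2) polynomial division (bit lists, lowest degree first)."""
    r = dividend[:]
    dd = len(divisor) - 1
    for i in range(len(r) - 1, dd - 1, -1):
        if r[i]:
            for j in range(len(divisor)):
                r[i - dd + j] ^= divisor[j]
    return r[:dd]

g = [1, 1, 0, 1]                     # G(X) = 1 + X + X^3
# Syndrome table: remainder of X^i mod G(X) -> error position i
table = {}
for i in range(7):
    e = [0] * 7
    e[i] = 1
    table[tuple(poly_mod_gf2(e, g))] = i

r = [1, 0, 1, 0, 0, 1, 0]            # received word, R(X) = 1 + X^2 + X^5
s = tuple(poly_mod_gf2(r, g))        # here s = (0, 1, 0), i.e. S(X) = X
if any(s):
    r[table[s]] ^= 1                 # flip the erroneous bit (position 1)
print(r)                             # [1, 1, 1, 0, 0, 1, 0] -> the transmitted word
```

The table lookup works because, for single errors, each of the seven syndromes is distinct.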

cyclic hamming codes

The program encodes and decodes messages using a (7,4) Hamming
code, a special case of a cyclic code, with two generator polynomials:

x^3+x+1

x^3+x^2+1

The program supports error control: you can only encode a message of
exactly 4 bits, and you can only decode a message of no more than
7 bits.

Technologies

Java 8+

Swing

Maven

Setup

You must have JDK or JRE 8+ installed to run this project.

Shortened cyclic codes
New optimum binary shortened cyclic codes with redundancy r = 32 and
burst-error correction capability b are presented. The codes are found
by performing an exhaustive computer search using the Kasami
algorithm, and their performance is compared with analytical bounds by
Reiger, Abramson and Campopiano. The true burst-error correction
capability of the [2112, 2080] shortened Fire code selected for 10 Gb/s
Ethernet is determined to be b = 11 and [2112, 2080] shortened cyclic
codes with higher burst-error correction capability b = 13 are given. The
double burst-error detection properties of three cyclic redundancy
check codes used in standards are compared.

Error-trapping decoding for cyclic codes


The error-trapping decoder is the simplest way of decoding cyclic codes
satisfying R < 1/t, where t is the maximum number of errors to be
corrected and R is the code rate. These codes have low rates and/or
correct only a few errors. Kasami has used the concept of covering
polynomials to demonstrate modified error-trapping decoders for
several binary cyclic codes not satisfying R < 1/t. In this paper Kasami's
decoder is modified further for correcting multiple symbol errors on
nonbinary cyclic codes satisfying R < 2/t. The Berlekamp decoder for
these codes requires Galois field multiplication and division of two
variables, which are difficult to implement. Our decoder does not
require these multiplications and divisions. Further, for all
double-error-correcting codes, and triple-error-correcting codes with
rate R < 2/3, an algorithm is presented for finding a minimum set of
covering monomials.

BCH Code
In coding theory the BCH codes form a class of cyclic error-correcting
codes that are constructed using finite fields. BCH codes were invented
in 1959 by Hocquenghem, and independently in 1960 by Bose and Ray-
Chaudhuri. The abbreviation BCH comprises the initials of these
inventors' names.
One of the key features of BCH codes is that during code design, there is a
precise control over the number of symbol errors correctable by the
code. In particular, it is possible to design binary BCH codes that can
correct multiple bit errors. Another advantage of BCH codes is the ease
with which they can be decoded, namely, via an algebraic method known
as syndrome decoding. This simplifies the design of the decoder for
these codes, using small low-power electronic hardware.

BCH codes are used in applications like satellite communications,


compact disc players, DVDs, disk drives, solid-state drives and two-
dimensional bar codes.

Decoding
There are many algorithms for decoding BCH codes. The most common
ones follow this general outline:

1. Calculate the syndromes S_j for the received vector

2. Determine the number of errors t and the error locator
polynomial Λ(x) from the syndromes

3. Calculate the roots of the error locator polynomial to find the
error locations X_i

4. Calculate the error values Y_i at those error locations

5. Correct the errors

During some of these steps, the decoding algorithm may determine that
the received vector has too many errors and cannot be corrected. For
example, if an appropriate value of t is not found, then the correction
would fail. In a truncated (not primitive) code, an error location may be
out of range. If the received vector has more errors than the code can
correct, the decoder may unknowingly produce an apparently valid
message that is not the one that was sent.
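As an illustration of step 1, the sketch below computes syndromes for a length-15 binary BCH code. GF(16) is built here from the primitive polynomial x^4 + x + 1, a common but not unique choice, and the received vector with a single flipped bit is our own example:

```python
# Build GF(16) log/antilog tables from the primitive polynomial x^4 + x + 1.
exp = [0] * 30                       # antilog table, doubled to skip a modulo
x = 1
for i in range(15):
    exp[i] = x
    x <<= 1
    if x & 0x10:                     # degree reached 4: reduce by x^4 + x + 1
        x ^= 0b10011
for i in range(15, 30):
    exp[i] = exp[i - 15]
log = {exp[i]: i for i in range(15)}

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp[(log[a] + log[b]) % 15]

def syndromes(r, count):
    """S_j = r(alpha^j) for j = 1..count, evaluated with Horner's rule.

    r is the received bit vector, coefficient of X^0 first."""
    out = []
    for j in range(1, count + 1):
        aj = exp[j % 15]
        s = 0
        for c in reversed(r):
            s = gf_mul(s, aj) ^ c
        out.append(s)
    return out

r = [0] * 15
r[5] = 1                             # all-zero codeword with bit 5 flipped
print(syndromes(r, 4))               # [6, 7, 1, 6] = [a^5, a^10, a^0, a^5]
```

Nonzero syndromes signal an error; step 2 would feed them to, for example, the Berlekamp-Massey algorithm to determine Λ(x).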

Mattson-Solomon polynomials
The Mattson–Solomon polynomial is described and it is shown to be an
inverse discrete Fourier transform based on a primitive root of unity.
The usefulness of the Mattson–Solomon polynomial in the design of
cyclic codes is demonstrated. The relationship between idempotents and
the Mattson–Solomon polynomial of a polynomial that has binary
coefficients is also described. It is shown how binary cyclic codes may be
easily derived from idempotents based on the cyclotomic cosets. It is
demonstrated how useful this can be in the design of high-degree non-
primitive binary cyclic codes. Several code examples using this
construction method are presented. A table listing the complete set of
the best binary cyclic codes, having the highest minimum Hamming
distance, is included for all code lengths from 129 to 189 bits.

Reed-Solomon codes
Reed-Solomon codes were introduced by Irving S. Reed and Gustave
Solomon. Reed-Solomon codes are a subclass of non-binary BCH codes.
The Reed-Solomon encoder differs from a binary encoder in that it
operates on multi-bit symbols rather than individual bits. So basically,
Reed-Solomon codes help in recovering corrupted messages that are
transferred over a network. In Reed-Solomon codes, we have:
Encoder and 
Decoder
The Reed-Solomon encoder receives data and, before transferring it
over the noisy network, adds some parity bits to the original data bits.
On the other hand, the Reed-Solomon decoder detects corrupted
messages and recovers them from error.
(Figure: n-bit representation of a Reed-Solomon codeword)
Parameters of a Reed-Solomon code:
An (n, k) code is used to encode m-bit symbols.
Block length: n = 2^m − 1 symbols.
Message size: k = n − 2t symbols, where t is the number of correctable
symbol errors.
Parity-check size: n − k = 2t symbols.
Minimum distance: d = 2t + 1.
Generator polynomial
In Reed-Solomon codes, the generator polynomial is built from a special
set of roots, and all valid codewords are exactly divisible by it. The
generator polynomial is given by:
g(x) = (x − α)(x − α^2)(x − α^3) … (x − α^(2t))
Encoding
We perform encoding in Reed-Solomon codes with the following steps:
Consider a Reed-Solomon code with parameters n (block size),
k (message size), and q (symbol size in bits). For encoding, we represent
the message as a polynomial p(x) and then multiply it by the code
generator polynomial g(x),
where g(x) = (x − α)(x − α^2)(x − α^3) … (x − α^(2t))
We map the message vector [x_1, x_2, …, x_k] to a polynomial p(x) of
degree < k such that
p(α^i) = x_i for all i = 1, 2, …, k
This polynomial can be constructed using Lagrange interpolation.
The sender calculates s(x) = p(x)·g(x) and then sends the coefficients of
s(x).
Decoding
At the receiver end we perform the following steps:
The receiver receives a polynomial r(x).
If r(x) == s(x), then r(x)/g(x) has no remainder.
If there is a remainder, then r(x) = p(x)·g(x) + e(x), where e(x) is an error
polynomial.
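A compact sketch of the multiply-by-g(x) encoding and the divisibility check. Real Reed-Solomon codes work in GF(2^m); here the prime field GF(7) with primitive element α = 3 stands in so that plain modular arithmetic suffices (t = 1, so two parity symbols; all names and values are our own illustration):

```python
P = 7                                # field size (a prime, for simplicity)
ALPHA = 3                            # primitive element of GF(7)
T = 1                                # number of correctable symbol errors

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def evaluate(poly, x):
    acc = 0
    for c in reversed(poly):         # Horner's rule
        acc = (acc * x + c) % P
    return acc

# g(x) = (x - alpha)(x - alpha^2) ... (x - alpha^(2t))
g = [1]
for j in range(1, 2 * T + 1):
    g = poly_mul(g, [(-pow(ALPHA, j, P)) % P, 1])

msg = [1, 2, 3, 4]                   # p(x), lowest-degree coefficient first
s = poly_mul(msg, g)                 # codeword coefficients: s(x) = p(x) * g(x)
# Every valid codeword is divisible by g(x), so it vanishes at g's roots:
print([evaluate(s, pow(ALPHA, j, P)) for j in range(1, 2 * T + 1)])  # [0, 0]
```

A nonzero value at any root of g(x) plays the role of the remainder check: it means e(x) ≠ 0 and error correction proper must begin.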
Application of Reed-Solomon codes
It is used in storage devices like CDs, DVDs, etc.
It is used in wireless or mobile communication for data transfer.
It is used in satellite communication.
Reed-Solomon codes are also used in digital TV.
It is used in high-speed modems.
It is used in barcodes and QR codes.
Advantages:
Here we will discuss how it is better than binary BCH codes.
It makes the most efficient use of redundancy.
Block length and symbol size can be adjusted in Reed-Solomon codes.
It provides a wide range of code rates.
Efficient decoding techniques are available for Reed-Solomon codes.
Disadvantages:
Despite all these advantages, Reed-Solomon codes also have some
disadvantages in comparison with BCH codes.
For BPSK modulation schemes, Reed-Solomon codes do not perform as
well as BCH codes.
The bit error ratio (BER) of Reed-Solomon codes is not as good as that of
BCH codes.

MDS codes
A maximum distance separable code, or MDS code, is a way of encoding
data so that the distance between code words is as large as possible for a
given data capacity. This post will explain what that means and give
examples of MDS codes.

Notation
A linear block code takes a sequence of k symbols and encodes it as a
sequence of n symbols. These symbols come from an alphabet of size q.
For binary codes, q = 2. But for non-trivial MDS codes, q > 2. More on that
below.
The purpose of these codes is to increase the ability to detect and correct
transmission errors while not adding more overhead than necessary.
Clearly n must be bigger than k, but the overhead n-k has to pay for itself
in terms of the error detection and correction capability it provides.
The ability of a code to detect and correct errors is measured by d, the
minimum distance between code words. A code has separation
distance d if every pair of code words differs in at least d positions. Such
a code can detect up to d − 1 errors per block and can correct ⌊(d-1)/2⌋
errors.
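For a linear code, the separation distance equals the minimum weight of a nonzero codeword, so for small codes d can simply be brute-forced. A sketch using the [7, 4] Hamming code (our example, not from the post):

```python
from itertools import product

G = [[1, 0, 0, 0, 0, 1, 1],          # one standard generator matrix
     [0, 1, 0, 0, 1, 0, 1],          # for the [7,4] Hamming code
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(msg):
    """Codeword = msg . G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

codewords = [encode(m) for m in product([0, 1], repeat=4)]
d = min(sum(c) for c in codewords if any(c))
print(d)                             # 3 -> detects 2 errors, corrects 1
```

With d = 3, k = 4 and n = 7 we get k + d = 7 < n + 1 = 8, so this code is not MDS.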
Example
The following example is not an MDS code but it illustrates the notation
above.

The extended Golay code used to send back photos from the Voyager
missions has q = 2 and [n, k, d] = [24, 12, 8]. That is, data is divided into
segments of 12 bits and encoded as 24 bits in such a way that all code
blocks differ in at least 8 positions. This allows up to 7 bit flips per block
to be detected, and up to 3 bit flips per block to be corrected.
(If 4 bits were corrupted, the result could be equally distant from
two valid code words, so the error could be detected but not corrected
with certainty.)

Separation bound
There is a theorem that says that for any linear code

k + d ≤ n + 1.

This is known as the Singleton bound. MDS codes are optimal with
respect to this bound. That is,

k + d = n + 1.

So MDS codes are optimal with respect to the Singleton bound, analogous
to how perfect codes are optimal with respect to the Hamming bound.
There is a classification theorem that says perfect codes are either
Hamming codes or trivial, with one exception. There is something similar
for MDS codes.

Generator Matrix and Parity-Check Matrix


Knowing a basis for a linear code enables us to describe its codewords
explicitly. In coding theory, a basis for a linear code is often represented
in the form of a matrix, called a generator matrix, while a matrix that
represents a basis for the dual code is called a parity-check matrix.
These matrices play an important role in coding theory.

Definition 4.5.1
(i) A generator matrix for a linear code C is a matrix G whose rows form a
basis for C.

(ii) A parity-check matrix H for a linear code C is a generator matrix for
the dual code C⊥.
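A quick numeric check of the definition: every row of a generator matrix G must be orthogonal over GF(2) to every row of the parity-check matrix H, i.e. G·H^T = 0. Below, one standard choice of G and H for the [7, 4] Hamming code (the matrices are our example, not from the text):

```python
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

# G * H^T modulo 2 must be the 4x3 zero matrix:
GHt = [[sum(a * b for a, b in zip(g, h)) % 2 for h in H] for g in G]
print(GHt)                           # [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

Here G = [I | P] and H = [P^T | I], the standard-form pairing described by the definition.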

Remark 4.5.2
(i) If C is an [n, k]-linear code, then a generator matrix for C must be
a k × n matrix and a parity-check matrix for C must be an (n − k) × n matrix.
(ii) Algorithm 4.3 of Section 4.4 can be used to find generator and
parity-check matrices for a linear code.

(iii) As the number of bases for a vector space usually exceeds one, the
number of generator matrices for a linear code also usually exceeds one.
Moreover, even when the basis is fixed, a permutation (different from
the identity) of the rows of a generator matrix also leads to a different
generator matrix.

(iv) The rows of a generator matrix are linearly independent. The same
holds for the...
