Burst-Error Correction



Chapter 7
Burst-Error Correction

1. Introduction
2. Known Codes and Coding Techniques for Correcting Bursts
3. Fire Codes
4. Decoding of Burst-Error-Correcting Codes
5. Binary RS Codes
6. Interleaving Technique
7. Concatenated Coding Scheme
8. Cascaded Coding Scheme: Product Code


1. Introduction
In burst error channels, errors occur in clusters.
An error pattern e = (e_0, e_1, e_2, ..., e_{n-1}) is said to
be a burst of length l if its nonzero components
are confined to l consecutive positions, say
e_j, e_{j+1}, ..., e_{j+l-1}, the first and the last of which are
nonzero, i.e., e_j = e_{j+l-1} = 1.
For example, the error pattern
e = (0000101100100000) is a burst of length 7.
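As a quick check, here is a small Python sketch (the helper name is ours, not from the notes) that measures the burst length of a binary error pattern:

def burst_length(e):
    # length of the burst in error pattern e: distance from the first
    # to the last nonzero position, inclusive (0 for the all-zero pattern)
    ones = [i for i, bit in enumerate(e) if bit]
    return ones[-1] - ones[0] + 1 if ones else 0

e = [int(b) for b in "0000101100100000"]
print(burst_length(e))   # 7, as in the example above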
A linear code which is capable of correcting all
error bursts of length l or less but not all error
bursts of length l+1 is called an
l-burst-error-correcting code. The code is said
to have burst-error-correcting capability l.
For an l-burst-error-correcting linear code, all
the error bursts of length l or less can be used as
coset leaders of a standard array.
Reiger Bound: The burst-error-correcting capability l
of an (n, k) code is at most (n − k)/2,
i.e., l ≤ (n − k)/2.


Codes that meet the Reiger bound are called optimal codes.
A cyclic burst-error-correcting code can also correct
bursts that have one part at one end of the word and
the other part at the other end, as shown in Figure 7-1.
These bursts are called end-around bursts.

Figure 7-1. An end-around burst (λ error positions at one end and l−λ at the other)
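For end-around bursts the length is measured cyclically. A small Python sketch (our own helper, not from the notes) computes it as n minus the longest cyclic run of zeros:

def cyclic_burst_length(e):
    # shortest cyclic window covering all nonzero positions of e
    # (the end-around burst length); 0 for the all-zero pattern
    n = len(e)
    if not any(e):
        return 0
    longest_zero_run, run = 0, 0
    for bit in list(e) + list(e):        # doubling the vector exposes wrap-around runs
        run = run + 1 if bit == 0 else 0
        longest_zero_run = max(longest_zero_run, min(run, n - 1))
    return n - longest_zero_run

# a burst of length 5 split across the two ends of a length-16 word
e = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1]
print(cyclic_burst_length(e))   # 5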


2. Known Codes and Coding Techniques for Correcting Bursts
• Fire codes
• Binary RS codes
• Interleaving technique
• Product codes
• Concatenation
• Cascading


3. Fire Codes
They are cyclic codes and were discovered by
P. Fire in 1959.
Let p(X) be a binary irreducible polynomial
of degree m. Let ρ be the smallest integer
such that p(X) divides X^ρ + 1. The integer
ρ is called the period of p(X).
Let l ≤ m be such that 2l−1 is not divisible by ρ,
and let n = LCM(2l−1, ρ).
Define the following polynomial:
g(X) = (X^(2l−1) + 1) · p(X).

Then g(X) is a factor of X^n + 1 and has degree 2l−1+m. (L&C, p.262-)
The cyclic code generated by
g(X) = (X^(2l−1) + 1) · p(X) is a Fire code, which
is capable of correcting any single burst of errors
of length l or less (including the end-around bursts).
The code has the following parameters:
n = LCM(2l−1, ρ) and n − k = 2l−1 + m.
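The construction can be written down directly. The following Python sketch (our own helper names; GF(2) polynomials are stored as integer bit masks, bit i standing for X^i) computes ρ, g(X) and the code parameters from p(X) and l:

from math import gcd

def gf2_mul(a, b):
    # carry-less (GF(2)) polynomial multiplication
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def period(p):
    # smallest rho such that p(X) divides X^rho + 1
    # (assumes p irreducible, p(0) = 1, degree m >= 2)
    m = p.bit_length() - 1
    rho, x = 1, 2            # x = X mod p(X)
    while x != 1:
        x <<= 1              # multiply by X ...
        if x >> m:
            x ^= p           # ... and reduce modulo p(X)
        rho += 1
    return rho

def fire_code(p, l):
    # parameters (n, k) and generator g(X) of the Fire code built from p(X) and l
    m = p.bit_length() - 1
    rho = period(p)
    assert l <= m and (2 * l - 1) % rho != 0, "need l <= m and rho not dividing 2l-1"
    g = gf2_mul((1 << (2 * l - 1)) | 1, p)            # (X^(2l-1) + 1) p(X)
    n = (2 * l - 1) * rho // gcd(2 * l - 1, rho)      # LCM(2l-1, rho)
    return n, n - (2 * l - 1 + m), g

print(fire_code(0b100101, 5))   # (279, 265, 18981) -- the code of Example 7-1 below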

Example 7-1: The polynomial p(X) = 1 + X^2 + X^5
is irreducible and has period ρ = 31. Let l = m = 5.
Clearly ρ = 31 does not divide 2l−1 = 9. Then
g(X) = (X^9 + 1) · (1 + X^2 + X^5)
     = 1 + X^2 + X^5 + X^9 + X^11 + X^14
generates a Fire code with n = LCM(31, 9) = 279
and n − k = 2l−1 + m = 2·5 − 1 + 5 = 14.
Hence, it is a (279, 265) code. This code is capable
of correcting any error burst of length 5 or less.
Code efficiency = 2l/(n − k) = 10/14 = 5/7.
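A quick check of Example 7-1 in Python (bit-mask polynomials again; helper names are ours):

def gf2_mul(a, b):
    # GF(2) polynomial product; bit i of an int stands for X^i
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, g):
    # remainder of a(X) divided by g(X) over GF(2)
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

p = 0b100101                       # 1 + X^2 + X^5
g = gf2_mul((1 << 9) | 1, p)       # (X^9 + 1) p(X)
print(bin(g))                      # 0b100101000100101 = 1 + X^2 + X^5 + X^9 + X^11 + X^14
print(gf2_mod((1 << 279) | 1, g))  # 0, so g(X) divides X^279 + 1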


4. Decoding of Burst-Error-Correcting Codes
Decoding consists of two basic steps:
(1) error-pattern determination, and (2) burst-location
determination.
These two steps can be easily achieved by
error-trapping decoding.
The basic concept is to trap the error burst in a
(syndrome) shift register by cyclically shifting
the received vector r.
Let r(X) and e(X) be the received and error
polynomials, respectively.
Let s(X) = s_0 + s_1 X + ... + s_{n−k−1} X^(n−k−1) be the syndrome
of r(X), which is the remainder obtained from
dividing r(X) by the generator polynomial g(X).
Recall that s(X) is actually equal to the remainder
obtained from dividing the error polynomial e(X)
by g(X): e(X) = a(X)·g(X) + s(X).


r(X) = b(X) g(X) + s(X)
Note: r(X) = c(X) + e(X) = m(X) g(X) + e(X)
⇒ e(X) = (b(X) − m(X)) g(X) + s(X)

Suppose the errors in e(X) are confined to
the l high-order parity bit positions
X^(n−k−l), X^(n−k−l+1), ..., X^(n−k−1)
(the codeword has its n−k parity bits in the low-order
positions, followed by the k message bits).
Then,
e(X) = e_{n−k−l} X^(n−k−l) + e_{n−k−l+1} X^(n−k−l+1) + ... + e_{n−k−1} X^(n−k−1).

Dividing e(X) by the generator polynomial g(X),
we find that e(X) = 0 · g(X) + s(X) = s(X).
The l high-order syndrome bits,
s_{n−k−l}, s_{n−k−l+1}, ..., s_{n−k−1},
are identical to the errors in e(X).


The other n−k−l low-order syndrome bits are
zeros, i.e., s_0 = s_1 = ... = s_{n−k−l−1} = 0.
Thus, when the received polynomial r(X) is
completely shifted into the syndrome register,
the error pattern is trapped in the l high-order
stages of the syndrome register, and the other
n−k−l low-order stages contain zeros.

Syndrome register contents: | zeros (n−k−l low-order stages) | error pattern (l high-order stages) |

Suppose the errors in e(X) are not confined
to the l high-order parity bit positions, but are
confined to l consecutive positions (including
the end-around case). For example,


(an end-around burst with l−λ errors at one end and λ at the other)

Then, after a certain number of shifts of r(X),
say i cyclic shifts, the errors in e(X) will be
shifted into the l high-order parity bit positions
of r^(i)(X).
At this instant, the errors are trapped in the l
high-order stages of the syndrome register, and
the other n−k−l low-order stages of the syndrome
register contain zeros.
Knowing the number of shifts i (stored in a
counter), we can determine the location of the burst
in e(X).
Then, error correction is done by adding the
error pattern to r(X) at the right location.
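The following Python sketch mimics this procedure in software rather than in a shift register (a behavioral illustration under our own naming, not the circuit of Figure 7-2); it assumes the received word contains a single burst of length l or less:

def gf2_mod(a, g):
    # remainder of a(X) divided by g(X) over GF(2); ints as bit masks
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

def error_trap_decode(r, g, n, l):
    # r: received word as a bit mask (bit j <-> coefficient of X^j)
    n_k = g.bit_length() - 1                 # number of parity bits
    low_mask = (1 << (n_k - l)) - 1          # the n-k-l low-order syndrome stages
    xn1 = (1 << n) | 1                       # X^n + 1
    for i in range(n):
        # syndrome of the i-th cyclic shift X^i r(X) mod (X^n + 1)
        s = gf2_mod(gf2_mod(r << i, xn1), g)
        if s & low_mask == 0:
            # burst trapped in the l high-order stages: undo the i shifts
            e = gf2_mod(s << (n - i), xn1)
            return r ^ e                     # add the burst back at the right location
    return None                              # no burst of length <= l found

g = 0b100101000100101                        # Fire code of Example 7-1, n = 279, l = 5
r = 0b10011 << 100                           # all-zero codeword plus a burst at positions 100-104
print(error_trap_decode(r, g, 279, 5) == 0)  # True: the burst is removed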
A general error-trapping decoder is shown in
Figure 7-2. An error-trapping decoder for the
(279, 265) Fire code is shown in Figure 7-2a (Fig. 9.2 of L&C).
The above procedure is valid not only for Fire
codes but also for other cyclic codes.
In fact, there is a fast error-trapping decoder
structure for the Fire code, which explains why
the Fire code has a very specific generator
polynomial. (See L&C, p.263~)

Figure 7-2. Error-trapping decoder: the received r(X) is shifted into the syndrome register and a buffer register; the n−k−l low-order syndrome stages are tested for zeros, and when they are all zero the l high-order stages hold the trapped error pattern, which is added to the buffered word at the output.

Figure 7-2a (L&C, p.264)


5. Binary RS Codes
Consider a t-symbol-error-correcting RS code
C of length 2^m − 1 with symbols from GF(2^m).
The binary code derived from C by representing
each code symbol by an m-bit byte has
length n = m(2^m − 1) and number of parity bits
n − k = 2mt.
This binary RS code is capable of correcting
any single burst of length m(t-1)+1 or less be-
cause such a burst can only affect t or fewer
symbols in the original RS code C.
Example 7-4: Consider the NASA standard (255,
223) RS code over GF(2^8). It is capable of
correcting t = 16 symbol errors. The binary code
derived from this RS code has length
n = 8 × 255 = 2040 and dimension
k = 8 × 223 = 1784.
Hence it is a (2040, 1784) binary RS code.
This code is capable of correcting any single
burst of length l = 8 × (16 − 1) + 1 = 121 or less.
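These numbers are easy to reproduce; a small Python sketch (helper names are ours):

def binary_rs_params(m, t):
    # binary image of a t-error-correcting RS code of length 2^m - 1 over GF(2^m)
    N = 2 ** m - 1                  # RS length in symbols
    return m * N, m * (N - 2 * t), m * (t - 1) + 1   # n, k, burst-correcting length

def symbols_hit(burst_len, m):
    # worst-case number of m-bit symbols touched by a burst of burst_len bits
    # (worst case: the burst starts on the last bit of a symbol)
    return 0 if burst_len == 0 else 1 + -(-(burst_len - 1) // m)

print(binary_rs_params(8, 16))      # (2040, 1784, 121) -- the NASA (255, 223) code
print(symbols_hit(121, 8))          # 16 symbols: within the t = 16 capability
print(symbols_hit(122, 8))          # 17 symbols: one bit longer may be uncorrectable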


6. Interleaving Technique
Let C be an (n, k) linear code.
Suppose we take λ codewords from C and
arrange them into the λ rows of a λ × n array as
shown in Figure 7-3. This structure is called
a block interleaver, and C is called the base code. A
typical system structure is shown in Figure 7-4.
Figure 7-3. An interleaved array: the 1st, 2nd, ..., λth codewords of C form the λ rows (each with n − k parity positions and k message positions); transmission proceeds column by column.


Figure 7-4. An interleaver system: channel encoder → interleaver → channel (with memory) → deinterleaver → channel decoder.

Then we transmit this code array column by
column in a serial manner. By doing this, we
obtain a vector of λn digits.
Note that two consecutive bits in the same
codeword are now separated by λ − 1 positions.
Actually, the above process simply interleaves
λ codewords in C. The parameter λ is called the
interleaving degree (or depth).
There are (2^k)^λ = 2^(kλ) such interleaved sequences,
and they form a (λn, λk) linear code, called an
interleaved code, denoted C(λ).
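A block interleaver and de-interleaver are only a few lines of Python (a sketch with our own function names):

def interleave(codewords):
    # rows = codewords of the base code; read the array out column by column
    lam, n = len(codewords), len(codewords[0])
    return [codewords[i][j] for j in range(n) for i in range(lam)]

def deinterleave(stream, lam):
    # rearrange the received stream back into lam rows (the code array)
    n = len(stream) // lam
    return [[stream[j * lam + i] for j in range(n)] for i in range(lam)]

rows = [[r * 10 + j for j in range(7)] for r in range(3)]   # 3 "codewords" of length 7
tx = interleave(rows)
print(tx[:6])                        # [0, 10, 20, 1, 11, 21]: one column at a time
print(deinterleave(tx, 3) == rows)   # True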


If the base code C is a cyclic code with generator
polynomial g(X), then the interleaved code C(λ)
is also cyclic. The generator polynomial of C(λ)
is g(X^λ).

Error Correction Capability of an Interleaved Code
A pattern of errors can be corrected for the
whole array if and only if the pattern of errors
in each row is a correctable pattern for the base
code C.
Suppose C is a single-error-correcting code.
Then a burst of length λ or less, no matter
where it starts, will affect no more than one
digit in each row. This single bit error in
each row will be corrected by the base code C.
Hence the interleaved code C(λ) is capable of
correcting any error burst of length λ or less.


Decoding of Interleaved Code


At the receiving end, the received interleaved
sequence is de-interleaved and rearranged
back into a rectangular array of λ rows.
Then each row is decoded based on the base
code C.
Suppose the base code C is capable of correcting
any burst of length l or less. Consider
any burst of length λl or less. No matter where
this burst starts in the interleaved code sequence,
it will result in a burst of length l or less
in each row of the corresponding code array, as
shown in Figure 7-5.


Figure 7-5. A burst of length λl (each row of the array sees a burst of length l or less).

As a result, the burst in each row will be corrected by the base code C.
Hence, the interleaved code C(λ) is capable of
correcting any single error burst of length λl
or less.
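A quick numerical check of this claim (self-contained Python, using the same row-major array layout as above):

lam, n, l = 10, 7, 2                    # depth 10, (7, 3) base code with l = 2
stream = [0] * (lam * n)                # transmitted sequence (all-zero array for simplicity)
for p in range(13, 13 + lam * l):       # inject a burst of length lam * l = 20
    stream[p] ^= 1

# de-interleave: row i collects positions i, lam + i, 2*lam + i, ...
rows = [[stream[j * lam + i] for j in range(n)] for i in range(lam)]

def burst_length(row):
    ones = [j for j, b in enumerate(row) if b]
    return ones[-1] - ones[0] + 1 if ones else 0

print(max(burst_length(r) for r in rows))   # 2 = l, so every row is correctable by C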
Interleaving is a very effective technique for
constructing long, powerful burst-error-correcting
codes from good short codes.
If the base code is an optimal burst-error-correcting
code, the interleaved code is also optimal.

Example 7-3: Consider a (7, 3) cyclic code C
generated by
g(X) = (X + 1)(X^3 + X + 1) = 1 + X^2 + X^3 + X^4.

This code is capable of correcting any burst of
length l = 2 or less. It is optimal since
z = 2l/(n − k) = (2 × 2)/(7 − 3) = 1.

Suppose we interleave this code to a depth λ = 10.
The interleaved code C(10) is a (70, 30)
code which is capable of correcting any burst
of length 20 or less.
The burst-correcting efficiency of C(10) is
z = 2l/(n − k) = (2 × 20)/(70 − 30) = 1. Hence C(10) is also
optimal.

The generator polynomial of C(10) is
g(X^10) = 1 + X^20 + X^30 + X^40.
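In terms of exponents this is just a rescaling; for instance (a trivial Python check, our own helper):

def interleaved_generator(exponents, lam):
    # exponents of g(X^lam), given the exponents present in g(X)
    return [lam * e for e in exponents]

print(interleaved_generator([0, 2, 3, 4], 10))   # [0, 20, 30, 40] -> 1 + X^20 + X^30 + X^40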

Convolutional Interleaver
A convolutional interleaver can be used in place
of a block interleaver in much the same way.
Convolutional interleavers are better matched
to the convolutional codes that will be described
in the next chapter.
Suppose the N input symbols are partitioned
into B groups (the depth), each group containing
M = N/B symbols. The convolutional interleaver
then has B sets of registers, and each shift register
stores M = N/B symbols.
Thus, nearby symbols are separated by N
symbols (an N × B interleaver). (B and N here play
the roles of λ and n in the block interleaver.)
The overall delay (including interleaving and
de-interleaving) is (B − 1)N symbols (the time
needed for the decoder to decode all N symbols).
The memory requirement at the transmitter (or
the receiver) is (B − 1)(B/2)M = (B − 1)N/2. Compared
with the block interleaver, this requires less
memory.
A burst of B symbols or fewer results in at most a
single symbol error in each group (B groups in
total). If a block code of length N (= MB) symbols
is in use, then a burst of N symbols or fewer results
in at most M symbol errors in a (block) codeword.
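A minimal Python sketch of such a (Forney-type) convolutional interleaver, with branch j holding a shift register of j·M symbols (the class, names and the None fill convention are our own; None marks the fill symbols that pad the registers at start-up):

from collections import deque

class ConvolutionalInterleaver:
    def __init__(self, B, M, fill=None):
        # branch j holds a shift register of j*M symbols
        self.lines = [deque([fill] * (j * M)) for j in range(B)]
        self.B, self.j = B, 0          # commutator position

    def push(self, symbol):
        line = self.lines[self.j]
        self.j = (self.j + 1) % self.B
        if not line:                   # branch 0: zero delay, straight through
            return symbol
        line.append(symbol)
        return line.popleft()          # oldest symbol leaves this branch

# B = 4, M = 1: a burst of B consecutive channel symbols hits four different branches,
# hence four different codewords after de-interleaving.
ilv = ConvolutionalInterleaver(B=4, M=1)
print([ilv.push(i) for i in range(12)])
# [0, None, None, None, 4, 1, None, None, 8, 5, 2, None]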

Example: American digital TV standard (ATSC,
Advanced Television Systems Committee): packet length
N = 208, depth B = 52. (ATSC A53A Annex D,
Fig. 6) (R&C, p.479-)


Figure: the ATSC convolutional interleaver, placed between the Reed-Solomon encoder and the pre-coder/trellis encoder; its B = 52 branches hold shift registers of 0, M, 2M, ..., (B − 1)M bytes (M = 4 bytes, B = 52, N = 208, R-S block = 207, B × M = N).

Interleaver input and output byte-index patterns (ATSC A53A Annex D, Fig. 6) are omitted here; in the input pattern successive entries of a row differ by B = 52 byte positions, and 'b' in the output pattern denotes a fill byte.

Deinterleaver: the inverse of the interleaver (A54, Fig. 10.14), placed between the trellis decoder and the Reed-Solomon decoder; the branch delays run from (B − 1)M down to 0 (M = 4 bytes, B = 52, N = 208, R-S block = 207, B × M = N).


7. Concatenated Coding Scheme


Concatenation is a very effective method of
constructing long powerful codes from shorter
codes.
It was devised by Forney in 1965.
It is often used to achieve high reliability with
reduced decoding complexity.
A simple concatenated code is formed from two
codes: an (n1, k1) binary code C1 and an (n2, k2)
nonbinary code C2 with symbols from GF(2^k1),
say an RS code.
Concatenated codes are effective against a
mixture of random errors and burst errors.
Scattered random errors are corrected by C1.
Bursts may affect relatively few bytes, but
probably so badly that C1 cannot correct them.
These few bytes can then be corrected by C2.


Figure 7-6. Concatenated coding: outer code encoder (n2, k2) → inner code encoder (n1, k1) → channel → inner code decoder → outer code decoder.

Encoding
Encoding consists of two stages, the outer code
encoding and the inner code encoding, as
shown in Figure 7-6.
First, a message of k1k2 bits is divided into k2
bytes of k1 bits each. Each k1-bit byte is regarded
as a symbol in GF(2^k1).

This k2-byte message is encoded into an n2-byte
codeword v in C2.
Each k1-bit byte of v is then encoded into an
n1-bit codeword w in C1.
This results in a string of n2 codewords of C1,
a total of n1n2 bits.
There are a total of 2^(k1k2) such strings, which
form an (n1n2, k1k2) binary linear code, called a
concatenated code.
C1 is called the inner code and C2 is called the
outer code.
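A structural sketch of the two-stage encoder in Python. For readability the component codes are toy stand-ins chosen by us (outer code: one parity byte, i.e. a (k2+1, k2) code over GF(2^k1); inner code: a (k1+1, k1) even-parity code); a real system would use an RS outer code and a stronger inner code:

K1, K2 = 4, 3                       # k1 bits per byte, k2 message bytes

def outer_encode(message_bytes):
    # stand-in outer code C2: append one parity byte (XOR of the k2 message bytes)
    parity = 0
    for b in message_bytes:
        parity ^= b
    return message_bytes + [parity]             # n2 = k2 + 1 bytes

def inner_encode(byte):
    # stand-in inner code C1: k1 bits plus one even-parity bit (n1 = k1 + 1)
    bits = [(byte >> i) & 1 for i in range(K1)]
    return bits + [sum(bits) % 2]

def concatenated_encode(message_bytes):
    # outer encoding first, then inner encoding of every k1-bit byte
    return [bit for byte in outer_encode(message_bytes) for bit in inner_encode(byte)]

msg = [0b1010, 0b0111, 0b0001]                  # k2 = 3 bytes of k1 = 4 bits
codeword = concatenated_encode(msg)
print(len(codeword))                            # n1 * n2 = 5 * 4 = 20 bits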

Diagram: a k2-byte message (k1 bits per byte) is encoded by C2 into an n2-byte codeword; each k1-bit byte is then encoded by C1 into n1 bits, i.e. n1 − k1 parity bits are appended to every byte.

Decoding
Decoding of a concatenated code also consists
of two stages, the inner code decoding and the
outer code decoding, as shown in Figure 7-6.
First, decoding is done for each inner code-
word as it arrives, and the parity bits are re-
moved. After n2 inner codewords have been
decoded, we obtain a sequence of n2 k1-bit
bytes.
This sequence of n2 bytes is then decoded
based on the outer code C2 to give k1k2 de-
coded message bits.

Decoding implementation is the straightforward
combination of the implementations for
the inner and outer codes.

Error Correction Capability


Concatenated codes are effective against a
mixture of random errors and bursts.
In general, the inner code is a random-error-correcting
code and the outer code is an RS code.
Scattered random errors are corrected by the
inner code, and bursts are then corrected by the
outer code.
Various forms of concatenated coding scheme
are being used or proposed for error control in
data communications, especially in space and
satellite communications.
In many applications, concatenated coding of-
fers a way of obtaining the best of two worlds,
performance and complexity.
If the minimum distances of the inner and outer
codes are d1 and d2 respectively, the minimum
distance of their concatenation is at least d1d2.
(L&C, p.279) (Consider a minimum-weight codeword
of the concatenated code: pick an outer codeword
with d2 nonzero symbols; in the worst case, each
nonzero byte after inner encoding has the minimum
weight d1, so the minimum number of nonzero bits
is d1d2.)
In fact, the concatenated code can correct mixtures
of random errors and bursts much better than this
bound suggests.

Diagram: a sequence of n1-bit inner codewords, some hit by scattered random errors and one stretch hit by a burst.


8. Cascaded Coding Scheme: Product Code


A simple generalization of concatenated
coding is shown in Fig. 7-7. The resulting
two-dimensional code is called a product
code. (R&C, p.123-)
The outer code C2 is an (n2, k2) RS code with
symbols from GF(2^m).
The inner code C1 is an (n1, k1) binary linear
code with k1 = λm, where λ is a positive integer.
The outer code C2 is interleaved to a depth of λ.
Essentially, this is interleaving plus concatenated coding!


Figure 7-7. Code array for the product code C1 × C2: λ rows of m-bit bytes (k2 information bytes out of n2 in each row); each column is a λ-byte segment extended by n1 − k1 inner parity bits, giving n2 frames (the 1st through the n2-th) of n1 bits each.

Encoding/Decoding
A message of k2 m-bit bytes (or k2m bits) is first
encoded into an n2-byte codeword in C2.
This codeword is then temporarily stored in a
buffer as a row of an array, as shown in Figure 7-7.
After λ outer codewords have been formed,
the buffer stores a λ × n2 array (of m-bit bytes).
Each column of the array consists of λ m-bit
bytes (or λm bits) and is encoded into an n1-bit
codeword in C1, which is transmitted in a serial
manner.
Note that the outer code is interleaved to a
depth of λ, and each inner codeword carries λ
bytes (λm = k1 bits) of message.
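A toy Python sketch of this array construction with m = 1 and even-parity stand-ins for both component codes (our own simplification; in the scheme above C2 is an RS code and C1 a binary (n1, λm) code):

def row_encode(bits):                 # stand-in outer code C2: even parity per row
    return bits + [sum(bits) % 2]

def col_encode(bits):                 # stand-in inner code C1: even parity per column
    return bits + [sum(bits) % 2]

# lam = 3 outer codewords form the rows of the buffer ...
rows = [row_encode([1, 0, 1, 1]), row_encode([0, 1, 1, 0]), row_encode([1, 1, 0, 0])]
# ... then each column of lam bits is encoded by C1 and sent serially
columns = [col_encode([row[j] for row in rows]) for j in range(len(rows[0]))]
print(columns)   # 5 columns, each an n1 = 4 bit inner codeword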

Error Correction Capability


(In the following statements, we consider only the
case that m=1.)
If the code C1 has minimum weight d1 and the
code C2 has minimum weight d2, the minimum
weight of the product code is exactly d1d2. A
minimum-weight vector is formed by (1)
choosing a minimum-weight code vector in C1
and a minimum-weight code vector in C2, and
(2) forming an array in which all columns corresponding
to zeros in the code vector from C1
are zero, and all columns corresponding to ones
in the code vector from C1 are the minimum-weight
code vector chosen from C2. (L&C, p.275)
Example: Assume the minimum-weight code
vector in C1 is [0 1 0 1 0 0 0] and the minimum-weight
code vector in C2 is [0 0 1 1 0]^T. Then, a
minimum-weight product code vector is the array
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 1 0 1 0 0 0
0 1 0 1 0 0 0
0 0 0 0 0 0 0
0 1 0 1 0 0 0
 
 0 0 
The maximum random-error-correcting capability
is t = ⌊(d1d2 − 1)/2⌋ = 2t1t2 + t1 + t2, where
t1 = ⌊(d1 − 1)/2⌋ and t2 = ⌊(d2 − 1)/2⌋. However, if the two-step
decoding procedure is in use, not all error patterns
with t or fewer errors are correctable. But many
error patterns with more than t errors are correctable
by this two-step decoding. (R&C, p.123~)
Let b1 and b2 be, respectively, the burst-error-correcting
capabilities of the component codes
C1 and C2. Suppose that a codeword array of the
product code is transmitted row by row. When
a received sequence containing any single burst of
length b2n1 or less is rearranged back into an
array in a row-by-row manner, the burst is confined
to at most b2 + 1 consecutive rows, and each column
is affected by a burst of length b2 or less; hence the
burst can be corrected by column-by-column decoding.
The burst-error-correcting capability of the product
code is therefore at least b2n1. Similarly, the product code
can be transmitted column by column and decoded
row by row. Thus, the burst-error-correcting
capability b satisfies
b ≥ max{n1b2, n2b1}. (R&C, p.127)
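A numerical check of the row-by-row argument in Python (array sizes chosen by us): a burst of length b2·n1 in the row-by-row stream leaves a burst of at most b2 in every column.

n1, n2, b2 = 15, 9, 3                     # n2 rows of n1 symbols; columns correct bursts <= b2
stream = [0] * (n1 * n2)                  # codeword array sent row by row (all-zero for simplicity)
for p in range(20, 20 + b2 * n1):         # channel burst of length b2 * n1 = 45
    stream[p] ^= 1

rows = [stream[i * n1:(i + 1) * n1] for i in range(n2)]        # back into the array
cols = [[rows[i][j] for i in range(n2)] for j in range(n1)]    # column-decoder view

def burst_length(v):
    ones = [i for i, b in enumerate(v) if b]
    return ones[-1] - ones[0] + 1 if ones else 0

print(max(burst_length(c) for c in cols))   # 3 = b2: each column sees a correctable burst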
