Simulation Results for Algebraic Soft-Decision Decoding of Reed-Solomon Codes


Warren J. Gross, Frank R. Kschischang, Ralf Koetter, P. Glenn Gulak
Abstract: The Koetter-Vardy algorithm is an algebraic soft-decision decoder for Reed-Solomon codes. The algorithm is based on an extension to the Guruswami-Sudan list-decoding algorithm with variable multiplicities that are assigned in proportion to the reliabilities of the received symbols. There are three steps: (1) multiplicity calculation, (2) interpolation of a bivariate polynomial, and (3) finding the y-roots of this polynomial. A low-complexity algorithm for calculating the multiplicities is proposed. Simulation results indicate that the coding gain depends on the code rate and ranges from 0.25 dB to 4.25 dB, with a practical upper limit of 1 to 1.5 dB, assuming binary phase shift keying and additive white Gaussian noise. Higher coding gains of between 2 dB and 6.8 dB can be achieved over a Rayleigh fading channel. The KV algorithm exhibits a performance-complexity tradeoff which is tunable by the choice of m_max, n and k. The code parameters should be chosen carefully to take advantage of the sweet spots in the performance-complexity profile.
I. INTRODUCTION
Reed-Solomon codes are powerful error-correcting codes
that can be found in a wide variety of digital communications
systems, from digital media to wireless communications and
deep-space probes. The ubiquitous nature of these codes con-
tinues to fuel research into decoding algorithms some forty
years after their introduction. A major challenge has been the
development of soft-decision decoders; that is, decoders that
can utilize the full information available from the channel in
the decoding process.
Reed-Solomon codes are non-binary linear block codes whose symbols are chosen from a finite field, usually the binary extension field GF(2^q). Algebraic decoders exploit the underlying algebraic structure of the code to generate a system of equations that is solved using the arithmetic operations of the finite field. This operation does not appear to be compatible with the real-valued, soft information available from the channel. Traditional decoders quantize soft information into hard decisions which can be utilized directly. It is well known that a performance penalty of approximately 2-3 dB in asymptotic coding gain is paid when using a hard-decision decoder on an additive white Gaussian noise (AWGN) channel [1]. Even greater coding gains can be realized over Rayleigh fading channels.
In this paper we characterize the performance of a recently-
introduced algebraic soft-decision decoding algorithm, the
Warren J. Gross, Frank R. Kschischang and P. Glenn Gulak are with
the Department of Electrical and Computer Engineering, University of
Toronto, 10 Kings College Road, Toronto, Ontario, M5S 3G4, Canada.
Email: [email protected]. This research was supported by NSERC
and the Government of Ontario.
Ralf Koetter is with the Coordinated Science Laboratory, University of Illi-
nois at Urbana-Champaign, Urbana, IL, 61801, U.S.A. This research was sup-
ported by the National Science Foundation under grant CCR-0073490.
Koetter-Vardy Algorithm [2, 3], which provides substantial
coding gains while maintaining polynomial complexity in the
length of the code.
II. THE KOETTER-VARDY ALGORITHM
A. Reed-Solomon Codes
Consider the finite field with Q elements, GF(Q). The message to be transmitted, f, consists of k elements of GF(Q):

f = (f_0, f_1, f_2, ..., f_{k−1}),  f_i ∈ GF(Q).   (1)

The message symbols can be considered to be the coefficients of a degree k − 1 message polynomial

f(x) = f_0 + f_1 x + f_2 x² + ... + f_j x^j + ... + f_{k−1} x^{k−1}.   (2)
An (n, k) Reed-Solomon (RS) code over GF(Q) represents the k-symbol transmitted message f by an n-symbol codeword c formed by evaluating the message polynomial f(x) at n elements of GF(Q). If the set of evaluation elements is D = {x_1, x_2, ..., x_n}, then the code is

RS_Q(n, k) = {(f(x_1), f(x_2), ..., f(x_n)) : x_i ∈ D},   (3)

for all possible message polynomials f(x). The minimum distance of an (n, k) Reed-Solomon code is d_min = n − k + 1.
Usually, n = Q − 1, and the set of evaluation elements is the set of non-zero elements of GF(Q). If n = Q then we call the code an extended Reed-Solomon code and D = GF(Q). We call
this method of generating a RS code the evaluation map method
[4]. It is also possible to view Reed-Solomon codes as BCH
codes. Then, the resulting cyclic code consists of codewords generated by multiplying f(x) by a generator polynomial g(x),

c(x) = f(x)g(x),   (4)

where c(x) is a degree n − 1 codeword polynomial corresponding to the n-symbol codeword c. Other encodings are possible
(e.g. systematic encodings).
The evaluation map method is useful because it provides in-
sight leading to interpolation-based decoding algorithms.
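As an illustration of the evaluation map (and not the implementation used in this paper), the following Python sketch encodes a message by evaluating f(x) at the points of D. For simplicity the arithmetic is done in a prime field GF(Q) with Q = 17, so field operations reduce to integer arithmetic mod Q; the codes discussed here actually use binary extension fields GF(2^q).

# Illustrative sketch (not the paper's implementation): evaluation-map encoding of an
# (n, k) Reed-Solomon code over a prime field, so that field operations are ordinary
# integer arithmetic mod Q. The paper's codes use GF(2^q) instead.

Q = 17                       # field size (prime, for this toy example)
n, k = 16, 5                 # code length and dimension, n <= Q

def poly_eval(coeffs, x, q=Q):
    """Evaluate f(x) = f_0 + f_1*x + ... + f_{k-1}*x^{k-1} over GF(q) by Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % q
    return acc

def rs_encode(message, evaluation_points, q=Q):
    """Map a k-symbol message to an n-symbol codeword: one evaluation per point of D."""
    return [poly_eval(message, x, q) for x in evaluation_points]

D = list(range(1, Q))        # the non-zero field elements, n = Q - 1
f = [3, 0, 7, 1, 12]         # a k-symbol message
c = rs_encode(f, D)
print(len(c), c)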
B. Decoding as an Interpolation Problem
In this section, we describe the Guruswami-Sudan (GS) algo-
rithm for the hard-decision decoding of Reed-Solomon codes
[5, 6] which is the basis for the Koetter-Vardy algorithm. For
proofs, please see [5–7]. To formally state the algorithm, we need to define the weighted degree of a bivariate polynomial. Let P(x, y) = Σ_i Σ_j p_{i,j} x^i y^j be a bivariate polynomial over GF(Q) and let w_x and w_y be nonnegative real numbers. The (w_x, w_y)-weighted degree of P(x, y), written deg_{(w_x, w_y)}(P), is defined as the maximum of i·w_x + j·w_y over all (i, j) such that p_{i,j} ≠ 0. The (1, 1)-weighted degree of a bivariate polynomial is the usual notion of degree.
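The weighted degree is simple to compute from a sparse representation of P(x, y). The sketch below (illustrative names, not from the paper) stores the polynomial as a dictionary mapping exponent pairs (i, j) to coefficients.

# Sketch: (w_x, w_y)-weighted degree of a bivariate polynomial stored as a dict
# from exponent pairs (i, j) to nonzero coefficients p_{i,j}.

def weighted_degree(poly, wx, wy):
    """Return max(i*wx + j*wy) over all (i, j) with p_{i,j} != 0, or -inf for the zero polynomial."""
    return max((i * wx + j * wy for (i, j), p in poly.items() if p != 0),
               default=float("-inf"))

# Example: P(x, y) = y - (2 + 3x) with k = 5, so its (1, k-1)-weighted degree is k - 1 = 4.
P = {(0, 1): 1, (0, 0): -2, (1, 0): -3}
print(weighted_degree(P, 1, 4))   # -> 4
print(weighted_degree(P, 1, 1))   # ordinary degree -> 1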
A bivariate polynomial P(x, y) is said to pass through a point (x_i, y_i) if P(x_i, y_i) = 0. Consider the received word y = (y_1, y_2, ..., y_n) where y = c + e. An element of GF(Q), x_i, can be uniquely associated with each y_i to form the list of points in GF(Q) × GF(Q),

L_n = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}.   (5)
If there is no noise (e = 0), then y_i = f(x_i) and the bivariate polynomial P(x, y) = y − f(x) passes through all of the points in L_n. To account for the effect of noise, introduce an error-locator polynomial Λ(x, y) so that Λ(x_i, y_i) = 0 whenever e_i ≠ 0. The decoding problem can be posed as the following interpolation problem [8]:

Given a set of n received points, L_n, find the bivariate polynomial with minimal (1, k − 1)-weighted degree of the form P(x, y) = Λ(x, y)(y − f(x)), with deg f(x) < k, such that P(x, y) passes through all the received points and y − f(x) passes through as many received points as possible.
The bivariate polynomial P(x, y) can be factored to find the list of factors of the form y − f(x), deg f(x) < k. The decoded codeword can be found by re-encoding the decoded messages and then choosing the codeword with the minimum distance to the received word. However, there is no need to perform a complete factorization since we are just looking for the linear y-roots of P(x, y). Roth and Ruckenstein give an appropriate root-finding algorithm for this problem [9]. This algorithm is a bounded-distance or list decoder for Reed-Solomon codes. The list decoding problem is to find the set of codewords at a distance of τ from the received word, where 0 ≤ τ ≤ n. Traditional decoders can only correct up to t = ⌊d_min/2⌋ errors. If we consider τ > t, there may not be a unique codeword at a distance > d_min/2 from the received word. Therefore bounded-distance decoders with τ > t return a list of candidate codewords.
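As a small worked example of the classical bound, for the (15, 7) code used later in the simulations, d_min = 15 − 7 + 1 = 9, so a traditional bounded-distance decoder corrects at most four errors:

# Worked example: classical error-correction radius of an (n, k) = (15, 7) RS code.
n, k = 15, 7
d_min = n - k + 1          # minimum distance, 9
t = d_min // 2             # traditional correction radius, 4
print(d_min, t)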
An important concept for polynomials over a finite field of characteristic two is the Hasse derivative [10]. The (α, β)th Hasse derivative of a bivariate polynomial P(x, y) is defined for integers α, β ≥ 0 as:

P^[α,β](x, y) = Σ_{a≥α} Σ_{b≥β} (a choose α)(b choose β) p_{a,b} x^{a−α} y^{b−β}.   (6)
We say that a polynomial passes through a point with multiplicity m_i if the polynomial and its Hasse derivatives P^[α_i,β_i], α_i + β_i < m_i, all pass through the point. We can improve the error-correcting capability of the decoding algorithm by introducing singularities at each of the received points [6], forcing the polynomial to intersect itself multiple times at a point. The interpolation polynomial can be found by solving the system of equations implied by P^[α_i,β_i](x_i, y_i) = 0 for all points (x_i, y_i) ∈ L_n and all α_i, β_i ≥ 0 such that α_i + β_i < m_i. The Guruswami-Sudan algorithm can correct up to n(1 − √(k/n)) errors.
[Fig. 1. The Koetter-Vardy algorithm: a soft-decision front end calculates multiplicities from the soft information from the channel, followed by the modified Guruswami-Sudan algorithm (interpolate, find roots, select output) to produce the decoded codeword.]

For hard-decision decoding, the Guruswami-Sudan algorithm assigns equal multiplicities to the received points.
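To make the interpolation constraints concrete, here is an illustrative sketch of the Hasse derivative evaluation and of the (α, β) pairs that a multiplicity of m imposes at a single point, i.e. m(m+1)/2 constraints per point. For readability the coefficients are taken mod a prime q rather than in GF(2^q), so this illustrates the formula rather than the paper's implementation.

# Sketch of the interpolation constraints: evaluate the (alpha, beta)-th Hasse
# derivative of P at a point, and enumerate the constraints a multiplicity m imposes.
# Coefficients live in a prime field GF(q) for simplicity (the paper uses GF(2^q)).

from math import comb

def hasse_derivative_at(poly, alpha, beta, x0, y0, q):
    """Evaluate sum of C(a,alpha)*C(b,beta)*p_ab*x0^(a-alpha)*y0^(b-beta) over a>=alpha, b>=beta."""
    total = 0
    for (a, b), p in poly.items():
        if a >= alpha and b >= beta:
            term = comb(a, alpha) * comb(b, beta) * p
            term = term * pow(x0, a - alpha, q) * pow(y0, b - beta, q)
            total = (total + term) % q
    return total

def multiplicity_constraints(m):
    """All (alpha, beta) with alpha + beta < m: the m(m+1)/2 constraints per point."""
    return [(a, b) for a in range(m) for b in range(m - a)]

# A point (x0, y0) is passed with multiplicity m iff all of these derivatives vanish there.
print(multiplicity_constraints(3))   # 6 constraints, as expected for m = 3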
C. Soft-Decision Decoding
Guruswami and Sudan hint at a possible soft-decision exten-
sion to their algorithm in [6] by allowing each point on the in-
terpolated curve to have its own multiplicity. Koetter and Vardy
proposed a method to translate soft-information into multiplic-
ities [2, 3]. The Koetter-Vardy (KV) algorithm performs soft-
decision decoding by assigning unequal multiplicities to points
according to their relative reliabilities. We note that all possible Q·n transmitted/received symbol pairs are considered and not just the ones corresponding to the hard decisions. Once multiplicities have been assigned, the rest of the decoding proceeds according to the Guruswami-Sudan algorithm. A block diagram of the KV algorithm is given in Figure 1.
A precise definition of soft information is needed. The symbols are drawn from a finite field GF(Q) and transmitted across a memoryless channel. The channel input and output are the random variables X and Y. The optimum soft information about a transmitted symbol x_i given the received symbol y_j is the a-posteriori probability (APP):

π_{i,j} = P(X = x_i | Y = y_j),   (7)

where 1 ≤ i ≤ Q and 1 ≤ j ≤ n.
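For a memoryless channel with equiprobable inputs, each column of APPs follows from the channel likelihoods by Bayes' rule, which reduces to a normalization. The sketch below is illustrative; the likelihood values are made up.

# Sketch: one column of the reliability matrix from memoryless-channel likelihoods,
# assuming equiprobable transmitted symbols (uniform prior).

def app_column(likelihoods):
    """Given P(y_j | X = x_i) for every field element x_i, return the APPs
    pi_{i,j} = P(X = x_i | Y = y_j); with a uniform prior this is just normalization."""
    total = sum(likelihoods)
    return [p / total for p in likelihoods]

# Example with Q = 4 candidate symbols for one received symbol y_j:
col = app_column([0.02, 0.61, 0.30, 0.07])
print(col, sum(col))   # the column sums to unity, as required of the reliability matrix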
A (Q × n) reliability matrix Π whose columns sum to unity can be constructed from the π_{i,j}. Ultimately, the information in this matrix has to be translated to a set of weighted points. The weights, or multiplicities, can be recorded in a (Q × n) multiplicity matrix M. The score of a codeword c = (c_1, c_2, ..., c_n) over GF(Q) with respect to a multiplicity matrix M is defined as

S_M(c) = Σ_{i=1}^{Q} Σ_{j=1}^{n} m_{i,j} [c]_{i,j},   (8)

where the [c]_{i,j} are elements of the (Q × n) matrix [c] formed by setting [c]_{i,j} = 1 if c_j = x_i and 0 otherwise. The decoder does not know the codeword, but can only infer information about it from the received soft information. Therefore, the codeword appears as a random vector to the decoder and the score S_M(c) is a random variable. The goal is to find a multiplicity matrix M that maximizes the expected value of S_M(c). The problem reduces to finding the matrix M which maximizes the inner product of M and Π, where ⟨M, Π⟩ = Σ_{i=1}^{Q} Σ_{j=1}^{n} m_{i,j} π_{i,j}.
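The score and the inner product are simple sums over the Q × n matrices. A sketch, with field elements indexed 0, ..., Q−1 so that row i corresponds to symbol x_i (these conventions are ours, not the paper's):

# Sketch: the score S_M(c) of eq. (8) and the inner product <M, Pi> that the
# multiplicity assignment tries to maximize. Matrices are plain Q x n lists of lists.

def score(M, codeword):
    """S_M(c): sum of m_{i,j} over the entries selected by the codeword symbols."""
    return sum(M[c_j][j] for j, c_j in enumerate(codeword))

def inner_product(M, Pi):
    """<M, Pi> = sum_{i,j} m_{i,j} * pi_{i,j}; with Pi holding the symbol posteriors,
    this equals the expected score E[S_M(c)]."""
    return sum(m_ij * p_ij for m_row, p_row in zip(M, Pi)
               for m_ij, p_ij in zip(m_row, p_row))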
Algorithm 1 [3] is an optimal algorithm for generating a matrix M which maximizes ⟨M, Π⟩ subject to the constraint that Σ_{i=1}^{Q} Σ_{j=1}^{n} m_{i,j} < s. If we let s → ∞, ⟨M, Π⟩ is maximized. The cost, C_M, of M is the number of linear equations that need to be solved for the interpolation. If an entry in M is increased from m_{i,j} to m_{i,j} + 1, this introduces m_{i,j} + 1 additional linear constraints on the interpolation problem. Therefore, we would like to keep s as small as possible. We will also see that error-rate performance in general improves with C_M and hence s. This gives the KV algorithm a tunable parameter to trade off performance against decoding complexity.
Algorithm 1: Algorithm for calculating M from Π subject to complexity constraint s (from [3]).
  Choose a desired value for s = Σ_{i=1}^{Q} Σ_{j=1}^{n} m_{i,j}; Π* ← Π; M ← 0
  while s > 0 do
    Find the position (i, j) of the largest entry π*_{i,j} in Π*
    π*_{i,j} ← π_{i,j} / (m_{i,j} + 2)
    m_{i,j} ← m_{i,j} + 1
    s ← s − 1
  end while
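A direct Python rendering of Algorithm 1 might look as follows; it is a reference sketch of the greedy rule above, with the O(Q·n) search repeated s times.

# Sketch of Algorithm 1 (greedy multiplicity assignment), following the pseudocode
# above. Pi is a Q x n list of lists of APPs; s is the number of increments to spend.

def algorithm1(Pi, s):
    Q, n = len(Pi), len(Pi[0])
    Pi_star = [row[:] for row in Pi]          # working copy Pi*
    M = [[0] * n for _ in range(Q)]
    while s > 0:
        # position of the largest entry of Pi*
        i, j = max(((i, j) for i in range(Q) for j in range(n)),
                   key=lambda ij: Pi_star[ij[0]][ij[1]])
        Pi_star[i][j] = Pi[i][j] / (M[i][j] + 2)
        M[i][j] += 1
        s -= 1
    return M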
Algorithm 1 has high complexity as it has to search through a (Q × n) matrix s times. This could require a maximum of Q·n·s memory accesses. If we are to have at least the performance of a hard-decision decoder we need s ≥ n. Therefore the complexity of Algorithm 1 is O((n+1)(n)(n)) = O(n³). If Q is large, say 256, then n³ ≈ 2²⁴. We propose a low-complexity algorithm as an alternative to Algorithm 1. ⟨M, Π⟩ is maximized if [3]:

M = ⌊λΠ⌋ as λ → ∞.   (9)

If we instead choose a finite value of λ ∈ ℝ, λ > 0, then we can obtain a fixed-cost matrix M. Algorithm 2 is a heuristic that has been experimentally determined to give comparable performance to Algorithm 1. Algorithm 2 only has to make a single pass through Π and therefore has complexity O(n²). We note that in practice, Π is quite sparse with most of its entries near 0. Therefore, storing it explicitly is a waste of memory and computational resources. At the extreme end of the spectrum is the case of high SNR where Π has exactly one non-zero entry per column which is equal to 1.0. This is a hard-decision decoding problem and the KV algorithm reduces to the GS algorithm if s = n. Then the lower bound on the complexity of Algorithm 1 is n·s = O(n²) and Algorithm 2 is O(n). The complexity of interpolation is now dependent on the maximum multiplicity in M, m_max = ⌊λ⌋.
Algorithm 2: Algorithm for calculating M from Π. The tunable complexity parameter is λ and the maximum possible entry in M is ⌊λ⌋.
  for i = 1 to Q do
    for j = 1 to n do
      m_{i,j} ← ⌊λ π_{i,j}⌋
    end for
  end for
In practical implementations λ can be a power of two so that the multiplication reduces to a shift operation. The floor function is naturally implemented by truncating the bits of the result to the right of the decimal point.
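Algorithm 2 is a one-line operation per entry. A sketch, with an illustrative reliability matrix:

# Sketch of Algorithm 2: a single pass over Pi with m_{i,j} = floor(lambda * pi_{i,j}).
# With lam a power of two and fixed-point reliabilities, the multiply becomes a shift
# and the floor a truncation, as noted above.

from math import floor

def algorithm2(Pi, lam):
    """Return the Q x n multiplicity matrix M = floor(lam * Pi), entrywise."""
    return [[floor(lam * p) for p in row] for row in Pi]

# Example: lam = 4 caps the largest possible multiplicity at floor(lam) = 4.
Pi = [[0.05, 0.90], [0.80, 0.02], [0.10, 0.05], [0.05, 0.03]]   # Q = 4, n = 2
print(algorithm2(Pi, 4))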
III. SIMULATION RESULTS
A. Software Implementation
We have implemented the KV algorithm in software. Soft
information is converted to multiplicities by Algorithm 2. The
interpolation step finds a Gröbner basis for the ideal of bivariate polynomials which vanish at a set of points with prescribed multiplicities. Fast O(n²) algorithms for interpolation are described in [7, 9, 11–14]. We use the algorithm from [7] for the GS algorithm, which is easily modified to handle unequal multiplicities. The root-finding algorithm from [9] is used.
The software implementation of the algorithm runs very slowly, especially for large field sizes. Fortunately, an upper bound on the frame-error rate (FER) can be easily obtained through the following theorem, which is proved in [3]:
Theorem 1: Let P(x, y) be an interpolation polynomial obtained from the multiplicity matrix M corresponding to the transmitted codeword c, with cost C_M. Then the factorization of P(x, y) contains a factor y − f(x) such that c = (f(x_1), f(x_2), ..., f(x_n)) if:

S_M(c) > min{ δ ∈ ℤ : ⌈(δ + 1)/(k − 1)⌉ · (δ − ((k − 1)/2)·⌊δ/(k − 1)⌋ + 1) > C_M }.   (10)
If this threshold condition is satisfied then we are guaranteed that the decoding will be successful. Since the decoding could still be successful otherwise, these simulation results might be slightly pessimistic. Simulations for n = 15 and k = 5, 7, 9, 11, 13 and a maximum multiplicity m_max = 2, 4 show that the estimated performance matches the actual decoder performance very closely. As an example, see Figure 2. Hybrid simulations are possible where the full decoder is only employed on failure of the threshold condition.
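A sketch of the threshold test, under our reading of eq. (10): compute C_M from the multiplicity matrix, scan δ upward until the count of monomials of (1, k−1)-weighted degree at most δ exceeds C_M, and compare against the score of the transmitted codeword. The helper names are ours.

# Sketch of the Theorem 1 check used to upper-bound the FER.

def cost(M):
    """C_M = (1/2) * sum m_{i,j}(m_{i,j} + 1): the number of linear constraints."""
    return sum(m * (m + 1) for row in M for m in row) // 2

def monomial_count(delta, k):
    """Number of monomials x^a y^b with (1, k-1)-weighted degree a + (k-1)b <= delta."""
    t = delta // (k - 1)
    return (t + 1) * (2 * (delta + 1) - (k - 1) * t) // 2

def threshold(C_M, k):
    """Right-hand side of eq. (10): the smallest delta whose monomial count exceeds C_M."""
    delta = 0
    while monomial_count(delta, k) <= C_M:
        delta += 1
    return delta

def guaranteed_success(S_Mc, M, k):
    """True if the sufficient condition S_M(c) > threshold holds, so decoding must succeed."""
    return S_Mc > threshold(cost(M), k)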
B. Coding Gain
Of particular interest is the effect of code rate on the coding gain. The Guruswami-Sudan algorithm can correct up to n(1 − √(k/n)) errors and therefore improves as the rate k/n decreases [6]. We would expect that this effect is preserved in soft-decision decoding with the KV algorithm. Simulations for n = 15 and k = 3, ..., 13 demonstrate this effect. Figure 3 plots the coding gain in dB at a FER of 10⁻³ of the KV algorithm over a conventional hard-decision Reed-Solomon decoder as a function of code rate. The modulation is BPSK and the channel model is additive white Gaussian noise (AWGN). We see that the coding gain ranges from 0.25 dB at high rates and low complexity to 4.25 dB at low rates and high complexity, giving the designer two degrees of freedom to obtain a given coding gain.
Figure 4 shows the performance of the KV algorithm for a
very common high-rate (255, 239) Reed-Solomon code. We see
that for very high complexities, a maximum gain of 0.47 dB can
be achieved. For reasonable complexities, say with m_max = 4, a gain of 0.27 dB is achieved.
We also investigated the performance of the KV algorithm
over a Rayleigh fading channel with 16-QAM modulation. Fig-
ure 5 shows the performance of a (15, 11) Reed-Solomon code.
Four-bit symbols from GF(16) are mapped directly to 16-QAM
constellation points. The multiplicative fading factors are in-
dependent, simulating the effect of an ideal interleaver. The
reliability matrix is calculated directly from the received soft-
information assuming perfect channel state information. We
see that much larger gains are realized on a fading channel. At a FER of 10⁻³, the coding gain for a (15, 11) code is 5 dB for m_max = 4 and 6.8 dB for m_max = 100.
A simulation setup for a (255, 191) Reed-Solomon code is
shown in Figure 6. Each eight-bit symbol of GF(256) is split
into two four-bit symbols and two 16-QAM channel uses are
needed to transmit the symbol. The simulation results are plot-
ted in Figure 7. The coding gain is 2.1 dB for m_max = 4 and 2.9 dB for m_max = 100 at a FER of 10⁻³. We note that the coding
gain on a Rayleigh channel is not constant but increases as the
SNR increases since the two curves diverge. The results given
here for a high FER will improve as the SNR increases.
C. Complexity
Above we saw that the achievable coding gain increases as
the rate of the code decreases. Unfortunately, so does the com-
putational complexity of the algorithm, which is dominated by
the complexity of the interpolation algorithm. The algorithm
maintains a set of b polynomials of length L terms, which at the
end of each of the C_M iterations, satisfy one additional linear constraint. When the iterations terminate, the polynomial with minimal (1, k − 1)-weighted degree is chosen as P(x, y). The number of iterations C_M is [3]:

C_M = (1/2) Σ_{i=1}^{Q} Σ_{j=1}^{n} m_{i,j}(m_{i,j} + 1),   (11)

which is a function of the code length n and the maximum multiplicity m_max. The speed of the algorithm is determined by C_M since iteration i requires the results of iteration i − 1, creating a dependency loop. The memory requirements for interpolation are b·L. The number of polynomials b is given as [7]

b = ⌈ (−(k − 1) + √((k − 1)² + 8·C_M·(k − 1))) / (2(k − 1)) ⌉,

which is approximately ⌈√(2·C_M/(k − 1))⌉. Therefore, b increases with both n and m_max and also increases as the rate k/n decreases. For maximum coding gain, we would like the rate to be low and m_max to be high, but this increases the computational complexity and memory requirements. To study this tradeoff, one possible measure is the space-time complexity of interpolation:

C_par = b·L·C_M ≈ b·C_M²,   (12)
assuming that the b polynomials can be updated within each loop in parallel in constant time. This expression assumes that the complexity of GS decoding with equal multiplicities of m_max is an upper bound to the complexity of KV decoding with maximum multiplicity m_max. Note that L can be calculated exactly [3, 7] but is approximately C_M. For software implementations on a serial machine, each iteration updates b polynomials of length L and the complexity is

C_ser = b²·L²·C_M ≈ b²·C_M³.   (13)
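To get a feel for these expressions, the following sketch evaluates b, C_par and C_ser for the case of equal multiplicities m_max at every point (so that C_M = n·m_max(m_max + 1)/2, as for hard-decision GS decoding) and with the approximation L ≈ C_M. These are rough order-of-magnitude estimates, not measured costs.

# Rough sketch of the complexity measures above, assuming equal multiplicities m_max
# at every point and L ~ C_M.

from math import ceil, sqrt

def num_polynomials(C_M, k):
    """b = ceil((-(k-1) + sqrt((k-1)^2 + 8*C_M*(k-1))) / (2*(k-1)))."""
    w = k - 1
    return ceil((-w + sqrt(w * w + 8 * C_M * w)) / (2 * w))

def complexity_estimates(n, k, m_max):
    C_M = n * m_max * (m_max + 1) // 2     # cost for equal multiplicities, eq. (11)
    b = num_polynomials(C_M, k)
    L = C_M                                 # L is approximately C_M
    C_par = b * L * C_M                     # space-time product, eq. (12)
    C_ser = b * b * L * L * C_M             # serial space-time product, eq. (13)
    return {"C_M": C_M, "b": b, "C_par": C_par, "C_ser": C_ser}

print(complexity_estimates(15, 5, 4))       # lower-rate (15, 5) code
print(complexity_estimates(15, 11, 4))      # higher-rate (15, 11) code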
Let us define the complexity-gain ratios CGR_par = C_par/γ_{r,m} and CGR_ser = C_ser/γ_{r,m}, where γ_{r,m} is the coding gain. For the coding gain experiment of Figure 3 (BPSK, AWGN, n = 15), we plot CGR_par,norm, the CGR_par normalized by its largest value. From Figure 8 we see that for this particular setup, there are good (even values of k) and bad (odd k) choices of the rate. The designer should judiciously choose the parameters of the code with this in mind.
IV. CONCLUSIONS
The Koetter-Vardy algorithm is a soft-decision decoding
algorithm for Reed-Solomon codes that incorporates soft-
decisions into an algebraic decoding framework. The soft-
decision front end can be implemented with a reasonable
complexity using the proposed Algorithm 2. We have pre-
sented simulation results to characterize the coding gains pos-
sible from the soft-decision Koetter-Vardy algorithm for Reed-
Solomon codes. The algorithm can achieve significant gain for soft-decision decoding on an AWGN channel, but only for low-rate codes and with very high complexity. For reasonable complexities, the upper limit is 1 to 1.5 dB. High-rate codes only benefit from 0.25 to 0.5 dB of coding gain. Rayleigh fading channels exhibit larger coding gains of about 2 to 6.8 dB. The KV algorithm exhibits a performance-complexity tradeoff which is tunable by the choice of m_max, n and k. The code parameters should be chosen carefully to take advantage of the sweet spots in the performance-complexity profile.
REFERENCES
[1] A. Brinton Cooper III, "Soft decision decoding of Reed-Solomon codes," in Reed-Solomon Codes and Their Applications, ch. 6, pp. 108–124, New York, New York: IEEE Press, 1994.
[2] R. Kötter and A. Vardy, "Algebraic soft-decision decoding of Reed-Solomon codes," in Proc. of the IEEE Int. Symp. on Information Theory, p. 61, 2000.
[3] R. Koetter and A. Vardy, "Algebraic soft-decision decoding of Reed-Solomon codes." Submitted to IEEE Trans. Inf. Theory, May 31, 2000.
[4] I. S. Reed and G. Solomon, "Polynomial codes over certain finite fields," SIAM Journal of Applied Math., vol. 8, pp. 300–304, 1960.
[5] M. Sudan, "Decoding of Reed-Solomon codes beyond the error correction bound," J. Complexity, vol. 13, no. 1, pp. 180–193, 1997.
[6] V. Guruswami and M. Sudan, "Improved decoding of Reed-Solomon and Algebraic-Geometry codes," IEEE Trans. Information Theory, vol. 45, pp. 1757–1767, September 1999.
[7] R. R. Nielsen, "Decoding AG-codes beyond half the minimum distance," Master's thesis, Technical University of Denmark, August 31, 1998.
[8] G. D. Forney, "Reed-Solomon codes." URL: http://truth.mit.edu/eyeh/6.451/L3G.pdf, 2001.
[9] R. M. Roth and G. Ruckenstein, "Efficient decoding of Reed-Solomon codes beyond half the minimum distance," IEEE Trans. Information Theory, vol. 46, pp. 246–257, January 2000.
[10] H. Hasse, "Theorie der höheren Differentiale in einem algebraischen Funktionenkörper mit vollkommenem Konstantenkörper bei beliebiger Charakteristik," J. Reine Angew. Math., vol. 175, pp. 50–54, 1936.
[11] H. M. Möller and B. Buchberger, "The construction of multivariate polynomials with preassigned zeros," in EUROCAM '82, European Computer Algebra Conference (J. Calmet, ed.), vol. 144 of Lecture Notes in Computer Science, (Marseille, France), pp. 24–31, April 1982.
[12] J. Abbott, A. Bigatti, M. Kreuzer, and L. Robbiano, "Computing ideals of points," J. Symbolic Computation, vol. 30, no. 4, pp. 341–356, 2000.
[13] G. Feng and K. Tzeng, "A generalization of the Berlekamp-Massey algorithm for multisequence shift-register synthesis with applications to decoding cyclic codes," IEEE Trans. Inf. Theory, vol. 37, pp. 1274–1287, September 1991.
[14] R. Kötter, On Algebraic Decoding of Algebraic-Geometric and Cyclic Codes. PhD thesis, Linköping University, 1996.
[Fig. 2. Simulation of a (15, 7) Reed-Solomon code with BPSK modulation over an AWGN channel comparing the actual Koetter-Vardy decoder with m_max = 4 and a simulation using the threshold condition in Theorem 1. Axes: FER vs. E_b/N_0 (dB); curves: hard decision, KV threshold (m_max = 4), KV decoded (m_max = 4).]
[Fig. 3. The effect of code rate on coding gain for a (15, k) Reed-Solomon code transmitted with BPSK modulation over an AWGN channel. Axes: coding gain at FER = 10⁻³ (dB) vs. rate; curves: m_max = 2, 4, 8, 100. The simulations were performed using the threshold condition in Theorem 1.]
[Fig. 4. Simulation of a (255, 239) Reed-Solomon code with BPSK modulation over an AWGN channel using the threshold condition from Theorem 1. Axes: FER vs. E_b/N_0 (dB); curves: classical hard-decision, Guruswami-Sudan, KV with m_max = 2, 4, 8, 100.]
[Fig. 5. Simulation of a (15, 11) Reed-Solomon code with 16-QAM modulation over a Rayleigh fading channel using the threshold condition in Theorem 1. Axes: FER vs. E_b/N_0 (dB); curves: RS(15, 11) hard-decision, KV with m_max = 4, KV with m_max = 100.]
[Fig. 6. Simulation setup for decoding the RS(255, 191) code over a Rayleigh fading channel with 16-QAM modulation: RS(255, 191) encoder, two 16-QAM channel uses per code symbol, multiplicative fading plus additive noise, reliability calculation with perfect CSI, KV decoder.]
[Fig. 7. Simulation of a (255, 191) Reed-Solomon code with 16-QAM modulation over a Rayleigh fading channel using the threshold condition from Theorem 1. Axes: FER vs. E_b/N_0 (dB); curves: hard-decision, KV with m_max = 4, KV with m_max = 100.]
[Fig. 8. The normalized complexity-coding gain ratio CGR_par,norm vs. the code rate, for m_max = 2, 4, 100.]