1 Introduction of Coded Digital Communication Systems

Modern digital communication systems often require error-free transmission. For example, the information transmitted and received over a banking network must not contain errors. The issue of data integrity is becoming increasingly important, and there are many more situations where error protection is required. In 1948, the fundamental concepts and mathematical theory of information transmission were laid by C. E. Shannon [1]. Shannon perceived that it is possible to transmit digital information over a noisy channel with an arbitrarily small error probability by proper channel encoding and decoding. The goal of approaching such error-free transmission can be achieved when the information transmission rate is less than the channel capacity, C, in bits per second. Since Shannon's work, a great deal of effort has been spent by many researchers to find good codes and efficient decoding methods for error control. As a result, many different types of codes, namely block codes and convolutional codes, have been found and used in modern digital communication systems. This book is mainly concerned with the structures of binary and nonbinary linear block codes and their decoding techniques. The treatment here concentrates on the basic encoding and decoding principles of those block codes.
1.1 Introduction
The development of channel coding was motivated primarily by users who wanted reliable transmission in a communication system. This chapter describes
Error-Control Block Codes for Communications Engineers
the elements that make up a reliable communication system, then examines various types of transmission channels. The mathematical models presented here give an adequate description of a communication system, because they relate to the physical reality of the transmission medium. Such mathematical models also ease the analysis of system performance. The principal concepts are presented at a basic level. For a more thorough treatment, readers should consult other, more comprehensive and pioneering references [1-4].
1.2 Elements of a Digital Communication System

1.2.1 Data Source and Data Sink
The essential features of a digital communication system can be described in a block diagram, as shown in Figure 1.1. Information data is generated by the data source. In most applications, the information symbols are binary in nature. These information symbols are sent in sequence, one after the other, to give a serial system. The key parameter here is the information rate, R_s, of the source, which is the minimum number of bits per second needed to represent the source output. Normally, all redundancy should be removed from the data source, if possible. Details of redundancy removal from a source (source coding theory) are not discussed here. Rather, it is assumed that the symbols generated by the data source are statistically independent, equally likely to take on any of their possible values, and free of redundancy. At the receiving end, the decoded information data is delivered to the data sink (destination).

Figure 1.1 Model of a coded digital communication system.
1.2.2 Channel Encoder and Channel Decoder

The information sequence generated by the data source is processed by the channel encoder, which transforms a k-symbol input sequence into an n-symbol output channel code sequence. The code sequence is a new sequence that has redundancy in it. The rate at which the transmission is performed by the encoder is defined as the code rate, R_c, where R_c = k/n and R_c <= 1. Two kinds of codes can be used: block codes and convolutional codes [5-8]. Block codes are implemented by combinational logic circuits. Examples of block codes are Bose-Chaudhuri-Hocquenghem (BCH) codes, Reed-Muller (RM) codes, cyclic codes, array codes, single-error-correcting (SEC) Hamming codes, and Reed-Solomon (RS) codes. Convolutional codes are implemented by sequential logic circuits and are also called tree codes or trellis codes. In general, block and convolutional codes can be binary or nonbinary, linear or nonlinear, and systematic or nonsystematic. In these contexts, we do not discuss the theory of convolutional codes. At the receiving end, the channel decoder uses the redundancy in the channel code sequence to correct errors, if there are any, in the received channel sequence. The decoder then produces an estimate of the source information sequence.
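The encoding step described above can be sketched with a small worked example. The following Python snippet is an illustration assumed here (it is not taken from the text): a systematic (7,4) single-error-correcting Hamming encoder, showing how a k = 4 symbol input block becomes an n = 7 symbol code block with code rate R_c = 4/7. The particular parity-check assignments are one conventional choice.

```python
# Sketch (assumed, not from the text): a systematic (7,4) Hamming encoder.
# It maps a k = 4 bit message to an n = 7 bit codeword (rate R_c = 4/7).

def hamming74_encode(msg):
    """Encode 4 message bits into a 7-bit systematic Hamming codeword."""
    m0, m1, m2, m3 = msg
    # Each parity bit checks a different subset of the message bits, so
    # the 7 parity-check columns are nonzero and distinct (single-error
    # correction).
    p0 = m0 ^ m1 ^ m2
    p1 = m1 ^ m2 ^ m3
    p2 = m0 ^ m1 ^ m3
    return [m0, m1, m2, m3, p0, p1, p2]

codeword = hamming74_encode([1, 0, 1, 1])
print(codeword)       # 4 message bits followed by 3 parity bits
print(len(codeword))  # n = 7
```

The redundancy added here (the three parity bits) is exactly what the channel decoder later exploits to correct errors.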
1.2.3 Modulator, Transmission Path, and Demodulator

In a coded digital communication system, the modulator simply converts a block of channel code symbols into a suitable waveform, s(t), of finite duration for transmission. The process is referred to as modulation. Viewed in a more general way, an M-ary modulator takes blocks of γ binary digits from the channel encoder output and assigns one of the M possible waveforms to each block, where M = 2^γ and γ >= 1. Thus, γ binary bits are used to select a signal waveform of duration T seconds. T is referred to as the signaling interval. Modulation can be performed by varying the amplitude, the phase, or the frequency of a high-frequency signal, called a carrier, by the input signal of the modulator. (Details of varying the frequency of a carrier signal are not discussed in this book.) If the input signal of the modulator is used to vary the amplitude of a carrier signal, the modulation is called amplitude-shift keying (ASK) modulation. For example, an M-ary amplitude-shift keying (M-ASK) signal can be defined by
s(t) = A_i cos(2π f_c t),  0 <= t <= T
     = 0,                  elsewhere          (1.1)

where

A_i = A[2i - (M - 1)]          (1.2)

for i = 0, 1, ..., M - 1. Here, A is a constant and f_c is the carrier frequency. Figure 1.2 shows the 4-ASK signal sequence generated by the binary sequence 00 11 01 10. It can be seen that the modulated signal requires a bandwidth of 1/T Hz. The signal bandwidth is obviously inversely proportional to T.

If the input signal of the modulator is used to vary the phase of a carrier signal, the modulation is called phase-shift keying (PSK) modulation. An M-ary phase-shift keying (M-PSK) signal can be defined by
s(t) = A cos(2π f_c t + θ_i + θ'),  0 <= t <= T
     = 0,                           elsewhere          (1.3)

where

θ_i = (2π/M) i          (1.4)

Figure 1.2 4-ASK modulation: (a) binary signal and (b) 4-ASK signal.
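The amplitude levels of (1.2) can be checked numerically. The short Python sketch below is an assumed illustration (not part of the text); it lists the M-ASK levels and reproduces the -3A, -A, A, 3A levels of the 4-ASK example in Figure 1.2.

```python
# A quick check (assumed sketch, not from the text) of the M-ASK
# amplitude levels in (1.2): A_i = A[2i - (M - 1)] for i = 0, ..., M - 1.

def ask_levels(M, A=1.0):
    """Amplitude levels of an M-ASK signal set."""
    return [A * (2 * i - (M - 1)) for i in range(M)]

print(ask_levels(4))  # [-3.0, -1.0, 1.0, 3.0], the levels of Figure 1.2
```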
for i = 0, 1, ..., M - 1. Here, A is the amplitude constant and θ' is an arbitrary phase angle. For convenience, the arbitrary phase angle θ' is taken to be zero. Figure 1.3 shows the binary phase-shift keying (BPSK) signal sequence generated by the binary sequence 0 1. Figure 1.4 shows the 4-ary phase-shift keying (4-PSK) signal sequence generated by the binary sequence 00 11 01 10.
We can also use the input signal of the modulator to vary both the amplitude and the phase of a carrier signal; the modulation is then called quadrature-amplitude-modulation (QAM) modulation.

Figure 1.3 BPSK modulation: (a) binary signal and (b) BPSK signal.

Figure 1.4 4-PSK modulation: (a) binary signal and (b) 4-PSK signal.

An M-ary quadrature-amplitude-modulation (M-QAM) signal can be defined by

s(t) = A_i cos(2π f_c t) + B_j sin(2π f_c t),  0 <= t <= T
     = 0,                                      elsewhere          (1.5)

where

A_i = A[2i - (√M - 1)]          (1.6)

B_j = A[2j - (√M - 1)]          (1.7)
for i, j = 0, 1, ..., √M - 1, and A is a constant. It can be seen from (1.5) that an M-ary QAM signal may be viewed as a combination of two √M-ASK signals in phase quadrature. For M = 16, the modulation is called 16-ary, or 16-point, quadrature-amplitude-modulation (16-QAM). In digital communication system design, it is often convenient to represent each signal waveform, s(t), as a point in a complex-number plane. This is referred to as the signal constellation diagram, and the coordinates of each point can be found by reading the values along the real and imaginary axes in the complex-number plane. Thus, a block of γ channel code symbols maps implicitly to the signal point (symbol), s_l, at time l, and there is a unique one-to-one mapping between the channel code symbols and the signal symbols in the signal constellation diagram. Each signal waveform s(t) of duration T seconds can now be identified by a signal symbol, and the signal symbol rate is 1/T symbols per second, or bauds. The transmission rate at the input of the M-ary modulator is therefore equal to γ/T bits per second, where γ is the number of bits required to represent a signal symbol. Figure 1.5 gives the signal constellation diagrams of some commonly used binary and M-ary modulation techniques. The types of signal constellations shown in Figure 1.5(a) and (b) are examples of one-dimensional modulation, where all signal points lie along the real axis in the complex-number plane. Note that the BPSK signal is equivalent to the binary amplitude-shift keying (BASK) signal. Figure 1.5(c) and (d) are examples of two-dimensional modulation, where the signal points lie in the plane spanned by the real and imaginary axes. In all cases, it is common to set the average energy per signal symbol, E_s, to unity. For example, all the signal points are equally spaced and lie on the unit circle for the 4-PSK signals, as shown in Figure 1.5(c). The average energy per signal symbol is then equal to one.
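The normalization of the average symbol energy E_s to unity can be illustrated with a short sketch. The Python snippet below is an assumed illustration (the function names are not from the text): it builds the 16-QAM constellation from (1.6) and (1.7) and computes the scale factor on A that makes E_s = 1.

```python
# Sketch (assumed, not from the text): build the 16-QAM constellation from
# (1.6)-(1.7) and scale A so the average energy per signal symbol is unity.
import math

def qam_constellation(M, A):
    """Signal points A_i + j*B_j of an M-QAM constellation (M a perfect square)."""
    L = int(math.isqrt(M))  # sqrt(M) levels per dimension
    return [complex(A * (2 * i - (L - 1)), A * (2 * j - (L - 1)))
            for i in range(L) for j in range(L)]

M = 16
pts = qam_constellation(M, A=1.0)
E_s = sum(abs(p) ** 2 for p in pts) / M   # average symbol energy with A = 1
A_unit = 1.0 / math.sqrt(E_s)             # scale factor giving E_s = 1
unit_pts = qam_constellation(M, A_unit)
print(E_s)                                # 10.0 for 16-QAM with A = 1
print(sum(abs(p) ** 2 for p in unit_pts) / M)  # 1.0 after scaling
```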
The modulator output waveform now passes into a transmission path (an analog channel), and the path may modify the waveform by introducing noise. The transmission path may take on various forms. It does not have to be a permanent hard-wired medium (e.g., a telephone line) that connects the modulator and demodulator together. Typical transmission paths are telephone circuits, mobile radio links, satellite links, or even magnetic tapes. Each of these analog channels is subject to noise disturbance. If additive white Gaussian noise (AWGN), w(t), is added to the transmitted waveforms at the output of the analog channel, the received signal is

r(t) = s(t) + w(t)          (1.8)

Figure 1.5 (a) 4-ASK, (b) BPSK, (c) 4-PSK, and (d) 16-QAM signal constellation diagrams.
The demodulator uses a process of linear coherent demodulation (matched-filter detection and sampling), where the transmitted carrier and the locally generated carrier at the receiver are synchronized in frequency and phase, to convert the received signal, r(t), into its digital form. With appropriate design of the demodulator, the sampled demodulator output signal at time l is

r_l = s_l + w_l          (1.9)

where the real and the imaginary parts of the noise components {w_l} are statistically independent Gaussian random variables with zero mean and fixed variance σ^2. In the case of a BPSK signaling scheme, the variance of the Gaussian noise samples is equal to the two-sided noise power spectral density N_0/2. The modulator together with the analog channel and the demodulator form a discrete noisy channel. The transmitter and the receiver are assumed to be held in synchronism to give a synchronous serial transmission system.
1.2.4 Channel Models

1.2.4.1 Discrete Memoryless Channel

From the viewpoint of the channel encoding and decoding operations in the digital communication system of Figure 1.1, the segment enclosed in dashed lines characterizes a discrete noisy channel. One way of realizing this channel is to use a coherent M-ary signaling scheme. The channel can then be characterized by a set of M input symbols, a set of Q output symbols, and a set of M·Q transition probabilities. The transition probabilities are time-invariant and independent from symbol to symbol. This is the discrete memoryless channel (DMC). In the general case, a DMC comprises an M-ary modulator, a transmission path, and a Q-ary demodulator. This channel can be represented diagrammatically as shown in Figure 1.6. P(j|i) defines the transition probability of receiving the j-th symbol given that the i-th symbol was transmitted, where i = 0, 1, ..., M - 1 and j = 0, 1, ..., Q - 1. By making Q > M, the demodulator produces soft-decision information for the channel decoder. The advantage of using the soft-decision demodulator output in the channel decoder is discussed in Chapter 3.
Figure 1.6 Discrete memoryless channel.

1.2.4.2 Binary Symmetric Channel

The most commonly encountered case of the DMC is the binary symmetric channel (BSC), where a binary modulator is used at the transmitter and the demodulator makes hard decisions with Q set to 2. The schematic representation of this channel is shown in Figure 1.7, where p' denotes the cross-over probability. For the coherent BPSK signaling scheme, the BSC cross-over probability, p', and the bit error probability, P_b, are given by

p' = P_b = Q(√(2 E_b / N_0))          (1.10)

where

Q(x) = (1/√(2π)) ∫_x^∞ exp(-y^2/2) dy          (1.11)

and the average energy per signal symbol, E_s, equals the average energy per information bit, E_b. This is due to the fact that one bit is all we need to map onto a signal symbol. With this approach, we have combined the modulator, the transmission path, and the demodulator into a compact M-input, Q-output channel model depicted by the transition diagram.

Figure 1.7 Binary symmetric channel.
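The cross-over probability of (1.10) and (1.11) can be evaluated numerically. The following Python sketch is assumed here (not from the text); it writes the Q-function in terms of the complementary error function and shows how p' falls as E_b/N_0 increases.

```python
# Numerical sketch (assumed, not from the text) of (1.10)-(1.11): the BSC
# cross-over probability for coherent BPSK, p' = Q(sqrt(2*Eb/N0)).
import math

def qfunc(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bsc_crossover(EbN0_dB):
    """BSC cross-over probability p' for coherent BPSK at the given Eb/N0 (dB)."""
    EbN0 = 10.0 ** (EbN0_dB / 10.0)
    return qfunc(math.sqrt(2.0 * EbN0))

for snr in (0, 4, 8):
    print(snr, bsc_crossover(snr))  # p' falls rapidly as Eb/N0 grows
```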
1.2.4.3 Binary Symmetric Erasure Channel

Another discrete memoryless channel of importance is the binary symmetric erasure channel (BSEC), where a binary modulator is used at the transmitter and the demodulator makes erasure decisions for unreliably received symbols. The schematic representation of this channel is shown in Figure 1.8, where p' is the cross-over probability when a transmitted symbol is incorrectly detected and q' is the transition probability when a transmitted symbol is declared as an erasure. This is a binary-input, ternary-output channel and another special class of the DMC. It can be seen that the demodulator produces soft-decision information to the channel decoder. For example, an erasure sequence 11?00?11... may be input to the channel decoder, where ? denotes an erasure. The erasure position is known but the value is not. It is the task of the channel decoder to determine the values of the erasure symbols.

Figure 1.8 Binary symmetric erasure channel.
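An erasure-making demodulator can be sketched as follows. This Python snippet is a hypothetical illustration (the threshold value is an arbitrary choice, not from the text): matched-filter samples too close to zero are declared erasures rather than forced to a hard decision.

```python
# Hypothetical sketch (not from the text): a demodulator that makes
# erasure decisions. Samples near zero are unreliable and are flagged
# '?' instead of being forced to a hard 0/1 decision.

def erasure_decision(r, threshold=0.3):
    """Map a received sample to '0', '1', or '?' (erasure)."""
    if r > threshold:
        return '1'
    if r < -threshold:
        return '0'
    return '?'  # too close to zero: declare an erasure

samples = [0.9, 1.1, 0.1, -0.8, -1.2, -0.2, 0.7, 1.0]
print(''.join(erasure_decision(r) for r in samples))  # 11?00?11
```

The printed sequence reproduces the erasure pattern used as the example in the text.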
1.2.4.4 Burst Channel

Another channel of interest is the burst channel. A burst channel can be modeled as a channel with memory. The schematic representation of a simple 2-state burst channel is shown in Figure 1.9, where q'_1 is the probability that the channel changes from the good state (state 1) to the bad state (state 2) and q'_2 is the probability that the channel returns from the bad state to the good state. The channel remains in the good state with probability 1 - q'_1 most of the time and makes occasional visits to the bad state. When the channel is in the good state, the cross-over probability p'_1 is very low. Transmitted symbols will be affected by low-level noise. When the channel is in the high-noise state, the cross-over probability p'_2 is high and noise occurs in clusters. Typical values of p'_1 and p'_2 are 10^-10 and 10^-2, respectively. It can be seen that when the channel is in the bad state, transmitted symbols will be affected by a cluster of high-level noise.

Figure 1.9 Model of a 2-state burst channel.
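The 2-state model of Figure 1.9 can be simulated directly. The Python sketch below is an assumed illustration (the state-transition parameter values are illustrative choices; only the p'_1 and p'_2 magnitudes follow the typical values quoted above): it shows how errors appear in clusters when the channel visits the bad state.

```python
# Simulation sketch (assumed, not from the text) of a 2-state burst
# channel: transitions use q1' and q2'; errors occur with p1' in the
# good state and p2' in the bad state, so errors arrive in clusters.
import random

def burst_channel(bits, q1=0.01, q2=0.2, p1=1e-10, p2=1e-2, seed=1):
    """Return the error pattern produced by a 2-state burst channel."""
    rng = random.Random(seed)
    state_bad = False
    errors = []
    for _ in bits:
        # Error probability depends on the current state.
        p = p2 if state_bad else p1
        errors.append(1 if rng.random() < p else 0)
        # State transition for the next symbol.
        if state_bad:
            state_bad = rng.random() >= q2   # stay bad unless we return
        else:
            state_bad = rng.random() < q1    # occasional visit to the bad state
    return errors

pattern = burst_channel([0] * 10000)
print(sum(pattern))  # errors are few, and those that occur are clustered
```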
1.3 Types of Errors and Error-Control Codes

The discrete memoryless channels discussed in the previous section have the property that each transmitted symbol is affected by independent noise samples. As a result, channel errors occur in a random manner. These types of errors are called random errors. The binary symmetric channel is a practical and simple channel model for deep-space and many satellite communications that suffer from random errors. Error-control codes designed to correct random errors are called random error-correcting codes. On the other hand, a channel with memory produces burst errors. The length of a burst is defined as the length of an error sequence that begins with an error and ends with an error. Each transmitted symbol is now affected by dependent noise samples. In digital audio recording systems, the storage medium is subject to surface contamination from fingerprints and scratches. Burst errors may occur during playback. Radio channels are another good example of burst channels, where the transmitted signal symbols are subject to multipath propagation effects. The surrounding buildings and other objects can cause the signal to take several different paths from the transmitter to the receiver. Error-control codes designed to correct burst errors are called burst error-correcting codes. If we use random error-correcting codes for bursty channels, the burst errors must be isolated first. Interleaving is a commonly used technique to isolate burst errors. In Chapter 8, we describe applications of random error-correcting codes with interleaving. Channels may also suffer from a combination of these types of errors. This book is mainly devoted to random errors and the encoding and decoding of random error-correcting codes.
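Interleaving can be illustrated with a small block interleaver. The Python sketch below is assumed (not from the text): symbols are written into a rows-by-columns array and read out by columns, so a burst of channel errors is dispersed into isolated errors after de-interleaving.

```python
# Sketch (assumed, not from the text) of block interleaving: write by
# rows, read by columns. A burst on the channel becomes isolated errors
# after de-interleaving, which a random error-correcting code can handle.

def interleave(seq, rows, cols):
    """Row-in, column-out block interleaver (len(seq) must equal rows*cols)."""
    assert len(seq) == rows * cols
    return [seq[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(seq, rows, cols):
    """Inverse permutation: interleave with rows and cols swapped."""
    return interleave(seq, cols, rows)

data = list(range(12))
tx = interleave(data, 3, 4)
# A burst of 3 consecutive channel errors hits the interleaved stream...
rx = tx[:]
for i in (4, 5, 6):
    rx[i] = 'X'
recovered = deinterleave(rx, 3, 4)
print(recovered)  # the three 'X' errors are no longer adjacent
```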
References

[1] Shannon, C. E., "A Mathematical Theory of Communication," Bell Syst. Tech. J., Vol. 27, No. 3, July 1948, pp. 379-423.

Let a(x) = a_m x^m + a_{m-1} x^{m-1} + ... + a_1 x + a_0 of degree m, m >= 0, and b(x) = b_n x^n + b_{n-1} x^{n-1} + ... + b_1 x + b_0 of degree n, n >= 0, be polynomials in indeterminate x over a ring R. A polynomial addition of a(x) and b(x)
is a(x) + b(x) = ... + (a_i + b_i)x^i + (a_{i-1} + b_{i-1})x^{i-1} + ... + (a_1 + b_1)x + (a_0 + b_0), and a polynomial multiplication of a(x) and b(x) is

a(x)b(x) = a_m b_n x^{m+n} + (a_m b_{n-1} + a_{m-1} b_n)x^{m+n-1} + ... + (a_0 b_1 + a_1 b_0)x + a_0 b_0.

The coefficient associated with x^i under polynomial multiplication is a_0 b_i + a_1 b_{i-1} + a_2 b_{i-2} + ... + a_i b_0, for i = 0, 1, ..., m + n. The set of all polynomials in x over R is denoted by R[x]. R[x] can be shown to obey all the axioms of a ring under polynomial addition and multiplication. R[x] is called a polynomial ring in x over R or a ring of polynomials in x over R. In working with polynomials in x over R, regardless of what x is, we define operations on the set of polynomials in a way that yields a commutative ring.

Theorem 2.7. If R is a commutative ring, then R[x] is a commutative ring. If R is an integral domain, then R[x] is an integral domain.

Proof. Let R be a commutative ring. Also, let a(x) = a_m x^m + a_{m-1} x^{m-1} + ... + a_1 x + a_0 of degree m, m >= 0, and b(x) = b_n x^n + b_{n-1} x^{n-1} + ... + b_1 x + b_0 of degree n, n >= 0, be polynomials in the ring R[x]. Then the coefficient associated with x^i of a(x)b(x) is a_0 b_i + a_1 b_{i-1} + a_2 b_{i-2} + ... + a_i b_0, and the coefficient associated with x^i of b(x)a(x) is b_i a_0 + b_{i-1} a_1 + b_{i-2} a_2 + ... + b_0 a_i, for i = 0, 1, ..., m + n. Since addition in any ring is commutative, then
b_i a_0 + b_{i-1} a_1 + b_{i-2} a_2 + ... + b_0 a_i = a_0 b_i + a_1 b_{i-1} + a_2 b_{i-2} + ... + a_i b_0 and a(x)b(x) = b(x)a(x). Thus, R[x] is a commutative ring.

Let R be an integral domain. By Definition 2.4, R is a commutative ring-with-identity 1 != 0 and has no divisors of zero. By the first part of Theorem 2.7, R[x] is a commutative ring. Also, let a(x) be an element of R[x]. 1x^0 · a(x) = a(x) · 1x^0 = a(x) and R[x] is a commutative ring-with-identity 1x^0. If a(x) and b(x) are nonzero elements of R[x], a_m != 0 and b_n != 0. Clearly a_m b_n != 0, where a_m b_n is the coefficient associated with x^{m+n} of a(x)b(x). The product of nonzero polynomials a(x) and b(x) in R[x] is not a zero polynomial. Thus, R[x] has no divisors of zero and R[x] is an integral domain.
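The coefficient-convolution rule above translates directly into code. The Python sketch below is an assumed illustration (not the book's notation); the ring is supplied through its addition and multiplication functions, so the same routine works over the integers or over GF(2).

```python
# Sketch (assumed, not from the text): polynomial multiplication over a
# coefficient ring. The coefficient of x^i in a(x)b(x) is the convolution
# a_0*b_i + a_1*b_{i-1} + ... + a_i*b_0.

def poly_mul(a, b, add, mul, zero):
    """Multiply polynomials given as low-order-first coefficient lists."""
    c = [zero] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = add(c[i + j], mul(ai, bj))
    return c

# Over the ring of integers modulo 2 (i.e., GF(2)):
add2 = lambda x, y: (x + y) % 2
mul2 = lambda x, y: (x * y) % 2
# (1 + x)(1 + x + x^2) = 1 + x^3 over GF(2):
print(poly_mul([1, 1], [1, 1, 1], add2, mul2, 0))  # [1, 0, 0, 1]
```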
Introduction to Abstract Algebra

2.3 Fields

Let F be a set of elements.

Definition 2.9. A field F is a set that has two operations defined on it, called addition and multiplication, such that the following axioms are satisfied:

1. F forms a commutative group under addition. The identity element with respect to addition is called the zero element or the additive identity of F and is denoted by 0.
2. The set of nonzero elements in F forms a commutative group under multiplication. The identity element with respect to multiplication is called the unit element or the multiplicative identity of F and is denoted by 1.

3. Multiplication is distributive over addition: a · (b + c) = a · b + a · c.
From our early study of groups under addition and multiplication, it is easy to see that every element a in a field has an additive inverse -a such that a + (-a) = (-a) + a = 0. Also, every nonzero element a in a field has a multiplicative inverse a^-1 such that a · a^-1 = a^-1 · a = 1. Clearly, a field is a commutative ring-with-identity 1 != 0 in which each nonzero element has a multiplicative inverse and the set of nonzero elements forms a group under multiplication. The total number of elements in a field is called the order of the field. A field may contain an infinite or finite number of elements. If the order of a field is finite, the field is called a finite field. In the context of coding theory, we are interested solely in finite fields. Finite fields are also called Galois fields in honor of the French mathematician E. Galois. A Galois field of order q is denoted by GF(q).

Example 2.6. Table 2.3 defines the Galois field of order 3 under modulo-3 addition and multiplication.

Consider a finite field GF(q) = {0, 1, ..., q - 1}. There exists a smallest positive integer λ such that the sum 1 + 1 + ... + 1 (λ terms) equals 0. λ is called the characteristic of the field.
Theorem 2.8. The characteristic λ of a finite field is a prime integer.

Proof. Suppose λ is not a prime; then there exist two positive integers 0 < m, n < λ such that λ = m · n, and the sum of λ ones factors as (1 + ... + 1, m terms) · (1 + ... + 1, n terms) = 0. This implies that either the sum of m ones or the sum of n ones equals 0, and λ is not the smallest positive integer such that
Table 2.3
Galois Field GF(3) = {0, 1, 2}

Modulo-3 Addition       Modulo-3 Multiplication

+ | 0 1 2               · | 0 1 2
0 | 0 1 2               0 | 0 0 0
1 | 1 2 0               1 | 0 1 2
2 | 2 0 1               2 | 0 2 1
the sum of λ ones equals 0. This contradicts the definition of the characteristic of a field. Therefore, λ is prime.

Consider a nonzero element a in a finite field GF(q). There exists a smallest positive integer n such that a^n = 1, where a^n = a · a · ... · a (n times). n is called the order of the field element a.

Theorem 2.9. If a is a nonzero element in GF(q), then a^(q-1) = 1.

Proof. Let a_1, a_2, ..., a_{q-1} be the nonzero elements of GF(q). The set of all nonzero elements in GF(q) forms a commutative group G = {a_1, a_2, ..., a_{q-1}} under multiplication. Clearly, G = {a_1, a_2, ..., a_{q-1}} = {a · a_1, a · a_2, ..., a · a_{q-1}}, where a is a nonzero element in GF(q). Also, a_1 · a_2 · ... · a_{q-1} = (a · a_1) · (a · a_2) · ... · (a · a_{q-1}) = a^(q-1) · (a_1 · a_2 · ... · a_{q-1}), and a_1 · a_2 · ... · a_{q-1} != 0. Thus, a^(q-1) = 1.

Theorem 2.10. If n is the order of a nonzero element a in GF(q), then n divides q - 1.

Proof. Suppose q - 1 is not divisible by n. Dividing q - 1 by n, we get q - 1 = q'n + r, where q' is the quotient, r is the remainder, and 0 < r < n. Also, a^(q-1) = a^(q'n+r) = (a^n)^q' · a^r = a^r. By Theorem 2.9, a^(q-1) = 1. This implies that a^r = 1 and r is the order of the field element a. Since 0 < r < n, this contradicts the definition of the order of the field element, where n is the smallest positive integer such that a^n = 1. Therefore, n must divide q - 1.

If n = q - 1, a is called a primitive element of GF(q). Every finite field GF(q) contains at least one primitive element. An important property of finite fields is that the powers of a primitive element in GF(q) generate all the nonzero elements of GF(q).

Example 2.7. The characteristic of GF(5) = {0, 1, 2, 3, 4} is 5. The powers of the element 2 in GF(5) are 2^1 = 2, 2^2 = 2 · 2 = 4, 2^3 = 2 · 2 · 2 = 8 = 3 modulo-5, and 2^4 = 2 · 2 · 2 · 2 = 16 = 1 modulo-5. n = 4 and the order of element 2 is 4. n = q - 1 and element 2 is a primitive element. The powers of the primitive element 2 in GF(5) generate all the nonzero elements of GF(5).

The powers of the element 3 in GF(5) are 3^1 = 3, 3^2 = 3 · 3 = 9 = 4 modulo-5, 3^3 = 3 · 3 · 3 = 27 = 2 modulo-5, and 3^4 = 3 · 3 · 3 · 3 = 81 = 1 modulo-5. n = 4 and the order of element 3 is 4. n = q - 1 and element 3 is a primitive element. The powers of the primitive element 3 in GF(5) also generate all the nonzero elements of GF(5).

The powers of the element 4 in GF(5) are 4^1 = 4 and 4^2 = 4 · 4 = 16 = 1 modulo-5. n = 2 and the order of element 4 is 2. n != q - 1 and element 4 is not a primitive element.
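Example 2.7 can be verified mechanically. The Python sketch below is assumed (not from the text): it computes the order of each nonzero element of GF(5) and picks out the primitive elements, i.e., those of order q - 1.

```python
# A quick check (assumed sketch, not from the text) of Example 2.7: the
# order of each nonzero element of GF(5) and the primitive elements.

def element_order(a, q):
    """Smallest n >= 1 with a^n = 1 modulo q (q prime)."""
    x, n = a % q, 1
    while x != 1:
        x = (x * a) % q
        n += 1
    return n

q = 5
orders = {a: element_order(a, q) for a in range(1, q)}
print(orders)                                        # {1: 1, 2: 4, 3: 4, 4: 2}
print([a for a, n in orders.items() if n == q - 1])  # primitive elements: [2, 3]
```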
For any prime p and a positive integer m > 1, it is possible to extend the field GF(p) to a field of p^m elements, which is called an extension field of GF(p) and is denoted GF(p^m). In channel coding theory, error-control codes are often constructed with symbols from GF(p) or GF(p^m). In practice, we shall only be concerned with the construction of codes with symbols from GF(2) or GF(2^m). Before we proceed to the construction of an extension field GF(p^m) and error-control codes, we need to define what we mean by polynomials over GF(2).
2.3.1 Polynomials over GF(2)

In Section 2.2, we studied the properties of rings and polynomial rings. Polynomial rings are of great interest in the study of error-control block codes. Our principal interest in this book will be polynomials with coefficients in a field F, in particular, the Galois field. We shall see in Chapters 4, 5, and 6 that polynomials with coefficients from the Galois field play a vital role in the design of cyclic codes.

Let GF(q) be a Galois field. f(x) = f_m x^m + f_{m-1} x^{m-1} + ... + f_1 x + f_0 of degree m in indeterminate x is called a polynomial in x over a field GF(q), where f_0, f_1, ..., f_m are elements of GF(q). The set of all polynomials in x with coefficients from a Galois field GF(q) is denoted by GF(q)[x]. Like the polynomial ring R[x], GF(q)[x] can also be shown to obey all the axioms of a ring under polynomial addition and multiplication, and GF(q)[x] is a polynomial ring. Another ring is the polynomial ring GF(q)[x] modulo-(x^n - 1), which denotes the set of polynomials in x with coefficients from GF(q) modulo-(x^n - 1). The polynomial ring GF(q)[x] modulo-(x^n - 1) is important and useful for the design of cyclic codes, and will be studied in Chapter 4.

Theorem 2.11. Let GF(q) be a Galois field. Every ideal of the polynomial ring GF
Let q = p^m, where p is prime and m > 0. We begin by showing that

(a_0 + a_1 + ... + a_r)^q = a_0^q + a_1^q + ... + a_r^q.

The proof is by induction. Consider (a_0 + a_1 + ... + a_r)^q. For r = 1,

(a_0 + a_1)^q = a_0^q + C(q, 1) a_0^{q-1} a_1 + ... + C(q, i) a_0^{q-i} a_1^i + ... + a_1^q.

The binomial coefficient C(q, i) = q!/(i!(q - i)!) is a multiple of q for 0 < i < q. It follows that C(q, i) = 0 modulo-q for 0 < i < q and (a_0 + a_1)^q = a_0^q + a_1^q. Assume that (a_0 + a_1 + ... + a_{r'-1})^q = a_0^q + a_1^q + ... + a_{r'-1}^q is true. Then

(a_0 + a_1 + ... + a_{r'})^q = [(a_0 + a_1 + ... + a_{r'-1}) + a_{r'}]^q
                            = (a_0 + a_1 + ... + a_{r'-1})^q + a_{r'}^q
                            = a_0^q + a_1^q + ... + a_{r'}^q.

Therefore, (a_0 + a_1 + ... + a_{r'-1})^q = a_0^q + a_1^q + ... + a_{r'-1}^q implies that (a_0 + a_1 + ... + a_{r'})^q = a_0^q + a_1^q + ... + a_{r'}^q. We conclude by induction that (a_0 + a_1 + ... + a_r)^q = a_0^q + a_1^q + ... + a_r^q.

Let f(x) = f_m x^m + f_{m-1} x^{m-1} + ... + f_1 x + f_0 with coefficients in
GF(p). We wish to show that f_i^p = f_i and [f(x)]^{p^l} = f(x^{p^l}) for i = 0, 1, ..., m and l = 1, 2, ..., respectively. Consider

[f(x)]^p = (f_m x^m + f_{m-1} x^{m-1} + ... + f_1 x + f_0)^p
         = (f_m x^m)^p + (f_{m-1} x^{m-1})^p + ... + (f_1 x)^p + (f_0)^p
         = f_m^p (x^p)^m + f_{m-1}^p (x^p)^{m-1} + ... + f_1^p x^p + f_0^p.

By Theorem 2.9, f_i^{p-1} = 1 for f_i != 0 and f_i in GF(p). Therefore, f_i^p = f_i for 0 <= i <= m and

[f(x)]^p = f_m (x^p)^m + f_{m-1} (x^p)^{m-1} + ... + f_1 x^p + f_0
         = f(x^p).
Since f(β) = 0, [f(β)]^p = f(β^p) = 0 and β^p is a root of f(x). By induction, it follows immediately that f_i^{p^l} = f_i and [f(x)]^{p^l} = f(x^{p^l}) for 0 <= i <= m and l = 1, 2, ..., m, respectively. Also, since f(β) = 0, [f(β)]^{p^l} = f(β^{p^l}) = 0 and β^{p^l} is a root of f(x) for l = 1, 2, ..., m. The maximum value of l is m. This is due to the fact that, by Theorem 2.9, β^{p^m - 1} = 1 for β != 0 and β in GF(p^m), and therefore β^{p^m} = β. Thus, β, β^2, β^{2^2}, ..., β^{2^{m-1}} are all the roots of f(x).

Definition 2.12. Let f(x) be a polynomial over GF(2). Also, let β and β^{2^l} be elements in the extension field of GF(2) for l > 0. β^{2^l} is called a conjugate of β if β and β^{2^l} are roots of f(x).

Definition 2.13. Let β be an element in an extension field GF(2^m). A polynomial φ(x) of smallest degree with coefficients in the ground field GF(2) is called a minimal polynomial of β if φ(β) = 0, i.e., β is a root of φ(x).

Theorem 2.13. A minimal polynomial φ(x) of β is irreducible.

Proof. Suppose φ(x) of β is not an irreducible polynomial; then there exist two polynomials f_1(x) and f_2(x), each of degree greater than 0 and less than the degree of φ(x), such that φ(x) = f_1(x) · f_2(x) and φ(β) = f_1(β) · f_2(β) = 0. This implies either f_1(β) = 0 or f_2(β) = 0. Since the degree of φ(x) is greater than the degrees of f_1(x) and f_2(x), φ(x) cannot be a minimal polynomial of smallest degree. This contradicts the definition of a minimal polynomial of β. Therefore, a minimal polynomial of β is irreducible.

Example 2.13. Let β = α^5 and β^2 = α^10 be elements in an extension field GF(2^4) generated by the primitive polynomial p(x) = x^4 + x + 1 over GF(2). α^5 and α^10 are roots of the polynomial φ(x) = x^2 + x + 1 over GF(2). φ(x) = x^2 + x + 1 is a minimal polynomial of β. The minimal polynomial x^2 + x + 1 of β is not divisible by any polynomial over GF(2) of degree greater than 0 but less than 2. x^2 + x + 1 is an irreducible polynomial over GF(2).

Let β = α, β^2 = α^2, β^4 = α^4, and β^8 = α^8 be elements in an extension field GF(2^4) generated by the primitive polynomial p(x) = x^4 + x + 1 over GF(2). α, α^2, α^4, and α^8 are roots of the polynomial φ(x) = x^4 + x + 1 over GF(2). φ(x) = x^4 + x + 1 is a minimal polynomial of β. The minimal polynomial x^4 + x + 1 of β is not divisible by any polynomial over GF(2) of
degree greater than 0 but less than 4. x^4 + x + 1 is an irreducible polynomial over GF(2).

Theorem 2.14. Let φ(x) be the minimal polynomial of β, where β is in GF(2^m), and let l be the smallest integer such that β^{2^l} = β. Then

φ(x) = (x + β)(x + β^2)(x + β^{2^2}) ... (x + β^{2^{l-1}}),

i.e., the product of (x + β^{2^i}) for i = 0, 1, ..., l - 1.

Proof. By Theorem 2.13, φ(x) is an irreducible polynomial. By Theorem 2.12, β and the conjugates of β are the roots of φ(x). Therefore, φ(x) is the product of (x + β^{2^i}) for i = 0, 1, ..., l - 1.
Theorem 2.15. If β is a primitive element of GF(2^m), all the conjugates β^2, β^{2^2}, ... of β are also primitive elements of GF(2^m).

Proof. Let n be the order of the primitive element β in GF(2^m), where n = 2^m - 1 and β^n = β^{2^m - 1} = 1. Also, let n' be the order of the element β^{2^l} in GF(2^m) for l > 0. By Theorem 2.10, n' divides 2^m - 1, and (β^{2^l})^{n'} = β^{n' 2^l} = 1. Thus, n' 2^l must be a multiple of 2^m - 1. Since 2^l and 2^m - 1 are relatively prime, 2^m - 1 must divide n'. Since n' divides 2^m - 1 and 2^m - 1 divides n', we conclude that n' = 2^m - 1 and β^{2^l} is also a primitive element of GF(2^m) for l > 0.

Given p(x) of degree m, we can construct the field GF(2^m) of 2^m elements (see Example 2.11). Due to Theorem 2.14, we can use the nonzero elements of GF(2^m) to construct φ(x). Table 2.5 shows the minimal polynomials generated by the primitive polynomial p(x) = x^4 + x + 1.

Table 2.5
Minimal Polynomials of the Elements of GF(2^4) Generated by the Primitive Polynomial p(x) = x^4 + x + 1 over GF(2)

Conjugate Roots              Order    Minimal Polynomial
0                            -        x
1 = α^0                      1        x + 1
α, α^2, α^4, α^8             15       x^4 + x + 1
α^3, α^6, α^12, α^9          5        x^4 + x^3 + x^2 + x + 1
α^5, α^10                    3        x^2 + x + 1
α^7, α^14, α^13, α^11        15       x^4 + x^3 + 1
A list of minimal polynomials of elements in GF(2^m) can be found in Appendix C.
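The conjugacy classes and minimal polynomials of Table 2.5 can be reproduced mechanically. A minimal sketch (helper names are mine, not the book's) that builds GF(2^4) from p(x) = x^4 + x + 1, represents field elements as 4-bit masks, collects the conjugates β, β^2, β^4, ..., and multiplies out ∏(x + β^{2^i}):

```python
# Build GF(2^4) from p(x) = x^4 + x + 1 and derive minimal polynomials
# by multiplying out prod_i (x + beta^(2^i)) over each conjugacy class.
PRIM = 0b10011                            # bit mask of p(x) = x^4 + x + 1

exp = [1]                                 # exp[i] = alpha^i as a 4-bit mask
for _ in range(14):
    v = exp[-1] << 1                      # multiply by alpha
    exp.append(v ^ PRIM if v & 0b10000 else v)
log = {v: i for i, v in enumerate(exp)}

def gmul(a, b):                           # field multiplication via log tables
    if a == 0 or b == 0:
        return 0
    return exp[(log[a] + log[b]) % 15]

def minimal_poly(beta):
    """Ascending coefficients of prod (x + conjugate) over GF(2^4)."""
    conj, c = [], beta
    while c not in conj:                  # conjugates beta, beta^2, beta^4, ...
        conj.append(c)
        c = gmul(c, c)
    poly = [1]
    for r in conj:                        # multiply running product by (x + r)
        shifted = [0] + poly              # x * poly
        scaled = [gmul(r, p) for p in poly] + [0]
        poly = [s ^ t for s, t in zip(shifted, scaled)]
    return poly                           # all coefficients land in GF(2)

# Minimal polynomial of alpha^3: expect x^4 + x^3 + x^2 + x + 1 (Table 2.5)
print(minimal_poly(exp[3]))               # [1, 1, 1, 1, 1]
# Minimal polynomial of alpha^5: expect x^2 + x + 1
print(minimal_poly(exp[5]))               # [1, 1, 1]
```

That the product over a full conjugacy class always lands in GF(2) is exactly the content of Theorem 2.14.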
2.4 Implementation of Galois Field Arithmetic

Consider the multiplication of two polynomials a(x) and b(x) with coefficients from GF(q), where

a(x) = a_s x^s + a_{s−1} x^{s−1} + ... + a_1 x + a_0    (2.1)

and

b(x) = b_r x^r + b_{r−1} x^{r−1} + ... + b_1 x + b_0    (2.2)

Let

c(x) = a(x)b(x) = a_s b_r x^{s+r} + (a_{s−1} b_r + a_s b_{r−1}) x^{s+r−1} + ... + (a_0 b_2 + a_1 b_1 + a_2 b_0) x^2 + (a_0 b_1 + a_1 b_0) x + a_0 b_0

The above operation can be realized by the circuit shown in Figure 2.2. Initially, the register holds all 0's and the coefficients of a(x) are fed into the circuit with high-order first, followed by r 0's. Table 2.6 shows the input, register contents, and the output of the multiplier circuit.

Figure 2.2 GF(q) multiplication circuit.
Table 2.6
Input, Register Contents, and Output of the GF(q) Multiplication Circuit in Figure 2.2

Time     Input      Register Contents      Output
1        a_s        a_s                    a_s b_r
2        a_{s−1}    a_{s−1} a_s            a_{s−1} b_r + a_s b_{r−1}
⋮        ⋮          ⋮                      ⋮
s+1      0          a_0 a_1 ⋯              ⋮
⋮        ⋮          ⋮                      ⋮
s+r      0          ⋯ a_0 a_1              a_0 b_1 + a_1 b_0
s+r+1    0          ⋯ a_0                  a_0 b_0
The product is complete after s + r shifts, with the output corresponding to the coefficients of c(x) in descending power order.

Consider the division of a(x) by b(x) with coefficients from GF(q), where a(x) and b(x) are given by (2.1) and (2.2). a(x) and b(x) are called the dividend polynomial and the divisor polynomial, respectively. Let q(x) be the quotient polynomial of degree s − r and r'(x) be the remainder polynomial of degree less than r. The quotient polynomial of degree s − r is

q(x) = q_{s−r} x^{s−r} + q_{s−r−1} x^{s−r−1} + ...    (2.3)

where

q_{s−r} = a_s b_r^{−1},  q_{s−r−1} = (a_{s−1} − a_s b_{r−1} b_r^{−1}) b_r^{−1},  ...

In the long division process, we first subtract (a_s b_r^{−1}) x^{s−r} b(x) from a(x) to eliminate the term a_s x^s in a(x). We then repeat the process by deducing the next term in q(x). For example, q_{s−r−1} = (a_{s−1} − a_s b_{r−1} b_r^{−1}) b_r^{−1}. In general, the polynomial q_i x^i b(x) corresponding to each newly found quotient coefficient q_i is subtracted from the updated dividend to eliminate term(s) in the updated dividend. For example, q_{s−r−1} x^{s−r−1} b(x) is subtracted from the updated dividend to eliminate the term (a_{s−1} − a_s b_{r−1} b_r^{−1}) x^{s−1} in the updated dividend. The above operation can be realized by the circuit shown in Figure 2.3.
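The long-division recurrence can be sketched for the GF(2) case, where b_r^{−1} = b_r = 1 and subtraction is XOR. The function below (my own layout; coefficient lists are in descending order, as in the text) mirrors the term-elimination steps just described:

```python
def gf2_poly_divmod(a, b):
    """Long division of GF(2) polynomials (descending coefficient lists).

    Repeatedly subtracts q_i * x^i * b(x) from the running dividend to
    eliminate its leading term; over GF(2), subtraction is XOR.
    """
    a = a[:]                                  # running dividend
    q = [0] * (len(a) - len(b) + 1)
    for shift in range(len(q)):
        if a[shift]:                          # leading term still present
            q[shift] = 1
            for j, bj in enumerate(b):
                a[shift + j] ^= bj            # cancel with shifted b(x)
    return q, a[len(q):]                      # quotient, remainder (deg < r)

# x^7 divided by x^4 + x + 1: quotient x^3 + 1, remainder x^3 + x + 1
q, r = gf2_poly_divmod([1, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 1, 1])
print(q, r)   # [1, 0, 0, 1] [1, 0, 1, 1]
```

This is exactly the computation the division circuit of Figure 2.3 performs serially, one input coefficient per shift.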
Figure 2.3 GF(q) division circuit.
Initially, the register holds all 0's and the coefficients of a(x) are fed into the circuit. After s + 1 shifts, the coefficients of the remainder polynomial appear in the register with the highest order to the right of the register, and the coefficients of the quotient polynomial appear at the output with the high-order first. Over GF(2), b_r^{−1} = b_r = 1.

A BCH code of length n and minimum Hamming distance ≥ 2t_d + 1 can be generated by the generator polynomial g(x) over GF(q) with α^b, α^{b+1}, ..., α^{b+2t_d−1} as the roots of g(x). Let α^i, a nonzero element in GF(q^s), be a root of the minimal polynomial φ_i(x) over GF(q) and n_i be the order of α^i, for i = b, b + 1, ..., b + 2t_d − 1. The generator polynomial of a BCH code can be expressed in the form

g(x) = LCM{φ_b(x), φ_{b+1}(x), ..., φ_{b+2t_d−1}(x)}    (5.1)

where LCM denotes the least common multiple. The length of the code is

n = LCM{n_b, n_{b+1}, ..., n_{b+2t_d−1}}    (5.2)
Example 5.1. Given the polynomials f_1(x), f_2(x), and f_3(x), where

f_1(x) = x^4 + x + 1
f_2(x) = x^4 + x + 1
f_3(x) = x^4 + x^3 + x^2 + 1

LCM{f_1(x), f_2(x), f_3(x)} = (x^4 + x + 1)(x^4 + x^3 + x^2 + 1)
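The LCM computation of Example 5.1 can be checked mechanically. A sketch (helper names are mine), assuming a bit-mask representation of GF(2) polynomials in which bit i is the coefficient of x^i:

```python
# GF(2) polynomials as integer bit masks: bit i holds the coefficient of x^i.
def pmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pdivmod(a, b):
    q, db = 0, b.bit_length()
    while a.bit_length() >= db:
        s = a.bit_length() - db
        q |= 1 << s
        a ^= b << s                  # cancel the leading term
    return q, a

def pgcd(a, b):                      # Euclid's algorithm on polynomials
    while b:
        a, b = b, pdivmod(a, b)[1]
    return a

def plcm(*polys):                    # lcm(a, b) = (a / gcd(a, b)) * b
    out = 1
    for p in polys:
        out = pmul(pdivmod(out, pgcd(out, p))[0], p)
    return out

f1 = 0b10011                         # x^4 + x + 1
f2 = 0b10011                         # x^4 + x + 1
f3 = 0b11101                         # x^4 + x^3 + x^2 + 1
print(bin(plcm(f1, f2, f3)))         # 0b111110111, i.e. the product f1 * f3
```

Since f_1 = f_2 and gcd(f_1, f_3) = 1, the LCM reduces to the single product f_1 · f_3, as the example states.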
Let δ_d = 2t_d + 1. δ_d and t_d are defined as the designed Hamming distance and the designed error-correcting power of a BCH code. The true minimum Hamming distance, d_min, of a BCH code may or may not be equal to δ_d and can be obtained by applying Theorem 3.1 to the parity-check matrix H of the code. In general, the minimum Hamming distance of a BCH code is always equal to or greater than δ_d. If b = 1, the code is called a narrow-sense BCH code. If n = q^s − 1, the code is called a primitive BCH code. The most important BCH codes are the binary (q = 2), narrow-sense (b = 1), primitive (n = q^s − 1) BCH codes. In this chapter, we discuss primarily the binary, narrow-sense, primitive BCH codes, but a brief discussion of nonprimitive BCH codes is also given in the next section.
5.3 Binary, Narrow-Sense, Primitive BCH Codes

For any positive integers s ≥ 3, t_d < 2^{s−1}, and n = 2^s − 1, a binary, narrow-sense, primitive BCH code has the following parameters:

Block length: n = 2^s − 1
Number of check digits: c = (n − k) ≤ st_d
Minimum distance: d_min ≥ 2t_d + 1

Let α be a primitive element in GF(2^s) and g(x) be a generator polynomial over GF(2) which has α, α^2, ..., α^{2t_d} as its roots. Let φ_i(x) be the minimal polynomial of α^i, for i = 1, 2, ..., 2t_d. Then

g(x) = LCM{φ_1(x), φ_2(x), ..., φ_{2t_d}(x)}    (5.3)

is the lowest-degree polynomial over GF(2) that generates a binary, narrow-sense, primitive BCH code. The degree of φ_i(x) is less than or equal to s. By Theorem 2.12, if β is an element in GF(2^m) and a root of an irreducible polynomial of degree m over GF(2), then β^2, β^{2^2}, ..., β^{2^{m−1}} are also roots of the irreducible polynomial. Therefore, each even-power element α^{2i} and its corresponding element α^i are roots of the same minimal polynomial φ_i(x) given by equation (5.3). The generator polynomial g(x) can be reduced to

g(x) = LCM{φ_1(x), φ_3(x), φ_5(x), ..., φ_{2t_d−1}(x)}    (5.4)

and has α, α^3, α^5, ..., α^{2t_d−1} as roots. Since the degree of each minimal polynomial in equation (5.4) is less than or equal to s, the degree of g(x) is, therefore, at most equal to st_d.

The design of a binary, narrow-sense, primitive BCH code of length n = 2^s − 1, minimum distance ≥ 2t_d + 1, and s ≥ 3 is described in the following steps:

1. Select a primitive polynomial of degree s over GF(2) and construct GF(2^s) with α as a primitive element.
2. For some designed value of δ_d, compute t_d and find the minimal polynomials of α^i generated by the primitive polynomial, where α^i, a nonzero element in GF(2^s), is a root of the minimal polynomial φ_i(x) and i = 1, 3, 5, ..., 2t_d − 1.
3. Compute g(x) = LCM{φ_1(x), φ_3(x), φ_5(x), ..., φ_{2t_d−1}(x)} with α, α^3, α^5, ..., α^{2t_d−1} as the roots.

Example 5.2. Let α be a root of the primitive polynomial p(x) = x^3 + x + 1 over GF(2). The order of the element α in GF(2^3) is 7 and α is a primitive element in GF(2^3). The minimal polynomials are shown in Table 5.1.

Table 5.1
Minimal Polynomials of α Generated by a Primitive Polynomial p(x) = x^3 + x + 1 over GF(2)

Conjugate Roots    Order of Element    Minimal Polynomial
0                  —                   x
1 = α^0            1                   x + 1
α, α^2, α^4        7                   x^3 + x + 1
α^3, α^6, α^5      7                   x^3 + x^2 + 1

To generate a binary, narrow-sense, primitive BCH code of
length n = 7 and a designed Hamming distance of 3, both parameters b and t_d are equal to 1. The generator polynomial g(x) has α as a root and g(x) = φ_1(x) = x^3 + x + 1.
If deg r_{i−1}(x) > t' and deg r_i(x) ≤ t', then we are assured that deg b_i(x) ≤ t' and deg b_{i+1}(x) > t', respectively. Letting Ω_i(x) = r_i(x) and Λ_i(x) = b_i(x), this implies that the partial results at the i-th step provide the only solution to the key equation. The only solution that guarantees the desired degrees of Ω(x) and Λ(x) is obtained at the first such i. Thus, we simply apply the extended Euclidean algorithm to x^{2t'+1} and S(x) + 1 until deg r_i(x) ≤ t'. It can be seen that the useful aspect of the algorithm is not in finding a greatest common divisor, but in the partial results that provide the only solution to the key equation. In general, the Berlekamp-Massey algorithm is more efficient than the extended Euclidean algorithm. For error correction of binary BCH codes, the decoding steps can be summarized as follows:
1. Use the received polynomial r(x) to compute the coefficients S_1, S_2, ..., S_{2t'} of the syndrome polynomial S(x).
2. Apply the extended Euclidean algorithm with r_0(x) := x^{2t'+1} and r_1(x) := S(x) + 1 to compute the error-locator polynomial Λ(x). The iteration stops when deg r_i(x) ≤ t'.
3. Find the roots of Λ(x).
4. Determine the error polynomial e(x) as identified by the roots of Λ(x).
5. Add the error polynomial e(x) to the received polynomial for correction.

Example 5.8. Consider the (15, 5) triple-error-correcting binary, narrow-sense, primitive BCH code with the code polynomial v(x) and the received polynomial r(x) given in Example 5.6. From Example 5.6, we have S_1 = 1, S_2 = 1, S_3 = α^10, S_4 = 1, S_5 = α^10, and S_6 = α^5. The syndrome polynomial is S(x) = α^5x^6 + α^10x^5 + x^4 + α^10x^3 + x^2 + x. Using the extended Euclidean algorithm, we obtain Table 5.6. The iteration process stops when deg r_i(x) ≤ (t' = 3). From Table 5.6, the error-locator polynomial is Λ(x) = b_3(x) = α^5x^3 + x + 1, the same as in Example 5.6.
Table 5.6
Extended Euclidean Algorithm Decoding Steps

i    q_i(x)           r_i(x) = Ω_i(x)                 a_i(x) = ρ_i(x)    b_i(x) = Λ_i(x)
0    —                x^7                             1                  0
1    —                S(x) + 1                        0                  1
2    α^10x + 1        α^10x^4 + α^5x^2 + α^5x + 1     1                  α^10x + 1
3    α^10x^2 + x      1                               α^10x^2 + x        α^5x^3 + x + 1

5.6 Correction of Errors and Erasures

The error-and-erasure decoding problem can be solved using the Berlekamp-Massey algorithm or the extended Euclidean algorithm. The technique is similar to that described in Section 3.8. For error-and-erasure correction of binary BCH codes, the decoding steps using the Berlekamp-Massey algorithm can be summarized as follows.
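The iteration of Table 5.6 can be reproduced in a few dozen lines. A minimal sketch over GF(2^4) generated by p(x) = x^4 + x + 1 (my own data layout: field elements as bit masks, polynomials as ascending coefficient lists), running the extended Euclidean algorithm on x^{2t'+1} and S(x) + 1 until deg r_i(x) ≤ t':

```python
# Extended Euclidean decoding of the (15, 5) binary BCH code of Example 5.8.
PRIM = 0b10011                              # p(x) = x^4 + x + 1
exp = [1]
for _ in range(14):
    v = exp[-1] << 1
    exp.append(v ^ PRIM if v & 0b10000 else v)
log = {v: i for i, v in enumerate(exp)}

def gmul(a, b):
    return exp[(log[a] + log[b]) % 15] if a and b else 0

def gdiv(a, b):
    return exp[(log[a] - log[b]) % 15]

def deg(p):
    return max((i for i, c in enumerate(p) if c), default=-1)

def padd(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [x ^ y for x, y in zip(a, b)]

def pmul(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] ^= gmul(ai, bj)
    return c

def pdivmod(a, b):
    a = a[:]
    q = [0] * max(deg(a) - deg(b) + 1, 1)
    while deg(a) >= deg(b):
        s = deg(a) - deg(b)
        c = gdiv(a[deg(a)], b[deg(b)])
        q[s] = c
        for i, bc in enumerate(b):
            if bc:
                a[i + s] ^= gmul(c, bc)     # eliminate the leading term
    return q, a

def euclid_locator(s_plus_1, t):
    """b_i tracks the S(x)+1 multiplier: r_i = a_i x^(2t+1) + b_i (S(x)+1)."""
    r_prev, r_cur = [0] * (2 * t + 1) + [1], s_plus_1
    b_prev, b_cur = [0], [1]
    while deg(r_cur) > t:
        q, rem = pdivmod(r_prev, r_cur)
        b_prev, b_cur = b_cur, padd(b_prev, pmul(q, b_cur))
        r_prev, r_cur = r_cur, rem
    return b_cur, r_cur                     # Lambda(x), Omega(x)

# S(x) + 1 of Example 5.8, ascending coefficients:
s1 = [1, 1, 1, exp[10], 1, exp[10], exp[5]]
lam, om = euclid_locator(s1, 3)
print(lam)   # [1, 1, 0, 6]: 1 + x + a^5 x^3, matching Table 5.6
```

The returned list encodes Λ(x) = α^5x^3 + x + 1 (the mask of α^5 is 0110 = 6), reproducing row i = 3 of Table 5.6.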
1. Modify the received polynomial r(x) by substituting 0's in the erasure positions in r(x). Label the modified received polynomial r_0(x) and compute the syndrome components S_1, S_2, ..., S_{2t'}.
2. Use the syndrome components S_1, S_2, ..., S_{2t'} and apply the standard Berlekamp-Massey algorithm to compute the error-locator polynomial Λ(x).
3. Find the roots of Λ(x).
4. Determine the error polynomial e(x) as identified by the roots of Λ(x).
5. Determine the code polynomial v_0(x), where v_0(x) = r_0(x) + e(x).
6. Modify the received polynomial r(x) by substituting 1's in the erasure positions in r(x). Label the modified received polynomial r_1(x) and compute the syndrome components S_1, S_2, ..., S_{2t'}.
7. Repeat steps 2 to 4 to determine the code polynomial v_1(x), where v_1(x) = r_1(x) + e(x).
8. From the two code polynomials, select the one that differs from r(x) in the smallest number of places outside the f erased positions.

Example 5.9. For t' = 3, p(x) = x^4 + x + 1 with α as a primitive root, the generator polynomial g(x) of a (15, 5) triple-error-correcting, binary, narrow-sense, primitive BCH code is x^10 + x^8 + x^5 + x^4 + x^2 + x + 1. Assume v(x) = 0 and r(x) = x^12 + x^5 + ?x^2 + ?x, where ? denotes erasures. Use the Berlekamp-Massey algorithm to compute the error-locator polynomial and decode.

Modifying r(x) by substituting 0's in the erasure positions, we get r_0(x) = x^12 + x^5. The syndrome vector is S = (S_1, S_2, ..., S_6), where S_i = r_0(α^i). Therefore,

S_1 = r_0(α) = α^12 + α^5 = α^14

Similarly, S_2 = α^13, S_3 = α^13, S_4 = α^11, S_5 = α^5, and S_6 = α^11.
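The syndrome components just quoted can be verified numerically. A sketch evaluating the modified received polynomials at α^i over GF(2^4) (helper names mine; the second polynomial is the 1's-substituted version used later in this example):

```python
# S_i = r(alpha^i) over GF(2^4) for the two modified received polynomials
# of Example 5.9 (erasures filled with 0's and with 1's, respectively).
PRIM = 0b10011                     # p(x) = x^4 + x + 1
exp = [1]
for _ in range(14):
    v = exp[-1] << 1
    exp.append(v ^ PRIM if v & 0b10000 else v)
log = {v: i for i, v in enumerate(exp)}

def syndromes(r, two_t):
    """r: ascending coefficient list; returns [S_1, ..., S_2t']."""
    out = []
    for i in range(1, two_t + 1):
        s = 0
        for k, c in enumerate(r):
            if c:
                s ^= exp[(log[c] + i * k) % 15]   # c * (alpha^i)^k
        out.append(s)
    return out

r0 = [0] * 15                      # r0(x) = x^12 + x^5
r0[12] = r0[5] = 1
r1 = r0[:]                         # r1(x) = x^12 + x^5 + x^2 + x
r1[2] = r1[1] = 1

print([log[s] for s in syndromes(r0, 6)])   # [14, 13, 13, 11, 5, 11]
print([log[s] for s in syndromes(r1, 6)])   # [12, 9, 14, 3, 10, 13]
```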
Using the Berlekamp-Massey algorithm, we obtain Table 5.7. The iteration process stops when μ = 2t' = 6. From Table 5.7, the error-locator polynomial Λ(x) = α^2x^2 + α^14x + 1. By the trial and error method, α^3 and α^10 are roots of the error-locator polynomial Λ(x) such that Λ(x) = 0. Their inverses are α^12 and α^5. Therefore,

e(x) = e_12 x^12 + e_5 x^5    (5.44)

where e_5 = e_12 = 1. Substituting e_5 and e_12 into equation (5.44), we have

e(x) = x^12 + x^5

The decoded vector v_0(x) = r_0(x) + e(x) = 0. Next, modifying r(x) by substituting 1's in the erasure positions, we get r_1(x) = x^12 + x^5 + x^2 + x. The syndrome vector is S = (S_1, S_2, ..., S_6), where S_i = r_1(α^i).
Table 5.7
Berlekamp-Massey Iterative Algorithm Decoding Steps

μ    d_μ           B(x)                    Λ^(μ)(x)                  l    L_μ
0    S_1 = α^14    x                       1                         0    0
1    0             αx                      α^14x + 1                 1    1
2    α             αx^2                    α^14x + 1                 1    1
3    0             α^13x^2 + α^14x         α^2x^2 + α^14x + 1        2    2
4    0             α^13x^3 + α^14x^2       α^2x^2 + α^14x + 1        2    2
5    0             α^13x^4 + α^14x^3       α^2x^2 + α^14x + 1        2    2
6    —             α^13x^5 + α^14x^4       α^2x^2 + α^14x + 1        2    2
Therefore,

S_1 = r_1(α) = α^12 + α^5 + α^2 + α = α^12

Similarly, S_2 = α^9, S_3 = α^14, S_4 = α^3, S_5 = α^10, and S_6 = α^13. The syndrome polynomial is

S(x) = α^13x^6 + α^10x^5 + α^3x^4 + α^14x^3 + α^9x^2 + α^12x
Using the Berlekamp-Massey algorithm, we obtain Table 5.8. The iteration process stops at the end of μ = 2t' = 6. From Table 5.8, the error-locator polynomial Λ(x) = α^4x^3 + α^8x^2 + α^12x + 1. By the trial and error method, α, α^2, and α^8 are roots of the error-locator polynomial Λ(x) such that Λ(x) = 0. Their inverses are α^14, α^13, and α^7. Therefore,

e(x) = e_14 x^14 + e_13 x^13 + e_7 x^7    (5.45)

where e_7 = e_13 = e_14 = 1. Substituting e_7, e_13, and e_14 into equation (5.45), we have

e(x) = x^14 + x^13 + x^7

The decoded vector v_1(x) = r_1(x) + e(x) = x^14 + x^13 + x^12 + x^7 + x^5 + x^2 + x.

The code polynomial v_0(x) differs from r(x) in 2 places outside the 2 erased positions, and the code polynomial v_1(x) differs from r(x) in 3 places outside the 2 erased positions. v_0(x) = 0 is selected as the decoded code polynomial. Double erasures and double errors are corrected.

Table 5.8
Berlekamp-Massey Iterative Algorithm Decoding Steps
μ    d_μ           B(x)                        Λ^(μ)(x)                        l    L_μ
0    S_1 = α^12    x                           1                               0    0
1    0             α^3x                        α^12x + 1                       1    1
2    α^8           α^3x^2                      α^12x + 1                       1    1
3    0             α^4x^2 + α^7x               α^11x^2 + α^12x + 1             2    2
4    1             α^4x^3 + α^7x^2             α^11x^2 + α^12x + 1             2    2
5    0             α^11x^3 + α^12x^2 + x       α^4x^3 + α^8x^2 + α^12x + 1     3    3
6    —             α^11x^4 + α^12x^3 + x^2     α^4x^3 + α^8x^2 + α^12x + 1     3    3
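The runs of Tables 5.7 and 5.8 can be reproduced with a compact Berlekamp-Massey routine. A sketch over GF(2^4) (my own loop organization, with B(x) pre-multiplied by x on every step rather than the book's register description):

```python
# Berlekamp-Massey over GF(2^4) built from p(x) = x^4 + x + 1.
PRIM = 0b10011
exp = [1]
for _ in range(14):
    v = exp[-1] << 1
    exp.append(v ^ PRIM if v & 0b10000 else v)
log = {v: i for i, v in enumerate(exp)}

def gmul(a, b):
    return exp[(log[a] + log[b]) % 15] if a and b else 0

def berlekamp_massey(S):
    """Return Lambda(x), ascending coefficients, fitting syndromes S."""
    Lam, B, L = [1], [0, 1], 0                # B(x) starts as x
    for mu, s in enumerate(S):
        d = s                                  # discrepancy d_mu
        for i in range(1, L + 1):
            d ^= gmul(Lam[i] if i < len(Lam) else 0, S[mu - i])
        if d:
            T = [l ^ gmul(d, b) for l, b in
                 zip(Lam + [0] * len(B), B + [0] * len(Lam))]
            if 2 * L <= mu:                    # register length change
                dinv = exp[(-log[d]) % 15]
                B = [gmul(dinv, c) for c in Lam]
                L = mu + 1 - L
            Lam = T
        B = [0] + B                            # B(x) <- x * B(x)
    return Lam[:L + 1]

alpha = lambda i: exp[i % 15]
# Syndromes of Table 5.7: expect Lambda(x) = a^2 x^2 + a^14 x + 1
S = [alpha(14), alpha(13), alpha(13), alpha(11), alpha(5), alpha(11)]
print(berlekamp_massey(S) == [1, alpha(14), alpha(2)])   # True
```

Fed the syndromes of the 1's-substituted polynomial instead, the same routine returns α^4x^3 + α^12x + α^8x^2 + 1 in ascending form, matching Table 5.8.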
We can of course use the extended Euclidean algorithm at Step 2 instead of the Berlekamp-Massey algorithm to decode BCH codes. To illustrate this, we rework Example 5.9 using the extended Euclidean algorithm.

Example 5.10. Consider the (15, 5) triple-error-correcting, binary, narrow-sense, primitive BCH code with the code polynomial v(x) and the received polynomial r(x) given in Example 5.9. Modifying r(x) by substituting 0's in the erasure positions, we get r_0(x) = x^12 + x^5. From Example 5.9, we have S_1 = α^14, S_2 = α^13, S_3 = α^13, S_4 = α^11, S_5 = α^5, and S_6 = α^11. The syndrome polynomial is S(x) = α^11x^6 + α^5x^5 + α^11x^4 + α^13x^3 + α^13x^2 + α^14x. Using the extended Euclidean algorithm, we obtain Table 5.9. The iteration process stops when deg r_i(x) ≤ (t' = 3). From Table 5.9, the error-locator polynomial Λ(x) = b_3(x) = αx^2 + α^13x + α^14 = α^14(α^2x^2 + α^14x + 1) and the error-evaluator polynomial Ω(x) = r_3(x) = αx^2 + α^14. It can be seen that the solution Λ(x) is identical (within a constant factor) to that found in Example 5.9 using the Berlekamp-Massey algorithm. Clearly, the error locations must be identical to those found in Example 5.9 and

e(x) = e_12 x^12 + e_5 x^5
where e_5 = e_12 = 1. The decoded vector v_0(x) = r_0(x) + e(x) = 0.

Next, modifying r(x) by substituting 1's in the erasure positions, we get r_1(x) = x^12 + x^5 + x^2 + x. From Example 5.9, we have S_1 = α^12, S_2 = α^9, S_3 = α^14, S_4 = α^3, S_5 = α^10, and S_6 = α^13. The syndrome polynomial is S(x) = α^13x^6 + α^10x^5 + α^3x^4 + α^14x^3 + α^9x^2 + α^12x. Using the extended Euclidean algorithm, we obtain Table 5.10. The iteration process stops when deg r_i(x) ≤ (t' = 3). From Table 5.10, the error-locator polynomial Λ(x) = b_4(x) = α^14x^3 + α^3x^2 + α^7x + α^10 =

Table 5.9
Extended Euclidean Algorithm Decoding Steps
i    q_i(x)           r_i(x) = Ω_i(x)                                       a_i(x) = ρ_i(x)    b_i(x) = Λ_i(x)
0    —                x^7                                                   1                  0
1    —                S(x) + 1                                              0                  1
2    α^4x + α^13      α^14x^5 + α^11x^4 + α^9x^3 + α^5x^2 + α^6x + α^13     1                  α^4x + α^13
3    α^12x + α^5      αx^2 + α^14                                           α^12x + α^5        αx^2 + α^13x + α^14
Table 5.10
Extended Euclidean Algorithm Decoding Steps

i    q_i(x)           r_i(x) = Ω_i(x)                                      a_i(x) = ρ_i(x)            b_i(x) = Λ_i(x)
0    —                x^7                                                  1                          0
1    —                S(x) + 1                                             0                          1
2    α^2x + α^14      α^6x^5 + α^5x^4 + α^4x^3 + α^6x^2 + α^9x + α^14      1                          α^2x + α^14
3    α^7x + α^12      αx^4 + α^5x^3 + α^12x + α^12                         α^7x + α^12                α^9x^2 + α^8x + α^12
4    α^5x + α^14      α^3x^2 + α^10                                        α^12x^2 + α^3x + α^12      α^14x^3 + α^3x^2 + α^7x + α^10
α^10(α^4x^3 + α^8x^2 + α^12x + 1) and the error-evaluator polynomial Ω(x) = r_4(x) = α^3x^2 + α^10. The solution Λ(x) is also identical (within a constant factor) to that found in Example 5.9 using the Berlekamp-Massey algorithm. Again, the error locations must be identical to those found in Example 5.9 and

e(x) = e_14 x^14 + e_13 x^13 + e_7 x^7

where e_7 = e_13 = e_14 = 1. The decoded vector v_1(x) = r_1(x) + e(x) = x^14 + x^13 + x^12 + x^7 + x^5 + x^2 + x.

The code polynomial v_0(x) differs from r(x) in 2 places outside the 2 erased positions, and the code polynomial v_1(x) differs from r(x) in 3 places outside the 2 erased positions. v_0(x) = 0 is selected as the decoded code polynomial. Double erasures and double errors are corrected.
5.7 Computer Simulation Results

The variation of bit error rate with the E_b/N_0 ratio of two BCH codes with coherent BPSK signals for the AWGN channel has been measured by computer simulations. The system model is shown in Figure 5.3. Here, E_b is the average transmitted bit energy, and N_0/2 is the two-sided power spectral density of the noise. The parameters of the codes are given in Table 5.11. Perfect timing, synchronization, and Berlekamp-Massey algorithm decoding are assumed. In each test, the average transmitted bit energy was fixed, and the variance of the AWGN was adjusted for a range of average bit-error rates.
Figure 5.3 Model of a coded digital communication system.

Table 5.11
Parameters of Binary, Narrow-Sense, Primitive BCH Codes
n     k    Designed Distance 2t_d + 1    g(x)
15    7    5                             x^8 + x^7 + x^6 + x^4 + 1
15    5    7                             x^10 + x^8 + x^5 + x^4 + x^2 + x + 1
The simulated error performance of the binary BCH codes with Berlekamp-Massey algorithm decoding is shown in Figure 5.4. Comparisons are made between the coded and uncoded coherent BPSK systems. At high E_b/N_0 ratios, it can be seen that the performance of the coded BPSK system is better than that of the uncoded BPSK system. At a bit error rate of 10^-5, the (15, 7) double-error-correcting and (15, 5) triple-error-correcting binary BCH coded BPSK systems give 0.75 and 1.25 dB of coding gain over the uncoded BPSK system, respectively.
Figure 5.4 Performance of binary BCH codes with Berlekamp-Massey algorithm decoding in AWGN channels (bit error rate versus E_b/N_0 in dB for uncoded coherent BPSK, the (15, 7) binary BCH code, and the (15, 5) binary BCH code).
References

[1] Hocquenghem, A., "Codes Correcteurs d'Erreurs," Chiffres, Vol. 2, 1959, pp. 147-156.
[2] Bose, R. C., and D. K. Ray-Chaudhuri, "On a Class of Error Correcting Binary Group Codes," Information and Control, Vol. 3, March 1960, pp. 68-79.
[3] Bose, R. C., and D. K. Ray-Chaudhuri, "Further Results on Error Correcting Binary Group Codes," Information and Control, Vol. 3, September 1960, pp. 279-290.
[4] Massey, J. L., "Shift-Register Synthesis and BCH Decoding," IEEE Trans. on Information Theory, Vol. IT-15, No. 1, January 1969, pp. 122-127.
[5] Berlekamp, E. R., "On Decoding Binary Bose-Chaudhuri-Hocquenghem Codes," IEEE Trans. on Information Theory, Vol. IT-11, No. 5, October 1965, pp. 577-580.
[6] Blahut, R. E., Theory and Practice of Error Control Codes, Reading, MA: Addison-Wesley, 1984.
[7] Chien, R. T., "Cyclic Decoding Procedure for the Bose-Chaudhuri-Hocquenghem Codes," IEEE Trans. on Information Theory, Vol. IT-10, No. 5, October 1964, pp. 357-363.
6
Reed-Solomon Codes

6.1 Introduction

Reed-Solomon (RS) codes form a subclass of the nonbinary BCH codes. The Reed-Solomon codes are cyclic codes, and they were discovered by Reed and Solomon in 1960 [1]. Although they are a subclass of the nonbinary BCH codes, Reed-Solomon codes offer better error-control performance and more efficient practical implementation because they have the largest minimum Hamming distance for fixed values of k and n. In this chapter, we describe the generation and decoding of Reed-Solomon codes. A thorough discussion of the Reed-Solomon codes can be found in references [2-6].
6.2 Description of Reed-Solomon Codes

Let α be an element of GF(q^s) and let t_d be the designed error-correcting power of a BCH code. In Chapter 5, we have seen that for some positive integers s and b ≥ 1, a BCH code of length n and minimum Hamming distance ≥ 2t_d + 1 can be generated by the generator polynomial g(x) over GF(q) with α^b, α^{b+1}, ..., α^{b+2t_d−1} as the roots of g(x). Let α^i, a nonzero element in GF(q^s), be a root of the minimal polynomial φ_i(x) over GF(q) and n_i be the order of α^i, for i = b, b + 1, ..., b + 2t_d − 1. The generator polynomial of a BCH code can be expressed in the form

g(x) = LCM{φ_b(x), φ_{b+1}(x), ..., φ_{b+2t_d−1}(x)}    (6.1)

where LCM denotes the least common multiple. The length of the code is

n = LCM{n_b, n_{b+1}, ..., n_{b+2t_d−1}}    (6.2)

The degree of φ_i(x) is s or less, and the degree of g(x) is, therefore, at most equal to 2st_d. The codewords generated by a binary BCH code have symbols from the binary field GF(2). In a nonbinary BCH code, the codewords have symbols from GF(q). These codes are called q-ary BCH codes. The Reed-Solomon codes, a very important subclass of q-ary BCH codes, can be obtained by setting s = 1, b = 1, and q = p^m, where p is some prime. Let α be a primitive element in GF(p^m). A primitive Reed-Solomon code with symbols from GF(p^m) has the following parameters:
Block length: n = p^m − 1
Number of check digits: c = (n − k) = 2t_d
Minimum distance: d_min = 2t_d + 1
An important property of any Reed-Solomon code is that the true minimum Hamming distance is always equal to the designed distance. This is shown as follows. From the Singleton bound (Theorem 3.2), the minimum distance of any (n, k) linear code satisfies

d_min ≤ n − k + 1    (6.3)

For BCH codes, we have

d_min ≥ 2t_d + 1    (6.4)

For Reed-Solomon codes, n − k = 2t_d and

d_min ≥ n − k + 1    (6.5)

Therefore, d_min = n − k + 1. The minimum Hamming distance of a Reed-Solomon code is exactly equal to the designed distance, i.e., d_min = 2t_d + 1, and the error-correcting power t' of a Reed-Solomon code is equal to t_d. This tells us that for a fixed (n, k), no code can have a larger minimum distance than a Reed-Solomon code. A Reed-Solomon code is therefore a maximum-distance code. Let α^i, a nonzero element in GF(p^m), be a root of the minimal polynomial φ_i(x) over GF(p^m) and n_i be the order of α^i, for i = 1, 2, ..., 2t'. The generator polynomial of a primitive Reed-Solomon code is
g(x) = (x − α)(x − α^2) ⋯ (x − α^{2t'})    (6.6)

g(x) has α, α^2, ..., α^{2t'} as all its roots from GF(p^m). The degree of φ_i(x) is 1, and the degree of g(x) is, therefore, at most equal to 2t'. g(x) can also be expressed in polynomial form with dummy variable x and is given by

g(x) = x^{n−k} + g_{n−k−1}x^{n−k−1} + g_{n−k−2}x^{n−k−2} + ... + g_1x + g_0    (6.7)

where the g_i are elements from GF(p^m), 0 ≤ i ≤ (n − k − 1). It can be seen that the coefficients of g(x) are in the same field as the roots of g(x). Figure 6.1 shows the general encoder circuit of an (n, k) Reed-Solomon code with symbols over GF(p^m).

In practice, Reed-Solomon codes with symbols over GF(q = 2^m) are of greatest interest. Now, each 2^m-ary symbol can be expressed as an m-tuple over GF(2), and the Galois field circuit elements of Figure 6.1 can be realized by binary logic elements and delay flip-flops. In what follows we consider Reed-Solomon codes with symbols from GF(2^m).

Example 6.1. Let α be a root of the primitive polynomial p(x) = x^4 + x + 1 over GF(2). The order of the element α in GF(2^4) is 15 and α is a primitive element in GF(2^4) = {0, 1, α, α^2, ..., α^14}. The elements of GF(2^4) generated by the primitive polynomial p(x) = x^4 + x + 1 over GF(2) are shown in Table 6.1. The polynomial representation of the element α^i, i = 0, 1, ..., 2^m − 2, is expressed as α^i = a_{m−1}α^{m−1} + a_{m−2}α^{m−2} + ... + a_0 with binary coefficients, and the coefficients of the polynomial representation of α^i are expressed as a binary m-tuple [a_0 a_1 ... a_{m−1}]. To generate a double-error-correcting, primitive Reed-
Figure 6.1 (n, k) Reed-Solomon encoder.
Table 6.1
Elements of GF(2^4) Generated by a Primitive Polynomial p(x) = x^4 + x + 1

Elements                            4-Tuple
0 = 0                               0000
α^0 = 1                             1000
α^1 = α                             0100
α^2 = α^2                           0010
α^3 = α^3                           0001
α^4 = α + 1                         1100
α^5 = α^2 + α                       0110
α^6 = α^3 + α^2                     0011
α^7 = α^3 + α + 1                   1101
α^8 = α^2 + 1                       1010
α^9 = α^3 + α                       0101
α^10 = α^2 + α + 1                  1110
α^11 = α^3 + α^2 + α                0111
α^12 = α^3 + α^2 + α + 1            1111
α^13 = α^3 + α^2 + 1                1011
α^14 = α^3 + 1                      1001
Solomon code of length n = 15, the parameter t' is equal to 2. The generator polynomial g(x) has α, α^2, α^3, and α^4 (2t' = 4) as roots and

g(x) = (x − α)(x − α^2)(x − α^3)(x − α^4)
     = x^4 + α^13x^3 + α^6x^2 + α^3x + α^10

Since n = 15 and the degree of g(x) is n − k = 4, the number of information symbols k is 11. g(x) generates a (15, 11) double-error-correcting, primitive Reed-Solomon code. The minimum Hamming distance of the code is 5. Figure 6.2 shows the encoder circuit of the (15, 11) Reed-Solomon code.

Example 6.2. Let α be a root of the primitive polynomial p(x) = x^4 + x + 1 over GF(2). The order of the element α in GF(2^4) is 15 and α is a primitive element in GF(2^4). To generate a triple-error-correcting, primitive Reed-Solomon code of length n = 15, the parameter t' is equal to 3. The generator polynomial g(x) has α, α^2, ..., and α^6 (2t' = 6) as roots and

g(x) = (x − α)(x − α^2)(x − α^3)(x − α^4)(x − α^5)(x − α^6)
     = x^6 + α^10x^5 + α^14x^4 + α^4x^3 + α^6x^2 + α^9x + α^6
Figure 6.2 (15, 11) Reed-Solomon encoder.
Since n = 15 and the degree of g(x) is n − k = 6, the number of information symbols k is 9. g(x) generates a (15, 9) triple-error-correcting, primitive Reed-Solomon code. The minimum Hamming distance of the code is 7.

It is possible to take a nonprimitive element β in GF(p^m), p a prime, to generate a Reed-Solomon code. Codes generated by nonprimitive elements in GF(p^m) are called nonprimitive Reed-Solomon codes. The generator polynomial is

g(x) = (x − β)(x − β^2) ⋯ (x − β^{2t'})    (6.8)

and the codeword length n is equal to the order of the field element β. In practice, nonprimitive Reed-Solomon codes are rarely used because the distance of a nonprimitive Reed-Solomon code cannot be increased, and codes of length other than p^m − 1 can also be obtained by shortening a primitive Reed-Solomon code.
6.3 Decoding of Reed-Solomon Codes

6.3.1 Berlekamp-Massey Algorithm

Decoding of a primitive or nonprimitive Reed-Solomon code can be accomplished by the Berlekamp-Massey iterative algorithm with some extra calculations. Let α be an element over GF(q = p^m), p some prime. Also, let a code polynomial v(x) = v_{n−1}x^{n−1} + v_{n−2}x^{n−2} + ... + v_1x + v_0, an error polynomial e(x) = e_{n−1}x^{n−1} + e_{n−2}x^{n−2} + ... + e_1x + e_0, and the received polynomial r(x) = r_{n−1}x^{n−1} + r_{n−2}x^{n−2} + ... + r_1x + r_0. Then

r(x) = v(x) + e(x)    (6.9)
As described in Section 5.5, the syndrome vector and the infinite-degree syndrome polynomial are defined as

S = (S_1, S_2, ..., S_{2t'})    (6.10)

and

S(x) = ... + S_{2t'}x^{2t'} + S_{2t'−1}x^{2t'−1} + ... + S_2x^2 + S_1x    (6.11)

respectively, where the 2t' known syndrome components are

S_i = r(α^i)    (6.12)
    = r_{n−1}(α^i)^{n−1} + r_{n−2}(α^i)^{n−2} + ... + r_1(α^i) + r_0    (6.13)
    = e(α^i)    (6.14)

or

S_i = e_{n−1}(α^i)^{n−1} + e_{n−2}(α^i)^{n−2} + ... + e_1(α^i) + e_0

for 1 ≤ i ≤ 2t'. The error-locator polynomial is

Λ(x) = Λ_ν x^ν + Λ_{ν−1} x^{ν−1} + ... + Λ_1 x + 1    (6.15)
     = (1 − α^{j_1}x)(1 − α^{j_2}x) ⋯ (1 − α^{j_ν}x)    (6.16)

where j_1, j_2, ..., j_ν indicate the locations of errors in the error pattern

e(x) = e_{j_ν}x^{j_ν} + ... + e_{j_2}x^{j_2} + e_{j_1}x^{j_1}    (6.17)

The coefficients in e(x) represent the error values and are related to the known syndrome components by the following equation:

S_i = Σ_{l=1}^{ν} e_{j_l}(α^{j_l})^i    (6.18)

for 1 ≤ i ≤ 2t'. Given the 2t' known syndrome coefficients of S(x), the decoding problem becomes one of finding an error-locator polynomial Λ(x) of degree ν ≤ t' that satisfies the following key equation:
Ω(x) ≡ Λ(x)[S(x) + 1] modulo-x^{2t'+1}    (6.19)

where Ω(x) is the error-evaluator polynomial of degree less than or equal to ν. The Berlekamp-Massey algorithm is used to determine the error-locator polynomial Λ(x) from the known syndrome coefficients of S(x), and the Chien search is used to find the roots of Λ(x). Once the roots of Λ(x) are found, we can determine j_1, j_2, ..., j_ν to locate the errors. Up to this point, this is all we need to compute the error vector e(x) with unity coefficients for binary codes. For nonbinary codes, we need to determine the magnitude of the errors. We can find the error values by solving (6.18). Alternatively, we can use the error-evaluator polynomial Ω(x) and the formal derivative of Λ(x) to determine the magnitude of the errors [2, 7]. The technique is more efficient than solving (6.18) when ν becomes very large.

Definition 6.1. Let f(x) = f_m x^m + f_{m−1}x^{m−1} + ... + f_1x + f_0 be a polynomial of degree m with coefficients in GF(q). The formal derivative is defined as f'(x) = m f_m x^{m−1} + (m − 1) f_{m−1} x^{m−2} + ... + 3 f_3 x^2 + 2 f_2 x + f_1.

The main properties of the formal derivative are:

1. If f^2(x) divides a(x), then f(x) divides a'(x).
2. [f(x)a(x)]' = f'(x)a(x) + f(x)a'(x).
3. If f(x) ∈ GF(2^m)[x], the set of all polynomials in x with coefficients from GF(2^m), then f'(x) has no odd-powered terms.
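Definition 6.1 specializes sharply in characteristic 2: the integer factor i reduces to i mod 2, which is exactly property 3. A small sketch (field elements as bit masks; the masks quoted in the comments assume the Table 6.1 representation):

```python
def formal_derivative(f):
    """f: ascending coefficients over GF(2^m) (ints as bit masks).

    f'(x) = sum_i i * f_i * x^(i-1); in characteristic 2 the scalar i
    reduces to i mod 2, so only the odd-power terms of f survive.
    """
    return [f[i] if i % 2 == 1 else 0 for i in range(1, len(f))]

# Lambda(x) = 1 + a^7 x + a^4 x^2 + a^6 x^3 gives
# Lambda'(x) = a^7 + a^6 x^2 -- no odd-powered terms, per property 3.
a7, a4, a6 = 0b1011, 0b0011, 0b1100       # masks of a^7, a^4, a^6
print(formal_derivative([1, a7, a4, a6]))  # [11, 0, 12] = a^7 + a^6 x^2
```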
The error magnitude can be computed by utilizing the error-evaluator polynomial Ω(x) and the formal derivative of the error-locator polynomial Λ(x). The following theorem states how.

Theorem 6.1 [2, 7]. The error magnitudes are given by

e_{j_i} = −α^{j_i} Ω[(α^{j_i})^{−1}] / Λ'[(α^{j_i})^{−1}]    (6.20)

for 0 ≤ j_i < n and 1 ≤ i ≤ ν.

Proof. The error-locator polynomial is

Λ(x) = ∏_{l=1}^{ν} (1 − α^{j_l}x)

and the formal derivative of Λ(x) is

Λ'(x) = [∏_{l=1}^{ν} (1 − α^{j_l}x)]'

By property 2, we get

Λ'(x) = Σ_{l=1}^{ν} {−α^{j_l} ∏_{l'≠l} (1 − α^{j_{l'}}x)}

Evaluating Λ'(x) at an actual error location x = (α^{j_i})^{−1}, we get

Λ'[(α^{j_i})^{−1}] = Σ_{l=1}^{ν} {−α^{j_l} ∏_{l'≠l} [1 − α^{j_{l'}}(α^{j_i})^{−1}]}

If we expand the above expression, all the products in the above expression have a zero term except for the case l = i. We can write

Λ'[(α^{j_i})^{−1}] = −α^{j_i} ∏_{l'≠i} [1 − α^{j_{l'}}(α^{j_i})^{−1}]

The infinite-degree syndrome polynomial is

S(x) = Σ_{i'=1}^{∞} S_{i'} x^{i'}

and

S_{i'} = r(α^{i'}) = Σ_{l=1}^{ν} e_{j_l}(α^{j_l})^{i'}

where we only know the first 2t' coefficients. Then S(x) can be reduced to a simple rational expression:

S(x) = Σ_{i'=1}^{∞} [Σ_{l=1}^{ν} e_{j_l}(α^{j_l})^{i'}] x^{i'} = Σ_{l=1}^{ν} e_{j_l} [Σ_{i'=1}^{∞} (α^{j_l}x)^{i'}] = Σ_{l=1}^{ν} e_{j_l} α^{j_l}x / (1 − α^{j_l}x)

The error-evaluator polynomial is

Ω(x) ≡ Λ(x)[S(x) + 1] modulo-x^{2t'+1}
     ≡ {[∏_{l'=1}^{ν} (1 − α^{j_{l'}}x)] [Σ_{l=1}^{ν} e_{j_l}α^{j_l}x / (1 − α^{j_l}x)] + Λ(x)} modulo-x^{2t'+1}
     = Σ_{l=1}^{ν} [e_{j_l}α^{j_l}x ∏_{l'≠l} (1 − α^{j_{l'}}x)] + Λ(x)

Evaluating Ω(x) at an actual error or erasure location x = (α^{j_i})^{−1}, we have Λ[(α^{j_i})^{−1}] = 0 and

Ω[(α^{j_i})^{−1}] = Σ_{l=1}^{ν} {e_{j_l}α^{j_l}(α^{j_i})^{−1} ∏_{l'≠l} [1 − α^{j_{l'}}(α^{j_i})^{−1}]}

Again, if we expand the above expression, all the products in the above expression have a zero term except for the case l = i. Hence, we can write

Ω[(α^{j_i})^{−1}] = e_{j_i} ∏_{l'≠i} [1 − α^{j_{l'}}(α^{j_i})^{−1}]

and

e_{j_i} = −α^{j_i} Ω[(α^{j_i})^{−1}] / Λ'[(α^{j_i})^{−1}]
The decoding procedure of Reed-Solomon codes is as follows.

1. Compute the syndrome components S_1, S_2, ..., S_{2t'} from the received polynomial r(x).
2. Use the syndrome components S_1, S_2, ..., S_{2t'} and apply the Berlekamp-Massey algorithm to compute the error-locator polynomial Λ(x).
3. Compute the error-evaluator polynomial Ω(x), where Ω(x) ≡ Λ(x)[S(x) + 1] modulo-x^{2t'+1}.
4. Find the roots of Λ(x) and locate the error positions.
5. Determine the magnitude of the errors and store in the error polynomial e(x).
6. Subtract the error polynomial e(x) from the received polynomial r(x) for correction.
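These steps rest on arithmetic in the code's symbol field. As a sketch (helper names are our own; the primitive polynomial p(x) = x^4 + x + 1 matches the GF(2^4) examples below), the antilog/log tables and the syndrome computation of step 1 can be written as:

```python
# GF(2^4) arithmetic for the (15, 9) Reed-Solomon examples, with
# p(x) = x^4 + x + 1 as the primitive polynomial. Field elements are
# stored as 4-bit integers; alpha = 2.

PRIM = 0b10011                # p(x) = x^4 + x + 1
EXP, LOG = [0] * 15, {}
a = 1
for i in range(15):           # build the alpha^i antilog/log tables
    EXP[i], LOG[a] = a, i
    a <<= 1
    if a & 0b10000:
        a ^= PRIM

def gf_mul(x, y):
    if x == 0 or y == 0:
        return 0
    return EXP[(LOG[x] + LOG[y]) % 15]

def poly_eval(coeffs, x):
    """Evaluate a polynomial (coeffs[i] is the coefficient of x^i) at x."""
    acc = 0
    for c in reversed(coeffs):        # Horner's rule; '+' is XOR in GF(2^m)
        acc = gf_mul(acc, x) ^ c
    return acc

def syndromes(r, two_t):
    """Step 1: S_i = r(alpha^i) for i = 1..2t'."""
    return [poly_eval(r, EXP[i % 15]) for i in range(1, two_t + 1)]
```

For r(x) = α^4 x^12 + α^3 x^6 + α^7 x^3 (Example 6.3 below), this yields S_1 = α^12, S_2 = 1, S_3 = α^14, S_4 = α^10, S_5 = 0, and S_6 = α^12.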
Example 6.3. For t' = 3, p(x) = x^4 + x + 1 with α as a primitive root, the generator polynomial g(x) of a (15, 9) triple-error-correcting, primitive Reed-Solomon code is x^6 + α^10 x^5 + α^14 x^4 + α^4 x^3 + α^6 x^2 + α^9 x + α^6. Assume v(x) = 0 and r(x) = α^4 x^12 + α^3 x^6 + α^7 x^3, and use the Berlekamp-Massey algorithm to decode. The syndrome vector is

S = [S_1 S_2 ... S_{2t'}]

where

S_i = r(α^i)

Therefore,

S_1 = r(x = α) = α^4 (α^12) + α^3 (α^6) + α^7 (α^3)
    = α + α^9 + α^10
    = α^12

Similarly, S_2 = 1, S_3 = α^14, S_4 = α^10, S_5 = 0, and S_6 = α^12. Using the Berlekamp-Massey iterative algorithm as discussed in Section 5.5.1 of Chapter 5, the decoding steps are shown in Table 6.2. From Table 6.2, Λ(x) = Λ^(6)(x) = α^6 x^3 + α^4 x^2 + α^7 x + 1, and

Λ(x)[S(x) + 1] = α^3 x^9 + α x^8 + x^7 + α^6 x^3 + x^2 + α^2 x + 1        (6.21)

and

Ω(x) ≡ Λ(x)[S(x) + 1] modulo-x^{2t'+1} = α^6 x^3 + x^2 + α^2 x + 1
Table 6.2
Berlekamp-Massey Iterative Algorithm Decoding Steps

μ | d_μ  | B(x)                        | Λ^(μ)(x)                       | j | L_μ
0 | α^12 | x                           | 1                              | 0 | 0
1 | α^7  | α^3 x                       | α^12 x + 1                     | 1 | 1
2 | 1    | α^3 x^2                     | α^3 x + 1                      | 1 | 1
3 | α^7  | α^3 x^2 + x                 | α^3 x^2 + α^3 x + 1            | 2 | 2
4 | α^10 | α^3 x^3 + x^2               | α^12 x^2 + α^4 x + 1           | 2 | 2
5 | α^13 | α^2 x^3 + α^9 x^2 + α^5 x   | α^13 x^3 + α^3 x^2 + α^4 x + 1 | 3 | 3
6 | —    | α^2 x^4 + α^9 x^3 + α^5 x^2 | α^6 x^3 + α^4 x^2 + α^7 x + 1  | 3 | 3
By the trial and error method, α^3, α^9, and α^12 are roots of Λ(x) such that Λ(x) = 0. Their inverses are α^12, α^6, and α^3. Therefore,

e(x) = e_{12} x^12 + e_6 x^6 + e_3 x^3        (6.22)

Using (6.20),

e_3 = -α^3 { α^6 [(α^3)^3]^{-1} + [(α^3)^2]^{-1} + α^2 (α^3)^{-1} + 1 } / { α^6 [(α^3)^2]^{-1} + α^7 }
    = α^3 { α^6 α^6 + α^9 + α^2 α^12 + 1 } / { α^6 α^9 + α^7 }
    = α^3 { α^12 + α^9 + α^14 + 1 } / { 1 + α^7 }
    = α^3 α^13 / α^9
    = α^7

Similarly, e_6 = α^3 and e_{12} = α^4. Substituting e_3, e_6, and e_{12} into (6.22), we have

e(x) = α^4 x^12 + α^3 x^6 + α^7 x^3

The decoded vector v(x) = r(x) - e(x) = 0. Triple errors are corrected.
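Example 6.3 can be checked mechanically end to end. The sketch below (our own function names; the Berlekamp-Massey update follows Massey's LFSR-synthesis formulation rather than the tabular layout of Table 6.2, but it produces the same Λ(x)) runs steps 1-6:

```python
# Decoding sketch for Example 6.3 over GF(2^4) with p(x) = x^4 + x + 1:
# syndromes, Berlekamp-Massey, a Chien-style root search, and the error
# magnitudes from (6.20).

PRIM = 0b10011
EXP, LOG = [0] * 15, {}
a = 1
for i in range(15):
    EXP[i], LOG[a] = a, i
    a <<= 1
    if a & 0b10000:
        a ^= PRIM

def mul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 15]

def inv(x):
    return EXP[(15 - LOG[x]) % 15]

def peval(p, x):
    acc = 0
    for c in reversed(p):             # Horner's rule; '+' is XOR
        acc = mul(acc, x) ^ c
    return acc

def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] ^= mul(pi, qj)
    return out

def berlekamp_massey(S):
    """Synthesize the shortest LFSR (the error locator Lambda) for syndromes S."""
    C, B, L, m, b = [1], [1], 0, 1, 1
    for n, s in enumerate(S):
        d = s                          # discrepancy d_n
        for i in range(1, L + 1):
            d ^= mul(C[i], S[n - i])
        if d == 0:
            m += 1
            continue
        shift = [0] * m + [mul(mul(d, inv(b)), c) for c in B]   # (d/b) x^m B(x)
        newC = [x ^ y for x, y in zip(C + [0] * max(0, len(shift) - len(C)),
                                      shift + [0] * max(0, len(C) - len(shift)))]
        if 2 * L <= n:                 # length change
            B, b, L, m = C, d, n + 1 - L, 1
        else:
            m += 1
        C = newC
    return C[:L + 1]

def decode(r, two_t=6, n=15):
    S = [peval(r, EXP[i % 15]) for i in range(1, two_t + 1)]
    Lam = berlekamp_massey(S)
    Omega = pmul(Lam, [1] + S)[:two_t + 1]   # Lambda(x)[S(x)+1] mod x^(2t'+1)
    dLam = [c if k % 2 == 0 else 0 for k, c in enumerate(Lam[1:])]  # Lambda'(x)
    e = [0] * n
    for j in range(n):                       # Chien-style search over positions
        x = inv(EXP[j])
        if peval(Lam, x) == 0:               # alpha^j is an error locator
            e[j] = mul(EXP[j], mul(peval(Omega, x), inv(peval(dLam, x))))
    return [rj ^ ej for rj, ej in zip(r, e)], e
```

Running it on r(x) = α^4 x^12 + α^3 x^6 + α^7 x^3 recovers e_3 = α^7, e_6 = α^3, e_12 = α^4 and the all-zero codeword, as computed by hand above.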
6.3.2 Euclid's Algorithm

For error correction, the decoding steps using the extended Euclidean algorithm are as follows.

1. Compute the coefficients S_1, S_2, ..., S_{2t'} of the syndrome polynomial S(x) from the received polynomial r(x).
2. Apply the extended Euclidean algorithm with r_0(x) = x^{2t'+1} and r_1(x) = S(x) + 1 to compute the error-locator polynomial Λ(x) and the error-evaluator polynomial Ω(x). The iteration stops when deg r_i(x) ≤ t'.
3. Find the roots of Λ(x) and locate the error positions.
4. Determine the magnitude of the errors and store in the error polynomial e(x).
5. Subtract e(x) from the received polynomial r(x) for correction.

Example 6.4. Consider the (15, 9) triple-error-correcting, primitive Reed-Solomon code with the code polynomial v(x) and the received polynomial r(x) given in Example 6.3. From Example 6.3, we have S_1 = α^12, S_2 = 1, S_3 = α^14, S_4 = α^10, S_5 = 0, and S_6 = α^12. The syndrome polynomial is S(x) = α^12 x^6 + α^10 x^4 + α^14 x^3 + x^2 + α^12 x. Using the extended Euclidean algorithm as discussed in Section 5.5.2 of Chapter 5 to compute the error-locator polynomial Λ(x) and the error-evaluator polynomial Ω(x), we obtain Table 6.3. The iteration process stops when deg r_i(x) ≤ (t' = 3).
Table 6.3
Extended Euclidean Algorithm Decoding Steps

i | q_i(x)       | r_i(x) = Ω_i(x)                            | g_i(x) = P_i(x)     | b_i(x) = Λ_i(x)
0 | —            | x^7                                        | 1                   | 0
1 | —            | S(x) + 1                                   | 0                   | 1
2 | α^3 x        | α^13 x^5 + α^2 x^4 + α^3 x^3 + x^2 + α^3 x | 1                   | α^3 x
3 | α^14 x + α^3 | α^8 x^4 + α^6 x^3 + α^13 x^2 + α^4 x + 1   | α^14 x + α^3        | α^2 x^2 + α^6 x + 1
4 | α^5 x + α    | α^7 x^3 + α x^2 + α^3 x + α                | α^4 x^2 + α^2 x + α | α^7 x^3 + α^5 x^2 + α^8 x + α
4x 2 + a + 0'7 X + 1) and the error-evaluator polynomial D(x) = 73 2 3 63 2 2 r4(x) = a X + ax + a X + a = 0'(0' X + X + a X + 1). 3 9 d 12 By mal and error method, a , a , an a are roots 0 f A(x) such that A(x) = O. Their inverses are 0'12, 0'6, and 0'3. Therefore, a(a
6x3
o
(6.23)
To determine the error magnitudes, we substitute the syndrome coefficients 5 1,52 , and 53 and the valuesjl,h, andj3 into (6.18). We get 51 = a
J2
52 = 1
53
= 0'14
3
a e3 = (a 3)2
=
=
(a 3)3
6
+ a e6 + (a 6)2
+ a
12
el2
e6 + (a12)2e12 12)3 6)3 e6 + (a eI2 e3 + (a e3
Hence,
0'7, e6 = 0'3, and el2 glvmg e3 (6.23), we have 4 12
e(x) = a x
4
a . Substituting e3' e6, and el2 into
3 6
+ax
7 3
+ax
The decoded vector v(x) = r(x) - e(x) = O. Triple errors are corrected.
6.4 Correction of Errors and Erasures 6.4.1
Berlekamp-Massey Algorithm Decoding
Error-and-erasure correction of Reed-Solomon codes using the BerlekampMassey algorithm can be accomplished as follows. Let a be an element over GF(q = pm), p is some prime. Denote the infinite-degree syndrome polynomial by 5(x)
= ... + 521'x
21'
+ 521'-1 x
21'-1
where the Lt' known syndrome coefficients
+ ... + 5 2x
2
+ 5 1x
(6.24)
Error-Control Block Codes for Communications Engineers
152
Si
= r(a i ) = r n-l ( a i)n-I
(6.25)
I) + ro
( i)n-2
+ . . . + rl (a
( i)n-2
i) + eo + . . . + el a
+ r n- 2 a
(6.26)
or
S,
= e(a =
l
(6.27)
)
i)n-I
en-l ( a
+ en-2 a
(
u: Suppose that the error vector has TJ = v + f nonzero digits at
for 1 ~ i ~ locations it,
ji- . . . ,
i TJ , then
(6.28)
where 0 ~ i, < j i . . . < i TJ < nand 2v + f 5, 2t'. The coefficients in e(x) represent the error values and are related to the known syndrome coefficients by the following equation TJ
s, = L ej,(ai/)i
(6.29)
1=1
for 1 ~ i 5, ir. Also denote the error-locator polynomial by A(x)
=
ll
AlIx + AlI_1X
ll
-
1
+ . . . + A1x +
(6.30)
Define an erasure-locator polynomial (6.31 )
where v and f are the number of errors and erasures in the received polynomial r(x), respectively. The erasure-locator polynomial is defined so that one can compute the syndrome coefficients by substituting arbitrary symbols into the erased positions. The substitution offarbitrary symbols into the erased positions can introduce up to f errors. Then the polynomial 'I'(x)
= A(x)f(x) \Tr
='YTJX =
TJ
\Tf
+'Y1/-1X
(6.32) TJ-l
\Tr
+···+'Ylx+l
(l - aJ1x)(I - a J2x) ... (l - aJqx)
(6.33) (6.34)
TJ
=
IT (l != 1
ai/x)
(6.35)
Reed-Solomon Codes
153
represents the error-and-erasure-Iocator polynomial of degree 'Tf, and the power terms associated with the inverses of the roots of'l'(x) give the locations of errors and erasures. In a similar fashion to the technique described in Section 5.5.1 of Chapter 5 for error-only decoding ofBCH codes, it is easily verified that the coefficients of 'I'(x) and the known syndrome coefficients are related by the following equation. (6.36) Because 2v +15, 2t', the above equation always specifies known syndrome coefficients 1 5, i 5, 2t', if l' is in the interval 1 5, l' 5, u, Letting j = l' + 'Tf, we obtain
s..
(6.37) for 'Tf + 1 5, j 5, 'Tf + u, Again, the functions defined by (6.37) are Newton's identities. Rearranging (6.37), we get 5 j = -'I'15j - 1 - 'l'2 5j-2 - ... - '1'7/-1 5j_7/+1 - 'I' 7/5j_7/
(6.38)
7/
= - I'I'/5j-1
(6.39)
1=1
Define
flex) = 'I'(x)[5(x)
(6.40)
+ 1]
where the error-evaluator polynomial "'() x
.1£
5 2t' +7/ = ,It '1:'7/ 2t'X
+
('It 5 ,Tr 5 ) 2t' +7/-1 + ... '1:'1]-1 Tt' + '1:'7/ 2t'-1 x
,
+ (52t' + 'l'I S2t'-1 + 'l'2 52t'-2 + ... + 'l'7/ 52t'-7/)X + 'l'1]5 1)x1]+1
+ (57/+1 + '1'1 57/ + '1'2517-1 +
+ (52 + '1'151 + 'l'2)X
2
+ ...
(6.41)
+ 'l'1])x7/ + ...
+ (51] + '1'1517-1 + '1' 2517-2 +
+ (53 + '1'\52 + '1'251 + 'l'3)X
2t'
3
+ (51 + 'l'1)x+ 1
Assuming v errors and I additional errors added in the erased positions, then the degree of flex) is less than or equal to 1]. This means that the
154
Error-Control Block Codes for Communications Engineers
coefficients of D,(x) from x 1J + I through x2t' define exactly the same set of equations as (6.36). These coefficients provide a set of 2t' - v - f linear equations in YJ unknown coefficients of 'I'(x) and can be solved if 2v + f ~ 2t'. Given the 2t' known syndrome coefficients of S(x), the decoding problem becomes one of finding an error-and-erasure-locator polynomial 'I'(x) of degree YJ thar satisfies rhe following key equation D,(x) == 'I'(x)[S(x) + 1] rnodulo-x
2t' 1 +
(6.42)
Since F(x) is a factor of'l'(x), f(x) is used to initialize the BerlekarnpMassey algorithm at step J-t = f = degf(x) together with the known syndrome coefficients to compute the error-and-erasure-locator polynomial 'I'(x). To compute the syndrome coefficients, we assign arbitrary values to the symbols at the erasure positions. In practice, zeros are often assigned to the erasure positions. This reduces the number of computations in the syndrome calculations. Since arbitrary assignment of symbols into the erased positions may result in incorrect guessing of those symbols, we need to estimate the values associated with the erasure positions. In a similar fashion to the technique described in Section 6.3.1 for erroronly decoding of Reed-Solomon codes, the error-and-erasure values may be calculated from
(6.43)
where 0 ~ ji < nand 1 ~ i ~ YJ, !l(x) is the error-evaluator polynomial, and 'I"(x) is the formal derivative of the error-and-erasure-locator polynomial 'I'(x). The error-evaluator polynomial is computed from (6.42). For error-and-erasure correction, the decoding steps can be summarized as follows. 1. Compute the erasure-locator polynomial f(x).
2. Modify the received polynomial r(x) by substituting arbitrary received symbols in the erasure positions in r(x) and compute the syndrome coefficients 5 I, 52, ... , S2t'. 3. Replace the error-locator polynomial A(x) shown in Figure 5.2 of Chapter 5 by the error-and-erasure-locator polynomial 'I'(x). 4. Set initial conditions J-t = f, B(x) = xf'(»), 'I'(,u)(x) = I'(x), j = 0, and L p = f
Reed-Solomon Codes
155
5. Use the syndrome coefficients 51' 52, ... , 52t' and apply the Berlekamp-Massey algorithm to compute the error-and-erasure-locator polynomial 'I'(x). 6. Compute the error-evaluator polynomial !1(x), where !1(x) == 'I'(x)[5(x) + 1]modulo_x 2t ' + I . 7. Find the roots of'l'(x) and locate the error-and-erasure positions. 8. Determine the magnitude of the errors and erasures associated with the error-and-erasure positions and store in e(x). 9. Subtract e(x) from the modified received polynomial for correction. 4
Example 6.5 For t' = 3, P(x) = x + x + 1 with a as a primitive root, the generator polynomial g(x) of a OS, 9) triple-error-correcting nrirnitive . 6 10 5 14 4 4 3 6 2 '9 6 Reed-Solomon code IS x + a x + a x + a x + a x + a x + a . 11 10 7 3 2 Assume v(x) = 0 and r(x) = a x + a x + x + x, where r1 = 1 and r: = 1 denotes erasures. Use the Berlekamp-Massey algorithm to compute the error-and-erasure-locator polynomial and decode. 2x) 3x2 The erasure-locator polynomial f(x) = 0 - ax)(l - a = a + a\ + 1. The syndrome vector
S
=
[51
52t']
52'"
where
Therefore, 51 = a
=a =a 3
11(a 10 6
+a
) +
10
=
2
+ a
2
+a +a
13
3
Similarly, 52 = a , 53 = a , 54 = a
5(x)
a 7(a 3 ) + a
a 9x6 + a 3x 5 + a
14 ,55
14x4
= a 3, an d
+ a
S6
= a 9.
3x3 + a 3x2 + a 13x
Using the Berlekarnp-Massy algorithm we obtain Table 6.4. The iteration process stops when J-L = Zt' = 6. From Table 6.4, the error. 4 14 3 2 14 h and-erasure-locator polynomial 'I'(x) = ax + a x + x + a x + 1. T e 4 21 3 error-evaluator polynomial !1(x) == 'I'(x) [5(x) + 1] modulo-x ' + 1 = a x + forrnal deri ',Tr a'3x 3 + a 5x 2 + a 2x + 1. The formal envanve 't' (x) = a 14x 2 + a 14.
Error-Control Block Codes for Communications Engineers
156
Table 6.4 Berlekamp-Massey Iterative Algorithm Decoding Steps W(pl(x)
B(x)
dp
JL
j
Lp
0
2 3 3 4 4
0 1 2 3 4 5 6
f(x)
xf(x) a 12
6x 3
8x 2
3x
14x+
x +a +a 1 a 3x 2 + a 5x + 1 ax 4 + a 3x 3 + a 8x 2 + a 5x + 1 ax4 + a 14x 3 + x 2 + a 14x + 1
a +a +a a 6x 4 + a 8x 3 + a 3x 2 a 8x 3 + a 10x2 + a 5x a 8x 4 + a lOx3 + a 5x 2
a9 a 10 a7
6x 2
3
1
1
2 2
. By tnal and error method, a 5,a 12,a 13,and a 14 are roots 0 f the errorand-erasure-locator polynomial 'I'(x) such that 'I'(x) = O. Their inverses are 3 a 2,an d a. T hererore, C a 10,a,
(6.44)
Use (6.43),
r
r
r
r
_a 3{a3 • [(a 3)4 l + a 3 . [(a 3)3 l + a 5 • [(a 3)2 l + a 2 • (a 3 1 + I}
r
l + a 14
2
. a
a 14 • [(a 3)2
a 3{a 3 . a- 12 + a 3 . a -9 + a 5 . a -6 + a 2 . a -3 + I} a 3
a {a
3
3
=
+ a
3
. a
. a
6
-6
+ a
5
9
14
+ a . a + a 14 9 14 a . a + a 14 14 3{a6 9 a + a + a + a + I} 8 14 a +a 3 10 a 'a
a
. a
14
12
+ l}
6
a7 Similarly, el = I, «: = I, and t' + (112) and deg ri(x) :S t' + (112), then we are assured that deg bi(x) :S t' - (112) and deg b i+ 1(x) > t' - (112), respectively. The case for lodd follows in a similar manner. In this case, if deg ri-l (x) > t' + (1- 1)/2 and deg ri(x) :S t' + (1- 1)/2, then we are assured that deg bi(x):S t' - (I - 1)/2 and deg b i + 1(X) > t' - 'J': 1)/2, respectively. Letting ni(x) = ri(x) and Ai(x) = bi(x), this implies that the partial results at the z-th step provide the only solution to the key equation. The only solution that guarantees the desired degree of n(x) and A(x) is obtained at the first i. Thus, we simply apply the extended Euclidean algorithm to x 21' + 1 and 2(x) + 1, until Consider
t
deg ri(X) :S
{
t
l
+
I
1;-
for even I 1
+--
2
for oddf
The modified syndrome coefficients are used to compute the error-locator polynomial A(x) and the error-evaluator polynomial n(x). We can then compute the error-and-erasure polynomial 'I'(x) , where 'I'(x) = A(x)f(x). For error-and-erasure decoding, the decoding steps can be summarized as follows. I. Compute the erasure-locator polynomial r(x).
2. Modify the received polynomial r(x) by substituting arbitrary received symbols in the erasure positions in r(x) and compute the syndrome coefficients 5 I, 52, ... , S2t'.
3. Compute f(x)[S(x) + I] and the modified syndrome polynomial 21 S(x), where 2(x) == f(x)[S(x) + 1] - 1 modulo-x ' + I. 21
4. Apply the extended Euclidean algorithm with ro(x) = x ' + I and rl (x) = 2(x) + 1 to compute the error-locator polynomial A(x) and the error-evaluator polynomial n(x). The iteration stopS when
Error-Control Block Codes for Communications Engineers
160
t' + [
deg ri(X)
s
I
for even!
, ;_ 1
t +--
5. Find the roots of'l'(x) positions.
=
2
for odd!
A(x)nx) and locate the error-and-erasure
6. Determine the magnitude of the errors and erasures associated with the error-and-erasure positions and store in e(x).
7. Subtract e(x) from the modified received polynomial for correction. Example 6. 6 Consider the (15, 9) triple-error-correcting, prImitive Reed-Solomon code with the code polynomial v(x) and the received polynomial r(x) given in Example 6.5. From Example 6.5, we have I' (x) = (1 - ax)(1 - a 2x) = a 3x 2 + a 5x + 1, 13 3 51 = a , 52 = a , 14, 3, 3, 9. 53 = a 54 = a 55 = a and 56 = a The syndrome polynomial is 5 (x) = a 9x 6 + a 3x 5 + a 14x 4 + a 3x 3 + a 3x 2 + a 13x. T hus, nx)[5(x) + 1]
=
a
12 8
8 7
7 6
x + a x + a x 7x+ 3x 2 + a 1 + a
+ a
10 5
x
+ a
12 3
x
and 2(x) + 1
= f(x)[5(x)
2t
+ 1]modulo-x ' +1 7 6 10 5 12 3 3 2 7 =ax +a x +a x +ax +ax+
Using the extended Euclidean algorithm to compute the error-locator polynomial A(x) and the error-evaluator polynomial fi(x), we obtain Table
6.5. The iteration process stops when deg ri(x)
s
(t'
+ { =
4).
From Table
6.5, the error-locator polynomial A(x) = b3 (x ) = a 9x 2 + a 8x + II
13 2
12
.
all
=
a (a x + a x + 1) and the error-evaluator polynomial fi(x) = r3(x) = l 4x3 l 3x 2 l 4x4 2 2x l l(a 3x4 3x3 + a + ax + a + all = a + a + a 5x + a + a l 2x4 1). The error-and-erasure-locator polynomial 'I'(x) = A(x)nx) = a + 10 3 II 2 10 II II 4 14 3 2 14 a x + a x + a x + a = a (ax + a x + x + a x + 1). The C . ,'r'() ' rorma I deri errvatrve 't' x = a 10x 2 + a 10 = a II( a 14x 2 + a (4) . Th e soI utions fi(x), 'I'(x), and 'I"(x) are identical (within a constant factor) to those found
Reed-Solomon Codes
161
Table 6.5 Extended Euclidean Algorithm Decoding Steps qi(X)
'i(X)
=0i(X)
Bi(X)
=Pi(X)
b i(X)
=Ai(X)
--_.. _ - - - - - - - - -
0 1 2
a 8x + a
3
aX+ a
11
x7 1 SIx) + 1 0 7 a 6x5 + a 5x4 + a x 3 + a 3x2 + 1 12x a + all a
14x4
a
+ a 14x3 + ax 2 + a 13x + ax+ a
0 1 a 8x+ a 11 a 9x2 + a 8x + all
11
in Example 6.5 using the Berlekamp-Massey algorithm. Clearly, the error-anderasure locations and values must be identical to these found in Example 6.5 and (6.60)
h el = 1, e2 = 1, e3 = a 7 ,and elO = a 11. were Substituting el' e2, e3' and elO into (6.60), we have e(x) = a
The decoded vector v(x) errors are corrected.
11 10
x
= r(x)
7 3 2 + a x +x +x
- e(x)
= O. Double erasures and double
6.5 Computer Simulation Results The variation of bit error rate with the EblNo ratio of two primitive ReedSolomon codes with coherent BPSK signals and Berlekamp-Massey algorithm decoding for AWGN channels has been measured by computer simulations. The system model is shown in Figure 6.3. Here, E b is the average transmitted bit energy, and N o/2 is the two-sided power spectral density of the noise. The parameters of the codes are given in Table 6.6. Perfect timing and synchronization are assumed. In each test, the average transmitted bit energy was fixed, and the variance of the AWGN was adjusted for a range of average bit error rates. The simulated error performance of the Reed-Solomon codes with Berlekamp-Massey algorithm decoding in AWGN channels is shown in Figure
Error-Control Block Codes for Communications Engineers
162
v
r·
__.__..
_--~
Channel codeword
Transmission path (Analog channel)
Noise
1\
U I
Data sink
R
1.....- - - 1
Estimate of U
L..-_ _--J
Received vector
~
_
---..- -_.._----,
Discrete noisy channel
Figure 6.3 Model of a coded digital communication system.
Table 6.6 Parameters of the Primitive Reed-Solomon Codes (n, k)
dmin
r
g(x)
115, 11) (15,9)
5 7
2 3
x 4 + a 13x 3 + a 6x 2 + a 3x + a lO x 6 + a 10x 5 + a 14x 4 + a 4x 3 -;- a 6x 2 + a 9x+ a 6
6.4. Comparisons are made between the coded and uncoded coherent BPSK systems. At high E b/No ratio, it can be seen that the performance of the coded BPSK systems is better than the uncoded BPSK systems. At a bit error rate 4 of 10- , the (15, 11) double-error-correcting, and (15, 9) triple-error-correcting Reed-Solomon coded BPSK systems give 1.4 and 1.7 dB of coding gains over the uncoded BPSK system, respectively.
Reed-Solomon Codes 0
10
10
10 \I)
li1...
...0
10
Qj .1::
m 10 10
10
163
-1
-2
-3
-4
-5
-6
-2
0
2
4
6
8
10
12
-a- Uncoded coherent BPSK
......
-0-
(15, 11) Reed-Solomon code (15,9) Reed-Solomon code
Figure 6.4 Performance of Reed-Solomon codes with Berlekamp-Massey algorithm decoding in AWGN channels.
References [1]
Reed, I. S., and G. Solomon, "Polynomial Codes over Certain Finite Fields," SIAM Journal on Applied Mathematics, Vol. 8, 1960, pp. 300-304.
[2]
Berlekarnp, E. R., Algebraic Coding Theory, New York: McGraw-Hill, 1968.
[31
Peterson, W. W., and E. L. Weldon, jr., Error Correcting Codes, 2nd ed., Cambridge, MA: MIT Press, 1972.
[4]
MacWilliams, F. J., and N. J. A. Sloane, The Theory ofError Correcting Codes, Amsterdam: North-Holland, 1977. Blahut, R. E., Theory and Practice of Error Control Codes, Boston, MA: Addison-Wesley, 1984. Wicker, S. B., and V. K. Bhargava Eds., Reed-Solomom and Their Applications, IEEE Press. 1994.
[5] [6] [7]
Forney, G. D., Jr.,"On Decoding BCH Codes," 11:.'1:.'£ Trans. on Inftrmation Theory, Vol. IT-II, No.5, October 1965, pp. 393-413.
[8]
Forney, G. D., Concatenated Codes, Cambridge, MA: MIT Press, 1966.
This page intentionally left blank
7 Multilevel Block-Coded Modulation 7.1 Introduction We have seen that in classical digital communication systems with coding, the channel encoder and the modulator design are considered as separate entities. This is shown in Figure 7.1. The modem (modulator-demodulator) transforms the analog channel into a discrete channel and the channel codec (encoder-decoder) corrects errors that may appear on the channel. The ability to detect or correct errors is achieved by adding redundancy to the information bits. Due to channel coding, the effective information rate for a given transmission rate at the encoder output is reduced. Example 7.1 It can be seen from Figure 7.2 that the information rate with channel coding is reduced for the same transmission rate at locations 2'
ISource
u Binary information vector
V
.I Channel I I encoder I
Channel codeword
J
Modulator
•
I
Transmission Noise
path
(Analog channel) 1\
U
ISink IEstimate
I Channell
R
I decoder I Received
of U
vector
Figure 7.1 Model of a coded digital communication system.
165
•
I Demodulator Discrete nosiy channel
Error-Control Block Codes for Communications Engineers
166
1'
Isource
I 2 biVs
2'
~ I No coding 12 biVs 4'
3'
Is ource 1--""'--'1 biVs .
1
819oa' at location 3'
2biVs
Jf---------J o
Signal at locations 1" 2' and 4'
SAME transmission rate
1 'Tlma
-+---+---H~Time
0
1 second
Figure 7.2 Comparison of coded and uncoded signaling schemes.
and 4'. If we increase the information rate with coding from 1 bps to 2 bps, the transmission rate at the encoder output increases to 4 bps. In modem design, we can employ bandwidth efficient multilevel modulation to increase spectral efficiency. For 2 Y-ary modulation, y number of bits are used to specify a signal symbol which, in rum, selects a signal waveform for transmission. If the signaling interval is T seconds, the symbol rate is 1IT symbol/s and the transmission rate at the input of the modulator is yl T bps. Also, the signal bandwidth is inversely proportional to the signaling interval if the carrier signal frequency I. = 1/ T. Example 7.2 Consider Figure 7.3. For BPSK (y
=
1): symbol rate, 1/ T = 2 symbol/so information rate, yl T = 2 bps.
For 4-PSK (y = 2): symbol rate, II T = 1 symbol/so information rate, yl T = 2 bps.
It can be seen from Figure 7.3 that 4-PSK (l symbol/s) requires less bandwidth than BPSK (2 symbolls) for the same information rate of 2 bps. Suppose the following: For BPSK (y = 1): symbol rate, II T = 2 symbolls. information rate, yl T = 2 bps. For 4- PSK (y = 2): symbol rate, 1/ T = 2 symbol/so information rate, yl T = 4 bps.
Multilevel Block-Coded Modulation 1'
SAME information rate
{
2 bit/s
.1
BPSK
3' 2 bit/s
·14-PSK
I
167 2'
•
2 symbol/s 4'
I1
•
symbol/s
Signal at locations l' and 3'
BPSK signal at location 2'
4-PSK signal at location 4'
1 second Figure 7.3 Comparison of uncoded BPSK and 4-PSK modulation schemes at same information rate.
4-PSK (4 bps) now has a higher information rate than BPSK (2 bps) for the same bandwidth requirement. From our early example, we know that channel coding reduces the information rate for the same symbol rate. To compensate for the loss of information rate, two methods can be employed. In the first method, if the type of modulation is fixed and the bandwidth is expandable (i.e., to increase the symbol rate), we simply increase the information rate from the output of the source for the coded system. In the second method, if the bandwidth is fixed, we increase the modulation level of the coded system. That increases the information rate of the coded system. Example 73 From Figure 7.4, it can be seen that the information rate at location 4' with coding is reduced when compared with the uncoded case at location l ' for the same modulation and bandwidth constraints. Example 74 The information rate at location 4' is increased from 1 bps to 2 bps. From Figure 7.5, it can be seen that more bandwidth is needed with coding for the same type of modulation and the same information rate at locations l ' and 4'. Example 75 The information rate of 1 bps is maintained at location 4'.
Error-Control Block Codes for Communications Engineers
168
t : : __ ·--'--,2'
"
--.~ _~~ ~~~~~~ 2 bit/s
J
4' ~ R~~/2 , bit/s
I
.r l 2 bit/s 5'
2 bit/s
Signal at location 4'
~
3'
PI'
BPSK 2 symbol/s BPSK
S' ~
'/3' = , bit/symbol
I 4'/S'=O.5bitlsymbol I
2 symboVs
I'oj
l. 0
Time
I
Signal at locations ",2' and 5'
- I f - - - f - - - - + + Time
BPSK signal at locations 3' and S'
~4--I-H"---T--++-
o
Time
, second
Figure 7.4 Comparison of uncoded and coded BPSK modulation schemes at same symbol rate.
From Figure 7.6, it can be seen that for the same information rate at locations l' and 4' with different modulation level (BPSK and 4-PSK), the uncoded BPSK and the coded 4-PSK require no change in the bandwidth requirement. The following discussion is concerned with bandwidth-constrained channels. We have already seen that if we double the modulation level with coding, the uncoded and the coded modulation schemes have the same information rate and the bandwidth requirement remains unchanged. Bandwidth-efficient modulation with coding has been studied by many researchers [1--4), but trellis-coded-modulation (TCM) was first introduced and documented by Ungerboeck [5] in 1982. The technique combines the design of convolutional codes with modulation and the codes were optimized for the minimum free Euclidean distance [5J. Here, we shall restrict ourselves to a particular kind of bandwidth-efficient coded-modulation design, namely, multilevel block-coded modulation (MBCM). MBCM is a technique for combining block coding and modulation. The technique allows us to construct an L-level code C from L component codes C 1, Cz, ... , CL with minimum Hamming distances of d], d z, ... , d i- respectively, and d] ~ d z ~ ... ~ d i. We now ask ourselves some questions. Can we simply use well-known block codes from a textbook? Will the multilevel block-coded modulation
Multilevel Block-Coded Modulation l'
,-----------,
2'
~~.~?_:~~i~~_rl 2 bitls
~ .-I R~~/2
I
2 bitls
2 bitls 5'
~
4 bit/s
3' BPSKP 2 symbol/s
I 1'/3' = 1 bit/symbol
6' BPSK ~
I 4'/6' = 0.5 bit/symbol I
169
4 symbol/s
Signal at locations 1',2' and 4' BPSK signal at location 3'
Signal at location 5'
BPSK signal at location 6'
o 1 second
Figure 7.5 Comparison of uncoded and coded BPSK modulation schemes at same information rate.
scheme be an optimum scheme? The answer is yes provided we design the coding and the modulation as a single entity. Although the codes were optimized for the minimum Hamming distance, the mapping of the coded sequence onto the 2 L_ary modulation signal symbol sequence can give a good minimum Euclidean distance and the approach gives surprisingly good performance. Multilevel block-coded-modulation schemes are analogous to the TCM schemes in that they use the geometry of the signal constellation to achieve a high minimum Euclidean distance. In 1977, multistage decoding of block codes was first discussed by Imai and Hirakaw [6]. Further work by Cusack [7, 8] and Sayegh [9] led to the introduction of multilevel block-coded modulation for bandwidth constrained channels. Since their publications, active research works have been carried out by others in the development of I-dimensional (I-D) and 2-dimensional (2-D) combined block coding and modulation techniques as an alternative to TCM schemes when the implementation of the latter is not economically feasible at very high transmission rates [10-46]. This chapter discusses the principles of 2-D MBCM with maximum-likelihood and
Error-Control Block Codes for Communications Engineers
170 l'
,----------,
2'
3' BPSK 1 symbol/s
1'/3' = 1 bit/symbol
~ 4-PSK~ I
4'/6'= 1 bit/symbol
~:_ ~_~ ~~~~n?__~ 1 bit/s
1 bit/s
~ R~~~/21 1 bit/s
5'
P
2 bit/s
1 symbol/s
~nme
Signal at locations 1',2' and 4'
01
BPSK signal at location 3'
O~.Tlme
Signal at location 5'
~
4-PSK signal at location 6'
I·Time Time
a •
1 second
.j
Figure 7.6 Comparison of uncoded BPSK and coded 4-PSK modulation schemes at same
information and symbol rates.
sub-optimum decoding. We shall illustrate the design procedure of2-0 MBCM with M-PSK signals. The design procedure is also valid for MBCM schemes with 1-0 or other 2-D signal constellations.
7.2 Encoding and Mapping of Multilevel Block-Coded Modulation In multilevel block-coded modulation, an L-level code C is constructed ftom L component codes C], C z- ... , C L with minimum Hamming distances of d], d z, ... , dL, respectively, and d] ~ d z ~ ... ~ dL' In terms of error correcting capability, C 1 is the most powerful code. For simplicity, we assume that all component codes are binary, linear block codes. The choice of the l-th component code C, depends on the intra-subset distance of a partitioned signal set for 1 ~ I ~ L. In Figure 7.7, the encoding circuit consists of L binary encoders £], £z, ... , £L with rates R] = k] In, Rz = kzln, ... ,
Multilevel Block-Coded Modulation
171
s· Mapper
Figure 7.7 Encoding model of a multilevel block-coded modulation system.
RL = sc!». respectively. k, is the number of information symbols associated with the I-th component encoder and n is the block length of all encoders for 1 $ 1$ L. The information sequence is first partitioned into L components of length k J, k 2 , . . . , k L, and the I-th k ,-component information sequence is encoded by the Z-th encoder E, which generates a codeword of length n in
c,
We now illustrate the signal mapping procedure for a specific example, namely, the M-ary PSK signals. The signal constellation can be partitioned into two subsets and the subsets can themselves be partitioned in the same way. At each partitioning stage, the number of signal points in a set is halved and there is an increase in minimum Euclidean distance in a set. Figures 7.8 and 7.9 show the set-partitioning of 4-PSK and 8-PSK constellations, respectively. It is assumed that all the signal points are equally spaced and lie on a unit circle. The average signal symbol energy is equal to 1. Consider the 8-PSK signals shown in Figure 7.9. The Euclidean distance between two closest signals is 8\ = ""';0.586. Ifwe partition the 8-PSK signals
[00]
[10]
Figure 7.8 Set-partitioning of a 4-PSK constellation.
[01]
[11]
Error-Control Block Codes for Communications Engineers
172
[000]
[100]
[0101
[110]
(001)
[101]
[011)
[111)
Figure 7.9 Set-partitioning of an 8-PSK constellation.
in a natural way, we can see that the Euclidean distance between two closest whereas the smallest subset, signal points in the subset to, 2, 4, 6} is 02 = to, 4}, formed from the subset to, 2, 4, 6}, has the Euclidean distance 03 = 2. Hence, this set partitioning divides a signal set successively into smaller subsets with increasing intrasubset distance. We now form an array of n columns and L rows from the encoded L sequence, where M = 2 . Each column of the array corresponds to a signal point in the signal constellation diagram. Each symbol in the first row corresponds to the right-most digit and each symbol in the last row corresponds to the left-most digit in the representation of the signal points. The l-th row n-component vector corresponds to the codeword generated by an (n, k[, d,) component code C, with minimum Hamming distance d]. The total number of codes used in an MBCM scheme is simply L. There are L . n code symbols and we send n signals in a transmitted block. If the overall coding rate is R e , the total number of information symbols is L . n • R e . We have
-/2,
0< k,
~
n
(7.1)
and L
"Lk,=L'n'R c t; I
(7.2)
Multilevel Block-Coded Modulation
113
In matrix notation, let the input to the component codes be represented by the L-component vector

U = [U_1 U_2 ... U_L]    (7.3)

and the output of the component codes be represented by the L-component vector

V″ = [V″_1 V″_2 ... V″_L]    (7.4)

where the l-th encoder input is represented by the k_l-component subvector

U_l = [u_{l,0} u_{l,1} ... u_{l,k_l-1}]    (7.5)

and the l-th encoder output is represented by the n-component subvector

V″_l = [v″_{l,0} v″_{l,1} ... v″_{l,n-1}]    (7.6)

with elements 0 and 1, for 1 ≤ l ≤ L. The l-th encoding operation can be formulated as

V″_l = U_l G″_l    (7.7)

where G″_l is the k_l-by-n generator matrix of the l-th component code, and is given by

G″_l = [g″_{l,0,0}      g″_{l,0,1}      ...  g″_{l,0,n-1}
        g″_{l,1,0}      g″_{l,1,1}      ...  g″_{l,1,n-1}
        ...
        g″_{l,k_l-1,0}  g″_{l,k_l-1,1}  ...  g″_{l,k_l-1,n-1}]    (7.8)

with elements 0 and 1. The L-component codevector V″ can, in turn, be represented as

V″ = UG″    (7.9)

where
G″ = [G″_1  0     ...  0
      0     G″_2  ...  0
      ...
      0     0     ...  G″_L]    (7.10)
Define an L-by-n permutation matrix

P = [P_{1,0}  P_{1,1}  ...  P_{1,n-1}
     P_{2,0}  P_{2,1}  ...  P_{2,n-1}
     ...
     P_{L,0}  P_{L,1}  ...  P_{L,n-1}]    (7.11)

where the n-by-L sub-matrix P_{l,j} is given by

P_{l,j} = [p_{0,1}    p_{0,2}    ...  p_{0,L}
           p_{1,1}    p_{1,2}    ...  p_{1,L}
           ...
           p_{n-1,1}  p_{n-1,2}  ...  p_{n-1,L}]    (7.12)

1 ≤ l ≤ L, 0 ≤ j ≤ n - 1, with zero entries except the element p_{j,l} = 1. The codevector, V, of the L-level code C is

V = V″P    (7.13)
Substituting equation (7.9) into equation (7.13), we get

V = UG″P    (7.14)

where the n-component vector V is given by

V = [V_0 V_1 ... V_{n-1}]    (7.15)

and the L-component subvector V_j is

V_j = [v″_{1,j} v″_{2,j} ... v″_{L,j}]    (7.16)

0 ≤ j ≤ n - 1. In terms of the channel bits,

V = [v″_{1,0} v″_{2,0} ... v″_{L,0}  v″_{1,1} v″_{2,1} ... v″_{L,1}  ...  v″_{1,n-1} v″_{2,n-1} ... v″_{L,n-1}]    (7.17)

and [v″_{1,j} v″_{2,j} ... v″_{L,j}] is mapped onto a signal symbol s″_j, 0 ≤ j ≤ n - 1. After the signal mapping, we get the n-component vector

S″ = [s″_0 s″_1 ... s″_{n-1}]    (7.18)
with elements chosen from the signal set {0, 1, ..., M - 1} and M = 2^L.

Example 7.6 Choose n = 4 and the 4-PSK signal constellation shown in Figure 7.8. Let C_1 be the (4, 1, 4) binary repetition code with G″_1 = [1 1 1 1], and C_2 be the (4, 3, 2) binary single-parity-check code with

G″_2 = [1 0 0 1
        0 1 0 1
        0 0 1 1]

The resultant two-level BCM scheme has R_1 = 1/4, R_2 = 3/4, and an overall coding rate of 1/2. All the uncoded information bits are shown in Figure 7.10. The corresponding codewords and signal vectors generated by the component codes are shown in Figures 7.11 and 7.12, respectively.

Example 7.7 Choose n = 4 and the 8-PSK signal constellation shown in Figure 7.9. Let C_1 be the (4, 1, 4) binary repetition code with G″_1 = [1 1 1 1], and C_2 be the (4, 3, 2) binary single-parity-check code of
Figure 7.10 Partitioning of uncoded bits to the input of the component codes in Example 7.6.
Figure 7.11 Partitioning of codewords generated by the component codes in Example 7.6.
Figure 7.12 Partitioning of signal vectors generated by the component codes in Example 7.6.
G″_2 = [1 0 0 1
        0 1 0 1
        0 0 1 1]

and C_3 be the (4, 4, 1) binary nonredundant code with

G″_3 = [1 0 0 0
        0 1 0 0
        0 0 1 0
        0 0 0 1]

The resultant three-level BCM scheme has R_1 = 1/4, R_2 = 3/4, R_3 = 4/4, and an overall coding rate of 2/3. All the uncoded information bits are shown in Figure 7.13. The corresponding codewords and signal vectors generated by the component codes are shown in Figures 7.14 and 7.15, respectively.

Alternatively, the multilevel encoding process can be performed by a single encoding process. In this case, the (Σ_{l=1}^{L} k_l)-by-(L·n) generator matrix is

G = G″P    (7.19)

where G is given by equation (7.20), and the codevector is

V = UG    (7.21)
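The level-by-level encoding and signal mapping just described can be made concrete for the two-level scheme of Example 7.6. The sketch below (an illustration, not the book's program) encodes each level with equation (7.7) and maps array column j onto the 4-PSK label v″_{1,j} + 2·v″_{2,j}, following the labeling of Figure 7.8.

```python
# Two-level BCM encoder for Example 7.6:
# C1 = (4,1,4) repetition code, C2 = (4,3,2) single-parity-check code.
G1 = [[1, 1, 1, 1]]
G2 = [[1, 0, 0, 1],
      [0, 1, 0, 1],
      [0, 0, 1, 1]]

def encode(u, G):
    # binary vector-matrix product over GF(2)
    return [sum(ui * gi[j] for ui, gi in zip(u, G)) % 2
            for j in range(len(G[0]))]

def bcm_encode(u1, u2):
    v1 = encode(u1, G1)          # first row of the array
    v2 = encode(u2, G2)          # second row of the array
    # column j is mapped onto the 4-PSK point labeled v1[j] + 2*v2[j]
    return [a + 2 * b for a, b in zip(v1, v2)]

print(bcm_encode([1], [0, 1, 1]))    # -> [1, 3, 3, 1]
```

The input U_1 = [1], U_2 = [0 1 1] yields the signal vector [1 3 3 1], the same S″ obtained in Examples 7.8 and 7.9.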
Also, the parity-check matrix associated with the generator matrix G is the (Σ_{l=1}^{L} (n - k_l))-by-(L·n) matrix H, where

H = H″P    (7.22)

and
G = [g″_{1,0,0}      0           ...  0                g″_{1,0,1}      0           ...  0                ...  g″_{1,0,n-1}      0             ...  0
     ...
     g″_{1,k_1-1,0}  0           ...  0                g″_{1,k_1-1,1}  0           ...  0                ...  g″_{1,k_1-1,n-1}  0             ...  0
     0               g″_{2,0,0}  ...  0                0               g″_{2,0,1}  ...  0                ...  0                 g″_{2,0,n-1}  ...  0
     ...
     0               g″_{2,k_2-1,0} ... 0              0               g″_{2,k_2-1,1} ... 0              ...  0                 g″_{2,k_2-1,n-1} ... 0
     ...
     0               0           ...  g″_{L,0,0}       0               0           ...  g″_{L,0,1}       ...  0                 0             ...  g″_{L,0,n-1}
     ...
     0               0           ...  g″_{L,k_L-1,0}   0               0           ...  g″_{L,k_L-1,1}   ...  0                 0             ...  g″_{L,k_L-1,n-1}]    (7.20)

That is, the row of G associated with the i-th information bit of the l-th component code places the entries g″_{l,i,0}, g″_{l,i,1}, ..., g″_{l,i,n-1} in the l-th position of each successive group of L columns, with zeros elsewhere.
Figure 7.13 Partitioning of uncoded bits to the input of the component codes in Example 7.7.
Figure 7.14 Partitioning of codewords generated by the component codes in Example 7.7.
Figure 7.15 Partitioning of signal vectors generated by the component codes in Example 7.7.
H″ = [H″_1  0     ...  0
      0     H″_2  ...  0
      ...
      0     0     ...  H″_L]    (7.23)
H″_l is the (n - k_l)-by-n parity-check matrix associated with the l-th component code C_l, with elements 0 and 1, for 1 ≤ l ≤ L.

Example 7.8 If L = 2, U_1 = [1], U_2 = [0 1 1], G″_1 = [1 1 1 1], and

G″_2 = [1 0 0 1
        0 1 0 1
        0 0 1 1]

then

V″_1 = U_1G″_1 = [1 1 1 1]
V″_2 = U_2G″_2 = [0 1 1 0]
V″ = [V″_1 V″_2] = [1 1 1 1 0 1 1 0]
U = [U_1 U_2] = [1 0 1 1]

P = [1 0 0 0 0 0 0 0
     0 0 1 0 0 0 0 0
     0 0 0 0 1 0 0 0
     0 0 0 0 0 0 1 0
     0 1 0 0 0 0 0 0
     0 0 0 1 0 0 0 0
     0 0 0 0 0 1 0 0
     0 0 0 0 0 0 0 1]

V = V″P = [1 0 1 1 1 1 1 0]

Example 7.9 If L = 2, U_1 = [1], U_2 = [0 1 1], G″_1 = [1 1 1 1], and

G″_2 = [1 0 0 1
        0 1 0 1
        0 0 1 1]

then

G″ = [G″_1  0
      0     G″_2]

   = [1 1 1 1 0 0 0 0
      0 0 0 0 1 0 0 1
      0 0 0 0 0 1 0 1
      0 0 0 0 0 0 1 1]

G = G″P

  = [1 0 1 0 1 0 1 0
     0 1 0 0 0 0 0 1
     0 0 0 1 0 0 0 1
     0 0 0 0 0 1 0 1]

V = UG
  = [v″_{1,0} v″_{2,0} v″_{1,1} v″_{2,1} v″_{1,2} v″_{2,2} v″_{1,3} v″_{2,3}]
  = [1 0 1 1 1 1 1 0]
This agrees with the results obtained using the two-level encoding process in Example 7.8. In both cases, the 4-PSK modulated signal vector is S″ = [1 3 3 1].

Given equations (7.1) and (7.2), we want to choose {k_l} to maximize the minimum Euclidean distance between the codewords of the code. Let (n, k_l, d_l) denote code C_l. Consider two different codewords in C_l. For each one of the positions where these two codewords differ, the squared Euclidean distance between the corresponding signal points is ≥ δ_l². Since two distinct codewords must differ in at least d_l positions, the minimum squared Euclidean distance between two codewords in C_l must be ≥ δ_l²·d_l. In general, the minimum squared Euclidean distance d²_med between the codewords of the L-level code C is

d²_med ≥ min{δ_1²·d_1, δ_2²·d_2, ..., δ_L²·d_L}    (7.24)

Consider the rate-1/2 two-level block-coded 4-PSK signals shown in Example 7.6 and the uncoded BPSK signals. It is assumed that all the signal points are equally spaced and lie on the unit circle. The average signal symbol energy is equal to 1. For the rate-1/2 block-coded 4-PSK signals, L = 2, k_1 + k_2 = 4, there is a total of 16 codewords in C, L·n = 8, δ_1² = 2, d_1 = 4, δ_2² = 4, d_2 = 2, and d²_med ≥ 8. The minimum squared Euclidean distance of the uncoded BPSK signals is 4. The information rate and the bandwidth of both signaling schemes are the same, but the coded signaling scheme requires more signal points for transmission than the corresponding uncoded signaling scheme. It can be seen that the Euclidean distance between any two nearest coded 4-PSK signal points is smaller than that between the uncoded BPSK signals. The gain from coding with 4-PSK signals must, therefore, outweigh the loss of distance due to signal expansion. Otherwise, the coded 4-PSK signaling scheme will not outperform the corresponding uncoded BPSK signaling scheme.
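The distance comparison above reduces to evaluating the bound (7.24); a minimal sketch of the computation:

```python
import math

def d2_med(hamming_d, delta_sq):
    # lower bound (7.24): minimum over levels of d_l * delta_l^2
    return min(d * s for d, s in zip(hamming_d, delta_sq))

# Rate-1/2 two-level coded 4-PSK of Example 7.6:
coded = d2_med([4, 2], [2.0, 4.0])        # d1*delta1^2 = 8, d2*delta2^2 = 8
uncoded_bpsk = 4.0                        # antipodal unit-energy signals
gain_db = 10 * math.log10(coded / uncoded_bpsk)
print(coded, uncoded_bpsk, round(gain_db, 2))
```

For this scheme the bound gives 8.0 against 4.0 for uncoded BPSK, i.e., an asymptotic coding gain of about 3 dB at the same information rate and bandwidth.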
7.3 Decoding Methods

7.3.1 Maximum-likelihood Decoding
Maximum-likelihood decoding involves finding the most likely codeword in C among the 2^(Σ_{l=1}^{L} k_l) codewords in C. This is done by computing the squared Euclidean distance between the received n-component signal vector R = [r_0 r_1 ... r_{n-1}] and all possible n-component signal vectors, bearing in mind that there is a one-to-one correspondence between each n-component signal vector and codeword in C. The closest n-component signal vector to the received vector is taken as the decoded signal vector. The corresponding codeword in C can now be determined.

Example 7.10 Choose n = 4 and the 4-PSK signal constellation shown in Figure 7.8. Let C_1 be the (4, 1, 4) binary repetition code with G″_1 = [1 1 1 1] and C_2 be the (4, 3, 2) binary single-parity-check code with

G″_2 = [1 0 0 1
        0 1 0 1
        0 0 1 1]

All the information vectors, the codewords generated by the component codes, and the corresponding signal vectors are shown in Figures 7.10, 7.11, and 7.12, respectively. Assume that the all-zero codeword is transmitted and that the 4-component received signal vector is R = [r_0 r_1 r_2 r_3] = [(0.25, -0.5) (0.25, -0.5) (0.25, -0.5) (0.5, 0.3)], where each received signal point r_j is represented by its x- and y-coordinates. The squared Euclidean distances between the vector R and all signal vectors S″ are shown in Table 7.1. The squared Euclidean distance between R and the all-zero 4-component signal vector is 2.7775, which makes the all-zero signal vector the closest to R. The decoded codeword is the all-zero codeword, and the corresponding decoded information vector is the all-zero information vector. The errors are corrected.
Table 7.1
Squared Euclidean Distance Computation

Signal Vector S″    Squared Euclidean Distance d²(R, S″)
0 0 0 0             2.7775
0 0 2 2             5.7775
0 2 0 2             5.7775
0 2 2 0             4.7775
2 0 0 2             5.7775
2 0 2 0             4.7775
2 2 0 0             4.7775
2 2 2 2             7.7775
1 1 1 1             7.6775
1 1 3 3             6.8775
1 3 1 3             6.8775
1 3 3 1             3.6775
3 1 1 3             6.8775
3 1 3 1             3.6775
3 3 1 1             3.6775
3 3 3 3             2.8775
If hard-decision decoding is used, the received signal vector is quantized to [(0.0, -1.0) (0.0, -1.0) (0.0, -1.0) (1.0, 0.0)]. The squared Euclidean distance between the quantized received signal vector and the 4-component signal vector [3 3 3 3] is 2.0, which makes [3 3 3 3] the closest signal vector to the quantized received signal vector. The decoded codeword is the all-one codeword, and the corresponding decoded information vector is the all-one information vector. The errors are not corrected.

In general, maximum-likelihood decoding becomes prohibitively complex when the value of 2^(Σ_{l=1}^{L} k_l) is large.
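The exhaustive search of Example 7.10 can be written directly. The sketch below (variable names are illustrative, not from the book) enumerates all 2^(k_1+k_2) = 16 information vectors and picks the signal vector closest to R.

```python
import cmath
import itertools

def psk(i, M=4):
    p = cmath.exp(2j * cmath.pi * i / M)
    return (p.real, p.imag)

def encode(u, G):
    # binary vector-matrix product over GF(2)
    return [sum(a * b for a, b in zip(u, col)) % 2 for col in zip(*G)]

G1 = [[1, 1, 1, 1]]
G2 = [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]]

def signal_vectors():
    # one signal vector per information vector (u1 | u2)
    for u1 in itertools.product([0, 1], repeat=1):
        for u2 in itertools.product([0, 1], repeat=3):
            v1, v2 = encode(u1, G1), encode(u2, G2)
            yield u1 + u2, [a + 2 * b for a, b in zip(v1, v2)]

def sq_dist(R, s):
    return sum((x - px) ** 2 + (y - py) ** 2
               for (x, y), (px, py) in zip(R, map(psk, s)))

R = [(0.25, -0.5), (0.25, -0.5), (0.25, -0.5), (0.5, 0.3)]
u_hat, s_hat = min(signal_vectors(), key=lambda t: sq_dist(R, t[1]))
print(u_hat, s_hat)    # the all-zero vector wins, with squared distance 2.7775
```

The search returns the all-zero information vector and the all-zero signal vector, reproducing Table 7.1's minimum of 2.7775.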
7.3.2 Multistage Decoding [9]

Instead of performing true maximum-likelihood decoding, a multistage decoding procedure can be used. The system model is shown in Figure 7.16. The idea underlying multistage decoding is the following. The most powerful component code C_1 is decoded first by decoder D_1. The next most powerful code C_2 is then decoded by decoder D_2, assuming that C_1 was correctly decoded. In general, decoding of C_l by decoder D_l always assumes correct decoding of all previous codes, for 1 ≤ l ≤ L.
Figure 7.16 Model of a multilevel block-coded modulation system.
The multistage decoding scheme partitions all the possible codewords in C into 2^(k_1) sets. All the elements in a set have the same codeword in the first row. The first decoding step involves choosing the most likely set among the 2^(k_1) sets. When the decision is made, the first row is decoded and is assumed to be correct. The elements in the chosen set are then partitioned into 2^(k_2) sets. All the elements in a set have the same codeword in the first row as well as the same codeword in the second row. The next decoding step involves choosing the most likely set among the 2^(k_2) sets within the chosen set. When the decision is made, the second row is decoded and is assumed to be correct. The above procedure is repeated L times and the received signal vector is decoded in a suboptimal way. This is shown in the following example.

Example 7.11 Choose n = 4 and the 4-PSK signal constellation shown in Figure 7.8. Let C_1 be the (4, 1, 4) binary repetition code with G″_1 = [1 1 1 1] and C_2 be the (4, 3, 2) binary single-parity-check code with

G″_2 = [1 0 0 1
        0 1 0 1
        0 0 1 1]

All the information vectors, the codewords generated by the component codes, and the corresponding signal vectors are shown in Figures 7.10, 7.11, and 7.12, respectively. Assume that the all-zero codeword is transmitted and that the received signal vector is R = [r_0 r_1 r_2 r_3] = [(0.25, -0.5) (0.25, -0.5) (0.25, -0.5) (0.5, 0.3)], where each received signal point r_j is represented by its x- and y-coordinates.
1. To estimate the codeword in code C_1, the first-stage decoder starts decoding in the subsets {0, 2} and {1, 3}. The closest signal symbols in the subset {0, 2} to R are 0, 0, 0, and 0, with squared Euclidean distances 0.8125, 0.8125, 0.8125, and 0.34, respectively. The total squared Euclidean distance is 2.7775. The closest signal symbols in the subset {1, 3} to R are 3, 3, 3, and 1, with squared Euclidean distances 0.3125, 0.3125, 0.3125, and 0.74, respectively. The total squared Euclidean distance is 1.6775. The first-stage decoder picks the signal vector [3 3 3 1] as the decoded signal vector with the smallest squared Euclidean distance, and chooses the all-one codeword in C_1. The first row is decoded and is assumed to be correct. The corresponding decoded information symbol is 1.

2. To estimate the codeword in code C_2, the second-stage decoder starts decoding in the subsets {1} and {3}. The closest signal symbols in the subset {1} to R are 1, 1, 1, and 1, with squared Euclidean distances 2.3125, 2.3125, 2.3125, and 0.74, respectively. The total squared Euclidean distance is 7.6775. The closest signal symbols in the subset {3} to R are 3, 3, 3, and 3, with squared Euclidean distances 0.3125, 0.3125, 0.3125, and 1.94, respectively. The total squared Euclidean distance is 2.8775. The second-stage decoder picks the signal vector [3 3 3 3] as the decoded signal vector with the smallest squared Euclidean distance, and chooses the all-one codeword in C_2. The second row is decoded and the corresponding decoded information vector is [1 1 1].

The decoded codeword is [1 1 1 1 1 1 1 1], and the corresponding decoded information vector is [1 1 1 1]. It can be seen that the multistage decoder makes an incorrect decision at the end of the first-stage decoding. This incorrect decoding information is used by the second-stage decoder. As a result, the decoded codeword is [1 1 1 1 1 1 1 1].
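The two decoding stages above can be sketched as follows. This is an illustration of the procedure for this specific example, reproducing the metrics of Example 7.11, not a general multistage decoder.

```python
import cmath

def psk(i, M=4):
    p = cmath.exp(2j * cmath.pi * i / M)
    return (p.real, p.imag)

def sq_dist(r, i):
    (x, y), (px, py) = r, psk(i)
    return (x - px) ** 2 + (y - py) ** 2

def stage_metric(R, subset):
    # per received point, the distance to the closest signal in the subset
    return sum(min(sq_dist(r, i) for i in subset) for r in R)

R = [(0.25, -0.5), (0.25, -0.5), (0.25, -0.5), (0.5, 0.3)]

# Stage 1: first-row bit 0 keeps the subset {0, 2}; bit 1 keeps {1, 3}
m0, m1 = stage_metric(R, [0, 2]), stage_metric(R, [1, 3])
bit1 = 0 if m0 < m1 else 1           # 2.7775 vs 1.6775 -> decodes as 1 (wrong)

# Stage 2: with the first row fixed to all-one, compare the subsets {1} and {3}
m_one, m_three = stage_metric(R, [1]), stage_metric(R, [3])
print(bit1, round(m0, 4), round(m1, 4), round(m_one, 4), round(m_three, 4))
```

This reproduces the totals 2.7775, 1.6775, 7.6775, and 2.8775 of the example, and hence the erroneous all-one decision.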
For the same received vector R, the maximum-likelihood decoder of Example 7.10 makes the correct decision and chooses the all-zero codeword as the decoded codeword. The end result is that the multistage decoding technique is a suboptimum decoding technique.

7.3.3 Multistage Trellis Decoding
Since an (n, k, d_min) block code can be represented by an n-stage syndrome-former trellis and can be decoded by means of the Viterbi algorithm, we can implement an alternative multistage decoding scheme to decode the component
codes in sequence. A Viterbi decoder is used to decode C_1, followed by one for C_2. We use the syndrome-former trellis for C_1 and a Viterbi decoder D_1 to estimate the codeword in C_1, and then use the syndrome-former trellis for C_2 and a Viterbi decoder D_2 to estimate the codeword in C_2, assuming that the estimated codeword in C_1 is correct. The above procedure is repeated L times until the codewords of all the component codes have been estimated.

Example 7.12 Choose n = 4 and the 4-PSK signal constellation shown in Figure 7.8. Let C_1 be the (4, 1, 4) binary repetition code with G″_1 = [1 1 1 1] and C_2 be the (4, 3, 2) binary single-parity-check code with

G″_2 = [1 0 0 1
        0 1 0 1
        0 0 1 1]

The syndrome-former trellises of the component codes are shown in Figure 7.17. Assume that the all-zero codeword is transmitted and that the 4-component received signal vector is R = [r_0 r_1 r_2 r_3] = [(1.0, -0.1) (1.0, -0.1) (0.9, -1.0) (0.9, -1.0)], where each received signal point r_j is represented by its x- and y-coordinates.
Figure 7.17 Syndrome-former trellises of the component codes.
Figure 8.15 CIRC decoding system.
Applications of Block Codes
References

[1] Shannon, C. E., "A Mathematical Theory of Communication," Bell Syst. Tech. J., Vol. 27, No. 3, July 1948, pp. 379-423, and Vol. 27, No. 4, October 1948, pp. 623-656.

[2] Forney, G. D., Jr., Concatenated Codes, Cambridge, MA: MIT Press, 1967.

[3] Yuen, J. H., Ed., Deep Space Telecommunications Systems Engineering, New York: Plenum Press, 1983.

[4] Lee, L. H. C., Convolutional Coding: Fundamentals and Applications, Norwood, MA: Artech House, 1997.

[5] Viterbi, A. J., "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm," IEEE Trans. on Information Theory, Vol. IT-13, No. 2, April 1967, pp. 260-269.

[6] Yuen, J. H., et al., "Modulation and Coding for Satellite and Space Communications," Proc. IEEE, Vol. 78, No. 7, July 1990, pp. 1250-1266.

[7] GSM Recommendation 05.03, "Channel Coding," Draft version 3.4.0, November 1988.

[8] Hodges, M. R. L., "The GSM Radio Interface," British Telecom Technol. J., Vol. 8, No. 1, January 1990, pp. 31-43.

[9] Hellwig, K., et al., "Speech Coder for the European Mobile Radio System," Proc. ICASSP-89, Glasgow, UK, 1989, pp. 1065-1069.

[10] Murota, K., and K. Hirade, "GMSK Modulation for Digital Mobile Radio," IEEE Trans. on Communications, Vol. COM-29, No. 7, July 1981, pp. 1044-1050.

[11] Carasso, M. G., J. B. H. Peek, and J. P. Sinjou, "The Compact Disc Digital Audio System," Philips Technical Review, Vol. 40, No. 6, 1982, pp. 151-156.

[12] Ramsey, J. L., "Realization of Optimum Interleavers," IEEE Trans. on Information Theory, Vol. IT-16, No. 3, May 1970, pp. 338-345.

[13] Forney, G. D., "Burst-Correcting Codes for the Classic Bursty Channel," IEEE Trans. on Communications, Vol. COM-19, No. 5, October 1971, pp. 772-781.

[14] Hoeve, H., J. Timmermans, and L. B. Vries, "Error Correction and Concealment in the Compact Disc System," Philips Technical Review, Vol. 40, No. 6, 1982, pp. 166-172.
Appendix A
Binary Primitive Polynomials

Appendix A presents a list of binary primitive polynomials p(x) = x^m + p_{m-1}x^{m-1} + p_{m-2}x^{m-2} + ... + p_1x + p_0 of degree m, where p_j = 0 or 1, 0 ≤ j ≤ m - 1, and 2 ≤ m ≤ 7.

Table A.1
Binary Primitive Polynomials of Degree m

m    p(x)
2    x^2 + x + 1
3    x^3 + x + 1
3    x^3 + x^2 + 1
4    x^4 + x + 1
4    x^4 + x^3 + 1
5    x^5 + x^2 + 1
5    x^5 + x^3 + 1
5    x^5 + x^3 + x^2 + x + 1
5    x^5 + x^4 + x^2 + x + 1
5    x^5 + x^4 + x^3 + x + 1
5    x^5 + x^4 + x^3 + x^2 + 1
6    x^6 + x + 1
6    x^6 + x^4 + x^3 + x + 1
6    x^6 + x^5 + 1
6    x^6 + x^5 + x^2 + x + 1
6    x^6 + x^5 + x^3 + x^2 + 1
6    x^6 + x^5 + x^4 + x + 1
Table A.1 (continued)
Binary Primitive Polynomials of Degree m

m    p(x)
7    x^7 + x + 1
7    x^7 + x^3 + 1
7    x^7 + x^3 + x^2 + x + 1
7    x^7 + x^4 + 1
7    x^7 + x^4 + x^3 + x^2 + 1
7    x^7 + x^5 + x^2 + x + 1
7    x^7 + x^5 + x^3 + x + 1
7    x^7 + x^5 + x^4 + x^3 + 1
7    x^7 + x^5 + x^4 + x^3 + x^2 + x + 1
7    x^7 + x^6 + 1
7    x^7 + x^6 + x^3 + x + 1
7    x^7 + x^6 + x^4 + x + 1
7    x^7 + x^6 + x^4 + x^2 + 1
7    x^7 + x^6 + x^5 + x^2 + 1
7    x^7 + x^6 + x^5 + x^3 + x^2 + x + 1
7    x^7 + x^6 + x^5 + x^4 + 1
7    x^7 + x^6 + x^5 + x^4 + x^2 + x + 1
7    x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + 1
Appendix B
Galois Field Tables

Appendix B presents a list of GF(2^m) tables generated by a binary primitive polynomial p(x) = x^m + p_{m-1}x^{m-1} + p_{m-2}x^{m-2} + ... + p_1x + p_0 of degree m, where p_j = 0 or 1, 0 ≤ j ≤ m - 1, and 2 ≤ m ≤ 6. Each element in GF(2^m) is expressed as a power of some primitive element α and as a linear combination of α^0, α^1, ..., α^{m-1}. The polynomial representation of the element α^i, i = 0, 1, ..., 2^m - 2, is expressed as α^i = a_{m-1}α^{m-1} + a_{m-2}α^{m-2} + ... + a_0, a polynomial with binary coefficients, and the coefficients of the polynomial representation of α^i are expressed as a binary m-tuple [a_0 a_1 ... a_{m-1}].

Table B.1
Elements of GF(2^2) Generated by a Primitive Polynomial p(x) = x^2 + x + 1

Elements          2-Tuple
0   = 0           0 0
α^0 = 1           1 0
α   = α           0 1
α^2 = α + 1       1 1
Table B.2
Elements of GF(2^3) Generated by a Primitive Polynomial p(x) = x^3 + x + 1

Elements                  3-Tuple
0   = 0                   0 0 0
α^0 = 1                   1 0 0
α   = α                   0 1 0
α^2 = α^2                 0 0 1
α^3 = α + 1               1 1 0
α^4 = α^2 + α             0 1 1
α^5 = α^2 + α + 1         1 1 1
α^6 = α^2 + 1             1 0 1
Table B.3
Elements of GF(2^4) Generated by a Primitive Polynomial p(x) = x^4 + x + 1

Elements                        4-Tuple
0    = 0                        0 0 0 0
α^0  = 1                        1 0 0 0
α    = α                        0 1 0 0
α^2  = α^2                      0 0 1 0
α^3  = α^3                      0 0 0 1
α^4  = α + 1                    1 1 0 0
α^5  = α^2 + α                  0 1 1 0
α^6  = α^3 + α^2                0 0 1 1
α^7  = α^3 + α + 1              1 1 0 1
α^8  = α^2 + 1                  1 0 1 0
α^9  = α^3 + α                  0 1 0 1
α^10 = α^2 + α + 1              1 1 1 0
α^11 = α^3 + α^2 + α            0 1 1 1
α^12 = α^3 + α^2 + α + 1        1 1 1 1
α^13 = α^3 + α^2 + 1            1 0 1 1
α^14 = α^3 + 1                  1 0 0 1
Table B.4
Elements of GF(2^5) Generated by a Primitive Polynomial p(x) = x^5 + x^2 + 1

Elements                                5-Tuple
0    = 0                                0 0 0 0 0
α^0  = 1                                1 0 0 0 0
α    = α                                0 1 0 0 0
α^2  = α^2                              0 0 1 0 0
α^3  = α^3                              0 0 0 1 0
α^4  = α^4                              0 0 0 0 1
α^5  = α^2 + 1                          1 0 1 0 0
α^6  = α^3 + α                          0 1 0 1 0
α^7  = α^4 + α^2                        0 0 1 0 1
α^8  = α^3 + α^2 + 1                    1 0 1 1 0
α^9  = α^4 + α^3 + α                    0 1 0 1 1
α^10 = α^4 + 1                          1 0 0 0 1
α^11 = α^2 + α + 1                      1 1 1 0 0
α^12 = α^3 + α^2 + α                    0 1 1 1 0
α^13 = α^4 + α^3 + α^2                  0 0 1 1 1
α^14 = α^4 + α^3 + α^2 + 1              1 0 1 1 1
α^15 = α^4 + α^3 + α^2 + α + 1          1 1 1 1 1
α^16 = α^4 + α^3 + α + 1                1 1 0 1 1
α^17 = α^4 + α + 1                      1 1 0 0 1
α^18 = α + 1                            1 1 0 0 0
α^19 = α^2 + α                          0 1 1 0 0
α^20 = α^3 + α^2                        0 0 1 1 0
α^21 = α^4 + α^3                        0 0 0 1 1
α^22 = α^4 + α^2 + 1                    1 0 1 0 1
α^23 = α^3 + α^2 + α + 1                1 1 1 1 0
α^24 = α^4 + α^3 + α^2 + α              0 1 1 1 1
α^25 = α^4 + α^3 + 1                    1 0 0 1 1
α^26 = α^4 + α^2 + α + 1                1 1 1 0 1
α^27 = α^3 + α + 1                      1 1 0 1 0
α^28 = α^4 + α^2 + α                    0 1 1 0 1
α^29 = α^3 + 1                          1 0 0 1 0
α^30 = α^4 + α                          0 1 0 0 1
Table B.5
Elements of GF(2^6) Generated by the Primitive Polynomial p(x) = x^6 + x + 1

Elements                                   6-Tuple
0    = 0                                   000000
1    = 1                                   100000
α    = α                                   010000
α^2  = α^2                                 001000
α^3  = α^3                                 000100
α^4  = α^4                                 000010
α^5  = α^5                                 000001
α^6  = α + 1                               110000
α^7  = α^2 + α                             011000
α^8  = α^3 + α^2                           001100
α^9  = α^4 + α^3                           000110
α^10 = α^5 + α^4                           000011
α^11 = α^5 + α + 1                         110001
α^12 = α^2 + 1                             101000
α^13 = α^3 + α                             010100
α^14 = α^4 + α^2                           001010
α^15 = α^5 + α^3                           000101
α^16 = α^4 + α + 1                         110010
α^17 = α^5 + α^2 + α                       011001
α^18 = α^3 + α^2 + α + 1                   111100
α^19 = α^4 + α^3 + α^2 + α                 011110
α^20 = α^5 + α^4 + α^3 + α^2               001111
α^21 = α^5 + α^4 + α^3 + α + 1             110111
α^22 = α^5 + α^4 + α^2 + 1                 101011
α^23 = α^5 + α^3 + 1                       100101
α^24 = α^4 + 1                             100010
α^25 = α^5 + α                             010001
α^26 = α^2 + α + 1                         111000
α^27 = α^3 + α^2 + α                       011100
α^28 = α^4 + α^3 + α^2                     001110
α^29 = α^5 + α^4 + α^3                     000111
α^30 = α^5 + α^4 + α + 1                   110011
α^31 = α^5 + α^2 + 1                       101001
α^32 = α^3 + 1                             100100
α^33 = α^4 + α                             010010
α^34 = α^5 + α^2                           001001
α^35 = α^3 + α + 1                         110100
α^36 = α^4 + α^2 + α                       011010
α^37 = α^5 + α^3 + α^2                     001101
α^38 = α^4 + α^3 + α + 1                   110110
α^39 = α^5 + α^4 + α^2 + α                 011011
α^40 = α^5 + α^3 + α^2 + α + 1             111101
α^41 = α^4 + α^3 + α^2 + 1                 101110
α^42 = α^5 + α^4 + α^3 + α                 010111
α^43 = α^5 + α^4 + α^2 + α + 1             111011
α^44 = α^5 + α^3 + α^2 + 1                 101101
α^45 = α^4 + α^3 + 1                       100110
α^46 = α^5 + α^4 + α                       010011
α^47 = α^5 + α^2 + α + 1                   111001
α^48 = α^3 + α^2 + 1                       101100
α^49 = α^4 + α^3 + α                       010110
α^50 = α^5 + α^4 + α^2                     001011
α^51 = α^5 + α^3 + α + 1                   110101
α^52 = α^4 + α^2 + 1                       101010
α^53 = α^5 + α^3 + α                       010101
α^54 = α^4 + α^2 + α + 1                   111010
α^55 = α^5 + α^3 + α^2 + α                 011101
α^56 = α^4 + α^3 + α^2 + α + 1             111110
α^57 = α^5 + α^4 + α^3 + α^2 + α           011111
α^58 = α^5 + α^4 + α^3 + α^2 + α + 1       111111
α^59 = α^5 + α^4 + α^3 + α^2 + 1           101111
α^60 = α^5 + α^4 + α^3 + 1                 100111
α^61 = α^5 + α^4 + 1                       100011
α^62 = α^5 + 1                             100001
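Tables B.2 through B.5 can be regenerated mechanically: each successive power of α is one multiplication by α, that is, a left shift of the m-tuple, followed by substituting the lower-order terms of p(α) for α^m whenever the shift overflows. The following sketch (the helper names are mine, not from the text) encodes an element as an integer bit mask, with bit i holding the coefficient of α^i, matching the low-order-first tuples printed in the tables:

```python
def gf_table(m, poly):
    """Power representation of the nonzero elements of GF(2^m).

    `poly` is the primitive polynomial p(x) as a bit mask
    (bit i = coefficient of x^i), e.g. x^3 + x + 1 -> 0b1011.
    Returns elems with elems[i] = alpha^i as an m-bit mask.
    """
    elems, x = [], 1                 # alpha^0 = 1
    for _ in range(2 ** m - 1):
        elems.append(x)
        x <<= 1                      # multiply by alpha
        if x & (1 << m):             # overflow: replace alpha^m via p(alpha) = 0
            x ^= poly
    return elems

def tuple_str(mask, m):
    """Render an element as the low-order-first m-tuple used in the tables."""
    return "".join(str((mask >> j) & 1) for j in range(m))

# Regenerate Table B.2: GF(2^3) with p(x) = x^3 + x + 1.
print("0  ", tuple_str(0, 3))
for i, e in enumerate(gf_table(3, 0b1011)):
    print(f"a^{i}", tuple_str(e, 3))
```

Running the last loop prints the eight rows of Table B.2; swapping in (4, 0b10011), (5, 0b100101), or (6, 0b1000011) reproduces Tables B.3, B.4, and B.5.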
Appendix C
Minimal Polynomials of Elements in GF(2^m)

Appendix C presents a list of minimal polynomials of elements in GF(2^m). The degree of the minimal polynomial φ(x) = φ_m x^m + φ_{m-1} x^{m-1} + ... + φ_1 x + φ_0 is at most m, where φ_j = 0 or 1, 0 ≤ j ≤ m, and 2 ≤ m ≤ 6. The finite fields GF(2^2), GF(2^3), GF(2^4), GF(2^5), and GF(2^6) are constructed using the primitive polynomials x^2 + x + 1, x^3 + x + 1, x^4 + x + 1, x^5 + x^2 + 1, and x^6 + x + 1, respectively, with α as a root.

Table C.1
Minimal Polynomials of Elements in GF(2^m)

m   Conjugate Roots of φ(x)                    Order of Element   Minimal Polynomial φ(x)
2   α, α^2                                      3                 x^2 + x + 1
3   α, α^2, α^4                                 7                 x^3 + x + 1
3   α^3, α^6, α^5                               7                 x^3 + x^2 + 1
4   α, α^2, α^4, α^8                           15                 x^4 + x + 1
4   α^3, α^6, α^12, α^9                         5                 x^4 + x^3 + x^2 + x + 1
4   α^5, α^10                                   3                 x^2 + x + 1
4   α^7, α^14, α^13, α^11                      15                 x^4 + x^3 + 1
5   α, α^2, α^4, α^8, α^16                     31                 x^5 + x^2 + 1
5   α^3, α^6, α^12, α^24, α^17                 31                 x^5 + x^4 + x^3 + x^2 + 1
5   α^5, α^10, α^20, α^9, α^18                 31                 x^5 + x^4 + x^2 + x + 1
5   α^7, α^14, α^28, α^25, α^19                31                 x^5 + x^3 + x^2 + x + 1
5   α^11, α^22, α^13, α^26, α^21               31                 x^5 + x^4 + x^3 + x + 1
5   α^15, α^30, α^29, α^27, α^23               31                 x^5 + x^3 + 1
6   α, α^2, α^4, α^8, α^16, α^32               63                 x^6 + x + 1
6   α^3, α^6, α^12, α^24, α^48, α^33           21                 x^6 + x^4 + x^2 + x + 1
6   α^5, α^10, α^20, α^40, α^17, α^34          63                 x^6 + x^5 + x^2 + x + 1
6   α^7, α^14, α^28, α^56, α^49, α^35           9                 x^6 + x^3 + 1
6   α^9, α^18, α^36                             7                 x^3 + x^2 + 1
6   α^11, α^22, α^44, α^25, α^50, α^37         63                 x^6 + x^5 + x^3 + x^2 + 1
6   α^13, α^26, α^52, α^41, α^19, α^38         63                 x^6 + x^4 + x^3 + x + 1
6   α^15, α^30, α^60, α^57, α^51, α^39         21                 x^6 + x^5 + x^4 + x^2 + 1
6   α^21, α^42                                  3                 x^2 + x + 1
6   α^23, α^46, α^29, α^58, α^53, α^43         63                 x^6 + x^5 + x^4 + x + 1
6   α^27, α^54, α^45                            7                 x^3 + x + 1
6   α^31, α^62, α^61, α^59, α^55, α^47         63                 x^6 + x^5 + 1
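The conjugate-root sets in Table C.1 are the cyclotomic cosets of 2 modulo 2^m - 1: starting from an exponent i, repeated squaring walks through i, 2i, 4i, ... (mod 2^m - 1), and the resulting set lists exactly the conjugates that share one minimal polynomial, whose degree equals the coset size. A small sketch (the function name is my own):

```python
import math

def cyclotomic_cosets(m):
    """Cyclotomic cosets of 2 modulo 2^m - 1.

    The coset containing i is {i, 2i, 4i, ...} with exponents reduced
    mod 2^m - 1; the alpha^j for j in one coset are conjugates and share
    a minimal polynomial whose degree equals the coset size.
    """
    n = 2 ** m - 1
    seen, cosets = set(), []
    for i in range(1, n):
        if i in seen:
            continue
        coset, j = [], i
        while j not in coset:        # stop once the doubling cycles back
            coset.append(j)
            j = (2 * j) % n
        seen.update(coset)
        cosets.append(coset)
    return cosets

# Reproduce the m = 4 conjugate-root column of Table C.1; the order of
# alpha^i is (2^m - 1) / gcd(i, 2^m - 1).
for c in cyclotomic_cosets(4):
    print(c, "order of element:", 15 // math.gcd(c[0], 15))
```

For m = 4 this prints the four cosets {1, 2, 4, 8}, {3, 6, 12, 9}, {5, 10}, and {7, 14, 13, 11} with orders 15, 5, 3, and 15, matching the table rows.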
Appendix D
Generator Polynomials of Binary, Narrow-Sense, Primitive BCH Codes

Appendix D presents a list of generator polynomials of binary, narrow-sense, primitive BCH codes of designed error-correcting power t_d. Table D.1 lists the codeword length n, the code dimension k, the designed Hamming distance d_d = 2t_d + 1, and the generator polynomial g(x) = x^(n-k) + g_{n-k-1} x^(n-k-1) + g_{n-k-2} x^(n-k-2) + ... + g_1 x + g_0 of the codes, where g_j = 0 or 1, 0 ≤ j ≤ n - k - 1, and n = 7, 15, 31, 63, 127.

Table D.1
Generator Polynomials of Binary, Narrow-Sense, Primitive BCH Codes

n     k     Designed Distance 2t_d + 1   g(x)
7     4     3    x^3 + x + 1
15    11    3    x^4 + x + 1
15    7     5    x^8 + x^7 + x^6 + x^4 + 1
15    5     7    x^10 + x^8 + x^5 + x^4 + x^2 + x + 1
31    26    3    x^5 + x^2 + 1
31    21    5    x^10 + x^9 + x^8 + x^6 + x^5 + x^3 + 1
31    16    7    x^15 + x^11 + x^10 + x^9 + x^8 + x^7 + x^5 + x^3 + x^2 + x + 1
63    57    3    x^6 + x + 1
63    51    5    x^12 + x^10 + x^8 + x^5 + x^4 + x^3 + 1
63    45    7    x^18 + x^17 + x^16 + x^15 + x^9 + x^7 + x^6 + x^3 + x^2 + x + 1
127   120   3    x^7 + x^3 + 1
127   113   5    x^14 + x^9 + x^8 + x^6 + x^5 + x^4 + x^2 + x + 1
127   106   7    x^21 + x^18 + x^17 + x^15 + x^14 + x^12 + x^11 + x^8 + x^7 + x^6 + x^5 + x + 1
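A quick sanity check one can run on any row of Table D.1: the generator of a cyclic code must divide x^n + 1 over GF(2), and its degree must equal n - k. The sketch below (my own helper, not from the text) encodes GF(2) polynomials as integer bit masks, with bit i holding the coefficient of x^i:

```python
def gf2_mod(a, b):
    """Remainder of a(x) mod b(x) over GF(2).

    Polynomials are encoded as integer bit masks: bit i holds the
    coefficient of x^i, so x^3 + x + 1 is 0b1011.
    """
    while a.bit_length() >= b.bit_length():
        # XOR-subtract b(x), shifted to line up with the leading term of a(x)
        a ^= b << (a.bit_length() - b.bit_length())
    return a

# The (15, 5), designed-distance-7 entry of Table D.1:
# g(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1.
g = 0b10100110111
n, k = 15, 5
assert g.bit_length() - 1 == n - k        # deg g(x) = n - k
assert gf2_mod((1 << n) | 1, g) == 0      # g(x) divides x^15 + 1
print("(15, 5) generator verified")
```

The same two assertions hold for every row; a nonzero remainder would flag a transcription error in the table.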
About the Author

L. H. Charles Lee received a B.Sc. degree in electrical and electronic engineering from Loughborough University, England, in 1979; an M.Sc. degree from the University of Manchester Institute of Science and Technology, Manchester, England, in 1985; and a Ph.D. degree in electrical engineering from the University of Manchester, England, in 1987.

From 1979 to 1984, Dr. Lee was with Marconi Communication Systems Limited, Chelmsford, England, where he was engaged in the design and development of frequency-hopping radios. In 1987 and 1988, he was appointed lecturer at Hong Kong Polytechnic. From 1989 to 1991, he was a staff member of the Communications Research Group, Department of Electrical Engineering, University of Manchester, England, where he was involved in the design of channel coding and interleaving for the pan-European GSM digital mobile radio system. Currently, Dr. Lee is with the Department of Electronics, Macquarie University, Sydney, Australia. He has also served as a consultant for various companies in Australia and Asia.

Dr. Lee is a Chartered Engineer, a member of the Institution of Electrical Engineers, and a senior member of the Institute of Electrical and Electronics Engineers. He is the author of Convolutional Coding: Fundamentals and Applications (Artech House, 1997). His current research interests include error-control coding theory and digital communications.
Index

Abstract algebra, 13-37: fields, 20-30; Galois field arithmetic, 30-32; groups, 13-16; matrices, 36-37; rings, 16-20; vector spaces, 32-36
Additive identity, 16, 20
Additive white Gaussian noise (AWGN) channel, 7: Berlekamp-Massey algorithm decoding in, 138, 161, 163; measurements, 81; variance, 136, 161
Amplitude-shift keying (ASK) modulation, 3-4
Applications, 199-218: compact discs, 207-18; mobile communications, 202-7; space communications, 199-202
Array codes, 3
Berlekamp-Massey algorithm, 120-27, 143-49: in AWGN channels, 138, 161, 163; BCH codes, 120-27; decoding steps, 131-32, 133, 134, 156; defined, 123-24; efficiency, 130; for error-locator polynomial computation, 126, 132, 145; flow-chart, 124; illustrated, 124; iterative, 126-27, 133, 134, 143, 148-49, 156; RS codes, 143-49; summary, 125-26. See also Decoding
Binary, narrow-sense, primitive BCH codes, 112-16: construction, 115; generator polynomials, 231-32; Hamming distance, 114; length, 113; parameters, 112, 137; triple-error-correcting, 132. See also Bose-Chaudhuri-Hocquenghem (BCH) codes
Binary amplitude-shift keying (BASK), 6
Binary BCH codes: decoding, 119-31; double-error-correcting, 137; performance, 138; triple-error-correcting, 137. See also Bose-Chaudhuri-Hocquenghem (BCH) codes
Binary cyclic codes, 90: error-trapping decoder for, 104; performance, 109. See also Cyclic codes
Binary Golay codes, 107: extended, 107; triple-error-correcting, 107. See also Golay codes
Binary Hamming codes, 56-57: bit error rate, 82; generator matrix, 66, 73; parameters, 81; performance, 82
Binary linear block codes, 43, 48-55: examples, 51-55; parity-check matrix, 49; performance, 79-81; syndrome-former circuit, 49; syndrome-former trellis representation, 48-51; trellis diagram, 71. See also Linear block codes
Binary phase-shift keying (BPSK) modulation, 5: 4-PSK modulation comparison, 167; bandwidth, 166, 167; coded, 162, 168, 169, 170; coherent, 162; demodulator, 75; uncoded, 162, 168, 169, 170; uncoded, performance, 194. See also Modulation
Binary primitive polynomials, 221-22
Binary symmetric channel (BSC), 8-9: cross-over probability, 8; defined, 8; illustrated, 9
Binary symmetric erasure channel (BSEC), 9-10
Block codes, 1: applications, 199-218; cyclic, 85-110; examples, 3; implementation, 3; linear, 39-82; types of, 3
Block encoders: block diagram, 40; mapping rule, 40, 41; redundancy digits, 39. See also Encoding
Bose-Chaudhuri-Hocquenghem (BCH) codes, 3, 111-38: binary, 119-31; binary, narrow-sense, primitive, 112-16; computer simulation results, 136-38; defined, 111; designed Hamming distance, 112; error-free decoding, 153; general description, 111-12; generator polynomial, 111, 139; length, 112; minimum Hamming distance, 119; narrow-sense, 112; parity-check matrix, 117; primitive, 112
Burst channel, 10-11
Burst errors, 11
Channel capacity, 1
Channel codewords, 39
Channel decoder, 3, 41
Channel encoder, 3
Channel models, 8-11: binary symmetric channel, 8-9; binary symmetric erasure channel, 9-10; burst channel, 10-11; discrete memoryless channel, 8
Chien search, 125
Circuits: division, 32; hardware realization, 33; multiplication, 32; syndrome-former, 49-51
Co-channel interference, 203: defined, 203; ratio (CIR), 207
Code concatenation, 199, 200, 202
Coded digital communication systems, 1-11: channel encoder/decoder, 3; data sink, 2-3; data source, 2; demodulator, 7; elements, 2-11; illustrated, 2; introduction, 1-2; model illustration, 39, 65, 71, 109, 137, 162, 165; modulation, 3-7; modulator, 3
Code polynomials, 86, 89, 90: for codevector, 92; coefficients, 91. See also Polynomials
Code rate, 3
Codewords: channel, 39; decoded, 190; incorrectly selected, 81; length, 116; minimum squared Euclidean distance, 182; partitioning, 175; transmitted, 61
Commutative groups, 14
Commutative rings, 17
Compact discs, 207-18: CIRC, 212-18; convolutional interleaver, 212, 214, 215; decoding, 216-18; encoding, 211-16; performance, 211; recording/decoding system block diagram, 211; recording medium, 210. See also Applications
Computer simulation results: BCH codes, 136-38; linear block codes, 81-82; MBCM, 192-95; RS codes, 161-63
Concatenated coding: performance, 202; serial, 200, 201
Convolutional codes: implementation, 3; types of, 3
Convolutional interleaver, 212, 214, 215
Cross-Interleaved Reed-Solomon Code (CIRC), 212-18: block encoding, 216; decoder, 216; decoding system, 218; encoder, 212; encoder structure, 213; run length-limited encoding and, 217. See also Compact discs
Cross-interleaver, 212, 215
Cross-over probability, 8
Cyclically shifting, 85, 92, 100
Cyclic codes, 3, 85-110: binary, 90, 104; codevector, 86; CRC, 106; decoding, 95-107; defined, 85; for detecting burst errors, 106; encoding of, 94-95; for error detection, 106; Golay, 107-8; length, 86; matrix description, 91-94; multiple-error-correcting, 111; polynomial description, 85-91; shortened, 107-8; systematic encoding of, 90. See also Block codes; Linear block codes
Cyclic redundancy check (CRC) codes, 106
Data sink, 2-3
Data source, 2
Decoding: Berlekamp-Massey algorithm, 120-27, 143-49, 151-57; binary BCH codes, 119-31; compact discs, 216-18; cyclic codes, 95-107; error-and-erasure, 77, 159-60; error-only, 154; error-trapping, 103-7; Euclid algorithm, 127-31, 150-51, 157-61; linear block codes, 58-76; maximum-likelihood, 63-70, 183-84; maximum-likelihood Viterbi algorithm, 70-76; MBCM, 183-90; Meggitt, 102; minimum distance, 59; multistage, 184-86; multistage trellis, 186-90; RS codes, 143-51; standard array, 58-61; syndrome, 61-63, 95-103; word error probability, 80. See also Encoding
Demodulation, 7
Demodulators, 7: BPSK, 75; process, 7. See also Modulators
Discrete memoryless channel (DMC), 8
Discrete noisy channel, 7
Division circuit, 32
Divisors of zero, 18
Eight-to-Fourteen Modulation (EFM) code, 216
Encoding: binary, 94; compact discs, 211-16; cyclic codes, 94-95; MBCM, 170-83; Reed-Solomon, 141, 143; systematic, 94. See also Decoding
Erasure-locator polynomials, 152, 155
Error-and-erasure-locator polynomial, 160
Error-control codes, 11
Error-evaluator polynomials, 129, 147, 154
Error-locator polynomials, 121-22, 144, 152: computation with Berlekamp-Massey algorithm, 126, 132, 145; roots, 127, 133, 134; synthesis, 123. See also Polynomials
Errors: burst, 11; magnitudes, 145; random, 11; types of, 11
Error-trapping decoding, 103-7: for binary cyclic code, 104; binary cyclic code performance, 109; defined, 103; example, 106; for shortened cyclic code, 108. See also Decoding
Euclid algorithm, 127-31, 150-51: BCH codes, 127-31; decoding, 157-61; decoding steps, 131, 135, 136, 150, 161; defined, 127; efficiency, 130; extended, 127, 129, 135, 136, 150, 158-59, 161; RS codes, 150-51; table, 128-29
Euclidean distance, 68, 74: between any two nearest coded 4-PSK, 183; minimum squared, 182; running squared, 190; squared, 183, 184
Extension fields, 23, 24-30: construction of, 24-26; defined, 23; properties of, 26-30
Fields, 20-30: characteristics of, 21; defined, 18; extension, 23, 24-30; finite, 21, 22; Galois, 21, 23, 223-27; order of, 21, 22. See also Abstract algebra
Finite fields, 21, 22
Finite groups, 14
Galileo mission, 201-2: bit error performance, 202; defined, 201. See also Space communications applications
Galois fields, 23: arithmetic implementation, 30-32; defined, 21; tables, 223-27. See also Abstract algebra; Fields
Gaussian minimum-shift keying (GMSK) modulation, 205
Gaussian random variables, 7
Generator matrix, 42, 44: binary Hamming code, 66, 73; dual code, 92; from generator polynomial, 91; MBCM, 176; RM codes, 55; row/column deletion, 58; SEC Hamming codes, 54; single-parity-check code, 52; standard-echelon-form, 93; systematic linear code, 45. See also Matrices
Generator polynomials, 90: BCH codes, 111, 139; binary, narrow-sense, primitive BCH codes, 231-32; defined, 87; generator matrix from, 91; of minimum degree, 87; RS codes, 143. See also Cyclic codes; Polynomials
Golay codes, 107-8: binary, 107; defined, 107; extended binary, 107
Ground field, 25
Groups, 13-16: axioms, 14; commutative, 14; defined, 13; finite, 14; order of, 14; subgroup, 16. See also Abstract algebra
GSM digital radio system, 203-7: bit-error rate performance, 207, 210; bits distribution, 209; channel coding and reordering, 208; diagonal interleaving structure, 209; discrete noisy channel, 205; full-rate, 203-7, 209, 210; GMSK modulation, 205; illustrated, 204; performance, 210; reordering and partitioning, 206-7; simulation test results, 207; traffic channel multiframe/timeslot structure, 209. See also Applications
Hamming distance, 42, 67: BCH code, 112; binary, narrow-sense, primitive BCH codes, 114; branch, 73; metric computations, 66, 67-68, 77; minimum, 46, 47, 56, 116, 119, 142; between symbols, 72
Hamming weight, 42
Hard-decision decoding: maximum-likelihood, 65-68, 184; maximum-likelihood Viterbi algorithm, 72-73; metric computation, 67; metric table, 66. See also Decoding; Soft-decision decoding
Integral domain, 18
Interleaving, 11: block, 201; convolutional, 212, 214, 215; cross, 212, 215; depth, 201
Irreducible polynomials, 24, 26, 29
Key equation, 123
Linear block codes, 39-82: augmenting, 56; binary, 43, 48-55, 77, 79-81; computer simulation results, 81-82; concepts/definitions, 40-42; correction of errors and erasures, 76-79; decoding, 58-76; expurgating, 56; extending, 56; lengthening, 57; matrix description, 42-45; minimum distance, 45-48; modifications, 56-58; performance, 79-81; puncturing, 57; q-ary, 42; Reed-Muller (RM) codes, 54-55; repetition codes, 52; shortening, 57-58; single-error-correcting Hamming codes, 53-54; single-parity-check codes, 52-53; standard array, 59; systematic, 45. See also Block codes; Cyclic codes
Linear combination, 34
Logarithmic likelihood function, 64
Matrices, 36-37: description of linear block codes, 42-45; generator, 42, 44-45, 54-55, 66, 73, 91-93, 176; k-by-n, 36, 42; parity-check, 44, 45, 49, 53, 116-19, 176-77; permutation, 174; syndrome-former, 49
Maximum-likelihood decoding, 63-70: defined, 65, 183; hard-decision, 65-68, 184; soft-decision, 68-70. See also Decoding
Maximum-likelihood Viterbi algorithm decoding, 70-76: defined, 70; hard-decision, 72-73; hard-decision decoding steps, 74; soft-decision, 73-76; soft-decision decoding steps, 76; using, 75. See also Decoding
Meggitt decoders, 102
Minimal polynomials, 28-29: defined, 28; of elements in GF(2^m), 229-30; generated by primitive polynomial, 29, 113, 114, 115. See also Polynomials
Minimum distance decoding, 59
Minimum Hamming distance, 46, 47, 56, 116: of BCH codes, 119; of RS codes, 140, 142. See also Hamming distance
Mobile communications, 202-7: co-channel interference, 203; error-control coding techniques, 202-3; GSM digital radio system, 203-7; limitations, 202. See also Applications
Modulation, 3-7: ASK, 3-4; BPSK, 5, 75, 166-70; defined, 3; GMSK, 205; MBCM, 165-95; PSK, 4-5, 166-67, 171-72, 195; QAM, 5-6; TCM, 168, 169
Modulators, 3: defined, 3; output waveform, 6
Modulo-5 addition, 15, 16
Modulo-5 multiplication, 15
Multilevel block-coded modulation (MBCM), 165-95: 2-D, 169-70; advantages of, 192; bit-error probability performance degradation, 194; component codes, 170; computer simulation results, 192-95; decoding methods, 183-90; defined, 168; disadvantages of, 192; encoding/mapping of, 170-83; encoding model, 171; generator matrix, 176; introduction, 165-70; model illustration, 185; parameters, 193; parity-check matrix, 176-77; performance, 191; signal constellation geometry, 169; three-level, 176, 193; total number of codes used, 172; two-level, 175, 193. See also Modulation
Multipath propagation effect, 11
Multiplication circuit, 30
Multiplicative identity, 21
Multistage decoding, 184-86: decision making, 186; defined, 185; drawback, 192; error probability, 191; MBCM performance with, 191. See also Decoding
Multistage trellis decoding, 186-90
Newton's identities, 122-23, 153
Nonprimitive Reed-Solomon codes, 143
Parity-check matrix, 44: 12-by-15 binary, 118; BCH code, 117; binary linear block code, 49; MBCM, 176-77; SEC Hamming codes, 53; single-parity-check code, 53; systematic linear code, 45; of triple-error-correcting, binary, narrow-sense, primitive BCH code, 117. See also Matrices
Parity-check polynomials, 88
Partitioning: of codewords, 175, 179; of signal vectors, 176, 180; of uncoded bits, 175, 178
Performance: binary BCH codes, 138; binary block code, 79-81; binary cyclic code, 109; compact disc system, 211; full-rate GSM system, 210; MBCM, 191; RS codes, 163; uncoded coherent 4-PSK, 195
Permutation matrix, 174
Phase-shift keying (PSK) modulation, 4-5: 4-PSK, 166, 167, 171, 195; 8-PSK, 171, 172; bandwidth, 166, 167; BPSK comparison, 167; set-partitioning, 171, 172; uncoded coherent, 195. See also Modulation
Plotkin upper bound, 48
Polynomial rings, 19-20
Polynomials, 23-24: code, 86, 89, 90, 91; erasure-locator, 152, 155; error-and-erasure-locator, 160; error-evaluator, 129, 147, 154; error-locator, 121-22, 123, 144, 152; generator, 87, 90; irreducible, 24, 26, 29; minimal, 28-29, 113, 229-30; multiplication of, 30; nonzero, 127; parity-check, 88; primitive, 24, 25, 26, 113, 221-22; quotient, 31, 128; received, 120; reciprocal, 89; remainder, 128; representation, 25-26; syndrome, 97, 98, 129, 150
Power representation, 24
Primitive polynomials, 24, 25, 26: binary, 221-22; Galois fields generated by, 223-27; root, 113, 115. See also Polynomials
Principal ideal, 19
Quadrature-amplitude modulation (QAM), 5-6
Quotient polynomials, 31, 128
Random errors, 11
Received polynomials, 120
Reciprocal polynomials, 89
Reed-Muller (RM) codes, 3, 54-55: defined, 54; first-order, 54, 55; generator matrix, 55; parameters, 54; second-order, 55; zero-order, 55. See also Linear block codes
Reed-Solomon (RS) codes, 3, 139-63: computer simulation results, 161-63; correction of errors and erasures, 151-61; Cross-Interleaved (CIRC), 212-18; decoding, 143-51; decoding procedure, 147-48; defined, 139; description, 139-43; encoder, 141, 143; error-and-erasure correction, 157; error-correcting power, 140; error-only decoding, 154; generator polynomial, 143; nonprimitive, 143; obtaining, 140; parameters, 162; performance of, 163; primitive, 140-41, 141-42, 162; with symbols, 141; triple-error-correcting primitive, 143, 148, 155; true minimum Hamming distance, 140
Regular pulse excited linear predictive coder (RPE-LPC), 203
Remainder polynomial, 128
Repetition codes, 52
Rings, 16-20: commutative, 17; polynomial, 19-20; ring-with-identity, 17; subring, 19; under addition operation, 16. See also Abstract algebra
Scalar multiplication, 33
Shannon's channel capacity limit, 199
Signal symbol rate, 6
Single-error-correcting binary linear block codes, 62
Single-error-correcting Hamming codes, 3, 53-54: generator matrix, 54; parameters, 53; parity-check matrix, 53; standard array, 59-60. See also Linear block codes
Single-parity-check codes, 52-53: defined, 52; generator matrix, 52; parity-check matrix, 53. See also Linear block codes
Soft-decision decoding: maximum-likelihood, 68-70; maximum-likelihood Viterbi algorithm, 73-76; metric computation, 70; minimum distance, 69. See also Decoding; Hard-decision decoding
Space communications applications, 199-202: Galileo mission, 201-2; Voyager missions, 200-201. See also Applications
Standard array decoding, 58-61: defined, 59; minimum distance, 59. See also Decoding
Subgroups, 16
Subrings, 19
Subspace: defined, 34; k-dimensional, 36, 91; row space, 36. See also Vector spaces
Syndrome decoding, 61-63, 95-103: binary, 95, 99, 102; defined, 62; table, 63. See also Decoding
Syndrome-former circuits, 49-51
Syndrome-former matrix, 49
Syndrome-former trellis diagrams, 50-51: binary nonredundant code, 189; binary repetition code, 187, 189; binary single-parity-check code, 187, 189. See also Trellis diagrams
Syndrome polynomials, 150, 160: coefficients, 98, 99; defined, 97; error digit effect, 100; error pattern detector tests, 98; infinite-degree, 129, 146, 151, 157; modified, 157-58, 159. See also Polynomials
Syndrome register, 100, 101: correctable error pattern, 103; polynomial coefficients, 100
Syndrome vectors, 120-21, 132, 133, 148
Syndrome weight, 105-6
Systematic code: code vector generated by, 45; defined, 45; generator matrix, 45; parity-check matrix, 45
Time-division multiple access (TDMA), 203
Tree codes, 3
Trellis-coded modulation (TCM), 168: introduction, 168; signal constellation geometry, 169
Trellis codes, 3
Trellis diagrams: for binary linear block code, 71; defined, 48; determination, 49; syndrome-former, 50-51, 187, 188, 189
Units, 18
Vandermonde determinant, 119
Varshamov-Gilbert, 48
Vector addition, 33
Vectors: defined, 33; Hamming distance, 42; Hamming weight, 42; information, 183; linearly dependent, 35; linearly independent, 35; n-tuple, 35; orthogonal, 36; partitioning, 58; syndrome, 120-21, 132, 133, 148
Vector spaces, 32-36: k-dimension, 35; n-dimensional, 35; null (dual) space, 36; over field, 33; subspace, 34, 36. See also Abstract algebra
Venn diagrams, 18-19
Verhoeff, 48
Viterbi algorithm, 70-76: Euclidean metric minimization, 73; Hamming metric minimization, 72; maximum-likelihood decoding, 70-76. See also Maximum-likelihood Viterbi algorithm decoding
Voyager mission, 200-201: interleaving depth, 201; Reed-Solomon q-ary coded sequence, 200; system performance improvement, 200. See also Space communications applications
Weight enumerator, 89
Zero element, 16, 20