Linear Prediction Models
The Durbin algorithm starts with a predictor of order zero, for which $E^{(0)} = r_{xx}(0)$. The algorithm then computes the coefficients of a predictor of order $i$ using the coefficients of a predictor of order $i-1$. In the process of solving for the coefficients of a predictor of order $P$, the solutions for the predictor coefficients of all orders less than $P$ are also obtained:

$$E^{(0)} = r_{xx}(0) \tag{8.43}$$

For $i = 1, \ldots, P$:

$$\Delta^{(i-1)} = r_{xx}(i) - \sum_{k=1}^{i-1} a_k^{(i-1)} r_{xx}(i-k) \tag{8.44}$$

$$k_i = \frac{\Delta^{(i-1)}}{E^{(i-1)}} \tag{8.45}$$

$$a_i^{(i)} = k_i \tag{8.46}$$

$$a_j^{(i)} = a_j^{(i-1)} - k_i\, a_{i-j}^{(i-1)}, \qquad 1 \le j \le i-1 \tag{8.47}$$

$$E^{(i)} = \left(1 - k_i^2\right) E^{(i-1)} \tag{8.48}$$
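As an illustration, a minimal NumPy sketch of the recursion in Equations (8.43)–(8.48) might look as follows. The function name `durbin` and the convention that `r[k]` holds $r_{xx}(k)$ are assumptions of this example.

```python
import numpy as np

def durbin(r, P):
    """Levinson-Durbin recursion, following Equations (8.43)-(8.48).

    r : autocorrelation values, r[k] = r_xx(k) for k = 0..P
    P : predictor order
    Returns (a, E, k): order-P coefficients a_1..a_P, the final
    error power E^(P), and the reflection coefficients k_1..k_P.
    """
    a = np.zeros(P + 1)                  # a[j] holds a_j^(i); a[0] unused
    k = np.zeros(P + 1)
    E = r[0]                             # Equation (8.43)
    for i in range(1, P + 1):
        delta = r[i] - sum(a[j] * r[i - j] for j in range(1, i))  # (8.44)
        k[i] = delta / E                 # Equation (8.45)
        a_new = a.copy()
        a_new[i] = k[i]                  # Equation (8.46)
        for j in range(1, i):
            a_new[j] = a[j] - k[i] * a[i - j]   # Equation (8.47)
        a = a_new
        E = (1.0 - k[i] ** 2) * E        # Equation (8.48)
    return a[1:], E, k[1:]
```

Note that the recursion yields the coefficients and error powers $E^{(i)}$ of every intermediate order as a by-product, a property exploited for model order selection in Section 8.2.5.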
8.2.3 Lattice Predictors

The lattice structure, shown in Figure 8.9, is a cascade connection of similar units, with each unit specified by a single parameter $k_i$, known as the reflection coefficient. A major attraction of a lattice structure is its modular form and the relative ease with which the model order can be extended. A further advantage is that, for a stable model, the magnitude of $k_i$ is bounded by unity ($|k_i| < 1$), and therefore it is relatively easy to check a lattice structure for stability. The lattice structure is derived from the forward and backward prediction errors as follows.
An order-update recursive equation can be obtained for the forward prediction error by multiplying both sides of Equation (8.32) by the input vector $[x(m), x(m-1), \ldots, x(m-i)]$:

$$e^{(i)}(m) = e^{(i-1)}(m) - k_i\, b^{(i-1)}(m-1) \tag{8.49}$$

Similarly, we can obtain an order-update recursive equation for the backward prediction error by multiplying both sides of Equation (8.33) by the input vector $[x(m-i), x(m-i+1), \ldots, x(m)]$:

$$b^{(i)}(m) = b^{(i-1)}(m-1) - k_i\, e^{(i-1)}(m) \tag{8.50}$$

Equations (8.49) and (8.50) are interrelated and may be implemented by a lattice network as shown in Figure 8.9. Minimisation of the squared forward prediction error of Equation (8.49) over $N$ samples yields

$$k_i = \frac{\displaystyle\sum_{m=0}^{N-1} e^{(i-1)}(m)\, b^{(i-1)}(m-1)}{\displaystyle\sum_{m=0}^{N-1} \left(b^{(i-1)}(m-1)\right)^2} \tag{8.51}$$
Figure 8.9 Configuration of (a) a lattice predictor and (b) the inverse lattice predictor.

Note that a similar relation for $k_i$ can be obtained through minimisation of the squared backward prediction error of Equation (8.50) over $N$ samples. The reflection coefficients are also known as the normalised partial correlation (PARCOR) coefficients.
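As an illustration of Equations (8.49)–(8.51), a minimal sketch of the analysis lattice might look as follows. The function name `lattice_analysis` and the convention that samples before $m = 0$ are zero are assumptions of this example.

```python
import numpy as np

def lattice_analysis(x, P):
    """Estimate reflection coefficients with the lattice recursions.

    x : input signal (1-D array); P : number of lattice stages.
    Returns the reflection (PARCOR) coefficients k_1..k_P.
    """
    e = np.asarray(x, dtype=float).copy()   # forward error, order 0
    b = e.copy()                            # backward error, order 0
    k = np.zeros(P)
    for i in range(P):
        b_del = np.concatenate(([0.0], b[:-1]))       # b^(i-1)(m-1)
        # Equation (8.51): minimise the squared forward error
        k[i] = np.dot(e, b_del) / np.dot(b_del, b_del)
        # Equations (8.49) and (8.50); the tuple assignment evaluates
        # both right-hand sides with the old e and b_del
        e, b = e - k[i] * b_del, b_del - k[i] * e
    return k
```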
8.2.4 Alternative Formulations of Least Square Error Prediction

The methods described above for derivation of the predictor coefficients are based on minimisation of either the forward or the backward prediction error. In this section, we consider alternative methods based on the minimisation of the sum of the forward and backward prediction errors.

Burg's Method

Burg's method is based on minimisation of the sum of the forward and backward squared prediction errors. The squared error function is defined as

$$E_{fb}^{(i)} = \sum_{m=0}^{N-1} \left\{ \left[e^{(i)}(m)\right]^2 + \left[b^{(i)}(m)\right]^2 \right\} \tag{8.52}$$

Substitution of Equations (8.49) and (8.50) in Equation (8.52) yields

$$E_{fb}^{(i)} = \sum_{m=0}^{N-1} \left\{ \left[e^{(i-1)}(m) - k_i\, b^{(i-1)}(m-1)\right]^2 + \left[b^{(i-1)}(m-1) - k_i\, e^{(i-1)}(m)\right]^2 \right\} \tag{8.53}$$

Minimisation of $E_{fb}^{(i)}$ with respect to the reflection coefficient $k_i$ yields

$$k_i = \frac{2\displaystyle\sum_{m=0}^{N-1} e^{(i-1)}(m)\, b^{(i-1)}(m-1)}{\displaystyle\sum_{m=0}^{N-1} \left\{ \left[e^{(i-1)}(m)\right]^2 + \left[b^{(i-1)}(m-1)\right]^2 \right\}} \tag{8.54}$$
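A sketch of a Burg-type recursion built from Equations (8.49), (8.50) and (8.54) follows; the function name `burg` is an assumption, and practical implementations often differ in how the first samples of each stage are treated.

```python
import numpy as np

def burg(x, P):
    """Reflection coefficients minimising the summed forward and
    backward squared errors, Equations (8.52)-(8.54)."""
    e = np.asarray(x, dtype=float).copy()   # forward error  e^(0)(m)
    b = e.copy()                            # backward error b^(0)(m)
    k = np.zeros(P)
    for i in range(P):
        b_del = np.concatenate(([0.0], b[:-1]))        # b^(i-1)(m-1)
        num = 2.0 * np.dot(e, b_del)                   # numerator of (8.54)
        den = np.dot(e, e) + np.dot(b_del, b_del)      # denominator of (8.54)
        k[i] = num / den
        e, b = e - k[i] * b_del, b_del - k[i] * e      # (8.49), (8.50)
    return k
```

Since $2\left|\sum e\, b\right| \le \sum e^2 + \sum b^2$, the reflection coefficients given by Equation (8.54) always satisfy $|k_i| \le 1$, so the resulting lattice model is stable.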
Simultaneous Minimisation of the Backward and Forward Prediction Errors

From Equation (8.28) we have that the backward predictor coefficient vector is the reversed version of the forward predictor coefficient vector. Hence a predictor of order $P$ can be obtained through simultaneous minimisation of the sum of the squared backward and forward prediction errors, defined as

$$\begin{aligned}
E_{fb}^{(P)} &= \sum_{m=0}^{N-1} \left\{ \left[e^{(P)}(m)\right]^2 + \left[b^{(P)}(m)\right]^2 \right\} \\
&= \sum_{m=0}^{N-1} \left\{ \left[x(m) - \sum_{k=1}^{P} a_k x(m-k)\right]^2 + \left[x(m-P) - \sum_{k=1}^{P} a_k x(m-P+k)\right]^2 \right\} \\
&= (\boldsymbol{x} - \boldsymbol{X}\boldsymbol{a})^{\mathrm{T}} (\boldsymbol{x} - \boldsymbol{X}\boldsymbol{a}) + (\boldsymbol{x}_B - \boldsymbol{X}_B \boldsymbol{a})^{\mathrm{T}} (\boldsymbol{x}_B - \boldsymbol{X}_B \boldsymbol{a})
\end{aligned} \tag{8.55}$$

where $\boldsymbol{X}$ and $\boldsymbol{x}$ are the signal matrix and vector defined by Equations (8.12) and (8.13), and similarly $\boldsymbol{X}_B$ and $\boldsymbol{x}_B$ are the signal matrix and vector for the backward predictor.
Using an approach similar to that used in the derivation of Equation (8.16), minimisation of the mean squared error function of Equation (8.55) yields

$$\boldsymbol{a} = \left(\boldsymbol{X}^{\mathrm{T}}\boldsymbol{X} + \boldsymbol{X}_B^{\mathrm{T}}\boldsymbol{X}_B\right)^{-1} \left(\boldsymbol{X}^{\mathrm{T}}\boldsymbol{x} + \boldsymbol{X}_B^{\mathrm{T}}\boldsymbol{x}_B\right) \tag{8.56}$$

Note that for an ergodic signal, as the signal length $N$ increases, Equation (8.56) converges to the so-called normal Equation (8.10).
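As an illustration, Equation (8.56) can be set up directly in NumPy. Since Equations (8.12) and (8.13) are not reproduced in this excerpt, the exact layout of the signal matrices below, the function name `fb_predictor` and the requirement $N > P$ are assumptions of this sketch.

```python
import numpy as np

def fb_predictor(x, P):
    """Least squares over the summed forward and backward
    prediction errors, Equation (8.56)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Forward predictor: row for time m is [x(m-1), ..., x(m-P)],
    # with target x(m), for m = P..N-1
    X = np.array([x[m - P:m][::-1] for m in range(P, N)])
    xf = x[P:]
    # Backward predictor: row for time m is [x(m-P+1), ..., x(m)],
    # with target x(m-P)
    XB = np.array([x[m - P + 1:m + 1] for m in range(P, N)])
    xb = x[:N - P]
    # Equation (8.56)
    return np.linalg.solve(X.T @ X + XB.T @ XB, X.T @ xf + XB.T @ xb)
```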
8.2.5 Predictor Model Order Selection

One procedure for the determination of the correct model order is to increment the model order and monitor the differential change in the error power until the change levels off. The incremental change in error power in going from model order $i-1$ to $i$ is defined as

$$\Delta^{(i)} = E^{(i-1)} - E^{(i)} \tag{8.57}$$

Figure 8.10 Illustration of the decrease in the normalised mean squared prediction error with the increasing predictor length for a speech signal.

Figure 8.10 illustrates the decrease in the normalised mean squared prediction error with increasing predictor length for a speech signal. The order $P$ beyond which the decrease in the error power $\Delta^{(P)}$ becomes less than a threshold is taken as the model order (a sketch of this procedure is given below).

In linear prediction, two coefficients are required for modelling each spectral peak of the signal spectrum. For example, the modelling of a signal with $K$ dominant resonances in the spectrum needs $P = 2K$ coefficients. Hence a procedure for model selection is to examine the power spectrum of the signal process, and to set the model order to twice the number of significant spectral peaks in the spectrum.
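A sketch of the threshold-based order-selection procedure, reusing the `durbin` function sketched earlier, might look as follows; the default threshold and the normalisation by $r_{xx}(0)$ are choices made for this example.

```python
def select_order(r, P_max, threshold=0.01):
    """Pick the model order at which the normalised error-power
    decrease, Equation (8.57), first falls below a threshold.

    r must hold autocorrelation values r_xx(0)..r_xx(P_max)."""
    E_prev = r[0]                        # E^(0) = r_xx(0)
    for i in range(1, P_max + 1):
        # A single Durbin pass already yields every E^(i); the
        # repeated calls here simply keep the sketch short.
        _, E, _ = durbin(r, i)
        if (E_prev - E) / r[0] < threshold:   # normalised Delta^(i)
            return i - 1                 # last order with a worthwhile gain
        E_prev = E
    return P_max
```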
When the model order is less than the correct order, the signal is under-modelled. In this case the prediction error is not well decorrelated and will be more than the optimal minimum. A further consequence of under-modelling is a decrease in the spectral resolution of the model: adjacent spectral peaks of the signal could be merged and appear as a single spectral peak when the model order is too small. When the model order is larger than the correct order, the signal is over-modelled. Over-modelling can result in an ill-conditioned matrix equation, unreliable numerical solutions and the appearance of spurious spectral peaks in the model.

8.3 Short-Term and Long-Term Predictors

For quasi-periodic signals, such as voiced speech, there are two types of correlation structure that can be utilised for a more accurate prediction:

(a) the short-term correlation, which is the correlation of each sample with the $P$ immediate past samples $x(m-1), \ldots, x(m-P)$;
(b) the long-term correlation, which is the correlation of a sample $x(m)$ with, say, $2Q+1$ similar samples a pitch period $T$ away: $x(m-T+Q), \ldots, x(m-T-Q)$.

Figure 8.11 illustrates the short-term relation of a sample with the $P$ immediate past samples and its long-term relation with the samples a pitch period away. The short-term correlation of a signal may be modelled by the linear prediction Equation (8.3).
The remaining correlation in the prediction error signal $e(m)$ is called the long-term correlation, and may be modelled by a pitch predictor defined as

$$\hat{e}(m) = \sum_{k=-Q}^{Q} p_k\, e(m - T - k) \tag{8.58}$$

Figure 8.11 Illustration of the short-term relation of a sample with the P immediate past samples and the long-term relation with the samples a pitch period away.

where $p_k$ are the coefficients of a long-term predictor of order $2Q+1$. The pitch period $T$ can be obtained from the autocorrelation function of $x(m)$ or that of $e(m)$: it is the first non-zero time lag where the autocorrelation function attains a maximum.
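As a sketch, the pitch period can be estimated from the autocorrelation of the residual as described above; the search bounds `T_min` and `T_max` and the function name `estimate_pitch` are assumptions of this example.

```python
import numpy as np

def estimate_pitch(e, T_min, T_max):
    """Estimate the pitch period T as the time lag at which the
    autocorrelation of the residual e(m) attains its maximum.

    T_min > 0 excludes the zero-lag peak; T_min and T_max should
    bound the plausible pitch lags at the given sampling rate,
    and T_max must be smaller than len(e)."""
    e = np.asarray(e, dtype=float)
    # autocorrelation r_ee(k) for k = 0..T_max
    r = np.array([np.dot(e[:len(e) - k], e[k:]) for k in range(T_max + 1)])
    return T_min + int(np.argmax(r[T_min:T_max + 1]))
```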
Assuming that the long-term correlation is correctly modelled, the prediction error of the long-term filter is a completely random signal with a white spectrum, and is given by

$$\varepsilon(m) = e(m) - \hat{e}(m) = e(m) - \sum_{k=-Q}^{Q} p_k\, e(m - T - k) \tag{8.59}$$

Minimisation of $E[\varepsilon^2(m)]$ results in the following solution for the pitch predictor:

$$\begin{pmatrix} p_{-Q} \\ p_{-Q+1} \\ \vdots \\ p_{Q-1} \\ p_{Q} \end{pmatrix} =
\begin{pmatrix}
r_{xx}(0) & r_{xx}(1) & r_{xx}(2) & \cdots & r_{xx}(2Q) \\
r_{xx}(1) & r_{xx}(0) & r_{xx}(1) & \cdots & r_{xx}(2Q-1) \\
r_{xx}(2) & r_{xx}(1) & r_{xx}(0) & \cdots & r_{xx}(2Q-2) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
r_{xx}(2Q) & r_{xx}(2Q-1) & r_{xx}(2Q-2) & \cdots & r_{xx}(0)
\end{pmatrix}^{-1}
\begin{pmatrix} r_{xx}(T-Q) \\ r_{xx}(T-Q+1) \\ \vdots \\ r_{xx}(T+Q-1) \\ r_{xx}(T+Q) \end{pmatrix} \tag{8.60}$$

An alternative to the separate, cascaded modelling of the short- and long-term correlations is to combine the short-term and long-term predictors into a single model described as

$$x(m) = \underbrace{\sum_{k=1}^{P} a_k\, x(m-k)}_{\text{short-term prediction}} + \underbrace{\sum_{k=-Q}^{Q} p_k\, x(m-k-T)}_{\text{long-term prediction}} + \varepsilon(m) \tag{8.61}$$

In Equation (8.61), each sample is expressed as a linear combination of the $P$ immediate past samples and $2Q+1$ samples a pitch period away. Minimisation of $E[\varepsilon^2(m)]$ results in the following solution for the coefficients of the combined short-term and long-term predictor:

$$\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_P \\ p_{-Q} \\ \vdots \\ p_{Q} \end{pmatrix} =
\begin{pmatrix}
r(0) & r(1) & \cdots & r(P-1) & r(T-Q-1) & \cdots & r(T+Q-1) \\
r(1) & r(0) & \cdots & r(P-2) & r(T-Q-2) & \cdots & r(T+Q-2) \\
\vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\
r(P-1) & r(P-2) & \cdots & r(0) & r(T-Q-P) & \cdots & r(T+Q-P) \\
r(T-Q-1) & r(T-Q-2) & \cdots & r(T-Q-P) & r(0) & \cdots & r(2Q) \\
\vdots & \vdots & & \vdots & \vdots & \ddots & \vdots \\
r(T+Q-1) & r(T+Q-2) & \cdots & r(T+Q-P) & r(2Q) & \cdots & r(0)
\end{pmatrix}^{-1}
\begin{pmatrix} r(1) \\ r(2) \\ \vdots \\ r(P) \\ r(T-Q) \\ \vdots \\ r(T+Q) \end{pmatrix} \tag{8.62}$$

In Equation (8.62), for simplicity, the subscript $xx$ of $r_{xx}(k)$ has been omitted. In Chapter 10, the predictor model of Equation (8.61) is used for interpolation of a sequence of missing samples.

8.4 MAP Estimation of Predictor Coefficients

The posterior probability density function of a predictor coefficient vector $\boldsymbol{a}$, given a signal $\boldsymbol{x}$ and the initial samples $\boldsymbol{x}_I$, can be expressed, using Bayes' rule, as

$$f_{\boldsymbol{A}|\boldsymbol{X},\boldsymbol{X}_I}(\boldsymbol{a} \,|\, \boldsymbol{x}, \boldsymbol{x}_I) = \frac{f_{\boldsymbol{X}|\boldsymbol{A},\boldsymbol{X}_I}(\boldsymbol{x} \,|\, \boldsymbol{a}, \boldsymbol{x}_I)\, f_{\boldsymbol{A}|\boldsymbol{X}_I}(\boldsymbol{a} \,|\, \boldsymbol{x}_I)}{f_{\boldsymbol{X}|\boldsymbol{X}_I}(\boldsymbol{x} \,|\, \boldsymbol{x}_I)} \tag{8.63}$$

In Equation (8.63), the pdfs are conditioned on $P$ initial signal samples $\boldsymbol{x}_I = [x(-P), x(-P+1), \ldots, x(-1)]$.