As the excitation signal e(m) is assumed to have a flat spectrum, it follows that the shape of the signal spectrum X(f) is due to the frequency response 1/A(f) of the all-pole predictor model. The inverse linear predictor, as the name implies, transforms a correlated signal x(m) back to an uncorrelated, flat-spectrum signal e(m). The inverse filter, also known as the prediction error filter, is an all-zero finite impulse response filter defined as

    e(m) = x(m) - \hat{x}(m)
         = x(m) - \sum_{k=1}^{P} a_k x(m-k)
         = (a^{inv})^T x                                          (8.19)

[Figure 8.6 Illustration of the inverse (or whitening) filter.]

where the inverse filter coefficient vector is (a^{inv})^T = [1, -a_1, ..., -a_P] = [1, -a^T], and x^T = [x(m), ..., x(m-P)]. The z-transfer function of the inverse predictor model is given by

    A(z) = 1 - \sum_{k=1}^{P} a_k z^{-k}                          (8.20)

A linear predictor model is an all-pole filter, where the poles model the resonances of the signal spectrum. The inverse of an all-pole filter is an all-zero filter, with the zeros situated at the same positions in the pole–zero plot as the poles of the all-pole filter, as illustrated in Figure 8.7. Consequently, the zeros of the inverse filter introduce anti-resonances that cancel out the resonances of the poles of the predictor. The inverse filter has the effect of flattening the spectrum of the input signal, and is also known as a spectral whitening, or decorrelation, filter.

[Figure 8.7 Illustration of the pole–zero diagram, and the frequency responses of an all-pole predictor 1/A(f) and its all-zero inverse filter A(f).]

8.1.3 The Prediction Error Signal

The prediction error signal is in general composed of three components:

(a) the input signal, also called the excitation signal;
(b) the errors due to the modelling inaccuracies;
(c) the noise.

The mean square prediction error becomes zero only if the following three conditions are satisfied: (a) the signal is deterministic, (b) the signal is correctly modelled by a predictor of order P, and (c) the signal is noise-free. For example, a mixture of P/2 sine waves can be modelled by a predictor of order P, with zero prediction error.
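The analysis–synthesis relation can be checked numerically: driving the all-pole model with a known excitation and then applying the all-zero inverse filter of Equation (8.19) should return the excitation exactly. The following is a minimal Python sketch; the predictor coefficients and excitation sequence are arbitrary illustrative values, not taken from the text.

```python
# Illustrative predictor coefficients a_1, a_2 and excitation e(m)
a = [0.9, -0.5]
e = [1.0, 0.0, 0.5, -0.3, 0.2, 0.0, -0.1, 0.4]

# Synthesis: all-pole model x(m) = sum_k a_k x(m-k) + e(m), zero initial state
x = []
for m, em in enumerate(e):
    pred = sum(a[k] * x[m - 1 - k] for k in range(len(a)) if m - 1 - k >= 0)
    x.append(em + pred)

# Analysis: all-zero inverse filter, Eq. (8.19): e(m) = x(m) - sum_k a_k x(m-k)
e_rec = []
for m in range(len(x)):
    pred = sum(a[k] * x[m - 1 - k] for k in range(len(a)) if m - 1 - k >= 0)
    e_rec.append(x[m] - pred)

# The inverse (whitening) filter recovers the excitation to machine precision
assert all(abs(u - v) < 1e-12 for u, v in zip(e, e_rec))
```

Note that the zeros of the inverse filter cancel the poles of the synthesis filter exactly, which is why the recovery is exact rather than approximate.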
However, in practice, the prediction error is non-zero, because information-bearing signals are random, often only approximately modelled by a linear system, and usually observed in noise. The least mean square prediction error, obtained from substitution of Equation (8.9) in Equation (8.6), is

    E^{(P)} = \mathcal{E}[e^2(m)] = r_{xx}(0) - \sum_{k=1}^{P} a_k r_{xx}(k)          (8.21)

where E^{(P)} denotes the prediction error for a predictor of order P.
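As a concrete check of Equation (8.21), consider a first-order autoregressive signal, for which the autocorrelation and the optimal predictor are known in closed form: r_xx(k) = σ_x² ρ^k with σ_x² = σ_e²/(1 − ρ²). The sketch below uses assumed illustrative values (ρ = 0.8, unit excitation variance); for the optimal first-order predictor the residual variance equals the excitation variance.

```python
# Assumed AR(1) parameters: pole rho, excitation variance sigma_e^2
rho, sigma_e2 = 0.8, 1.0
sigma_x2 = sigma_e2 / (1 - rho ** 2)          # signal variance r_xx(0)
r = [sigma_x2 * rho ** k for k in range(2)]   # r_xx(0), r_xx(1)

a = [r[1] / r[0]]                              # order-1 normal equation: a_1 = r(1)/r(0)
E_P = r[0] - sum(ak * rk for ak, rk in zip(a, r[1:]))   # Eq. (8.21)

# The least mean square prediction error equals the excitation variance
assert abs(E_P - sigma_e2) < 1e-12
```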
The prediction error decreases, initially rapidly and then slowly, with increasing predictor order, up to the correct model order. For the correct model order, the signal e(m) is an uncorrelated zero-mean random process with an autocorrelation function given by

    \mathcal{E}[e(m)e(m-k)] = \begin{cases} \sigma_e^2 = G^2, & k = 0 \\ 0, & k \neq 0 \end{cases}          (8.22)

where \sigma_e^2 is the variance of e(m).

8.2 Forward, Backward and Lattice Predictors

The forward predictor model of Equation (8.1) predicts a sample x(m) from a linear combination of P past samples x(m-1), x(m-2), ..., x(m-P). Similarly, as shown in Figure 8.8, we can define a backward predictor that predicts a sample x(m-P) from the P future samples x(m-P+1), ..., x(m) as

    \hat{x}(m-P) = \sum_{k=1}^{P} c_k x(m-k+1)          (8.23)

[Figure 8.8 Illustration of forward and backward predictors: in forward prediction, x(m-P) to x(m-1) are used to predict x(m); in backward prediction, x(m) to x(m-P+1) are used to predict x(m-P).]

The backward prediction error is defined as the difference between the actual sample and its predicted value:

    b(m) = x(m-P) - \hat{x}(m-P)
         = x(m-P) - \sum_{k=1}^{P} c_k x(m-k+1)          (8.24)

From Equation (8.24), a signal generated by a backward predictor is given by

    x(m-P) = \sum_{k=1}^{P} c_k x(m-k+1) + b(m)          (8.25)

The coefficients of the least square error backward predictor, obtained in a similar manner to those of the forward predictor in Section 8.1.1, are given by

    \begin{bmatrix}
    r_{xx}(0)   & r_{xx}(1)   & r_{xx}(2)   & \cdots & r_{xx}(P-1) \\
    r_{xx}(1)   & r_{xx}(0)   & r_{xx}(1)   & \cdots & r_{xx}(P-2) \\
    r_{xx}(2)   & r_{xx}(1)   & r_{xx}(0)   & \cdots & r_{xx}(P-3) \\
    \vdots      & \vdots      & \vdots      & \ddots & \vdots      \\
    r_{xx}(P-1) & r_{xx}(P-2) & r_{xx}(P-3) & \cdots & r_{xx}(0)
    \end{bmatrix}
    \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_P \end{bmatrix}
    =
    \begin{bmatrix} r_{xx}(P) \\ r_{xx}(P-1) \\ r_{xx}(P-2) \\ \vdots \\ r_{xx}(1) \end{bmatrix}          (8.26)

Note that the main difference between Equations (8.26) and (8.11) is that the correlation vector on the right-hand side of the backward predictor Equation (8.26) is upside-down compared with that of the forward predictor Equation (8.11).
Since the correlation matrix is Toeplitz and symmetric, Equation (8.11) for the forward predictor may be rearranged and rewritten in the following form:

    \begin{bmatrix}
    r_{xx}(0)   & r_{xx}(1)   & r_{xx}(2)   & \cdots & r_{xx}(P-1) \\
    r_{xx}(1)   & r_{xx}(0)   & r_{xx}(1)   & \cdots & r_{xx}(P-2) \\
    r_{xx}(2)   & r_{xx}(1)   & r_{xx}(0)   & \cdots & r_{xx}(P-3) \\
    \vdots      & \vdots      & \vdots      & \ddots & \vdots      \\
    r_{xx}(P-1) & r_{xx}(P-2) & r_{xx}(P-3) & \cdots & r_{xx}(0)
    \end{bmatrix}
    \begin{bmatrix} a_P \\ a_{P-1} \\ a_{P-2} \\ \vdots \\ a_1 \end{bmatrix}
    =
    \begin{bmatrix} r_{xx}(P) \\ r_{xx}(P-1) \\ r_{xx}(P-2) \\ \vdots \\ r_{xx}(1) \end{bmatrix}          (8.27)

A comparison of Equations (8.27) and (8.26) shows that the coefficients of the backward predictor are the time-reversed versions of those of the forward predictor:

    c = \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_P \end{bmatrix}
      = \begin{bmatrix} a_P \\ a_{P-1} \\ a_{P-2} \\ \vdots \\ a_1 \end{bmatrix}
      = a^B          (8.28)

where the vector a^B is the reversed version of the vector a.
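The time-reversal relation of Equation (8.28) rests on the persymmetry of a symmetric Toeplitz matrix: reversing the coefficient vector reverses the right-hand side of the system. The Python sketch below checks this property directly, using arbitrary illustrative correlation values and coefficients (not taken from the text).

```python
# Illustrative autocorrelation values r_xx(0..3) and predictor order
r = [4.0, 2.0, 1.0, 0.5]
P = 3
# Symmetric Toeplitz correlation matrix R_xx, entry (i,j) = r_xx(|i-j|)
R = [[r[abs(i - j)] for j in range(P)] for i in range(P)]

def matvec(M, v):
    return [sum(Mij * vj for Mij, vj in zip(row, v)) for row in M]

a = [0.6, -0.2, 0.1]               # any coefficient vector
rhs_fwd = matvec(R, a)             # R a    (forward form, Eq. 8.11)
rhs_bwd = matvec(R, a[::-1])       # R a^B  (backward form, Eq. 8.26)

# Reversing the solution vector reverses the right-hand side
assert all(abs(u - v) < 1e-12 for u, v in zip(rhs_bwd, rhs_fwd[::-1]))
```

Because this holds for any vector a, the solution of the backward system (8.26) must be the reversal of the solution of the forward system (8.11).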
The relation between the backward and forward predictors is employed in the Levinson–Durbin algorithm to derive an efficient method for calculation of the predictor coefficients, as described in Section 8.2.2.

8.2.1 Augmented Equations for Forward and Backward Predictors

The inverse forward predictor coefficient vector is [1, -a_1, ..., -a_P] = [1, -a^T]. Equations (8.11) and (8.21) may be combined to yield a matrix equation for the inverse forward predictor coefficients:

    \begin{bmatrix} r_{xx}(0) & r_{xx}^T \\ r_{xx} & R_{xx} \end{bmatrix}
    \begin{bmatrix} 1 \\ -a \end{bmatrix}
    =
    \begin{bmatrix} E^{(P)} \\ 0 \end{bmatrix}          (8.29)

Equation (8.29) is called the augmented forward predictor equation. Similarly, for the inverse backward predictor, we can define an augmented backward predictor equation as

    \begin{bmatrix} R_{xx} & r_{xx}^B \\ r_{xx}^{BT} & r_{xx}(0) \end{bmatrix}
    \begin{bmatrix} -a^B \\ 1 \end{bmatrix}
    =
    \begin{bmatrix} 0 \\ E^{(P)} \end{bmatrix}          (8.30)

where r_{xx}^T = [r_{xx}(1), ..., r_{xx}(P)] and r_{xx}^{BT} = [r_{xx}(P), ..., r_{xx}(1)].
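The augmented forward equation (8.29) can be verified numerically for a small order. The sketch below uses assumed correlation values r_xx = [1.0, 0.5, 0.2], solves the P = 2 normal equations by Cramer's rule, and then forms the augmented matrix–vector product: the first entry comes out as E^(P) of Equation (8.21) and the rest as zeros.

```python
# Assumed illustrative autocorrelation values r_xx(0), r_xx(1), r_xx(2)
r0, r1, r2 = 1.0, 0.5, 0.2

# Solve the 2x2 normal equations R a = [r(1), r(2)] by Cramer's rule
det = r0 * r0 - r1 * r1
a1 = (r1 * r0 - r2 * r1) / det
a2 = (r0 * r2 - r1 * r1) / det

v = [1.0, -a1, -a2]                 # inverse forward predictor vector [1, -a]
R_aug = [[r0, r1, r2],              # augmented (P+1) x (P+1) correlation matrix
         [r1, r0, r1],
         [r2, r1, r0]]
out = [sum(Rij * vj for Rij, vj in zip(row, v)) for row in R_aug]

E_P = r0 - (a1 * r1 + a2 * r2)      # prediction error, Eq. (8.21)
assert abs(out[0] - E_P) < 1e-12            # first entry is E^(P)
assert all(abs(x) < 1e-12 for x in out[1:]) # remaining entries are zero
```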
Note that the superscript BT denotes backward and transposed. The augmented forward and backward matrix Equations (8.29) and (8.30) are used to derive an order-update solution for the linear predictor coefficients as follows.

8.2.2 Levinson–Durbin Recursive Solution

The Levinson–Durbin algorithm is a recursive order-update method for the calculation of linear predictor coefficients. A forward prediction error filter of order i can be described in terms of the forward and backward prediction error filters of order i-1 as

    \begin{bmatrix} 1 \\ -a_1^{(i)} \\ \vdots \\ -a_{i-1}^{(i)} \\ -a_i^{(i)} \end{bmatrix}
    =
    \begin{bmatrix} 1 \\ -a_1^{(i-1)} \\ \vdots \\ -a_{i-1}^{(i-1)} \\ 0 \end{bmatrix}
    + k_i
    \begin{bmatrix} 0 \\ -a_{i-1}^{(i-1)} \\ \vdots \\ -a_1^{(i-1)} \\ 1 \end{bmatrix}          (8.31)

or, in a more compact vector notation,

    \begin{bmatrix} 1 \\ -a^{(i)} \end{bmatrix}
    =
    \begin{bmatrix} 1 \\ -a^{(i-1)} \\ 0 \end{bmatrix}
    + k_i
    \begin{bmatrix} 0 \\ -a^{(i-1)B} \\ 1 \end{bmatrix}          (8.32)

where k_i is called the reflection coefficient.
The proof of Equation (8.32) and the derivation of the value of the reflection coefficient k_i follow shortly. Similarly, a backward prediction error filter of order i is described in terms of the forward and backward prediction error filters of order i-1 as

    \begin{bmatrix} -a^{(i)B} \\ 1 \end{bmatrix}
    =
    \begin{bmatrix} 0 \\ -a^{(i-1)B} \\ 1 \end{bmatrix}
    + k_i
    \begin{bmatrix} 1 \\ -a^{(i-1)} \\ 0 \end{bmatrix}          (8.33)

To prove the order-update Equation (8.32) (or, alternatively, Equation (8.33)), we multiply both sides of the equation by the (i+1) × (i+1) augmented matrix R_{xx}^{(i+1)} and use the equality

    R_{xx}^{(i+1)}
    = \begin{bmatrix} r_{xx}(0) & r_{xx}^{(i)T} \\ r_{xx}^{(i)} & R_{xx}^{(i)} \end{bmatrix}
    = \begin{bmatrix} R_{xx}^{(i)} & r_{xx}^{(i)B} \\ r_{xx}^{(i)BT} & r_{xx}(0) \end{bmatrix}          (8.34)

to obtain

    R_{xx}^{(i+1)} \begin{bmatrix} 1 \\ -a^{(i)} \end{bmatrix}
    =
    \begin{bmatrix} r_{xx}(0) & r_{xx}^{(i)T} \\ r_{xx}^{(i)} & R_{xx}^{(i)} \end{bmatrix}
    \begin{bmatrix} 1 \\ -a^{(i-1)} \\ 0 \end{bmatrix}
    + k_i
    \begin{bmatrix} R_{xx}^{(i)} & r_{xx}^{(i)B} \\ r_{xx}^{(i)BT} & r_{xx}(0) \end{bmatrix}
    \begin{bmatrix} 0 \\ -a^{(i-1)B} \\ 1 \end{bmatrix}          (8.35)

where, in Equations (8.34) and (8.35), r_{xx}^{(i)T} = [r_{xx}(1), ..., r_{xx}(i)], and r_{xx}^{(i)BT} = [r_{xx}(i), ..., r_{xx}(1)] is the reversed version of r_{xx}^{(i)T}.
Matrix–vector multiplication of both sides of Equation (8.35) and the use of Equations (8.29) and (8.30) yields

    \begin{bmatrix} E^{(i)} \\ \mathbf{0}_i \end{bmatrix}
    =
    \begin{bmatrix} E^{(i-1)} \\ \mathbf{0}_{i-1} \\ \Delta^{(i-1)} \end{bmatrix}
    + k_i
    \begin{bmatrix} \Delta^{(i-1)} \\ \mathbf{0}_{i-1} \\ E^{(i-1)} \end{bmatrix}          (8.36)

where \mathbf{0}_{i-1} denotes a column vector of i-1 zeros, and

    \Delta^{(i-1)} = \begin{bmatrix} 1 & -a^{(i-1)T} \end{bmatrix} r_{xx}^{(i)B}
                   = r_{xx}(i) - \sum_{k=1}^{i-1} a_k^{(i-1)} r_{xx}(i-k)          (8.37)

If Equation (8.36) is true, it follows that Equation (8.32) must also be true. The conditions for Equation (8.36) to be true are

    E^{(i)} = E^{(i-1)} + k_i \Delta^{(i-1)}          (8.38)

and

    0 = \Delta^{(i-1)} + k_i E^{(i-1)}          (8.39)

From Equation (8.39),

    k_i = - \frac{\Delta^{(i-1)}}{E^{(i-1)}}          (8.40)

Substitution of \Delta^{(i-1)} = -k_i E^{(i-1)} from Equation (8.39) into Equation (8.38) yields

    E^{(i)} = E^{(i-1)} (1 - k_i^2) = E^{(0)} \prod_{j=1}^{i} (1 - k_j^2)          (8.41)

Note that it can be shown that \Delta^{(i-1)} is the cross-correlation of the forward and backward prediction errors:

    \Delta^{(i-1)} = \mathcal{E}[b^{(i-1)}(m-1) e^{(i-1)}(m)]          (8.42)

The parameter \Delta^{(i-1)} is known as the partial correlation.

Durbin's algorithm

Equations (8.43)–(8.48) are solved recursively for i = 1, …
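The order-update relations (8.36)–(8.41) lead directly to the Levinson–Durbin recursion. The sketch below follows the sign convention of Equation (8.40), k_i = -Δ^(i-1)/E^(i-1), so the last coefficient of each order update is a_i^(i) = -k_i. The correlation values are arbitrary illustrative choices, and the result is checked against the normal equations R a = r.

```python
def levinson_durbin(r, P):
    """Recursively solve the order-P normal equations from the
    autocorrelation values r = [r_xx(0), ..., r_xx(P)].
    Returns ([a_1, ..., a_P], E^(P)), following Eqs. (8.37)-(8.41)."""
    a = []                 # coefficients of the order-(i-1) predictor
    E = r[0]               # E^(0) = r_xx(0)
    for i in range(1, P + 1):
        # Partial correlation Delta^(i-1), Eq. (8.37)
        delta = r[i] - sum(a[k] * r[i - 1 - k] for k in range(i - 1))
        k_i = -delta / E                       # reflection coefficient, Eq. (8.40)
        # Order update, Eq. (8.31): a_k^(i) = a_k^(i-1) + k_i a_{i-k}^(i-1)
        a = [a[k] + k_i * a[i - 2 - k] for k in range(i - 1)] + [-k_i]
        E *= 1.0 - k_i * k_i                   # error update, Eq. (8.41)
    return a, E

# Illustrative positive-definite autocorrelation values
r = [1.0, 0.5, 0.2, 0.1]
a, E_P = levinson_durbin(r, 3)

# Verify against the normal equations R a = [r(1), ..., r(P)]
R = [[r[abs(i - j)] for j in range(3)] for i in range(3)]
resid = [sum(Rij * aj for Rij, aj in zip(row, a)) - r[i + 1]
         for i, row in enumerate(R)]
assert max(abs(x) for x in resid) < 1e-12 and E_P > 0
```

The recursion needs only O(P²) operations instead of the O(P³) of a direct matrix solve, and the intermediate reflection coefficients k_i all satisfy |k_i| < 1 for a valid autocorrelation sequence, which keeps E^(i) positive and decreasing with order.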