Transforms and Filters for Stochastic Processes
The optimal filter coefficients are given by the Wiener-Hopf equations

\sum_{i=0}^{p-1} h(i)\, r_{rr}(j - i) = r_{rd}(j), \qquad j = 0, 1, \ldots, p - 1,   (5.142)

with the cross correlation sequence

r_{rd}(m) = E\{ r^*(n)\, d(n + m) \}.   (5.143)

The optimal filter is found by solving (5.142). An application example is the estimation of data d(n) from a noisy observation r(n) = \sum_{\ell} c(\ell)\, d(n - \ell) + w(n), where c(\ell) is a channel and w(n) is noise. By using the optimal filter h(n) designed according to (5.142), the data is recovered with minimal mean square error.

Figure 5.3. Designing linear optimal filters.
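To make the channel example concrete, the following sketch designs an FIR Wiener filter from sample estimates of r_{rr}(m) and r_{rd}(m) and applies it to the noisy observation. It is a minimal illustration, not from the text: the helper name, channel taps, and noise level are assumptions. Since R_{rr} is Hermitian Toeplitz, a fast Levinson-type solver applies.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def wiener_filter(r, d, p):
    """Solve (5.142) for h(0..p-1) from realizations r(n) and d(n)."""
    N = len(r)
    # Biased sample estimates of r_rr(m) and r_rd(m), m = 0..p-1
    r_rr = np.array([np.vdot(r[:N - m], r[m:]) / N for m in range(p)])
    r_rd = np.array([np.vdot(r[:N - m], d[m:]) / N for m in range(p)])
    # R_rr[j, i] = r_rr(j - i): first column r_rr, first row conj(r_rr)
    return solve_toeplitz((r_rr, r_rr.conj()), r_rd)

# Estimate d(n) from r(n) = sum_l c(l) d(n - l) + w(n)
rng = np.random.default_rng(0)
d = rng.standard_normal(10_000)                 # data sequence
c = np.array([1.0, 0.5, -0.2])                  # assumed channel
r = np.convolve(d, c)[:len(d)] + 0.3 * rng.standard_normal(len(d))
h = wiener_filter(r, d, p=8)
d_hat = np.convolve(r, h)[:len(d)]              # MMSE estimate of d(n)
```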
Variance. For the variance of the error we have

\sigma_e^2 = E\{ |e(n)|^2 \} = \sigma_d^2 - \sum_{i=0}^{p-1} h(i)\, r_{rd}^*(i) - \sum_{i=0}^{p-1} h^*(i) \Big[ r_{rd}(i) - \sum_{j=0}^{p-1} h(j)\, r_{rr}(i - j) \Big]   (5.144)

with \sigma_d^2 = E\{ |d(n)|^2 \}. Substituting the optimal solution (5.142) into (5.144) yields

\sigma_{e,\min}^2 = \sigma_d^2 - \sum_{i=0}^{p-1} h(i)\, r_{rd}^*(i).   (5.145)

Matrix Notation. In matrix notation (5.142) is

R_{rr}\, h = r_{rd}   (5.146)

with

h = [h(0), h(1), \ldots, h(p-1)]^T,   (5.147)

r_{rd} = [r_{rd}(0), r_{rd}(1), \ldots, r_{rd}(p-1)]^T,   (5.148)

and

R_{rr} = \begin{bmatrix}
r_{rr}(0)   & r_{rr}(-1)  & \cdots & r_{rr}(-p+1) \\
r_{rr}(1)   & r_{rr}(0)   & \cdots & r_{rr}(-p+2) \\
\vdots      & \vdots      & \ddots & \vdots       \\
r_{rr}(p-1) & r_{rr}(p-2) & \cdots & r_{rr}(0)
\end{bmatrix}.   (5.149)

From (5.146) and (5.145) we obtain the following alternative expressions for the minimal variance:

\sigma_{e,\min}^2 = \sigma_d^2 - r_{rd}^H\, R_{rr}^{-1}\, r_{rd} = \sigma_d^2 - r_{rd}^H\, h.   (5.150)

Special Cases. The following three cases, where the desired signal is a delayed version of the clean input signal x(n), are of special interest:

(i) Filtering: d(n) = x(n).

(ii) Interpolation: d(n) = x(n + D), D < 0.

(iii) Prediction: d(n) = x(n + D), D > 0. Here the goal is to predict a future value.

For the three cases mentioned above the Wiener-Hopf equation is

\sum_{i=0}^{p-1} h(i)\, r_{rr}(j - i) = r_{rx}(j + D), \qquad j = 0, 1, \ldots, p - 1.   (5.151)
Uncorrelated Noise. If the noise w(n) is uncorrelated to x(n), we have

r_{rr}(m) = r_{xx}(m) + r_{ww}(m)   (5.152)

and

r_{rd}(m) = r_{xx}(m + D),   (5.153)

and from (5.151) we derive

\sum_{i=0}^{p-1} h(i) \left[ r_{xx}(j - i) + r_{ww}(j - i) \right] = r_{xx}(j + D), \qquad j = 0, 1, \ldots, p - 1.   (5.154)

In matrix notation we get

[ R_{xx} + R_{ww} ]\, h = r_{xx}(D)   (5.155)

with

h^T = [h(0), h(1), \ldots, h(p-1)],   (5.156)

r_{xx}(D) = [r_{xx}(D), r_{xx}(1 + D), \ldots, r_{xx}(p - 1 + D)]^T,   (5.157)

and

R_{xx} = \begin{bmatrix}
r_{xx}(0)   & r_{xx}(-1)  & \cdots & r_{xx}(-p+1) \\
r_{xx}(1)   & r_{xx}(0)   & \cdots & r_{xx}(-p+2) \\
\vdots      & \vdots      & \ddots & \vdots       \\
r_{xx}(p-1) & r_{xx}(p-2) & \cdots & r_{xx}(0)
\end{bmatrix}.   (5.158)

For the correlation matrix R_{ww} the corresponding definition holds.

The minimal variance is

\sigma_{e,\min}^2 = \sigma_x^2 - r_{xx}^H(D)\, h.   (5.159)
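As a numerical sketch (the AR(1)-type correlation model and all parameter values below are assumptions chosen for illustration), the system (5.155) covers all three special cases through the choice of D:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

p, D = 8, 0                                # D = 0: filtering (denoising);
                                           # D > 0: prediction, D < 0: interpolation
rho, sigma_x2, sigma_w2 = 0.9, 1.0, 0.5    # assumed process/noise parameters

# Assumed model r_xx(m) = sigma_x^2 * rho^|m| (real-valued process)
m = np.arange(p)
r_xx = sigma_x2 * rho ** m                 # first column of R_xx
r_ww = np.zeros(p); r_ww[0] = sigma_w2     # white noise: R_ww = sigma_w^2 * I
rhs = sigma_x2 * rho ** np.abs(m + D)      # r_xx(j + D), j = 0..p-1

# Solve [R_xx + R_ww] h = r_xx(D); the matrix is symmetric Toeplitz
h = solve_toeplitz(r_xx + r_ww, rhs)
sigma_e2_min = sigma_x2 - rhs @ h          # minimal variance (5.159)
```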
5.6.2 One-Step Linear Prediction

One-step linear predictors are used in many applications such as speech and image coding (DPCM, ADPCM, LPC, ...), in spectral estimation, and in feature extraction for speech recognition. Basically, they may be regarded as a special case of Wiener-Hopf filtering.

Figure 5.4. One-step linear prediction.

We consider the system in Figure 5.4. A comparison with Figure 5.3 shows that the optimal predictor can be obtained from the Wiener-Hopf equations for the special case D = 1 with d(n) = x(n + 1), while no additive noise is assumed, w(n) = 0. Note that the filter a(n) is related to the Wiener-Hopf filter h(n) as a(n) = -h(n - 1).
With

\hat{x}(n) = -\sum_{i=1}^{p} a(i)\, x(n - i),   (5.160)

where p is the length of the FIR filter a(n), the error becomes

e(n) = x(n) - \hat{x}(n) = x(n) + \sum_{i=1}^{p} a(i)\, x(n - i).   (5.161)

Minimizing the error with respect to the filter coefficients yields the equations
-\sum_{i=1}^{p} a(i)\, r_{xx}(j - i) = r_{xx}(j), \qquad j = 1, 2, \ldots, p,   (5.162)

which are known as the normal equations of linear prediction. In matrix notation they are

\begin{bmatrix}
r_{xx}(0)   & r_{xx}(-1)  & \cdots & r_{xx}(-p+1) \\
r_{xx}(1)   & r_{xx}(0)   & \cdots & r_{xx}(-p+2) \\
\vdots      & \vdots      & \ddots & \vdots       \\
r_{xx}(p-1) & r_{xx}(p-2) & \cdots & r_{xx}(0)
\end{bmatrix}
\begin{bmatrix} a(1) \\ a(2) \\ \vdots \\ a(p) \end{bmatrix}
= - \begin{bmatrix} r_{xx}(1) \\ r_{xx}(2) \\ \vdots \\ r_{xx}(p) \end{bmatrix},   (5.163)

that is

R_{xx}\, a = -r_{xx}(1)   (5.164)

with

a^T = [a(1), \ldots, a(p)].   (5.165)

According to (5.159) we get for the minimal variance

\sigma_{e,\min}^2 = \sigma_x^2 + r_{xx}^H(1)\, a = r_{xx}(0) + \sum_{i=1}^{p} a(i)\, r_{xx}^*(i).   (5.166)

Autoregressive Processes and the Yule-Walker Equations. We consider an autoregressive process of order p (AR(p) process). As outlined in Section 5.3, such a process is generated by exciting a stable recursive filter with a stationary white noise process w(n). The system function of the recursive system is supposed to be²

U(z) = \frac{1}{1 + \sum_{i=1}^{p} a(i)\, z^{-i}}, \qquad a(p) \neq 0.   (5.167)

The input-output relation of the recursive system may be expressed via the difference equation

x(n) = w(n) - \sum_{i=1}^{p} a(i)\, x(n - i).   (5.168)

² In order to keep in line with the notation used in the literature, the coefficients ρ(i), i = 1, ..., p introduced in (5.34) are replaced by the coefficients -a(i), i = 1, ..., p.
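According to (5.168), a realization of an AR(p) process is obtained by filtering white noise with the all-pole system U(z). A brief sketch (the AR(2) coefficients are an arbitrary stable choice, not from the text):

```python
import numpy as np
from scipy.signal import lfilter

# x(n) = w(n) - a(1) x(n-1) - a(2) x(n-2), i.e. white noise filtered
# with U(z) = 1 / (1 + a(1) z^-1 + a(2) z^-2); poles at |z| ~ 0.707
a = np.array([-0.75, 0.5])                 # assumed coefficients a(1), a(2)
rng = np.random.default_rng(1)
w = rng.standard_normal(50_000)            # white noise with sigma_w^2 = 1
x = lfilter([1.0], np.concatenate(([1.0], a)), w)
```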
For the autocorrelation sequence of the process x(n) we thus derive

r_{xx}(m) = E\{ x^*(n)\, x(n + m) \} = r_{xw}(m) - \sum_{i=1}^{p} a(i)\, r_{xx}(m - i).   (5.169)

The cross correlation sequence r_{xw}(m) is

r_{xw}(m) = E\{ x^*(n)\, w(n + m) \} = \sum_{i=0}^{\infty} u^*(i)\, r_{ww}(i + m) = \sum_{i=0}^{\infty} u^*(i)\, \sigma_w^2\, \delta(i + m) = \sigma_w^2\, u^*(-m),   (5.170)

where u(n) is the impulse response of the recursive filter. Since u(n) is causal (u(n) = 0 for n < 0) and u(0) = 1 according to (5.167), we derive

r_{xw}(m) = 0 \text{ for } m > 0, \qquad r_{xw}(0) = \sigma_w^2\, u^*(0) = \sigma_w^2.   (5.171)

By combining (5.169) and (5.171) we finally get

r_{xx}(m) = \begin{cases}
-\sum_{i=1}^{p} a(i)\, r_{xx}(m - i),          & m > 0, \\
\sigma_w^2 - \sum_{i=1}^{p} a(i)\, r_{xx}(-i), & m = 0, \\
r_{xx}^*(-m),                                  & m < 0.
\end{cases}   (5.172)

The equations (5.172) are known as the Yule-Walker equations.
In matrix form they are

\begin{bmatrix}
r_{xx}(0) & r_{xx}(-1)  & \cdots & r_{xx}(-p)   \\
r_{xx}(1) & r_{xx}(0)   & \cdots & r_{xx}(-p+1) \\
\vdots    & \vdots      & \ddots & \vdots       \\
r_{xx}(p) & r_{xx}(p-1) & \cdots & r_{xx}(0)
\end{bmatrix}
\begin{bmatrix} 1 \\ a(1) \\ \vdots \\ a(p) \end{bmatrix}
= \begin{bmatrix} \sigma_w^2 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.   (5.173)

As can be inferred from (5.173), we obtain the coefficients a(i), i = 1, ..., p by solving (5.163). By observing the power of the prediction error we can also determine the power of the input process. From (5.166) and (5.172) we have

\sigma_w^2 = r_{xx}(0) + \sum_{i=1}^{p} a(i)\, r_{xx}^*(i) = \sigma_{e,\min}^2.   (5.174)

Thus, all parameters of an autoregressive process can be exactly determined from the parameters of a one-step linear predictor.

Prediction Error Filter. The output signal of the so-called prediction error filter is the signal e(n) in Figure 5.4 with the coefficients a(n) according to (5.163). Introducing the coefficient a(0) = 1, e(n) is given by

e(n) = \sum_{i=0}^{p} a(i)\, x(n - i), \qquad a(0) = 1.   (5.175)

The system function of the prediction error filter is

A(z) = 1 + \sum_{i=1}^{p} a(i)\, z^{-i} = \sum_{i=0}^{p} a(i)\, z^{-i}.   (5.176)

In the special case that x(n) is an autoregressive process, the prediction error filter A(z) is the inverse system to the recursive filter U(z) ↔ u(n). This also means that the output signal of the prediction error filter is a white noise process. Hence, the prediction error filter performs a whitening transform and thus constitutes an alternative to the methods considered in Section 5.4. If x(n) is not truly autoregressive, the whitening transform is carried out at least approximately.
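A sketch of the complete estimation path (the biased correlation estimator and the test signal are illustrative assumptions): the coefficients a(i) follow from (5.164) and σ_w² from (5.174). SciPy's solve_toeplitz exploits the Toeplitz structure of R_xx with a Levinson-type recursion internally.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def yule_walker(x, p):
    """Estimate a(1..p) and sigma_w^2 from data via (5.164) and (5.174)."""
    N = len(x)
    r = np.array([np.vdot(x[:N - m], x[m:]) / N for m in range(p + 1)])
    a = solve_toeplitz((r[:p], r[:p].conj()), -r[1:])  # R_xx a = -r_xx(1)
    sigma_w2 = (r[0] + a @ r[1:].conj()).real          # (5.174)
    return a, sigma_w2

# AR(2) test signal as in the sketch above
rng = np.random.default_rng(1)
x = lfilter([1.0], [1.0, -0.75, 0.5], rng.standard_normal(50_000))
a_hat, s2_hat = yule_walker(x, p=2)   # approx. [-0.75, 0.5] and 1.0
```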
Minimum Phase Property of the Prediction Error Filter. Our investigation of autoregressive processes showed that the prediction error filter A(z) is inverse to the recursive filter U(z). Since a stable filter does not have poles outside the unit circle of the z-plane, the corresponding prediction error filter cannot have zeros outside the unit circle. Even if x(n) is not an autoregressive process, we obtain a minimum phase prediction error filter, because the calculation of A(z) only takes into account the second-order statistics, which do not contain any phase information, cf. (1.105).
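The minimum phase property is easy to check numerically; a small sketch (coefficients taken from the example above; any coefficients obtained via (5.164) behave the same way):

```python
import numpy as np

# Zeros of A(z) = 1 + a(1) z^-1 + a(2) z^-2 as roots of z^2 + a(1) z + a(2)
a = np.array([-0.75, 0.5])                  # example predictor coefficients
zeros = np.roots(np.concatenate(([1.0], a)))
assert np.all(np.abs(zeros) <= 1.0)         # no zeros outside the unit circle
```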
5.6.3 Filter Design on the Basis of Finite Data Ensembles

In the previous sections we assumed stationary processes and considered the correlation sequences to be known. In practice, however, linear predictors must be designed on the basis of a finite number of observations.

In order to determine the predictor filter a(n) from measured data x(1), x(2), ..., x(N), we now describe the prediction error

e(n) = x(n) + \sum_{i=1}^{p} a(i)\, x(n - i)

via the following matrix equation:

e = X a + x,   (5.177)

where a contains the predictor coefficients, and X and x contain the input data. The term X a describes the convolution of the data with the impulse response a(n). The criterion

\| e \| = \| X a + x \| \to \min   (5.178)

leads to the following normal equation:

X^H X\, a = -X^H x.   (5.179)

Here, the properties of the predictor are dependent on the definition of X and x. In the following, two relevant methods will be discussed.
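Independently of the particular choice of X, the least-squares problem (5.178) can be solved directly. A sketch (the row range n = p+1, ..., N used here, i.e. only rows containing measured samples, is just one possible definition; the methods below differ exactly in this choice):

```python
import numpy as np

def prediction_matrices(x, p):
    """Build X and x of (5.177) from data x(1..N), rows n = p+1..N."""
    N = len(x)
    # Row for time n holds [x(n-1), ..., x(n-p)]; x[k] stores x(k+1)
    X = np.column_stack([x[p - i:N - i] for i in range(1, p + 1)])
    return X, x[p:]

x_data = np.random.default_rng(2).standard_normal(1000)
X, x_vec = prediction_matrices(x_data, p=4)
# Least squares for ||X a + x|| -> min, i.e. normal equation (5.179)
a, *_ = np.linalg.lstsq(X, -x_vec, rcond=None)
```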
Autocorrelation Method. The autocorrelation method is based on the following estimation of the autocorrelation sequence:

\hat{r}_{xx}^{(AC)}(m) = \frac{1}{N} \sum_{n=1}^{N - |m|} x^*(n)\, x(n + m).   (5.180)

As can be seen, \hat{r}_{xx}^{(AC)}(m) is a biased estimate of the true autocorrelation sequence r_{xx}(m), which means that E\{ \hat{r}_{xx}^{(AC)}(m) \} \neq r_{xx}(m). Thus, the autocorrelation method yields a biased estimate of the parameters of an autoregressive process. However, the correlation matrix \hat{R}_{xx}^{(AC)} built from \hat{r}_{xx}^{(AC)}(m) has a Toeplitz structure, which enables us to efficiently solve the equation

\hat{R}_{xx}^{(AC)}\, \hat{a}^{(AC)} = -\hat{r}_{xx}^{(AC)}(1)   (5.181)

by means of the Levinson-Durbin recursion [89, 47] or the Schur algorithm [130]. Textbooks that cover this topic are, for instance, [84, 99, 117].

The autocorrelation method can also be viewed as the solution to the problem (5.178) in which the prediction error is evaluated for n = 1, ..., N + p, with the data x(1), ..., x(N) padded with zeros on both sides, so that X and x contain the complete zero-padded sequence.
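A closing sketch (illustrative data; the zero-padded matrix construction reflects the interpretation just described): solving (5.181) from the biased estimate (5.180), and verifying that the zero-padded least-squares problem yields the same coefficients.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(3)
x, N, p = rng.standard_normal(1000), 1000, 4

# Biased autocorrelation estimate (5.180), m = 0..p
r = np.array([np.vdot(x[:N - m], x[m:]) / N for m in range(p + 1)])

# (5.181): Toeplitz system, efficiently solvable (Levinson-Durbin / Schur);
# solve_toeplitz uses a Levinson-type recursion internally
a_ac = solve_toeplitz((r[:p], r[:p].conj()), -r[1:])

# Same result from the least-squares problem (5.178) with zero-padded data
xp = np.concatenate([np.zeros(p), x, np.zeros(p)])
X = np.column_stack([xp[p - i:p - i + N + p] for i in range(1, p + 1)])
a_ls, *_ = np.linalg.lstsq(X, -xp[p:], rcond=None)
assert np.allclose(a_ac, a_ls)
```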