The block estimation method is appropriate for processing of signals that can be considered as time-invariant over the duration of the block.

6.2.1 QR Decomposition of the Least Square Error Equation

An efficient and robust method for solving the least square error Equation (6.19) is the QR decomposition (QRD) method. In this method, the N × P signal matrix Y is decomposed into the product of an N × N orthonormal matrix Q and a P × P upper-triangular matrix R as

Q Y = \begin{bmatrix} R \\ 0 \end{bmatrix}   (6.21)

where 0 is the (N − P) × P null matrix, Q^T Q = Q Q^T = I, and the upper-triangular matrix R is of the form

R = \begin{bmatrix}
r_{00}  & r_{01} & r_{02} & r_{03} & \cdots & r_{0,P-1} \\
0       & r_{11} & r_{12} & r_{13} & \cdots & r_{1,P-1} \\
0       & 0      & r_{22} & r_{23} & \cdots & r_{2,P-1} \\
0       & 0      & 0      & r_{33} & \cdots & r_{3,P-1} \\
\vdots  & \vdots & \vdots & \vdots & \ddots & \vdots    \\
0       & 0      & 0      & 0      & \cdots & r_{P-1,P-1}
\end{bmatrix}   (6.22)

Substitution of Equation (6.21) in Equation (6.18) yields

\begin{bmatrix} R^T & 0^T \end{bmatrix} Q Q^T \begin{bmatrix} R \\ 0 \end{bmatrix} w = \begin{bmatrix} R^T & 0^T \end{bmatrix} Q x   (6.23)

From Equation (6.23) we have

R^T R w = \begin{bmatrix} R^T & 0^T \end{bmatrix} Q x   (6.24)

From Equation (6.24) we have

R w = x_Q   (6.25)

where the vector x_Q on the right-hand side of Equation (6.25) is composed of the first P elements of the product Q x.
Since the matrix R is upper-triangular, the coefficients of the least square error filter can be obtained easily through a process of back substitution from Equation (6.25), starting with the coefficient w_{P−1} = x_Q(P−1)/r_{P−1,P−1}.

The main computational steps in the QR decomposition are the determination of the orthonormal matrix Q and of the upper-triangular matrix R.
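As a minimal sketch of this procedure, the following uses NumPy's built-in QR factorisation (rather than an explicit Gram–Schmidt, Householder or Givens implementation) and then performs the back substitution on R w = x_Q by hand; the signal matrix Y and desired vector x are made-up illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 4                      # block length and filter order (illustrative)
Y = rng.standard_normal((N, P))    # N x P input signal matrix (made-up data)
x = rng.standard_normal(N)         # desired signal vector (made-up data)

# QR decomposition: numpy returns the "thin" factors, so Q is N x P and
# R is P x P upper-triangular; Q.T @ x then gives the first P elements
# of the rotated desired vector in Equation (6.25)
Q, R = np.linalg.qr(Y)
x_Q = Q.T @ x

# Back substitution on the upper-triangular system R w = x_Q,
# starting with w[P-1] = x_Q[P-1] / R[P-1, P-1]
w = np.zeros(P)
for k in range(P - 1, -1, -1):
    w[k] = (x_Q[k] - R[k, k+1:] @ w[k+1:]) / R[k, k]

# The result agrees with the direct least-squares solution
w_ls, *_ = np.linalg.lstsq(Y, x, rcond=None)
assert np.allclose(w, w_ls)
```

In practice the QRD route is preferred over forming the normal equations directly, since it avoids squaring the condition number of Y.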
The decomposition of a matrix into QR matrices can be achieved using a number of methods, including the Gram–Schmidt orthogonalisation method, the Householder method and the Givens rotation method.

Figure 6.3 The least square error projection of a desired signal vector x onto a plane containing the input signal vectors y_1 and y_2 is the perpendicular projection of x, shown as the shaded vector.

6.3 Interpretation of Wiener Filters as Projection in Vector Space

In this section, we consider an alternative formulation of Wiener filters where the least square error estimate is visualized as the perpendicular minimum-distance projection of the desired signal vector onto the vector space of the input signal.
A vector space is the collection of an infinite number of vectors that can be obtained from linear combinations of a number of independent vectors.

In order to develop a vector space interpretation of the least square error estimation problem, we rewrite the matrix Equation (6.11) and express the filter output vector x̂ as a linear weighted combination of the column vectors of the input signal matrix as

\begin{bmatrix} \hat{x}(0) \\ \hat{x}(1) \\ \hat{x}(2) \\ \vdots \\ \hat{x}(N-2) \\ \hat{x}(N-1) \end{bmatrix}
= w_0 \begin{bmatrix} y(0) \\ y(1) \\ y(2) \\ \vdots \\ y(N-2) \\ y(N-1) \end{bmatrix}
+ w_1 \begin{bmatrix} y(-1) \\ y(0) \\ y(1) \\ \vdots \\ y(N-3) \\ y(N-2) \end{bmatrix}
+ \cdots
+ w_{P-1} \begin{bmatrix} y(1-P) \\ y(2-P) \\ y(3-P) \\ \vdots \\ y(N-1-P) \\ y(N-P) \end{bmatrix}   (6.26)

In compact notation, Equation (6.26) may be written as

x̂ = w_0 y_0 + w_1 y_1 + ⋯ + w_{P−1} y_{P−1}   (6.27)

In Equation (6.27) the signal estimate x̂ is a linear combination of P basis vectors [y_0, y_1, ..., y_{P−1}], and hence it can be said that the estimate x̂ is in the vector subspace formed by the input signal vectors [y_0, y_1, ..., y_{P−1}].

In general, the P N-dimensional input signal vectors [y_0, y_1, ..., y_{P−1}] in Equation (6.27) define the basis vectors for a subspace in an N-dimensional signal space. If P, the number of basis vectors, is equal to N, the vector dimension, then the subspace defined by the input signal vectors encompasses the entire N-dimensional signal space and includes the desired signal vector x. In this case, the signal estimate x̂ = x and the estimation error is zero.
However, in practice N > P, and the signal space defined by the P input signal vectors of Equation (6.27) is only a subspace of the N-dimensional signal space. In this case, the estimation error is zero only if the desired signal x happens to be in the subspace of the input signal; otherwise the best estimate of x is the perpendicular projection of the vector x onto the vector space of the input signal [y_0, y_1, ..., y_{P−1}], as explained in the following example.

Example 6.1 Figure 6.3 illustrates a vector space interpretation of a simple least square error estimation problem, where y^T = [y(2), y(1), y(0), y(−1)] is the input signal, x^T = [x(2), x(1), x(0)] is the desired signal and w^T = [w_0, w_1] is the filter coefficient vector.
As in Equation (6.26), the filter output can be written as

\begin{bmatrix} \hat{x}(2) \\ \hat{x}(1) \\ \hat{x}(0) \end{bmatrix}
= w_0 \begin{bmatrix} y(2) \\ y(1) \\ y(0) \end{bmatrix}
+ w_1 \begin{bmatrix} y(1) \\ y(0) \\ y(-1) \end{bmatrix}   (6.28)

In Equation (6.28), the input signal vectors y_1^T = [y(2), y(1), y(0)] and y_2^T = [y(1), y(0), y(−1)] are 3-dimensional vectors. The subspace defined by the linear combinations of the two input vectors [y_1, y_2] is a 2-dimensional plane in a 3-dimensional signal space. The filter output is a linear combination of y_1 and y_2, and hence it is confined to the plane containing these two vectors.
The least square error estimate of x is the orthogonal projection of x onto the plane of [y_1, y_2], as shown by the shaded vector x̂. If the desired vector happens to be in the plane defined by the vectors y_1 and y_2 then the estimation error will be zero; otherwise the estimation error will be the perpendicular distance of x from the plane containing y_1 and y_2.

6.4 Analysis of the Least Mean Square Error Signal

The optimality criterion in the formulation of the Wiener filter is the least mean square distance between the filter output and the desired signal. In this section, the variance of the filter error signal is analysed. Substituting the Wiener equation R_yy w = r_yx in Equation (6.5) gives the least mean square error:

E[e^2(m)] = r_{xx}(0) - w^T r_{yx}
          = r_{xx}(0) - w^T R_{yy} w   (6.29)

Now, for zero-mean signals, it is easy to show that in Equation (6.29) the term w^T R_yy w is the variance of the Wiener filter output x̂(m):

\sigma_{\hat{x}}^2 = E[\hat{x}^2(m)] = w^T R_{yy} w   (6.30)

Therefore Equation (6.29) may be written as

\sigma_e^2 = \sigma_x^2 - \sigma_{\hat{x}}^2   (6.31)

where σ_x² = E[x²(m)], σ_x̂² = E[x̂²(m)] and σ_e² = E[e²(m)] are the variances of the desired signal, the filter estimate of the desired signal and the error signal, respectively.
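The variance relation (6.31) can be checked numerically. The following is a minimal sketch in which the filter is obtained by least squares over a block of lagged inputs; the AR(1) desired signal, noise level and filter order are all made-up illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 2000, 8                          # block length and filter order (illustrative)

# Made-up zero-mean AR(1) desired signal observed in additive white noise
x = np.zeros(N)
for m in range(1, N):
    x[m] = 0.9 * x[m-1] + rng.standard_normal()
y = x + 0.5 * rng.standard_normal(N)    # noisy filter input

# N x P matrix of lagged inputs [y_0, y_1, ..., y_{P-1}]
Y = np.column_stack([np.roll(y, k) for k in range(P)])
Y[:P, :] = 0                            # discard the wrapped-around samples

# Least square error filter, its output and the error signal
w, *_ = np.linalg.lstsq(Y, x, rcond=None)
x_hat = Y @ w
e = x - x_hat

# Sample version of Equation (6.31): since the error is orthogonal to the
# filter output, var(e) = var(x) - var(x_hat) holds exactly over the block
var_x, var_xhat, var_e = np.mean(x**2), np.mean(x_hat**2), np.mean(e**2)
assert np.isclose(var_e, var_x - var_xhat)
```

The identity holds exactly here because the least-squares error vector is orthogonal to the subspace spanned by the columns of Y, which is the projection interpretation of Section 6.3 restated in terms of variances.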
In general, the filter input y(m) is composed of a signal component x_c(m) and a random noise n(m):

y(m) = x_c(m) + n(m)   (6.32)

where the signal x_c(m) is the part of the observation that is correlated with the desired signal x(m), and it is this part of the input signal that may be transformable through a Wiener filter to the desired signal. Using Equation (6.32), the Wiener filter error may be decomposed into two distinct components:

e(m) = x(m) - \sum_{k=0}^{P-1} w_k y(m-k)
     = \left[ x(m) - \sum_{k=0}^{P-1} w_k x_c(m-k) \right] - \sum_{k=0}^{P-1} w_k n(m-k)   (6.33)

or

e(m) = e_x(m) + e_n(m)   (6.34)

where e_x(m) is the difference between the desired signal x(m) and the output of the filter in response to the input signal component x_c(m), i.e.

e_x(m) = x(m) - \sum_{k=0}^{P-1} w_k x_c(m-k)   (6.35)

and e_n(m) is the error in the output due to the presence of noise n(m) in the input signal:

e_n(m) = - \sum_{k=0}^{P-1} w_k n(m-k)   (6.36)

The variance of the filter error can be rewritten as

\sigma_e^2 = \sigma_{e_x}^2 + \sigma_{e_n}^2   (6.37)

Note that in Equation (6.34), e_x(m) is that part of the signal that cannot be recovered by the Wiener filter, and represents distortion in the signal output, and e_n(m) is that part of the noise that cannot be blocked by the Wiener filter.
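This decomposition can be illustrated numerically. The sketch below assumes, purely for illustration, that the clean signal component x_c(m) and the noise n(m) are separately available, which is only possible in a simulation; the signal model and parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 2000, 8                           # block length and filter order (illustrative)

# Simulated clean AR(1) signal component and additive white noise (made-up data)
x_c = np.zeros(N)
for m in range(1, N):
    x_c[m] = 0.9 * x_c[m-1] + rng.standard_normal()
n = 0.5 * rng.standard_normal(N)
x = x_c                                   # here the desired signal is the clean component
y = x_c + n                               # observed input, Equation (6.32)

def lagged(v, P):
    """N x P matrix whose k-th column is v delayed by k samples."""
    M = np.column_stack([np.roll(v, k) for k in range(P)])
    M[:P, :] = 0                          # discard wrapped-around samples
    return M

Y, Xc, Nn = lagged(y, P), lagged(x_c, P), lagged(n, P)
w, *_ = np.linalg.lstsq(Y, x, rcond=None)

e   = x - Y @ w            # total filter error, Equation (6.33)
e_x = x - Xc @ w           # signal distortion component, Equation (6.35)
e_n = -(Nn @ w)            # residual noise component, Equation (6.36)
assert np.allclose(e, e_x + e_n)          # Equation (6.34)
```

Because filtering is linear in the input, the split e = e_x + e_n holds sample by sample; the variance relation (6.37) additionally requires e_x and e_n to be uncorrelated, so it holds only approximately on a finite block.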
Ideally, e_x(m) = 0 and e_n(m) = 0, but this ideal situation is possible only if the following conditions are satisfied:

(a) The spectra of the signal and the noise are separable by a linear filter.
(b) The signal component of the input, that is x_c(m), is linearly transformable to x(m).
(c) The filter length P is sufficiently large.

The issue of signal and noise separability is addressed in Section 6.6.

6.5 Formulation of Wiener Filters in the Frequency Domain

In the frequency domain, the Wiener filter output X̂(f) is the product of the input signal Y(f) and the filter frequency response W(f):

\hat{X}(f) = W(f) Y(f)   (6.38)

The estimation error signal E(f) is defined as the difference between the desired signal X(f) and the filter output X̂(f):

E(f) = X(f) - \hat{X}(f)
     = X(f) - W(f) Y(f)   (6.39)

and the mean square error at a frequency f is given by

E[|E(f)|^2] = E[(X(f) - W(f)Y(f))^* (X(f) - W(f)Y(f))]   (6.40)

where E[·] is the expectation operator, and the symbol * denotes the complex conjugate.
Note from Parseval's theorem that the mean square errors in the time and frequency domains are related by

\sum_{m=0}^{N-1} e^2(m) = \int_{-1/2}^{1/2} |E(f)|^2 \, df   (6.41)

To obtain the least mean square error filter we set the complex derivative of Equation (6.40) with respect to the filter W(f) to zero:

\frac{\partial E[|E(f)|^2]}{\partial W(f)} = 2 W(f) P_{YY}(f) - 2 P_{XY}(f) = 0   (6.42)

where P_YY(f) = E[Y(f)Y*(f)] and P_XY(f) = E[X(f)Y*(f)] are the power spectrum of Y(f), and the cross-power spectrum of Y(f) and X(f), respectively. From Equation (6.42), the least mean square error Wiener filter in the frequency domain is given as

W(f) = \frac{P_{XY}(f)}{P_{YY}(f)}   (6.43)

Alternatively, the frequency-domain Wiener filter Equation (6.43) can be obtained from the Fourier transform of the time-domain Wiener Equation (6.9):

\sum_m \sum_{k=0}^{P-1} w_k r_{yy}(m-k) e^{-j\omega m} = \sum_m r_{yx}(m) e^{-j\omega m}   (6.44)

From the Wiener–Khinchine relation, the correlation and power-spectral functions are Fourier transform pairs.
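A minimal frequency-domain sketch of Equation (6.43) follows, estimating P_XY(f) and P_YY(f) by averaging periodograms over independent segments (a Welch-style estimate); the AR(1) signal model, segment count and FFT length are illustrative assumptions, and in practice only y(m) and second-order statistics of the signal and noise would be available:

```python
import numpy as np

rng = np.random.default_rng(3)
n_seg, L = 200, 256                     # number of segments and FFT length (illustrative)

num = np.zeros(L, dtype=complex)        # accumulator for the cross-power spectrum P_XY(f)
den = np.zeros(L)                       # accumulator for the power spectrum P_YY(f)
for _ in range(n_seg):
    # Made-up AR(1) desired signal observed in additive white noise
    x = np.zeros(L)
    for m in range(1, L):
        x[m] = 0.9 * x[m-1] + rng.standard_normal()
    y = x + rng.standard_normal(L)
    X, Y = np.fft.fft(x), np.fft.fft(y)
    num += X * np.conj(Y)               # accumulate E[X(f)Y*(f)]
    den += np.abs(Y)**2                 # accumulate E[Y(f)Y*(f)]

# Frequency-domain Wiener filter, Equation (6.43)
W = num / den

# For this lowpass signal in white noise, the filter gain is close to 1
# at low frequencies and small where the noise dominates
assert np.all(np.isfinite(W))
assert np.abs(W[0]) > np.abs(W[L // 2])
```

For the additive-noise model of Equation (6.32) with uncorrelated signal and noise, P_XY(f) reduces to P_XX(f) and P_YY(f) to P_XX(f) + P_NN(f), which gives the familiar gain P_XX(f)/(P_XX(f) + P_NN(f)) between 0 and 1 at each frequency.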