Linear Prediction Models (Vaseghi, Advanced Digital Signal Processing and Noise Reduction), page 5
Since the noise is additive, we have

$$f_{Y|A,X}(\mathbf{y}\,|\,\hat{\mathbf{a}},\mathbf{x}) = f_N(\mathbf{y}-\mathbf{x}) = \frac{1}{(2\pi\sigma_n^2)^{N/2}} \exp\!\left(-\frac{(\mathbf{y}-\mathbf{x})^T(\mathbf{y}-\mathbf{x})}{2\sigma_n^2}\right) \quad (8.86)$$

Assuming that the input of the predictor model is a zero-mean Gaussian process with variance $\sigma_e^2$, the pdf of the signal x given an estimate of the predictor coefficient vector a is

$$f_{X|A}(\mathbf{x}\,|\,\hat{\mathbf{a}}) = \frac{1}{(2\pi\sigma_e^2)^{N/2}} \exp\!\left(-\frac{\mathbf{e}^T\mathbf{e}}{2\sigma_e^2}\right) = \frac{1}{(2\pi\sigma_e^2)^{N/2}} \exp\!\left(-\frac{\mathbf{x}^T\hat{A}^T\hat{A}\,\mathbf{x}}{2\sigma_e^2}\right) \quad (8.87)$$

where $\mathbf{e} = \hat{A}\mathbf{x}$ as in Equation (8.69). Substitution of Equations (8.86) and (8.87) in Equation (8.85) yields

$$f_{X|A,Y}(\mathbf{x}\,|\,\hat{\mathbf{a}},\mathbf{y}) = \frac{1}{f_{Y|A}(\mathbf{y}\,|\,\hat{\mathbf{a}})}\,\frac{1}{(2\pi\sigma_n\sigma_e)^{N}} \exp\!\left(-\frac{(\mathbf{y}-\mathbf{x})^T(\mathbf{y}-\mathbf{x})}{2\sigma_n^2} - \frac{\mathbf{x}^T\hat{A}^T\hat{A}\,\mathbf{x}}{2\sigma_e^2}\right) \quad (8.88)$$

In Equation (8.88), for a given signal y and coefficient vector $\hat{\mathbf{a}}$, $f_{Y|A}(\mathbf{y}\,|\,\hat{\mathbf{a}})$ is a constant. From Equation (8.88), the ML signal estimate is obtained by maximising the log-likelihood function:

$$\frac{\partial}{\partial\mathbf{x}}\ln f_{X|A,Y}(\mathbf{x}\,|\,\hat{\mathbf{a}},\mathbf{y}) = \frac{\partial}{\partial\mathbf{x}}\!\left(-\frac{1}{2\sigma_e^2}\,\mathbf{x}^T\hat{A}^T\hat{A}\,\mathbf{x} - \frac{1}{2\sigma_n^2}\,(\mathbf{y}-\mathbf{x})^T(\mathbf{y}-\mathbf{x})\right) = 0 \quad (8.89)$$

which gives

$$\hat{\mathbf{x}} = \sigma_e^2\left(\sigma_n^2\,\hat{A}^T\hat{A} + \sigma_e^2 I\right)^{-1}\mathbf{y} \quad (8.90)$$

The signal estimate of Equation (8.90) can be used to obtain an updated estimate of the predictor parameter.
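The closed-form estimate of Equation (8.90) is straightforward to evaluate numerically. The following is a minimal NumPy sketch, not from the book: the function name and the construction of the prediction-error matrix $\hat{A}$ from the coefficient vector are illustrative choices.

```python
import numpy as np

def map_signal_estimate(y, a, sigma_e2, sigma_n2):
    """Evaluate Equation (8.90): x_hat = sigma_e^2 (sigma_n^2 A^T A + sigma_e^2 I)^{-1} y.

    A is the N x N prediction-error matrix such that e = A x, i.e.
    e(m) = x(m) - sum_k a_k x(m-k), as in Equation (8.69).
    """
    N = len(y)
    A = np.eye(N)
    for k, a_k in enumerate(a, start=1):
        A -= a_k * np.eye(N, k=-k)  # place -a_k on the k-th subdiagonal
    return sigma_e2 * np.linalg.solve(sigma_n2 * (A.T @ A) + sigma_e2 * np.eye(N), y)
```

As a sanity check, with no predictor coefficients the model carries no signal structure and the estimate reduces to the scalar Wiener gain $\sigma_e^2/(\sigma_n^2+\sigma_e^2)$ applied to y.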
Assuming that the signal is a zero-mean Gaussian process, the estimate of the predictor parameter vector a is given by

$$\hat{\mathbf{a}}(\hat{\mathbf{x}}) = \left(\hat{X}^T\hat{X}\right)^{-1}\hat{X}^T\hat{\mathbf{x}} \quad (8.91)$$

Equations (8.90) and (8.91) form the basis for an iterative signal restoration/parameter estimation method.

8.6.1 Frequency-Domain Signal Restoration Using Prediction Models

The following algorithm is a frequency-domain implementation of the linear prediction model-based restoration of a signal observed in additive white noise.

Initialisation: Set the initial signal estimate to the noisy signal, $\hat{\mathbf{x}}_0 = \mathbf{y}$.

For iterations i = 0, 1, ...

Step 1: Estimate the predictor parameter vector $\hat{\mathbf{a}}_i$:

$$\hat{\mathbf{a}}_i(\hat{\mathbf{x}}_i) = \left(\hat{X}_i^T\hat{X}_i\right)^{-1}\hat{X}_i^T\hat{\mathbf{x}}_i \quad (8.92)$$

Step 2: Calculate an estimate of the model gain G using Parseval's theorem:

$$\frac{1}{N}\sum_{f=0}^{N-1}\frac{\hat{G}^2}{\left|1-\sum_{k=1}^{P}\hat{a}_{k,i}\,e^{-j2\pi fk/N}\right|^2} = \sum_{m=0}^{N-1}y^2(m) - N\hat{\sigma}_n^2 \quad (8.93)$$

where $\hat{a}_{k,i}$ are the coefficient estimates at iteration i, and $N\hat{\sigma}_n^2$ is the energy of the white noise over N samples.

Step 3: Calculate an estimate of the power spectrum of the speech model:

$$\hat{P}_{X_iX_i}(f) = \frac{\hat{G}^2}{\left|1-\sum_{k=1}^{P}\hat{a}_{k,i}\,e^{-j2\pi fk/N}\right|^2} \quad (8.94)$$

Step 4: Calculate the Wiener filter frequency response:

$$\hat{W}_i(f) = \frac{\hat{P}_{X_iX_i}(f)}{\hat{P}_{X_iX_i}(f) + \hat{P}_{N_iN_i}(f)} \quad (8.95)$$

where $\hat{P}_{N_iN_i}(f) = \hat{\sigma}_n^2$ is an estimate of the noise power spectrum.

Step 5: Filter the magnitude spectrum of the noisy speech:

$$\hat{X}_{i+1}(f) = \hat{W}_i(f)\,Y(f) \quad (8.96)$$

Restore the time-domain signal $\hat{\mathbf{x}}_{i+1}$ by combining $\hat{X}_{i+1}(f)$ with the phase of the noisy signal and transforming the complex spectrum back to the time domain.

Step 6: Go to Step 1 and repeat until convergence, or for a specified number of iterations.

Figure 8.13 illustrates a block diagram configuration of a Wiener filter using a linear prediction estimate of the signal spectrum.
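The loop above can be sketched in a few lines of NumPy. This is an illustrative toy, not the book's code: the function and variable names are my choices, the coefficients are found by least squares, and the gain is taken from the LP residual power as a simple surrogate rather than solved from Equation (8.93) exactly.

```python
import numpy as np

def restore_iteration(x_hat, y, sigma_n2, P=10):
    """One pass of Steps 1-5 of the frequency-domain restoration algorithm."""
    N = len(y)
    # Step 1: predictor coefficients by least squares, x(m) ~ sum_k a_k x(m-k)
    X = np.column_stack([x_hat[P - k - 1:N - k - 1] for k in range(P)])
    a, *_ = np.linalg.lstsq(X, x_hat[P:], rcond=None)
    # Step 2 (surrogate): model gain from the residual power of the LP fit
    G2 = np.mean((x_hat[P:] - X @ a) ** 2)
    # Step 3: LP model power spectrum, Eq. (8.94); A(f) = 1 - sum_k a_k e^{-j2pi fk/N}
    A_f = np.fft.fft(np.concatenate(([1.0], -a)), N)
    Px = G2 / np.abs(A_f) ** 2
    # Step 4: Wiener filter, Eq. (8.95), with a flat noise spectrum sigma_n2
    W = Px / (Px + sigma_n2)
    # Step 5: filter the magnitude spectrum, keep the noisy phase, Eq. (8.96)
    Y = np.fft.fft(y)
    return np.real(np.fft.ifft(W * np.abs(Y) * np.exp(1j * np.angle(Y))))
```

One would call restore_iteration repeatedly, feeding each returned estimate back in as x_hat, until the estimate stabilises or a fixed iteration count is reached.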
Figure 8.14 illustrates the result of an iterative restoration of the spectrum of a noisy speech signal.

[Figure 8.13: Iterative signal restoration based on a linear prediction model of speech. The noisy signal y(m) = x(m) + n(m) feeds a linear prediction analysis block (producing $\hat{\mathbf{a}}$) and a speech activity detector driving a noise estimator (producing $P_{NN}(f)$); both feed the Wiener filter W(f), which outputs $\hat{x}(m)$.]

[Figure 8.14: Illustration of restoration of a noisy signal with the iterative linear prediction-based method. Panels: original noisy, original noise-free, restored after 2 iterations, restored after 4 iterations.]

8.6.2 Implementation of Sub-Band Linear Prediction Wiener Filters

Assuming that the noise is additive, the noisy signal in each sub-band is modelled as

$$y_k(m) = x_k(m) + n_k(m) \quad (8.97)$$

The Wiener filter in the frequency domain can be expressed in terms of the power spectra, or in terms of the LP model frequency responses, of the signal and noise processes as

$$W_k(f) = \frac{P_{X,k}(f)}{P_{Y,k}(f)} = \frac{g_{X,k}^2\,/\,|A_{X,k}(f)|^2}{g_{Y,k}^2\,/\,|A_{Y,k}(f)|^2} \quad (8.98)$$

where $P_{X,k}(f)$ and $P_{Y,k}(f)$ are the power spectra of the clean signal and the noisy signal for the kth sub-band respectively.
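Since $A_{X,k}(f)$ and $A_{Y,k}(f)$ are just the DFTs of the inverse-filter coefficient vectors, Equation (8.98) is cheap to evaluate. A small sketch for one sub-band; the function name and the fixed sampling grid are illustrative assumptions:

```python
import numpy as np

def lp_wiener_response(a_x, g2_x, a_y, g2_y, N=256):
    """Sub-band Wiener response of Eq. (8.98) from LP models of the clean
    and noisy signals: W(f) = (g_X^2/|A_X(f)|^2) / (g_Y^2/|A_Y(f)|^2)."""
    A_x = np.fft.fft(np.concatenate(([1.0], -np.asarray(a_x, float))), N)
    A_y = np.fft.fft(np.concatenate(([1.0], -np.asarray(a_y, float))), N)
    return (g2_x / np.abs(A_x) ** 2) / (g2_y / np.abs(A_y) ** 2)
```

When the clean and noisy LP spectra coincide, the response collapses to the constant gain ratio $g_{X,k}^2/g_{Y,k}^2$, as expected from Equation (8.98).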
From Equation (8.98) the square-root Wiener filter is given by

$$W_k^{1/2}(f) = \frac{g_{X,k}}{g_{Y,k}}\,\frac{A_{Y,k}(f)}{A_{X,k}(f)} \quad (8.99)$$

The linear prediction Wiener filter of Equation (8.99) can be implemented in the time domain as a cascade of the all-pole filter of the clean-signal predictor, followed by the inverse predictor filter of the noisy signal, as expressed by the following relations (see Figure 8.15):

$$z_k(m) = \sum_{i=1}^{P} a_{Xk}(i)\,z_k(m-i) + \frac{g_X}{g_Y}\,y_k(m) \quad (8.100)$$

$$\hat{x}_k(m) = \sum_{i=0}^{P} a_{Yk}(i)\,z_k(m-i) \quad (8.101)$$

where $\hat{x}_k(m)$ is the restored estimate of the clean speech signal $x_k(m)$, and $z_k(m)$ is an intermediate signal.

[Figure 8.15: A cascade implementation of the LP square-root Wiener filter. The noisy signal, scaled by $g_X/g_Y$, drives the all-pole filter $1/A_X(f)$ of Equation (8.100); its output $z_k(m)$ is filtered by $A_Y(f)$ as in Equation (8.101) to give the restored signal.]

8.7 Summary

Linear prediction models are used in a wide range of signal processing applications, from low-bit-rate speech coding to model-based spectral analysis.
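The two difference equations translate directly into a sample-by-sample loop. A minimal sketch for one sub-band, under the assumption that a_x and a_y hold the predictor coefficients of the clean and noisy models, so that the second stage applies $A_Y(z) = 1 - \sum_i a_Y(i)z^{-i}$; the function and argument names are mine:

```python
import numpy as np

def sqrt_wiener_cascade(y, a_x, g_x, a_y, g_y):
    """Cascade of Equations (8.100)-(8.101): an all-pole filter 1/A_X(z)
    driven by (g_X/g_Y) y(m), followed by the inverse filter A_Y(z)."""
    P = len(a_x)
    z = np.zeros(len(y))
    x_hat = np.zeros(len(y))
    for m in range(len(y)):
        # Eq. (8.100): z(m) = sum_{i=1}^{P} a_X(i) z(m-i) + (g_X/g_Y) y(m)
        z[m] = sum(a_x[i] * z[m - 1 - i] for i in range(min(P, m))) + (g_x / g_y) * y[m]
        # Eq. (8.101) via A_Y(z): x_hat(m) = z(m) - sum_{i=1}^{P} a_Y(i) z(m-i)
        x_hat[m] = z[m] - sum(a_y[i] * z[m - 1 - i] for i in range(min(len(a_y), m)))
    return x_hat
```

As a sanity check on the cascade structure: when the clean and noisy models coincide (a_x = a_y and g_x = g_y), the synthesis and inverse filters cancel and the cascade returns its input unchanged.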
We began this chapter with an introduction to linear prediction theory, and considered different methods of formulation of the prediction problem and derivation of the predictor coefficients. The main attraction of the linear prediction method is the closed-form solution for the predictor coefficients, and the availability of a number of efficient and relatively robust methods for solving the prediction equation, such as the Levinson–Durbin method. In Section 8.2, we considered the forward, backward and lattice predictors. Although the direct-form implementation of the linear predictor is the most convenient method, for many applications, such as transmission of the predictor coefficients in speech coding, it is advantageous to use the lattice form of the predictor.
This is because the lattice form can be conveniently checked for stability, and furthermore a perturbation of the parameters of any section of the lattice structure has a limited and more localised effect. In Section 8.3, we considered a modified form of linear prediction that models the short-term and long-term correlations of the signal. This method can be used for modelling signals with a quasi-periodic structure, such as voiced speech. In Section 8.4, we considered MAP estimation and the use of a prior pdf for the derivation of the predictor coefficients. In Section 8.5, the sub-band linear prediction method was formulated.
Finally, in Section 8.6, a linear prediction model was applied to the restoration of a signal observed in additive noise.