The theory and application of signal processing is concerned with the identification, modelling and utilisation of patterns and structures in a signal process. The observation signals are often distorted, incomplete and noisy. Hence, noise reduction and the removal of channel distortion is an important part of a signal processing system.
The aim of this book is to provide a coherent and structured presentation of the theory and applications of statistical signal processing and noise reduction methods. This book is organised in 15 chapters.

Chapter 1 begins with an introduction to signal processing, and provides a brief review of signal processing methodologies and applications. The basic operations of sampling and quantisation are reviewed in this chapter.
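As a rough, self-contained illustration of the quantisation operation mentioned above (not an example from the book; the function name, bit depth and test signal are assumptions of this sketch), a uniform quantiser can be written as:

```python
import numpy as np

def uniform_quantise(x, n_bits, x_max):
    """Uniform mid-rise quantiser: map samples in [-x_max, x_max]
    onto 2**n_bits equally spaced reconstruction levels."""
    n_levels = 2 ** n_bits
    step = 2.0 * x_max / n_levels                 # quantisation step size
    x = np.clip(x, -x_max, x_max - step)          # avoid overload at the top level
    indices = np.floor((x + x_max) / step)        # integer code for each sample
    return (indices + 0.5) * step - x_max         # mid-point reconstruction levels

# Example: a 440 Hz sine wave sampled at 8 kHz and quantised to 8 bits
fs = 8000
t = np.arange(0, 0.01, 1.0 / fs)
x = 0.9 * np.sin(2 * np.pi * 440 * t)
xq = uniform_quantise(x, n_bits=8, x_max=1.0)
print("max quantisation error:", np.max(np.abs(x - xq)))
```

With the parameters above, the error between x and xq is bounded by half a quantisation step.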
Chapter 2 provides an introduction to noise and distortion. Several different types of noise, including thermal noise, shot noise, acoustic noise, electromagnetic noise and channel distortions, are considered. The chapter concludes with an introduction to the modelling of noise processes.

Chapter 3 provides an introduction to the theory and applications of probability models and stochastic signal processing. The chapter begins with an introduction to random signals, stochastic processes, probabilistic models and statistical measures. The concepts of stationary, non-stationary and ergodic processes are introduced in this chapter, and some important classes of random processes, such as Gaussian, mixture Gaussian, Markov chains and Poisson processes, are considered. The effects of transformation of a signal on its statistical distribution are considered.

Chapter 4 is on Bayesian estimation and classification. In this chapter the estimation problem is formulated within the general framework of Bayesian inference. The chapter includes Bayesian theory, classical estimators, the estimate–maximise method, the Cramér–Rao bound on the minimum-variance estimate, Bayesian classification, and the modelling of the space of a random signal. This chapter provides a number of examples on Bayesian estimation of signals observed in noise.
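For orientation, the Bayesian framework referred to above rests on Bayes' rule. In standard notation (these are textbook forms, not equations reproduced from Chapter 4), the posterior density of a parameter vector θ given an observation x, and the corresponding maximum a posteriori (MAP) estimate, are:

```latex
% Posterior density of a parameter vector \theta given an observation x,
% and the corresponding MAP estimate (standard textbook forms):
f_{\Theta\mid X}(\theta \mid x) = \frac{f_{X\mid\Theta}(x \mid \theta)\, f_{\Theta}(\theta)}{f_X(x)},
\qquad
\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta}\, f_{X\mid\Theta}(x \mid \theta)\, f_{\Theta}(\theta)
```

With a uniform (constant) prior f_Θ(θ), the MAP estimate reduces to the maximum likelihood estimate.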
Chapter 5 considers hidden Markov models (HMMs) for non-stationary signals. The chapter begins with an introduction to the modelling of non-stationary signals and then concentrates on the theory and applications of hidden Markov models. The hidden Markov model is introduced as a Bayesian model, and methods of training HMMs and using them for decoding and classification are considered. The chapter also includes the application of HMMs in noise reduction.

Chapter 6 considers Wiener filters. The least square error filter is formulated first through minimisation of the expectation of the squared error function over the space of the error signal. Then a block-signal formulation of Wiener filters and a vector space interpretation of Wiener filters are considered. The frequency response of the Wiener filter is derived through minimisation of mean square error in the frequency domain. Some applications of the Wiener filter are considered, and a case study of the Wiener filter for removal of additive noise provides useful insight into the operation of the filter.
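As a toy illustration of the least square error filtering idea (a minimal sketch under assumed signals, not the book's case study), the coefficients of an FIR Wiener filter can be obtained by solving the normal equations R w = r built from the input autocorrelation and the input-desired cross-correlation:

```python
import numpy as np

def wiener_fir(x, d, p):
    """Length-p FIR least square error (Wiener) filter, w = R^-1 r,
    where R is the autocorrelation matrix of the input x and r is the
    cross-correlation between x and the desired signal d."""
    N = len(x)
    rxx = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(p)])   # autocorrelation estimate
    rxd = np.array([np.dot(x[:N - k], d[k:]) / N for k in range(p)])   # cross-correlation estimate
    R = np.array([[rxx[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, rxd)

# Noisy observation x = d + n of a narrowband desired signal d
rng = np.random.default_rng(0)
n_samples = 4000
d = np.sin(2 * np.pi * 0.05 * np.arange(n_samples))
x = d + 0.5 * rng.standard_normal(n_samples)

w = wiener_fir(x, d, p=16)
d_hat = np.convolve(x, w)[:n_samples]
print("noisy MSE   :", np.mean((x - d) ** 2))
print("filtered MSE:", np.mean((d_hat - d) ** 2))
```

The filtered error printed at the end should be noticeably below that of the raw noisy observation, since the filter attenuates the noise outside the narrow band occupied by the desired signal.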
Chapter 7 considers adaptive filters. The chapter begins with the state-space equation for Kalman filters. The optimal filter coefficients are derived using the principle of orthogonality of the innovation signal. The recursive least squared (RLS) filter, which is an exact sample-adaptive implementation of the Wiener filter, is derived in this chapter. Then the steepest-descent search method for the optimal filter is introduced. The chapter concludes with a study of the LMS adaptive filters.
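As a minimal sketch of LMS adaptation (the signal model, filter length and step size are illustrative assumptions, not taken from the book):

```python
import numpy as np

def lms(x, d, p=8, mu=0.01):
    """LMS adaptive FIR filter: at each sample the coefficient vector w
    moves along the negative gradient estimate, w <- w + mu * e(m) * x_m,
    where x_m holds the p most recent input samples and e(m) is the error."""
    w = np.zeros(p)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for m in range(p - 1, len(x)):
        x_m = x[m - p + 1:m + 1][::-1]   # [x(m), x(m-1), ..., x(m-p+1)]
        y[m] = w @ x_m                   # filter output
        e[m] = d[m] - y[m]               # error signal
        w += mu * e[m] * x_m             # coefficient update
    return w, y, e

# Identify an unknown FIR system h from its noisy output
rng = np.random.default_rng(1)
h = np.array([0.6, -0.3, 0.1])
x = rng.standard_normal(20000)
d = np.convolve(x, h)[:len(x)] + 0.05 * rng.standard_normal(len(x))
w, _, _ = lms(x, d, p=8, mu=0.01)
print("leading estimated coefficients:", np.round(w[:3], 3))
```

The step size mu trades convergence speed against steady-state error.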
Chapter 8 considers linear prediction and sub-band linear prediction models. Forward prediction, backward prediction and lattice predictors are studied. This chapter introduces a modified predictor for the modelling of the short-term and the pitch period correlation structures. A maximum a posteriori (MAP) estimate of a predictor model that includes the prior probability density function of the predictor is introduced. This chapter concludes with the application of linear prediction in signal restoration.
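As a small numerical illustration of forward linear prediction (a generic autocorrelation-method sketch, not the book's sub-band or pitch-based predictors; the AR(2) test signal is an assumption):

```python
import numpy as np

def lpc(x, p):
    """Forward linear predictor of order p from the autocorrelation
    (Yule-Walker) normal equations R a = r, so that
    x_hat(m) = a_1*x(m-1) + ... + a_p*x(m-p)."""
    N = len(x)
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:p + 1])

# Fit a 2nd-order predictor to a synthetic AR(2) signal
rng = np.random.default_rng(2)
a_true = np.array([1.5, -0.7])
x = np.zeros(10000)
for m in range(2, len(x)):
    x[m] = a_true[0] * x[m - 1] + a_true[1] * x[m - 2] + rng.standard_normal()
print("true AR coefficients      :", a_true)
print("estimated LPC coefficients:", np.round(lpc(x, p=2), 3))
```

For a signal generated by a stable AR model, the estimated predictor coefficients approach the model coefficients as the data length grows.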
Chapter 9 considers frequency analysis and power spectrum estimation. The chapter begins with an introduction to the Fourier transform, and the role of the power spectrum in identification of patterns and structures in a signal process. The chapter considers non-parametric spectral estimation, model-based spectral estimation, the maximum entropy method, and high-resolution spectral estimation based on eigenanalysis.

Chapter 10 considers interpolation of a sequence of unknown samples. This chapter begins with a study of the ideal interpolation of a band-limited signal, a simple model for the effects of a number of missing samples, and the factors that affect interpolation. Interpolators are divided into two categories: polynomial and statistical interpolators. A general form of polynomial interpolation as well as its special forms (Lagrange, Newton, Hermite and cubic spline interpolators) are considered. Statistical interpolators in this chapter include maximum a posteriori interpolation, least squared error interpolation based on an autoregressive model, time–frequency interpolation, and interpolation through search of an adaptive codebook for the best signal.
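As a small illustration of the ideal band-limited interpolation from which Chapter 10 starts (the truncated sinc kernel and the test signal are assumptions of this sketch):

```python
import numpy as np

def sinc_interpolate(samples, fs, t):
    """Band-limited (Shannon) interpolation, truncated to the available
    samples: x(t) = sum_m samples[m] * sinc(fs*t - m)."""
    m = np.arange(len(samples))
    return np.array([np.dot(samples, np.sinc(fs * ti - m)) for ti in np.atleast_1d(t)])

# A 5 Hz signal, band-limited well below fs/2, sampled at fs = 100 Hz
fs = 100.0
n = np.arange(200)
samples = np.sin(2 * np.pi * 5.0 * n / fs)

# Reconstruct values halfway between sample instants, away from the edges
t_mid = (n[50:150] + 0.5) / fs
x_hat = sinc_interpolate(samples, fs, t_mid)
x_true = np.sin(2 * np.pi * 5.0 * t_mid)
print("max interpolation error:", np.max(np.abs(x_hat - x_true)))
```

The residual error here comes only from truncating the infinite sinc sum to the available samples.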
Chapter 11 considers spectral subtraction. A general form of spectral subtraction is formulated and the processing distortions that result from spectral subtraction are considered. The effects of processing distortions on the distribution of a signal are illustrated. The chapter considers methods for removal of the distortions and also non-linear methods of spectral subtraction. This chapter concludes with an implementation of spectral subtraction for signal restoration.
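As a bare-bones illustration of magnitude spectral subtraction (the frame length, the absence of overlap-add and windowing, and the noise-only segment are simplifying assumptions of this sketch, not the implementation described in Chapter 11):

```python
import numpy as np

def spectral_subtraction(noisy, noise_mag, frame_len=256):
    """Frame-based magnitude spectral subtraction (no overlap, for brevity):
    restored magnitude = max(|Y(f)| - |N(f)|, 0); the noisy phase is kept."""
    restored = np.zeros_like(noisy)
    for start in range(0, len(noisy) - frame_len + 1, frame_len):
        Y = np.fft.rfft(noisy[start:start + frame_len])
        mag = np.maximum(np.abs(Y) - noise_mag, 0.0)            # subtract and clamp at zero
        restored[start:start + frame_len] = np.fft.irfft(
            mag * np.exp(1j * np.angle(Y)), frame_len)
    return restored

rng = np.random.default_rng(3)
frame_len = 256
clean = np.sin(2 * np.pi * 0.03 * np.arange(8 * frame_len))
noisy = clean + 0.3 * rng.standard_normal(len(clean))

# Noise magnitude spectrum estimated from a noise-only segment (assumed available)
noise_only = 0.3 * rng.standard_normal(len(clean))
noise_mag = np.mean(np.abs(np.fft.rfft(noise_only.reshape(-1, frame_len), axis=1)), axis=0)

restored = spectral_subtraction(noisy, noise_mag, frame_len)
print("noisy MSE   :", np.mean((noisy - clean) ** 2))
print("restored MSE:", np.mean((restored - clean) ** 2))
```

Clamping the subtracted magnitudes at zero is one source of the processing distortions mentioned above.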
Chapters 12 and 13 cover the modelling, detection and removal of impulsive noise and transient noise pulses. In Chapter 12, impulsive noise is modelled as a binary-state non-stationary process and several stochastic models for impulsive noise are considered. For removal of impulsive noise, median filters and a method based on a linear prediction model of the signal process are considered. The materials in Chapter 13 closely follow Chapter 12. In Chapter 13, a template-based method, an HMM-based method and an AR model-based method for removal of transient noise are considered.
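As a quick illustration of median filtering for impulsive noise removal (the window length and the impulse model are assumptions of this sketch):

```python
import numpy as np

def median_filter(x, window=5):
    """Sliding-window median filter; edges are handled by reflecting the
    signal. Effective against isolated impulses, unlike a linear smoother."""
    half = window // 2
    padded = np.pad(x, half, mode="reflect")
    return np.array([np.median(padded[m:m + window]) for m in range(len(x))])

rng = np.random.default_rng(4)
clean = np.sin(2 * np.pi * 0.01 * np.arange(1000))
noisy = clean.copy()
positions = rng.choice(len(clean), size=20, replace=False)   # impulse locations
noisy[positions] += rng.choice([-3.0, 3.0], size=20)         # binary-amplitude impulses

filtered = median_filter(noisy, window=5)
print("MSE with impulses:", np.mean((noisy - clean) ** 2))
print("MSE after median :", np.mean((filtered - clean) ** 2))
```

A median over a short window passes a slowly varying signal largely unchanged while rejecting isolated outliers.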
Chapter 14 covers echo cancellation. The chapter begins with an introduction to telephone line echoes, and considers line echo suppression and adaptive line echo cancellation. Then the problem of acoustic echoes and acoustic coupling between loudspeaker and microphone systems is considered. The chapter concludes with a study of a sub-band echo cancellation system.

Chapter 15 is on blind deconvolution and channel equalisation. This chapter begins with an introduction to channel distortion models and the ideal channel equaliser. Then the Wiener equaliser, blind equalisation using the channel input power spectrum, blind deconvolution based on linear predictive models, Bayesian channel equalisation, and blind equalisation for digital communication channels are considered. The chapter concludes with equalisation of maximum phase channels using higher-order statistics.

Saeed Vaseghi
June 2000

FREQUENTLY USED SYMBOLS AND ABBREVIATIONS

AWGN                          Additive white Gaussian noise
ARMA                          Autoregressive moving average process
AR                            Autoregressive process
A                             Matrix of predictor coefficients
a_k                           Linear predictor coefficients
a                             Linear predictor coefficients vector
a_ij                          Probability of transition from state i to state j in a Markov model
α_i(t)                        Forward probability in an HMM
bps                           Bits per second
b(m)                          Backward prediction error
b(m)                          Binary state signal
β_i(t)                        Backward probability in an HMM
c_xx(m)                       Covariance of signal x(m)
c_XX(k_1, k_2, ..., k_N)      kth order cumulant of x(m)
C_XX(ω_1, ω_2, ..., ω_{k−1})  kth order cumulant spectra of x(m)
D                             Diagonal matrix
e(m)                          Estimation error
E[x]                          Expectation of x
f                             Frequency variable
f_X(x)                        Probability density function for process X
f_X,Y(x, y)                   Joint probability density function of X and Y
f_X|Y(x|y)                    Probability density function of X conditioned on Y
f_X;Θ(x; θ)                   Probability density function of X with θ as a parameter
f_X|S,M(x|s, M)               Probability density function of X given a state sequence s of an HMM M of the process X
Φ(m, m−1)                     State transition matrix in Kalman filter
h                             Filter coefficient vector, Channel response
h_max                         Maximum-phase channel response
h_min                         Minimum-phase channel response
h_inv                         Inverse channel response
H(f)                          Channel frequency response
H_inv(f)                      Inverse channel frequency response
H                             Observation matrix, Distortion matrix
I                             Identity matrix
J                             Fisher's information matrix
|J|                           Jacobian of a transformation
K(m)                          Kalman gain matrix
LSE                           Least square error
LSAR                          Least square AR interpolation
λ                             Eigenvalue
Λ                             Diagonal matrix of eigenvalues
MAP                           Maximum a posteriori estimate
MA                            Moving average process
ML                            Maximum likelihood estimate
MMSE                          Minimum mean squared error estimate
m                             Discrete time index
m_k                           kth order moment
M                             A model, e.g.
µ
µ_x
n(m)
n(m)
n_i(m)
N(f)
N*(f)
N(f)
N(x, µ_xx, Σ_xx)
O(·)
P
pdf
pmf
P_X(x_i)
P_X,Y(x_i, y_j)
P_X|Y(x_i|y_j)
P_NN(f)
P_XX(f)















