Transforms and Filters for Stochastic Processes (Mertins - Signal Analysis (Revised Edition))
Signal Analysis: Wavelets, Filter Banks, Time-Frequency Transforms and Applications. Alfred Mertins. Copyright © 1999 John Wiley & Sons Ltd. Print ISBN 0-471-98626-7, Electronic ISBN 0-470-84183-4.

Chapter 5
Transforms and Filters for Stochastic Processes

In this chapter, we consider the optimal processing of random signals. We start with transforms that have optimal approximation properties, in the least-squares sense, for continuous and discrete-time signals, respectively. Then we discuss the relationships between discrete transforms, optimal linear estimators, and optimal linear filters.

5.1 The Continuous-Time Karhunen-Loève Transform

Among all linear transforms, the Karhunen-Loève transform (KLT) is the one which best approximates a stochastic process in the least-squares sense. Furthermore, the KLT is a signal expansion with uncorrelated coefficients. These properties make it interesting for many signal processing applications such as coding and pattern recognition.
The transform can be formulated for continuous-time and discrete-time processes. In this section, we sketch the continuous-time case [81], [149]. The discrete-time case will be discussed in the next section in greater detail.

Consider a real-valued continuous-time random process x(t), a < t < b. We may not assume that every sample function of the random process lies in L2(a, b) and can be represented exactly via a series expansion.
Therefore, a weaker condition is formulated, which states that we are looking for a series expansion that represents the stochastic process in the mean:¹

    x(t) = \operatorname{l.i.m.}_{N \to \infty} \sum_{i=1}^{N} a_i \varphi_i(t).    (5.1)

¹ l.i.m. = limit in the mean [38].

The "unknown" orthonormal basis {φ_i(t); i = 1, 2, ...} has to be derived from the properties of the stochastic process. For this, we require that the coefficients

    a_i = (x, \varphi_i) = \int_a^b x(t)\,\varphi_i(t)\,dt    (5.2)

of the series expansion are uncorrelated. This can be expressed as

    E\{a_i a_j\} \overset{!}{=} \lambda_j \delta_{ij}.    (5.3)

The kernel of the integral representation in (5.3) is the autocorrelation function

    r_{xx}(t, u) = E\{x(t)\,x(u)\}.    (5.4)

We see that (5.3) is satisfied if

    E\{a_i a_j\} = \int_a^b \varphi_i(t) \int_a^b r_{xx}(t, u)\,\varphi_j(u)\,du\,dt = \lambda_j \delta_{ij}.    (5.5)

Comparing (5.5) with the orthonormality relation δ_ij = ∫_a^b φ_i(t) φ_j(t) dt, we realize that

    \int_a^b r_{xx}(t, u)\,\varphi_j(u)\,du = \lambda_j \varphi_j(t), \quad a \le t \le b,    (5.6)

must hold in order to satisfy (5.5). Thus, the solutions φ_j(t), j = 1, 2, ... of the integral equation (5.6) form the desired orthonormal basis. These functions are also called eigenfunctions of the integral operator in (5.6). The values λ_j, j = 1, 2, ... are the eigenvalues. If the kernel r_xx(t, u) is positive definite, that is, if ∫∫ r_xx(t, u) z(t) z(u) dt du > 0 for all z(t) ∈ L2(a, b), then the eigenfunctions form a complete orthonormal basis for L2(a, b). Further properties and particular solutions of the integral equation are for instance discussed in [149].

Signals can be approximated by carrying out the summation in (5.1) only for i = 1, 2,
..., M with finite M. The mean approximation error produced thereby is the sum of those eigenvalues λ_j whose corresponding eigenfunctions are not used for the representation. Thus, we obtain an approximation with minimal mean square error if those eigenfunctions are used which correspond to the largest eigenvalues.

In practice, solving an integral equation represents a major problem. Therefore the continuous-time KLT is of minor interest with regard to practical applications. However, theoretically, that is, without solving the integral equation, this transform is an enormous help. We can describe stochastic processes by means of uncorrelated coefficients, solve estimation or recognition problems for vectors with uncorrelated components, and then interpret the results for the continuous-time case.

5.2 The Discrete Karhunen-Loève Transform

We consider a real-valued zero-mean random process

    x = [x_1, \ldots, x_n]^T, \quad x \in \mathbb{R}^n.    (5.7)

The restriction to zero-mean processes means no loss of generality, since any process z with mean m_z can be translated into a zero-mean process x by

    x = z - m_z.    (5.8)

With an orthonormal basis U = [u_1,
..., u_n], the process can be written as

    x = U a,    (5.9)

where the representation

    a = [a_1, \ldots, a_n]^T    (5.10)

is given by

    a = U^T x.    (5.11)

As for the continuous-time case, we derive the KLT by demanding uncorrelated coefficients:

    E\{a_i a_j\} = \lambda_j \delta_{ij}, \quad i, j = 1, \ldots, n.    (5.12)

The scalars λ_j, j = 1, ..., n are unknown real numbers with λ_j ≥ 0.
From (5.9) and (5.12) we obtain

    E\{u_i^T x\, x^T u_j\} = \lambda_j \delta_{ij}, \quad i, j = 1, \ldots, n.    (5.13)

With

    R_{xx} = E\{x x^T\}    (5.14)

this can be written as

    u_i^T R_{xx} u_j = \lambda_j \delta_{ij}, \quad i, j = 1, \ldots, n.    (5.15)

We observe that because of u_i^T u_j = δ_ij, equation (5.15) is satisfied if the vectors u_j, j = 1, ..., n are solutions to the eigenvalue problem

    R_{xx} u_j = \lambda_j u_j, \quad j = 1, \ldots, n.    (5.16)

Since R_xx is a covariance matrix, the eigenvalue problem has the following properties:

1. Only real eigenvalues λ_i exist.
2. A covariance matrix is positive definite or positive semidefinite, that is, for all eigenvalues we have λ_i ≥ 0.
3.
Eigenvectors that belong to different eigenvalues are orthogonal to one another.
4. If multiple eigenvalues occur, their eigenvectors are linearly independent and can be chosen to be orthogonal to one another.

Thus, we see that n orthogonal eigenvectors always exist. By normalizing the eigenvectors, we obtain the orthonormal basis of the Karhunen-Loève transform.

Complex-Valued Processes. For complex-valued processes x ∈ ℂ^n, condition (5.12) becomes

    E\{a_i a_j^*\} = \lambda_j \delta_{ij}, \quad i, j = 1, \ldots, n.

This yields the eigenvalue problem

    R_{xx} u_j = \lambda_j u_j, \quad j = 1,
..., n, with the covariance matrix

    R_{xx} = E\{x x^H\}.

Again, the eigenvalues are real and non-negative. The eigenvectors are orthogonal to one another such that U = [u_1, ..., u_n] is unitary. From the uncorrelatedness of the complex coefficients we cannot conclude that their real and imaginary parts are also uncorrelated; that is, E{ℜ{a_i} ℑ{a_j}} = 0, i, j = 1, ..., n is not implied.

Best Approximation Property of the KLT. We henceforth assume that the eigenvalues are sorted such that λ_1 ≥
... ≥ λ_n. From (5.12) we get for the variances of the coefficients:

    E\{|a_i|^2\} = \lambda_i, \quad i = 1, \ldots, n.    (5.17)

For the mean-square error of an approximation

    \hat{x} = \sum_{i=1}^{m} a_i u_i, \quad m < n,    (5.18)

we obtain

    D = E\{\|x - \hat{x}\|^2\} = E\left\{\Big\|\sum_{i=m+1}^{n} a_i u_i\Big\|^2\right\} = \sum_{i=m+1}^{n} \lambda_i.    (5.19)

It becomes obvious that an approximation with those eigenvectors u_1, ..., u_m which belong to the largest eigenvalues leads to a minimal error. In order to show that the KLT indeed yields the smallest possible error among all orthonormal linear transforms, we look at the maximization of \sum_{i=1}^{m} E\{|a_i|^2\} under the condition ||u_i|| = 1.
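The discrete KLT (5.16) and the error formula (5.19) can be illustrated numerically. The following is an illustrative sketch, not taken from the book; the process, its dimensions, and the rank m = 3 of the approximation are arbitrary choices:

```python
import numpy as np

# Sketch: estimate Rxx from samples of a hypothetical correlated process,
# solve the eigenvalue problem (5.16), and check the error formula (5.19).
rng = np.random.default_rng(0)
n, N = 8, 200_000

A = rng.normal(size=(n, n))
x = A @ rng.normal(size=(n, N))          # zero-mean samples, shape (n, N)

Rxx = x @ x.T / N                        # sample estimate of E{x x^T}
lam, U = np.linalg.eigh(Rxx)             # eigh returns ascending eigenvalues
lam, U = lam[::-1], U[:, ::-1]           # sort so lambda_1 >= ... >= lambda_n

a = U.T @ x                              # KLT coefficients, cf. (5.11)
Ra = a @ a.T / N                         # covariance of the coefficients

# Rank-3 approximation: mean-square error equals the sum of the
# discarded eigenvalues, cf. (5.19).
D = np.mean(np.sum((x - U[:, :3] @ a[:3, :]) ** 2, axis=0))

print(np.allclose(Ra, np.diag(lam)))     # True: coefficients uncorrelated
print(np.isclose(D, lam[3:].sum()))      # True: error = sum of discarded lambdas
```

Note that `Ra` is diagonal to machine precision because `U` diagonalizes the *sample* covariance exactly; with the true covariance the off-diagonal terms would only vanish as N grows.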
With a_i = u_i^H x this means

    \sum_{i=1}^{m} \left[ u_i^H R_{xx} u_i + \gamma_i \left( 1 - u_i^H u_i \right) \right] \to \max,    (5.20)

where γ_i are Lagrange multipliers. Setting the gradient to zero yields

    R_{xx} u_i = \gamma_i u_i,    (5.21)

which is nothing but the eigenvalue problem (5.16) with γ_i = λ_i.

Figure 5.1 gives a geometric interpretation of the properties of the KLT. We see that u_1 points towards the largest deviation from the center of gravity m.

[Figure 5.1. Contour lines of the pdf of a process x = [x_1, x_2]^T.]

Minimal Geometric Mean Property of the KLT. For any positive definite matrix X = [X_ij], i, j = 1,
..., n, the following inequality holds [7]:

    \prod_{i=1}^{n} X_{ii} \ge \det X.    (5.22)

Equality is given if X is diagonal. Since the KLT leads to a diagonal covariance matrix of the representation, this means that the KLT leads to random variables with a minimal geometric mean of the variances. From this, again, optimal properties in signal coding can be concluded [76].

The KLT of White Noise Processes. For the special case that R_xx is the covariance matrix of a white noise process with

    R_{xx} = \sigma^2 I

we have

    \lambda_1 = \lambda_2 = \cdots = \lambda_n = \sigma^2.

Thus, the KLT is not unique in this case. Equation (5.19) shows that a white noise process can be optimally approximated with any orthonormal basis.
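Both statements lend themselves to a quick numerical check. The sketch below (my own illustration; the matrices and dimensions are arbitrary) verifies that a white-noise covariance stays diagonal under any orthonormal basis, and that the inequality (5.22) holds for a generic positive definite matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 5, 2.0

# White noise: Rxx = sigma^2 I is diagonalized by *any* orthonormal U.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))   # a random orthonormal basis
Ra = Q.T @ (sigma2 * np.eye(n)) @ Q
print(np.allclose(Ra, sigma2 * np.eye(n)))     # True

# Inequality (5.22): det(X) <= product of diagonal entries for positive
# definite X, with equality iff X is diagonal (Hadamard's inequality).
B = rng.normal(size=(n, n))
X = B @ B.T + np.eye(n)                        # positive definite, not diagonal
print(np.linalg.det(X) <= np.prod(np.diag(X))) # True
```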
Relationships between Covariance Matrices. In the following we will briefly list some relationships between covariance matrices. With

    \Lambda = E\{a a^H\} = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{bmatrix},    (5.23)

we can write (5.15) as

    \Lambda = U^H R_{xx} U.    (5.24)

Observing U^H = U^{-1}, we obtain

    R_{xx} = U \Lambda U^H.    (5.25)

Assuming that all eigenvalues are larger than zero, Λ^{-1} is given by

    \Lambda^{-1} = \begin{bmatrix} \lambda_1^{-1} & & 0 \\ & \ddots & \\ 0 & & \lambda_n^{-1} \end{bmatrix}.    (5.26)

Finally, for R_xx^{-1} we obtain

    R_{xx}^{-1} = U \Lambda^{-1} U^H.    (5.27)

Application Example. In pattern recognition it is important to classify signals by means of a few concise features. The signals considered in this example are taken from inductive loops embedded in the pavement of a highway in order to measure the change of inductivity while vehicles pass over them.
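Before continuing with the example, the relationships (5.24)-(5.27) can be verified numerically for the real-valued case (U^H = U^T). A minimal sketch with an arbitrary positive definite covariance:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
B = rng.normal(size=(n, n))
Rxx = B @ B.T + np.eye(n)        # positive definite, so all eigenvalues > 0

lam, U = np.linalg.eigh(Rxx)
Lam = np.diag(lam)

print(np.allclose(Lam, U.T @ Rxx @ U))   # (5.24): Lambda = U^H Rxx U
print(np.allclose(Rxx, U @ Lam @ U.T))   # (5.25): Rxx = U Lambda U^H
print(np.allclose(np.linalg.inv(Rxx),
                  U @ np.diag(1 / lam) @ U.T))  # (5.26)/(5.27)
```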
The goal is to discriminate different types of vehicles (car, truck, bus, etc.). In the following, we will consider the two groups car and truck. After appropriate pre-processing (normalization of speed, length, and amplitude) we obtain the measured signals shown in Figure 5.2, which are typical examples of the two classes.
The stochastic processes considered are z_1 (car) and z_2 (truck). The realizations are denoted as {}_i z_1, {}_i z_2, i = 1, ..., N. In a first step, zero-mean processes are generated:

    {}_i x_1 = {}_i z_1 - \hat{m}_1, \qquad {}_i x_2 = {}_i z_2 - \hat{m}_2.    (5.28)

The mean values can be estimated by

    \hat{m}_k = \frac{1}{N} \sum_{i=1}^{N} {}_i z_k, \quad k = 1, 2.    (5.29)

[Figure 5.2. Typical signals of the two classes, original and approximation.]
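These first processing steps can be sketched in code. The loop signals themselves are not available here, so the block below substitutes synthetic stand-in realizations; the signal shapes, dimensions, and the choice of three features are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N, n = 200, 32                             # N realizations of length n
t = np.linspace(0.0, 1.0, n)

# Hypothetical stand-ins for the pre-processed loop signals of each class.
z1 = np.sin(np.pi * t) + 0.1 * rng.normal(size=(N, n))        # "car"
z2 = np.sin(np.pi * t) ** 2 + 0.1 * rng.normal(size=(N, n))   # "truck"

# (5.29): estimate the class means over the N realizations.
m1_hat, m2_hat = z1.mean(axis=0), z2.mean(axis=0)

# (5.28): generate zero-mean processes.
x1, x2 = z1 - m1_hat, z2 - m2_hat

# KLT basis of class 1; a few dominant eigenvectors yield concise features.
lam, U = np.linalg.eigh(x1.T @ x1 / N)
U = U[:, ::-1]                             # descending eigenvalue order
features = x1 @ U[:, :3]                   # first three KLT coefficients
print(features.shape)                      # (200, 3)
```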