c13-6 (Numerical Recipes in C)
564 Chapter 13. Fourier and Spectral Applications

for specific situations, and arm themselves with a variety of other tricks. We suggest that you do likewise, as your projects demand.

CITED REFERENCES AND FURTHER READING:

Hamming, R.W. 1983, Digital Filters, 2nd ed. (Englewood Cliffs, NJ: Prentice-Hall).
Antoniou, A. 1979, Digital Filters: Analysis and Design (New York: McGraw-Hill).
Parks, T.W., and Burrus, C.S. 1987, Digital Filter Design (New York: Wiley).
Oppenheim, A.V., and Schafer, R.W. 1989, Discrete-Time Signal Processing (Englewood Cliffs, NJ: Prentice-Hall).
Rice, J.R. 1964, The Approximation of Functions (Reading, MA: Addison-Wesley); also 1969, op. cit., Vol. 2.
Rabiner, L.R., and Gold, B. 1975, Theory and Application of Digital Signal Processing (Englewood Cliffs, NJ: Prentice-Hall).

13.6 Linear Prediction and Linear Predictive Coding

We begin with a very general formulation that will allow us to make connections to various special cases. Let {y'_α} be a set of measured values for some underlying set of true values of a quantity y, denoted {y_α}, related to these true values by the addition of random noise,

    y'_\alpha = y_\alpha + n_\alpha        (13.6.1)

(compare equation 13.3.2, with a somewhat different notation). Our use of a Greek subscript to index the members of the set is meant to indicate that the data points are not necessarily equally spaced along a line, or even ordered: they might be "random" points in three-dimensional space, for example. Now, suppose we want to construct the "best" estimate of the true value of some particular point y_⋆ as a linear combination of the known, noisy, values.
Writing

    y_\star = \sum_\alpha d_{\star\alpha} y'_\alpha + x_\star        (13.6.2)

we want to find coefficients d_⋆α that minimize, in some way, the discrepancy x_⋆. The coefficients d_⋆α have a "star" subscript to indicate that they depend on the choice of point y_⋆. Later, we might want to let y_⋆ be one of the existing y_α's. In that case, our problem becomes one of optimal filtering or estimation, closely related to the discussion in §13.3. On the other hand, we might want y_⋆ to be a completely new point. In that case, our problem will be one of linear prediction. A natural way to minimize the discrepancy x_⋆ is in the statistical mean square sense.
If angle brackets denote statistical averages, then we seek d_⋆α's that minimize

    \langle x_\star^2 \rangle = \Big\langle \Big[ \sum_\alpha d_{\star\alpha}(y_\alpha + n_\alpha) - y_\star \Big]^2 \Big\rangle
                              = \sum_{\alpha\beta} \big( \langle y_\alpha y_\beta \rangle + \langle n_\alpha n_\beta \rangle \big) d_{\star\alpha} d_{\star\beta} - 2 \sum_\alpha \langle y_\star y_\alpha \rangle d_{\star\alpha} + \langle y_\star^2 \rangle        (13.6.3)

Here we have used the fact that noise is uncorrelated with signal, e.g., ⟨n_α y_β⟩ = 0. The quantities ⟨y_α y_β⟩ and ⟨y_⋆ y_α⟩ describe the autocorrelation structure of the underlying data. We have already seen an analogous expression, (13.2.2), for the case of equally spaced data points on a line; we will meet correlation several times again in its statistical sense in Chapters 14 and 15. The quantities ⟨n_α n_β⟩ describe the autocorrelation properties of the noise. Often, for point-to-point uncorrelated noise, we have ⟨n_α n_β⟩ = ⟨n_α^2⟩ δ_αβ. It is convenient to think of the various correlation quantities as comprising matrices and vectors,

    \phi_{\alpha\beta} \equiv \langle y_\alpha y_\beta \rangle \qquad
    \phi_{\star\alpha} \equiv \langle y_\star y_\alpha \rangle \qquad
    \eta_{\alpha\beta} \equiv \langle n_\alpha n_\beta \rangle \ \text{or} \ \langle n_\alpha^2 \rangle \delta_{\alpha\beta}        (13.6.4)

Setting the derivative of equation (13.6.3) with respect to the d_⋆α's equal to zero, one readily obtains the set of linear equations,

    \sum_\beta \left[ \phi_{\alpha\beta} + \eta_{\alpha\beta} \right] d_{\star\beta} = \phi_{\star\alpha}        (13.6.5)

If we write the solution as a matrix inverse, then the estimation equation (13.6.2) becomes, omitting the minimized discrepancy x_⋆,

    y_\star \approx \sum_{\alpha\beta} \phi_{\star\alpha} \left[ \phi_{\mu\nu} + \eta_{\mu\nu} \right]^{-1}_{\alpha\beta} y'_\beta        (13.6.6)

From equations (13.6.3) and (13.6.5) one can also calculate the expected mean square value of the discrepancy at its minimum, denoted ⟨x_⋆^2⟩_0,

    \langle x_\star^2 \rangle_0 = \langle y_\star^2 \rangle - \sum_\beta d_{\star\beta} \phi_{\star\beta}
                                = \langle y_\star^2 \rangle - \sum_{\alpha\beta} \phi_{\star\alpha} \left[ \phi_{\mu\nu} + \eta_{\mu\nu} \right]^{-1}_{\alpha\beta} \phi_{\star\beta}        (13.6.7)

A final general result tells how much the mean square discrepancy ⟨x_⋆^2⟩ is increased if we use the estimation equation (13.6.2) not with the best values d_⋆β, but with some other values d̂_⋆β. The above equations then imply

    \langle x_\star^2 \rangle = \langle x_\star^2 \rangle_0 + \sum_{\alpha\beta} (\hat d_{\star\alpha} - d_{\star\alpha}) \left[ \phi_{\alpha\beta} + \eta_{\alpha\beta} \right] (\hat d_{\star\beta} - d_{\star\beta})        (13.6.8)

Since the second term is a pure quadratic form, we see that the increase in the discrepancy is only second order in any error made in estimating the d_⋆β's.

Connection to Optimal Filtering

If we change "star" to a Greek index, say γ, then the above formulas describe optimal filtering, generalizing the discussion of §13.3. One sees, for example, that if the noise amplitudes n_α go to zero, so likewise do the noise autocorrelations η_αβ, and, canceling a matrix times its inverse, equation (13.6.6) simply becomes y_γ = y'_γ. Another special case occurs if the matrices φ_αβ and η_αβ are diagonal. In that case, equation (13.6.6) becomes

    y_\gamma = \frac{\phi_{\gamma\gamma}}{\phi_{\gamma\gamma} + \eta_{\gamma\gamma}} \, y'_\gamma        (13.6.9)

Linear Prediction

Classical linear prediction specializes to the case where the data points y_β are equally spaced along a line, y_i, i = 1, 2, ..., N, and we want to use M consecutive values of y_i to predict an M + 1st. Stationarity is assumed. That is, the autocorrelation ⟨y_j y_k⟩ is assumed to depend only on the difference |j − k|, and not on j or k individually, so that the autocorrelation φ has only a single index,

    \phi_j \equiv \langle y_i y_{i+j} \rangle \approx \frac{1}{N-j} \sum_{i=1}^{N-j} y_i y_{i+j}        (13.6.10)

Here, the approximate equality shows one way to use the actual data set values to estimate the autocorrelation components.
(In fact, there is a better way to make these estimates; see below.) In the situation described, the estimation equation (13.6.2) is

    y_n = \sum_{j=1}^{M} d_j y_{n-j} + x_n        (13.6.11)

(compare equation 13.5.1) and equation (13.6.5) becomes the set of M equations for the M unknown d_j's, now called the linear prediction (LP) coefficients,

    \sum_{j=1}^{M} \phi_{|j-k|} \, d_j = \phi_k \qquad (k = 1, \ldots, M)        (13.6.12)

Notice that while noise is not explicitly included in the equations, it is properly accounted for, if it is point-to-point uncorrelated: φ_0, as estimated by equation (13.6.10) using measured values y'_i, actually estimates the diagonal part of φ_αα + η_αα, above. The mean square discrepancy ⟨x_n^2⟩ is estimated by equation (13.6.7) as

    \langle x_n^2 \rangle = \phi_0 - \phi_1 d_1 - \phi_2 d_2 - \cdots - \phi_M d_M        (13.6.13)

To use linear prediction, we first compute the d_j's, using equations (13.6.10) and (13.6.12). We then calculate equation (13.6.13) or, more concretely, apply (13.6.11) to the known record to get an idea of how large are the discrepancies x_i. If the discrepancies are small, then we can continue applying (13.6.11) right on into the future.
Returning for a moment to optimal filtering: equation (13.6.9) is readily recognizable as equation (13.3.6) with S^2 → φ_γγ, N^2 → η_γγ. What is going on is this: For the case of equally spaced data points, and in the Fourier domain, autocorrelations become simply squares of Fourier amplitudes (Wiener-Khinchin theorem, equation 12.0.12), and the optimal filter can be constructed algebraically, as equation (13.6.9), without inverting any matrix. More generally, in the time domain, or any other domain, an optimal filter (one that minimizes the square of the discrepancy from the underlying true value in the presence of measurement noise) can be constructed by estimating the autocorrelation matrices φ_αβ and η_αβ, and applying equation (13.6.6) with ⋆ → γ. (Equation 13.6.8 is in fact the basis for §13.3's statement that even crude optimal filtering can be quite effective.)

The prediction is stable only if the characteristic polynomial

    z^N - \sum_{j=1}^{N} d_j z^{N-j} = 0        (13.6.14)

has all N of its roots inside the unit circle,

    |z| \le 1        (13.6.15)

There is no guarantee that the coefficients produced by equation (13.6.12) will have this property.

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books, diskettes, or CDROMs visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to trade@cup.cam.ac.uk (outside North America).