c13-4 (Numerical Recipes in C), page 2
In other words, the standard deviation is always 100 percent of the value, independent of N! How can this be? Where did all the information go as we added points? It all went into producing estimates at a greater number of discrete frequencies f_k. If we sample a longer run of data using the same sampling rate, then the Nyquist critical frequency f_c is unchanged, but we now have finer frequency resolution (more f_k's) within the Nyquist frequency interval; alternatively, if we sample the same length of data with a finer sampling interval, then our frequency resolution is unchanged, but the Nyquist range now extends up to a higher frequency. In neither case do the additional samples reduce the variance of any one particular frequency's estimated PSD.

You don't have to live with PSD estimates with 100 percent standard deviations, however.
You simply have to know some techniques for reducing the variance of the estimates. Here are two techniques that are very nearly identical mathematically, though different in implementation. The first is to compute a periodogram estimate with finer discrete frequency spacing than you really need, and then to sum the periodogram estimates at K consecutive discrete frequencies to get one "smoother" estimate at the mid frequency of those K. The variance of that summed estimate will be smaller than the estimate itself by a factor of exactly 1/K, i.e., the standard deviation will be smaller than 100 percent by a factor 1/√K.
Thus, to estimate the power spectrum at M + 1 discrete frequencies between 0 and f_c inclusive, you begin by taking the FFT of 2MK points (which number had better be an integer power of two!). You then take the modulus square of the resulting coefficients, add positive and negative frequency pairs, and divide by (2MK)², all according to equation (13.4.5) with N = 2MK.
Finally, you "bin" the results into summed (not averaged) groups of K. This procedure is very easy to program, so we will not bother to give a routine for it. The reason that you sum, rather than average, K consecutive points is so that your final PSD estimate will preserve the normalization property that the sum of its M + 1 values equals the mean square value of the function.

A second technique for estimating the PSD at M + 1 discrete frequencies in the range 0 to f_c is to partition the original sampled data into K segments each of 2M consecutive sampled points.
Each segment is separately FFT'd to produce a periodogram estimate (equation 13.4.5 with N ≡ 2M). Finally, the K periodogram estimates are averaged at each frequency. It is this final averaging that reduces the variance of the estimate by a factor K (the standard deviation by √K). This second technique is computationally more efficient than the first technique above by a modest factor, since it is logarithmically more efficient to take many shorter FFTs than one longer one. The principal advantage of the second technique, however, is that only 2M data points are manipulated at a single time, not 2KM as in the first technique. This means that the second technique is the natural choice for processing long runs of data, as from a magnetic tape or other data record. We will give a routine later for implementing this second technique, but we need first to return to the matters of leakage and data windowing which were brought up after equation (13.4.7) above.

Data Windowing

The purpose of data windowing is to modify equation (13.4.7), which expresses the relation between the spectral estimate P_k at a discrete frequency and the actual underlying continuous spectrum P(f) at nearby frequencies. In general, the spectral power in one "bin" k contains leakage from frequency components that are actually s bins away, where s is the independent variable in equation (13.4.7). There is, as we pointed out, quite substantial leakage even from moderately large values of s.

When we select a run of N sampled points for periodogram spectral estimation, we are in effect multiplying an infinite run of sampled data c_j by a window function in time, one that is zero except during the total sampling time NΔ, and is unity during that time. In other words, the data are windowed by a square window function. By the convolution theorem (12.0.9; but interchanging the roles of f and t), the Fourier transform of the product of the data with this square window function is equal to the convolution of the data's Fourier transform with the window's Fourier transform. In fact, we determined equation (13.4.7) as nothing more than the square of the discrete Fourier transform of the unity window function,

    W(s) = (1/N²) | Σ_{k=0}^{N−1} e^{2πisk/N} |² = [ sin(πs) / (N sin(πs/N)) ]²    (13.4.8)

The reason for the leakage at large values of s is that the square window function turns on and off so rapidly. Its Fourier transform has substantial components at high frequencies. To remedy this situation, we can multiply the input data c_j, j = 0, ..., N − 1, by a window function w_j that changes more gradually from zero to a maximum and then back to zero as j ranges from 0 to N. In this case, the equations for the periodogram estimator (13.4.4–13.4.5) become

    D_k ≡ Σ_{j=0}^{N−1} c_j w_j e^{2πijk/N}        k = 0, ..., N − 1    (13.4.9)

    P(0) = P(f_0) = (1/W_ss) |D_0|²
    P(f_k) = (1/W_ss) [ |D_k|² + |D_{N−k}|² ]      k = 1, 2, ..., N/2 − 1
    P(f_c) = P(f_{N/2}) = (1/W_ss) |D_{N/2}|²    (13.4.10)

where W_ss stands for "window squared and summed,"

    W_ss ≡ N Σ_{j=0}^{N−1} w_j²    (13.4.11)

and f_k is given by (13.4.6).

[Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books, diskettes, or CDROMs visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to trade@cup.cam.ac.uk (outside North America).]
The more general form of (13.4.7) can now be written in terms of the window function w_j as

    W(s) = (1/W_ss) | Σ_{k=0}^{N−1} w_k e^{2πisk/N} |²
         ≈ (1/W_ss) | ∫_{−N/2}^{N/2} cos(2πsk/N) w(k − N/2) dk |²    (13.4.12)

Here the approximate equality is useful for practical estimates, and holds for any window that is left-right symmetric (the usual case), and for s ≪ N (the case of interest for estimating leakage into nearby bins). The continuous function w(k − N/2) in the integral is meant to be some smooth function that passes through the points w_k.

There is a lot of perhaps unnecessary lore about choice of a window function, and practically every function that rises from zero to a peak and then falls again has been named after someone. A few of the more common (also shown in Figure 13.4.1) are:

    w_j = 1 − | (j − ½N) / (½N) |    ≡ "Bartlett window"    (13.4.13)

(The "Parzen window" is very similar to this.)

    w_j = ½ [ 1 − cos(2πj/N) ]    ≡ "Hann window"    (13.4.14)

(The "Hamming window" is similar but does not go exactly to zero at the ends.)

    w_j = 1 − [ (j − ½N) / (½N) ]²    ≡ "Welch window"    (13.4.15)

We are inclined to follow Welch in recommending that you use either (13.4.13) or (13.4.15) in practical work.
However, at the level of this book, there is effectively no difference between any of these (or similar) window functions. Their difference lies in subtle trade-offs among the various figures of merit that can be used to describe the narrowness or peakedness of the spectral leakage functions computed by (13.4.12). These figures of merit have such names as: highest sidelobe level (dB), sidelobe fall-off (dB per octave), equivalent noise bandwidth (bins), 3-dB bandwidth (bins), scallop loss (dB), worst case process loss (dB). Roughly speaking, the principal trade-off is between making the central peak as narrow as possible versus making the tails of the distribution fall off as rapidly as possible.
For details, see (e.g.) [2]. Figure 13.4.2 plots the leakage amplitudes for several windows already discussed.

There is particularly a lore about window functions that rise smoothly from zero to unity in the first small fraction (say 10 percent) of the data, then stay at unity until the last small fraction (again say 10 percent) of the data, during which the window function falls smoothly back to zero. These windows will squeeze a little bit of extra narrowness out of the main lobe of the leakage function (never as …