5.1 Series and Their Convergence
This catastrophe is not usually unexpected: When you find a power series in a book (or when you work one out yourself), you will generally also know the radius of convergence. An insidious problem occurs with series that converge everywhere (in the mathematical sense), but almost nowhere fast enough to be useful in a numerical method. Two familiar examples are the sine function and the Bessel function of the first kind,

\sin x = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!} x^{2k+1}    (5.1.2)

J_n(x) = \left(\frac{x}{2}\right)^n \sum_{k=0}^{\infty} \frac{(-\tfrac{1}{4}x^2)^k}{k!\,(k+n)!}    (5.1.3)

Both of these series converge for all x. But both don't even start to converge until k ∼ |x|; before this, their terms are increasing.
This makes these series useless for large x.

Accelerating the Convergence of Series

There are several tricks for accelerating the rate of convergence of a series (or, equivalently, of a sequence of partial sums). These tricks will not generally help in cases like (5.1.2) or (5.1.3) while the size of the terms is still increasing.
For series with terms of decreasing magnitude, however, some accelerating methods can be startlingly good. Aitken's δ²-process is simply a formula for extrapolating the partial sums of a series whose convergence is approximately geometric. If S_{n-1}, S_n, S_{n+1} are three successive partial sums, then an improved estimate is

S'_n \equiv S_{n+1} - \frac{(S_{n+1} - S_n)^2}{S_{n+1} - 2S_n + S_{n-1}}    (5.1.4)

You can also use (5.1.4) with n + 1 and n − 1 replaced by n + p and n − p respectively, for any integer p. If you form the sequence of S'_i's, you can apply (5.1.4) a second time to that sequence, and so on. (In practice, this iteration will only rarely do much for you after the first stage.) Note that equation (5.1.4) should be computed as written; there exist algebraically equivalent forms that are much more susceptible to roundoff error.

For alternating series (where the terms in the sum alternate in sign), Euler's transformation can be a powerful tool.
Generally it is advisable to do a small number n − 1 of terms directly, then apply the transformation to the rest of the series beginning with the nth term. The formula (for n even) is

\sum_{s=0}^{\infty} (-1)^s u_s = u_0 - u_1 + u_2 - \cdots - u_{n-1} + \sum_{s=0}^{\infty} \frac{(-1)^s}{2^{s+1}} [\Delta^s u_n]    (5.1.5)

Here ∆ is the forward difference operator, i.e.,

\Delta u_n \equiv u_{n+1} - u_n
\Delta^2 u_n \equiv u_{n+2} - 2u_{n+1} + u_n    (5.1.6)
\Delta^3 u_n \equiv u_{n+3} - 3u_{n+2} + 3u_{n+1} - u_n

etc. Of course you don't actually do the infinite sum on the right-hand side of (5.1.5), but only the first, say, p terms, thus requiring the first p differences (5.1.6) obtained from the terms starting at u_n.

Euler's transformation can be applied not only to convergent series. In some cases it will produce accurate answers from the first terms of a series that is formally divergent. It is widely used in the summation of asymptotic series.
In the case of an asymptotic series it is generally wise not to sum farther than where the terms start increasing in magnitude; and you should devise some independent numerical check that the results are meaningful.

There is an elegant and subtle implementation of Euler's transformation due to van Wijngaarden [1]: It incorporates the terms of the original alternating series one at a time, in order. For each incorporation it either increases p by 1, equivalent to computing one further difference (5.1.6), or else retroactively increases n by 1, without having to redo all the difference calculations based on the old n value! The decision as to which to increase, n or p, is taken in such a way as to make the convergence most rapid.
Van Wijngaarden's technique requires only one vector of saved partial differences. Here is the algorithm:

#include <math.h>

void eulsum(float *sum, float term, int jterm, float wksp[])
/* Incorporates into sum the jterm'th term, with value term, of an alternating
series. sum is input as the previous partial sum, and is output as the new
partial sum. The first call to this routine, with the first term in the series,
should be with jterm=1. On the second call, term should be set to the second
term of the series, with sign opposite to that of the first call, and jterm
should be 2. And so on. wksp is a workspace array provided by the calling
program, dimensioned at least as large as the maximum number of terms to be
incorporated. */
{
	int j;
	static int nterm;
	float tmp,dum;

	if (jterm == 1) {
		nterm=1;			/* Initialize: number of saved differences in wksp. */
		*sum=0.5*(wksp[1]=term);	/* Return first estimate. */
	} else {
		tmp=wksp[1];			/* Update saved quantities by van Wijngaarden's algorithm. */
		wksp[1]=term;
		for (j=1;j<=nterm-1;j++) {
			dum=wksp[j+1];
			wksp[j+1]=0.5*(wksp[j]+tmp);
			tmp=dum;
		}
		wksp[nterm+1]=0.5*(wksp[nterm]+tmp);
		if (fabs(wksp[nterm+1]) <= fabs(wksp[nterm]))
			*sum += (0.5*wksp[++nterm]);	/* Favorable to increase p; the table becomes longer. */
		else
			*sum += wksp[nterm+1];		/* Favorable to increase n; the table doesn't become longer. */
	}
}

The powerful Euler technique is not directly applicable to a series of positive terms. Occasionally it is useful to convert a series of positive terms into an alternating series, just so that the Euler transformation can be used! Van Wijngaarden has given a transformation for accomplishing this [1]:

\sum_{r=1}^{\infty} v_r = \sum_{r=1}^{\infty} (-1)^{r-1} w_r    (5.1.7)

where

w_r \equiv v_r + 2v_{2r} + 4v_{4r} + 8v_{8r} + \cdots    (5.1.8)

Equations (5.1.7) and (5.1.8) replace a simple sum by a two-dimensional sum, each term in (5.1.7) being itself an infinite sum (5.1.8).
This may seem a strange way to save on work! Since, however, the indices in (5.1.8) increase tremendously rapidly, as powers of 2, it often requires only a few terms to converge (5.1.8) to extraordinary accuracy. You do, however, need to be able to compute the v_r's efficiently for "random" values r. The standard "updating" tricks for sequential r's, mentioned above following equation (5.1.1), can't be used.

Actually, Euler's transformation is a special case of a more general transformation of power series. Suppose that some known function g(z) has the series

g(z) = \sum_{n=0}^{\infty} b_n z^n    (5.1.9)

and that you want to sum the new, unknown, series

f(z) = \sum_{n=0}^{\infty} c_n b_n z^n    (5.1.10)

Then it is not hard to show (see [2]) that equation (5.1.10) can be written as

f(z) = \sum_{n=0}^{\infty} [\Delta^{(n)} c_0] \frac{g^{(n)}(z)}{n!} z^n    (5.1.11)

which often converges much more rapidly.
Here ∆^{(n)} c_0 is the nth finite-difference operator (equation 5.1.6), with ∆^{(0)} c_0 ≡ c_0, and g^{(n)} is the nth derivative of g(z). The usual Euler transformation (equation 5.1.5 with n = 0) can be obtained, for example, by substituting

g(z) = \frac{1}{1+z} = 1 - z + z^2 - z^3 + \cdots    (5.1.12)

into equation (5.1.11), and then setting z = 1.

Sometimes you will want to compute a function from a series representation even when the computation is not efficient. For example, you may be using the values obtained to fit the function to an approximating form that you will use subsequently (cf.
§5.8). If you are summing very large numbers of slowly convergent terms, pay attention to roundoff errors! In floating-point representation it is more accurate to sum a list of numbers in the order starting with the smallest one, rather than starting with the largest one. It is even better to group terms pairwise, then in pairs of pairs, etc., so that all additions involve operands of comparable magnitude.

CITED REFERENCES AND FURTHER READING:

Goodwin, E.T.
(ed.) 1961, Modern Computing Methods, 2nd ed. (New York: Philosophical Library), Chapter 13 [van Wijngaarden's transformations]. [1]
Dahlquist, G., and Bjorck, A. 1974, Numerical Methods (Englewood Cliffs, NJ: Prentice-Hall), Chapter 3.
Abramowitz, M., and Stegun, I.A. 1964, Handbook of Mathematical Functions, Applied Mathematics Series, Volume 55 (Washington: National Bureau of Standards; reprinted 1968 by Dover Publications, New York), §3.6.
Mathews, J., and Walker, R.L.
1970, Mathematical Methods of Physics, 2nd ed. (Reading, MA: W.A. Benjamin/Addison-Wesley), §2.3. [2]

5.2 Evaluation of Continued Fractions

Continued fractions are often powerful ways of evaluating functions that occur in scientific applications. A continued fraction looks like this:

f(x) = b_0 + \cfrac{a_1}{b_1 + \cfrac{a_2}{b_2 + \cfrac{a_3}{b_3 + \cfrac{a_4}{b_4 + \cfrac{a_5}{b_5 + \cdots}}}}}    (5.2.1)

Printers prefer to write this as

f(x) = b_0 + \frac{a_1}{b_1 +}\,\frac{a_2}{b_2 +}\,\frac{a_3}{b_3 +}\,\frac{a_4}{b_4 +}\,\frac{a_5}{b_5 +}\cdots    (5.2.2)

In either (5.2.1) or (5.2.2), the a's and b's can themselves be functions of x, usually linear or quadratic monomials at worst (i.e., constants times x or times x²). For example, the continued fraction representation of the tangent function is

\tan x = \frac{x}{1 -}\,\frac{x^2}{3 -}\,\frac{x^2}{5 -}\,\frac{x^2}{7 -}\cdots    (5.2.3)

Continued fractions frequently converge much more rapidly than power series expansions, and in a much larger domain in the complex plane (not necessarily including the domain of convergence of the series, however).
Sometimes the continued fraction converges best where the series does worst, although this is not a general rule. Blanch [1] gives a good review of the most useful convergence tests for continued fractions.

There are standard techniques, including the important quotient-difference algorithm, for going back and forth between continued fraction approximations, power series approximations, and rational function approximations. Consult Acton [2] for an introduction to this subject, and Fike [3] for further details and references.

How do you tell how far to go when evaluating a continued fraction? Unlike a series, you can't just evaluate equation (5.2.1) from left to right, stopping when the change is small. Written in the form of (5.2.1), the only way to evaluate the continued fraction is from right to left, first (blindly!) guessing how far out to start.