Press, Teukolsky, Vetterling, Flannery - Numerical Recipes in C
This is not the right way.

The right way is to use a result that relates continued fractions to rational approximations, and that gives a means of evaluating (5.2.1) or (5.2.2) from left to right. Let f_n denote the result of evaluating (5.2.2) with coefficients through a_n and b_n. Then

    f_n = A_n / B_n                                        (5.2.4)

where A_n and B_n are given by the following recurrence:

    A_{-1} ≡ 1        B_{-1} ≡ 0
    A_0 ≡ b_0         B_0 ≡ 1
    A_j = b_j A_{j-1} + a_j A_{j-2}
    B_j = b_j B_{j-1} + a_j B_{j-2}        j = 1, 2, ..., n        (5.2.5)

This method was invented by J. Wallis in 1655 (!), and is discussed in his Arithmetica Infinitorum [4].
You can easily prove it by induction.

In practice, this algorithm has some unattractive features: the recurrence (5.2.5) frequently generates very large or very small values for the partial numerators and denominators A_j and B_j. There is thus the danger of overflow or underflow of the floating-point representation. However, the recurrence (5.2.5) is linear in the A's and B's. At any point you can rescale the currently saved two levels of the recurrence, e.g., divide A_j, B_j, A_{j-1}, and B_{j-1} all by B_j. This incidentally makes A_j = f_j and is convenient for testing whether you have gone far enough: see if f_j and f_{j-1} from the last iteration are as close as you would like them to be. (If B_j happens to be zero, which can happen, just skip the renormalization for this cycle.
A fancier level of optimization is to renormalize only when an overflow is imminent, saving the unnecessary divides. All this complicates the program logic.)

Two newer algorithms have been proposed for evaluating continued fractions. Steed's method does not use A_j and B_j explicitly, but only the ratio D_j = B_{j-1}/B_j. One calculates D_j and Δf_j = f_j − f_{j-1} recursively using

    D_j = 1/(b_j + a_j D_{j-1})                            (5.2.6)
    Δf_j = (b_j D_j − 1) Δf_{j-1}                          (5.2.7)

Steed's method (see, e.g., [5]) avoids the need for rescaling of intermediate results. However, for certain continued fractions you can occasionally run into a situation where the denominator in (5.2.6) approaches zero, so that D_j and Δf_j are very large. The next Δf_{j+1} will typically cancel this large change, but with loss of accuracy in the numerical running sum of the f_j's.
It is awkward to program around this, so Steed's method can be recommended only for cases where you know in advance that no denominator can vanish. We will use it for a special purpose in the routine bessik (§6.7).

The best general method for evaluating continued fractions seems to be the modified Lentz's method [6]. The need for rescaling intermediate results is avoided by using both the ratios

    C_j = A_j/A_{j-1},    D_j = B_{j-1}/B_j                (5.2.8)

and calculating f_j by

    f_j = f_{j-1} C_j D_j                                  (5.2.9)

From equation (5.2.5), one easily shows that the ratios satisfy the recurrence relations

    D_j = 1/(b_j + a_j D_{j-1}),    C_j = b_j + a_j/C_{j-1}        (5.2.10)

In this algorithm there is the danger that the denominator in the expression for D_j, or the quantity C_j itself, might approach zero.
Either of these conditions invalidates (5.2.10). However, Thompson and Barnett [5] show how to modify Lentz's algorithm to fix this: just shift the offending term by a small amount, e.g., 10^{-30}. If you work through a cycle of the algorithm with this prescription, you will see that f_{j+1} is accurately calculated.

In detail, the modified Lentz's algorithm is this:
• Set f_0 = b_0; if b_0 = 0 set f_0 = tiny.
• Set C_0 = f_0.
• Set D_0 = 0.
• For j = 1, 2, ...
      Set D_j = b_j + a_j D_{j-1}.
      If D_j = 0, set D_j = tiny.
      Set C_j = b_j + a_j/C_{j-1}.
      If C_j = 0, set C_j = tiny.
      Set D_j = 1/D_j.
      Set Δ_j = C_j D_j.
      Set f_j = f_{j-1} Δ_j.
      If |Δ_j − 1| < eps then exit.

Here eps is your floating-point precision, say 10^{-7} or 10^{-15}. The parameter tiny should be less than typical values of eps|b_j|, say 10^{-30}.

The above algorithm assumes that you can terminate the evaluation of the continued fraction when |f_j − f_{j-1}| is sufficiently small. This is usually the case, but by no means guaranteed. Jones [7] gives a list of theorems that can be used to justify this termination criterion for various kinds of continued fractions.

There is at present no rigorous analysis of error propagation in Lentz's algorithm. However, empirical tests suggest that it is at least as good as other methods.

Manipulating Continued Fractions

Several important properties of continued fractions can be used to rewrite them in forms that can speed up numerical computation.
An equivalence transformation

    a_n → λ a_n,    b_n → λ b_n,    a_{n+1} → λ a_{n+1}        (5.2.11)

leaves the value of a continued fraction unchanged. By a suitable choice of the scale factor λ you can often simplify the form of the a's and the b's. Of course, you can carry out successive equivalence transformations, possibly with different λ's, on successive terms of the continued fraction.

The even and odd parts of a continued fraction are continued fractions whose successive convergents are f_{2n} and f_{2n+1}, respectively.
Their main use is that they converge twice as fast as the original continued fraction, and so if their terms are not much more complicated than the terms in the original there can be a big savings in computation. The formula for the even part of (5.2.2) is

    f_even = d_0 + c_1/(d_1 + c_2/(d_2 + ···))             (5.2.12)

where in terms of intermediate variables

    α_1 = a_1/b_1
    α_n = a_n/(b_n b_{n-1}),    n ≥ 2                      (5.2.13)

we have

    d_0 = b_0,    c_1 = α_1,    d_1 = 1 + α_2
    c_n = −α_{2n-1} α_{2n-2},    d_n = 1 + α_{2n-1} + α_{2n},    n ≥ 2        (5.2.14)

You can find the similar formula for the odd part in the review by Blanch [1].
Often a combination of the transformations (5.2.14) and (5.2.11) is used to get the best form for numerical work. We will make frequent use of continued fractions in the next chapter.

CITED REFERENCES AND FURTHER READING:

Abramowitz, M., and Stegun, I.A. 1964, Handbook of Mathematical Functions, Applied Mathematics Series, Volume 55 (Washington: National Bureau of Standards; reprinted 1968 by Dover Publications, New York), §3.10.

Blanch, G. 1964, SIAM Review, vol. 6, pp. 383–421. [1]

Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), Chapter 11. [2]

Cuyt, A., and Wuytack, L. 1987, Nonlinear Methods in Numerical Analysis (Amsterdam: North-Holland), Chapter 1.

Fike, C.T. 1968, Computer Evaluation of Mathematical Functions (Englewood Cliffs, NJ: Prentice-Hall), §§8.2, 10.4, and 10.5. [3]

Wallis, J. 1695, in Opera Mathematica, vol. 1, p. 355, Oxoniae e Theatro Sheldoniano. Reprinted by Georg Olms Verlag, Hildesheim, New York (1972). [4]

Thompson, I.J., and Barnett, A.R. 1986, Journal of Computational Physics, vol. 64, pp. 490–509. [5]

Lentz, W.J. 1976, Applied Optics, vol. 15, pp. 668–671. [6]

Jones, W.B. 1973, in Padé Approximants and Their Applications, P.R. Graves-Morris, ed. (London: Academic Press), p. 125. [7]

5.3 Polynomials and Rational Functions

A polynomial of degree N is represented numerically as a stored array of coefficients, c[j] with j = 0, ..., N. We will always take c[0] to be the constant term in the polynomial, c[N] the coefficient of x^N; but of course other conventions are possible. There are two kinds of manipulations that you can do with a polynomial: numerical manipulations (such as evaluation), where you are given the numerical value of its argument, or algebraic manipulations, where you want to transform the coefficient array in some way without choosing any particular argument. Let's start with the numerical.

We assume that you know enough never to evaluate a polynomial this way:

    p=c[0]+c[1]*x+c[2]*x*x+c[3]*x*x*x+c[4]*x*x*x*x;

or (even worse!),

    p=c[0]+c[1]*x+c[2]*pow(x,2.0)+c[3]*pow(x,3.0)+c[4]*pow(x,4.0);

Come the (computer) revolution, all persons found guilty of such criminal behavior will be summarily executed, and their programs won't be! It is a matter of taste, however, whether to write

    p=c[0]+x*(c[1]+x*(c[2]+x*(c[3]+x*c[4])));

or

    p=(((c[4]*x+c[3])*x+c[2])*x+c[1])*x+c[0];

If the number of coefficients c[0..n] is large, one writes

    p=c[n];
    for(j=n-1;j>=0;j--) p=p*x+c[j];

or

    p=c[j=n];
    while (j>0) p=p*x+c[--j];

Another useful trick is for evaluating a polynomial P(x) and its derivative dP(x)/dx simultaneously:

    p=c[n];
    dp=0.0;
    for(j=n-1;j>=0;j--) {dp=dp*x+p; p=p*x+c[j];}

or

    p=c[j=n];
    dp=0.0;
    while (j>0) {dp=dp*x+p; p=p*x+c[--j];}

which yields the polynomial as p and its derivative as dp.

The above trick, which is basically synthetic division [1,2], generalizes to the evaluation of the polynomial and nd of its derivatives simultaneously:

    void ddpoly(float c[], int nc, float x, float pd[], int nd)
    Given the nc+1 coefficients of a polynomial of degree nc as an array c[0..nc] with c[0]
    being the constant term, and given a value x, and given a value nd>1, this routine returns
    the polynomial evaluated at x as pd[0] and nd derivatives as pd[1..nd].
    {
        int nnd,j,i;
        float cnst=1.0;

        pd[0]=c[nc];
        for (j=1;j<=nd;j++) pd[j]=0.0;
        for (i=nc-1;i>=0;i--) {
            nnd=(nd < (nc-i) ? nd : nc-i);
            for (j=nnd;j>=1;j--) pd[j]=pd[j]*x+pd[j-1];
            pd[0]=pd[0]*x+c[i];
        }
        for (i=2;i<=nd;i++) {      /* After the first derivative, factorial constants come in. */
            cnst *= i;
            pd[i] *= cnst;
        }
    }

As a curiosity, you might be interested to know that polynomials of degree n > 3 can be evaluated in fewer than n multiplications, at least if you are willing to precompute some auxiliary coefficients and, in some cases, do an extra addition. For example, the polynomial

    P(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4       (5.3.1)

where a_4 > 0, can be evaluated with 3 multiplications and 5 additions as follows:

    P(x) = [(Ax + B)^2 + Ax + C][(Ax + B)^2 + D] + E       (5.3.2)

where A, B, C, D, and E are to be precomputed by

    A = (a_4)^{1/4}
    B = (a_3 − A^3)/(4A^3)
    D = 3B^2 + 8B^3 + (a_1 A − 2a_2 B)/A^2
    C = a_2/A^2 − 2B − 6B^2 − D
    E = a_0 − B^4 − B^2(C + D) − CD                        (5.3.3)

Fifth degree polynomials can be evaluated in 4 multiplies and 5 adds; sixth degree polynomials can be evaluated in 4 multiplies and 7 adds; if any of this strikes you as interesting, consult references [3-5].