Press, Teukolsky, Vetterling, Flannery - Numerical Recipes in C, page 42
\mu_j = \int_a^b x^j\,W(x)\,dx, \qquad j = 0, 1, \ldots, 2N-1 \tag{4.5.29}

However, the solution of the resulting set of algebraic equations for the coefficients a_j and b_j in terms of the moments µ_j is in general extremely ill-conditioned. Even in double precision, it is not unusual to lose all accuracy by the time N = 12. We thus reject any procedure based on the moments (4.5.29).

Sack and Donovan [5] discovered that the numerical stability is greatly improved if, instead of using powers of x as a set of basis functions to represent the p_j's, one uses some other known set of orthogonal polynomials π_j(x), say. Roughly speaking, the improved stability occurs because the polynomial basis "samples" the interval (a, b) better than the power basis when the inner product integrals are evaluated, especially if its weight function resembles W(x). So assume that we know the modified moments

\nu_j = \int_a^b \pi_j(x)\,W(x)\,dx, \qquad j = 0, 1, \ldots, 2N-1 \tag{4.5.30}

where the π_j's satisfy a recurrence relation analogous to (4.5.6),

\pi_{-1}(x) \equiv 0
\pi_0(x) \equiv 1
\pi_{j+1}(x) = (x - \alpha_j)\,\pi_j(x) - \beta_j\,\pi_{j-1}(x), \qquad j = 0, 1, 2, \ldots \tag{4.5.31}

and the coefficients α_j, β_j are known explicitly. Then Wheeler [6] has given an efficient O(N^2) algorithm equivalent to that of Sack and Donovan for finding a_j and b_j via a set of intermediate quantities

\sigma_{k,l} = \langle p_k | \pi_l \rangle, \qquad k, l \ge -1 \tag{4.5.32}

Initialize

\sigma_{-1,l} = 0, \qquad l = 1, 2, \ldots, 2N-2
\sigma_{0,l} = \nu_l, \qquad l = 0, 1, \ldots, 2N-1
a_0 = \alpha_0 + \frac{\nu_1}{\nu_0}
b_0 = 0 \tag{4.5.33}

Then, for k = 1, 2, ..., N − 1, compute

\sigma_{k,l} = \sigma_{k-1,l+1} - (a_{k-1} - \alpha_l)\,\sigma_{k-1,l} - b_{k-1}\,\sigma_{k-2,l} + \beta_l\,\sigma_{k-1,l-1}, \qquad l = k, k+1, \ldots, 2N-k-1
a_k = \alpha_k - \frac{\sigma_{k-1,k}}{\sigma_{k-1,k-1}} + \frac{\sigma_{k,k+1}}{\sigma_{k,k}}
b_k = \frac{\sigma_{k,k}}{\sigma_{k-1,k-1}} \tag{4.5.34}

Note that the normalization factors can also easily be computed if needed:

\langle p_0 | p_0 \rangle = \nu_0
\langle p_j | p_j \rangle = b_j\,\langle p_{j-1} | p_{j-1} \rangle, \qquad j = 1, 2, \ldots \tag{4.5.35}

You can find a derivation of the above algorithm in Ref. [7].

Wheeler's algorithm requires that the modified moments (4.5.30) be accurately computed. In practical cases there is often a closed form, or else recurrence relations can be used. The algorithm is extremely successful for finite intervals (a, b). For infinite intervals, the algorithm does not completely remove the ill-conditioning. In this case, Gautschi [8,9] recommends reducing the interval to a finite interval by a change of variable, and then using a suitable discretization procedure to compute the inner products. You will have to consult the references for details.

We give the routine orthog for generating the coefficients a_j and b_j by Wheeler's algorithm, given the coefficients α_j and β_j, and the modified moments ν_j.
For consistency with gaucof, the vectors α, β, a and b are 1-based. Correspondingly, we increase the indices of the σ matrix by 2, i.e., sig[k][l] = σ_{k-2,l-2}.

#include "nrutil.h"

void orthog(int n, float anu[], float alpha[], float beta[], float a[],
    float b[])
/* Computes the coefficients a_j and b_j, j = 0, ..., N-1, of the recurrence relation for
monic orthogonal polynomials with weight function W(x) by Wheeler's algorithm. On input,
the arrays alpha[1..2*n-1] and beta[1..2*n-1] are the coefficients alpha_j and beta_j,
j = 0, ..., 2N-2, of the recurrence relation for the chosen basis of orthogonal polynomials.
The modified moments nu_j are input in anu[1..2*n]. The first n coefficients are returned
in a[1..n] and b[1..n]. */
{
    int k,l;
    float **sig;
    int looptmp;

    sig=matrix(1,2*n+1,1,2*n+1);
    looptmp=2*n;
    for (l=3;l<=looptmp;l++) sig[1][l]=0.0;    /* Initialization, Equation (4.5.33). */
    looptmp++;
    for (l=2;l<=looptmp;l++) sig[2][l]=anu[l-1];
    a[1]=alpha[1]+anu[2]/anu[1];
    b[1]=0.0;
    for (k=3;k<=n+1;k++) {                     /* Equation (4.5.34). */
        looptmp=2*n-k+3;
        for (l=k;l<=looptmp;l++) {
            sig[k][l]=sig[k-1][l+1]+(alpha[l-1]-a[k-2])*sig[k-1][l]-
                b[k-2]*sig[k-2][l]+beta[l-1]*sig[k-1][l-1];
        }
        a[k-1]=alpha[k-1]+sig[k][k+1]/sig[k][k]-sig[k-1][k]/sig[k-1][k-1];
        b[k-1]=sig[k][k]/sig[k-1][k-1];
    }
    free_matrix(sig,1,2*n+1,1,2*n+1);
}

As an example of the use of orthog, consider the problem [7] of generating orthogonal polynomials with the weight function W(x) = − log x on the interval (0, 1).
A suitable set of π_j's is the shifted Legendre polynomials

\pi_j = \frac{(j!)^2}{(2j)!}\,P_j(2x-1) \tag{4.5.36}

The factor in front of P_j makes the polynomials monic. The coefficients in the recurrence relation (4.5.31) are

\alpha_j = \frac{1}{2}, \qquad j = 0, 1, \ldots
\beta_j = \frac{1}{4(4 - j^{-2})}, \qquad j = 1, 2, \ldots \tag{4.5.37}

while the modified moments are

\nu_j = \begin{cases} 1 & j = 0 \\ \dfrac{(-1)^j (j!)^2}{j(j+1)(2j)!} & j \ge 1 \end{cases} \tag{4.5.38}

A call to orthog with this input allows one to generate the required polynomials to machine accuracy for very large N, and hence do Gaussian quadrature with this weight function. Before Sack and Donovan's observation, this seemingly simple problem was essentially intractable.

Extensions of Gaussian Quadrature

There are many different ways in which the ideas of Gaussian quadrature have been extended.
One important extension is the case of preassigned nodes: Some points are required to be included in the set of abscissas, and the problem is to choose the weights and the remaining abscissas to maximize the degree of exactness of the quadrature rule. The most common cases are Gauss-Radau quadrature, where one of the nodes is an endpoint of the interval, either a or b, and Gauss-Lobatto quadrature, where both a and b are nodes. Golub [10] has given an algorithm similar to gaucof for these cases.

The second important extension is the Gauss-Kronrod formulas. For ordinary Gaussian quadrature formulas, as N increases the sets of abscissas have no points in common. This means that if you compare results with increasing N as a way of estimating the quadrature error, you cannot reuse the previous function evaluations. Kronrod [11] posed the problem of searching for optimal sequences of rules, each of which reuses all abscissas of its predecessor. If one starts with N = m, say, and then adds n new points, one has 2n + m free parameters: the n new abscissas and weights, and m new weights for the fixed previous abscissas. The maximum degree of exactness one would expect to achieve would therefore be 2n + m − 1. The question is whether this maximum degree of exactness can actually be achieved in practice, when the abscissas are required to all lie inside (a, b). The answer to this question is not known in general.

Kronrod showed that if you choose n = m + 1, an optimal extension can be found for Gauss-Legendre quadrature. Patterson [12] showed how to compute continued extensions of this kind. Sequences such as N = 10, 21, 43, 87, . . . are popular in automatic quadrature routines [13] that attempt to integrate a function until some specified accuracy has been achieved.

CITED REFERENCES AND FURTHER READING:

Abramowitz, M., and Stegun, I.A.
1964, Handbook of Mathematical Functions, Applied Mathematics Series, Volume 55 (Washington: National Bureau of Standards; reprinted 1968 by Dover Publications, New York), §25.4. [1]

Stroud, A.H., and Secrest, D. 1966, Gaussian Quadrature Formulas (Englewood Cliffs, NJ: Prentice-Hall). [2]

Golub, G.H., and Welsch, J.H. 1969, Mathematics of Computation, vol. 23, pp. 221–230 and A1–A10. [3]

Wilf, H.S. 1962, Mathematics for the Physical Sciences (New York: Wiley), Problem 9, p. 80. [4]

Sack, R.A., and Donovan, A.F. 1971/72, Numerische Mathematik, vol. 18, pp. 465–478. [5]

Wheeler, J.C. 1974, Rocky Mountain Journal of Mathematics, vol. 4, pp. 287–296. [6]

Gautschi, W. 1978, in Recent Advances in Numerical Analysis, C. de Boor and G.H. Golub, eds. (New York: Academic Press), pp. 45–72. [7]

Gautschi, W. 1981, in E.B. Christoffel, P.L. Butzer and F. Fehér, eds. (Basel: Birkhäuser Verlag), pp. 72–147. [8]

Gautschi, W. 1990, in Orthogonal Polynomials, P. Nevai, ed. (Dordrecht: Kluwer Academic Publishers), pp. 181–216. [9]

Golub, G.H. 1973, SIAM Review, vol. 15, pp. 318–334. [10]

Kronrod, A.S. 1964, Doklady Akademii Nauk SSSR, vol. 154, pp. 283–286 (in Russian). [11]

Patterson, T.N.L. 1968, Mathematics of Computation, vol. 22, pp. 847–856 and C1–C11; 1969, op. cit., vol. 23, p. 892. [12]

Piessens, R., de Doncker, E., Uberhuber, C.W., and Kahaner, D.K. 1983, QUADPACK: A Subroutine Package for Automatic Integration (New York: Springer-Verlag). [13]

Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), §3.6.

Johnson, L.W., and Riess, R.D. 1982, Numerical Analysis, 2nd ed. (Reading, MA: Addison-Wesley), §6.5.

Carnahan, B., Luther, H.A., and Wilkes, J.O. 1969, Applied Numerical Methods (New York: Wiley), §§2.9–2.10.

Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), §§4.4–4.8.

4.6 Multidimensional Integrals

Integrals of functions of several variables, over regions with dimension greater than one, are not easy. There are two reasons for this. First, the number of function evaluations needed to sample an N-dimensional space increases as the Nth power of the number needed to do a one-dimensional integral. If you need 30 function evaluations to do a one-dimensional integral crudely, then you will likely need on the order of 30000 evaluations to reach the same crude level for a three-dimensional integral.
Second, the region of integration in N-dimensional space is defined by an N − 1 dimensional boundary which can itself be terribly complicated: It need not be convex or simply connected, for example. By contrast, the boundary of a one-dimensional integral consists of two numbers, its upper and lower limits.

The first question to be asked, when faced with a multidimensional integral, is, "can it be reduced analytically to a lower dimensionality?" For example, so-called iterated integrals of a function of one variable f(t) can be reduced to one-dimensional integrals by the formula

\int_0^x dt_n \int_0^{t_n} dt_{n-1} \cdots \int_0^{t_3} dt_2 \int_0^{t_2} f(t_1)\,dt_1 = \frac{1}{(n-1)!} \int_0^x (x-t)^{n-1} f(t)\,dt \tag{4.6.1}

Alternatively, the function may have some special symmetry in the way it depends on its independent variables.