Press, Teukolsky, Vetterling, Flannery, Numerical Recipes in C
The polynomial extrapolation implicit in the second call to qromo deals with a polynomial in 1/x, not in x.

To deal with an integral that has an integrable power-law singularity at its lower limit, one also makes a change of variable. If the integrand diverges as (x - a)^{-\gamma}, 0 \le \gamma < 1, near x = a, use the identity

\int_a^b f(x)\,dx = \frac{1}{1-\gamma} \int_0^{(b-a)^{1-\gamma}} t^{\gamma/(1-\gamma)}\, f\bigl(t^{1/(1-\gamma)} + a\bigr)\,dt \qquad (b > a) \tag{4.4.3}

If the singularity is at the upper limit, use the identity

\int_a^b f(x)\,dx = \frac{1}{1-\gamma} \int_0^{(b-a)^{1-\gamma}} t^{\gamma/(1-\gamma)}\, f\bigl(b - t^{1/(1-\gamma)}\bigr)\,dt \qquad (b > a) \tag{4.4.4}

If there is a singularity at both limits, divide the integral at an interior breakpoint as in the example above.

Equations (4.4.3) and (4.4.4) are particularly simple in the case of inverse square-root singularities, a case that occurs frequently in practice:

\int_a^b f(x)\,dx = \int_0^{\sqrt{b-a}} 2t\, f(a + t^2)\,dt \qquad (b > a) \tag{4.4.5}

for a singularity at a, and

\int_a^b f(x)\,dx = \int_0^{\sqrt{b-a}} 2t\, f(b - t^2)\,dt \qquad (b > a) \tag{4.4.6}

for a singularity at b.
Once again, we can implement these changes of variable transparently to the user by defining substitute routines for midpnt which make the change of variable automatically:

#include <math.h>
#define FUNC(x) (2.0*(x)*(*funk)(aa+(x)*(x)))

float midsql(float (*funk)(float), float aa, float bb, int n)
/* This routine is an exact replacement for midpnt, except that it allows for an
   inverse square-root singularity in the integrand at the lower limit aa. */
{
    float x,tnm,sum,del,ddel,a,b;
    static float s;
    int it,j;

    b=sqrt(bb-aa);
    a=0.0;
    if (n == 1) {

The rest of the routine is exactly like midpnt and is omitted.

Similarly,

#include <math.h>
#define FUNC(x) (2.0*(x)*(*funk)(bb-(x)*(x)))

float midsqu(float (*funk)(float), float aa, float bb, int n)
/* This routine is an exact replacement for midpnt, except that it allows for an
   inverse square-root singularity in the integrand at the upper limit bb. */
{
    float x,tnm,sum,del,ddel,a,b;
    static float s;
    int it,j;

    b=sqrt(bb-aa);
    a=0.0;
    if (n == 1) {

The rest of the routine is exactly like midpnt and is omitted.

One last example should suffice to show how these formulas are derived in general.
Suppose the upper limit of integration is infinite, and the integrand falls off exponentially. Then we want a change of variable that maps e^{-x} dx into (\pm)dt (with the sign chosen to keep the upper limit of the new variable larger than the lower limit). Doing the integration gives by inspection

t = e^{-x} \quad\text{or}\quad x = -\log t \tag{4.4.7}

so that

\int_{x=a}^{x=\infty} f(x)\,dx = \int_{t=0}^{t=e^{-a}} f(-\log t)\,\frac{dt}{t} \tag{4.4.8}

The user-transparent implementation would be

#include <math.h>
#define FUNC(x) ((*funk)(-log(x))/(x))

float midexp(float (*funk)(float), float aa, float bb, int n)
/* This routine is an exact replacement for midpnt, except that bb is assumed to be
   infinite (value passed not actually used). It is assumed that the function funk
   decreases exponentially rapidly at infinity. */
{
    float x,tnm,sum,del,ddel,a,b;
    static float s;
    int it,j;

    b=exp(-aa);
    a=0.0;
    if (n == 1) {

The rest of the routine is exactly like midpnt and is omitted.

CITED REFERENCES AND FURTHER READING:

Acton, F.S.
1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), Chapter 4.

Dahlquist, G., and Bjorck, A. 1974, Numerical Methods (Englewood Cliffs, NJ: Prentice-Hall), §7.4.3, p. 294.

Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), §3.7, p. 152.

4.5 Gaussian Quadratures and Orthogonal Polynomials

In the formulas of §4.1, the integral of a function was approximated by the sum of its functional values at a set of equally spaced points, multiplied by certain aptly chosen weighting coefficients.
We saw that as we allowed ourselves more freedom in choosing the coefficients, we could achieve integration formulas of higher and higher order. The idea of Gaussian quadratures is to give ourselves the freedom to choose not only the weighting coefficients, but also the location of the abscissas at which the function is to be evaluated: They will no longer be equally spaced. Thus, we will have twice the number of degrees of freedom at our disposal; it will turn out that we can achieve Gaussian quadrature formulas whose order is, essentially, twice that of the Newton-Cotes formula with the same number of function evaluations.

Does this sound too good to be true? Well, in a sense it is. The catch is a familiar one, which cannot be overemphasized: High order is not the same as high accuracy. High order translates to high accuracy only when the integrand is very smooth, in the sense of being "well-approximated by a polynomial."

There is, however, one additional feature of Gaussian quadrature formulas that adds to their usefulness: We can arrange the choice of weights and abscissas to make the integral exact for a class of integrands "polynomials times some known function W(x)" rather than for the usual class of integrands "polynomials." The function W(x) can then be chosen to remove integrable singularities from the desired integral. Given W(x), in other words, and given an integer N, we can find a set of weights w_j and abscissas x_j such that the approximation

\int_a^b W(x) f(x)\,dx \approx \sum_{j=1}^{N} w_j f(x_j) \tag{4.5.1}

is exact if f(x) is a polynomial.
For example, to do the integral

\int_{-1}^{1} \frac{\exp(-\cos^2 x)}{\sqrt{1-x^2}}\,dx \tag{4.5.2}

(not a very natural looking integral, it must be admitted), we might well be interested in a Gaussian quadrature formula based on the choice

W(x) = \frac{1}{\sqrt{1-x^2}} \tag{4.5.3}

in the interval (-1, 1). (This particular choice is called Gauss-Chebyshev integration, for reasons that will become clear shortly.)

Notice that the integration formula (4.5.1) can also be written with the weight function W(x) not overtly visible: Define g(x) \equiv W(x)f(x) and v_j \equiv w_j/W(x_j). Then (4.5.1) becomes

\int_a^b g(x)\,dx \approx \sum_{j=1}^{N} v_j g(x_j) \tag{4.5.4}

Where did the function W(x) go? It is lurking there, ready to give high-order accuracy to integrands of the form polynomials times W(x), and ready to deny high-order accuracy to integrands that are otherwise perfectly smooth and well-behaved. When you find tabulations of the weights and abscissas for a given W(x), you have to determine carefully whether they are to be used with a formula in the form of (4.5.1), or like (4.5.4).

Here is an example of a quadrature routine that contains the tabulated abscissas and weights for the case W(x) = 1 and N = 10.
Since the weights and abscissas are, in this case, symmetric around the midpoint of the range of integration, there are actually only five distinct values of each:

float qgaus(float (*func)(float), float a, float b)
/* Returns the integral of the function func between a and b, by ten-point
   Gauss-Legendre integration: the function is evaluated exactly ten times
   at interior points in the range of integration. */
{
    int j;
    float xr,xm,dx,s;
    static float x[]={0.0,0.1488743389,0.4333953941,   /* The abscissas and weights. */
        0.6794095682,0.8650633666,0.9739065285};       /* First value of each array not used. */
    static float w[]={0.0,0.2955242247,0.2692667193,
        0.2190863625,0.1494513491,0.0666713443};

    xm=0.5*(b+a);
    xr=0.5*(b-a);
    s=0;                      /* Will be twice the average value of the function, since the */
    for (j=1;j<=5;j++) {      /* ten weights (five numbers above each used twice) sum to 2. */
        dx=xr*x[j];
        s += w[j]*((*func)(xm+dx)+(*func)(xm-dx));
    }
    return s *= xr;           /* Scale the answer to the range of integration. */
}

The above routine illustrates that one can use Gaussian quadratures without necessarily understanding the theory behind them: One just locates tabulated weights and abscissas in a book (e.g., [1] or [2]).
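The comment in the loop asserts that the ten weights (each tabulated value used twice) sum to 2, the length of the standard interval (-1, 1). That claim is easy to check against the tabulated values themselves:

```c
#include <math.h>

/* The five distinct ten-point Gauss-Legendre weights from the listing above. */
static const double w5[5] = {0.2955242247, 0.2692667193,
                             0.2190863625, 0.1494513491, 0.0666713443};

/* Sum of all ten weights: each tabulated value is used twice, once for +x_j
   and once for -x_j. Should come out to 2 up to the tabulated precision. */
static double weight_sum(void)
{
    double s = 0.0;
    int j;
    for (j = 0; j < 5; j++) s += 2.0*w5[j];
    return s;
}
```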
However, the theory is very pretty, and it will come in handy if you ever need to construct your own tabulation of weights and abscissas for an unusual choice of W(x). We will therefore give, without any proofs, some useful results that will enable you to do this. Several of the results assume that W(x) does not change sign inside (a, b), which is usually the case in practice.

The theory behind Gaussian quadratures goes back to Gauss in 1814, who used continued fractions to develop the subject. In 1826 Jacobi rederived Gauss's results by means of orthogonal polynomials. The systematic treatment of arbitrary weight functions W(x) using orthogonal polynomials is largely due to Christoffel in 1877.
To introduce these orthogonal polynomials, let us fix the interval of interest to be (a, b). We can define the "scalar product of two functions f and g over a weight function W" as

\langle f \mid g \rangle \equiv \int_a^b W(x) f(x) g(x)\,dx \tag{4.5.5}

The scalar product is a number, not a function of x. Two functions are said to be orthogonal if their scalar product is zero. A function is said to be normalized if its scalar product with itself is unity. A set of functions that are all mutually orthogonal and also all individually normalized is called an orthonormal set.

We can find a set of polynomials (i) that includes exactly one polynomial of order j, called p_j(x), for each j = 0, 1, 2, \ldots, and (ii) all of which are mutually orthogonal over the specified weight function W(x). A constructive procedure for finding such a set is the recurrence relation

p_{-1}(x) \equiv 0
p_0(x) \equiv 1 \tag{4.5.6}
p_{j+1}(x) = (x - a_j)\,p_j(x) - b_j\,p_{j-1}(x) \qquad j = 0, 1, 2, \ldots

where

a_j = \frac{\langle x p_j \mid p_j \rangle}{\langle p_j \mid p_j \rangle} \qquad j = 0, 1, \ldots
\tag{4.5.7}
b_j = \frac{\langle p_j \mid p_j \rangle}{\langle p_{j-1} \mid p_{j-1} \rangle} \qquad j = 1, 2, \ldots

The coefficient b_0 is arbitrary; we can take it to be zero.

The polynomials defined by (4.5.6) are monic, i.e., the coefficient of their leading term [x^j for p_j(x)] is unity.
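For the simplest weight, W(x) = 1 on (-1, 1), the recurrence coefficients come out in closed form (a standard result for the monic Legendre polynomials, quoted here as an assumption rather than derived): a_j = 0 by symmetry of the interval, and b_j = j^2/(4j^2 - 1). A sketch that runs the recurrence (4.5.6) with these coefficients:

```c
#include <math.h>

/* Evaluate the monic Legendre polynomial p_n at x by running the recurrence
   p_{j+1} = (x - a_j) p_j - b_j p_{j-1} with a_j = 0, b_j = j^2/(4j^2 - 1)
   (closed forms assumed for W(x) = 1 on (-1,1)). */
static double monic_legendre(int n, double x)
{
    double pm1 = 0.0, p = 1.0;   /* p_{-1} and p_0 */
    int j;
    for (j = 0; j < n; j++) {
        double bj = (double)j*j/(4.0*j*j - 1.0);  /* b_0 multiplies p_{-1} = 0 */
        double pnext = x*p - bj*pm1;              /* a_j = 0 for this weight */
        pm1 = p;
        p = pnext;
    }
    return p;
}
```

Two spot checks: the recurrence gives p_2(x) = x^2 - 1/3 and p_3(x) = x^3 - (3/5)x, both monic as the text promises.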
If we divide each p_j(x) by the constant [\langle p_j \mid p_j \rangle]^{1/2} we can render the set of polynomials orthonormal. One also encounters orthogonal polynomials with various other normalizations. You can convert from a given normalization to monic polynomials if you know that the coefficient of x^j in p_j is \lambda_j, say; then the monic polynomials are obtained by dividing each p_j by \lambda_j. Note that the coefficients in the recurrence relation (4.5.6) depend on the adopted normalization.

The polynomial p_j(x) can be shown to have exactly j distinct roots in the interval (a, b).
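As a concrete instance of the normalization conversion (the Legendre values are quoted as a known example, not derived here): the conventionally normalized Legendre polynomial P_2(x) = (3x^2 - 1)/2 has leading coefficient lambda_2 = 3/2, and dividing by lambda_2 recovers the monic x^2 - 1/3 produced by the recurrence (4.5.6).

```c
#include <math.h>

/* Conventionally normalized Legendre polynomial P_2. */
static double legendre_P2(double x) { return 0.5*(3.0*x*x - 1.0); }

/* Convert to monic form by dividing out the leading coefficient lambda_2 = 3/2;
   the result is x^2 - 1/3. */
static double monic_p2(double x) { return legendre_P2(x)/1.5; }
```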
Moreover, it can be shown that the roots of p_j(x) "interleave" the j - 1 roots of p_{j-1}(x), i.e., there is exactly one root of the former in between each two adjacent roots of the latter. This fact comes in handy if you need to find all the roots: You can start with the one root of p_1(x) and then, in turn, bracket the roots of each higher j, pinning them down at each stage more precisely by Newton's rule or some other root-finding scheme (see Chapter 9).

Why would you ever want to find all the roots of an orthogonal polynomial p_j(x)? Because the abscissas of the N-point Gaussian quadrature formulas (4.5.1) and (4.5.4) with weighting function W(x) in the interval (a, b) are precisely the roots of the orthogonal polynomial p_N(x) for the same interval and weighting function. This is the fundamental theorem of Gaussian quadratures, and lets you find the abscissas for any particular case.

Once you know the abscissas x_1, .