Conte, de Boor - Elementary Numerical Analysis: An Algorithmic Approach, page 57
First draw a curve of f(x) and try to determine where you will expect to encounter difficulties.

7.5-4 Find an approximation to … , good to six decimal places, using adaptive quadrature.

7.5-5 Write a program for an adaptive Simpson-rule-based quadrature routine subject to the restrictions given below.
1. User input will consist of the function f(x), a finite interval [a,b], and an absolute error criterion ε.
2. The subroutine should divide the interval [a,b] into two equal parts and apply formulas (7.56a), (7.56b), and (7.60) to obtain S and E for each part.
3. If E satisfies the required error condition on a subinterval, store S; otherwise halve that interval and repeat step 2.
4. Continue subdividing as necessary up to a maximum of four nested subdivisions.
5. Output should consist of
(i) An integer variable IFLAG = 1 if the error test was satisfied on a set of intervals covering [a,b], and IFLAG = 2 if the error test was not satisfied on one or more subintervals.
(ii) If IFLAG = 1, print the sum P of the values S over all subintervals. If IFLAG = 2, print the partial sum P over those intervals where the error test was satisfied, and a list of the intervals [x_i, x_{i+1}] on which the test was not satisfied.

7.5-6 Verify the statement in the text that if the error condition (7.60) is satisfied on each of the N subintervals which cover the interval [a,b], then their sum P will satisfy the required error condition (7.55) over the whole interval [a,b].

*7.6 EXTRAPOLATION TO THE LIMIT

In the preceding sections, we spent considerable effort in deriving expressions for the error of the various rules for approximate integration and differentiation.
To summarize: With L(f) the integral of f(x) over some interval [a,b], or the value of some derivative of f(x) at some point a, we constructed an approximation L_h(f) to L(f), which depends on a parameter h and which satisfies

    L_h(f) → L(f)    as h → 0

More explicitly, we usually proved that

    L(f) = L_h(f) + c h^r f^(s)(ξ)

where c is some constant, r and s are positive integers, and ξ = ξ(h) is an unknown point in some interval. We pointed out that a direct bound for the size of the error term requires knowledge of the size of |f^(s)(ξ)|, which very often cannot be obtained accurately enough (if at all) to be of any use.

Nevertheless, such an error term tells us at what rate L_h(f) approaches L(f) (as h → 0).
This knowledge can be used at times to estimate the error from successive values of L_h(f). The possibility of such estimates was briefly mentioned in Sec. 1.6; in Sec. 3.4, we discussed a specific example, the Aitken ∆² process, and another example is given in the preceding Sec. 7.5.

As a simple example, consider the approximation

    D_h(f) = [f(a + h) - f(a - h)] / (2h)

to the value

    D(f) = f'(a)

of the first derivative of f(x) at x = a. If f(x) has three continuous derivatives, then, according to (7.8) or (7.11),

    D(f) = D_h(f) - (h^2/6) f'''(ξ)

for some ξ with |ξ - a| < |h|. Since ξ(h) → a as h → 0, and f'''(x) is continuous, we have

    f'''(ξ) → f'''(a)    as h → 0

Hence

    D(f) - D_h(f) + (h^2/6) f'''(a)

goes to zero faster than h^2.
Using the order notation introduced in Sec. 1.6, we therefore get that

    D(f) = D_h(f) + C_1 h^2 + o(h^2)        (7.61)

where the constant C_1 = -f'''(a)/6 does not depend on h.

A numerical example might help to bring out the significance of Eq. (7.61). With f(x) = sin x and a = 1, we get

    D(f) = 0.540302        C_1 = 0.090050

In Table 7.2, we have listed D_h(f), the error E_h(f) = -h^2 f'''(ξ)/6, and its two components, C_1 h^2 and o(h^2), for various values of h.
(To avoid round-off-error noise interference, all entries in this table were computed in double-precision arithmetic, then rounded.) As this table shows, C_1 h^2 quickly becomes the dominant component in the error since, although C_1 h^2 goes to zero (with h), the o(h^2) component goes to zero faster. But this implies that we can get a good estimate for the dominant error component C_1 h^2 as follows: Substitute 2h for h in (7.61) to get

    D(f) = D_{2h}(f) + 4 C_1 h^2 + o(h^2)

On subtracting this equation from (7.61), we obtain

    0 = D_h(f) - D_{2h}(f) - 3 C_1 h^2 + o(h^2)

or

    C_1 h^2 = [D_h(f) - D_{2h}(f)]/3 + o(h^2)        (7.62)

This last equation states that, for sufficiently small h, the computable number

    [D_h(f) - D_{2h}(f)]/3        (7.63)

is a good estimate for the usually unknown dominant error component C_1 h^2.
This is nicely illustrated in Table 7.2, where we have also listed the numbers (7.63).

Table 7.2

    h      D_h(f)      E_h(f)     C_1 h^2    o(h^2)       (D_h - D_{2h})/3   R_h
    6.4    0.009839    0.530463   3.688464   -3.158001
    3.2   -0.009856    0.550158   0.922116   -0.371957    -0.065652
    1.6    0.337545    0.202757   0.230529   -0.027772     0.115800          -0.57
    0.8    0.484486    0.055816   0.057632   -0.001816     0.048980           2.37
    0.4    0.526009    0.014293   0.014408   -0.000115     0.013841           3.54
    0.2    0.536707    0.003594   0.003602   -0.000007     0.003566           3.88
    0.1    0.539402    0.000900   0.000901   -0.0000005    0.000898           3.97

The catch in these considerations is, of course, the phrase "for sufficiently small h." Indeed, we see from Table 7.2 that, in our numerical example, (D_h - D_{2h})/3 is good only as an order-of-magnitude estimate when h = 1.6, while for h = 3.2, (D_h - D_{2h})/3 is not even in the ball park. Hence the number (7.63) should not be accepted indiscriminately as an estimate for the error.
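The behavior just described is easy to reproduce. The following sketch (our own Python, with illustrative names; only f(x) = sin x, a = 1, and formulas (7.61) and (7.63) come from the text) computes D_h(f), the estimate (7.63), and the true error for several values of h:

```python
import math

def D(f, a, h):
    """Central-difference approximation D_h(f) to D(f) = f'(a)."""
    return (f(a + h) - f(a - h)) / (2 * h)

f, a = math.sin, 1.0
exact = math.cos(a)  # D(f) = cos 1 = 0.540302...

for h in (3.2, 1.6, 0.8, 0.4, 0.2, 0.1):
    Dh = D(f, a, h)
    estimate = (Dh - D(f, a, 2 * h)) / 3   # the computable number (7.63)
    true_err = exact - Dh                  # E_h(f) = D(f) - D_h(f)
    print(f"h={h:<4}  Dh={Dh: .6f}  estimate={estimate: .6f}  "
          f"true error={true_err: .6f}")
```

At h = 0.1 the estimate 0.000899 is very close to the true error 0.000900, while at h = 3.2 the estimate does not even have the right order of magnitude, in line with the caution above.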
Rather, one should protect oneself against drastic mistakes by a simple check, based on the following argument: If C_1 h^2 is indeed the dominant error component, i.e., if the o(h^2) term is "small" compared with C_1 h^2, then, from (7.62),

    D_h(f) - D_{2h}(f) ≈ 3 C_1 h^2

Hence also

    D_{2h}(f) - D_{4h}(f) ≈ 3 C_1 (2h)^2 = 12 C_1 h^2

Therefore

    [D_{2h}(f) - D_{4h}(f)] / [D_h(f) - D_{2h}(f)] ≈ 4

In words, if C_1 h^2 is the dominant error component, then the computable ratio of differences

    R_h = [D_{2h}(f) - D_{4h}(f)] / [D_h(f) - D_{2h}(f)]        (7.64)

should be about 4. This is quite evident, for our numerical example, in Table 7.2, where we have also listed the ratios R_h.

Once one believes that (7.63) is a good estimate for the error in D_h(f), having reassured oneself by checking that R_h ≈ 4, then one can expect

    D_h^1(f) = D_h(f) + [D_h(f) - D_{2h}(f)]/3        (7.65)

to be a much better approximation to D(f) than is D_h(f). In particular,
one then believes that

    |D(f) - D_h^1(f)| < |D_h^1(f) - D_h(f)|        (7.66)

In order to see how much better an approximation D_h^1(f) might be, we now obtain a more detailed description of the error term D(f) - D_h(f). For the sake of variety, we use Taylor series rather than divided differences for this. If f(x) has five continuous derivatives, then, on expanding both f(a + h) and f(a - h) in a partial Taylor series around x = a, we get

    f(a + h) = f(a) + h f'(a) + (h^2/2) f''(a) + (h^3/6) f'''(a) + (h^4/24) f''''(a) + (h^5/120) f^(5)(a) + o(h^5)
    f(a - h) = f(a) - h f'(a) + (h^2/2) f''(a) - (h^3/6) f'''(a) + (h^4/24) f''''(a) - (h^5/120) f^(5)(a) + o(h^5)

Subtract the second equation from the first; then divide by 2h to get

    D_h(f) = f'(a) + (h^2/6) f'''(a) + (h^4/120) f^(5)(a) + o(h^4)

Hence

    D(f) = D_h(f) + C_1 h^2 + C_2 h^4 + o(h^4)        (7.67)

where the constants

    C_1 = -f'''(a)/6        C_2 = -f^(5)(a)/120

do not depend on h.
Therefore, on substituting 2h for h in (7.67), we get

    D(f) = D_{2h}(f) + 4 C_1 h^2 + 16 C_2 h^4 + o(h^4)        (7.68)

Subtracting 1/3 of (7.68) from 4/3 of (7.67) now gives

    D(f) = D_h^1(f) + C_2^1 h^4 + o(h^4)        (7.69)

with

    C_2^1 = (4/3) C_2 - (16/3) C_2 = -4 C_2

since, by (7.65),

    D_h^1(f) = [4 D_h(f) - D_{2h}(f)]/3

A comparison of (7.69) with (7.67) shows that D_h^1(f) is a higher-order approximation to D(f) than is D_h(f): If C_1 ≠ 0, then D(f) - D_h(f) goes to zero (with h) only as fast as h^2, while D(f) - D_h^1(f) goes to zero at least as fast as h^4.

This process of obtaining from two lower-order approximations a higher-order approximation is usually called extrapolation to the limit, or extrapolation to zero-grid size. (See Exercise 7.6-3 for an explanation of this terminology.) Extrapolation to the limit is in no way limited to approximations with error O(h^2).
We get, for example, from (7.69), on substituting 2h for h, that

    D(f) = D_{2h}^1(f) + 16 C_2^1 h^4 + o(h^4)

Hence, on subtracting this from (7.69) and rearranging, we obtain

    C_2^1 h^4 = [D_h^1(f) - D_{2h}^1(f)]/15 + o(h^4)

Therefore, setting

    D_h^2(f) = D_h^1(f) + [D_h^1(f) - D_{2h}^1(f)]/15

we get that

    D(f) = D_h^2(f) + o(h^4)

showing D_h^2(f) to be an even higher-order approximation to D(f) than is D_h^1(f). More explicitly, it can be shown that

    D(f) = D_h^2(f) + C_3^2 h^6 + o(h^6)        (7.70)

if f(x) is sufficiently smooth.
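The two extrapolation steps above are conveniently organized as a small triangular array with h halved from row to row; in the book's notation, the entry built from D_h and D_{h/2} is then D_{h/2}^1, and so on. The sketch below is our own arrangement and naming, using the running example f(x) = sin x, a = 1:

```python
import math

def central(f, a, h):
    """Central-difference approximation to f'(a)."""
    return (f(a + h) - f(a - h)) / (2 * h)

def extrapolate(f, a, h, levels=3):
    """Column 0 holds central differences at h, h/2, h/4, ...;
    column k removes the h^(2k) error term of column k-1."""
    cols = [[central(f, a, h / 2**i) for i in range(levels)]]
    for k in range(1, levels):
        prev = cols[-1]
        w = 4**k  # the error of column k-1 scales like h^(2k)
        cols.append([(w * prev[i + 1] - prev[i]) / (w - 1)
                     for i in range(len(prev) - 1)])
    return cols

cols = extrapolate(math.sin, 1.0, 0.4)
# cols[2][0] is the twice-extrapolated value, accurate here to about 1e-8
```

Starting from D_{0.4} = 0.526009, two extrapolations recover f'(1) = cos 1 to nearly full double precision, mirroring the last column of Table 7.3.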
But note that, for any particular value of h, D_h^2(f) cannot be expected to be a better approximation to D(f) than is D_h^1(f) unless

    [D_h^1(f) - D_{2h}^1(f)]/15

is a good estimate for the error in D_h^1(f), that is, unless C_2^1 h^4 is the dominant part of the error in D_h^1(f). This will be the case only if

    R_h^1 = [D_{2h}^1(f) - D_{4h}^1(f)] / [D_h^1(f) - D_{2h}^1(f)] ≈ 16

Hence this condition should be checked before believing that

    |D(f) - D_h^2(f)| < |D_h^2(f) - D_h^1(f)|

We have listed in Table 7.3 the results of applying extrapolation to the limit twice to the sequence of D_h(f) calculated for Table 7.2. We have also listed the various values of R_h^1. All calculations were carried out with rounding to six places after the decimal point.

Table 7.3

    h      D_h(f)      R_h      D_h^1(f)    R_h^1    D_h^2(f)
    6.4    0.009839
    3.2   -0.009856            -0.075508
    1.6    0.337545   -0.57     0.453345             0.488602
    0.8    0.484486    2.37     0.533466    6.1      0.538807
    0.4    0.526009    3.54     0.539850   12.5      0.540276
    0.2    0.536707    3.88     0.540273   15.1      0.540301
    0.1    0.539402    3.97     0.540300   15.7      0.540302

Finally, there is nothing sacred about the number 2 used above for all extrapolations.
Indeed, if q is any fixed number, then we get, for example, from (7.67) that

    D(f) = D_{qh}(f) + q^2 C_1 h^2 + q^4 C_2 h^4 + o(h^4)

Subtracting this from (7.67) and rearranging then gives

    C_1 h^2 = [D_h(f) - D_{qh}(f)]/(q^2 - 1) - (q^2 + 1) C_2 h^4 + o(h^4)

Hence, with

    D_{h,q}(f) = D_h(f) + [D_h(f) - D_{qh}(f)]/(q^2 - 1)

we find that

    D(f) = D_{h,q}(f) - q^2 C_2 h^4 + o(h^4)

showing D_{h,q}(f) to be an O(h^4) approximation to D(f). For example, we calculate from Table 7.2 that … , which is in error by only seven units in the last place.

We have collected the salient points of the preceding discussion in the following algorithm.

Algorithm 7.1: Extrapolation to the limit  Given the means of calculating an approximation L_h(f) to the number L(f) for every h > 0, where L_h(f) is known to satisfy

    L(f) = L_h(f) + C h^r + o(h^r)        all h > 0

with C a constant independent of h, and r a positive number. Pick an h and a number q > 1 (for example, q = 2) and calculate

    L_{h,q}(f) = L_h(f) + [L_h(f) - L_{qh}(f)]/(q^r - 1)

from the two numbers L_h(f) and L_{qh}(f). Then

    L(f) = L_{h,q}(f) + o(h^r)

so that, for sufficiently small h,

    |L(f) - L_{h,q}(f)| < |L_{h,q}(f) - L_h(f)|        (7.71)

Before putting any faith in (7.71), ascertain that, at least,

    [L_{ph}(f) - L_{pqh}(f)] / [L_h(f) - L_{qh}(f)] ≈ p^r

for some p > 1 (for example, p = q).

EXERCISES

7.6-1 With f(x) = x + x^2 + x^5 and a = 0, calculate D_h(f) and D_h^1(f) for various values of h. Why is D_h^1(f) always a worse approximation to D(f) = f'(0) than is D_h(f)? (Use high enough precision arithmetic to rule out roundoff as the culprit, or get an explicit expression for D_h and D_h^1 in terms of h.)

7.6-2 Using extrapolation to the limit, find f'(0.4) for the data given.

    x        sinh x = f(x)
    0.398    0.408591
    0.399    0.409671
    0.400    0.410752
    0.401    0.411834
    0.402    0.412915

In this case the extrapolated value is a poorer approximation.
Explain why this is so. [Note: The correct value of f'(0.4) is 1.081072.]

7.6-3 Show that extrapolation to the limit can be based on analytic substitution. Specifically, with the notation of Algorithm 7.1, show that

    L_{h,q}(f) = p(0)

where the approximation p(x) to g(x) = L_x(f) is obtained by finding A and B such that

    p(x) = A + B x^r

agrees with g(x) at x = h and x = qh. How does this explain the name "extrapolation to the limit"?

*7.7 ROMBERG INTEGRATION

Extrapolation to the limit is probably best known for its use with the composite trapezoid rule, where it is known as Romberg integration.
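As a preview, the combination can be sketched in a few lines: compute composite trapezoid sums with the step repeatedly halved, and apply Algorithm 7.1 with q = 2, where the surviving error order is r = 2, 4, 6, ... at successive stages. This is our own illustrative code, not the book's program:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal subintervals."""
    h = (b - a) / n
    inner = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + inner)

def romberg(f, a, b, levels=5):
    """Extrapolate trapezoid sums T(h), T(h/2), ... toward h -> 0.
    At stage k the surviving error term is O(h^(2k)), so q^r = 4^k."""
    row = [trapezoid(f, a, b, 2**i) for i in range(levels)]
    for k in range(1, levels):
        row = [(4**k * row[i + 1] - row[i]) / (4**k - 1)
               for i in range(len(row) - 1)]
    return row[0]

approx = romberg(math.sin, 0.0, math.pi)  # exact value of the integral is 2
```

Even though the finest trapezoid sum here uses only 16 subintervals, the extrapolated value agrees with the exact integral to better than six decimal places.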