Conte, de Boor - Elementary Numerical Analysis: An Algorithmic Approach (scan page 52)
The function $f(x)$ is $e^x$, and with $a = 0$, the exact values of $f'(a)$ and $f''(a)$ are obviously one. We see from the table that $D_h$ and $D_h^2$ continue to improve as $h$ diminishes until $h = 0.01$. After this, the results worsen. For $h = 0.0001$, there is a loss of four significant figures in $D_h$ and of seven significant figures in $D_h^2$.
The only remedy for this loss of significance is to increase the number of significant digits to which $f(x)$ is computed as $h$ becomes smaller. This will normally be impossible on most computers. Moreover, $f(x)$ will itself normally be the result of other computations which have introduced other numerical errors.

Table 7.1

h        EXP(h)          EXP(-h)         Dh              Dh2
1.0      0.27182817E 01  0.36787944E 00  0.11752012E 01  0.10861612E 01
0.1      0.11051708E 01  0.90483743E 00  0.10016673E 01  0.10008334E 01
0.01     0.10100501E 01  0.99004984E 00  0.10000161E 01  0.10000169E 01
0.001    0.10010005E 01  0.99900050E 00  0.99999458E 00  0.99837783E 00
0.0001   0.10001000E 01  0.99990001E 00  0.99994244E 00  0.14901161E 01

To analyze this phenomenon, consider formula (7.11), which gives

$$f'(a) = \frac{f(a+h) - f(a-h)}{2h} - \frac{h^2}{6} f'''(\xi)$$

In calculations, we will in fact use the numbers $f(a+h) + \varepsilon_+$ and $f(a-h) + \varepsilon_-$ instead of the numbers $f(a+h)$ and $f(a-h)$, because of roundoff.
Therefore we compute

$$f'_{\mathrm{comp}} = \frac{[f(a+h) + \varepsilon_+] - [f(a-h) + \varepsilon_-]}{2h}$$

Hence, with (7.11),

$$f'_{\mathrm{comp}} - f'(a) = \frac{\varepsilon_+ - \varepsilon_-}{2h} + \frac{h^2}{6} f'''(\xi) \qquad (7.16)$$

The error in the computed approximation $f'_{\mathrm{comp}}$ to $f'(a)$ is therefore seen to consist of two parts, one part due to roundoff, and the other part due to discretization. If $f'''(x)$ is bounded, then the discretization error goes to zero as $h \to 0$, but the round-off error grows if we assume (as we must in practice) that $\varepsilon_+ - \varepsilon_-$ does not decrease (but see Exercise 7.1-5). We define the optimum value of $h$ as that value for which the sum of the magnitudes of the round-off error and of the discretization error is minimized.
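The interplay between these two error sources is easy to reproduce. The sketch below (ours, not from the book) computes the central-difference estimates $D_h$ for $f'(0)$ and $D_h^2$ for $f''(0)$ with $f(x) = e^x$; in double precision the turnaround occurs at smaller $h$ than in the single-precision Table 7.1, but the same pattern of improvement followed by deterioration appears.

```python
import math

def d_h(f, a, h):
    """Central-difference estimate of f'(a): (f(a+h) - f(a-h)) / (2h)."""
    return (f(a + h) - f(a - h)) / (2 * h)

def d_h2(f, a, h):
    """Central-difference estimate of f''(a): (f(a+h) - 2f(a) + f(a-h)) / h^2."""
    return (f(a + h) - 2 * f(a) + f(a - h)) / h**2

# f(x) = e^x at a = 0: both derivatives are exactly 1.
# Dh2 loses all accuracy by h = 1e-8, where the true numerator (about h^2)
# falls below double-precision rounding noise.
for h in [1.0, 1e-1, 1e-2, 1e-4, 1e-6, 1e-8]:
    print(f"h={h:8.0e}  Dh={d_h(math.exp, 0.0, h):.10f}  "
          f"Dh2={d_h2(math.exp, 0.0, h):.10f}")
```
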
To illustrate the procedure for finding an optimum value of $h$, let us consider the problem above of computing $f'(0)$ when $f(x) = e^x$. Let us assume that the error in computing $e^x$ is $\pm 1 \cdot 10^{-8}$ and that $\varepsilon_+ - \varepsilon_-$ remains finite and equal approximately to $\pm 2 \cdot 10^{-8}$. Then, from (7.16), the round-off error $R$ is approximately

$$R \approx \frac{2 \cdot 10^{-8}}{2h} = \frac{10^{-8}}{h}$$

The discretization error $T$ is approximately

$$T \approx \frac{h^2}{6}$$

since $f'''(\xi)$ is approximately one. To find the optimum $h$ we must therefore minimize

$$g(h) = \frac{10^{-8}}{h} + \frac{h^2}{6}$$

To find the value of $h$ for which $g(h)$ is a minimum, we differentiate $g(h)$ with respect to $h$ and find its zero. Thus

$$g'(h) = -\frac{10^{-8}}{h^2} + \frac{h}{3} = 0$$

and its positive solution is

$$h^3 = 3 \cdot 10^{-8} \qquad \text{or} \qquad h = \sqrt[3]{3 \cdot 10^{-8}} \approx 0.0031$$

This is the optimum value of $h$.
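The minimization can also be checked numerically. A small sketch of our own, using the error model $g(h)$ just derived (with the book's assumed roundoff bound $10^{-8}$, which is itself an assumption about the machine):

```python
# hypothetical error model from the text: g(h) = 1e-8/h + h^2/6
def g(h):
    return 1e-8 / h + h**2 / 6

h_opt = (3e-8) ** (1 / 3)        # analytic minimizer, from h^3 = 3e-8
# scan a fine grid around h_opt and confirm no nearby h does better
hs = [h_opt * (1 + k / 1000) for k in range(-500, 501)]
h_best = min(hs, key=g)
print(h_opt, h_best)  # both near 0.0031
```
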
The student can verify by examining Table 7.1 that the best value of $h$ falls between 0.01 and 0.001.

Formulas for numerical differentiation as derived in this section are very useful in the study of methods for the numerical solution of differential equations (see Chaps. 8 and 9). But the above analysis shows these formulas to be of limited utility for the approximate calculation of derivatives. The analysis shows that we can combat the round-off-error effect by using "sufficiently" high-precision arithmetic.
But this is impossible when $f(x)$ is known only approximately at finitely many points.

If the numerical calculation of derivatives cannot be avoided, it is usually more advantageous to estimate $D(f)$ by $D(p_k)$, with $p_k(x)$ the least-squares approximation to $f(x)$ by polynomials of low degree (see Sec. 6.4). A very promising alternative is the approximation of $D(f)$ by $D(g_3)$, where $g_3(x)$ is the cubic spline interpolating $f(x)$ at a number of points, or best approximating $f(x)$ in the least-squares sense.

EXERCISES

7.1-1 From the following table find $f'(1.4)$, using (7.7), (7.8), and (7.10).
Also find $f''(1.4)$, using (7.14). Compare your results with the results $f'(1.4) = \cosh 1.4 = 2.1509$ and $f''(1.4) = \sinh 1.4 = 1.9043$, which are correct to the places given.

x      f(x)
1.2    1.5095
1.3    1.6984
1.4    1.9043
1.5    2.1293
1.6    2.3756

7.1-2 From the following table of values of $f(x) = \sinh x$, find $f'(0.400)$, using (7.8) with $h = 0.001$ and $h = 0.002$. Which of these is the more accurate? The correct result is $f'(0.4) = \cosh 0.4 = 1.081072$.

x        f(x)
0.398    0.408591
0.399    0.409671
0.400    0.410752
0.401    0.411834
0.402    0.412915

7.1-3 In Eq. (7.16) let $f(x) = \sinh x$ and assume that the round-off error in computing $\sinh x$ remains constant, so that $\varepsilon_+ - \varepsilon_- = 0.5 \cdot 10^{-7}$.
Determine the optimum value of $h$ to be used if formula (7.8) is used to compute $f'(0)$.

7.1-4 Derive a formula for $f'''(a)$ by differentiating (7.1) three times, choosing $k = 3$ and setting $a = x_0$, $x_1 = a - h$, $x_2 = a + h$, $x_3 = a + 2h$. Also derive the error term for this formula.

7.1-5 On your computer, calculate the sequence of numbers

$$a_n = f[2 - 2^{-n}, 2 + 2^{-n}] \qquad n = 1, 2, 3, \ldots$$

where $f(x) = \ln x$. Without round-off effects, $a_n \to f'(2) = 0.5$ as $n \to \infty$. According to the discussion in this section, the computed $a_n$ should eventually move away from 0.5 as $n$ increases, because of roundoff.
Does this really happen? If not, why not? Does this invalidate the discussion in the text?

7.1-6 Verify the formula (7.8) by expanding $f(a + h)$ and $f(a - h)$ into Taylor series about the point $a$.

7.1-7 Derive the formula (7.14) for $f''(a)$ using Taylor series expansions.

7.2 NUMERICAL INTEGRATION: SOME BASIC RULES

The problem of numerical integration, or numerical quadrature, is that of estimating the number

$$I(f) = \int_a^b f(x)\,dx \qquad (7.17)$$

This problem arises when the integration cannot be carried out exactly or when $f(x)$ is known only at a finite number of points.

For this, we follow the outline given at the beginning of this chapter. We approximate $I(f)$ by $I(p_k)$, where $p_k(x)$ is the polynomial of degree $\le k$ which agrees with $f(x)$ at the points
$x_0, \ldots, x_k$. The approximation is usually written as a rule, i.e., as a weighted sum

$$I(p_k) = A_0 f(x_0) + A_1 f(x_1) + \cdots + A_k f(x_k)$$

of the function values $f(x_0), \ldots, f(x_k)$. The weights could be calculated as $A_i = I(l_i)$, with $l_i(x)$ the $i$th Lagrange polynomial.

Assume now that the integrand $f(x)$ is sufficiently smooth on some interval $[c,d]$ containing $a$ and $b$ so that we can write, as in (2.37),

$$f(x) = p_k(x) + f[x_0, \ldots, x_k, x]\,\psi_k(x)$$

where

$$\psi_k(x) = (x - x_0)(x - x_1) \cdots (x - x_k)$$

Then the error in our estimate $I(p_k)$ for $I(f)$ is

$$E(f) = I(f) - I(p_k) = \int_a^b f[x_0, \ldots, x_k, x]\,\psi_k(x)\,dx \qquad (7.18)$$

$f[x_0, \ldots, x_k, x]$ being a continuous, hence integrable, function of $x$, by Theorem 2.5.

This error term can, at times, be simplified. If, for example, $\psi_k(x)$ is of one sign on $(a,b)$, then, by the mean-value theorem for integrals (see Sec. 1.7),

$$E(f) = f[x_0, \ldots, x_k, \xi] \int_a^b \psi_k(x)\,dx \qquad \text{for some } \xi \text{ in } (a,b) \qquad (7.19)$$

If, in addition, $f(x)$ is $k + 1$ times continuously differentiable on $(c,d)$, we get from (7.18) and (7.19) that

$$E(f) = \frac{f^{(k+1)}(\eta)}{(k+1)!} \int_a^b \psi_k(x)\,dx \qquad (7.20)$$

Even if $\psi_k(x)$ is not of one sign, certain simplifications in the error term (7.18) are possible.
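The weights $A_i = I(l_i)$ can be computed mechanically. The following sketch is ours (the routine `lagrange_weights` is not from the book): it builds each Lagrange polynomial in coefficient form and integrates it exactly in rational arithmetic. Applied to the nodes $a$, $(a+b)/2$, $b$ on $[0,1]$, it recovers the familiar weights $\frac16, \frac46, \frac16$.

```python
from fractions import Fraction

def lagrange_weights(nodes, a, b):
    """Quadrature weights A_i = integral over [a,b] of the i-th Lagrange
    polynomial for the given nodes, in exact rational arithmetic."""
    nodes = [Fraction(x) for x in nodes]
    a, b = Fraction(a), Fraction(b)
    weights = []
    for i, xi in enumerate(nodes):
        coeffs = [Fraction(1)]          # coefficients of l_i, lowest degree first
        for j, xj in enumerate(nodes):
            if j == i:
                continue
            # multiply the current polynomial by (x - xj) / (xi - xj)
            d = xi - xj
            new = [Fraction(0)] * (len(coeffs) + 1)
            for k, c in enumerate(coeffs):
                new[k + 1] += c / d
                new[k] += -xj * c / d
            coeffs = new
        # integrate term by term: x^k -> (b^(k+1) - a^(k+1)) / (k+1)
        integral = sum(c * (b**(k + 1) - a**(k + 1)) / (k + 1)
                       for k, c in enumerate(coeffs))
        weights.append(integral)
    return weights

# nodes a, (a+b)/2, b on [0, 1] give the weights 1/6, 4/6, 1/6:
print(lagrange_weights([0, Fraction(1, 2), 1], 0, 1))
```
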
A particularly desirable instance of this kind occurs when

$$\int_a^b \psi_k(x)\,dx = 0 \qquad (7.21)$$

In such a case, we can make use of the identity

$$f[x_0, \ldots, x_k, x] = f[x_0, \ldots, x_k, x_{k+1}] + f[x_0, \ldots, x_{k+1}, x](x - x_{k+1})$$

which is valid for arbitrary $x_{k+1}$, to get that

$$E(f) = \int_a^b f[x_0, \ldots, x_{k+1}, x](x - x_{k+1})\,\psi_k(x)\,dx$$

since

$$\int_a^b f[x_0, \ldots, x_k, x_{k+1}]\,\psi_k(x)\,dx = f[x_0, \ldots, x_k, x_{k+1}] \int_a^b \psi_k(x)\,dx = 0$$

If we now can choose $x_{k+1}$ in such a way that $(x - x_{k+1})\psi_k(x)$ is of one sign on $(a,b)$, and if $f(x)$ is $(k + 2)$ times continuously differentiable, then it follows (as before) that

$$E(f) = \frac{f^{(k+2)}(\eta)}{(k+2)!} \int_a^b (x - x_{k+1})\,\psi_k(x)\,dx \qquad (7.22)$$

Note that the derivative of $f(x)$ appearing in (7.22) is of one order higher than the one in (7.20). As in numerical differentiation, this indicates that (7.22) is of higher order than (7.20).

We now consider specific examples. Let $k = 0$.
Then

$$f(x) = f(x_0) + f[x_0, x](x - x_0)$$

Hence

$$I(p_0) = (b - a)f(x_0)$$

If $x_0 = a$, then this approximation becomes

$$I(f) \approx (b - a)f(a) \qquad (7.23)$$

the so-called rectangle rule (see Fig. 7.2). Since, in this case, $\psi_0(x) = x - a$ is of one sign on $(a,b)$, the error $E_R$ of the rectangle rule can be computed from (7.20). One gets

$$E_R = \frac{f'(\eta)}{2}(b - a)^2 \qquad (7.24)$$

If $x_0 = (a + b)/2$, then $\psi_0(x) = x - x_0$ fails to be of one sign. But then

$$\int_a^b (x - x_0)\,dx = 0$$

while $(x - x_0)^2$ is of one sign. Hence, in this case, the error in $I(p_k)$ can be computed from (7.22), with $x_1 = x_0$. One gets

$$I(f) = (b - a)f\!\left(\frac{a + b}{2}\right) + \frac{f''(\eta)}{24}(b - a)^3 \qquad (7.25)$$

the midpoint rule. Next, let $k = 1$.
Then

$$f(x) = f(x_0) + f[x_0, x_1](x - x_0) + f[x_0, x_1, x](x - x_0)(x - x_1)$$

To get $\psi_1(x) = (x - x_0)(x - x_1)$ of one sign on $(a,b)$, we choose $x_0 = a$, $x_1 = b$. Then, by (7.20),

$$I(p_1) = (b - a)\frac{f(a) + f(b)}{2}$$

or

$$I(f) = (b - a)\frac{f(a) + f(b)}{2} - \frac{f''(\eta)}{12}(b - a)^3 \qquad (7.26)$$

the trapezoid(al) rule (see Fig. 7.2). Now let $k = 2$. Then

$$f(x) = p_2(x) + f[x_0, x_1, x_2, x](x - x_0)(x - x_1)(x - x_2)$$

Figure 7.2 Numerical integration.

Note, for distinct $x_0$, $x_1$, $x_2$ in $(a,b)$, $(x - x_0)(x - x_1)(x - x_2)$ is not of one sign on $(a,b)$. But if we choose $x_0 = a$, $x_1 = (a + b)/2$, $x_2 = b$, then one can show by direct integration or by symmetry arguments that

$$\int_a^b (x - a)\left(x - \frac{a + b}{2}\right)(x - b)\,dx = 0$$

The error is of the form (7.22). If we now choose $x_3 = x_1 = (a + b)/2$, then

$$(x - a)\left(x - \frac{a + b}{2}\right)^2 (x - b)$$

is of one sign on $(a,b)$. Hence it then follows from (7.18) and (7.22) that

$$E(f) = \frac{f^{(4)}(\eta)}{4!} \int_a^b (x - a)\left(x - \frac{a + b}{2}\right)^2 (x - b)\,dx$$

One calculates directly

$$\int_a^b (x - a)\left(x - \frac{a + b}{2}\right)^2 (x - b)\,dx = -\frac{(b - a)^5}{120}$$

so that the error for this formula becomes

$$E(f) = -\frac{f^{(4)}(\eta)}{2880}(b - a)^5$$

We now calculate $I(p_2)$ directly to obtain the formula corresponding to the case $k = 2$ with the choice of interpolating points $x_0 = a$, $x_1 = (a + b)/2$, $x_2 = b$.
It is convenient to write the interpolating polynomial in the form

$$p_2(x) = f[a] + f[a, b](x - a) + f\!\left[a, b, \frac{a + b}{2}\right](x - a)(x - b)$$

Then

$$I(p_2) = (b - a)f(a) + f[a, b]\int_a^b (x - a)\,dx + f\!\left[a, b, \frac{a + b}{2}\right]\int_a^b (x - a)(x - b)\,dx$$

But

$$(b - a)f(a) + f[a, b]\int_a^b (x - a)\,dx = (b - a)\frac{f(a) + f(b)}{2}$$

as we just found out when deriving (7.26). So

$$I(p_2) = (b - a)\frac{f(a) + f(b)}{2} - \frac{(b - a)^3}{6}\, f\!\left[a, \frac{a + b}{2}, b\right] \qquad (7.27)$$

using the fact that $\int_a^b (x - a)(x - b)\,dx = -(b - a)^3/6$ and that, by symmetry of the divided difference,

$$f\!\left[a, b, \frac{a + b}{2}\right] = f\!\left[a, \frac{a + b}{2}, b\right]$$

But now,

$$f[a, b](b - a) = f(b) - f(a)$$

while

$$f\!\left[a, \frac{a + b}{2}, b\right] = \frac{2}{(b - a)^2}\left[f(a) - 2f\!\left(\frac{a + b}{2}\right) + f(b)\right]$$

Substituting these expressions into (7.27) gives us

$$I(p_2) = \frac{b - a}{6}\left[f(a) + 4f\!\left(\frac{a + b}{2}\right) + f(b)\right]$$

We thus arrive at the justly famous Simpson's rule together with its associated error

$$I(f) = \frac{b - a}{6}\left[f(a) + 4f\!\left(\frac{a + b}{2}\right) + f(b)\right] - \frac{f^{(4)}(\eta)}{2880}(b - a)^5 \qquad (7.28)$$

Finally, let $k = 3$. Then

$$f(x) = p_3(x) + f[x_0, \ldots, x_3, x](x - x_0)(x - x_1)(x - x_2)(x - x_3)$$

By choosing $x_0 = x_1 = a$, $x_2 = x_3 = b$ we can be assured that $(x - a)^2(x - b)^2$ is of one sign on $(a,b)$ and hence from (7.20) that the error can be expressed as

$$E(f) = \frac{f^{(4)}(\eta)}{4!}\int_a^b (x - a)^2(x - b)^2\,dx = \frac{f^{(4)}(\eta)}{720}(b - a)^5$$

To derive the integration formula corresponding to the choice of points $x_0 = x_1 = a$, $x_2 = x_3 = b$ we first observe that

$$p_3(x) = f[a] + f[a, a](x - a) + f[a, a, b](x - a)^2 + f[a, a, b, b](x - a)^2(x - b)$$

so that

$$I(p_3) = (b - a)f(a) + f[a, a]\frac{(b - a)^2}{2} + f[a, a, b]\frac{(b - a)^3}{3} - f[a, a, b, b]\frac{(b - a)^4}{12} \qquad (7.29a)$$

From Sec. 2.7 on Osculatory Interpolation we find that

$$f[a, a] = f'(a)$$
$$f[a, a, b] = \{f[a, b] - f'(a)\}/(b - a)$$
$$f[a, a, b, b] = \{f'(b) - 2f[a, b] + f'(a)\}/(b - a)^2$$

Substituting into (7.29a) and simplifying, then replacing $f[a, b]$ by $(f(b) - f(a))/(b - a)$ and rearranging in powers of $(b - a)$, we arrive at the formula

$$I(f) = \frac{b - a}{2}\left[f(a) + f(b)\right] - \frac{(b - a)^2}{12}\left[f'(b) - f'(a)\right] + \frac{f^{(4)}(\eta)}{720}(b - a)^5 \qquad (7.29b)$$

which, for obvious reasons, is known as the corrected trapezoid rule.
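Each of the rules of this section is a one-liner in code. The sketch below (our own, not the book's) implements (7.23), (7.25), (7.26), (7.28), and (7.29b); Simpson's rule and the corrected trapezoid rule are both exact for cubics, consistent with their error terms involving $f^{(4)}$.

```python
import math

def rectangle(f, a, b):              # (7.23)
    return (b - a) * f(a)

def midpoint(f, a, b):               # (7.25)
    return (b - a) * f((a + b) / 2)

def trapezoid(f, a, b):              # (7.26)
    return (b - a) * (f(a) + f(b)) / 2

def simpson(f, a, b):                # (7.28)
    return (b - a) * (f(a) + 4 * f((a + b) / 2) + f(b)) / 6

def corrected_trapezoid(f, df, a, b):  # (7.29b); needs f' at the endpoints
    return ((b - a) * (f(a) + f(b)) / 2
            - (b - a)**2 * (df(b) - df(a)) / 12)

# integral of x^3 over [0, 1] is 1/4; both degree-3-exact rules recover it
print(simpson(lambda x: x**3, 0, 1))                       # 0.25
print(corrected_trapezoid(lambda x: x**3,
                          lambda x: 3 * x**2, 0, 1))       # 0.25
```

For $f(x) = e^x$ on $[0,1]$ the signs of the two error terms also show up as predicted: Simpson's rule overestimates (its error term is negative) while the corrected trapezoid rule underestimates, with roughly four times the error, matching the ratio $2880/720$.
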