Integration, on the other hand, is a smoothing process and is inherently stable in this respect. The contrast between differentiation and integration should not be surprising, since they are inverse processes to each other. The difference between them is illustrated in Fig. 8.4, which shows two functions that have very similar definite integrals but very different derivatives.

Figure 8.4: Two functions whose integrals are similar but whose derivatives are not.

When approximating the derivative of a function whose values are known only at a discrete set of points, a good approach is to fit some smooth function to the given discrete data and then differentiate the approximating function to approximate the derivatives of the original function.
If the given data are sufficiently smooth, then interpolation may be appropriate; but if the given data are noisy, then a smoothing approximating function, such as a least squares spline, is more appropriate.

8.7.1 Finite Difference Approximations

Although finite difference formulas are generally inappropriate for discrete or noisy data, they are very useful for approximating derivatives of a smooth function that is known analytically or can be evaluated accurately for any given argument.
We now develop some finite difference formulas that will be useful in our study of the numerical solution of differential equations.

Given a smooth function f : R → R, we wish to approximate its first and second derivatives at a point x. Consider the Taylor series expansions

\[ f(x + h) = f(x) + f'(x)h + \frac{f''(x)}{2}h^2 + \frac{f'''(x)}{6}h^3 + \cdots \]

and

\[ f(x - h) = f(x) - f'(x)h + \frac{f''(x)}{2}h^2 - \frac{f'''(x)}{6}h^3 + \cdots. \]

Solving for f'(x) in the first series, we obtain the forward difference formula

\[ f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{f''(x)}{2}h + \cdots \approx \frac{f(x+h) - f(x)}{h}, \]

which gives an approximation that is first-order accurate since the dominant term in the remainder of the series is O(h).
Similarly, from the second series we derive the backward difference formula

\[ f'(x) = \frac{f(x) - f(x-h)}{h} + \frac{f''(x)}{2}h + \cdots \approx \frac{f(x) - f(x-h)}{h}, \]

which is also first-order accurate. Subtracting the second series from the first gives the centered difference formula

\[ f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{f'''(x)}{6}h^2 + \cdots \approx \frac{f(x+h) - f(x-h)}{2h}, \]

which is second-order accurate. Finally, adding the two series together gives a centered difference formula for the second derivative

\[ f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} - \frac{f^{iv}(x)}{12}h^2 + \cdots \approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}, \]
which is also second-order accurate. By using function values at additional points, x ± 2h, x ± 3h, . . . , we can derive similar finite difference approximations with still higher accuracy or for higher-order derivatives.

Note that higher-accuracy difference formulas require more function values. Whether these translate into higher overall cost depends on the particular situation, since a more accurate formula may permit the use of a larger stepsize and correspondingly fewer steps. In choosing a value for h, rounding error must also be considered in addition to the truncation error given by the series expansion (see Example 1.11).
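To see these orders of accuracy in action, here is a short Python sketch (ours, not the book's; the helper names are illustrative). It applies the forward and centered formulas to f(x) = sin(x) at x = 1, whose true derivative is cos(1), over a range of stepsizes:

```python
import math

def forward_diff(f, x, h):
    # first-order accurate: truncation error is O(h)
    return (f(x + h) - f(x)) / h

def centered_diff(f, x, h):
    # second-order accurate: truncation error is O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

def second_diff(f, x, h):
    # centered formula for the second derivative, also O(h^2)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x = 1.0
for h in (1e-1, 1e-2, 1e-4, 1e-8, 1e-12):
    err_fwd = abs(forward_diff(math.sin, x, h) - math.cos(x))
    err_ctr = abs(centered_diff(math.sin, x, h) - math.cos(x))
    print(f"h = {h:.0e}: forward error {err_fwd:.1e}, centered error {err_ctr:.1e}")

# second derivative of sin at x = 1 is -sin(1)
print(abs(second_diff(math.sin, x, 1e-4) + math.sin(x)))
```

For moderate h the centered error decreases like h^2 and the forward error only like h, but for very small h both errors grow again as rounding error in the difference of nearly equal function values dominates, which is precisely the tradeoff in choosing h noted above.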
8.7.2 Automatic Differentiation

A number of alternatives are available for computing derivatives of a function, including finite difference approximations and closed-form evaluation using formulas determined either by hand or by a computer algebra package. Each of these methods has significant drawbacks, however: manual differentiation is tedious and error-prone; symbolic derivatives tend to be unwieldy for complicated functions; and finite difference approximations require the sometimes delicate choice of a stepsize, and their accuracy is limited by discretization error.

Another alternative, at least for any function expressed by a computer program, is automatic differentiation, often abbreviated as AD. The basic idea of AD is simple: a computer program consists of basic arithmetic operations and elementary functions, each of whose derivatives is easily computed. Thus, the function computed by the program is, in effect, a composite of many simple functions whose derivatives can be propagated through the program by repeated use of the chain rule, effectively computing the derivative of the function step by step along with the function itself.
The result is the true derivative of the original function, subject only to rounding error but suffering no discretization error.

Though AD is conceptually simple, its practical implementation is more complicated, requiring careful analysis of the input program and clever strategies for reducing the potentially explosive complexity of the resulting derivative code. Fortunately, most of these practical impediments have been successfully overcome, and a number of effective software packages are now available for automatic differentiation.
Some of these packages accept a Fortran or C input code and then output a second code for computing the desired derivatives, whereas other packages use operator overloading to perform derivative computations automatically in addition to the function evaluation. When applicable, AD can be much easier, more efficient, and more accurate than other methods for computing derivatives. AD can also be useful for determining the sensitivity of the output of a program to perturbations in its input parameters. Such information might otherwise be obtainable only through many repeated runs of the program, which could be prohibitively expensive for large, complex programs.
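As an illustration of the operator-overloading approach just mentioned, here is a minimal forward-mode AD sketch in Python using dual numbers (this example, including the Dual class, is ours, not the book's): each value carries its derivative along with it, and every overloaded operation applies the chain rule.

```python
import math

class Dual:
    """A value paired with its derivative; each operation applies the chain rule."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __radd__ = __add__
    __rmul__ = __mul__

def sin(u):
    # chain rule for an elementary function: (sin u)' = cos(u) * u'
    return Dual(math.sin(u.val), math.cos(u.val) * u.der)

# Differentiate f(x) = x*sin(x) + 2 at x = 1 by seeding dx/dx = 1.
x = Dual(1.0, 1.0)
y = x * sin(x) + 2
print(y.val, y.der)   # y.der = sin(1) + cos(1), exact up to rounding
```

The derivative emerges alongside the function value with no stepsize to choose, exactly as described above; production AD packages extend this same idea to full programs.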
8.8 Richardson Extrapolation

In many problems, such as numerical integration or differentiation, we compute an approximate value for some quantity based on some stepsize. Ideally, we would like to obtain the limiting value as the stepsize approaches zero, but we cannot take the stepsize to be arbitrarily small because of excessive cost or rounding error. Based on values for nonzero stepsizes, however, we may be able to estimate what the value would be for a stepsize of zero.

Let F(h) denote the value obtained with stepsize h.
If we compute the value of F for some nonzero stepsizes, and if we know the theoretical behavior of F(h) as h → 0, then we can extrapolate from the known values to obtain an approximate value for F(0). This extrapolated value should have a higher-order accuracy than the values on which it is based. We emphasize, however, that the extrapolated value, though an improvement, is still only an approximation, not the exact solution, and its accuracy is still limited by the stepsize and arithmetic precision used.

To be more specific, suppose that

\[ F(h) = a_0 + a_1 h^p + O(h^r) \]

as h → 0 for some p and r, with r > p.
We assume that we know the values of p and r, but not a_0 or a_1. Indeed, F(0) = a_0 is the quantity we seek. Suppose that we have computed F for two stepsizes, say, h and qh for some q > 1. Then we have

\[ F(h) = a_0 + a_1 h^p + O(h^r) \]

and

\[ F(qh) = a_0 + a_1 (qh)^p + O(h^r) = a_0 + a_1 q^p h^p + O(h^r). \]

This system of two linear equations in the two unknowns a_0 and a_1 is easily solved: multiplying the first equation by q^p, subtracting the second to eliminate a_1, and dividing through by q^p − 1 gives

\[ a_0 = F(h) + \frac{F(h) - F(qh)}{q^p - 1} + O(h^r). \]

Thus, the accuracy of the improved value, a_0, is O(h^r) rather than only O(h^p).

If F(h) is known for several values of h, then the extrapolation process can be repeated to produce still more accurate approximations, up to the limitations imposed by finite precision arithmetic. For example, if we have computed F for the values h, 2h, and 4h, then the extrapolated value based on h and 2h can be combined with the extrapolated value based on 2h and 4h in a further extrapolation to produce a still more accurate estimate for F(0).
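The one-step formula and the repeated extrapolation just described translate directly into code. The following Python sketch is ours (the names richardson and richardson2 are illustrative, not from the text); it assumes an error expansion F(h) = a_0 + a_1 h^p + a_2 h^r + ··· with known exponents p and r:

```python
def richardson(f_h, f_qh, q, p):
    # one step: eliminate the O(h^p) term from F(h) = a0 + a1*h^p + ...
    return f_h + (f_h - f_qh) / (q**p - 1)

def richardson2(f_h, f_2h, f_4h, p, r):
    # repeated extrapolation with stepsizes h, 2h, 4h (q = 2):
    first = richardson(f_h, f_2h, 2, p)     # error now O(h^r)
    second = richardson(f_2h, f_4h, 2, p)   # error now O((2h)^r)
    return richardson(first, second, 2, r)  # eliminate the O(h^r) term too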
Example 8.7 Richardson Extrapolation. To illustrate Richardson extrapolation, we use it to improve the accuracy of a finite difference approximation to the derivative of the function sin(x) at the point x = 1. Using the first-order accurate, forward difference formula derived in Section 8.7.1, we have for this problem

\[ F(h) = a_0 + a_1 h + O(h^2), \]

which means that p = 1 and r = 2 in this case. Using stepsizes of h = 0.25 and 2h = 0.5 (i.e., q = 2), we get

\[ F(h) = \frac{\sin(1.25) - \sin(1)}{0.25} = 0.430055 \]

and

\[ F(2h) = \frac{\sin(1.5) - \sin(1)}{0.5} = 0.312048. \]

The extrapolated value is then given by

\[ F(0) = a_0 = F(h) + \frac{F(h) - F(2h)}{2 - 1} = 2F(h) - F(2h) = 0.548061. \]

For comparison, the correctly rounded result is given by cos(1) = 0.540302. In this example the extrapolation is linear, as can be seen on the left in Fig.
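As a check on the arithmetic, a few lines of Python (ours, not the book's) reproduce the numbers in this example:

```python
import math

def F(h):
    # forward difference approximation to (d/dx) sin(x) at x = 1
    return (math.sin(1 + h) - math.sin(1)) / h

h = 0.25
f_h, f_2h = F(h), F(2 * h)
a0 = f_h + (f_h - f_2h) / (2**1 - 1)   # q = 2, p = 1, so a0 = 2*F(h) - F(2h)
print(f"F(h)  = {f_h:.6f}")            # 0.430055
print(f"F(2h) = {f_2h:.6f}")           # 0.312048
print(f"F(0) ~= {a0:.6f}")             # 0.548061, versus cos(1) = 0.540302
```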