Conte, de Boor - Elementary Numerical Analysis: An Algorithmic Approach
If the accuracy as determined by (8.63) is not sufficient, it is better to reduce the step size than to correct more than once.

In a general-purpose routine for solving differential equations, the error estimate is used in the following manner. Let us assume that we wish to keep the local error per unit step bounded as in (8.41), so that

    E1 < |local error|/h < E2

and that starting values have been provided. We proceed as follows:

1. Use (8.47) to obtain the predicted value y(0)n+1.
2. Compute f(0)n+1 = f(xn+1, y(0)n+1).
3. Use (8.59) to obtain the corrected value yn+1.
4. Compute |Dn+1| from (8.63). If E1 < |Dn+1|/h < E2, proceed to the next integration step, using the same value of h.
5. If |Dn+1|/h > E2, the step size h is too large and should be reduced. It is customary to replace h by h/2, recompute four starting values, and then return to step 1.
6. If |Dn+1|/h < E1, more accuracy is being obtained than is necessary. Hence we can save computer time by replacing h by 2h, recomputing four new starting values at intervals of length 2h, and returning to step 1.

8.9 THE ADAMS-MOULTON METHOD

In using predictor-corrector methods with variable step size as outlined above, it is necessary to (a) have a method for obtaining the necessary starting values initially; (b) have a method for obtaining the necessary values of y at half steps when the interval is halved; and (c) have a method for obtaining the necessary values of y when the interval is doubled. Special formulas can be worked out for each of these three situations. These formulas add considerably to the complexity of a program.
However, a fairly ideal combination is to use the fourth-order Runge-Kutta method (8.37) together with a fourth-order predictor-corrector pair such as (8.47) and (8.59). The Runge-Kutta method can then be used for starting the solution initially, for halving, and for doubling, while the predictor-corrector pair can be used for normal continuation when the step size is kept fixed.

Before leaving this section, it should be pointed out that there are many other predictor-corrector formulas, and in particular that the following formulas due to Milne are often used:

    yn+1 = yn-3 + (4h/3)(2fn - fn-1 + 2fn-2)        (8.64a)
    yn+1 = yn-1 + (h/3)(fn+1 + 4fn + fn-1)          (8.64b)

Equation (8.64a) was derived in Sec. 8.6, and (8.64b) is based on Simpson's rule for numerical integration.
Proceeding as in the Adams-Moulton case, we can show that a local-error estimate is provided by

    Dn+1 = -(1/29)(yn+1 - y(0)n+1)        (8.65)

The error estimate for the Milne method appears to be somewhat more favorable than for the Adams-Moulton method, but, as we shall see, (8.64b) is subject to numerical instability in some cases.

While the literature is abundant with methods for integrating differential equations, the most popular in the United States are the fourth-order Runge-Kutta method and predictor-corrector methods such as those of Adams-Moulton or Milne (8.64). Although no one method will perform uniformly better than another on all problems, it is appropriate to point out the advantages and disadvantages of each of these types for general-purpose work.

Runge-Kutta methods have the important advantage that they are self-starting.
In addition, they are stable, provide good accuracy, and, as a computer program, occupy a relatively small amount of core storage. Standard RK methods provide no estimate of the local error, so the user has no way of knowing whether the step h being used is adequate. One can, of course, use the step-size control methods described in Sec. 8.6, but this is expensive in machine time. The second major disadvantage of the fourth-order Runge-Kutta method is that it requires four function evaluations per integration step, compared with only two for the fourth-order predictor-corrector methods.
On some problems Runge-Kutta methods will therefore require almost twice as much computing time.

Predictor-corrector methods provide an automatic error estimate at each step, thus allowing the program to select an optimum value of h for a required accuracy. They are also fast, since they require only two function evaluations per step. On the other hand, predictor-corrector subroutines are complicated to write; they require special techniques for starting and for doubling and halving the step size; and they may be subject to numerical instability (see Sec. 8.11).

For many years Runge-Kutta methods were used almost exclusively in the United States for general-purpose work, but recently predictor-corrector methods have been gaining in popularity. In the past few years much more sophisticated general-purpose methods using both variable orders and variable steps have been developed.
The Adams methods described previously are the most widely used in variable-order-variable-step methods. The objective of these methods is to select automatically the order and the step size which will minimize the amount of work required to achieve a specified accuracy for a given problem. Other important advantages of these methods are that they are self-starting, since a low-order method can be used at the start, and that they can easily be adjusted to supply missing values when the step size is changed. A complete description of a subroutine called DIFSUB based on an Adams variable-order-variable-step method is given in Gear [30, pp. 158-167].
A subroutine called DVOGER, also based on Gear's method, is available in the IMSL programs and has been adapted to run on most modern computers.

Example 8.7 Solve the problem of Example 8.5 with h = 1/32, using the Adams-Moulton predictor-corrector formulas. Compare the results with those of Example 8.5.

The program and the machine results are given below. In this case we list xn; yn (the corrected value); the local-error estimate Dn; and the actual error en. On comparing these results with those of Example 8.5, we notice a decided improvement in accuracy, particularly as x approaches 1, where the results are correct to seven or eight significant figures.
The local-error estimate Dn appears to be relatively constant and in general somewhat smaller than the actual error en. On closer examination, however, we find that in steps 5 to 13 the results are correct to only six significant figures. The explanation for this is that the values of yn for these steps are an order of magnitude smaller than they are near x = 1. Since Dn is an absolute error test, it does not indicate the number of significant digits of accuracy in the result.
This is a typical situation when working in floating-point arithmetic. When working with numbers which are either very large or very small compared with 1, a better indicator of the number of significant digits of accuracy is provided by a relative test than by an absolute test. A relative error test for the Adams-Moulton formula, for instance, would be

    E1 < |Dn+1| / (h |yn+1|) < E2

in place of (8.63).

FORTRAN PROGRAM FOR EXAMPLE 8.7

C  ADAMS-MOULTON METHOD
      INTEGER I,N,NSTEPS
      REAL ERROR,F(4),H,XBEGIN,XN,YBEGIN,YN
C
      SOLN(X) = EXP(X) - 1. - X
C
C ** INITIALIZE
      PRINT 600
  600 FORMAT('1ADAMS-MOULTON METHOD'/
     * '0',3X,'N',14X,'XN',15X,'YN',9X,'DN = YN - YNP',8X,'ERROR'/)
      NSTEPS = 32
      H = 1./NSTEPS
      YBEGIN = 0.
      XBEGIN = 0.
C
C ** COMPUTE FIRST FOUR POINTS USING EXACT SOLUTION
      F(1) = XBEGIN + YBEGIN
      N = 0
      ERROR = 0.
      DIFF = 0.
      PRINT 601,N,XBEGIN,YBEGIN,DIFF,ERROR
  601 FORMAT(' ',I3,4X,4E17.8)
      DO 20 N=1,3
         XN = XBEGIN + N*H
         YN = SOLN(XN)
         F(N+1) = XN + YN
         PRINT 601, N,XN,YN,DIFF,ERROR
   20 CONTINUE
C
C ** BEGIN ITERATION
      DO 50 N=4,NSTEPS
C        PREDICT USING ADAMS-BASHFORTH FORMULA
         YNPRED = YN + (H/24.)*(55.*F(4)-59.*F(3)+37.*F(2)-9.*F(1))
         XN = XBEGIN + N*H
         FNPRED = XN + YNPRED
C        CORRECT USING ADAMS-MOULTON FORMULA
         YN = YN + (H/24.)*(9.*FNPRED + 19.*F(4) - 5.*F(3) + F(2))
         DIFF = (YN - YNPRED)/14.
         F(1) = F(2)
         F(2) = F(3)
         F(3) = F(4)
         F(4) = XN + YN
         YOFXN = SOLN(XN)
         ERROR = YN - YOFXN
         PRINT 601, N,XN,YN,DIFF,ERROR
   50 CONTINUE
      STOP
      END

COMPUTER RESULTS FOR EXAMPLE 8.7

  N        XN                YN                DN                ERROR
  0   0.                0.                0.                0.
  1   0.31250000E-01    0.49340725E-03    0.                0.
  2   0.62500000E-01    0.19944459E-02    0.                0.
  3   0.93750000E-01    0.45351386E-02    0.                0.
  4   0.12500000E-00    0.81484520E-02    0.78164571E-09    0.53551048E-08
  5   0.15625000E-00    0.12868445E-01    0.90637643E-09    0.11408702E-07
  6   0.18750000E-00    0.18730249E-01    0.88143028E-09    0.11175871E-07
  7   0.21875000E-00    0.25770108E-01    0.91469178E-09    0.10011718E-07
  8   0.25000000E-00    0.34025417E-01    0.93132257E-09    0.13969839E-08
  9   0.28125000E-00    0.43534759E-01    0.96458407E-09    0.37252903E-08
 10   0.31250000E-00    0.54337924E-01    0.99784564E-09    0.83819032E-08
 11   0.34375000E-00    0.66476036E-01    0.99784564E-09    0.37252903E-08
 12   0.37500000E-00    0.79991416E-01    0.10643686E-08    0.15832484E-07
 13   0.40625000E-00    0.94927801E-01    0.11308917E-08    0.13969839E-07
 14   0.43750000E-00    0.11133030E-00    0.11308917E-08    0.14901161E-07
 15   0.46875000E-00    0.12924545E-00    0.10643686E-08    0.55879354E-08
 16   0.50000000E 00    0.14872127E-00    0.11974147E-08    0.74505806E-08
 17   0.53125000E 00    0.16980730E-00    0.13304609E-08    0.18626451E-08
 18   0.56250000E 00    0.19255465E-00    0.13304609E-08    0.37252903E-08
 19   0.59375000E 00    0.21701607E-00    0.13304609E-08    0.
 20   0.62500000E 00    0.24324595E-00    0.13304609E-08    0.11175871E-07
 21   0.65625000E 00    0.27130044E-00    0.13304609E-08    0.11175871E-07
 22   0.68750000E 00    0.30123746E-00    0.13304609E-08    0.
 23   0.71875000E 00    0.33311676E-00    0.13304609E-08   -0.37252903E-08
 24   0.75000000E 00    0.36700001E-00    0.15965530E-08   -0.74505806E-08
 25   0.78125000E 00    0.40295079E-00    0.15965530E-08    0.37252903E-08
 26   0.81250000E 00    0.44103477E-00    0.15965530E-08    0.74505806E-08
 27   0.84375000E 00    0.48131964E-00    0.18626451E-08    0.74505806E-08
 28   0.87500000E 00    0.52387527E 00    0.15965530E-08    0.74505806E-08
 29   0.90625000E 00    0.56877375E 00    0.21287372E-08    0.
 30   0.93750000E 00    0.61608943E 00    0.21287372E-08    0.
 31   0.96875000E 00    0.66589906E 00    0.21287372E-08   -0.74505806E-08
 32   0.99999999E 00    0.71828180E 00    0.21287372E-08   -0.74505806E-08

EXERCISES

8.9-1 Show that the iteration defined by

    y(k)n+1 = yn + (h/24)[9 f(xn+1, y(k-1)n+1) + 19 fn - 5 fn-1 + fn-2],    k = 1, 2, . . . , xn fixed

will converge, provided that

    (9h/24) |∂f/∂y| < 1

(see Sec. 8.8).

8.9-2 Derive the error (8.60) for the Adams-Moulton method, using (8.57) and (8.58).

8.9-3 Derive the local-error estimate (8.65) for the Milne predictor-corrector formulas (8.64).

8.9-4 Solve the equation y' = y + x^2, y(0) = 1, from x = 0 to x = 2 with h = 0.1, using the Adams-Moulton predictor-corrector formulas. The starting values, correct to six decimal places, are

    y(0)   = 1.000000
    y(0.1) = 1.105513
    y(0.2) = 1.224208
    y(0.3) = 1.359576

Compute Dn+1, and estimate the error at x = 2.

*8.10 STABILITY OF NUMERICAL METHODS

When computers first became widely used for solving differential equations, it was observed that some of the commonly used integration formulas, such as Milne's formulas (8.64), led to errors in the solution much larger than would be expected from the discretization error alone.
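The effect is easy to reproduce. For the test equation y' = λy the Milne corrector (8.64b) can be solved for yn+1 in closed form, giving a three-term recurrence. The following Python sketch (the function name and the parameter values λ = -10, h = 0.1 are this example's choices, not the book's) starts from exact values and iterates that recurrence:

```python
from math import exp

def milne_on_test_equation(lam, h, nsteps):
    # Milne corrector applied to y' = lam*y:
    #   y[n+1] = y[n-1] + (h/3)*lam*(y[n+1] + 4*y[n] + y[n-1])
    # solved explicitly for y[n+1]
    ys = [1.0, exp(lam*h)]          # exact starting values y(0), y(h)
    for _ in range(nsteps - 1):
        y_next = ((1 + lam*h/3)*ys[-2] + (4*lam*h/3)*ys[-1]) / (1 - lam*h/3)
        ys.append(y_next)
    return ys
```

With λ = -10 and h = 0.1 the true solution e^(λx) decays rapidly, but one root of the difference equation has magnitude greater than 1, so the computed values soon oscillate in sign and grow without bound. This parasitic solution is the instability examined in the remainder of this section.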