Conte, de Boor - Elementary Numerical Analysis: An Algorithmic Approach
In this case we will have absolute stability, since errors will not be magnified because of the extraneous solution. If, however, |hλ| > 1, then one of these roots will be greater than one in magnitude and we will encounter some instability. The condition that -1 < hλ < 0 effectively restricts the step size h that can be used for this method. For example, if λ = -100, then we must choose h < 0.01 to assure stability. A multistep method is said to be absolutely stable for those values of hλ for which the roots of its stability polynomial (8.74) are less than one in magnitude.
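The stability interval of a multistep method can be checked numerically by computing the roots of its stability polynomial. As a sketch (not tied to the text's numbered equations), the code below assumes the two-step Adams-Bashforth method yn+1 = yn + (h/2)(3fn - fn-1) applied to y´ = λy, whose stability polynomial is r² - (1 + 3h̄/2)r + h̄/2 = 0 with h̄ = hλ; its interval of absolute stability is exactly -1 < hλ < 0, matching the discussion above:

```python
import numpy as np

def ab2_roots(hbar):
    """Roots of the stability polynomial of the two-step
    Adams-Bashforth method applied to y' = lambda*y, hbar = h*lambda."""
    # r**2 - (1 + 1.5*hbar)*r + 0.5*hbar = 0
    return np.roots([1.0, -(1.0 + 1.5 * hbar), 0.5 * hbar])

def absolutely_stable(hbar):
    """True when every root lies strictly inside the unit circle."""
    return np.max(np.abs(ab2_roots(hbar))) < 1.0

# Inside -1 < hbar < 0 both roots have magnitude < 1;
# beyond hbar = -1 the extraneous root escapes the unit circle.
print(absolutely_stable(-0.5))   # stable
print(absolutely_stable(-1.2))   # unstable
```

For small |h̄| the larger (principal) root tracks e^{h̄}, while the smaller (extraneous) root stays near zero, which is the behavior the text describes.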
Different methods have different regions of absolute stability. Generally we prefer those methods which have the largest region of absolute stability. It can be shown, for example, that the Adams-Moulton implicit methods have regions of stability that are more than 10 times larger than those for the Adams-Bashforth methods of the same order. In particular, the second-order Adams-Moulton method given by

yn+1 = yn + (h/2)(fn + fn+1)

is absolutely stable for -∞ < hλ < 0 for the test equation y´ = λy with λ < 0. For equations of the form y´ = λy where λ > 0, the required solution will be growing exponentially like eλx.
Any multistep method will have to have one root, the principal root, which approximates the required solution. All other extraneous roots will then have to be less in magnitude than this principal root. A method which has the property that all extraneous roots of the stability polynomial are less than the principal root in magnitude is said to be relatively stable.
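The claim that the second-order Adams-Moulton (trapezoidal) formula is absolutely stable for every hλ < 0 is easy to verify directly: applied to y´ = λy it is a one-step method with the single root r = (1 + hλ/2)/(1 - hλ/2), so there is no extraneous root at all, and |r| < 1 for every negative hλ. A minimal check:

```python
def trapezoidal_root(hbar):
    """Root of the trapezoidal (second-order Adams-Moulton) formula
    applied to y' = lambda*y, where hbar = h*lambda."""
    return (1.0 + hbar / 2.0) / (1.0 - hbar / 2.0)

# The magnitude stays below one for every hbar < 0, however large |hbar| is.
for hbar in [-0.1, -1.0, -100.0, -1e6]:
    r = trapezoidal_root(hbar)
    print(f"hbar = {hbar:>10}: |r| = {abs(r):.6f}")
```

For small |hλ| the root also agrees with e^{hλ} to second order, which is why the formula both tracks the true solution and damps errors.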
Stability regions for different multistep methods are discussed extensively in Gear [30].

EXERCISES

8.10-1 Show that the corrector formula based on the trapezoidal rule (8.52) is stable for equations of the form y´ = λy (see Exercise 8.8-1).

8.10-2 Show that the roots of the characteristic equation (8.76) can be expressed in the form (8.77), and that the solution of the difference equation (8.75) approaches (8.78) as h → 0.

8.10-3 Write a computer program to find the roots of the characteristic equation (8.73) for the Adams-Bashforth formula.
Take λ = -1 and h = 0(0.1). Determine an approximate value of h beyond which one or more roots of this equation will be greater than one in magnitude. Thus establish an upper bound on h, beyond which the Adams-Bashforth method will be unstable.

8.10-4 Solve Eq. (8.67) by Milne's method (8.64) from x = 0 to x = 6 with h = 1/2. Take the starting values from Table 8.1.
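One way to approach Exercise 8.10-3 numerically is sketched below. Since Eq. (8.73) is not reproduced in this excerpt, the sketch assumes the fourth-order Adams-Bashforth formula yn+1 = yn + (h/24)(55fn - 59fn-1 + 37fn-2 - 9fn-3); with f = λy its characteristic equation is r⁴ - (1 + 55h̄/24)r³ + (59h̄/24)r² - (37h̄/24)r + (9h̄/24) = 0, h̄ = hλ. The program scans h and reports when a root leaves the unit circle:

```python
import numpy as np

def ab4_root_radius(hbar):
    """Largest root magnitude of the characteristic polynomial of the
    fourth-order Adams-Bashforth method applied to y' = lambda*y."""
    c = hbar / 24.0
    coeffs = [1.0, -(1.0 + 55.0 * c), 59.0 * c, -37.0 * c, 9.0 * c]
    return np.max(np.abs(np.roots(coeffs)))

lam = -1.0
for h in np.arange(0.0, 1.01, 0.1):
    radius = ab4_root_radius(h * lam)
    print(f"h = {h:.1f}  max |root| = {radius:.4f}",
          "(unstable)" if radius > 1.0 else "")
```

Refining the scan around the first unstable h gives an approximate upper bound on the usable step size for this formula with λ = -1.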
Note the effect of instability on the solution.

*8.11 ROUND-OFF-ERROR PROPAGATION AND CONTROL

In Sec. 8.4 we defined the discretization error en as

en = y(xn) - yn

where y(xn) is the true solution of the differential equation, and yn is the exact solution of the difference equation which approximates the differential equation. In practice, because computers deal with finite word lengths, we will obtain a value ỹn which will differ from yn because of round-off errors.
We shall denote by

rn = yn - ỹn

the accumulated round-off error, i.e., the difference between the exact solution of the difference equation and the value produced by the computer at x = xn. At each step of an integration, a round-off error will be produced which we call the local round-off error and which we denote by εn. In Euler's method, for example, εn is defined by

ỹn+1 = ỹn + hf(xn, ỹn) + εn+1

The accumulated round-off error is not simply the sum of the local round-off errors, because each local error is propagated and may either grow or decay as the computation proceeds. In general, the subject of round-off-error propagation is poorly understood, and very few theoretical results are available.
The accumulated roundoff depends upon many factors, including (1) the kind of arithmetic used in the computer, i.e., fixed point or floating point; (2) the way in which the machine rounds; (3) the order in which the arithmetic operations are performed; (4) the numerical procedure being used.

As shown in Sec. 8.10, where numerical instability was considered, the effect of round-off propagation can be disastrous.
Even with stable methods, however, there will be some inevitable loss of accuracy due to rounding errors. This was illustrated in Chap. 7, where the trapezoidal rule was used to evaluate an integral. Over an extended interval the loss of accuracy may be so serious as to invalidate the results completely. It is possible to obtain estimates of the accumulated rounding error by making some statistical assumptions about the distribution of local round-off errors. These possibilities will not be pursued here.
We wish to consider here a simple but effective procedure for reducing the loss of accuracy due to round-off errors when solving differential equations. Most of the formulas discussed in this chapter for solving differential equations can be written in the form

yn+1 = yn + h∆yn

where h∆yn represents an increment involving combinations of f(x, y) at selected points. The increment is usually small compared with yn itself.
In forming the sum yn + h∆yn in floating-point arithmetic, the computer will therefore shift h∆yn to the right until the exponent of h∆yn agrees with that of yn, dropping bits at the right end as it does so. The addition is then performed, but because of the bits which were dropped, there will be a rounding error. To see this more clearly, let us attempt to add the two floating-point numbers (0.5472)(10^4) and (0.3856)(10^2), assuming a word length of four decimal places.
If we shift the second number two places to the right, drop the last two digits, and add to the first number, we will obtain (0.5510)(10^4), whereas with proper rounding the result should be (0.5511)(10^4). This is, of course, an exaggerated example, since the computer will be working with binary bits and longer word lengths, but even then the cumulative effects can be serious.

We shall now describe a simple procedure which will significantly reduce errors of this type. First, each computed value of yn is stored in double-precision form; next h∆yn is computed in single precision, and only the single-precision part of any value of yn needed in forming h∆yn is used; the sum yn + h∆yn is formed in double precision; and yn+1 = yn + h∆yn is stored in double precision. This procedure may be called partial double-precision accumulation.
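The procedure can be imitated in a few lines by treating float32 as the machine's single precision and float64 as its double precision. The sketch below is illustrative only (it is not the book's program, and it uses Euler's method on y´ = -2y simply because the increment is easy to form): yn is accumulated in double precision while the increment h∆yn is computed entirely in single precision.

```python
import numpy as np

def euler_single(h, steps):
    """Everything carried in single precision (float32)."""
    y = np.float32(1.0)
    h32 = np.float32(h)
    for _ in range(steps):
        y = np.float32(y + h32 * np.float32(-2.0) * y)
    return float(y)

def euler_partial_double(h, steps):
    """Partial double-precision accumulation: y is kept in float64, but
    the increment h*f is computed from the single-precision part of y."""
    y = 1.0                                  # float64 accumulator
    h32 = np.float32(h)
    for _ in range(steps):
        y32 = np.float32(y)                  # single-precision part of y
        incr = h32 * np.float32(-2.0) * y32  # float32 increment
        y = y + float(incr)                  # one double-precision sum per step
    return y

def euler_double(h, steps):
    """Reference: the difference equation solved entirely in float64."""
    y = 1.0
    for _ in range(steps):
        y = y + h * (-2.0) * y
    return y

h, steps = 1.0 / 256.0, 512                  # integrate y' = -2y from 0 to 2
ref = euler_double(h, steps)
err_single = abs(euler_single(h, steps) - ref)
err_partial = abs(euler_partial_double(h, steps) - ref)
print(err_single, err_partial)
```

Because the only single-precision rounding in the partial scheme occurs in the small increment rather than in y itself, its accumulated roundoff is typically orders of magnitude smaller than the all-single-precision run, which is exactly the effect Example 8.8 below exhibits.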
On some computers double-precision arithmetic is available as an instruction, but even when it is not, only one double-precision sum must be formed per integration step. The major part of the computation is determining h∆yn, and this is performed in single precision. The extra amount of work as well as the extra storage is quite minor. On the other hand, the possible gain in accuracy can be very significant, especially when great accuracy over an extended interval is required. Indeed, this procedure is so effective in reducing round-off-error accumulation that no general-purpose library routine for solving differential equations should ever be written which does not provide for some form of partial double-precision accumulation.

A final word of caution is in order at this point.
The accuracy of a numerical integration will depend upon the discretization error and the accumulated rounding error. To keep the discretization error small, we will normally choose the step size h small. On the other hand, the smaller h is taken, the more integration steps we shall have to perform, and the greater the rounding error is likely to be. There is, therefore, an optimum value of the step size h which for a given machine and a given problem will result in the best accuracy. This optimum is in practice very difficult to find without the use of extensive amounts of computer time.
The existence of such an optimum does show, however, that there is some danger in taking too small a step size.

Example 8.8 Solve the equation

y´ = y²    y(1) = -1

from x = 1 to x = 3, using the Adams-Bashforth method, with and without partial double-precision accumulation, for h = 1/256. The machine results are given below. The step size is purposely chosen small enough so that the discretization error is negligible.
The results are printed every 16 steps. The exact solution of this problem is y = -1/x. The accuracy can therefore be easily checked. At x = 3 the partial double-precision results are correct to three units in the eighth decimal place; the single-precision results are correct to 253 units in the eighth decimal place. Since all this error is due to roundoff, this example clearly demonstrates the effectiveness of partial double precision in reducing round-off-error accumulation.

COMPUTER RESULTS FOR EXAMPLE 8.8

      X            SINGLE         PARTIAL DOUBLE
                  PRECISION         PRECISION
  0.99999999    -0.99999999       -0.99999999
  1.06250000    -0.94117642       -0.94117647
  1.12500000    -0.88888878       -0.88888889
  1.18750000    -0.84210509       -0.84210526
  1.24999990    -0.79999977       -0.80000000
  1.31249990    -0.76190444       -0.76190476
  1.37500000    -0.72727232       -0.72727273
  1.43750000    -0.69565168       -0.69565218
  1.50000000    -0.66666608       -0.66666667
  1.56249990    -0.63999934       -0.64000001
  1.62499990    -0.61538386       -0.61538462
  1.68750000    -0.59259175       -0.59259260
  1.75000000    -0.57142763       -0.57142858
  1.81250000    -0.55172310       -0.55172415
  1.87499990    -0.53333220       -0.53333335
  1.93749990    -0.51612781       -0.51612905
  2.00000000    -0.49999869       -0.50000001
  2.06250000    -0.48484711       -0.48484850
  2.12500000    -0.47058678       -0.47058825
  2.18749990    -0.45714134       -0.45714287
  2.24999990    -0.44444284       -0.44444446
  2.31250000    -0.43243076       -0.43243245
  2.37500000    -0.42105088       -0.42105265
  2.43750000    -0.41025458       -0.41025643
  2.49999990    -0.39999810       -0.40000002
  2.56249990    -0.39024193       -0.39024393
  2.62500000    -0.38095033       -0.38095240
  2.68750000    -0.37209089       -0.37209304
  2.75000000    -0.36363416       -0.36363639
  2.81249990    -0.35555328       -0.35555558
  2.87499990    -0.34782372       -0.34782612
  2.93750000    -0.34042308       -0.34042556
  3.00000000    -0.33333080       -0.33333336

EXERCISES

8.11-1 Write a program based on the Adams-Bashforth method which uses both single-precision and partial-double-precision accumulation.

8.11-2 Use the program of Exercise 8.11-1 to solve the equation

y´ = -2y    y(0) = 1

from x = 0 to x = 2 using a fixed step size h = 0.01.
The starting values can be obtained from the exact solution y = e^-2x. What is the error due to roundoff?

8.11-3 Write a program for the classical fourth-order Runge-Kutta method which uses both single-precision and double-precision accumulation. Use it to solve the equation of Exercise 8.11-2 with the same value of h.

*8.12 SYSTEMS OF DIFFERENTIAL EQUATIONS

Most general-purpose differential-equation subroutines assume that an Nth-order differential equation has been expressed as a system of N first-order equations. For an Nth-order equation given in the form

y(N) = f(x, y(x), y´(x), . . . , y(N-1)(x))
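As a sketch of this reduction (in Python rather than the book's routines), the second-order equation y´´ = -y with y(0) = 1, y´(0) = 0 becomes the first-order system u1´ = u2, u2´ = -u1 with u = (y, y´), which a classical fourth-order Runge-Kutta step then integrates componentwise:

```python
import numpy as np

def f(x, u):
    """y'' = -y rewritten as the first-order system
    u1' = u2, u2' = -u1, with u = (y, y')."""
    return np.array([u[1], -u[0]])

def rk4_step(f, x, u, h):
    """One classical fourth-order Runge-Kutta step applied to a system;
    the scalar formulas carry over unchanged to vectors u."""
    k1 = f(x, u)
    k2 = f(x + h / 2, u + h / 2 * k1)
    k3 = f(x + h / 2, u + h / 2 * k2)
    k4 = f(x + h, u + h * k3)
    return u + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

h = 0.01
u = np.array([1.0, 0.0])                 # y(0) = 1, y'(0) = 0
x = 0.0
for _ in range(int(round(np.pi / h))):   # integrate out to x near pi
    u = rk4_step(f, x, u, h)
    x += h
print(u[0])   # close to cos(pi) = -1
```

The exact solution is y = cos x, y´ = -sin x, so both components of u can be checked against known values; the same pattern extends to any Nth-order equation by carrying u = (y, y´, …, y^(N-1)).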