Thompson - Computing for Scientists and Engineers
Suppose that y is known for a succession of x values (equally spaced by an amount h, to make things simple) up to x = x_k. We now describe two basic ways of advancing the solution at x_k to that at x_{k+1} = x_k + h. For notational simplicity we abbreviate f_k = f(x_k, y_k). We also use the abbreviation y_k = y(x_k), with a similar notation for derivatives.

For a working example we choose a function for which the solution is well known and simple, but which illustrates the various methods and their errors.
We work with

   dy/dx = f(x, y) = y                                   (7.35)

for which we already know a solution to (7.34), namely

   y(x) = e^x                                            (7.36)

in which we have chosen the boundary condition y(0) = 1, so that actual errors and relative errors will coincide when x is near zero. This choice of f is also simple to deal with in estimating errors because all derivatives of y are just equal to y itself.

The first methods used for our first-order differential equation (7.34) are the so-called Euler predictor formulas.
In the first version of this method we approximate the first derivative on the left-hand side of (7.34) by the forward-difference formula (4.25), namely

   dy/dx ≈ [y_{k+1} - y_k] / h                           (7.37)

If h is small enough we may get a good approximation to y at the next x value by inserting this in (7.34) and solving for y_{k+1} as

   y_{k+1} ≈ y_k + h f_k                                 (7.38)

This forward-difference predictor formula we call the forward Euler predictor for a first-order differential equation.

An estimate of the error in each step of this predictor is the next term in the Taylor expansion of y_{k+1}.
From (7.34) and the general Taylor series formula (3.6), this estimate is

   error in y_{k+1} ≈ (h^2/2) y''_k                      (7.39)

For the working example in which we use the exponential function (7.36), note that y(x + h) = e^{x+h}. Therefore

   e^{x_k} (e^h - 1 - h) = e^{x_k} (h^2/2! + h^3/3! + ...)   (7.40)

is the actual error, which agrees with the estimate (7.39) through h^2 terms.
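As a quick numerical check of (7.38) and (7.40), here is a minimal standalone sketch in C of a single forward Euler step for the working example. This is not the program Numerical DE_1 of Section 7.5; the name func and the other identifiers are chosen only for illustration.

#include <stdio.h>
#include <math.h>

/* Right-hand side of dy/dx = f(x,y); for the working example (7.35), f(x,y) = y */
double func(double x, double y) { return y; }

int main(void)
{
   double h = 0.05;
   double x = 0.0, y = 1.0;            /* boundary condition y(0) = 1 */

   /* one forward Euler step, (7.38): y_{k+1} = y_k + h f_k */
   double ynext = y + h * func(x, y);

   /* actual one-step error (7.40): e^x (e^h - 1 - h), close to the estimate h^2 y / 2 */
   printf("predicted y(%.2f) = %.6f   exact = %.6f   error = %.6f\n",
          x + h, ynext, exp(x + h), exp(x + h) - ynext);
   return 0;
}

With h = 0.05 the printed error is about 0.00127, in agreement with the estimate h^2/2 = 0.00125 from (7.39).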
Before turning to numerical comparisons, we consider a second Euler predictor formula. As shown in Section 4.4, the central-difference derivative is more accurate than the forward-difference derivative. We therefore expect that using central differences in (7.34) will produce a more-accurate prediction of the next y value. Try it and see.

Exercise 7.16
Show that using central differences produces the predictor formula

   y_{k+1} ≈ y_{k-1} + 2 h f_k                           (7.41)

in which the error estimate is

   error in y_{k+1} ≈ (h^3/3) y'''_k                     (7.42)

obtained from the first neglected term in the Taylor expansion. ■

Formula (7.41) we call the central Euler predictor for solving a first-order differential equation.

For our working example of a first-order equation, with f(x, y) = y, the error estimates are particularly simple, since the exact value, given our initial value of y(0) = 1, is just y(x) = e^x. You can therefore work out relative errors and compare the efficiencies of the two methods for yourself.

Exercise 7.17
(a) Consider the relative errors in the forward predictor formula (7.38), r_k^(f), and in the central predictor formula (7.41), r_k^(c), for the working example. Show that these error estimates for each step do not depend on x or y, and that for h = 0.05 they are about 1.25 × 10^{-3} and 0.042 × 10^{-3}, respectively.
(b) Estimate the efficiency of the central-difference method compared to the forward-difference method for the working example as the ratio of the step sizes h_c and h_f required for about the same errors in both methods.
Show that the relative efficiency is about √(3/(2 h_c)), and show that this is at least a factor of 5 for h_c = 0.05. ■

The above formulas advance the solution of the differential equation and provide estimates of the error incurred at each step.

Testing the Euler predictors

We now have two predictor formulas for solving first-order differential equations, formulas (7.38) and (7.41), together with error estimates for each step, (7.39) and (7.42). It is time to check these out numerically.

We take a look forward to the completed program for the first-order solutions that is given in Section 7.5, concentrating on the forward- and central-predictor methods that we have just analyzed.
We use this with the exact analytical solution y(x) = e^x for our working example, as explained above. It is probably easiest to first enter the definition, control, and input-output parts of the program Numerical DE_1 from Section 7.5, and to write only the code cases for choice = 1 and choice = 2, the forward and central predictors, respectively. By doing this, you will be coding something that you understand and you will exercise your mind rather than just exercising your fingers.

Exercise 7.18
(a) Key in all parts of the program Numerical DE_1 given in Section 7.5, except case 3 and case 4, which we consider in Section 7.5. Note that the main program refers to f in (7.34) as FUNC.
For general use, you will eventually substitute the coding for the function of interest to you. For our working example the function is y. Don't be tempted to anticipate the exact value by using exp(x) instead.
(b) Run this version of the program for an interesting range of x values.
For simplicity, xmin = 0, xmax = 4.0, and h = 0.05 are reasonable choices for getting started. As usual, there is a file (NuMDE-1) to record your output for use in a spreadsheet or graphics application.
(c) Add the sections of code that allow comparison of the numerical values just computed with analytical values for the solution of the differential equation. These values are computed by function ANALYT.
Run Numerical DE_1 with compare = y (for 'yes'), and check your output against the values shown graphically in Figure 7.5. ■

FIGURE 7.5 Euler predictor solutions of the first-order differential equations (7.34), (7.35) with boundary condition y(0) = 1.
The solid line shows the analytical solution, exp(x), the dashed line shows the percentage error in the forward-difference numerical solution, and the dotted line shows 10 times the percentage error in the central-difference solution.

Sample output from Numerical DE_1 for the Euler predictor methods (cases 1 and 2), using the parameters suggested in Exercise 7.18, is shown in Figure 7.5. The relative errors in the forward- and central-difference Euler predictors are also shown, as err(f) and err(c), respectively.

As you will notice from your output and from Figure 7.5, solution of the first-order equation by using the central-difference predictor has a much smaller accumulated relative error (a factor of 60 less at x = 4) than does the forward-difference predictor. From point to point the errors are in agreement with the estimates (7.39) and (7.42), respectively.
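If you would like to reproduce the general behavior of Figure 7.5 before keying in Numerical DE_1, the following rough standalone sketch in C advances both Euler predictors across the same range and prints the percentage errors. It is not the book's program: the names func and analyt are placeholders, and the central predictor is started here from the exact value of y(h).

#include <stdio.h>
#include <math.h>

double func(double x, double y)  { return y; }      /* f(x,y) = y, equation (7.35) */
double analyt(double x)          { return exp(x); } /* analytical solution (7.36)  */

int main(void)
{
   double h = 0.05, xmax = 4.0;
   double yf = 1.0;            /* forward Euler value, started from y(0) = 1          */
   double yc_old = 1.0;        /* central Euler needs two starting values: y(0) ...   */
   double yc = analyt(h);      /* ... and y(h), here supplied from the exact solution */
   double x;
   int    k = 1;

   for (x = h; x <= xmax + h / 2.0; x += h, k++)
   {
      /* forward Euler predictor (7.38): y_{k+1} = y_k + h f_k */
      yf = yf + h * func(x - h, yf);

      /* central Euler predictor (7.41): y_{k+1} = y_{k-1} + 2 h f_k */
      if (k > 1)
      {
         double ynew = yc_old + 2.0 * h * func(x - h, yc);
         yc_old = yc;
         yc = ynew;
      }

      printf("x = %5.2f   err(f) = %8.4f%%   err(c) = %8.5f%%\n",
             x,
             100.0 * (yf - analyt(x)) / analyt(x),
             100.0 * (yc - analyt(x)) / analyt(x));
   }
   return 0;
}

At x = 4 the forward-difference error from this sketch is several percent, while the central-difference error is smaller by roughly the factor quoted above.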
The accumulated error after many steps is generally much more difficult to estimate, because errors may cancel in different regions of x. In our working example of the monotonically increasing function exp(x), the error just accumulates linearly with the number of steps. This is clear from the linearity of the two error curves in Figure 7.5.

Adams predictor formulas

The Euler predictor formulas for numerical solution of the first-order differential equation (7.34), which we developed and applied in the two preceding subsections, rely on estimating derivatives. An alternative is to integrate (7.34) once with respect to x, then to approximate the integral of the right-hand side.

We now develop two examples of such formulas, beginning with the original first-order differential equation

   dy/dx = f(x, y)                                       (7.43)

We integrate this equation from x_k to x_{k+1}, to obtain

   y_{k+1} = y_k + ∫_{x_k}^{x_{k+1}} f(x, y) dx          (7.44)

This equation is, like many beautiful things, true but not immediately useful.
Hidden inside the integrand are all the unknown values of y in the interval between the kth and the (k+1)th values, which are just what we are trying to find. Therefore, some practical means of approximating the integral is needed.

Our first approximation of the integral is simply to use the trapezoid rule from Section 4.6, namely

   ∫_{x_k}^{x_{k+1}} f(x, y) dx ≈ (h/2) [f_k + f_{k+1}]   (7.45)

which has an error of order h^3. If this approximation to the integral is used in (7.44), then we have an implicit equation for y_{k+1}, since it then appears on both the right- and left-hand sides of (7.44).
An iterative solution of this equation may be tried. That is, a trial value of y_{k+1} is used in f to predict its value on the left-hand side. We indicate this procedure by writing

   y_{k+1}^{(n+1)} = y_k + (h/2) [f_k + f(x_{k+1}, y_{k+1}^{(n)})]   (7.46)

where n ≥ 0 is the order of the iteration. Note that iterative solution of this equation to a given limit does not guarantee a more-correct solution of the original differential equation than that provided by the trapezoid rule.
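A minimal sketch in C of a single step iterated as in (7.46) might look as follows. The forward Euler starting guess and the convergence tolerance tol are illustrative choices, not prescribed by the text, and this is not the coding used in Numerical DE_1.

#include <stdio.h>
#include <math.h>

double func(double x, double y) { return y; }   /* f(x,y) = y for the working example */

/* One step of the iteration (7.46), repeated until successive
   values of y_{k+1} agree to within tol */
double adams_trap_step(double xk, double yk, double h, double tol)
{
   double ytrial = yk + h * func(xk, yk);        /* n = 0: forward Euler guess */
   double ynext;

   for (;;)
   {
      ynext = yk + 0.5 * h * (func(xk, yk) + func(xk + h, ytrial));
      if (fabs(ynext - ytrial) < tol) break;
      ytrial = ynext;
   }
   return ynext;
}

int main(void)
{
   double y = adams_trap_step(0.0, 1.0, 0.05, 1.0e-10);
   printf("y(0.05) = %.8f   exact = %.8f\n", y, exp(0.05));
   return 0;
}

For f(x, y) = y each pass of this iteration shrinks the change in the trial value by a factor of h/2, so convergence to the tolerance is rapid for small h; the converged value then differs from e^{0.05} only by the trapezoid-rule error of order h^3.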
In Section 7.5 we refer to (7.46) as the Adams-trapezoid predictor.

If the integral in (7.44) is estimated by a formula that is more accurate, then the solution of the differential equation will be more accurate for a given step size h.

Exercise 7.19
Use Simpson's rule, (4.45), for the integral to derive the estimate

   y_{k+1}^{(n+1)} = y_{k-1} + (h/3) [f_{k-1} + 4 f_k + f(x_{k+1}, y_{k+1}^{(n)})]   (7.47)

which is to be solved by iterating on n until a desired convergence of the y value is reached. ■

The error in this formula, once iterated to convergence, is just the error in the Simpson integration rule, which is of order h^5. We refer to formula (7.47) as the Adams-Simpson predictor.

Formulas in which combinations of integration-iteration methods are used are called Adams closed formulas or Adams-Moulton formulas. We illustrate their use in the following project.
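Before that project, the same iteration pattern can be sketched for (7.47). In the rough C sketch below the forward Euler starting guess, the tolerance, and the use of the exact solution to supply the second starting value y_k are illustrative assumptions rather than the book's coding.

#include <stdio.h>
#include <math.h>

double func(double x, double y) { return y; }   /* f(x,y) = y */

/* One step of the Adams-Simpson predictor (7.47): uses y_{k-1} and y_k,
   iterating on the trial value of y_{k+1} until it converges to tol */
double adams_simpson_step(double xk, double ykm1, double yk, double h, double tol)
{
   double ytrial = yk + h * func(xk, yk);        /* initial guess from forward Euler */
   double ynext;

   for (;;)
   {
      ynext = ykm1 + (h / 3.0) *
              (func(xk - h, ykm1) + 4.0 * func(xk, yk) + func(xk + h, ytrial));
      if (fabs(ynext - ytrial) < tol) break;
      ytrial = ynext;
   }
   return ynext;
}

int main(void)
{
   double h = 0.05;
   /* start from y(0) = 1 and the exact y(h), so only the step error remains */
   double y2 = adams_simpson_step(h, 1.0, exp(h), h, 1.0e-12);
   printf("y(0.10) = %.10f   exact = %.10f   error = %.2e\n",
          y2, exp(0.10), y2 - exp(0.10));
   return 0;
}

The converged value agrees with e^{0.1} to within a few parts in 10^9, consistent with the h^5 error of the Simpson rule.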