Implement it in a code for a single equation. Test it on some of the problems of this section and compare it to Rke.

6.7 OTHER NUMERICAL METHODS

The explicit Runge-Kutta methods discussed in this chapter have no memory of what has happened prior to the current point of the integration. Other methods take advantage of previously computed solution values. The Adams methods furnish a very important example that is easily understood.
On reaching $x_n$ with the approximate solution $y_n \approx y(x_n)$, there are (usually) available values $y_{n+1-i} \approx y(x_{n+1-i})$ for $i = 2, 3, \ldots, k$. From the differential equation $y' = f(x, y)$, approximations to the derivatives $y'(x_{n+1-i})$ can be obtained as

$$f_{n+1-i} = f(x_{n+1-i}, y_{n+1-i}) \approx y'(x_{n+1-i}).$$

Knowledge of solution values prior to the current point $x_n$ is exploited by means of the integrated form of the differential equation:

$$y(x_{n+1}) = y(x_n) + \int_{x_n}^{x_{n+1}} y'(t)\, dt.$$

This is done with ideas used throughout this book: interpolate $y'(t)$ by a polynomial and approximate the integral by integrating the polynomial.
Let $P(t)$ be the polynomial that interpolates $f_{n+1-i}$ at $x_{n+1-i}$ for $i = 1, 2, \ldots, k$. A numerical approximation $y_{n+1}$ to the exact solution $y(x_{n+1})$ is then defined by

$$y_{n+1} = y_n + \int_{x_n}^{x_{n+1}} P(t)\, dt.$$

This is called the Adams-Bashforth formula of order $k$. When $P(t)$ is written in Lagrange form, this formula becomes

$$y_{n+1} = y_n + \sum_{i=1}^{k} f_{n+1-i} \int_{x_n}^{x_{n+1}} L_{k,i}(t)\, dt,$$

where $L_{k,i}(t)$ is the Lagrange interpolating coefficient (Chapter 3) associated with the node $x_{n+1-i}$. In terms of the current step size $h_n = x_{n+1} - x_n$ and the coefficients

$$\alpha_{k,i} = \frac{1}{h_n} \int_{x_n}^{x_{n+1}} L_{k,i}(t)\, dt,$$

this is

$$y_{n+1} = y_n + h_n \sum_{i=1}^{k} \alpha_{k,i}\, f_{n+1-i}.$$

The integration coefficients $\alpha_{k,i}$ depend on the spacing of the mesh points $x_{n+1}, x_n, \ldots$ and in general must be computed at each step. It is easy to verify that they depend only on the relative spacing, so when the step size is a constant $h$, they can be computed once and for all. Using the theory of interpolation developed in Chapter 3, it is not difficult to show that if the "memorized" values $y_{n+1-i}$ are sufficiently accurate and $f$ satisfies a Lipschitz condition, then this formula is of order $k$ (hence the name). An Adams-Bashforth formula involves only one evaluation of $f$ per step.
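For a concrete instance, when the step size is a constant $h$ and $k = 2$, working out the integrals gives $\alpha_{2,1} = 3/2$ and $\alpha_{2,2} = -1/2$, so the second-order formula is (a standard result, stated here for illustration)

$$y_{n+1} = y_n + h\left(\tfrac{3}{2}\, f_n - \tfrac{1}{2}\, f_{n-1}\right).$$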
Given $y_n$ and previously computed values $f_{n-1}, f_{n-2}, \ldots,$ the value $f_n = f(x_n, y_n)$ is formed; if necessary, the coefficients $\alpha_{k,i}$ are computed, and then $y_{n+1}$ is evaluated by the formula to advance to $x_{n+1}$. An attractive feature of this approach is that it naturally provides a polynomial approximation to $y(x)$ that can be used to obtain values between mesh points:

$$y(x) \approx y_n + \int_{x_n}^{x} P(t)\, dt, \qquad x_n \le x \le x_{n+1}.$$
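To make the recipe concrete, here is a minimal sketch of the second-order case in Python. This is not code from this book: the Euler starter step, the test problem, and all names are illustrative assumptions.

import math

def ab2(f, x0, y0, h, n_steps):
    """Sketch: fixed-step Adams-Bashforth of order 2 for y' = f(x, y)."""
    xs, ys = [x0], [y0]
    f_prev = f(x0, y0)            # f_{n-1} for the first AB2 step
    # One starter step (Euler); real codes start more carefully.
    x, y = x0 + h, y0 + h * f_prev
    xs.append(x); ys.append(y)
    for _ in range(n_steps - 1):
        f_curr = f(x, y)          # the single new f evaluation per step
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)   # AB2 formula
        x = x + h
        f_prev = f_curr           # recycle the value: the method's "memory"
        xs.append(x); ys.append(y)
    return xs, ys

if __name__ == "__main__":
    # Test problem (assumed for illustration): y' = -y, y(0) = 1.
    xs, ys = ab2(lambda x, y: -y, 0.0, 1.0, 0.1, 10)
    print(ys[-1], math.exp(-xs[-1]))  # computed vs. exact at x = 1

Note that after the starter step, each pass through the loop costs exactly one new evaluation of $f$.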
An Adams-Bashforth formula is so much cheaper than a Runge-Kutta formula of the same order that it is natural to ask how Runge-Kutta codes can possibly be competitive. It seems that by recycling previously computed values we get something (high order) for almost nothing (only one new $f$ evaluation per step). Unfortunately, we do not. All methods with memory have certain difficulties. One is getting started: Where do the "previously computed" values come from on the first few steps? A related difficulty is the recurring one of changing the step size. When previously computed values are recycled, it is natural to wonder whether a "feedback" of errors might cause the computed results to "explode." This instability can occur, and some accurate formulas that resemble the Adams-Bashforth formulas cannot be used at all because the integration is unstable even for arbitrarily small step sizes.
Fortunately, if the step size is small enough, integration with Adams-Bashforth formulas is stable. Returning to the striking difference in cost of the formulas, it is important to realize that it is not merely the cost per step that is the issue, but also how big a step can be taken and still achieve a desired accuracy. The popular Runge-Kutta formulas cost much more per step, but offset this by taking larger steps. Which method proves more efficient in practice depends on the problem, the accuracy desired, and the particular formulas being compared.
There are many issues to be considered when selecting a method, and unfortunately there is no choice that is best for all problems.

The Adams-Moulton formulas arise when the polynomial $P(t)$ interpolates $f_{n+1-i}$ for $i = 0, 1, \ldots, k - 1$. Proceeding analogously to the Adams-Bashforth case, we are led to the formula

$$y_{n+1} = y_n + h_n\, \alpha^*_{k,0}\, f(x_{n+1}, y_{n+1}) + h_n \sum_{i=1}^{k-1} \alpha^*_{k,i}\, f_{n+1-i}.$$

The term involving interpolation to $f_{n+1}$ at $x_{n+1}$ has been extracted from the sum to emphasize that $y_{n+1}$ is only defined implicitly by this formula. It is not obvious that $y_{n+1}$ is even well defined. To establish that it is, we will show how to solve the nonlinear equations defining $y_{n+1}$ for all sufficiently small step sizes. This is accomplished by first "predicting" a value using an explicit formula such as an Adams-Bashforth formula.
Let $p_{n+1}$ denote this predicted value. "Simple" or "functional" iteration improves or "corrects" the $m$th approximation according to the explicit recipe

$$y^{(m+1)}_{n+1} = y_n + h_n\, \alpha^*_{k,0}\, f\bigl(x_{n+1}, y^{(m)}_{n+1}\bigr) + h_n \sum_{i=1}^{k-1} \alpha^*_{k,i}\, f_{n+1-i}$$

for $m = 0, 1, \ldots,$ starting with $y^{(0)}_{n+1} = p_{n+1}$. If $L$ is a Lipschitz constant for $f$ and the step size is small enough that

$$h_n L\, \bigl|\alpha^*_{k,0}\bigr| \le \rho < 1$$

for some constant $\rho$, it is not difficult to show that there is a unique solution $y_{n+1}$ of the algebraic equations and that the error of each iterate is decreased by a factor of $\rho$ at each iteration. For "small" step sizes, the predicted value is close to $y_{n+1}$ and the iteration converges quickly.
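A sketch of one predictor-corrector step under these assumptions, again in Python: the corrector here is the order-2 Adams-Moulton formula (the trapezoidal rule, with $\alpha^*_{2,0} = \alpha^*_{2,1} = 1/2$), the predictor is AB2, and the fixed number of corrections is an illustrative choice, not this book's strategy.

def pece_step(f, x, y, f_prev, h, corrections=2):
    """Sketch: predict with AB2, then correct by simple (functional)
    iteration on y_{n+1} = y_n + (h/2) * (f(x_{n+1}, y_{n+1}) + f_n)."""
    f_curr = f(x, y)                                # f_n: one evaluation
    y_new = y + h * (1.5 * f_curr - 0.5 * f_prev)   # predicted value p_{n+1}
    for _ in range(corrections):                    # each pass costs one f
        y_new = y + 0.5 * h * (f(x + h, y_new) + f_curr)
    return x + h, y_new, f_curr                     # f_curr is next step's f_prev

In the notation above, convergence of the correction loop to the implicitly defined $y_{n+1}$ is guaranteed when $h L / 2 \le \rho < 1$.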
An implicit formula like an Adams-Moulton formula is more trouble and more expensive to evaluate than an explicit formula like an Adams-Bashforth formula. Why bother? For one thing, the Adams-Moulton formula of order $k$ is considerably more accurate than the Adams-Bashforth formula of the same order, so it can use a bigger step size. For another, the Adams-Moulton formula is much more stable. When all factors are considered, the Adams-Moulton formulas are advantageous. A modern code based on such formulas is more complicated than a Runge-Kutta code because it must cope with the difficulties mentioned above concerning starting values and changing the step size.
It is much more complicated than even this brief discussion suggests. With sufficiently many memorized values, we can use whatever order of formula we wish in the step from $x_n$. Modern codes attempt to select the most efficient formula at each step. Unfortunately, the art of computing has run ahead of the theory in this regard: there is an adequate theoretical understanding of variation of step size with a fixed formula, but little has been proven about variation of order. Nevertheless, years of heavy usage of codes that vary the order have demonstrated that they do "work" and that the variation of the order is very important to the efficiency of such codes.

Another natural approach to approximating the solutions of differential equations is based on numerical differentiation.
Again using a basic idea of this book, we start by interpolating previously computed solution values $y_n, y_{n-1}, \ldots, y_{n+1-k}$ as well as the new one $y_{n+1}$ by a polynomial $P(t)$. The derivative of the solution at $x_{n+1}$ is then approximated by $P'(x_{n+1})$. This approximation is tied to the differential equation at $x_{n+1}$ by requiring that

$$P'(x_{n+1}) = f(x_{n+1}, y_{n+1}).$$

A formula for $y_{n+1}$ is obtained by writing $P(t)$ in Lagrange form and using it in the $P'(x_{n+1})$ term of the equation. For certain practical reasons it is usual with this family of formulas to work with a constant step size $h$. Making this assumption, carrying out the substitution, and scaling by $h$ lead to a formula of the form

$$y_{n+1} = \sum_{i=1}^{k} a_i\, y_{n+1-i} + h\, b\, f(x_{n+1}, y_{n+1}).$$

This is a member of a family known as backward differentiation formulas, or just BDFs. They were popularized by Gear [6] and are sometimes known as Gear's formulas. Obviously, these formulas are implicit like the Adams-Moulton formulas. They are not nearly as accurate as the Adams-Moulton formulas of the same order, and formulas of orders 7 and up cannot be used because they are not stable (hence not convergent) as the step size tends to zero.
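For illustration, the two lowest-order members of the family, written in the constant-step notation above, are the backward Euler formula ($k = 1$) and the BDF of order 2; both are standard results:

$$y_{n+1} = y_n + h\, f(x_{n+1}, y_{n+1}), \qquad y_{n+1} = \tfrac{4}{3}\, y_n - \tfrac{1}{3}\, y_{n-1} + \tfrac{2}{3}\, h\, f(x_{n+1}, y_{n+1}).$$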
The reason they are interesting is that at the orders for which they are stable, they are much more stable than explicit Runge-Kutta and Adams formulas. Before discussing their usage, some general remarks about the step size are necessary.

The selection of the step size is affected by a number of issues. The one that receives the most attention is choosing the step size sufficiently small to obtain the desired accuracy.
We have also seen that for some methods the step size might haveto be reduced to produce an answer at a desired point. There are other less obviousconstraints on the step size. Earlier we touched on the matter that the step size mighthave to be restricted so as to evaluate an implicit formula efficiently and also alludedto the issue of restricting the step size in order to make the integration stable. Thereare problems of great practical interest called stiff for which these other restrictionswill cause an explicit Runge-Kutta method or an Adams-Moulton formula evaluatedby simple iteration to need a step size very much smaller than that permitted by the accuracy of the formula. The excellent stability properties of the BDFs have made themthe most popular formulas for solving such problems. They cannot be evaluated bysimple iteration because it restricts the step size too much.
In practice, a modification of the Newton iteration described in Chapter 4 is used to solve the nonlinear algebraic equations for $y_{n+1}$. This has many disagreeable consequences due to the necessity of approximating partial derivatives and solving systems of linear equations. Each step is very expensive compared to a Runge-Kutta or Adams method, but when the problem is stiff, the steps can be so much larger that this is a bargain. Indeed, the solution of a problem that is quite stiff is impractical with codes not specifically intended for such problems.
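To illustrate the mechanics, here is a sketch of the order-1 BDF (backward Euler) with a scalar Newton iteration on a stiff test problem. Everything here is an illustrative assumption: the test equation, the analytically supplied partial derivative fy, and the stopping criteria. Production codes approximate the partial derivatives and solve linear systems instead.

import math

def backward_euler(f, fy, x0, y0, h, n_steps, tol=1e-12, max_newton=10):
    """Sketch: backward Euler with Newton iteration on the implicit
    equation g(Y) = Y - y_n - h * f(x_{n+1}, Y) = 0."""
    x, y = x0, y0
    for _ in range(n_steps):
        x_new = x + h
        Y = y                                  # predict with the old value
        for _ in range(max_newton):            # Newton iteration on g(Y) = 0
            g = Y - y - h * f(x_new, Y)
            dg = 1.0 - h * fy(x_new, Y)        # g'(Y); uses the partial df/dy
            delta = g / dg
            Y -= delta
            if abs(delta) <= tol:
                break
        x, y = x_new, Y
    return x, y

if __name__ == "__main__":
    # Stiff test problem (assumed for illustration): y' = -1000*(y - cos(x)).
    lam = 1000.0
    f = lambda x, y: -lam * (y - math.cos(x))
    fy = lambda x, y: -lam
    print(backward_euler(f, fy, 0.0, 1.0, 0.1, 10))

Here an explicit method would need roughly $h < 2/\lambda = 0.002$ for stability, while backward Euler is stable at $h = 0.1$; that ratio is what makes the expensive implicit step a bargain.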
As with the Adams formulas, modern codes based on the BDFs vary the order as well as the step size. Stiff problems are difficult technically as well as practically, and how to solve them is an active area of research.

There is a large literature on the solution of the initial value problem for a system of ordinary differential equations.