Deturck, Wilf - Lecture Notes on Numerical Analysis
The root at +1 is the friendly one. As h increases slightly to small positive values, that root will follow the power series expansion of e^{−τ} very accurately, in fact, through the first four powers of τ. The root at −1 is to be regarded with apprehension, because it is poised on the brink of causing trouble. If, as h grows to a small positive value, this root grows in absolute value, then its powers will dwarf the powers of the principal root in the numerical solution, and all accuracy will eventually be lost.

To see if this happens, let's substitute a power series

    α2(h) = −1 + q1τ + q2τ² + · · ·                              (2.12.16)

into the characteristic equation (2.12.8), which in the present case is just the quadratic equation

    (1 + τ/3)α² + (4τ/3)α − (1 − τ/3) = 0.                       (2.12.17)

After substituting, we quickly find that q1 = −1/3, and our apprehension was fully warranted, because for small τ the root acts like −1 − τ/3. For all small positive values of τ this lies outside the unit disk, so the method will be unstable.

In the next section, we are going to describe a family of multistep methods, called the Adams methods, that are stable, and that offer whatever accuracy one might want, if one is willing to save enough backwards values of the y's.
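As a numerical check on this conclusion, the two roots of the quadratic (2.12.17) can be computed directly from the quadratic formula for a few small positive values of τ (a quick sketch of ours, not part of the text):

```python
import math

def roots_2_12_17(tau):
    # Quadratic (2.12.17): (1 + tau/3)*a^2 + (4*tau/3)*a - (1 - tau/3) = 0
    A = 1 + tau/3
    B = 4*tau/3
    C = -(1 - tau/3)
    d = math.sqrt(B*B - 4*A*C)
    return (-B + d) / (2*A), (-B - d) / (2*A)

for tau in (0.0, 0.01, 0.05, 0.1):
    friendly, unfriendly = roots_2_12_17(tau)
    # friendly tracks e^{-tau}; unfriendly behaves like -1 - tau/3
    print(f"tau = {tau:4.2f}: {friendly:+.6f} {unfriendly:+.6f}")
```

For every positive τ tried, the second root exceeds 1 in absolute value, in agreement with the expansion −1 − τ/3.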
First we will develop a very general tool, the Lagrange interpolation formula, that we'll need in several parts of the sequel, and following that we discuss the Adams formulas. The secret weapon of the Adams formulas is that when h = 0, one of the roots (the friendly one) is as usual sitting at 1, ready to develop into the exponential series, and all of the unfriendly roots are huddled together at the origin, just as far out of trouble as they can get.

2.13 Lagrange and Adams formulas

Our next job is to develop formulas that can give us as much accuracy as we want in a numerical solution of a differential equation.
This means that we want methods in which the formulas span a number of points, i.e., in which the next value of y is obtained from several backward values, instead of from just one or two as in the methods that we have already studied. Furthermore, these methods will need some assistance in getting started, so we will have to develop matched formulas that will provide them with starting values of the right accuracy.

All of these jobs can be done with the aid of a formula, due to Lagrange, whose mission in life is to fit a polynomial to a given set of data points, so let's begin with a little example.

Problem: Through the three points (1, 17), (3, 10), (7, −5) there passes a unique quadratic polynomial. Find it.

Solution:

    P(x) = 17 (x − 3)(x − 7)/((1 − 3)(1 − 7)) + 10 (x − 1)(x − 7)/((3 − 1)(3 − 7)) + (−5) (x − 1)(x − 3)/((7 − 1)(7 − 3)).        (2.13.1)

Let's check that this is really the right answer. First of all, (2.13.1) is a quadratic polynomial, since each term is. Does it pass through the three given points? When x = 1, the second and third terms vanish, because of the factor x − 1, and the first term has the value 17, as is obvious from the cancellation that takes place. Similarly, when x = 3, the first and third terms are zero and the second is 10, etc.

Now we can jump from this little example all the way to the general formula.
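The check just performed can be mechanized; the following snippet (ours, not from the text) confirms that (2.13.1) passes through all three given points:

```python
def P(x):
    # The quadratic (2.13.1) through (1, 17), (3, 10), (7, -5)
    return (17 * (x - 3) * (x - 7) / ((1 - 3) * (1 - 7))
            + 10 * (x - 1) * (x - 7) / ((3 - 1) * (3 - 7))
            + (-5) * (x - 1) * (x - 3) / ((7 - 1) * (7 - 3)))

for x, y in [(1, 17), (3, 10), (7, -5)]:
    assert P(x) == y
print("P passes through all three given points")
```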
If we want to find the polynomial of degree n − 1 that passes through n given data points

    (x1, y1), (x2, y2), . . . , (xn, yn),

then all we have to do is to write it out:

    P(x) = Σ_{i=1}^{n} y_i Π_{j=1, j≠i}^{n} (x − x_j)/(x_i − x_j).        (2.13.2)

This is the Lagrange interpolation formula. The ith term of the sum above contains the product of n − 1 factors (x − x_j)/(x_i − x_j), namely all those factors except for the one in which j = i.

Next, consider the special case of the above formula in which the points x1, x2, . . . , xn
are chosen to be equally spaced. To be specific, we might as well suppose that x_j = jh for j = 1, 2, . . . , n. If we substitute in (2.13.2), we get

    P(x) = Σ_{i=1}^{n} y_i Π_{j=1, j≠i}^{n} (x − jh)/((i − j)h).        (2.13.3)

Consider the product of the denominators above. It is

    (i − 1)(i − 2) · · · (1)(−1)(−2) · · · (i − n) h^{n−1} = (−1)^{n−i} (i − 1)! (n − i)! h^{n−1}.        (2.13.4)

Finally we replace x by xh and substitute in (2.13.3) to obtain

    P(xh) = Σ_{i=1}^{n} ((−1)^{n−i} / ((i − 1)!(n − i)!)) y_i Π_{j=1, j≠i}^{n} (x − j),        (2.13.5)

and that is about as simple as we can get it.

Now we'll use this result to obtain a collection of excellent methods for solving differential equations, the so-called Adams formulas.
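Both the general formula (2.13.2) and the equally spaced form (2.13.5) translate directly into code. The sketch below (function names are ours) implements each and checks that they agree on a small example:

```python
from math import factorial

def lagrange(points, x):
    # General Lagrange formula (2.13.2): sum over i of y_i times
    # the product over j != i of (x - x_j)/(x_i - x_j)
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def lagrange_equal(ys, x):
    # Equally spaced form (2.13.5): nodes at 1, 2, ..., n, x in units of h
    n = len(ys)
    total = 0.0
    for i in range(1, n + 1):
        prod = 1.0
        for j in range(1, n + 1):
            if j != i:
                prod *= x - j
        total += (-1) ** (n - i) * ys[i - 1] * prod / (factorial(i - 1) * factorial(n - i))
    return total

ys = [1.0, 4.0, 9.0, 16.0]                 # y = t^2 at t = 1, 2, 3, 4
pts = list(zip(range(1, 5), ys))
print(lagrange(pts, 2.5), lagrange_equal(ys, 2.5))   # both give 6.25
```

Since the cubic through four points of y = t² is t² itself, both evaluations return 6.25 at t = 2.5.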
Begin with the obvious fact that

    y((p + 1)h) = y(ph) + h ∫_p^{p+1} y′(ht) dt.        (2.13.6)

Instead of integrating the exact function y′ in this formula, we will integrate the polynomial that agrees with y′ at a number of given data points. First, we replace y′ by the Lagrange interpolating polynomial that agrees with it at the p + 1 points h, 2h, . . . , (p + 1)h. This transforms (2.13.6) into

    y_{p+1} = y_p + h Σ_{i=1}^{p+1} ((−1)^{p−i+1} / ((i − 1)!(p − i + 1)!)) y′(ih) ∫_p^{p+1} Π_{j=1, j≠i}^{p+1} (x − j) dx.        (2.13.7)

We can rewrite this in the more customary form of a linear multistep method:

    y_{n+1} = y_n + h Σ_{i=−1}^{p−1} b_{−i} y′_{n−i}.        (2.13.8)

This involves replacing i by p − i in (2.13.7), so the numbers b_{−i} are given by

    b_{−i} = ((−1)^{i+1} / ((p − 1 − i)!(i + 1)!)) ∫_p^{p+1} Π_{j=1, j≠p−i}^{p+1} (x − j) dx,    i = −1, 0, . . . , p − 1.        (2.13.9)

Now to choose a formula in this family, all we have to do is specify p. For instance, let's take p = 2. Then we find

    b_1 = (1/2) ∫_2^3 (x − 1)(x − 2) dx = 5/12
    b_0 = − ∫_2^3 (x − 1)(x − 3) dx = 2/3        (2.13.10)
    b_{−1} = (1/2) ∫_2^3 (x − 2)(x − 3) dx = −1/12

so we have the third-order implicit Adams method

    y_{n+1} = y_n + (h/12)(5y′_{n+1} + 8y′_n − y′_{n−1}).        (2.13.11)

If this process is repeated for various values of p we get the whole collection of these formulas. For reference, we show the first four of them below, complete with their error terms:

    y_{n+1} = y_n + h y′_n − (h²/2) y″        (2.13.12)
    y_{n+1} = y_n + (h/2)(y′_{n+1} + y′_n) − (h³/12) y‴        (2.13.13)
    y_{n+1} = y_n + (h/12)(5y′_{n+1} + 8y′_n − y′_{n−1}) − (h⁴/24) y^{(iv)}        (2.13.14)
    y_{n+1} = y_n + (h/24)(9y′_{n+1} + 19y′_n − 5y′_{n−1} + y′_{n−2}) − (19h⁵/720) y^{(v)}        (2.13.15)

If we compare these formulas with the general linear multistep method (2.12.1) then we quickly find the reason for the stability of these formulas.
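The integrals that produce these coefficients involve only polynomials, so they can be evaluated in exact rational arithmetic. The following computation (ours, not the text's) implements formula (2.13.9) for general p and reproduces the values in (2.13.10):

```python
from fractions import Fraction
from math import factorial

def poly_mul(p, q):
    # Multiply two polynomials given as coefficient lists (lowest degree first)
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_integral(c, a, b):
    # Exact definite integral of the polynomial with coefficients c over [a, b]
    return sum(ck * (Fraction(b)**(k + 1) - Fraction(a)**(k + 1)) / (k + 1)
               for k, ck in enumerate(c))

def adams_coeffs(p):
    # The numbers b_{-i}, i = -1, ..., p-1, from formula (2.13.9)
    coeffs = {}
    for i in range(-1, p):
        poly = [Fraction(1)]
        for j in range(1, p + 2):
            if j != p - i:
                poly = poly_mul(poly, [Fraction(-j), Fraction(1)])  # factor (x - j)
        front = Fraction((-1)**(i + 1), factorial(p - 1 - i) * factorial(i + 1))
        coeffs[-i] = front * poly_integral(poly, p, p + 1)
    return coeffs

for k, v in adams_coeffs(2).items():
    print(f"b_{k} = {v}")   # b_1 = 5/12, b_0 = 2/3, b_-1 = -1/12
```

Running the same routine with p = 3 yields 9/24, 19/24, −5/24 and 1/24, the coefficients of the fourth-order formula (2.13.15).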
Notice that only one backwards value of y itself is used, namely y_n. The other backwards values are all of the derivatives. Now look at equation (2.12.9) again. It is the polynomial equation that determines the roots of the characteristic equation of the method, in the limiting case where the step size is zero. If we use a formula with all the a_i = 0, except that a_0 = 1, then that polynomial equation becomes simply

    α^{p+1} − α^p = 0.        (2.13.16)

The roots of this are 1, 0, 0, . . . , 0.
The root at 1, as h grows to a small positive value, is the one that gives us the accuracy of the computed solution. The other roots all start at the origin, so when h is small they too will be small, instead of trying to cause trouble for us. All of the Adams formulas are stable for this reason.

In Table 6 we show the behavior of the three roots of the characteristic equation (2.12.8) as it applies to the fourth-order method (2.13.15).
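These root values are straightforward to reproduce. Applying (2.13.15) to the test equation y′ = −y (and dropping the error term) gives the cubic characteristic equation (1 + 9τ/24)α³ − (1 − 19τ/24)α² − (5τ/24)α + τ/24 = 0, and a few Newton iterations starting from e^{−τ} then locate the friendly root (a quick sketch of ours, not from the text):

```python
import math

def char_poly(tau):
    # Characteristic polynomial of method (2.13.15) applied to y' = -y:
    #   (1 + 9t/24)*a^3 - (1 - 19t/24)*a^2 - (5t/24)*a + t/24 = 0
    c3 = 1 + 9*tau/24
    c2 = -(1 - 19*tau/24)
    c1 = -5*tau/24
    c0 = tau/24
    f = lambda a: ((c3*a + c2)*a + c1)*a + c0
    df = lambda a: (3*c3*a + 2*c2)*a + c1
    return f, df

def friendly_root(tau, steps=20):
    # Newton's method, starting from e^{-tau}, which lies close to the
    # friendly (principal) root for these values of tau
    f, df = char_poly(tau)
    a = math.exp(-tau)
    for _ in range(steps):
        a -= f(a) / df(a)
    return a

for tau in (0.1, 0.5, 0.9):
    print(f"tau = {tau}: friendly root = {friendly_root(tau):.6f}, "
          f"e^-tau = {math.exp(-tau):.6f}")
```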
It happens that the roots are all real in this case. Notice that the friendly root is the one of largest absolute value for all τ ≤ 0.9.

    τ      e^{−τ}      Friendly(τ)   Root2(τ)     Root3(τ)
    0.0    1.000000    1.000000      0.000000      0.000000
    0.1    0.904837    0.904837      0.058536     −0.075823
    0.2    0.818731    0.818723      0.081048     −0.116824
    0.3    0.740818    0.740758      0.098550     −0.153914
    0.4    0.670320    0.670068      0.113948     −0.189813
    0.5    0.606531    0.605769      0.128457     −0.225454
    0.6    0.548812    0.546918      0.142857     −0.261204
    0.7    0.496585    0.492437      0.157869     −0.297171
    0.8    0.449329    0.440927      0.174458     −0.333333
    0.9    0.406570    0.390073      0.194475     −0.369595
    1.0    0.367879    0.333333      0.224009     −0.405827

    Table 6

Now we have found the implicit Adams formulas. In each of them the unknown y_{n+1} appears on both sides of the equation. They are, therefore, useful as corrector formulas. To find matching predictor formulas is also straightforward. We return to (2.13.6), and for a predictor method of order p + 1 we replace y′ by the interpolating polynomial that agrees with it at the data points 0, h, 2h, 3h, . . .