Deturck, Wilf - Lecture Notes on Numerical Analysis
The most interesting and revealing case is where the true solution is a decaying exponential, so we will assume that L > 0. Further, we will assume that y(0) = 1 is the given initial value.

The Numerical Solution of Differential Equations

The exact solution is of course

    Exact(x) = e^{-x/L}.    (2.6.5)

Notice that if x increases by L, the solution changes by a factor of e. Hence L, called the relaxation length of the problem, can be conveniently visualized as the distance over which the solution falls by a factor of e.

Now we would like to know how well each of the methods (2.6.1)–(2.6.3) handles the problem (2.6.4).

Suppose first that we ask Euler's method to solve the problem. If we substitute y' = f(x, y) = -y/L into (2.6.1), we get

    y_{n+1} = y_n + h(-y_n/L) = (1 - h/L) y_n.    (2.6.6)

Before we solve this recurrence, let's comment on the ratio h/L that appears in it.
Now L is the distance over which the solution changes by a factor of e, and h is the step size that we are going to use in the numerical integration. Instinctively, one feels that if the solution is changing rapidly in a certain region, then h will have to be kept small there if good accuracy is to be retained, while if the solution changes only slowly, then h can be larger without sacrificing too much accuracy.
The ratio h/L measures the step size of the integration in relation to the distance over which the solution changes appreciably. Hence, h/L is exactly the thing that one feels should be kept small for a successful numerical solution. Since h/L occurs frequently below, we will denote it with the symbol τ.

Now the solution of the recurrence equation (2.6.6), with the starting value y_0 = 1, is obviously

    y_n = (1 - τ)^n,    n = 0, 1, 2, ....    (2.6.7)

Next we study the trapezoidal approximation to the same equation (2.6.4). We substitute y' = f(x, y) = -y/L in (2.6.2) and get

    y_{n+1} = y_n + (h/2)(-y_n/L - y_{n+1}/L).    (2.6.8)

The unknown y_{n+1} appears, as usual with this method, on both sides of the equation. However, for the particularly simple equation that we are now studying, there is no difficulty in solving (2.6.8) for y_{n+1} (without any need for an iterative process) and obtaining

    y_{n+1} = ((1 - τ/2)/(1 + τ/2)) y_n.    (2.6.9)

Together with the initial value y_0 = 1, this implies that

    y_n = ((1 - τ/2)/(1 + τ/2))^n,    n = 0, 1, 2, ....    (2.6.10)

2.6 Comparison of the methods

Before we deal with the midpoint rule, let's pause to examine the two methods whose solutions we have just found. Note that for a given value of h, all three of (a) the exact solution, (b) Euler's solution and (c) the trapezoidal solution are of the form y_n = (constant)^n, in which the three values of "constant" are

    (a) e^{-τ}    (b) 1 - τ    (c) (1 - τ/2)/(1 + τ/2).    (2.6.11)

It follows that to compare the two approximate methods with the "truth," all we have to do is see how close the constants (b) and (c) above are to the true constant (a). If we remember that τ is being thought of as small compared to 1, then we have the power series expansion of e^{-τ}

    e^{-τ} = 1 - τ + τ^2/2 - τ^3/6 + ···    (2.6.12)

to compare with 1 - τ and with the power series expansion of

    (1 - τ/2)/(1 + τ/2) = 1 - τ + τ^2/2 - τ^3/4 + ···.    (2.6.13)

The comparison is now clear.
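This comparison is easy to check numerically. The following short script is an illustrative sketch (not part of the original notes; the variable names are ours) that evaluates the three constants of (2.6.11) at a small value of τ:

```python
import math

tau = 0.05  # a small value of tau = h/L

exact = math.exp(-tau)                    # (a) the true constant e^(-tau)
euler = 1.0 - tau                         # (b) Euler's constant
trap = (1.0 - tau / 2) / (1.0 + tau / 2)  # (c) the trapezoidal constant

# Euler's constant differs from e^(-tau) starting in the tau^2 term,
# while the trapezoidal constant differs only starting in the tau^3 term.
print(abs(euler - exact))  # roughly tau^2/2
print(abs(trap - exact))   # roughly tau^3/12
```

Shrinking τ by a factor of 10 shrinks the Euler error by about 100 and the trapezoidal error by about 1000, which is exactly the behavior the series expansions predict.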
Both the Euler and the trapezoidal methods yield approximate solutions of the form (constant)^n, where "constant" is near e^{-τ}. The trapezoidal rule does a better job of being near e^{-τ} because its constant agrees with the power series expansion of e^{-τ} through the quadratic term, whereas that of the Euler method agrees only up to the linear term.

Finally we study the nature of the approximation that is provided by the midpoint rule. We will find that a new and important phenomenon rears its head in this case. The analysis begins just as it did in the previous two cases: We substitute the right-hand side f(x, y) = -y/L for y' in (2.6.3) to get

    y_{n+1} = y_{n-1} + 2h(-y_n/L).    (2.6.14)

One important feature is already apparent.
Instead of facing a first-order difference equation as we did in (2.6.6) for Euler's method and in (2.6.9) for the trapezoidal rule, we now have to contend with a second-order difference equation.

Since the equation is linear with constant coefficients, we know to try a solution of the form y_n = r^n. This leads to the quadratic equation

    r^2 + 2τr - 1 = 0.    (2.6.15)

Evidently the discriminant of this equation is positive, so its roots are distinct. If we denote these two roots by r_+(τ) and r_-(τ), then the general solution of the difference equation (2.6.14) is

    y_n = c(r_+(τ))^n + d(r_-(τ))^n,    (2.6.16)

where c and d are constants whose values are determined by the initial data y_0 and y_1.

The Euler and trapezoidal approximations were each of the form (constant)^n. This one is a sum of two terms of that kind. We will see that r_+(τ) is a very good approximation to e^{-τ}.
The other term, (r_-(τ))^n, is, so to speak, the price that we pay for getting such a good approximation in r_+(τ). We hope that the other term will stay small relative to the first term, so as not to disturb the closeness of the approximation. We will see, however, that it need not be so obliging, and that in fact it might do whatever it can to spoil things.

The two roots of the quadratic equation are

    r_+(τ) = -τ + √(1 + τ^2),    r_-(τ) = -τ - √(1 + τ^2).    (2.6.17)

When τ = 0 the first of these is +1, so when τ is small r_+(τ) is near +1, and it is the root that is trying to approximate the exact constant e^{-τ} as well as possible. In fact it does pretty well, because the power series expansion of r_+(τ) is

    r_+(τ) = 1 - τ + τ^2/2 - τ^4/8 + ···    (2.6.18)

so it agrees with e^{-τ} through the quadratic terms.

What about r_-(τ)? Its Taylor series is

    r_-(τ) = -1 - τ - τ^2/2 + ···.    (2.6.19)

The bad news is now before us: When τ is a small positive number, the root r_-(τ) is larger than 1 in absolute value.
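Both expansions are easy to confirm numerically. The sketch below (our own code, not from the notes) computes the two roots of (2.6.15) by the quadratic formula for a small positive τ:

```python
import math

tau = 0.05  # a small positive value of tau = h/L

# The two roots of r^2 + 2*tau*r - 1 = 0:
r_plus = -tau + math.sqrt(1.0 + tau ** 2)
r_minus = -tau - math.sqrt(1.0 + tau ** 2)

# r_plus tracks e^(-tau) through the quadratic term...
print(r_plus, math.exp(-tau))
# ...while the other root exceeds 1 in absolute value.
print(abs(r_minus))
```

Note, incidentally, that the product of the two roots is the constant term -1 of the quadratic, so whenever |r_+(τ)| < 1 we necessarily have |r_-(τ)| > 1.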
This means that the stability criterion of Theorem 1.6.1 is violated, so we say that the midpoint rule is unstable.

In practical terms, we observe that r_+(τ) is close to e^{-τ}, so the first term on the right of (2.6.16) is very close to the exact solution. The second term of (2.6.16), the so-called parasitic solution, is small compared to the first term when n is small, because the constant d will be small compared with c. However, as we move to the right, n increases, and the second term will eventually dominate the first, because the first term is shrinking to zero as n increases, because that's what the exact solution does, while the second term increases steadily in size.
In fact, since r_-(τ) is negative and larger than 1 in absolute value, the second term will alternate in sign as n increases, and grow without bound in magnitude.

In Table 4 below we show the result of integrating the problem y' = -y, y(0) = 1 with each of the three methods that we have discussed, using a step size of h = 0.05. To get the midpoint method started, we used the exact value of y(0.05) (i.e., we cheated), and in the trapezoidal rule we iterated to convergence with ε = 10^{-4}. The instability of the midpoint rule is quite apparent.

       x      Euler(x)     Trap(x)      Midpoint(x)   Exact(x)
      0.0     1.00000      1.00000        1.00000     1.00000
      1.0     0.35849      0.36780        0.36806     0.36788
      2.0     0.12851      0.13527        0.13552     0.13534
      3.0     0.04607      0.04975        0.05005     0.04979
      4.0     0.01652      0.01830        0.01888     0.01832
      5.0     0.00592      0.00673        0.00822     0.00674
     10.0     0.00004      0.00005        0.21688     0.00005
     14.55    3.3 × 10^-7  4.8 × 10^-7  -20.48        4.8 × 10^-7
     15.8     9.1 × 10^-8  1.4 × 10^-7   71.45        1.4 × 10^-7

                              Table 4

In addition to the above discussion of accuracy, we summarize here three additional properties of integration methods as they relate to the examples that we have already studied.

First, a numerical integration method might be iterative or noniterative.
A method is noniterative if it expresses the next value of the unknown function quite explicitly in terms of values of the function and its derivatives at preceding points. In an iterative method, at each step of the solution process the next value of the unknown is defined implicitly by an equation, which must be solved to obtain the next value of the unknown function. In practice, we may either solve this equation completely by an iteration or do just one step of the iteration, depending on the quality of available estimates for the unknown value.

Second, a method is self-starting if the next value of the unknown function is obtained from the values of the function and its derivatives at exactly one previous point.
It is not self-starting if values at more than one backward point are needed to get the next one. In the latter case some other method will have to be used to get the first few computed values.

Third, we can define a numerical method to be stable if, when it is applied to the equation y' = -y/L, where L > 0, then for all sufficiently small positive values of the step size h a stable difference equation results, i.e., the computed solution (neglecting roundoff) remains bounded as n → ∞.

We summarize below the properties of the three methods that we have been studying.

                   Iterative   Self-starting   Stable
    Euler             No            Yes          Yes
    Midpoint          No            No           No
    Trapezoidal       Yes           Yes          Yes

Of the three methods, the trapezoidal rule is clearly the best, though for efficient use it needs the help of some other formula to predict the next value of y and thereby avoid lengthy iterations.

2.7 Predictor-corrector methods

The trapezoidal rule differs from the other two that we've looked at in that it does not explicitly tell us what the next value of the unknown function is, but instead gives us an equation that must be solved in order to find it.
At first sight this seems like a nuisance, but in fact it is a boon, because it enables us to regulate the step size during the course of a calculation, as we will discuss in section 2.9.

Let's take a look at the process by which we refine a guessed value of y_{n+1} to an improved value, using the trapezoidal formula

    y_{n+1} = y_n + (h/2)(f(x_n, y_n) + f(x_{n+1}, y_{n+1})).    (2.7.1)

Suppose we let y_{n+1}^{(k)} represent some guess to the value of y_{n+1} that satisfies (2.7.1). Then the improved value y_{n+1}^{(k+1)} is computed from

    y_{n+1}^{(k+1)} = y_n + (h/2)(f(x_n, y_n) + f(x_{n+1}, y_{n+1}^{(k)})).    (2.7.2)

We want to find out how rapidly the successive values y_{n+1}^{(k)}, k = 1, 2, ... approach a limit, if at all. To do this, we rewrite equation (2.7.2), this time replacing k by k - 1, to obtain

    y_{n+1}^{(k)} = y_n + (h/2)(f(x_n, y_n) + f(x_{n+1}, y_{n+1}^{(k-1)}))    (2.7.3)

and then subtract (2.7.3) from (2.7.2) to get

    y_{n+1}^{(k+1)} - y_{n+1}^{(k)} = (h/2)(f(x_{n+1}, y_{n+1}^{(k)}) - f(x_{n+1}, y_{n+1}^{(k-1)})).    (2.7.4)

Next we use the mean-value theorem on the difference of f values on the right-hand side, yielding

    y_{n+1}^{(k+1)} - y_{n+1}^{(k)} = (h/2) (∂f/∂y)|_{(x_{n+1}, η)} (y_{n+1}^{(k)} - y_{n+1}^{(k-1)}),    (2.7.5)

where η lies between y_{n+1}^{(k)} and y_{n+1}^{(k-1)}.

From the above we see at once that the difference between two consecutive iterated values of y_{n+1} will be (h/2)(∂f/∂y) times the difference between the previous two iterated values. It follows that the iterative process will converge if h is kept small enough so that (h/2)(∂f/∂y) is less than 1 in absolute value. We refer to (h/2)(∂f/∂y) as the local convergence factor of the trapezoidal rule.

If the factor is a lot less than 1 (and this can be assured by keeping h small enough), then the convergence will be extremely rapid.

In actual practice, one uses an iterative formula together with another formula (the predictor) whose mission is to provide an intelligent first guess for the iterative method to use. The predictor formula will be explicit, or noniterative.
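To make the predictor-corrector idea concrete, here is a minimal sketch (our own code and naming, with an explicit Euler step standing in as the predictor) that advances one step of y' = f(x, y) by iterating the trapezoidal corrector (2.7.2) until successive guesses agree:

```python
def trapezoidal_step(f, x, y, h, tol=1e-10, max_iter=50):
    """Advance y(x) to y(x + h): Euler predictor, then iterate
    the trapezoidal corrector until successive guesses agree."""
    guess = y + h * f(x, y)  # predictor: an explicit Euler step
    for _ in range(max_iter):
        new = y + 0.5 * h * (f(x, y) + f(x + h, guess))  # corrector (2.7.2)
        if abs(new - guess) < tol:  # iterates have converged
            return new
        guess = new
    return guess

# Integrate y' = -y, y(0) = 1 up to x = 1 with h = 0.05.
f = lambda x, y: -y
x, y, h = 0.0, 1.0, 0.05
for _ in range(20):
    y = trapezoidal_step(f, x, y, h)
    x += h
print(y)  # close to e^(-1) = 0.36788..., cf. the Trap(x) column of Table 4
```

For this equation the local convergence factor is (h/2)|∂f/∂y| = 0.025, so each corrector pass shrinks the error in the guess by a factor of 40 and only a few passes are needed per step.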