If the error term is of the form stated, then the trapezoidal rule can be written as

    y(x+h) - y(x) - (h/2)[y'(x+h) + y'(x)] = c h^3 y'''(X),        (2.8.4)

where X lies between x and x + h. To find c all we have to do is substitute y(x) = x^3 into (2.8.4), and we find at once that c = -1/12. The single-step truncation error of the trapezoidal rule would therefore be

    E = -(h^3/12) y'''(X),    x < X < x + h.        (2.8.5)

Now let's see that question (b) has the answer "yes," so that (2.8.5) is really right. To do this we start with a truncated Taylor series with the integral form of the remainder, rather than with the differential form of the remainder.
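As a quick sanity check of the value c = -1/12 just found, one can substitute y(x) = x^3 into the left-hand side of (2.8.4) numerically and divide by h^3 y'''(X) = 6h^3. The following small Python sketch (ours, not part of the original notes; the sample values of x and h are arbitrary) does exactly that.

```python
# Substitute y(x) = x^3 into (2.8.4) and solve for c.  Since y'''(X) = 6 for
# every X, the constant is c = LHS / (6 h^3), independent of x and h.
def trapezoid_defect(x, h):
    y  = lambda t: t**3          # test function, exactly cubic
    yp = lambda t: 3.0 * t**2    # its derivative
    return y(x + h) - y(x) - (h / 2.0) * (yp(x + h) + yp(x))

for x, h in [(0.0, 1.0), (0.7, 0.1), (-2.0, 0.25)]:
    c = trapezoid_defect(x, h) / (6.0 * h**3)
    print(x, h, c)               # prints -0.0833... = -1/12 each time
```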
In general the series is

    y(x) = y(0) + x y'(0) + x^2 y''(0)/2! + ... + x^n y^(n)(0)/n! + R_n(x),        (2.8.6)

where

    R_n(x) = (1/n!) ∫_0^x (x - s)^n y^(n+1)(s) ds.        (2.8.7)

Indeed, one of the nice ways to prove Taylor's theorem begins with the right-hand side of (2.8.7), plucked from the blue, and then repeatedly integrates by parts, lowering the order of the derivative of y and the power of (x - s) until both reach zero.

In (2.8.6) we choose n = 2, because the trapezoidal rule is exact on polynomials of degree 2, and we write it in the form

    y(x) = P_2(x) + R_2(x),        (2.8.8)

where P_2(x) is the quadratic (in x) polynomial P_2(x) = y(0) + x y'(0) + x^2 y''(0)/2.

Next we define a certain operation that transforms a function y(x) into a new function, namely into the left-hand side of equation (2.8.4).
We call the operator L, so it is defined by

    Ly(x) = y(x+h) - y(x) - (h/2)[y'(x) + y'(x+h)].        (2.8.9)

Now we apply the operator L to both sides of equation (2.8.8), and we notice immediately that LP_2(x) = 0, because the rule is exact on quadratic polynomials (this is why we chose n = 2 in (2.8.6)). Hence we have

    Ly(x) = LR_2(x).        (2.8.10)

Notice that we have here a remainder formula for the trapezoidal rule. It isn't in a very satisfactory form yet, so we will now make it a bit more explicit by computing LR_2(x). First, in the integral expression (2.8.7) for R_2(x) we want to replace the upper limit of the integral by +∞. We can do this by writing

    R_2(x) = (1/2!) ∫_0^∞ H_2(x - s) y'''(s) ds,        (2.8.11)

where H_2(t) = t^2 if t > 0 and H_2(t) = 0 if t < 0.

Now if we bear in mind the fact that the operator L acts only on x, and that s is a dummy variable of integration, we find that

    LR_2(x) = (1/2!) ∫_0^∞ LH_2(x - s) y'''(s) ds.        (2.8.12)

Choose x = 0.
Then if s lies between 0 and h we find

    LH_2(x - s) = (h - s)^2 - (h/2)(2(h - s))
                = -s(h - s)        (2.8.13)

(Caution: Do not read past the right-hand side of the first equals sign unless you can verify the correctness of what you see there!), whereas if s > h then LH_2(x - s) = 0.

Then (2.8.12) becomes

    LR_2(0) = -(1/2) ∫_0^h s(h - s) y'''(s) ds.        (2.8.14)

This is a much better form for the remainder, but we still have not answered the "hard" question (b). To finish it off we need a form of the mean-value theorem of integral calculus, namely

Theorem 2.8.1  If p(x) is nonnegative, and g(x) is continuous, then

    ∫_a^b p(x) g(x) dx = g(X) ∫_a^b p(x) dx,        (2.8.15)

where X lies between a and b.

The theorem asserts that a weighted average of the values of a continuous function is itself one of the values of that function.
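Before finishing the argument with Theorem 2.8.1, it may be reassuring to check (2.8.14) numerically on a sample function. The sketch below (ours, not from the notes; the test function y(t) = e^t, the step h = 0.1, and the crude midpoint quadrature are all arbitrary choices) compares the two sides.

```python
import math

# Numerical check of (2.8.14):  Ly evaluated at 0 should equal
#   -(1/2) * integral_0^h  s*(h - s) * y'''(s) ds
# Test function y(t) = e^t, so y' and y''' are also e^t.
h = 0.1
y = yp = yppp = math.exp          # every derivative of e^t is e^t

# Left-hand side: the trapezoidal "defect" Ly for the step from 0 to h
lhs = y(h) - y(0.0) - (h / 2.0) * (yp(0.0) + yp(h))

# Right-hand side: -(1/2) * integral, done with a fine midpoint sum
n = 100_000
ds = h / n
rhs = -0.5 * sum((i + 0.5) * ds * (h - (i + 0.5) * ds) * yppp((i + 0.5) * ds)
                 for i in range(n)) * ds

print(lhs, rhs)                   # both about -8.76e-05 for h = 0.1
```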
The vital hypothesis is that the "weight" p(x) does not change sign.

Now in (2.8.14), the function s(h - s) does not change sign on the s-interval (0, h), so

    LR_2(0) = -(1/2) y'''(X) ∫_0^h s(h - s) ds,        (2.8.16)

and if we do the integral we obtain, finally,

Theorem 2.8.2  The trapezoidal rule with remainder term is given by the formula

    y(x_{n+1}) - y(x_n) = (h/2)[y'(x_n) + y'(x_{n+1})] - (h^3/12) y'''(X),        (2.8.17)

where X lies between x_n and x_{n+1}.

The proof of this theorem involved some ideas that carry over almost unchanged to very general kinds of integration rules. Therefore it is important to make sure that you completely understand its derivation.

2.9  Controlling the step size

In equation (2.8.5) we saw that if we can estimate the size of the third derivative during the calculation, then we can estimate the error in the trapezoidal rule as we go along, and modify the step size h if necessary, to keep that error within preassigned bounds.

To see how this can be done, we will quote, without proof, the result of a similar derivation for the midpoint rule.
It says that

    y(x_{n+1}) - y(x_{n-1}) = 2h y'(x_n) + (h^3/3) y'''(X),        (2.9.1)

where X is between x_{n-1} and x_{n+1}. Thus the midpoint rule is also of second order. If the step size were halved, the local error would be cut to one eighth of its former value. The error in the midpoint rule is, however, about four times as large as that in the trapezoidal formula, and of opposite sign.

Now suppose we adopt an overall strategy of predicting the value y_{n+1} of the unknown by means of the midpoint rule, and then refining the prediction to convergence with the trapezoidal corrector. We want to estimate the size of the single-step truncation error, using only the following data, both of which are available during the calculation: (a) the initial guess, from the midpoint method, and (b) the converged corrected value, from the trapezoidal rule.

We begin by defining three different kinds of values of the unknown function at the "next" point x_{n+1}.
They are

(i) the quantity p_{n+1}, which is defined as the predicted value of y(x_{n+1}) obtained from using the midpoint rule, except that the backwards values are not the computed ones, but are the exact ones instead. In symbols,

    p_{n+1} = y(x_{n-1}) + 2h y'(x_n).        (2.9.2)

Of course, p_{n+1} is not available during an actual computation.

(ii) the quantity q_{n+1}, which is the value that we would compute from the trapezoidal corrector if for the backward value we use the exact solution y(x_n) instead of the calculated solution y_n. Thus q_{n+1} satisfies the equation

    q_{n+1} = y(x_n) + (h/2)(f(x_n, y(x_n)) + f(x_{n+1}, q_{n+1})).        (2.9.3)

Again, q_{n+1} is not available to us during calculation.

(iii) the quantity y(x_{n+1}), which is the exact solution itself. It satisfies two different equations, one of which is

    y(x_{n+1}) = y(x_n) + (h/2)(f(x_n, y(x_n)) + f(x_{n+1}, y(x_{n+1}))) - (h^3/12) y'''(X)        (2.9.4)

and the other of which is (2.9.1).
Note that the two X's may be different.

Now, from (2.9.1) and (2.9.2) we have at once that

    y(x_{n+1}) = p_{n+1} + (h^3/3) y'''(X).        (2.9.5)

Next, from (2.9.3) and (2.9.4) we get

    y(x_{n+1}) = q_{n+1} + (h/2)(f(x_{n+1}, y(x_{n+1})) - f(x_{n+1}, q_{n+1})) - (h^3/12) y'''(X)
               = q_{n+1} + (h/2)(y(x_{n+1}) - q_{n+1}) (∂f/∂y)(x_{n+1}, Y) - (h^3/12) y'''(X),        (2.9.6)

where we have used the mean-value theorem, and Y lies between y(x_{n+1}) and q_{n+1}. Now if we subtract q_{n+1} from both sides, we will observe that y(x_{n+1}) - q_{n+1} will then appear on both sides of the equation. Hence we will be able to solve for it, with the result that

    y(x_{n+1}) = q_{n+1} - (h^3/12) y'''(X) + terms involving h^4.        (2.9.7)

Now let's make the working hypothesis that y''' is constant over the range of values of x considered, namely from x_n - h to x_n + h. The y'''(X) in (2.9.7) is thereby decreed to be equal to the y'''(X) in (2.9.5), even though the X's are different.
Under this assumption, we can eliminate y(x_{n+1}) between (2.9.5) and (2.9.7) and obtain

    q_{n+1} - p_{n+1} = (5/12) h^3 y''' + terms involving h^4.        (2.9.8)

We see that this expresses the unknown, but assumed constant, value of y''' in terms of the difference between the initial prediction and the final converged value of y(x_{n+1}). Now we ignore the "terms involving h^4" in (2.9.8), solve for y''', and then for the estimated single-step truncation error we have

    Error = -(h^3/12) y'''
          ≈ -(1/12)(12/5)(q_{n+1} - p_{n+1})
          = -(1/5)(q_{n+1} - p_{n+1}).        (2.9.9)

The quantity q_{n+1} - p_{n+1} is not available during the calculation, but as an estimator we can use the computed predicted value and the computed converged value, because these differ only in that they use computed, rather than exact, backwards values of the unknown function.

Hence, we have here an estimate of the single-step truncation error that we can conveniently compute, print out, or use to control the step size.

The derivation of this formula was of course dependent on the fact that we used the midpoint method for the predictor and the trapezoidal rule for the corrector.
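As a concrete illustration of (2.9.8) and (2.9.9), the following sketch (ours, not from the notes; the test equation y' = -y, the step h = 0.05, and the fixed 50 corrector iterations are arbitrary choices that anticipate the table below) performs one predictor-corrector step from exact backward values and compares the estimated error with the true error of the corrected value.

```python
import math

# One step of the midpoint predictor / trapezoidal corrector for y' = -y,
# started from exact values, to check the estimate (2.9.9) against the truth.
# Here x_{n-1} = 0, x_n = h, x_{n+1} = 2h with h = 0.05.
f = lambda x, y: -y                    # right-hand side of y' = -y
exact = lambda x: math.exp(-x)         # true solution with y(0) = 1

h = 0.05
xm, xn, xp = 0.0, h, 2 * h             # x_{n-1}, x_n, x_{n+1}

p = exact(xm) + 2 * h * f(xn, exact(xn))            # predictor (2.9.2)
q = p                                               # corrector (2.9.3),
for _ in range(50):                                 # iterated to convergence
    q = exact(xn) + (h / 2) * (f(xn, exact(xn)) + f(xp, q))

print(q - p)                    # about -4.9e-5, roughly (5/12) h^3 y'''
print(-(q - p) / 5)             # estimated error (2.9.9): about 9.8e-6
print(exact(xp) - q)            # actual corrector error:  about 9.4e-6
```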
If we had used a different pair, however, the same argument would have worked, provided only that the error terms of the predictor and corrector formulas both involved the same derivative of y, i.e., both formulas were of the same order.

Hence, "matched pairs" of predictor and corrector formulas, i.e., pairs in which both are of the same order, are most useful for carrying out extended calculations in which the local errors are continually monitored and the step size is regulated accordingly.

Let's pause to see how this error estimator would have turned out in the case of a general matched pair of predictor-corrector formulas, instead of just for the midpoint and trapezoidal rule combination. Suppose the predictor formula has an error term

    y_exact - y_predicted = λ h^q y^(q)(X)        (2.9.10)

and suppose that the error in the corrector formula is given by

    y_exact - y_corrected = μ h^q y^(q)(X).        (2.9.11)

Then a derivation similar to the one that we have just done will show that the estimator for the single-step error that is available during the progress of the computation is

    Error ≈ (μ/(λ - μ)) (y_corrected - y_predicted).        (2.9.12)

In the table below we show the result of integrating the differential equation y' = -y with y(0) = 1 using the midpoint and trapezoidal formulas with h = 0.05 as the predictor and corrector, as described above.
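A short program along the following lines (our sketch, not the code that produced the notes; the 50 corrector iterations and the print format are arbitrary choices) carries out that computation. It should reproduce the predicted and corrected columns of table 5 essentially to the digits shown; the last column depends on exactly how (2.9.13) is evaluated, so its final digits may differ somewhat.

```python
import math

# Midpoint predictor + trapezoidal corrector (iterated to convergence) for
# y' = -y, y(0) = 1, h = 0.05, started from the exact solution at x = 0 and
# x = 0.05.  Prints x, predicted value, corrected value, the estimated
# single-step error -(q - p)/5, and the single-step error (2.9.13).
f = lambda x, y: -y
exact = lambda x: math.exp(-x)

h = 0.05
xs = [0.0, h]
ys = [exact(0.0), exact(h)]            # "cheat": exact starting values

for n in range(1, 20):                 # advance up to x = 1.00
    x_next = xs[n] + h
    p = ys[n - 1] + 2 * h * f(xs[n], ys[n])             # midpoint predictor
    q = p
    for _ in range(50):                                  # trapezoidal corrector
        q = ys[n] + (h / 2) * (f(xs[n], ys[n]) + f(x_next, q))
    est = -(q - p) / 5                                   # estimate (2.9.9)
    act = (exact(x_next) - exact(xs[n])
           - (h / 2) * (-exact(xs[n]) - exact(x_next)))  # formula (2.9.13)
    print(f"{x_next:4.2f}  {p:.6f}  {q:.6f}  {est:10.1e}  {act:10.1e}")
    xs.append(x_next)
    ys.append(q)
```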
The successive columns show x, the predicted value at x, the converged corrected value at x, the single-step error estimated from the approximation (2.9.9), and the actual single-step error obtained by computing

    y(x_{n+1}) - y(x_n) - (h/2)(y'(x_n) + y'(x_{n+1}))        (2.9.13)

using the true solution y(x) = e^{-x}. The calculation was started by (cheating and) using the exact solution at 0.00 and 0.05.

      x     Pred(x)    Corr(x)     Errest(x)     Error(x)
    0.00       --         --           --            --
    0.05       --         --           --            --
    0.10   0.904877   0.904828    98 × 10^-7    94 × 10^-7
    0.15   0.860747   0.860690   113 × 10^-7    85 × 10^-7
    0.20   0.818759   0.818705   108 × 10^-7    77 × 10^-7
    0.25   0.778820   0.778768   102 × 10^-7    69 × 10^-7
    0.30   0.740828   0.740780    97 × 10^-7    61 × 10^-7
    0.35   0.704690   0.704644    93 × 10^-7    55 × 10^-7
    0.40   0.670315   0.670271    88 × 10^-7    48 × 10^-7
    0.45   0.637617   0.637575    84 × 10^-7    43 × 10^-7
    0.50   0.606514   0.606474    80 × 10^-7    37 × 10^-7
     ...        ...        ...          ...           ...
    0.95   0.386694   0.386669    51 × 10^-7     5 × 10^-7
    1.00   0.367831   0.367807    48 × 10^-7     3 × 10^-7

                          table 5

Now that we have a simple device for estimating the single-step truncation error, namely by using one fifth of the distance between the first guess and the corrected value, we can regulate the step size so as to keep the error between preset limits.
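One possible way to act on that estimate is sketched below (ours, not from the notes): accept the step when the estimated error is within the preset limits, halve h when it is too large, and double h when the error is needlessly small. The tolerance, the halve/double rule, and the name step_decision are illustrative assumptions; regenerating the equally spaced backward values after a change of h is a separate matter not shown here.

```python
# A possible acceptance / step-size policy based on the estimate (2.9.9).
# The thresholds and the halve/double strategy are illustrative choices only.
def step_decision(p_next, q_next, h, tol):
    est = abs(q_next - p_next) / 5.0        # estimated single-step error
    if est > tol:
        return "reject", h / 2.0            # error too big: halve the step
    if est < tol / 50.0:
        return "accept", 2.0 * h            # error needlessly small: double it
    return "accept", h                      # error within the preset limits

print(step_decision(0.904877, 0.904828, 0.05, 1.0e-4))   # ('accept', 0.05)
```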