DeTurck, Wilf - Lecture Notes on Numerical Analysis
If the predictor formula is clever enough, then it will happen that just a single application of the iterative refinement (corrector formula) will be sufficient, and we won't have to get involved in a long convergence process.

If we use the trapezoidal rule for a corrector, for instance, then a clever predictor would be the midpoint rule.
The reason for this will become clear if we look at both formulas together with their error terms. We will see in the next section that the error terms are as follows:

$$y_{n+1} = y_{n-1} + 2h\,y'_n + \frac{h^3}{3}\,y'''(X_m) \qquad (2.7.6)$$

$$y_{n+1} = y_n + \frac{h}{2}\left(y'_n + y'_{n+1}\right) - \frac{h^3}{12}\,y'''(X_t). \qquad (2.7.7)$$

Now the exact locations of the points X_m and X_t are unknown, but we will assume here that h is small enough that we can regard the two values of y''' that appear as being the same.

As far as the powers of h that appear in the error terms go, we see that the third power occurs in both formulas.
We say, then, that the midpoint predictor and the trapezoidal corrector constitute a matched pair. The error in the trapezoidal rule is about one fourth as large as, and of opposite sign from, the error in the midpoint method.

The midpoint guess is therefore quite "intelligent". The subsequent iterative refinement of that guess needs to reduce the error only by a factor of four. Now let y_P denote the midpoint predicted value, y^{(1)}_{n+1} denote the first refined value, and y_{n+1} be the final converged value given by the trapezoidal rule. Then we have

$$y_{n+1} = y_n + \frac{h}{2}\left(y'_n + f(x_{n+1}, y_{n+1})\right)$$

$$y^{(1)}_{n+1} = y_n + \frac{h}{2}\left(y'_n + f(x_{n+1}, y_P)\right) \qquad (2.7.8)$$

and by subtraction

$$y^{(1)}_{n+1} - y_{n+1} = \frac{h}{2}\,\frac{\partial f}{\partial y}\,(y_P - y_{n+1}). \qquad (2.7.9)$$

This shows that, however far from the converged value the first guess was, the refined value is (h/2)(∂f/∂y) times closer. Hence if we can keep (h/2)(∂f/∂y) no bigger than about 1/4, then the distance from the first refined value to the converged value will be no larger than the size of the error term in the method, so there would be little point in gilding the iteration any further.

The conclusion is that when we are dealing with a matched predictor-corrector pair, we need do only a single refinement of the corrector if the step size is kept moderately small.
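A single step of this kind is easy to sketch in code. The following is a minimal illustration (the function name and the test equation y' = -y are our own choices, not from the text): a midpoint prediction followed by exactly one trapezoidal correction, as in (2.7.8).

```python
import math

# One step of the matched pair: midpoint predictor (2.7.6 without its
# error term), then a single application of the trapezoidal corrector.
def pc_step(f, x_n, y_prev, y_n, h):
    """Advance from x_n to x_n + h.  y_prev is the value at x_n - h."""
    fn = f(x_n, y_n)
    y_pred = y_prev + 2.0 * h * fn                      # midpoint predictor
    return y_n + (h / 2.0) * (fn + f(x_n + h, y_pred))  # one correction

# Example: y' = -y, y(0) = 1, exact solution exp(-x).
f = lambda x, y: -y
h = 0.05
y0, y1 = 1.0, math.exp(-h)     # seed the two-point method with exact values
y2 = pc_step(f, h, y0, y1, h)  # approximates y(2h) = exp(-0.1)
```

With h = 0.05 the single refinement already lands within about 10^-5 of the exact value, consistent with the factor-of-four argument above.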
Furthermore, "moderately small" means that the step size times the local value of ∂f/∂y should be small compared to 1. For this reason, iteration to full convergence is rarely done in practice.

2.8 Truncation error and step size

We have so far regarded the step size h as a silent partner, more often than not choosing it to be equal to 0.05, for no particular reason.
It is evident, however, that the accuracy of the calculation is strongly affected by the step size. If h is chosen too large, the computed solution may be quite far from the true solution of the differential equation; if too small, then the calculation will become unnecessarily time-consuming, and roundoff errors may build up excessively because of the numerous arithmetic operations that are being carried out.

Speaking in quite general terms, if the true solution of the differential equation is rapidly changing, then we will need a small value of h, that is, small compared to the local relaxation length (see p. 44), and if the solution changes slowly, then a larger value of h will do.

Frequently in practice we deal with equations whose solutions change very rapidly over part of the range of integration and slowly over another part.
Examples of this are provided by the study of the switching on of a complicated process, such as beginning a multi-stage chemical reaction, turning on a piece of electronic equipment, starting a power reactor, etc. In such cases there usually are rapid and ephemeral or "transient" phenomena that occur soon after startup, and that disappear quickly.
If we want to follow these transients accurately, we may need to choose a very tiny step size. After the transients die out, however, the steady-state solution may be a very quiet, slowly varying or nearly constant function, and then a much larger value of h will be adequate.

If we are going to develop software that will be satisfactory for such problems, then the program will obviously have to choose, and re-choose, its own step size as the calculation proceeds. While following a rapid transient it should use a small mesh size, then it should gradually increase h as the transient fades, use a large step while the solution is steady, decrease it again if further quick changes appear, and so forth, all without operator intervention.

Before we go ahead to discuss methods for achieving this step size control, let's observe that one technique is already available in the material of the previous section. Recall that if we want to, we can implement the trapezoidal rule by first guessing, or predicting, the unknown at the next point by using Euler's formula, and then correcting the guess to complete convergence by iteration.

The first guess will be relatively far away from the final converged value if the solution is rapidly varying, but if the solution is slowly varying, then the guess will be rather good. It follows that the number of iterations required to produce convergence is one measure of the appropriateness of the current value of the step size: if many iterations are needed, then the step size is too big.
Hence one way to get some control on h is to follow a policy of cutting the step size in half whenever more than, say, one or two iterations are necessary. This suggestion is not sufficiently sensitive to allow doubling the step size when only one iteration is needed, however, and somewhat more delicacy is called for in that situation. Furthermore, this is a very time-consuming approach, since it involves a complete iteration to convergence, when in fact a single turn of the crank is enough if the step size is kept small enough.

The discussion does, however, point to the fundamental idea that underlies the automatic control of step size during the integration. That basic idea is precisely that we can estimate the correctness of the step size by watching how well the first guess in our iterative process agrees with the corrected value. The correction process itself, when viewed this way, is seen to be a powerful ally of the software user, rather than the "pain in the neck" it seemed to be when we first met it.

Indeed, why would anyone use the cumbersome procedure of guessing and refining (i.e., prediction and correction) as we do in the trapezoidal rule, when many other methods are available that give the next value of the unknown immediately and explicitly? No doubt the question crossed the reader's mind, and the answer is now emerging.
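Purely as an illustration of the crude halving policy just described (all names here are hypothetical, and this is exactly the time-consuming iterate-to-convergence scheme the text warns against), one might write:

```python
# Sketch of the crude policy: iterate the trapezoidal corrector to
# convergence, and halve h whenever more than two iterations were needed.
def trapezoid_step_adaptive(f, x, y, h, tol=1e-10, max_halvings=20):
    """One accepted step from (x, y); returns (x_new, y_new, h_used, h_next)."""
    for _ in range(max_halvings):
        y_new = y + h * f(x, y)              # Euler predictor
        iterations = 0
        while True:
            y_next = y + (h / 2.0) * (f(x, y) + f(x + h, y_new))
            iterations += 1
            if abs(y_next - y_new) < tol or iterations > 50:
                break
            y_new = y_next
        if iterations <= 2:
            return x + h, y_next, h, h       # step accepted
        h /= 2.0                             # too many iterations: halve, retry
    raise RuntimeError("step size underflow")

f = lambda x, y: -y
x1, y1, h_used, h_next = trapezoid_step_adaptive(f, 0.0, 1.0, 0.05)
```

Running this on y' = -y starting from h = 0.05 shows the policy's insensitivity in action: with a tight convergence tolerance it halves the step repeatedly even though the solution is perfectly smooth.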
It will appear that not only does the disparity between the first prediction and the corrected value help us to control the step size, it actually can give us a quantitative estimate of the local error in the integration, so that if we want to, we can print out the approximate size of the error along with the solution.

Our next job will be to make these rather qualitative remarks into quantitative tools, so we must discuss the estimation of the error that we commit by using a particular difference approximation to a differential equation, instead of that equation itself, on one step of the integration process. This is the single-step truncation error.
It does not tell us how far our computed solution is from the true solution, but only how much error is committed in a single step.

The easiest example, as usual, is Euler's method. In fact, in equation (2.1.2) we have already seen the single-step error of this method. That equation was

$$y(x_n + h) = y(x_n) + h\,y'(x_n) + \frac{h^2}{2}\,y''(X) \qquad (2.8.1)$$

where X lies between x_n and x_n + h. In Euler's procedure, we drop the third term on the right, the "remainder term," and compute the solution from the rest of the equation.
In doing this we commit a single-step truncation error that is equal to

$$E = \frac{h^2}{2}\,y''(X), \qquad x_n < X < x_n + h. \qquad (2.8.2)$$

Thus, Euler's method is exact (E = 0) if the solution is a polynomial of degree 1 or less (y'' = 0). Otherwise, the single-step error is proportional to h², so if we cut the step size in half, the local error is reduced to 1/4 of its former value, approximately, and if we double h the error is multiplied by about 4.

We could use (2.8.2) to estimate the error by somehow computing an estimate of y''. For instance, we might differentiate the differential equation y' = f(x, y) once more, and compute y'' directly from the resulting formula.
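The h² scaling in (2.8.2) is easy to check numerically. Here is a small sketch (the test problem y' = y and the function name are our own): one Euler step compared against the exact value, at two step sizes.

```python
import math

# Verify that Euler's single-step error (2.8.2) scales like h^2:
# take one Euler step for y' = y, y(0) = 1, and compare with exp(h).
def euler_one_step_error(h):
    y1 = 1.0 + h * 1.0          # Euler: y1 = y0 + h*f(x0, y0), with f = y
    return abs(math.exp(h) - y1)

e1 = euler_one_step_error(0.1)
e2 = euler_one_step_error(0.05)  # half the step size
ratio = e1 / e2                  # should be close to 4
```

Halving the step size reduces the one-step error by very nearly the predicted factor of four.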
This is usually more trouble than it is worth, though, and we will prefer to estimate E by more indirect methods.

Next we derive the local error of the trapezoidal rule. There are various special methods that might be used to do this, but instead we are going to use a very general method that is capable of dealing with the error terms of almost every integration rule that we intend to study.

First, let's look a little more carefully at the capability of the trapezoidal rule, in the form

$$y_{n+1} - y_n - \frac{h}{2}\left(y'_n + y'_{n+1}\right) = 0. \qquad (2.8.3)$$

Of course, this is a recurrence by means of which we propagate the approximate solution to the right.
It certainly is not exactly true if y_n denotes the value of the true solution at the point x_n, unless that true solution is very special. How special?

Suppose the true solution is y(x) = 1 for all x. Then (2.8.3) would be exactly valid. Suppose y(x) = x. Then (2.8.3) is again exactly satisfied, as the reader should check. Furthermore, if y(x) = x², then a brief calculation reveals that (2.8.3) holds once more.

How long does this continue? Our run of good luck has just expired, because if y(x) = x³ then (check this) the left side of (2.8.3) is not 0, but is instead −h³/2.

We might say that the trapezoidal rule is exact on 1, x, and x², but not x³, i.e., that it is an integration rule of order two ("order" is an overworked word in differential equations). It follows by linearity that the rule is exact on any quadratic polynomial.

By way of contrast, it is easy to verify that Euler's method is exact for a linear function, but fails on x².
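The reader's check can also be done numerically. The following sketch (names ours) evaluates the left side of (2.8.3) for y = 1, x, x², and x³ at an arbitrary point:

```python
# Residual of the trapezoidal recurrence (2.8.3) for a known function y:
# y(a+h) - y(a) - (h/2) * (y'(a) + y'(a+h)), evaluated at x_n = a.
def trapezoid_residual(y, dy, a, h):
    return y(a + h) - y(a) - (h / 2.0) * (dy(a) + dy(a + h))

a, h = 0.3, 0.1
r_const = trapezoid_residual(lambda x: 1.0,  lambda x: 0.0,      a, h)
r_lin   = trapezoid_residual(lambda x: x,    lambda x: 1.0,      a, h)
r_quad  = trapezoid_residual(lambda x: x*x,  lambda x: 2.0*x,    a, h)
r_cubic = trapezoid_residual(lambda x: x**3, lambda x: 3.0*x*x,  a, h)
# r_const, r_lin, r_quad vanish (up to roundoff); r_cubic equals -h**3/2
```

The first three residuals are zero to roundoff, while the cubic residual is exactly −h³/2, as claimed.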
Since the error term for Euler's method in (2.8.2) is of the form const · h² · y''(X), it is perhaps reasonable to expect the error term for the trapezoidal rule to look like const · h³ · y'''(X).

Now we have two questions to handle, and they are respectively easy and hard:

(a) If the error term in the trapezoidal rule really is const · h³ · y'''(X), then what is "const"?

(b) Is it true that the error term is const · h³ · y'''(X)?

We'll do the easy one first, anticipating that the answer to (b) is affirmative so the effort won't be wasted.