Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books, diskettes, or CDROMs visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to trade@cup.cam.ac.uk (outside North America).

Chapter 16. Integration of Ordinary Differential Equations

        ym=vector(1,nvar);
        yn=vector(1,nvar);
        h=htot/nstep;                        /* Stepsize this trip. */
        for (i=1;i<=nvar;i++) {
            ym[i]=y[i];
            yn[i]=y[i]+h*dydx[i];            /* First step. */
        }
        x=xs+h;
        (*derivs)(x,yn,yout);                /* Will use yout for temporary storage of derivatives. */
        h2=2.0*h;
        for (n=2;n<=nstep;n++) {             /* General step. */
            for (i=1;i<=nvar;i++) {
                swap=ym[i]+h2*yout[i];
                ym[i]=yn[i];
                yn[i]=swap;
            }
            x += h;
            (*derivs)(x,yn,yout);
        }
        for (i=1;i<=nvar;i++)                /* Last step. */
            yout[i]=0.5*(ym[i]+yn[i]+h*yout[i]);
        free_vector(yn,1,nvar);
        free_vector(ym,1,nvar);
    }

CITED REFERENCES AND FURTHER READING:
Gear, C.W. 1971, Numerical Initial Value Problems in Ordinary Differential Equations (Englewood Cliffs, NJ: Prentice-Hall), §6.1.4.
Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), §7.2.12.

16.4 Richardson Extrapolation and the Bulirsch-Stoer Method

The techniques described in this section are not for differential equations containing nonsmooth functions. For example, you might have a differential equation whose right-hand side involves a function that is evaluated by table look-up and interpolation. If so, go back to Runge-Kutta with adaptive stepsize choice: that method does an excellent job of feeling its way through rocky or discontinuous terrain. It is also an excellent choice for quick-and-dirty, low-accuracy solution of a set of equations. A second warning is that the techniques in this section are not particularly good for differential equations that have singular points inside the interval of integration. A regular solution must tiptoe very carefully across such points. Runge-Kutta with adaptive stepsize can sometimes effect this; more generally, there are special techniques available for such problems, beyond our scope here.

Apart from those two caveats, we believe that the Bulirsch-Stoer method, discussed in this section, is the best known way to obtain high-accuracy solutions to ordinary differential equations with minimal computational effort. (A possible exception, infrequently encountered in practice, is discussed in §16.7.)

Figure 16.4.1.
Richardson extrapolation as used in the Bulirsch-Stoer method. A large interval H is spanned by different sequences of finer and finer substeps. Their results are extrapolated to an answer that is supposed to correspond to infinitely fine substeps. In the Bulirsch-Stoer method, the integrations are done by the modified midpoint method, and the extrapolation technique is rational function or polynomial extrapolation.

Three key ideas are involved. The first is Richardson's deferred approach to the limit, which we already met in §4.3 on Romberg integration. The idea is to consider the final answer of a numerical calculation as itself being an analytic function (if a complicated one) of an adjustable parameter like the stepsize h. That analytic function can be probed by performing the calculation with various values of h, none of them being necessarily small enough to yield the accuracy that we desire.
When we know enough about the function, we fit it to some analytic form, and then evaluate it at that mythical and golden point h = 0 (see Figure 16.4.1). Richardson extrapolation is a method for turning straw into gold! (Lead into gold for alchemist readers.)

The second idea has to do with what kind of fitting function is used.
Bulirsch and Stoer first recognized the strength of rational function extrapolation in Richardson-type applications. That strength is to break the shackles of the power series and its limited radius of convergence, out only to the distance of the first pole in the complex plane. Rational function fits can remain good approximations to analytic functions even after the various terms in powers of h all have comparable magnitudes. In other words, h can be so large as to make the whole notion of the "order" of the method meaningless — and the method can still work superbly.
Nevertheless, more recent experience suggests that for smooth problems straightforward polynomial extrapolation is slightly more efficient than rational function extrapolation. We will accordingly adopt polynomial extrapolation as the default, but the routine bsstep below allows easy substitution of one kind of extrapolation for the other. You might wish at this point to review §3.1-§3.2, where polynomial and rational function extrapolation were already discussed.

The third idea was discussed in the section before this one, namely to use a method whose error function is strictly even, allowing the rational function or polynomial approximation to be in terms of the variable h^2 instead of just h.

Put these ideas together and you have the Bulirsch-Stoer method [1].
A single Bulirsch-Stoer step takes us from x to x + H, where H is supposed to be quite a large — not at all infinitesimal — distance. That single step is a grand leap consisting of many (e.g., dozens to hundreds) substeps of modified midpoint method, which are then extrapolated to zero stepsize.

The sequence of separate attempts to cross the interval H is made with increasing values of n, the number of substeps. Bulirsch and Stoer originally proposed the sequence

    n = 2, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, ..., [n_j = 2 n_{j-2}], ...        (16.4.1)

More recent work by Deuflhard [2,3] suggests that the sequence

    n = 2, 4, 6, 8, 10, 12, 14, ..., [n_j = 2j], ...        (16.4.2)

is usually more efficient. For each step, we do not know in advance how far up this sequence we will go. After each successive n is tried, a polynomial extrapolation is attempted. That extrapolation gives both extrapolated values and error estimates. If the errors are not satisfactory, we go higher in n. If they are satisfactory, we go on to the next step and begin anew with n = 2.

Of course there must be some upper limit, beyond which we conclude that there is some obstacle in our path in the interval H, so that we must reduce H rather than just subdivide it more finely. In the implementations below, the maximum number of n's to be tried is called KMAXX. For reasons described below we usually take this equal to 8; the 8th value of the sequence (16.4.2) is 16, so this is the maximum number of subdivisions of H that we allow.

We enforce error control, as in the Runge-Kutta method, by monitoring internal consistency, and adapting stepsize to match a prescribed bound on the local truncation error. Each new result from the sequence of modified midpoint integrations allows a tableau like that in §3.1 to be extended by one additional set of diagonals. The size of the new correction added at each stage is taken as the (conservative) error estimate. How should we use this error estimate to adjust the stepsize? The best strategy now known is due to Deuflhard [2,3]. For completeness we describe it here:

Suppose the absolute value of the error estimate returned from the kth column (and hence the k + 1st row) of the extrapolation tableau is ϵ_{k+1,k}. Error control is enforced by requiring

    ϵ_{k+1,k} < ϵ        (16.4.3)

as the criterion for accepting the current step, where ϵ is the required tolerance. For the even sequence (16.4.2) the order of the method is 2k + 1:

    ϵ_{k+1,k} ~ H^{2k+1}        (16.4.4)

Thus a simple estimate of a new stepsize H_k to obtain convergence in a fixed column k would be

    H_k = H (ϵ / ϵ_{k+1,k})^{1/(2k+1)}        (16.4.5)

Which column k should we aim to achieve convergence in? Let's compare the work required for different k. Suppose A_k is the work to obtain row k of the extrapolation tableau, so A_{k+1} is the work to obtain column k. We will assume the work is dominated by the cost of evaluating the functions defining the right-hand sides of the differential equations. For n_k subdivisions in H, the number of function evaluations can be found from the recurrence

    A_1 = n_1 + 1
    A_{k+1} = A_k + n_{k+1}        (16.4.6)

The work per unit step to get column k is A_{k+1}/H_k, which we nondimensionalize with a factor of H and write as

    W_k = (A_{k+1} / H_k) H                            (16.4.7)
        = A_{k+1} (ϵ_{k+1,k} / ϵ)^{1/(2k+1)}           (16.4.8)

The optimal column index q is then defined by

    W_q = min_{k=1,...,k_f} W_k        (16.4.9)

where k_f is the final column, in which the error criterion (16.4.3) was satisfied.
The q determined from (16.4.9) defines the stepsize H_q to be used as the next basic stepsize, so that we can expect to get convergence in the optimal column q.

Two important refinements have to be made to the strategy outlined so far:

• If the current H is "too small," then k_f will be "too small," and so q remains "too small." It may be desirable to increase H and aim for convergence in a column q > k_f.

• If the current H is "too big," we may not converge at all on the current step and we will have to decrease H.
We would like to detect this by monitoring the quantities ϵ_{k+1,k} for each k so we can stop the current step as soon as possible.

Deuflhard's prescription for dealing with these two problems uses ideas from communication theory to determine the "average expected convergence behavior" of the extrapolation. His model produces certain correction factors α(k, q) by which H_k is to be multiplied to try to get convergence in column q. The factors α(k, q) depend only on ϵ and the sequence {n_i} and so can be computed once during initialization:

    α(k, q) = ϵ^{(A_{k+1} − A_{q+1}) / [(2k+1)(A_{q+1} − A_1 + 1)]}    for k < q        (16.4.10)

with α(q, q) = 1.

Now to handle the first problem, suppose convergence occurs in column q = k_f. Then rather than taking H_q for the next step, we might aim to increase the stepsize to get convergence in column q + 1.