Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use; further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books, diskettes, or CDROMs visit http://www.nr.com or call 1-800-872-7423 (North America only), or send email to trade@cup.cam.ac.uk (outside North America).

Chapter 16. Integration of Ordinary Differential Equations
16.2 Adaptive Stepsize Control for Runge-Kutta

for the ith equation will be taken to be

    ∆0 = eps × yscal[i]                                        (16.2.8)

If you desire constant fractional errors, plug a pointer to y into the pointer-to-yscal calling slot (no need to copy the values into a different array). If you desire constant absolute errors relative to some maximum values, set the elements of yscal equal to those maximum values. A useful "trick" for getting constant fractional errors except "very" near zero crossings is to set yscal[i] equal to |y[i]| + |h × dydx[i]|. (The routine odeint, below, does this.)

Here is a more technical point. We have to consider one additional possibility for yscal. The error criteria mentioned thus far are "local," in that they bound the error of each step individually. In some applications you may be unusually sensitive about a "global" accumulation of errors, from beginning to end of the integration, and in the worst possible case where the errors all are presumed to add with the same sign. Then, the smaller the stepsize h, the smaller the value ∆0 that you will need to impose. Why? Because there will be more steps between your starting and ending values of x. In such cases you will want to set yscal proportional to h, typically to something like

    ∆0 = eps × h × dydx[i]                                     (16.2.9)

This enforces fractional accuracy not on the values of y but (much more stringently) on the increments to those values at each step. But now look back at (16.2.7). If ∆0 has an implicit scaling with h, then the exponent 0.20 is no longer correct: When the stepsize is reduced from a too-large value, the new predicted value h1 will fail to meet the desired accuracy when yscal is also altered to this new h1 value. Instead of 0.20 = 1/5, we must scale by the exponent 0.25 = 1/4 for things to work out.

The exponents 0.20 and 0.25 are not really very different. This motivates us to adopt the following pragmatic approach, one that frees us from having to know in advance whether or not you, the user, plan to scale your yscal's with stepsize. Whenever we decrease a stepsize, let us use the larger value of the exponent (whether we need it or not!), and whenever we increase a stepsize, let us use the smaller exponent. Furthermore, because our estimates of error are not exact, but only accurate to the leading order in h, we are advised to put in a safety factor S which is a few percent smaller than unity. Equation (16.2.7) is thus replaced by

    h0 = S h1 |∆0/∆1|^0.20    if ∆0 ≥ ∆1                       (16.2.10)
    h0 = S h1 |∆0/∆1|^0.25    if ∆0 < ∆1

We have found this prescription to be a reliable one in practice.

Here, then, is a stepper program that takes one "quality-controlled" Runge-Kutta step.

    #include <math.h>
    #include "nrutil.h"
    #define SAFETY 0.9
    #define PGROW -0.2
    #define PSHRNK -0.25
    #define ERRCON 1.89e-4
    /* The value ERRCON equals (5/SAFETY) raised to the power (1/PGROW),
       see use below. */

    void rkqs(float y[], float dydx[], int n, float *x, float htry, float eps,
        float yscal[], float *hdid, float *hnext,
        void (*derivs)(float, float [], float []))
    /* Fifth-order Runge-Kutta step with monitoring of local truncation error
       to ensure accuracy and adjust stepsize. Input are the dependent variable
       vector y[1..n] and its derivative dydx[1..n] at the starting value of
       the independent variable x. Also input are the stepsize to be attempted
       htry, the required accuracy eps, and the vector yscal[1..n] against
       which the error is scaled. On output, y and x are replaced by their new
       values, hdid is the stepsize that was actually accomplished, and hnext
       is the estimated next stepsize. derivs is the user-supplied routine that
       computes the right-hand side derivatives. */
    {
        void rkck(float y[], float dydx[], int n, float x, float h,
            float yout[], float yerr[],
            void (*derivs)(float, float [], float []));
        int i;
        float errmax,h,htemp,xnew,*yerr,*ytemp;

        yerr=vector(1,n);
        ytemp=vector(1,n);
        h=htry;                 /* Set stepsize to the initial trial value. */
        for (;;) {
            rkck(y,dydx,n,*x,h,ytemp,yerr,derivs);   /* Take a step. */
            errmax=0.0;                              /* Evaluate accuracy. */
            for (i=1;i<=n;i++) errmax=FMAX(errmax,fabs(yerr[i]/yscal[i]));
            errmax /= eps;      /* Scale relative to required tolerance. */
            if (errmax <= 1.0) break;   /* Step succeeded. Compute size of
                                           next step. */
            htemp=SAFETY*h*pow(errmax,PSHRNK);
            /* Truncation error too large, reduce stepsize. */
            h=(h >= 0.0 ? FMAX(htemp,0.1*h) : FMIN(htemp,0.1*h));
            /* No more than a factor of 10. */
            xnew=(*x)+h;
            if (xnew == *x) nrerror("stepsize underflow in rkqs");
        }
        if (errmax > ERRCON) *hnext=SAFETY*h*pow(errmax,PGROW);
        else *hnext=5.0*h;      /* No more than a factor of 5 increase. */
        *x += (*hdid=h);
        for (i=1;i<=n;i++) y[i]=ytemp[i];
        free_vector(ytemp,1,n);
        free_vector(yerr,1,n);
    }

The routine rkqs calls the routine rkck to take a Cash-Karp Runge-Kutta step:

    #include "nrutil.h"

    void rkck(float y[], float dydx[], int n, float x, float h, float yout[],
        float yerr[], void (*derivs)(float, float [], float []))
    /* Given values for n variables y[1..n] and their derivatives dydx[1..n]
       known at x, use the fifth-order Cash-Karp Runge-Kutta method to advance
       the solution over an interval h and return the incremented variables as
       yout[1..n]. Also return an estimate of the local truncation error in
       yerr using the embedded fourth-order method. The user supplies the
       routine derivs(x,y,dydx), which returns derivatives dydx at x. */
    {
        int i;
        /* ...the declarations of the Cash-Karp tableau constants and of the
           work vectors *ak2..*ak6, *ytemp are truncated in this extract... */

        ak2=vector(1,n);
        ak3=vector(1,n);
        ak4=vector(1,n);
        ak5=vector(1,n);
        ak6=vector(1,n);
        ytemp=vector(1,n);
        for (i=1;i<=n;i++)                  /* First step. */
            ytemp[i]=y[i]+b21*h*dydx[i];
        (*derivs)(x+a2*h,ytemp,ak2);        /* Second step. */
        for (i=1;i<=n;i++)
            ytemp[i]=y[i]+h*(b31*dydx[i]+b32*ak2[i]);
        (*derivs)(x+a3*h,ytemp,ak3);        /* Third step. */
        for (i=1;i<=n;i++)
            ytemp[i]=y[i]+h*(b41*dydx[i]+b42*ak2[i]+b43*ak3[i]);
        (*derivs)(x+a4*h,ytemp,ak4);        /* Fourth step. */
        for (i=1;i<=n;i++)
            ytemp[i]=y[i]+h*(b51*dydx[i]+b52*ak2[i]+b53*ak3[i]+b54*ak4[i]);
        (*derivs)(x+a5*h,ytemp,ak5);        /* Fifth step. */
        for (i=1;i<=n;i++)
            ytemp[i]=y[i]+h*(b61*dydx[i]+b62*ak2[i]+b63*ak3[i]+b64*ak4[i]
                +b65*ak5[i]);
        (*derivs)(x+a6*h,ytemp,ak6);        /* Sixth step. */
        for (i=1;i<=n;i++)      /* Accumulate increments with proper weights. */
            yout[i]=y[i]+h*(c1*dydx[i]+c3*ak3[i]+c4*ak4[i]+c6*ak6[i]);
        for (i=1;i<=n;i++)
            /* Estimate error as difference between fourth- and fifth-order
               methods. */
            yerr[i]=h*(dc1*dydx[i]+dc3*ak3[i]+dc4*ak4[i]+dc5*ak5[i]
                +dc6*ak6[i]);
        free_vector(ytemp,1,n);
        free_vector(ak6,1,n);
        free_vector(ak5,1,n);
        free_vector(ak4,1,n);
        free_vector(ak3,1,n);
        free_vector(ak2,1,n);
    }

Noting that the above routines are all in single precision, don't be too greedy in specifying eps.
The punishment for excessive greediness is interesting and worthy of Gilbert and Sullivan's Mikado: The routine can always achieve an apparent zero error by making the stepsize so small that quantities of order h·y′ add to quantities of order y as if they were zero. Then the routine chugs happily along taking infinitely many infinitesimal steps and never changing the dependent variables one iota. (You guard against this catastrophic loss of your computer budget by signaling on abnormally small stepsizes or on the dependent variable vector remaining unchanged from step to step. On a personal workstation you guard against it by not taking too long a lunch hour while your program is running.)

Here is a full-fledged "driver" for Runge-Kutta with adaptive stepsize control. We warmly recommend this routine, or one like it, for a variety of problems, notably
















