Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books, diskettes, or CD-ROMs visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to trade@cup.cam.ac.uk (outside North America).

The quantities W_k can be calculated during the integration. The optimal column index q is then defined by

    W_q = min W_k ,    k = 1, . . . , k_f                    (16.4.10)

where k_f is the final column reached. Since we don’t have H_{q+1} available from the computation, we estimate it as

    H_{q+1} = H_q α(q, q + 1)                                (16.4.11)

By equation (16.4.7) this replacement is efficient, i.e., reduces the work per unit step, if

    A_{q+1}/A_{q+2} > H_q/H_{q+1}                            (16.4.12)

or

    A_{q+1} α(q, q + 1) > A_{q+2}                            (16.4.13)

During initialization, this inequality can be checked for q = 1, 2, . . . to determine k_max, the largest allowed column. Then when (16.4.12) is satisfied it will always be efficient to use H_{q+1}. (In practice we limit k_max to 8 even when ε is very small, as there is very little further gain in efficiency, whereas roundoff can become a problem.)

The problem of stepsize reduction is handled by computing stepsize estimates

    H̄_k ≡ H_k α(k, q),    k = 1, . . . , q − 1               (16.4.14)

during the current step. The H̄’s are estimates of the stepsize to get convergence in the optimal column q. If any H̄_k is “too small,” we abandon the current step and restart using H̄_k. The criterion of being “too small” is taken to be

    H_k α(k, q + 1) < H                                      (16.4.15)

The α’s satisfy α(k, q + 1) > α(k, q).

During the first step, when we have no information about the solution, the stepsize reduction check is made for all k. Afterwards, we test for convergence and for possible stepsize reduction only in an “order window”

    max(1, q − 1) ≤ k ≤ min(k_max, q + 1)                    (16.4.16)

As usual, we include a “safety factor” in the stepsize selection. This is implemented by replacing ε by 0.25ε. Other safety factors are explained in the program comments.

Note that while the optimal convergence column is restricted to increase by at most one on each step, a sudden drop in order is allowed by equation (16.4.9). This gives the method a degree of robustness for problems with discontinuities.

Let us remind you once again that scaling of the variables is often crucial for successful integration of differential equations. The scaling “trick” suggested in the discussion following equation (16.2.8) is a good general purpose choice, but not foolproof. Scaling by the maximum values of the variables is more robust, but requires you to have some prior information.

The following implementation of a Bulirsch-Stoer step has exactly the same calling sequence as the quality-controlled Runge-Kutta stepper rkqs. This means that the driver odeint in §16.2 can be used for Bulirsch-Stoer as well as Runge-Kutta: Just substitute bsstep for rkqs in odeint’s argument list. The routine bsstep calls mmid to take the modified midpoint sequences, and calls pzextr, given below, to do the polynomial extrapolation.

#include <math.h>
#include "nrutil.h"
#define KMAXX 8           /* Maximum row number used in the extrapolation. */
#define IMAXX (KMAXX+1)
#define SAFE1 0.25        /* Safety factors. */
#define SAFE2 0.7
#define REDMAX 1.0e-5     /* Maximum factor for stepsize reduction. */
#define REDMIN 0.7        /* Minimum factor for stepsize reduction. */
#define TINY 1.0e-30      /* Prevents division by zero. */
#define SCALMX 0.1        /* 1/SCALMX is the maximum factor by which a
                             stepsize can be increased. */

float **d,*x;             /* Pointers to matrix and vector used by pzextr or rzextr. */

void bsstep(float y[], float dydx[], int nv, float *xx, float htry, float eps,
    float yscal[], float *hdid, float *hnext,
    void (*derivs)(float, float [], float []))
Bulirsch-Stoer step with monitoring of local truncation error to ensure accuracy and adjust
stepsize.
Input are the dependent variable vector y[1..nv] and its derivative dydx[1..nv] at the starting value of the independent variable x. Also input are the stepsize to be attempted htry, the required accuracy eps, and the vector yscal[1..nv] against which the error is scaled. On output, y and x are replaced by their new values, hdid is the stepsize that was actually accomplished, and hnext is the estimated next stepsize. derivs is the user-supplied routine that computes the right-hand side derivatives. Be sure to set htry on successive steps to the value of hnext returned from the previous step, as is the case if the routine is called by odeint.
{
    void mmid(float y[], float dydx[], int nvar, float xs, float htot,
        int nstep, float yout[], void (*derivs)(float, float[], float[]));
    void pzextr(int iest, float xest, float yest[], float yz[], float dy[],
        int nv);
    int i,iq,k,kk,km;
    static int first=1,kmax,kopt;
    static float epsold = -1.0,xnew;
    float eps1,errmax,fact,h,red,scale,work,wrkmin,xest;
    float *err,*yerr,*ysav,*yseq;
    static float a[IMAXX+1];
    static float alf[KMAXX+1][KMAXX+1];
    static int nseq[IMAXX+1]={0,2,4,6,8,10,12,14,16,18};
    int reduct,exitflag=0;

    d=matrix(1,nv,1,KMAXX);
    err=vector(1,KMAXX);
    x=vector(1,KMAXX);
    yerr=vector(1,nv);
    ysav=vector(1,nv);
    yseq=vector(1,nv);
    if (eps != epsold) {                 /* A new tolerance, so reinitialize. */
        *hnext = xnew = -1.0e29;         /* "Impossible" values. */
        eps1=SAFE1*eps;
        a[1]=nseq[1]+1;                  /* Compute work coefficients A_k. */
        for (k=1;k<=KMAXX;k++) a[k+1]=a[k]+nseq[k+1];
        for (iq=2;iq<=KMAXX;iq++) {      /* Compute alpha(k,q). */
            for (k=1;k<iq;k++)
                alf[k][iq]=pow(eps1,(a[k+1]-a[iq+1])/
                    ((a[iq+1]-a[1]+1.0)*(2*k+1)));
        }
        epsold=eps;
        for (kopt=2;kopt<KMAXX;kopt++)   /* Determine optimal row number */
            if (a[kopt+1] > a[kopt]*alf[kopt-1][kopt]) break;  /* for convergence. */
        kmax=kopt;
    }
    h=htry;
    for (i=1;i<=nv;i++) ysav[i]=y[i];    /* Save the starting values. */
    if (*xx != xnew || h != (*hnext)) {  /* A new stepsize or a new integration: */
        first=1;                         /* re-establish the order window. */
        kopt=kmax;
    }
    reduct=0;
    for (;;) {
        for (k=1;k<=kmax;k++) {          /* Evaluate the sequence of modified */
            xnew=(*xx)+h;                /* midpoint integrations. */
            if (xnew == (*xx)) nrerror("step size underflow in bsstep");
            mmid(ysav,dydx,nv,*xx,h,nseq[k],yseq,derivs);
            xest=SQR(h/nseq[k]);         /* Squared, since error series is even. */
            pzextr(k,xest,yseq,y,yerr,nv);  /* Perform extrapolation. */
            if (k != 1) {                /* Compute normalized error estimate eps(k). */
                errmax=TINY;
                for (i=1;i<=nv;i++) errmax=FMAX(errmax,fabs(yerr[i]/yscal[i]));
                errmax /= eps;           /* Scale error relative to tolerance. */
                km=k-1;
                err[km]=pow(errmax/SAFE1,1.0/(2*km+1));
            }
            if (k != 1 && (k >= kopt-1 || first)) {  /* In order window. */
                if (errmax < 1.0) {      /* Converged. */
                    exitflag=1;
                    break;
                }
                if (k == kmax || k == kopt+1) {  /* Check for possible stepsize reduction. */
                    red=SAFE2/err[km];
                    break;
                }
                else if (k == kopt && alf[kopt-1][kopt] < err[km]) {
                    red=1.0/err[km];
                    break;
                }
                else if (kopt == kmax && alf[km][kmax-1] < err[km]) {
                    red=alf[km][kmax-1]*SAFE2/err[km];
                    break;
                }
                else if (alf[km][kopt] < err[km]) {
                    red=alf[km][kopt-1]/err[km];
                    break;
                }
            }
        }
        if (exitflag) break;
        red=FMIN(red,REDMIN);            /* Reduce stepsize by at least REDMIN */
        red=FMAX(red,REDMAX);            /* and at most REDMAX. */
        h *= red;
        reduct=1;
    }                                    /* Try again. */
    *xx=xnew;                            /* Successful step taken. */
    *hdid=h;
    first=0;
    wrkmin=1.0e35;                       /* Compute optimal row for convergence */
    for (kk=1;kk<=km;kk++) {             /* and corresponding stepsize. */
        fact=FMAX(err[kk],SCALMX);
        work=fact*a[kk+1];
        if (work < wrkmin) {
            scale=fact;
            wrkmin=work;
            kopt=kk+1;
        }
    }
    *hnext=h/scale;
    if (kopt >= k && kopt != kmax && !reduct) {
        /* Check for possible order increase, but not if stepsize was just reduced. */
        fact=FMAX(scale/alf[kopt-1][kopt],SCALMX);
        if (a[kopt+1]*fact <= wrkmin) {
            *hnext=h/fact;
            kopt++;
        }
    }
    free_vector(yseq,1,nv);
    free_vector(ysav,1,nv);
    free_vector(yerr,1,nv);
    free_vector(x,1,KMAXX);
    free_vector(err,1,KMAXX);
    free_matrix(d,1,nv,1,KMAXX);
}

The rationale for the order window is that if convergence appears to occur for k < q − 1 it is often spurious, resulting from some fortuitously small error estimate in the extrapolation. On the other hand, if you need to go beyond k = q + 1 to obtain convergence, your local model of the convergence behavior is obviously not very good and you need to cut the stepsize and reestablish it. In the routine bsstep, these various tests are actually carried out using the quantities

    ε(k) ≡ H/H_k = (err_k/ε)^{1/(2k+1)}                      (16.4.17)

called err[k] in the code.

The polynomial extrapolation routine is based on the same algorithm as polint of §3.1. It is simpler in that it is always extrapolating to zero, rather than to an arbitrary value. However, it is more complicated in that it must individually extrapolate each component of a vector of quantities.

#include "nrutil.h"

extern float **d,*x;                     /* Defined in bsstep. */

void pzextr(int iest, float xest, float yest[], float yz[], float dy[], int nv)
Add estimate number iest, taken at x = xest with function values yest[1..nv], to the
extrapolation tableau, and return in yz[1..nv] the current extrapolation to x = 0, with
estimated errors dy[1..nv].
{
    int j,k1;
    float delta,f1,f2,q,*c;

    c=vector(1,nv);
    x[iest]=xest;                        /* Save current independent variable. */
    for (j=1;j<=nv;j++) dy[j]=yz[j]=yest[j];
    if (iest == 1) {                     /* Store first estimate in first column. */
        for (j=1;j<=nv;j++) d[j][1]=yest[j];
    } else {
        for (j=1;j<=nv;j++) c[j]=yest[j];
        for (k1=1;k1<iest;k1++) {
            delta=1.0/(x[iest-k1]-xest);
            f1=xest*delta;
            f2=x[iest-k1]*delta;
            for (j=1;j<=nv;j++) {        /* Propagate tableau 1 diagonal more. */
                q=d[j][k1];
                d[j][k1]=dy[j];
                delta=c[j]-q;
                dy[j]=f1*delta;
                c[j]=f2*delta;
                yz[j] += dy[j];
            }
        }
        for (j=1;j<=nv;j++) d[j][iest]=dy[j];
    }
    free_vector(c,1,nv);
}

Current wisdom favors polynomial extrapolation over rational function extrapolation in the Bulirsch-Stoer method.















