Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books, diskettes, or CDROMs visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to trade@cup.cam.ac.uk (outside North America).

Note that systems where the right-hand side depends explicitly on x, f(y, x), can be handled by adding x to the list of dependent variables, so that the system to be solved is

    ( y )′    ( f )
    (   )  =  (   )                                       (16.6.19)
    ( x )     ( 1 )

Rosenbrock Methods

These methods have the advantage of being relatively simple to understand and implement. For moderate accuracies (≲ 10^−4 – 10^−5 in the error criterion) and moderate-sized systems (N ≲ 10), they are competitive with the more complicated algorithms. For more stringent parameters, Rosenbrock methods remain reliable; they merely become less efficient than competitors like the semi-implicit extrapolation method (see below).

A Rosenbrock method seeks a solution of the form

    y(x0 + h) = y0 + Σ_{i=1}^{s} c_i k_i                  (16.6.21)

where the corrections k_i are found by solving s linear equations that generalize the structure in (16.6.17):

    (1 − γh f′) · k_i = h f( y0 + Σ_{j=1}^{i−1} α_ij k_j ) + h f′ · Σ_{j=1}^{i−1} γ_ij k_j ,    i = 1, ..., s    (16.6.22)

Here we denote the Jacobian matrix by f′. The coefficients γ, c_i, α_ij, and γ_ij are fixed constants independent of the problem. If γ = γ_ij = 0, this is simply a Runge-Kutta scheme. Equations (16.6.22) can be solved successively for k_1, k_2, ....

Crucial to the success of a stiff integration scheme is an automatic stepsize adjustment algorithm. Kaps and Rentrop [2] discovered an embedded or Runge-Kutta-Fehlberg method as described in §16.2: Two estimates of the form (16.6.21) are computed, the "real" one y and a lower-order estimate ŷ with different coefficients ĉ_i, i = 1, ..., ŝ, where ŝ < s but the k_i are the same. The difference between y and ŷ leads to an estimate of the local truncation error, which can then be used for stepsize control. Kaps and Rentrop showed that the smallest value of s for which embedding is possible is s = 4, ŝ = 3, leading to a fourth-order method.

To minimize the matrix-vector multiplications on the right-hand side of (16.6.22), we rewrite the equations in terms of the quantities

    g_i = Σ_{j=1}^{i−1} γ_ij k_j + γ k_i                  (16.6.23)

The equations then take the form

    (1/(γh) − f′) · g_1 = f(y0)
    (1/(γh) − f′) · g_2 = f(y0 + a21 g_1) + c21 g_1/h
    (1/(γh) − f′) · g_3 = f(y0 + a31 g_1 + a32 g_2) + (c31 g_1 + c32 g_2)/h
    (1/(γh) − f′) · g_4 = f(y0 + a41 g_1 + a42 g_2 + a43 g_3) + (c41 g_1 + c42 g_2 + c43 g_3)/h    (16.6.24)

In our implementation stiff of the Kaps-Rentrop algorithm, we have carried out the replacement (16.6.19) explicitly in equations (16.6.24), so you need not concern yourself about it. Simply provide a routine (called derivs in stiff) that returns f (called dydx) as a function of x and y. Also supply a routine jacobn that returns f′ (dfdy) and ∂f/∂x (dfdx) as functions of x and y. If x does not occur explicitly on the right-hand side, then dfdx will be zero. Usually the Jacobian matrix will be available to you by analytic differentiation of the right-hand side f. If not, your routine will have to compute it by numerical differencing with appropriate increments Δy.

Kaps and Rentrop gave two different sets of parameters, which have slightly different stability properties. Several other sets have been proposed. Our default choice is that of Shampine [3], but we also give you one of the Kaps-Rentrop sets as an option. Some proposed parameter sets require function evaluations outside the domain of integration; we prefer to avoid that complication.

The calling sequence of stiff is exactly the same as the nonstiff routines given earlier in this chapter. It is thus "plug-compatible" with them in the general ODE integrating routine odeint.

Stepsize control depends on the fact that

    y_exact = y + O(h^5)
    y_exact = ŷ + O(h^4)                                  (16.6.25)

Thus

    |y − ŷ| = O(h^4)                                      (16.6.26)

Referring back to the steps leading from equation (16.2.4) to equation (16.2.10), we see that the new stepsize should be chosen as in equation (16.2.10) but with the exponents 1/4 and 1/5 replaced by 1/3 and 1/4, respectively.
Also, experience shows that it is wise to prevent too large a stepsize change in one step; otherwise we will probably have to undo the large change in the next step. We adopt 0.5 and 1.5 as the maximum allowed decrease and increase of h in one step.

    #include <math.h>
    #include "nrutil.h"
    #define SAFETY 0.9
    #define GROW 1.5
    #define PGROW -0.25
    #define SHRNK 0.5
    #define PSHRNK (-1.0/3.0)
    #define ERRCON 0.1296
    #define MAXTRY 40

Here NMAX is the maximum value of n; GROW and SHRNK are the largest and smallest factors by which the stepsize can change in one step; ERRCON equals (GROW/SAFETY) raised to the power (1/PGROW) and handles the case when errmax ≈ 0.

    #define GAM (1.0/2.0)
    #define A21 2.0
    #define A31 (48.0/25.0)
    #define A32 (6.0/25.0)
    #define C21 -8.0
    #define C31 (372.0/25.0)
    #define C32 (12.0/5.0)
    #define C41 (-112.0/125.0)
    #define C42 (-54.0/125.0)
    #define C43 (-2.0/5.0)
    #define B1 (19.0/9.0)
    #define B2 (1.0/2.0)
    #define B3 (25.0/108.0)
    #define B4 (125.0/108.0)
    #define E1 (17.0/54.0)
    #define E2 (7.0/36.0)
    #define E3 0.0
    #define E4 (125.0/108.0)
This compatibility requires, unfortunately, one slight anomaly: While the user-supplied routine derivs is a dummy argument (which can therefore have any actual name), the other user-supplied routine is not an argument and must be named (exactly) jacobn.

stiff begins by saving the initial values, in case the step has to be repeated because the error tolerance is exceeded.
The linear equations (16.6.24) are solved by first computing the LU decomposition of the matrix 1/(γh) − f′ using the routine ludcmp. Then the four g_i are found by back-substitution of the four different right-hand sides using lubksb. Note that each step of the integration requires one call to jacobn and three calls to derivs (one call to get dydx before calling stiff, and two calls inside stiff).
The reason only three calls are needed and not four is that the parameters have been chosen so that the last two calls in equation (16.6.24) are done with the same arguments. Counting the evaluation of the Jacobian matrix as roughly equivalent to N evaluations of the right-hand side f, we see that the Kaps-Rentrop scheme involves about N + 3 function evaluations per step. Note that if N is large and the Jacobian matrix is sparse, you should replace the LU decomposition by a suitable sparse matrix procedure.

    #define C1X (1.0/2.0)
    #define C2X (-3.0/2.0)
    #define C3X (121.0/50.0)
    #define C4X (29.0/250.0)
    #define A2X 1.0
    #define A3X (3.0/5.0)

    indx=ivector(1,n);
    a=matrix(1,n,1,n);
    dfdx=vector(1,n);
    dfdy=matrix(1,n,1,n);
    dysav=vector(1,n);
    err=vector(1,n);
    g1=vector(1,n);
    g2=vector(1,n);
    g3=vector(1,n);
    g4=vector(1,n);
    ysav=vector(1,n);
    xsav=(*x);                            /* Save initial values. */
    for (i=1;i<=n;i++) {
        ysav[i]=y[i];
        dysav[i]=dydx[i];
    }
    jacobn(xsav,ysav,dfdx,dfdy,n);        /* The user must supply this routine to return
                                             the n-by-n matrix dfdy and the vector dfdx. */
    h=htry;                               /* Set stepsize to the initial trial value. */
    for (jtry=1;jtry<=MAXTRY;jtry++) {
        for (i=1;i<=n;i++) {              /* Set up the matrix 1/(γh) − f′. */
            for (j=1;j<=n;j++) a[i][j] = -dfdy[i][j];
            a[i][i] += 1.0/(GAM*h);
        }
        ludcmp(a,n,indx,&d);              /* LU decomposition of the matrix. */
        for (i=1;i<=n;i++)                /* Set up right-hand side for g1. */
            g1[i]=dysav[i]+h*C1X*dfdx[i];
        lubksb(a,n,indx,g1);              /* Solve for g1. */
        for (i=1;i<=n;i++)                /* Compute intermediate values of y and x. */
            y[i]=ysav[i]+A21*g1[i];
        *x=xsav+A2X*h;
        (*derivs)(*x,y,dydx);             /* Compute dydx at the intermediate values. */
        for (i=1;i<=n;i++)                /* Set up right-hand side for g2. */
            g2[i]=dydx[i]+h*C2X*dfdx[i]+C21*g1[i]/h;
        lubksb(a,n,indx,g2);              /* Solve for g2. */
        for (i=1;i<=n;i++)                /* Compute intermediate values of y and x. */
            y[i]=ysav[i]+A31*g1[i]+A32*g2[i];















