(Continuation of the backtracking loop in lnsrch from the preceding page.)

	}
	alamin=TOLX/test;
	alam=1.0;                                /* Always try full Newton step first. */
	for (;;) {                               /* Start of iteration loop. */
		for (i=1;i<=n;i++) x[i]=xold[i]+alam*p[i];
		*f=(*func)(x);
		if (alam < alamin) {                 /* Convergence on ∆x. For zero finding, the calling
		                                        program should verify the convergence. */
			for (i=1;i<=n;i++) x[i]=xold[i];
			*check=1;
			return;
		} else if (*f <= fold+ALF*alam*slope) return;   /* Sufficient function decrease. */
		else {                               /* Backtrack. */
			if (alam == 1.0)
				tmplam = -slope/(2.0*(*f-fold-slope));  /* First time. */
			else {                           /* Subsequent backtracks. */
				rhs1 = *f-fold-alam*slope;
				rhs2=f2-fold-alam2*slope;
				a=(rhs1/(alam*alam)-rhs2/(alam2*alam2))/(alam-alam2);
				b=(-alam2*rhs1/(alam*alam)+alam*rhs2/(alam2*alam2))/(alam-alam2);
				if (a == 0.0) tmplam = -slope/(2.0*b);
				else {
					disc=b*b-3.0*a*slope;
					if (disc < 0.0) tmplam=0.5*alam;
					else if (b <= 0.0) tmplam=(-b+sqrt(disc))/(3.0*a);
					else tmplam=-slope/(b+sqrt(disc));
				}
				if (tmplam > 0.5*alam)
					tmplam=0.5*alam;         /* λ ≤ 0.5λ1. */
			}
		}
		alam2=alam;
		f2 = *f;
		alam=FMAX(tmplam,0.1*alam);          /* λ ≥ 0.1λ1. */
	}                                        /* Try again. */
}
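The coefficients a and b computed in the backtracking branch follow from modeling g(λ) ≡ f(xold + λp) by a cubic through the two most recent step lengths. The following sketch of that algebra is ours (λ1 and λ2 stand for the code's alam and alam2):

\[
g(\lambda) \approx a\lambda^{3} + b\lambda^{2} + g'(0)\,\lambda + g(0),
\qquad g(0) = \texttt{fold},\quad g'(0) = \texttt{slope}.
\]

Requiring the model to reproduce the two most recent values g(λ1) = *f and g(λ2) = f2 gives

\[
\begin{bmatrix} a \\ b \end{bmatrix}
= \frac{1}{\lambda_1-\lambda_2}
\begin{bmatrix} 1/\lambda_1^{2} & -1/\lambda_2^{2} \\ -\lambda_2/\lambda_1^{2} & \lambda_1/\lambda_2^{2} \end{bmatrix}
\cdot
\begin{bmatrix} g(\lambda_1)-g'(0)\,\lambda_1-g(0) \\ g(\lambda_2)-g'(0)\,\lambda_2-g(0) \end{bmatrix},
\]

and the minimum of the cubic lies at

\[
\lambda_{\min} = \frac{-b+\sqrt{b^{2}-3a\,g'(0)}}{3a},
\]

which the code evaluates in the numerically stable form -slope/(b+sqrt(disc)) when b > 0. On the first backtrack the quadratic model is used instead, whose minimizer -slope/(2(*f-fold-slope)) is the tmplam assigned when alam == 1.0.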
The Jacobian needed by newt is computed by finite differences in the routine fdjac (listed after newt below), which uses some of the techniques described in §5.7 for computing numerical derivatives. Of course, you can always replace fdjac with a routine that calculates the Jacobian analytically if this is easy for you to do.

	#include <math.h>
	#include "nrutil.h"
	#define MAXITS 200
	#define TOLF 1.0e-4
	#define TOLMIN 1.0e-6
	#define TOLX 1.0e-7
	#define STPMX 100.0

Here MAXITS is the maximum number of iterations; TOLF sets the convergence criterion on function values; TOLMIN sets the criterion for deciding whether spurious convergence to a minimum of fmin has occurred; TOLX is the convergence criterion on δx; STPMX is the scaled maximum step length allowed in line searches.

	int nn;                                  /* Global variables to communicate with fmin. */
	float *fvec;
	void (*nrfuncv)(int n, float v[], float f[]);
	#define FREERETURN {free_vector(fvec,1,n);free_vector(xold,1,n);\
		free_vector(p,1,n);free_vector(g,1,n);free_matrix(fjac,1,n,1,n);\
		free_ivector(indx,1,n);return;}

	void newt(float x[], int n, int *check,
		void (*vecfunc)(int, float [], float []))

Given an initial guess x[1..n] for a root in n dimensions, find the root by a globally convergent Newton's method. The vector of functions to be zeroed, called fvec[1..n] in the routine below, is returned by the user-supplied routine vecfunc(n,x,fvec). The output quantity check is false (0) on a normal return and true (1) if the routine has converged to a local minimum of the function fmin defined below. In this case try restarting from a different initial guess.

	{
		void fdjac(int n, float x[], float fvec[], float **df,
			void (*vecfunc)(int, float [], float []));
		float fmin(float x[]);
		void lnsrch(int n, float xold[], float fold, float g[], float p[], float x[],
			float *f, float stpmax, int *check, float (*func)(float []));
		void lubksb(float **a, int n, int *indx, float b[]);
		void ludcmp(float **a, int n, int *indx, float *d);
		int i,its,j,*indx;
		float d,den,f,fold,stpmax,sum,temp,test,**fjac,*g,*p,*xold;

		indx=ivector(1,n);
		fjac=matrix(1,n,1,n);
		g=vector(1,n);
		p=vector(1,n);
		xold=vector(1,n);
		fvec=vector(1,n);                    /* Define global variables. */
		nn=n;
		nrfuncv=vecfunc;
		f=fmin(x);                           /* fvec is also computed by this call. */
		test=0.0;                            /* Test for initial guess being a root. Use more
		                                        stringent test than simply TOLF. */
		for (i=1;i<=n;i++)
			if (fabs(fvec[i]) > test) test=fabs(fvec[i]);
		if (test < 0.01*TOLF) {
			*check=0;
			FREERETURN
		}
		for (sum=0.0,i=1;i<=n;i++) sum += SQR(x[i]);   /* Calculate stpmax for line searches. */
		stpmax=STPMX*FMAX(sqrt(sum),(float)n);
		for (its=1;its<=MAXITS;its++) {      /* Start of iteration loop. */
			fdjac(n,x,fvec,fjac,vecfunc);    /* If analytic Jacobian is available, you can
			                                    replace the routine fdjac below with your
			                                    own routine. */
			for (i=1;i<=n;i++) {             /* Compute ∇f for the line search. */
				for (sum=0.0,j=1;j<=n;j++) sum += fjac[j][i]*fvec[j];
				g[i]=sum;
			}
			for (i=1;i<=n;i++) xold[i]=x[i]; /* Store x and f. */
			fold=f;
			for (i=1;i<=n;i++) p[i] = -fvec[i];   /* Right-hand side for linear equations. */
			ludcmp(fjac,n,indx,&d);          /* Solve linear equations by LU decomposition. */
			lubksb(fjac,n,indx,p);
			lnsrch(n,xold,fold,g,p,x,&f,stpmax,check,fmin);
			                                 /* lnsrch returns new x and f. It also calculates
			                                    fvec at the new x when it calls fmin. */
			test=0.0;                        /* Test for convergence on function values. */
			for (i=1;i<=n;i++)
				if (fabs(fvec[i]) > test) test=fabs(fvec[i]);
			if (test < TOLF) {
				*check=0;
				FREERETURN
			}
			if (*check) {                    /* Check for gradient of f zero, i.e., spurious
			                                    convergence. */
				test=0.0;
				den=FMAX(f,0.5*n);
				for (i=1;i<=n;i++) {
					temp=fabs(g[i])*FMAX(fabs(x[i]),1.0)/den;
					if (temp > test) test=temp;
				}
				*check=(test < TOLMIN ? 1 : 0);
				FREERETURN
			}
			test=0.0;                        /* Test for convergence on δx. */
			for (i=1;i<=n;i++) {
				temp=(fabs(x[i]-xold[i]))/FMAX(fabs(x[i]),1.0);
				if (temp > test) test=temp;
			}
			if (test < TOLX) FREERETURN
		}
		nrerror("MAXITS exceeded in newt");
	}
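As a usage illustration only (not one of the book's listings), a minimal driver for newt might look like the sketch below. The two-equation test system is our own choice, and the sketch assumes that newt and its supporting routines (lnsrch, fdjac, fmin, ludcmp, lubksb) together with the nrutil utilities are compiled and linked in.

	#include <stdio.h>
	#include "nrutil.h"

	void newt(float x[], int n, int *check,
		void (*vecfunc)(int, float [], float []));

	/* Sample system (illustrative only): F1 = x1^2 + x2^2 - 2, F2 = x1*x2 - 1,
	   which has a root at x1 = x2 = 1. */
	void funcv(int n, float x[], float f[])
	{
		f[1]=x[1]*x[1]+x[2]*x[2]-2.0;
		f[2]=x[1]*x[2]-1.0;
	}

	int main(void)
	{
		int check;
		float *x=vector(1,2);

		x[1]=2.0;                            /* Initial guess, deliberately off the root. */
		x[2]=0.5;
		newt(x,2,&check,funcv);
		if (check)
			printf("newt converged to a local minimum of fmin; try another starting point\n");
		else
			printf("root found: x1 = %g  x2 = %g\n",x[1],x[2]);
		free_vector(x,1,2);
		return 0;
	}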
	#include <math.h>
	#include "nrutil.h"
	#define EPS 1.0e-4                       /* Approximate square root of the machine precision. */

	void fdjac(int n, float x[], float fvec[], float **df,
		void (*vecfunc)(int, float [], float []))

Computes forward-difference approximation to Jacobian. On input, x[1..n] is the point at which the Jacobian is to be evaluated, fvec[1..n] is the vector of function values at the point, and vecfunc(n,x,f) is a user-supplied routine that returns the vector of functions at x. On output, df[1..n][1..n] is the Jacobian array.

	{
		int i,j;
		float h,temp,*f;

		f=vector(1,n);
		for (j=1;j<=n;j++) {
			temp=x[j];
			h=EPS*fabs(temp);
			if (h == 0.0) h=EPS;
			x[j]=temp+h;                     /* Trick to reduce finite precision error. */
			h=x[j]-temp;
			(*vecfunc)(n,x,f);
			x[j]=temp;
			for (i=1;i<=n;i++) df[i][j]=(f[i]-fvec[i])/h;   /* Forward difference formula. */
		}
		free_vector(f,1,n);
	}
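If, as the text above notes, you prefer an analytic Jacobian, a drop-in replacement for fdjac only has to keep the same argument list so that the call inside newt can be pointed at it. A sketch for the sample two-equation system used in the driver above (ours, not a book listing):

	/* Analytic Jacobian for the sample system F1 = x1^2 + x2^2 - 2, F2 = x1*x2 - 1.
	   Same argument list as fdjac; fvec and vecfunc are unused here. */
	void jacobian(int n, float x[], float fvec[], float **df,
		void (*vecfunc)(int, float [], float []))
	{
		df[1][1]=2.0*x[1];                   /* dF1/dx1 */
		df[1][2]=2.0*x[2];                   /* dF1/dx2 */
		df[2][1]=x[2];                       /* dF2/dx1 */
		df[2][2]=x[1];                       /* dF2/dx2 */
	}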
	#include "nrutil.h"

	extern int nn;                           /* Defined in newt. */
	extern float *fvec;
	extern void (*nrfuncv)(int n, float v[], float f[]);

	float fmin(float x[])

Returns f = (1/2) F · F at x. The global pointer *nrfuncv points to a routine that returns the vector of functions at x. It is set to point to a user-supplied routine in the calling program. Global variables also communicate the function values back to the calling program.

	{
		int i;
		float sum;

		(*nrfuncv)(nn,x,fvec);
		for (sum=0.0,i=1;i<=nn;i++) sum += SQR(fvec[i]);
		return 0.5*sum;
	}

The routine newt assumes that typical values of all components of x and of F are of order unity, and it can fail if this assumption is badly violated.
You should rescale the variables by their typical values before invoking newt if this problem occurs.

Multidimensional Secant Methods: Broyden's Method

Denoting the approximation to the Jacobian used at the ith step by B_i, the quasi-Newton step δx_i is the solution of

\[
\mathbf{B}_i \cdot \delta\mathbf{x}_i = -\mathbf{F}_i \tag{9.7.15}
\]

where δx_i = x_{i+1} − x_i (cf. equation 9.7.3). The quasi-Newton or secant condition is that B_{i+1} satisfy

\[
\mathbf{B}_{i+1} \cdot \delta\mathbf{x}_i = \delta\mathbf{F}_i \tag{9.7.16}
\]

where δF_i = F_{i+1} − F_i.
This is the generalization of the one-dimensional secant approximation to the derivative, δF/δx. However, equation (9.7.16) does not determine B_{i+1} uniquely in more than one dimension.

Many different auxiliary conditions to pin down B_{i+1} have been explored, but the best-performing algorithm in practice results from Broyden's formula. This formula is based on the idea of getting B_{i+1} by making the least change to B_i consistent with the secant equation (9.7.16). Broyden showed that the resulting formula is

\[
\mathbf{B}_{i+1} = \mathbf{B}_i
+ \frac{(\delta\mathbf{F}_i - \mathbf{B}_i\cdot\delta\mathbf{x}_i)\otimes\delta\mathbf{x}_i}
       {\delta\mathbf{x}_i\cdot\delta\mathbf{x}_i} \tag{9.7.17}
\]

You can easily check that B_{i+1} satisfies (9.7.16).

Early implementations of Broyden's method used the Sherman-Morrison formula, equation (2.7.2), to invert equation (9.7.17) analytically,

\[
\mathbf{B}_{i+1}^{-1} = \mathbf{B}_i^{-1}
+ \frac{(\delta\mathbf{x}_i - \mathbf{B}_i^{-1}\cdot\delta\mathbf{F}_i)
        \otimes(\delta\mathbf{x}_i\cdot\mathbf{B}_i^{-1})}
       {\delta\mathbf{x}_i\cdot\mathbf{B}_i^{-1}\cdot\delta\mathbf{F}_i} \tag{9.7.18}
\]

Then instead of solving equation (9.7.3) by, e.g., LU decomposition, one determined

\[
\delta\mathbf{x}_i = -\mathbf{B}_i^{-1}\cdot\mathbf{F}_i \tag{9.7.19}
\]

by matrix multiplication in O(N^2) operations.
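To make the algebra of equations (9.7.17)-(9.7.19) concrete, here is a minimal sketch of a single quasi-Newton step that carries the approximate inverse Jacobian and applies the Sherman-Morrison update directly. The routine name and workspace handling are ours; it deliberately omits the line search, the convergence tests, and the safeguards (e.g., against a vanishing denominator or an uphill step) that a production implementation of Broyden's method needs.

	#include "nrutil.h"

	/* One quasi-Newton step using the inverse-Jacobian update (9.7.18) and the step (9.7.19).
	   Binv[1..n][1..n] holds the current approximation to B^-1, x[1..n] the current point and
	   fvec[1..n] the current F(x); all three are updated in place. */
	void broyden_step(int n, float **Binv, float x[], float fvec[],
		void (*vecfunc)(int, float [], float []))
	{
		int i,j;
		float denom;
		float *dx=vector(1,n),*dF=vector(1,n),*u=vector(1,n),*w=vector(1,n),*fnew=vector(1,n);

		for (i=1;i<=n;i++) {                 /* delta x = -B^-1 . F   (9.7.19) */
			dx[i]=0.0;
			for (j=1;j<=n;j++) dx[i] -= Binv[i][j]*fvec[j];
		}
		for (i=1;i<=n;i++) x[i] += dx[i];    /* Take the full step (no line search here). */
		(*vecfunc)(n,x,fnew);
		for (i=1;i<=n;i++) dF[i]=fnew[i]-fvec[i];   /* delta F */
		for (i=1;i<=n;i++) {                 /* u = delta x - B^-1 . delta F */
			u[i]=dx[i];
			for (j=1;j<=n;j++) u[i] -= Binv[i][j]*dF[j];
		}
		for (j=1;j<=n;j++) {                 /* w = (delta x) . B^-1, a row vector. */
			w[j]=0.0;
			for (i=1;i<=n;i++) w[j] += dx[i]*Binv[i][j];
		}
		for (denom=0.0,i=1;i<=n;i++) denom += w[i]*dF[i];   /* delta x . B^-1 . delta F */
		for (i=1;i<=n;i++)                   /* Rank-one update (9.7.18). */
			for (j=1;j<=n;j++) Binv[i][j] += u[i]*w[j]/denom;
		for (i=1;i<=n;i++) fvec[i]=fnew[i];  /* Function values at the new point. */
		free_vector(fnew,1,n); free_vector(w,1,n); free_vector(u,1,n);
		free_vector(dF,1,n); free_vector(dx,1,n);
	}

A practical implementation would typically also reinitialize the approximate Jacobian from finite differences (e.g., with fdjac) whenever the update denominator becomes too small or the step fails to decrease f = (1/2) F · F.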















