    indx=ivector(1,n);
    p=vector(1,n);
    fvec=vector(1,n);
    fjac=matrix(1,n,1,n);
    for (k=1;k<=ntrial;k++) {
        usrfun(x,n,fvec,fjac);                    /* User function supplies function values at x in
                                                     fvec and Jacobian matrix in fjac. */
        errf=0.0;
        for (i=1;i<=n;i++) errf += fabs(fvec[i]); /* Check function convergence. */
        if (errf <= tolf) FREERETURN
        for (i=1;i<=n;i++) p[i] = -fvec[i];       /* Right-hand side of linear equations. */
        ludcmp(fjac,n,indx,&d);                   /* Solve linear equations using LU decomposition. */
        lubksb(fjac,n,indx,p);
        errx=0.0;                                 /* Check root convergence. */
        for (i=1;i<=n;i++) {                      /* Update solution. */
            errx += fabs(p[i]);
            x[i] += p[i];
        }
        if (errx <= tolx) FREERETURN
    }
    FREERETURN
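The fragment above is the tail of the §9.6 Newton-Raphson routine for nonlinear systems (its opening lines precede this excerpt). Everything problem-specific is delegated to the user-supplied routine usrfun(x,n,fvec,fjac), which must load the function values at x into fvec and the Jacobian into fjac. As an illustration only (the particular 2x2 system, the float argument types, and the 1-based indexing are assumptions made here, not part of the book's listing), such a routine might look like:

void usrfun(float x[], int n, float fvec[], float **fjac)
/* Hypothetical user routine for the 2x2 system
       F1 = x1^2 + x2^2 - 2 = 0
       F2 = x1*x2 - 1 = 0
   which has a root at x1 = x2 = 1.  Uses the same 1-based
   indexing convention as the loop above; n is 2 here.       */
{
    fvec[1] = x[1]*x[1] + x[2]*x[2] - 2.0;
    fvec[2] = x[1]*x[2] - 1.0;
    fjac[1][1] = 2.0*x[1];   fjac[1][2] = 2.0*x[2];   /* dF1/dx1, dF1/dx2 */
    fjac[2][1] = x[2];       fjac[2][2] = x[1];       /* dF2/dx1, dF2/dx2 */
}

With such a routine linked in, each pass of the loop above forms the linear system J · δx = −F and solves it by LU decomposition to obtain the Newton update.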
While such methods can still occasionally fail by coming to rest on a local minimum of F, they often succeed where a direct attack via Newton's method alone fails. The next section deals with these methods.

CITED REFERENCES AND FURTHER READING:
Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), Chapter 14. [1]
Ostrowski, A.M. 1966, Solutions of Equations and Systems of Equations, 2nd ed. (New York: Academic Press).
Ortega, J., and Rheinboldt, W. 1970, Iterative Solution of Nonlinear Equations in Several Variables (New York: Academic Press).

9.7 Globally Convergent Methods for Nonlinear Systems of Equations

We have seen that Newton's method for solving nonlinear equations has an unfortunate tendency to wander off into the wild blue yonder if the initial guess is not sufficiently close to the root. A global method is one that converges to a solution from almost any starting point.
In this section we will develop an algorithm that combines the rapid local convergence of Newton's method with a globally convergent strategy that will guarantee some progress towards the solution at each iteration. The algorithm is closely related to the quasi-Newton method of minimization which we will describe in §10.7.

Recall our discussion of §9.6: the Newton step for the set of equations

    F(x) = 0                                        (9.7.1)

is

    x_new = x_old + δx                              (9.7.2)

where

    δx = −J⁻¹ · F                                   (9.7.3)

Here J is the Jacobian matrix. How do we decide whether to accept the Newton step δx? A reasonable strategy is to require that the step decrease |F|² = F · F. This is the same requirement we would impose if we were trying to minimize

    f = ½ F · F                                     (9.7.4)

(The ½ is for later convenience.) Every solution to (9.7.1) minimizes (9.7.4), but there may be local minima of (9.7.4) that are not solutions to (9.7.1). Thus, as already mentioned, simply applying one of our minimum finding algorithms from Chapter 10 to (9.7.4) is not a good idea.

To develop a better strategy, note that the Newton step (9.7.3) is a descent direction for f:

    ∇f · δx = (F · J) · (−J⁻¹ · F) = −F · F < 0     (9.7.5)
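To make the acceptance criterion concrete, here is a minimal self-contained sketch, not the book's routine, of a Newton iteration that scales the step by a factor lambda and halves lambda until f = ½ F · F actually decreases. The 2x2 test system, the closed-form 2x2 solve, and the crude step-halving rule are all illustrative assumptions of this sketch.

#include <stdio.h>
#include <math.h>

/* Illustrative 2x2 system F(x) = 0 with a root at x = (1,1):
     F1 = x1^2 + x2^2 - 2
     F2 = exp(x1 - 1) + x2^3 - 2                                 */
static void fvec2(const double x[2], double f[2])
{
    f[0] = x[0]*x[0] + x[1]*x[1] - 2.0;
    f[1] = exp(x[0]-1.0) + x[1]*x[1]*x[1] - 2.0;
}

static void fjac2(const double x[2], double J[2][2])    /* Jacobian of F */
{
    J[0][0] = 2.0*x[0];        J[0][1] = 2.0*x[1];
    J[1][0] = exp(x[0]-1.0);   J[1][1] = 3.0*x[1]*x[1];
}

static double fhalf(const double f[2])                  /* f = (1/2) F.F, eq. (9.7.4) */
{
    return 0.5*(f[0]*f[0] + f[1]*f[1]);
}

int main(void)
{
    double x[2] = {2.0, 0.5};          /* starting guess */
    double f[2], J[2][2], dx[2], xt[2], ft[2];
    int k;

    for (k = 0; k < 30; k++) {
        fvec2(x, f);
        fjac2(x, J);
        double f0 = fhalf(f);
        /* Newton step dx = -J^{-1}.F, eq. (9.7.3), via the closed-form 2x2 solve. */
        double det = J[0][0]*J[1][1] - J[0][1]*J[1][0];
        dx[0] = (-f[0]*J[1][1] + f[1]*J[0][1]) / det;
        dx[1] = ( f[0]*J[1][0] - f[1]*J[0][0]) / det;
        /* Accept x + lambda*dx only if it decreases f; otherwise halve lambda. */
        double lambda = 1.0;
        for (;;) {
            xt[0] = x[0] + lambda*dx[0];
            xt[1] = x[1] + lambda*dx[1];
            fvec2(xt, ft);
            if (fhalf(ft) < f0 || lambda < 1.0e-10) break;
            lambda *= 0.5;
        }
        x[0] = xt[0];
        x[1] = xt[1];
        if (fabs(ft[0]) + fabs(ft[1]) < 1.0e-12) break;   /* function convergence */
    }
    printf("approximate root: %g %g\n", x[0], x[1]);
    return 0;
}

Because (9.7.5) guarantees that δx is a descent direction for f, a sufficiently small lambda will always reduce f; the crude halving here merely stands in for the more careful backtracking strategy developed later in this section.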















