Such methods can still occasionally fail by coming to rest on a local minimum of F, but they often succeed where a direct attack via Newton's method alone fails. The next section deals with these methods.

CITED REFERENCES AND FURTHER READING:

Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), Chapter 14. [1]

Ortega, J., and Rheinboldt, W. 1970, Iterative Solution of Nonlinear Equations in Several Variables (New York: Academic Press).

Ostrowski, A.M. 1966, Solutions of Equations and Systems of Equations, 2nd ed. (New York: Academic Press).

9.7 Globally Convergent Methods for Nonlinear Systems of Equations

We have seen that Newton's method for solving nonlinear equations has an unfortunate tendency to wander off into the wild blue yonder if the initial guess is not sufficiently close to the root. A global method is one that converges to a solution from almost any starting point. In this section we will develop an algorithm that combines the rapid local convergence of Newton's method with a globally convergent strategy that will guarantee some progress towards the solution at each iteration. The algorithm is closely related to the quasi-Newton method of minimization which we will describe in §10.7.

Recall our discussion of §9.6: the Newton step for the set of equations

    F(x) = 0                                                (9.7.1)

is

    x_new = x_old + δx                                      (9.7.2)

where

    δx = −J⁻¹ · F                                           (9.7.3)

Here J is the Jacobian matrix.
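In code one would not form J⁻¹ explicitly; equation (9.7.3) is obtained by solving the linear system J · δx = −F. The following is a minimal sketch of that step, assuming the usual Numerical Recipes conventions (unit-offset float vectors and matrices from nrutil.h, and the ludcmp/lubksb pair for LU decomposition and back-substitution). The wrapper name newton_step is ours, for illustration only; it is not a routine from the book.

#include "nrutil.h"

void ludcmp(float **a, int n, int *indx, float *d);
void lubksb(float **a, int n, int *indx, float b[]);

/* Sketch: given the Jacobian fjac[1..n][1..n] and the function values fvec[1..n]
   at the current point, overwrite p[1..n] with the Newton step delta_x = -J^{-1}.F,
   computed by LU decomposition rather than by forming the inverse.
   Note that ludcmp destroys fjac. */
void newton_step(float **fjac, float fvec[], float p[], int n)
{
    int i, *indx;
    float d;

    indx = ivector(1, n);
    for (i = 1; i <= n; i++) p[i] = -fvec[i];   /* right-hand side -F */
    ludcmp(fjac, n, indx, &d);                  /* factor J in place */
    lubksb(fjac, n, indx, p);                   /* back-substitute: p now holds delta_x */
    free_ivector(indx, 1, n);
}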
How do we decide whether to accept the Newton step δx? A reasonable strategy is to require that the step decrease |F|² = F · F. This is the same requirement we would impose if we were trying to minimize

    f = ½ F · F                                             (9.7.4)

(The ½ is for later convenience.) Every solution to (9.7.1) minimizes (9.7.4), but there may be local minima of (9.7.4) that are not solutions to (9.7.1). Thus, as already mentioned, simply applying one of our minimum-finding algorithms from Chapter 10 to (9.7.4) is not a good idea.

To develop a better strategy, note that the Newton step (9.7.3) is a descent direction for f:

    ∇f · δx = (F · J) · (−J⁻¹ · F) = −F · F < 0             (9.7.5)

Thus our strategy is quite simple: We always first try the full Newton step, because once we are close enough to the solution we will get quadratic convergence. However, we check at each iteration that the proposed step reduces f. If not, we backtrack along the Newton direction until we have an acceptable step. Because the Newton step is a descent direction for f, we are guaranteed to find an acceptable step by backtracking. We will discuss the backtracking algorithm in more detail below.

Note that this method essentially minimizes f by taking Newton steps designed to bring F to zero. This is not equivalent to minimizing f directly by taking Newton steps designed to bring ∇f to zero. While the method can still occasionally fail by landing on a local minimum of f, this is quite rare in practice. The routine newt below will warn you if this happens. The remedy is to try a new starting point.
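Before turning to the details of the line search, it may help to see the whole strategy in miniature. The toy program below is only an illustration, not the newt routine developed later in the chapter: it solves a hypothetical two-equation system by always proposing the full Newton step and crudely halving λ whenever that step fails to reduce f. The lnsrch routine developed below replaces the crude halving with careful backtracking. The system, the starting guess, and all names in this program are our own choices, and plain zero-offset arrays are used instead of the nrutil conventions purely to keep the example short.

/* Toy illustration of the global strategy (not the book's newt):
   solve F1 = x0^2 + x1^2 - 2 = 0, F2 = x0 - x1 = 0 (a root is x0 = x1 = 1)
   by full Newton steps, shortening the step only when it fails to reduce
   f = (1/2) F.F. */
#include <stdio.h>
#include <math.h>

static void eval_F(const double x[2], double F[2]) {
    F[0] = x[0]*x[0] + x[1]*x[1] - 2.0;
    F[1] = x[0] - x[1];
}

static double fhalf(const double F[2]) {            /* f = (1/2) F.F, eq (9.7.4) */
    return 0.5*(F[0]*F[0] + F[1]*F[1]);
}

int main(void) {
    double x[2] = {3.0, -1.0};                      /* deliberately poor starting guess */
    double F[2], J[2][2], p[2], xnew[2], Fnew[2];
    int it, i;

    for (it = 0; it < 50; it++) {
        eval_F(x, F);
        double f = fhalf(F);
        if (sqrt(2.0*f) < 1.0e-12) break;           /* |F| small enough: converged */

        J[0][0] = 2.0*x[0]; J[0][1] = 2.0*x[1];     /* analytic Jacobian of this toy F */
        J[1][0] = 1.0;      J[1][1] = -1.0;

        /* full Newton step p = -J^{-1}.F, here by Cramer's rule (fine for 2x2) */
        double det = J[0][0]*J[1][1] - J[0][1]*J[1][0];
        p[0] = (-F[0]*J[1][1] + F[1]*J[0][1])/det;
        p[1] = ( F[0]*J[1][0] - F[1]*J[0][0])/det;

        double lambda = 1.0;                        /* always try the full step first */
        for (;;) {
            for (i = 0; i < 2; i++) xnew[i] = x[i] + lambda*p[i];
            eval_F(xnew, Fnew);
            if (fhalf(Fnew) < f) break;             /* step reduces f: accept it */
            lambda *= 0.5;                          /* otherwise backtrack; lnsrch below
                                                       does this far more carefully */
            if (lambda < 1.0e-7) {                  /* no acceptable step found: stay put */
                for (i = 0; i < 2; i++) xnew[i] = x[i];
                break;
            }
        }
        for (i = 0; i < 2; i++) x[i] = xnew[i];
    }
    printf("after %d iterations: x = (%.12f, %.12f)\n", it, x[0], x[1]);
    return 0;
}

From the poor starting guess the first step is shortened by the backtracking loop; once the iterates are close to the root the full Newton step is always accepted and the convergence is quadratic.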
Line Searches and Backtracking

When we are not close enough to the minimum of f, taking the full Newton step p = δx need not decrease the function; we may move too far for the quadratic approximation to be valid. All we are guaranteed is that initially f decreases as we move in the Newton direction. So the goal is to move to a new point x_new along the direction of the Newton step p, but not necessarily all the way:

    x_new = x_old + λp,      0 < λ ≤ 1                      (9.7.6)

The aim is to find λ so that f(x_old + λp) has decreased sufficiently. Until the early 1970s, standard practice was to choose λ so that x_new exactly minimizes f in the direction p. However, we now know that it is extremely wasteful of function evaluations to do so. A better strategy is as follows: Since p is always the Newton direction in our algorithms, we first try λ = 1, the full Newton step. This will lead to quadratic convergence when x is sufficiently close to the solution. However, if f(x_new) does not meet our acceptance criteria, we backtrack along the Newton direction, trying a smaller value of λ, until we find a suitable point. Since the Newton direction is a descent direction, we are guaranteed to decrease f for sufficiently small λ.

What should the criterion for accepting a step be? It is not sufficient to require merely that f(x_new) < f(x_old). This criterion can fail to converge to a minimum of f in one of two ways. First, it is possible to construct a sequence of steps satisfying this criterion with f decreasing too slowly relative to the step lengths. Second, one can have a sequence where the step lengths are too small relative to the initial rate of decrease of f. (For examples of such sequences, see [1], p. 117.)

A simple way to fix the first problem is to require the average rate of decrease of f to be at least some fraction α of the initial rate of decrease ∇f · p:

    f(x_new) ≤ f(x_old) + α ∇f · (x_new − x_old)            (9.7.7)

Here the parameter α satisfies 0 < α < 1. We can get away with quite small values of α; α = 10⁻⁴ is a good choice. The second problem can be fixed by requiring the rate of decrease of f at x_new to be greater than some fraction β of the rate of decrease of f at x_old. In practice, we will not need to impose this second constraint because our backtracking algorithm will have a built-in cutoff to avoid taking steps that are too small.
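In the lnsrch listing later in this section, the step actually taken is x_new = x_old + λp, so the term ∇f · (x_new − x_old) in (9.7.7) is just λ times the precomputed directional derivative slope = ∇f · p, and the acceptance test becomes a single comparison. Here is a minimal sketch of that check, with a helper name of our own choosing; the constant ALF plays the role of α, as in the listing.

/* Sketch of the sufficient-decrease test (9.7.7). With x_new = x_old + lambda*p,
   grad f . (x_new - x_old) equals lambda times the directional derivative
   slope = grad f . p (negative for a descent direction), so the test is a
   single comparison. */
#define ALF 1.0e-4

static int step_acceptable(double fnew, double fold, double lambda, double slope)
{
    return fnew <= fold + ALF*lambda*slope;
}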
Here is the strategy for a practical backtracking routine: Define

    g(λ) ≡ f(x_old + λp)                                    (9.7.8)

so that

    g′(λ) = ∇f · p                                          (9.7.9)

If we need to backtrack, then we model g with the most current information we have and choose λ to minimize the model. We start with g(0) and g′(0) available. The first step is always the Newton step, λ = 1. If this step is not acceptable, we have available g(1) as well. We can therefore model g(λ) as a quadratic:

    g(λ) ≈ [g(1) − g(0) − g′(0)]λ² + g′(0)λ + g(0)          (9.7.10)

Taking the derivative of this quadratic, we find that it is a minimum when

    λ = −g′(0) / (2[g(1) − g(0) − g′(0)])                   (9.7.11)

Since the Newton step failed, we can show that λ ≲ 1/2 for small α. We need to guard against too small a value of λ, however. We set λ_min = 0.1.

On second and subsequent backtracks, we model g as a cubic in λ, using the previous value g(λ1) and the second most recent value g(λ2):

    g(λ) = aλ³ + bλ² + g′(0)λ + g(0)                        (9.7.12)

Requiring this expression to give the correct values of g at λ1 and λ2 gives two equations that can be solved for the coefficients a and b:

    [ a ]                   [  1/λ1²    −1/λ2² ]   [ g(λ1) − g′(0)λ1 − g(0) ]
    [   ]  =  1/(λ1 − λ2) · [                   ] · [                        ]      (9.7.13)
    [ b ]                   [ −λ2/λ1²    λ1/λ2² ]   [ g(λ2) − g′(0)λ2 − g(0) ]

The minimum of the cubic (9.7.12) is at

    λ = [−b + √(b² − 3a·g′(0))] / (3a)                      (9.7.14)

We enforce that λ lie between λ_max = 0.5λ1 and λ_min = 0.1λ1.

The routine has two additional features, a minimum step length alamin and a maximum step length stpmax. lnsrch will also be used in the quasi-Newton minimization routine dfpmin in the next section.
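The arithmetic of (9.7.10)-(9.7.14) is compact enough to write out as a sketch of a single backtracking update. This is not the lnsrch listing that follows; lnsrch folds the same formulas into its main loop and guards the degenerate cases somewhat differently. The helper name, argument list, and the specific safeguards below are our own choices for illustration.

#include <math.h>

/* One backtracking update, following the models of (9.7.10)-(9.7.14):
   on the first backtrack fit a quadratic through g(0), g'(0), g(lam1);
   afterwards fit a cubic through g(0), g'(0), g(lam1), g(lam2).
   Returns the new trial lambda, clamped to [0.1*lam1, 0.5*lam1].
   g0 = g(0), gp0 = g'(0) (negative); (lam1, g1) is the most recent trial,
   (lam2, g2) the one before it. */
static double next_lambda(double g0, double gp0,
                          double lam1, double g1,
                          double lam2, double g2,
                          int first_backtrack)
{
    double lam;

    if (first_backtrack) {
        /* first backtrack: lam1 is 1.0, the failed full Newton step, so the
           quadratic model (9.7.10) applies; its minimum is (9.7.11) */
        lam = -gp0 / (2.0*(g1 - g0 - gp0));
    } else {
        /* cubic model (9.7.12); coefficients a, b from (9.7.13) */
        double r1 = g1 - gp0*lam1 - g0;
        double r2 = g2 - gp0*lam2 - g0;
        double a = (r1/(lam1*lam1) - r2/(lam2*lam2)) / (lam1 - lam2);
        double b = (-lam2*r1/(lam1*lam1) + lam1*r2/(lam2*lam2)) / (lam1 - lam2);
        if (a == 0.0) {
            lam = -gp0/(2.0*b);                    /* cubic degenerates to a quadratic */
        } else {
            double disc = b*b - 3.0*a*gp0;         /* discriminant in (9.7.14) */
            if (disc < 0.0) lam = 0.5*lam1;        /* no real minimum: just halve */
            else lam = (-b + sqrt(disc))/(3.0*a);  /* minimum of the cubic, (9.7.14) */
        }
    }
    if (lam > 0.5*lam1) lam = 0.5*lam1;            /* lambda_max = 0.5*lambda_1 */
    if (lam < 0.1*lam1) lam = 0.1*lam1;            /* lambda_min = 0.1*lambda_1 */
    return lam;
}

On the first backtrack λ1 is the failed full Newton step, λ1 = 1, so the quadratic branch reproduces (9.7.11) exactly.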
#include <math.h>
#include "nrutil.h"
#define ALF 1.0e-4      /* Ensures sufficient decrease in function value. */
#define TOLX 1.0e-7     /* Convergence criterion on ∆x. */

void lnsrch(int n, float xold[], float fold, float g[], float p[], float x[],
    float *f, float stpmax, int *check, float (*func)(float []))
/* Given an n-dimensional point xold[1..n], the value of the function and gradient there,
   fold and g[1..n], and a direction p[1..n], finds a new point x[1..n] along the direction
   p from xold where the function func has decreased "sufficiently." The new function value
   is returned in f. stpmax is an input quantity that limits the length of the steps so that
   you do not try to evaluate the function in regions where it is undefined or subject to
   overflow. p is usually the Newton direction. The output quantity check is false (0) on a
   normal exit. It is true (1) when x is too close to xold. In a minimization algorithm, this
   usually signals convergence and can be ignored. However, in a zero-finding algorithm the
   calling program should check whether the convergence is spurious. Some "difficult"
   problems may require double precision in this routine. */
{
    int i;
    float a,alam,alam2,alamin,b,disc,f2,rhs1,rhs2,slope,sum,temp,test,tmplam;

    *check=0;
    for (sum=0.0,i=1;i<=n;i++) sum += p[i]*p[i];
    sum=sqrt(sum);
    if (sum > stpmax)
        for (i=1;i<=n;i++) p[i] *= stpmax/sum;    /* Scale if attempted step is too big. */
    for (slope=0.0,i=1;i<=n;i++)
        slope += g[i]*p[i];
    if (slope >= 0.0) nrerror("Roundoff problem in lnsrch.");
    test=0.0;                                     /* Compute lambda_min. */
    for (i=1;i<=n;i++) {
        temp=fabs(p[i])/FMAX(fabs(xold[i]),1.0);
        if (temp > test) test=temp;
    }
    /* ... (the remainder of the listing, the backtracking loop that chooses lambda
       as described above, is not reproduced in this excerpt) ... */

Here now is the globally convergent Newton routine newt that uses lnsrch. A feature of newt is that you need not supply the Jacobian matrix analytically; the routine will attempt to compute the necessary partial derivatives of F by finite differences in the routine fdjac.
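The newt and fdjac listings themselves are not part of this excerpt, but the technique the text ascribes to fdjac, approximating the Jacobian one column at a time by forward differences, is simple enough to sketch. The routine below is our own plain-C illustration of that technique, not the book's fdjac: the function signature, the row-major storage of the Jacobian, and the step-size choice h ≈ √ε·|x_j| are all assumptions.

#include <math.h>
#include <float.h>
#include <stdlib.h>

/* Forward-difference approximation to the Jacobian J[i][j] = dF_i/dx_j, in the
   spirit of the finite-difference routine the text describes (not the book's
   fdjac listing). vecfunc evaluates F(x) into its third argument; fvec must
   already hold F at the unperturbed x. x is perturbed one component at a time
   and restored afterwards. jac is stored row-major as an n*n array: jac[i*n + j]. */
static void jacobian_fd(int n, double x[], const double fvec[], double jac[],
                        void (*vecfunc)(int, const double [], double []))
{
    int i, j;
    double h, temp, *fnew = malloc(n*sizeof(double));

    for (j = 0; j < n; j++) {
        temp = x[j];
        h = sqrt(DBL_EPSILON)*fabs(temp);     /* standard forward-difference step */
        if (h == 0.0) h = sqrt(DBL_EPSILON);
        x[j] = temp + h;                      /* trick: recompute h so that       */
        h = x[j] - temp;                      /* x[j] - temp is exactly representable */
        vecfunc(n, x, fnew);
        x[j] = temp;                          /* restore x[j] */
        for (i = 0; i < n; i++)
            jac[i*n + j] = (fnew[i] - fvec[i])/h;   /* column j of the Jacobian */
    }
    free(fnew);
}

Each column costs one extra evaluation of F, so building the full Jacobian this way takes n function evaluations per iteration.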















