Hence one step of Newton-Raphson, taking a guess $x_k$ into a new guess $x_{k+1}$, can be written as

$$x_{k+1} = x_k - \frac{P(x_k)}{P'(x_k) - P(x_k)\sum_{i=1}^{j-1}(x_k - x_i)^{-1}} \qquad (9.5.29)$$

This equation, if used with $i$ ranging over the roots already polished, will prevent a tentative root from spuriously hopping to another one's true root. It is an example of so-called zero suppression as an alternative to true deflation. Muller's method, which was described above, can also be useful at the polishing stage.
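A minimal sketch of this update (an added illustration in ordinary zero-offset C, not one of the book's routines), assuming the caller supplies $P(x_k)$ and $P'(x_k)$ along with the roots polished so far:

#include <stddef.h>

/* One zero-suppressed Newton-Raphson step, equation (9.5.29).
   pk = P(x_k) and dpk = P'(x_k) are evaluated by the caller;
   xi[0..nfound-1] holds the roots already polished. */
double suppressed_newton_step(double xk, double pk, double dpk,
                              const double xi[], size_t nfound)
{
    double sum = 0.0;
    size_t i;
    for (i = 0; i < nfound; i++)
        sum += 1.0 / (xk - xi[i]);       /* sum of 1/(x_k - x_i) */
    return xk - pk / (dpk - pk * sum);   /* new guess x_{k+1} */
}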
CITED REFERENCES AND FURTHER READING:
Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), Chapter 7. [1]
Peters, G., and Wilkinson, J.H. 1971, Journal of the Institute of Mathematics and its Applications, vol. 8, pp. 16–35. [2]
IMSL Math/Library Users Manual (IMSL Inc., 2500 CityWest Boulevard, Houston TX 77042). [3]
Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), §8.9–8.13. [4]
Adams, D.A. 1967, Communications of the ACM, vol. 10, pp. 655–658. [5]
Johnson, L.W., and Riess, R.D. 1982, Numerical Analysis, 2nd ed. (Reading, MA: Addison-Wesley), §4.4.3. [6]
Henrici, P. 1974, Applied and Computational Complex Analysis, vol. 1 (New York: Wiley).
Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), §§5.5–5.9.

9.6 Newton-Raphson Method for Nonlinear Systems of Equations

We make an extreme, but wholly defensible, statement: There are no good, general methods for solving systems of more than one nonlinear equation.
Furthermore, it is not hard to see why (very likely) there never will be any good, general methods: Consider the case of two dimensions, where we want to solve simultaneously

$$f(x, y) = 0, \qquad g(x, y) = 0 \qquad (9.6.1)$$

The functions f and g are two arbitrary functions, each of which has zero contour lines that divide the (x, y) plane into regions where the respective function is positive or negative. These zero contour boundaries are of interest to us. The solutions that we seek are those points (if any) that are common to the zero contours of f and g (see Figure 9.6.1).

[Figure 9.6.1. Solution of two nonlinear equations in two unknowns. Solid curves refer to f(x, y), dashed curves to g(x, y). Each equation divides the (x, y) plane into positive and negative regions, bounded by zero curves. The desired solutions are the intersections of these unrelated zero curves. The number of solutions is a priori unknown. The figure also labels a point M where the two zero contours make a close approach without crossing.]
Unfortunately, the functions f and g have, in general, no relation to each other at all! There is nothing special about a common point from either f's point of view, or from g's. In order to find all common points, which are
the solutions of our nonlinear equations, we will (in general) have to do neither more nor less than map out the full zero contours of both functions.
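As a concrete illustration (an added example, not one drawn from the book), take

$$f(x, y) = x^2 - y, \qquad g(x, y) = y^2 - x.$$

The zero contour of f is one parabola and that of g is another; they happen to intersect at exactly two points, (0, 0) and (1, 1), but nothing in either function taken alone announces that fact, or tells you where to look.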
Note further that the zero contours will (in general) consist of an unknown number of disjoint closed curves. How can we ever hope to know when we have found all such disjoint pieces?

For problems in more than two dimensions, we need to find points mutually common to N unrelated zero-contour hypersurfaces, each of dimension N − 1. You see that root finding becomes virtually impossible without insight! You will almost always have to use additional information, specific to your particular problem, to answer such basic questions as, "Do I expect a unique solution?" and "Approximately where?" Acton [1] has a good discussion of some of the particular strategies that can be tried.

In this section we will discuss the simplest multidimensional root finding method, Newton-Raphson.
This method gives you a very efficient means of converging to a root, if you have a sufficiently good initial guess. It can also spectacularly fail to converge, indicating (though not proving) that your putative root does not exist nearby. In §9.7 we discuss more sophisticated implementations of the Newton-Raphson method, which try to improve on Newton-Raphson's poor global convergence.
A multidimensional generalization of the secant method, called Broyden's method, is also discussed in §9.7.

A typical problem gives N functional relations to be zeroed, involving variables $x_i$, $i = 1, 2, \ldots, N$:

$$F_i(x_1, x_2, \ldots, x_N) = 0, \qquad i = 1, 2, \ldots, N. \qquad (9.6.2)$$

We let $\mathbf{x}$ denote the entire vector of values $x_i$ and $\mathbf{F}$ denote the entire vector of functions $F_i$. In the neighborhood of $\mathbf{x}$, each of the functions $F_i$ can be expanded in Taylor series

$$F_i(\mathbf{x} + \delta\mathbf{x}) = F_i(\mathbf{x}) + \sum_{j=1}^{N} \frac{\partial F_i}{\partial x_j}\,\delta x_j + O(\delta\mathbf{x}^2). \qquad (9.6.3)$$

The matrix of partial derivatives appearing in equation (9.6.3) is the Jacobian matrix J:

$$J_{ij} \equiv \frac{\partial F_i}{\partial x_j}. \qquad (9.6.4)$$

In matrix notation equation (9.6.3) is

$$\mathbf{F}(\mathbf{x} + \delta\mathbf{x}) = \mathbf{F}(\mathbf{x}) + \mathbf{J}\cdot\delta\mathbf{x} + O(\delta\mathbf{x}^2). \qquad (9.6.5)$$

By neglecting terms of order $\delta\mathbf{x}^2$ and higher and by setting $\mathbf{F}(\mathbf{x} + \delta\mathbf{x}) = 0$, we obtain a set of linear equations for the corrections $\delta\mathbf{x}$ that move each function closer to zero simultaneously, namely

$$\mathbf{J}\cdot\delta\mathbf{x} = -\mathbf{F}. \qquad (9.6.6)$$

Matrix equation (9.6.6) can be solved by LU decomposition as described in §2.3.
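To make the linearization concrete, here is one step worked by hand (an added illustration, reusing the two-parabola example from earlier). For $f(x,y) = x^2 - y$, $g(x,y) = y^2 - x$ and the starting guess $(x, y) = (1.5, 1.5)$,

$$\mathbf{F} = \begin{pmatrix} 0.75 \\ 0.75 \end{pmatrix}, \qquad \mathbf{J} = \begin{pmatrix} 2x & -1 \\ -1 & 2y \end{pmatrix} = \begin{pmatrix} 3 & -1 \\ -1 & 3 \end{pmatrix},$$

and solving $\mathbf{J}\cdot\delta\mathbf{x} = -\mathbf{F}$ gives $\delta\mathbf{x} = (-0.375, -0.375)$; adding this correction (as in equation 9.6.7 below) moves the guess to $(1.125, 1.125)$, noticeably closer to the root at $(1, 1)$.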
The corrections are then added to the solution vector,

$$\mathbf{x}_{\rm new} = \mathbf{x}_{\rm old} + \delta\mathbf{x} \qquad (9.6.7)$$

and the process is iterated to convergence. In general it is a good idea to check the degree to which both functions and variables have converged. Once either reaches machine accuracy, the other won't change.

The following routine mnewt performs ntrial iterations starting from an initial guess at the solution vector x[1..n]. Iteration stops if either the sum of the magnitudes of the functions $F_i$ is less than some tolerance tolf, or the sum of the absolute values of the corrections $\delta x_i$ is less than some tolerance tolx. mnewt calls a user supplied function usrfun which must provide the function values F and the Jacobian matrix J.
If J is difficult to compute analytically, you can try having usrfun call the routine fdjac of §9.7 to compute the partial derivatives by finite differences. You should not make ntrial too big; rather inspect to see what is happening before continuing for some further iterations.

#include <math.h>
#include "nrutil.h"

void usrfun(float *x, int n, float *fvec, float **fjac);

#define FREERETURN {free_matrix(fjac,1,n,1,n);free_vector(fvec,1,n);\
free_vector(p,1,n);free_ivector(indx,1,n);return;}

void mnewt(int ntrial, float x[], int n, float tolx, float tolf)
/* Given an initial guess x[1..n] for a root in n dimensions, take ntrial Newton-Raphson steps to
improve the root. Stop if the root converges in either summed absolute variable increments tolx
or summed absolute function values tolf. */
{
	void lubksb(float **a, int n, int *indx, float b[]);
	void ludcmp(float **a, int n, int *indx, float *d);
	int k,i,*indx;
	float errx,errf,d,*fvec,**fjac,*p;

	indx=ivector(1,n);
	p=vector(1,n);
	fvec=vector(1,n);
	fjac=matrix(1,n,1,n);
	for (k=1;k<=ntrial;k++) {
		usrfun(x,n,fvec,fjac);      /* User function supplies function values in fvec, Jacobian in fjac. */
		errf=0.0;
		for (i=1;i<=n;i++) errf += fabs(fvec[i]);   /* Check function convergence. */
		if (errf <= tolf) FREERETURN
		for (i=1;i<=n;i++) p[i] = -fvec[i];         /* Right-hand side of linear equations. */
		ludcmp(fjac,n,indx,&d);     /* Solve linear equations using LU decomposition. */
		lubksb(fjac,n,indx,p);
		errx=0.0;                   /* Check root convergence and update the solution. */
		for (i=1;i<=n;i++) { errx += fabs(p[i]); x[i] += p[i]; }
		if (errx <= tolx) FREERETURN
	}
	FREERETURN
}
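For instance, a usrfun for the two-parabola system used as an illustration above might look like the following (an added sketch, not one of the book's listings; it follows the unit-offset array convention that mnewt expects):

void usrfun(float *x, int n, float *fvec, float **fjac)
/* Illustrative user function for the system f = x^2 - y, g = y^2 - x (n = 2).
   Loads function values into fvec[1..n] and the Jacobian into fjac[1..n][1..n]. */
{
	fvec[1] = x[1]*x[1] - x[2];
	fvec[2] = x[2]*x[2] - x[1];
	fjac[1][1] = 2.0*x[1];   fjac[1][2] = -1.0;
	fjac[2][1] = -1.0;       fjac[2][2] = 2.0*x[2];
}

With this usrfun compiled in, a call such as mnewt(5, x, 2, 1.0e-6, 1.0e-6), starting from x[1] = x[2] = 1.5 (with x allocated as an NR unit-offset vector), drives the guess toward the root at (1, 1).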
Newton's Method versus Minimization

In the next chapter, we will find that there are efficient general techniques for finding a minimum of a function of many variables.
Why is that task (relatively) easy, while multidimensional root finding is often quite hard? Isn't minimization equivalent to finding a zero of an N-dimensional gradient vector, not so different from zeroing an N-dimensional function? No! The components of a gradient vector are not independent, arbitrary functions. Rather, they obey so-called integrability conditions that are highly restrictive. Put crudely, you can always find a minimum by sliding downhill on a single surface.
The test of "downhillness" is thus one-dimensional. There is no analogous conceptual procedure for finding a multidimensional root, where "downhill" must mean simultaneously downhill in N separate function spaces, thus allowing a multitude of trade-offs as to how much progress in one dimension is worth compared with progress in another.

It might occur to you to carry out multidimensional root finding by collapsing all these dimensions into one: Add up the sums of squares of the individual functions $F_i$ to get a master function F which (i) is positive definite, and (ii) has a global minimum of zero exactly at all solutions of the original set of nonlinear equations. Unfortunately, as you will see in the next chapter, the efficient algorithms for finding minima come to rest on global and local minima indiscriminately. You will often find, to your great dissatisfaction, that your function F has a great number of local minima.
In Figure 9.6.1, for example, there is likely to be a local minimum wherever the zero contours of f and g make a close approach to each other. The point labeled M is such a point, and one sees that there are no nearby roots.

However, we will now see that sophisticated strategies for multidimensional root finding can in fact make use of the idea of minimizing a master function F, by combining it with Newton's method applied to the full set of functions $F_i$. While such methods can still occasionally come to rest on a local minimum of F, they often succeed where a direct attack via Newton's method alone founders.
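A connecting observation, added here rather than taken from the text: writing the master function (with a conventional factor of one half) as

$$F(\mathbf{x}) = \tfrac{1}{2}\,\mathbf{F}\cdot\mathbf{F},$$

its gradient is $\nabla F = \mathbf{J}^{T}\cdot\mathbf{F}$. Every root of $\mathbf{F}$ is therefore a minimum of F, but F can also have spurious local minima at points where $\mathbf{J}^{T}\cdot\mathbf{F} = 0$ while $\mathbf{F} \neq 0$, which requires the Jacobian to be singular there. Points like M in Figure 9.6.1 are of exactly this kind, and they are the possibility that the combined strategies of §9.7 still have to guard against.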