Press, Teukolsky, Vetterling, Flannery, Numerical Recipes in C
And, if the minimum of the function is exactly zero, then you have found a double root.)

As usual, we want to discourage you from using routines as black boxes without understanding them. However, as a guide to beginners, here are some reasonable starting points:

• Brent’s algorithm in §9.3 is the method of choice to find a bracketed root of a general one-dimensional function, when you cannot easily compute the function’s derivative. Ridders’ method (§9.2) is concise, and a close competitor.
• When you can compute the function’s derivative, the routine rtsafe in §9.4, which combines the Newton-Raphson method with some bookkeeping on bounds, is recommended.
Again, you must first bracket your root.
• Roots of polynomials are a special case. Laguerre’s method, in §9.5, is recommended as a starting point. Beware: Some polynomials are ill-conditioned!
• Finally, for multidimensional problems, the only elementary method is Newton-Raphson (§9.6), which works very well if you can supply a good first guess of the solution. Try it. Then read the more advanced material in §9.7 for some more complicated, but globally more convergent, alternatives.

Avoiding implementations for specific computers, this book must generally steer clear of interactive or graphics-related routines. We make an exception right now. The following routine, which produces a crude function plot with interactively scaled axes, can save you a lot of grief as you enter the world of root finding.

#include <stdio.h>

#define ISCR 60      /* Number of horizontal and vertical positions in display. */
#define JSCR 21
#define BLANK ' '
#define ZERO '-'
#define YY 'l'
#define XX '-'
#define FF 'x'

void scrsho(float (*fx)(float))
/* For interactive CRT terminal use. Produce a crude graph of the function fx over the
prompted-for interval x1,x2. Query for another plot until the user signals satisfaction. */
{
    int jz,j,i;
    float ysml,ybig,x2,x1,x,dyj,dx,y[ISCR+1];
    char scr[ISCR+1][JSCR+1];

    for (;;) {
        printf("\nEnter x1 x2 (x1=x2 to stop):\n");   /* Query for another plot; quit if x1=x2. */
        scanf("%f %f",&x1,&x2);
        if (x1 == x2) break;
        for (j=1;j<=JSCR;j++)                 /* Fill vertical sides with character 'l'. */
            scr[1][j]=scr[ISCR][j]=YY;
        for (i=2;i<=(ISCR-1);i++) {
            scr[i][1]=scr[i][JSCR]=XX;        /* Fill top, bottom with character '-'. */
            for (j=2;j<=(JSCR-1);j++)         /* Fill interior with blanks. */
                scr[i][j]=BLANK;
        }
        dx=(x2-x1)/(ISCR-1);
        x=x1;
        ysml=ybig=0.0;                        /* Limits will include 0. */
        for (i=1;i<=ISCR;i++) {               /* Evaluate the function at equal intervals. */
            y[i]=(*fx)(x);
            if (y[i] < ysml) ysml=y[i];       /* Find the largest and smallest values. */
            if (y[i] > ybig) ybig=y[i];
            x += dx;
        }
        if (ybig == ysml) ybig=ysml+1.0;      /* Be sure to separate top and bottom. */
        dyj=(JSCR-1)/(ybig-ysml);
        jz=1-(int) (ysml*dyj);                /* Note which row corresponds to 0. */
        for (i=1;i<=ISCR;i++) {               /* Place an indicator at function height and 0. */
            scr[i][jz]=ZERO;
            j=1+(int) ((y[i]-ysml)*dyj);
            scr[i][j]=FF;
        }
        printf(" %10.3f ",ybig);
        for (i=1;i<=ISCR;i++) printf("%c",scr[i][JSCR]);
        printf("\n");
        for (j=(JSCR-1);j>=2;j--) {           /* Display. */
            printf("%12s"," ");
            for (i=1;i<=ISCR;i++) printf("%c",scr[i][j]);
            printf("\n");
        }
        printf(" %10.3f ",ysml);
        for (i=1;i<=ISCR;i++) printf("%c",scr[i][1]);
        printf("\n");
        printf("%8s %10.3f %44s %10.3f\n"," ",x1," ",x2);
    }
}

CITED REFERENCES AND FURTHER READING:
Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), Chapter 5.
Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), Chapters 2, 7, and 14.
Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), Chapter 8.
Householder, A.S. 1970, The Numerical Treatment of a Single Nonlinear Equation (New York: McGraw-Hill).

9.1 Bracketing and Bisection

We will say that a root is bracketed in the interval (a, b) if f(a) and f(b) have opposite signs. If the function is continuous, then at least one root must lie in that interval (the intermediate value theorem).
If the function is discontinuous, but bounded, then instead of a root there might be a step discontinuity which crosses zero (see Figure 9.1.1). For numerical purposes, that might as well be a root, since the behavior is indistinguishable from the case of a continuous function whose zero crossing occurs in between two “adjacent” floating-point numbers in a machine’s finite-precision representation. Only for functions with singularities is there the possibility that a bracketed root is not really there, as for example

    f(x) = 1/(x − c)        (9.1.1)

Some root-finding algorithms (e.g., bisection in this section) will readily converge to c in (9.1.1). Luckily there is not much possibility of your mistaking c, or any number x close to it, for a root, since mere evaluation of |f(x)| will give a very large, rather than a very small, result.

If you are given a function in a black box, there is no sure way of bracketing its roots, or of even determining that it has roots.
If you like pathological examples, think about the problem of locating the two real roots of equation (3.0.1), which dips below zero only in the ridiculously small interval of about x = π ± 10^(−667).

In the next chapter we will deal with the related problem of bracketing a function’s minimum. There it is possible to give a procedure that always succeeds; in essence, “Go downhill, taking steps of increasing size, until your function starts back uphill.” There is no analogous procedure for roots.
The procedure “go downhill until your function changes sign” can be foiled by a function that has a simple extremum. Nevertheless, if you are prepared to deal with a “failure” outcome, this procedure is often a good first start; success is usual if your function has opposite signs in the limit x → ±∞.

Figure 9.1.1. Some situations encountered while root finding: (a) shows an isolated root x1 bracketed by two points a and b at which the function has opposite signs; (b) illustrates that there is not necessarily a sign change in the function near a double root (in fact, there is not necessarily a root!); (c) is a pathological function with many roots; in (d) the function has opposite signs at points a and b, but the points bracket a singularity, not a root.

#include <math.h>
#define FACTOR 1.6
#define NTRY 50

int zbrac(float (*func)(float), float *x1, float *x2)
/* Given a function func and an initial guessed range x1 to x2, the routine expands the range
geometrically until a root is bracketed by the returned values x1 and x2 (in which case zbrac
returns 1) or until the range becomes unacceptably large (in which case zbrac returns 0). */
{
    void nrerror(char error_text[]);
    int j;
    float f1,f2;

    if (*x1 == *x2) nrerror("Bad initial range in zbrac");
    f1=(*func)(*x1);
    f2=(*func)(*x2);
    for (j=1;j<=NTRY;j++) {
        if (f1*f2 < 0.0) return 1;
        if (fabs(f1) < fabs(f2))
            f1=(*func)(*x1 += FACTOR*(*x1-*x2));
        else
            f2=(*func)(*x2 += FACTOR*(*x2-*x1));
    }
    return 0;
}

Alternatively, you might want to “look inward” on an initial interval, rather than “look outward” from it, asking if there are any roots of the function f(x) in the interval from x1 to x2 when a search is carried out by subdivision into n equal intervals.
The following function calculates brackets for up to nb distinct intervals which each contain one or more roots.

void zbrak(float (*fx)(float), float x1, float x2, int n, float xb1[],
    float xb2[], int *nb)
/* Given a function fx defined on the interval from x1 to x2, subdivide the interval into n
equally spaced segments, and search for zero crossings of the function. nb is input as the
maximum number of roots sought, and is reset to the number of bracketing pairs
xb1[1..nb], xb2[1..nb] that are found. */
{
    int nbb,i;
    float x,fp,fc,dx;

    nbb=0;
    dx=(x2-x1)/n;               /* Determine the spacing appropriate to the mesh. */
    fp=(*fx)(x=x1);
    for (i=1;i<=n;i++) {        /* Loop over all intervals. */
        fc=(*fx)(x += dx);
        if (fc*fp <= 0.0) {     /* If a sign change occurs then record values for the bounds. */
            xb1[++nbb]=x-dx;
            xb2[nbb]=x;
            if (*nb == nbb) return;
        }
        fp=fc;
    }
    *nb = nbb;
}

Bisection Method

Once we know that an interval contains a root, several classical procedures are available to refine it. These proceed with varying degrees of speed and sureness towards the answer.
Unfortunately, the methods that are guaranteed to converge plod along most slowly, while those that rush to the solution in the best cases can also dash rapidly to infinity without warning if measures are not taken to avoid such behavior.

The bisection method is one that cannot fail. It is thus not to be sneered at as a method for otherwise badly behaved problems. The idea is simple.
Over some interval the function is known to pass through zero because it changes sign. Evaluate the function at the interval’s midpoint and examine its sign. Use the midpoint to replace whichever limit has the same sign. After each iteration the bounds containing the root decrease by a factor of two. If after n iterations the root is known to be within an interval of size ε_n, then after the next iteration it will be bracketed within an interval of size

    ε_{n+1} = ε_n / 2        (9.1.2)

neither more nor less.
Thus, we know in advance the number of iterations required to achieve a given tolerance in the solution,

    n = log2(ε_0 / ε)        (9.1.3)

where ε_0 is the size of the initially bracketing interval, and ε is the desired ending tolerance.

Bisection must succeed. If the interval happens to contain two or more roots, bisection will find one of them. If the interval contains no roots and merely straddles a singularity, it will converge on the singularity.

When a method converges as a factor (less than 1) times the previous uncertainty to the first power (as is the case for bisection), it is said to converge linearly. Methods that converge as a higher power,

    ε_{n+1} = constant × (ε_n)^m,    m > 1        (9.1.4)

are said to converge superlinearly.
In other contexts “linear” convergence would be termed “exponential,” or “geometrical.” That is not too bad at all: Linear convergence means that successive significant figures are won linearly with computational effort.

It remains to discuss practical criteria for convergence. It is crucial to keep in mind that computers use a fixed number of binary digits to represent floating-point numbers. While your function might analytically pass through zero, it is possible that its computed value is never zero, for any floating-point argument. One must decide what accuracy on the root is attainable: Convergence to within 10^(−6) in absolute value is reasonable when the root lies near 1, but certainly unachievable if the root lies near 10^26. One might thus think to specify convergence by a relative (fractional) criterion, but this becomes unworkable for roots near zero.