Press, Teukolsky, Vetterling, Flannery, Numerical Recipes in C
The root, returned as the function value rtsafe, will be refined until its accuracy is known within ±xacc. funcd is a user-supplied routine that returns both the function value and the first derivative of the function.

#include <math.h>
#define MAXIT 100                    /* Maximum allowed number of iterations. */

float rtsafe(void (*funcd)(float, float *, float *), float x1, float x2,
    float xacc)
{
    void nrerror(char error_text[]);
    int j;
    float df,dx,dxold,f,fh,fl;
    float temp,xh,xl,rts;

    (*funcd)(x1,&fl,&df);
    (*funcd)(x2,&fh,&df);
    if ((fl > 0.0 && fh > 0.0) || (fl < 0.0 && fh < 0.0))
        nrerror("Root must be bracketed in rtsafe");
    if (fl == 0.0) return x1;
    if (fh == 0.0) return x2;
    if (fl < 0.0) {                  /* Orient the search so that f(xl) < 0. */
        xl=x1;
        xh=x2;
    } else {
        xh=x1;
        xl=x2;
    }
    rts=0.5*(x1+x2);                 /* Initialize the guess for root, */
    dxold=fabs(x2-x1);               /* the “stepsize before last,” */
    dx=dxold;                        /* and the last step. */
    (*funcd)(rts,&f,&df);
    for (j=1;j<=MAXIT;j++) {         /* Loop over allowed iterations. */
        if ((((rts-xh)*df-f)*((rts-xl)*df-f) > 0.0)  /* Bisect if Newton out of range, */
            || (fabs(2.0*f) > fabs(dxold*df))) {     /* or not decreasing fast enough. */
            dxold=dx;
            dx=0.5*(xh-xl);
            rts=xl+dx;
            if (xl == rts) return rts;    /* Change in root is negligible. */
        } else {                          /* Newton step acceptable. Take it. */
            dxold=dx;
            dx=f/df;
            temp=rts;
            rts -= dx;
            if (temp == rts) return rts;
        }
        if (fabs(dx) < xacc) return rts;  /* Convergence criterion. */
        (*funcd)(rts,&f,&df);  /* The one new function evaluation per iteration. */
        if (f < 0.0)           /* Maintain the bracket on the root. */
            xl=rts;
        else
            xh=rts;
    }
    nrerror("Maximum number of iterations exceeded in rtsafe");
    return 0.0;                /* Never get here. */
}

For many functions the derivative f′(x) often converges to machine accuracy before the function f(x) itself does. When that is the case one need not subsequently update f′(x).
This shortcut is recommended only when you confidently understand the generic behavior of your function, but it speeds computations when the derivative calculation is laborious. (Formally this makes the convergence only linear, but if the derivative isn’t changing anyway, you can do no better.)

Newton-Raphson and Fractals

An interesting sidelight to our repeated warnings about Newton-Raphson’s unpredictable global convergence properties — its very rapid local convergence notwithstanding — is to investigate, for some particular equation, the set of starting values from which the method does, or doesn’t, converge to a root. Consider the simple equation

    z^3 − 1 = 0        (9.4.8)

whose single real root is z = 1, but which also has complex roots at the other two cube roots of unity, exp(±2πi/3).
Newton’s method gives the iteration

    z_{j+1} = z_j − (z_j^3 − 1)/(3 z_j^2)        (9.4.9)

Up to now, we have applied an iteration like equation (9.4.9) only for real starting values z_0, but in fact all of the equations in this section also apply in the complex plane. We can therefore map out the complex plane into regions from which a starting value z_0, iterated in equation (9.4.9), will, or won’t, converge to z = 1. Naively, we might expect to find a “basin of convergence” somehow surrounding the root z = 1. We surely do not expect the basin of convergence to fill the whole plane, because the plane must also contain regions that converge to each of the two complex roots.
In fact, by symmetry, the three regions must have identical shapes. Perhaps they will be three symmetric 120° wedges, with one root centered in each? Now take a look at Figure 9.4.4, which shows the result of a numerical exploration. The basin of convergence does indeed cover 1/3 the area of the complex plane, but its boundary is highly irregular — in fact, fractal. (A fractal, so called, has self-similar structure that repeats on all scales of magnification.) How does this fractal emerge from something as simple as Newton’s method, and an equation as simple as (9.4.8)? The answer is already implicit in Figure 9.4.2, which showed how, on the real line, a local extremum causes Newton’s method to shoot off to infinity.
Suppose one is slightly removed from such a point. Then one might be shot off not to infinity, but — by luck — right into the basin of convergence of the desired root. But that means that in the neighborhood of an extremum there must be a tiny, perhaps distorted, copy of the basin of convergence — a kind of “one-bounce away” copy. Similar logic shows that there can be “two-bounce” copies, “three-bounce” copies, and so on.

Figure 9.4.4. The complex z plane with real and imaginary components in the range (−2, 2). The black region is the set of points from which Newton’s method converges to the root z = 1 of the equation z^3 − 1 = 0. Its shape is fractal.
A fractal thus emerges. Notice that, for equation (9.4.8), almost the whole real axis is in the domain of convergence for the root z = 1. We say “almost” because of the peculiar discrete points on the negative real axis whose convergence is indeterminate (see figure). What happens if you start Newton’s method from one of these points? (Try it.)

CITED REFERENCES AND FURTHER READING:

Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), Chapter 2.

Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), §8.4.

Ortega, J., and Rheinboldt, W. 1970, Iterative Solution of Nonlinear Equations in Several Variables (New York: Academic Press).

Mandelbrot, B.B. 1983, The Fractal Geometry of Nature (San Francisco: W.H. Freeman).

Peitgen, H.-O., and Saupe, D. (eds.) 1988, The Science of Fractal Images (New York: Springer-Verlag).

9.5 Roots of Polynomials

Here we present a few methods for finding roots of polynomials. These will serve for most practical problems involving polynomials of low-to-moderate degree or for well-conditioned polynomials of higher degree.
Not as well appreciated as it ought to be is the fact that some polynomials are exceedingly ill-conditioned. The tiniest changes in a polynomial’s coefficients can, in the worst case, send its roots sprawling all over the complex plane. (An infamous example due to Wilkinson is detailed by Acton [1].)

Recall that a polynomial of degree n will have n roots. The roots can be real or complex, and they might not be distinct. If the coefficients of the polynomial are real, then complex roots will occur in pairs that are conjugate, i.e., if x1 = a + bi is a root then x2 = a − bi will also be a root.
When the coefficients are complex, the complex roots need not be related.

Multiple roots, or closely spaced roots, produce the most difficulty for numerical algorithms (see Figure 9.5.1). For example, P(x) = (x − a)^2 has a double real root at x = a. However, we cannot bracket the root by the usual technique of identifying neighborhoods where the function changes sign, nor will slope-following methods such as Newton-Raphson work well, because both the function and its derivative vanish at a multiple root.
Newton-Raphson may work, but slowly, since large roundoff errors can occur. When a root is known in advance to be multiple, then special methods of attack are readily devised. Problems arise when (as is generally the case) we do not know in advance what pathology a root will display.

Deflation of Polynomials

When seeking several or all roots of a polynomial, the total effort can be significantly reduced by the use of deflation.
As each root r is found, the polynomial is factored into a product involving the root and a reduced polynomial of degree one less than the original, i.e., P(x) = (x − r)Q(x). Since the roots of Q are exactly the remaining roots of P, the effort of finding additional roots decreases, because we work with polynomials of lower and lower degree as we find successive roots. Even more important, with deflation we can avoid the blunder of having our iterative method converge twice to the same (nonmultiple) root instead of separately to two different roots. Deflation, which amounts to synthetic division, is a simple operation that acts on the array of polynomial coefficients.
The concise code for synthetic division by a monomial factor was given in §5.3 above. You can deflate complex roots either by converting that code to complex data type, or else — in the case of a polynomial with real coefficients but possibly complex roots — by deflating by a quadratic factor,

    [x − (a + ib)][x − (a − ib)] = x^2 − 2ax + (a^2 + b^2)        (9.5.1)

The routine poldiv in §5.3 can be used to divide the polynomial by this factor. Deflation must, however, be utilized with care. Because each new root is known with only finite accuracy, errors creep into the determination of the coefficients of the successively deflated polynomial.
Consequently, the roots can become more and more inaccurate. It matters a lot whether the inaccuracy creeps in stably (plus or minus a few multiples of the machine precision at each stage) or unstably (erosion of successive significant figures until the results become meaningless).

Figure 9.5.1. (a) Linear, quadratic, and cubic behavior at the roots of polynomials. Only under high magnification (b) does it become apparent that the cubic has one, not three, roots, and that the quadratic has two roots rather than none.