Which behavior occurs depends on just how the root is divided out. Forward deflation, where the new polynomial coefficients are computed in the order from the highest power of x down to the constant term, was illustrated in §5.3. This turns out to be stable if the root of smallest absolute value is divided out at each stage. Alternatively, one can do backward deflation, where new coefficients are computed in order from the constant term up to the coefficient of the highest power of x. This is stable if the remaining root of largest absolute value is divided out at each stage.

A polynomial whose coefficients are interchanged “end-to-end,” so that the constant becomes the highest coefficient, etc., has its roots mapped into their reciprocals. (Proof: Divide the whole polynomial by its highest power x^n and rewrite it as a polynomial in 1/x.) The algorithm for backward deflation is therefore virtually identical to that of forward deflation, except that the original coefficients are taken in reverse order and the reciprocal of the deflating root is used. Since we will use forward deflation below, we leave to you the exercise of writing a concise coding for backward deflation (as in §5.3).
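For concreteness, here is a minimal sketch of forward deflation by synthetic division, along the lines of §5.3. The helper name deflate_forward and the in-place overwrite convention are ours for illustration, not a Numerical Recipes routine; c[0..n] holds the coefficients, with c[n] the coefficient of the highest power, and r is the (real) root being divided out.

double deflate_forward(double c[], int n, double r)
/* Divide the root r out of P(x) = c[0] + c[1]x + ... + c[n]x^n, overwriting
   c[0..n-1] with the coefficients of the deflated (degree n-1) polynomial,
   computed from the highest power down.  The returned remainder equals P(r)
   and should be negligibly small if r is an accurate root. */
{
	int j;
	double rem=c[n],save;

	for (j=n-1;j>=0;j--) {
		save=c[j];
		c[j]=rem;		/* coefficient of x^j in the deflated polynomial */
		rem=save+r*rem;		/* synthetic-division recurrence */
	}
	return rem;
}

Dividing out the root of smallest magnitude at each stage keeps this recurrence stable, as noted above.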
For more on the stability of deflation, consult [2].

To minimize the impact of increasing errors (even stable ones) when using deflation, it is advisable to treat roots of the successively deflated polynomials as only tentative roots of the original polynomial. One then polishes these tentative roots by taking them as initial guesses that are to be re-solved for, using the nondeflated original polynomial P. Again you must beware lest two deflated roots are inaccurate enough that, under polishing, they both converge to the same undeflated root; in that case you gain a spurious root-multiplicity and lose a distinct root. This is detectable, since you can compare each polished root for equality to previous ones from distinct tentative roots. When it happens, you are advised to deflate the polynomial just once (and for this root only), then again polish the tentative root, or to use Maehly's procedure (see equation 9.5.29 below).
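The bookkeeping just described might look like the following sketch. The names polish_all, polish_root, and DUP_TOL are hypothetical, not routines from this book; polish_root stands for whatever polisher you choose applied to the original, undeflated polynomial (for real roots, rtsafe would serve), and the duplicate test and tolerance are only one plausible choice.

#include <math.h>

#define DUP_TOL 1.0e-6	/* assumed tolerance for declaring two polished roots identical */

int polish_all(double tentative[], double polished[], int nroots,
	double (*polish_root)(double guess))
/* Polish each tentative (deflated) root against the undeflated polynomial and
   count how many polished roots collapse onto an earlier one.  A nonzero return
   signals a spurious multiplicity, to be repaired as described in the text. */
{
	int i,j,nspurious=0;

	for (i=0;i<nroots;i++) {
		polished[i]=(*polish_root)(tentative[i]);
		for (j=0;j<i;j++)
			if (fabs(polished[i]-polished[j]) <= DUP_TOL*(fabs(polished[j])+DUP_TOL))
				nspurious++;
	}
	return nspurious;
}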
Below we say more about techniques for polishing real and complex-conjugate tentative roots. First, let's get back to overall strategy.

There are two schools of thought about how to proceed when faced with a polynomial of real coefficients. One school says to go after the easiest quarry, the real, distinct roots, by the same kinds of methods that we have discussed in previous sections for general functions, i.e., trial-and-error bracketing followed by a safe Newton-Raphson as in rtsafe. Sometimes you are only interested in real roots, in which case the strategy is complete. Otherwise, you then go after quadratic factors of the form (9.5.1) by any of a variety of methods.
One such is Bairstow's method, which we will discuss below in the context of root polishing. Another is Muller's method, which we here briefly discuss.

Muller's Method

Muller's method generalizes the secant method, but uses quadratic interpolation among three points instead of linear interpolation between two. Solving for the zeros of the quadratic allows the method to find complex pairs of roots. Given three previous guesses for the root x_{i-2}, x_{i-1}, x_i, and the values of the polynomial P(x) at those points, the next approximation x_{i+1} is produced by the following formulas,

    q \equiv \frac{x_i - x_{i-1}}{x_{i-1} - x_{i-2}}
    A \equiv q P(x_i) - q(1+q) P(x_{i-1}) + q^2 P(x_{i-2})
    B \equiv (2q+1) P(x_i) - (1+q)^2 P(x_{i-1}) + q^2 P(x_{i-2})
    C \equiv (1+q) P(x_i)                                                   (9.5.2)

followed by

    x_{i+1} = x_i - (x_i - x_{i-1}) \frac{2C}{B \pm \sqrt{B^2 - 4AC}}       (9.5.3)

where the sign in the denominator is chosen to make its absolute value or modulus as large as possible. You can start the iterations with any three values of x that you like, e.g., three equally spaced values on the real axis. Note that you must allow for the possibility of a complex denominator, and subsequent complex arithmetic, in implementing the method.

Muller's method is sometimes also used for finding complex zeros of analytic functions (not just polynomials) in the complex plane, for example in the IMSL routine ZANLY [3].
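As an illustration of equations (9.5.2) and (9.5.3), here is a minimal sketch of one Muller step. It uses C99 complex arithmetic rather than the fcomplex routines used elsewhere in this chapter, and the function name muller_step is ours, not a library routine; P is any caller-supplied polynomial (or analytic-function) evaluator.

#include <complex.h>
#include <math.h>

double complex muller_step(double complex xm2, double complex xm1, double complex x0,
	double complex (*P)(double complex))
/* One iteration of Muller's method: fit a quadratic through the three previous
   guesses and return the nearer of its two zeros, per (9.5.2)-(9.5.3). */
{
	double complex q=(x0-xm1)/(xm1-xm2);
	double complex A=q*(*P)(x0)-q*(1.0+q)*(*P)(xm1)+q*q*(*P)(xm2);
	double complex B=(2.0*q+1.0)*(*P)(x0)-(1.0+q)*(1.0+q)*(*P)(xm1)+q*q*(*P)(xm2);
	double complex C=(1.0+q)*(*P)(x0);
	double complex sq=csqrt(B*B-4.0*A*C);
	double complex den=(cabs(B+sq) > cabs(B-sq)) ? B+sq : B-sq;	/* maximize |denominator| */

	return x0-(x0-xm1)*2.0*C/den;
}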
Laguerre's Method

The second school regarding overall strategy happens to be the one to which we belong. That school advises you to use one of a very small number of methods that will converge (though with greater or lesser efficiency) to all types of roots: real, complex, single, or multiple. Use such a method to get tentative values for all n roots of your nth degree polynomial. Then go back and polish them as you desire.

Laguerre's method is by far the most straightforward of these general, complex methods. It does require complex arithmetic, even while converging to real roots; however, for polynomials with all real roots, it is guaranteed to converge to a root from any starting point. For polynomials with some complex roots, little is theoretically proved about the method's convergence. Much empirical experience, however, suggests that nonconvergence is extremely unusual, and, further, can almost always be fixed by a simple scheme to break a nonconverging limit cycle. (This is implemented in our routine, below.) An example of a polynomial that requires this cycle-breaking scheme is one of high degree (≳ 20), with all its roots just outside of the complex unit circle, approximately equally spaced around it. When the method converges on a simple complex zero, it is known that its convergence is third order. In some instances the complex arithmetic in the Laguerre method is no disadvantage, since the polynomial itself may have complex coefficients.

To motivate (although not rigorously derive) the Laguerre formulas we can note the following relations between the polynomial and its roots and derivatives:
    P_n(x) = (x - x_1)(x - x_2) \cdots (x - x_n)                                                      (9.5.4)

    \ln |P_n(x)| = \ln |x - x_1| + \ln |x - x_2| + \cdots + \ln |x - x_n|                             (9.5.5)

    \frac{d \ln |P_n(x)|}{dx} = \frac{1}{x - x_1} + \frac{1}{x - x_2} + \cdots + \frac{1}{x - x_n} = \frac{P_n'}{P_n} \equiv G      (9.5.6)

    -\frac{d^2 \ln |P_n(x)|}{dx^2} = \frac{1}{(x - x_1)^2} + \frac{1}{(x - x_2)^2} + \cdots + \frac{1}{(x - x_n)^2} = \left(\frac{P_n'}{P_n}\right)^2 - \frac{P_n''}{P_n} \equiv H      (9.5.7)

Starting from these relations, the Laguerre formulas make what Acton [1] nicely calls “a rather drastic set of assumptions”: The root x_1 that we seek is assumed to be located some distance a from our current guess x, while all other roots are assumed to be located at a distance b:

    x - x_1 = a ;    x - x_i = b,    i = 2, 3, \ldots, n                                              (9.5.8)

Then we can express (9.5.6), (9.5.7) as

    \frac{1}{a} + \frac{n-1}{b} = G                                                                   (9.5.9)

    \frac{1}{a^2} + \frac{n-1}{b^2} = H                                                               (9.5.10)

which yields as the solution for a

    a = \frac{n}{G \pm \sqrt{(n-1)(nH - G^2)}}                                                        (9.5.11)

where the sign should be taken to yield the largest magnitude for the denominator. Since the factor inside the square root can be negative, a can be complex. (A more rigorous justification of equation 9.5.11 is in [4].)

The method operates iteratively: For a trial value x, a is calculated by equation (9.5.11). Then x - a becomes the next trial value. This continues until a is sufficiently small.
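Before the full complex routine, here is a small real-arithmetic sketch of a single Laguerre step, assuming the polynomial has all real roots so that the square root in (9.5.11) stays real; the name laguerre_step and the coefficient convention c[0..n] (with c[n] the leading coefficient) are ours for illustration only.

#include <math.h>

double laguerre_step(double c[], int n, double x)
/* One Laguerre iteration: evaluate P, P', and P'' by nested multiplication,
   form G and H as in (9.5.6)-(9.5.7), and return x - a with a from (9.5.11).
   Assumes P(x) != 0 and that the argument of sqrt is nonnegative. */
{
	int j;
	double p=c[n],dp=0.0,d2p=0.0,G,H,sq,gp,gm;

	for (j=n-1;j>=0;j--) {
		d2p=d2p*x+dp;		/* accumulates P''/2 */
		dp=dp*x+p;		/* accumulates P'    */
		p=p*x+c[j];		/* accumulates P     */
	}
	G=dp/p;				/* equation (9.5.6) */
	H=G*G-2.0*d2p/p;		/* equation (9.5.7) */
	sq=sqrt((n-1)*(n*H-G*G));
	gp=G+sq;
	gm=G-sq;
	return x-n/(fabs(gp) > fabs(gm) ? gp : gm);	/* larger-magnitude denominator */
}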
The following routine implements the Laguerre method to find one root of a given polynomial of degree m, whose coefficients can be complex. As usual, the first coefficient a[0] is the constant term, while a[m] is the coefficient of the highest power of x. The routine implements a simplified version of an elegant stopping criterion due to Adams [5], which neatly balances the desire to achieve full machine accuracy, on the one hand, with the danger of iterating forever in the presence of roundoff error, on the other.

#include <math.h>
#include "complex.h"
#include "nrutil.h"
#define EPSS 1.0e-7
#define MR 8
#define MT 10
#define MAXIT (MT*MR)
/* Here EPSS is the estimated fractional roundoff error.  We try to break (rare)
   limit cycles with MR different fractional values, once every MT steps, for
   MAXIT total allowed iterations. */

void laguer(fcomplex a[], int m, fcomplex *x, int *its)
/* Given the degree m and the m+1 complex coefficients a[0..m] of the polynomial
   sum_{i=0}^{m} a[i] x^i, and given a complex value x, this routine improves x by
   Laguerre's method until it converges, within the achievable roundoff limit, to a
   root of the given polynomial.  The number of iterations taken is returned as its. */
{
	int iter,j;
	float abx,abp,abm,err;
	fcomplex dx,x1,b,d,f,g,h,sq,gp,gm,g2;
	static float frac[MR+1] = {0.0,0.5,0.25,0.75,0.13,0.38,0.62,0.88,1.0};
	/* Fractions used to break a limit cycle. */

	for (iter=1;iter<=MAXIT;iter++) {	/* Loop over iterations up to allowed maximum. */
		*its=iter;
		b=a[m];
		err=Cabs(b);
		d=f=Complex(0.0,0.0);
		abx=Cabs(*x);
		for (j=m-1;j>=0;j--) {		/* Efficient computation of the polynomial and */
			f=Cadd(Cmul(*x,f),d);	/* its first two derivatives. */
			d=Cadd(Cmul(*x,d),b);
			b=Cadd(Cmul(*x,b),a[j]);
			err=Cabs(b)+abx*err;
		}
		err *= EPSS;			/* Estimate of roundoff error in evaluating polynomial. */
		if (Cabs(b) <= err) return;	/* We are on the root. */
		g=Cdiv(d,b);			/* The generic case: use Laguerre's formula. */
		g2=Cmul(g,g);
		h=Csub(g2,RCmul(2.0,Cdiv(f,b)));
		sq=Csqrt(RCmul((float) (m-1),Csub(RCmul((float) m,h),g2)));
		gp=Cadd(g,sq);
		gm=Csub(g,sq);
		abp=Cabs(gp);
		abm=Cabs(gm);
		if (abp < abm) gp=gm;
		dx=((FMAX(abp,abm) > 0.0 ? Cdiv(Complex((float) m,0.0),gp)
			: RCmul(1+abx,Complex(cos((float)iter),sin((float)iter)))));
		x1=Csub(*x,dx);
		if (x->r == x1.r && x->i == x1.i) return;	/* Converged. */
		if (iter % MT) *x=x1;
		else *x=Csub(*x,RCmul(frac[iter/MT],dx));
		/* Every so often we take a fractional step, to break any limit cycle
		   (itself a rare occurrence). */
	}
	nrerror("too many iterations in laguer");	/* Very unusual; can occur only for complex roots. */
	return;
}
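As a minimal usage sketch (our own, not from the book, and assuming the Numerical Recipes complex.c and nrutil.c support files are compiled in), one might call laguer like this to refine a guess at a root of x^3 - 1:

#include <stdio.h>
#include "complex.h"

void laguer(fcomplex a[], int m, fcomplex *x, int *its);

int main(void)
{
	fcomplex a[4];			/* coefficients of x^3 - 1 */
	fcomplex x=Complex(0.3,0.5);	/* arbitrary complex starting guess */
	int its;

	a[0]=Complex(-1.0,0.0);
	a[1]=Complex(0.0,0.0);
	a[2]=Complex(0.0,0.0);
	a[3]=Complex(1.0,0.0);
	laguer(a,3,&x,&its);
	printf("root = (%g, %g) after %d iterations\n",x.r,x.i,its);
	return 0;
}

In practice such a call would be embedded in a driver that supplies starting guesses for all n roots, deflates, and polishes, as discussed above.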