Press, Teukolsky, Vetterling, Flannery - Numerical Recipes in C
Other than in this special case, Clenshaw’s recurrence is always stable, independent of whether the recurrence for the functions F_k is stable in the upward or downward direction.

CITED REFERENCES AND FURTHER READING:

Abramowitz, M., and Stegun, I.A. 1964, Handbook of Mathematical Functions, Applied Mathematics Series, Volume 55 (Washington: National Bureau of Standards; reprinted 1968 by Dover Publications, New York), pp. xiii, 697. [1]
Gautschi, W. 1967, SIAM Review, vol. 9, pp. 24–82. [2]
Lakshmikantham, V., and Trigiante, D. 1988, Theory of Difference Equations: Numerical Methods and Applications (San Diego: Academic Press). [3]
Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), pp. 20ff. [4]
Clenshaw, C.W. 1962, Mathematical Tables, vol. 5, National Physical Laboratory (London: H.M. Stationery Office). [5]
Dahlquist, G., and Bjorck, A. 1974, Numerical Methods (Englewood Cliffs, NJ: Prentice-Hall), §4.4.3, p. 111.
Goodwin, E.T. (ed.) 1961, Modern Computing Methods, 2nd ed. (New York: Philosophical Library), p. 76.

5.6 Quadratic and Cubic Equations

The roots of simple algebraic equations can be viewed as being functions of the equations’ coefficients.
We are taught these functions in elementary algebra. Yet, surprisingly many people don’t know the right way to solve a quadratic equation with two real roots, or to obtain the roots of a cubic equation.

There are two ways to write the solution of the quadratic equation

    ax^2 + bx + c = 0    (5.6.1)

with real coefficients a, b, c, namely

    x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}    (5.6.2)

and

    x = \frac{2c}{-b \mp \sqrt{b^2 - 4ac}}    (5.6.3)

If you use either (5.6.2) or (5.6.3) to get the two roots, you are asking for trouble: If either a or c (or both) are small, then one of the roots will involve the subtraction of b from a very nearly equal quantity (the discriminant); you will get that root very inaccurately. The correct way to compute the roots is

    q \equiv -\frac{1}{2}\left[ b + \mathrm{sgn}(b)\sqrt{b^2 - 4ac} \right]    (5.6.4)

Then the two roots are

    x_1 = \frac{q}{a} \qquad \text{and} \qquad x_2 = \frac{c}{q}    (5.6.5)

If the coefficients a, b, c, are complex rather than real, then the above formulas still hold, except that in equation (5.6.4) the sign of the square root should be chosen so as to make

    \mathrm{Re}\left( b^* \sqrt{b^2 - 4ac} \right) \ge 0    (5.6.6)

where Re denotes the real part and asterisk denotes complex conjugation.

Apropos of quadratic equations, this seems a convenient place to recall that the inverse hyperbolic functions sinh⁻¹ and cosh⁻¹ are in fact just logarithms of solutions to such equations,

    \sinh^{-1}(x) = \ln\left( x + \sqrt{x^2 + 1} \right)    (5.6.7)

    \cosh^{-1}(x) = \pm \ln\left( x + \sqrt{x^2 - 1} \right)    (5.6.8)

Equation (5.6.7) is numerically robust for x ≥ 0.
For negative x, use the symmetry sinh⁻¹(−x) = −sinh⁻¹(x). Equation (5.6.8) is of course valid only for x ≥ 1.

For the cubic equation

    x^3 + ax^2 + bx + c = 0    (5.6.9)

with real or complex coefficients a, b, c, first compute

    Q \equiv \frac{a^2 - 3b}{9} \qquad \text{and} \qquad R \equiv \frac{2a^3 - 9ab + 27c}{54}    (5.6.10)

If Q and R are real (always true when a, b, c are real) and R^2 < Q^3, then the cubic equation has three real roots. Find them by computing

    \theta = \arccos\left( R/\sqrt{Q^3} \right)    (5.6.11)

in terms of which the three roots are

    x_1 = -2\sqrt{Q}\,\cos\left( \frac{\theta}{3} \right) - \frac{a}{3}
    x_2 = -2\sqrt{Q}\,\cos\left( \frac{\theta + 2\pi}{3} \right) - \frac{a}{3}    (5.6.12)
    x_3 = -2\sqrt{Q}\,\cos\left( \frac{\theta - 2\pi}{3} \right) - \frac{a}{3}

(This equation first appears in Chapter VI of François Viète’s treatise “De emendatione,” published in 1615!)

Otherwise, compute

    A = -\left[ R + \sqrt{R^2 - Q^3} \right]^{1/3}    (5.6.13)

where the sign of the square root is chosen to make

    \mathrm{Re}\left( R^* \sqrt{R^2 - Q^3} \right) \ge 0    (5.6.14)

(asterisk again denoting complex conjugation).
If Q and R are both real, equations (5.6.13)–(5.6.14) are equivalent to

    A = -\mathrm{sgn}(R)\left[ |R| + \sqrt{R^2 - Q^3} \right]^{1/3}    (5.6.15)

where the positive square root is assumed. Next compute

    B = \begin{cases} Q/A & (A \ne 0) \\ 0 & (A = 0) \end{cases}    (5.6.16)

in terms of which the three roots are

    x_1 = (A + B) - \frac{a}{3}    (5.6.17)

(the single real root when a, b, c are real) and

    x_2 = -\frac{1}{2}(A + B) - \frac{a}{3} + i\,\frac{\sqrt{3}}{2}(A - B)
    x_3 = -\frac{1}{2}(A + B) - \frac{a}{3} - i\,\frac{\sqrt{3}}{2}(A - B)    (5.6.18)

(in that same case, a complex conjugate pair). Equations (5.6.13)–(5.6.16) are arranged both to minimize roundoff error, and also (as pointed out by A.J.
Glassman) to ensure that no choice of branch for the complex cube root can result in the spurious loss of a distinct root.

If you need to solve many cubic equations with only slightly different coefficients, it is more efficient to use Newton’s method (§9.4).

CITED REFERENCES AND FURTHER READING:

Weast, R.C. (ed.) 1967, Handbook of Tables for Mathematics, 3rd ed. (Cleveland: The Chemical Rubber Co.), pp. 130–133.
Pachner, J. 1983, Handbook of Numerical Analysis Applications (New York: McGraw-Hill), §6.1.
McKelvey, J.P. 1984, American Journal of Physics, vol. 52, pp.
269–270; see also vol. 53, p. 775, and vol. 55, pp. 374–375.

5.7 Numerical Derivatives

Imagine that you have a procedure which computes a function f(x), and now you want to compute its derivative f′(x). Easy, right? The definition of the derivative, the limit as h → 0 of

    f'(x) \approx \frac{f(x+h) - f(x)}{h}    (5.7.1)

practically suggests the program: Pick a small value h; evaluate f(x + h); you probably have f(x) already evaluated, but if not, do it too; finally apply equation (5.7.1).
What more needs to be said?

Quite a lot, actually. Applied uncritically, the above procedure is almost guaranteed to produce inaccurate results. Applied properly, it can be the right way to compute a derivative only when the function f is fiercely expensive to compute, when you already have invested in computing f(x), and when, therefore, you want to get the derivative in no more than a single additional function evaluation.
In such a situation, the remaining issue is to choose h properly, an issue we now discuss:

There are two sources of error in equation (5.7.1), truncation error and roundoff error. The truncation error comes from higher terms in the Taylor series expansion,

    f(x+h) = f(x) + hf'(x) + \frac{1}{2}h^2 f''(x) + \frac{1}{6}h^3 f'''(x) + \cdots    (5.7.2)

whence

    \frac{f(x+h) - f(x)}{h} = f' + \frac{1}{2}hf'' + \cdots    (5.7.3)

The roundoff error has various contributions.
First there is roundoff error in h: Suppose, by way of an example, that you are at a point x = 10.3 and you blindly choose h = 0.0001. Neither x = 10.3 nor x + h = 10.30001 is a number with an exact representation in binary; each is therefore represented with some fractional error characteristic of the machine’s floating-point format, ε_m, whose value in single precision may be ∼ 10⁻⁷. The error in the effective value of h, namely the difference between x + h and x as represented in the machine, is therefore on the order of ε_m x, which implies a fractional error in h of order ∼ ε_m x/h ∼ 10⁻²! By equation (5.7.1) this immediately implies at least the same large fractional error in the derivative.

We arrive at Lesson 1: Always choose h so that x + h and x differ by an exactly representable number. This can usually be accomplished by the program steps

    temp = x + h
    h = temp − x    (5.7.4)

Some optimizing compilers, and some computers whose floating-point chips have higher internal accuracy than is stored externally, can foil this trick; if so, it is usually enough to declare temp as volatile, or else to call a dummy function donothing(temp) between the two equations (5.7.4). This forces temp into and out of addressable memory.

With h an “exact” number, the roundoff error in equation (5.7.1) is e_r ∼ ε_f |f(x)/h|. Here ε_f is the fractional accuracy with which f is computed; for a simple function this may be comparable to the machine accuracy, ε_f ≈ ε_m, but for a complicated calculation with additional sources of inaccuracy it may be larger. The truncation error in equation (5.7.3) is on the order of e_t ∼ |hf″(x)|. Varying h to minimize the sum e_r + e_t gives the optimal choice of h,

    h \sim \sqrt{\frac{\epsilon_f f}{f''}} \approx \sqrt{\epsilon_f}\, x_c    (5.7.5)

where x_c ≡ (f/f″)^{1/2} is the “curvature scale” of the function f, or “characteristic scale” over which it changes. In the absence of any other information, one often assumes x_c = x (except near x = 0 where some other estimate of the typical x scale should be used).

With the choice of equation (5.7.5), the fractional accuracy of the computed derivative is

    (e_r + e_t)/|f'| \sim \sqrt{\epsilon_f}\,(f f'')^{1/2}/f' \sim \sqrt{\epsilon_f}    (5.7.6)

Here the last order-of-magnitude equality assumes that f, f′, and f″ all share the same characteristic length scale, usually the case. One sees that the simple finite-difference equation (5.7.1) gives at best only the square root of the machine accuracy ε_m.

If you can afford two function evaluations for each derivative calculation, then it is significantly better to use the symmetrized form

    f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}    (5.7.7)

In this case, by equation (5.7.2), the truncation error is e_t ∼ h²f‴.
The roundoff error e_r is about the same as before. The optimal choice of h, by a short calculation analogous to the one above, is now

    h \sim \left( \frac{\epsilon_f f}{f'''} \right)^{1/3} \sim (\epsilon_f)^{1/3} x_c    (5.7.8)

and the fractional error is

    (e_r + e_t)/|f'| \sim (\epsilon_f)^{2/3} f^{2/3} (f''')^{1/3} / f' \sim (\epsilon_f)^{2/3}    (5.7.9)

which will typically be an order of magnitude (single precision) or two orders of magnitude (double precision) better than equation (5.7.6). We have arrived at Lesson 2: Choose h to be the correct power of ε_f or ε_m times a characteristic scale x_c.

You can easily derive the correct powers for other cases [1]. For a function of two dimensions, for example, and the mixed derivative formula

    \frac{\partial^2 f}{\partial x\, \partial y} = \frac{[f(x+h, y+h) - f(x+h, y-h)] - [f(x-h, y+h) - f(x-h, y-h)]}{4h^2}    (5.7.10)

the correct scaling is h ∼ ε_f^{1/3} x_c.

It is disappointing, certainly, that no simple finite-difference formula like equation (5.7.1) or (5.7.7) gives an accuracy comparable to the machine accuracy ε_m, or even the lower accuracy to which f is evaluated, ε_f.