Heath, Scientific Computing (page 9)
Cancellation is not an issue only in computer arithmetic; it may also affect any situation in which limited precision is attainable, such as empirical measurements or laboratory experiments. For example, determining the distance from Manhattan to Staten Island by using their respective distances from Los Angeles will produce a very poor result unless the latter distances are known with extraordinarily high accuracy. As another example, for many years physicists have been trying to compute the total energy of the helium atom from first principles using Monte Carlo techniques. The accuracy of these computations is determined largely by the number of random trials used.
As faster computers become available and computational techniques are refined, the attainable accuracy improves. The total energy is the sum of the kinetic energy and the potential energy, which are computed separately and have opposite signs. Thus, the total energy is computed as a difference and suffers cancellation. Table 1.2 gives a sequence of values obtained over a number of years (these data were kindly provided by Dr. Robert Panoff). During this span the computed values for the kinetic and potential energies changed by only 6 percent or less, yet the resulting estimate for the total energy changed by 144 percent. The one or two significant digits in the earlier computations were completely lost in the subsequent subtraction.

Table 1.2: Computed values for the total energy of the helium atom

Year    Kinetic    Potential    Total
1971    13.0       −14.0        −1.0
1977    12.76      −14.02       −1.26
1980    12.22      −14.35       −2.13
1985    12.28      −14.65       −2.37
1988    12.40      −14.84       −2.44

Example 1.10 Quadratic Formula. Cancellation and other numerical difficulties need not involve a long series of computations. For example, use of the standard formula for the roots of a quadratic equation is fraught with numerical pitfalls. As every schoolchild learns, the two solutions of the quadratic equation

    ax² + bx + c = 0

are given by

    x = (−b ± √(b² − 4ac)) / (2a).

For some values of the coefficients, naive use of this formula in floating-point arithmetic can produce overflow, underflow, or catastrophic cancellation. For example, if the coefficients are very large or very small, then b² or 4ac may overflow or underflow. The possibility of overflow can be avoided by rescaling the coefficients, such as dividing all three coefficients by the coefficient of largest magnitude. Such a rescaling does not change the roots of the quadratic equation, but now the largest coefficient is 1 and overflow cannot occur in computing b² or 4ac. Such rescaling does not eliminate the possibility of underflow, but it does prevent needless underflow, which could otherwise occur when all three coefficients are very small.

Cancellation between −b and the square root can be avoided by computing one of the roots using the alternative formula

    x = 2c / (−b ∓ √(b² − 4ac)),

which has the opposite sign pattern from that of the standard formula.
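The rescaling remedy can be sketched in a few lines. The fragment below is an illustration, not code from the book; the function name and the coefficients (chosen only to force b² past the double-precision overflow threshold) are mine:

```python
def scaled_discriminant(a, b, c):
    # Divide all three coefficients by the one of largest magnitude;
    # the roots are unchanged, and b*b and 4*a*c can no longer overflow.
    s = max(abs(a), abs(b), abs(c))
    a, b, c = a / s, b / s, c / s
    return b * b - 4 * a * c

a, b, c = 1e200, 3e200, 2e200          # same roots as x^2 + 3x + 2 = 0
naive = b * b - 4 * a * c              # inf - inf, i.e. nan: b*b overflows
scaled = scaled_discriminant(a, b, c)  # 1/9, computed without overflow
```

Note that the scaled discriminant is that of the rescaled equation (the original discriminant divided by s²); its sign, which is what determines the character of the roots, is unaffected.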
But cancellation inside the square root cannot be easily avoided without using higher precision (if the discriminant is small relative to the coefficients, then the two roots are close to each other, and the problem is inherently ill-conditioned).

As an illustration, we use four-digit decimal arithmetic, with rounding to nearest, to compute the roots of the quadratic equation having coefficients a = 0.05010, b = −98.78, and c = 5.015. For comparison, the correct roots, rounded to ten significant digits, are

    1971.605916    and    0.05077069387.

Computing the discriminant in four-digit arithmetic produces

    b² − 4ac = 9757 − 1.005 = 9756,

so that

    √(b² − 4ac) = 98.77.

The standard quadratic formula then gives the roots

    (98.78 ± 98.77) / 0.1002 = 1972    and    0.0998.

The first root is the correctly rounded four-digit result, but the other root is completely wrong, with an error of about 100 percent.
The culprit is cancellation, not in the sense that the final subtraction is wrong (indeed it is exactly correct), but in the sense that cancellation of the leading digits has left nothing remaining but previous rounding errors.

The alternative quadratic formula gives the roots

    10.03 / (98.78 ∓ 98.77) = 1003    and    0.05077.

Once again we have obtained one fully accurate root and one completely erroneous root, but in each case it is the opposite root from the one obtained previously. Cancellation is again the explanation, but the different sign pattern causes the opposite root to be contaminated.
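In double precision, this sign-based choice can be written as a small routine. The sketch below is illustrative (the function name and structure are mine, not the book's); it is applied to the coefficients of the example above:

```python
import math

def roots(a, b, c):
    # Roots of a*x**2 + b*x + c = 0, assuming real roots.
    d = math.sqrt(b * b - 4 * a * c)
    # Choose the sign of the square root to match b, so the two terms
    # add rather than cancel; get the other root from x1 * x2 = c/a.
    q = -0.5 * (b + math.copysign(d, b))
    return q / a, c / q

x1, x2 = roots(0.05010, -98.78, 5.015)
# x1 ≈ 1971.605916 and x2 ≈ 0.05077069387, both fully accurate
```

In effect, the standard formula is used for the root whose numerator adds, and the alternative formula for the other root, so neither suffers cancellation.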
In general, for computing each root we should choose whichever formula avoids this cancellation, depending on the sign of b.

Example 1.11 Finite Difference Approximation. Consider the finite difference approximation to the first derivative

    f′(x) ≈ (f(x + h) − f(x)) / h.

We want h to be small so that the approximation will be accurate, but if h is too small, then fl(x + h) may not differ from fl(x). Even if fl(x + h) ≠ fl(x), we might still have fl(f(x + h)) = fl(f(x)) if f is slowly varying. In any case, we can expect some cancellation in computing the difference f(x + h) − f(x). Thus, there is a trade-off between truncation error and rounding error in choosing the size of h.

If the relative error in the function values is bounded by ε, then the rounding error in the approximate derivative value is bounded by 2ε|f(x)|/h. The Taylor series expansion

    f(x + h) = f(x) + f′(x)h + f″(x)h²/2 + · · ·

gives an estimate of Mh/2 for the truncation error, where M is a bound for |f″(x)|.
The total error is therefore bounded by

    2ε|f(x)|/h + Mh/2,

which is minimized when

    h = 2 √(ε|f(x)|/M).

If we assume that the function values are accurate to machine precision and that f and f″ have roughly the same magnitude, then we obtain the rule of thumb that it is usually best to perturb about half the digits of x by taking

    h ≈ √ε_mach · |x|.

A typical example is shown in Fig. 1.4, where the error in the finite difference approximation for a particular function is plotted as a function of the stepsize h. This computation was done in IEEE single precision with x = 1, and the error indeed reaches a minimum at h ≈ √ε_mach. The error increases for smaller values of h because of rounding error, and increases for larger values of h because of truncation error.

The rounding error can be reduced by working with higher-precision arithmetic. Truncation error can be reduced by using a more accurate formula, such as the centered difference approximation (see Section 8.7.1)

    f′(x) ≈ (f(x + h) − f(x − h)) / (2h).

[Figure 1.4: Error in finite difference approximation as a function of stepsize.]

Example 1.12 Standard Deviation. The mean of a finite sequence of real values xᵢ, i = 1, . . . , n, is defined by

    x̄ = (1/n) Σ_{i=1}^n xᵢ,

and the standard deviation is defined by

    σ = [ 1/(n−1) Σ_{i=1}^n (xᵢ − x̄)² ]^{1/2}.

Use of these formulas requires two passes through the data: one to compute the mean and another to compute the standard deviation.
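The two-pass computation follows directly from the definitions. The fragment below is an illustrative sketch (the function name and the sample data are mine), with the two passes made explicit:

```python
import math

def mean_and_std(xs):
    n = len(xs)
    xbar = sum(xs) / n                      # pass 1: the mean
    ss = sum((x - xbar) ** 2 for x in xs)   # pass 2: squared deviations
    return xbar, math.sqrt(ss / (n - 1))

xbar, sigma = mean_and_std([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
# xbar = 5.0, sigma = sqrt(32/7) ≈ 2.138
```

Because each deviation xᵢ − x̄ is computed before squaring, the quantities summed in the second pass are small, and no single catastrophic subtraction occurs at the end.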
For better efficiency, it is tempting to use the mathematically equivalent formula

    σ = [ 1/(n−1) ( Σ_{i=1}^n xᵢ² − n x̄² ) ]^{1/2}

to compute the standard deviation, since both the sum and the sum of squares can be computed in a single pass through the data.

Unfortunately, the single cancellation at the end of the one-pass formula is often much more damaging numerically than all of the cancellations in the two-pass formula combined. The problem is that the two quantities being subtracted in the one-pass formula are apt to be relatively large and nearly equal, and hence the relative error in the difference may be large (indeed, the result can even be negative, causing the square root to fail).

Example 1.13 Computing Residuals.
Assessing the accuracy of a computation is often difficult if one uses only the same precision as that of the computation itself. Perhaps this observation should not be surprising: if we knew the actual error, we could have used it to obtain a more accurate result in the first place.

As a simple example, suppose we are solving the scalar linear equation ax = b for the unknown x, and we have obtained an approximate solution x̂. As one measure of the quality of our answer, we wish to compute the residual r = b − ax̂. In floating-point arithmetic,

    a ×fl x̂ = ax̂(1 + δ₁)

for some |δ₁| ≤ ε_mach. So

    b −fl (a ×fl x̂) = [b − ax̂(1 + δ₁)](1 + δ₂)
                    = [r − δ₁ax̂](1 + δ₂)
                    = r + δ₂r − δ₁ax̂ − δ₁δ₂ax̂
                    ≈ r + δ₂r − δ₁b.

But δ₁b may be as large as ε_mach · b, which may be as large as r.
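This effect is easy to observe by simulating a lower working precision. In the sketch below (an illustration, not from the book), IEEE single precision plays the role of the working precision and double precision the role of the higher precision:

```python
import struct

def single(x):
    # Round a Python float (double) to the nearest IEEE single-precision value
    return struct.unpack('f', struct.pack('f', x))[0]

a, b = 3.0, 1.0
x_hat = single(b / a)                    # approximate solution of a*x = b

r_work = single(b - single(a * x_hat))   # residual in working (single) precision
r_high = b - a * x_hat                   # same residual in double precision
# r_work is exactly 0.0, but r_high reveals a residual of about -3e-8,
# on the order of eps_mach * b for single precision
```

In working precision the residual is computed as exactly zero, which says nothing useful about x̂; only the higher-precision computation reveals its true size.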
Thus, higher precision may be required to enable a meaningful computation of the residual r.

1.4 Mathematical Software

This book covers a wide range of topics in numerical analysis and scientific computing. We will discuss the essential aspects of each topic but will not have the luxury of examining any topic in great detail.
To be able to solve interesting computational problems, we will often rely on mathematical software written by professionals. Leaving the algorithmic details to such software will allow us to focus on proper problem formulation and interpretation of results. We will consider only the most fundamental algorithms for each type of problem, motivated primarily by the insight to be gained into choosing an appropriate method and using it wisely. Our primary goal is to become intelligent users, rather than creators, of mathematical software.

Before citing some specific sources of good mathematical software, let us summarize the desirable characteristics that such software should possess, in no particular order of importance:

• Reliability: always works correctly for easy problems
• Robustness: usually works for hard problems, but fails gracefully and informatively when it does fail
• Accuracy: produces results as accurate as warranted by the problem and input data, preferably with an estimate of the accuracy achieved
• Efficiency: requires execution time and storage that are close to the minimum possible for the problem being solved
• Maintainability: is easy to understand and modify
• Portability: adapts with little or no change to new computing environments
• Usability: has a convenient and well-documented user interface
• Applicability: solves a broad range of problems

Obviously, these properties often conflict, and it is rare software indeed that satisfies all of them. Nevertheless, this list gives mathematical software users some idea what qualities to look for and developers some worthy goals to strive for.

1.4.1 Mathematical Software Libraries

Several widely available sources of general-purpose mathematical software are listed here. The software listed is written in Fortran unless otherwise noted.
At the end of each chapter of this book, specific routines are listed for given types of problems, both from these general libraries and from more specialized packages. For additional information about available mathematical software, see the URL http://gams.nist.gov on the Internet's World-Wide Web.

• FMM: A collection of software accompanying the book Computer Methods for Mathematical Computations, by Forsythe, Malcolm, and Moler [82].
Available from netlib (see below).
• HSL (Harwell Subroutine Library): A collection of software developed at Harwell Laboratory in England. See URL http://www.cse.clrc.ac.uk/Activity/HSL.
• IMSL (International Mathematical and Statistical Libraries): A commercial product of Visual Numerics Inc., Houston, Texas. A comprehensive library of mathematical software; the full library is available in Fortran, and a subset is available in C. See URL http://www.vni.com.
• KMN: A collection of software accompanying the book Numerical Methods and Software, by Kahaner, Moler, and Nash [142].
• NAG (Numerical Algorithms Group): A commercial product of NAG Inc., Downers Grove, Illinois.