Conte, de Boor - Elementary Numerical Analysis: An Algorithmic Approach
Loss of significant digits is therefore dangerous only if we wish to keep the relative error small. Such loss can often be avoided by anticipating its occurrence. Consider, for example, the evaluation of the function

    f(x) = 1 - cos x

in six-decimal-digit arithmetic. Since cos x ≈ 1 for x near zero, there will be loss of significant digits for x near zero if we calculate f(x) by first finding cos x and then subtracting the calculated value from 1. For we cannot calculate cos x to more than six digits, so that the error in the calculated value may be as large as 5 · 10^-7, hence as large as, or larger than, f(x) for x near zero. If one wishes to compute the value of f(x) near zero to about six significant digits using six-digit arithmetic, one would have to use an alternative formula for f(x), such as

    f(x) = sin^2 x / (1 + cos x)

which can be evaluated quite accurately for small x; else, one could make use of the Taylor expansion (see Sec. 1.7) for f(x),

    f(x) = x^2/2 - x^4/24 + x^6/720 - · · ·

which shows, for example, that for |x| ≤ 0.1, (x^2/2)(1 - x^2/12) agrees with f(x) to at least six significant digits.
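The danger, and the cure, are easy to observe on a machine. The following minimal Python sketch imitates six-decimal-digit arithmetic with the standard decimal module; the test point x = 0.001 and the details of the simulated arithmetic are illustrative choices, not taken from the text.

import math
from decimal import Decimal, Context

ctx = Context(prec=6)                      # six significant decimal digits

def six(y):
    # round a double-precision value to six significant digits
    return ctx.create_decimal(repr(y))

x = 0.001

cosx = six(math.cos(x))                            # cos x, correct to six digits
direct = ctx.subtract(Decimal(1), cosx)            # 1 - cos x: severe cancellation

sinx = six(math.sin(x))
rewritten = ctx.divide(ctx.multiply(sinx, sinx),
                       ctx.add(Decimal(1), cosx))  # sin^2 x / (1 + cos x)

taylor = six((x * x / 2.0) * (1.0 - x * x / 12.0)) # truncated Taylor series

print("direct   :", direct)                  # no correct digits remain
print("rewritten:", rewritten)
print("taylor   :", taylor)
print("reference:", 1.0 - math.cos(x))       # double precision has digits to spare here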
Another example is provided by the problem of finding the roots of the quadratic equation

    a x^2 + b x + c = 0                                            (1.16)

We know from algebra that the roots are given by the quadratic formula

    x = ( -b ± sqrt(b^2 - 4ac) ) / (2a)                            (1.17)

Let us assume that b^2 - 4ac > 0, that b > 0, and that we wish to find the root of smaller absolute value using (1.17); i.e.,

    x = ( -b + sqrt(b^2 - 4ac) ) / (2a)                            (1.18)

If 4ac is small compared with b^2, then sqrt(b^2 - 4ac) will agree with b to several places. Hence, given that sqrt(b^2 - 4ac) will be calculated correctly only to as many places as are used in the calculations, it follows that the numerator of (1.18), and therefore the calculated root, will be accurate to fewer places than were used during the calculation.
To be specific, take the equation (1.19). Using (1.18) and five-decimal-digit floating-point chopped arithmetic, we calculate a value for this root which agrees with the correct root in only a few of the five digits carried. Here too, the loss of significant digits can be avoided by using an alternative formula for the calculation of the absolutely smaller root, viz.,

    x = -2c / ( b + sqrt(b^2 - 4ac) )                              (1.20)

Using this formula, and five-decimal-digit arithmetic, we calculate a root which is accurate to five digits.
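The comparison is easy to reproduce. In the Python sketch below, the decimal module imitates five-decimal-digit chopped arithmetic; the coefficients a, b, c are only an illustrative choice with 4ac small compared with b^2, not the coefficients of equation (1.19).

from decimal import Decimal, Context, ROUND_DOWN

ctx = Context(prec=5, rounding=ROUND_DOWN)    # five-decimal-digit chopped arithmetic

# illustrative coefficients with 4ac small compared with b^2
# (a stand-in; NOT the coefficients of equation (1.19))
a, b, c = Decimal(1), Decimal(1000), Decimal(1)

disc = ctx.subtract(ctx.multiply(b, b),
                    ctx.multiply(Decimal(4), ctx.multiply(a, c)))    # b^2 - 4ac
root = ctx.sqrt(disc)

x_118 = ctx.divide(ctx.add(-b, root), ctx.multiply(Decimal(2), a))   # formula (1.18)
x_120 = ctx.divide(ctx.multiply(Decimal(-2), c), ctx.add(b, root))   # formula (1.20)

exact = (-b + (b * b - 4 * a * c).sqrt()) / (2 * a)   # default 28-digit arithmetic
print("via (1.18):", x_118, "  via (1.20):", x_120, "  exact:", exact)

With the five digits carried, (1.18) delivers a root that is wrong by a factor of five, while (1.20) gives it correctly to all five digits.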
Once an error is committed, it contaminates subsequent results. This error propagation through subsequent calculations is conveniently studied in terms of the two related concepts of condition and instability.

The word condition is used to describe the sensitivity of the function value f(x) to changes in the argument x. The condition is usually measured by the maximum relative change in the function value f(x) caused by a unit relative change in the argument. In a somewhat informal formula,

    condition of f at x = max over x* near x of  ( |f(x*) - f(x)| / |f(x)| ) / ( |x* - x| / |x| )
                        ≈ | f'(x) x / f(x) |                       (1.21)

The larger the condition, the more ill-conditioned the function is said to be.
Here we have made use of the fact (see Sec. 1.7) that

    f(x*) - f(x) ≈ f'(x)(x* - x)

i.e., the change in argument from x to x* changes the function value by approximately f'(x)(x* - x).

If, for example, f(x) = sqrt(x), then f'(x) = 1/(2 sqrt(x)); hence the condition of f is, approximately,

    | f'(x) x / f(x) | = | x / (2 sqrt(x) sqrt(x)) | = 1/2

This says that taking square roots is a well-conditioned process since it actually reduces the relative error. By contrast, if

    f(x) = 1/(1 - x^2)

then

    f'(x) = 2x/(1 - x^2)^2

so that

    | f'(x) x / f(x) | = | 2x^2 / (1 - x^2) |

and this number can be quite large for |x| near 1. Thus, for x near 1 or -1, this function is quite ill-conditioned. It very much magnifies relative errors in the argument there.
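Such condition estimates can be checked numerically by feeding in a slightly perturbed argument and forming the quotient of relative changes, as in (1.21). A small Python sketch of this check (the step size h is an arbitrary illustrative choice):

import math

def condition(f, x, h=1e-6):
    # quotient of relative changes, as in (1.21), with x* = x(1 + h)
    xstar = x * (1.0 + h)
    return abs((f(xstar) - f(x)) / f(x)) / abs((xstar - x) / x)

# square root: condition about 1/2
print(condition(math.sqrt, 2.0))

# 1/(1 - x^2): condition about 2x^2/|1 - x^2|, huge for |x| near 1
print(condition(lambda x: 1.0 / (1.0 - x * x), 0.999))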
The related notion of instability describes the sensitivity of a numerical process for the calculation of f(x) from x to the inevitable rounding errors committed during its execution in finite precision arithmetic. The precise effect of these errors on the accuracy of the computed value for f(x) is hard to determine except by actually carrying out the computations for particular finite precision arithmetics and comparing the computed answer with the exact answer. But it is possible to estimate these effects roughly by considering the rounding errors one at a time. This means we look at the individual computational steps which make up the process. Suppose there are n such steps. Denote by xi the output from the ith such step, and take x0 = x. Such an xi then serves as input to one or more of the later steps and, in this way, influences the final answer xn = f(x). Denote by fi the function which describes the dependence of the final answer on the intermediate result xi.
In particular, f0 is just f. Then the total process is unstable to the extent that one or more of these functions fi is ill-conditioned. More precisely, the process is unstable to the extent that one or more of the fi's has a much larger condition than f = f0 has. For it is the condition of fi which gauges the relative effect of the inevitable rounding error incurred at the ith step on the final answer.

To give a simple example, consider the function

    f(x) = sqrt(x + 1) - sqrt(x)

for "large" x, say for x = 12345. Its condition there is

    | f'(x) x / f(x) | = | x / (2 sqrt(x) sqrt(x + 1)) | ≈ 1/2

which is quite good. But, if we calculate f(12345) in six-decimal arithmetic, we find

    f(12345) = 111.113 - 111.108 = 0.00500000

while, actually,

    f(12345) = 0.00450003 · · ·

So our calculated answer is in error by 10 percent. We analyze the computational process.
It consists of the following four computational steps:

    x1 = x0 + 1
    x2 = sqrt(x1)
    x3 = sqrt(x0)
    x4 = x2 - x3                                                   (1.22)

Now consider, for example, the function f3, i.e., the function which describes how the final answer x4 depends on x3. We have

    f3(t) = x2 - t

hence its condition is, approximately,

    | f3'(t) t / f3(t) | = | t / (x2 - t) |

This number is usually near 1, i.e., f3 is usually well-conditioned, except when t is near x2. In this latter case, f3 can be quite badly conditioned. For example, in our particular case, t = x3 ≈ 111.108 while x2 - t = f(12345) ≈ 0.0045, so the condition is about 25,000, or more than 40,000 times as big as the condition of f itself.

We conclude that the process described in (1.22) is an unstable way to evaluate f. Of course, if you have read the beginning of this section carefully, then you already know a stable way to evaluate this function, namely by the equivalent formula

    f(x) = 1 / ( sqrt(x + 1) + sqrt(x) )

In six-decimal arithmetic, this gives

    f(12345) = 1/222.221 = 0.00450002

which is in error by only 0.0003 percent.
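Both ways of evaluating f are easy to imitate on a machine. The Python sketch below carries out process (1.22) and the stable alternative based on the equivalent formula above in six-significant-digit decimal arithmetic, using the standard decimal module as a stand-in for the six-decimal arithmetic of the text (whose exact rounding rule may differ slightly):

from decimal import Decimal, Context

ctx = Context(prec=6)                  # six significant decimal digits

x0 = Decimal(12345)

# unstable process (1.22): difference of two nearly equal square roots
x1 = ctx.add(x0, Decimal(1))
x2 = ctx.sqrt(x1)
x3 = ctx.sqrt(x0)
unstable = ctx.subtract(x2, x3)

# stable alternative: the equivalent formula 1/(sqrt(x + 1) + sqrt(x))
x4 = ctx.add(x2, x3)
stable = ctx.divide(Decimal(1), x4)

# high-precision reference value (default 28-digit arithmetic)
reference = Decimal(12346).sqrt() - Decimal(12345).sqrt()

print("unstable:", unstable, "  stable:", stable, "  reference:", reference)
print("relative errors:",
      abs(unstable - reference) / reference,
      abs(stable - reference) / reference)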
The computational process is(1.23)Here, for example, f3(t) = 1/(x2 + t), and the condition of this function is,approximately,which is the case here. Thus, the condition of f3 is quite good; itforis as good as that of f itself.We will meet other examples of large instability, particularly in thediscussion of the numerical solution of differential equations.EXERCISES1.4-l Find the root of smallest magnitude of the equationusing formulas (1.18) and (1.20).
EXERCISES

1.4-1 Find the root of smallest magnitude of the equation . . . using formulas (1.18) and (1.20). Work in floating-point arithmetic using a four- (decimal-)place mantissa.

1.4-2 Estimate the error in evaluating . . . around x = 2 if the absolute error in x is 10^-6.

1.4-3 Find a way to calculate . . . correctly to the number of digits used when x is near zero for (a)-(c), and x very much larger than . . . for (d).

1.4-4 Assuming a computer with a four-decimal-place mantissa, add the following numbers first in ascending order (from smallest to largest) and then in descending order.
In doing so, round off the partial sums. Compare your results with the correct sum x = 0.107101023 · 10^5.

1.4-5 A dramatically unstable way to calculate f(x) = e^x for negative x is provided by its Taylor series (1.36). Calculate e^-12 by evaluating the Taylor series (1.36) at x = -12 and compare with the accurate value e^-12 = 0.00000 61442 12354 · · · ; a short numerical sketch of this instability follows the exercises. [Hint: By (1.36), the difference between e^x and the partial sum

    1 + x + x^2/2! + · · · + x^N/N!

is less than the next term, x^(N+1)/(N+1)!, in absolute value, in case x is negative.
So, it would be all right to sum the series until the term to be added is smaller in absolute value than the accuracy desired in the answer.]

1.4-6 Explain the result of Exercise 1.4-5 by comparing the condition of f(x) = e^x near x = -12 with the condition of some of the functions fi involved in the computational process. Then find a stable way to calculate e^-12 from the Taylor series (1.36). (Hint: e^-x = 1/e^x.)
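The instability described in Exercise 1.4-5 is easy to observe. The Python sketch below sums the Taylor series for e^x in six-digit decimal arithmetic (an illustrative stand-in for a short machine word) and, following the hint to Exercise 1.4-6, also forms 1/e^12 in the same arithmetic.

from decimal import Decimal, Context

def exp_by_series(x, prec):
    # sum 1 + x + x^2/2! + ... carrying `prec` significant digits;
    # the stopping tolerance below is an arbitrary illustrative choice
    ctx = Context(prec=prec)
    term = Decimal(1)
    total = Decimal(1)
    n = 0
    while abs(term) > Decimal("1e-12"):
        n += 1
        term = ctx.divide(ctx.multiply(term, x), Decimal(n))   # x^n / n!
        total = ctx.add(total, term)
    return total

direct = exp_by_series(Decimal(-12), 6)   # ruined by cancellation among terms near 1.9e4
stable = Context(prec=6).divide(Decimal(1), exp_by_series(Decimal(12), 6))
print("series at x = -12, six-digit arithmetic:", direct)
print("1/(series at x = +12), six-digit       :", stable)
print("accurate value                         : 0.0000061442...")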
1.5 COMPUTATIONAL METHODS FOR ERROR ESTIMATION

This chapter is intended to make the student aware of the possible sources of error and to point out some techniques which can be used to avoid these errors. In appraising computer results, such errors must be taken into account. Realistic estimates of the total error are difficult to make in a practical problem, and an adequate mathematical theory is still lacking. An appealing idea is to make use of the computer itself to provide us with such estimates. Various methods of this type have been proposed.
We shall discuss briefly five of them. The simplest method makes use of double precision. Here one simply solves the same problem twice, once in single precision and once in double precision. From the difference in the results an estimate of the total round-off error can then be obtained (assuming that all other errors are less significant).
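A minimal sketch of this double-precision idea, applied to the f(x) = sqrt(x + 1) - sqrt(x) example of Sec. 1.4: here the decimal module stands in for "single" and "double" precision, and the precisions 6 and 12 are illustrative choices.

from decimal import Decimal, Context

def f(x, prec):
    # evaluate f(x) = sqrt(x + 1) - sqrt(x) carrying `prec` significant digits
    ctx = Context(prec=prec)
    return ctx.subtract(ctx.sqrt(ctx.add(x, Decimal(1))), ctx.sqrt(x))

x = Decimal(12345)
single = f(x, 6)                   # "single precision" run
double = f(x, 12)                  # "double precision" run
reference = f(x, 30)               # high-precision check value

estimate = abs(single - double)    # estimated round-off error in `single`
actual = abs(single - reference)   # actual error in `single`
print("single:", single, "  double:", double)
print("estimated error:", estimate, "  actual error:", actual)

The estimate agrees well with the actual error here because the higher-precision run is essentially free of the cancellation error that dominates the lower-precision one.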