Thus 1.7320 has five significant digits, while 0.0491 has only three. What is meant by correct significant digits in a number that approximates another seems intuitively clear, but a precise definition is problematic, as we explain in a moment. First, note that for a number $\hat{x}$ with $p$ significant digits there are only $p+1$ possible answers to the question "how many correct significant digits does $\hat{x}$ have?" (assuming $\hat{x}$ is not a constant such as 2.0 that is known exactly). Therefore the number of correct significant digits is a fairly crude measure of accuracy in comparison with the relative error. For example, in the following two cases $\hat{x}$ agrees with $x$ to three but not four significant digits by any reasonable definition, yet the relative errors differ by a factor of about 44:

$$x = 1.00000, \quad \hat{x} = 1.00499, \quad |x - \hat{x}|/|x| = 4.99 \times 10^{-3},$$
$$y = 9.00000, \quad \hat{y} = 8.99899, \quad |y - \hat{y}|/|y| = 1.12 \times 10^{-4}.$$

Here is a possible definition of correct significant digits: an approximation $\hat{x}$ to $x$ has $p$ correct significant digits if $\hat{x}$ and $x$ round to the same number to $p$ significant digits.
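As a concrete illustration of this rounding-based definition, here is a minimal Python sketch (ours, not from the book; the function names round_sig and correct_sig_digits are invented for this example):

    import math

    def round_sig(x, p):
        """Round x to p significant decimal digits."""
        if x == 0:
            return 0.0
        e = math.floor(math.log10(abs(x)))   # exponent of the leading digit
        return round(x, p - 1 - e)

    def correct_sig_digits(x, xhat, max_p=10):
        """Largest p such that x and xhat round to the same number
        to every number of digits up to p."""
        p = 0
        while p < max_p and round_sig(x, p + 1) == round_sig(xhat, p + 1):
            p += 1
        return p

    # The two cases from the text: both agree to three but not four
    # significant digits, yet the relative errors differ by a factor ~44.
    for x, xhat in [(1.00000, 1.00499), (9.00000, 8.99899)]:
        rel = abs(x - xhat) / abs(x)
        print(correct_sig_digits(x, xhat), f"{rel:.3e}")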
Rounding is the act of replacing a given number by the nearest $p$ significant digit number, with some rule for breaking ties when there are two nearest. This definition of correct significant digits is mathematically elegant and agrees with intuition most of the time. But consider the numbers

$$x = 0.9949, \qquad \hat{x} = 0.9951.$$

According to the definition, $\hat{x}$ does not have two correct significant digits ($x$ and $\hat{x}$ round to 0.99 and 1.0, respectively) but does have one and three correct significant digits!

A definition of correct significant digits that does not suffer from the latter anomaly states that $\hat{x}$ agrees with $x$ to $p$ significant digits if $|x - \hat{x}|$ is less than half a unit in the $p$th significant digit of $x$. However, this definition implies that 0.123 and 0.127 agree to two significant digits, whereas many people would say that they agree to less than two significant digits.

In summary, while the number of correct significant digits provides a useful way in which to think about the accuracy of an approximation, the relative error is a more precise measure (and is base independent). Whenever we give an approximate answer to a problem we should aim to state an estimate or bound for the relative error.

When $\hat{x}$ and $x$ are vectors the relative error is most often defined with a norm, as $\|x - \hat{x}\|/\|x\|$. For the commonly used norms $\|x\|_\infty := \max_i |x_i|$, $\|x\|_1 := \sum_i |x_i|$, and $\|x\|_2 := (x^T x)^{1/2}$, the inequality $\|x - \hat{x}\|/\|x\| < \frac{1}{2} \times 10^{-p}$ implies that components $\hat{x}_i$ with $|\hat{x}_i| \approx \|\hat{x}\|_\infty$ have about $p$ correct significant decimal digits, but for the smaller components the inequality merely bounds the absolute error.

A relative error that puts the individual relative errors on an equal footing is the componentwise relative error

$$\max_i \frac{|x_i - \hat{x}_i|}{|x_i|},$$

which is widely used in error analysis and perturbation analysis (see Chapter 7, for example).
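The contrast between the two measures is easy to demonstrate. The following sketch (our made-up data, not from the book) uses a vector whose entries differ greatly in magnitude: the normwise error suggests about seven correct digits, but that is true only of the large component:

    # x and xhat are illustrative made-up data.
    x    = [1.0,        1.0e-6]
    xhat = [1.0 + 1e-7, 1.1e-6]   # large entry accurate, small entry 10% off

    norm_err = max(abs(a - b) for a, b in zip(x, xhat)) / max(abs(a) for a in x)
    comp_err = max(abs(a - b) / abs(a) for a, b in zip(x, xhat))

    print(f"normwise (inf-norm) relative error: {norm_err:.1e}")  # ~1e-7
    print(f"componentwise relative error:       {comp_err:.1e}")  # ~1e-1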
As an interesting aside we mention the "tablemaker's dilemma". Suppose you are tabulating the values of a transcendental function such as the sine function and a particular entry is evaluated as 0.124|500000000 correct to a few digits in the last place shown, where the vertical bar follows the final significant digit to be tabulated. Should the final significant digit be 4 or 5? The answer depends on whether there is a nonzero trailing digit and, in principle, we may never be able to answer the question by computing only a finite number of digits.

1.3. Sources of Errors

There are three main sources of errors in numerical computation: rounding, data uncertainty, and truncation.

Rounding errors, which are an unavoidable consequence of working in finite precision arithmetic, are largely what this book is about. The remainder of this chapter gives basic insight into rounding errors and their effects.

Uncertainty in the data is always a possibility when we are solving practical problems. It may arise in several ways: from errors in measuring physical quantities, from errors in storing the data on the computer (rounding errors), or, if the data is itself the solution to another problem, it may be the result of errors in an earlier computation.
The effects of errors in the data are generally easier to understand than the effects of rounding errors committed during a computation, because data errors can be analysed using perturbation theory for the problem at hand, while intermediate rounding errors require an analysis specific to the given method. This book contains perturbation theory for most of the problems considered, for example, in Chapters 7 (linear systems), 19 (the least squares problem), and 20 (underdetermined systems).

Analysing truncation errors, or discretization errors, is one of the major tasks of the numerical analyst. Many standard numerical methods (for example, the trapezium rule for quadrature, Euler's method for differential equations, and Newton's method for nonlinear equations) can be derived by taking finitely many terms of a Taylor series. The terms omitted constitute the truncation error, and for many methods the size of this error depends on a parameter (often called $h$, "the stepsize") whose appropriate value is a compromise between obtaining a small error and a fast computation.
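This trade-off is easy to observe for a forward-difference approximation $f'(x) \approx (f(x+h) - f(x))/h$, whose truncation error grows like $h$ while its rounding error grows like $u/h$. A rough Python illustration (our example, not from the book):

    import math

    x = 1.0
    exact = math.cos(x)                          # derivative of sin at x

    for k in range(1, 16):
        h = 10.0 ** (-k)
        fd = (math.sin(x + h) - math.sin(x)) / h   # forward difference
        print(f"h = 1e-{k:02d}   error = {abs(fd - exact):.2e}")

    # The total error is smallest near h ~ sqrt(u) ~ 1e-8; for smaller h
    # the rounding error in the cancellation sin(x+h) - sin(x) dominates.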
Because the emphasis of this book is on finite precision computation, with virtually no mention of truncation errors, it would be easy for the reader to gain the impression that the study of numerical methods is dominated by the study of rounding errors. This is certainly not the case. Trefethen explains it well when he discusses how to define numerical analysis [1016, 1992]:

    Rounding errors and instability are important, and numerical analysts will always be the experts in these subjects and at pains to ensure that the unwary are not tripped up by them.
    But our central mission is to compute quantities that are typically uncomputable, from an analytic point of view, and to do it with lightning speed.

In this quotation "uncomputable" means that approximations are necessary, and thus Trefethen's point is that developing good approximations is a more fundamental task than analysing the effects of rounding errors on those approximations.

A possible way to avoid rounding and truncation errors (but not data errors) is to try to solve a problem using a symbolic manipulation package, such as Maple [199, 1991] or Mathematica [1109, 1991].
Indeed, we have used this approach to compute "exact answers" in some of our numerical experiments. While we acknowledge the value of symbolic manipulation as part of the toolkit of the scientific problem solver, we do not study it in this book. (Maple is a registered trademark of Waterloo Maple Software. Mathematica is a registered trademark of Wolfram Research Inc.)
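We do not use a symbolic package here, but as a minimal stand-in for the idea of computing exact answers, Python's standard fractions module performs exact rational arithmetic (our illustration, not from the book):

    from fractions import Fraction

    # Exact rational arithmetic: 1/10 + 2/10 - 3/10 is exactly zero.
    exact = Fraction('0.1') + Fraction('0.2') - Fraction('0.3')
    fp    = 0.1 + 0.2 - 0.3

    print(exact)   # 0
    print(fp)      # 5.551115123125783e-17: pure rounding error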
1.4. Precision Versus Accuracy

The terms accuracy and precision are often confused or used interchangeably, but it is worth making a distinction between them.
Accuracy refers to the absolute or relative error of an approximate quantity. Precision is the accuracy with which the basic arithmetic operations +, -, *, / are performed, and for floating point arithmetic is measured by the unit roundoff $u$ (see (1.1)). Accuracy and precision are the same for the scalar computation $c = a*b$, but accuracy can be much worse than precision in the solution of a linear system of equations, for example.
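To see how accuracy can fall far below precision, consider a nearly singular 2-by-2 system solved by Cramer's rule. This sketch (our made-up example, not from the book) compares double precision with an exact rational computation:

    from fractions import Fraction

    eps = 1e-12
    # Nearly singular system [[1, 1], [1, 1+eps]] z = [1, 0], solved by
    # Cramer's rule.  Exact solution: z2 = -1/eps.
    det = (1.0 + eps) - 1.0          # catastrophic cancellation here
    z2 = -1.0 / det

    # Repeat exactly, treating the stored double eps as the exact datum.
    E = Fraction(eps)                # exact value of the double 1e-12
    z2_exact = -1 / ((1 + E) - 1)

    rel_err = abs(Fraction(z2) - z2_exact) / abs(z2_exact)
    print(float(rel_err))            # ~1e-4, even though u ~ 1.1e-16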
It is important to realize that accuracy is not limited by precision, at least in theory. This may seem surprising, and may even appear to contradict many of the results in this book. However, arithmetic of a given precision can be used to simulate arithmetic of arbitrarily high precision, as explained in §25.9. (The catch is that such simulation is too expensive to be of practical use for routine computation.) In all our error analyses there is an implicit assumption that the given arithmetic is not being used to simulate arithmetic of a higher precision.
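One simple instance of such simulation is compensated (Kahan) summation, which uses ordinary double precision operations to carry a correction term for the bits lost in each addition. A minimal sketch (ours, not from the book):

    import math

    def kahan_sum(xs):
        """Compensated summation: s carries the sum, c the lost low-order part."""
        s = 0.0
        c = 0.0
        for x in xs:
            y = x - c          # apply the correction from the previous step
            t = s + y
            c = (t - s) - y    # the part of y that failed to reach s
            s = t
        return s

    xs = [0.1] * 10**5
    ref = math.fsum(xs)                    # correctly rounded reference sum
    print(abs(sum(xs) - ref))              # naive sum: visible error
    print(abs(kahan_sum(xs) - ref))        # compensated: typically 0 or ~1 ulp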
1.5. Backward and Forward Errors

Suppose that an approximation $\hat{y}$ to $y = f(x)$ is computed in an arithmetic of precision $u$, where $f$ is a real scalar function of a real scalar variable. How should we measure the "quality" of $\hat{y}$?

In most computations we would be happy with a tiny relative error, but this cannot always be achieved. Instead of focusing on the relative error of $\hat{y}$ we can ask "for what set of data have we actually solved our problem?", that is, for what $\Delta x$ do we have $\hat{y} = f(x + \Delta x)$? In general, there may be many such $\Delta x$, so we should ask for the smallest one. The value of $|\Delta x|$ (or $\min |\Delta x|$), possibly divided by $|x|$, is called the backward error. The absolute and relative errors of $\hat{y}$ are called forward errors, to distinguish them from the backward error. Figure 1.1 illustrates these concepts.

The process of bounding the backward error of a computed solution is called backward error analysis, and its motivation is twofold. First, it interprets rounding errors as being equivalent to perturbations in the data.
The data frequently contains uncertainties due to previous computations or errors committed in storing numbers on the computer. If the backward error is no larger than these uncertainties then the computed solution can hardly be criticized; it may be the solution we are seeking, for all we know. The second attraction of backward error analysis is that it reduces the question of bounding or estimating the forward error to perturbation theory, which for many problems is well understood (and only has to be developed once, for the given problem, and not for each method). We discuss perturbation theory in the next section.

[Figure 1.1. Backward and forward errors for $y = f(x)$. Solid line = exact; dotted line = computed. The diagram maps a point in the input space to the output space.]
A method for computing $y = f(x)$ is called backward stable if, for any $x$, it produces a computed $\hat{y}$ with a small backward error, that is, $\hat{y} = f(x + \Delta x)$ for some small $\Delta x$. The definition of "small" will be context dependent. In general, a given problem has several methods of solution, some of which are backward stable and some not.

As an example, assumption (1.1) says that the computed result of the operation $x \pm y$ is the exact result for perturbed data $x(1 + \delta)$ and $y(1 + \delta)$ with $|\delta| \le u$; thus addition and subtraction are, by assumption, backward stable operations.

Most routines for computing the cosine function do not satisfy $\hat{y} = \cos(x + \Delta x)$ with a relatively small $\Delta x$, but only the weaker relation $\hat{y} + \Delta y = \cos(x + \Delta x)$, with relatively small $\Delta y$ and $\Delta x$.
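To make these definitions concrete, here is a small sketch (ours, not from the book) that measures both errors for $\hat{y} = fl(\sqrt{x})$. The forward error compares $\hat{y}$ with a high-precision reference; for the backward error, $\hat{y}$ is the exact square root of $x + \Delta x$ with $\Delta x = \hat{y}^2 - x$, which rational arithmetic evaluates exactly:

    import math
    from fractions import Fraction
    from decimal import Decimal, getcontext

    getcontext().prec = 40                    # high-precision reference arithmetic

    x = 2.0
    yhat = math.sqrt(x)                       # computed solution

    # Forward error: distance from the true sqrt(2).
    ref = Decimal(2).sqrt()
    forward = abs(Decimal(yhat) - ref) / ref

    # Backward error: yhat = sqrt(x + dx) exactly, with dx = yhat^2 - x.
    dx = Fraction(yhat) ** 2 - Fraction(x)
    backward = abs(dx) / Fraction(x)

    print(f"relative forward error:  {float(forward):.2e}")   # below u
    print(f"relative backward error: {float(backward):.2e}")  # about 1e-16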