Turing is remembered through the Turing Award of the Association for Computing Machinery (ACM), which has been awarded yearly since 1966 [3, 1987]. For more about Turing, read the superb biography by Hodges [575, 1983], described by a reviewer as "one of the finest pieces of scholarship to appear in the history of computing" [182, 1984].

Wilkinson, like Turing a Cambridge-trained mathematician, was Turing's assistant at the National Physical Laboratory. When Turing left, Wilkinson managed the group that built the Pilot ACE, contributing to the design and construction of the machine and its software. Subsequently, he used the machine to develop and study a variety of numerical methods.
He developed backward error analysis in the 1950s and 1960s, publishing the books Rounding Errors in Algebraic Processes [1088, 1963]† (REAP) and The Algebraic Eigenvalue Problem [1089, 1965]‡ (AEP), both of which rapidly achieved the status of classics. (AEP was reprinted in paperback in 1988 and, after being out of print for many years, REAP is now also available in paperback.) The AEP was described by the late Professor Leslie Fox as "almost certainly the most important and widely read title in numerical analysis". Wilkinson also contributed greatly to the development of mathematical software.

† REAP has been translated into Polish [1091, 1967] and German [1093, 1969].
‡ AEP has been translated into Russian [1094, 1970].
The volume Handbook for Automatic Computation, Volume II: Linear Algebra [1102, 1971], co-edited with Reinsch, contains high-quality, properly documented software and has strongly influenced subsequent software projects such as the NAG Library, EISPACK, LINPACK, and LAPACK.

Wilkinson received the 1970 Turing Award. In his Turing Award lecture he described life with Turing at the National Physical Laboratory in the 1940s [1096, 1971].

Wilkinson is remembered through SIAM's James H. Wilkinson Prize in Numerical Analysis and Scientific Computing, awarded every 4 years; the Wilkinson Prize for Numerical Software, awarded by Argonne National Laboratory, the National Physical Laboratory, and the Numerical Algorithms Group; and the Wilkinson Fellowship in Scientific Computing at Argonne National Laboratory. For more about Wilkinson see the biographical memoir by Fox [403, 1987], Fox's article [402, 1978], Parlett's essay [821, 1990], the prologue and epilogue of the proceedings [252, 1990] of a conference held in honour of Wilkinson at the National Physical Laboratory in 1987,
and the tributes in [23, 1987]. Lists of Wilkinson's publications are given in [403, 1987] and in the special volume of the journal Linear Algebra and its Applications (88/89, April 1987) published in his memory.

Chapter 1
Principles of Finite Precision Computation

Numerical precision is the very soul of science.
-SIR D'ARCY WENTWORTH THOMPSON, On Growth and Form (1942)

There will always be a small but steady demand for error-analysts to . . .
expose bad algorithms' big errors and, more important, supplant bad algorithms with provably good ones.
-WILLIAM M. KAHAN, Interval Arithmetic Options in the Proposed IEEE Floating Point Arithmetic Standard (1980)

Since none of the numbers which we take out from logarithmic and trigonometric tables admit of absolute precision, but are all to a certain extent approximate only, the results of all calculations performed by the aid of these numbers can only be approximately true . . .
It may happen, that in special cases the effect of the errors of the tables is so augmented that we may be obliged to reject a method, otherwise the best, and substitute another in its place.
-CARL FRIEDRICH GAUSS¹, Theoria Motus (1809)

Backward error analysis is no panacea; it may explain errors but not excuse them.
-HEWLETT-PACKARD, HP-15C Advanced Functions Handbook (1982)

¹ Cited in Goldstine [461, 1977, p. 258].

This book is concerned with the effects of finite precision arithmetic on numerical algorithms², particularly those in numerical linear algebra. Central to any understanding of high-level algorithms is an appreciation of the basic concepts of finite precision arithmetic.
This opening chapter briskly imparts the necessary background material. Various examples are used for illustration, some of them familiar (such as the quadratic equation) but several less well known. Common misconceptions and myths exposed during the chapter are highlighted towards the end, in §1.19.

This chapter has few prerequisites and few assumptions are made about the nature of the finite precision arithmetic (for example, the base, number of digits, or mode of rounding, or even whether it is floating point arithmetic).
The second chapter deals in detail with the specifics of floating point arithmetic.

A word of warning: some of the examples from §1.12 onward are special ones chosen to illustrate particular phenomena. You may never see in practice the extremes of behaviour shown here. Let the examples show you what can happen, but do not let them destroy your confidence in finite precision arithmetic!

1.1. Notation and Background

We describe the notation used in the book and briefly set up definitions needed for this chapter. Generally, we use

    capital letters                   A, B, C, ∆, Λ           for matrices,
    subscripted lower case letters    a_ij, b_ij, c_ij, δ_ij   for matrix elements,
    lower case letters                x, y, z, c, g, h         for vectors,
    lower case Greek letters          α, β, γ, φ, ρ            for scalars,

following the widely used convention originally introduced by Householder [587, 1964].

The vector space of all real m × n matrices is denoted by $\mathbb{R}^{m\times n}$ and the vector space of real n-vectors by $\mathbb{R}^n$.
Similarly, $\mathbb{C}^{m\times n}$ denotes the vector space of complex m × n matrices.

Algorithms are expressed using a pseudocode based on the MATLAB language [232, 1988], [735, 1992]. Comments begin with the % symbol.

Submatrices are specified with the colon notation, as used in MATLAB and Fortran 90: A(p:q, r:s) denotes the submatrix of A formed by the intersection of rows p to q and columns r to s. As a special case, a lone colon as the row or column specifier means to take all entries in that row or column; thus A(:,j) is the jth column of A and A(i,:) the ith row. The values taken by an integer variable are also described using the colon notation: "i = 1:n" means the same as "i = 1, 2, . . . , n".
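For instance, the colon notation can be tried directly in MATLAB (this short snippet is not from the book; the particular matrix and indices are arbitrary choices):

    A = magic(4);       % an arbitrary 4-by-4 example matrix
    B = A(2:3, 1:2);    % submatrix formed from rows 2 to 3 and columns 1 to 2
    c = A(:, 3);        % a lone colon takes every row: the 3rd column of A
    r = A(2, :);        % similarly, the 2nd row of A
    i = 1:4;            % the integer values 1, 2, 3, 4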
² For the purposes of this book an algorithm is a MATLAB program; cf. Smale [924, 1990].

Evaluation of an expression in floating point arithmetic is denoted $\mathrm{fl}(\cdot)$, and we assume that the basic arithmetic operations op = +, -, *, / satisfy

$$\mathrm{fl}(x \,\mathrm{op}\, y) = (x \,\mathrm{op}\, y)(1 + \delta), \qquad |\delta| \le u. \qquad (1.1)$$

Here, u is the unit roundoff (or machine precision), which is typically of order $10^{-8}$ or $10^{-16}$ in single and double precision computer arithmetic, respectively, and between $10^{-10}$ and $10^{-12}$ on pocket calculators. For more on floating point arithmetic see Chapter 2.
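To see the model (1.1) in action, here is a minimal MATLAB sketch (not from the book) that recovers the perturbation δ for a single addition in IEEE double precision arithmetic. It uses the classical trick for computing the exact rounding error of an addition, which is valid with round to nearest when |a| ≥ |b|:

    a = 1; b = 1e-17;    % arbitrary example operands with |a| >= |b|
    s = a + b;           % the computed fl(a+b)
    e = b - (s - a);     % the exact rounding error of the addition
    delta = -e/(s + e);  % the delta in fl(a+b) = (a+b)(1 + delta)
    u = eps/2;           % unit roundoff, 2^(-53), about 1.1e-16
    abs(delta) <= u      % displays 1 (true), consistent with (1.1)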
Computed quantities (and, in this chapter only, arbitrary approximations) wear a hat. Thus $\hat{x}$ denotes the computed approximation to $x$.

Definitions are often (but not always) indicated by ":=" or "=:", with the colon next to the object being defined.

We make use of the floor and ceiling functions: $\lfloor x \rfloor$ is the largest integer less than or equal to $x$, and $\lceil x \rceil$ is the smallest integer greater than or equal to $x$.

The normal distribution with mean $\mu$ and variance $\sigma^2$ is denoted by $N(\mu, \sigma^2)$.

We measure the cost of algorithms in flops. A flop is an elementary floating point operation: +, -, /, or *.
We normally state only the highest-order terms of flop counts. Thus, when we say that an algorithm for n × n matrices requires $2n^3/3$ flops, we really mean $2n^3/3 + O(n^2)$ flops.

Other definitions and notation are introduced when needed. Two topics, however, do not fit comfortably into the main text and are described in Appendix B: the singular value decomposition (SVD) and M-matrices.

All our numerical experiments were carried out either in MATLAB 4.2 [735, 1992], sometimes in conjunction with the Symbolic Math Toolbox [204, 1993], or with the Salford Software/Numerical Algorithms Group FTN90³ Fortran 90 compiler, Version 1.2 [888, 1993].
Whenever we say a computation was "done in Fortran 90" we are referring to the use of this compiler. All the results quoted were obtained on a 486DX workstation, unless otherwise stated, but many of the experiments were repeated on a Sun SPARCstation, using the NAGWare⁴ FTN90 compiler [785, 1992]. Both machines use IEEE standard floating point arithmetic and the unit roundoff is $u = 2^{-53} \approx 1.1 \times 10^{-16}$ in MATLAB and in double precision in Fortran 90. (Strictly speaking, in Fortran 90 we should not use the terminology single and double precision but should refer to the appropriate KIND parameters; see, e.g., Metcalf and Reid [749, 1990, §2.6].
However, these terms are vivid and unambiguous in IEEE arithmetic, so we use them throughout the book.)

³ FTN90 is a joint trademark of Salford Software Ltd. and The Numerical Algorithms Group Ltd.
⁴ NAGWare is a trademark of The Numerical Algorithms Group Ltd.

1.2. Relative Error and Significant Digits

Let $\hat{x}$ be an approximation to a real number $x$. The most useful measures of the accuracy of $\hat{x}$ are its absolute error

$$E_{\mathrm{abs}}(\hat{x}) = |x - \hat{x}|,$$

and its relative error

$$E_{\mathrm{rel}}(\hat{x}) = \frac{|x - \hat{x}|}{|x|}$$

(which is undefined if $x = 0$).
An equivalent definition of relative error is $E_{\mathrm{rel}}(\hat{x}) = |\rho|$, where $\hat{x} = x(1 + \rho)$. Some authors omit the absolute values from these definitions. When the sign is important we will simply talk about "the error" $\hat{x} - x$.

In scientific computation, where answers to problems can vary enormously in magnitude, it is usually the relative error that is of interest, because it is scale independent: scaling $x \to \alpha x$ and $\hat{x} \to \alpha\hat{x}$ leaves $E_{\mathrm{rel}}(\hat{x})$ unchanged.

Relative error is connected with the notion of correct significant digits (or correct significant figures). The significant digits in a number are the first nonzero digit and all succeeding digits.
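As a small illustration (not taken from the book; the choice of x and its approximation is arbitrary), both error measures, and the scale independence of the relative error, can be computed in MATLAB:

    x    = exp(1);                           % the number being approximated, e = 2.71828...
    xhat = 2.7183;                           % an approximation correct to five significant digits
    E_abs = abs(x - xhat)                    % absolute error, about 1.8e-5
    E_rel = abs(x - xhat)/abs(x)             % relative error, about 6.7e-6
    alpha = 1000;                            % rescale x and xhat by the same factor
    abs(alpha*x - alpha*xhat)/abs(alpha*x)   % agrees with E_rel (to within roundoff)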