Chapter 1. Preliminaries

1.3 Error, Accuracy, and Stability

Computers store numbers not with infinite precision but rather in some approximation that can be packed into a fixed number of bits (binary digits) or bytes (groups of 8 bits). Almost all computers allow the programmer a choice among several different such representations or data types. Data types can differ in the number of bits utilized (the wordlength), but also in the more fundamental respect of whether the stored number is represented in fixed-point (int or long) or floating-point (float or double) format.

A number in integer representation is exact. Arithmetic between numbers in integer representation is also exact, with the provisos that (i) the answer is not outside the range of (usually, signed) integers that can be represented, and (ii) that division is interpreted as producing an integer result, throwing away any integer remainder.

In floating-point representation, a number is represented internally by a sign bit s (interpreted as plus or minus), an exact integer exponent e, and an exact positive integer mantissa M.
Taken together these represent the number

    s × M × B^(e−E)    (1.3.1)

where B is the base of the representation (usually B = 2, but sometimes B = 16), and E is the bias of the exponent, a fixed integer constant for any given machine and representation. An example is shown in Figure 1.3.1.

Several floating-point bit patterns can represent the same number. If B = 2, for example, a mantissa with leading (high-order) zero bits can be left-shifted, i.e., multiplied by a power of 2, if the exponent is decreased by a compensating amount. Bit patterns that are "as left-shifted as they can be" are termed normalized. Most computers always produce normalized results, since these don't waste any bits of the mantissa and thus allow a greater accuracy of the representation. Since the high-order bit of a properly normalized mantissa (when B = 2) is always one, some computers don't store this bit at all, giving one extra bit of significance.

Arithmetic among numbers in floating-point representation is not exact, even if the operands happen to be exactly represented (i.e., have exact values in the form of equation 1.3.1).
For example, two floating numbers are added by first right-shifting (dividing by two) the mantissa of the smaller (in magnitude) one, simultaneously increasing its exponent, until the two operands have the same exponent. Low-order (least significant) bits of the smaller operand are lost by this shifting. If the two operands differ too greatly in magnitude, then the smaller operand is effectively replaced by zero, since it is right-shifted to oblivion.

The smallest (in magnitude) floating-point number which, when added to the floating-point number 1.0, produces a floating-point result different from 1.0 is termed the machine accuracy ε_m.
A typical computer with B = 2 and a 32-bit wordlength has ε_m around 3 × 10^−8. (A more detailed discussion of machine characteristics, and a program to determine them, is given in §20.1.) Although we assume no prior training of the reader in formal numerical analysis, we will need to presume a common understanding of a few key concepts. We will define these briefly in this section. Roughly speaking, the machine accuracy ε_m is the fractional accuracy to which floating-point numbers are represented, corresponding to a change of one in the least significant bit of the mantissa. Pretty much any arithmetic operation among floating numbers should be thought of as introducing an additional fractional error of at least ε_m. This type of error is called roundoff error.

Figure 1.3.1. Floating point representations of numbers in a typical 32-bit (4-byte) format (a sign bit, an 8-bit exponent, and a 23-bit mantissa). (a) The number 1/2 (note the bias in the exponent); (b) the number 3; (c) the number 1/4; (d) the number 10^−7, represented to machine accuracy; (e) the same number 10^−7, but shifted so as to have the same exponent as the number 3; with this shifting, all significance is lost and 10^−7 becomes zero; shifting to a common exponent must occur before two numbers can be added; (f) sum of the numbers 3 + 10^−7, which equals 3 to machine accuracy. Even though 10^−7 can be represented accurately by itself, it cannot accurately be added to a much larger number.

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books, diskettes, or CDROMs visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to trade@cup.cam.ac.uk (outside North America).

It is important to understand that ε_m is not the smallest floating-point number that can be represented on a machine. That number depends on how many bits there are in the exponent, while ε_m depends on how many bits there are in the mantissa.

Roundoff errors accumulate with increasing amounts of calculation.
If, in the course of obtaining a calculated value, you perform N such arithmetic operations, you might be so lucky as to have a total roundoff error on the order of √N ε_m, if the roundoff errors come in randomly up or down. (The square root comes from a random-walk.) However, this estimate can be very badly off the mark for two reasons:

(i) It very frequently happens that the regularities of your calculation, or the peculiarities of your computer, cause the roundoff errors to accumulate preferentially in one direction. In this case the total will be of order N ε_m.

(ii) Some especially unfavorable occurrences can vastly increase the roundoff error of single operations. Generally these can be traced to the subtraction of two very nearly equal numbers, giving a result whose only significant bits are those (few) low-order ones in which the operands differed.
You might think that such a "coincidental" subtraction is unlikely to occur. Not always so. Some mathematical expressions magnify its probability of occurrence tremendously. For example, in the familiar formula for the solution of a quadratic equation,

    x = (−b + √(b^2 − 4ac)) / 2a    (1.3.2)

the addition becomes delicate and roundoff-prone whenever ac ≪ b^2. (In §5.6 we will learn how to avoid the problem in this particular case.)
(Here φ denotes the golden mean, (√5 − 1)/2 ≈ 0.61803398.) It turns out (you can easily verify) that the powers φ^n satisfy a simple recursion relation,

    φ_(n+1) = φ_(n−1) − φ_n    (1.3.4)

Thus, knowing the first two values φ_0 = 1 and φ_1 = 0.61803398, we can successively apply (1.3.4), performing only a single subtraction, rather than a slower multiplication by φ, at each stage.

Unfortunately, the recurrence (1.3.4) also has another solution, namely the value −(√5 + 1)/2.
Since the recurrence is linear, and since this undesired solution has magnitude greater than unity, any small admixture of it introduced by roundoff errors will grow exponentially. On a typical machine with 32-bit wordlength, (1.3.4) starts to give completely wrong answers by about n = 16, at which point φ_n is down to only 10^−4.
Roundoff error is a characteristic of computer hardware. There is another, different, kind of error that is a characteristic of the program or algorithm used, independent of the hardware on which the program is executed. Many numerical algorithms compute "discrete" approximations to some desired "continuous" quantity. For example, an integral is evaluated numerically by computing a function at a discrete set of points, rather than at "every" point. Or, a function may be evaluated by summing a finite number of leading terms in its infinite series, rather than all infinitely many terms.
In cases like this, there is an adjustable parameter, e.g., the number of points or of terms, such that the "true" answer is obtained only when that parameter goes to infinity. Any practical calculation is done with a finite, but sufficiently large, choice of that parameter.

The discrepancy between the true answer and the answer obtained in a practical calculation is called the truncation error. Truncation error would persist even on a hypothetical, "perfect" computer that had an infinitely accurate representation and no roundoff error.
As a general rule there is not much that a programmer can do about roundoff error, other than to choose algorithms that do not magnify it unnecessarily (see discussion of "stability" below). Truncation error, on the other hand, is entirely under the programmer's control. In fact, it is only a slight exaggeration to say that clever minimization of truncation error is practically the entire content of the field of numerical analysis!

Most of the time, truncation error and roundoff error do not strongly interact with one another.