Depending on your computing environment, several options are available for producing the required plots. In a Unix environment, simple plots can be made using the graph and plot commands (see the corresponding man pages). In X-Windows, simple plots can be made on the screen with the xgraph tool, and then hard copies can be made using the xwd and xpr utilities, or their equivalents. Somewhat more sophisticated graphs can be made using free packages such as gnuplot (available by ftp from ftp.dartmouth.edu/pub/gnuplot) or plplot (available by ftp from dino.ph.utexas.edu/plplot), which are available for Unix and several other operating systems.
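If a Python environment happens to be available (an assumption; it is not one of the tools named above, but any of them would serve equally well), a simple plot of the kind needed for the exercises can be produced with the matplotlib package, as in this minimal sketch:

```python
# Minimal sketch: plotting a computed sequence of (x, y) pairs.
# Assumes Python with matplotlib installed.
import matplotlib.pyplot as plt

x = [i / 100.0 for i in range(101)]   # sample points on [0, 1]
y = [xi * xi for xi in x]             # values to plot, e.g., from a computation
plt.plot(x, y)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Sample plot for an exercise")
plt.savefig("plot.png")               # hard copy; plt.show() displays on screen
```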
Much more sophisticated and powerful scientific visualization systems are also available, but their capabilities go well beyond the simple plots needed for the problems in this book. If you use a PC or Mac, dozens of graphics programs are available, far too many to mention individually. Again, note that MATLAB and similar environments have built-in graphics, which is a great convenience.

Another important programming consideration is performance. The performance of today's microprocessor-based computer systems often depends critically on judicious exploitation of a memory hierarchy (registers, cache, RAM, disk, etc.) both by the user and by the optimizing compiler. Thus, it is important not only to choose the right algorithm but also to implement it carefully to maximize the reuse of data while they are held in the portions of the memory hierarchy with faster access times. Fortunately, the details of such programming are usually hidden from the user inside the library routines recommended in this text. This feature is just one of the many benefits of using existing, professionally written software for scientific computing whenever possible.
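As a concrete illustration of the effect of data reuse, the sketch below (in Python with NumPy, purely as an illustrative assumption; compiled languages show the effect even more strongly) sums the entries of a matrix by rows and by columns. For a matrix stored in row-major order, traversing along rows reuses data already brought into cache, while traversing down columns does not, and the difference is typically visible in the timings.

```python
# Illustrative sketch: memory-access order affects performance.
# Assumes NumPy; the array is stored in row-major (C) order by default.
import time
import numpy as np

n = 2000
a = np.random.rand(n, n)

def sum_by_rows(a):
    # Inner traversal walks along a row: contiguous, cache-friendly access.
    s = 0.0
    for i in range(a.shape[0]):
        s += a[i, :].sum()
    return s

def sum_by_cols(a):
    # Inner traversal walks down a column: strided, cache-unfriendly access.
    s = 0.0
    for j in range(a.shape[1]):
        s += a[:, j].sum()
    return s

for f in (sum_by_rows, sum_by_cols):
    t0 = time.perf_counter()
    f(a)
    print(f.__name__, time.perf_counter() - t0, "seconds")
```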
If you use a scientific computing environment such as MATLAB, you should be aware that there may be significant differences in performance between the built-in operations, which are generally very fast, and those you program explicitly yourself, which tend to be much slower owing to the interpreted mode of operation and to memory management overhead. Thus, one should be very careful in making performance comparisons under these circumstances. For example, one algorithm may be inferior to another in principle, yet perform better because of more effective utilization of fast built-in operations.
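The magnitude of this difference is easy to observe. The sketch below (again Python with NumPy rather than MATLAB, as an illustrative assumption) compares an explicit interpreted loop with the equivalent built-in vectorized operation for the same inner product; the analogous contrast holds for MATLAB built-ins versus user-written loops.

```python
# Illustrative sketch: built-in (vectorized) operations vs. an explicit interpreted loop.
import time
import numpy as np

n = 1_000_000
x = np.random.rand(n)
y = np.random.rand(n)

# Explicit loop, executed by the interpreter element by element.
t0 = time.perf_counter()
s_loop = 0.0
for i in range(n):
    s_loop += x[i] * y[i]
t_loop = time.perf_counter() - t0

# Built-in vectorized operation, executed by compiled library code.
t0 = time.perf_counter()
s_vec = float(np.dot(x, y))
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.3f} s, built-in: {t_vec:.5f} s, difference in results: {abs(s_loop - s_vec):.2e}")
```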
For general advice on many practical aspects of using workstations, Unix, X-Windows, graphics, and many other packages of interest in scientific computing, as well as performance considerations in programming, see [67, 85, 157].

1.5 Historical Notes and Further Reading

The subject we now call numerical analysis or scientific computing vastly predates the advent of modern computers. Most of the concepts and many of the algorithms that are in use today were first formulated by pre-twentieth century giants—Newton, Gauss, Euler, Jacobi, and many others—whose names recur throughout this book.
The main concern then, as it is now, was finding efficient methods for obtaining approximate solutions to mathematical problems that arose in physics, astronomy, surveying, and other disciplines. Indeed, efficient use of computational resources is even more critical when using pencil, paper, and brain power (or perhaps a hand calculator) than when using a modern high-speed computer.

For the most part, modern computers have simply increased the size of problems that are feasible to tackle. They have also necessitated more careful analysis and control of rounding error, for the computation is no longer done by a human who can easily carry additional precision as needed. There is no question, however, that the development of digital computers was the impetus for the flowering of numerical analysis into the fertile and vigorously growing field that has enabled the ubiquitous role computation now plays throughout modern science and engineering.
Indeed, computation has come to be regarded as an equal and indispensable partner, along with theory and experiment, in the advance of scientific knowledge and engineering practice [145].

For an account of the early history of numerical analysis, see [100]; for the more recent development of scientific computing, see [188]. The literature of numerical analysis, from textbooks to research monographs and journals, is much too vast to be covered adequately here.
This text will try to give appropriate credit for the major ideas presented (at least those not already obvious from the name) and cite (usually secondary) sources for further reading, but these citations and recommendations are by no means complete. There are too many excellent general textbooks on numerical analysis to mention them all, but many of these still make worthwhile reading (even some of the older ones, several of which have recently been reissued in inexpensive reprint editions). Only those of most direct relevance to our discussion will be cited.

Most numerical analysis textbooks contain a general discussion of error analysis. The seminal reference on the analysis of rounding errors is [274], which is a treasure trove of valuable insights.
Its author, James H. Wilkinson, played a major role in developing and popularizing the notion of backward error analysis and was also responsible for a number of famous “computational counterexamples” that reveal various numerical instabilities in unsuspected problems and algorithms. A more recent work in a similar spirit is [126]. For various approaches to automating error analysis, see [5, 175, 180]. A MATLAB toolbox for error analysis is discussed in [36].

Recent general treatments of computer arithmetic include [152, 193].
The book by Sterbenz [237], though somewhat dated, remains the only book-length treatment of floating-point arithmetic. See [150] for a more concise account. The effort to standardize floating-point arithmetic and the high quality of the resulting standard were largely inspired by William Kahan, who is also responsible for many well-known computational counterexamples.
The IEEE floating-point standard can be found in [131]. A useful tutorial on floating-point arithmetic and the IEEE standard is [97]. Although it is no substitute for careful problem formulation and solution, extended precision arithmetic can occasionally be useful for highly sensitive problems; several software packages providing multiple precision floating-point arithmetic are available, including MP (#524), FM (#693), and MPFUN (#719) from TOMS.

For an account of the emergence of mathematical software as a subdiscipline of numerical analysis and computer science, see the survey [41] and the collections [44, 73, 122, 134, 209, 210].
Perhaps the earliest numerical methods textbook to be based on professional-quality software (not just code fragments for illustration) was [225], which is similar in tone, style, and content to the very influential book by Forsythe, Malcolm and Moler [82] that popularized this approach.
In addition to the books mentioned in Section 1.4.1, the following numerical methods textbooks focus on the specific software libraries or packages listed: IMSL [211], NAG [128, 151], MATLAB [165, 187, 262], and Mathematica [231]. Other textbooks that provide additional discussion and examples at an introductory level include [11, 29, 30, 38, 43, 94, 173, 240].
More advanced general textbooks include [47, 59, 103, 118, 132, 149, 195, 222, 242]. The books of Acton [2, 3] entertainingly present practical advice on avoiding pitfalls in numerical computation.

The book of Strang [243] provides excellent background and insights on many aspects of applied mathematics that are relevant to numerical computation in general, and in particular to almost every chapter of this book. For an elementary introduction to scientific programming, see [261].
For advice on designing, implementing, and testing numerical software, as opposed to simply using it, see [174]. Additional computer exercises and projects can be found in [45, 72, 85, 89, 107, 109, 158].

Review Questions

1.1 True or false: A problem is ill-conditioned if its solution is highly sensitive to small changes in the problem data.
1.2 True or false: Using higher-precision arithmetic will make an ill-conditioned problem better conditioned.
1.3 True or false: The conditioning of a problem depends on the algorithm used to solve it.
1.4 True or false: A good algorithm will produce an accurate solution regardless of the condition of the problem being solved.
1.5 True or false: The choice of algorithm for solving a problem has no effect on the propagated data error.
1.6 True or false: If two real numbers are exactly representable as floating-point numbers, then the result of a real arithmetic operation on them will also be representable as a floating-point number.
1.7 True or false: Floating-point numbers are distributed uniformly throughout their range.
1.8 True or false: Floating-point addition is associative but not commutative.
1.9 True or false: In a floating-point number system, the underflow level is the smallest positive number that perturbs the number 1 when added to it.
1.10 Explain the distinction between truncation (or discretization) and rounding.
1.11 Explain the distinction between absolute error and relative error.
1.12 Explain the distinction between computational error and propagated data error.
1.13 (a) What is meant by the conditioning of a problem? (b) Is it affected by the algorithm used to solve the problem? (c) Is it affected by the precision of the arithmetic used to solve the problem?
1.14 If a computational problem has a condition number of 1, is this good or bad? Why?
1.15 When is an approximate solution to a given problem considered to be good according to backward error analysis?
1.16 For a given floating-point number system, describe in words the distribution of machine numbers along the real line.
1.17 In floating-point arithmetic, which is generally more harmful, underflow or overflow? Why?
1.18 In floating-point arithmetic, which of the following operations on two positive floating-point operands can produce an overflow? (a) Addition (b) Subtraction (c) Multiplication (d) Division
1.19 In floating-point arithmetic, which of the following operations on two positive floating-point operands can produce an underflow? (a) Addition (b) Subtraction (c) Multiplication (d) Division
1.20 List two reasons why floating-point number systems are usually normalized.
1.21 In a floating-point system, what quantity determines the maximum relative error in representing a given real number by a machine number?
1.22 (a) Explain the difference between the rounding rules “round toward zero” and “round to nearest” in a floating-point system. (b) Which of these two rounding rules is more accurate? (c) What quantitative difference does this make in the unit roundoff ε_mach?
1.23 In a t-digit binary floating-point system with rounding to nearest, what is the value of the unit roundoff ε_mach?
1.24 In a floating-point system with gradual underflow (subnormal numbers), is the representation of each number still unique? Why?
1.25 In a floating-point system, is the product of two machine numbers usually exactly representable in the floating-point system? Why?
1.26 In a floating-point system, is the quotient of two nonzero machine numbers always exactly representable in the floating-point system? Why?
1.27 (a) Give an example to show that floating-point addition is not necessarily associative. (b) Give an example to show that floating-point multiplication is not necessarily associative.
1.28 Give an example of a number whose decimal representation is finite (i.e., it has only a finite number of nonzero digits) but whose binary representation is not.
1.29 Give examples of floating-point arithmetic operations that would produce each of the exceptional values Inf and NaN.
1.30 Explain why the cancellation that occurs when two numbers of similar magnitude are subtracted is often bad even though the result may be exactly correct for the actual operands involved.
1.31 Assume a decimal (base 10) floating-point system having machine precision ε_mach = 10^(-5) and an exponent range of ±20. What is the result of each of the following floating-point arithmetic operations?
(a) 1 + 10^(-7)
(b) 1 + 10^3
(c) 1 + 10^7
(d) 10^10 + 10^3
(e) 10^10 / 10^(-15)
(f) 10^(-10) × 10^(-15)
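Several of these questions can also be explored empirically on any machine with IEEE 754 double-precision arithmetic. The following sketch (in Python, purely as an illustrative assumption; any language with IEEE arithmetic would do) shows the kinds of experiments involved. It is an aid to experimentation, not a set of answers.

```python
# Empirical floating-point experiments (IEEE 754 double precision assumed).
import math

# Non-associativity of floating-point addition (cf. question 1.27):
a, b, c = 1.0, 1e-16, 1e-16
print((a + b) + c == a + (b + c))   # False: the two groupings give different results

# Unit roundoff / machine epsilon (cf. question 1.23):
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
print(eps)                          # about 2.22e-16 for IEEE double precision

# Exceptional values Inf and NaN (cf. question 1.29):
big = 1e308
print(big * 10)                     # inf (overflow)
print(math.inf - math.inf)          # nan (mathematically undefined operation)
```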