You can easily make the conversions, or else get the converted routines from the Numerical Recipes diskette.)

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use.
Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books, diskettes, or CDROMs visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to trade@cup.cam.ac.uk (outside North America).

2.7 Sparse Linear Systems

A system of linear equations is called sparse if only a relatively small number of its matrix elements a_ij are nonzero. It is wasteful to use general methods of linear algebra on such problems, because most of the O(N^3) arithmetic operations devoted to solving the set of equations or inverting the matrix involve zero operands. Furthermore, you might wish to work problems so large as to tax your available memory space, and it is wasteful to reserve storage for unfruitful zero elements. Note that there are two distinct (and not always compatible) goals for any sparse matrix method: saving time and/or saving space.

We have already considered one archetypal sparse form in §2.4, the band-diagonal matrix. In the tridiagonal case, e.g., we saw that it was possible to save both time (order N instead of N^3) and space (order N instead of N^2). The method of solution was not different in principle from the general method of LU decomposition; it was just applied cleverly, and with due attention to the bookkeeping of zero elements. Many practical schemes for dealing with sparse problems have this same character. They are fundamentally decomposition schemes, or else elimination schemes akin to Gauss-Jordan, but carefully optimized so as to minimize the number of so-called fill-ins, initially zero elements which must become nonzero during the solution process, and for which storage must be reserved.

Direct methods for solving sparse equations, then, depend crucially on the precise pattern of sparsity of the matrix. Patterns that occur frequently, or that are useful as way-stations in the reduction of more general forms, already have special names and special methods of solution. We do not have space here for any detailed review of these. References listed at the end of this section will furnish you with an "in" to the specialized literature, and the following list of buzz words (and Figure 2.7.1) will at least let you hold your own at cocktail parties:

• tridiagonal
• band diagonal (or banded) with bandwidth M
• band triangular
• block diagonal
• block tridiagonal
• block triangular
• cyclic banded
• singly (or doubly) bordered block diagonal
• singly (or doubly) bordered block triangular
• singly (or doubly) bordered band diagonal
• singly (or doubly) bordered band triangular
• other (!)

You should also be aware of some of the special sparse forms that occur in the solution of partial differential equations in two or more dimensions. See Chapter 19.

If your particular pattern of sparsity is not a simple one, then you may wish to try an analyze/factorize/operate package, which automates the procedure of figuring out how fill-ins are to be minimized. The analyze stage is done once only for each pattern of sparsity. The factorize stage is done once for each particular matrix that fits the pattern. The operate stage is performed once for each right-hand side to be used with the particular matrix.
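To make the space-saving goal concrete, here is a minimal sketch of one widely used storage convention, compressed sparse row (CSR). It is purely illustrative (the struct and function names are invented for this sketch, and CSR is not the only possible layout): only the nonzero elements are stored, so both the storage and the cost of a matrix-times-vector product scale with the number of nonzeros rather than with N^2.

```c
#include <stddef.h>

/* Illustrative compressed-sparse-row (CSR) storage for an n x n matrix.
   Only the nonzeros are kept; rowptr[i]..rowptr[i+1]-1 index the stored
   entries of row i, with their column numbers in col[] and values in val[]. */
typedef struct {
    size_t n;             /* matrix dimension */
    const size_t *rowptr; /* length n+1 */
    const size_t *col;    /* length = number of nonzeros */
    const double *val;    /* length = number of nonzeros */
} csr;

/* Compute y = A x, touching only the stored nonzeros. */
static void csr_matvec(const csr *A, const double *x, double *y)
{
    for (size_t i = 0; i < A->n; i++) {
        double sum = 0.0;
        for (size_t k = A->rowptr[i]; k < A->rowptr[i + 1]; k++)
            sum += A->val[k] * x[A->col[k]];
        y[i] = sum;
    }
}
```

For a matrix with a few nonzeros per row, this costs O(N) work and storage per product, which is the whole point of not reserving space for zero elements.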
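The factorize/operate split described above can be seen even in the simplest sparse form, the tridiagonal matrix. The sketch below (names and interface invented for illustration; this is a simplified version of the standard Thomas algorithm, not the book's own routine, and it attempts no pivoting) factors the matrix once in O(N) work, then solves for each right-hand side by reusing the stored factorization:

```c
#include <stddef.h>

/* Tridiagonal system: a = subdiagonal, b = diagonal, c = superdiagonal,
   each of length n (a[0] and c[n-1] are unused).  No pivoting. */

/* Factorize stage: done once per matrix.  Stores the pivots in piv[] and
   the scaled superdiagonal in gam[].  Returns 0, or -1 on a zero pivot. */
static int tri_factorize(const double *a, const double *b, const double *c,
                         double *piv, double *gam, size_t n)
{
    if (b[0] == 0.0) return -1;
    piv[0] = b[0];
    for (size_t i = 1; i < n; i++) {
        gam[i] = c[i - 1] / piv[i - 1];
        piv[i] = b[i] - a[i] * gam[i];
        if (piv[i] == 0.0) return -1;
    }
    return 0;
}

/* Operate stage: done once per right-hand side r, reusing piv[] and gam[]. */
static void tri_operate(const double *a, const double *piv, const double *gam,
                        const double *r, double *x, size_t n)
{
    x[0] = r[0] / piv[0];
    for (size_t i = 1; i < n; i++)        /* forward substitution */
        x[i] = (r[i] - a[i] * x[i - 1]) / piv[i];
    for (size_t i = n - 1; i-- > 0; )     /* back substitution */
        x[i] -= gam[i + 1] * x[i + 1];
}
```

A general analyze/factorize/operate package adds a symbolic "analyze" stage in front of this pair, because for an irregular sparsity pattern the fill-in-minimizing ordering is itself expensive to compute and worth amortizing over every matrix that shares the pattern.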