In most such subroutine libraries, more specialized routines are available for particular types of linear systems, such as positive definite, symmetric, banded, or combinations of these. Some of these routines are listed in Table 2.2; other routines that are more storage efficient or cater to other special tasks may also be available.

Conventional software for solving linear systems Ax = b is sometimes implemented as a single routine, or it may be split into two routines: one for computing a factorization and another for solving the resulting triangular system. In either case, repeating the factorization should not be necessary if additional solutions are needed with the same matrix but different right-hand sides.
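The two-routine design can be sketched in a few lines of pure Python. This is an illustrative sketch only, not library code: the function names lu_factor and lu_solve are hypothetical, there is no singularity check, and a real routine would work in place on the caller's storage.

```python
def lu_factor(A):
    """Factor A into LU with partial pivoting.
    Returns (LU, piv): both factors packed into one matrix (unit diagonal
    of L implied), plus the row permutation. Works on a copy of A."""
    n = len(A)
    LU = [row[:] for row in A]
    piv = list(range(n))
    for k in range(n):
        # Partial pivoting: bring the largest-magnitude entry of column k
        # (on or below the diagonal) into the pivot position.
        p = max(range(k, n), key=lambda i: abs(LU[i][k]))
        if p != k:
            LU[k], LU[p] = LU[p], LU[k]
            piv[k], piv[p] = piv[p], piv[k]
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]          # multiplier, stored where the zero was
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]
    return LU, piv

def lu_solve(LU, piv, b):
    """Solve Ax = b using factors from lu_factor:
    permute b, forward-substitute with unit L, back-substitute with U."""
    n = len(LU)
    y = [b[piv[i]] for i in range(n)]
    for i in range(n):
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    x = y[:]
    for i in reversed(range(n)):
        for j in range(i + 1, n):
            x[i] -= LU[i][j] * x[j]
        x[i] /= LU[i][i]
    return x
```

Factoring once and then calling lu_solve repeatedly with different right-hand sides is exactly the reuse the two-routine interface is designed for: the O(n³) factorization cost is paid once, and each additional solve costs only O(n²).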
Table 2.1: Software for solving general linear systems

Source    Factor    Solve     Condition estimation
FMM       decomp    solve
HSL       ma21      ma21
IMSL      lftrg     lfsrg     lfcrg
KMN       sgefs     sgefs     sgefs
LAPACK    sgetrf    sgetrs    sgecon
LINPACK   sgefa     sgesl     sgeco
MATLAB    lu        \         rcond
NAG       f07adf    f07aef    f07agf
NAPACK    fact      solve     con
NR        ludcmp    lubksb
NUMAL     dec       sol
SLATEC    sgefa     sgesl     sgeco

Table 2.2: Software for solving special linear systems

Source    Symmetric positive definite   Symmetric indefinite   General band
HSL       ma22                          ma29                   ma35
IMSL      lftds/lfsds                   lftsf/lfssf            lftrb/lfsrb
LAPACK    spotrf/spotrs                 ssytrf/ssytrs          sgbtrf/sgbtrs
LINPACK   spofa/sposl                   ssifa/ssisl            sgbfa/sgbsl
NAG       f07fdf/f07fef                 f07mdf/f07mef          f07bdf/f07bef
NAPACK    sfact/ssolve                  ifact/isolve           bfact/bsolve
NR        choldc/cholsl                                        bandec/banbks
NUMAL     chldec2/chlsol2               decsym2/solsym2        decbnd/solbnd
SLATEC    spofa/sposl                   ssifa/ssisl            sgbfa/sgbsl

The input typically required includes a two-dimensional array containing the matrix A, a one-dimensional array containing the right-hand-side vector b (or a two-dimensional array for multiple right-hand-side vectors), the integer order of the system n, the leading dimension of the array containing A (so that the subroutine can interpret subscripts properly in the array), and possibly some work space and a flag indicating the particular task to be performed. On return, the solution x usually overwrites the storage for b, and the matrix factorization overwrites the storage for A.
Additional output may include a status flag to indicate any errors or warnings and an estimate of the condition number of the matrix (or sometimes the reciprocal of the condition number). Because of the additional cost of condition estimation, this feature is usually optional.

Solving linear systems using an interactive environment such as MATLAB is simpler than when using conventional software because the package keeps track internally of details such as the dimensions of vectors and matrices, and many matrix operations are built into the syntax and semantics of the language.
For example, the solution to the linear system Ax = b is given in MATLAB by the "left division" operator, denoted by backslash, so that x = A \ b. Internally, the solution is computed by LU factorization and forward- and back-substitution, but the user need not be aware of this. The LU factorization can be computed explicitly, if desired, by the MATLAB lu function, [L, U] = lu(A).

2.7.1 LINPACK and LAPACK

LINPACK is a standard software package for solving a wide variety of systems of linear equations, both general dense systems and those having various special properties, such as symmetric or banded.
Solving linear systems is of such fundamental importance in scientific computing that LINPACK has become a standard benchmark for comparing the performance of computers. The LINPACK manual [63] is a useful source of practical advice on solving systems of linear equations.

A more recent package called LAPACK updates the entire LINPACK collection for higher performance on modern computer architectures, including some parallel computers. In many cases, the newer algorithms in LAPACK also achieve greater accuracy, robustness, and functionality than their predecessors in LINPACK. LAPACK includes both simple and expert drivers for all of the major computational problems in linear algebra, as well as the many underlying computational and auxiliary routines required for various factorizations, triangular solutions, norm estimation, scaling, and iterative refinement.
Both LINPACK and LAPACK are available from netlib, and the linear system solvers in many other libraries and packages are based directly on them.

2.7.2 Basic Linear Algebra Subprograms

The high-level routines in LINPACK and LAPACK are based on lower-level Basic Linear Algebra Subprograms (BLAS). The BLAS were originally designed to encapsulate basic operations on vectors so that they could be optimized for a given computer architecture while the high-level routines that call them remain portable. New computer architectures have prompted the development of higher-level BLAS that encapsulate matrix-vector and matrix-matrix operations for better utilization of hierarchical memory such as cache, vector registers, and virtual memory with paging.
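The contrast between the levels can be illustrated with pure-Python stand-ins for one routine from each end of the hierarchy. These are sketches for exposition only, not optimized BLAS: real implementations are tuned (blocked, vectorized) for each architecture.

```python
def saxpy(alpha, x, y):
    """Level-1 style: scalar times vector plus vector, y <- alpha*x + y.
    O(n) arithmetic on O(n) data, so each operand is touched only once."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def gemm(A, B):
    """Level-3 style: matrix-matrix product C = A*B.
    O(n^3) arithmetic on O(n^2) data, so each entry of A and B is
    reused about n times -- the source of the level-3 advantage."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]
```

Because gemm performs far more arithmetic than data movement, a blocked implementation can keep its operands in fast memory while working on them; saxpy, by contrast, is inherently limited by memory bandwidth.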
A few of the most important BLAS routines of each level are listed in Table 2.3.

The key to good performance is data reuse, that is, performing as many arithmetic operations as possible involving a given data item while it is held in the portion of the memory hierarchy with the most rapid access. The level-3 BLAS have greater opportunity for data reuse because they perform O(n³) operations on O(n²) data items, whereas in the lower-level BLAS the number of operations is proportional to the number of data items. Generic versions of the BLAS are available from netlib, and many computer vendors provide custom versions that are optimized for highest performance on their particular systems.

Table 2.3: Examples of basic linear algebra subprograms (BLAS)

Level   TOMS #   Work     Examples   Function
1       539      O(n)     saxpy      Scalar times vector plus vector
                          sdot       Inner product of two vectors
                          snrm2      Euclidean norm of a vector
2       656      O(n²)    sgemv      Matrix-vector multiplication
                          strsv      Triangular solution
                          sger       Rank-one update
3       679      O(n³)    sgemm      Matrix-matrix multiplication
                          strsm      Multiple triangular solutions
                          ssyrk      Rank-k update

2.8 Historical Notes and Further Reading

Elimination methods for solving systems of linear equations date from the nineteenth century and earlier.
Their careful error analysis, however, began only with the computer era. Indeed, a grave concern of the early pioneers of digital computation, such as von Neumann and Turing, was whether accumulated rounding error in solving large linear systems by Gaussian elimination would render the results useless, and initially there was considerable pessimism on this score. Computational experience soon showed that the method was surprisingly stable and accurate in practice, however, and analyses eventually followed to explain this good fortune (see especially the work of Wilkinson [273, 274, 275]).

As it turns out, Gaussian elimination with partial pivoting has a worse than optimal operation count [248], is unstable in the worst case [273], and in a theoretical sense cannot be implemented efficiently in parallel [264].
Yet it is consistently effective in practice, even on parallel computers, and is one of the principal workhorses of scientific computing. Most numerical algorithms obey Murphy's law ("if anything can go wrong, it will"), but Gaussian elimination seems to be a happy exception. For further discussion of some of the "mysteries" of this remarkable algorithm, see [257].

For background on linear algebra, the reader may wish to consult the excellent textbooks by Strang [244, 246].
Additional examples, exercises, and practical applications of computational linear algebra can be found in [127, 171]. The definitive reference on matrix computations is [104]. More tutorial treatments include [49, 96, 116, 138, 239, 258, 268]. An influential early work on solving linear systems, and one of the first to include high-quality software, is [83]. A useful tutorial handbook on matrix computations, both in Fortran and MATLAB, is [42].

For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of condition number estimation is given in [124]. A detailed survey of componentwise (as opposed to normwise) perturbation theory in linear algebra is given in [125].
LINPACK and LAPACK are documented in [63] and [8], respectively. For the BLAS (Basic Linear Algebra Subprograms) see [61, 62, 164]. One of the earliest papers to examine the effect of the computing environment on the performance of Gaussian elimination and other matrix computations was [177]. For a sample of the now large literature on this topic, see [55, 64, 65, 194].

Review Questions

2.1 True or false: If a matrix A is nonsingular, then the number of solutions to the linear system Ax = b depends on the particular choice of right-hand-side vector b.

2.2 True or false: If a matrix has a very small determinant, then the matrix is nearly singular.

2.3 True or false: If a triangular matrix has a zero entry on its main diagonal, then the matrix is necessarily singular.

2.4 True or false: If a matrix has a zero entry on its main diagonal, then the matrix is necessarily singular.

2.5 True or false: An underdetermined system of linear equations Ax = b, where A is an m × n matrix with m < n, always has a solution.

2.6 True or false: The product of two upper triangular matrices is upper triangular.

2.7 True or false: The product of two symmetric matrices is symmetric.

2.8 True or false: The inverse of a nonsingular upper triangular matrix is upper triangular.

2.9 True or false: If the rows of an n × n matrix A are linearly dependent, then the columns of the matrix are also linearly dependent.

2.10 True or false: A system of linear equations Ax = b has a solution if and only if the m × n matrix A and the augmented m × (n + 1) matrix [ A b ] have the same rank.

2.11 True or false: If A is any n × n matrix and P is any n × n permutation matrix, then PA = AP.

2.12 True or false: Provided row interchanges are allowed, the LU factorization always exists, even for a singular matrix A.

2.13 True or false: If a linear system is well-conditioned, then pivoting is unnecessary in Gaussian elimination.

2.14 True or false: If a matrix is singular then it cannot have an LU factorization.

2.15 True or false: If a nonsingular symmetric matrix is not positive definite, then it cannot have a Cholesky factorization.

2.16 True or false: A symmetric positive definite matrix is always well-conditioned.

2.17 True or false: Gaussian elimination without pivoting fails only when the matrix is ill-conditioned or singular.

2.18 True or false: Once the LU factorization of a matrix A has been computed to solve a linear system Ax = b, then subsequent linear systems with the same matrix but different right-hand-side vectors can be solved without refactoring the matrix.

2.19 True or false: In explicitly inverting a matrix by LU factorization and triangular solution, the majority of the work is due to the factorization.

2.20 True or false: If x is any n-vector, then ||x||₁ ≥ ||x||∞.

2.21 True or false: The norm of a singular matrix is zero.

2.22 True or false: If ||A|| = 0, then A = O.