Appendix D
Program Libraries

— J. H. WILKINSON, The Automatic Computing Engine at the National Physical Laboratory (1948)

In spite of the self-contained nature of the linear algebra field, experience has shown that even here the preparation of a fully tested set of algorithms is a far greater task than had been anticipated.
— J. H. WILKINSON and C. REINSCH, Handbook for Automatic Computation: Linear Algebra (1971)

In this appendix we briefly describe some of the freely available program libraries that have been mentioned in this book. These packages are all available from netlib (see §C.2).

D.1. Basic Linear Algebra Subprograms

The Basic Linear Algebra Subprograms (BLAS) are sets of Fortran primitives for matrix and vector operations. They cover all the common operations in linear algebra.
There are three levels, corresponding to the types of object operated upon. In the examples below, x and y are vectors, A, B, C are rectangular matrices, and T is a square triangular matrix. Names of BLAS routines are given in parentheses. The leading "x" denotes the Fortran data type, whose possible values are some or all of

   S   real
   D   double precision
   C   complex
   Z   complex*16, or double complex

Level 1: [694, 1979] Vector operations. Inner product: x^T y (xDOT); y ← αx + y (xAXPY); vector 2-norm (x^T x)^{1/2} (xNRM2); swap vectors x ↔ y (xSWAP); scale a vector x ← αx (xSCAL); and other operations.

Level 2: [313, 1988], [314, 1988] Matrix–vector operations. Matrix times vector (gaxpy): y ← αAx + βy (xGEMV); rank-1 update: A ← A + αxy^T (xGER); triangular solve: x ← T^{-1}x (xTRSV); and variations on these.

Level 3: [308, 1990], [309, 1990] Matrix–matrix operations. Matrix multiplication: C ← αAB + βC (xGEMM); multiple right-hand side triangular solve: A ← αT^{-1}A (xTRSM); rank-r and rank-2r updates of a symmetric matrix (xSYRK, xSYR2K); and variations on these.

The BLAS are intended to be called in innermost loops of linear algebra codes. Usually, most of the computation in a code that uses BLAS calls is done inside these calls. LINPACK [307, 1979] uses the level-1 BLAS throughout (model Fortran implementations of the level-1 BLAS are listed in [307, 1979, App. D]). LAPACK [17, 1995] exploits all three levels, using the highest possible level at all times.

Each set of BLAS comprises a set of subprogram specifications only. The specifications define the parameters to each routine and state what the routine must do, but not how it must do it. Thus the implementor has freedom over the precise implementation details (loop orderings, block algorithms, special code for special cases) and even the method (fast versus conventional matrix multiply), but the implementation is required to be numerically stable, and code that tests the numerical stability is provided with the model implementations [309, 1990], [314, 1988].

For more details on the BLAS and the advantages of using them, see the defining papers listed above, or, for example, [315, 1991] or [470, 1989, Chap. 1].
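By way of illustration, the following fragment (free-form Fortran, double precision) calls one routine from each level: DDOT, DAXPY, and DNRM2 at level 1, DGEMV at level 2, and DGEMM at level 3. It is a minimal sketch only: the names and argument lists are those of the reference BLAS, but the dimensions and data are invented, and the compile command (for example, gfortran demo.f90 -lblas) is system dependent.

   program blasdemo
     implicit none
     integer, parameter :: n = 3
     double precision :: x(n), y(n), a(n,n), b(n,n), c(n,n)
     double precision :: alpha, beta
     double precision, external :: ddot, dnrm2

     x = (/ 1d0, 2d0, 3d0 /)
     y = (/ 4d0, 5d0, 6d0 /)
     a = 1d0
     b = 2d0
     c = 0d0
     alpha = 2d0
     beta = 1d0

     ! Level 1: inner product x^T y, then y <- alpha*x + y, then ||y||_2.
     print *, 'x^T y   =', ddot(n, x, 1, y, 1)
     call daxpy(n, alpha, x, 1, y, 1)
     print *, '||y||_2 =', dnrm2(n, y, 1)

     ! Level 2: y <- alpha*A*x + beta*y.
     call dgemv('N', n, n, alpha, a, n, x, 1, beta, y, 1)

     ! Level 3: C <- alpha*A*B + beta*C.
     call dgemm('N', 'N', n, n, n, alpha, a, n, b, n, beta, c, n)
     print *, 'C(1,1)  =', c(1,1)
   end program blasdemo

Because the specifications fix the interfaces, a program such as this links unchanged against the reference BLAS or any tuned implementation.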
D.2. EISPACK

EISPACK is a collection of Fortran 66 subroutines for computing eigenvalues and eigenvectors of matrices [925, 1976], [415, 1977]. It contains 58 subroutines and 13 drivers. The subroutines are the basic building blocks for eigensystem computations; they perform such tasks as reduction to Hessenberg form, computation of some or all of the eigenvalues/vectors, and back transformations, for various types of matrix (real, complex, symmetric, banded, etc.).
The driver subroutines provide easy access to many of EISPACK's capabilities; they call from one to five other EISPACK subroutines to do the computations.

EISPACK is primarily based on Algol 60 procedures developed in the 1960s by 19 different authors and published in the journal Numerische Mathematik. An edited collection of these papers was published in the Handbook for Automatic Computation [1102, 1971].
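To give the flavour of a driver, the sketch below calls RG, the driver for the eigenvalues and (optionally) eigenvectors of a real general matrix. The argument list is that of the netlib EISPACK sources; the caller is written in free-form Fortran for readability, and the test matrix and the request for eigenvectors are invented for the example.

   program rgdemo
     implicit none
     integer, parameter :: nm = 3, n = 3
     double precision :: a(nm,n), wr(n), wi(n), z(nm,n), fv1(n)
     integer :: iv1(n), ierr, matz

     a = reshape((/ 2d0, 1d0, 0d0, &
                    1d0, 2d0, 1d0, &
                    0d0, 1d0, 2d0 /), shape(a))
     matz = 1      ! nonzero: compute eigenvectors as well as eigenvalues
     call rg(nm, n, a, wr, wi, matz, z, iv1, fv1, ierr)
     if (ierr /= 0) print *, 'RG failed, ierr =', ierr
     print *, 'eigenvalues:', wr   ! wi = 0 here, since this matrix is symmetric
   end program rgdemo

One call to the driver replaces explicit calls to the balancing, reduction, QR iteration, and back transformation subroutines.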
D.3. LINPACK

LINPACK is a collection of Fortran 66 subroutines that analyse and solve linear equations and linear least squares problems [307, 1979]. The package solves linear systems whose matrices are general, banded, symmetric indefinite, symmetric positive definite, triangular, or tridiagonal. In addition, the package computes the QR and singular value decompositions and applies them to least squares problems. All the LINPACK routines use calls to the level-1 BLAS in the innermost loops; thus most of the floating point arithmetic in LINPACK is done within the level-1 BLAS.
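The calling style is illustrated by the following sketch, in which DGEFA computes an LU factorization with partial pivoting and DGESL uses the factorization to solve Ax = b. The argument lists are those of the LINPACK sources; the matrix and right-hand side are invented, and the level-1 BLAS do the inner-loop work inside both routines.

   program linpdemo
     implicit none
     integer, parameter :: lda = 3, n = 3
     double precision :: a(lda,n), b(n)
     integer :: ipvt(n), info

     a = reshape((/ 4d0, 2d0, 1d0, &
                    2d0, 5d0, 3d0, &
                    1d0, 3d0, 6d0 /), shape(a))
     b = (/ 1d0, 2d0, 3d0 /)

     call dgefa(a, lda, n, ipvt, info)   ! LU factorization with partial pivoting
     if (info /= 0) print *, 'zero pivot at step', info
     call dgesl(a, lda, n, ipvt, b, 0)   ! job = 0: solve Ax = b (not A^T x = b)
     print *, 'x =', b                   ! the solution overwrites b
   end program linpdemo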
D.4. LAPACK

LAPACK [17, 1995] was released on February 29, 1992. As the announcement stated, "LAPACK is a transportable library of Fortran 77 subroutines for solving the most common problems in numerical linear algebra: systems of linear equations, linear least squares problems, eigenvalue problems, and singular value problems. It has been designed to be efficient on a wide range of modern high-performance computers."

LAPACK was developed, in a project that began in 1987, by a team of 11 numerical analysts in the UK and the USA.
LAPACK can be regarded as a successor to LINPACK and EISPACK; it has virtually all their capabilities and much more besides. LAPACK improves on LINPACK and EISPACK in four main respects: speed, accuracy, robustness, and functionality. It was designed at the outset to exploit the level-3 BLAS.

Development of LAPACK continues under the auspices of two follow-on projects, LAPACK 2 and ScaLAPACK. An object-oriented C++ extension to LAPACK has been produced, called LAPACK++ [311, 1995]. CLAPACK is a C version of LAPACK, converted from the original Fortran version using the f2c converter [367, 1990]. ScaLAPACK comprises a subset of LAPACK routines redesigned for distributed memory parallel machines [206, 1992], [205, 1994]. Other work includes developing codes that take advantage of the careful rounding and exception handling of IEEE arithmetic [298, 1994].
For more details of all these topics see [17, 1995].

LAPACK undergoes regular updates, which are announced on the electronic newsletter NA-Digest. At the time of writing, the current release is version 2.0, dated September 30, 1994, and the package contains over 1000 routines and over 735,000 lines of Fortran 77 code, including testing and timing code.

Mark 16 onward of the NAG Fortran 77 Library contains much of LAPACK in Chapters F07 and F08.
D.4.1. Structure of LAPACK

The LAPACK routines can be divided into three classes. The drivers solve a complete problem. The simple drivers have a minimal specification, while the expert drivers have additional capabilities of interest to the sophisticated user. The computational routines perform individual tasks such as computing a factorization or reducing a matrix to condensed form; they are called by the drivers. The auxiliary routines perform relatively low-level operations such as unblocked factorization, estimating or computing matrix norms, and solving a triangular system with scaling to prevent overflow.

The driver and computational routines have names of the form xyyzzz. The first letter specifies the data type, which is one of S, D, C, and Z.
The second two letters refer to the type of matrix. A partial list of types is as follows (there are 27 types in all):

   BD   bidiagonal
   GB   general band
   GE   general
   GT   general tridiagonal
   HS   upper Hessenberg
   OR   (real) orthogonal
   PO   symmetric or Hermitian positive definite
   PT   symmetric or Hermitian positive definite tridiagonal
   SB   (real) symmetric band
   ST   (real) symmetric tridiagonal
   SY   symmetric
   TR   triangular (or quasi-triangular)

The last three characters specify the computation performed:

   TRF   factorize
   TRS   solve a (multiple right-hand side) linear system using the factorization
   CON   estimate 1/κ₁(A) (or compute it exactly when A is tridiagonal and symmetric positive definite or Hermitian positive definite)
   RFS   apply fixed precision iterative refinement and compute the componentwise relative backward error and a forward error bound
   TRI   use the factorization to compute A⁻¹
   EQU   compute factors to equilibrate the matrix

The auxiliary routines follow a similar naming convention, with most of them having yy = LA.
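The convention can be seen in action in the following sketch, which factorizes a general double precision matrix with DGETRF (D + GE + TRF), estimates the reciprocal of its 1-norm condition number with DGECON (CON), and solves a linear system using the factorization with DGETRS (TRS); the norm that DGECON requires is computed by the auxiliary routine DLANGE (yy = LA). The argument lists are those of LAPACK; the matrix and right-hand side are invented for illustration.

   program lapdemo
     implicit none
     integer, parameter :: n = 3, lda = 3
     double precision :: a(lda,n), b(n,1), work(4*n), anorm, rcond
     integer :: ipiv(n), iwork(n), info
     double precision, external :: dlange   ! auxiliary routine: yy = LA

     a = reshape((/ 4d0, 2d0, 1d0, &
                    2d0, 5d0, 3d0, &
                    1d0, 3d0, 6d0 /), shape(a))
     b(:,1) = (/ 1d0, 2d0, 3d0 /)

     anorm = dlange('1', n, n, a, lda, work)   ! ||A||_1, needed by DGECON
     call dgetrf(n, n, a, lda, ipiv, info)     ! factorize: PA = LU
     call dgecon('1', n, a, lda, anorm, rcond, work, iwork, info)
     print *, 'estimated 1/cond_1(A) =', rcond
     call dgetrs('N', n, 1, a, lda, ipiv, b, n, info)   ! solve Ax = b
     print *, 'x =', b(:,1)
   end program lapdemo

A simple driver such as DGESV (D + GE + SV) combines the factorization and solve steps in a single call.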
Appendix E
The Test Matrix Toolbox

The Test Matrix Toolbox is a collection of MATLAB M-files containing test matrices, routines for visualizing matrices, routines for direct search optimization, and miscellaneous routines that provide useful additions to MATLAB's existing set of functions. There are over 50 parametrized test matrices, which are mostly square, dense, nonrandom, and of arbitrary dimension. The test matrices include ones with known inverses or known eigenvalues; ill-conditioned or rank deficient matrices; and symmetric, positive definite, orthogonal, defective, involutory, and totally positive matrices.
The visualization routines display surface plots of a matrix and its (pseudo-) inverse, the field of values, Gershgorin disks, and two- and three-dimensional views of pseudospectra. The direct search optimization routines implement the alternating directions method, the multidirectional search method, and the Nelder–Mead simplex method (which are described in §24.2).

Among the miscellaneous routines are one for rounding matrix elements to a specified number of bits and ones that implement the classical and modified Gram–Schmidt methods and LU factorization without pivoting and with complete pivoting.

The Test Matrix Toolbox was originally released in 1989, and a version was published as ACM Algorithm 694 [548, 1991].
The current version, version 3.0, is described in the manual [561, 1995], which should be consulted for further details.

The Test Matrix Toolbox is available by anonymous ftp from The MathWorks; the URL is

   ftp://ftp.mathworks.com/pub/contrib/linalg/testmatrix

The manual [561, 1995] is testmatrix.ps in the same location.

We summarize the contents of the toolbox in the following tables, which list the M-files by category, with short descriptions.

Demonstration

   tmtdemo    Demonstration of Test Matrix Toolbox.

Test Matrices, A–K

   augment    Augmented system matrix.
   cauchy     Cauchy matrix.
   chebspec   Chebyshev spectral differentiation matrix.
   chebvand   Vandermonde-like matrix for the Chebyshev polynomials.
   chow       Chow matrix—a singular Toeplitz lower Hessenberg matrix.
   circul     Circulant matrix.
   clement    Clement matrix—tridiagonal with zero diagonal entries.
   compan     Companion matrix.
   condex     "Counterexamples" to matrix condition number estimators.
   cycol      Matrix whose columns repeat cyclically.
   dingdong   Dingdong matrix—a symmetric Hankel matrix.
   dorr       Dorr matrix—diagonally dominant, ill conditioned, tridiagonal.
   dramadah   A (0, 1) matrix whose inverse has large integer entries.
   fiedler    Fiedler matrix—symmetric.
   forsythe   Forsythe matrix—a perturbed Jordan block.
   frank      Frank matrix—ill conditioned eigenvalues.
   gallery    Famous, and not so famous, test matrices.
   gearm      Gear matrix.
   gfpp       Matrix giving maximal growth factor for Gaussian elimination with partial pivoting.
   grcar      Grcar matrix—a Toeplitz matrix with sensitive eigenvalues.
   hadamard   Hadamard matrix.
   hanowa     A matrix whose eigenvalues lie on a vertical line in the complex plane.
   hilb       Hilbert matrix.
   invhess    Inverse of an upper Hessenberg matrix.
   invol      An involutory matrix.
   ipjfact    A Hankel matrix with factorial elements.
   jordbloc   Jordan block.
   kahan      Kahan matrix—upper trapezoidal.
   kms        Kac–Murdock–Szego Toeplitz matrix.
   krylov     Krylov matrix.

Test Matrices, L–Z

   lauchli    Lauchli matrix—rectangular.
   lehmer     Lehmer matrix—symmetric positive definite.
   lesp       A tridiagonal matrix with real, sensitive eigenvalues.
   lotkin     Lotkin matrix.
   makejcf    A matrix with given Jordan canonical form.
   minij      Symmetric positive definite matrix min(i, j).
   moler      Moler matrix—symmetric positive definite.
   neumann    Singular matrix from the discrete Neumann problem (sparse).
   ohess      Random, orthogonal upper Hessenberg matrix.
   orthog     Orthogonal and nearly orthogonal matrices.
   parter     Parter matrix—a Toeplitz matrix with singular values near π.
   pascal     Pascal matrix.
   pdtoep     Symmetric positive definite Toeplitz matrix.
   pei        Pei matrix.
   pentoep    Pentadiagonal Toeplitz matrix (sparse).
   poisson    Block tridiagonal matrix from Poisson's equation (sparse).
   prolate    Prolate matrix—symmetric, ill-conditioned Toeplitz matrix.
   rando      Random matrix with elements −1, 0, or 1.
   randsvd    Random matrix with pre-assigned singular values.
   redheff    A (0, 1) matrix of Redheffer associated with the Riemann hypothesis.
   riemann    A matrix associated with the Riemann hypothesis.
   rschur     An upper quasi-triangular matrix.
   smoke      Smoke matrix—complex, with a "smoke ring" pseudospectrum.
   tridiag    Tridiagonal matrix (sparse).
   triw       Upper triangular matrix discussed by Wilkinson and others.
   vand       Vandermonde matrix.
   wathen     Wathen matrix—a finite element matrix (sparse, random entries).
   wilk       Various specific matrices devised/discussed by Wilkinson.

Visualization

   fv         Field of values (or numerical range).
   gersh      Gershgorin disks.
   ps         Dot plot of a pseudospectrum.
   pscont     Contours and colour pictures of pseudospectra.
   see        Pictures of a matrix and its (pseudo-) inverse.

Decompositions and Factorizations

   cgs        Classical Gram–Schmidt QR factorization.
   cholp      Cholesky factorization with pivoting of a positive semidefinite matrix.
   cod        Complete orthogonal decomposition.
   diagpiv    Diagonal pivoting factorization with partial pivoting.
   ge         Gaussian elimination without pivoting.
   gecp       Gaussian elimination with complete pivoting.
   gj         Gauss–Jordan elimination to solve Ax = b.
   mgs        Modified Gram–Schmidt QR factorization.
   poldec     Polar decomposition.
   signm      Matrix sign decomposition.

Direct Search Optimization

   adsmax     Alternating directions direct-search method.
   mdsmax     Multidirectional search method for direct search optimization.
   nmsmax     Nelder–Mead simplex method for direct search optimization.

Miscellaneous

   bandred    Band reduction by two-sided unitary transformations.
   chop       Round matrix elements.
   comp       Comparison matrices.
   cond       Matrix condition number in 1, 2, Frobenius, or ∞-norm.
   cpltaxes   Determine suitable axis for plot of complex vector.
   dual       Dual vector with respect to Hölder p-norm.
   eigsens    Eigenvalue condition numbers.
   house      Householder matrix.
   matrix     Test Matrix Toolbox information and matrix access by number.
   matsignt   Matrix sign function of a triangular matrix.
   pnorm      Estimate of matrix p-norm (1 ≤ p ≤ ∞).
   qmult      Pre-multiply by random orthogonal matrix.
   rq         Rayleigh quotient.
   seqa       Additive sequence.
   seqcheb    Sequence of points related to Chebyshev polynomials.
   seqm       Multiplicative sequence.
   show       Display signs of matrix elements.
   skewpart   Skew-symmetric (skew-Hermitian) part.
   sparsify   Randomly sets matrix elements to zero.
   sub        Principal submatrix.
   symmpart   Symmetric (Hermitian) part.
   trap2tri   Unitary reduction of trapezoidal matrix to triangular form.