The approach solves the "semi-normal equations"

    R'*R*x = A'*b

with

    x = R\(R'\(A'*b))

and then employs one step of iterative refinement to reduce roundoff error:

    r = b - A*x
    e = R\(R'\(A'*r))
    x = x + e
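As a minimal runnable sketch of this approach (the matrix and right-hand side below are illustrative, not from the original text), recall that for a sparse matrix the one-output form R = qr(A) computes the "Q-less" QR factorization, returning only the triangular factor:

    A = [speye(20); sprandn(80,20,0.2)];   % illustrative sparse 100-by-20 matrix,
                                           % full column rank by construction
    b = rand(100,1);
    R = qr(A);                 % Q-less QR: only the factor R is formed
    x = R\(R'\(A'*b));         % solve the semi-normal equations R'*R*x = A'*b
    r = b - A*x;               % residual
    e = R\(R'\(A'*r));         % one step of iterative refinement
    x = x + e;
    norm(A'*(b - A*x))         % near zero at a least squares solution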
Incomplete Factorizations

The luinc and cholinc functions provide approximate, incomplete factorizations, which are useful as preconditioners for sparse iterative methods.

The luinc function produces two different kinds of incomplete LU factorizations, one involving a drop tolerance and one involving fill-in level. If A is a sparse matrix, and tol is a small tolerance, then

    [L,U] = luinc(A,tol)

computes an approximate LU factorization where all elements less than tol times the norm of the relevant column are set to zero. Alternatively,

    [L,U] = luinc(A,'0')

computes an approximate LU factorization where the sparsity pattern of L+U is a permutation of the sparsity pattern of A.

For example,

    load west0479
    A = west0479;
    nnz(A)
    nnz(lu(A))
    nnz(luinc(A,1e-6))
    nnz(luinc(A,'0'))

shows that A has 1887 nonzeros, its complete LU factorization has 16777 nonzeros, its incomplete LU factorization with a drop tolerance of 1e-6 has 10311 nonzeros, and its luinc(A,'0') factorization has 1886 nonzeros.

The luinc function has a few other options. See the luinc reference page for details.
The cholinc function provides drop tolerance and level 0 fill-in Cholesky factorizations of symmetric, positive definite sparse matrices. See the cholinc reference page for more information.
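A hedged sketch of both forms follows; the test matrix is illustrative, built with the delsq and numgrid functions that appear later in this section:

    A = delsq(numgrid('S',25));     % 2-D Laplacian: symmetric positive definite
    R = cholinc(A,1e-3);            % drop tolerance incomplete Cholesky
    R0 = cholinc(A,'0');            % level 0 fill-in
    [nnz(chol(A)) nnz(R) nnz(R0)]   % compare fill-in against the complete factor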
Simultaneous Linear Equations

Systems of simultaneous linear equations can be solved by two different classes of methods:

• Direct methods. These are usually variants of Gaussian elimination and are often expressed as matrix factorizations such as LU or Cholesky factorization. The algorithms involve access to the individual matrix elements.

• Iterative methods. Only an approximate solution is produced after a finite number of steps. The coefficient matrix is involved only indirectly, through a matrix-vector product or as the result of an abstract linear operator.

Direct Methods

Direct methods are usually faster and more generally applicable, if there is enough storage available to carry them out. Iterative methods are usually applicable to restricted cases of equations and depend upon properties like diagonal dominance or the existence of an underlying differential operator. Direct methods are implemented in the core of MATLAB and are made as efficient as possible for general classes of matrices.
Iterative methods are usually implemented in MATLAB M-files and may make use of the direct solution of subproblems or preconditioners.

The usual way to access direct methods in MATLAB is not through the lu or chol functions, but rather with the matrix division operators / and \. If A is square, the result of X = A\B is the solution to the linear system A*X = B. If A is not square, then a least squares solution is computed.
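A minimal sketch of the two cases (the matrices here are illustrative):

    A = sparse([4 -1; -1 4]);
    b = [1; 2];
    x = A\b;                           % square system: x solves A*x = b
    C = [speye(3); sprandn(5,3,0.5)];  % 8-by-3, more equations than unknowns
    d = rand(8,1);
    y = C\d;                           % rectangular system: y is a least squares solution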
If A is a square, full, or sparse matrix, then A\B has the same storage class as B. Its computation involves a choice among several algorithms:

• If A is triangular, perform a triangular solve for each column of B.

• If A is a permutation of a triangular matrix, permute it and perform a sparse triangular solve for each column of B.

• If A is symmetric or Hermitian and has positive real diagonal elements, find a symmetric minimum degree order p = symmmd(A), and attempt to compute the Cholesky factorization of A(p,p). If successful, finish with two sparse triangular solves for each column of B.

• Otherwise (if A is not triangular, or is not Hermitian with positive diagonal, or if Cholesky factorization fails), find a column minimum degree order p = colmmd(A). Compute the LU factorization with partial pivoting of A(:,p), and perform two triangular solves for each column of B.
For a square matrix, MATLAB tries these possibilities in order of increasing cost. The tests for triangularity and symmetry are relatively fast and, if successful, allow for faster computation and more efficient memory usage than the general purpose method.

For example, consider the sequence below.

    [L,U] = lu(A);
    y = L\b;
    x = U\y;

In this case, MATLAB uses triangular solves for both matrix divisions, since L is a permutation of a triangular matrix and U is triangular.
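One way to watch which of these paths backslash takes is the sparse monitor flag of spparms; this is a hedged sketch, and the diagnostic output format may vary across versions:

    spparms('spumoni',1)   % print information about sparse algorithm choices
    x = U\y;               % should report a sparse triangular solve
    spparms('spumoni',0)   % restore silent operation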
Using a Different Preordering. If A is not triangular or a permutation of a triangular matrix, backslash (\) uses colmmd and symmmd to determine a minimum degree order. Use the function spparms to turn off the minimum degree preordering if you want to use a better preorder for a particular matrix.

If A is sparse and x = A\b can use LU factorization, you can use a column ordering other than colmmd to solve for x, as in the following example.

    spparms('autommd',0);
    q = colamd(A);
    x = A(:,q)\b;
    x(q) = x;
    spparms('autommd',1);

If A can be factorized using Cholesky factorization, then x = A\b can be computed efficiently using

    spparms('autommd',0);
    p = symamd(A);
    x = A(p,p)\b(p);
    x(p) = x;
    spparms('autommd',1);

In the examples above, the spparms('autommd',0) statement turns the automatic colmmd or symmmd ordering off. The spparms('autommd',1) statement turns it back on, just in case you use A\b later without specifying an appropriate preordering. spparms with no arguments reports the current settings of the sparse parameters.

Iterative Methods

Nine functions are available that implement iterative methods for sparse systems of simultaneous linear equations.

Functions for Iterative Methods for Sparse Systems

    Function    Method
    bicg        Biconjugate gradient
    bicgstab    Biconjugate gradient stabilized
    cgs         Conjugate gradient squared
    gmres       Generalized minimum residual
    lsqr        LSQR implementation of Conjugate Gradients on the Normal Equations
    minres      Minimum residual
    pcg         Preconditioned conjugate gradient
    qmr         Quasiminimal residual
    symmlq      Symmetric LQ

These methods are designed to solve Ax = b or min ‖b − Ax‖.
For the Preconditioned Conjugate Gradient method, pcg, A must be a symmetric, positive definite matrix. minres and symmlq can be used on symmetric indefinite matrices. For lsqr, the matrix need not be square. The other five can handle nonsymmetric, square matrices.
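All nine functions share a similar calling sequence. As a hedged sketch (the right-hand side is chosen so the exact solution is a vector of ones; without a preconditioner the iteration may not converge for this matrix):

    load west0479
    A = west0479;
    b = sum(A,2);              % exact solution is ones(479,1)
    [x,flag,relres,iter] = bicgstab(A,b,1e-8,100);
    % flag = 0 indicates convergence; a nonzero flag suggests that a
    % preconditioner, such as one produced by luinc, is needed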
All nine methods can make use of preconditioners. The linear system

    Ax = b

is replaced by the equivalent system

    M⁻¹Ax = M⁻¹b

The preconditioner M is chosen to accelerate convergence of the iterative method. In many cases, the preconditioners occur naturally in the mathematical model. A partial differential equation with variable coefficients may be approximated by one with constant coefficients, for example. Incomplete matrix factorizations may be used in the absence of natural preconditioners.

The five-point finite difference approximation to Laplace's equation on a square, two-dimensional domain provides an example.
The following statements use the preconditioned conjugate gradient method with preconditioner M = R'*R, where R is the incomplete Cholesky factor of A.

    A = delsq(numgrid('S',50));
    b = ones(size(A,1),1);
    tol = 1.e-3;
    maxit = 10;
    R = cholinc(A,tol);
    [x,flag,err,iter,res] = pcg(A,b,tol,maxit,R',R);

Only four iterations are required to achieve the prescribed accuracy.
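For comparison, this hedged sketch runs the same solve without a preconditioner; unpreconditioned conjugate gradients typically needs substantially more iterations on this problem, though the exact count may vary:

    [x0,flag0,err0,iter0] = pcg(A,b,tol,100);   % no preconditioner, larger maxit
    iter0                                       % compare with the 4 preconditioned iterations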
Background information on these iterative methods and incomplete factorizations is available in [2] and [7].

Eigenvalues and Singular Values

Two functions are available which compute a few specified eigenvalues or singular values. svds is based on eigs, which uses ARPACK [6].

Functions to Compute a Few Eigenvalues or Singular Values

    Function    Description
    eigs        Few eigenvalues
    svds        Few singular values

These functions are most frequently used with sparse matrices, but they can be used with full matrices or even with linear operators defined by M-files.

The statement

    [V,lambda] = eigs(A,k,sigma)

finds the k eigenvalues and corresponding eigenvectors of the matrix A which are nearest the "shift" sigma.
If sigma is omitted, the eigenvalues largest in magnitude are found. If sigma is zero, the eigenvalues smallest in magnitude are found. A second matrix, B, may be included for the generalized eigenvalue problem

    Av = λBv

The statement

    [U,S,V] = svds(A,k)

finds the k largest singular values of A and

    [U,S,V] = svds(A,k,0)

finds the k smallest singular values.
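As a hedged sketch with an illustrative random matrix, the values returned by svds can be checked against the dense svd on a small problem:

    A = sprandn(40,25,0.3);
    s = svd(full(A));          % all 25 singular values, computed densely
    [U,S,V] = svds(A,4);       % the 4 largest singular values of the sparse matrix
    [diag(S) s(1:4)]           % the two columns should agree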
For example, the statements

    L = numgrid('L',65);
    A = delsq(L);

set up the five-point Laplacian difference operator on a 65-by-65 grid in an L-shaped, two-dimensional domain. The statements

    size(A)
    nnz(A)

show that A is a matrix of order 2945 with 14,473 nonzero elements.

The statement

    [v,d] = eigs(A,1,0);

computes the smallest eigenvalue and eigenvector. Finally,

    L(L>0) = full(v(L(L>0)));
    x = -1:1/32:1;
    contour(x,x,L,15)
    axis square

distributes the components of the eigenvector over the appropriate grid points and produces a contour plot of the result.

[Figure: contour plot of the eigenvector on the L-shaped domain; both axes run from -1 to 1]

The numerical techniques used in eigs and svds are described in [6].
Selected Bibliography

[1] Amestoy, P. R., T. A. Davis, and I. S. Duff, "An Approximate Minimum Degree Ordering Algorithm," SIAM Journal on Matrix Analysis and Applications, Vol. 17, No. 4, Oct. 1996, pp. 886-905.

[2] Barrett, R., M. Berry, T. F. Chan, et al., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM, Philadelphia, 1994.

[3] Davis, T. A., Gilbert, J. R., Larimore, S. I., Ng, E., Peyton, B., "A Column Approximate Minimum Degree Ordering Algorithm," Proc.















