T = S(i,j) produces a sparse result if S is sparse and either i or j is a vector. It produces a full scalar if both i and j are scalars. Submatrix indexing on the left, as in T(i,j) = S, does not change the storage format of the matrix on the left.

Permutation and Reordering

A permutation of the rows and columns of a sparse matrix S can be represented in two ways:

• A permutation matrix P acts on the rows of S as P*S or on the columns as S*P'.
• A permutation vector p, which is a full vector containing a permutation of 1:n, acts on the rows of S as S(p,:), or on the columns as S(:,p).

For example, the statements

    p = [1 3 4 2 5]
    I = eye(5,5);
    P = I(p,:);
    e = ones(4,1);
    S = diag(11:11:55) + diag(e,1) + diag(e,-1)

produce

    p =
         1     3     4     2     5

    P =
         1     0     0     0     0
         0     0     1     0     0
         0     0     0     1     0
         0     1     0     0     0
         0     0     0     0     1

    S =
        11     1     0     0     0
         1    22     1     0     0
         0     1    33     1     0
         0     0     1    44     1
         0     0     0     1    55

You can now try some permutations using the permutation vector p and the permutation matrix P.
For example, the statements S(p,:) and P*S produce

    ans =
        11     1     0     0     0
         0     1    33     1     0
         0     0     1    44     1
         1    22     1     0     0
         0     0     0     1    55

Similarly, S(:,p) and S*P' produce

    ans =
        11     0     0     1     0
         1     1     0    22     0
         0    33     1     1     0
         0     1    44     0     1
         0     0     1     0    55

If P is a sparse matrix, then both representations use storage proportional to n, and you can apply either to S in time proportional to nnz(S). The vector representation is slightly more compact and efficient, so the various sparse matrix permutation routines all return full row vectors, with the exception of the pivoting permutation in LU (triangular) factorization, which returns a matrix compatible with earlier versions of MATLAB.

To convert between the two representations, let I = speye(n) be an identity matrix of the appropriate size.
Then,

    P = I(p,:)
    P' = I(:,p)
    p = (1:n)*P'
    p = (P*(1:n)')'

The inverse of P is simply R = P'. You can compute the inverse of p with r(p) = 1:n. For the example above,

    r(p) = 1:5
    r =
         1     4     2     3     5

Reordering for Sparsity

Reordering the columns of a matrix can often make its LU or QR factors sparser. Reordering the rows and columns can often make its Cholesky factors sparser.
The simplest such reordering is to sort the columns by nonzero count. This is sometimes a good reordering for matrices with very irregular structures, especially if there is great variation in the nonzero counts of rows or columns.

The function p = colperm(S) computes this column-count permutation. The colperm M-file has only a single line:

    [ignore,p] = sort(full(sum(spones(S))));

This line performs these steps:

1 The inner call to spones creates a sparse matrix with ones at the location of every nonzero element in S.
2 The sum function sums down the columns of the matrix, producing a vector that contains the count of nonzeros in each column.
3 full converts this vector to full storage format.
4 sort sorts the values in ascending order. The second output argument from sort is the permutation that sorts this vector.

Reordering to Reduce Bandwidth

The reverse Cuthill-McKee ordering is intended to reduce the profile or bandwidth of the matrix.
It is not guaranteed to find the smallest possible bandwidth, but it usually does. The function symrcm(A) actually operates on the nonzero structure of the symmetric matrix A + A', but the result is also useful for asymmetric matrices. This ordering is useful for matrices that come from one-dimensional problems or problems that are in some sense “long and thin.”

Minimum Degree Ordering

The degree of a node in a graph is the number of connections to that node. This is the same as the number of off-diagonal nonzero elements in the corresponding row of the adjacency matrix. The minimum degree algorithm generates an ordering based on how these degrees are altered during Gaussian elimination or Cholesky factorization.
It is a complicated and powerful algorithm that usually leads to sparser factors than most other orderings, including column count and reverse Cuthill-McKee.

MATLAB functions implement two methods for each of two types of matrices: symamd and symmmd for symmetric matrices, and colamd and colmmd for nonsymmetric matrices. colamd and colmmd also work for symmetric matrices of the form A*A' or A'*A.

Because the most time-consuming part of a minimum degree ordering algorithm is keeping track of the degree of each node, all four functions use an approximation to the degree, rather than the exact degree.
As a result:

• Factorizations obtained using colmmd and symmmd tend to have more nonzero elements than if the implementation used exact degrees.
• colamd and symamd use a tighter approximation than colmmd and symmmd. They generate orderings that are as good as could be obtained using exact degrees.
• colamd and symamd are faster than colmmd and symmmd, respectively. This is particularly true for very large matrices.

You can change various parameters associated with details of the algorithms using the spparms function.

For details on the algorithms used by colmmd and symmmd, see [4].
For details on the algorithms used by colamd and symamd, see [5]. The approximate degree used in colamd and symamd is based on [1].

Factorization

This section discusses four important factorization techniques for sparse matrices:

• LU, or triangular, factorization
• Cholesky factorization
• QR, or orthogonal, factorization
• Incomplete factorizations

LU Factorization

If S is a sparse matrix, the statement below returns three sparse matrices L, U, and P such that P*S = L*U.

    [L,U,P] = lu(S)

lu obtains the factors by Gaussian elimination with partial pivoting. The permutation matrix P has only n nonzero elements. As with dense matrices, the statement [L,U] = lu(S) returns a permuted unit lower triangular matrix and an upper triangular matrix whose product is S. By itself, lu(S) returns L and U in a single matrix, without the pivot information.

The sparse LU factorization does not pivot for sparsity, but it does pivot for numerical stability.
In fact, both the sparse factorization (first line below) and the full factorization (second line) produce the same L and U, even though the time and storage requirements might differ greatly.

    [L,U] = lu(S)            % sparse factorization
    [L,U] = lu(full(S))      % full factorization

You can control pivoting in sparse matrices using

    lu(S,thresh)

where thresh is a pivot threshold in [0,1]. Pivoting occurs when the diagonal entry in a column has magnitude less than thresh times the magnitude of any sub-diagonal entry in that column. thresh = 0 forces diagonal pivoting. thresh = 1 is the default.

MATLAB automatically allocates the memory necessary to hold the sparse L and U factors during the factorization. MATLAB does not use any symbolic LU prefactorization to determine the memory requirements and set up the data structures in advance.

Reordering and Factorization
If you obtain a good column permutation p that reduces fill-in, perhaps from symrcm or colamd, then computing lu(S(:,p)) takes less time and storage than computing lu(S). Two such permutations are the symmetric reverse Cuthill-McKee ordering and the symmetric approximate minimum degree ordering:

    r = symrcm(B);
    m = symamd(B);

The three spy plots produced by the lines below show the three adjacency matrices of the Bucky ball graph with these three different numberings. The local, pentagon-based structure of the original numbering is not evident in the other two.

    spy(B)
    spy(B(r,r))
    spy(B(m,m))

[Figure: three spy plots — Original, Reverse Cuthill-McKee, Minimum Degree — each with nz = 180]

The reverse Cuthill-McKee ordering, r, reduces the bandwidth and concentrates all the nonzero elements near the diagonal. The approximate minimum degree ordering, m, produces a fractal-like structure with large blocks of zeros.

To see the fill-in generated in the LU factorization of the Bucky ball, use speye(n,n), the sparse identity matrix, to insert -3s on the diagonal of B:

    B = B - 3*speye(n,n);

Since each row sum is now zero, this new B is actually singular, but it is still instructive to compute its LU factorization.
When called with only one output argument, lu returns the two triangular factors, L and U, in a single sparse matrix. The number of nonzeros in that matrix is a measure of the time and storage required to solve linear systems involving B. Here are the nonzero counts for the three permutations being considered.

    Original                      lu(B)          1022
    Reverse Cuthill-McKee         lu(B(r,r))      968
    Approximate minimum degree    lu(B(m,m))      636

Even though this is a small example, the results are typical. The original numbering scheme leads to the most fill-in.
The fill-in for the reverse Cuthill-McKee ordering is concentrated within the band, but it is almost as extensive as the fill-in for the original ordering. For the minimum degree ordering, the relatively large blocks of zeros are preserved during the elimination, and the amount of fill-in is significantly less than that generated by the other orderings.
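The nonzero counts above are easy to reproduce. The sketch below regenerates them, assuming the bucky function, which returns the 60-by-60 adjacency matrix of the Bucky ball:

```matlab
% Reproduce the fill-in counts for the three orderings (a sketch).
B = bucky;                 % 60-by-60 adjacency matrix of the Bucky ball
n = size(B,1);
B = B - 3*speye(n,n);      % -3s on the diagonal; each row now sums to zero
r = symrcm(B);             % reverse Cuthill-McKee ordering
m = symamd(B);             % approximate minimum degree ordering
nnz(lu(B))                 % original numbering
nnz(lu(B(r,r)))            % reverse Cuthill-McKee
nnz(lu(B(m,m)))            % minimum degree
```

Because B is singular, the single-output lu here is purely illustrative; the three nnz values measure the fill-in under each ordering.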
The spy plots below reflect the characteristics of each reordering.

[Figure: spy plots of the LU factors — Original (nz = 1022), Reverse Cuthill-McKee (nz = 968), Minimum Degree (nz = 636)]

Cholesky Factorization

If S is a symmetric (or Hermitian), positive definite, sparse matrix, the statement below returns a sparse, upper triangular matrix R so that R'*R = S.

    R = chol(S)

chol does not automatically pivot for sparsity, but you can compute minimum degree and profile limiting permutations for use with chol(S(p,p)).

Since the Cholesky algorithm does not use pivoting for sparsity and does not require pivoting for numerical stability, chol does a quick calculation of the amount of memory required and allocates all the memory at the start of the factorization.
You can use symbfact, which uses the same algorithm as chol, to calculate how much memory is allocated.

QR Factorization

MATLAB computes the complete QR factorization of a sparse matrix S with

    [Q,R] = qr(S)

but this is usually impractical. The orthogonal matrix Q often fails to have a high proportion of zero elements. A more practical alternative, sometimes known as “the Q-less QR factorization,” is available.

With one sparse input argument and one output argument,

    R = qr(S)

returns just the upper triangular portion of the QR factorization. The matrix R provides a Cholesky factorization for the matrix associated with the normal equations:

    R'*R = S'*S

However, the loss of numerical information inherent in the computation of S'*S is avoided.

With two input arguments having the same number of rows, and two output arguments, the statement

    [C,R] = qr(S,B)

applies the orthogonal transformations to B, producing C = Q'*B without computing Q.

The Q-less QR factorization allows the solution of sparse least squares problems

    minimize ||A*x - b||

with two steps:

    [c,R] = qr(A,b)
    x = R\c

If A is sparse, but not square, MATLAB uses these steps for the linear equation solving backslash operator, x = A\b. Or, you can do the factorization yourself and examine R for rank deficiency.

It is also possible to solve a sequence of least squares linear systems with different right-hand sides, b, that are not necessarily known when R = qr(A) is computed.
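One way to reuse a precomputed factor R for a new right-hand side is to solve the seminormal equations R'*R*x = A'*b, then apply one step of iterative refinement to recover the accuracy lost in implicitly forming R'*R. A sketch, with hypothetical data A and b:

```matlab
% Sparse least squares from a precomputed Q-less QR factor (a sketch).
A = sprandn(100,20,0.2) + speye(100,20);  % hypothetical full-rank sparse data
b = randn(100,1);
R = qr(A);                  % Q-less QR: the triangular factor only
x = R\(R'\(A'*b));          % solve the seminormal equations R'*R*x = A'*b
res = b - A*x;              % residual of the first solution
e = R\(R'\(A'*res));       % correction from one step of iterative refinement
x = x + e;                  % refined least squares solution
```

The last four lines can be repeated for each new b without refactoring A.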















