Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books, diskettes, or CDROMs visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to trade@cup.cam.ac.uk (outside North America).

2.7 Sparse Linear Systems

A system of linear equations is called sparse if only a relatively small number of its matrix elements a_ij are nonzero. It is wasteful to use general methods of linear algebra on such problems, because most of the O(N^3) arithmetic operations devoted to solving the set of equations or inverting the matrix involve zero operands. Furthermore, you might wish to work problems so large as to tax your available memory space, and it is wasteful to reserve storage for unfruitful zero elements. Note that there are two distinct (and not always compatible) goals for any sparse matrix method: saving time and/or saving space.

We have already considered one archetypal sparse form in §2.4, the band diagonal matrix.
In the tridiagonal case, e.g., we saw that it was possible to save both time (order N instead of N^3) and space (order N instead of N^2). The method of solution was not different in principle from the general method of LU decomposition; it was just applied cleverly, and with due attention to the bookkeeping of zero elements.
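As a concrete reminder of how cheap the tridiagonal case is, here is a minimal sketch of such a solver in the spirit of the tridag routine of §2.4. The function name, 0-based array conventions, and the absence of pivoting are illustrative choices, not the book's exact interface:

```c
#include <assert.h>
#include <math.h>

/* Solve the tridiagonal system A.x = rhs, where A has subdiagonal sub[1..n-1]
   (sub[0] unused), diagonal diag[0..n-1], and superdiagonal sup[0..n-2].
   Runs in O(N) time with O(N) workspace, versus O(N^3) time and O(N^2)
   storage for a general LU solve.  No pivoting is done: every reduced pivot
   must be nonzero (the function returns -1 otherwise). */
int tridiag_solve(const double sub[], const double diag[], const double sup[],
                  const double rhs[], double x[], double work[], int n)
{
    double piv = diag[0];
    if (piv == 0.0) return -1;
    x[0] = rhs[0] / piv;
    for (int j = 1; j < n; j++) {            /* decomposition and forward sweep */
        work[j] = sup[j-1] / piv;
        piv = diag[j] - sub[j] * work[j];
        if (piv == 0.0) return -1;
        x[j] = (rhs[j] - sub[j] * x[j-1]) / piv;
    }
    for (int j = n - 2; j >= 0; j--)         /* back substitution */
        x[j] -= work[j+1] * x[j+1];
    return 0;
}
```

For example, the 4x4 system with diagonal entries 4, off-diagonal entries -1, and right-hand side (3, 2, 2, 3) yields x = (1, 1, 1, 1).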
Many practical schemes for dealing with sparse problems have this same character. They are fundamentally decomposition schemes, or else elimination schemes akin to Gauss-Jordan, but carefully optimized so as to minimize the number of so-called fill-ins, initially zero elements which must become nonzero during the solution process, and for which storage must be reserved.

Direct methods for solving sparse equations, then, depend crucially on the precise pattern of sparsity of the matrix. Patterns that occur frequently, or that are useful as way-stations in the reduction of more general forms, already have special names and special methods of solution. We do not have space here for any detailed review of these. References listed at the end of this section will furnish you with an "in" to the specialized literature, and the following list of buzz words (and Figure 2.7.1) will at least let you hold your own at cocktail parties:

- tridiagonal
- band diagonal (or banded) with bandwidth M
- band triangular
- block diagonal
- block tridiagonal
- block triangular
- cyclic banded
- singly (or doubly) bordered block diagonal
- singly (or doubly) bordered block triangular
- singly (or doubly) bordered band diagonal
- singly (or doubly) bordered band triangular
- other (!)

Figure 2.7.1. Some standard forms for sparse matrices. (a) Band diagonal; (b) block triangular; (c) block tridiagonal; (d) singly bordered block diagonal; (e) doubly bordered block diagonal; (f) singly bordered block triangular; (g) bordered band-triangular; (h) and (i) singly and doubly bordered band diagonal; (j) and (k) other! (after Tewarson) [1].

You should also be aware of some of the special sparse forms that occur in the solution of partial differential equations in two or more dimensions. See Chapter 19.

If your particular pattern of sparsity is not a simple one, then you may wish to try an analyze/factorize/operate package, which automates the procedure of figuring out how fill-ins are to be minimized. The analyze stage is done once only for each pattern of sparsity. The factorize stage is done once for each particular matrix that fits the pattern. The operate stage is performed once for each right-hand side to be used with the particular matrix.
Consult [2,3] for references on this. The NAG library [4] has an analyze/factorize/operate capability. A substantial collection of routines for sparse matrix calculation is also available from IMSL [5] as the Yale Sparse Matrix Package [6].

You should be aware that the special order of interchanges and eliminations, prescribed by a sparse matrix method so as to minimize fill-ins and arithmetic operations, generally acts to decrease the method's numerical stability as compared to, e.g., regular LU decomposition with pivoting. Scaling your problem so as to make its nonzero matrix elements have comparable magnitudes (if you can do it) will sometimes ameliorate this problem.

In the remainder of this section, we present some concepts which are applicable to some general classes of sparse matrices, and which do not necessarily depend on details of the pattern of sparsity.

Sherman-Morrison Formula

Suppose that you have already obtained, by herculean effort, the inverse matrix A^-1 of a square matrix A.
Now you want to make a "small" change in A, for example change one element a_ij, or a few elements, or one row, or one column. Is there any way of calculating the corresponding change in A^-1 without repeating your difficult labors? Yes, if your change is of the form

    A → (A + u ⊗ v)        (2.7.1)

for some vectors u and v. If u is a unit vector e_i, then (2.7.1) adds the components of v to the ith row. (Recall that u ⊗ v is a matrix whose i,jth element is the product of the ith component of u and the jth component of v.) If v is a unit vector e_j, then (2.7.1) adds the components of u to the jth column. If both u and v are proportional to unit vectors e_i and e_j respectively, then a term is added only to the element a_ij. The Sherman-Morrison formula gives the inverse (A + u ⊗ v)^-1, and is derived briefly as follows:

    (A + u ⊗ v)^-1 = (1 + A^-1 · u ⊗ v)^-1 · A^-1
                   = (1 − A^-1 · u ⊗ v + A^-1 · u ⊗ v · A^-1 · u ⊗ v − ...) · A^-1
                   = A^-1 − A^-1 · u ⊗ v · A^-1 (1 − λ + λ^2 − ...)
                   = A^-1 − (A^-1 · u) ⊗ (v · A^-1) / (1 + λ)        (2.7.2)

where

    λ ≡ v · A^-1 · u        (2.7.3)

The second line of (2.7.2) is a formal power series expansion. In the third line, the associativity of outer and inner products is used to factor out the scalars λ.

The use of (2.7.2) is this: Given A^-1 and the vectors u and v, we need only perform two matrix multiplications and a vector dot product,

    z ≡ A^-1 · u
    w ≡ (A^-1)^T · v
    λ = v · z        (2.7.4)

to get the desired change in the inverse

    A^-1 → A^-1 − (z ⊗ w) / (1 + λ)        (2.7.5)
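In code, (2.7.4)-(2.7.5) amount to two matrix-vector products, a dot product, and a rank-one correction. The following is a minimal sketch for a dense, explicitly stored inverse; the function name and the fixed dimension NN are illustrative only:

```c
#include <assert.h>
#include <math.h>

#define NN 2   /* illustrative fixed dimension */

/* Update ainv in place for the change A -> A + u (x) v, per (2.7.4)-(2.7.5):
   z = ainv.u,  w = ainv^T.v,  lambda = v.z,  ainv -= z (x) w / (1+lambda).
   Cost is O(N^2), versus O(N^3) for inverting from scratch.
   Returns -1 if 1+lambda == 0, i.e. the updated matrix is singular. */
int sherman_morrison_update(double ainv[NN][NN], const double u[NN],
                            const double v[NN])
{
    double z[NN], w[NN], lambda = 0.0;
    for (int i = 0; i < NN; i++) {
        z[i] = w[i] = 0.0;
        for (int j = 0; j < NN; j++) {
            z[i] += ainv[i][j] * u[j];   /* z = ainv . u   */
            w[i] += ainv[j][i] * v[j];   /* w = ainv^T . v */
        }
    }
    for (int i = 0; i < NN; i++) lambda += v[i] * z[i];
    if (1.0 + lambda == 0.0) return -1;
    for (int i = 0; i < NN; i++)
        for (int j = 0; j < NN; j++)
            ainv[i][j] -= z[i] * w[j] / (1.0 + lambda);
    return 0;
}
```

Starting from A = 1 (so ainv is the identity) and adding u ⊗ v with u = e_1, v = e_2 changes a_12 from 0 to 1, and the update produces the correct new inverse with -1 in the (1,2) position.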
The whole procedure requires only 3N^2 multiplies and a like number of adds (an even smaller number if u or v is a unit vector).

The Sherman-Morrison formula can be directly applied to a class of sparse problems. If you already have a fast way of calculating the inverse of A (e.g., a tridiagonal matrix, or some other standard sparse form), then (2.7.4)-(2.7.5) allow you to build up to your related but more complicated form, adding for example a row or column at a time. Notice that you can apply the Sherman-Morrison formula more than once successively, using at each stage the most recent update of A^-1 (equation 2.7.5). Of course, if you have to modify every row, then you are back to an N^3 method. The constant in front of the N^3 is only a few times worse than the better direct methods, but you have deprived yourself of the stabilizing advantages of pivoting, so be careful.

For some other sparse problems, the Sherman-Morrison formula cannot be directly applied for the simple reason that storage of the whole inverse matrix A^-1 is not feasible. If you want to add only a single correction of the form u ⊗ v, and solve the linear system

    (A + u ⊗ v) · x = b        (2.7.6)

then you proceed as follows. Using the fast method that is presumed available for the matrix A, solve the two auxiliary problems

    A · y = b
    A · z = u        (2.7.7)

for the vectors y and z. In terms of these,

    x = y − [ (v · y) / (1 + v · z) ] z        (2.7.8)

as we see by multiplying (2.7.2) on the right by b.
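The recipe (2.7.6)-(2.7.8) translates almost line for line into code. In this sketch the "fast method that is presumed available" is stood in for by a trivial diagonal solver; in a real application it would be, e.g., a banded or tridiagonal routine. All names and the fixed dimension NN are illustrative:

```c
#include <assert.h>
#include <math.h>

#define NN 3   /* illustrative fixed dimension */

/* Stand-in for the fast solver for A: here A is diagonal, so a solve is one
   division per component. */
static void fast_solve(const double diag[NN], const double rhs[NN], double x[NN])
{
    for (int i = 0; i < NN; i++) x[i] = rhs[i] / diag[i];
}

/* Solve (A + u (x) v).x = b per (2.7.7)-(2.7.8): solve A.y = b and A.z = u,
   then x = y - [(v.y)/(1 + v.z)] z.  A^-1 is never formed or stored. */
void solve_rank1_corrected(const double diag[NN], const double u[NN],
                           const double v[NN], const double b[NN], double x[NN])
{
    double y[NN], z[NN], vy = 0.0, vz = 0.0;
    fast_solve(diag, b, y);              /* A . y = b */
    fast_solve(diag, u, z);              /* A . z = u */
    for (int i = 0; i < NN; i++) { vy += v[i] * y[i]; vz += v[i] * z[i]; }
    double fac = vy / (1.0 + vz);
    for (int i = 0; i < NN; i++) x[i] = y[i] - fac * z[i];
}
```

With A twice the identity and the correction u = e_1, v = e_3 (which puts a 1 in the upper-right corner), the right-hand side (3, 2, 2) gives x = (1, 1, 1).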
Cyclic Tridiagonal Systems

So-called cyclic tridiagonal systems occur quite frequently, and are a good example of how to use the Sherman-Morrison formula in the manner just described. The equations have the form

    | b_1  c_1   0   ...             β    |   | x_1     |     | r_1     |
    | a_2  b_2  c_2  ...                  |   | x_2     |     | r_2     |
    |            ...                      | · |  ...    |  =  |  ...    |
    |      ...  a_(N-1) b_(N-1)  c_(N-1)  |   | x_(N-1) |     | r_(N-1) |
    | α    ...    0     a_N      b_N      |   | x_N     |     | r_N     |        (2.7.9)

This is a tridiagonal system, except for the matrix elements α and β in the corners. Forms like this are typically generated by finite-differencing differential equations with periodic boundary conditions (§19.4).

We use the Sherman-Morrison formula, treating the system as tridiagonal plus a correction. In the notation of equation (2.7.6), define vectors u and v to be

    u = ( γ, 0, ..., 0, α )^T        v = ( 1, 0, ..., 0, β/γ )^T        (2.7.10)

Here γ is arbitrary for the moment.
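Putting the pieces together, a cyclic tridiagonal solver subtracts u ⊗ v from the two corner diagonal elements to get a genuinely tridiagonal matrix, solves it twice, and combines the results as in (2.7.8). This sketch folds in a pivot-free tridiagonal solve; the names, 0-based indexing, fixed dimension NN, and the particular choice γ = -b_1 are illustrative assumptions (the text has not yet fixed γ at this point):

```c
#include <assert.h>
#include <math.h>

#define NN 4   /* illustrative fixed dimension */

/* Plain tridiagonal solve as in Section 2.4 (no pivoting). */
static void tri_solve(const double sub[NN], const double diag[NN],
                      const double sup[NN], const double rhs[NN], double x[NN])
{
    double work[NN], piv = diag[0];
    x[0] = rhs[0] / piv;
    for (int j = 1; j < NN; j++) {
        work[j] = sup[j-1] / piv;
        piv = diag[j] - sub[j] * work[j];
        x[j] = (rhs[j] - sub[j] * x[j-1]) / piv;
    }
    for (int j = NN - 2; j >= 0; j--) x[j] -= work[j+1] * x[j+1];
}

/* Cyclic tridiagonal solve via Sherman-Morrison, equations (2.7.9)-(2.7.10):
   alpha and beta are the corner elements; u = (gamma,0,...,0,alpha) and
   v = (1,0,...,0,beta/gamma) carry them as a rank-one correction. */
void cyclic_solve(const double sub[NN], const double diag[NN], const double sup[NN],
                  double alpha, double beta, const double rhs[NN], double x[NN])
{
    double bb[NN], u[NN], z[NN];
    double gamma = -diag[0];                 /* arbitrary nonzero choice */
    for (int i = 0; i < NN; i++) { bb[i] = diag[i]; u[i] = 0.0; }
    bb[0]    = diag[0] - gamma;              /* subtract u (x) v from the diagonal */
    bb[NN-1] = diag[NN-1] - alpha * beta / gamma;
    u[0] = gamma;  u[NN-1] = alpha;
    tri_solve(sub, bb, sup, rhs, x);         /* A . y = r  (y kept in x) */
    tri_solve(sub, bb, sup, u, z);           /* A . z = u                */
    double fac = (x[0] + beta * x[NN-1] / gamma) /        /* (v.y)/(1+v.z) */
                 (1.0 + z[0] + beta * z[NN-1] / gamma);
    for (int i = 0; i < NN; i++) x[i] -= fac * z[i];
}
```

For the 4x4 periodic system with diagonal 4, off-diagonals -1, and corner elements α = β = -1, the right-hand side (2, 2, 2, 2) returns x = (1, 1, 1, 1), which one can check directly against (2.7.9).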