Additional complications can occur, such as dealing with curved boundaries. Basis functions typically used are bilinear or bicubic functions in two dimensions or trilinear or tricubic functions in three dimensions, analogous to the "hat" functions or piecewise cubics in one dimension. Of course, the increase in dimensionality means that the linear system to be solved is much larger, but it is still sparse owing to the local support of the basis functions. Finite element methods for PDEs are extremely flexible and powerful, but a detailed treatment of them is beyond the scope of this book.

11.4 Direct Methods for Sparse Linear Systems

All types of boundary value problems, as well as implicit methods for time-dependent PDEs, give rise to systems of linear algebraic equations to solve.
The use of finite difference schemes involving only a few variables each, or the use of localized basis functions in a finite element approach, causes the matrix of the linear system to be sparse. This sparsity can be exploited to reduce the storage and work required for solving the linear system to much less than the O(n^2) and O(n^3), respectively, that might be expected in a more naive approach.
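As a concrete illustration of these savings, here is a minimal sketch in Python using scipy.sparse; the library choice and the model tridiagonal system are our own, not the text's. The matrix is stored by its three diagonals in O(n) space, and the sparse direct solve costs O(n) work for this tridiagonal case, versus O(n^2) storage and O(n^3) work for a naive dense factorization.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100000
# Standard second-difference matrix, stored by its three diagonals:
# O(n) storage instead of the O(n^2) a dense array would need.
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
             shape=(n, n), format="csc")
b = np.ones(n)

x = spla.spsolve(A, b)            # sparse direct solve
print(np.linalg.norm(A @ x - b))  # residual should be near machine precision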
In this section we briefly consider direct methods for solving large sparse linear systems, and then in the following section we will discuss iterative methods for such systems in somewhat more detail.

11.4.1 Sparse Factorization Methods

Gaussian elimination and its variants, such as Cholesky factorization for symmetric positive definite matrices, are applicable to solving large sparse systems, but a great deal of care must be exercised to achieve reasonable efficiency in both solution time and storage requirements. The key to this efficiency is to store and operate on only the nonzero entries of the matrix. Thus, special data structures are required rather than the simple two-dimensional arrays that are so natural for storing dense matrices.

For one-dimensional problems, the equations and unknowns can usually be ordered so that the nonzeros are concentrated in a relatively narrow band, which can be stored efficiently in a rectangular two-dimensional array by diagonals.
Algorithms are available for reducing the bandwidth, if necessary, by reordering the rows and columns of the matrix. But for problems in two or more dimensions, even the narrowest possible band often contains mostly zeros, and hence any type of two-dimensional array storage would be prohibitively wasteful. In general, sparse systems require data structures in which individual nonzero entries are stored, along with the indices required to identify their locations in the matrix. Explicitly storing the indices not only incurs additional storage overhead but also makes arithmetic operations on the nonzeros less efficient, owing to the indirect addressing required to access the operands.
Thus, such a representation is worthwhile only if the matrix is sufficiently sparse, which is often the case for very large problems arising from PDEs and many other applications.
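One common such data structure is compressed sparse row (CSR) storage. The following minimal sketch (the class and variable names are ours) shows how only the nonzeros are stored, together with the column index of each and a pointer to where each row begins, and how the indirect addressing the text mentions shows up in a matrix-vector product.

import numpy as np

class CSRMatrix:
    def __init__(self, dense):
        dense = np.asarray(dense, dtype=float)
        self.n_rows, self.n_cols = dense.shape
        self.values = []      # nonzero entries, stored row by row
        self.col_index = []   # column of each nonzero
        self.row_ptr = [0]    # start of each row in values/col_index
        for row in dense:
            for j, v in enumerate(row):
                if v != 0.0:
                    self.values.append(v)
                    self.col_index.append(j)
            self.row_ptr.append(len(self.values))

    def matvec(self, x):
        # The indirect addressing x[self.col_index[k]] is the overhead
        # referred to above, relative to dense array arithmetic.
        y = np.zeros(self.n_rows)
        for i in range(self.n_rows):
            for k in range(self.row_ptr[i], self.row_ptr[i + 1]):
                y[i] += self.values[k] * x[self.col_index[k]]
        return y

A = CSRMatrix([[2.0, -1.0, 0.0],
               [-1.0, 2.0, -1.0],
               [0.0, -1.0, 2.0]])
print(A.matvec(np.ones(3)))   # [1. 0. 1.]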
When applied to a sparse matrix, LU or Cholesky factorization can be carried out in the usual manner, but taking linear combinations of rows or columns to annihilate unwanted nonzero entries can in turn introduce new nonzeros into locations in the matrix that were initially zero. Such new nonzeros, called fill, must then be stored and, depending on their locations, may eventually be annihilated themselves in order to obtain the triangular factors. In any case, the resulting triangular factors can be expected to contain at least as many nonzeros as the original matrix and usually a significant amount of fill as well. The amount of fill incurred is very sensitive to the order in which the rows and columns of the matrix are processed, so one of the central problems in sparse factorization is to reorder the original matrix to limit the amount of fill that the matrix suffers during factorization. Exact minimization of fill turns out to be a very hard combinatorial problem (NP-complete), but heuristic algorithms are available, such as minimum degree and nested dissection, that do a good job of limiting fill for many types of problems.
We sketch these algorithms briefly in the following example; see [68, 93] for further details.
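Before turning to the mesh example, the effect of ordering on fill can be seen in miniature with an "arrowhead" matrix (our own illustrative choice, not from the text): with the dense row and column ordered first, Cholesky factorization fills in the lower triangle completely, while the reverse ordering produces no fill at all.

import numpy as np

n = 6
A = np.eye(n) * 4.0
A[0, 1:] = A[1:, 0] = 1.0          # dense first row and column ("arrowhead")

def fill_count(M):
    # Number of nonzeros in the Cholesky factor beyond those already
    # present in the lower triangle of M.
    L = np.linalg.cholesky(M)
    return np.count_nonzero(np.tril(L)) - np.count_nonzero(np.tril(M))

p = np.arange(n)[::-1]             # reverse ordering: arrow points the other way
print(fill_count(A))               # 10: eliminating node 0 connects all others
print(fill_count(A[np.ix_(p, p)])) # 0: no fill at all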
Example 11.5 Sparse Factorization. To illustrate sparse factorization, we consider a matrix arising from a typical two-dimensional elliptic boundary value problem, the Laplace equation on the unit square (see Example 11.4). A 3 × 3 grid of interior mesh points is shown on the left in Fig. 11.6, with the points, or nodes, numbered in a natural, row-wise order. The Laplace equation is then approximated by a system of linear equations using the standard second-order finite difference approximation to the second derivatives. In the diagram, a pair of nodes is connected by a line, or edge, if both appear in the same equation in this system. We say that two nodes are neighbors if they are connected by an edge.

[Figure 11.6: Finite difference mesh and nonzero patterns of corresponding sparse matrix A and its Cholesky factor L.]

The nonzero pattern of the 9 × 9 symmetric positive definite matrix A of this linear system is shown in the center of Fig. 11.6, where a nonzero entry of the matrix is indicated by × and zero entries are blank. The diagonal entries of the matrix correspond to the nodes in the mesh, and the nonzero off-diagonal entries correspond to the edges in the mesh (i.e., a_ij ≠ 0 ⇔ nodes i and j are neighbors). Note that the matrix is banded, but it also has many zero entries inside the band. More specifically, the matrix is block tridiagonal, with each nonzero block being either tridiagonal or diagonal, as expected for a row- or column-wise ordering of a two-dimensional grid.
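This structure is easy to reproduce numerically. The sketch below (our own, using the standard Kronecker-product construction of the 2-D discrete Laplacian, which is consistent with but not spelled out in the text) builds the 9 × 9 matrix and prints its nonzero pattern.

import numpy as np
import scipy.sparse as sp

k = 3                                    # 3x3 grid of interior nodes
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(k, k))
I = sp.identity(k)
A = sp.kron(I, T) + sp.kron(T, I)        # 9x9, block tridiagonal

# Print the nonzero pattern: 'x' for a nonzero entry, '.' for zero.
for row in A.toarray():
    print("".join("x" if v != 0 else "." for v in row))

The diagonal blocks of A come out tridiagonal and the off-diagonal blocks diagonal, matching the pattern in the center of Fig. 11.6.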
Cholesky factorization of the matrix in this ordering fills in the band almost completely, as shown on the right in Fig. 11.6, where fill entries (new nonzeros) are indicated by +. We will see that there are other orderings in which the matrix suffers considerably less fill.

Each step in the factorization process corresponds to the elimination of a node from the mesh. Eliminating a node causes all of its neighboring nodes to become connected to each other. If any such neighbors were not already connected, then fill results (i.e., new edges in the mesh and new nonzeros in the matrix). Thus, a good heuristic for limiting fill is to eliminate first those nodes having fewest neighbors.
The number of neighbors of a given node is called its degree, so this heuristic is known as minimum degree. At each step, the minimum degree algorithm selects for elimination a node of smallest degree, breaking ties arbitrarily. After the node has been eliminated, its former neighbors all become connected to each other, so the degrees of some nodes may change. The process is then repeated, with a new node of minimum degree eliminated next, and so on until all nodes have been eliminated. A minimum degree ordering for our example problem is shown in Fig. 11.7, along with the correspondingly permuted matrix and resulting Cholesky factor.
Although there is no obvious pattern to the nonzeros in the reordered matrix, the Cholesky factor suffers much less fill than with the band ordering. This difference is much more pronounced in larger problems, and more sophisticated variants of the minimum degree algorithm are among the most effective general-purpose ordering algorithms known.

[Figure 11.7: Finite difference mesh reordered by minimum degree (nodes renumbered, top row to bottom: 3 6 4 / 8 9 7 / 1 5 2), with nonzero patterns of the correspondingly permuted matrix and its Cholesky factor.]
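A bare-bones version of the heuristic can be written directly from this description; the graph representation and tie-breaking rule below are our own simplifications, and production implementations are far more sophisticated.

def minimum_degree_order(adj):
    """adj: dict mapping each node to the set of its neighbors."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    order = []
    while adj:
        # Select a node of smallest degree (ties broken by node label).
        v = min(adj, key=lambda u: (len(adj[u]), u))
        nbrs = adj.pop(v)
        order.append(v)
        # Eliminating v connects all of its former neighbors to each
        # other; any edges added here correspond to fill in the factor.
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= nbrs - {u}
    return order

# The 3x3 grid graph of Example 11.5, nodes numbered row-wise 1..9.
grid = {1: {2, 4}, 2: {1, 3, 5}, 3: {2, 6},
        4: {1, 5, 7}, 5: {2, 4, 6, 8}, 6: {3, 5, 9},
        7: {4, 8}, 8: {5, 7, 9}, 9: {6, 8}}
print(minimum_degree_order(grid))   # the degree-2 corner nodes go first

On this mesh the four corner nodes, having the fewest neighbors, are eliminated first, as the heuristic suggests.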
Nested dissection is a divide-and-conquer strategy for determining a good ordering to limit fill in sparse factorization. First, a small set of nodes whose removal splits the mesh into two pieces of roughly equal size is selected, and these separator nodes are numbered last. Then the process is repeated recursively on each remaining piece of the mesh until all nodes have been numbered. A nested dissection ordering for our example problem is shown in Fig. 11.8, along with the correspondingly permuted matrix and resulting Cholesky factor. Separating the mesh into two pieces means that no node in either piece is connected to any node in the other, so eliminating a node in one piece introduces no fill into the other.
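For a simple k × k grid, an ordering in this spirit can be sketched as follows. This is a much-simplified version of our own: the separator is always a middle grid column, and each remaining piece is dissected only along columns, so the ordering it produces need not match Fig. 11.8 exactly.

def nested_dissection(cols, rows):
    """Order the grid nodes (r, c), numbering each separator last."""
    if len(cols) <= 1:
        return [(r, c) for c in cols for r in rows]
    mid = cols[len(cols) // 2]                 # middle column is the separator
    left = nested_dissection(cols[: len(cols) // 2], rows)
    right = nested_dissection(cols[len(cols) // 2 + 1 :], rows)
    separator = [(r, mid) for r in rows]
    return left + right + separator            # separator numbered last

k = 3
order = nested_dissection(list(range(k)), list(range(k)))
# Convert (row, col) pairs to the row-wise node numbers 1..9 of Fig. 11.6.
print([r * k + c + 1 for r, c in order])       # [1, 4, 7, 3, 6, 9, 2, 5, 8]

Here the middle column {2, 5, 8} separates the left column {1, 4, 7} from the right column {3, 6, 9} and is numbered last, so no fill can arise between the two outer columns.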