Moukalled F., Mangani L., Darwish M. The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM and Matlab
The solution for the first iteration is obtained as

φ1^(1) = (1/3)(φ2^(0) + 3) = (1/3)(0 + 3) = 1        ⇒ e1 = |1.4548 − 1| = 0.4548
φ2^(1) = (1/6)(2φ1^(1) + φ3^(0) + 4) = (1/6)(2·1 + 0 + 4) = 1        ⇒ e2 = |1.3645 − 1| = 0.3645
φ3^(1) = (1/6)(2φ2^(1) + φ4^(0) + 5) = (1/6)(2·1 + 0 + 5) = 1.1667        ⇒ e3 = |1.2776 − 1.1667| = 0.1109
φ4^(1) = (1/7)(2φ3^(1) − 3) = (1/7)(2·1.1667 − 3) = −0.09523        ⇒ e4 = |−0.06354 + 0.09523| = 0.03169

Computations proceed in the same manner, with the solution obtained treated as the new guess. The results for the first five iterations are given in Table 10.2.

Table 10.2 Summary of results obtained using the Gauss-Seidel iterative method

Iter #  φ1      e1        φ2      e2        φ3      e3        φ4         e4
0       0       1.4548    0       1.3645    0       1.2776    0          0.06354
1       1       0.4548    1       0.3645    1.1667  0.1109    −0.09523   0.03169
2       1.3333  0.1215    1.3056  5.90E−02  1.2526  2.49E−02  −7.07E−02  7.13E−03
3       1.4352  1.97E−02  1.3538  1.07E−02  1.2728  4.76E−03  −6.49E−02  1.36E−03
4       1.4513  3.57E−03  1.3626  1.98E−03  1.2767  8.88E−04  −6.38E−02  2.54E−04
5       1.4542  6.61E−04  1.3642  3.68E−04  1.2774  1.65E−04  −6.36E−02  4.72E−05

10.3.3 Preconditioning and Iterative Methods

The rate of convergence of iterative methods depends on the spectral properties of the iteration matrix B, which is contingent on the matrix of coefficients.
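The Gauss-Seidel sweeps above can be reproduced with a short script. The coefficient matrix and right-hand side below are an assumption, reconstructed from the divisors and numerators appearing in the four update formulas; the function name is illustrative.

```python
# Coefficient matrix and right-hand side reconstructed (an assumption) from
# the update formulas of the example: divisors 3, 6, 6, 7 and sources 3, 4, 5, -3.
A = [[ 3.0, -1.0,  0.0,  0.0],
     [-2.0,  6.0, -1.0,  0.0],
     [ 0.0, -2.0,  6.0, -1.0],
     [ 0.0,  0.0, -2.0,  7.0]]
b = [3.0, 4.0, 5.0, -3.0]

def gauss_seidel(A, b, n_iter):
    """Gauss-Seidel sweeps: each unknown is updated with the newest values."""
    n = len(b)
    phi = [0.0] * n                      # initial guess, as in the example
    for _ in range(n_iter):
        for i in range(n):
            sigma = sum(A[i][j] * phi[j] for j in range(n) if j != i)
            phi[i] = (b[i] - sigma) / A[i][i]
    return phi

print(gauss_seidel(A, b, 1))   # first sweep: 1, 1, 1.1667, -0.09523 (Table 10.2)
print(gauss_seidel(A, b, 5))   # ~ 1.4542, 1.3642, 1.2774, -0.0636
```

After five sweeps the iterates agree with the last row of Table 10.2 to four decimal places.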
Based on that, an iterative method looks for a transformation of the system of equations into an equivalent one that has the same solution, but with better spectral properties. Under these conditions the eigenvalues of the equivalent system are more clustered, allowing the iterative solution to be obtained faster than with the original system. A preconditioner is defined as a matrix that effects such a transformation. A preconditioning matrix P is defined such that the system

P⁻¹Aφ = P⁻¹b        (10.79)

has the same solution as the original system Aφ = b, but the spectral properties of its coefficient matrix P⁻¹A are more conducive.
In defining the preconditioner P, the difficulty is to find a matrix whose inverse approximates A⁻¹ and which is easy to invert (i.e., for which P⁻¹ can be found at a reasonable cost).

Writing again Eq. (10.47), but now with P replacing M (i.e., M = P and A = P − N), the associated fixed point iteration system is given by

φ^(n) = Bφ^(n−1) + Cb
      = P⁻¹Nφ^(n−1) + P⁻¹b
      = P⁻¹(P − A)φ^(n−1) + P⁻¹b
      = (I − P⁻¹A)φ^(n−1) + P⁻¹b        (10.80)

which in residual form can be written as

φ^(n) = (I − P⁻¹A)φ^(n−1) + P⁻¹b
      = φ^(n−1) + P⁻¹(b − Aφ^(n−1))        (10.81)
      = φ^(n−1) + P⁻¹r^(n−1)

From both equations it is now clear that the iterative procedure is just a fixed-point iteration on a preconditioned system associated with the decomposition A = P − N, for which convergence requires the spectral radius to satisfy

ρ(I − P⁻¹A) < 1        (10.82)

By comparison, the preconditioning matrices for the Jacobi (J) and Gauss-Seidel (GS) methods are simply

P_J = D
P_GS = D + L        (10.83)

where D and L are respectively the diagonal and the lower triangular part of matrix A. Thus, preconditioning is a manipulation of the original system to improve its spectral properties, with the preconditioning matrix P used in the associated iterative procedure.
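In code, the residual form of Eq. (10.81) with the Jacobi preconditioner P_J = D of Eq. (10.83) reduces to a division by the diagonal entries. The following minimal sketch uses an illustrative 3 × 3 system (matrix, right-hand side, and function name are not from the text):

```python
# Fixed-point iteration phi <- phi + P^{-1} r of Eq. (10.81), sketched with the
# Jacobi preconditioner P = D (the diagonal of A). System values illustrative.
A = [[ 4.0, -1.0,  0.0],
     [-1.0,  4.0, -1.0],
     [ 0.0, -1.0,  4.0]]
b = [2.0, 4.0, 10.0]

def preconditioned_iteration(A, b, n_iter):
    n = len(b)
    phi = [0.0] * n
    for _ in range(n_iter):
        # residual r = b - A*phi
        r = [b[i] - sum(A[i][j] * phi[j] for j in range(n)) for i in range(n)]
        # P = D, so applying P^{-1} is just a division by the diagonal entries
        phi = [phi[i] + r[i] / A[i][i] for i in range(n)]
    return phi

print(preconditioned_iteration(A, b, 50))   # converges toward [1, 2, 3]
```

Replacing the division by a forward solve with D + L would give the Gauss-Seidel preconditioner of Eq. (10.83).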
As will be described in the following sections, it is possible to develop more advanced preconditioning matrices in which the coefficients are defined in a more complex way.

10.3.4 Matrix Decomposition Techniques

The low rate of convergence of the Gauss-Seidel and Jacobi methods was the prime motivator for the development of faster iterative techniques. One approach to accelerate the convergence rate of solvers and to develop iterative methods is through the use of more advanced preconditioners. A simple, yet efficient, approach for that purpose is to perform an incomplete factorization of the original matrix of coefficients A.
The stress on incomplete is essential since a complete factorization of A into a lower triangular matrix L and an upper triangular matrix U is tantamount to a direct solution and is very expensive in terms of memory requirements (fill-in and loss of sparsity) and computational cost.

10.3.5 Incomplete LU (ILU) Decomposition

As can be seen in Example 1, the L and U matrices result in non-zero elements at locations that were 0 in the original matrix A (this is known as fill-in). So if an incomplete LU (ILU) factorization of A is performed such that the resulting lower L and upper U matrices have the same nonzero structure as the lower and upper parts of A, then

A = LU + R        (10.84)

where R is the residual of the factorization procedure.
The matrices L and U, being sparse (same structure as A), are easier to deal with than if they were obtained from a complete factorization. However, their product being only an approximation to A necessitates the use of an iterative solution procedure to solve the system of equations. The first step in the solution process is to rewrite Eq. (10.1) as

Aφ = b  ⇒  0 = b − Aφ  ⇒  (A − R)φ = (A − R)φ + (b − Aφ)        (10.85)

Denoting values obtained from the previous iteration with a superscript (n − 1), and values obtained at the current iteration with superscript (n), the iterative process is obtained by rewriting Eq. (10.85) in the following form:

(A − R)φ^(n) = (A − R)φ^(n−1) + (b − Aφ^(n−1))        (10.86)

Therefore values of φ^(n) at the current iteration can be obtained from knowledge of the φ^(n−1) values obtained at the previous iteration.
Equation (10.86) is usually solved in residual form whereby the solution φ^(n) at iteration (n) is expressed in terms of the solution φ^(n−1) at iteration (n − 1) plus a correction φ′^(n), i.e.,

φ^(n) = φ^(n−1) + φ′^(n)        (10.87)

Thus Eq. (10.86) becomes

(A − R)φ′^(n) = b − Aφ^(n−1)        (10.88)

Once φ′^(n) is found, Eq. (10.87) is used to update φ at every iteration. The ILU factorization can be performed using Gaussian elimination while dropping some off-diagonal elements at preset locations.
The locations where elements are to be dropped give rise to different ILU approximations.

10.3.6 Incomplete LU Factorization with no Fill-in ILU(0)

Many variants of the ILU factorization technique exist and the simplest is the one denoted by ILU(0) [15–17]. In ILU(0) the pattern of zero elements in the combined L and U matrices is taken to be precisely the pattern of zero elements in the original matrix A.
Using Gaussian elimination, computations are performed as in the case of a full LU factorization, but any new nonzero element (ℓ_ij and u_ij) arising in the process is dropped if it appears at a location where a zero element exists in the original matrix A. Hence, the combined L and U matrices together have the same number of nonzeros as the original matrix A. With this approach, the fill-in problem that usually arises when factorizing sparse matrices (i.e., the creation of nonzero elements at locations where the original matrix has zeros) is eliminated.
In the process, however, the accuracy is reduced, thereby increasing the number of iterations required for convergence to be reached. To remedy this shortcoming, more accurate ILU factorization methods, which are often more efficient and more reliable, have been developed. These methods, differing by the level of fill-in allowed, are denoted by ILU(p) where p represents the order of fill-ins. The higher the level of fill-ins, the more expensive the ILU decomposition step becomes. Moreover, when used within a multigrid approach (to be explained in a later section), the ILU(0) method is more than adequate as a smoother. For this reason higher level methods are not presented, and for more information interested readers are referred, among others, to the book by Saad [12].

An ILU(0) factorization algorithm, which assumes L to be a unit lower triangular matrix and for which the same matrix A is used to store the elements of the unit lower and upper triangular matrices L and U, is as given next.

10.3.7 ILU(0) Factorization Algorithm

For k = 1 to N − 1 Do:
{
    For i = k + 1 to N and if a_ik ≠ 0 Do:
    {
        a_ik = a_ik / a_kk        (ℓ values)
        For j = k + 1 to N and if a_ij ≠ 0 Do:
        {
            a_ij = a_ij − a_ik * a_kj        (u values)
        }
    }
}

It should be mentioned that the ILU decomposition of symmetric positive definite matrices is denoted by Incomplete Cholesky decomposition.
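A direct transcription of this algorithm, combined with the correction loop of Eqs. (10.87)–(10.88), can be sketched as follows. The 4 × 4 test system and helper names are illustrative, not from the text:

```python
def ilu0(A):
    """ILU(0): Gaussian elimination in which an update is applied only where
    the working matrix holds a nonzero, so the combined L/U factors keep the
    sparsity pattern of A (unit lower L stored below the diagonal)."""
    n = len(A)
    f = [row[:] for row in A]            # L and U share one storage, as in the text
    for k in range(n - 1):
        for i in range(k + 1, n):
            if f[i][k] != 0.0:
                f[i][k] /= f[k][k]                    # l values
                for j in range(k + 1, n):
                    if f[i][j] != 0.0:
                        f[i][j] -= f[i][k] * f[k][j]  # u values
    return f

def lu_solve(f, r):
    """Forward then backward substitution with the combined L/U storage."""
    n = len(r)
    t = [0.0] * n
    for i in range(n):
        t[i] = r[i] - sum(f[i][j] * t[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (t[i] - sum(f[i][j] * x[j] for j in range(i + 1, n))) / f[i][i]
    return x

# Illustrative sparse system; positions (1,3) and (3,1) would fill in under a
# complete factorization but are dropped by ILU(0).
A = [[ 4.0, -1.0,  0.0, -1.0],
     [-1.0,  4.0, -1.0,  0.0],
     [ 0.0, -1.0,  4.0, -1.0],
     [-1.0,  0.0, -1.0,  4.0]]
b = [1.0, 2.0, 0.0, 1.0]

f = ilu0(A)
phi = [0.0] * 4
for _ in range(30):                      # Eqs. (10.87)-(10.88): phi += (LU)^{-1} r
    r = [b[i] - sum(A[i][j] * phi[j] for j in range(4)) for i in range(4)]
    corr = lu_solve(f, r)
    phi = [phi[i] + corr[i] for i in range(4)]
print(phi)                               # converges to the solution of A phi = b
```

A useful property of ILU(0) visible here: the product LU reproduces A exactly at A's nonzero positions, and the residual matrix R of Eq. (10.84) lives only at the dropped fill-in positions.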
In this case the factorization is made just of the lower (or upper) triangular part, and the approximation to the original matrix is written as

L̃L̃ᵀ ≈ A        (10.89)

where L̃ is the factorized sparse lower triangular matrix (approximation of L), with the preconditioning matrix P given by

P = L̃L̃ᵀ ≈ A        (10.90)

10.3.8 ILU Factorization Preconditioners

A very popular class of preconditioners is based on incomplete factorizations. In the discussion of direct methods it was shown that decomposing a sparse matrix A into the product of a lower and an upper triangular matrix may lead to substantial fill-in. Because the inverse of a preconditioner is only required to be an approximation to A⁻¹, it is sufficient to look for an approximate decomposition of A such that A ≈ L̃Ũ. Choosing P = L̃Ũ also leads to an efficient evaluation of the inverse of the preconditioning matrix P⁻¹, since the inversion can easily be performed by the forward and backward substitution described above, in which the exact L and U are now replaced by the approximations L̃ and Ũ, respectively.

For the ILU(0) method, the incomplete factorization mimics the nonzero sparsity of the original matrix, such that the preconditioner has exactly the size of the original matrix.
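For a symmetric positive definite matrix, the zero-fill incomplete Cholesky factor L̃ of Eq. (10.89) can be sketched as below. This is a minimal illustration, not the book's algorithm; the test matrix and function name are assumptions:

```python
import math

def ic0(A):
    """Zero-fill incomplete Cholesky: the factor keeps the sparsity pattern of
    the lower triangle of A, so that L Lt approximates A as in Eq. (10.89)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for k in range(n):
        L[k][k] = math.sqrt(A[k][k] - sum(L[k][j] ** 2 for j in range(k)))
        for i in range(k + 1, n):
            if A[i][k] != 0.0:           # entries outside A's pattern are dropped
                L[i][k] = (A[i][k] - sum(L[i][j] * L[k][j] for j in range(k))) / L[k][k]
    return L

# Illustrative SPD matrix (same sparsity as the ILU(0) example)
A = [[ 4.0, -1.0,  0.0, -1.0],
     [-1.0,  4.0, -1.0,  0.0],
     [ 0.0, -1.0,  4.0, -1.0],
     [-1.0,  0.0, -1.0,  4.0]]
L = ic0(A)
print(sum(L[3][j] * L[3][j] for j in range(4)))   # reproduces a_44 = 4 on the pattern
```

As with ILU(0), the product L̃L̃ᵀ matches A exactly at the retained positions; the approximation error sits only where fill-in was dropped.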
In order to reduce the storage needed, Pommerell introduced a simplified version of the ILU called diagonal ILU (DILU) [18]. In DILU the fill-in of the off-diagonal elements is eliminated (i.e., the upper and lower parts of the matrix are kept unchanged) and only the diagonal elements are modified. In this case it is possible to write the preconditioner in the form

P = (D + L)D⁻¹(D + U)        (10.91)

where L and U are the lower and upper triangular parts of A, and D is now a proper diagonal matrix, different from the diagonal of A. The matrix D is thus defined, as shown below, in a way that the diagonal of the product of the matrices in Eq. (10.91) equals the diagonal of A.

10.3.9 Algorithm for the Calculation of D in the DILU Method

For i = 1 to N Do:
{
    d_ii = a_ii
}
For i = 1 to N Do:
{
    For j = i + 1 to N and if a_ij ≠ 0, a_ji ≠ 0 Do:
    {
        d_jj = d_jj − a_ji * a_ij / d_ii
    }
}

In this case the inverse of the preconditioner, which is factored as

P = (D + L)D⁻¹(D + U) = L̄Ū,   L̄ = (D + L)D⁻¹,   Ū = (D + U)
or
P = (D + L)(I + D⁻¹U) = L̄Ū,   L̄ = (D + L),   Ū = I + D⁻¹U        (10.92)

and is needed in the solution of Pφ′^(n+1) = r^(n) to find the correction field φ′^(n+1) = P⁻¹r^(n), can easily be applied using the following forward and backward substitution algorithm.

10.3.10 Forward and Backward Solution Algorithm with the DILU Method

For i = 1 to N Do:
{
    t_i = d_ii⁻¹ (r_i − Σ_{j=1}^{i−1} ℓ_ij t_j)
}
For i = N to 1 Do:
{
    φ′_i = t_i − d_ii⁻¹ Σ_{j=i+1}^{N} u_ij φ′_j
}

The clear advantage of the DILU, apart from its recursive formulation, is that it requires only one extra diagonal of storage.

10.3.11 Gradient Methods for Solving Algebraic Systems

Another group of iterative procedures for solving linear algebraic systems of equations is the Gradient Methods, which include the Steepest Descent and the Conjugate Gradient methods.
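The two DILU algorithms of Sects. 10.3.9 and 10.3.10 can be sketched together in a few lines. The matrix is illustrative and the function names are assumptions:

```python
def dilu_diagonal(A):
    """Sect. 10.3.9: compute the DILU diagonal d so that the diagonal of
    (D + L) D^{-1} (D + U) matches the diagonal of A."""
    n = len(A)
    d = [A[i][i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if A[i][j] != 0.0 and A[j][i] != 0.0:
                d[j] -= A[j][i] * A[i][j] / d[i]
    return d

def dilu_apply(A, d, r):
    """Sect. 10.3.10: forward/backward substitution returning P^{-1} r for
    P = (D + L)(I + D^{-1} U); L and U are read directly from A."""
    n = len(r)
    t = [0.0] * n
    for i in range(n):                       # forward: (D + L) t = r
        t[i] = (r[i] - sum(A[i][j] * t[j] for j in range(i))) / d[i]
    phi = [0.0] * n
    for i in range(n - 1, -1, -1):           # backward: (I + D^{-1} U) phi' = t
        phi[i] = t[i] - (sum(A[i][j] * phi[j] for j in range(i + 1, n))) / d[i]
    return phi

# Illustrative matrix; only the extra diagonal d needs to be stored.
A = [[ 4.0, -1.0,  0.0, -1.0],
     [-1.0,  4.0, -1.0,  0.0],
     [ 0.0, -1.0,  4.0, -1.0],
     [-1.0,  0.0, -1.0,  4.0]]
d = dilu_diagonal(A)
print(d)    # modified diagonal, no longer equal to diag(A) except in d[0]
```

Note that the off-diagonal coefficients of A are used as-is in the substitution sweeps, which is what makes the single extra diagonal the only additional storage.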
They were initially developed for cases where the coefficient matrix A is symmetric positive definite (SPD), and they reformulate the problem as a minimization problem for the quadratic function Q(φ) given by

Q(φ) = ½ φᵀAφ − bᵀφ + c        (10.93)

where c is a scalar constant, and the other variables are as defined in Eq. (10.1). The minimum of Q(φ) is obtained when its gradient with respect to φ is zero. The gradient Q′(φ) of the field Q(φ), at a given φ, points in the direction of greatest increase of Q(φ).
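A quick numerical check of Eq. (10.93): for a symmetric A the gradient of Q is Aφ − b, so Q is minimized exactly where Aφ = b. The 2 × 2 SPD system below is illustrative:

```python
# Q(phi) = 1/2 phi^T A phi - b^T phi + c is minimized where A phi = b;
# moving away from the solution by delta increases Q by 1/2 delta^T A delta > 0.
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]
c = 0.0

def Q(phi):
    quad = 0.5 * sum(phi[i] * A[i][j] * phi[j] for i in range(2) for j in range(2))
    return quad - sum(b[i] * phi[i] for i in range(2)) + c

phi_star = [1.0 / 11.0, 7.0 / 11.0]      # solution of A phi = b (det A = 11)
grad = [sum(A[i][j] * phi_star[j] for j in range(2)) - b[i] for i in range(2)]
print(grad)                              # gradient A phi - b vanishes at the minimum
print(Q(phi_star) < Q([0.2, 0.7]))       # any other point gives a larger Q
```

This is the observation that the Steepest Descent and Conjugate Gradient methods exploit: solving the linear system and minimizing Q(φ) are the same problem when A is SPD.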