This scale independence applies to the Jacobi and SOR iterations, but not, for example, to the stationary Richardson iteration, for which M = I. One of the benefits of doing a componentwise analysis is that under the above assumptions on M and N the bound (16.13) largely shares the scale independence. In (16.13) the scalar c(A) is independent of the row and column scaling of A, and the term |A^{-1}|(|M| + |N|)|x| scales in the same way as x. Furthermore, θ_x can be expected to depend only mildly on the row and column scaling, because the bound in (16.2) for the rounding error terms has the correct scaling properties.

What can be said about c(A)? In general, it can be arbitrarily large. Indeed, c(A) is infinite for the Jacobi and Gauss–Seidel iterations for any n ≥ 3 if A is the symmetric positive definite matrix with a_ij = min(i, j), because A^{-1} is tridiagonal and (M^{-1}N)^k M^{-1} is not.

If M^{-1} and M^{-1}N both have nonnegative elements then c(A) = 1; as we will see in the next section, this condition holds in some important instances.

Some further insight into c(A) can be obtained by examining the case where M^{-1}N is diagonal with eigenvalues λ_i. It is easy to show that

    c(A) = max_i |1 − λ_i| / (1 − |λ_i|),

so c(A) can be large only if ρ(M^{-1}N) is close to 1.
Although M^{-1}N cannot be diagonal for the Jacobi or Gauss–Seidel methods, this formula can be taken as being indicative of the size of c(A) when M^{-1}N is diagonalizable with a well-conditioned matrix of eigenvectors. We therefore have the heuristic inequality, for general A,

    c(A) ≥ max_i |1 − λ_i| / (1 − |λ_i|),   λ_i the eigenvalues of M^{-1}N.    (16.14)

In practical problems where stationary iteration is used, we would expect c(A) to be of modest size (O(n), say) for two reasons. First, to achieve a reasonable convergence rate, ρ(M^{-1}N) has to be safely less than 1, which implies that the heuristic lower bound (16.14) for c(A) is not too large. Second, even if A is sparse, A^{-1} will usually be full, and so there are unlikely to be zeros on the right-hand side of (16.12). (Such zeros are dangerous because they can make c(A) infinite.)
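To make the heuristic concrete, here is a minimal NumPy sketch (our illustration, not code from the book; the function name jacobi_heuristic_cA is hypothetical) that evaluates the lower bound (16.14) for the Jacobi splitting of a given matrix:

    import numpy as np

    def jacobi_heuristic_cA(A):
        # Heuristic lower bound (16.14) on c(A): max_i |1 - lam_i| / (1 - |lam_i|),
        # where lam_i are the eigenvalues of the iteration matrix M^{-1}N
        # for the Jacobi splitting M = diag(A), N = M - A.
        M = np.diag(np.diag(A))
        N = M - A
        lam = np.linalg.eigvals(np.linalg.solve(M, N))
        if np.max(np.abs(lam)) >= 1:
            return np.inf            # spectral radius >= 1: iteration not convergent
        return float(np.max(np.abs(1 - lam) / (1 - np.abs(lam))))

    # The 3-by-3 matrix with unit diagonal and constant off-diagonal a
    # (Wozniakowski's example in Section 16.2.1): the bound behaves like
    # (1 + 2a)/(1 - 2a), blowing up as a approaches 1/2.
    for a in [0.25, 0.4, 0.49]:
        A = np.full((3, 3), a) + (1 - a) * np.eye(3)
        print(a, jacobi_heuristic_cA(A))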
Note that in (16.13) the only terms that depend on the history of the iteration are |G^{m+1}e_0| and θ_x. In using this bound we can redefine x_0 to be any iterate x̂_k, thereby possibly reducing θ_x. This is a circular argument if used to obtain a priori bounds, but it does suggest that the potentially large θ_x term will generally be innocuous. Note that if x_i = 0 for some i then θ_x is infinite unless (x̂_k)_i = 0 for all k.
This difficulty with zero components of x can usually be overcome by a suitable redefinition of θ_x, for which the above bounds remain valid if θ_x is replaced by 2θ_x.

Finally, we note that (16.13) implies

    ‖x − x̂_{m+1}‖_∞ / ‖x‖_∞ ≤ ‖G^{m+1}e_0‖_∞ / ‖x‖_∞ + c_n u θ_x c(A) ‖ |A^{-1}|(|M| + |N|)|x| ‖_∞ / ‖x‖_∞.    (16.15)

If θ_x c(A) = O(1) and |M| + |N| ≤ α|A|, with α = O(1), this bound is of the form c_n cond(A, x)u as m → ∞ and we have componentwise forward stability.
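Both quantities appearing in (16.15) are cheap to evaluate explicitly for small examples. The sketch below (ours, assuming NumPy; the function name is hypothetical) computes cond(A, x) = ‖|A^{-1}||A||x|‖_∞/‖x‖_∞ and the splitting-dependent term from (16.15):

    import numpy as np

    def forward_bound_terms(A, M, N, x):
        # cond(A, x) = || |A^{-1}| |A| |x| ||_inf / ||x||_inf and the
        # splitting-dependent term || |A^{-1}| (|M| + |N|) |x| ||_inf / ||x||_inf
        # appearing in the forward error bound (16.15).
        absinv = np.abs(np.linalg.inv(A))
        xinf = np.linalg.norm(x, np.inf)
        cond_Ax = np.linalg.norm(absinv @ (np.abs(A) @ np.abs(x)), np.inf) / xinf
        split = np.linalg.norm(absinv @ ((np.abs(M) + np.abs(N)) @ np.abs(x)), np.inf) / xinf
        return cond_Ax, split

    # For the Jacobi splitting |M| + |N| = |A|, so the two quantities coincide.
    A = np.array([[1.0, 0.4, 0.4],
                  [0.4, 1.0, 0.4],
                  [0.4, 0.4, 1.0]])
    M = np.diag(np.diag(A))
    print(forward_bound_terms(A, M, M - A, np.ones(3)))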
Now we specialize the forward error bound (16.15) to the Jacobi, Gauss–Seidel, and SOR iterations.

16.2.1. Jacobi’s Method

For the Jacobi iteration, M = D = diag(A) and N = diag(A) − A. Hence |M| + |N| = |M − N| = |A|, and so (16.15) yields

    ‖x − x̂_{m+1}‖_∞ / ‖x‖_∞ ≤ ‖G^{m+1}e_0‖_∞ / ‖x‖_∞ + c_n u θ_x c(A) cond(A, x).    (16.16)

If A is an M-matrix then M^{-1} ≥ 0 and M^{-1}N ≥ 0, so c(A) = 1. Hence in this case we have componentwise forward stability as m → ∞ if θ_x is suitably bounded.

Table 16.2. Jacobi method, a = 1/2 − 8^{−j}.

    j   ρ(M^{-1}N)   Iters.   cond(A,x)   min_m forward error   min_m backward error
    1   0.75             90       3.40         2.22e-16              1.27e-16
    2   0.97            352       4.76         1.78e-15              9.02e-16
    3   0.996          1974       4.97         1.42e-14              7.12e-15
    4   1.00          11226       5.00         1.14e-13              5.69e-14
    5   1.00          55412       5.00         9.10e-13              4.55e-13
Table 16.3. Jacobi method, a = −(1/2 − 8^{−j}).

    j   ρ(M^{-1}N)   Iters.   cond(A,x)   min_m forward error   min_m backward error
    1   0.75             39      7.00          4.44e-16              5.55e-17
    2   0.97            273      6.30e1        4.88e-15              7.63e-17
    3   0.996          1662      5.11e2        4.22e-14              8.24e-17
    4   1.00           9051      4.09e3        3.41e-13              8.32e-17
    5   1.00          38294      3.28e4        2.73e-12              8.33e-17

Wozniakowski [1113, 1978, Ex. 4.1] cites the symmetric positive definite matrix

    A = [ 1  a  a ]
        [ a  1  a ]
        [ a  a  1 ]

as a matrix for which the Jacobi method can be unstable, in the sense that there exist rounding errors such that no iterate has a relative error bounded by c_n κ_∞(A)u. Let us see what our analysis predicts for this example. Straightforward manipulation shows that if a = 1/2 − ε then c(A) is of order ε^{-1}, so c(A) → ∞ as ε → 0. (The heuristic lower bound (16.14) is approximately ε^{-1} in this case, since the eigenvalues of M^{-1}N are −2a and a (twice).) Therefore (16.16) suggests that the Jacobi iteration can be unstable for this matrix.
To confirm the instability we applied the Jacobi method to the problem with x = [1, 1, 1]^T and a = 1/2 − 8^{−j}, j = 1:5. We took a random x_0 with ‖x − x_0‖_2 = 10^{−10}, and the iteration was terminated when there was no decrease in the norm of the residual for 50 consecutive iterations. Table 16.2 reports the smallest value of ‖x − x̂_m‖_∞/‖x‖_∞ over all iterations, for each j; the number of iterations is shown in the column “Iters.” The ratio of these minimum forward errors for successive values of j takes the values 8.02, 7.98, 8.02, 7.98 for j = 1:4, showing excellent agreement with the behaviour predicted by (16.16), since c(A) grows like ε^{-1} = 8^j while cond(A, x) is essentially constant. Moreover, θ_x ≈ 1 in these tests, and with this value the bound (16.16) is at most a factor 13.3 larger than the observed error, for each j.
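The experiment is straightforward to reproduce in outline. The following sketch (ours, not the code used for the book's tables; the seed and the stopping-rule details are simplified assumptions) runs the Jacobi iteration on this 3 × 3 example and records the smallest relative error attained:

    import numpy as np

    rng = np.random.default_rng(1)   # arbitrary seed; the book used an unspecified random x0

    def jacobi_min_error(a, maxit=200000, stall=50):
        # Jacobi iteration on the 3x3 matrix with unit diagonal and off-diagonal a,
        # exact solution x = [1, 1, 1]^T.
        A = np.full((3, 3), a) + (1 - a) * np.eye(3)
        x = np.ones(3)
        b = A @ x
        d = rng.standard_normal(3)
        xk = x + 1e-10 * d / np.linalg.norm(d)   # random x0 with ||x - x0||_2 = 1e-10
        G = np.eye(3) - A                        # M = diag(A) = I here, so G = M^{-1}N = I - A
        best_err, best_res, no_decrease = np.inf, np.inf, 0
        for _ in range(maxit):
            xk = G @ xk + b                      # x_{k+1} = M^{-1}(N x_k + b), with M = I
            best_err = min(best_err, np.linalg.norm(x - xk, np.inf))
            res = np.linalg.norm(b - A @ xk, np.inf)
            if res < best_res:                   # stop after `stall` iterations without
                best_res, no_decrease = res, 0   # a decrease in the residual norm
            else:
                no_decrease += 1
                if no_decrease >= stall:
                    break
        return best_err / np.linalg.norm(x, np.inf)

    for j in range(1, 6):
        print(j, jacobi_min_error(0.5 - 8.0**(-j)))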
If −1/2 < a < 0 then A is an M-matrix and c(A) = 1. The bound (16.16) shows that if we set a = −(1/2 − 8^{−j}) and repeat the above experiment then the Jacobi method will perform in a componentwise forward stable manner (clearly, θ_x ≈ 1 is again to be expected). We carried out the modified experiment, obtaining the results shown in Table 16.3. All the forward error values are less than cond(A, x)u, so the Jacobi iteration is indeed componentwise forward stable in this case. Note that since ρ(M^{-1}N) and ‖M^{-1}N‖_2 take the same values for a and −a, the usual rate of convergence measures cannot distinguish between these two examples.

16.2.2. Successive Overrelaxation

The SOR method can be written in the form Mx_{k+1} = Nx_k + b, where

    M = ω^{-1}(D + ωL),   N = ω^{-1}((1 − ω)D − ωU),

and where A = D + L + U, with L and U strictly lower triangular and strictly upper triangular, respectively.
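As a quick sanity check on this splitting (our sketch, assuming NumPy), one can build M and N for a given A and ω and verify both A = M − N and the componentwise inequality (16.17) given below:

    import numpy as np

    def sor_splitting(A, omega):
        # SOR splitting A = M - N: M = (1/omega)(D + omega*L),
        # N = (1/omega)((1 - omega)*D - omega*U), with A = D + L + U.
        D = np.diag(np.diag(A))
        L = np.tril(A, -1)
        U = np.triu(A, 1)
        return (D + omega * L) / omega, ((1 - omega) * D - omega * U) / omega

    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    for omega in [0.5, 1.0, 1.5]:
        M, N = sor_splitting(A, omega)
        f = (1 + abs(1 - omega)) / omega         # f(omega) from (16.17)
        assert np.allclose(M - N, A)
        assert np.all(np.abs(M) + np.abs(N) <= f * np.abs(A) + 1e-12)
        print(omega, f)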
The matrix |M| + |N| agrees with |A| everywhere except, possibly, on the diagonal, and the best possible componentwise inequality between these two matrices is

    |M| + |N| ≤ f(ω)|A|,   f(ω) = (1 + |1 − ω|)/ω.    (16.17)

Note that f(ω) = 1 for 1 ≤ ω ≤ 2, and f(ω) → ∞ as ω → 0. From (16.15) we have

    ‖x − x̂_{m+1}‖_∞ / ‖x‖_∞ ≤ ‖G^{m+1}e_0‖_∞ / ‖x‖_∞ + c_n u f(ω) θ_x c(A) cond(A, x).

If A is an M-matrix and 0 ≤ ω ≤ 1 then M^{-1} ≥ 0 and M^{-1}N ≥ 0, so c(A) = 1. The Gauss–Seidel method corresponds to ω = 1, and it is interesting to note that for this method the forward error bound has exactly the same form as that for the Jacobi method (though c(A) and θ_x are, of course, different for the two methods).

16.3. Backward Error Analysis

We now turn our attention to bounding the residual vector, r_k = b − Ax̂_k. From (16.3) and (16.4) we find that

    r_{m+1} = AG^{m+1}e_0 + A ∑_{k=0}^{m} G^k M^{-1} ξ_{m−k},

where ξ_k is the rounding error vector for the kth iteration, bounded in (16.2). It is easy to show that AG^k = H^k A, where H := NM^{-1} (recall that G = M^{-1}N).
Therefore, since AM^{-1} = I − H and Ae_0 = r_0,

    r_{m+1} = H^{m+1} r_0 + ∑_{k=0}^{m} H^k (I − H) ξ_{m−k}.    (16.18)

Taking norms and using (16.2) gives, similarly to (16.8),

    ‖r_{m+1}‖_∞ ≤ ‖H^{m+1}‖_∞ ‖r_0‖_∞ + c_n u σ (‖M‖_∞ + ‖N‖_∞) max_k ‖x̂_k‖_∞,    (16.19)

where σ := ∑_{k≥0} ‖H^k(I − H)‖_∞. The following bound shows that σ is small if q := ‖H‖_∞ is not too close to 1:

    σ ≤ ∑_{k≥0} q^k (1 + q) = (1 + q)/(1 − q).

A potentially much smaller bound can be obtained under the assumption that H is diagonalizable. If H = XDX^{-1}, with D = diag(λ_i), then

    σ ≤ κ_∞(X) max_i |1 − λ_i| / (1 − |λ_i|),    (16.20)

where κ_∞(X) = ‖X‖_∞‖X^{-1}‖_∞, so we see the reappearance of the term in the heuristic bound (16.14). The bound (16.20) is of modest size if the eigenproblem for H is well conditioned (κ_∞(X) is small) and ρ(H) is not too close to 1. Note that real eigenvalues of H near +1 do not affect the bound for σ, even though they may cause slow convergence.

To summarize, (16.19) shows that, for large m, the normwise backward error of x̂_m is certainly no larger than about c_n u σ (‖M‖_∞ + ‖N‖_∞)/‖A‖_∞. Note that |M| + |N| = |A| for the Jacobi and Gauss–Seidel methods, and also for the SOR method if ω ≥ 1.

A componentwise residual bound can also be obtained, but it does not lead to any identifiable classes of matrix or iteration for which the componentwise relative backward error is small.
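When H is diagonalizable the bound (16.20) is cheap to evaluate. The sketch below (ours, assuming NumPy) estimates σ this way for the Jacobi splitting; for the two families of matrices from §16.2.1 it reproduces the qualitative behaviour of the backward errors in Tables 16.2 and 16.3:

    import numpy as np

    def sigma_bound(A):
        # Evaluate (16.20) for the Jacobi splitting: H = N M^{-1}, M = diag(A).
        M = np.diag(np.diag(A))
        H = (M - A) @ np.linalg.inv(M)
        lam, X = np.linalg.eig(H)
        if np.max(np.abs(lam)) >= 1:
            return np.inf
        kappa = np.linalg.cond(X, np.inf)        # condition of the eigenvector matrix
        return kappa * np.max(np.abs(1 - lam) / (1 - np.abs(lam)))

    # sigma grows like 8^j for a = 1/2 - 8^{-j}, but stays O(1) for a = -(1/2 - 8^{-j}).
    for j in range(1, 6):
        a = 0.5 - 8.0**(-j)
        A_plus = np.full((3, 3), a) + (1 - a) * np.eye(3)
        A_minus = np.full((3, 3), -a) + (1 + a) * np.eye(3)
        print(j, sigma_bound(A_plus), sigma_bound(A_minus))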
To conclude, we return to our numerical examples. For the SOR example at the start of the chapter, c(A) = O(10^45) and σ = O(10^30), so our error bounds for this problem are all extremely large. In this problem max_i |1 − λ_i|/(1 − |λ_i|) is of order 1, where the λ_i are the eigenvalues of M^{-1}N, so (16.14) is very weak; (16.20) is not applicable since M^{-1}N is defective.

For the first numerical example in §16.2.1, Table 16.2 reports the minimum normwise backward errors over all iterations. For this problem it is straightforward to show that σ is of order ε^{-1} = 8^j. The ratios of the backward errors for successive values of j are 7.10, 7.89, 7.99, 8.00, so we see excellent agreement with the behaviour predicted by the bounds. Table 16.3 reports the normwise backward errors for the second numerical example in §16.2.1.
The backward errors are all less than u, which again is close to what the bounds predict, since it can be shown that σ < 5 for −1/2 < a < 0. In both of the examples of §16.2.1 the componentwise backward error is also of order u, and in our practical experience this behaviour is typical of the Jacobi and SOR iterations.

16.4. Singular Systems

Singular linear systems occur in a variety of applications, including the computation of the stationary distribution vector in a Markov chain [94, 1994], [647, 1983] and the solution of a Neumann boundary value problem by finite difference methods [834, 1976].