Nash, Compact Numerical Methods for Computers
The procedure jacobi . . . is an extremely compact procedure and considerable care has been taken to ensure that both eigenvalues and eigenvectors are of the highest precision attainable with the word length that is used.'

The last sentence above implies that while jacobi uses very little storage, steps have been taken to optimise the program with respect to precision of the results. This is accomplished in part by saving and accumulating some intermediate results in a working vector. The problem requires, formally, n storage locations for the eigenvalues in addition to two n by n arrays to store the original matrix and the eigenvectors.
Thus 2n² + n elements appear to be needed. Rutishauser's program requires 2n² + 2n elements (that is, an extra vector of length n) as well as a number of individual variables to handle the extra operations used to guarantee the high precision of the computed eigensolutions. Furthermore, the program does not order the eigenvalues, and in a great many applications this is a necessary requirement before the eigensolutions can be used in further calculations.
The ordering the author has most frequently required is from most positive eigenvalue to most negative one (or vice versa).

The extra code required to order the eigenvalues and their associated eigenvectors can be avoided by using different formulae to calculate intermediate results. That is to say, a slightly different organisation of the program permits an extra function to be incorporated without increasing the code length. This is illustrated by the second column of table 10.1.
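Done outside the main algorithm, such an ordering is only a small amount of code: permute the eigenvalues into the desired order and carry the eigenvector columns along with them. A minimal sketch (in Python with NumPy rather than the book's own languages; the symmetric test matrix is arbitrary, not from the book):

```python
import numpy as np

# Arbitrary symmetric test matrix (illustrative only).
A = np.array([[4.0, 1.0, 0.0, 2.0],
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0],
              [2.0, 0.0, 1.0, 1.0]])

e, V = np.linalg.eigh(A)      # eigh returns eigenvalues in ascending order

# Reorder from most positive to most negative eigenvalue, permuting
# the eigenvector columns in step with the eigenvalues.
order = np.argsort(-e)
e, V = e[order], V[:, order]

# The pairing must survive the permutation: A v_i = e_i v_i.
assert np.allclose(A @ V, V * e)
```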
It should be emphasised that the program responsible for these results was a simple combination of Rutishauser's algorithm and some of the ideas that have been presented in §3.3, with little attention to how well these meshed to preserve the high-precision qualities Rutishauser has carefully built into his routine.
The results of the mongrel program are nonetheless quite precise.

If the precision required is reduced a little further, both the extra vector of length n and about a third to a half of the code can be removed. Here the uncertainty in measuring the reduction in the code is due to various choices, such as which DIMENSION-ing, input-output or constant setting operations are included in the count.

It may also seem attractive to save space by using the symmetry of the matrix A as well as that of the intermediate matrices A^(k).
This reduces the workspace by n(n−1)/2 elements. Unfortunately, the modification introduces sufficient extra code that it is only useful when the order, n, of A is greater than approximately 10. However, 10 is roughly the order at which the Jacobi methods become non-competitive with approaches such as those mentioned earlier in this section. Still worse, on a single-precision machine, the changes appear to reduce the precision of the results, though the program used to produce column 4 of table 10.1 was not analysed extensively to discover and rectify sources of precision loss. Note that at order 10, the memory capacity of a small computer may already be used up, especially if the eigenproblem is part of a larger computation.

If the storage requirement is critical, then the methods of Hestenes (1958), Chartres (1962) and Kaiser (1972) as modified by Nash (1975) should be considered.
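For reference, the packed triangular storage that yields the n(n−1)/2 saving can be sketched as follows (Python; the indexing function and test values are illustrative, not taken from the book):

```python
def pack_index(i, j):
    """Map (i, j) into a linear index for a symmetric matrix stored
    as its lower triangle only, packed row by row."""
    if i < j:
        i, j = j, i                     # symmetry: A[i][j] == A[j][i]
    return i * (i + 1) // 2 + j

n = 5
# n*(n+1)//2 elements instead of n*n: a saving of n*(n-1)//2.
packed = [0.0] * (n * (n + 1) // 2)

# Store the symmetric assignment A[i][j] = i + j through the packed layout.
for i in range(n):
    for j in range(i + 1):
        packed[pack_index(i, j)] = float(i + j)

# Both orders of the indices reach the same stored element.
assert packed[pack_index(2, 4)] == packed[pack_index(4, 2)] == 6.0
assert len(packed) == n * n - n * (n - 1) // 2
```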
This latter method is outlined in §§10.1 and 10.2, and is one which transforms the original matrix A into another, B, whose columns are the eigenvectors of A each multiplied by its corresponding eigenvalue, that is

   B = XE                                                    (10.43)

where E is the diagonal matrix of eigenvalues. Thus only n² storage locations are required and the code is, moreover, very short. Column 5 of table 10.1 shows the precision that one may expect to obtain, which is comparable to that found using simpler forms of the traditional Jacobi method. Note that the residual and inner-product computations for table 10.1 were all computed in single precision.

Chapter 11

THE GENERALISED SYMMETRIC MATRIX EIGENVALUE PROBLEM

Consider now the generalised matrix eigenvalue problem

   Ax = eBx                                                  (2.63)

where A and B are symmetric and B is positive definite.
A solution to this problem will be sought by transforming it into a conventional eigenvalue problem. The trivial approach

   B⁻¹Ax = ex                                                (11.1)

gives an eigenvalue problem of the single matrix B⁻¹A, which is unfortunately not symmetric. The approximation to B⁻¹A generated in a computation may therefore have complex eigenvalues. Furthermore, methods for solving the eigenproblem of non-symmetric matrices require much more work than their symmetric matrix counterparts. Ford and Hall (1974) discuss several transformations that convert (2.63) into a symmetric matrix eigenproblem.

Since B is positive definite, its eigenvalues can be written as the squares d²ᵢ, i = 1, 2, . . . , n, so that

   B = ZD²Zᵀ                                                 (11.2)

where Z is the matrix of eigenvectors of B normalised so that

   ZZᵀ = ZᵀZ = 1ₙ.                                           (11.3)

Then

   B^(-1/2) = ZD⁻¹Zᵀ                                         (11.4)

and

   (B^(-1/2)AB^(-1/2))(B^(1/2)X) = B^(1/2)XE                 (11.5)

is equivalent to the complete eigenproblem

   AX = BXE                                                  (11.5a)

which is simply a matrix form which collects together all solutions to (2.63). Equation (11.5) can be solved as a conventional symmetric eigenproblem

   A₁V = VE                                                  (11.6)

where

   A₁ = B^(-1/2)AB^(-1/2)                                    (11.7a)

and

   V = B^(1/2)X.                                             (11.7b)

However, it is possible to use the decomposition (11.2) in a simpler fashion since

   AX = ZD²ZᵀXE                                              (11.8)

so that

   (D⁻¹ZᵀAZD⁻¹)(DZᵀX) = (DZᵀX)E                              (11.9)

or

   A₂Y = YE                                                  (11.10)

where

   Y = DZᵀX                                                  (11.11a)

and

   A₂ = D⁻¹ZᵀAZD⁻¹.                                          (11.11b)

Another approach is to apply the Choleski decomposition (algorithm 7) to B so that

   AX = LLᵀXE                                                (11.12)

where L is lower-triangular.
Thus we have

   (L⁻¹AL⁻ᵀ)(LᵀX) = (LᵀX)E                                   (11.13)

or

   A₃W = WE.                                                 (11.14)

Note that A₃ can be formed by solving the sets of equations

   LG = A                                                    (11.15)

and

   A₃Lᵀ = G                                                  (11.16)

or

   LA₃ᵀ = Gᵀ                                                 (11.17)

so that only the forward-substitutions are needed from the Choleski back-solution algorithm 8. Also,
the eigenvector transformation can be accomplished by solving

   LᵀX = W                                                   (11.18)

requiring only back-substitutions from this same algorithm.

While the variants of the Choleski decomposition method are probably the most efficient way to solve the generalised eigenproblem (2.63) in terms of the number of arithmetic operations required, any program based on such a method must contain two different types of algorithm, one for the decomposition and one to solve the eigenproblem (11.13).
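The Choleski route of equations (11.12)-(11.18) can be sketched end to end as follows (Python with NumPy standing in for the book's algorithms 7, 8 and 14; the substitution routines and test matrices are illustrative):

```python
import numpy as np

def forward_subst(L, C):
    """Solve L Y = C by forward-substitution, L lower-triangular
    (the forward part of a Choleski back-solution routine)."""
    n = L.shape[0]
    Y = np.zeros_like(C, dtype=float)
    for i in range(n):
        Y[i] = (C[i] - L[i, :i] @ Y[:i]) / L[i, i]
    return Y

def back_subst(U, C):
    """Solve U Y = C by back-substitution, U upper-triangular."""
    n = U.shape[0]
    Y = np.zeros_like(C, dtype=float)
    for i in reversed(range(n)):
        Y[i] = (C[i] - U[i, i + 1:] @ Y[i + 1:]) / U[i, i]
    return Y

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # arbitrary symmetric test matrix
B = np.array([[2.0, 0.5], [0.5, 1.0]])   # arbitrary positive definite metric

L = np.linalg.cholesky(B)                # B = L L^T

# L G = A (11.15), then the transposed system (11.17): only
# forward-substitutions are needed to form A3 = L^-1 A L^-T.
G = forward_subst(L, A)
A3 = forward_subst(L, G.T).T

e, W = np.linalg.eigh(A3)                # A3 W = W E  (11.14)

# L^T X = W (11.18): one back-substitution recovers the eigenvectors.
X = back_subst(L.T, W)

# Generalised eigensolution check: A X = B X E.
assert np.allclose(A @ X, B @ X * e)
```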
The eigenvalue decomposition (11.2), on the other hand, requires only a matrix eigenvalue algorithm such as the Jacobi algorithm 14.

Here the one-sided rotation method of algorithm 13 is less useful since there is no simple analogue of the Gerschgorin bound enabling a shifted matrix

   A' = A + kB                                               (11.19)

to be made positive definite by means of an easily calculated shift k. Furthermore, the vectors in the Jacobi algorithm are computed as the product of individual plane rotations.
These post-multiply the current approximation to the vectors, which is usually initialised to the unit matrix 1ₙ. From (11.11a), however, we have

   X = ZD⁻¹Y                                                 (11.20)

where Y is the matrix of eigenvectors of A₂, and hence a product of plane rotations. Therefore, by setting the initial approximation of the vectors to ZD⁻¹ when solving the eigenproblem (11.10) of A₂, it is not necessary to solve (11.11a) explicitly, nor to save Z or D.

The form (2.63) is not the only generalised eigenproblem which may arise. Several are discussed in Wilkinson and Reinsch (1971, pp 303-14). This treatment is based mainly on the Choleski decomposition. Nash (1974) also uses a Choleski-type decomposition for the generalised eigenproblem (2.63) in the case where A and B are complex Hermitian matrices.
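The eigendecomposition route, equations (11.8)-(11.11b) together with the recovery X = ZD⁻¹Y of (11.20), can be sketched as follows (Python, with NumPy's eigh standing in for the two applications of the Jacobi algorithm; A and B are arbitrary test matrices, not from the book):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # arbitrary symmetric test matrix
B = np.array([[2.0, 0.5], [0.5, 1.0]])   # arbitrary positive definite metric

# First eigensolution: B = Z D^2 Z^T  (11.2).
d2, Z = np.linalg.eigh(B)
D = np.sqrt(d2)

# Form A2 = D^-1 Z^T A Z D^-1  (11.11b).
A2 = (Z.T @ A @ Z) / np.outer(D, D)

# Second eigensolution: A2 Y = Y E  (11.10).
e, Y = np.linalg.eigh(A2)

# X = Z D^-1 Y  (11.20); in the book this product is obtained free of
# charge by initialising the Jacobi vector matrix to Z D^-1 instead of 1_n.
X = Z @ (Y / D[:, None])

# Generalised eigensolution check: A X = B X E  (11.5a).
assert np.allclose(A @ X, B @ X * e)
```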
This can be solved by 'doubling up' the matrices to give the real symmetric eigenproblem

                                                             (11.21)

which therefore requires 12n² matrix elements to take the expanded matrices and resulting eigenvectors. Nash (1974) shows how it may be solved using only 4n² + 4n matrix elements.

Algorithm 15. Solution of a generalised matrix eigenvalue problem by two applications of the Jacobi algorithm

procedure genevJac( n : integer; {order of problem}
            var A, B, V : rmatrix); {matrices and eigenvectors}
{alg15pas == To solve the generalized symmetric eigenvalue problem
      Ax = eBx
   where A and B are symmetric and B is positive-definite.
   Method: Eigenvalue decomposition of B via Jacobi method to form the
   'matrix square root' B-half. The Jacobi method is applied a second time
   to the matrix
      C = Bihalf A Bihalf
   where Bihalf is the inverse of B-half.
   The eigenvectors x are the columns of the matrix
      X = Bihalf V
   where V is the matrix whose columns are the eigenvectors of matrix C.
   Copyright 1988 J.C.Nash
}
var
   i, j, k, m : integer;
   s : real;
   initev : boolean;
begin {STEPS 0 and 1}
   writeln('alg15.pas -- generalized symmetric matrix eigenproblem');
   initev := true; {to indicate eigenvectors must be initialized}
   writeln('Eigensolutions of metric B');
   evJacobi(n, B, V, initev); {eigensolutions of B -- STEP 2}
   {**** WARNING **** No check is made to see that the sweep limit
   has not been exceeded.