          We may wish to recalculate from raw data}
          else {count=n or pn>=rq so cannot proceed}
          begin {rest of STEPS 17 & 18}
            if itn=1 then conv := true; {We cannot proceed in either
                reducing the eigenvalue approximation or changing the
                eigenvector, so must assume convergence if we are using
                the gradient (steepest descent) direction of search.}
            itn := n+1; {to force end of cg cycle}
          end; {STEP 24}
        until (itn>=n) or (count=n) or (gg<=tol) or conv; {end of cg loop}
      end {if gg>tol}
      else conv := true; {The gradient norm is small, so we presume an
          eigensolution has been found.}
    end {if rq<pa}
    else {we have not reduced Rayleigh quotient in a major cg cycle}
    begin
      conv := true; {if we cannot reduce the Rayleigh quotient}
    end;
    ta := 0.0; {Normalize eigenvector at each major cycle}
    for i := 1 to n do ta := ta+sqr(X[i]);
    ta := 1.0/sqrt(ta);
    for i := 1 to n do X[i] := ta*X[i];
  end; {while (ipr<=itlimit) and (not conv)}
  if ipr>itlimit then ipr := -ipr; {to inform calling program limit exceeded}
  writeln;
end; {alg25.pas == rqmcg}
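The quantity minimised above is the Rayleigh quotient q = xTAx / xTBx, whose gradient with respect to x is 2(Ax - qBx)/(xTBx); the squared gradient norm gTg reported in table 19.3 therefore vanishes at an eigensolution (up to the internal scaling used in rqmcg). The short program below evaluates both quantities directly for the order-4 Moler and Frank matrices (appendix 1) with a starting vector of ones. It is a self-contained sketch: the program name, array types and fixed order are illustrative assumptions, not declarations taken from alg25.pas.

  program rqdemo;
  { Sketch: evaluate the Rayleigh quotient q = x'Ax / x'Bx and the
    squared norm of its gradient 2(Ax - qBx)/(x'Bx) for the order-4
    Moler (A) and Frank (B) matrices with x = vector of ones. }
  const
    n = 4;
  type
    rvector = array[1..n] of real;
    rmatrix = array[1..n, 1..n] of real;
  var
    A, B: rmatrix;
    x, Ax, Bx, g: rvector;
    i, j: integer;
    xax, xbx, q, gg: real;

  function imin(i, j: integer): integer;
  begin
    if i < j then imin := i else imin := j
  end;

  begin
    for i := 1 to n do
      for j := 1 to n do
      begin
        if i = j then A[i, j] := i
        else A[i, j] := imin(i, j) - 2;  { Moler matrix }
        B[i, j] := imin(i, j);           { Frank matrix }
      end;
    for i := 1 to n do x[i] := 1.0;      { starting vector of ones }
    xax := 0.0; xbx := 0.0;
    for i := 1 to n do
    begin
      Ax[i] := 0.0; Bx[i] := 0.0;
      for j := 1 to n do
      begin
        Ax[i] := Ax[i] + A[i, j] * x[j];
        Bx[i] := Bx[i] + B[i, j] * x[j]
      end;
      xax := xax + x[i] * Ax[i];
      xbx := xbx + x[i] * Bx[i]
    end;
    q := xax / xbx;                      { Rayleigh quotient }
    gg := 0.0;
    for i := 1 to n do
    begin
      g[i] := 2.0 * (Ax[i] - q * Bx[i]) / xbx;
      gg := gg + sqr(g[i])
    end;
    writeln('Rayleigh quotient  q   = ', q);
    writeln('gradient norm^2    gTg = ', gg)
  end.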
Example 19.3. Conjugate gradients for inverse iteration and Rayleigh quotient minimisation

Table 19.3 presents approximations to the minimal and maximal eigensolutions of the order-10 matrix eigenproblem (2.63) having as A the Moler matrix and as B the Frank matrix (appendix 1). The following notes apply to the table.

(i) The maximal (largest eigenvalue) eigensolution is computed using (-A) instead of A in algorithm 25.
(ii) Algorithm 15 computes all eigensolutions for the problem. The maximum absolute residual quoted is computed in my program over all these solutions, not simply for the eigenvalue and eigenvector given.
(iii) It was necessary to halt algorithm 10 manually for the case involving a shift of 8·8. This is discussed briefly in §9.3 (p 109).
(iv) The three iterative algorithms were started with an initial vector of ones.

TABLE 19.3. (a) Minimal and (b) maximal eigensolutions of Ax = eBx for A = Moler matrix, B = Frank matrix (order 10).

(a) Minimal eigensolution

                          Algorithm 10    Algorithm 15    Section 19.3    Algorithm 25
Shift                     0               --              0               --
Eigenvalue                2·1458E-6       2·53754E-6      2·14552E-6      --
Iterations or sweeps      4               7               3               --
Matrix products           --              --              23              26
Rayleigh quotient         --              --              2·1455E-6       2·14512E-6
Eigenvector:              0·433017        0·433017        0·433017        -0·433015
                          0·21651         0·21651         0·21651         -0·216509
                          0·108258        0·108257        0·108257        -0·108257
                          5·41338E-2      5·41337E-2      5·41337E-2      -5·41331E-2
                          2·70768E-2      2·70768E-2      2·70768E-2      -2·70767E-2
                          1·35582E-2      1·35583E-2      1·35582E-2      -1·35583E-2
                          6·81877E-3      6·81877E-3      6·81879E-3      -6·81877E-3
                          3·48868E-3      3·48866E-3      3·48869E-3      -3·48891E-3
                          1·90292E-3      1·9029E-3       1·90291E-3      -1·90299E-3
                          1·26861E-3      1·26858E-3      1·26859E-3      -1·26864E-3
Maximum residual          2·17929E-7      <8·7738E-5      2·0558E-13      --
Error sum of squares rTr  --              --              --              4·62709E-11
Gradient norm^2 gTg       --              --              --              9·62214E-15

(b) Maximal eigensolution

                          Algorithm 10    Algorithm 15    Section 19.3    Algorithm 25
Shift                     8·8             --              8·8             --
Eigenvalue                8·81652         8·816447        8·81651         --
Iterations or sweeps      (see notes)     --              6               --
Matrix products           --              --              166             96
Rayleigh quotient         --              --              --              8·81651
Eigenvector:              0·217765        -0·217764       0·219309        0·219343
                          -0·459921       0·459918        -0·462607       -0·462759
                          0·659884        -0·659877       0·662815        0·663062
                          -0·799308       0·799302        -0·802111       -0·801759
                          0·865401        -0·865396       0·867203        0·866363
                          -0·852101       0·8521          -0·85142        -0·851188
                          0·760628        -0·760632       0·757186        0·757946
                          -0·599375       0·599376        -0·594834       -0·595627
                          0·383132        -0·383132       0·379815        0·379727
                          -0·131739       0·131739        -0·130648       -0·130327
Maximum residual          7·62939E-6      <8·7738E-5      4·9575E-6       --
Error sum of squares rTr  --              --              --              5·73166E-3
Gradient norm^2 gTg       --              --              --              5·82802E-9

(v) Different measures of convergence and different tolerances have been used in the computations, which were all performed on a Data General NOVA in 23-bit binary arithmetic.
That these measures are different is due to the various operating characteristics of the programs involved.

Example 19.4. Negative definite property of Fröberg's matrix

In example 19.1 the coefficient matrix arising in the linear equations 'turns out to be negative definite'. In practice, to determine this property the eigenvalues of the matrix could be computed. Algorithm 25 is quite convenient in this respect, since a matrix A having a positive minimal eigenvalue is positive definite. Conversely, if the smallest eigenvalue of (-A) is positive, A is negative definite. The minimum eigenvalues of Fröberg coefficient matrices of various orders were therefore computed. (The matrices were multiplied by -1.)

Order    Rayleigh quotient of (-A)    Matrix products needed    Gradient norm^2 gTg
  4      0·350144                      5                        1·80074E-13
 10      7·44406E-2                   11                        2·08522E-10
 50      3·48733E-3                   26                        1·9187E-10
100      8·89398E-4                   49                        7·23679E-9

These calculations were performed on a Data General NOVA in 23-bit binary arithmetic.

Note that because the Fröberg matrices are tridiagonal, other techniques may be preferable in this specific instance (Wilkinson and Reinsch 1971).
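Where only the definiteness property is wanted, rather than the minimal eigenvalue itself, an attempted Choleski factoring of (-A) offers a cheaper test, since the decomposition succeeds only for a positive definite matrix (compare example 7.1 and algorithm 7). The sketch below is a plain full-matrix Choleski, not the book's compact-storage algorithm 7, and it takes the Moler matrix of appendix 1 as its test case because the Fröberg matrix of example 19.1 is not reproduced here; all names are illustrative.

  program deftest;
  { Sketch: test definiteness by attempted Choleski factoring.
    A is negative definite exactly when (-A) is positive definite. }
  const
    n = 5;
  type
    rmatrix = array[1..n, 1..n] of real;
  var
    A, C: rmatrix;
    i, j: integer;

  function posdef(M: rmatrix): boolean;
  { true if M is positive definite: the Choleski factoring below
    succeeds (all pivots positive) only in that case }
  var
    i, j, k: integer;
    s: real;
    ok: boolean;
  begin
    ok := true;
    for j := 1 to n do
      if ok then
      begin
        s := M[j, j];
        for k := 1 to j - 1 do s := s - sqr(M[j, k]);
        if s <= 0.0 then ok := false
        else
        begin
          M[j, j] := sqrt(s);
          for i := j + 1 to n do
          begin
            s := M[i, j];
            for k := 1 to j - 1 do s := s - M[i, k] * M[j, k];
            M[i, j] := s / M[j, j]
          end
        end
      end;
    posdef := ok
  end;

  begin
    for i := 1 to n do         { Moler matrix: known positive definite }
      for j := 1 to n do
        if i = j then A[i, j] := i
        else if i < j then A[i, j] := i - 2
        else A[i, j] := j - 2;
    for i := 1 to n do
      for j := 1 to n do C[i, j] := -A[i, j];    { C = -A }
    writeln('A positive definite: ', posdef(A)); { TRUE }
    writeln('A negative definite: ', posdef(C))  { FALSE }
  end.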
Appendix 1
NINE TEST MATRICES

In order to test programs for the algebraic eigenproblem and linear equations, it is useful to have a set of easily generated matrices whose properties are known. The following nine real symmetric matrices can be used for this purpose.

Hilbert segment of order n
Aij = 1/(i + j - 1).
This matrix is notorious for its logarithmically distributed eigenvalues. While it can be shown in theory to be positive definite, in practice it is so ill conditioned that most eigenvalue or linear-equation algorithms fail for some value of n < 20.

Ding Dong matrix
Aij = 0·5/(n - i - j + 1·5).
The name and matrix were invented by Dr F N Ris of IBM, Thomas J Watson Research Centre, while he and the author were both students at Oxford. This Cauchy matrix has few trailing zeros in any elements, so is always represented inexactly in the machine. However, it is very stable under inversion by elimination methods. Its eigenvalues have the property of clustering near ±π/2.

Moler matrix
Aii = i
Aij = min(i, j) - 2     for i ≠ j.
Professor Cleve Moler devised this simple matrix. It has the very simple Choleski decomposition given in example 7.1, so is positive definite. Nevertheless, it has one small eigenvalue and often upsets elimination methods for solving linear-equation systems.

Frank matrix
Aij = min(i, j).
A reasonably well behaved matrix.
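Each of the four matrices above can be produced directly from its element formula. The sketch below shows one way to write such generators; the type rmatrix, the maximum order and the procedure names are assumptions of this illustration (the book's own generator, MATRIXIN.PAS, is noted at the end of this appendix).

  { Sketch: element-formula generators for the matrices above.
    The type declaration and maximum order are illustrative only. }
  const
    maxn = 20;
  type
    rmatrix = array[1..maxn, 1..maxn] of real;

  function imin(i, j: integer): integer;
  begin
    if i < j then imin := i else imin := j
  end;

  procedure hilbert(n: integer; var A: rmatrix);
  var i, j: integer;
  begin
    for i := 1 to n do
      for j := 1 to n do A[i, j] := 1.0 / (i + j - 1)
  end;

  procedure dingdong(n: integer; var A: rmatrix);
  var i, j: integer;
  begin
    for i := 1 to n do
      for j := 1 to n do A[i, j] := 0.5 / (n - i - j + 1.5)
  end;

  procedure moler(n: integer; var A: rmatrix);
  var i, j: integer;
  begin
    for i := 1 to n do
      for j := 1 to n do
        if i = j then A[i, j] := i
        else A[i, j] := imin(i, j) - 2
  end;

  procedure frank(n: integer; var A: rmatrix);
  var i, j: integer;
  begin
    for i := 1 to n do
      for j := 1 to n do A[i, j] := imin(i, j)
  end;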
Bordered matrix
Aii = 1
Ain = Ani = 2^(1-i)     for i ≠ n
Aij = 0                 otherwise.
The matrix has (n-2) eigenvalues at 1. Wilkinson (1965, pp 94-7) gives some discussion of this property. The high degree of degeneracy and the form of the 'border' were designed to give difficulties to a specialised algorithm for matrices of this form in which I have been interested from time to time.

Diagonal matrix
Aii = i
Aij = 0                 for i ≠ j.
This matrix permits solutions to eigenvalue and linear-equation problems to be computed trivially. It is included in this set because I have known several programs to fail to run correctly when confronted with it. Sometimes programs are unable to solve trivial problems because their designers feel they are 'too easy.' Note that the ordering is 'wrong' for algorithms 13 and 14.

Wilkinson W+ matrix
Aii = [n/2] + 1 - min(i, n-i+1)     for i = 1, 2, . . . , n
Ai,i+1 = Ai+1,i = 1                 for i = 1, 2, . . . , (n-1)
Aij = 0                             for |j - i| > 1
where [b] is the largest integer less than or equal to b. The W+ matrix (Wilkinson 1965, p 308) is normally given odd order. This tridiagonal matrix then has several pairs of close eigenvalues despite the fact that no superdiagonal element is small. Wilkinson points out that the separation between the two largest eigenvalues is of the order of (n!)^-2 so that the power method will be unable to separate them unless n is very small.

Wilkinson W- matrix
Aii = [n/2] + 1 - i                 for i = 1, 2, . . . , n
Ai,i+1 = Ai+1,i = 1                 for i = 1, 2, . . . , (n-1)
Aij = 0                             for |j - i| > 1
where [b] is the largest integer less than or equal to b. For odd order, this matrix has eigenvalues which are pairs of equal magnitude but opposite sign. The magnitudes of these are very close to some of those of the corresponding W+ matrix.
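The two Wilkinson matrices differ only in the formula for the diagonal, so a single generator can serve for both; the sketch below assumes the rmatrix type of the generator sketch given after the Frank matrix, and the procedure name is again an illustrative choice.

  { Sketch: generator for the Wilkinson W+ (plus = true) and
    W- (plus = false) matrices, using the assumed rmatrix type.
    Note [n/2] = n div 2 for positive n. }
  procedure wilkinson(n: integer; plus: boolean; var A: rmatrix);
  var
    i, j, d: integer;
  begin
    for i := 1 to n do
      for j := 1 to n do A[i, j] := 0.0;
    for i := 1 to n do
    begin
      if plus then
      begin
        d := i;                           { d = min(i, n-i+1) }
        if n - i + 1 < d then d := n - i + 1;
        A[i, i] := n div 2 + 1 - d        { W+ diagonal }
      end
      else
        A[i, i] := n div 2 + 1 - i;       { W- diagonal }
      if i < n then
      begin
        A[i, i + 1] := 1.0;               { unit off-diagonals }
        A[i + 1, i] := 1.0
      end
    end
  end;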
Ones
Aij = 1                 for all i, j.
This matrix is singular. It has only rank one, that is, (n-1) zero eigenvalues.

The matrices described here may all be generated by the Pascal procedure MATRIXIN.PAS, which is on the software diskette. This procedure also allows for keyboard entry of matrices.

Appendix 2
LIST OF ALGORITHMS

Algorithm 1. Singular-value decomposition
Algorithm 2. Least-squares solution via singular-value decomposition
Algorithm 3. Givens' reduction of a real rectangular matrix
Algorithm 4. Givens' reductions, singular-value decomposition and least-squares solution
Algorithm 5. Gauss elimination with partial pivoting
Algorithm 6. Gauss elimination back-substitution
Algorithm 7. Choleski decomposition in compact storage
Algorithm 8. Choleski back-substitution
Algorithm 9. Bauer-Reinsch inversion of a positive definite symmetric matrix
Algorithm 10. Inverse iteration via Gauss elimination
Algorithm 11. Standardisation of a complex vector
Algorithm 12. Residuals of a complex eigensolution
Algorithm 26. Eigensolutions of a complex matrix by Eberlein's method
Algorithm 13. Eigensolutions of a real symmetric matrix via the singular-value decomposition
Algorithm 14. A Jacobi algorithm for eigensolutions of a real symmetric matrix
Algorithm 15. Solution of a generalised matrix eigenvalue problem by two applications of the Jacobi algorithm
Algorithm 16. Grid search along a line
Algorithm 17. Minimisation of a function of one variable
Algorithm 18. Root-finding by bisection and False Position
Algorithm 19. A Nelder-Mead minimisation procedure
Algorithm 20. Axial search
Algorithm 27. Hooke and Jeeves minimiser
Algorithm 21. Variable metric minimiser
Algorithm 22. Function minimisation by conjugate gradients
Algorithm 23. Modified Marquardt method for minimising a nonlinear sum-of-squares function
Algorithm 24. Solution of a consistent set of linear equations by conjugate gradients
Algorithm 25. Rayleigh quotient minimisation by conjugate gradients

Appendix 3
LIST OF EXAMPLES
Example 2.1. Mass-spectrograph calibration
Example 2.2. Ordinary differential equations: a two-point boundary value problem
Example 2.3. Least squares
Example 2.4. Surveying-data fitting
Example 2.5. Illustration of the matrix eigenvalue problem
Example 3.1. The generalised inverse of a rectangular matrix via the singular-value decomposition
Example 3.2. Illustration of the use of algorithm 2
Example 4.1. The operation of Givens' reduction
Example 4.2. The use of algorithm 4
Example 6.1. The use of linear equations and linear least-squares problems
Example 7.1. The Choleski decomposition of the Moler matrix
Example 7.2. Solving least-squares problems via the normal equations
Example 8.1. The behaviour of the Bauer-Reinsch Gauss-Jordan inversion
Example 9.1. Inverse iteration
Example 9.2. Eigensolutions of a complex matrix
Example 10.1. Principal axes of a cube
Example 10.2. Application of the Jacobi algorithm in celestial mechanics
Example 11.1. The generalised symmetric eigenproblem: the anharmonic oscillator
Example 12.1. Function minimisation - optimal operation of a public lottery
Example 12.2. Nonlinear least squares
Example 12.3. An illustration of a system of simultaneous nonlinear equations
Example 12.4.