C                                  SEE PRINTED ERROR MESSAGE .
C****** M E T H O D  ******
C  INVERSE ITERATION, AS DESCRIBED IN THE TEXT, IS USED.
C****** THE FOLLOWING  T E R M I N A T I O N   P A R A M E T E R S  ARE
C  SET HERE, A TOLERANCE  E P S L O N  ON THE DIFFERENCE BETWEEN
C  SUCCESSIVE EIGENVALUE ITERATES, AND AN UPPER BOUND  I T E R M A X
C  ON THE NUMBER OF ITERATION STEPS.
      DATA EPSLON, ITERMX / .000001, 20 /
C
C        PUT  B - (EGUESS)*IDENTITY  INTO  W
      DO 10 J=1,N
         DO 9 I=1,N
    9       W(I,J) = B(I,J)
   10    W(J,J) = W(J,J) - EGUESS
      CALL FACTOR ( W, N, D, IPIVOT, IFLAG )
      IF (IFLAG .EQ. 0) THEN
         PRINT 610
  610    FORMAT(' EIGENVALUE GUESS TOO CLOSE.',
     *          ' NO EIGENVECTOR CALCULATED.')
         RETURN
      END IF
C                                  ITERATION STARTS HERE
      PRINT 619
  619 FORMAT(' ITER  EIGENVALUE     EIGENVECTOR COMPONENTS'/)
      EVOLD = 0.
      DO 50 ITER=1,ITERMX
C                                  NORMALIZE CURRENT VECTOR GUESS
         SQNORM = 0.
         DO 20 I=1,N
   20       SQNORM = VGUESS(I)**2 + SQNORM
         SQNORM = SQRT(SQNORM)
         DO 21 I=1,N
   21       VGUESS(I) = VGUESS(I)/SQNORM
C                                  GET NEXT VECTOR GUESS
         CALL SUBST ( W, IPIVOT, VGUESS, N, VECTOR )
C                                  CALCULATE RAYLEIGH QUOTIENT
         EVNEW = 0.
         DO 30 I=1,N
   30       EVNEW = VGUESS(I)*VECTOR(I) + EVNEW
         EVALUE = EGUESS + 1./EVNEW
         PRINT 630, ITER, EVALUE, VECTOR
  630    FORMAT(I3,E15.7,2X,3E14.7/(20X,3E14.7))
C                                  STOP ITERATION IF CURRENT GUESS IS
C                                  CLOSE TO PREVIOUS GUESS FOR EIGENVALUE
         IF ( ABS(EVNEW-EVOLD) .LE. EPSLON*ABS(EVNEW) )
     *                                  RETURN
         EVOLD = EVNEW
         DO 50 I=1,N
   50       VGUESS(I) = VECTOR(I)
      IFLAG = 0
      PRINT 660, EPSLON, ITERMX
  660 FORMAT(' NO CONVERGENCE TO WITHIN ',E10.4,' AFTER',I3,' STEPS.')
      RETURN
      END
Example 4.13  For the matrix B of Example 4.12, we use the above FORTRAN routine INVITR with z = [1, 1, 1]ᵀ and p = 3.0165, which is the best guess for the dominant eigenvalue λ1 from the first sequence of ratios in Example 4.12.

ITER  EIGENVALUE     EIGENVECTOR COMPONENTS
[The numerical output of this run is not reproduced in this excerpt.]

The output shows very rapid convergence of the eigenvector (a gain of about two decimal places per iteration step), and an even more rapid convergence of the eigenvalue, because B is symmetric and a Rayleigh quotient was computed.

As an illustration of the fact that, in contrast to the power method itself, inverse iteration may be used for any eigenvalue, we also start with z = [1, 1, 1]ᵀ and p = 0, hoping to catch thereby an absolutely smallest eigenvalue of B.

ITER  EIGENVALUE     EIGENVECTOR COMPONENTS
[The numerical output of this run is not reproduced in this excerpt.]

The convergence is much slower since 0 is not particularly close to the eigenvalue −1, but we have convergence after nine iterations, with the computed eigenvector of the form [0, 0, 1]ᵀ (rather than of the more general form [a, −a, b]ᵀ possible for the eigenvalue −1 of B).
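For readers who want to reproduce such a run, a minimal driver might look as follows. This is only a sketch: the full calling sequence of INVITR is not shown in this excerpt, so the argument order used below is an assumption; the program must be compiled together with INVITR and the FACTOR and SUBST routines it calls; and the 3 × 3 matrix filled in here is a stand-in, since the entries of the matrix B of Example 4.12 are likewise not reproduced.

C  MINIMAL DRIVER FOR  INVITR , PATTERNED AFTER THE FIRST RUN OF
C  EXAMPLE 4.13.  THE CALLING SEQUENCE IS AN ASSUMPTION (THE
C  SUBROUTINE HEADER IS NOT REPRODUCED IN THIS EXCERPT), AND THE
C  MATRIX BELOW IS A STAND-IN, NOT THE  B  OF EXAMPLE 4.12.
      INTEGER I,IFLAG
      REAL B(3,3),EVALUE,VECTOR(3),VGUESS(3)
      DATA B / 2.,-1.,0., -1.,2.,-1., 0.,-1.,2. /
C        STARTING VECTOR  Z = (1,1,1)  AND EIGENVALUE GUESS  P = 3.0165
      DO 1 I=1,3
    1    VGUESS(I) = 1.
      CALL INVITR ( B, 3, 3.0165, VGUESS, EVALUE, VECTOR, IFLAG )
C        ON CONVERGENCE,  IFLAG  RETAINS THE NONZERO VALUE SET BY FACTOR
      IF (IFLAG .NE. 0) PRINT 600, EVALUE
  600 FORMAT(' ACCEPTED EIGENVALUE',E15.7)
      STOP
      END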
The power method and its variant, inverse iteration, are not universally applicable. First of all, complex arithmetic has to be used, in general, if complex eigenvalues are to be found. There are special tricks available to sneak up on a pair of complex conjugate eigenvalues of a real matrix B in real arithmetic. A more serious difficulty is the possibly very slow convergence when the next largest eigenvalue is very close in absolute value to the largest.
While Aitken’s Δ² process (Algorithm 3.7) can be used to accelerate convergence if there is some, there will be no convergence in general in the extreme case when |λ2| = |λ1| (for example, when λ2 = −λ1). A remedy of sorts can at times be provided by an appropriate shift, that is, by working with the matrix B − pI rather than B itself, whose eigenvalues are the numbers λi − p, so that for a suitable choice of p one of them becomes strictly largest in absolute value (see Exercises 4.8-6 and 4.8-7).
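To make the shift idea concrete, here is a brief sketch (not a routine from the text; the name PWRSFT is ours) of the power method applied to B − pI. The only changes relative to the unshifted method are the subtraction of p along the diagonal in the matrix-vector product and the addition of p back to the final eigenvalue estimate.

C  SKETCH (NOT FROM THE TEXT) OF THE POWER METHOD WITH SHIFT  P ,
C  I.E., APPLIED TO  B - P*IDENTITY .  Z  HOLDS THE STARTING VECTOR
C  ON INPUT AND THE CURRENT EIGENVECTOR GUESS ON OUTPUT;  W  IS A
C  WORK ARRAY.  FOR SIMPLICITY, NO CONVERGENCE TEST IS MADE, AND THE
C  RATIO OF FIRST COMPONENTS IS USED (SO  Z(1)  MUST STAY AWAY FROM 0).
      SUBROUTINE PWRSFT ( B, N, P, Z, W, NSTEPS, EVALUE )
      INTEGER N,NSTEPS,   I,ITER,J
      REAL B(N,N),EVALUE,P,W(N),Z(N),   RATIO,ZNORM
      DO 40 ITER=1,NSTEPS
C           FORM  W = (B - P*IDENTITY)*Z
         DO 20 I=1,N
            W(I) = -P*Z(I)
            DO 20 J=1,N
   20       W(I) = W(I) + B(I,J)*Z(J)
C           RATIO OF FIRST COMPONENTS ESTIMATES THE ABSOLUTELY
C           LARGEST EIGENVALUE OF  B - P*IDENTITY
         RATIO = W(1)/Z(1)
C           NORMALIZE  W  TO GET THE NEXT VECTOR GUESS
         ZNORM = 0.
         DO 30 I=1,N
   30       ZNORM = W(I)**2 + ZNORM
         ZNORM = SQRT(ZNORM)
         DO 40 I=1,N
   40       Z(I) = W(I)/ZNORM
C        UNDO THE SHIFT TO GET AN EIGENVALUE ESTIMATE FOR  B  ITSELF
      EVALUE = RATIO + P
      RETURN
      END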
Finally, the power method loses its theoretical support (as we gave it here) when we cannot write the starting vector z as a sum of eigenvectors of B. Since we do not know the eigenvectors of B, we can be sure that z can be written as a sum of eigenvectors of the n × n matrix B only if we know that every n-vector can be written as a sum of eigenvectors of B. But then we are asking, in effect, that B have enough eigenvectors to staff a basis. A basis for all n-vectors which consists entirely of eigenvectors for the n × n matrix B is called a complete set of eigenvectors for B.
Clearly, if z1, . . . , zn is a complete set of eigenvectors for the n × n matrix B, hence a basis for all n-vectors, then any particular n-vector z can be written as a linear combination

z = a1z1 + a2z2 + · · · + anzn

of these eigenvectors, for suitable coefficients a1, . . . , an. If ai ≠ 0, then yi = aizi is also an eigenvector for B, while if ai = 0, we can drop the term aizi from the sum without loss. In this way, we obtain z as a sum of eigenvectors of B (except for the uninteresting case z = 0).

Unfortunately, not every matrix has a complete set of eigenvectors, as we saw earlier in Example 4.10.

Similarity

The fact that not every matrix has a complete set of eigenvectors is an indication of the complications which eigenvalue theory has to offer.
It corresponds to the statement that not every square matrix can be written in the form YDY⁻¹ for some diagonal matrix D and some invertible matrix Y. Indeed, Y⁻¹BY is a diagonal matrix if and only if the columns of the matrix Y consist of eigenvectors of B, while such an n × n matrix Y is invertible if and only if its columns form a basis for the n-vectors.

One says that two matrices A and B are similar if

A = CBC⁻¹

for some (invertible) matrix C. Similar matrices have the same eigenvalues and related eigenvectors.
Indeed, if Bx = λx for some nonzero vector x, and A = CBC⁻¹, then Cx is also nonzero, and AC = CB, hence

A(Cx) = CBx = λ(Cx)

In short, to each eigenvalue-eigenvector pair (λ, x) of B there corresponds the eigenvalue-eigenvector pair (λ, Cx) of A.

This suggests, as a first step in the calculation of the eigenvalues of B, a similarity transformation of B into a matrix A = C⁻¹BC for which the eigenvalues are easier to calculate, in some sense.

For example, if one could find an upper triangular matrix T similar to B, one would know all the eigenvalues of B, since they would all be found on the diagonal of T.
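To see this in a small example (not one from the text), take

T = [ 2  1 ]        C = [ 1  0 ]
    [ 0  3 ]            [ 1  1 ]

Then B = CTC⁻¹ has rows (1, 1) and (−2, 4), and its characteristic polynomial is (1 − λ)(4 − λ) + 2 = λ² − 5λ + 6 = (λ − 2)(λ − 3). The eigenvalues of B are therefore exactly the diagonal entries 2 and 3 of the upper triangular matrix T to which it is similar.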
In fact, one can prove

Theorem 4.13: Schur’s theorem  Every square matrix B can be written as U⁻¹TU, with T upper-triangular and U unitary, that is, UᴴU = I.

The fact that U is unitary has the pleasant consequence that ||Ux||2 = ||x||2 for all x, hence ||T||2 = ||B||2, so that the upper triangular matrix T which is similar to B even has the same size as B.
Unfortunately, though, it usually takes an iterative process to construct such U and T.

But it is always possible in O(n³) floating-point operations to transform B by similarity into a matrix H = (hij) which is almost triangular or Hessenberg, that is, for which

hij = 0    for all  i > j + 1

Thus the lower-triangular part of H is zero except perhaps for the first band below the diagonal. One constructs H from B by a sequence of n − 2 simple similarity transformations, each producing one more column of zeros below the first subdiagonal.

For example, one might employ Householder reflections, that is, matrices of the form

(4.69)    R(y) = I − 2yyᵀ/yᵀy

as follows.
Suppose that H(k) has already zeros in columns 1, 2, . . . , k − 1 below the subdiagonal, as would be the case for k = 1 with H(1) = B. Then we want to form

H(k+1) = R(y)⁻¹H(k)R(y)

in such a way that the first k − 1 columns remain unchanged, while we now have zeros also in column k below the first subdiagonal. For this, one notes first of all that the inverse of R(y) is R(y) itself because

(yyᵀ)(yyᵀ) = y(yᵀy)yᵀ    and therefore    R(y)R(y) = I − 4yyᵀ/yᵀy + 4yyᵀ/yᵀy = I

Hence H(k+1) = R(y)H(k)R(y). One computes similarly that

(4.70)    R(y)y = −y,    while    R(y)x = x    whenever    yᵀx = 0

This, incidentally, explains the name “reflection.”

Next, one should realize that the economical way to form the matrix product AR(y) is to take each row xᵀ of A and replace it by the row vector

xᵀ − (xᵀy/c)yᵀ,    with    c := yᵀy/2

Hence, with the choice

(4.71)    y1 = y2 = · · · = yk = 0

the matrix H(k)R(y) has the same first k columns as does H(k).
Next, one should realize that the economical way to form the matrix product R(y)A is to take each column x of A and replace it by the column vector

x − (yᵀx/c)y

Since H(k) has zeros in columns 1 through k − 1 below row k, while (4.71) makes y vanish in its first k entries, we have yᵀx = 0 for each of the first k − 1 columns x of H(k); this shows that the choice (4.71) also ensures that R(y)H(k)R(y) has the same first k − 1 columns as H(k).
This leaves us with the problem of choosing yk+1, . . . , yn in such a way that the kth column of R(y)H(k) has zeros in rows k + 2, . . . , n. Because of (4.71), this means that R(y) should map the vector

z := [0, . . . , 0, h(k+1, k), h(k+2, k), . . . , h(n, k)]ᵀ

to the vector αek+1 = [0, . . . , 0, α, 0, . . . , 0]ᵀ, with the entry α in position k + 1, for some scalar α. [Here, we have written h(i, j) for the (i, j) entry of H(k).] By (4.70), this means that

α² = zᵀz = h(k+1, k)² + · · · + h(n, k)²

for R(y) does not change the Euclidean length of a vector. Further, R(y)z = z − (yᵀz/c)y, showing that y must be a scalar multiple of the vector z − αek+1. This indicates that the following choice of y will do the job:

(4.72)    yi = h(i, k)  for  i = k + 2, . . . , n,    yk+1 = h(k+1, k) − α

i.e., y = z − αek+1, with

α = −sign(h(k+1, k)) (h(k+1, k)² + · · · + h(n, k)²)^(1/2)

Here we have chosen the sign of α so as to avoid loss of significance in the calculation of yk+1. The corresponding c = yᵀy/2 can be written simply as

(4.73)    c = α(α − h(k+1, k)) = |α|(|α| + |h(k+1, k)|)

In this way, one obtains after n − 2 such steps the matrix

H := H(n−1) = C⁻¹BC

with H Hessenberg, and C = R(y(1))R(y(2)) · · · R(y(n−2)) a product of certain Householder reflections, hence C⁻¹ = Cᵀ if the y(k) are real.
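The computation just described can be summarized in the following sketch. It is not the text’s Algorithm 4.6, which is given next and uses a different storage layout; in particular, the routine below (whose name HESSEN is ours) does not save the vectors y, so eigenvectors could not be transformed back with this version. It forms y by (4.72) and c by (4.73), then applies R(y) economically from the right (row by row) and from the left (column by column).

      SUBROUTINE HESSEN ( H, N, Y )
C  A SKETCH, NOT THE TEXT'S ALGORITHM 4.6 .  OVERWRITES THE  N  BY  N
C  MATRIX IN  H  WITH AN UPPER HESSENBERG MATRIX SIMILAR TO IT, USING
C  THE  N - 2  HOUSEHOLDER SIMILARITY STEPS DESCRIBED IN THE TEXT.
C  Y  IS A WORK ARRAY OF LENGTH  N .
      INTEGER N,   I,J,K
      REAL H(N,N),Y(N),   ALPHA,C,S
      DO 50 K=1,N-2
C           SUM OF SQUARES OF  H(K+1,K),...,H(N,K)
         S = 0.
         DO 10 I=K+1,N
   10       S = H(I,K)**2 + S
         IF (S .EQ. 0.)                 GO TO 50
         ALPHA = SQRT(S)
C           SIGN CHOSEN AS IN (4.72) TO AVOID LOSS OF SIGNIFICANCE
         IF (H(K+1,K) .GT. 0.)          ALPHA = -ALPHA
         DO 20 I=K+2,N
   20       Y(I) = H(I,K)
         Y(K+1) = H(K+1,K) - ALPHA
C           C = (Y**T)Y/2 , BY (4.73)
         C = ALPHA*(ALPHA - H(K+1,K))
C           FORM  H*R(Y) :  ROW  X**T  BECOMES  X**T - ((X**T)Y/C)Y**T .
C           ONLY COLUMNS  K+1,...,N  CHANGE, BY (4.71)
         DO 32 I=1,N
            S = 0.
            DO 30 J=K+1,N
   30          S = H(I,J)*Y(J) + S
            S = S/C
            DO 31 J=K+1,N
   31          H(I,J) = H(I,J) - S*Y(J)
   32    CONTINUE
C           FORM  R(Y)*(H*R(Y)) :  COLUMN  X  BECOMES  X - ((Y**T)X/C)Y .
C           COLUMNS  1,...,K-1  ARE UNTOUCHED SINCE  (Y**T)X = 0  THERE
         DO 42 J=K,N
            S = 0.
            DO 40 I=K+1,N
   40          S = Y(I)*H(I,J) + S
            S = S/C
            DO 41 I=K+1,N
   41          H(I,J) = H(I,J) - S*Y(I)
   42    CONTINUE
   50 CONTINUE
      RETURN
      END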
A Householder reflection is clearly a real symmetric matrix (if y is real), therefore H is real symmetric in case B is. Thus, H is tridiagonal and symmetric in case B is real symmetric.

For convenience, we now give a formal description.

Algorithm 4.6: Similarity transformation into upper Hessenberg form using Householder reflections  Given the matrix A of order n as stored in the first n columns of a work array H of order n × (n + 2). [The step-by-step body of the algorithm is not reproduced in this excerpt; the sketch above carries out the same reduction with simplified storage.] Then H contains the interesting part of an upper Hessenberg matrix similar to the input matrix A in the upper Hessenberg portion of its first n columns and rows. It also contains complete information about the vectors y and the scalars c which determine the various Householder reflections used. This information is needed when the eigenvectors of that upper Hessenberg matrix have to be transformed back into eigenvectors of the original matrix A.

The currently recommended method for finding all the eigenvalues of a general matrix B is the QR method.