Conte, de Boor - Elementary Numerical Analysis: An Algorithmic Approach
By applying the rule recursively, we can eventually express det(A) as a sum of determinants of order 1. This rule is particularly useful for the calculation of det(A) when A is a sparse matrix, so that most of the summands drop out. For example, expanding in minors for the first row,

    det(A) = A11 det(M11) − A12 det(M12) + · · · + (−1)^(1+n) A1n det(M1n)

where M1j denotes the matrix of order n − 1 obtained from A by deleting row 1 and column j.

EXERCISES

4.7-1 Use Theorem 4.11 and Eq. (4.57) to prove that if A is invertible, then det(A^(−1)) = 1/det(A).

4.7-2 Use Theorems 4.12 and 4.4 to prove that if det(A) ≠ 0, then A is invertible.

4.7-3 Determine the number of arithmetic operations necessary to calculate the solution of a linear system of order 2 (a) by elimination and back-substitution, (b) by Cramer's rule.

4.7-4 If n = 3, then direct evaluation of (4.55) takes 12 multiplications and 5 additions. How many multiplications and additions does the evaluation of a determinant of order 3 take if expansion by minors (Rule 6) is used? How many multiplications/divisions and additions are necessary for the same task if elimination is used?

4.7-5 Prove: If the coefficient matrix of the linear system Ax = b is invertible, then it is always possible to reorder the equations (if necessary) so that the coefficient matrix of the reordered (equivalent) system has all diagonal entries nonzero.
[Hint: By Theorem 4.10, at least one of the summands in (4.55) must be nonzero if A is invertible.]

4.7-6 Verify Rules 1 to 5 in case all matrices in question are of order 2. Try to prove Rules 4 and 5 for matrices of arbitrary order.

4.7-7 Prove Theorem 4.11 in case A and B are matrices of order 2.

4.7-8 Let A be a tridiagonal matrix of order n; for p = 1, 2, . .
. , n, let Ap be the p × p matrix obtained from A by omitting rows p + 1, . . . , n and columns p + 1, . . . , n. Use Rule 6 to prove that, with det(A0) = 1,

    det(Ap) = App det(Ap−1) − Ap,p−1 Ap−1,p det(Ap−2)        p = 2, . . . , n

Write a program for the evaluation of the determinant of a tridiagonal matrix based on this recursion formula.

*4.8 THE EIGENVALUE PROBLEM

Eigenvalues are of great importance in many physical problems. The stability of an aircraft, for example, is determined by the location in the complex plane of the eigenvalues of a certain matrix.
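The recursion of Exercise 4.7-8 can be sketched as follows (a minimal Python sketch, not the FORTRAN program the exercise asks for; the helper name tridiag_det and the diagonals-as-lists calling convention are ours):

```python
def tridiag_det(a, b, c):
    """Determinant of the tridiagonal matrix with main diagonal a,
    subdiagonal b (entries A[p, p-1]) and superdiagonal c (entries
    A[p-1, p]), by the three-term recursion
        det(A_p) = a_p det(A_{p-1}) - b_p c_p det(A_{p-2})
    started from det(A_0) = 1, det(A_1) = a_1."""
    n = len(a)
    d_prev2, d_prev1 = 1.0, a[0]
    for p in range(1, n):
        d = a[p] * d_prev1 - b[p - 1] * c[p - 1] * d_prev2
        d_prev2, d_prev1 = d_prev1, d
    return d_prev1

# Example: the matrix [[2,1,0],[1,2,1],[0,1,2]] has determinant 4
print(tridiag_det([2, 2, 2], [1, 1], [1, 1]))
```

Only the previous two determinants are kept, so the work is O(n) operations and O(1) extra storage, in contrast to the O(n³) cost of elimination on a full matrix.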
The natural frequencies of the vibrations of a beam are actually eigenvalues of an (infinite) matrix. Eigenvalues also occur naturally in the analysis of many mathematical problems because they are part of a particularly convenient and revealing way to represent a matrix (the Jordan canonical form and similar forms). For this reason, any system of first-order ordinary linear differential equations with constant coefficients can be solved in terms of the eigenvalues of its coefficient matrix.
Again, the behavior of the sequence A, A^2, A^3, . . . of powers of a matrix is most easily analyzed in terms of the eigenvalues of A. Such sequences occur in the iterative solution of linear (and nonlinear) systems of equations.

For these and other reasons, we give in this section a brief introduction to the localization and calculation of eigenvalues.
The state of the art is, unfortunately, much beyond the scope of this book. The encyclopedic book by J. H. Wilkinson [24] and the more elementary book by G. W. Stewart [23] are ready sources of information about such up-to-date methods as the QR method (with shifts), and for the many details omitted in the subsequent pages.

We say that the (real or complex) number λ is an eigenvalue of the matrix B provided that, for some nonzero (real or complex) vector y,

    By = λy                                              (4.60)

The n-vector y is then called an eigenvector of B belonging to the eigenvalue λ. We can write (4.60) in the form

    (B − λI)y = 0                                        (4.61)

Since y is to be a nonzero vector, we see that λ is an eigenvalue of B if and only if the homogeneous system (4.61) has nontrivial solutions.
Hence the following lemma is a consequence of Theorem 4.4.

Lemma 4.4 The number λ is an eigenvalue for the matrix B if and only if B − λI is not invertible.

Note that (4.60) or (4.61) determines an eigenvector for λ only up to scalar multiples. If y is an eigenvector belonging to λ and z = αy, α ≠ 0, is a scalar multiple of y, then z is also an eigenvector belonging to λ, since

    Bz = B(αy) = α(By) = α(λy) = λz

Examples The identity matrix I satisfies Iy = 1 · y for every vector y.
Hence 1 is an eigenvalue of I, and every nonzero vector is an eigenvector for I belonging to 1. Since a vector can belong to only one eigenvalue (or none), it follows that 1 is the only eigenvalue of I. The null matrix 0 has the number 0 as its one and only eigenvalue.

The matrix

    B = [1  2  0]
        [2  1  0]
        [0  0 −1]

has the eigenvalue −1, since Bi3 = −i3. Also, B(i1 + i2) = 3(i1 + i2), so that 3 is also an eigenvalue for B. Finally, B(i1 − i2) = −(i1 − i2), so that the eigenvalue −1 has the two linearly independent eigenvectors i3 and i1 − i2.

If the matrix B = (Bij) is upper-triangular, then λ is an eigenvalue of B if and only if λ = Bii for some i.
For the matrix B − λI is then also upper-triangular; hence, by Theorem 4.6, B − λI is not invertible if and only if one of its diagonal entries Bii − λ is zero, i.e., if and only if λ = Bii for some i. Hence the set of eigenvalues of a triangular matrix coincides with the set of numbers to be found on its diagonal.

Example 4.10 In particular, the only eigenvalue of the matrix

    B = [0  0  1]
        [0  0  0]
        [0  0  0]

is the number 0, and both i1 and i2 are eigenvectors for this B belonging to this eigenvalue. Any other eigenvector of B must be a linear combination of these two eigenvectors. For suppose that the nonzero 3-vector y ( = y1 i1 + y2 i2 + y3 i3) is an eigenvector for B (belonging therefore to the only eigenvalue 0). Then

    0 = By = y3 Bi3 = y3 i1

Since i1 ≠ 0, it follows that y3 = 0, that is, y = y1 i1 + y2 i2, showing that y is a linear combination of the eigenvectors i1 and i2.

As an illustration of why eigenvalues might be of interest, we now consider briefly vector sequences of the form

    z, Bz, B^2 z, B^3 z, . . .                           (4.62)

Such sequences occur in the various applications mentioned at the beginning of this section.
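Before following such a sequence numerically, the eigenvector relations claimed in the examples above are easy to check directly; a minimal Python sketch (the 3 × 3 matrix is the one reconstructed from the relations Bi3 = −i3, B(i1 + i2) = 3(i1 + i2), B(i1 − i2) = −(i1 − i2)):

```python
# Check the eigenvalue relations claimed for the 3x3 example matrix B.
B = [[1, 2, 0],
     [2, 1, 0],
     [0, 0, -1]]

def matvec(A, x):
    """Product Ax for a square matrix stored as a list of rows."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

i1, i2, i3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]

# B i3 = -i3:  -1 is an eigenvalue with eigenvector i3
assert matvec(B, i3) == [0, 0, -1]
# B (i1 + i2) = 3 (i1 + i2):  3 is an eigenvalue
assert matvec(B, [1, 1, 0]) == [3, 3, 0]
# B (i1 - i2) = -(i1 - i2):  a second eigenvector belonging to -1
assert matvec(B, [1, -1, 0]) == [-1, 1, 0]
print("all eigenvalue relations check out")
```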
We must deal with such sequences in Chap. 5, in the discussion of iterative methods for the solution of systems of equations.

Assume that the starting vector z in (4.62) can be written as a sum of eigenvectors of B, that is,

    z = y1 + y2 + · · · + yr                             (4.63)

where Byi = λi yi, i = 1, . . . , r. The mth term in the sequence (4.62) then has the simple form

    B^m z = λ1^m y1 + λ2^m y2 + · · · + λr^m yr          (4.64)

Hence the behavior of the vector sequence (4.62) is completely determined by the simple numerical sequences λi, λi^2, λi^3, . . . , i = 1, . . . , r. It follows, for example, that

    B^m z → 0 as m → ∞,  if |λi| < 1 for i = 1, . . . , r    (4.65)

Assume further that the λi are ordered by magnitude,

    |λ1| ≥ |λ2| ≥ · · · ≥ |λr|

which can always be achieved by proper ordering of the yi's.
Further, we assume that

    |λ1| > |λ2|                                          (4.66)

This assumption requires not only that λ1 be different from all the other λi [which can always be achieved by merely adding all yi's in (4.63) which belong to λ1, thereby getting just one eigenvector belonging to λ1], but also that there be no other λi of the same magnitude as λ1, and it is this part that makes (4.66) a nontrivial assumption.

Then, on dividing both sides of (4.64) by λ1^m, we get that

    B^m z / λ1^m = y1 + (λ2/λ1)^m y2 + · · · + (λr/λ1)^m yr

By our assumptions, |λi/λ1| < 1 for i > 1. Hence we conclude that

    B^m z / λ1^m → y1  as m → ∞                          (4.67)

In words, if z can be written in the form (4.63) in terms of eigenvectors of B so that the eigenvalue λ1 corresponding to y1 is absolutely bigger than all the other eigenvalues, then a properly scaled version of B^m z converges to y1.

Example 4.11 We saw earlier that the matrix

    B = [1  2  0]
        [2  1  0]
        [0  0 −1]

has the eigenvectors z1 = i1 + i2, z2 = i1 − i2, z3 = i3 with corresponding eigenvalues λ1 = 3, λ2 = −1, λ3 = −1. These eigenvectors are linearly independent (see Exercise 4.1-10), hence form a basis for all 3-vectors.
It follows that every 3-vector can be written as a sum of eigenvectors of B. In particular, the vector z given by z^T = [1 2 3] can be written

    z = (3/2) z1 − (1/2) z2 + 3 z3

where z1, z2, z3 are the eigenvectors of B listed above.

Table 4.1

In Table 4.1, we have listed the vectors z(m) = B^m z, m = 0, 1, . . . , where each z(m) has been scaled to make its first entry equal to 1.
Evidently, the z(m) converge to the eigenvector i1 + i2 belonging to λ1 = 3.

The power method for the calculation of the absolutely largest eigenvalue of a given matrix B is based on this illustration. One picks some vector z, for example, z = i1; generates the (first few) terms of the sequence (4.62); and calculates ratios of the form

    r_m = u^T B^(m+1) z / u^T B^m z                      (4.68)

as one goes along. From (4.64),

    u^T B^m z = λ1^m [u^T y1 + (λ2/λ1)^m u^T y2 + · · · + (λr/λ1)^m u^T yr]

therefore r_m → λ1 as m → ∞, provided |λ1| > |λ2| and provided u^T y1 ≠ 0. Note that it pays to use the vector u = B^m z in (4.68) in case B is symmetric, that is, B = B^T. The resulting ratio

    (B^m z)^T B^(m+1) z / (B^m z)^T B^m z

is called the Rayleigh quotient (for u = B^m z and B) and is easily seen to equal r_2m with u = z; hence it equals λ1 to within O(|λ2/λ1|^(2m)).

Example 4.12 From the sequence generated in Example 4.11, we obtain, with u = i1 and with u = i2, two sequences of ratios of the form (4.68). Both sequences appear to converge to λ1 = 3. But, for u = i3, we get a sequence which does not appear to converge to 3 (in fact, it converges to −1; here u^T y1 = i3^T (i1 + i2) = 0, so the second proviso above fails). Since B is symmetric, we also calculate the sequence of Rayleigh quotients. This sequence gains roughly one digit per term, which corresponds to the fact that it should agree with 3 to within O(|λ2/λ1|^(2m)) = O(9^(−m)).

A clever variant of the power method is inverse iteration.
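The power method with Rayleigh-quotient estimates, as just described, can be sketched in Python (a minimal sketch, not one of the book's FORTRAN programs; the matrix and starting vector are those of the examples):

```python
# Power method with Rayleigh-quotient estimates, for the symmetric
# example matrix B with eigenvalues 3, -1, -1.
B = [[1, 2, 0],
     [2, 1, 0],
     [0, 0, -1]]

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def power_method(B, z, steps):
    """Return the Rayleigh quotient after the given number of steps,
    rescaling the iterate each step to avoid overflow."""
    for _ in range(steps):
        Bz = matvec(B, z)
        rq = dot(z, Bz) / dot(z, z)     # Rayleigh quotient (valid: B = B^T)
        s = max(abs(c) for c in Bz)
        z = [c / s for c in Bz]         # scale so the entries stay of size 1
    return rq

print(power_method(B, [1.0, 2.0, 3.0], 12))   # close to the dominant eigenvalue 3
```

Since B is symmetric, the Rayleigh quotient gains roughly |λ2/λ1|² = 1/9 in accuracy per step, as observed in Example 4.12.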
In inverse iteration, one chooses, in addition to the starting vector z satisfying (4.63), a number p not equal to an eigenvalue of B, and then forms the sequence

    z(0) = z        z(m) = (B − pI)^(−1) z(m−1)        m = 1, 2, . . .

Note that, for each of the eigenvectors yi of B in (4.63),

    (B − pI) yi = (λi − p) yi        therefore        (B − pI)^(−1) yi = (λi − p)^(−1) yi

This shows that z is also the sum of eigenvectors of (B − pI)^(−1), with corresponding eigenvalues (λi − p)^(−1). If now p is quite close to one of the eigenvalues, λj say, and not to any other, then (λj − p)^(−1) will be quite large in absolute value compared with the other eigenvalues (λi − p)^(−1), and our earlier discussion of the power method would then allow the conclusion that a suitably scaled version of the sequence z(0), z(1), z(2), . . . converges quite fast to the eigenvector yj corresponding to λj, while the corresponding ratios (4.68) will converge equally fast to the number (λj − p)^(−1). This makes inverse iteration a very effective method in the following situation: We have already obtained a good approximation to an eigenvalue of B and wish to refine this approximation and/or calculate a corresponding eigenvector.

As we described it, inverse iteration would require first the construction of the matrix (B − pI)^(−1). But, as discussed in Sec.
4.4, we do not construct such an inverse explicitly. Rather, with z(m) = (B − pI)^(−1) z(m−1), we note that

    (B − pI) z(m) = z(m−1)

Consequently, once we have obtained a PLU factorization for the matrix B − pI, we obtain z(m) from z(m−1) by the substitution Algorithm 4.4, that is, in O(n^2) operations. This is no more expensive than the explicit calculation of the product (B − pI)^(−1) z(m−1), if we had it.

Here is a FORTRAN subroutine for carrying out inverse iteration. At the mth step, we have chosen u = z(m); that is, we calculate the Rayleigh quotient at each step.

      SUBROUTINE INVITR ( B, N, EGUESS, VGUESS, W, D, IPIVOT,
     *                    EVALUE, VECTOR, IFLAG )
C  CALLS  F A C T O R , S U B S T .
      INTEGER IFLAG,IPIVOT(N),I,ITER,ITERMX,J
      REAL B(N,N),D(N),EGUESS,EVALUE,VECTOR(N),VGUESS(N),W(N,N)
     *    ,EPSLON,EVNEW,EVOLD,SQNORM
C****** I N P U T ******
C  B  THE MATRIX OF ORDER  N  WHOSE EIGENVALUE/VECTOR IS SOUGHT.
C  N  ORDER OF THE MATRIX  B .
C  EGUESS  A FIRST GUESS FOR THE EIGENVALUE.
C  VGUESS  N-VECTOR CONTAINING A FIRST GUESS FOR THE EIGENVECTOR.
C****** W O R K   A R E A ******
C  W  MATRIX OF ORDER  N
C  D  VECTOR OF LENGTH  N
C  IPIVOT  INTEGER VECTOR OF LENGTH  N
C****** O U T P U T ******
C  EVALUE  COMPUTED APPROXIMATION TO EIGENVALUE
C  VECTOR  COMPUTED APPROXIMATION TO EIGENVECTOR
C  IFLAG  AN INTEGER,
C         = 1 OR -1 (AS SET IN FACTOR), INDICATES THAT ALL IS WELL,
C         = 0 , INDICATES THAT SOMETHING WENT WRONG.