DeTurck, Wilf - Lecture Notes on Numerical Analysis (523142), page 28
Christmas, however, comes just once a year, and since the general trend of the off-diagonal entries is downwards, and the previous largest entry was uncommonly large, most of the time we'll be replacing the former largest entry with a smaller entry. In that case we'll just have to re-search the entire row to find the new champion.

The number of rows that must be searched in their entirety in order to update the loc array is therefore at most two (for rows p and q) plus the number of rows i in which loc[i] happens to be equal to p or to q.
It is reasonable to expect that the probability of the event loc[i] ∈ {p, q} is about two chances out of n, since it seems that it ought not to be any more likely that the winner was previously in those two columns than in any other columns. This has never been in any sense proved, but we will assume that it is so. Then the expected number of rows that will have to be completely searched will be about four, on the average (the pth, the qth, and an average of about two others).

It follows that the expected cost of maintaining the loc array is O(n) per rotation. The cost of finding the largest off-diagonal element has therefore been reduced to O(n) per rotation, after all bills have been paid.
Hence the cost of finding that element is comparable with all of the other operations that go on in the algorithm, and it poses no special problem. Using Corbató's suggestion, and subject to the equidistribution hypothesis mentioned above, the cost of the complete Jacobi algorithm for eigenvalues and eigenvectors is O(n^3). We show below the complete Maple procedure for updating the array loc immediately after a rotation has been done in the plane of p and q. The logic is as described above: rows p and q are re-searched outright; a row whose former champion sat in column p or q is also re-searched, since that entry has changed; otherwise the champion is intact, and only the two freshly rotated entries can displace it.

update:=proc(p,q) local i,r;
global loc,A,n;
for i from 1 to n-1 do
  if i=p or i=q then searchrow(i)       # rows p and q changed completely
  else
    r:=loc[i];
    if r=p or r=q then
      searchrow(i)                      # the former champion was overwritten
    else
      # the former champion is intact; only the new entries in columns
      # p and q (above the diagonal) can displace it
      if p>i and abs(A[i,p])>=abs(A[i,loc[i]]) then loc[i]:=p; fi;
      if q>i and abs(A[i,q])>=abs(A[i,loc[i]]) then loc[i]:=q; fi;
    fi;
  fi;
od;
RETURN();
end:

The above procedure uses a small auxiliary routine:

Procedure searchrow(i)

This procedure searches the portion of row i of the n × n matrix A that lies above the main diagonal and places the index of a column that contains an entry of largest absolute value in loc[i].

searchrow:=proc(i)
local j,bigg;
global loc,A,n,P;
loc[i]:=i+1;        # so loc[i] is defined even for an all-zero row
bigg:=0;
for j from i+1 to n do
  if abs(A[i,j])>bigg then
    bigg:=abs(A[i,j]);
    loc[i]:=j;
  fi;
od;
RETURN();
end:

We should mention that the search can be speeded up a little bit more by using a data structure called a heap.
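For readers who want to experiment outside Maple, the same loc-array bookkeeping can be sketched in Python. This is an illustrative translation under the same conventions (only the upper triangle is tracked), not the text's program:

```python
# Illustrative Python translation of the searchrow/update bookkeeping:
# loc[i] holds the column of the largest-magnitude entry of row i that
# lies strictly to the right of the diagonal.
import random

def searchrow(A, loc, i):
    """Re-search row i above the diagonal and record its champion column."""
    n = len(A)
    loc[i] = i + 1
    for j in range(i + 2, n):
        if abs(A[i][j]) > abs(A[i][loc[i]]):
            loc[i] = j

def update(A, loc, p, q):
    """Refresh loc after a rotation in the (p, q) plane."""
    n = len(A)
    for i in range(n - 1):
        if i == p or i == q or loc[i] in (p, q):
            searchrow(A, loc, i)   # the whole row, or its champion, changed
        else:
            # champion intact: only the new entries in columns p and q
            # (above the diagonal) can displace it
            for col in (p, q):
                if col > i and abs(A[i][col]) >= abs(A[i][loc[i]]):
                    loc[i] = col
```

A quick way to exercise it is to build a random symmetric matrix, overwrite rows and columns p and q (standing in for a rotation), call update, and check that each loc[i] again points at its row's champion.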
What we want is to store the locations of the biggest elements in each row in such a way that we can quickly access the biggest of all of them.

If we had stored the set of winners in a heap, or priority queue, then we would have been able to find the overall winner in a single step, and the expense would have been the maintenance of the heap structure, at a cost of a mere O(log n) operations. This would have reduced the program overhead by an additional twenty percent or thereabouts, although the programming job itself would have gotten harder.

To learn about heaps and how to use them, consult books on data structures, computer science, and discrete mathematics.

3.11 Getting it together

The various pieces of the procedure that will find the eigenvalues and eigenvectors of a real symmetric matrix by the method of Jacobi are now in view.
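The heap idea can be sketched with Python's standard heapq module (the RowChampions class below is a hypothetical helper, not from the text). Since heapq is a min-heap with no decrease-key operation, magnitudes are negated, and entries made obsolete by a row update are discarded lazily when they surface:

```python
# Sketch of the heap idea: keep each row's champion in a priority queue
# so the global champion is found in O(log n) amortized time.
import heapq

class RowChampions:
    def __init__(self, values):
        # values: dict mapping row -> (magnitude, column) of its champion
        self.best = dict(values)
        self.heap = [(-m, i, j) for i, (m, j) in values.items()]
        heapq.heapify(self.heap)

    def update_row(self, i, magnitude, column):
        """Record a new champion for row i; the old heap entry goes stale."""
        self.best[i] = (magnitude, column)
        heapq.heappush(self.heap, (-magnitude, i, column))

    def overall_winner(self):
        """Return (row, column) of the global champion, skipping stale entries."""
        while True:
            m, i, j = self.heap[0]
            if self.best.get(i) == (-m, j):
                return i, j            # fresh entry: the global champion
            heapq.heappop(self.heap)   # stale: row i was updated since
```

The lazy-deletion pattern keeps each heap operation at O(log n) without needing the decrease-key primitive that heapq lacks.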
It's time to discuss the assembly of those pieces.

Procedure jacobi(eps,dgts)

The input to the jacobi procedure is the n × n matrix A, and a parameter eps that we will use as a reduction factor to test whether or not the current matrix is "diagonal enough," so that the procedure can terminate its operation. Further input is dgts, which is the number of significant digits that you would like to be carried in the calculation.

The input matrix A itself is global, which is to say that its value is set outside of the jacobi procedure, and it is available to all procedures that are involved.

The output of procedure jacobi will be an orthogonal matrix P that will hold the eigenvectors of A in its rows, and a linear array eig that will hold the eigenvalues of A. The input matrix A will be destroyed by the action of the procedure.

The first step in the operation of the procedure will be to compute the original off-diagonal sum of squares, test.
The Jacobi process will halt when this sum of squares of all off-diagonal elements has been reduced to eps*test or less.

Next we set the matrix P to the n × n identity matrix, and initialize the array loc by calling the subroutine searchrow for each row of A. This completes the initialization.

The remaining steps all get done while the sum of the squares of the off-diagonal entries of the current matrix A exceeds eps times test.

We first find the largest off-diagonal element by searching through the numbers A[i,loc[i]] for the largest in absolute value. If that is A[p,loc[p]], we will then know p, q, A[p,q], A[p,p] and A[q,q], and it will be time to compute the sine and cosine of the rotation angle θ.
This is a fairly ticklish operation, since the Jacobi method is sensitive to small inaccuracies in these quantities. Also, note that we are going to calculate sin θ and cos θ without actually calculating θ itself. After careful analysis it turns out that the best formulas for the purpose are:

    x := 2 A[p,q]
    y := A[p,p] - A[q,q]
    t := (x^2 + y^2)^(1/2)                               (3.11.1)
    sin θ := sign(xy) ((1 - |y|/t)/2)^(1/2)
    cos θ := ((1 + |y|/t)/2)^(1/2).

Having computed sin θ and cos θ, we can now call rotate twice, as discussed in section 3.9, first with opt=1 and again with opt=2.
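A quick numerical sanity check of the formulas (3.11.1): for a 2 × 2 symmetric block, the computed sine and cosine should annihilate the off-diagonal entry under the similarity transform J A J^T, where J carries the block [[c, s], [-s, c]]. The following sketch (illustrative, not from the text) verifies this:

```python
# Numerical check of formulas (3.11.1): the rotation they define should
# zero the off-diagonal entry of a 2x2 symmetric block.
import math

def rotation(app, aqq, apq):
    """Return (sin, cos) of the Jacobi angle from formulas (3.11.1)."""
    x = 2.0 * apq
    y = app - aqq
    t = math.hypot(x, y)            # sqrt(x^2 + y^2)
    sgn = 1.0 if x * y >= 0 else -1.0
    s = sgn * math.sqrt(0.5 * (1.0 - abs(y) / t))
    c = math.sqrt(0.5 * (1.0 + abs(y) / t))
    return s, c

def rotated_offdiag(app, aqq, apq):
    """Off-diagonal entry of J A J^T for the block [[app, apq], [apq, aqq]]."""
    s, c = rotation(app, aqq, apq)
    return (c * c - s * s) * apq + s * c * (aqq - app)
```

Algebraically, cos 2θ = |y|/t and sin 2θ = sign(xy)|x|/t, so tan 2θ = x/y, which is exactly the condition for the rotated off-diagonal entry to vanish.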
The matrix A has now been transformed into the next stage of its march towards diagonalization.

Next we multiply the matrix P on the left by the same n × n orthogonal matrix of Jacobi (3.8.3), in order to update the matrix that will hold the eigenvectors of A on output. Notice that this multiplication affects only rows p and q of the matrix P, so only 2n elements are changed.

Next we call update, as discussed in section 3.10, to modify the loc array to correspond to the newly rotated matrix, and we are at the end (or od) of the while that was started a few paragraphs ago.
In other words, we're finished. The Maple program for the Jacobi algorithm follows.

jacobi:=proc(eps,dgts)
local test,eig,i,j,iter,big,p,q,x,y,t,s,c,x1;
global loc,n,P,A;
with(linalg):
# initialize
iter:=0;
n:=rowdim(A);
Digits:=dgts;
loc:=array(1..n);
test:=add(add(2*A[i,j]^2,j=i+1..n),i=1..n-1);   # sum of squares of o-d entries
P:=matrix(n,n,(i,j)->if i=j then 1 else 0 fi);  # initialize eigenvector matrix
for i from 1 to n-1 do searchrow(i) od;         # set up initial loc array
big:=test;
while big>eps*test do                           # begin next sweep
  x:=0;                                         # find largest o.d. element
  for i from 1 to n-1 do
    if abs(A[i,loc[i]])>x then x:=abs(A[i,loc[i]]); p:=i; fi;
  od;
  q:=loc[p];
  x:=2*A[p,q]; y:=A[p,p]-A[q,q];                # find sine and cosine of theta
  t:=evalf(sqrt(x^2+y^2));
  s:=sign(x*y)*evalf(sqrt(0.5*(1-abs(y)/t)));
  c:=evalf(sqrt(0.5*(1+abs(y)/t)));
  rotate(s,c,p,q,1); rotate(s,c,p,q,2);         # apply rotations to A
  for j from 1 to n do                          # update matrix of eigenvectors
    t:=c*P[p,j]+s*P[q,j];
    P[q,j]:=-s*P[p,j]+c*P[q,j];
    P[p,j]:=t;
  od;
  update(p,q);                                  # update loc array
  big:=big-x^2/2;
  iter:=iter+1;                                 # go do next sweep
od;                                             # end of while
eig:=[seq(A[i,i],i=1..n)];                      # output eigenvalue array
print(eig,P,iter);                              # print eigenvals, vecs, and
RETURN();                                       # number of sweeps needed
end:

To use the programs one does the following. First, enter the four procedures jacobi, update, rotate, searchrow into a Maple worksheet. Next enter the matrix A whose eigenvalues and eigenvectors are wanted. Then choose dgts, the number of digits of accuracy to be maintained, and eps, the fraction by which the original off-diagonal sum of squares must be reduced for convergence.

As an example, a call jacobi(.00000001,15) will carry 15 digits along in the computation, and will terminate when the sum of squares of the off-diagonal elements is .00000001 times what it was in the input matrix.

3.12 Remarks

For a parting volley in the direction of eigenvalues, let's review some connections with the first section of this chapter, in which we studied linear mappings, albeit sketchily.

It's worth noting that the eigenvalues of a matrix really are the eigenvalues of the linear mapping that the matrix represents with respect to some basis.

In fact, suppose T is a linear mapping of E^n (Euclidean n-dimensional space) to itself. If we choose a basis for E^n then T is represented by an n × n matrix A with respect to that basis.
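For cross-checking outside Maple, the whole method fits in a short Python sketch. This is an illustration of the same classical Jacobi iteration, not the text's program; for brevity it re-searches the whole matrix for the largest off-diagonal entry (omitting the loc bookkeeping of section 3.10), and it recomputes the off-diagonal sum of squares each pass rather than decrementing it by x^2/2:

```python
# Illustrative Python sketch of the classical Jacobi eigenvalue method:
# rotate away the largest off-diagonal entry until the off-diagonal sum
# of squares falls below eps times its initial value.
import math

def offdiag_sq(A):
    n = len(A)
    return sum(2.0 * A[i][j] ** 2 for i in range(n) for j in range(i + 1, n))

def jacobi(A, eps=1e-16):
    """Return (eigenvalues, P, rotations); rows of P are the eigenvectors.
    A is a symmetric matrix given as a list of lists; it is destroyed."""
    n = len(A)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    test = offdiag_sq(A)
    rotations = 0
    while test > 0 and offdiag_sq(A) > eps * test:
        # largest off-diagonal element (full search, for simplicity)
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        x, y = 2.0 * A[p][q], A[p][p] - A[q][q]
        t = math.hypot(x, y)
        if t == 0.0:
            break                      # already diagonal
        # sine and cosine from formulas (3.11.1)
        s = (1.0 if x * y >= 0 else -1.0) * math.sqrt(0.5 * (1.0 - abs(y) / t))
        c = math.sqrt(0.5 * (1.0 + abs(y) / t))
        # A <- J A J^T, touching only rows and columns p and q
        for j in range(n):
            ap = c * A[p][j] + s * A[q][j]
            A[q][j] = -s * A[p][j] + c * A[q][j]
            A[p][j] = ap
        for i in range(n):
            ap = c * A[i][p] + s * A[i][q]
            A[i][q] = -s * A[i][p] + c * A[i][q]
            A[i][p] = ap
        # P <- J P accumulates the eigenvectors in the rows of P
        for j in range(n):
            pp = c * P[p][j] + s * P[q][j]
            P[q][j] = -s * P[p][j] + c * P[q][j]
            P[p][j] = pp
        rotations += 1
        if rotations > 50 * n * n:
            break                      # safety stop for a sketch
    return [A[i][i] for i in range(n)], P, rotations
```

As in the text, the rows of the returned P are the eigenvectors of the original matrix, so for each k the relation A v_k ≈ eig[k] v_k can be checked directly.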
Now if we change to a different basis, then the same linear mapping is represented by B = HAH^(-1), where H is a nonsingular n × n matrix. The proof of this fact is by a straightforward calculation, and can be found in standard references on linear algebra.

First question: what happens to the determinant if we change the basis? Answer: nothing, because

    det(HAH^(-1)) = det(H) det(A) det(H^(-1)) = det(A).       (3.12.1)

Hence the value of the determinant is a property of the linear mapping T, and will be the same for every matrix that represents T in some basis. Hence we can speak of det(T), the determinant of the linear mapping itself.

Next question: what happens to the eigenvalues if we change basis? Suppose x is an eigenvector of A for the eigenvalue λ.