Conte, de Boor, Elementary Numerical Analysis: An Algorithmic Approach
The reason for this exhortation is as follows: Calculating the vector A⁻¹b for given b amounts to finding the solution of the linear system Ax = b. Once the triangular factorization for A has been calculated by Algorithm 4.2, the calculation of A⁻¹b can therefore be accomplished by Algorithm 4.4 in exactly the same number of multiplications and additions as it takes to form the product of A⁻¹ with the vector b, as was pointed out earlier. Hence, once the triangular factorization is known, no advantage for calculating A⁻¹b can be gained by knowing A⁻¹ explicitly.
(Since forming the product A⁻¹B amounts to multiplying each column of B by A⁻¹, these remarks apply to calculating such matrix products as well.) On the other hand, a first step toward calculating A⁻¹ is finding the triangular factorization for A, which is then followed by n applications of the substitution algorithm; hence calculating A⁻¹ presents a considerable initial computational outlay when compared with the work of calculating A⁻¹b. In addition, the matrix so computed is only an approximate inverse and is, in a sense, less accurate than the triangular factorization, since it is derived from the factorization by further calculations.
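The cost comparison just described can be sketched in modern Python rather than the book's FORTRAN. The function names and details below are illustrative only: a plain Doolittle factorization without the scaled partial pivoting of Algorithm 4.2, followed by substitution, and then the inverse obtained by n substitutions with the unit vectors.

```python
# Illustrative sketch (not the book's FACTOR/SUBST): factor A once, then
# obtain A^(-1)b, or all of A^(-1), by repeated substitution.  No
# pivoting, for brevity.

def lu_factor(A):
    """Return a work array W holding L (below the diagonal, with unit
    diagonal implied) and U (on and above the diagonal)."""
    n = len(A)
    W = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            W[i][k] /= W[k][k]                  # multiplier l_ik
            for j in range(k + 1, n):
                W[i][j] -= W[i][k] * W[k][j]    # eliminate below the pivot
    return W

def substitute(W, b):
    """Solve LUx = b: forward substitution for Ly = b, then back
    substitution for Ux = y."""
    n = len(W)
    y = b[:]
    for i in range(n):                  # forward, unit lower triangle
        for j in range(i):
            y[i] -= W[i][j] * y[j]
    x = y
    for i in reversed(range(n)):        # back, upper triangle
        for j in range(i + 1, n):
            x[i] -= W[i][j] * x[j]
        x[i] /= W[i][i]
    return x

def inverse(A):
    """One factorization, then n substitutions, one per unit vector;
    column j of the inverse solves A x = i_j."""
    n = len(A)
    W = lu_factor(A)
    cols = [substitute(W, [1.0 if i == j else 0.0 for i in range(n)])
            for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]

A = [[2.0, 1.0], [4.0, 3.0]]
W = lu_factor(A)                      # factor once ...
print(substitute(W, [3.0, 7.0]))      # ... A^(-1)b by substitution: [1.0, 1.0]
print(inverse(A))                     # [[1.5, -0.5], [-2.0, 1.0]]
```

Note that once `W` is in hand, each call to `substitute` costs only about n² multiplications, the same as multiplying a vector by an explicitly known A⁻¹.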
Hence nothing can be gained, and accuracy can be lost, by using A⁻¹ explicitly in the calculation of matrix products involving A⁻¹.

Below, we have listed a FORTRAN program for the calculation of the inverse of a given N × N matrix A. This program uses the subprograms FACTOR and SUBST mentioned earlier. Sample input and the resulting output are also listed. The following remarks might help in the understanding of the coding. The order N of the matrix A is part of the input to this program; hence it is not possible to specify the exact dimension of the matrix A during compilation. On the other hand, both FACTOR and SUBST expect matrices A and/or W of exact dimension N × N.
In the FORTRAN program below, the matrix A is therefore stored in a one-dimensional array, making use of the FORTRAN convention that the (I,J) entry of a two-dimensional (N,M) array is the ((J - 1)*N + I) entry in an equivalent one-dimensional array. The same convention is followed in storing the entries of the Jth column of A⁻¹ in the one-dimensional array AINV: the subroutine SUBST is given the ((J - 1)*N + 1) entry of AINV as the first entry of the N-vector called X in SUBST, into which the solution of the system Ax = iⱼ (with iⱼ the jth unit vector) is to be stored.

FORTRAN PROGRAM FOR CALCULATING THE INVERSE OF A GIVEN MATRIX

C  PROGRAM FOR CALCULATING THE INVERSE OF A GIVEN MATRIX
C  CALLS  F A C T O R , S U B S T .
      PARAMETER NMAX=30,NMAXSQ=NMAX*NMAX
      INTEGER I,IBEG,IFLAG,IPIVOT(NMAX),J,N,NSQ
      REAL A(NMAXSQ),AINV(NMAXSQ),B(NMAX)
    1 READ 501, N
  501 FORMAT(I2)
      IF (N .LT. 1 .OR. N .GT. NMAX)  STOP
C              READ IN MATRIX ROW BY ROW
      NSQ = N*N
      DO 10 I=1,N
   10    READ 510, (A(J),J=I,NSQ,N)
  510 FORMAT(5E15.7)
C
      CALL FACTOR ( A, N, B, IPIVOT, IFLAG )
      IF (IFLAG .EQ. 0) THEN
         PRINT 611
  611    FORMAT('1MATRIX IS SINGULAR')
         GO TO 1
      END IF
      DO 21 I=1,N
   21    B(I) = 0.
      IBEG = 1
      DO 30 J=1,N
         B(J) = 1.
         CALL SUBST ( A, IPIVOT, B, N, AINV(IBEG) )
         B(J) = 0.
   30    IBEG = IBEG + N
      PRINT 630
  630 FORMAT('1THE COMPUTED INVERSE IS '//)
      DO 31 I=1,N
   31    PRINT 631, I, (AINV(J),J=I,NSQ,N)
  631 FORMAT('0ROW ', I2,8E15.7/(7X,8E15.7))
      GO TO 1
      END

SAMPLE INPUT

RESULTING OUTPUT: THE COMPUTED INVERSE

EXERCISES

4.4-1 Modify the FORTRAN program for the calculation of A⁻¹ given in the text to obtain a program which solves the more general problem of calculating the product C = A⁻¹B, where A is a given (invertible) n × n matrix and B is a given n × m matrix.

4.4-2 Calculate the inverse of the coefficient matrix A of the system of Exercise 4.2-8; then check the accuracy of the computed inverse.

4.4-3 Show that the matrix A is invertible, but that A cannot be written as the product of a lower-triangular matrix with an upper-triangular matrix.

4.4-4 Prove that the sum and the product of two lower- (upper-) triangular matrices is lower- (upper-) triangular and that the inverse of an invertible lower- (upper-) triangular matrix is lower- (upper-) triangular.

4.4-5 Prove that a triangular factorization is unique in the following sense: If A is invertible and L₁U₁ = A = L₂U₂, where L₁, L₂ are unit-lower-triangular matrices and U₁, U₂ are upper-triangular matrices, then L₁ = L₂ and U₁ = U₂.
(Hint: Use Exercise 4.1-8 to prove that U₁, L₂ must be invertible; then show that

    L₂⁻¹L₁ = U₂U₁⁻¹

must hold, which implies, with Exercise 4.4-4, that L₂⁻¹L₁ must be a diagonal matrix; hence, since both L₁ and L₂ have 1's on their diagonal, L₂⁻¹L₁ = I; that is, L₁ = L₂ and then also U₁ = U₂.)

4.4-6 Use the results of Exercise 4.4-5 to show that if A is symmetric (A = Aᵀ) and has a triangular factorization, A = LU, then U = DLᵀ, with D the diagonal matrix having the same diagonal entries as U.

4.4-7 Prove: If the tridiagonal matrix A can be factored as A = LU, where L is lower-triangular and U is upper-triangular, then both L and U are also tridiagonal. Interpret Algorithm 4.3 as a way to factor tridiagonal matrices.

4.4-8 Compact schemes construct the triangular factors L and U for A using Eqs. (4.23) in the form

    uᵢⱼ = aᵢⱼ − Σ_{r<i} lᵢᵣuᵣⱼ              (i ≤ j)
    lᵢⱼ = (aᵢⱼ − Σ_{r<j} lᵢᵣuᵣⱼ)/uⱼⱼ        (i > j)

to derive the interesting entries of L and U. In effect, the final content of the work array W is derived by carrying out, for each entry, all modifications at one time, thus avoiding the writing down of the various intermediate results. Of course, this has to be done in some systematic order.
For lᵢⱼ (for i > j) cannot be calculated unless one already knows lᵢᵣ and uᵣⱼ for r < j. Again, one must already know lᵢᵣ and uᵣⱼ for r < i in order to calculate uᵢⱼ (for i ≤ j).

(a) Devise an algorithm for the construction of L and U from A in this compact manner.
(b) Modify your algorithm to allow for scaled partial pivoting.
(c) If your algorithm is not already done this way, modify it so that the innermost loops run over row indices (see Exercise 4.2-7 for motivation).

4.4-9: Choleski's method If the matrix A of order n is real, symmetric (A = Aᵀ), and positive definite (that is, xᵀAx > 0 for all nonzero n-vectors x), then it is possible to factor A as LDLᵀ, where L is a real unit-lower-triangular matrix and D is a (positive) diagonal matrix with diagonal entries dⱼ.
Thus, from (4.23),

    dⱼ = aⱼⱼ − Σ_{r<j} lⱼᵣ²dᵣ

while

    lᵢⱼ = (aᵢⱼ − Σ_{r<j} lᵢᵣdᵣlⱼᵣ)/dⱼ        (i > j)

Write a FORTRAN subroutine based on these equations for the generation of (the interesting part of) L and D, and a subroutine for solving Ax = b for x by substitution once L and D are known.

4.4-10 Show that Choleski's method is applicable whenever the matrix A is of the form BBᵀ with B an invertible matrix.

4.5 ERROR AND RESIDUAL OF AN APPROXIMATE SOLUTION; NORMS

Any computed solution of a linear system must, because of roundoff, be considered an approximate solution. In this section, we discuss the difficult problem of ascertaining the error of an approximate solution (without knowing the solution).
In the discussion, we introduce and use norms as a convenient means of measuring the “size” of vectors and matrices.

If x̂ is a computed solution for the linear system Ax = b, then its error is the difference

    e = x − x̂

This error is, of course, usually not known to us (for otherwise, we would know the solution x, making any further discussions unnecessary). But we can always compute the residual (error)

    r = b − Ax̂

since Ax is just the right side b.
The residual then measures how well x̂ satisfies the linear system Ax = b. If r is the zero vector, then x̂ is the (exact) solution; that is, e is then zero. One would expect each entry of r to be small, at least in a relative sense, if x̂ is a good approximation to the solution x.

Example 4.5 Consider a simple linear system whose unique solution x has the entries x₁ = x₂ = 1. One approximate solution x̂ has a “small” residual (relative to the right side) corresponding to a relatively “small” error in this case. On the other hand, a second approximate solution still has a relatively “small” residual, while the error is now relatively “large.” By taking a different right side, we can achieve the opposite effect. The linear system so obtained has the unique solution x₁ = 100, x₂ = −100. An approximate solution now has a residual that is relatively “large,” while the error is relatively “small” (only 1 percent of the solution).

As this example shows, the size of the residual r = b − Ax̂ of an approximate solution x̂ is not always a reliable indicator of the size of the error e = x − x̂ in this approximate solution. Whether or not a “small” residual implies a “small” error depends on the “size” of the coefficient matrix and of its inverse, in a manner to be made precise below. For this discussion, we need a means of measuring the “size” of n-vectors and n × n matrices.

The absolute value provides a convenient way to measure the “size” of real numbers or even of complex numbers.
It is much less certain how one should measure the size of an n-vector or an n × n matrix. There is certainly not any one way of doing this which is acceptable in all situations. For example, a frequently used measure for the size of an n-vector a is the nonnegative number given in (4.31). Assume now that the computed solution x̂ to Ax = b is known to have six-place accuracy in this way of measuring size; i.e., that (4.32) holds. Then this would indicate a very satisfactory computed solution in case the unknowns are, say, approximate values of the well-behaved solution of a certain differential equation.
But if one of the unknowns happens to be your annual income while another is the gross national product, then (4.32) gives no hint as to whether or not x̂ is a satisfactory computed solution (as far as you are concerned), since, with (4.32) holding, the error in your computed yearly income (even if received for only one year) might make you independently wealthy or put you in debt for life. A measure like the relative error in each unknown (assuming your yearly income to be the first unknown) would give you much more information, as would certain measures of size which use several numbers (rather than just one nonnegative number) to describe the “size” of an n-vector.

For most situations, however, it suffices to measure the size of an n-vector by a norm.
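To make the income-versus-GNP point concrete, here is a short Python sketch, with invented numbers, contrasting a single-number measure of size (here the max-norm, one common choice of norm) with a componentwise relative measure:

```python
# Max-norm of a vector: one common single-number measure of "size".
def max_norm(v):
    return max(abs(vi) for vi in v)

# Invented data: the first unknown is "your income", the second "the GNP".
x     = [30_000.0, 1.5e12]               # true solution
x_hat = [90_000.0, 1.5e12 + 40_000.0]    # computed solution

e = [xi - xh for xi, xh in zip(x, x_hat)]

# Measured by the norm, x_hat looks accurate to better than six places ...
print(max_norm(e) / max_norm(x))         # about 4e-8

# ... but entry by entry, the computed income is off by 200 percent.
print([abs(ei) / abs(xi) for ei, xi in zip(e, x)])
```

The single number hides the disaster in the first entry because the norm is dominated by the largest entry of the vector; the componentwise ratios use several numbers and expose it.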