Conte, de Boor - Elementary Numerical Analysis: An Algorithmic Approach, page 26
The (k - 1)st system has the following form: the first k equations are already in upper-triangular form, while the last n - k equations involve only the unknowns xk, . . . , xn. From this, the kth system A(k)x = b(k) is derived during the kth step of Gauss elimination as follows: The first k equations are left unchanged; further, if the coefficient a_kk^(k-1) of xk in equation k is not zero, then m_ik = a_ik^(k-1)/a_kk^(k-1) times equation k is subtracted from equation i, thereby eliminating the unknown xk from equation i, i = k + 1, . . . , n.
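The elimination step just described can be sketched in a few lines of modern code (a sketch of ours, not the book's program; Python, with 0-based indexing, so step k of the text is `eliminate_step(W, k-1)` here):

```python
# A minimal sketch (not the book's code) of one step of Gauss elimination
# on an augmented array W = [A | b]: subtract m_ik = w_ik/w_kk times
# equation k from equation i, for each equation below the pivot row.

def eliminate_step(W, k):
    """Eliminate x_k from equations k+1, ..., n-1 in place,
    assuming the pivot W[k][k] is nonzero."""
    n = len(W)
    for i in range(k + 1, n):
        m = W[i][k] / W[k][k]       # the multiplier m_ik
        for j in range(k, len(W[i])):
            W[i][j] -= m * W[k][j]  # equation i minus m_ik times equation k
```

Applying `eliminate_step(W, 0)`, `eliminate_step(W, 1)`, . . . in turn carries the augmented array through the systems A(1)x = b(1), A(2)x = b(2), and so on.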
The resulting system A(k)x = b(k) is clearly equivalent to A(k-1)x = b(k-1), hence, by induction, to the original system; further, the kth system has its first k + 1 equations in upper-triangular form. After n - 1 steps of this procedure, one arrives at the system A(n-1)x = b(n-1), whose coefficient matrix is upper-triangular, so that this system can now be solved quickly by back-substitution.

Example 4.2 Consider the following linear system:

(4.17)
(a)  2x1 + 3x2 -  x3 = 5
(b)  4x1 + 4x2 - 3x3 = 3
(c) -2x1 + 3x2 -  x3 = 1

To eliminate x1 from equations (b) and (c), we add -4/2 = -2 times equation (a) to equation (b), getting the new equation (b),

-2x2 - x3 = -7

Also, adding -(-2)/2 = 1 times equation (a) to equation (c), we get the new equation (c),

6x2 - 2x3 = 6

This gives the new system A(1)x = b(1):

(4.18)
(a)  2x1 + 3x2 - x3 = 5
(b)      - 2x2 - x3 = -7
(c)       6x2 - 2x3 = 6

completing the first step of Gauss elimination for this system.
In the second (and, for this example, last) step, we eliminate x2 from equation (c) by adding -6/(-2) = 3 times equation (b) to equation (c), getting the new equation (c),

-5x3 = -15

hence the new and final system

(4.19)
(a)  2x1 + 3x2 - x3 = 5
(b)      - 2x2 - x3 = -7
(c)           - 5x3 = -15

By Theorem 4.7, this system is equivalent to the original system (4.17) but has an upper-triangular coefficient matrix; hence it can be solved quickly by back-substitution, as we did in Example 4.1.

In the simple description of Gauss elimination just given, we used the kth equation to eliminate xk from equations k + 1,
. . . , n during the kth step of the procedure. This is of course possible only if, at the beginning of the kth step, the coefficient a_kk^(k-1) of xk in equation k is not zero. Unfortunately, it is not difficult to devise linear systems for which this condition is not satisfied. If, for example, the linear system Ax = b is the system

(4.20)  [display not reproduced here; its equation (a) has a zero coefficient for x1]

then it is impossible to use equation (a) to eliminate x1 from the other equations. To cope with this difficulty and still end up with a triangular system equivalent to the given one, we have to allow at each step more freedom in the choice of the pivotal equation for the step, i.e., the equation which is used to eliminate one unknown from certain of the other equations. In the system (4.20), for example, we could use equation (b) as the pivotal equation during the first step of elimination.
In order to keep within our earlier format, we first bring equation (b) into the top position by interchanging it with (a). In this new ordering, the coefficient of x1 in equation (a) is now nonzero and we can proceed as before, getting the new system A(1)x = b(1). From this, the second (and last) step of Gauss elimination proceeds without any further difficulty and yields the final upper-triangular system, whose solution is then found by back-substitution.

This greater freedom in the choice of the pivotal equation is necessary not only because of the possibility of zero coefficients.
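The interchange remedy can be illustrated with a small made-up system of our own (the entries of the book's system (4.20) are not reproduced in this copy):

```python
# Illustrative sketch with our own 2 x 2 example, not the book's (4.20).
# The first equation has a zero coefficient for x1, so it cannot serve as
# the pivotal equation; interchanging rows first lets elimination proceed.

W = [[0.0, 1.0, 2.0],    # (a)        x2 = 2   (zero coefficient of x1)
     [3.0, 1.0, 5.0]]    # (b)  3x1 + x2 = 5

k = 0
if W[k][k] == 0.0:
    # look for an equation j > k with a nonzero coefficient of x1
    j = next(i for i in range(k + 1, len(W)) if W[i][k] != 0.0)
    W[k], W[j] = W[j], W[k]          # interchange equations j and k

# now the ordinary elimination step works (the multiplier here is 0/3 = 0)
m = W[1][k] / W[k][k]
for col in range(k, 3):
    W[1][col] -= m * W[k][col]

# back-substitution: x2 = 2, then x1 = (5 - 2)/3 = 1
x2 = W[1][2] / W[1][1]
x1 = (W[0][2] - W[0][1] * x2) / W[0][0]
```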
Experience has shown that this freedom is also essential in combating rounding-error effects (see Sec. 4.3). The additional work is quite small: At the beginning of the kth elimination step, one looks for a nonzero coefficient for xk in equations k, k + 1, . . . , n, and, if it is found in some equation j > k, one interchanges equations j and k.

Incidentally, there must be such a nonzero coefficient in case A is invertible.
For otherwise our present linear system would contain the n - k + 1 equations

(4.21)  ai,k+1 xk+1 + · · · + ain xn = bi,   i = k, . . . , n

which involve in effect only the n - k unknowns xk+1, . . . , xn. By Theorem 4.3, this subsystem (4.21) would therefore not be solvable for some right side; hence our whole present system would not be solvable for some right side, and therefore, by Theorem 4.4, the coefficient matrix of our present system would not be invertible. But since our present system is equivalent to the original system Ax = b, it would then follow that A is not invertible.
This proves our assertion.

When this process is carried out with the aid of a computer, the n original equations and the various changes made in them have to be recorded in some convenient and systematic way. Typically, one uses an n × (n + 1) working array or matrix, which we will call W, and which contains initially the coefficients and right side of the n equations Ax = b. Whenever some unknown is eliminated from an equation, the changed coefficients and right side for this equation are calculated and stored in the working array W in place of the previous coefficients and right side.
For reasons to be made clear below, we store the multiplier mik = wik/wkk (used to eliminate xk from the ith equation) in wik in place of the number a_ik^(k-1), since the latter is (supposed to be) zero anyway. We also record the row interchanges made with the aid of an integer array p.

Algorithm 4.2: Gauss elimination Given the n × (n + 1) matrix W containing the matrix A of order n in its first n columns and the n-vector b in its last column.

    Initialize the n-vector p to have pi = i, i = 1, . . . , n
    For k = 1, . . . , n - 1, do:
        Look for a nonzero entry among wkk, wk+1,k, . . . , wnk
        If none can be found, signal that A is not invertible and stop
        Otherwise, if wjk is the first such nonzero entry and j ≠ k,
            interchange rows j and k of W and the entries pj and pk
        For i = k + 1, . . . , n, do:
            wik := wik/wkk
            For j = k + 1, . . . , n + 1, do:
                wij := wij - wik wkj
    If wnn = 0, signal that A is not invertible and stop

Otherwise, the original system Ax = b is now known to be equivalent to the system Ux = y, where U and y are given in terms of the final entries of W by

(4.22)  uij = wij for j ≥ i,  uij = 0 for j < i;   yi = wi,n+1,   i = 1, . . . , n

In particular, U is an upper-triangular matrix with all diagonal entries nonzero; hence Algorithm 4.1 can now be used to calculate the solution x.

It is often possible to reduce the computational work necessary for solving Ax = b by taking into account special features of the coefficient matrix A, such as symmetry or sparseness.
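The bookkeeping of Algorithm 4.2 — the working array W, the pivot search, the interchanges recorded in p, and the multipliers stored below the diagonal — can be rendered in modern code (an illustrative sketch of ours, with 0-based indexing; not the book's program):

```python
# Illustrative Python sketch of Algorithm 4.2 (names and indexing are
# ours). W is the n x (n+1) working array [A | b]; the multipliers m_ik
# are stored in W[i][k], and row interchanges are recorded in p.

def gauss_elimination(W):
    """Reduce W = [A | b] in place; return the row-order vector p,
    or None if A is found to be not invertible."""
    n = len(W)
    p = list(range(n))                 # p[i] = i initially
    for k in range(n - 1):
        # look for a nonzero coefficient of x_k in equations k, ..., n-1
        j = next((i for i in range(k, n) if W[i][k] != 0.0), None)
        if j is None:
            return None                # A is not invertible
        if j != k:                     # interchange equations j and k
            W[k], W[j] = W[j], W[k]
            p[k], p[j] = p[j], p[k]
        for i in range(k + 1, n):
            m = W[i][k] / W[k][k]
            W[i][k] = m                # store the multiplier in w_ik
            for col in range(k + 1, n + 1):
                W[i][col] -= m * W[k][col]
    if W[n - 1][n - 1] == 0.0:
        return None                    # A is not invertible
    return p
```

On return, the upper-triangular part of the first n columns of W is U and the last column is y, as in (4.22), so back-substitution (Algorithm 4.1) finishes the job.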
As an example, we now discuss briefly the solution of tridiagonal systems. We say that the matrix A = (aij) of order n is tridiagonal if

aij = 0 whenever |i - j| > 1

In words, A is tridiagonal if the only nonzero entries of A lie on the diagonal of A, aii, i = 1, . . . , n, on the subdiagonal of A, ai,i-1, i = 2, . . . , n, or on the superdiagonal of A, ai,i+1, i = 1, . . . , n - 1.

Assume that the coefficient matrix A of the linear system Ax = b is tridiagonal, and assume further that, for each k, we can use equation k as the pivotal equation during step k. Then, during the kth step of Algorithm 4.2, the variable xk needs to be eliminated only from equation k + 1, k = 1, . . .
, n - 1. Further, during back-substitution, only xk+1 needs to be substituted into equation k in order to find xk, k = n - 1, . . . , 1. Finally, there is no need to store any of the entries of A known to be zero. Rather, only three vectors need to be retained, containing the subdiagonal, the diagonal, and the superdiagonal of A, respectively.

Consider now more specifically the following tridiagonal system of order n:

d1 x1 + c1 x2 = b1
ai xi-1 + di xi + ci xi+1 = bi,   i = 2, . . . , n - 1
an xn-1 + dn xn = bn

Assuming d1 ≠ 0, we eliminate x1 from the second equation, getting the new equation

d2' x2 + c2 x3 = b2'   with   d2' = d2 - (a2/d1)c1,   b2' = b2 - (a2/d1)b1

Next, assuming d2' ≠ 0, we use this equation to eliminate x2 from the third equation, getting the new equation

d3' x3 + c3 x4 = b3'   with   d3' = d3 - (a3/d2')c2,   b3' = b3 - (a3/d2')b2'

Continuing in this manner, we eliminate, during step k, xk from equation k + 1, getting the new equation

dk+1' xk+1 + ck+1 xk+2 = bk+1'   with   dk+1' = dk+1 - (ak+1/dk')ck,   bk+1' = bk+1 - (ak+1/dk')bk'

(where d1' = d1 and b1' = b1) for k = 1, 2, . . .
, n - 1. During back-substitution, we first get, assuming dn' ≠ 0,

xn = bn'/dn'

and then, for k = n - 1, . . . , 1,

xk = (bk' - ck xk+1)/dk'

Algorithm 4.3: Elimination for tridiagonal systems Given the coefficients ai, di, ci and right side bi of the tridiagonal system

ai xi-1 + di xi + ci xi+1 = bi,   i = 1, . . . , n   (with a1 = cn = 0)

    For k = 2, . . . , n, do:
        If dk-1 = 0, signal failure and stop
        Otherwise, dk := dk - (ak/dk-1)ck-1 and bk := bk - (ak/dk-1)bk-1
    If dn = 0, signal failure and stop
    Otherwise, xn := bn/dn and continue:
    For k = n - 1, . . . , 1, do:
        xk := (bk - ck xk+1)/dk

Example 4.3 Solve the linear system

2x1 - x2 = 1
-xi-1 + 2xi - xi+1 = 0,   i = 2, . . . , n - 1
-xn-1 + 2xn = 0

when n = 10.

The following FORTRAN program solves this problem. Note that we have translated Algorithm 4.3 into a subroutine called TRID(SUB, DIAG, SUP, B, N), where SUB, DIAG, SUP, B are N-vectors which are expected to contain the coefficients and right side of the tridiagonal system

SUB(I)*X(I-1) + DIAG(I)*X(I) + SUP(I)*X(I+1) = B(I),   I = 1, . . . , N

[with SUB(1) and SUP(N) ignored].
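For comparison, Algorithm 4.3 can also be sketched in modern Python (an illustrative version of ours; the book's own implementation is the FORTRAN subroutine TRID below):

```python
# Illustrative Python sketch of Algorithm 4.3 (ours, for comparison with
# the FORTRAN subroutine TRID). a, d, c, b hold the subdiagonal, diagonal,
# superdiagonal, and right side; a[0] and c[n-1] are ignored. d and b are
# overwritten, and b returns the solution.

def trid(a, d, c, b):
    n = len(d)
    for i in range(1, n):          # factorization / forward elimination
        m = a[i] / d[i - 1]        # multiplier for equation i
        d[i] -= m * c[i - 1]
        b[i] -= m * b[i - 1]
    b[n - 1] /= d[n - 1]           # back-substitution
    for i in range(n - 2, -1, -1):
        b[i] = (b[i] - c[i] * b[i + 1]) / d[i]
    return b

# the system of Example 4.3 with n = 10:
n = 10
x = trid([-1.0] * n, [2.0] * n, [-1.0] * n, [1.0] + [0.0] * (n - 1))
# exact solution: x_i = (11 - i)/11, so x[0] should be close to 0.9090909
```

A zero pivot d[i-1] raises ZeroDivisionError here, corresponding to the "signal failure" step of the algorithm.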
The subroutine TRID alters the contents of DIAG and returns the solution vector in B.

The exact solution of the given system is xi = (11 - i)/11, i = 1, . . . , 10. Hence the computed solution is in error in the sixth place after the decimal point. This program was run on an IBM 360.

C  FORTRAN PROGRAM FOR EXAMPLE 4.3
      PARAMETER (N=10)
      INTEGER I
      REAL A(N),B(N),C(N),D(N)
      DO 10 I=1,N
         A(I) = -1.
         D(I) = 2.
         C(I) = -1.
   10    B(I) = 0.
      B(1) = 1.
      CALL TRID ( A, D, C, B, N )
      PRINT 610, (I,B(I),I=1,N)
  610 FORMAT('1THE SOLUTION IS '/(I5,E15.7))
      STOP
      END
      SUBROUTINE TRID ( SUB, DIAG, SUP, B, N )
      INTEGER N,I
      REAL B(N),DIAG(N),SUB(N),SUP(N)
C  THE TRIDIAGONAL LINEAR SYSTEM
C     SUB(I)*X(I-1) + DIAG(I)*X(I) + SUP(I)*X(I+1) = B(I), I=1,...,N
C  (WITH SUB(1) AND SUP(N) TAKEN TO BE ZERO) IS SOLVED BY FACTORIZATION
C  AND SUBSTITUTION. THE FACTORIZATION IS RETURNED IN SUB, DIAG, SUP
C  AND THE SOLUTION IS RETURNED IN B.
      IF (N .LE. 1) THEN
         B(1) = B(1)/DIAG(1)
         RETURN
      END IF
      DO 11 I=2,N
         SUB(I) = SUB(I)/DIAG(I-1)
         DIAG(I) = DIAG(I) - SUB(I)*SUP(I-1)
   11    B(I) = B(I) - SUB(I)*B(I-1)
      B(N) = B(N)/DIAG(N)
      DO 12 I=N-1,1,-1
   12    B(I) = (B(I) - SUP(I)*B(I+1))/DIAG(I)
      RETURN
      END

OUTPUT

THE SOLUTION IS
    1  0.9090915E 00
    2  0.8181832E 00
    3  0.7272751E 00
    4  0.6363666E 00
    5  0.5454577E 00
    6  0.4545485E 00
    7  0.3636391E 00
    8  0.2727295E 00
    9  0.1818197E 00
   10  0.9090990E-01

EXERCISES

4.2-1 One measure of the efficiency of an algorithm is the number of arithmetic operations required to obtain the solution.