Such a system of equations asks the question, "Can the vector b be expressed as a linear combination of the columns of the matrix A?" If so, the coefficients of this linear combination are given by the components of the solution vector x. There may or may not be a solution; and if there is a solution, it may or may not be unique. In this chapter we will consider only square systems, which means that m = n, i.e., the matrix has the same number of rows and columns. In later chapters we will consider systems where m ≠ n.

2.1.1 Singularity and Nonsingularity

An n × n matrix A is said to be singular if it has any one of the following equivalent properties:

1. A has no inverse (i.e., there is no matrix M such that AM = MA = I, the identity matrix).
2. det(A) = 0.
3. rank(A) < n (the rank of a matrix is the maximum number of linearly independent rows or columns it contains).
4. Az = o for some vector z ≠ o.

Otherwise, the matrix is nonsingular. The solvability of a system of linear equations Ax = b is determined by whether the matrix A is singular or nonsingular. If the matrix A is nonsingular, then its inverse, denoted by A^{-1}, exists, and the system Ax = b always has a unique solution x = A^{-1}b regardless of the value of b. If, on the other hand, the matrix A is singular, then the number of solutions is determined by the right-hand-side vector b: for a given value of b there may be no solution, but if there is a solution x, so that Ax = b, then we also have A(x + γz) = b for any scalar γ, where the vector z is as in the foregoing definition. Thus, if a singular system has a solution, then the solution cannot be unique. To summarize the possibilities, for a given matrix A and right-hand-side vector b, the system may have

    One solution:              nonsingular
    No solution:               singular
    Infinitely many solutions: singular
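The singular/nonsingular distinction can also be probed numerically. Here is a minimal sketch in Python with NumPy (ours, not the book's), checking properties 2-4 for a singular matrix; note that in floating-point arithmetic an exact test such as det(A) = 0 is unreliable, so library routines rely on tolerances instead.

```python
# Numerically checking the singularity properties of a small matrix.
# Sketch only: exact tests like det(A) == 0 are fragile in floating point.
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, 6.0]])   # second row is twice the first, so A is singular

print(np.linalg.det(A))            # 0.0 up to rounding (property 2)
print(np.linalg.matrix_rank(A))    # 1, which is less than n = 2 (property 3)

# Property 4: a nonzero vector z with Az = o.  For a singular matrix, the
# right singular vector belonging to the zero singular value spans the
# null space.
_, _, Vt = np.linalg.svd(A)
z = Vt[-1]
print(A @ z)                       # approximately the zero vector

# Property 1: attempting np.linalg.inv(A) raises numpy.linalg.LinAlgError.
```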
In two dimensions, each linear equation determines a straight line in the plane. The solution of the system is the intersection point of the two lines. If the two straight lines are not parallel, then they have a unique intersection point (the nonsingular case). If the two straight lines are parallel, then either they do not intersect at all (there is no solution) or the two lines are the same (any point along the line is a solution). In higher dimensions, each equation determines a hyperplane. In the nonsingular case, the unique solution is the intersection point of all of the hyperplanes.

Example 2.1 Singularity and Nonsingularity. The 2 × 2 system

2x_1 + 3x_2 = b_1,
5x_1 + 4x_2 = b_2,

or in matrix-vector notation

$$
\begin{bmatrix} 2 & 3 \\ 5 & 4 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \end{bmatrix},
$$

is nonsingular regardless of the value of b. If b = [8 13]^T, for example, then the unique solution is x = [1 2]^T.

The 2 × 2 system

$$
\begin{bmatrix} 2 & 3 \\ 4 & 6 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \end{bmatrix}
$$

is singular regardless of the value of b. With b = [4 7]^T, there is no solution. With b = [4 8]^T,

$$
x = \begin{bmatrix} \gamma \\ (4 - 2\gamma)/3 \end{bmatrix}
$$

is a solution for any real number γ.
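Both systems of Example 2.1 can be verified numerically; a minimal sketch, again assuming Python with NumPy:

```python
# Verifying Example 2.1 numerically (illustrative sketch).
import numpy as np

# Nonsingular case: a unique solution exists for any b.
A = np.array([[2.0, 3.0],
              [5.0, 4.0]])
b = np.array([8.0, 13.0])
print(np.linalg.solve(A, b))        # [1. 2.], the unique solution

# Singular case with b = [4, 8]^T: a one-parameter family of solutions.
S = np.array([[2.0, 3.0],
              [4.0, 6.0]])
for gamma in (0.0, 1.0, -2.5):
    x = np.array([gamma, (4.0 - 2.0 * gamma) / 3.0])
    print(S @ x)                    # [4. 8.] for every gamma
```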
2.2 Solving Linear Systems

To solve a linear system, the general strategy outlined in Section 1.1.1 suggests that we should transform the system into one whose solution is the same as that of the original system but is easier to compute. What type of transformation of a linear system leaves the solution unchanged? The answer is that we can premultiply (i.e., multiply from the left) both sides of the linear system Ax = b by any nonsingular matrix M without affecting the solution. To see why this is so, note that the solution to the linear system MAx = Mb is given by

$$
x = (MA)^{-1}Mb = A^{-1}M^{-1}Mb = A^{-1}b.
$$

Example 2.2 Permutations. An important example of such a transformation is the fact that the rows of A and corresponding entries of b can be reordered without changing the solution x. This is intuitively obvious: all of the equations in the system must be satisfied simultaneously in any case, so the order in which they happen to be written down is irrelevant; they may as well have been drawn randomly from a hat. Formally, such a reordering of the rows is accomplished by premultiplying both sides of the equation by a permutation matrix P, which is a square matrix having exactly one 1 in each row and column and zeros elsewhere (i.e., an identity matrix with its rows and columns permuted). For example,

$$
\begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}
=
\begin{bmatrix} b_3 \\ b_1 \\ b_2 \end{bmatrix}.
$$

A permutation matrix is always nonsingular; in fact, its inverse is simply its transpose, P^{-1} = P^T (the transpose of a matrix M, denoted by M^T, is a matrix whose columns are the rows of M, that is, if N = M^T, then n_{ij} = m_{ji}). Thus, the reordered system can be written PAx = Pb, and the solution x is unchanged.

Postmultiplying (i.e., multiplying from the right) by a permutation matrix reorders the columns of the matrix instead of the rows. Such a transformation does change the solution, but only in that the components of the solution are permuted. To see this, observe that the solution to the system APx = b is given by

$$
x = (AP)^{-1}b = P^{-1}A^{-1}b = P^T(A^{-1}b).
$$
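A short NumPy sketch (ours, not the text's) illustrating both facts: premultiplying by P leaves the solution unchanged, while postmultiplying permutes its components.

```python
# Permutation matrices in action (illustrative sketch).
import numpy as np

P = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])
b = np.array([1.0, 2.0, 3.0])
print(P @ b)                  # [3. 1. 2.]: entries of b reordered

# The inverse of a permutation matrix is its transpose.
print(P.T @ P)                # the identity matrix

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
x = np.linalg.solve(A, b)

# Premultiplying both sides by P leaves the solution unchanged ...
print(np.allclose(np.linalg.solve(P @ A, P @ b), x))    # True

# ... while postmultiplying permutes the solution components.
y = np.linalg.solve(A @ P, b)
print(np.allclose(y, P.T @ x))                          # True
```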
Example 2.3 Diagonal Scaling. Another simple but important type of transformation is diagonal scaling. Recall that a matrix D is diagonal if d_{ij} = 0 for all i ≠ j, that is, the only nonzero entries are d_{ii}, i = 1, ..., n, on the main diagonal. Premultiplying both sides of a linear system Ax = b by a nonsingular diagonal matrix D multiplies each row of the matrix and right-hand side by the corresponding diagonal entry of D, and hence is called row scaling. In principle, row scaling does not change the solution to the linear system, but in practice it can affect the numerical solution process and the accuracy that can be attained for a given problem, as we will see.

Column scaling, i.e., postmultiplying the matrix of a linear system by a nonsingular diagonal matrix D, multiplies each column of the matrix by the corresponding diagonal entry of D. Such a transformation does alter the solution, in effect changing the units in which the components of the solution are measured. The solution to the scaled system ADx = b is given by

$$
x = (AD)^{-1}b = D^{-1}A^{-1}b,
$$

and hence the solution to the original system is given by Dx.
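The effect of both scalings can be seen in a small NumPy sketch (ours, not the text's):

```python
# Row and column scaling by a diagonal matrix (illustrative sketch).
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
D = np.diag([10.0, 0.1])
x = np.linalg.solve(A, b)

# Row scaling: premultiply A and b by D; the solution is unchanged.
print(np.allclose(np.linalg.solve(D @ A, D @ b), x))    # True

# Column scaling: solving (A D) y = b gives y = D^{-1} x, so the
# original solution is recovered by multiplying by D.
y = np.linalg.solve(A @ D, b)
print(np.allclose(D @ y, x))                            # True
```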
2.2.1 Triangular Linear Systems

The next question is what type of linear system is easy to solve. Suppose there is an equation in the system Ax = b that involves only one of the unknown solution components (i.e., only one entry in that row of A is nonzero). Then that equation can easily be solved (by division) for that unknown. Now suppose there is another equation in the system that involves only two unknowns, one of which is the one already determined. By substituting the one solution component already determined into this second equation, we can then easily solve for its other unknown. If this pattern continues, with only one new unknown component arising per equation, then all of the solution components can be computed in succession. A matrix with this special property is called triangular, for reasons that will soon become apparent. Because triangular linear systems are easily solved by this successive substitution process, they are a suitable target in transforming a general linear system.

Although the general triangular form just described is all that is required to enable the system to be solved by successive substitution, it is convenient to define two specific triangular forms for computational purposes. A matrix A is upper triangular if all of its entries below the main diagonal are zero (i.e., if a_{ij} = 0 for i > j). Similarly, a matrix is lower triangular if all of its entries above the main diagonal are zero (i.e., if a_{ij} = 0 for i < j). For an upper triangular system Ax = b, the successive substitution process is called back-substitution and can be expressed as follows:

$$
x_n = b_n / a_{nn}, \qquad
x_i = \Bigl( b_i - \sum_{j=i+1}^{n} a_{ij} x_j \Bigr) \Big/ a_{ii}, \quad i = n-1, \ldots, 1.
$$

Similarly, for a lower triangular system Ax = b, the successive substitution process is called forward-substitution and can be expressed as follows:

$$
x_1 = b_1 / a_{11}, \qquad
x_i = \Bigl( b_i - \sum_{j=1}^{i-1} a_{ij} x_j \Bigr) \Big/ a_{ii}, \quad i = 2, \ldots, n.
$$

A matrix that is triangular in the more general sense defined earlier can be permuted into upper or lower triangular form by a suitable permutation of its rows or columns.

Example 2.4 Triangular Linear System. Consider the upper triangular linear system

$$
\begin{bmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} 2 \\ 4 \\ 8 \end{bmatrix}.
$$

The last equation, 4x_3 = 8, can be solved directly for x_3 = 2. This value can then be substituted into the second equation to obtain x_2 = 2, and finally both x_3 and x_2 are substituted into the first equation to obtain x_1 = −1.
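The back-substitution and forward-substitution recurrences translate directly into code. A sketch in Python with NumPy (the function names are ours), checked against Example 2.4:

```python
# Back- and forward-substitution, transcribed from the recurrences above
# (illustrative sketch).
import numpy as np

def back_substitution(A, b):
    """Solve Ax = b for upper triangular A with nonzero diagonal."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):               # i = n, n-1, ..., 1
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

def forward_substitution(A, b):
    """Solve Ax = b for lower triangular A with nonzero diagonal."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):                           # i = 1, 2, ..., n
        x[i] = (b[i] - A[i, :i] @ x[:i]) / A[i, i]
    return x

# Check against Example 2.4.
A = np.array([[2.0, 4.0, -2.0],
              [0.0, 1.0,  1.0],
              [0.0, 0.0,  4.0]])
b = np.array([2.0, 4.0, 8.0])
print(back_substitution(A, b))                   # [-1.  2.  2.]
```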
2.2.2 Elementary Elimination Matrices

Our strategy then is to devise a nonsingular linear transformation that transforms a given general linear system into a triangular linear system that we can then solve easily by successive substitution. Thus, we need a transformation that replaces selected nonzero entries of the given matrix with zeros. This can be accomplished by taking appropriate linear combinations of the rows of the matrix, as we will now show.

Consider the 2-vector a = [a_1 a_2]^T. If a_1 ≠ 0, then

$$
\begin{bmatrix} 1 & 0 \\ -a_2/a_1 & 1 \end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \end{bmatrix}
=
\begin{bmatrix} a_1 \\ 0 \end{bmatrix}.
$$

More generally, given an n-vector a, we can annihilate all of its entries below the kth position, provided that a_k ≠ 0, by the following transformation:

$$
M_k a =
\begin{bmatrix}
1 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & \ddots & \vdots & \vdots & & \vdots \\
0 & \cdots & 1 & 0 & \cdots & 0 \\
0 & \cdots & -m_{k+1} & 1 & \cdots & 0 \\
\vdots & & \vdots & \vdots & \ddots & \vdots \\
0 & \cdots & -m_n & 0 & \cdots & 1
\end{bmatrix}
\begin{bmatrix} a_1 \\ \vdots \\ a_k \\ a_{k+1} \\ \vdots \\ a_n \end{bmatrix}
=
\begin{bmatrix} a_1 \\ \vdots \\ a_k \\ 0 \\ \vdots \\ 0 \end{bmatrix},
$$

where m_i = a_i/a_k, i = k+1, ..., n. The divisor a_k is called the pivot. A matrix of this form is sometimes called an elementary elimination matrix or Gauss transformation, and its effect on a vector is to add a multiple of row k to each subsequent row, with the multipliers m_i chosen so that the result in each case is zero.
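Constructing M_k from a given vector a takes only a few lines; a sketch assuming Python with NumPy (the helper name elimination_matrix is ours, and k is 0-based here):

```python
# Building the elementary elimination matrix M_k = I - m e_k^T
# for a given vector (illustrative sketch).
import numpy as np

def elimination_matrix(a, k):
    """Return M_k annihilating the entries of a below position k."""
    n = len(a)
    M = np.eye(n)
    M[k + 1:, k] = -a[k + 1:] / a[k]    # negated multipliers m_i = a_i / a_k
    return M

a = np.array([2.0, 4.0, -2.0])
M1 = elimination_matrix(a, 0)
print(M1 @ a)                           # [2. 0. 0.]: entries below the pivot vanish
```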
Note the following about these elementary elimination matrices:

1. M_k is a lower triangular matrix with unit main diagonal, and hence it must be nonsingular.
2. M_k = I − m e_k^T, where m = [0, ..., 0, m_{k+1}, ..., m_n]^T and e_k is the kth column of the identity matrix.
3. M_k^{-1} = I + m e_k^T, which means that M_k^{-1}, which we will denote by L_k, is the same as M_k except that the signs of the multipliers are reversed.
4. If M_j, j > k, is another elementary elimination matrix, with vector of multipliers t, then

$$
M_k M_j = I - m e_k^T - t e_j^T + m e_k^T t e_j^T = I - m e_k^T - t e_j^T,
$$

since e_k^T t = 0 (the nonzero entries of t lie below position j, while e_k picks out entry k < j). Thus, their product is essentially their "union." Because they have the same form, a similar result holds for the product of their inverses, L_k L_j. Note that the order of multiplication is significant; these results do not hold for the reverse product.

Example 2.5 Elementary Elimination Matrices. If a = [2 4 −2]^T, then

$$
M_1 a =
\begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 2 \\ 4 \\ -2 \end{bmatrix}
=
\begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix},
\qquad
M_2 a =
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & \frac{1}{2} & 1 \end{bmatrix}
\begin{bmatrix} 2 \\ 4 \\ -2 \end{bmatrix}
=
\begin{bmatrix} 2 \\ 4 \\ 0 \end{bmatrix}.
$$

We also note that

$$
L_1 = M_1^{-1} =
\begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix},
\qquad
L_2 = M_2^{-1} =
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -\frac{1}{2} & 1 \end{bmatrix},
$$

and

$$
M_1 M_2 =
\begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 1 & \frac{1}{2} & 1 \end{bmatrix},
\qquad
L_1 L_2 =
\begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & -\frac{1}{2} & 1 \end{bmatrix}.
$$

2.2.3 Gaussian Elimination and LU Factorization

With elementary elimination matrices, it is a fairly simple matter to reduce a general linear system Ax = b to upper triangular form.
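As a preview of that reduction, the following sketch (ours, not the text's) applies two elementary elimination matrices to a 3 × 3 matrix whose first column is the vector a of Example 2.5, and checks that the inverses L_k recombine to recover A:

```python
# Reducing a 3 x 3 matrix to upper triangular form with two elementary
# elimination matrices (illustrative sketch; the matrix A is our example).
import numpy as np

def elimination_matrix(a, k):           # as in the earlier sketch
    n = len(a)
    M = np.eye(n)
    M[k + 1:, k] = -a[k + 1:] / a[k]
    return M

A = np.array([[ 2.0,  4.0, -2.0],
              [ 4.0,  9.0, -3.0],
              [-2.0, -3.0,  7.0]])

M1 = elimination_matrix(A[:, 0], 0)     # annihilate below the first pivot
A1 = M1 @ A
M2 = elimination_matrix(A1[:, 1], 1)    # annihilate below the second pivot
U = M2 @ A1
print(U)                                # upper triangular

# The inverses L_k = M_k^{-1} just flip the signs of the multipliers, and
# their product reassembles A from U, foreshadowing the LU factorization.
L1 = np.linalg.inv(M1)
L2 = np.linalg.inv(M2)
print(np.allclose(L1 @ L2 @ U, A))      # True
```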