Conte, de Boor - Elementary Numerical Analysis: An Algorithmic Approach (page 23)
If all subdiagonal entries of the square matrix A are zero, we call A an upper (or right) triangular matrix, while if all superdiagonal entries of A are zero, then A is called lower (or left) triangular. Clearly, a matrix is diagonal if and only if it is both upper and lower triangular.

Figure 4.1

MATRICES AND SYSTEMS OF LINEAR EQUATIONS

Examples In the following examples, matrices A and C are diagonal; matrices A, B, and C are upper-triangular; matrices A, C, and D are lower-triangular; and matrix E has none of these properties.

$$A = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}, \quad C = \begin{pmatrix} 4 & 0 \\ 0 & 0 \end{pmatrix}, \quad D = \begin{pmatrix} 1 & 0 \\ 2 & 3 \end{pmatrix}, \quad E = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$$

The Identity Matrix and Matrix Inversion

If a diagonal matrix of order n has all its diagonal entries equal to 1, then we call it the identity matrix of order n and denote it by the special letter I, or $I_n$ if the order is important.
The name identity matrix was chosen for this matrix because

$$IA = A \qquad \text{and} \qquad BI = B$$

whenever these products are defined. The matrix I acts just like the number 1 in ordinary multiplication.

Division of matrices is, in general, not defined. However, for square matrices, we define a related concept, matrix inversion. We say that the square matrix A of order n is invertible provided there is a square matrix B of order n such that

$$AB = BA = I \tag{4.8}$$

The matrix $\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}$, for instance, is invertible since

$$\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I$$

On the other hand, the matrix $A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$ is not invertible. For if B were a matrix such that BA = I, then it would follow that

$$BA = \begin{pmatrix} b_{11} + 2b_{12} & 2b_{11} + 4b_{12} \\ b_{21} + 2b_{22} & 2b_{21} + 4b_{22} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$

Hence we should have $b_{11} + 2b_{12} = 1$ and, at the same time, $2(b_{11} + 2b_{12}) = 2b_{11} + 4b_{12} = 0$, which is impossible.

4.1 PROPERTIES OF MATRICES

We note that (4.8) can hold for at most one matrix B. For if

$$AB = BA = I = AC = CA$$

where B and C are square matrices of the same order as A, then

$$B = BI = B(AC) = (BA)C = IC = C$$

showing that B and C must then be equal.
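The defining property (4.8) is easy to check numerically. The following sketch (illustrative matrices, not from the text, represented as plain Python lists of rows) verifies AB = BA = I for an invertible 2 × 2 matrix, and shows why a matrix whose second column is twice its first can satisfy no such relation:

```python
# Minimal 2x2 matrix product; a matrix is a list of rows.
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

I = [[1, 0], [0, 1]]

# An invertible matrix and its inverse: both products give I, as (4.8) requires.
A = [[1, 1], [1, 2]]
B = [[2, -1], [-1, 1]]
print(matmul(A, B))   # [[1, 0], [0, 1]]
print(matmul(B, A))   # [[1, 0], [0, 1]]

# For A2 = [[1, 2], [2, 4]], any B with B*A2 = I would need b11 + 2*b12 = 1
# while 2*(b11 + 2*b12) = 0 -- impossible, because the second column of
# B*A2 is always exactly twice the first, whatever B we try.
A2 = [[1, 2], [2, 4]]
B2 = [[5, 7], [3, 9]]           # an arbitrary candidate for B
C = matmul(B2, A2)
print(C[0][1] == 2 * C[0][0], C[1][1] == 2 * C[1][0])  # True True
```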
Hence, if A is invertible, then there exists exactly one matrix B satisfying (4.8). This matrix is called the inverse of A and is denoted by $A^{-1}$.

It follows at once from (4.8) that if A is invertible, then so is $A^{-1}$, and its inverse is A; that is,

$$(A^{-1})^{-1} = A \tag{4.9}$$

Further, if both A and B are invertible square matrices of the same order, then their product is invertible and

$$(AB)^{-1} = B^{-1}A^{-1} \tag{4.10}$$

Note the change in order! The proof of (4.10) rests on the associativity of matrix multiplication:

$$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I$$

and similarly $(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I$.

Example The matrix $A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}$ has inverse $A^{-1} = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}$, while the matrix $B = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$ has inverse $B^{-1} = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}$. Further,

$$AB = \begin{pmatrix} 2 & 1 \\ 3 & 2 \end{pmatrix}$$

Hence by (4.10),

$$(AB)^{-1} = B^{-1}A^{-1} = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} 2 & -1 \\ -3 & 2 \end{pmatrix}$$

On the other hand,

$$A^{-1}B^{-1} = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} 3 & -1 \\ -2 & 1 \end{pmatrix}$$

while

$$(AB)(A^{-1}B^{-1}) = \begin{pmatrix} 2 & 1 \\ 3 & 2 \end{pmatrix}\begin{pmatrix} 3 & -1 \\ -2 & 1 \end{pmatrix} = \begin{pmatrix} 4 & -1 \\ 5 & -1 \end{pmatrix} \neq I$$

so that $A^{-1}B^{-1}$ cannot be the inverse of AB.

Matrix Addition and Scalar Multiplication

It is possible to multiply a matrix by a scalar ( = number) and to add two matrices of the same order in a reasonable way.
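A quick numerical check of (4.10), again with illustrative 2 × 2 matrices (not taken from the text), confirms that $B^{-1}A^{-1}$ inverts AB while $A^{-1}B^{-1}$ does not:

```python
# Matrix product for small matrices stored as lists of rows.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

I = [[1, 0], [0, 1]]
A,  B  = [[1, 1], [1, 2]], [[1, 0], [1, 1]]
Ai, Bi = [[2, -1], [-1, 1]], [[1, 0], [-1, 1]]   # inverses of A and B

AB = matmul(A, B)
print(matmul(AB, matmul(Bi, Ai)))  # [[1, 0], [0, 1]]: B^-1 A^-1 inverts AB
print(matmul(AB, matmul(Ai, Bi)))  # [[4, -1], [5, -1]]: A^-1 B^-1 does not
```

The failure in the last line is exactly the "note the change in order" warning: reversing the factors is essential.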
First, if $A = (a_{ij})$ and $B = (b_{ij})$ are matrices and d is a number, we say that B is the product of d with A, or B = dA, provided B and A have the same order and

$$b_{ij} = d\,a_{ij} \qquad \text{for all } i, j$$

Further, if $A = (a_{ij})$ and $B = (b_{ij})$ are matrices of the same order and $C = (c_{ij})$ is a matrix, we say that C is the sum of A and B, or C = A + B, provided C is of the same order as A and B and

$$c_{ij} = a_{ij} + b_{ij} \qquad \text{for all } i, j$$

Hence multiplication of a matrix by a number and addition of matrices are done entry by entry.
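The entry-by-entry definitions translate directly into code. A minimal sketch (the helper names `scale` and `add` are ours, not the book's):

```python
# d*A: every entry of A is multiplied by the number d.
def scale(d, A):
    return [[d * a for a in row] for row in A]

# A + B: entries are added in matching positions; the orders must agree.
def add(A, B):
    assert len(A) == len(B) and len(A[0]) == len(B[0])
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(scale(2, A))   # [[2, 4], [6, 8]]
print(add(A, B))     # [[6, 8], [10, 12]]
```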
The following rules regarding these operations, and also matrix multiplication, are easily verified: Assume that A, B, C are matrices such that all the sums and products mentioned below are defined, and let a, b be some numbers. Then

(i) A + B = B + A
(ii) (A + B) + C = A + (B + C)
(iii) a(A + B) = aA + aB
(iv) (a + b)A = aA + bA        (4.11)
(v) (A + B)C = AC + BC
(vi) A(B + C) = AB + AC
(vii) a(AB) = (aA)B = A(aB)
(viii) If $a \neq 0$ and A is invertible, then aA is invertible and $(aA)^{-1} = (1/a)A^{-1}$

For the sake of illustration we now give a proof of (vi). With A an m × n matrix and B and C n × p matrices, both sides of (vi) are well-defined m × p matrices.
Further, for all i and j,

$$\bigl(A(B + C)\bigr)_{ij} = \sum_{k=1}^{n} a_{ik}(b_{kj} + c_{kj}) = \sum_{k=1}^{n} a_{ik}b_{kj} + \sum_{k=1}^{n} a_{ik}c_{kj} = (AB)_{ij} + (AC)_{ij}$$

Finally, if the m × n matrix A has all its entries equal to 0, then we call it the null matrix of order m × n and denote it by the special letter O. A null matrix has the obvious property that

$$B + O = B \qquad \text{for all matrices } B \text{ of the same order}$$

Linear Combinations

The definition of sums of matrices and products of numbers with matrices makes it, in particular, possible to sum n-vectors and to multiply n-vectors by numbers or scalars. If $x^{(1)}, \ldots, x^{(k)}$ are k n-vectors and $b_1, b_2, \ldots, b_k$ are k numbers, then the weighted sum

$$b_1 x^{(1)} + b_2 x^{(2)} + \cdots + b_k x^{(k)}$$

is also an n-vector, called the linear combination of $x^{(1)}, \ldots, x^{(k)}$ with weights, or coefficients, $b_1, \ldots, b_k$.

Consider now, once more, our system of equations (4.1). For $j = 1, \ldots, n$, let $a_j$ denote the jth column of the m × n coefficient matrix A; that is, $a_j$ is the m-vector whose ith entry is the number $a_{ij}$, $i = 1, \ldots, m$. Then we can write the m-vector Ax as

$$Ax = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n$$

i.e., as a linear combination of the n columns of A with weights the entries of x.
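The column-oriented view of Ax can be checked against the usual row-by-row evaluation; a short sketch with illustrative data:

```python
# Row-oriented evaluation: the ith entry of Ax is the dot product of row i with x.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Column-oriented evaluation: Ax = x_1*a_1 + ... + x_n*a_n, a_j the jth column.
def column_combination(A, x):
    m, n = len(A), len(A[0])
    b = [0] * m
    for j in range(n):          # accumulate x_j times column j
        for i in range(m):
            b[i] += x[j] * A[i][j]
    return b

A = [[1, 2, 0], [3, 1, 4]]
x = [2, -1, 1]
print(matvec(A, x), column_combination(A, x))  # [0, 9] [0, 9]
```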
The problem of solving (4.1) therefore has the equivalent formulation: Find weights $x_1, \ldots, x_n$ so that the linear combination of the n columns of A with these weights adds up to the right-side m-vector b.

Consistent with this notation, we denote the jth column of the identity matrix I by the special symbol $i_j$. Clearly, $i_j$ has all its entries equal to zero except for the jth entry, which is 1. It is customary to call $i_j$ the jth unit vector.
(As with the identity matrix, we do not bother to indicate explicitly the length or order of $i_j$, it being understood from the context.) With this notation, we have

$$b = b_1 i_1 + b_2 i_2 + \cdots + b_n i_n$$

for every n-vector $b = (b_i)$. Further, the jth column $a_j$ of the matrix A can be obtained by multiplying A with $i_j$; that is,

$$a_j = A i_j$$

Hence, if C = AB, then

$$c_j = C i_j = (AB)i_j = A(B i_j) = A b_j$$

so that the jth column of the product AB is obtained by multiplying the first factor A with the jth column of the second factor B.

Existence and Uniqueness of Solutions to (4.1)

In later sections, we will deal exclusively with linear systems which have a square coefficient matrix.
We now justify this by showing that our system (4.1) cannot have exactly one solution for every right side unless the coefficient matrix is square.

Lemma 4.1 If $x = x_1$ is a solution of the linear system Ax = b, then any solution $x = x_2$ of this system is of the form

$$x_2 = x_1 + y$$

where x = y is a solution of the homogeneous system Ax = 0.

Indeed, if both $x_1$ and $x_2$ solve Ax = b, then

$$A(x_2 - x_1) = Ax_2 - Ax_1 = b - b = 0$$

i.e., then their difference $y = x_2 - x_1$ solves the homogeneous system Ax = 0.

Example The linear system

$$\begin{aligned} x_1 + 2x_2 &= 3 \\ 2x_1 + 4x_2 &= 6 \end{aligned}$$

has the solution $x_1 = x_2 = 1$. The corresponding homogeneous system

$$\begin{aligned} x_1 + 2x_2 &= 0 \\ 2x_1 + 4x_2 &= 0 \end{aligned}$$

has the solution $x_1 = -2a$, $x_2 = a$, where a is an arbitrary scalar. Hence any solution of the original system is of the form $x_1 = 1 - 2a$, $x_2 = 1 + a$ for some number a.

The lemma implies the following theorem.

Theorem 4.1 The linear system Ax = b has at most one solution (i.e., the solution is unique if it exists) if and only if the corresponding homogeneous system Ax = 0 has only the “trivial” solution x = 0.

Next we prove that we cannot hope for a unique solution unless our linear system has at least as many equations as unknowns.

Theorem 4.2 Any homogeneous linear system with fewer equations than unknowns has nontrivial (i.e., nonzero) solutions.

We have to prove that if A is an m × n matrix with m < n, then we can find $y \neq 0$ such that Ay = 0.
This we do by induction on n. First, consider the case n = 2. In this case, we can have only one equation,

$$a_{11}x_1 + a_{12}x_2 = 0$$

and this equation has the nontrivial solution $x_1 = 0$, $x_2 = 1$, if $a_{12} = 0$; otherwise, it has the nontrivial solution $x_1 = a_{12}$, $x_2 = -a_{11}$. This proves our statement for n = 2.

Let now n > 2, and assume it proved that any homogeneous system with fewer equations than unknowns and with fewer than n unknowns has nontrivial solutions; further, let Ax = 0 be a homogeneous linear system with m equations and n unknowns, where m < n. We have to prove that this system has nontrivial solutions. This is certainly so if the nth column of A is zero, i.e., if $a_n = 0$; for then the nonzero n-vector $x = i_n$ is a solution. Otherwise, some entry of $a_n$ must be different from 0, say, $a_{in} \neq 0$. In this case, we consider the m × (n − 1) matrix B whose jth column is

$$b_j = a_j - \frac{a_{ij}}{a_{in}}\, a_n, \qquad j = 1, \ldots, n-1$$

If we can show that the homogeneous system Bx = 0 has nontrivial solutions, then we are done.
For if we can find numbers $x_1, \ldots, x_{n-1}$, not all zero, such that

$$x_1 b_1 + \cdots + x_{n-1} b_{n-1} = 0$$

then it follows from the definition of the $b_j$'s that

$$x_1 a_1 + \cdots + x_{n-1} a_{n-1} + x_n a_n = 0, \qquad \text{with } x_n = -\sum_{j=1}^{n-1} \frac{a_{ij}}{a_{in}}\, x_j$$

thus providing a nontrivial solution to Ax = 0. Hence it remains only to show that Bx = 0 has nontrivial solutions. For this, note that for each j, the ith entry of $b_j$ is

$$a_{ij} - \frac{a_{ij}}{a_{in}}\, a_{in} = 0$$

so that the ith equation of Bx = 0 looks like

$$0 \cdot x_1 + \cdots + 0 \cdot x_{n-1} = 0$$

and is therefore satisfied by any choice of $x_1, \ldots, x_{n-1}$. It follows that x = y solves Bx = 0 if and only if x = y solves the homogeneous system $\tilde{B}x = 0$ which we get from Bx = 0 by merely omitting the ith equation. But now $\tilde{B}x = 0$ is a homogeneous linear system with m − 1 equations in n − 1 unknowns, hence with fewer equations than unknowns and with fewer than n unknowns.
Therefore, by the induction hypothesis, $\tilde{B}x = 0$ has nontrivial solutions, which finishes the proof.

Example Consider the homogeneous linear system Ax = 0 given by

$$\begin{aligned} 3x_1 + 3x_2 + x_3 &= 0 \\ x_1 + 2x_2 + x_3 &= 0 \end{aligned}$$

so that m = 2, n = 3. Following the argument for Theorem 4.2, we construct a nontrivial solution as follows: Since $a_{23} = 1 \neq 0$, we pick i = 2 and get

$$b_1 = a_1 - \frac{a_{21}}{a_{23}}\, a_3 = \begin{pmatrix} 3 \\ 1 \end{pmatrix} - \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 2 \\ 0 \end{pmatrix}, \qquad b_2 = a_2 - \frac{a_{22}}{a_{23}}\, a_3 = \begin{pmatrix} 3 \\ 2 \end{pmatrix} - 2\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$

The smaller homogeneous system Bx = 0 is therefore

$$\begin{aligned} 2x_1 + x_2 &= 0 \\ 0 \cdot x_1 + 0 \cdot x_2 &= 0 \end{aligned}$$

We can ignore the last equation and get, then, the homogeneous system $\tilde{B}x = 0$, which consists of just one equation,

$$2x_1 + x_2 = 0$$

A nontrivial solution for this is $x_1 = 1$, $x_2 = -2$.
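The proof of Theorem 4.2 is constructive, and its reduction can be sketched as a short recursive routine. This is an illustrative sketch, not from the text: it uses exact rational arithmetic via `fractions.Fraction`, and it simply picks the first row with a nonzero entry in the last column as the row i to eliminate:

```python
from fractions import Fraction

def nontrivial_solution(A):
    """Nontrivial solution of Ax = 0 for an m x n matrix A with m < n,
    following the reduction in the proof of Theorem 4.2."""
    m = len(A)
    n = len(A[0]) if m else 0
    if m == 0:                      # no equations at all: any nonzero vector works
        return [Fraction(1)] * n
    if n == 2:                      # base case: one equation a11*x1 + a12*x2 = 0
        a11, a12 = A[0][0], A[0][1]
        return [Fraction(0), Fraction(1)] if a12 == 0 else [Fraction(a12), Fraction(-a11)]
    col_n = [row[-1] for row in A]
    if all(c == 0 for c in col_n):  # nth column zero: x = i_n is a solution
        return [Fraction(0)] * (n - 1) + [Fraction(1)]
    i = next(k for k, c in enumerate(col_n) if c != 0)   # a row with a_in != 0
    # Columns of B: b_j = a_j - (a_ij / a_in) * a_n; then drop equation i.
    B = [[Fraction(A[r][j]) - Fraction(A[i][j], A[i][-1]) * A[r][-1]
          for j in range(n - 1)] for r in range(m) if r != i]
    x = nontrivial_solution(B)
    # Recover x_n from the omitted ith equation of Ax = 0.
    xn = -sum(Fraction(A[i][j]) * x[j] for j in range(n - 1)) / A[i][-1]
    return x + [xn]

A = [[3, 3, 1], [1, 2, 1]]
x = nontrivial_solution(A)
print(x)                                                   # a nonzero vector
print([sum(a * xi for a, xi in zip(row, x)) for row in A]) # every entry is 0
```

Since the routine may eliminate a different row than the worked example does, it can return a different (but equally valid) nontrivial solution.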