Conte & de Boor, Elementary Numerical Analysis: An Algorithmic Approach
Hence, with these choices, the 3-vector x = (xj) is a nontrivial solution of the original system.

Next we prove that we cannot expect to get a solution to our linear system (4.1) for all possible choices of the right side b unless we have no more equations than unknowns.

Lemma 4.2 If A is an m × n matrix and the linear system Ax = b has a solution for every m-vector b, then there exists an n × m matrix C such that

    AC = I

Such a matrix C can be constructed as follows: By assumption, we can find a solution to the system Ax = b no matter what b is. Hence, choosing b to be the jth column ij of I, we can find an n-vector cj such that

    Acj = ij        j = 1, . . . , m

But then, with C the n × m matrix whose jth column is cj, j = 1, . . . , m, we get

    ACij = A(Cij) = Acj = ij

showing that the jth column of the product AC agrees with the jth column of I, j = 1, . . . , m. But that says that AC = I.

Lemma 4.3 If B and C are matrices such that

    BC = I

then the homogeneous system Cx = 0 has only the trivial solution x = 0.

Indeed, if Cx = 0, then

    x = Ix = (BC)x = B(Cx) = B0 = 0

Theorem 4.3 If A is an m × n matrix and the linear system Ax = b has a solution for every possible m-vector b, then m ≤ n.

For the proof, we get from Lemma 4.2 that

    AC = I

for some n × m matrix C. But this implies by Lemma 4.3 that the homogeneous system Cx = 0 has only the trivial solution x = 0. Therefore, by Theorem 4.2, C must have at least as many rows as columns, that is, n ≥ m, which finishes the proof.

We now know that we cannot expect to get exactly one solution to our system (4.1) for every possible right side unless the system has exactly as many equations as unknowns, i.e., unless the coefficient matrix is square. We will therefore consider from now on only linear systems with a square coefficient matrix.
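Before turning to square matrices, note that the proof of Lemma 4.2 is constructive and translates directly into a short program. The following is a minimal numpy sketch (an illustration added here, not from the text): it builds a right inverse C column by column by solving Acj = ij; the sample matrix A is an arbitrary choice with full row rank, so that each of these systems is solvable.

    import numpy as np

    def right_inverse(A):
        """Construct C with A @ C = I, column by column (Lemma 4.2).

        Assumes A x = b is solvable for every b (here: A has full
        row rank), so lstsq recovers an exact solution per column."""
        m, n = A.shape
        C = np.empty((n, m))
        I = np.eye(m)
        for j in range(m):
            cj, *_ = np.linalg.lstsq(A, I[:, j], rcond=None)
            C[:, j] = cj
        return C

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0]])       # 2 x 3, full row rank
    C = right_inverse(A)
    print(np.allclose(A @ C, np.eye(2)))  # True: AC = I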
For such square matrices, we prove a final theorem.

Theorem 4.4 Let A be an n × n matrix. Then the following are equivalent:

(i) The homogeneous system Ax = 0 has only the trivial solution x = 0.
(ii) For every right side b, the system Ax = b has a solution.
(iii) A is invertible.

First we prove that (i) implies (ii). Let b be a given n-vector. We have to prove that Ax = b has a solution. For this, let D be the n × (n + 1) matrix whose first n columns agree with those of A, while the (n + 1)st column is b. Since D has more columns than rows, we can find, by Theorem 4.2, a nonzero (n + 1)-vector y such that Dy = 0, that is, such that

    y1a1 + · · · + ynan + yn+1b = 0        (4.12)

with aj the jth column of A. Clearly, the number yn+1 cannot be zero. For if yn+1 were zero, then at least one of the numbers y1, . . . , yn would have to be nonzero, while at the same time

    y1a1 + · · · + ynan = 0

But this would say that Ax = 0 admits the nontrivial solution xi = yi, i = 1, . . . , n, which contradicts (i). Hence, since yn+1 ≠ 0, we can solve (4.12) for b to get

    b = (-y1/yn+1)a1 + · · · + (-yn/yn+1)an

But this says that Ax = b has a solution, viz., the solution xi = -(yi/yn+1), i = 1, . . . , n, which proves (ii).

Next we prove that (ii) implies (iii). Assuming (ii), it follows with Lemma 4.2 that there exists an n × n matrix C such that

    AC = I

Hence, by Lemma 4.3, the equation Cx = 0 has only the trivial solution x = 0.
This says that the n × n matrix C satisfies (i); hence, by the argument we just went through, C satisfies (ii); therefore, by Lemma 4.2, there exists an n × n matrix D such that

    CD = I

But now we are done. For we showed earlier that if

    AC = CD = I

with A, C, D square matrices, then C is invertible and

    C-1 = A = D

Hence A is the inverse of an invertible matrix, therefore invertible.

Finally, Lemma 4.3 shows that (iii) implies (i).

Example We showed in an earlier example that a certain 2 × 2 matrix is not invertible and, in another example, that for this matrix the homogeneous system Ax = 0 has nontrivial solutions. By Theorem 4.4, the linear system Ax = b should therefore not be solvable for some 2-vector b. Indeed, with b = i1, we get a system which has no solution, since its second equation and its first equation place contradictory demands on the unknowns.

As a simple application of Theorem 4.4, we now prove that A square and AB = I implies B = A-1 and BA = I. Indeed, if A is of order n × n, then AB = I implies that B is of order n × n and that, for all n-vectors b, A(Bb) = b. But this says that we can solve Ax = b for x no matter what b; hence A is invertible by Theorem 4.4, and then x = Bb is the solution, hence Bb = A-1b for all b, or B = A-1. But then, finally, BA = I.

Linear Independence and Bases

Let a1, . . . , an be n m-vectors, and let A be the m × n matrix whose jth column is aj, j = 1, . . . , n. We say that these m-vectors are linearly independent if

    x1a1 + · · · + xnan = 0

implies that

    x1 = · · · = xn = 0

Otherwise, we call the vectors linearly dependent. Clearly, these n m-vectors are linearly independent if and only if the homogeneous system Ax = 0 has only the trivial solution x = 0. Hence we can infer from Theorem 4.2 that any set of more than m m-vectors must be linearly dependent.
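Since linear independence of the columns of A is equivalent to Ax = 0 having only the trivial solution, it can be tested numerically. Here is a small numpy sketch (our illustration, with arbitrarily chosen sample vectors; full column rank serves as the computational test):

    import numpy as np

    def linearly_independent(vectors):
        """Test the columns of A for linear independence: Ax = 0 has
        only the trivial solution iff A has full column rank."""
        A = np.column_stack(vectors)
        return np.linalg.matrix_rank(A) == A.shape[1]

    # Two 3-vectors may well be independent ...
    print(linearly_independent([[1.0, 0.0, 0.0],
                                [0.0, 1.0, 0.0]]))    # True
    # ... but any three 2-vectors are dependent (Theorem 4.2).
    print(linearly_independent([[1.0, 0.0],
                                [0.0, 1.0],
                                [1.0, 1.0]]))         # False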
Let a1, . . . , an be linearly independent. If every m-vector b can be written as a linear combination of these n m-vectors, then we call a1, . . . , an a basis (for all m-vectors). Clearly, a1, . . . , an is a basis if and only if the linear system Ax = b has exactly one solution for every m-vector b, that is, if and only if every m-vector can be written in exactly one way as a linear combination of the m-vectors a1, . . . , an. In particular, a basis (for all m-vectors) consists of exactly m m-vectors (that is, n = m), and the corresponding matrix is invertible.

Examples The two 3-vectors i1 and i2, for instance, are linearly independent; but they do not form a basis, since they are only two 3-vectors. Further, every 2-vector can be written as a linear combination of the three 2-vectors i1, i2, and i1 + i2, say, but these three 2-vectors do not form a basis, since they must be linearly dependent. Finally, the three 3-vectors i1, i1 + i2, i1 + i2 + i3 do form a basis, since the corresponding matrix is invertible. To see this, it is, by Theorem 4.4, sufficient to prove that the system

    x1i1 + x2(i1 + i2) + x3(i1 + i2 + i3) = 0

has only the trivial solution x1 = x2 = x3 = 0. But that is obvious: the third components give x3 = 0, then the second components give x2 = 0, and then the first give x1 = 0.
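By Theorem 4.4, checking whether n n-vectors form a basis amounts to checking that the matrix they make up is invertible. A numpy sketch using the illustrative vectors above (not the book's own example):

    import numpy as np

    # Columns: i1, i1 + i2, i1 + i2 + i3 (the illustrative basis above).
    A = np.array([[1.0, 1.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])

    # A is invertible, so Ax = b has exactly one solution for every b:
    b = np.array([2.0, -1.0, 3.0])
    x = np.linalg.solve(A, b)       # raises LinAlgError if A were singular
    print(np.allclose(A @ x, b))    # True: b written in the basis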
If A = (a ij ) andB = (bij) are matrices, we say that B is the transpose of A, or B = AT,provided B has as many rows as A has columns and as many columns as Ahas rows andIn words, one forms the transpose AT of A by “reflecting A across thediagonal.”Ifthen A is said to be symmetric.The matrices142MATRICES AND SYSTEMS OF LINEAR EQUATIONShave the transposeIn particular, the transpose bT of a column vector b is a row vector.One easily verifies the following rules regarding transposition:1. If A and B are matrices such that AB is defined, then BTAT is definedand(AB) T = B T A T .Note the change in order!2.
For any matrix A, (AT)T = A.3. If the matrix A is invertible, then so is AT, and (AT)-1=(A-1)T.To prove Rule 1, let A be an m × n matrix and B an n × p matrix sothat AB is an m × p matrix and (AB)T is a p × m matrix. Then AT isn × m, BT is p × n; therefore the product BTAT is well defined and ap × m matrix. Finally,As to Rule 3, we get from Rule 1 thatwhich proves Rule 3.If a and b are n-vectors, then bTa is a 1 × 1 matrix or number, calledthe scalar product of a and b in case a and b are real vectors.For matrices with complex entries (of interest in the discussion ofeigenvalues), there is the related notion of the conjugate transposed orHermitian AH of the matrix A.
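Rules 1 and 3 are easy to confirm numerically. A small numpy sketch (an added illustration, with randomly chosen matrices; a random square matrix is invertible with probability 1):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((2, 3))
    B = rng.standard_normal((3, 4))

    # Rule 1: (AB)^T = B^T A^T -- note the change in order.
    print(np.allclose((A @ B).T, B.T @ A.T))                    # True

    # Rule 3: for invertible M, (M^T)^{-1} = (M^{-1})^T.
    M = rng.standard_normal((3, 3))                             # invertible (a.s.)
    print(np.allclose(np.linalg.inv(M.T), np.linalg.inv(M).T))  # True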
For matrices with complex entries (of interest in the discussion of eigenvalues), there is the related notion of the conjugate transposed, or Hermitian, AH of the matrix A. For this, we recall that the conjugate z̄ of a complex number z is obtained by changing the imaginary part of z to its negative: if z = x + iy, with x and y real, then z̄ = x - iy. The Hermitian AH is obtained from A just as the transposed AT, except that all entries of AT are replaced by their complex conjugates. Thus AH = (bij) in case

    bij = āji        all i, j

Hence AH = AT in case A is a real matrix. Note that, for n-vectors a and b with complex entries, the customary scalar product is the number bHa, not bTa, since it is aHa which then gives the square of the length of the vector a.
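To see the effect of the conjugation, compare bHa with bTa in numpy (an added illustration; np.conj forms the complex conjugate entrywise):

    import numpy as np

    a = np.array([1 + 2j, 3 - 1j])
    b = np.array([2 - 1j, 1 + 1j])

    print(np.conj(b) @ a)          # b^H a, the customary scalar product
    print(b @ a)                   # b^T a, which is not used here

    # a^H a is real and equals the squared length of a:
    print(np.conj(a) @ a)          # (15+0j)
    print(np.linalg.norm(a) ** 2)  # 15.0 (up to roundoff)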
There are many ways of writing a permutation of degree n. For our purposes, it is sufficient (and in a sense quite rigorous) to think of a permutation as an n-vector p = (pi) with

    1 ≤ pi ≤ n    all i        and        pi ≠ pj    for i ≠ j

There are n! permutations of degree n. A permutation p is said to be even or odd depending on whether the number of inversions in p is even or odd. Here the number of inversions in a permutation p = (pi) is the number of instances an integer precedes a smaller one. For example, in the permutation p with pT = [7, 2, 6, 3, 4, 1, 5],

    7 precedes 2, 6, 3, 4, 1, 5    giving 6 inversions
    2 precedes 1                   giving 1 inversion
    6 precedes 3, 4, 1, 5          giving 4 inversions
    3 precedes 1                   giving 1 inversion
    4 precedes 1                   giving 1 inversion

Hence p has altogether 13 inversions. Note that any interchange of two entries in a permutation changes the number of inversions by an odd amount.
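Counting inversions mechanizes easily; a brief Python check of the worked example (an added illustration):

    def num_inversions(p):
        """Number of pairs i < j with p[i] > p[j]."""
        return sum(p[i] > p[j]
                   for i in range(len(p))
                   for j in range(i + 1, len(p)))

    p = [7, 2, 6, 3, 4, 1, 5]
    print(num_inversions(p))            # 13
    print(num_inversions(p) % 2 == 1)   # True: p is an odd permutation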
A permutation matrix of order n is any n × n matrix P whose columns (rows) are a rearrangement or permutation of the columns (rows) of the identity matrix of order n. Precisely, the n × n matrix P is a permutation matrix if its jth column is the pjth column of the identity matrix, that is, if

    P = (ip1, ip2, . . . , ipn)        (4.13)

for some permutation p = (pi) of degree n.

Theorem 4.5 Let P be the permutation matrix satisfying (4.13). Then

(i) PT is a permutation matrix, satisfying

    PTipj = ij        j = 1, . . . , n

Hence PTP = I; therefore P is invertible, and P-1 = PT.

(ii) If A is an m × n matrix, then AP is the m × n matrix whose jth column equals the pjth column of A, j = 1, . . . , n.
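A numpy sketch of (4.13) and Theorem 4.5 (an added illustration; p is 1-based as in the text, so indices are shifted internally):

    import numpy as np

    def perm_matrix(p):
        """P whose jth column is i_{p_j}, per (4.13); p is 1-based."""
        n = len(p)
        P = np.zeros((n, n))
        for j, pj in enumerate(p):
            P[pj - 1, j] = 1.0
        return P

    p = [3, 1, 2]
    P = perm_matrix(p)
    print(np.allclose(P.T @ P, np.eye(3)))    # True: P^{-1} = P^T

    A = np.arange(12.0).reshape(4, 3)         # any 4 x 3 matrix
    for j in range(3):                        # Theorem 4.5(ii)
        assert np.array_equal((A @ P)[:, j], A[:, p[j] - 1])
    print("AP permutes the columns of A as claimed")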