Hutton - Fundamentals of Finite Element Analysis, page 83
$$[A] = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix} \tag{A.1}$$

Such an array is known as a matrix, and the scalar values that compose the array are the elements of the matrix. The position of each element a_ij is identified by the row subscript i and the column subscript j. The number of rows and columns determines the order of a matrix. A matrix having m rows and n columns is said to be of order "m by n" (usually denoted as m × n). If the number of rows and columns in a matrix are the same, the matrix is a square matrix and is said to be of order n.
A matrix having only one row is called a row matrix or row vector. Similarly, a matrix with a single column is a column matrix or column vector.

If the rows and columns of a matrix [A] are interchanged, the resulting matrix is known as the transpose of [A], denoted by [A]^T. For the matrix defined in Equation A.1, the transpose is

$$[A]^T = \begin{bmatrix}
a_{11} & a_{21} & \cdots & a_{m1} \\
a_{12} & a_{22} & \cdots & a_{m2} \\
\vdots & \vdots & & \vdots \\
a_{1n} & a_{2n} & \cdots & a_{mn}
\end{bmatrix} \tag{A.2}$$

and we observe that, if [A] is of order m by n, then [A]^T is of order n by m.
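The row-column interchange is easy to see in code. A minimal pure-Python sketch (the function name `transpose` and the list-of-rows storage are our choices, not from the text):

```python
def transpose(a):
    """Return the transpose of matrix a, stored as a list of rows:
    element (i, j) of the result is element (j, i) of the input."""
    m, n = len(a), len(a[0])
    return [[a[i][j] for i in range(m)] for j in range(n)]

# A 3 x 2 matrix; its transpose is 2 x 3.
A = [[2, 4],
     [-1, 0],
     [3, 2]]
At = transpose(A)
```

Note that transposing twice recovers the original matrix.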
Hutton: Fundamentals of Finite Element Analysis, Back Matter, Appendix A: Matrix Mathematics. © The McGraw-Hill Companies, 2004.

For example, if [A] is given by

$$[A] = \begin{bmatrix} 2 & 4 \\ -1 & 0 \\ 3 & 2 \end{bmatrix}$$

the transpose of [A] is

$$[A]^T = \begin{bmatrix} 2 & -1 & 3 \\ 4 & 0 & 2 \end{bmatrix}$$

Several important special types of matrices are defined next. A diagonal matrix is a square matrix composed of elements such that a_ij = 0 for i ≠ j. Therefore, the only nonzero terms are those on the main diagonal (upper left to lower right). For example,

$$[A] = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$

is a diagonal matrix.

An identity matrix (denoted [I]) is a diagonal matrix in which the value of the nonzero terms is unity.
Hence,

$$[A] = [I] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

is an identity matrix.

A null matrix (also known as a zero matrix, [0]) is a matrix of any order in which the value of all elements is 0.

A symmetric matrix is a square matrix composed of elements such that the nondiagonal values are symmetric about the main diagonal. Mathematically, symmetry is expressed as a_ij = a_ji for i ≠ j. For example, the matrix

$$[A] = \begin{bmatrix} 2 & -2 & 0 \\ -2 & 4 & -3 \\ 0 & -3 & 1 \end{bmatrix}$$

is a symmetric matrix. Note that the transpose of a symmetric matrix is the same as the original matrix.

A skew symmetric matrix is a square matrix in which the diagonal terms a_ii have a value of 0 and the off-diagonal terms have values such that a_ij = -a_ji. An example of a skew symmetric matrix is

$$[A] = \begin{bmatrix} 0 & -2 & 0 \\ 2 & 0 & 3 \\ 0 & -3 & 0 \end{bmatrix}$$

For a skew symmetric matrix, we observe that the transpose is obtained by changing the algebraic sign of each element of the matrix.

A.2 ALGEBRAIC OPERATIONS

Addition and subtraction of matrices can be defined only for matrices of the same order.
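The defining conditions for these special matrices translate directly into predicates. A sketch (function names are ours; the three test matrices are the examples from the text):

```python
def is_diagonal(a):
    """Square matrix with a_ij = 0 for all i != j."""
    n = len(a)
    return all(a[i][j] == 0 for i in range(n) for j in range(n) if i != j)

def is_symmetric(a):
    """Square matrix with a_ij = a_ji."""
    n = len(a)
    return all(a[i][j] == a[j][i] for i in range(n) for j in range(n))

def is_skew_symmetric(a):
    """Square matrix with a_ij = -a_ji; for i == j this forces a zero diagonal."""
    n = len(a)
    return all(a[i][j] == -a[j][i] for i in range(n) for j in range(n))

# The examples from the text:
D = [[2, 0, 0], [0, 1, 0], [0, 0, 3]]      # diagonal
S = [[2, -2, 0], [-2, 4, -3], [0, -3, 1]]  # symmetric
K = [[0, -2, 0], [2, 0, 3], [0, -3, 0]]    # skew symmetric
```

Note that a diagonal matrix is automatically symmetric, consistent with the definitions above.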
If [A] and [B] are both m × n matrices, the two are said to be conformable for addition or subtraction. The sum of two m × n matrices is another m × n matrix having elements obtained by summing the corresponding elements of the original matrices. Symbolically, matrix addition is expressed as

$$[C] = [A] + [B] \tag{A.3}$$

where

$$c_{ij} = a_{ij} + b_{ij} \qquad i = 1, m \quad j = 1, n \tag{A.4}$$

The operation of matrix subtraction is similarly defined. Matrix addition and subtraction are commutative and associative; that is,

$$[A] + [B] = [B] + [A] \tag{A.5}$$

$$[A] + ([B] + [C]) = ([A] + [B]) + [C] \tag{A.6}$$

The product of a scalar and a matrix is a matrix in which every element of the original matrix is multiplied by the scalar.
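Element-wise addition per Equations A.3 and A.4, including the conformability check, can be sketched as follows (the function name and example matrices are ours):

```python
def mat_add(a, b):
    """Sum of two conformable (same-order) matrices: c_ij = a_ij + b_ij."""
    if len(a) != len(b) or len(a[0]) != len(b[0]):
        raise ValueError("matrices must be of the same order")
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = mat_add(A, B)
```

Swapping the operands gives the same result, illustrating the commutativity of Equation A.5.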
If a scalar u multiplies matrix [A], then

$$[B] = u[A] \tag{A.7}$$

where the elements of [B] are given by

$$b_{ij} = u a_{ij} \qquad i = 1, m \quad j = 1, n \tag{A.8}$$

Matrix multiplication is defined in such a way as to facilitate the solution of simultaneous linear equations. The product of two matrices [A] and [B], denoted

$$[C] = [A][B] \tag{A.9}$$

exists only if the number of columns in [A] is equal to the number of rows in [B]. If this condition is satisfied, the matrices are said to be conformable for multiplication. If [A] is of order m × p and [B] is of order p × n, the matrix product [C] = [A][B] is an m × n matrix having elements defined by

$$c_{ij} = \sum_{k=1}^{p} a_{ik} b_{kj} \tag{A.10}$$

Thus, each element c_ij is the sum of products of the elements in the ith row of [A] and the corresponding elements in the jth column of [B].
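Equation A.10 is a triple loop in code. A minimal sketch (the function name and the 2 × 3 example are ours):

```python
def mat_mul(a, b):
    """Product of an m x p matrix a and a p x n matrix b per Equation A.10:
    c_ij = sum over k of a_ik * b_kj."""
    m, p, n = len(a), len(b), len(b[0])
    if len(a[0]) != p:
        raise ValueError("matrices not conformable for multiplication")
    return [[sum(a[i][k] * b[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

A = [[1, 2, 3],
     [4, 5, 6]]        # 2 x 3
B = [[1, 0],
     [0, 1],
     [1, 1]]           # 3 x 2
C = mat_mul(A, B)      # 2 x 2, as the order rule m x p times p x n predicts
```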
When referring to the matrix product [A][B], matrix [A] is called the premultiplier and matrix [B] is the postmultiplier.

In general, matrix multiplication is not commutative; that is,

$$[A][B] \neq [B][A] \tag{A.11}$$

Matrix multiplication does satisfy the associative and distributive laws, and we can therefore write

$$\begin{aligned}
([A][B])[C] &= [A]([B][C]) \\
[A]([B] + [C]) &= [A][B] + [A][C] \\
([A] + [B])[C] &= [A][C] + [B][C]
\end{aligned} \tag{A.12}$$

In addition to being noncommutative, matrix algebra differs from scalar algebra in other ways. For example, the equality [A][B] = [A][C] does not necessarily imply [B] = [C], since algebraic summing is involved in forming the matrix products. As another example, if the product of two matrices is a null matrix, that is, [A][B] = [0], the result does not necessarily imply that either [A] or [B] is a null matrix.

A.3 DETERMINANTS

The determinant of a square matrix is a scalar value that is unique for a given matrix.
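Both cautions can be demonstrated with tiny 2 × 2 matrices. A sketch (the matrices are chosen by us for illustration and do not appear in the text):

```python
def mat_mul(a, b):
    """c_ij = sum over k of a_ik * b_kj (assumes conformable inputs)."""
    p, n = len(b), len(b[0])
    return [[sum(row[k] * b[k][j] for k in range(p)) for j in range(n)]
            for row in a]

A = [[0, 1],
     [0, 0]]
B = [[1, 0],
     [0, 0]]

AB = mat_mul(A, B)   # the null matrix, although neither factor is null
BA = mat_mul(B, A)   # nonzero, so the product order matters
```

Here [A][B] = [0] with both factors nonzero, and [A][B] differs from [B][A], illustrating Equation A.11.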
The determinant of an n × n matrix is represented symbolically as

$$\det[A] = |A| = \begin{vmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{vmatrix} \tag{A.13}$$

and is evaluated according to a very specific procedure. First, consider the 2 × 2 matrix

$$[A] = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \tag{A.14}$$

for which the determinant is defined as

$$|A| = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} \equiv a_{11} a_{22} - a_{12} a_{21} \tag{A.15}$$

Given the definition of Equation A.15, the determinant of a square matrix of any order can be determined. Next, consider the determinant of a 3 × 3 matrix

$$|A| = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \tag{A.16}$$

defined as

$$|A| = a_{11}(a_{22} a_{33} - a_{23} a_{32}) - a_{12}(a_{21} a_{33} - a_{23} a_{31}) + a_{13}(a_{21} a_{32} - a_{22} a_{31}) \tag{A.17}$$

Note that the expressions in parentheses are the determinants of the second-order matrices obtained by striking out the first row and the first, second, and third columns, respectively.
These are known as minors. A minor of a determinant is another determinant formed by removing an equal number of rows and columns from the original determinant. The minor obtained by removing row i and column j is denoted |M_ij|. Using this notation, Equation A.17 becomes

$$|A| = a_{11}|M_{11}| - a_{12}|M_{12}| + a_{13}|M_{13}| \tag{A.18}$$

and the determinant is said to be expanded in terms of the cofactors of the first row. The cofactor of an element a_ij is obtained by applying the appropriate algebraic sign to the minor |M_ij| as follows.
If the sum of row number i and column number j is even, the sign of the cofactor is positive; if i + j is odd, the sign of the cofactor is negative. Denoting the cofactor as C_ij, we can write

$$C_{ij} = (-1)^{i+j} |M_{ij}| \tag{A.19}$$

The determinant given in Equation A.18 can then be expressed in terms of cofactors as

$$|A| = a_{11} C_{11} + a_{12} C_{12} + a_{13} C_{13} \tag{A.20}$$

The determinant of a square matrix of any order can be obtained by expanding the determinant in terms of the cofactors of any row i as

$$|A| = \sum_{j=1}^{n} a_{ij} C_{ij} \tag{A.21}$$

or any column j as

$$|A| = \sum_{i=1}^{n} a_{ij} C_{ij} \tag{A.22}$$

Application of Equation A.21 or A.22 requires that the cofactors C_ij be further expanded to the point that all minors are of order 2 and can be evaluated by Equation A.15.

A.4 MATRIX INVERSION

The inverse of a square matrix [A] is a square matrix denoted by [A]^{-1} that satisfies

$$[A]^{-1}[A] = [A][A]^{-1} = [I] \tag{A.23}$$

that is, the product of a square matrix and its inverse is the identity matrix of order n.
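The cofactor expansion lends itself to a direct recursive implementation, here expanding along the first row. A minimal sketch (our function name; note the cost grows factorially with n, so this is practical only for small matrices):

```python
def det(a):
    """Determinant by cofactor expansion along the first row:
    |A| = sum over j of a_0j * (-1)^j * |M_0j|
    (the i = 0 case of the row expansion, with the sign rule C_ij = (-1)^(i+j) |M_ij|)."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # Minor M_0j: strike out row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det(minor)
    return total
```

The base case is a 1 × 1 matrix; a 2 × 2 input reduces to the familiar a11*a22 - a12*a21.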
The concept of the inverse of a matrix is of prime importance in solving simultaneous linear equations by matrix methods. Consider the algebraic system

$$\begin{aligned}
a_{11} x_1 + a_{12} x_2 + a_{13} x_3 &= y_1 \\
a_{21} x_1 + a_{22} x_2 + a_{23} x_3 &= y_2 \\
a_{31} x_1 + a_{32} x_2 + a_{33} x_3 &= y_3
\end{aligned} \tag{A.24}$$

which can be written in matrix form as

$$[A]\{x\} = \{y\} \tag{A.25}$$

where

$$[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \tag{A.26}$$

is the 3 × 3 coefficient matrix,

$$\{x\} = \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} \tag{A.27}$$

is the 3 × 1 column matrix (vector) of unknowns, and

$$\{y\} = \begin{Bmatrix} y_1 \\ y_2 \\ y_3 \end{Bmatrix} \tag{A.28}$$

is the 3 × 1 column matrix (vector) representing the right-hand sides of the equations (the "forcing functions").

If the inverse of matrix [A] can be determined, we can multiply both sides of Equation A.25 by the inverse to obtain

$$[A]^{-1}[A]\{x\} = [A]^{-1}\{y\} \tag{A.29}$$

Noting that

$$[A]^{-1}[A]\{x\} = ([A]^{-1}[A])\{x\} = [I]\{x\} = \{x\} \tag{A.30}$$

the solution for the simultaneous equations is given by Equation A.29 directly as

$$\{x\} = [A]^{-1}\{y\} \tag{A.31}$$

While presented in the context of a system of three equations, the result represented by Equation A.31 is applicable to any number of simultaneous algebraic equations and gives the unique solution for the system of equations.

The inverse of matrix [A] can be determined in terms of its cofactors and determinant as follows.
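For a 2 × 2 system the recipe {x} = [A]^{-1}{y} can be carried out by hand using the well-known closed-form 2 × 2 inverse. A worked sketch (the numbers are ours, chosen for a clean answer):

```python
# Solve [A]{x} = {y} via {x} = [A]^{-1}{y} for a 2 x 2 system.
A = [[3.0, 1.0],
     [1.0, 2.0]]
y = [9.0, 8.0]

# |A| = a11*a22 - a12*a21
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Closed-form 2 x 2 inverse: swap the diagonal, negate the off-diagonal,
# divide by the determinant.
A_inv = [[ A[1][1] / det_A, -A[0][1] / det_A],
         [-A[1][0] / det_A,  A[0][0] / det_A]]

# {x} = [A]^{-1}{y}
x = [A_inv[0][0] * y[0] + A_inv[0][1] * y[1],
     A_inv[1][0] * y[0] + A_inv[1][1] * y[1]]
```

Substituting x back into the original equations reproduces {y}, confirming the solution.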
Let the cofactor matrix [C] be the square matrix having as elements the cofactors defined in Equation A.19. The adjoint of [A] is defined as

$$\operatorname{adj}[A] = [C]^T \tag{A.32}$$

The inverse of [A] is then formally given by

$$[A]^{-1} = \frac{\operatorname{adj}[A]}{|A|} \tag{A.33}$$

If the determinant of [A] is 0, Equation A.33 shows that the inverse does not exist. In this case, the matrix is said to be singular, and Equation A.31 provides no solution for the system of equations. Singularity of the coefficient matrix indicates one of two possibilities: (1) no solution exists or (2) multiple (nonunique) solutions exist.
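The adjoint construction can be coded directly. A sketch (function names are ours; exact rational arithmetic via the standard-library `Fraction` type avoids round-off, assuming integer entries):

```python
from fractions import Fraction

def det(a):
    """Determinant by cofactor expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(n))

def inverse_adjoint(a):
    """Inverse as adjoint over determinant: adj[A] is the transpose of the
    cofactor matrix, and each entry of the inverse is adj[A]_ij / |A|."""
    n = len(a)
    d = det(a)
    if d == 0:
        raise ValueError("matrix is singular; no inverse exists")
    # Cofactor matrix: C_ij = (-1)^(i+j) |M_ij|.
    C = [[(-1) ** (i + j)
          * det([row[:j] + row[j + 1:] for r, row in enumerate(a) if r != i])
          for j in range(n)] for i in range(n)]
    # Transpose (adjoint) and divide by the determinant.
    return [[Fraction(C[j][i], d) for j in range(n)] for i in range(n)]

A = [[4, 7],
     [2, 6]]
Ainv = inverse_adjoint(A)
```

Multiplying A by Ainv recovers the identity matrix, as Equation A.23 requires.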
In the latter case, the algebraic equations are not linearly independent.

Calculation of the inverse of a matrix per Equation A.33 is cumbersome and not very practical. Fortunately, many more efficient techniques exist. One such technique is the Gauss-Jordan reduction method, which is illustrated using a 2 × 2 matrix:

$$[A] = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \tag{A.34}$$

The gist of the Gauss-Jordan method is to perform simple row and column operations such that the matrix is reduced to an identity matrix. The sequence of operations required to accomplish this reduction produces the inverse. If we divide the first row by a_11, the operation is the same as the multiplication

$$[B_1][A] = \begin{bmatrix} \dfrac{1}{a_{11}} & 0 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
= \begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} \\ a_{21} & a_{22} \end{bmatrix} \tag{A.35}$$

Next, multiply the first row by a_21 and subtract from the second row, which is equivalent to the matrix multiplication

$$[B_2][B_1][A] = \begin{bmatrix} 1 & 0 \\ -a_{21} & 1 \end{bmatrix}
\begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} \\ a_{21} & a_{22} \end{bmatrix}
= \begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} \\ 0 & a_{22} - \dfrac{a_{12} a_{21}}{a_{11}} \end{bmatrix}
= \begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} \\ 0 & \dfrac{|A|}{a_{11}} \end{bmatrix} \tag{A.36}$$

Multiply the second row by a_11/|A|:

$$[B_3][B_2][B_1][A] = \begin{bmatrix} 1 & 0 \\ 0 & \dfrac{a_{11}}{|A|} \end{bmatrix}
\begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} \\ 0 & \dfrac{|A|}{a_{11}} \end{bmatrix}
= \begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} \\ 0 & 1 \end{bmatrix} \tag{A.37}$$

Finally, multiply the second row by a_12/a_11 and subtract from the first row:

$$[B_4][B_3][B_2][B_1][A] = \begin{bmatrix} 1 & -\dfrac{a_{12}}{a_{11}} \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} \\ 0 & 1 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = [I] \tag{A.38}$$

Considering Equation A.23, we see that

$$[A]^{-1} = [B_4][B_3][B_2][B_1] \tag{A.39}$$

and carrying out the multiplications in Equation A.39 results in

$$[A]^{-1} = \frac{1}{|A|} \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix} \tag{A.40}$$

This application of the Gauss-Jordan procedure may appear cumbersome, but the procedure is quite amenable to computer implementation.

A.5 MATRIX PARTITIONING

Any matrix can be subdivided or partitioned into a number of submatrices of lower order.
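Before moving on to partitioning, it is worth sketching how the Gauss-Jordan reduction of Section A.4 generalizes to an n × n matrix: augment [A] with the identity and row-reduce until the left half is [I], at which point the right half is [A]^{-1}. A sketch (the function name is ours, and partial pivoting is an addition of ours for numerical robustness; the text's 2 × 2 illustration does not need it):

```python
def gauss_jordan_inverse(a):
    """Invert an n x n matrix by reducing the augmented matrix [A | I]
    to [I | A^{-1}] with elementary row operations."""
    n = len(a)
    # Augment each row of A with the corresponding row of the identity.
    aug = [[float(v) for v in row] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # Partial pivoting: bring up the row with the largest entry in this column.
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[piv][col]) < 1e-12:
            raise ValueError("matrix is singular")
        aug[col], aug[piv] = aug[piv], aug[col]
        # Scale the pivot row so the pivot element becomes 1.
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    # The right half of the augmented matrix is now the inverse.
    return [row[n:] for row in aug]

# Agreement with the closed form of Equation A.40 for a 2 x 2 matrix:
Ainv = gauss_jordan_inverse([[4.0, 7.0], [2.0, 6.0]])
```

For [[4, 7], [2, 6]], Equation A.40 gives (1/10)[[6, -7], [-2, 4]], which the routine reproduces to floating-point accuracy.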