Shampine, Allen, Pruess - Fundamentals of Numerical Computing
. . . , xn satisfy (2.1), then they obviously satisfy the set of equations that is the same as (2.1) except for the kth equation, which is (2.3). Conversely, because α ≠ 0, if x1, x2, . . . , xn satisfy this second set of equations, they obviously satisfy the first. Second, suppose we replace equation i by the result of subtracting the multiple α of equation k from equation i:

ai1x1 + ai2x2 + ··· + ainxn - α(ak1x1 + ak2x2 + ··· + aknxn) = bi - αbk.    (2.4)

If x1, x2, . . . , xn satisfy (2.1), then by definition

ai1x1 + ai2x2 + ··· + ainxn = bi

and

ak1x1 + ak2x2 + ··· + aknxn = bk,

so that

(ai1x1 + ai2x2 + ··· + ainxn) - α(ak1x1 + ··· + aknxn) = bi - αbk.

Thus x1, x2, . . . , xn satisfy all the equations of (2.4). To work in reverse, suppose now that x1, x2, . . . , xn satisfy (2.4). Then in particular they satisfy equations i and k, so that

(ai1x1 + ai2x2 + ··· + ainxn) - α(ak1x1 + ··· + aknxn) = bi - αbk

and

ak1x1 + ak2x2 + ··· + aknxn = bk.

Adding α times the second of these equations to the first gives

ai1x1 + ai2x2 + ··· + ainxn = bi,

which is just equation i of (2.1). Thus x1, x2, . . . , xn also satisfy (2.1). Third, writing the equations in a different order clearly does not affect the solution, so interchanging rows results in an equivalent set of equations.

2.1 GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING

Example 2.3. Consider the problem

3x1 + 6x2 + 9x3 = 39
2x1 + 5x2 - 2x3 =  3
 x1 + 3x2 -  x3 =  2.    (2.5)

If we subtract a multiple α of the first equation from the second, we get

(2 - 3α)x1 + (5 - 6α)x2 + (-2 - 9α)x3 = 3 - 39α.

Choosing α = 2/3 makes the coefficient of x1 zero, so that the unknown x1 no longer appears in this equation:

x2 - 8x3 = -23.

We say that we have "eliminated" the unknown x1 from the equation.
Similarly, we eliminate x1 from the third equation of (2.5) by subtracting 1/3 times the first equation from it:

x2 - 4x3 = -11.

The system of equations (2.5) has been reduced to the equivalent system

3x1 + 6x2 + 9x3 = 39
      x2 - 8x3 = -23
      x2 - 4x3 = -11.    (2.6)

Now we set aside the first equation and continue the elimination process with the last two equations in the system (2.6), which involve only the unknowns x2 and x3. Multiply the second equation by 1 and subtract from the third to produce

4x3 = 12,    (2.7)

a single equation in one unknown. The equations (2.6) have now become the equivalent set of equations

3x1 + 6x2 + 9x3 = 39
      x2 - 8x3 = -23
           4x3 =  12.    (2.8)

The system (2.8) is easy to solve. From the last equation of (2.8), x3 = 12/4 = 3. The known value of x3 is then used in the second equation of (2.8) to obtain x2, that is,

x2 = 8x3 - 23 = 8 × 3 - 23 = 1.

Finally, the values for x2 and x3 are used in the first equation of (2.8) to obtain x1,

x1 = (-6x2 - 9x3 + 39)/3 = (-6 × 1 - 9 × 3 + 39)/3 = 2.

Because this set of equations is equivalent to the original set of equations, the solution of (2.5) is x1 = 2, x2 = 1, x3 = 3. ∎

CHAPTER 2 SYSTEMS OF LINEAR EQUATIONS

Let us turn to the general problem (2.1), which we now write with superscripts to help explain what follows:

a11^(1) x1 + a12^(1) x2 + ··· + a1n^(1) xn = b1^(1)
a21^(1) x1 + a22^(1) x2 + ··· + a2n^(1) xn = b2^(1)
    ···
an1^(1) x1 + an2^(1) x2 + ··· + ann^(1) xn = bn^(1).

If a11^(1) ≠ 0, we can eliminate the unknown x1 from each of the succeeding equations. A typical step is to subtract from equation i the multiple mi1 = ai1^(1)/a11^(1) of the first equation. The results will be denoted with a superscript 2.
The step is carried out by first forming

mi1 = ai1^(1)/a11^(1)

and then forming

aij^(2) = aij^(1) - mi1 a1j^(1),   j = 2, . . . , n,

and

bi^(2) = bi^(1) - mi1 b1^(1).

The multiple mi1 of the first equation is chosen to make ai1^(2) = 0, that is, to eliminate the unknown x1 from equation i. Of course, if mi1 = 0, the variable x1 does not appear in equation i, so it does not need to be eliminated. By recognizing this, the arithmetic of elimination can be avoided. Doing this for each i = 2, . . . , n, we arrive at the system

a11^(1) x1 + a12^(1) x2 + ··· + a1n^(1) xn = b1^(1)
             a22^(2) x2 + ··· + a2n^(2) xn = b2^(2)
                 ···
             an2^(2) x2 + ··· + ann^(2) xn = bn^(2).

Notice that if we start out with A stored in a C or FORTRAN array, we can save a considerable amount of storage by overwriting the aij^(1) with the aij^(2) as they are created. Also, we can save the multipliers mi1 in the space formerly occupied by the entries of A and just remember that all the elements below the diagonal in the first column are really zero after elimination. Later we shall see why it is useful to save the multipliers. Similarly, the original vector b can be overwritten with the bi^(2) as they are formed.

Now we set the first equation aside and eliminate x2 from equations i = 3, . . . , n in the same way. If a22^(2) ≠ 0, then for i = 3, 4, . . . , n we first form

mi2 = ai2^(2)/a22^(2)

and then

aij^(3) = aij^(2) - mi2 a2j^(2),   j = 3, . . . , n,

and

bi^(3) = bi^(2) - mi2 b2^(2).

This results in

a11^(1) x1 + a12^(1) x2 + a13^(1) x3 + ··· + a1n^(1) xn = b1^(1)
             a22^(2) x2 + a23^(2) x3 + ··· + a2n^(2) xn = b2^(2)
                          a33^(3) x3 + ··· + a3n^(3) xn = b3^(3)
                              ···
                          an3^(3) x3 + ··· + ann^(3) xn = bn^(3).

As before, we set the first two equations aside and eliminate x3 from equations i = 4, . . . , n. This can be done as long as a33^(3) ≠ 0. The elements a11^(1), a22^(2), a33^(3), . . . are called pivot elements or simply pivots. Clearly, the process can be continued as long as no pivot vanishes. Assuming this to be the case, we finally arrive at

a11^(1) x1 + a12^(1) x2 + ··· + a1n^(1) xn = b1^(1)
             a22^(2) x2 + ··· + a2n^(2) xn = b2^(2)
                 ···
                              ann^(n) xn = bn^(n).    (2.9)

In the computer implementation, the elements of the original matrix A are successively overwritten by the aij^(k) as they are formed, and the multipliers mij are saved in the places corresponding to the variables eliminated.
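The elimination sweep just described can be sketched in a few lines. This is a minimal illustration in Python rather than the C or FORTRAN the text mentions; the function name is ours, and, as the text suggests, each multiplier is stored in the array position it zeroed out.

```python
# A minimal sketch (not the book's code) of forward elimination without
# pivoting.  The multipliers m_ik are stored in the zeroed positions
# below the diagonal, overwriting A in place as described in the text.
def forward_eliminate(A, b):
    """Reduce A x = b to upper triangular form in place."""
    n = len(A)
    for k in range(n - 1):              # eliminate x_{k+1} from rows below
        if A[k][k] == 0.0:
            raise ZeroDivisionError("zero pivot; interchanges are needed")
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]       # multiplier m_ik
            A[i][k] = m                 # save it where the zero would be
            for j in range(k + 1, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    return A, b

# The system (2.5) of Example 2.3:
A = [[3.0, 6.0, 9.0], [2.0, 5.0, -2.0], [1.0, 3.0, -1.0]]
b = [39.0, 3.0, 2.0]
forward_eliminate(A, b)
# Up to roundoff, the entries on and above the diagonal reproduce the
# triangular system (2.8), and the saved multipliers are 2/3, 1/3, and 1.
```

Overwriting A with both the triangular factor and the multipliers is exactly the storage-saving scheme described above: no second array is needed.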
The process of reducing the system of equations (2.1) to the form (2.9) is called (forward) elimination. The result is a system with a kind of coefficient matrix called upper triangular. An upper triangular matrix U = (uij) is one for which

uij = 0 whenever i > j.

It is easy to solve a system of equations (2.9) with an upper triangular matrix by a process known as back substitution. If ann^(n) ≠ 0, we solve the last equation for xn,

xn = bn^(n)/ann^(n).

Using the known value of xn, we then solve equation n - 1 for xn-1, and so forth. A typical step is to solve equation k,

akk^(k) xk + ak,k+1^(k) xk+1 + ··· + akn^(k) xn = bk^(k),

for xk using the previously computed xn, xn-1, . . . , xk+1:

xk = (bk^(k) - ak,k+1^(k) xk+1 - ··· - akn^(k) xn)/akk^(k).

The only way this process can break down (in principle) is if a pivot element is zero.

Example 2.4. Consider the two examples

0 · x1 + 2x2 = 3
4x1 + 5x2 = 6

and

0 · x1 + 2x2 = 3
0 · x1 + 5x2 = 6.

The entry a11 = 0, so it cannot be used as a pivot, but there is a simple remedy for the difficulty. We merely interchange the equations to get the equivalent set

4x1 + 5x2 = 6
0 · x1 + 2x2 = 3.

For this problem the difficulty was easily avoided. This device will not work on the other problem, however, as it is singular. The first equation of this set requires x2 = 3/2 and the second requires x2 = 6/5, so there is no solution at all. ∎

In the general case, suppose we have arrived at

akk^(k) xk + ak,k+1^(k) xk+1 + ··· + akn^(k) xn = bk^(k)
    ···
ank^(k) xk + an,k+1^(k) xk+1 + ··· + ann^(k) xn = bn^(k)

and that the pivot akk^(k) = 0. We examine the elements ajk^(k) in column k for j > k.
If alk^(k) ≠ 0 for some index l, we interchange equations k and l. This does not affect the solution, so we rename the coefficients in the same way as before. The new pivot akk^(k) is the old alk^(k), which was nonzero, so the elimination process can now proceed as usual. If, however, ajk^(k) = 0 for every j ≥ k, we have a difficulty of another sort: the matrix is singular. We prove this by showing that if a solution exists, it cannot be unique. Assume x is a solution to the problem.
Set zk+1 = xk+1, . . . , zn = xn and let zk be arbitrary. The quantities zk, zk+1, . . . , zn satisfy equations k through n because the unknown xk does not appear in any of those equations. Now values for z1, . . . , zk-1 may be determined by back substitution so that equations 1 through k - 1 are satisfied:

zj = (bj^(j) - aj,j+1^(j) zj+1 - ··· - ajn^(j) zn)/ajj^(j),   j = k - 1, k - 2, . . . , 1.

This can be done because none of these pivot elements vanishes. Since all of the equations are satisfied, we have produced a whole family of solutions, namely z1, z2, . . . , zk, xk+1, . . . , xn with zk arbitrary. This shows that the matrix is singular.

Example 2.5. The following problems illustrate how singular systems are revealed during elimination. In the system

 x1 + 2x2 -  x3 = 2
2x1 + 4x2 +  x3 = 7
3x1 + 6x2 - 2x3 = 7,

one step of elimination yields

x1 + 2x2 - x3 = 2
     0 · x2 + 3x3 = 3
     0 · x2 +  x3 = 1.

Since we cannot find a nonzero pivot for the second elimination step, the system is singular. It is not hard to show that the solutions are

x1 = 3 - 2c
x2 = c
x3 = 1

for all real numbers c.
The system

 x1 -  x2 +  x3 =  0
2x1 +  x2 -  x3 = -3
 x1 + 2x2 - 2x3 = -2

is also singular, since two steps of elimination give

x1 - x2 + x3 = 0
     3x2 - 3x3 = -3
          0 · x3 = 1.

In this case there is no solution at all. ∎

We conclude that by using interchanges, the elimination process has a zero pivot only if the original problem is singular.
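This conclusion can be checked numerically. The following sketch (our own helper, not the book's code) performs elimination with row interchanges and reports whether a nonzero pivot could be found at every step; both systems of Example 2.5 are flagged as singular.

```python
# A sketch of elimination with row interchanges that reports singularity
# when no nonzero pivot can be found in the current column.
def eliminate_with_interchange(A, b):
    """Return True if elimination succeeds, False if the matrix is singular."""
    n = len(A)
    for k in range(n):
        # Look for a nonzero pivot in column k among rows k..n-1.
        l = next((j for j in range(k, n) if A[j][k] != 0.0), None)
        if l is None:
            return False            # every candidate pivot is zero: singular
        A[k], A[l] = A[l], A[k]     # interchange equations k and l
        b[k], b[l] = b[l], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    return True

# Both systems of Example 2.5:
A1 = [[1.0, 2.0, -1.0], [2.0, 4.0, 1.0], [3.0, 6.0, -2.0]]
A2 = [[1.0, -1.0, 1.0], [2.0, 1.0, -1.0], [1.0, 2.0, -2.0]]
print(eliminate_with_interchange(A1, [2.0, 7.0, 7.0]))   # False
print(eliminate_with_interchange(A2, [0.0, -3.0, -2.0])) # False
```

Testing a floating point number against exact zero is reasonable here only because these small examples involve exact integer arithmetic; the next paragraphs explain why, with roundoff, the practical question is not whether a pivot is zero but whether it is small.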
This statement is fine in theory, but the distinction between singular and nonsingular problems is blurred in practice by roundoff effects. Unless a pivot is exactly zero, interchange of equations is unnecessary in theory. However, it is plausible that working with a pivot that is almost zero will lead to problems of accuracy in finite precision arithmetic, and this turns out to be the case.

Example 2.6. The following example is due to Forsythe and Moler [6]:

0.000100x1 + 1.00x2 = 1.00
1.00x1 + 1.00x2 = 2.00.

Using three-digit decimal rounded floating point arithmetic, one step of the elimination process without interchanging equations yields for the second equation

[1.00 - (10,000)(1.00)]x2 = [2.00 - (10,000)(1.00)]

or

-10,000x2 = -10,000.

Clearly, x2 = 1.00 and, by back substitution, x1 = 0.00. Notice that all information contained in the second equation was lost at this stage.
This happened because the small pivot caused a large multiplier and subsequently the subtraction of numbers of very different size. With interchange we have

1.00x1 + 1.00x2 = 2.00
          1.00x2 = 1.00

and x1 = 1.00, x2 = 1.00. The true solution is about x1 = 1.00010, x2 = 0.99990. ∎

Small pivot elements may lead to inaccurate results. As we saw in the last example, when eliminating the variable xk in row i, a small pivot element akk^(k) leads to a large multiplier mik = aik^(k)/akk^(k). When aij^(k+1) = aij^(k) - mik akj^(k) is formed, there is a loss of information whenever mik akj^(k) is much larger than aij^(k), information that may be needed later. A large multiplier is also likely to result in a large entry in the upper triangular matrix resulting from elimination.
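The loss of information in Example 2.6 can be reproduced by simulating three-significant-digit decimal arithmetic, rounding after every operation. The helper names below (rnd, solve2) are ours, not the book's, and rounding each intermediate result is only a rough model of decimal hardware, but it recovers the computed solutions of the example.

```python
# A rough simulation of Example 2.6 in three-significant-digit decimal
# rounded arithmetic, with and without the interchange (our own sketch).
def rnd(x):
    """Round to three significant decimal digits."""
    return float(f"{x:.3g}")

def solve2(a11, a12, b1, a21, a22, b2):
    """Eliminate x1 from equation 2, then back substitute, rounding each result."""
    m = rnd(a21 / a11)                      # multiplier
    a22p = rnd(a22 - rnd(m * a12))          # new coefficient of x2
    b2p = rnd(b2 - rnd(m * b1))             # new right-hand side
    x2 = rnd(b2p / a22p)
    x1 = rnd(rnd(b1 - rnd(a12 * x2)) / a11)
    return x1, x2

# Without interchange the small pivot 0.000100 destroys equation 2:
print(solve2(0.000100, 1.00, 1.00, 1.00, 1.00, 2.00))   # (0.0, 1.0)
# With the equations interchanged, both components come out as 1.00:
print(solve2(1.00, 1.00, 2.00, 0.000100, 1.00, 1.00))   # (1.0, 1.0)
```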
In the solution of the corresponding linear system by back substitution, we compute

xk = (bk^(k) - ak,k+1^(k) xk+1 - ··· - akn^(k) xn)/akk^(k).

If the pivot (the denominator) is small and the true value xk is of moderate size, it must be the case that the numerator is also small. But if there are entries akj^(k) of the upper triangular matrix that are large, this is possible only if cancellation occurs in the numerator. The large entries might well have been computed with a modest relative error, but because the entries are large, this leads to a large absolute error in the numerator after cancellation.
The small denominator amplifies this and there is a substantial relative error in xk.

Partial pivoting is the most popular way of avoiding small pivots and controlling the size of the multipliers mik. When we eliminate xk, we select the largest coefficient (in magnitude) of xk in the last n - k + 1 equations as the pivot. That is, if |alk^(k)| is the largest of the |ajk^(k)| for j = k, k + 1, . . . , n, we interchange row k and row l. By renaming the rows we can assume that the pivot akk^(k) has the largest magnitude possible. Partial pivoting avoids small pivots and nicely controls the size of the multipliers:

|mik| = |aik^(k)/akk^(k)| ≤ 1.

Controlling the size of the multipliers moderates the growth of the entries in the upper triangular matrix resulting from elimination. Let a = maxi,j |aij^(1)|. Now

|aij^(2)| = |aij^(1) - mi1 a1j^(1)| ≤ |aij^(1)| + |mi1| |a1j^(1)| ≤ 2a,

and it is easy to go on to show that

|aij^(k)| ≤ 2^(k-1) a.

This implies that

maxi,j,k |aij^(k)| ≤ 2^(n-1) maxi,j |aij^(1)|    (2.10)

when partial pivoting is done. The growth that is possible here is very important to bounds on the error of Gaussian elimination.
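Putting the pieces together, here is a compact sketch (ours, not the book's) of elimination with partial pivoting followed by back substitution. The assertion checks the multiplier bound |mik| ≤ 1 derived above, and the solver is exercised on Example 2.3.

```python
# Gaussian elimination with partial pivoting and back substitution,
# assembled from the steps described in the text (our own sketch).
def solve(A, b):
    n = len(A)
    for k in range(n - 1):
        # Partial pivoting: bring the largest |a_jk|, j >= k, into row k.
        l = max(range(k, n), key=lambda j: abs(A[j][k]))
        A[k], A[l] = A[l], A[k]
        b[k], b[l] = b[l], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            assert abs(m) <= 1.0        # guaranteed by the choice of pivot
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution on the upper triangular system.
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(A[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (b[k] - s) / A[k][k]
    return x

# Example 2.3 again; the computed solution is (2, 1, 3) up to roundoff.
print(solve([[3.0, 6.0, 9.0], [2.0, 5.0, -2.0], [1.0, 3.0, -1.0]],
            [39.0, 3.0, 2.0]))
```

Any row attaining the maximal |ajk^(k)| serves equally well as the pivot; taking the first such row, as here, is the conventional choice.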