Shampine, Allen, Pruess - Fundamentals of Numerical Computing, page 14
When the elimination process has been completed, the kth component of PVTIDX (pivot index) is the index of the kth pivot row, and the nth component is set to (-1)^m, where m is the number of interchanges. Computation of the determinant of A requires PVTIDX(NEQ) in FORTRAN or pivot_index[neq-1] in C and C++ (see Exercise 2.23). The argument list for Solve is

SOLVE(A,MAXROW,NEQ,PVTIDX,B)

in FORTRAN; in C and C++ it is

Solve(a,neq,pivot_index,b);

The variables A, MAXROW, NEQ, PVTIDX are as specified in the Factor list. Routine Solve uses the arrays A and PVTIDX as output from Factor and the right-hand side contained in the vector B to solve Ax = b.
The solution x is returned in B.

Example 2.14. To illustrate the codes, we solve a 3 × 3 problem for two right-hand sides b = (39, 3, 2)^T and b = (6, 7, -12)^T. The main program sets up the problem and calls the routines Factor/Solve to obtain a solution. Note that after the call to Factor, the variable FLAG is checked to see if any pivots are zero. If FLAG > 0 and Solve were used, a zero divide error would result.
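The calling pattern just described (Factor once, then Solve once per right-hand side, after checking the flag) can be sketched in C. The routine names, flag convention, and row-major storage below are illustrative stand-ins, not the book's actual Factor/Solve; in particular, the book's PVTIDX also records (-1)^m in its last component, which this sketch omits.

```c
#include <math.h>

/* In-place LU factorization with partial pivoting; a is n x n, row-major.
   pivot_index[k] records the pivot row chosen at stage k.
   Returns 0 on success, or k+1 if the pivot at stage k is exactly zero
   (a stand-in for the book's FLAG convention). */
int factor(int n, double *a, int *pivot_index)
{
    for (int k = 0; k < n; k++) {
        /* find the largest entry in column k on or below the diagonal */
        int p = k;
        for (int i = k + 1; i < n; i++)
            if (fabs(a[i*n + k]) > fabs(a[p*n + k])) p = i;
        pivot_index[k] = p;
        if (p != k)                          /* interchange entire rows k and p */
            for (int j = 0; j < n; j++) {
                double t = a[k*n + j]; a[k*n + j] = a[p*n + j]; a[p*n + j] = t;
            }
        if (a[k*n + k] == 0.0) return k + 1; /* zero pivot: flag it */
        for (int i = k + 1; i < n; i++) {    /* eliminate below the pivot */
            double t = a[i*n + k] / a[k*n + k];
            a[i*n + k] = t;                  /* store the multiplier */
            for (int j = k + 1; j < n; j++)
                a[i*n + j] -= t * a[k*n + j];
        }
    }
    return 0;
}

/* Solve Ax = b using the output of factor; b is overwritten by x. */
void solve(int n, const double *a, const int *pivot_index, double *b)
{
    for (int k = 0; k < n; k++) {            /* apply the recorded interchanges */
        int p = pivot_index[k];
        double t = b[p]; b[p] = b[k]; b[k] = t;
    }
    for (int k = 0; k < n; k++)              /* forward substitution with unit L */
        for (int i = k + 1; i < n; i++)
            b[i] -= a[i*n + k] * b[k];
    for (int i = n - 1; i >= 0; i--) {       /* back substitution with U */
        for (int j = i + 1; j < n; j++)
            b[i] -= a[i*n + j] * b[j];
        b[i] /= a[i*n + i];
    }
}
```

As in the example, factor is called once; solve is then called once for each right-hand side, reusing the stored factorization.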
When FLAG = 0 the routine Solve is used once for each right-hand side to obtain the solutions. Note that Factor is not (and should not be) used twice since it does not act on b. The output is as follows (the floating point values may vary slightly from one computer to another).

Condition number =   106.6428571428571
Solution of the first system
   2.000000000000000
   1.000000000000000
   3.000000000000000
Solution of the second system
   76.75000000000000
  -31.00000000000000
  -4.250000000000000

EXERCISES

2.11 Solve the system in Exercise 2.2 using Factor/Solve. Compare to the true solution.

2.12 Solve the system in Exercise 2.3 using Factor/Solve. Compute the residual.

2.13 The codes Factor/Solve can be used to find the elements of A^-1. The inverse is used to estimate certain statistical parameters. It is also used to study the sensitivity of the solution to errors in the data. Let x^i denote the solution of

Ax^i = b^i,  i = 1, 2, ..., n,

where the ith right-hand side b^i is the ith column of the identity matrix. If we form the matrix X = (x^1, x^2, ..., x^n), it is easy to show that X = A^-1. Do so. Use Factor/Solve to find the inverse of the matrix.

2.14 Consider the linear system of Exercise 2.4.
(a) Using the method of Exercise 2.13, find the inverse of the original linear system. Calculate the exact condition number.
(b) Use Factor/Solve on the system in Exercise 2.4b. What is COND? Is the condition number inequality (2.22) valid for this problem? (Use 0.003333.)

2.15 Suppose we are given the electrical network shown in Figure 2.1, and we desire to find the potentials at junctions (1) through (6). The potential applied between A and B is V volts. Denoting the potentials by v1, v2, ..., v6, application of Ohm's law and Kirchhoff's current law yields the following set of linear equations for the vi:

11v1 - 5v2 - v6 = 5V
-20v1 + 41v2 - 15v3 - 6v5 = 0
-3v2 + 7v3 - 4v4 = 0
-v3 + 2v4 - v5 = 0
-3v2 - 10v4 + 28v5 - 15v6 = 0
-2v1 - 15v5 + 47v6 = 0.

Solve when V = 50.

2.16 The system

0.473x1 - 0.115x2           = b1
0.731x1 - 0.391x2 + 0.267x3 = b2
         -0.782x2 + 0.979x3 = b3

is singular.
(a) Apply Factor to the coefficient matrix. What is the smallest pivot? Is it near the unit roundoff u? Is it near underflow? Is COND large? The results you obtain will depend on the hardware and software that you use. If Factor turns up a pivot that is exactly zero, perturb the coefficient 0.979 by a very small amount so as to get a system that is computationally nonsingular for the rest of this problem. Adding 10^-14 to the coefficient will suffice for a number of configurations.
(b) Use Factor/Solve to compute x for b = (0.084, 0.357, 0.833). Is there any indication of singularity in the answer?
(c) Use Factor/Solve to compute x for b = (0.566, 0.404, 0.178). Is there any indication of singularity in the answer?
(d) Compute the residuals for (b) and for (c). Do they give any indication of singularity?

2.17 In analyzing environmental samples taken from the atmosphere, a simple model with m samples and n sources and chemicals produces AX = B, where a_ik is the average concentration of element i from source k, x_kj is the mass of particles from source k contributing to sample j, b_ij is the concentration of element i in sample j, and 1 ≤ i ≤ n, 1 ≤ k ≤ n, 1 ≤ j ≤ m. If m = 4, n = 3, then
(a) What is X? What does COND tell you about the reliability of this result? First assume exact data, then that the entries of A and B are rounded to the displayed values.
(b) Use the method of Exercise 2.13 to compute A^-1. What is the exact cond(A)?
(c) What does (2.24) tell you about the sensitivity of x21 to changes in b11? Replace b11 by 1.43 and recalculate x21. Do the numerical answers confirm the theory? Here you are to consider relative changes to the data and the solution.

2.18 Consider the linear system
(a) Solve for x using Factor/Solve.
(b) If each entry in A and b might have an error of ±0.0005, how reliable is x?
(c) Make arbitrary changes of ±0.0005 in the elements of A to get A + ΔA and in the elements of b to get b + Δb. Solve the perturbed system to get x + Δx. Calculate Δx. Is this consistent with (b)? What is the relative change in each xi?

Figure 2.1 Circuit for Exercise 2.15.

2.5 MATRICES WITH SPECIAL STRUCTURE

Most general-purpose linear system solvers are based on Gaussian elimination with partial pivoting.
When the matrix A has special properties, it is possible to reduce the storage and the cost of solving linear systems very substantially. This section takes up important examples. When it is possible to factor the matrix without pivoting, the factorization is both faster and requires less storage. There are two common kinds of matrices for which it is not necessary to pivot.
An n × n matrix A is said to be diagonally dominant (by columns) if for each column j

|a_jj| > sum_{i ≠ j} |a_ij|.

This says that the entry on the diagonal is the biggest one in the column, and by some margin. An induction argument can be used to show that Gaussian elimination applied to a nonsingular, diagonally dominant matrix will always select the entry on the diagonal as the pivot, and hence do no row interchanges. The matrix A is said to be symmetric if A^T = A. A symmetric matrix is positive definite if for any vector v ≠ 0, the quantity v^T A v > 0. For positive definite matrices it is not only possible to dispense with pivoting, but even to exploit symmetry by working with a variant of Gaussian elimination.

BAND MATRICES

Recall that in the basic elimination algorithm, the innermost loop can be omitted when the multiplier t = 0.
This reflects the fact that the variable is not present in this equation and so does not need to be eliminated. When the matrix A is already "close" to an upper triangular matrix, testing for a zero multiplier can save quite a bit of work. A kind of matrix that is extremely important in several areas of computation is a banded matrix. A matrix A = (a_ij) is said to be banded when all the nonzero elements lie in a band about the diagonal. Specifically, when a_ij = 0 if i - j > ml and j - i > mu, the matrix is said to have lower band width ml, upper band width mu, and band width ml + mu + 1. An example of a matrix with ml = 2 and mu = 1 is

x x 0 0 0 0
x x x 0 0 0
x x x x 0 0
0 x x x x 0
0 0 x x x x
0 0 0 x x x

Here x indicates an entry that might not be zero.
When elimination is performed on such a matrix, at most ml elements have to be eliminated at each stage. Examination of the elements above the diagonal shows that many zero elements remain zero. Indeed, partial pivoting will leave zeros in all positions with j - i > mu + ml. As with zero multipliers, we can speed up the computation by recognizing elements that are zero and stay zero. Another important observation is that by using a special storage scheme, there is no need to provide storage for elements with i - j > ml or j - i > mu + ml. Codes implementing a special version of Gaussian elimination for banded matrices can be found in either LINPACK [5] or LAPACK [1]. Although it is a little more trouble to set up A in the special storage format used, the advantages can be great.
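One way such a special storage scheme can work is to pack the diagonals of A into an (ml + mu + 1) × n array, in the spirit of (though not identical to) the LINPACK/LAPACK band formats. The 0-based layout below, with entry a(i,j) stored at row mu + i - j of the compact array, is an illustrative choice, not the book's:

```c
/* Compact band storage: A is n x n with lower band width ml and upper band
   width mu. Entry a(i,j) (0-based indices), which can be nonzero only when
   -mu <= i - j <= ml, is stored at ab[(mu + i - j)*n + j] in an
   (ml + mu + 1) x n row-major array ab. */
double band_get(const double *ab, int n, int ml, int mu, int i, int j)
{
    if (i - j > ml || j - i > mu) return 0.0;  /* outside the band: zero */
    return ab[(mu + i - j) * n + j];
}

void band_set(double *ab, int n, int ml, int mu, int i, int j, double v)
{
    if (i - j <= ml && j - i <= mu)            /* ignore writes outside the band */
        ab[(mu + i - j) * n + j] = v;
}
```

With this mapping, column j of A lands in column j of the compact array, and each diagonal of A becomes a row of it.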
The numerical results are identical, but the storage in the banded case is roughly n(2ml + mu + 1) instead of n^2. The operation count for the decomposition is comparable to n ml (ml + mu) instead of n^3/3, and there is a similar advantage for the forward and back substitution. Complete details are found in [5], [1], or [9]. The main point is that when n is large and ml and mu are small, tremendous savings are possible. This is what makes solution of systems with n = 10^3 and, say, ml = mu = 5 a routine matter when solving differential equations. It will be convenient now to derive an alternative form of the Gaussian elimination algorithm.
Assume the decomposition A = LU exists with L lower triangular and U upper triangular. First note that

a_11 = l_11 u_11

because the matrices are triangular. Choose l_11; then

u_11 = a_11 / l_11.

For i > 1,

a_i1 = l_i1 u_11,

so

l_i1 = a_i1 / u_11.

Also, for j > 1,

a_1j = l_11 u_1j,

and then

u_1j = a_1j / l_11.

In general we form a column of L and a row of U at a time. Suppose we have computed columns 1, ..., k - 1 of L and rows 1, ..., k - 1 of U. Then

a_kk = sum_{p<k} l_kp u_pk + l_kk u_kk.

The terms in the sum on the right are known. Choose l_kk and then

u_kk = ( a_kk - sum_{p<k} l_kp u_pk ) / l_kk.

Now for i > k,

a_ik = sum_{p<k} l_ip u_pk + l_ik u_kk.

The terms in the sum on the right are known, so

l_ik = ( a_ik - sum_{p<k} l_ip u_pk ) / u_kk.

Similarly, for j > k,

a_kj = sum_{p<k} l_kp u_pj + l_kk u_kj,

and

u_kj = ( a_kj - sum_{p<k} l_kp u_pj ) / l_kk.

If all the diagonal elements of L are taken to be 1, this algorithm is Gaussian elimination without pivoting. Later it will prove useful to choose other values for these elements. In our discussion of elimination applied to a band matrix A, we observed that quite a lot of storage and work could be saved. The situation is even better when no pivoting is done. If A is a band matrix with lower band width ml and upper band width mu, then L is also a band matrix with lower band width ml and U is a band matrix with upper band width mu.
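Taking all diagonal elements of L equal to 1, the column-of-L, row-of-U recipe can be sketched compactly in C. The following is an illustrative implementation, not code from the book, and it does no pivoting, so it is suitable only for matrices like those discussed in this section:

```c
/* LU factorization without pivoting, forming one row of U and one column of
   L at a time (diagonal of L fixed at 1, so it is not stored). On return,
   a (n x n, row-major) holds U on and above the diagonal and the strict
   lower triangle of L below it. Returns 0 on success, or k+1 if the pivot
   u_kk turns out to be exactly zero. */
int lu_nopivot(int n, double *a)
{
    for (int k = 0; k < n; k++) {
        /* row k of U: u_kj = a_kj - sum_{p<k} l_kp u_pj, for j >= k */
        for (int j = k; j < n; j++)
            for (int p = 0; p < k; p++)
                a[k*n + j] -= a[k*n + p] * a[p*n + j];
        if (a[k*n + k] == 0.0) return k + 1;
        /* column k of L: l_ik = (a_ik - sum_{p<k} l_ip u_pk) / u_kk, i > k */
        for (int i = k + 1; i < n; i++) {
            for (int p = 0; p < k; p++)
                a[i*n + k] -= a[i*n + p] * a[p*n + k];
            a[i*n + k] /= a[k*n + k];
        }
    }
    return 0;
}
```

Because each a_ij is touched only by terms involving earlier rows of U and earlier columns of L, the factors can be written over A as they are computed, just as the text describes.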
If we choose the diagonal elements of L to be 1, it is not necessary to store them and, as is the case for full matrices, the factors L and U can be written over A as they are computed. These statements about the form of L and U follow by examination of the recipe for their computation when the form of A is taken into account. Of course, a special storage scheme is needed to exploit the fact that only elements in a band about the diagonal can be nonzero. This approach to solving banded linear systems is of great practical importance. In Chapter 3, matrices with ml = 1 = mu arise in the fitting of splines to data. They are examples of nonsingular diagonally dominant matrices for which we can be sure that pivoting is not needed for stability of the algorithm.
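For the tridiagonal case ml = 1 = mu just mentioned, the factorization collapses to a single multiplier per row and the whole solve costs O(n). A sketch, assuming a nonsingular diagonally dominant tridiagonal matrix stored as three vectors (the vector names are an illustrative convention, not the book's):

```c
/* Solve a tridiagonal system: subdiagonal sub[1..n-1], diagonal diag[0..n-1],
   superdiagonal sup[0..n-2], right-hand side b[0..n-1]. Factor-and-solve
   with no pivoting, which is safe when the matrix is nonsingular and
   diagonally dominant, as for the spline systems of Chapter 3.
   diag and b are overwritten; the solution is returned in b. */
void tridiag_solve(int n, const double *sub, double *diag,
                   const double *sup, double *b)
{
    for (int i = 1; i < n; i++) {        /* elimination: one multiplier per row */
        double t = sub[i] / diag[i - 1];
        diag[i] -= t * sup[i - 1];
        b[i]    -= t * b[i - 1];
    }
    b[n - 1] /= diag[n - 1];
    for (int i = n - 2; i >= 0; i--)     /* back substitution */
        b[i] = (b[i] - sup[i] * b[i + 1]) / diag[i];
}
```

Storage here is three vectors of length n instead of n^2, and the operation count is a few multiplies per equation, which is what makes very large tridiagonal systems routine.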