This means that we don’t have to commit ourselves as to whether the diagonal element β_jj is the one that happens to fall on the diagonal in the first instance, or whether one of the (undivided) α_ij’s below it in the column, i = j + 1, . . . , N, is to be “promoted” to become the diagonal β. This can be decided after all the candidates in the column are in hand. As you should be able to guess by now, we will choose the largest one as the diagonal β (pivot element), then do all the divisions by that element en masse. This is Crout’s method with partial pivoting. Our implementation has one additional wrinkle: It initially finds the largest element in each row, and subsequently (when it is looking for the maximal pivot element) scales the comparison as if we had initially scaled all the equations to make their maximum coefficient equal to unity; this is the implicit pivoting mentioned in §2.1.

```c
#include <math.h>
#include "nrutil.h"
#define TINY 1.0e-20            /* A small number. */

void ludcmp(float **a, int n, int *indx, float *d)
/* Given a matrix a[1..n][1..n], this routine replaces it by the LU decomposition
   of a rowwise permutation of itself. a and n are input. a is output, arranged
   as in equation (2.3.14) above; indx[1..n] is an output vector that records the
   row permutation effected by the partial pivoting; d is output as ±1 depending
   on whether the number of row interchanges was even or odd, respectively. This
   routine is used in combination with lubksb to solve linear equations or
   invert a matrix. */
{
    int i,imax,j,k;
    float big,dum,sum,temp;
    float *vv;                  /* vv stores the implicit scaling of each row. */

    vv=vector(1,n);
    *d=1.0;                     /* No row interchanges yet. */
    for (i=1;i<=n;i++) {        /* Loop over rows to get the implicit scaling
                                   information. */
        big=0.0;
        for (j=1;j<=n;j++)
            if ((temp=fabs(a[i][j])) > big) big=temp;
        if (big == 0.0) nrerror("Singular matrix in routine ludcmp");
        /* No nonzero largest element. */
        vv[i]=1.0/big;          /* Save the scaling. */
    }
    for (j=1;j<=n;j++) {        /* This is the loop over columns of Crout’s
                                   method. */
        for (i=1;i<j;i++) {     /* This is equation (2.3.12) except for i = j. */
            sum=a[i][j];
            for (k=1;k<i;k++) sum -= a[i][k]*a[k][j];
            a[i][j]=sum;
        }
        big=0.0;                /* Initialize for the search for the largest
                                   pivot element. */
        for (i=j;i<=n;i++) {    /* This is i = j of equation (2.3.12) and
                                   i = j+1 . . . N of equation (2.3.13). */
            sum=a[i][j];
            for (k=1;k<j;k++)
                sum -= a[i][k]*a[k][j];
            a[i][j]=sum;
            if ( (dum=vv[i]*fabs(sum)) >= big) {
                /* Is the figure of merit for the pivot better than the best
                   so far? */
                big=dum;
                imax=i;
            }
        }
        if (j != imax) {        /* Do we need to interchange rows? */
            for (k=1;k<=n;k++) {    /* Yes, do so... */
                dum=a[imax][k];
                a[imax][k]=a[j][k];
                a[j][k]=dum;
            }
            *d = -(*d);         /* ...and change the parity of d. */
            vv[imax]=vv[j];     /* Also interchange the scale factor. */
        }
        indx[j]=imax;
        if (a[j][j] == 0.0) a[j][j]=TINY;
        /* If the pivot element is zero the matrix is singular (at least to the
           precision of the algorithm). For some applications on singular
           matrices, it is desirable to substitute TINY for zero. */
        if (j != n) {           /* Now, finally, divide by the pivot element. */
            dum=1.0/(a[j][j]);
            for (i=j+1;i<=n;i++) a[i][j] *= dum;
        }
    }                           /* Go back for the next column in the reduction. */
    free_vector(vv,1,n);
}
```

Here is the routine for forward substitution and backsubstitution, implementing equations (2.3.6) and (2.3.7).

```c
void lubksb(float **a, int n, int *indx, float b[])
/* Solves the set of n linear equations A·X = B. Here a[1..n][1..n] is input,
   not as the matrix A but rather as its LU decomposition, determined by the
   routine ludcmp. indx[1..n] is input as the permutation vector returned by
   ludcmp. b[1..n] is input as the right-hand side vector B, and returns with
   the solution vector X. a, n, and indx are not modified by this routine and
   can be left in place for successive calls with different right-hand sides b.
   This routine takes into account the possibility that b will begin with many
   zero elements, so it is efficient for use in matrix inversion. */
{
    int i,ii=0,ip,j;
    float sum;

    for (i=1;i<=n;i++) {        /* When ii is set to a positive value, it will
                                   become the index of the first nonvanishing
                                   element of b. We now do the forward
                                   substitution, equation (2.3.6). The only new
                                   wrinkle is to unscramble the permutation as
                                   we go. */
        ip=indx[i];
        sum=b[ip];
        b[ip]=b[i];
        if (ii)
            for (j=ii;j<=i-1;j++) sum -= a[i][j]*b[j];
        else if (sum) ii=i;     /* A nonzero element was encountered, so from
                                   now on we will have to do the sums in the
                                   loop above. */
        b[i]=sum;
    }
    for (i=n;i>=1;i--) {        /* Now we do the backsubstitution, equation
                                   (2.3.7). */
        sum=b[i];
        for (j=i+1;j<=n;j++) sum -= a[i][j]*b[j];
        b[i]=sum/a[i][i];       /* Store a component of the solution vector X. */
    }                           /* All done! */
}
```

The LU decomposition in ludcmp requires about (1/3)N³ executions of the inner loops (each with one multiply and one add). This is thus the operation count for solving one (or a few) right-hand sides, and is a factor of 3 better than the Gauss-Jordan routine gaussj which was given in §2.1, and a factor of 1.5 better than a Gauss-Jordan routine (not given) that does not compute the inverse matrix. For inverting a matrix, the total count (including the forward and backsubstitution as discussed following equation 2.3.7 above) is (1/3 + 1/6 + 1/2)N³ = N³, the same as gaussj.

To summarize, this is the preferred way to solve the linear set of equations A · x = b:

```c
float **a,*b,d;
int n,*indx;
...
ludcmp(a,n,indx,&d);
lubksb(a,n,indx,b);
```

The answer x will be given back in b.
Your original matrix A will have been destroyed.

If you subsequently want to solve a set of equations with the same A but a different right-hand side b, you repeat only

```c
lubksb(a,n,indx,b);
```

not, of course, with the original matrix A, but with a and indx as were already set by ludcmp.

Inverse of a Matrix

Using the above LU decomposition and backsubstitution routines, it is completely straightforward to find the inverse of a matrix column by column.

```c
#define N ...
float **a,**y,d,*col;
int i,j,*indx;
...
ludcmp(a,N,indx,&d);            /* Decompose the matrix just once. */
for(j=1;j<=N;j++) {             /* Find inverse by columns. */
    for(i=1;i<=N;i++) col[i]=0.0;
    col[j]=1.0;
    lubksb(a,N,indx,col);
    for(i=1;i<=N;i++) y[i][j]=col[i];
}
```

The matrix y will now contain the inverse of the original matrix a, which will have been destroyed. Alternatively, there is nothing wrong with using a Gauss-Jordan routine like gaussj (§2.1) to invert a matrix in place, again destroying the original. Both methods have practically the same operations count.

Incidentally, if you ever have the need to compute A⁻¹ · B from matrices A and B, you should LU decompose A and then backsubstitute with the columns of B instead of with the unit vectors that would give A’s inverse.
Backsubstituting with the columns of B saves a whole matrix multiplication, and is also more accurate.

Determinant of a Matrix

The determinant of an LU decomposed matrix is just the product of the diagonal elements,

    det = ∏_{j=1}^{N} β_jj                (2.3.15)

We don’t, recall, compute the decomposition of the original matrix, but rather a decomposition of a rowwise permutation of it. Luckily, we have kept track of whether the number of row interchanges was even or odd, so we just preface the product by the corresponding sign.
(You now finally know the purpose of setting d in the routine ludcmp.)

Calculation of a determinant thus requires one call to ludcmp, with no subsequent backsubstitutions by lubksb.

```c
#define N ...
float **a,d;
int j,*indx;
...
ludcmp(a,N,indx,&d);            /* This returns d as ±1. */
for(j=1;j<=N;j++) d *= a[j][j];
```

The variable d now contains the determinant of the original matrix a, which will have been destroyed.

For a matrix of any substantial size, it is quite likely that the determinant will overflow or underflow your computer’s floating-point dynamic range. In this case you can modify the loop of the above fragment and (e.g.) divide by powers of ten, to keep track of the scale separately, or (e.g.) accumulate the sum of logarithms of the absolute values of the factors and the sign separately.

Complex Systems of Equations

If your matrix A is real, but the right-hand side vector is complex, say b + id, then (i) LU decompose A in the usual way, (ii) backsubstitute b to get the real part of the solution vector, and (iii) backsubstitute d to get the imaginary part of the solution vector.

If the matrix itself is complex, so that you want to solve the system

    (A + iC) · (x + iy) = (b + id)                (2.3.16)

then there are two possible ways to proceed.
The best way is to rewrite ludcmp and lubksb as complex routines. Complex modulus substitutes for absolute value in the construction of the scaling vector vv and in the search for the largest pivot elements. Everything else goes through in the obvious way, with complex arithmetic used as needed.
(See §§1.2 and 5.4 for discussion of complex arithmetic in C.)

A quick-and-dirty way to solve complex systems is to take the real and imaginary parts of (2.3.16), giving

    A · x − C · y = b
    C · x + A · y = d                (2.3.17)

which can be written as a 2N × 2N set of real equations,

    [ A  −C ]   [ x ]   [ b ]
    [ C   A ] · [ y ] = [ d ]                (2.3.18)

and then solved with ludcmp and lubksb in their present forms. This scheme is a factor of 2 inefficient in storage, since A and C are stored twice.
The 2N × 2N scheme is also a factor of 2 inefficient in time, since the complex multiplies in a complexified version of the routines would each use 4 real multiplies, while the solution of a 2N × 2N problem involves 8 times the work of an N × N one. If you can tolerate these factor-of-two inefficiencies, then equation (2.3.18) is an easy way to proceed.

CITED REFERENCES AND FURTHER READING:

Golub, G.H., and Van Loan, C.F.