Press, Teukolsky, Vetterling, Flannery - Numerical Recipes in C
This means that you have to store only a few columns of U and V (the same k ones) and you will be able to recover, with good accuracy, the whole matrix. Note also that it is very efficient to multiply such an approximated matrix by a vector x: You just dot x with each of the stored columns of V, multiply the resulting scalar by the corresponding w_k, and accumulate that multiple of the corresponding column of U. If your matrix is approximated by a small number K of singular values, then this computation of A·x takes only about K(M + N) multiplications, instead of MN for the full matrix.
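As a concrete illustration of that K(M + N) multiplication, here is a minimal sketch; it is not a routine from this book, and the name approx_matvec and the convention that only the K retained columns of U and V are stored (as unit-offset arrays in the style of nrutil) are our own assumptions.

/* Illustrative sketch, not a Numerical Recipes routine: form b = A.x using only
   the K retained singular triplets.  u[1..m][1..kmax] holds the kept columns of U,
   v[1..n][1..kmax] the kept columns of V, and w[1..kmax] the corresponding
   singular values.  Cost is about K(M + N) multiplications. */
void approx_matvec(float **u, float w[], float **v, int m, int n, int kmax,
    float x[], float b[])
{
    int i,k;
    float s;

    for (i=1;i<=m;i++) b[i]=0.0;
    for (k=1;k<=kmax;k++) {
        s=0.0;
        for (i=1;i<=n;i++) s += v[i][k]*x[i];   /* dot x with a stored column of V */
        s *= w[k];                              /* scale by the singular value */
        for (i=1;i<=m;i++) b[i] += s*u[i][k];   /* accumulate that multiple of the column of U */
    }
}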
SVD Algorithm

Here is the algorithm for constructing the singular value decomposition of any matrix. See §11.2–§11.3, and also [4-5], for discussion relating to the underlying method.

#include <math.h>
#include "nrutil.h"

void svdcmp(float **a, int m, int n, float w[], float **v)
Given a matrix a[1..m][1..n], this routine computes its singular value decomposition, A = U·W·V^T. The matrix U replaces a on output. The diagonal matrix of singular values W is output as a vector w[1..n]. The matrix V (not the transpose V^T) is output as v[1..n][1..n].
{
    float pythag(float a, float b);
    int flag,i,its,j,jj,k,l,nm;
    float anorm,c,f,g,h,s,scale,x,y,z,*rv1;

    rv1=vector(1,n);
    g=scale=anorm=0.0;                          Householder reduction to bidiagonal form.
    for (i=1;i<=n;i++) {
        l=i+1;
        rv1[i]=scale*g;
        g=s=scale=0.0;
        if (i <= m) {
            for (k=i;k<=m;k++) scale += fabs(a[k][i]);
            if (scale) {
                for (k=i;k<=m;k++) {
                    a[k][i] /= scale;
                    s += a[k][i]*a[k][i];
                }
                f=a[i][i];
                g = -SIGN(sqrt(s),f);
                h=f*g-s;
                a[i][i]=f-g;
                for (j=l;j<=n;j++) {
                    for (s=0.0,k=i;k<=m;k++) s += a[k][i]*a[k][j];
                    f=s/h;
                    for (k=i;k<=m;k++) a[k][j] += f*a[k][i];
                }
                for (k=i;k<=m;k++) a[k][i] *= scale;
            }
        }
        w[i]=scale*g;
        g=s=scale=0.0;
        if (i <= m && i != n) {
            for (k=l;k<=n;k++) scale += fabs(a[i][k]);
            if (scale) {
                for (k=l;k<=n;k++) {
                    a[i][k] /= scale;
                    s += a[i][k]*a[i][k];
                }
                f=a[i][l];
                g = -SIGN(sqrt(s),f);
                h=f*g-s;
                a[i][l]=f-g;
                for (k=l;k<=n;k++) rv1[k]=a[i][k]/h;
                for (j=l;j<=m;j++) {
                    for (s=0.0,k=l;k<=n;k++) s += a[j][k]*a[i][k];
                    for (k=l;k<=n;k++) a[j][k] += s*rv1[k];
                }
                for (k=l;k<=n;k++) a[i][k] *= scale;
            }
        }
        anorm=FMAX(anorm,(fabs(w[i])+fabs(rv1[i])));
    }
    for (i=n;i>=1;i--) {                        Accumulation of right-hand transformations.
        if (i < n) {
            if (g) {
                for (j=l;j<=n;j++)              Double division to avoid possible underflow.
                    v[j][i]=(a[i][j]/a[i][l])/g;
                for (j=l;j<=n;j++) {
                    for (s=0.0,k=l;k<=n;k++) s += a[i][k]*v[k][j];
                    for (k=l;k<=n;k++) v[k][j] += s*v[k][i];
                }
            }
            for (j=l;j<=n;j++) v[i][j]=v[j][i]=0.0;
        }
        v[i][i]=1.0;
        g=rv1[i];
        l=i;
    }
    for (i=IMIN(m,n);i>=1;i--) {                Accumulation of left-hand transformations.
        l=i+1;
        g=w[i];
        for (j=l;j<=n;j++) a[i][j]=0.0;
        if (g) {
            g=1.0/g;
            for (j=l;j<=n;j++) {
                for (s=0.0,k=l;k<=m;k++) s += a[k][i]*a[k][j];
                f=(s/a[i][i])*g;
                for (k=i;k<=m;k++) a[k][j] += f*a[k][i];
            }
            for (j=i;j<=m;j++) a[j][i] *= g;
        } else for (j=i;j<=m;j++) a[j][i]=0.0;
        ++a[i][i];
    }
    for (k=n;k>=1;k--) {                        Diagonalization of the bidiagonal form: Loop over
        for (its=1;its<=30;its++) {             singular values, and over allowed iterations.
            flag=1;
            for (l=k;l>=1;l--) {                Test for splitting.
                nm=l-1;                         Note that rv1[1] is always zero.
                if ((float)(fabs(rv1[l])+anorm) == anorm) {
                    flag=0;
                    break;
                }
                if ((float)(fabs(w[nm])+anorm) == anorm) break;
            }
            if (flag) {
                c=0.0;                          Cancellation of rv1[l], if l > 1.
                s=1.0;
                for (i=l;i<=k;i++) {
                    f=s*rv1[i];
                    rv1[i]=c*rv1[i];
                    if ((float)(fabs(f)+anorm) == anorm) break;
                    g=w[i];
                    h=pythag(f,g);
                    w[i]=h;
                    h=1.0/h;
                    c=g*h;
                    s = -f*h;
                    for (j=1;j<=m;j++) {
                        y=a[j][nm];
                        z=a[j][i];
                        a[j][nm]=y*c+z*s;
                        a[j][i]=z*c-y*s;
                    }
                }
            }
            z=w[k];
            if (l == k) {                       Convergence.
                if (z < 0.0) {                  Singular value is made nonnegative.
                    w[k] = -z;
                    for (j=1;j<=n;j++) v[j][k] = -v[j][k];
                }
                break;
            }
            if (its == 30) nrerror("no convergence in 30 svdcmp iterations");
            x=w[l];                             Shift from bottom 2-by-2 minor.
            nm=k-1;
            y=w[nm];
            g=rv1[nm];
            h=rv1[k];
            f=((y-z)*(y+z)+(g-h)*(g+h))/(2.0*h*y);
            g=pythag(f,1.0);
            f=((x-z)*(x+z)+h*((y/(f+SIGN(g,f)))-h))/x;
            c=s=1.0;                            Next QR transformation:
            for (j=l;j<=nm;j++) {
                i=j+1;
                g=rv1[i];
                y=w[i];
                h=s*g;
                g=c*g;
                z=pythag(f,h);
                rv1[j]=z;
                c=f/z;
                s=h/z;
                f=x*c+g*s;
                g=g*c-x*s;
                h=y*s;
                y *= c;
                for (jj=1;jj<=n;jj++) {
                    x=v[jj][j];
                    z=v[jj][i];
                    v[jj][j]=x*c+z*s;
                    v[jj][i]=z*c-x*s;
                }
                z=pythag(f,h);
                w[j]=z;                         Rotation can be arbitrary if z = 0.
                if (z) {
                    z=1.0/z;
                    c=f*z;
                    s=h*z;
                }
                f=c*g+s*y;
                x=c*y-s*g;
                for (jj=1;jj<=m;jj++) {
                    y=a[jj][j];
                    z=a[jj][i];
                    a[jj][j]=y*c+z*s;
                    a[jj][i]=z*c-y*s;
                }
            }
            rv1[l]=0.0;
            rv1[k]=f;
            w[k]=x;
        }
    }
    free_vector(rv1,1,n);
}

#include <math.h>
#include "nrutil.h"

float pythag(float a, float b)
Computes (a^2 + b^2)^(1/2) without destructive underflow or overflow.
{
    float absa,absb;
    absa=fabs(a);
    absb=fabs(b);
    if (absa > absb) return absa*sqrt(1.0+SQR(absb/absa));
    else return (absb == 0.0 ? 0.0 : absb*sqrt(1.0+SQR(absa/absb)));
}

(Double precision versions of svdcmp, svbksb, and pythag, named dsvdcmp, dsvbksb, and dpythag, are used by the routine ratlsq in §5.13. You can easily make the conversions, or else get the converted routines from the Numerical Recipes diskette.)
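For readers who want to see the routine in action, here is a minimal calling sketch. It is our own illustration rather than part of the book; it assumes only the nrutil allocation routines (matrix, vector, free_matrix, free_vector) and uses arbitrary test data.

#include <stdio.h>
#include <math.h>
#include "nrutil.h"

void svdcmp(float **a, int m, int n, float w[], float **v);

/* Illustrative driver (not from the book): decompose an arbitrary 3 x 3 matrix
   and print its singular values.  On return, a holds U, w holds the singular
   values, and v holds the matrix V. */
int main(void)
{
    int i,j,m=3,n=3;
    float **a,**v,*w;

    a=matrix(1,m,1,n);
    v=matrix(1,n,1,n);
    w=vector(1,n);
    for (i=1;i<=m;i++)
        for (j=1;j<=n;j++) a[i][j]=(float)(i+j);    /* arbitrary test data */
    svdcmp(a,m,n,w,v);
    for (j=1;j<=n;j++) printf("w[%d] = %g\n",j,w[j]);
    free_vector(w,1,n);
    free_matrix(v,1,n,1,n);
    free_matrix(a,1,m,1,n);
    return 0;
}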
CITED REFERENCES AND FURTHER READING:

Golub, G.H., and Van Loan, C.F. 1989, Matrix Computations, 2nd ed. (Baltimore: Johns Hopkins University Press), §8.3 and Chapter 12.

Lawson, C.L., and Hanson, R. 1974, Solving Least Squares Problems (Englewood Cliffs, NJ: Prentice-Hall), Chapter 18.

Forsythe, G.E., Malcolm, M.A., and Moler, C.B. 1977, Computer Methods for Mathematical Computations (Englewood Cliffs, NJ: Prentice-Hall), Chapter 9. [1]

Wilkinson, J.H., and Reinsch, C. 1971, Linear Algebra, vol. II of Handbook for Automatic Computation (New York: Springer-Verlag), Chapter I.10 by G.H. Golub and C. Reinsch. [2]

Dongarra, J.J., et al. 1979, LINPACK User's Guide (Philadelphia: S.I.A.M.), Chapter 11. [3]

Smith, B.T., et al. 1976, Matrix Eigensystem Routines — EISPACK Guide, 2nd ed., vol. 6 of Lecture Notes in Computer Science (New York: Springer-Verlag).

Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), §6.7. [4]

Golub, G.H., and Van Loan, C.F. 1989, Matrix Computations, 2nd ed. (Baltimore: Johns Hopkins University Press), §5.2.6. [5]

2.7 Sparse Linear Systems

A system of linear equations is called sparse if only a relatively small number of its matrix elements a_ij are nonzero. It is wasteful to use general methods of linear algebra on such problems, because most of the O(N^3) arithmetic operations devoted to solving the set of equations or inverting the matrix involve zero operands. Furthermore, you might wish to work problems so large as to tax your available memory space, and it is wasteful to reserve storage for unfruitful zero elements. Note that there are two distinct (and not always compatible) goals for any sparse matrix method: saving time and/or saving space.

We have already considered one archetypal sparse form in §2.4, the band diagonal matrix.
In the tridiagonal case, e.g., we saw that it was possible to save both time (order N instead of N^3) and space (order N instead of N^2). The method of solution was not different in principle from the general method of LU decomposition; it was just applied cleverly, and with due attention to the bookkeeping of zero elements. Many practical schemes for dealing with sparse problems have this same character. They are fundamentally decomposition schemes, or else elimination schemes akin to Gauss-Jordan, but carefully optimized so as to minimize the number of so-called fill-ins, initially zero elements which must become nonzero during the solution process, and for which storage must be reserved.
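As a reminder of why the tridiagonal case costs only order N in both time and storage, here is a minimal self-contained sketch of such a solve. It is our own illustration, not the book's tridag routine of §2.4, and the names trisolve, a, b, c, r, u, and gam are assumptions for this example.

/* Illustrative sketch, not the book's tridag routine: solve a tridiagonal system
   in O(N) time and O(N) storage.  a[2..n] is the subdiagonal, b[1..n] the diagonal,
   c[1..n-1] the superdiagonal, r[1..n] the right-hand side; the solution is
   returned in u[1..n].  gam[1..n] is scratch storage.  No pivoting: assumes the
   quantity bet never becomes zero. */
void trisolve(float a[], float b[], float c[], float r[], float u[],
    float gam[], int n)
{
    int j;
    float bet;

    bet=b[1];
    u[1]=r[1]/bet;
    for (j=2;j<=n;j++) {                /* forward sweep: decomposition and substitution */
        gam[j]=c[j-1]/bet;
        bet=b[j]-a[j]*gam[j];
        u[j]=(r[j]-a[j]*u[j-1])/bet;
    }
    for (j=n-1;j>=1;j--)                /* backsubstitution */
        u[j] -= gam[j+1]*u[j+1];
}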
Direct methods for solving sparse equations, then, depend crucially on the precise pattern of sparsity of the matrix. Patterns that occur frequently, or that are useful as way-stations in the reduction of more general forms, already have special names and special methods of solution. We do not have space here for any detailed review of these. References listed at the end of this section will furnish you with an "in" to the specialized literature, and the following list of buzz words (and Figure 2.7.1) will at least let you hold your own at cocktail parties:

    • tridiagonal
    • band diagonal (or banded) with bandwidth M
    • band triangular
    • block diagonal
    • block tridiagonal
    • block triangular
    • cyclic banded
    • singly (or doubly) bordered block diagonal
    • singly (or doubly) bordered block triangular
    • singly (or doubly) bordered band diagonal
    • singly (or doubly) bordered band triangular
    • other (!)

You should also be aware of some of the special sparse forms that occur in the solution of partial differential equations in two or more dimensions.
See Chapter 19.

If your particular pattern of sparsity is not a simple one, then you may wish to try an analyze/factorize/operate package, which automates the procedure of figuring out how fill-ins are to be minimized. The analyze stage is done once only for each pattern of sparsity. The factorize stage is done once for each particular matrix that fits the pattern. The operate stage is performed once for each right-hand side to be used with the particular matrix.

Figure 2.7.1. Some standard forms for sparse matrices. (a) Band diagonal; (b) block triangular; (c) block tridiagonal; (d) singly bordered block diagonal; (e) doubly bordered block diagonal; (f) singly bordered block triangular; (g) bordered band-triangular; (h) and (i) singly and doubly bordered band diagonal; (j) and (k) other! (after Tewarson) [1]