Then, if W⁻¹ denotes the modified inverse of W with some elements zeroed,

    |x + x′| = |V · W⁻¹ · Uᵀ · b + x′|
             = |V · (W⁻¹ · Uᵀ · b + Vᵀ · x′)|
             = |W⁻¹ · Uᵀ · b + Vᵀ · x′|                    (2.6.8)

Here the first equality follows from (2.6.7), the second and third from the orthonormality of V. If you now examine the two terms that make up the sum on the right-hand side, you will see that the first one has nonzero j components only where wj ≠ 0, while the second one, since x′ is in the nullspace, has nonzero j components only where wj = 0.
Therefore the minimum length obtains for x′ = 0, q.e.d.

If b is not in the range of the singular matrix A, then the set of equations (2.6.6) has no solution. But here is some good news: If b is not in the range of A, then equation (2.6.7) can still be used to construct a "solution" vector x. This vector x will not exactly solve A · x = b. But, among all possible vectors x, it will do the closest possible job in the least squares sense. In other words, (2.6.7) finds

    x which minimizes r ≡ |A · x − b|                    (2.6.9)

The number r is called the residual of the solution.

The proof is similar to (2.6.8): Suppose we modify x by adding some arbitrary x′. Then A · x − b is modified by adding some b′ ≡ A · x′. Obviously b′ is in the range of A. We then have

    |A · x − b + b′| = |(U · W · Vᵀ) · (V · W⁻¹ · Uᵀ · b) − b + b′|
                     = |(U · W · W⁻¹ · Uᵀ − 1) · b + b′|
                     = |U · [(W · W⁻¹ − 1) · Uᵀ · b + Uᵀ · b′]|
                     = |(W · W⁻¹ − 1) · Uᵀ · b + Uᵀ · b′|                    (2.6.10)

Now, (W · W⁻¹ − 1) is a diagonal matrix which has nonzero j components only for wj = 0, while Uᵀ · b′ has nonzero j components only for wj ≠ 0, since b′ lies in the range of A.
Therefore the minimum obtains for b′ = 0, q.e.d.

Figure 2.6.1 summarizes our discussion of SVD thus far.

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books, diskettes, or CDROMs visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to trade@cup.cam.ac.uk (outside North America).

[Figure 2.6.1 appears here; the diagram itself is not recoverable from this text, only its caption.]

Figure 2.6.1. (a) A nonsingular matrix A maps a vector space into one of the same dimension. The vector x is mapped into b, so that x satisfies the equation A · x = b. (b) A singular matrix A maps a vector space into one of lower dimensionality, here a plane into a line, called the "range" of A. The "nullspace" of A is mapped to zero. The solutions of A · x = d consist of any one particular solution plus any vector in the nullspace, here forming a line parallel to the nullspace. Singular value decomposition (SVD) selects the particular solution closest to zero, as shown. The point c lies outside of the range of A, so A · x = c has no solution. SVD finds the least-squares best compromise solution, namely a solution of A · x = c′, as shown.

In the discussion since equation (2.6.6), we have been pretending that a matrix either is singular or else isn't. That is of course true analytically.
Numerically, however, the far more common situation is that some of the wj's are very small but nonzero, so that the matrix is ill-conditioned. In that case, the direct solution methods of LU decomposition or Gaussian elimination may actually give a formal solution to the set of equations (that is, a zero pivot may not be encountered); but the solution vector may have wildly large components whose algebraic cancellation, when multiplying by the matrix A, may give a very poor approximation to the right-hand vector b. In such cases, the solution vector x obtained by zeroing the small wj's and then using equation (2.6.7) is very often better (in the sense of the residual |A · x − b| being smaller) than both the direct-method solution and the SVD solution where the small wj's are left nonzero.

It may seem paradoxical that this can be so, since zeroing a singular value corresponds to throwing away one linear combination of the set of equations that we are trying to solve. The resolution of the paradox is that we are throwing away precisely a combination of equations that is so corrupted by roundoff error as to be at best useless; usually it is worse than useless since it "pulls" the solution vector way off towards infinity along some direction that is almost a nullspace vector. In doing this, it compounds the roundoff problem and makes the residual |A · x − b| larger.

SVD cannot be applied blindly, then. You have to exercise some discretion in deciding at what threshold to zero the small wj's, and/or you have to have some idea what size of computed residual |A · x − b| is acceptable.

As an example, here is a "backsubstitution" routine svbksb for evaluating equation (2.6.7) and obtaining a solution vector x from a right-hand side b, given that the SVD of a matrix A has already been calculated by a call to svdcmp. Note that this routine presumes that you have already zeroed the small wj's. It does not do this for you. If you haven't zeroed the small wj's, then this routine is just as ill-conditioned as any direct method, and you are misusing SVD.

    #include "nrutil.h"

    void svbksb(float **u, float w[], float **v, int m, int n, float b[], float x[])
    /* Solves A·X = B for a vector X, where A is specified by the arrays u[1..m][1..n],
       w[1..n], v[1..n][1..n] as returned by svdcmp. m and n are the dimensions of a,
       and will be equal for square matrices. b[1..m] is the input right-hand side.
       x[1..n] is the output solution vector. No input quantities are destroyed, so
       the routine may be called sequentially with different b's. */
    {
        int jj,j,i;
        float s,*tmp;

        tmp=vector(1,n);
        for (j=1;j<=n;j++) {            /* Calculate U^T B. */
            s=0.0;
            if (w[j]) {                 /* Nonzero result only if wj is nonzero. */
                for (i=1;i<=m;i++) s += u[i][j]*b[i];
                s /= w[j];              /* This is the divide by wj. */
            }
            tmp[j]=s;
        }
        for (j=1;j<=n;j++) {            /* Matrix multiply by V to get answer. */
            s=0.0;
            for (jj=1;jj<=n;jj++) s += v[j][jj]*tmp[jj];
            x[j]=s;
        }
        free_vector(tmp,1,n);
    }

Note that a typical use of svdcmp and svbksb superficially resembles the typical use of ludcmp and lubksb: In both cases, you decompose the left-hand matrix A just once, and then can use the decomposition either once or many times with different right-hand sides. The crucial difference is the "editing" of the singular values before svbksb is called; the typical editing fragment is shown below.

SVD for Fewer Equations than Unknowns

If you have fewer linear equations M than unknowns N, then you are not expecting a unique solution. Usually there will be an N − M dimensional family of solutions. If you want to find this whole solution space, then SVD can readily do the job. The SVD decomposition will yield N − M zero or negligible wj's, since M < N.
There may be additional zero wj's from any degeneracies in your M equations. Be sure that you find this many small wj's, and zero them before calling svbksb, which will give you the particular solution vector x. As before, the columns of V corresponding to zeroed wj's are the basis vectors whose linear combinations, added to the particular solution, span the solution space.

SVD for More Equations than Unknowns

This situation will occur in Chapter 15, when we wish to find the least-squares solution to an overdetermined set of linear equations.
In tableau, the equations to be solved are

    [ A ] · [ x ] = [ b ]                    (2.6.11)

where the tableau emphasizes that A has more rows (equations) than columns (unknowns). The proofs that we gave above for the square case apply without modification to the case of more equations than unknowns. The least-squares solution vector x is given by (2.6.7), which, with nonsquare matrices, looks like this,

    x = V · [diag(1/wj)] · (Uᵀ · b)                    (2.6.12)

In general, the matrix W will not be singular, and no wj's will need to be set to zero.

The singular-value editing referred to earlier, before the call to svbksb, typically looks like this for a square N × N system:

    #define N ...

    float wmax,wmin,**a,**u,*w,**v,*b,*x;
    int i,j;
    ...
    for(i=1;i<=N;i++)               /* Copy a into u if you don't want it */
        for(j=1;j<=N;j++)           /* to be destroyed.                   */
            u[i][j]=a[i][j];
    svdcmp(u,N,N,w,v);              /* SVD the square matrix a. */
    wmax=0.0;                       /* Will be the maximum singular value obtained. */
    for(j=1;j<=N;j++) if (w[j] > wmax) wmax=w[j];
    /* This is where we set the threshold for singular values allowed to be
       nonzero. The constant is typical, but not universal. You have to
       experiment with your own application. */
    wmin=wmax*1.0e-6;
    for(j=1;j<=N;j++) if (w[j] < wmin) w[j]=0.0;
    svbksb(u,w,v,N,N,b,x);          /* Now we can backsubstitute. */