in turn for $M = 1, 2, \ldots$ until $M = N$, the desired result, is finally reached. The vector $x_j^{(M)}$ is the result at the $M$th stage, and becomes the desired answer only when $N$ is reached.

Levinson's method is well documented in standard texts (e.g., [5]). The useful fact that the method generalizes to the nonsymmetric case seems to be less well known. At some risk of excessive detail, we therefore give a derivation here, due to G.B. Rybicki.

In following a recursion from step $M$ to step $M + 1$ we find that our developing solution $x^{(M)}$ changes in this way:

The only remaining problem is to develop a recursion relation for $G$.
Before we do that, however, we should point out that there are actually two distinct sets of solutions to the original linear problem for a nonsymmetric matrix, namely right-hand solutions (which we have been discussing) and left-hand solutions $z_i$. The formalism for the left-hand solutions differs only in that we deal with the equations

$$ \sum_{j=1}^{M} R_{j-i}\, z_j^{(M)} = y_i \qquad i = 1, \ldots, M \tag{2.8.20} $$

Then, the same sequence of operations on this set leads to

$$ \sum_{j=1}^{M} R_{i-j}\, H_j^{(M)} = R_i \tag{2.8.21} $$

where

$$ H_j^{(M)} \equiv \frac{z_{M+1-j}^{(M)} - z_{M+1-j}^{(M+1)}}{z_{M+1}^{(M+1)}} \tag{2.8.22} $$

(compare with 2.8.14 – 2.8.15). The reason for mentioning the left-hand solutions now is that, by equation (2.8.21), the $H_j$ satisfy exactly the same equation as the $x_j$ except for the substitution $y_i \to R_i$ on the right-hand side.
Therefore we can quickly deduce from equation (2.8.19) that

$$ H_{M+1}^{(M+1)} = \frac{\sum_{j=1}^{M} R_{M+1-j}\, H_j^{(M)} - R_{M+1}}{\sum_{j=1}^{M} R_{M+1-j}\, G_{M+1-j}^{(M)} - R_0} \tag{2.8.23} $$

By the same token, $G$ satisfies the same equation as $z$, except for the substitution $y_i \to R_{-i}$. This gives

$$ G_{M+1}^{(M+1)} = \frac{\sum_{j=1}^{M} R_{j-M-1}\, G_j^{(M)} - R_{-M-1}}{\sum_{j=1}^{M} R_{j-M-1}\, H_{M+1-j}^{(M)} - R_0} \tag{2.8.24} $$

The same "morphism" also turns equation (2.8.16), and its partner for $z$, into the final equations

$$ G_j^{(M+1)} = G_j^{(M)} - G_{M+1}^{(M+1)} H_{M+1-j}^{(M)} $$
$$ H_j^{(M+1)} = H_j^{(M)} - H_{M+1}^{(M+1)} G_{M+1-j}^{(M)} \tag{2.8.25} $$

Now, starting with the initial values

$$ x_1^{(1)} = y_1/R_0 \qquad G_1^{(1)} = R_{-1}/R_0 \qquad H_1^{(1)} = R_1/R_0 \tag{2.8.26} $$

we can recurse away.
At each stage $M$ we use equations (2.8.23) and (2.8.24) to find $H_{M+1}^{(M+1)}$, $G_{M+1}^{(M+1)}$, and then equation (2.8.25) to find the other components of $H^{(M+1)}$, $G^{(M+1)}$. From there the vectors $x^{(M+1)}$ and/or $z^{(M+1)}$ are easily calculated.

The program below does this. It incorporates the second equation in (2.8.25) in the form

$$ H_{M+1-j}^{(M+1)} = H_{M+1-j}^{(M)} - H_{M+1}^{(M+1)} G_j^{(M)} \tag{2.8.27} $$

so that the computation can be done "in place."

Notice that the above algorithm fails if $R_0 = 0$. In fact, because the bordering method does not allow pivoting, the algorithm will fail if any of the diagonal principal minors of the original Toeplitz matrix vanish. (Compare with the discussion of the tridiagonal algorithm in §2.4.) If the algorithm fails, your matrix is not necessarily singular; you might just have to solve your problem by a slower and more general algorithm such as LU decomposition with pivoting.

The routine that implements equations (2.8.23)–(2.8.27) is also due to Rybicki.
Note that the routine's r[n+j] is equal to $R_j$ above, so that subscripts on the r array vary from 1 to $2N - 1$.

#include "nrutil.h"
#define FREERETURN {free_vector(h,1,n);free_vector(g,1,n);return;}

void toeplz(float r[], float x[], float y[], int n)
/* Solves the Toeplitz system sum_{j=1}^{N} R_{(N+i-j)} x_j = y_i (i = 1, ..., N).
   The Toeplitz matrix need not be symmetric. y[1..n] and r[1..2*n-1] are input
   arrays; x[1..n] is the output array. */
{
    int j,k,m,m1,m2;
    float pp,pt1,pt2,qq,qt1,qt2,sd,sgd,sgn,shn,sxn;
    float *g,*h;

    if (r[n] == 0.0) nrerror("toeplz-1 singular principal minor");
    g=vector(1,n);
    h=vector(1,n);
    x[1]=y[1]/r[n];                       /* Initialize for the recursion. */
    if (n == 1) FREERETURN
    g[1]=r[n-1]/r[n];
    h[1]=r[n+1]/r[n];
    for (m=1;m<=n;m++) {                  /* Main loop over the recursion. */
        m1=m+1;
        sxn = -y[m1];                     /* Compute numerator and denominator for x, */
        sd = -r[n];
        for (j=1;j<=m;j++) {
            sxn += r[n+m1-j]*x[j];
            sd += r[n+m1-j]*g[m-j+1];
        }
        if (sd == 0.0) nrerror("toeplz-2 singular principal minor");
        x[m1]=sxn/sd;                     /* whence x. */
        for (j=1;j<=m;j++) x[j] -= x[m1]*g[m-j+1];
        if (m1 == n) FREERETURN
        sgn = -r[n-m1];                   /* Compute numerator and denominator for G and H, */
        shn = -r[n+m1];
        sgd = -r[n];
        for (j=1;j<=m;j++) {
            sgn += r[n+j-m1]*g[j];
            shn += r[n+m1-j]*h[j];
            sgd += r[n+j-m1]*h[m-j+1];
        }
        if (sd == 0.0 || sgd == 0.0) nrerror("toeplz-3 singular principal minor");
        g[m1]=sgn/sgd;                    /* whence G and H. */
        h[m1]=shn/sd;
        k=m;
        m2=(m+1) >> 1;
        pp=g[m1];
        qq=h[m1];
        for (j=1;j<=m2;j++) {
            pt1=g[j];
            pt2=g[k];
            qt1=h[j];
            qt2=h[k];
            g[j]=pt1-pp*qt2;
            g[k]=pt2-pp*qt1;
            h[j]=qt1-qq*pt2;
            h[k--]=qt2-qq*pt1;
        }
    }                                     /* Back for another recurrence. */
    nrerror("toeplz - should not arrive here!");
}
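As a quick illustration of the storage convention (not part of the original routine), here is a minimal driver sketch. It packs the $2N-1$ coefficients of a small nonsymmetric Toeplitz matrix into r[1..2*n-1] with r[n+j] $= R_j$, calls toeplz, and prints the solution. The main function and the numerical values are invented for this example; vector, free_vector, and the unit-offset arrays are the usual nrutil conventions already used in the listing above.

/* Illustrative driver (assumed, not from the book): solve a 3x3 nonsymmetric
   Toeplitz system with the toeplz routine above. */
#include <stdio.h>
#include "nrutil.h"

void toeplz(float r[], float x[], float y[], int n);

int main(void)
{
    int i, n = 3;
    /* Toeplitz coefficients R_{-2} ... R_2 for A_ij = R_{i-j} (made-up values). */
    float Rm2 = 0.25, Rm1 = 2.0, R0 = 4.0, R1 = 1.0, R2 = 0.5;
    float *r = vector(1, 2*n-1), *x = vector(1, n), *y = vector(1, n);

    r[n-2] = Rm2;                  /* r[n+j] = R_j, j = -(n-1), ..., n-1 */
    r[n-1] = Rm1;
    r[n]   = R0;
    r[n+1] = R1;
    r[n+2] = R2;

    y[1] = 1.0; y[2] = 2.0; y[3] = 3.0;   /* right-hand side */

    toeplz(r, x, y, n);                   /* solve sum_j R_{i-j} x_j = y_i */

    for (i = 1; i <= n; i++) printf("x[%d] = %g\n", i, x[i]);

    free_vector(y, 1, n);
    free_vector(x, 1, n);
    free_vector(r, 1, 2*n-1);
    return 0;
}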
If you are in the business of solving very large Toeplitz systems, you should find out about so-called "new, fast" algorithms, which require only on the order of $N(\log N)^2$ operations, compared to $N^2$ for Levinson's method. These methods are too complicated to include here. Papers by Bunch [6] and de Hoog [7] will give entry to the literature.

CITED REFERENCES AND FURTHER READING:

Golub, G.H., and Van Loan, C.F. 1989, Matrix Computations, 2nd ed. (Baltimore: Johns Hopkins University Press), Chapter 5 [also treats some other special forms].

Forsythe, G.E., and Moler, C.B. 1967, Computer Solution of Linear Algebraic Systems (Englewood Cliffs, NJ: Prentice-Hall), §19. [1]

Westlake, J.R. 1968, A Handbook of Numerical Matrix Inversion and Solution of Linear Equations (New York: Wiley). [2]

von Mises, R. 1964, Mathematical Theory of Probability and Statistics (New York: Academic Press), pp. 394ff. [3]

Levinson, N., Appendix B of N. Wiener, 1949, Extrapolation, Interpolation and Smoothing of Stationary Time Series (New York: Wiley). [4]

Robinson, E.A., and Treitel, S. 1980, Geophysical Signal Analysis (Englewood Cliffs, NJ: Prentice-Hall), pp. 163ff. [5]

Bunch, J.R. 1985, SIAM Journal on Scientific and Statistical Computing, vol. 6, pp. 349–364. [6]

de Hoog, F. 1987, Linear Algebra and Its Applications, vol. 88/89, pp. 123–138. [7]

2.9 Cholesky Decomposition

If a square matrix $\mathbf{A}$ happens to be symmetric and positive definite, then it has a special, more efficient, triangular decomposition. Symmetric means that $a_{ij} = a_{ji}$ for $i, j = 1, \ldots, N$, while positive definite means that

$$ \mathbf{v} \cdot \mathbf{A} \cdot \mathbf{v} > 0 \quad \text{for all vectors } \mathbf{v} \tag{2.9.1} $$

(In Chapter 11 we will see that positive definite has the equivalent interpretation that $\mathbf{A}$ has all positive eigenvalues.) While symmetric, positive definite matrices are rather special, they occur quite frequently in some applications, so their special factorization, called Cholesky decomposition, is good to know about.
When you can use it, Cholesky decomposition is about a factor of two faster than alternative methods for solving linear equations. Instead of seeking arbitrary lower and upper triangular factors $\mathbf{L}$ and $\mathbf{U}$, Cholesky decomposition constructs a lower triangular matrix $\mathbf{L}$ whose transpose $\mathbf{L}^T$ can itself serve as the upper triangular part. In other words we replace equation (2.3.1) by

$$ \mathbf{L} \cdot \mathbf{L}^T = \mathbf{A} \tag{2.9.2} $$

This factorization is sometimes referred to as "taking the square root" of the matrix $\mathbf{A}$. The components of $\mathbf{L}^T$ are of course related to those of $\mathbf{L}$ by

$$ L^T_{ij} = L_{ji} \tag{2.9.3} $$

Writing out equation (2.9.2) in components, one readily obtains the analogs of equations (2.3.12)–(2.3.13),

$$ L_{ii} = \left( a_{ii} - \sum_{k=1}^{i-1} L_{ik}^2 \right)^{1/2} \tag{2.9.4} $$

and

$$ L_{ji} = \frac{1}{L_{ii}} \left( a_{ij} - \sum_{k=1}^{i-1} L_{ik} L_{jk} \right) \qquad j = i+1, i+2, \ldots, N \tag{2.9.5} $$
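To make equations (2.9.4) and (2.9.5) concrete, here is a minimal sketch of the factorization loop in C. It is not the book's own routine: the function name cholesky_factor, the zero-offset row-major storage, and the error convention are assumptions made only for this illustration.

/* Cholesky factorization sketch after equations (2.9.4)-(2.9.5); illustrative,
   not the book's routine. a is the symmetric input matrix, l receives the lower
   triangular factor; both are n x n, stored row-major as a[i*n+j]. Returns -1
   if the matrix is not positive definite. */
#include <math.h>
#include <stdio.h>

int cholesky_factor(const double *a, double *l, int n)
{
    int i, j, k;
    for (j = 0; j < n; j++) {
        for (i = j; i < n; i++) {
            double sum = a[i*n + j];
            for (k = 0; k < j; k++) sum -= l[i*n + k] * l[j*n + k];
            if (i == j) {
                if (sum <= 0.0) return -1;       /* not positive definite */
                l[j*n + j] = sqrt(sum);          /* diagonal, equation (2.9.4) */
            } else {
                l[i*n + j] = sum / l[j*n + j];   /* off-diagonal, equation (2.9.5) */
                l[j*n + i] = 0.0;                /* zero the upper triangle */
            }
        }
    }
    return 0;
}

int main(void)
{
    /* A small symmetric positive definite test matrix (made-up values). */
    double a[9] = { 4.0, 2.0, 1.0,
                    2.0, 5.0, 3.0,
                    1.0, 3.0, 6.0 };
    double l[9];
    int i, j, n = 3;

    if (cholesky_factor(a, l, n) == 0) {
        for (i = 0; i < n; i++) {
            for (j = 0; j < n; j++) printf("%8.4f ", l[i*n + j]);
            printf("\n");
        }
    } else {
        printf("matrix is not positive definite\n");
    }
    return 0;
}

Once $\mathbf{L}$ is in hand, a system $\mathbf{A} \cdot \mathbf{x} = \mathbf{b}$ is solved by a forward substitution with $\mathbf{L}$ followed by a back substitution with $\mathbf{L}^T$, just as with the LU factors of §2.3.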