Chapter 2. Solution of Linear Algebraic Equations

2.0 Introduction

A set of linear algebraic equations looks like this:

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1N}x_N &= b_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2N}x_N &= b_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + \cdots + a_{3N}x_N &= b_3 \\
\vdots\qquad\qquad &\quad\ \vdots \\
a_{M1}x_1 + a_{M2}x_2 + a_{M3}x_3 + \cdots + a_{MN}x_N &= b_M
\end{aligned}
\tag{2.0.1}
$$

Here the N unknowns x_j, j = 1, 2, ..., N are related by M equations. The coefficients a_ij with i = 1, 2, ..., M and j = 1, 2, ..., N are known numbers, as are the right-hand side quantities b_i, i = 1, 2, ..., M.

Nonsingular versus Singular Sets of Equations

If N = M then there are as many equations as unknowns, and there is a good chance of solving for a unique solution set of x_j's. Analytically, there can fail to be a unique solution if one or more of the M equations is a linear combination of the others, a condition called row degeneracy, or if all equations contain certain variables only in exactly the same linear combination, called column degeneracy. (For square matrices, a row degeneracy implies a column degeneracy, and vice versa.) A set of equations that is degenerate is called singular. We will consider singular matrices in some detail in §2.6.

Numerically, at least two additional things can go wrong:

• While not exact linear combinations of each other, some of the equations may be so close to linearly dependent that roundoff errors in the machine render them linearly dependent at some stage in the solution process.
In this case your numerical procedure will fail, and it can tell you that it has failed.

• Accumulated roundoff errors in the solution process can swamp the true solution. This problem particularly emerges if N is too large. The numerical procedure does not fail algorithmically. However, it returns a set of x's that are wrong, as can be discovered by direct substitution back into the original equations. The closer a set of equations is to being singular, the more likely this is to happen, since increasingly close cancellations will occur during the solution. In fact, the preceding item can be viewed as the special case where the loss of significance is unfortunately total.

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books, diskettes, or CDROMs visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to trade@cup.cam.ac.uk (outside North America).

Much of the sophistication of complicated "linear equation-solving packages" is devoted to the detection and/or correction of these two pathologies. As you work with large linear sets of equations, you will develop a feeling for when such sophistication is needed. It is difficult to give any firm guidelines, since there is no such thing as a "typical" linear problem. But here is a rough idea: Linear sets with N as large as 20 or 50 can be routinely solved in single precision (32 bit floating representations) without resorting to sophisticated methods, if the equations are not close to singular. With double precision (60 or 64 bits), this number can readily be extended to N as large as several hundred, after which point the limiting factor is generally machine time, not accuracy.

Even larger linear sets, N in the thousands or greater, can be solved when the coefficients are sparse (that is, mostly zero), by methods that take advantage of the sparseness. We discuss this further in §2.7.

At the other end of the spectrum, one seems just as often to encounter linear problems which, by their underlying nature, are close to singular. In this case, you might need to resort to sophisticated methods even for the case of N = 10 (though rarely for N = 5). Singular value decomposition (§2.6) is a technique that can sometimes turn singular problems into nonsingular ones, in which case additional sophistication becomes unnecessary.

Matrices

Equation (2.0.1) can be written in matrix form as

$$
\mathbf{A} \cdot \mathbf{x} = \mathbf{b}
\tag{2.0.2}
$$

Here the raised dot denotes matrix multiplication, A is the matrix of coefficients, and b is the right-hand side written as a column vector,

$$
\mathbf{A} = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1N} \\
a_{21} & a_{22} & \cdots & a_{2N} \\
\vdots & \vdots & & \vdots \\
a_{M1} & a_{M2} & \cdots & a_{MN}
\end{pmatrix}
\qquad
\mathbf{b} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_M \end{pmatrix}
\tag{2.0.3}
$$

By convention, the first index on an element a_ij denotes its row, the second index its column. For most purposes you don't need to know how a matrix is stored in a computer's physical memory; you simply reference matrix elements by their two-dimensional addresses, e.g., a34 = a[3][4]. We have already seen, in §1.2, that this C notation can in fact hide a rather subtle and versatile physical storage scheme, "pointer to array of pointers to rows." You might wish to review that section at this point.
Occasionally it is useful to be able to peer through the veil, for example to pass a whole row a[i][j], j = 1, ..., N by the reference a[i].

Tasks of Computational Linear Algebra

We will consider the following tasks as falling in the general purview of this chapter:

• Solution of the matrix equation A·x = b for an unknown vector x, where A is a square matrix of coefficients, raised dot denotes matrix multiplication, and b is a known right-hand side vector (§2.1–§2.10).

• Solution of more than one matrix equation A·x_j = b_j, for a set of vectors x_j, j = 1, 2, ..., each corresponding to a different, known right-hand side vector b_j. In this task the key simplification is that the matrix A is held constant, while the right-hand sides, the b's, are changed (§2.1–§2.10).

• Calculation of the matrix A^{-1} which is the matrix inverse of a square matrix A, i.e., A·A^{-1} = A^{-1}·A = 1, where 1 is the identity matrix (all zeros except for ones on the diagonal). This task is equivalent, for an N × N matrix A, to the previous task with N different b_j's (j = 1, 2, ..., N), namely the unit vectors (b_j = all zero elements except for 1 in the jth component). The corresponding x's are then the columns of the matrix inverse of A (§2.1 and §2.3).

• Calculation of the determinant of a square matrix A (§2.3).

If M < N, or if M = N but the equations are degenerate, then there are effectively fewer equations than unknowns. In this case there can be either no solution, or else more than one solution vector x. In the latter event, the solution space consists of a particular solution x_p added to any linear combination of (typically) N − M vectors (which are said to be in the nullspace of the matrix A). The task of finding the solution space of A involves

• Singular value decomposition of a matrix A.

This subject is treated in §2.6.

In the opposite case there are more equations than unknowns, M > N. When this occurs there is, in general, no solution vector x to equation (2.0.1), and the set of equations is said to be overdetermined. It happens frequently, however, that the best "compromise" solution is sought, the one that comes closest to satisfying all equations simultaneously. If closeness is defined in the least-squares sense, i.e., that the sum of the squares of the differences between the left- and right-hand sides of equation (2.0.1) be minimized, then the overdetermined linear problem reduces to a (usually) solvable linear problem, called the

• Linear least-squares problem.

The reduced set of equations to be solved can be written as the N × N set of equations

$$
(\mathbf{A}^T \cdot \mathbf{A}) \cdot \mathbf{x} = (\mathbf{A}^T \cdot \mathbf{b})
\tag{2.0.4}
$$

where A^T denotes the transpose of the matrix A. Equations (2.0.4) are called the normal equations of the linear least-squares problem. There is a close connection between singular value decomposition and the linear least-squares problem, and the latter is also discussed in §2.6.