Using MATLAB (779505), page 43
The expression

    norm(A*x - b)

is equal to

    norm(Q*R*x - b)

Multiplication by orthogonal matrices preserves the Euclidean norm, so this expression is also equal to

    norm(R*x - y)

where y = Q'*b. Since the last m-n rows of R are zero, this expression breaks into two pieces:

    norm(R(1:n,1:n)*x - y(1:n))

and

    norm(y(n+1:m))

When A has full rank, it is possible to solve for x so that the first of these expressions is zero.
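Solving that first piece is just back substitution on the n-by-n upper triangle of R (in MATLAB the backslash operator does this for you). As a language-neutral illustration, here is a minimal pure-Python sketch; the helper name back_substitute is hypothetical, not a MATLAB function:

```python
def back_substitute(R, y):
    """Solve R*x = y for an n-by-n upper triangular, nonsingular R.

    R is a list of row lists, y a list of length n.  This mirrors the
    R(1:n,1:n)*x = y(1:n) step in the QR least squares solution.
    """
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-known components, then divide by the pivot
        s = sum(R[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / R[i][i]
    return x

# 2-by-2 example: R = [2 1; 0 3], y = [5; 6] gives x = [1.5; 2]
x = back_substitute([[2.0, 1.0], [0.0, 3.0]], [5.0, 6.0])
```

Working backwards from the last pivot is what makes the triangular structure of R so convenient computationally.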
Then the second expression gives the norm of the residual. When A does not have full rank, the triangular structure of R makes it possible to find a basic solution to the least squares problem.

Matrix Powers and Exponentials

This section tells you how to obtain the following matrix powers and exponentials in MATLAB:

• Positive integer
• Inverse and fractional
• Element-by-element
• Exponentials

Positive Integer Powers
If A is a square matrix and p is a positive integer, then A^p effectively multiplies A by itself p-1 times.

    X = A^2

    X =
         3     6    10
         6    14    25
        10    25    46

Inverse and Fractional Powers
If A is square and nonsingular, then A^(-p) effectively multiplies inv(A) by itself p-1 times.

    Y = B^(-3)

    Y =
        0.0053   -0.0068    0.0018
       -0.0034    0.0001    0.0036
       -0.0016    0.0070   -0.0051

Fractional powers, like A^(2/3), are also permitted; the results depend upon the distribution of the eigenvalues of the matrix.

Element-by-Element Powers
The .^ operator produces element-by-element powers.
For example,

    X = A.^2

    X =
         1     1     1
         1     4     9
         1     9    36

Exponentials
The function

    sqrtm(A)

computes A^(1/2) by a more accurate algorithm. The m in sqrtm distinguishes this function from sqrt(A) which, like A.^(1/2), does its job element-by-element.

A system of linear, constant coefficient, ordinary differential equations can be written

    dx/dt = Ax

where x = x(t) is a vector of functions of t and A is a matrix independent of t. The solution can be expressed in terms of the matrix exponential,

    x(t) = e^(tA) x(0)

The function

    expm(A)

computes the matrix exponential.
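For intuition, the matrix exponential is defined by the power series e^A = I + A + A^2/2! + A^3/3! + ... . The following is a naive pure-Python sketch of that truncated series (illustrative only, adequate for small well-scaled matrices; expm itself uses a more robust algorithm):

```python
def mat_mul(A, B):
    # plain matrix product for small list-of-lists matrices
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def expm_series(A, terms=30):
    """Approximate e^A by truncating the power series after `terms` terms."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I = A^0/0!
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = mat_mul(term, A)                      # now holds A^k * (k-1)!...
        term = [[v / k for v in row] for row in term]  # ...divided by k -> A^k/k!
        result = [[r + t for r, t in zip(rr, tt)]
                  for rr, tt in zip(result, term)]
    return result

# e^(diag(0, 1)) = diag(1, e), so the (2,2) entry approaches 2.7183
E = expm_series([[0.0, 0.0], [0.0, 1.0]])
```

The series is the definition, not the recommended algorithm; for a matrix with large norm the intermediate terms grow before they shrink, which is why production implementations combine scaling with a rational approximation.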
An example is provided by the 3-by-3 coefficient matrix

    A =
         0    -6    -1
         6     2   -16
        -5    20   -10

and the initial condition, x(0)

    x0 =
         1
         1
         1

The matrix exponential is used to compute the solution, x(t), to the differential equation at 101 points on the interval 0 ≤ t ≤ 1 with

    X = [];
    for t = 0:.01:1
       X = [X expm(t*A)*x0];
    end

A three-dimensional phase plane plot obtained with

    plot3(X(1,:),X(2,:),X(3,:),'-o')

shows the solution spiraling in towards the origin. This behavior is related to the eigenvalues of the coefficient matrix, which are discussed in the next section.

[Figure: three-dimensional phase plane plot of the solution trajectory spiraling in towards the origin.]

Eigenvalues

An eigenvalue and eigenvector of a square matrix A are a scalar λ and a nonzero vector v that satisfy

    Av = λv

This section explains:

• Eigenvalue decomposition
• Problems associated with defective (not diagonalizable) matrices
• The use of Schur decomposition to avoid problems associated with eigenvalue decomposition

Eigenvalue Decomposition
With the eigenvalues on the diagonal of a diagonal matrix Λ and the corresponding eigenvectors forming the columns of a matrix V, we have

    AV = VΛ

If V is nonsingular, this becomes the eigenvalue decomposition

    A = VΛV^(-1)

A good example is provided by the coefficient matrix of the ordinary differential equation in the previous section.

    A =
         0    -6    -1
         6     2   -16
        -5    20   -10

The statement

    lambda = eig(A)

produces a column vector containing the eigenvalues.
For this matrix, the eigenvalues are complex.

    lambda =
      -3.0710
      -2.4645 +17.6008i
      -2.4645 -17.6008i

The real part of each of the eigenvalues is negative, so e^(λt) approaches zero as t increases. The nonzero imaginary part of two of the eigenvalues, ±ω, contributes the oscillatory component, sin(ωt), to the solution of the differential equation.

With two output arguments, eig computes the eigenvectors and stores the eigenvalues in a diagonal matrix.

    [V,D] = eig(A)

    V =
      -0.8326   0.2003 - 0.1394i   0.2003 + 0.1394i
      -0.3553  -0.2110 - 0.6447i  -0.2110 + 0.6447i
      -0.4248  -0.6930            -0.6930

    D =
      -3.0710          0                  0
            0   -2.4645+17.6008i          0
            0          0           -2.4645-17.6008i

The first eigenvector is real and the other two vectors are complex conjugates of each other.
All three vectors are normalized to have Euclidean length, norm(v,2), equal to one.

The matrix V*D*inv(V), which can be written more succinctly as V*D/V, is within roundoff error of A. And, inv(V)*A*V, or V\A*V, is within roundoff error of D.

Defective Matrices
Some matrices do not have an eigenvector decomposition. These matrices are defective, or not diagonalizable. For example,

    A =
         6    12    19
        -9   -20   -33
         4     9    15

For this matrix

    [V,D] = eig(A)

produces

    V =
       -0.4741   -0.4082   -0.4082
        0.8127    0.8165    0.8165
       -0.3386   -0.4082   -0.4082

    D =
       -1.0000         0         0
             0    1.0000         0
             0         0    1.0000

There is a double eigenvalue at λ = 1.
The second and third columns of V are the same. For this matrix, a full set of linearly independent eigenvectors does not exist.

The optional Symbolic Math Toolbox extends MATLAB's capabilities by connecting to Maple, a powerful computer algebra system. One of the functions provided by the toolbox computes the Jordan Canonical Form. This is appropriate for matrices like our example, which is 3-by-3 and has exactly known, integer elements.

    [X,J] = jordan(A)

    X =
       -1.7500    1.5000    2.7500
        3.0000   -3.0000   -3.0000
       -1.2500    1.5000    1.2500

    J =
        -1     0     0
         0     1     1
         0     0     1

The Jordan Canonical Form is an important theoretical concept, but it is not a reliable computational tool for larger matrices, or for matrices whose elements are subject to roundoff errors and other uncertainties.

Schur Decomposition in MATLAB Matrix Computations
MATLAB's advanced matrix computations do not require eigenvalue decompositions.
They are based, instead, on the Schur decomposition

    A = U S U^T

where U is an orthogonal matrix and S is a block upper triangular matrix with 1-by-1 and 2-by-2 blocks on the diagonal. The eigenvalues are revealed by the diagonal elements and blocks of S, while the columns of U provide a basis with much better numerical properties than a set of eigenvectors. The Schur decomposition of our defective example is

    [U,S] = schur(A)

    U =
        0.4741   -0.6571    0.5861
       -0.8127   -0.0706    0.5783
        0.3386    0.7505    0.5675

    S =
       -1.0000   21.3737   44.4161
             0    1.0081    0.6095
             0   -0.0001    0.9919

The double eigenvalue is contained in the lower 2-by-2 block of S.

Note  If A is complex, schur returns the complex Schur form, which is upper triangular with the eigenvalues of A on the diagonal.

Singular Value Decomposition

A singular value and corresponding singular vectors of a rectangular matrix A are a scalar σ and a pair of vectors u and v that satisfy

    Av = σu
    A^T u = σv

With the singular values on the diagonal of a diagonal matrix Σ and the corresponding singular vectors forming the columns of two orthogonal matrices U and V, we have

    AV = UΣ
    A^T U = VΣ

Since U and V are orthogonal, this becomes the singular value decomposition

    A = UΣV^T

The full singular value decomposition of an m-by-n matrix involves an m-by-m U, an m-by-n Σ, and an n-by-n V.
In other words, U and V are both square and Σ is the same size as A. If A has many more rows than columns, the resulting U can be quite large, but most of its columns are multiplied by zeros in Σ. In this situation, the economy sized decomposition saves both time and storage by producing an m-by-n U, an n-by-n Σ, and the same V.

The eigenvalue decomposition is the appropriate tool for analyzing a matrix when it represents a mapping from a vector space into itself, as it does for an ordinary differential equation. On the other hand, the singular value decomposition is the appropriate tool for analyzing a mapping from one vector space into another vector space, possibly with a different dimension.
Most systems of simultaneous linear equations fall into this second category.

If A is square, symmetric, and positive definite, then its eigenvalue and singular value decompositions are the same. But, as A departs from symmetry and positive definiteness, the difference between the two decompositions increases. In particular, the singular value decomposition of a real matrix is always real, but the eigenvalue decomposition of a real, nonsymmetric matrix might be complex.

For the example matrix

    A =
         9     4
         6     8
         2     7

the full singular value decomposition is

    [U,S,V] = svd(A)

    U =
       -0.6105    0.7174    0.3355
       -0.6646   -0.2336   -0.7098
       -0.4308   -0.6563    0.6194

    S =
       14.9359         0
             0    5.1883
             0         0

    V =
       -0.6925    0.7214
       -0.7214   -0.6925

You can verify that U*S*V' is equal to A to within roundoff error.
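That verification can be reproduced outside MATLAB with the printed values. Here is a pure-Python sketch (not MATLAB code) that multiplies the displayed factors back together; agreement with A is only to about three decimals because the displayed entries are rounded to four places:

```python
def mat_mul(A, B):
    # small dense matrix product on lists of rows
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(M):
    return [list(col) for col in zip(*M)]

# the four-decimal factors as printed in the full SVD of A = [9 4; 6 8; 2 7]
U = [[-0.6105,  0.7174,  0.3355],
     [-0.6646, -0.2336, -0.7098],
     [-0.4308, -0.6563,  0.6194]]
S = [[14.9359, 0.0],
     [0.0,     5.1883],
     [0.0,     0.0]]
V = [[-0.6925,  0.7214],
     [-0.7214, -0.6925]]

# U * S * V' should reconstruct A up to the displayed rounding
A_rebuilt = mat_mul(mat_mul(U, S), transpose(V))
```

Note that the third column of U contributes nothing here, since the third row of Σ is zero; that is exactly the observation behind the economy sized decomposition.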
For this small problem, the economy size decomposition is only slightly smaller.

    [U,S,V] = svd(A,0)

    U =
       -0.6105    0.7174
       -0.6646   -0.2336
       -0.4308   -0.6563

    S =
       14.9359         0
             0    5.1883

    V =
       -0.6925    0.7214
       -0.7214   -0.6925

Again, U*S*V' is equal to A to within roundoff error.

12  Polynomials and Interpolation

Polynomials
  Polynomial Function Summary
  Representing Polynomials
  Polynomial Roots
  Characteristic Polynomials
  Polynomial Evaluation
  Convolution and Deconvolution
  Polynomial Derivatives
  Polynomial Curve Fitting
  Partial Fraction Expansion

Interpolation
  Interpolation Function Summary
  One-Dimensional Interpolation
  Two-Dimensional Interpolation
  Comparing Interpolation Methods
  Interpolation and Multidimensional Arrays
  Triangulation and Interpolation of Scattered Data
  Tessellation and Interpolation of Scattered Data in Higher Dimensions
















