3.35 Explain why the Householder method requires less storage than the modified Gram-Schmidt method for computing the QR factorization of a matrix A.

3.36 Explain how QR factorization with column pivoting can be used to determine the rank of a matrix.

3.37 Explain why column pivoting can be used with the modified Gram-Schmidt orthogonalization procedure but not with the classical Gram-Schmidt procedure.

3.38 In terms of the condition number of the matrix A, compare the range of applicability of the normal equations method and the Householder QR method for solving the linear least squares problem Ax ≈ b [i.e., for what values of cond(A) can each method be expected to break down?].

Exercises

3.1 If a vertical beam has a downward force applied at its lower end, the amount by which it stretches will be proportional to the magnitude of the force.
Thus, the total length y of the beam is given by the equation

y = x_1 + x_2 t,

where x_1 is its original length, t is the force applied, and x_2 is the proportionality constant. Suppose that the following measurements are taken:

t     y
10    11.60
15    11.85
20    12.25

(a) Set up the overdetermined 3 × 2 system of linear equations corresponding to the data collected.
(b) Is this system consistent? If not, compute each possible pair of values for (x_1, x_2) obtained by selecting any two of the equations from the system. Is there any reason to prefer any one of these results?
(c) Set up the system of normal equations and solve it to obtain the least squares solution to the overdetermined system. Compare your result with those obtained in part b.

3.2 Suppose you are fitting a straight line to the three data points (0,1), (1,2), (3,3).
(a) Set up the overdetermined linear system for the least squares problem.
(b) Set up the corresponding normal equations.
(c) Compute the least squares solution by Cholesky factorization.
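The setup asked for in Exercises 3.1 and 3.2 can be checked numerically. The sketch below (not part of the original text, and assuming NumPy is available) builds the overdetermined system for the beam data of Exercise 3.1, forms the normal equations, and solves them by Cholesky factorization; the same pattern applies to Exercise 3.2 with the matrix built from the points (0,1), (1,2), (3,3).

```python
import numpy as np

# Beam data from Exercise 3.1: total length y = x1 + x2*t.
t = np.array([10.0, 15.0, 20.0])
y = np.array([11.60, 11.85, 12.25])

# Overdetermined 3 x 2 system A x ~ b, with columns [1, t].
A = np.column_stack([np.ones_like(t), t])
b = y

# Normal equations A^T A x = A^T b, solved via Cholesky factorization.
L = np.linalg.cholesky(A.T @ A)      # A^T A = L L^T, L lower triangular
z = np.linalg.solve(L, A.T @ b)      # forward substitution
x = np.linalg.solve(L.T, z)          # back substitution

print("least squares solution:", x)
print("residual norm:", np.linalg.norm(b - A @ x))
```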
3.3 Set up the linear least squares system Ax ≈ b for fitting the model function f(t, x) = x_1 t + x_2 e^t to the three data points (1,2), (2,3), (3,5).

3.4 In fitting a straight line y = x_0 + x_1 t to the three data points (t_i, y_i) = (0,0), (1,0), (1,1), is the least squares solution unique? Why?

3.5 Let x be the solution to the linear least squares problem Ax ≈ b, where

A = \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{bmatrix}.

Let r = b − Ax be the corresponding residual vector. Which of the following three vectors is a possible value for r? Why?

(a) [1, 1, 1, 1]^T    (b) [−1, −1, 1, 1]^T    (c) [−1, 1, 1, −1]^T
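The property being tested in Exercise 3.5 is that a least squares residual must be orthogonal to the column space of A, i.e., A^T r = 0. A brief numerical check of that condition against the three candidate vectors might look as follows (a sketch only, assuming NumPy; the candidates are transcribed from the exercise above).

```python
import numpy as np

# Matrix from Exercise 3.5.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])

# Candidate residual vectors (a), (b), (c).
candidates = {
    "a": np.array([ 1.0,  1.0, 1.0,  1.0]),
    "b": np.array([-1.0, -1.0, 1.0,  1.0]),
    "c": np.array([-1.0,  1.0, 1.0, -1.0]),
}

# A least squares residual r = b - A x must satisfy A^T r = 0,
# i.e., r is orthogonal to every column of A.
for name, r in candidates.items():
    print(name, "A^T r =", A.T @ r)
```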
3.6 (a) What is the Euclidean norm of the minimum residual vector for the following linear least squares problem?

\begin{bmatrix} 1 & 1 \\ 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} ≈ \begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}

(b) What is the solution vector x for this problem?

3.7 Let A be an m × n matrix and b an m-vector.
(a) Prove that a solution to the least squares problem Ax ≈ b always exists.
(b) Prove that such a solution is unique if and only if rank(A) = n.

3.8 Suppose that A is an m × n matrix of rank n. Prove that the matrix A^T A is positive definite.

3.9 Prove that the augmented system matrix in Section 3.3.3 cannot be positive definite.

3.10 Let A be an n × n matrix, and assume that A is both orthogonal and triangular.
(a) Prove that A must be diagonal.
(b) What are the diagonal entries of A?

3.11 Suppose that the partitioned matrix

\begin{bmatrix} A & B \\ O & C \end{bmatrix}

is orthogonal, where the submatrices A and C are square.
Prove that A and C must be orthogonal, and B = O.

3.12 (a) Let A be an n × n matrix. Show that any two of the following conditions imply the other:
1. A^T = A
2. A^T A = I
3. A^2 = I
(b) Give a specific example, other than the identity matrix I or a permutation of it, of a 3 × 3 matrix that has all three of these properties.
(c) Name a nontrivial class of matrices that have all three of these properties.

3.13 Show that if the vector v ≠ 0, then the matrix

H = I − 2 vv^T / (v^T v)

is orthogonal and symmetric.

3.14 Let a be any nonzero vector. If v = a − α e_1, where α = ±‖a‖_2, and

H = I − 2 vv^T / (v^T v),

show that Ha = α e_1.
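Exercises 3.13 and 3.14 describe the Householder reflector H = I − 2vv^T/(v^T v) with v = a − α e_1 and α = ±‖a‖_2. The sketch below (assuming NumPy; the test vector and the sign choice α = −sign(a_1)‖a‖_2, which avoids cancellation, are illustrative assumptions rather than part of the exercises) constructs H and checks the three claimed properties numerically.

```python
import numpy as np

def householder(a):
    """Return (H, alpha) with H = I - 2 v v^T / (v^T v) and H a = alpha * e1."""
    a = np.asarray(a, dtype=float)
    alpha = -np.sign(a[0]) * np.linalg.norm(a)   # sign chosen to avoid cancellation
    v = a.copy()
    v[0] -= alpha                                # v = a - alpha * e1
    H = np.eye(len(a)) - 2.0 * np.outer(v, v) / (v @ v)
    return H, alpha

a = np.array([2.0, 2.0, 1.0])                    # arbitrary example vector, norm 3
H, alpha = householder(a)

print("H a     =", H @ a)                        # ~ [alpha, 0, 0]
print("H^T H   =\n", np.round(H.T @ H, 12))      # ~ identity (orthogonal)
print("H - H^T =\n", np.round(H - H.T, 12))      # ~ zero (symmetric)
```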
3.15 Consider the vector a as an n × 1 matrix.
(a) Write out its QR factorization, showing the matrices Q and R explicitly.
(b) What is the solution to the linear least squares problem ax ≈ b, where b is a given n-vector?

3.16 Determine the Householder transformation that annihilates all but the first entry of the vector [1, 1, 1, 1]^T. Specifically, if

(I − 2 vv^T / (v^T v)) \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} α \\ 0 \\ 0 \\ 0 \end{bmatrix},

what are the values of the scalar α and the vector v?

3.17 Suppose that you are computing the QR factorization of the matrix

\begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 4 \\ 1 & 3 & 9 \\ 1 & 4 & 16 \end{bmatrix}

by Householder transformations.
(a) How many Householder transformations are required?
(b) What does the first column of A become as a result of applying the first Householder transformation?
(c) What does the first column then become as a result of applying the second Householder transformation?
(d) How many Givens rotations would be required to compute the QR factorization of the same matrix?
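As a companion to Exercise 3.17, the following sketch (assuming NumPy; the 4 × 3 test matrix is a generic example, not the matrix from the exercise) carries out a plain Householder QR reduction one column at a time and prints the partially reduced matrix after each reflection, so the effect of successive transformations on the first column can be inspected.

```python
import numpy as np

def householder_qr(A):
    """Reduce A (m x n, m >= n) to upper triangular R by n Householder reflections."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    for k in range(n):
        x = A[k:, k]
        alpha = -np.linalg.norm(x) if x[0] >= 0 else np.linalg.norm(x)
        v = x.copy()
        v[0] -= alpha
        if v @ v > 0:
            Hk = np.eye(m - k) - 2.0 * np.outer(v, v) / (v @ v)
            A[k:, k:] = Hk @ A[k:, k:]     # apply reflection to trailing submatrix
            Q[:, k:] = Q[:, k:] @ Hk       # accumulate Q
        print(f"after reflection {k + 1}:\n", np.round(A, 4))
    return Q, A                            # A now holds R (upper triangular)

A0 = np.array([[1.0, 2.0, 0.0],
               [0.0, 1.0, 1.0],
               [1.0, 0.0, 2.0],
               [1.0, 1.0, 1.0]])
Q, R = householder_qr(A0)
print("max |QR - A| =", np.abs(Q @ R - A0).max())
```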
3.18 Consider the vector

a = \begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix}.

(a) Specify an elementary elimination matrix that annihilates the third component of a.
(b) Specify a Householder transformation that annihilates the third component of a.
(c) Specify a Givens rotation that annihilates the third component of a.
(d) When annihilating a given nonzero component of any vector, is it ever possible for the corresponding elementary elimination matrix and Householder transformation to be the same? Why?
(e) When annihilating a given nonzero component of any vector, is it ever possible for the corresponding Householder transformation and Givens rotation to be the same? Why?

3.19 Suppose you want to annihilate the second component of a vector

a = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}

using a Givens rotation, but a_1 is already zero.
(a) Is it still possible to annihilate a_2 with a Givens rotation? If so, specify an appropriate Givens rotation; if not, explain why.
(b) Under these circumstances, can a_2 be annihilated with an elementary elimination matrix? If so, how? If not, why?
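For the Givens rotations in Exercises 3.18 and 3.19, the standard construction computes c and s from the pair of entries being combined. A minimal sketch follows (assuming NumPy; the test vector is an arbitrary example, not one taken from the exercises).

```python
import numpy as np

def givens(a1, a2):
    """Return (c, s) so that [[c, s], [-s, c]] @ [a1, a2] has zero second component."""
    r = np.hypot(a1, a2)
    if r == 0.0:
        return 1.0, 0.0          # nothing to annihilate; use the identity rotation
    return a1 / r, a2 / r

a = np.array([3.0, 5.0, 4.0])    # arbitrary example vector
c, s = givens(a[0], a[2])        # rotate in the (1, 3) plane to zero the third entry

G = np.array([[  c, 0.0,   s],
              [0.0, 1.0, 0.0],
              [ -s, 0.0,   c]])
print("G a =", G @ a)            # third component ~ 0, first becomes hypot(a[0], a[2])
```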
3.20 A Givens rotation is defined by two parameters, c and s, and therefore would appear to require two storage locations in a computer implementation. The two parameters depend on a single angle of rotation, however, so in principle it should be possible to record the rotation by storing only one number. Devise an algorithm for storing and recovering Givens rotations using only one storage location per rotation.

3.21 Let A be an m × n matrix of rank n. Let

A = Q \begin{bmatrix} R \\ O \end{bmatrix}

be the QR factorization of A, with Q orthogonal and R an n × n upper triangular matrix. Let A^T A = LL^T be the Cholesky factorization of A^T A.
(a) Show that R^T R = LL^T.
(b) Can one conclude that R = L^T? Why?

3.22 In Section 3.3 we observed that the normal equations matrix A^T A is exactly singular in floating-point arithmetic if

A = \begin{bmatrix} 1 & 1 \\ ε & 0 \\ 0 & ε \end{bmatrix},

where ε is a positive number smaller than the square root of machine precision ε_mach in a given floating-point system. Show that if A = QR is the QR factorization for this matrix A, then R is not singular, even in floating-point arithmetic.
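The phenomenon described in Exercise 3.22 is easy to observe numerically. The sketch below (assuming NumPy and IEEE double precision; the particular choice of ε is an illustrative assumption) takes ε just below √ε_mach, so that 1 + ε² rounds to 1, and compares the resulting normal equations matrix with the R factor from a QR factorization.

```python
import numpy as np

eps_mach = np.finfo(float).eps
e = 0.5 * np.sqrt(eps_mach)          # epsilon just below the square root of machine precision

A = np.array([[1.0, 1.0],
              [e,   0.0],
              [0.0, e  ]])

# In floating point, 1 + e**2 rounds to 1, so A^T A becomes exactly singular.
AtA = A.T @ A
print("A^T A =\n", AtA)
print("rank(A^T A) in floating point:", np.linalg.matrix_rank(AtA))

# The R factor from a QR factorization remains nonsingular.
Q, R = np.linalg.qr(A)
print("diagonal of R:", np.diag(R))
```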
3.23 Verify that the dominant terms in the operation count (number of multiplications or number of additions) for solving an m × n linear least squares problem by the normal equations and Cholesky factorization are n^2 m/2 + n^3/6.

3.24 Verify that the dominant terms in the operation count (number of multiplications or number of additions) for QR factorization of an m × n matrix by Householder transformations are n^2 m − n^3/3.

3.25 An n × n matrix P is an orthogonal projector if it is both idempotent (P^2 = P) and symmetric (P = P^T). Such a matrix projects any given n-vector orthogonally onto a subspace (namely, the column space of P) but leaves unchanged any vector that is already in that subspace.
(a) Suppose that Q is an n × k matrix whose columns form an orthonormal basis for a subspace S of R^n. Show that QQ^T is an orthogonal projector onto S.
(b) If A is a matrix with linearly independent columns, show that A(A^T A)^{−1}A^T is an orthogonal projector onto the column space of A. How does this result relate to the linear least squares problem?
(c) If P is an orthogonal projector onto a subspace S, show that I − P is an orthogonal projector onto the orthogonal complement of S.
(d) Let v be any nonzero n-vector. What is the orthogonal projector onto the subspace spanned by v?
(e) In the Gram-Schmidt procedure of Section 3.4.6, if we define the orthogonal projectors P_k = q_k q_k^T, k = 1, ..., n, show that the classical Gram-Schmidt procedure is equivalent to

q_k = (I − (P_1 + ··· + P_{k−1})) a_k,

whereas the modified Gram-Schmidt procedure is equivalent to

q_k = (I − P_{k−1}) ··· (I − P_1) a_k.

(f) An alternative way to stabilize the classical procedure is to apply it more than once (i.e., iterative refinement), which is equivalent to taking

q_k = (I − (P_1 + ··· + P_{k−1}))^m a_k,

where m = 2 is typically sufficient.
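Part (e) of Exercise 3.25 expresses the two Gram-Schmidt variants in terms of the projectors P_k = q_k q_k^T: the classical procedure subtracts all previous projections at once, the modified procedure applies them one at a time. The sketch below (assuming NumPy; the ill-conditioned Vandermonde test matrix is an arbitrary choice made to expose the difference) implements both update orders and compares the orthogonality of the resulting Q factors.

```python
import numpy as np

def classical_gs(A):
    """Classical Gram-Schmidt: q_k = (I - (P1 + ... + P_{k-1})) a_k, then normalize."""
    A = np.array(A, dtype=float)
    Q = np.zeros_like(A)
    for k in range(A.shape[1]):
        q = A[:, k] - Q[:, :k] @ (Q[:, :k].T @ A[:, k])
        Q[:, k] = q / np.linalg.norm(q)
    return Q

def modified_gs(A):
    """Modified Gram-Schmidt: q_k = (I - P_{k-1}) ... (I - P1) a_k, then normalize."""
    A = np.array(A, dtype=float)
    Q = np.zeros_like(A)
    for k in range(A.shape[1]):
        q = A[:, k].copy()
        for j in range(k):
            q -= Q[:, j] * (Q[:, j] @ q)
        Q[:, k] = q / np.linalg.norm(q)
    return Q

# Ill-conditioned Vandermonde columns make the loss of orthogonality visible.
x = np.linspace(0.0, 1.0, 50)
A = np.vander(x, 10, increasing=True)

for name, gs in [("classical", classical_gs), ("modified", modified_gs)]:
    Q = gs(A)
    print(name, "||Q^T Q - I|| =", np.linalg.norm(Q.T @ Q - np.eye(10)))
```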