On a workstation with an Intel 486DX chip (in which double rounding does occur), the following behaviour is observed in MATLAB:

   >> format hex; format compact      % Hexadecimal format.
   >> x = 2^(-53) + 2^(-64) + 2^(-105); y = [1+x 1 x]
   y =
      3ff0000000000001   3ff0000000000000   3ca0020000000001
   >> x = 2^(-53) + 2^(-64) + 2^(-106); y = [1+x 1 x]
   y =
      3ff0000000000000   3ff0000000000000   3ca0020000000000

25.2. Show that Smith's formulae (25.1) can be derived by applying Gaussian elimination with partial pivoting to the 2 x 2 linear system obtained from (c + id)(x + iy) = a + ib.
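For reference, here is a minimal MATLAB sketch of Smith's formulae for complex division; the function name smith_div is ours, not from the text. Scaling by the ratio of the smaller to the larger of |c| and |d| avoids unnecessary overflow and underflow in forming c^2 + d^2.

   function [x, y] = smith_div(a, b, c, d)
   %SMITH_DIV   Compute (a+ib)/(c+id) = x+iy by Smith's formulae.
   if abs(c) >= abs(d)
       e = d/c;                  % |e| <= 1
       f = c + d*e;              % f = (c^2 + d^2)/c, formed without squaring
       x = (a + b*e)/f;
       y = (b - a*e)/f;
   else
       e = c/d;                  % |e| < 1
       f = c*e + d;              % f = (c^2 + d^2)/d
       x = (a*e + b)/f;
       y = (b*e - a)/f;
   end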
25.3. The following MATLAB function implements an algorithm of Malcolm [724, 1972] for determining the floating point arithmetic parameters β and t.

   function [beta, t] = param
   % a and b are floating point variables.
   a = 1;
   while (a+1) - a == 1
       a = 2*a;
   end
   b = 2;
   while (a+b) == a
       b = 2*b;
   end
   beta = (a+b) - a;
   t = 1;
   a = beta;
   while (a+1) - a == 1
       t = t+1;
       a = a*beta;
   end

Run this code, or a translation into another language, on any machines available to you. Explain how the code works. (Hint: consider the integers that can be represented exactly as floating point numbers.) Under what circumstances might it fail to produce the desired results?
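On a machine with IEEE standard double precision arithmetic, and in the absence of double rounding or extended precision registers, one would expect the following output (this expected output is our annotation, not from the text):

   >> [beta, t] = param
   beta =
        2
   t =
       53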
25.4. Hough [585, 1989] formed a 512 x 512 matrix using the following Fortran random number generator:

      subroutine matgen(a, lda, n, b, norma)
      REAL a(lda,1), b(1), norma
      init = 1325
      norma = 0.0
      do 30 j = 1,n
         do 20 i = 1,n
            init = mod(3125*init, 65536)
            a(i,j) = (init - 32768.0)/16384.0
            norma = max(a(i,j), norma)
   20    continue
   30 continue

He then solved a linear system using this matrix, on a workstation that uses IEEE single precision arithmetic. He found that the program took an inordinate amount of time to run and tracked the problem to underflows. On the system in question underflows are trapped and the correct result recomputed very slowly in software. Hough picks up the story:

   I started printing out the pivots in the program. They started out
   as normal numbers like 1 or -10, then suddenly dropped to about
   1e-7, then later to 1e-14, and then:

   k  82 pivot -1.8666e-20       k  98 pivot  1.22101e-21
   k  83 pivot -2.96595e-14      k  99 pivot -7.12407e-22
   k  84 pivot  2.46156e-14      k 100 pivot -1.75579e-21
   k  85 pivot  2.40541e-14      k 101 pivot  3.13343e-21
   k  86 pivot -4.99053e-14      k 102 pivot -6.99946e-22
   k  87 pivot  1.7579e-14       k 103 pivot  3.82048e-22
   k  88 pivot  1.69295e-14      k 104 pivot  8.05538e-22
   k  89 pivot -1.56396e-14      k 105 pivot -1.18164e-21
   k  90 pivot  1.37869e-14      k 106 pivot -6.349e-22
   k  91 pivot -3.10221e-14      k 107 pivot -2.48245e-21
   k  92 pivot  2.35206e-14      k 108 pivot -8.89452e-22
   k  93 pivot  1.32175e-14      k 109 pivot -8.23235e-22
   k  94 pivot -7.77593e-15      k 110 pivot  4.40549e-21
   k  95 pivot  1.34815e-14      k 111 pivot  1.12387e-21
   k  96 pivot -1.02589e-21      k 112 pivot -4.78853e-22
   k  97 pivot  4.27131e-22      k 113 pivot  4.38739e-22
                                 k 114 pivot  7.3868e-28

   SIGFPE 8: numerical exception, CHK, or TRAP
   stopped at daxpy+0x18c: movl a4@(0xe10),a3@

Explain this behaviour.
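For readers who wish to reproduce the phenomenon, the following is a rough MATLAB translation of Hough's generator; the function name matgen_demo and the explicit use of single precision are our additions.

   function [A, norma] = matgen_demo(n)
   %MATGEN_DEMO   MATLAB translation of Hough's Fortran random matrix generator.
   init = 1325;
   A = zeros(n, 'single');       % IEEE single precision, as on Hough's workstation
   for j = 1:n
       for i = 1:n
           init = mod(3125*init, 65536);          % linear congruential generator
           A(i,j) = single((init - 32768)/16384);
       end
   end
   norma = max(A(:));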
25.5. The version of the level-1 BLAS routine xNRM2 that is distributed with LAPACK implements the following algorithm for computing ||x||_2:

   t = 0
   s = 1
   for i = 1:n
       if |x_i| > t
           s = 1 + s(t/x_i)^2
           t = |x_i|
       else
           s = s + (x_i/t)^2
       end
   end
   ||x||_2 = t*s^{1/2}

Prove that the algorithm works and describe its properties.
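A direct MATLAB transcription may help with experiments; the function name snrm2_demo and the guard against zero elements (which protects the else branch from a 0/0 division when t = 0) are our additions.

   function nrm = snrm2_demo(x)
   %SNRM2_DEMO   Two-accumulator 2-norm in the style of xNRM2.
   t = 0; s = 1;
   for i = 1:numel(x)
       if x(i) == 0                % guard (our addition): skip zero elements
           continue
       end
       if abs(x(i)) > t
           s = 1 + s*(t/x(i))^2;   % rescale running sum to the new scale factor
           t = abs(x(i));
       else
           s = s + (x(i)/t)^2;
       end
   end
   nrm = t*sqrt(s);                % invariant: sum of squares seen so far = t^2*s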
25.6. (Moler and Morrison [773, 1983], Dubrulle [323, 1983]) This MATLAB M-file computes the Pythagorean sum (a^2 + b^2)^{1/2} using an algorithm of Moler and Morrison.

   function p = pythag(a, b)
   %PYTHAG    Pythagorean sum.
   p = max(abs(a), abs(b));
   q = min(abs(a), abs(b));
   while 1
       r = (q/p)^2;
       if r+4 == 4, return, end
       s = r/(4+r);
       p = p + 2*s*p;
       q = s*q;
       fprintf('p = %19.15e, q = %19.15e\n', p, q)
   end

The algorithm is immune to underflow and overflow (unless the result overflows), is very accurate, and converges in at most three iterations, assuming the unit roundoff u > 10^-20.

Example:

   p = pythag(3,4); (p-5)/5
   p = 4.986301369863014e+000, q = 3.698630136986301e-001
   p = 4.999999974188253e+000, q = 5.080526329415358e-004
   p = 5.000000000000001e+000, q = 1.311372652397091e-012
   ans =
      1.7764e-016

The purpose of this problem is to show that pythag is Halley's method applied to a certain equation. Halley's method for solving f(x) = 0 is the iteration

   x_{n+1} = x_n - (f_n/f_n') [1 - f_n f_n''/(2 f_n'^2)]^{-1},

where f_n, f_n' and f_n'' denote the values of f and its first two derivatives at x_n.

(a) For given x_0 and y_0 such that 0 < y_0 < x_0, the Pythagorean sum p = (x_0^2 + y_0^2)^{1/2} is a root of the equation f(x) = x^2 - p^2 = 0. Show that Halley's method applied to this equation gives

   x_{n+1} = x_n (x_n^2 + 3p^2)/(3x_n^2 + p^2).

Show that if 0 < x_0 < p = (x_0^2 + y_0^2)^{1/2} then x_n < x_{n+1} < p for all n. Deduce that y_n := (p^2 - x_n^2)^{1/2} is defined and that

   y_{n+1} = y_n^3/(3x_n^2 + p^2).

Confirm that pythag implements a scaled version of these equations.

(b) Show that

   p - x_{n+1} = (p - x_n)^3/(3x_n^2 + p^2),

which displays the cubic convergence of the iteration.
25.7. (Incertis [604, 1985]) (a) Given a skew-symmetric matrix Y ∈ R^{n x n} with ||Y||_2 < 1, show that there is a real, symmetric X satisfying X^2 = I + Y^2 such that X + Y is orthogonal.

(b) Consider the following iteration, for P_0, Q_0 ∈ R^{n x n} with ||P_0||_2 > ||Q_0||_2, which generalizes to matrices the "pythag" iteration in Problem 25.6.

   for k = 0, 1, 2, ...
       R_k = (Q_k P_k^{-1})^2
       S_k = R_k (4I + R_k)^{-1}
       P_{k+1} = P_k + 2 S_k P_k
       Q_{k+1} = S_k Q_k
   end

Show that this iteration can be used to compute X in part (a).

(c) Investigate the convergence of the iteration in (b) for general P_0 and Q_0.
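A minimal MATLAB sketch of the iteration in (b); the function name mat_pythag, the iteration cap, and the stopping test are our choices, not from the text.

   function P = mat_pythag(P, Q)
   %MAT_PYTHAG   Matrix generalization of the pythag iteration (Problem 25.7(b)).
   n = size(P, 1); I = eye(n);
   for k = 1:20
       R = (Q/P)^2;               % R_k = (Q_k * inv(P_k))^2
       S = R/(4*I + R);           % S_k = R_k * inv(4I + R_k)
       P = P + 2*S*P;
       Q = S*Q;
       if norm(Q) <= eps*norm(P), break, end    % stopping test (our choice)
   end

With P_0 = I and Q_0 = Y from part (a), one would hope that P_k converges to X = (I + Y^2)^{1/2}; establishing this, and the behaviour for general P_0 and Q_0, is the substance of parts (b) and (c).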
25.8. Why might we prefer the expression ... in an algorithm intended to be robust in floating point arithmetic?

Chapter 26

A Gallery of Test Matrices

   Many tricks or treats associated with the Hilbert matrix
   may seem rather frightening or fascinating.
      — MAN-DUEN CHOI, Tricks or Treats with the Hilbert Matrix (1983)

   I start by looking at a 2 by 2 matrix.
   Sometimes I look at a 4 by 4 matrix.
   That's when things get out of control and too hard.
   Usually 2 by 2 or 3 by 3 is enough, and I look at them,
   and I compute with them, and I try to guess the facts.
   First, think of a question.
   Second, I look at examples, and then third,
   guess the facts.
      — PAUL R. HALMOS^24 (1991)
   When people look down on matrices,
   remind them of great mathematicians such as
   Frobenius, Schur, C. L. Siegel, Ostrowski, Motzkin, Kac, etc.,
   who made important contributions to the subject.
      — OLGA TAUSSKY, How I Became a Torchbearer for Matrix Theory (1988)

   ^24 From interviews by Albers in [9, 1991].

Ever since the first computer programs for matrix computations were written in the 1940s, researchers have been devising matrices suitable for test purposes and investigating the properties of these matrices. In the 1950s and 1960s it was common for a whole paper to be devoted to a particular test matrix: typically its inverse or eigenvalues would be obtained in closed form.

Early collections of test matrices include those of Newman and Todd [796, 1958] and Rutishauser [886, 1968]; most of Rutishauser's matrices come from continued fractions or moment problems.
Two well-known books present collections of test matrices. Gregory and Karney [482, 1969] deal exclusively with the topic, while Westlake [1076, 1968] gives an appendix of test matrices.

In this chapter we present a gallery of matrices. We describe their properties and explain why they are useful (or not so useful, as the case may be) for test purposes.
The coverage is limited. A comprehensive, up-to-date, and well-documented collection of parametrized test matrices may be found in the Test Matrix Toolbox, described in Appendix E. Of course, MATLAB itself contains a number of special matrices that can be used for test purposes (type help specmat).

Several other types of matrices would have been included in this chapter had they not been discussed elsewhere in the book. These include magic squares (Problem 6.4), the Kahan matrix (8.10), Hadamard matrices (§9.3), and Vandermonde matrices (Chapter 21).

The matrices described here can be modified in various ways while still preserving some or all of their interesting properties.
Among the many ways of constructing new test matrices from old are the following (illustrated in the sketch after this list):

   • Similarity transformations A → X^{-1}AX.
   • Unitary transformations A → UAV, where U*U = V*V = I.
   • Kronecker products A ⊗ B or B ⊗ A (for which MATLAB has a routine kron).
   • Powers A → A^k.
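A few lines of MATLAB illustrate these constructions; the particular choices of A, X, U, and V below are ours, purely for illustration.

   A = hilb(4);                   % any test matrix
   X = randn(4);
   B1 = X\A*X;                    % similarity transformation X^(-1)*A*X
   [U, RU] = qr(randn(4));
   [V, RV] = qr(randn(4));
   B2 = U*A*V;                    % unitary (here, real orthogonal) transformation
   B3 = kron(A, eye(2));          % Kronecker product A ⊗ I
   B4 = A^3;                      % power A^k, k = 3

Similarity preserves the eigenvalues of A, while the two-sided unitary transformation preserves its singular values (and hence its 2-norm condition number).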
It is too special. Not only is it symmetric positive definite, butit is totally positive. This means, for example, that Gaussian eliminationwithout pivoting is guaranteed to produce a small componentwise relativebackward error (as is Cholesky factorization). Thus the Hilbert matrix is nota typical ill-conditioned matrix.The (i, j) element of the inverse of the Hilbert matrix Hn is the integer(26.1)and(26.2)There are many ways to rewrite the formula (26.1). These formulae are bestobtained as special cases of those for the Cauchy matrix below.The Cholesky factor Rn of the inverse of the Hilbert matrix is knownexplicitly, as is Rn-1:(26.3)(26.4)One interesting application of these formulae is to compute the eigenvaluesof H n as the squares of the singular values of Rn ; if Rn is evaluated from(26.3) and the one-sided Jacobi algorithm is used to compute the singularvalues then high relative accuracy is obtained for every eigenvalue, as shownby Mathias [732, 1995].The condition number of the Hilbert matrix grows at an exponential rate:(Hk2n) ~ exp(3.5n) [1004, 1954].