Conte, de Boor - Elementary Numerical Analysis: An Algorithmic Approach (523140), page 22
In Müller’s method, if r zeros ζ1, . . . , ζr have already been found, the next zero is obtained by working with the deflated function

    f_{r+1}(x) = f(x) / [(x − ζ1)(x − ζ2) ··· (x − ζr)]        (3.54)

If no estimates are given, the routine always looks for zeros in order of increasing magnitude, since this will usually minimize round-off-error growth. Also, all zeros found using deflated functions are tested for accuracy by substitution into the original function f(x). In practice, some accuracy may be lost when a zero is found using deflation.
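As an illustration of how the deflated function (3.54) might be evaluated in practice, here is a minimal Python sketch (this is not the book’s FORTRAN subroutine; the function and variable names are mine):

```python
def deflated(f, zeros):
    """Build the deflated function f(x) / prod_i (x - zeta_i) of (3.54).

    `f` is the original function; `zeros` holds the zeros found so far.
    """
    def fd(x):
        value = f(x)
        for z in zeros:
            value /= (x - z)   # divide out each previously found zero
        return value
    return fd
```

Evaluating the quotient afresh at each point, rather than forming a deflated polynomial explicitly, works for non-polynomial f as well; but, as noted above, zeros found this way should still be checked by substitution into the original f.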
Approximate zeros found using deflation may be refined by using these approximate zeros as initial guesses in Newton’s method applied to the original function. In applying the Müller subroutine, the user can specify the number of zeros desired. Some functions, for example, may have an infinite number of zeros, of which only the first few may be of interest.

Example 3.13 Bessel’s function J0(x) is given by the infinite series

    J0(x) = Σ_{k=0}^∞ (−1)^k (x/2)^{2k} / (k!)²

It is known that J0(x) has an infinite number of real zeros.
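The series for J0 converges rapidly for moderate x, so a truncated sum is easy to evaluate. A hedged Python sketch (names mine), using the ratio −(x/2)²/k² of consecutive terms:

```python
def j0_series(x, n_terms=25):
    """Partial sum of J0(x) = sum_{k>=0} (-1)^k (x/2)^(2k) / (k!)^2."""
    term = 1.0           # the k = 0 term
    total = 1.0
    for k in range(1, n_terms):
        term *= -(x / 2.0) ** 2 / (k * k)   # ratio of term k to term k-1
        total += term
    return total
```

A routine like this, combined with a zero finder such as Algorithm 3.10, would reproduce the kind of computation done in Example 3.13; the first positive zero of J0 lies near x = 2.40483.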
Find the first three positive zeros, using Algorithm 3.10. The machine results given below were obtained on an IBM 7094 using a standard library subroutine for J0(x) based on the series given above. The values of J0(x) were computed to maximum accuracy.

*3.7 COMPLEX ROOTS AND MÜLLER’S METHOD

The iterations were all started with the approximations x0 = −1, x1 = 1, x2 = 0 and were continued until either of two error criteria (one on the size of |f(xi)|, the other on the relative change in xi) was satisfied. The converged values are correct to at least six significant figures. Note that the zeros are obtained in ascending order of magnitude.

COMPUTER RESULTS FOR EXAMPLE 3.13

All the following examples were run on a CDC 6500 computer using Algorithm 3.10. The error criteria for these examples used a tolerance of 10⁻⁸, and all runs used the same starting values (0.5, −0.5, 0.0), followed by deflation. Although the results are printed to 8 significant figures, one should recall that on a CDC 6500 the floating-point word length is 14 decimal digits. The output consists of the real and imaginary (if applicable) parts of the converged approximations to the roots, and the real and imaginary parts of the value of the function at those roots.

Example 3.14 Find all the zeros of the polynomial p(x) = x³ + x − 3. Compare these results with those obtained in Example 3.10, where we computed the solutions on a hand calculator.
Note that since p(x) has real coefficients, the complex roots occur in complex-conjugate pairs. Note as well that no estimates of the complex roots had to be provided. While Newton’s method can be used to find complex roots, it must be supplied with a good estimate of that root, an estimate that may be difficult to obtain.
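A computation like that of Example 3.14 can be sketched in a few lines of Python. This is my own hedged, minimal rendering of Müller’s method, not the book’s Algorithm 3.10 subroutine; complex arithmetic lets the iterates leave the real axis, so complex zeros are found with no special estimates:

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-10, maxit=50):
    """Fit a parabola through the last three iterates and step to its
    nearer zero, as in Muller's method."""
    for _ in range(maxit):
        h1, h2 = x1 - x0, x2 - x1
        d1 = (f(x1) - f(x0)) / h1
        d2 = (f(x2) - f(x1)) / h2
        a = (d2 - d1) / (h2 + h1)              # second divided difference
        b = d2 + h2 * a
        disc = cmath.sqrt(b * b - 4.0 * f(x2) * a)
        # use the denominator of larger magnitude to limit cancellation
        den = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        h = -2.0 * f(x2) / den
        x0, x1, x2 = x1, x2, x2 + h
        if abs(h) <= tol * abs(x2) or abs(f(x2)) <= tol:
            break
    return x2

p = lambda x: x**3 + x - 3
root = muller(p, 0.5, -0.5, 0.0)   # converges to the real zero near 1.2134
```

Deflating by this zero, q(x) = p(x)/(x − root), and restarting from the same three points then yields one of the complex-conjugate pair, in keeping with the tendency to find zeros in order of increasing magnitude.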
Observe that the error in F(ROOT) is considerably smaller than the 10⁻⁸ required by the error criterion. In fact, in the last iteration, the error must have been reduced from something like 10⁻⁷ to 10⁻¹⁴, indicating that the method converges almost quadratically.

Example 3.15 Find the zeros of the polynomial of Example 3.11, solved earlier by Newton’s method. The exact zeros are … and 1.7. The results below are correct to eight significant figures, even though there is a small real component to the pure-imaginary zeros.

Example 3.16 Find the zeros of the polynomial treated by Newton’s method in Example 3.12, where we had some difficulty in finding accurate solutions. The zeros are x = 1, 2, 3, 4, 5, 6, 7. The results below are remarkably accurate, although the long word length on the CDC 6500 is largely responsible for this.
Note that although, in general, Müller’s method seeks the zeros in ascending order of magnitude, in this case it did not succeed in doing so.

Example 3.17 Find the zeros of the polynomial … This polynomial has the zeros … The program was run in the complex mode and produced the zeros correct to eight significant figures. This example shows that this algorithm is capable of handling polynomials of fairly high degree with good results (see Exercise 3.64).

EXERCISES

3.7-1 Use Müller’s method to find the zeros, real or complex, of the following polynomials: …

3.7-2 The equation x − tan x = 0 has an infinite number of real roots.
Use Müller’s method to find the first three positive roots.

3.7-3 The Fresnel integral C(x) is defined by the series … Find the first three real positive zeros of this function using Müller’s method. Start by truncating the series with n = 3 and then increase n until you are satisfied that you have the correct zeros.

3.7-4 Bessel’s function of order 1 is defined by the series

    J1(x) = Σ_{n=0}^∞ (−1)^n (x/2)^{2n+1} / (n! (n+1)!)

Find the first four zeros of this function, proceeding as in Exercise 3.7-3.

CHAPTER FOUR

MATRICES AND SYSTEMS OF LINEAR EQUATIONS

Many of the problems of numerical analysis can be reduced to the problem of solving linear systems of equations. Among the problems which can be so treated are the solution of ordinary or partial differential equations by finite-difference methods, the solution of systems of equations, the eigenvalue problems of mathematical physics, least-squares fitting of data, and polynomial approximation.
The use of matrix notation is not only convenient, but extremely powerful, in bringing out fundamental relationships. In Sec. 4.1 we introduce some simple properties of matrices which will be used in later sections. Some of the theorems and properties will be stated without proof.

4.1 PROPERTIES OF MATRICES

A system of m linear equations in n unknowns has the general form

    a11 x1 + a12 x2 + ··· + a1n xn = b1
    a21 x1 + a22 x2 + ··· + a2n xn = b2
      · · · · · · · · · · · · · · · · ·
    am1 x1 + am2 x2 + ··· + amn xn = bm        (4.1)

The coefficients aij (i = 1, . . . , m; j = 1, . . . , n) and the right sides bi (i = 1, . . . , m) are given numbers. The problem is to find, if possible, numbers xj (j = 1, . . . , n) such that the m equations (4.1) are satisfied simultaneously. The discussion and understanding of this problem is greatly facilitated when use is made of the algebraic concepts of matrix and vector.

Definition of Matrix and Vector

A matrix is a rectangular array of (usually real) numbers arranged in rows and columns.
The coefficients of (4.1) form a matrix, which we will call A. It is customary to display such a matrix A as follows:

    A = [ a11  a12  ···  a1n ]
        [ a21  a22  ···  a2n ]
        [  ·    ·          · ]
        [ am1  am2  ···  amn ]        (4.2)

At times, we will write more briefly

    A = (aij)    (i = 1, . . . , m; j = 1, . . . , n)        (4.3)

The matrix A in (4.2) has m rows and n columns, or A is of order m × n, for short. The (i, j) entry aij of A is located at the intersection of the ith row and the jth column of A. If A is an n × n matrix, we say that A is a square matrix of order n. If a matrix has only one column, we call it a column vector, and a matrix having only one row is called a row vector.
We denote column vectors by a single lowercase letter in bold type, to distinguish them from other matrices, and call them vectors, for short. Thus both the right-side constants bi (i = 1, . . . , m) and the unknowns xj (j = 1, . . . , n) form vectors,

    b = [ b1 ]        x = [ x1 ]
        [ b2 ]            [ x2 ]
        [ ·  ]            [ ·  ]
        [ bm ]            [ xn ]        (4.4)

We say that b is an m-vector, and x is an n-vector.

Equality

If A = (aij) and B = (bij) are both matrices, then we say that A equals B, or A = B, provided A and B have the same order and aij = bij, all i and j.

Matrix Multiplication

In the terminology so far introduced, (4.1) states that the matrix A combined in a certain way with the one-column matrix, or vector, x should equal the one-column matrix, or vector, b.
The process of combining matrices involved here is called matrix multiplication and is defined, in general, as follows: Let A = (aij) be an m × n matrix, B = (bij) an n × p matrix; then the matrix C = (cij) is the (matrix) product of A with B (in that order), or C = AB, provided C is of order m × p and

    cij = Σ_{k=1}^{n} aik bkj    (i = 1, . . . , m; j = 1, . . . , p)        (4.5)

In words, the (i, j) entry of the product C = AB of A with B is calculated by taking the n entries of row i of A and the n entries of column j of B, multiplying corresponding entries, and summing the resulting n products.

Example The (2,1) entry of AB, for instance, is obtained by combining row 2 of A with column 1 of B, as indicated by the arrows.

With this definition of matrix product and the definitions (4.2) and (4.4), we can write our system of equations (4.1) simply as

    Ax = b        (4.6)

At present, it looks as if this simplification was achieved at the cost of several definitions, one of them quite complicated, but the many advantages of matrix notation will become apparent in the course of this chapter. Matrix multiplication does not at all behave like multiplication of numbers.
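Definition (4.5) translates directly into code. The following illustrative Python fragment (the function name is mine) computes C = AB with the summation the formula prescribes:

```python
def matmul(A, B):
    """C = AB per (4.5): c_ij = sum_k a_ik * b_kj, with A m-by-n, B n-by-p.

    Matrices are represented as lists of rows.
    """
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "cols of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]
```

For instance, with A = [[1, 2], [3, 4]] and B = [[5, 6], [7, 8]], the (2,1) entry of AB is row 2 of A combined with column 1 of B: 3·5 + 4·7 = 43. Representing a vector as a one-column matrix makes (4.6) a special case of the same routine.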
For example, it is possible to form the product of the matrix A with the matrix B only when the number of columns of A equals the number of rows of B. Hence, even when the product AB is defined, the product of B with A need not be defined. Further, even when both AB and BA are defined, they need not be equal.

Example …

On the other hand, matrix multiplication is associative: If A, B, C are matrices of order m × n, n × p, p × q, respectively, then

    (AB)C = A(BC)        (4.7)

This can be seen as follows: Since A is of order m × n, while B is of order n × p, AB is defined and is of order m × p; hence (AB)C is defined and is of order m × q.
In the same way, one verifies that A(BC) is defined and is also of order m × q, so that at least one condition for equality is satisfied. Further, by (4.5) and an interchange of the order of summation,

    ((AB)C)ij = Σ_l (AB)il clj = Σ_l (Σ_k aik bkl) clj
              = Σ_k aik (Σ_l bkl clj) = Σ_k aik (BC)kj = (A(BC))ij

proving that (AB)C = A(BC). We will make repeated use of the special case when C is a vector (of appropriate order), that is,

    (AB)x = A(Bx)

Diagonal and Triangular Matrices

If A = (aij) is a square matrix of order n, then we call its entries a11, a22, . . . , ann the diagonal entries of A, and call all other entries off-diagonal. All entries aij of A with i < j are called superdiagonal, all entries aij with i > j are called subdiagonal (see Fig. 4.1). If all off-diagonal entries of the square matrix A are zero, we call A a diagonal matrix.
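These index conditions are easy to express in code; a small hedged Python sketch (names mine) that splits a square matrix into diagonal, superdiagonal (i < j), and subdiagonal (i > j) entries, and tests whether it is diagonal:

```python
def classify_entries(A):
    """Return (diagonal, superdiagonal, subdiagonal) entries of a square
    matrix A, using the i < j and i > j conventions of the text."""
    n = len(A)
    diag = [A[i][i] for i in range(n)]
    superd = [A[i][j] for i in range(n) for j in range(n) if i < j]
    subd = [A[i][j] for i in range(n) for j in range(n) if i > j]
    return diag, superd, subd

def is_diagonal(A):
    """A square matrix is diagonal iff every off-diagonal entry is zero."""
    _, superd, subd = classify_entries(A)
    return all(v == 0 for v in superd + subd)
```

Triangular matrices can be characterized the same way: A is upper triangular when all subdiagonal entries vanish, and lower triangular when all superdiagonal entries vanish.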