Conte, de Boor - Elementary Numerical Analysis: An Algorithmic Approach
*5.3 FIXED-POINT ITERATION AND RELAXATION METHODS

For example, it might be more convenient at times to replace the ith equation by an equivalent equation of the form

ξi = gi(ξ1, . . . , ξn)

in which the right-hand side may depend explicitly on ξi, too. As another example, one might satisfy the ith equation by changing several components of the current guess at once. In other words, one might determine the new guess x + αy(i) so that

fi(x + αy(i)) = 0

with y(i) a fixed vector depending on i. In ordinary relaxation, y(i) = ii, the ith unit vector, of course.

Example 5.8 We attempt to solve the nonlinear system of Example 5.4 by Gauss-Seidel iteration.
Thus, starting with the initial guess x = [1 2 · · · n]T/(n + 1) and n = 3, as in Example 5.4, we carry out the iteration. The table lists the first few iterates, recorded after each sweep. Convergence is linear (hence does not compare with the convergence of Newton's method), but is quite regular, so that convergence acceleration might be tried.
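The system of Example 5.4 is not reproduced in this excerpt, but the sweep structure just described can be illustrated on the small 2-by-2 system of Exercise 5.3-1 below. In the sketch, each equation is solved for "its" unknown and the new value is used immediately; the parameter omega is an assumption added here to show how overrelaxation fits into the same loop.

```python
import math

def sweep(x, y, omega=1.0):
    # One Gauss-Seidel sweep for the system of Exercise 5.3-1:
    #     x - sinh y = 0,    2y - cosh x = 0.
    # Equation 1 is solved for x, then equation 2 for y, using the
    # *updated* x at once -- the Gauss-Seidel idea.  omega = 1 gives
    # plain Gauss-Seidel; omega > 1 is successive overrelaxation.
    x = x + omega * (math.sinh(y) - x)
    y = y + omega * (math.cosh(x) / 2 - y)
    return x, y

x, y = 0.6, 0.6              # initial guess near the solution
for _ in range(30):
    x, y = sweep(x, y)

print(x, y)                  # iterates converge linearly to the root near [0.6, 0.6]
```

As in the text, convergence is linear: each sweep reduces the error by a roughly constant factor, so an acceleration technique (or a tuned omega) pays off.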
Using successive overrelaxation with ω = 1.2 produces the 21st iterate above in just 10 sweeps.

EXERCISES

5.3-1 Solve the system

x − sinh y = 0
2y − cosh x = 0

by fixed-point iteration. There is a solution near [0.6 0.6]T.

5.3-2 By experiment, determine a good choice for the overrelaxation parameter to be used in successive overrelaxation for Example 5.8. Do it also for n = 10, and then do it for the related problem 5.2-6.

5.3-3 Try to solve the system

x^2 + xy^3 = 9
3x^2 y − y^3 = 4

by fixed-point iteration.

5.3-4 Show that fixed-point iteration with the iteration matrix converges even though

5.3-5 Use Schur's theorem to prove that, for any square matrix B and every ε > 0, there is some vector norm for which the corresponding matrix norm satisfies ||B|| < ρ(B) + ε.
(Hint: Construct the vector norm in the form ||x|| := ||(UD)^−1 x||∞, with U chosen by Schur's theorem so that A = U^−1 BU is upper-triangular, and D = diag[1, δ, δ^2, . . . , δ^(n−1)] so chosen that D^−1 AD has all its off-diagonal entries less than ε/n in absolute value.)

5.3-6 Show that Jacobi iteration and Gauss-Seidel iteration converge in finitely many steps when applied to the solution of the linear system Aξ = b with A an invertible upper-triangular matrix.

5.3-7 Solve the system by Jacobi iteration and by Gauss-Seidel iteration. Also, derive a factorization of the coefficient matrix of the system by Algorithm 4.3; then use iterative improvement to solve the system, starting with the same initial guess.
Estimate the work (= floating-point operations) required for each of the three methods to get an approximate solution of absolute accuracy less than 10^−6.

5.3-8 Prove that Jacobi iteration converges if the coefficient matrix A of the system is strictly column-diagonally dominant, i.e.,

Σ (i ≠ j) |aij| < |ajj|   for j = 1, . . . , n

(Hint: Use the matrix norm corresponding to the vector norm ||x||1 = Σi |ξi|.)

CHAPTER SIX

APPROXIMATION

In this chapter, we consider the problem of approximating a general function by a class of simpler functions. There are two uses for approximating functions.
The first is to replace complicated functions by some simpler functions so that many common operations, such as differentiation and integration or even evaluation, can be more easily performed. The second major use is for recovery of a function from partial information about it, e.g., from a table of (possibly only approximate) values. The most commonly used classes of approximating functions are algebraic polynomials, trigonometric polynomials, and, lately, piecewise-polynomial functions.
We consider best, and good, approximation by each of these classes.

6.1 UNIFORM APPROXIMATION BY POLYNOMIALS

In this section, we are concerned with the construction of a polynomial p(x) of degree ≤ n which approximates a given function f(x) on some interval a ≤ x ≤ b uniformly well. This means that we measure the error in the approximation p(x) to f(x) by the number, or norm,

(6.1)   ||f − p|| := max {|f(x) − p(x)| : a ≤ x ≤ b}

Ideally, we would want a best uniform approximation from πn, that is, a polynomial pn*(x) of degree ≤ n for which

(6.2)   ||f − pn*|| ≤ ||f − p||   for all p ∈ πn

Here, we have used the notation p ∈ πn as an abbreviation for the statement "p is a polynomial of degree ≤ n." In other words, pn* is a particular polynomial of degree ≤ n which is as close to the function f as it is possible to be for a polynomial of degree ≤ n.
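The norm (6.1) rarely has a closed form, but it is easy to estimate by sampling on a fine grid. A minimal sketch (the grid size m is an arbitrary choice, not from the text):

```python
import math

def uniform_norm(g, a, b, m=10001):
    # Estimate max |g(x)| over [a, b] by sampling at m equally
    # spaced points; the true maximum can only be slightly larger
    # than this for a dense enough grid.
    return max(abs(g(a + (b - a) * i / (m - 1))) for i in range(m))

# Error of the Taylor polynomial 1 + x as an approximation to e^x
# on [-1, 1]; the best linear polynomial does noticeably better.
err = uniform_norm(lambda x: math.exp(x) - (1 + x), -1.0, 1.0)
print(err)   # = e - 2, attained at the endpoint x = 1
```

Such a grid estimate is how one would check in practice whether a candidate p comes close to the best-approximation error.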
We denote the number

dist(f, πn) := ||f − pn*||

and call it the uniform distance on the interval a ≤ x ≤ b of f from polynomials of degree ≤ n.

Before discussing the construction of a good or best polynomial approximant, we take a moment to consider ways of estimating dist(f, πn). If, for example, such an estimate shows that dist(f, π10) ≥ 0.01, and we are looking for an approximation which is good to two places after the decimal point, then we will not be wasting time and effort on constructing p10*. For such a purpose, it is particularly important to get lower bounds for dist(f, πn), and here is one way to get them.

Recall from Chap. 2 that

g[x0, . . . , xn+1] = Σ (i = 0 to n+1) g(xi)/w′(xi)   with w(x) = (x − x0) · · · (x − xn+1)

(see Exercise 2.2-1), and that this (n + 1)st divided difference is zero if g(x) happens to be a polynomial of degree ≤ n (see Exercise 2.2-5). Thus, for any particular polynomial p of degree ≤ n,

f[x0, . . . , xn+1] = (f − p)[x0, . . . , xn+1] = Σ (i = 0 to n+1) [f(xi) − p(xi)]/w′(xi)

Consequently, if x0, . . . , xn+1 are all in a ≤ x ≤ b, then

|f[x0, . . . , xn+1]| ≤ ||f − p|| W(x0, . . . , xn+1)

with the positive number W(x0, . . . , xn+1) given by

(6.3)   W(x0, . . . , xn+1) := Σ (i = 0 to n+1) 1/|w′(xi)|

Now we choose p to be pn*. Then ||f − p|| = dist(f, πn), and we get the lower bound

(6.4)   dist(f, πn) ≥ |f[x0, . . . , xn+1]| / W(x0, . . . , xn+1)

Example 6.1 For n = 1 and x0 = −1, x1 = 0, x2 = 1, we have w(x) = (x + 1)x(x − 1); hence w′(−1) = 2, w′(0) = −1, w′(1) = 2. Hence W(−1, 0, 1) = 1/2 + 1 + 1/2 = 2, and so, for a ≤ −1, 1 ≤ b,

dist(f, π1) ≥ |f[−1, 0, 1]| / 2

For example, for f(x) = e^x, f[−1, 0, 1] = e^−1/2 − e^0 + e^1/2 = 0.54308; consequently, dist(e^x, π1) ≥ 0.27154.

Use of the lower bound (6.4) requires calculation of the numbers w′(x0), . . . , w′(xn+1) for the formation of W(x0, . . . , xn+1). (See Exercise 6.1-14 for an efficient way to accomplish this.) For certain choices of the xi's, these numbers take on a particularly simple form. For example, if

(6.5)   xi = −cos(iπ/(n + 1)),   i = 0, . . . , n + 1

i.e., if the xi are the extreme points of the Chebyshev polynomial Tn+1, in increasing order, then

(6.6)   w′(xi) = (−1)^(n+1−i) (n + 1)δi/2^n,   with δi = 2 for i = 0 or i = n + 1, and δi = 1 otherwise

Hence W(x0, . . . , xn+1) = 2^n (see Exercise 6.1-5), and therefore

(6.7)   dist(f, πn) ≥ |f[x0, . . . , xn+1]| / 2^n

if the interval a ≤ x ≤ b contains both 1 and −1. To apply this lower bound to other intervals, one must first carry out a linear change of variables which carries the interval in question to the interval −1 ≤ x ≤ 1.

Example 6.2 Consider approximation to the function f(x) = tan(πx/4) on the standard interval −1 ≤ x ≤ 1 from π3.
This is an odd function, i.e., f(−x) = −f(x); the lower bound (6.7) is therefore equal to zero for odd n, and of no help. Consider, instead, approximation from π4. Then (6.7) gives

dist(f, π4) ≥ |f[x0, . . . , x5]| / 2^4

or 0.00203 ≤ dist(f, π4). In fact, one can show that this lower bound comes quite close to the actual distance.

Related to these lower bounds is the following theorem, due to de la Vallée-Poussin, which avoids computation of the w′(xi), but requires construction of an approximant p.

Theorem 6.1 Suppose the error f(x) − p(x) in the polynomial approximation p ∈ πn to f alternates in sign at the points x0 < x1 < · · · < xn+1, i.e.,

ε(−1)^i [f(xi) − p(xi)] > 0   for i = 0, . . . , n + 1

with ε = signum[f(x0) − p(x0)]. If a ≤ xi ≤ b, all i, then

min (i) |f(xi) − p(xi)| ≤ dist(f, πn)

Indeed, if the points xi are ordered as the theorem assumes, then

(−1)^(n+1−i) w′(xi) > 0   for i = 0, . . . , n + 1

and therefore all the summands in the sum

f[x0, . . . , xn+1] = Σ (i = 0 to n+1) [f(xi) − p(xi)]/w′(xi)

have the same sign. But this means that

|f[x0, . . . , xn+1]| = Σ (i = 0 to n+1) |f(xi) − p(xi)|/|w′(xi)| ≥ min (i) |f(xi) − p(xi)| W(x0, . . . , xn+1)

and this, together with (6.4), proves the theorem.

Suppose now that we manage, in Theorem 6.1, to have, in addition, that

|f(xi) − p(xi)| = ||f − p||   for i = 0, . . . , n + 1

Then we have

||f − p|| = min (i) |f(xi) − p(xi)| ≤ dist(f, πn) ≤ ||f − p||

and, since the first and last expressions in this string of inequalities coincide, we must have equality throughout. In particular, the polynomial p must then be a best uniform approximation to f from πn.
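The lower bound (6.4) is mechanical to evaluate once the w′(xi) are in hand. The sketch below reproduces the numbers of Example 6.1; the function names are ours, not the book's.

```python
import math

def wprime(xs, i):
    # w'(x_i) for w(x) = (x - x_0)...(x - x_{n+1}):
    # the product of (x_i - x_j) over all j != i.
    p = 1.0
    for j, xj in enumerate(xs):
        if j != i:
            p *= xs[i] - xj
    return p

def lower_bound(f, xs):
    # dist(f, pi_n) >= |f[x_0,...,x_{n+1}]| / W(x_0,...,x_{n+1}),
    # with f[x_0,...,x_{n+1}] = sum f(x_i)/w'(x_i)   (Exercise 2.2-1)
    # and  W = sum 1/|w'(x_i)|                       (equation (6.3)).
    dd = sum(f(xs[i]) / wprime(xs, i) for i in range(len(xs)))
    W = sum(1.0 / abs(wprime(xs, i)) for i in range(len(xs)))
    return abs(dd) / W

# Example 6.1: f(x) = e^x with points -1, 0, 1 (so n = 1).
print(lower_bound(math.exp, [-1.0, 0.0, 1.0]))   # 0.27154...
```

The same routine applied at the Chebyshev extreme points gives the bound (6.7), since there W = 2^n.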
The argument above proves the easy half of the following theorem, due to Chebyshev.

Theorem 6.2 A function f which is continuous on a ≤ x ≤ b has exactly one best uniform approximation on a ≤ x ≤ b from πn. The polynomial p ∈ πn is the best uniform approximation to f on a ≤ x ≤ b if and only if there are n + 2 points a ≤ x0 < · · · < xn+1 ≤ b so that

(6.8)   ε(−1)^i [f(xi) − p(xi)] = ||f − p||   for i = 0, . . . , n + 1

with ε = signum[f(x0) − p(x0)]. Here a = x0 and b = xn+1 in case f^(n+1)(x) does not change sign on a ≤ x ≤ b.

A proof of this basic theorem can be found in any textbook on approximation theory, for example in Rice [17] or Rivlin [35].

Example 6.3 We consider again approximation to f(x) = e^x on the standard interval −1 ≤ x ≤ 1. We saw in Example 6.1 that dist(e^x, π1) ≥ 0.27154. Now choose p(x) = a + bx, with b = (e^1 − e^−1)/2 and a = (e − bx1)/2, where e^x1 = f′(x1) = p′(x1) = b, or x1 = ln b; see Fig.