Conte, de Boor, Elementary Numerical Analysis: An Algorithmic Approach
1.7), f'(x) would have to vanish somewhere in [1,2]. Hence, since f'(x) = 3x² - 1 is positive on [1,2], f(x) has exactly one zero in the interval [1,2]. If we call this zero ξ, then

    1 ≤ ξ ≤ 2

To find out more about this zero, we evaluate f(x) at the midpoint 1.5 of the interval [1,2] and get

    f(1.5) = 0.875 > 0

Hence we now know that the zero ξ lies in the smaller interval [1, 1.5]; i.e.,

    1 ≤ ξ ≤ 1.5

Checking again at the midpoint 1.25, we find

    f(1.25) = -0.296875 < 0

and know therefore that ξ lies in the yet smaller interval [1.25, 1.5]; i.e.,

    1.25 ≤ ξ ≤ 1.5

This procedure of locating a solution of the equation f(x) = 0 in a sequence of intervals of decreasing size is known as the bisection method.

Algorithm 3.1: Bisection method  Given a function f(x) continuous on the interval [a0, b0] and such that f(a0)f(b0) < 0.
    For n = 0, 1, 2, . . . , until satisfied, do:
        m := (an + bn)/2
        If f(an)f(m) ≤ 0, set an+1 := an, bn+1 := m
        Otherwise, set an+1 := m, bn+1 := bn
    Then f(x) has a zero in the interval [an+1, bn+1]

We shall frequently state algorithms in the above concise form.
For students familiar with the ALGOL language, this notation will appear quite natural. Further, we have used here the phrase "until satisfied" in order to stress that this description of the algorithm is incomplete. A user of the algorithm must specify precise termination criteria. These will depend in part on the specific problem to be solved by the algorithm. Some of the many possible termination criteria are discussed in the next section.
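To make the concise description above concrete, here is a minimal sketch of Algorithm 3.1 in modern free-form Fortran (the book itself turns to FORTRAN programs only in the next section). The test function f(x) = x³ - x - 1 from the running example, the fixed count of 20 steps in place of "until satisfied," and the printed output are choices made here for illustration.

   ! Bisection method (Algorithm 3.1): each step halves the interval
   ! known to contain a zero of f.
   program bisection
      implicit none
      real :: a, b, m, fa, fm
      integer :: n
      a = 1.0                       ! a0, with f(a0) < 0
      b = 2.0                       ! b0, with f(b0) > 0
      fa = f(a)
      do n = 1, 20                  ! "until satisfied": here, 20 steps
         m  = (a + b)/2.0           ! midpoint of the current interval
         fm = f(m)
         if (fa*fm <= 0.0) then     ! sign change in [a, m]
            b = m
         else                       ! sign change in [m, b]
            a  = m
            fa = fm
         end if
         print '(i4, 2f12.7)', n, a, b
      end do
   contains
      real function f(x)
         real, intent(in) :: x
         f = x**3 - x - 1.0
      end function f
   end program bisection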
At each step of the bisection algorithm 3.1, the length of the interval known to contain a zero of f(x) is reduced by a factor of 2. Hence each step produces one more correct binary digit of the root of f(x) = 0. After 20 steps of this algorithm applied to our example and starting as we did with a0 = 1, b0 = 2, one gets an interval of length 2⁻²⁰ ≈ 0.000001 known to contain the zero ξ ≈ 1.324718. Clearly, with enough effort, one can always locate a root to any desired accuracy with this algorithm. But compared with other methods to be discussed, the bisection method converges rather slowly.

One can hope to get to the root faster by using more fully the information about f(x) available at each step. In our example (3.2), we started with the information

    f(1) = -1 < 0 < 5 = f(2)

Since |f(1)| is closer to zero than is |f(2)|, the root is likely to be closer to 1 than to 2 [at least if f(x) is "nearly" linear].
Hence, rather than check the midpoint, or average value, 1.5 of 1 and 2, we now check f(x) at the weighted average

(3.4)    w = (|f(2)|·1 + |f(1)|·2) / (|f(2)| + |f(1)|)

Note that since f(1) and f(2) have opposite sign, we can write (3.4) more simply as

(3.5)    w = (f(2)·1 - f(1)·2) / (f(2) - f(1))

This gives for our example

    w = (5·1 + 1·2)/6 = 7/6 = 1.166666 · · ·

and

    f(1.166666 · · · ) = -0.578703 · · · < 0

Hence we get that ξ lies in [1.166666 · · · , 2]. Repeating the process for this interval, we get the new guess

    w = 1.253112 · · ·     with f(1.253112 · · · ) < 0

Consequently, f(x) has a zero in the interval [1.253112 · · · , 2]. This algorithm is known as the regula falsi, or false-position, method.

Algorithm 3.2: Regula falsi  Given a function f(x) continuous on the interval [a0, b0] and such that f(a0)f(b0) < 0.
    For n = 0, 1, 2, . . . , until satisfied, do:
        w := (f(bn)·an - f(an)·bn) / (f(bn) - f(an))
        If f(an)f(w) ≤ 0, set an+1 := an, bn+1 := w
        Otherwise, set an+1 := w, bn+1 := bn
    Then f(x) has a zero in the interval [an+1, bn+1]
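The following is a minimal Fortran sketch of Algorithm 3.2 using the form (3.5) of the weighted average; as above, the test function, the fixed number of steps, and the printed output are illustrative assumptions rather than part of the book's statement.

   ! Regula falsi (Algorithm 3.2): the next guess w is the x-intercept
   ! of the secant through (a, f(a)) and (b, f(b)); the endpoint whose
   ! function value has the same sign as f(w) is replaced by w.
   program regula_falsi
      implicit none
      real :: a, b, w, fa, fb, fw
      integer :: n
      a = 1.0;  b = 2.0
      fa = f(a);  fb = f(b)
      do n = 1, 16
         w  = (fb*a - fa*b)/(fb - fa)    ! weighted average, Eq. (3.5)
         fw = f(w)
         if (fa*fw <= 0.0) then          ! zero lies in [a, w]
            b = w;  fb = fw
         else                            ! zero lies in [w, b]
            a = w;  fa = fw
         end if
         print '(i4, 2f12.7, es14.5)', n, a, b, fw
      end do
   contains
      real function f(x)
         real, intent(in) :: x
         f = x**3 - x - 1.0
      end function f
   end program regula_falsi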
After 16 steps of this algorithm applied to our example and starting as we did with a0 = 1, b0 = 2, one gets a point at which |f(x)| is already quite small, while the other endpoint never moves: bn = b0 = 2 for every n. Hence, although the regula falsi produces a point at which |f(x)| is "small" somewhat faster than does the bisection method, it fails completely to give a "small" interval in which a zero is known to lie.

A glance at Fig. 3.2 shows the reason for this. As one verifies easily, the weighted average

    w = (f(bn)·an - f(an)·bn) / (f(bn) - f(an))

is the point at which the straight line through the points {an, f(an)} and {bn, f(bn)} intersects the x axis. Such a straight line is a secant to f(x), and in our example, f(x) is concave upward and increasing (in the interval [1,2] of interest); hence the secant is always above (the graph of) f(x). Consequently, w always lies to the left of the zero (in our example). If f(x) were concave downward and increasing, w would always lie to the right of the zero.

Figure 3.2  Regula falsi.

The regula falsi algorithm can be improved in several ways, two of which we now discuss.
The first one, called modified regula falsi, replaces secants by straight lines of ever-smaller slope until w falls to the opposite side of the root. This is shown graphically in Fig. 3.3.

Algorithm 3.3: Modified regula falsi  Given f(x) continuous on [a0, b0] and such that f(a0)f(b0) < 0.

If the modified regula falsi is applied to our example with a0 = 1, b0 = 2, then after six steps one gets both a point at which |f(x)| is small and a small interval known to contain the zero, which shows an impressive improvement over the bisection method.

Figure 3.3  Modified regula falsi.
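The idea just described can be sketched in Fortran as follows: whenever an endpoint is retained, the function value stored for it is halved, so that the line used on the next step has smaller slope and w eventually falls on the other side of the root. This halving rule, the test function, and the six steps are assumptions made here for illustration; the book's exact formulation of Algorithm 3.3 may differ in detail.

   ! A modified regula falsi in the spirit of Algorithm 3.3: bracketing
   ! is kept, but the stored ordinate of a retained endpoint is halved.
   program modified_regula_falsi
      implicit none
      real :: a, b, w, fa, fb, fw
      integer :: n
      a = 1.0;  b = 2.0
      fa = f(a);  fb = f(b)
      do n = 1, 6
         w  = (fb*a - fa*b)/(fb - fa)   ! x-intercept of the current line
         fw = f(w)
         if (fa*fw <= 0.0) then         ! zero lies in [a, w]
            b = w;  fb = fw
            fa = fa/2.0                 ! a retained: flatten its line
         else                           ! zero lies in [w, b]
            a = w;  fa = fw
            fb = fb/2.0                 ! b retained: flatten its line
         end if
         print '(i4, 2f12.7)', n, a, b
      end do
   contains
      real function f(x)
         real, intent(in) :: x
         f = x**3 - x - 1.0
      end function f
   end program modified_regula_falsi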
A second, very popular modification of the regula falsi, called the secant method, retains the use of secants throughout, but may give up the bracketing of the root.

Algorithm 3.4: Secant method  Given a function f(x) and two points x-1, x0.
    For n = 0, 1, 2, . . . , until satisfied, do:
        xn+1 := [f(xn)·xn-1 - f(xn-1)·xn] / [f(xn) - f(xn-1)]

If the secant method is applied to our example with x-1 = 1, x0 = 2, then after six steps one gets a point at which |f(x)| is already very small. Apparently, the secant method locates quite rapidly a point at which |f(x)| is "small," but gives, in general, no feeling for how far away from a zero of f(x) this point might be. Also, f(xn) and f(xn-1) need not be of opposite sign, so that the expression

(3.6)    xn+1 = [f(xn)·xn-1 - f(xn-1)·xn] / [f(xn) - f(xn-1)]

is prone to round-off-error effects. In an extreme situation, we might even have f(xn) = f(xn-1), making the calculation of xn+1 impossible. Although this does not cure the trouble, it is better to calculate xn+1 from the equivalent expression

(3.7)    xn+1 = xn - f(xn)·(xn - xn-1) / [f(xn) - f(xn-1)]

in which xn+1 is obtained from xn by adding the "correction term"

(3.8)    -f(xn) / { [f(xn) - f(xn-1)] / (xn - xn-1) }

The student will recognize the ratio [f(xn) - f(xn-1)]/(xn - xn-1) as a first divided difference of f(x) and from (2.10) as the slope of the secant to f(x) through the points {xn-1, f(xn-1)} and {xn, f(xn)}. Furthermore, from (2.17) we see that this ratio is equal to the slope of f(x) at some point between xn-1 and xn if f(x) is differentiable.
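A minimal Fortran sketch of the secant iteration follows, written in the corrected form (3.7); the starting points x-1 = 1, x0 = 2 and the test function mirror the running example, while the six steps and the printed output are illustrative choices.

   ! Secant method (Algorithm 3.4), in the form of Eq. (3.7):
   ! x_{n+1} = x_n - f(x_n)*(x_n - x_{n-1})/(f(x_n) - f(x_{n-1}))
   program secant
      implicit none
      real :: xprev, xcur, xnext, fprev, fcur
      integer :: n
      xprev = 1.0;  xcur = 2.0              ! x_{-1} and x_0
      fprev = f(xprev);  fcur = f(xcur)
      do n = 1, 6
         xnext = xcur - fcur*(xcur - xprev)/(fcur - fprev)   ! Eq. (3.7)
         xprev = xcur;   fprev = fcur
         xcur  = xnext;  fcur  = f(xcur)
         print '(i4, f12.7, es14.5)', n, xcur, fcur
      end do
   contains
      real function f(x)
         real, intent(in) :: x
         f = x**3 - x - 1.0
      end function f
   end program secant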
It would therefore be reasonable to replace this divided difference by the value of f'(x) at some point "near" xn and xn-1, given that f'(x) can be calculated. If f(x) is differentiable, then on replacing in (3.7) the slope of the secant by the slope of the tangent at xn, one gets the iteration formula

(3.9)    xn+1 = xn - f(xn)/f'(xn)

of Newton's method.

Algorithm 3.5: Newton's method  Given f(x) continuously differentiable and a point x0.
    For n = 0, 1, 2, . . . , until satisfied, do:
        xn+1 := xn - f(xn)/f'(xn)

If this algorithm is applied to our example with x0 = 1, then after four steps one gets x4 ≈ 1.324718, in agreement with the zero ξ to all places shown.
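A minimal Fortran sketch of Newton's method as given by (3.9) follows; the explicit derivative f'(x) = 3x² - 1, the starting point x0 = 1, and the four steps follow the running example, while the printed residual is an illustrative addition.

   ! Newton's method (Algorithm 3.5): x_{n+1} = x_n - f(x_n)/f'(x_n)
   program newton
      implicit none
      real :: x
      integer :: n
      x = 1.0                              ! starting point x0
      do n = 1, 4
         x = x - f(x)/fprime(x)            ! Eq. (3.9)
         print '(i4, f12.7, es14.5)', n, x, f(x)
      end do
   contains
      real function f(x)
         real, intent(in) :: x
         f = x**3 - x - 1.0
      end function f
      real function fprime(x)              ! f'(x) = 3x**2 - 1
         real, intent(in) :: x
         fprime = 3.0*x**2 - 1.0
      end function fprime
   end program newton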
Finally, we mention fixed-point iteration, of which Newton's method is a special example. If we set

(3.10)    g(x) = x - f(x)/f'(x)

then the iteration formula (3.9) for Newton's method takes on the simple form

(3.11)    xn+1 = g(xn)        n = 0, 1, 2, . . .

If the sequence x1, x2, · · · so generated converges to some point ξ and g(x) is continuous, then

    ξ = lim xn+1 = lim g(xn) = g(lim xn)    (as n → ∞)

or

(3.12)    ξ = g(ξ)

that is, ξ is then a fixed point of g(x). Clearly, if ξ is a fixed point of the iteration function g(x) for Newton's method, then ξ is a solution of the equation f(x) = 0. Now, for a given equation f(x) = 0, it is possible to choose various iteration functions g(x), each having the property that a fixed point of g(x) is a zero of f(x). For each such choice, one may then calculate the sequence x1, x2, . . . by

    xn+1 = g(xn)        n = 0, 1, 2, . . .

and hope that it converges. If it does, then its limit is a solution of the equation f(x) = 0. We discuss fixed-point iteration in more detail in Secs. 3.3 and 3.4.
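As the text notes, many different iteration functions g(x) can be associated with the same equation f(x) = 0. The sketch below uses one such choice for the running example x³ - x - 1 = 0, namely the rearrangement x = (x + 1)^(1/3); this particular g, the starting point, and the ten steps are assumptions made here for illustration, not the book's prescription.

   ! Fixed-point iteration, Eq. (3.11): x_{n+1} = g(x_n), with
   ! g(x) = (x + 1)**(1/3), a rearrangement of x**3 - x - 1 = 0.
   program fixed_point
      implicit none
      real :: x
      integer :: n
      x = 1.0                           ! starting guess x0
      do n = 1, 10                      ! "until satisfied": 10 steps here
         x = g(x)                       ! Eq. (3.11)
         print '(i4, f12.7)', n, x
      end do
   contains
      real function g(x)
         real, intent(in) :: x
         g = (x + 1.0)**(1.0/3.0)       ! a fixed point of g is a zero of f
      end function g
   end program fixed_point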
Example 3.1  The function f(x) = x - 0.2 sin x - 0.5 has exactly one zero between x0 = 0.5 and x1 = 1.0, since f(0.5)f(1.0) < 0, while f'(x) does not vanish on [0.5, 1]. Locate the zero correct to six significant figures using Algorithms 3.1, 3.3, 3.4, and 3.5.

The following calculations were performed on an IBM 7094 computer in single-precision 27-binary-bit floating-point arithmetic. In Algorithms 3.1 and 3.3, xn is the midpoint between the lower and the upper bounds, an and bn, after n iterations, while the quantity (bn - an)/2 gives the corresponding bound on the error in xn provided by the algorithm. Note the rapid and systematic convergence of Algorithms 3.4 and 3.5. The bisection method converges very slowly but steadily, while the modified regula falsi method seems to converge "in jumps," although it does obtain the correct zero rather quickly.
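For Example 3.1, a short Fortran sketch of just one of the four methods requested, Newton's method (Algorithm 3.5), applied to f(x) = x - 0.2 sin x - 0.5 with f'(x) = 1 - 0.2 cos x is given below; the starting point x0 = 0.5 and the five steps are illustrative assumptions (the book's own comparison used an IBM 7094 in single precision).

   ! Newton's method for Example 3.1: f(x) = x - 0.2*sin(x) - 0.5
   program example_3_1
      implicit none
      real :: x
      integer :: n
      x = 0.5                                          ! starting point x0
      do n = 1, 5
         x = x - (x - 0.2*sin(x) - 0.5)/(1.0 - 0.2*cos(x))
         print '(i4, f12.7)', n, x
      end do
   end program example_3_1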
EXERCISES

3.1-1 Find an interval containing the real positive zero of the function f(x) = x² - 2x - 2. Use Algorithms 3.1 and 3.2 to compute this zero correct to two significant figures. Can you estimate how many steps each method would require to produce six significant figures?

3.1-2 For the example given in the text, carry out two steps of the modified regula falsi (Algorithm 3.3).

3.1-3 The polynomial x³ - 2x - 1 has a zero between 1 and 2.
Using the secant method (Algorithm 3.4), find this zero correct to three significant figures.

3.1-4 In Algorithm 3.1 let M denote the length of the initial interval [a0, b0]. Let {x0, x1, x2, . . . } represent the successive midpoints generated by the bisection method. Show that

    |ξ - xn| ≤ M/2^(n+1)        n = 0, 1, 2, . . .

Also show that the number I of iterations required to guarantee an approximation to a root to an accuracy ε is given by

    I ≥ ln(M/ε)/ln 2 - 1

3.1-5 The bisection method can be applied whenever f(a)f(b) < 0. If f(x) has more than one zero in (a, b), which zero does Algorithm 3.1 usually locate?

3.1-6 With a = 0, b = 1, each of the following functions changes sign in (a, b), that is, f(a)f(b) < 0.
What point does the bisection Algorithm 3.1 locate? Is this point a zero of f(x)?

3.1-7 The function f(x) = e^(2x) - e^x - 2 has a zero on the interval [0, 1]. Find this zero correct to four significant digits using Newton's method (Algorithm 3.5).

3.1-8 The function f(x) = 4 sin x - e^x has a zero on the interval [0, 0.5].
Find this zero correct to four significant digits using the secant method (Algorithm 3.4).

3.1-9 Using the bisection algorithm, locate the smallest positive zero of the polynomial p(x) = 2x³ - 3x - 4 correct to three significant digits.

3.2 FORTRAN PROGRAMS FOR SOME ITERATIVE METHODS

When the algorithms introduced in the preceding section are used in calculations, the vague phrase "until satisfied" has to be replaced by precise termination criteria.