Shampine, Allen, Pruess - Fundamentals of Numerical Computing
Those emphasizing a modeling approach, like Borrelli and Coleman [1], give considerable attention to the formulation of the model and the conclusions that can be drawn. Although the population equations do not have an analytical solution, the equation for trajectories in the phase plane can be solved because it is separable. An easy calculation detailed in the books cited shows that the solutions satisfy the conservation law

(x e^{-x})^c (y e^{-y})^a = K,

where K is an arbitrary constant. The trajectories reveal the qualitative properties of the solutions, so there is great interest in their computation.
Davis [5] exploits the conservation law to compute trajectories by solving nonlinear algebraic equations, the subject of this case study. Following Volterra, Davis considered evaluation of the trajectory for parameters a = 2, c = 1 and initial condition (x, y) = (1, 3). In this case there is a periodic solution of the differential equations, that is, the populations of predator and prey are sustainable for all time. In the phase plane this periodic solution corresponds to a trajectory that is a closed curve, as seen in Figure 4.6. The initial condition determines the constant K for the solution of interest. A little manipulation of the conservation law leads to

x e^{-x} = γ, where γ = (K / (y e^{-y})^a)^{1/c}.

Points (x, y) on the trajectory are computed for a sequence of y by forming the corresponding γ > 0 and solving this algebraic equation for x.
Davis gives a series for the smallest positive root,

x = sum_{n=1}^∞ (n^{n-1}/n!) γ^n.

For positive γ we are summing positive terms, so as we learned in Chapter 1 we can expect to evaluate the series accurately in a relative sense provided that it converges at a reasonable rate.

CHAPTER 4  ROOTS OF NONLINEAR EQUATIONS

Figure 4.6 A periodic solution of the Lotka-Volterra equation.

The ratio test for convergence is illuminating. A well-known limit for e shows that the ratio of successive terms,

[(n+1)^n γ^{n+1}/(n+1)!] / [n^{n-1} γ^n/n!] = (1 + 1/n)^{n-1} γ,

has the limit eγ. The ratio test amounts to comparing the rate of convergence of the series to the rate of convergence of a geometric series, in this case one with ratio eγ. For values of γ rather less than 1/e, the series converges quickly.
It was pointed out in Chapter 1 that n! grows rapidly as n increases, making overflow a dangerous possibility when evaluating the series, and the factor n^{n-1} grows even more rapidly. It is better to compute the coefficients b_n = n^{n-1}/n! by the recursion

b_1 = 1,   b_{n+1} = b_n (1 + 1/n)^{n-1},

which is nicely scaled because each factor lies between 1 and e.

The function f(x) = x e^{-x} and its derivative, f'(x) = (1 - x) e^{-x}, are so simple that properties of the equation f(x) = γ are easily determined. The function vanishes at x = 0 and tends to 0 as x → ∞. It strictly increases to its maximum of e^{-1} at x = 1 and strictly decreases thereafter.
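As a minimal Python sketch of this evaluation, the series can be summed with the scaled recursion for the coefficients; the stopping tolerance and term limit below are choices of this sketch, not the book's:

```python
import math

def smallest_root_series(gamma, tol=1e-14, nmax=200):
    """Sum x = sum_{n>=1} (n^(n-1)/n!) gamma^n, the smallest positive
    root of x*exp(-x) = gamma, valid for 0 < gamma < 1/e.
    The coefficients b_n = n^(n-1)/n! are built by the nicely scaled
    recursion b_1 = 1, b_{n+1} = b_n*(1 + 1/n)^(n-1)."""
    b = 1.0                      # b_1
    term = b * gamma             # first term of the series
    x = term
    for n in range(1, nmax):
        b *= (1.0 + 1.0 / n) ** (n - 1)   # b_{n+1} from b_n; factor < e
        term = b * gamma ** (n + 1)
        x += term
        if term <= tol * x:      # all terms positive, so this test is safe
            return x
    raise ValueError("gamma too close to 1/e for rapid convergence")

gamma = 0.060641                 # the value arising when y = 1
x = smallest_root_series(gamma)  # residual x*exp(-x) - gamma is tiny
```

Because every factor in the recursion is at most e, neither the coefficients nor the terms can overflow for γ < 1/e, in contrast to forming n^{n-1} and n! separately.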
These facts tell us that for 0 < γ < e^{-1}, the equation f(x) = γ has exactly two roots. One lies in (0, 1) and the other in (1, ∞). Both are simple. The roots merge to form the double root x = 1 when γ = e^{-1}, and there is no root at all for γ > e^{-1}. This is easily understood geometrically. The trajectory is a closed curve and the two roots represent the left and right portions of the curve for given y.

4.6 CASE STUDY 4

We have to expect numerical difficulties solving for x(y) when γ = e^{-1} because the curve has a horizontal tangent there. If we were solving this equation for a given γ as an isolated problem, a good way to proceed would be to use Zero.
Because the code converges very quickly from poor guesses, we might try a "large" interval so as to increase the chance of locating the root that is larger than 1. We might, for example, choose [B, C] = [1, 1000]. This presents no difficulty for the code, but it makes a point raised in Chapter 1. The function exp(-x) underflows for x as large as 1000, causing problems with some operating systems. Davis solves for x when y = 1.
It is a matter of a few minutes' work to alter the example code provided with Zero to solve this problem using a system that deals with underflows by setting them to 0. With a relative error tolerance of 10^{-6} and an absolute error tolerance of 10^{-8}, only 15 evaluations of the function were needed to compute the root x ≈ 4.24960. With this value of y, the constant γ ≈ 0.060641. The code reports the residual of the approximate root to be r ≈ 8 × 10^{-9}.
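Zero itself combines the secant rule with bisection; as a crude stand-in for it, even plain bisection on the same large bracket shows that the computation goes through and that the underflow of exp(-x) near the right endpoint is harmless when underflows are set to 0, as Python's floating point does:

```python
import math

def f(x, gamma):
    # x*exp(-x) - gamma; exp(-x) underflows harmlessly to 0.0 near x = 1000
    return x * math.exp(-x) - gamma

def bisect(a, b, gamma, tol=1e-10):
    """Plain bisection on [a, b]; assumes f(a, gamma) and f(b, gamma)
    differ in sign.  Far slower than Zero, but equally robust here."""
    fa = f(a, gamma)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m, gamma) <= 0.0:
            b = m
        else:
            a = m
            fa = f(a, gamma)
    return 0.5 * (a + b)

gamma = 0.060641
root = bisect(1.0, 1000.0, gamma)   # the root larger than 1, near 4.2496
```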
This is a situation illuminated by backward error analysis: the computed root is the exact root of f(x) = γ + r, a problem very close to the one posed. Here we see that the x coordinate computed by Zero is the exact value for a slightly different y coordinate.

When computing a trajectory, we are solving a sequence of problems, indeed, more than a hundred in generating the figure. The first derivative is readily available, the roots are simple, and a good initial guess is available.
The circumstances suggest that writing a code based on Newton's method would be worth the trouble. Before discussing the computation of the trajectory, let's do a couple of experiments with Newton's method. First we compute the smaller root when y = 1. Recall that when solving F(x) = f(x) - γ = 0, the method is

x_{i+1} = x_i - (x_i e^{-x_i} - γ) / ((1 - x_i) e^{-x_i}).

Because the method is quadratically convergent for these simple roots, the difference between successive iterates provides a convenient and reliable way to test for convergence. Of course, for this test to work it is necessary that the iterate be close enough to the root that the method really is converging quadratically fast and not so close that finite precision corrupts the estimate.
Because γ is relatively small in this case, we can compute an accurate value for x using the series to see how well the procedure works. With x_0 = γ, the table shows that convergence appears to be quadratic right from the start and the estimated error is quite close to the true error, except at limiting precision. (It is quite possible that the root computed in this manner is more accurate than the reference value computed with the series.) Because there is a natural measure of scale provided by the constant γ, the small residual tells us that after only a few iterations we have a very accurate solution in the sense of backward error analysis.

This is what we expect of Newton's method, but things do not always go so well. Suppose now that we try to compute the larger root starting with a guess of 8.
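A sketch of the iteration with the iterate-difference convergence test might look as follows; the tolerance and iteration limit are illustrative choices, not those used for the table:

```python
import math

def newton(x, gamma, rtol=1e-12, itmax=50):
    """Newton's method for x*exp(-x) = gamma; the change between
    successive iterates serves as the convergence test."""
    for _ in range(itmax):
        e = math.exp(-x)
        dx = -(x * e - gamma) / ((1.0 - x) * e)
        x += dx
        if abs(dx) <= rtol * abs(x):
            return x
    raise ValueError("Newton's method failed to converge")

gamma = 0.060641
small = newton(gamma, gamma)   # x0 = gamma converges to the smaller root
```

Starting from x_0 = γ, only a handful of iterations are needed, and the residual small*exp(-small) - γ ends up near the limiting precision, very accurate in the backward error sense.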
In order to measure the error, we computed the root accurately in a preliminary computation. The first difficulty we encountered is that the method does not converge to this root. Remember that we can expect convergence to the root nearest the guess only when the guess is "sufficiently close." The error reported in the table is the difference between the iterate and an accurate value for the smaller root. As the residual makes clear, we went from a reasonable initial guess to approximations that are terrible in the sense of backward error analysis. This kind of thing did not happen with Zero, even though it was given much worse guesses, because of the way it combines the secant rule and bisection.
All goes well near the root, but convergence is very slow when an iterate is large and negative. Indeed, the estimated error is the change in the iterate, and the table shows it to be nearly constant then. This is easy to see analytically from

x_{i+1} - x_i = -(x_i e^{-x_i} - γ) / ((1 - x_i) e^{-x_i}).

Examination of the sizes of the terms when x_i << -1 shows that the change is approximately -x_i/(1 - x_i). This can be further approximated as 1, agreeing as well as might be expected with the values of about 0.94 seen in the table.
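This estimate of the change is easy to confirm numerically; the starting iterate x_0 = -20 below is an arbitrary large negative value, not one from the table:

```python
import math

gamma = 0.060641

def newton_step(x, gamma):
    # one Newton step for x*exp(-x) = gamma
    e = math.exp(-x)
    return x - (x * e - gamma) / ((1.0 - x) * e)

x0 = -20.0                        # a large negative "iterate"
x1 = newton_step(x0, gamma)
change = x1 - x0                  # the actual change in the iterate
predicted = -x0 / (1.0 - x0)      # the analytical estimate, a bit below 1
```

For x_0 = -20 the term γ e^{x_0} is utterly negligible against x_0, so the actual change agrees with the estimate -x_0/(1 - x_0) = 20/21 to many digits.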
These approximations make it clear that if we should turn up an approximation x_i << -1, the iterates are going to increase slowly toward the positive roots.

Davis used this algebraic approach only for computing a couple of points on the trajectory. If we want to compute the closed trajectory of the figure, we need to do continuation efficiently and deal with the fact that the differential equation for the trajectory in the phase plane is singular when x = 1. First let's look for a moment at the solution of x exp(-x) = γ for a sequence of values γ.
Suppose we have found a root x corresponding to a given γ and want to compute a root corresponding to γ + δ for some "small" δ. One possibility for an initial guess is simply x. Often this works well enough, but for this equation a short calculation shows that

dx/dγ = e^x / (1 - x).

A rather better initial guess is then x + δ dx/dγ. Notice the change in character of the problem that shows up here when x = 1.
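A quick sketch of this continuation step, comparing the naive guess x with the derivative-corrected guess x + δ dx/dγ; the values of γ and δ below are arbitrary illustrative choices:

```python
import math

def solve_root(guess, gamma, rtol=1e-12, itmax=50):
    """Newton's method for x*exp(-x) = gamma from a supplied guess."""
    x = guess
    for _ in range(itmax):
        e = math.exp(-x)
        dx = -(x * e - gamma) / ((1.0 - x) * e)
        x += dx
        if abs(dx) <= rtol * abs(x):
            return x
    raise ValueError("Newton's method failed to converge")

gamma, delta = 0.03, 0.005
x = solve_root(gamma, gamma)                 # root for the current gamma
# corrected continuation guess, using dx/dgamma = exp(x)/(1 - x)
corrected = x + delta * math.exp(x) / (1.0 - x)
x_new = solve_root(corrected, gamma + delta)
```

The corrected guess lands much closer to the new root than simply reusing x does, so Newton's method needs fewer iterations at each continuation step. The formula also shows the trouble at x = 1: the derivative blows up there, and no such first-order prediction is possible.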
This is the kind of difficulty that we overcome when tracing a trajectory by exploiting additional information at our disposal. The program that computed the figure accepts the constants a, c and the initial point (x, y). Using these data, it computes the constant K. The conservation law allows us to solve for x when y is given, or to solve for y when x is given, and the code selects the more appropriate at each step.
The differential equations for the populations tell us that for a small increment δ in t, the change in x is about δ dx/dt and the change in y is about δ dy/dt. If |dy/dt| < |dx/dt|, the code uses the value x + δ dx/dt and solves for y(x). Otherwise, it uses the value y + δ dy/dt and solves for x(y).
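A sketch of one such step follows. The population equations themselves are not written out in this excerpt; the normalized Volterra form dx/dt = a x (1 - y), dy/dt = -c y (1 - x) assumed below is consistent with the conservation law, but the book's exact scaling may differ:

```python
import math

a, c = 2.0, 1.0
x0, y0 = 1.0, 3.0
# constant K from the conservation law (x e^{-x})^c (y e^{-y})^a = K
K = (x0 * math.exp(-x0)) ** c * (y0 * math.exp(-y0)) ** a

def solve_root(g, w):
    # Newton's method for w*exp(-w) = g from initial guess w
    for _ in range(50):
        e = math.exp(-w)
        dw = -(w * e - g) / ((1.0 - w) * e)
        w += dw
        if abs(dw) <= 1e-12 * abs(w):
            return w
    raise ValueError("Newton's method failed to converge")

def step(x, y, delta):
    dxdt = a * x * (1.0 - y)        # assumed normalized Volterra form
    dydt = -c * y * (1.0 - x)
    if abs(dydt) < abs(dxdt):
        x = x + delta * dxdt        # advance x, solve the law for y(x)
        g = (K / (x * math.exp(-x)) ** c) ** (1.0 / a)
        y = solve_root(g, y + delta * dydt)
    else:
        y = y + delta * dydt        # advance y, solve the law for x(y)
        g = (K / (y * math.exp(-y)) ** a) ** (1.0 / c)
        x = solve_root(g, x + delta * dxdt)
    return x, y

x1, y1 = step(x0, y0, 0.01)         # one step along the closed trajectory
```

The predicted value y + δ dy/dt (or x + δ dx/dt) serves double duty: it is a good initial guess for Newton's method, and it selects the correct branch of the two-rooted algebraic equation. The new point satisfies the conservation law to roundoff by construction.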
This amounts to changing the coordinate system in which the curve is viewed so as to avoid difficulties with vertical tangents. After choosing which equation to solve, Newton's method is easily applied and converges very rapidly. Superlinear convergence is used to test for convergence to a specified relative error tolerance.

REFERENCES

1.