Algorithm 17. Minimisation of a function of one variable (cont.)

      Note that st holds the value of x2-x1.}
      tt1 := (s0-s1)*st; tt2 := (s2-s1)*tt0; {temporary accumulators}
      if tt1<>tt2 then {avoid zero divide in parabolic inverse interpolation}
      begin
        st := 0.5*(tt2*tt0-tt1*st)/(tt2-tt1); {calculate the step}
        xii := x1+st;
        writeln('Paramin step and argument :',st,' ',xii);
        if (reltest+xii)<>(reltest+x1) then
        begin {evaluate function if argument has been changed}
          fii := fn1d(xii,notcomp); ifn := ifn+1;
          if notcomp then fii := big;
          if fii<s1 then
          begin
            s1 := fii; x1 := xii; {save new & better function, argument}
            writeln('New min f(',x1,')=',s1);
          end;
        end; {evaluate function for parabolic inverse interpolation}
      end; {if not zerodivide situation}
      writeln(ifn,' evalns  f(',x1,')=',s1);
      s0 := s1; {to save function value in case of new iteration}
    until (bb=x1); {Iterate until minimum does not change. We could use
                    reltest in this termination condition.}
    writeln('Apparent minimum is f(',bb,')=',s1);
    writeln('after ',ifn,' function evaluations');
    fnminval := s1; {store value for return to calling program}
  end; {alg17.pas == min1d}

The driver program DR1617.PAS on the software diskette allows grid search to be used to localise a minimum, with its precise location found using the one-dimensional minimiser above.

Example 13.1. Grid and linear search

The expenditure minimisation example 12.5 provides a problem on which to test algorithms 16 and 17.
Recall that

   S(t) = ts(1 - F) + FP(1 + r)^t

where t is the time until purchase (in months), P is the purchase price of the item we propose to buy (in $), F is the payback ratio on the loan we will have to obtain, s is the amount we can save each month, and r is the inflation rate per month until the purchase date. Typical figures might be P = $10000, F = 2·25 (try 1·5% per month for 5 years), s = $200 per month, and r = 0·00949 (equivalent to 12% per annum). Thus the function to be minimised is

   S(t) = -250t + 22500(1·00949)^t

which has the analytic solution (see example 12.5) t* = 17·1973522154 as computed on a Tektronix 4051.
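
The arithmetic of this example is easy to reproduce. The short Pascal program below (a sketch for illustration only, not a listing from the software diskette) tabulates S(t) at the grid points t = 0, 10, . . . , 50 used in the table that follows; because it works in ordinary full precision, its last digits will not agree exactly with either the NOVA or the ECLIPSE figures.

  program exsfun; {sketch only: tabulate S(t) of example 12.5}
  const
    P = 10000.0; F = 2.25; s = 200.0; r = 0.00949;
  var
    j : integer;
    t, Sval : real;
  begin
    for j := 0 to 5 do
    begin
      t := 10.0*j;
      Sval := t*s*(1.0 - F) + F*P*exp(t*ln(1.0 + r)); {S(t) = ts(1-F) + FP(1+r)^t}
      writeln('S(', t:4:0, ') = ', Sval:10:1);
    end;
  end.
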
Simple grid search was applied to this function on Data General NOVA and ECLIPSE computers operating in 23-bit binary and six-digit hexadecimal arithmetic, respectively. The table below gives the results of this exercise. Note the difference in the computed function values!

        NOVA                    ECLIPSE
   F(0)  = 22500           F(0)  = 22500
   F(10) = 22228·7         F(10) = 22228·5
   F(20) = 22178·1         F(20) = 22177·7
   F(30) = 22370·2         F(30) = 22369·3
   F(40) = 22829           F(40) = 22827·8
   F(50) = 23580·8         F(50) = 23579·2

For both sets, the endpoints are u = 0, v = 50 and the number of points is n = 5.

An extended grid search (on the ECLIPSE) uses 26 function evaluations to localise the minimum to within a tolerance of 0·1.

NEW
*ENTER#SGRID#
*RUN
SGRID  NOV 23 77   16 44 31   3 5 1978
ENTER SEARCH INTERVAL ENDPOINTS
AND TOLERANCE OF ANSWER'S PRECISION
? 10 ? 30 ? 1
ENTER THE NUMBER OF GRID DIVISIONS
F( 14 )= 22180.5
F( 18 )= 22169.2
F( 22 )= 22195.9
F( 26 )= 22262.1
THE MINIMUM LIES IN THE INTERVAL [ 14 , 22 ]
F( 15.6 )= 22171.6
F( 17.2 )= 22168.6
F( 18.8 )= 22171.6
F( 20.4 )= 22180.7
THE MINIMUM LIES IN THE INTERVAL [ 15.6 , 18.8 ]
F( 16.24 )= 22169.7
F( 16.88 )= 22168.7
F( 17.52 )= 22168.7
F( 18.16 )= 22169.7
THE MINIMUM LIES IN THE INTERVAL [ 16.24 , 17.52 ]
F( 16.496 )= 22169.2
F( 16.752 )= 22168.8
F( 17.008 )= 22168.7
F( 17.264 )= 22168.6
THE MINIMUM LIES IN THE INTERVAL [ 17.008 , 17.52 ]
18 FUNCTION EVALUATIONS
NEW TOLERANCE ? .1
F( 17.1104 )= 22168.6
F( 17.2128 )= 22168.6
F( 17.3152 )= 22168.6
F( 17.4176 )= 22168.6
THE MINIMUM LIES IN THE INTERVAL [ 17.1104 , 17.3152 ]
F( 17.1513 )= 22168.6
F( 17.1923 )= 22168.6
F( 17.2332 )= 22168.6
F( 17.2742 )= 22168.6
THE MINIMUM LIES IN THE INTERVAL [ 17.1923 , 17.2742 ]
26 FUNCTION EVALUATIONS
NEW TOLERANCE ? -1
STOP AT 0420
*
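
The refinement strategy visible in the SGRID output can be sketched as follows. This is not the text of algorithm 16, which is not reproduced in this excerpt; it is a minimal Pascal illustration of the same idea under the assumption, inferred from the output, that each pass evaluates the function at the interior grid points and keeps the two grid cells flanking the smallest value. Evaluation counts and final digits will differ slightly from the ECLIPSE run above.

  program gridref; {sketch of interval refinement by grid search}
  const
    n = 4; {number of interior grid points, as in the SGRID run}
  var
    u, v, tol, h, f, fbest, xleft, xright : real;
    j, jbest, ifn : integer;

  function fn(t : real) : real; {expenditure function of example 13.1}
  begin
    fn := -250.0*t + 22500.0*exp(t*ln(1.00949));
  end;

  begin
    u := 10.0; v := 30.0; tol := 0.1; ifn := 0;
    while (v - u) > tol do
    begin
      h := (v - u)/(n + 1);
      jbest := 1; fbest := fn(u + h); ifn := ifn + 1;
      for j := 2 to n do
      begin
        f := fn(u + j*h); ifn := ifn + 1;
        if f < fbest then begin fbest := f; jbest := j; end;
      end;
      xleft := u + (jbest - 1)*h; xright := u + (jbest + 1)*h;
      u := xleft; v := xright; {keep the grid cells on either side of the best point}
    end;
    writeln('minimum bracketed in [', u:10:5, ',', v:10:5, '] after ', ifn, ' evaluations');
  end.
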
Algorithm 17 requires a starting point and a step length. The ECLIPSE gives

*RUN
NEWMIN  JULY 7 77
STARTING VALUE= ? 10  STEP ? 5
F( 10 )= 22228.5
F( 15 )= 22174.2
SUCCESS
F( 22.5 )= 22202.2
PARAMIN STEP= 2.15087
F( 17.1509 )= 22168.6
NEW K4=-1.78772
F( 15.3631 )= 22172.6
FAILURE
F( 17.5978 )= 22168.8
PARAMIN STEP= 7.44882E-02
F( 17.2253 )= 22168.6
NEW K4=-.018622
F( 17.2067 )= 22168.6
SUCCESS
F( 17.1788 )= 22168.6
PARAMIN STEP=-4.65551E-03
F( 17.2021 )= 22168.6
PARAMIN FAILS
NEW K4= 4.65551E-03
F( 17.2114 )= 22168.6
FAILURE
F( 17.2055 )= 22168.6
PARAMIN FAILS
NEW K4= 0
MIN AT 17.2067 = 22168.6
12 FN EVALS
STOP AT 0060
*
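
The PARAMIN steps printed above come from the parabolic inverse interpolation formula used in algorithm 17. The fragment below recomputes the first such step from the three printed points 10, 15 and 22.5, taking x1 = 15 as the current best point and assuming, from the comment in the listing, that st holds x2-x1 and tt0 holds x0-x1 (the earlier part of the algorithm is not reproduced in this excerpt). Because the printed function values are rounded to six figures, the step comes out near 2.151 rather than exactly 2.15087.

  program paramin; {sketch: one parabolic inverse interpolation step}
  var
    x0, x1, x2, s0, s1, s2, tt0, st, tt1, tt2 : real;
  begin
    x0 := 10.0;  s0 := 22228.5; {values as printed in the ECLIPSE run, rounded}
    x1 := 15.0;  s1 := 22174.2; {current best point}
    x2 := 22.5;  s2 := 22202.2; {failure point}
    tt0 := x0 - x1;  st := x2 - x1;
    tt1 := (s0 - s1)*st;  tt2 := (s2 - s1)*tt0;
    if tt1 <> tt2 then {avoid zero divide, as in the listing}
    begin
      st := 0.5*(tt2*tt0 - tt1*st)/(tt2 - tt1);
      writeln('Paramin step and argument :', st:10:5, ' ', (x1 + st):10:5);
    end;
  end.
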
The effect of step length choice is possibly important. Therefore, consider the following applications of algorithm 17 using a starting value of t = 10.

   Step length    Minimum at    Function evaluations
        1          17·2264              13
        5          17·2067              12
       10          17·2314              10
       20          17·1774              11

The differences in the minima are due to the flatness of this particular function, which may cause difficulties in deciding when the minimum has been located.
By way of comparison, a linear search based on the success-failure/inverse interpolation sequence in algorithm 22 found the following minima starting from t = 10.

   Step length    Minimum at    Function evaluations
        1          17·2063              23
        5          17·2207              23
       10          17·2388              21
       20          17·2531              24

A cubic inverse interpolation algorithm requiring both the function and derivative to be computed, but which employs a convergence test based solely on the change in the parameter, used considerably more effort to locate a minimum from t = 10.

   Step length    Minimum at    Function and derivative evaluations
        1          17·2083              38+38
        5          17·2082              23+23
       10          17·2082              36+36
       20          17·2081              38+38

Most of the work in this algorithm is done near the minimum, since the region of the minimum is located very quickly.
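
The cubic algorithm itself is not listed in this excerpt, but the flavour of a cubic inverse interpolation step can be seen from the following sketch, which applies one common form of the two-point cubic (Hermite) step to S(t) at the illustrative trial points 15 and 22.5; a single step already lands very close to the true minimum near t = 17.2.

  program cubmin; {sketch: one two-point cubic interpolation step using f and f'}
  var
    a, b, fa, fb, ga, gb, d1, d2, xmin : real;

  function fn(t : real) : real; {expenditure function S(t) of example 13.1}
  begin
    fn := -250.0*t + 22500.0*exp(t*ln(1.00949));
  end;

  function grad(t : real) : real; {its derivative S'(t)}
  begin
    grad := -250.0 + 22500.0*ln(1.00949)*exp(t*ln(1.00949));
  end;

  begin
    a := 15.0;  b := 22.5; {two trial points straddling the minimum}
    fa := fn(a);  ga := grad(a);
    fb := fn(b);  gb := grad(b);
    d1 := ga + gb - 3.0*(fa - fb)/(a - b);
    d2 := sqrt(d1*d1 - ga*gb);  {real because ga < 0 < gb here}
    if b < a then d2 := -d2;    {take the root on the side of b}
    xmin := b - (b - a)*(gb + d2 - d1)/(gb - ga + 2.0*d2);
    writeln('cubic estimate of minimum: ', xmin:10:5);
  end.
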
If we can be confident of the accuracy of the derivative calculation, then a root-finder is a very effective way to locate a minimum of this function. However, we should check that the point found is not a maximum or saddle. Algorithm 18 gives

*RUN 200
ROOTFINDER
U= ? 10  V= ? 30
BISECTION EVERY ? 5
TOLERANCE ? 0
F( 10 )=-16.4537   F( 30 )= 32.0994
FP ITN 1  U= 10       V= 30  F( 16.7776 )=-1.01735
FP ITN 2  U= 16.7776  V= 30  F( 17.1838 )=-.0582123
FP ITN 3  U= 17.1838  V= 30  F( 17.207 )=-.00361633
FP ITN 4  U= 17.207   V= 30  F( 17.2084 )=-2.28882E-04
FP ITN 5  U= 17.2084  V= 30  F( 17.2085 )=-3.05176E-05
BI ITN 6  U= 17.2085  V= 30  F( 23.6042 )= 15.5647
FP CONVERGED
ROOT: F( 17.2085 )=-3.05176E-05
STOP AT 0340
*
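
The behaviour of the run above can be imitated by the sketch below, which applies the false position (regula falsi) formula to the derivative S'(t) = s(1-F) + FP ln(1+r)(1+r)^t and substitutes a bisection step every fifth iteration. It is an illustration of the idea only, not the text of algorithm 18, and in ordinary full precision it settles near the analytic minimum t* = 17.197 rather than the 17.2085 printed above, a difference which presumably reflects the six hexadecimal digit arithmetic noted below.

  program fproot; {sketch: false position with periodic bisection on S'(t)}
  const
    tol = 1.0E-4; {crude absolute tolerance on the function value}
  var
    u, v, b, fu, fv, fb : real;
    itn : integer;

  function grad(t : real) : real; {derivative of the expenditure function}
  begin
    grad := -250.0 + 22500.0*ln(1.00949)*exp(t*ln(1.00949));
  end;

  begin
    u := 10.0; v := 30.0;
    fu := grad(u); fv := grad(v); {fu < 0 < fv brackets the root}
    itn := 0;
    repeat
      itn := itn + 1;
      if (itn mod 5) = 0 then
        b := 0.5*(u + v)                  {bisection every fifth iteration}
      else
        b := (u*fv - v*fu)/(fv - fu);     {false position point}
      fb := grad(b);
      writeln('itn ', itn:2, '  f(', b:10:5, ') = ', fb:12:6);
      if fu*fb < 0.0 then
        begin v := b; fv := fb; end       {root is in [u,b]}
      else
        begin u := b; fu := fb; end;      {root is in [b,v]}
    until abs(fb) < tol;
    writeln('root near ', b:10:5);
  end.
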
Unless otherwise stated, all of the above results were obtained using a Data General ECLIPSE operating in six hexadecimal digit arithmetic.

It may appear that this treatment is biased against using derivative information. For instance, the cubic inverse interpolation uses a convergence test which does not take it into account at all. The reason for this is that in a single-precision environment (with a short word length) it is difficult to evaluate the projection of a gradient along a line, since inner-product calculations cannot be made in extended precision. However, if the linear search is part of a larger algorithm to minimise a function of several parameters, derivative information must usually be computed in this way. The function values may still be well determined, but inaccuracy in the derivative has, in my experience, appeared to upset the performance of either the inverse interpolation or the convergence test.

13.3. REAL ROOTS OF FUNCTIONS OF ONE VARIABLE

The linear search problem may, as mentioned, be approached by finding the roots of the derivative of the function to be minimised. The problem of finding one or more values b for which f(b) is zero arises directly in a variety of disciplines. Formally the problem may be stated:

   find the (real) value or values b such that f(b) = 0.

However, in practice there is almost always more information available about the roots than simply that they are real.
For instance, the roots may be restricted to an interval [u, v] and the number of roots in this interval may be known. In such cases, a grid search may be used to reduce the length of the interval in which the roots are to be sought. Choosing some integer n to define a step

   h = (v - u)/(n + 1)                                            (13.15)

gives a grid upon which function values

   f(u + jh)        j = 0, 1, 2, . . . , (n + 1)                  (13.16)

can be computed. If

   f(u + jh) * f(u + (j + 1)h) < 0                                (13.17)

then the interval [u + jh, u + (j + 1)h] contains at least one root and the search can be repeated on this smaller interval if necessary. Roots which occur with

   f(b) = 0         f'(b) = 0                                     (13.18)

simultaneously are more difficult to detect, since the function may not cross the b axis and a sign change will not be observed.
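
A minimal sketch of the sign-change scan described by (13.15)-(13.17) is given below; for a concrete test function it reuses S'(t) from example 13.1 on the interval [10, 30] and reports every grid cell whose endpoints give opposite signs.

  program rootgrid; {sketch: bracket roots by scanning a grid for sign changes}
  const
    n = 9; {n + 1 = 10 grid cells; keep n modest, as suggested in the text}
  var
    u, v, h, f0, f1 : real;
    j : integer;

  function grad(t : real) : real; {derivative of the expenditure function}
  begin
    grad := -250.0 + 22500.0*ln(1.00949)*exp(t*ln(1.00949));
  end;

  begin
    u := 10.0; v := 30.0;
    h := (v - u)/(n + 1);          {equation (13.15)}
    f0 := grad(u);
    for j := 0 to n do
    begin
      f1 := grad(u + (j + 1)*h);
      if f0*f1 < 0.0 then          {test (13.17)}
        writeln('sign change in [', (u + j*h):10:5, ',', (u + (j + 1)*h):10:5, ']');
      f0 := f1;
    end;
  end.
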
Grid searches are notoriously expensive in computer time if used indiscriminately. As in the previous section, they are a useful tool for discovering some of the properties of a function by 'taking a look', particularly in conjunction with some plotting device. As such, the grid parameter, n, should not be large; n < 10 is probably a reasonable bound. Unfortunately, it is necessary to caution that if the inequality (13.17) is not satisfied, there may still be an even number of roots in [u + jh, u + (j + 1)h].

Suppose now that a single root is sought in the interval [u, v], which has been specified so that

   f(u) * f(v) < 0.                                               (13.19)

Thus the function has at least one root in the interval if the function is continuous. One possible way to find the root is to bisect the interval, evaluating the function at

   b = (u + v)/2.                                                 (13.20)

If

   f(u) * f(b) < 0                                                (13.21)
This bisection can be repeated asmany times as desired. After t bisections the interval length will be2-t(u – v)(13.22)so that the root can always be located by this method in a fixed number of steps.Since the function values are only examined for their signs, however, an unnecessarily large number of function evaluations may be required. By reversing theprocess of interpolation-that is, estimating function values between tabulatedvalues by means of an interpolating function fitted to the tabular points-one mayexpect to find the root with fewer evaluations of the function.