Nash, Compact Numerical Methods for Computers
Algorithm 20. Axial search (cont.)

        of curv.':14,' tilt');
  lowerfn := false; {initially no lower function value than fmin exists
                for the function at hand}
  for i := 1 to n do {STEP 1}
  begin
    if (not lowerfn) then
    begin {STEP 2}
      temp := Bvec[i]; {to save the parameter value}
      step := eps*(abs(temp)+eps); {the change in the parameter -- STEP 3}
      Bvec[i] := temp+step; {STEP 4}
      f := fminfn(n, Bvec, Workdata, notcomp); {function calculation}
      if notcomp then f := big; {substitution of a large value for a
                non-computable function}
      write(i:5,' ',step:12,' ',f:12,' ');
    end; {step forwards}
    if f<fmin then lowerfn := true; {STEP 5}
    if (not lowerfn) then
    begin
      fplus := f; {to save the function value after the forward step}
      Bvec[i] := temp-step; {STEP 6}
      f := fminfn(n, Bvec, Workdata, notcomp); {function calculation}
      if notcomp then f := big; {substitution of a large value for a
                non-computable function}
      write(f:12,' ');
    end; {step backwards}
    if f<fmin then lowerfn := true; {STEP 7}
    if (not lowerfn) then {STEP 8}
    begin
      Bvec[i] := temp; {to restore parameter value}
      {compute tilt and radius of curvature}
      temp := 0.5*(fplus-f)/step; {first order parabola coefficient}
      fplus := 0.5*(fplus+f-2.0*fmin)/(step*step);
                {2nd order parabolic coefficient - 0th order is fmin}
      if fplus<>0.0 then {avoid zero divide}
      begin
        cradius := 1.0+temp*temp;
        cradius := cradius*sqrt(cradius)/fplus; {radius of curvature}
      end
      else
        cradius := big; {a safety measure}
      tilt := 45.0*arctan(temp)/arctan(1.0); {tilt in degrees}
      write(cradius:12,' ',tilt:12);
    end;
    writeln; {to advance printer to next line}
  end; {loop on i -- STEP 9}
end; {alg20.pas == axissrch -- STEP 10}
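The tilt and radius-of-curvature measures computed at STEP 8 can be checked on a one-parameter example. The sketch below is an illustration only and is not part of the book's codes: the program name, the test function f(b) = (b-2)^2 + 1 and the probe point are choices made here; it simply applies the same parabolic-fit formulas to three function values.

{ Illustration of the tilt and radius-of-curvature measures of algorithm 20,
  applied to a one-parameter test function.  The function and probe point are
  arbitrary choices for this sketch. }
program axprobe;
var
  b, eps, step, fmin, fplus, fminus: real;
  slope, curv, cradius, tilt: real;

function testfn(x: real): real;
begin
  testfn := (x-2.0)*(x-2.0)+1.0;
end;

begin
  b := 1.5;                  {a presumed 'minimum' -- deliberately off target}
  eps := 1.0E-4;             {probe size, analogous to eps in algorithm 20}
  fmin := testfn(b);
  step := eps*(abs(b)+eps);
  fplus := testfn(b+step);   {forward step}
  fminus := testfn(b-step);  {backward step}
  slope := 0.5*(fplus-fminus)/step;                 {first order coefficient}
  curv := 0.5*(fplus+fminus-2.0*fmin)/(step*step);  {second order coefficient}
  if curv<>0.0 then
  begin
    cradius := 1.0+slope*slope;
    cradius := cradius*sqrt(cradius)/curv;          {radius of curvature}
  end
  else
    cradius := 1.0E35;                              {safety value}
  tilt := 45.0*arctan(slope)/arctan(1.0);           {tilt in degrees}
  writeln('slope = ', slope, '  tilt = ', tilt:8:3,
          ' degrees  radius = ', cradius);
end.

At this probe point the fitted parabola is strongly tilted (about -45 degrees); at a genuine minimum the tilt should be near zero, and the radius of curvature then describes how sharply the function rises away from the minimum.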
Example 14.1. Using the Nelder-Mead simplex procedure (algorithm 19)

Consider a (time) series of numbers

   Q_t        t = 1, 2, . . . , m.

A transformation of this series to

   P_t = Q_t - b_1*Q_(t-1) - b_2*Q_(t-2)        t = 1, 2, . . . , m

will have properties different from those of the original series. In particular, the
autocorrelation coefficient of order k is defined (Kendall 1973) as

   r_k = [ sum_(t=1..m-k) (P_t - Pbar)*(P_(t+k) - Pbar) ] / [ sum_(t=1..m) (P_t - Pbar)^2 ]

where Pbar is the mean of the P_t.
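Given the transformation and the definition of r_k, the objective presented to the minimiser in this example is a function of b_1 and b_2 alone. The sketch below is an illustration, not a transcription of dr1920.pas: the short artificial series q, the range t = 3, ..., m for the transformed series, and the choice to return the square of r_1 (so that the ideal value is zero, consistent with the near-zero function values in the output that follows) are all assumptions made here.

{ Sketch of the objective for example 14.1; the data and the squaring of r1
  are choices made for this illustration only. }
program acfobj;
const
  mmax = 100;
type
  series = array[1..mmax] of real;
var
  q: series;
  i, m: integer;

function acf1sq(b1, b2: real; var q: series; m: integer): real;
{squared first-order autocorrelation of P_t = Q_t - b1*Q_(t-1) - b2*Q_(t-2)}
var
  p: series;
  t, np: integer;
  mean, s0, s1: real;
begin
  np := 0;
  for t := 3 to m do           {two lags are needed, so start at t = 3}
  begin
    np := np+1;
    p[np] := q[t] - b1*q[t-1] - b2*q[t-2];
  end;
  mean := 0.0;
  for t := 1 to np do mean := mean+p[t];
  mean := mean/np;
  s0 := 0.0;
  s1 := 0.0;
  for t := 1 to np do s0 := s0+sqr(p[t]-mean);
  for t := 1 to np-1 do s1 := s1+(p[t]-mean)*(p[t+1]-mean);
  acf1sq := sqr(s1/s0);        {square r1 so that the target value is zero}
end;

begin
  m := 12;                     {a short artificial series for illustration}
  for i := 1 to m do
    q[i] := 25.0+2.0*i+3.0*sin(0.7*i);
  writeln('objective at (1, 1) = ', acf1sq(1.0, 1.0, q, m));
  {with the actual data of example 14.1 the driver reports 6.2908504924E-01
   at this starting point; the artificial series above gives a different value}
end.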
The following output was produced with driver program DR1920, but has been
edited for brevity. The final parameters are slightly different from those given in the
first edition, where algorithm 19 was run in much lower precision. Use of the final
parameters from the first edition (1·1104, -0·387185) as starting parameters for the
present code gives an apparent minimum.

Minimum function value found = 2.5734305415E-24
At parameters
B[1]= 1.1060491080E+00
B[2]= -3.7996531780E-01

dr1920.pas -- driver for Nelder-Mead minimisation   15:21:37  1989/01/25
File for input of control data ([cr] for keyboard) ex14-1
File for console image ([cr] = nul) d:test14-1
Function: Jaffrelot Minimisation of First Order ACF
 25.02000 25.13000 25.16000 23.70000 22.09000 23.39000 26.96000
 27.56000 28.95000 27.32000 29.38000 27.81000 26.78000 28.75000
 32.16000 30.10000 29.02000 26.76000 29.18000 26.59000 26.79000
 26.95000 28.25000 27.29000 27.82000 31.07000 36.59000 38.37000
 40.68000 36.01000 34.34000 33.58000 32.46000 31.61000 30.26000
 28.57000 28.23000 28.69000 33.60000 33.63000 33.61000 34.19000
 37.13000 37.81000 38.34000 33.40000 30.85000 26.99000 25.63000
 23.74000 26.21000 27.58000 33.32000 34.91000 39.95000 41.97000
 49.36000 48.98000 63.49000 57.74000 50.78000 42.25000 53.00000
Enter starting parameters
starting point ( 1.0000000000E+00, 1.0000000000E+00)
Nash Algorithm 19 version 2 1988-03-17
Nelder Mead polytope direct search function minimiser
Function value for initial parameters = 6.2908504924E-01
Scaled convergence tolerance is 1.1442990385E-12
Stepsize computed as 1.0000000000E-01
BUILD           3  6.5569689140E-01  6.2908504924E-01
EXTENSION       5  6.5006359306E-01  5.9698860501E-01
EXTENSION       7  6.2908504924E-01  5.1102081991E-01
EXTENSION       9  5.9698860501E-01  2.7249463178E-01
EXTENSION      11  5.1102081991E-01  1.8450323330E-02
LO-REDUCTION   13  2.7249463178E-01  1.8450323330E-02
LO-REDUCTION   15  4.0644968776E-02  1.8450323330E-02
REFLECTION     17  3.2560286564E-02  5.8957995328E-03
HI-REDUCTION   19  1.8450323330E-02  3.5399834634E-04
...
LO-REDUCTION   67  3.0354380079E-11  8.8619333373E-12
HI-REDUCTION   69  1.8902369403E-11  4.9132841645E-13
LO-REDUCTION   71  8.8619333373E-12  1.0921611719E-13
HI-REDUCTION   73  1.7534478538E-12  1.0921611719E-13
Exiting from Alg19.pas Nelder Mead polytope minimiser
75 function evaluations used
Minimum function value found = 1.0921611719E-13
At parameters
B[1]= 1.1204166839E+00
B[2]= -4.0364239252E-01
alg20.pas -- axial search
Axis  Stepsize      function +    function -    rad. of curv.  tilt
   1  1.511270E-06  1.216470E-11  6.654846E-12  2.455700E-01  1.044457E-04
   2  5.446594E-07  1.265285E-12  4.698094E-14
Lower function value found
Nash Algorithm 19 version 2 1988-03-17
Nelder Mead polytope direct search function minimiser
Function value for initial parameters = 4.6980937735E-14
Scaled convergence tolerance is 3.3941802781E-24
Stepsize computed as 1.1205378270E-01
BUILD           3  4.1402114150E-02  4.6980937735E-14
LO-REDUCTION    5  1.2559406706E-02  4.6980937735E-14
HI-REDUCTION    7  3.4988133663E-03  4.6980937735E-14
HI-REDUCTION    9  7.8255935023E-04  4.6980937735E-14
...
SHRINK         59  3.1448995130E-14  1.0373099578E-16
SHRINK         63  2.4400978639E-14  1.0373099578E-16
HI-REDUCTION   65  1.7010223449E-14  1.0373099578E-16
...
HI-REDUCTION  117  6.0920713485E-24  3.9407472806E-25
Exiting from ALG19.pas Nelder Mead polytope minimiser
119 function evaluations used
Minimum function value found = 1.7118624554E-25
At parameters
B[1]= 1.1213869326E+00
B[2]= -4.0522273834E-01
alg20.pas -- axial search
Axis  Stepsize      function +    function -    rad. of curv.  tilt
   1  1.512415E-06  9.226758E-12  9.226726E-12  2.479099E-01  6.159003E-10
   2  5.465253E-07  4.546206E-13  4.546053E-13  6.570202E-01  8.031723E-10

14.4. OTHER DIRECT SEARCH METHODS

The Nelder-Mead procedure presented here is by no means the only direct search
procedure, nor can it be taken to be the most economical of space or time. Dixon
(1972, chap 5) discusses a number of direct search methods. Some of these
perform various linear searches, then form the resultant or sum of these over, say,
n directions. A new set of directions is produced with the first being the resultant
and the others orthogonal to this direction and to each other. This is the basis of
the method of Rosenbrock and that of Davies, Swann and Campey. These both
require storage of at least n vectors of n elements, equivalent to algorithm 19.
The major differences in the two methods mentioned occur in the linear search
and in the orthogonalisation procedure, which must be made resilient to the
occurrence of directions in which no progress can be made, since these will have
length zero and will cause a loss of dimensionality.
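The regeneration of the direction set can be sketched in isolation. The fragment below is an illustration only, not the orthogonalisation actually used by Rosenbrock or by Davies, Swann and Campey: the starting direction set, the progress values d and the tolerance 1.0E-10 are arbitrary choices made here. It forms the resultant of the progress made along the old directions, orthogonalises the remaining candidates against the accepted ones by modified Gram-Schmidt, and drops any candidate whose length is negligible rather than dividing by a near-zero norm.

{ Illustration only: rebuild a direction set with the resultant first,
  discarding candidate directions of negligible length. }
program newdirs;
const
  n = 3;
type
  vec = array[1..n] of real;
  mat = array[1..n, 1..n] of real;
var
  v, w: mat;       {old and new direction sets, stored by columns}
  d: vec;          {progress made along each old direction}
  i, j, k, kept: integer;
  s, norm: real;
begin
  {a trivial starting direction set: the coordinate axes}
  for i := 1 to n do
    for j := 1 to n do
      if i = j then v[i,j] := 1.0 else v[i,j] := 0.0;
  d[1] := 0.5; d[2] := 0.0; d[3] := 0.0; {progress only along the first axis}

  {candidate directions: the resultant first, then the old directions}
  for i := 1 to n do
  begin
    w[i,1] := 0.0;
    for j := 1 to n do w[i,1] := w[i,1] + d[j]*v[i,j];
    for j := 2 to n do w[i,j] := v[i,j-1];
  end;

  {modified Gram-Schmidt, discarding near-zero directions}
  kept := 0;
  for j := 1 to n do
  begin
    for k := 1 to kept do   {remove components along accepted directions}
    begin
      s := 0.0;
      for i := 1 to n do s := s + w[i,j]*w[i,k];
      for i := 1 to n do w[i,j] := w[i,j] - s*w[i,k];
    end;
    norm := 0.0;
    for i := 1 to n do norm := norm + sqr(w[i,j]);
    norm := sqrt(norm);
    if norm > 1.0E-10 then  {the tolerance is an arbitrary choice here}
    begin
      kept := kept+1;
      for i := 1 to n do w[i,kept] := w[i,j]/norm;
    end; {otherwise the candidate is dropped instead of being normalised}
  end;
  writeln(kept, ' orthogonal directions retained of ', n);
end.

In the run shown only two of the three directions survive, which is precisely the loss of dimensionality mentioned above; a practical code must detect such cases and supply replacement directions.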
A method which is very simple to code and which uses only two vectors of working
storage is that of Hooke and Jeeves (1961). This is the only method I have tested
using the 77 test functions in Nash (1976). At that time, as reported in the first
edition, the performance of this method in BASIC on a Data General NOVA
computer was not satisfactory. However, for the types of functions encountered in
many teaching situations it seems quite reliable, and Eason and Fenton (1973)
showed a preference for this method. Furthermore, it is explicable to students whose
major interest is not mathematics, such as those in economics or commerce, who
nevertheless may need to minimise functions. I have used the Hooke and Jeeves
method in a number of forecasting courses, and published it as a step-and-description
version with a BASIC code in Nash (1982).
A more advanced version
appears in Nash and Walker-Smith (1987). I caution that it may prove very slow to
find a minimum, and that it is possible to devise quite simple functions (Nash and
Walker-Smith 1989) which will defeat its heuristic search. Algorithm 27 below
presents a Pascal implementation.

Algorithm 27. Hooke and Jeeves minimiser

procedure hjmin(n: integer; {the number of parameters in the
                      function to be minimised}
                var B, X: rvector; {the parameter values on
                      input (B) and output (X) from minmeth}
                var Fmin: real; {'minimum' function value}
                Workdata: probdata; {user defined data area}
                var fail: boolean; {true if method has failed}
                intol: real); {user-initialized convergence
                      tolerance}
{alg27.pas == Hooke and Jeeves pattern search function minimisation
  From Interface Age, March 1982 page 34ff.
  Copyright 1988 J.C.Nash
}
var
  i, j: integer; {loop counters}
  stepsize: real; {current step size}
  fold: real; {function value at 'old' base point}
  fval: real; {current function value}
  notcomp: boolean; {set true if function not computable}
  temp: real; {temporary storage value}
  samepoint: boolean; {true if two points are identical}
  ifn: integer; {to count the number of function evaluations}
begin
  if intol<0.0 then intol := calceps; {set convergence tolerance if necessary}
  ifn := 1; {to initialize the count of function evaluations}
  fail := false; {Algorithm has not yet failed.}
  {STEP HJ1: n already entered, but we need an initial stepsize.
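The body of algorithm 27 goes on to carry out the exploratory and pattern moves that define the Hooke and Jeeves method. For orientation, a compressed, self-contained sketch of those moves is given below; it is not the book's code: the Rosenbrock test function, the starting step of 0.1, the step-reduction factor of 0.5 and the tolerance 1.0E-6 are choices made here, and the evaluation count (ifn), the non-computable-function test (notcomp) and the samepoint test of hjmin are all omitted.

{ Compressed sketch of the Hooke and Jeeves exploratory and pattern moves,
  independent of alg27.pas; all tolerances and the test function are
  illustrative choices. }
program hjsketch;
const
  n = 2;
type
  vec = array[1..n] of real;
var
  b, x: vec;          {base point and trial point: the two working vectors}
  fbase, fval, step, t: real;
  i: integer;

function fn(var p: vec): real; {Rosenbrock's banana-shaped valley}
begin
  fn := 100.0*sqr(p[2]-sqr(p[1]))+sqr(1.0-p[1]);
end;

procedure explore(var p: vec; var fp: real; step: real);
{axis-by-axis exploratory move about p, keeping any improvement}
var
  i: integer;
  t, f: real;
begin
  for i := 1 to n do
  begin
    t := p[i];
    p[i] := t+step;
    f := fn(p);
    if f<fp then fp := f
    else
    begin
      p[i] := t-step;
      f := fn(p);
      if f<fp then fp := f else p[i] := t;
    end;
  end;
end;

begin
  b[1] := -1.2; b[2] := 1.0;     {classical starting point for this function}
  fbase := fn(b);
  step := 0.1;
  while step>1.0E-6 do
  begin
    x := b; fval := fbase;
    explore(x, fval, step);       {exploratory move about the base point}
    if fval<fbase then
    begin
      repeat                      {pattern moves while they keep paying off}
        for i := 1 to n do
        begin                     {new base := x;  new trial := x + (x - b)}
          t := x[i];
          x[i] := 2.0*x[i]-b[i];
          b[i] := t;
        end;
        fbase := fval;
        fval := fn(x);
        explore(x, fval, step);   {explore about the pattern point}
      until fval>=fbase;
    end
    else
      step := 0.5*step;           {exploration failed: reduce the step}
  end;
  writeln('f = ', fbase, ' at (', b[1]:10:6, ', ', b[2]:10:6, ')');
end.

Run from the standard starting point (-1.2, 1), this sketch works its way slowly along the curved valley towards the minimum at (1, 1), which illustrates the caution above that the method can be very slow.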