Press, Teukolsky, Vetterling, Flannery, Numerical Recipes in C
However, there is a large advantage in taking dith to be nonzero if some special symmetry of the integrand puts the active region exactly at the midpoint of the region, or at the center of some power-of-two submultiple of the region. One wants to avoid the extreme case of the active region being evenly divided into 2^d abutting corners of a d-dimensional space. A typical nonzero value of dith, on those occasions when it is useful, might be 0.1. Of course, when the dithering parameter is nonzero, we must take the differing sizes of the subvolumes into account; the code does this through the variable fracl.

One final feature in the code deserves mention.
The RSS algorithm uses a single set of sample points to evaluate equation (7.8.23) in all d directions. At bottom levels of the recursion, the number of sample points can be quite small. Although rare, it can happen that in one direction all the samples are in one half of the volume; in that case, that direction is ignored as a candidate for bifurcation. Even more rare is the possibility that all of the samples are in one half of the volume in all directions. In this case, a random direction is chosen.
If this happens too often in your application, then you should increase MNPT (see the line if (!jb)... in the code).

Note that miser, as given, returns as ave an estimate of the average function value ⟨f⟩, not the integral of f over the region. The routine vegas, adopting the other convention, returns as tgral the integral. The two conventions are of course trivially related, by equation (7.8.8), since the volume V of the rectangular region is known.

#include <stdlib.h>
#include <math.h>
#include "nrutil.h"
#define PFAC 0.1
#define MNPT 15
#define MNBS 60
#define TINY 1.0e-30
#define BIG 1.0e30

Here PFAC is the fraction of remaining function evaluations used at each stage to explore the variance of func. At least MNPT function evaluations are performed in any terminal subregion; a subregion is further bisected only if at least MNBS function evaluations are available. We take MNBS = 4*MNPT.

static long iran=0;

void miser(float (*func)(float []), float regn[], int ndim, unsigned long npts,
	float dith, float *ave, float *var)
Monte Carlo samples a user-supplied ndim-dimensional function func in a rectangular volume specified by regn[1..2*ndim], a vector consisting of ndim "lower-left" coordinates of the region followed by ndim "upper-right" coordinates. The function is sampled a total of npts times, at locations determined by the method of recursive stratified sampling. The mean value of the function in the region is returned as ave; an estimate of the statistical uncertainty of ave (square of standard deviation) is returned as var. The input parameter dith should normally be set to zero, but can be set to (e.g.) 0.1 if func's active region falls on the boundary of a power-of-two subdivision of region.
{
	void ranpt(float pt[], float regn[], int n);
	float *regn_temp;
	unsigned long n,npre,nptl,nptr;
	int j,jb;
	float avel,varl;
	float fracl,fval;
	float rgl,rgm,rgr,s,sigl,siglb,sigr,sigrb;
	float sum,sumb,summ,summ2;
	float *fmaxl,*fmaxr,*fminl,*fminr;
	float *pt,*rmid;

	pt=vector(1,ndim);
	if (npts < MNBS) {		/* Too few points to bisect; do straight Monte Carlo. */
		summ=summ2=0.0;
		for (n=1;n<=npts;n++) {
			ranpt(pt,regn,ndim);
			fval=(*func)(pt);
			summ += fval;
			summ2 += fval * fval;
		}
		*ave=summ/npts;
		*var=FMAX(TINY,(summ2-summ*summ/npts)/(npts*npts));
	}
	else {				/* Do the preliminary (uniform) sampling. */
		rmid=vector(1,ndim);
		npre=LMAX((unsigned long)(npts*PFAC),MNPT);
		fmaxl=vector(1,ndim);
		fmaxr=vector(1,ndim);
		fminl=vector(1,ndim);
		fminr=vector(1,ndim);
		for (j=1;j<=ndim;j++) {	/* Initialize the left and right bounds for each dimension. */
			iran=(iran*2661+36979) % 175000;
			s=SIGN(dith,(float)(iran-87500));
			rmid[j]=(0.5+s)*regn[j]+(0.5-s)*regn[ndim+j];
			fminl[j]=fminr[j]=BIG;
			fmaxl[j]=fmaxr[j] = -BIG;
		}
		for (n=1;n<=npre;n++) {	/* Loop over the points in the sample. */
			ranpt(pt,regn,ndim);
			fval=(*func)(pt);
			for (j=1;j<=ndim;j++) {	/* Find the left and right bounds for each dimension. */
				if (pt[j]<=rmid[j]) {
					fminl[j]=FMIN(fminl[j],fval);
					fmaxl[j]=FMAX(fmaxl[j],fval);
				}
				else {
					fminr[j]=FMIN(fminr[j],fval);
					fmaxr[j]=FMAX(fmaxr[j],fval);
				}
			}
		}
		sumb=BIG;		/* Choose which dimension jb to bisect. */
		jb=0;
		siglb=sigrb=1.0;
		for (j=1;j<=ndim;j++) {
			if (fmaxl[j] > fminl[j] && fmaxr[j] > fminr[j]) {
				sigl=FMAX(TINY,pow(fmaxl[j]-fminl[j],2.0/3.0));
				sigr=FMAX(TINY,pow(fmaxr[j]-fminr[j],2.0/3.0));
				sum=sigl+sigr;	/* Equation (7.8.24), see text. */
				if (sum<=sumb) {
					sumb=sum;
					jb=j;
					siglb=sigl;
					sigrb=sigr;
				}
			}
		}
		free_vector(fminr,1,ndim);
		free_vector(fminl,1,ndim);
		free_vector(fmaxr,1,ndim);
		free_vector(fmaxl,1,ndim);
		if (!jb) jb=1+(ndim*iran)/175000;	/* MNPT may be too small. */
		rgl=regn[jb];		/* Apportion the remaining points between left and right. */
		rgm=rmid[jb];
		rgr=regn[ndim+jb];
		fracl=fabs((rgm-rgl)/(rgr-rgl));
		nptl=(unsigned long)(MNPT+(npts-npre-2*MNPT)
			*fracl*siglb/(fracl*siglb+(1.0-fracl)*sigrb));	/* Equation (7.8.23). */
		nptr=npts-npre-nptl;
		regn_temp=vector(1,2*ndim);	/* Now allocate and integrate the two subregions. */
		for (j=1;j<=ndim;j++) {
			regn_temp[j]=regn[j];
			regn_temp[ndim+j]=regn[ndim+j];
		}
		regn_temp[ndim+jb]=rmid[jb];
		miser(func,regn_temp,ndim,nptl,dith,&avel,&varl);
		regn_temp[jb]=rmid[jb];		/* Dispatch recursive call; will return back here eventually. */
		regn_temp[ndim+jb]=regn[ndim+jb];
		miser(func,regn_temp,ndim,nptr,dith,ave,var);
		free_vector(regn_temp,1,2*ndim);
		*ave=fracl*avel+(1-fracl)*(*ave);	/* Combine left and right regions */
		*var=fracl*fracl*varl+(1-fracl)*(1-fracl)*(*var);	/* by equation (7.8.11) (1st line). */
		free_vector(rmid,1,ndim);
	}
	free_vector(pt,1,ndim);
}

The miser routine calls a short function ranpt to get a random point within a specified d-dimensional region.
The following version of ranpt makes consecutive calls to a uniform random number generator and does the obvious scaling. One can easily modify ranpt to generate its points via the quasi-random routine sobseq (§7.7). We find that miser with sobseq can be considerably more accurate than miser with uniform random deviates. Since the use of RSS and the use of quasi-random numbers are completely separable, however, we have not made the code given here dependent on sobseq.
A similar remark might be made regarding importance sampling, which could in principle be combined with RSS. (One could in principle combine vegas and miser, although the programming would be intricate.)

extern long idum;

void ranpt(float pt[], float regn[], int n)
Returns a uniformly random point pt in an n-dimensional rectangular region. Used by miser; calls ran1 for uniform deviates. Your main program should initialize the global variable idum to a negative seed integer.
{
	float ran1(long *idum);
	int j;

	for (j=1;j<=n;j++)
		pt[j]=regn[j]+(regn[n+j]-regn[j])*ran1(&idum);
}

CITED REFERENCES AND FURTHER READING:

Hammersley, J.M., and Handscomb, D.C. 1964, Monte Carlo Methods (London: Methuen).
Kalos, M.H., and Whitlock, P.A. 1986, Monte Carlo Methods (New York: Wiley).
Bratley, P., Fox, B.L., and Schrage, E.L. 1983, A Guide to Simulation (New York: Springer-Verlag).
Lepage, G.P. 1978, Journal of Computational Physics, vol. 27, pp. 192–203. [1]
Lepage, G.P. 1980, "VEGAS: An Adaptive Multidimensional Integration Program," Publication CLNS-80/447, Cornell University. [2]
Press, W.H., and Farrar, G.R. 1990, Computers in Physics, vol. 4, pp. 190–195. [3]

Chapter 8. Sorting

8.0 Introduction

This chapter almost doesn't belong in a book on numerical methods. However, some practical knowledge of techniques for sorting is an indispensable part of any good programmer's expertise. We would not want you to consider yourself expert in numerical techniques while remaining ignorant of so basic a subject.

In conjunction with numerical work, sorting is frequently necessary when data (either experimental or numerically generated) are being handled.
One has tables or lists of numbers, representing one or more independent (or "control") variables, and one or more dependent (or "measured") variables. One may wish to arrange these data, in various circumstances, in order by one or another of these variables. Alternatively, one may simply wish to identify the "median" value, or the "upper quartile" value of one of the lists of values. This task, closely related to sorting, is called selection.

Here, more specifically, are the tasks that this chapter will deal with:
• Sort, i.e., rearrange, an array of numbers into numerical order.
• Rearrange an array into numerical order while performing the corresponding rearrangement of one or more additional arrays, so that the correspondence between elements in all arrays is maintained.
• Given an array, prepare an index table for it, i.e., a table of pointers telling which number array element comes first in numerical order, which second, and so on.
• Given an array, prepare a rank table for it, i.e., a table telling what is the numerical rank of the first array element, the second array element, and so on.
• Select the Mth largest element from an array.

For the basic task of sorting N elements, the best algorithms require on the order of several times N log2 N operations.
The algorithm inventor tries to reduce the constant in front of this estimate to as small a value as possible. Two of the best algorithms are Quicksort (§8.2), invented by the inimitable C.A.R. Hoare, and Heapsort (§8.3), invented by J.W.J. Williams.

For large N (say > 1000), Quicksort is faster, on most machines, by a factor of 1.5 or 2; it requires a bit of extra memory, however, and is a moderately complicated program. Heapsort is a true "sort in place," and is somewhat more compact to program and therefore a bit easier to modify for special purposes.
On balance, we recommend Quicksort because of its speed, but we implement both routines.

For small N one does better to use an algorithm whose operation count goes as a higher, i.e., poorer, power of N, if the constant in front is small enough. For N < 20, roughly, the method of straight insertion (§8.1) is concise and fast enough. We include it with some trepidation: It is an N^2 algorithm, whose potential for misuse (by using it for too large an N) is great. The resultant waste of computer time is so awesome, that we were tempted not to include any N^2 routine at all. We will draw the line, however, at the inefficient N^2 algorithm, beloved of elementary computer science texts, called bubble sort. If you know what bubble sort is, wipe it from your mind; if you don't know, make a point of never finding out!

For N < 50, roughly, Shell's method (§8.1), only slightly more complicated to program than straight insertion, is competitive with the more complicated Quicksort on many machines.