If your accuracy requirements are modest, or if your computer budget is large, then the technique is highly recommended as one of great generality. In the next two sections we will see that there are techniques available for "breaking the square root of N barrier" and achieving, at least in some cases, higher accuracy with fewer function evaluations.
#include "nrutil.h"
...
n=...                                    /* Set to the number of sample points desired. */
sw=swx=swy=swz=0.0;
varw=varx=vary=varz=0.0;
ss=0.2*(exp(5.0)-exp(-5.0));             /* Interval of s to be random sampled. */
vol=3.0*7.0*ss;                          /* Volume in x,y,s-space. */
for (j=1;j<=n;j++) {
    x=1.0+3.0*ran2(&idum);
    y=(-3.0)+7.0*ran2(&idum);
    s=0.00135+ss*ran2(&idum);            /* Pick a point in s. */
    z=0.2*log(5.0*s);                    /* Equation (7.6.7). */
    if (z*z+SQR(sqrt(x*x+y*y)-3.0) < 1.0) {
        sw += 1.0;                       /* Density is 1, since absorbed into definition of s. */
        swx += x;
        swy += y;
        swz += z;
        varw += 1.0;
        varx += x*x;
        vary += y*y;
        varz += z*z;
    }
}
w=vol*sw/n;                              /* The values of the integrals (7.6.5), */
x=vol*swx/n;
y=vol*swy/n;
z=vol*swz/n;
dw=vol*sqrt((varw/n-SQR(sw/n))/n);       /* and their corresponding error estimates. */
dx=vol*sqrt((varx/n-SQR(swx/n))/n);
dy=vol*sqrt((vary/n-SQR(swy/n))/n);
dz=vol*sqrt((varz/n-SQR(swz/n))/n);
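Reading the change of variable back out of this listing (equation 7.6.7 itself is not reproduced on this page, so the following is a reconstruction from the code, not the book's text): the line z=0.2*log(5.0*s) inverts to

$$ s = \tfrac{1}{5}e^{5z} \quad\Longleftrightarrow\quad z = \tfrac{1}{5}\ln(5s), \qquad ds = e^{5z}\,dz, $$

so sampling $s$ uniformly carries the weight $e^{5z}$ implicitly (the "density is 1" remark in the listing). The torus condition confines $z$ to $(-1,1)$, which maps to $s \in (e^{-5}/5,\ e^{5}/5)$; that is where the interval width ss=0.2*(exp(5.0)-exp(-5.0)) and the lower limit $0.00135 \approx e^{-5}/5$ come from.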
CITED REFERENCES AND FURTHER READING:

Hammersley, J.M., and Handscomb, D.C. 1964, Monte Carlo Methods (London: Methuen).
Shreider, Yu. A. (ed.) 1966, The Monte Carlo Method (Oxford: Pergamon).
Sobol’, I.M. 1974, The Monte Carlo Method (Chicago: University of Chicago Press).
Kalos, M.H., and Whitlock, P.A. 1986, Monte Carlo Methods (New York: Wiley).

7.7 Quasi- (that is, Sub-) Random Sequences

We have just seen that choosing N points uniformly randomly in an n-dimensional space leads to an error term in Monte Carlo integration that decreases as $1/\sqrt{N}$. In essence, each new point sampled adds linearly to an accumulated sum that will become the function average, and also linearly to an accumulated sum of squares that will become the variance (equation 7.6.2). The estimated error comes from the square root of this variance, hence the power $N^{-1/2}$.
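In code, this bookkeeping is just two running totals per integrand. The following minimal sketch (not from the book: it uses the C library rand() in place of ran2, and the integrand $f(x)=x^2$ on $[0,1]$ is an arbitrary choice) forms the estimate and the one-standard-deviation error of equation (7.6.2):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    long j, n = 1000000;                 /* number of sample points */
    double x, f, sum = 0.0, sumsq = 0.0;

    srand(12345);
    for (j = 0; j < n; j++) {
        x = rand() / (double)RAND_MAX;   /* uniform point in [0,1] */
        f = x * x;                       /* function value at the point */
        sum += f;                        /* accumulates <f> */
        sumsq += f * f;                  /* accumulates <f^2> */
    }
    /* estimate = V<f>; error = V sqrt((<f^2>-<f>^2)/N), with volume V = 1 here */
    printf("integral = %f +/- %f (exact value 1/3)\n",
           sum / n, sqrt((sumsq / n - (sum / n) * (sum / n)) / n));
    return 0;
}

Doubling the accuracy thus requires quadrupling n, which is the $N^{-1/2}$ behavior just described.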
Just because this square root convergence is familiar does not, however, mean that it is inevitable. A simple counterexample is to choose sample points that lie on a Cartesian grid, and to sample each grid point exactly once (in whatever order). The Monte Carlo method thus becomes a deterministic quadrature scheme (albeit a simple one) whose fractional error decreases at least as fast as $N^{-1}$ (even faster if the function goes to zero smoothly at the boundaries of the sampled region, or is periodic in the region).
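For concreteness, here is the same average-of-samples estimator with the random points replaced by an M by M Cartesian grid of cell centers; a minimal sketch, not from the book, with an arbitrary integrand on the unit square:

#include <stdio.h>

int main(void)
{
    int i, k, m = 1000;                  /* N = m*m grid points, fixed in advance */
    double x, y, sum = 0.0;

    for (i = 0; i < m; i++)
        for (k = 0; k < m; k++) {
            x = (i + 0.5) / m;           /* cell-center coordinates in [0,1]^2 */
            y = (k + 0.5) / m;
            sum += x * x + y * y;        /* integrand f(x,y) = x^2 + y^2 */
        }
    /* volume (here 1) times the average of f over all N samples */
    printf("integral = %f (exact value 2/3)\n", sum / ((double)m * m));
    return 0;
}

Note that m, and hence N, must be chosen before the first sample is taken, which is exactly the drawback discussed next.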
The trouble with a grid is that one has to decide in advance how fine it should be. One is then committed to completing all of its sample points. With a grid, it is not convenient to "sample until" some convergence or termination criterion is met. One might ask if there is not some intermediate scheme, some way to pick sample points "at random," yet spread out in some self-avoiding way, avoiding the chance clustering that occurs with uniformly random points.

A similar question arises for tasks other than Monte Carlo integration. We might want to search an n-dimensional space for a point where some (locally computable) condition holds. Of course, for the task to be computationally meaningful, there had better be continuity, so that the desired condition will hold in some finite n-dimensional neighborhood. We may not know a priori how large that neighborhood is, however. We want to "sample until" the desired point is found, moving smoothly to finer scales with increasing samples. Is there any way to do this that is better than uncorrelated, random samples?
The answer to the above question is "yes." Sequences of n-tuples that fill n-space more uniformly than uncorrelated random points are called quasi-random sequences. That term is somewhat of a misnomer, since there is nothing "random" about quasi-random sequences: They are cleverly crafted to be, in fact, sub-random. The sample points in a quasi-random sequence are, in a precise sense, "maximally avoiding" of each other.

A conceptually simple example is Halton's sequence [1]. In one dimension, the $j$th number $H_j$ in the sequence is obtained by the following steps: (i) Write j as a number in base b, where b is some prime.
(For example j = 17 in base b = 3 is122.) (ii) Reverse the digits and put a radix point (i.e., a decimal point base b) inSample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5)Copyright (C) 1988-1992 by Cambridge University Press.Programs Copyright (C) 1988-1992 by Numerical Recipes Software.Permission is granted for internet users to make one paper copy for their own personal use.















