c10-9 (Numerical Recipes in C)
Chapter 10. Minimization or Maximization of Functions

10.9 Simulated Annealing Methods

The method of simulated annealing [1,2] is a technique that has attracted significant attention as suitable for optimization problems of large scale, especially ones where a desired global extremum is hidden among many, poorer, local extrema. For practical purposes, simulated annealing has effectively “solved” the famous traveling salesman problem of finding the shortest cyclical itinerary for a traveling salesman who must visit each of N cities in turn. (Other practical methods have also been found.) The method has also been used successfully for designing complex integrated circuits: The arrangement of several hundred thousand circuit elements on a tiny silicon substrate is optimized so as to minimize interference among their connecting wires [3,4].
Surprisingly, the implementation of the algorithm is relatively simple. Notice that the two applications cited are both examples of combinatorial minimization. There is an objective function to be minimized, as usual; but the space over which that function is defined is not simply the N-dimensional space of N continuously variable parameters.
Rather, it is a discrete, but very large, configuration space, like the set of possible orders of cities, or the set of possible allocations of silicon “real estate” blocks to circuit elements. The number of elements in the configuration space is factorially large, so that they cannot be explored exhaustively. Furthermore, since the set is discrete, we are deprived of any notion of “continuing downhill in a favorable direction.” The concept of “direction” may not have any meaning in the configuration space.

Below, we will also discuss how to use simulated annealing methods for spaces with continuous control parameters, like those of §§10.4–10.7.
This application is actually more complicated than the combinatorial one, since the familiar problem of “long, narrow valleys” again asserts itself. Simulated annealing, as we will see, tries “random” steps; but in a long, narrow valley, almost all random steps are uphill! Some additional finesse is therefore required.

At the heart of the method of simulated annealing is an analogy with thermodynamics, specifically with the way that liquids freeze and crystallize, or metals cool and anneal. At high temperatures, the molecules of a liquid move freely with respect to one another. If the liquid is cooled slowly, thermal mobility is lost. The atoms are often able to line themselves up and form a pure crystal that is completely ordered over a distance up to billions of times the size of an individual atom in all directions. This crystal is the state of minimum energy for this system. The amazing fact is that, for slowly cooled systems, nature is able to find this minimum energy state. In fact, if a liquid metal is cooled quickly or “quenched,” it does not reach this state but rather ends up in a polycrystalline or amorphous state having somewhat higher energy. So the essence of the process is slow cooling, allowing ample time for redistribution of the atoms as they lose mobility.
This is the technical definition of annealing, and it is essential for ensuring that a low energy state will be achieved.

[Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books, diskettes, or CDROMs visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to trade@cup.cam.ac.uk (outside North America).]

Although the analogy is not perfect, there is a sense in which all of the minimization algorithms thus far in this chapter correspond to rapid cooling or quenching. In all cases, we have gone greedily for the quick, nearby solution: From the starting point, go immediately downhill as far as you can go. This, as often remarked above, leads to a local, but not necessarily a global, minimum. Nature’s own minimization algorithm is based on quite a different procedure. The so-called Boltzmann probability distribution,

Prob(E) ∼ exp(−E/kT)    (10.9.1)

expresses the idea that a system in thermal equilibrium at temperature T has its energy probabilistically distributed among all different energy states E. Even at low temperature, there is a chance, albeit very small, of a system being in a high energy state. Therefore, there is a corresponding chance for the system to get out of a local energy minimum in favor of finding a better, more global, one. The quantity k (Boltzmann’s constant) is a constant of nature that relates temperature to energy. In other words, the system sometimes goes uphill as well as downhill; but the lower the temperature, the less likely is any significant uphill excursion.

In 1953, Metropolis and coworkers [5] first incorporated these kinds of principles into numerical calculations. Offered a succession of options, a simulated thermodynamic system was assumed to change its configuration from energy E1 to energy E2 with probability p = exp[−(E2 − E1)/kT]. Notice that if E2 < E1, this probability is greater than unity; in such cases the change is arbitrarily assigned a probability p = 1, i.e., the system always took such an option. This general scheme, of always taking a downhill step while sometimes taking an uphill step, has come to be known as the Metropolis algorithm.

To make use of the Metropolis algorithm for other than thermodynamic systems, one must provide the following elements:
1. A description of possible system configurations.
2. A generator of random changes in the configuration; these changes are the “options” presented to the system.
3. An objective function E (analog of energy) whose minimization is the goal of the procedure.
4. A control parameter T (analog of temperature) and an annealing schedule which tells how it is lowered from high to low values, e.g., after how many random changes in configuration is each downward step in T taken, and how large is that step. The meaning of “high” and “low” in this context, and the assignment of a schedule, may require physical insight and/or trial-and-error experiments.

Combinatorial Minimization: The Traveling Salesman

A concrete illustration is provided by the traveling salesman problem. The proverbial seller visits N cities with given positions (x_i, y_i), returning finally to his or her city of origin. Each city is to be visited only once, and the route is to be made as short as possible. This problem belongs to a class known as NP-complete problems, whose computation time for an exact solution increases with N as exp(const. × N), becoming rapidly prohibitive in cost as N increases. The traveling salesman problem also belongs to a class of minimization problems for which the objective function E is, in the simplest form of the problem, the total length of the journey,

E = L ≡ Σ_{i=1}^{N} √[ (x_i − x_{i+1})² + (y_i − y_{i+1})² ]    (10.9.2)

with the convention that point N + 1 is identified with point 1.
To illustrate the flexibility of the method, however, we can add the following additional wrinkle: Suppose that the salesman has an irrational fear of flying over the Mississippi River. In that case, we would assign each city a parameter µ_i, equal to +1 if it is east of the Mississippi, −1 if it is west, and take the objective function to be

E = Σ_{i=1}^{N} [ √( (x_i − x_{i+1})² + (y_i − y_{i+1})² ) + λ(µ_i − µ_{i+1})² ]    (10.9.3)

A penalty 4λ is thereby assigned to any river crossing. The algorithm now finds the shortest path that avoids crossings.