J Glob Optim
DOI 10.1007/s10898-006-9107-z

ORIGINAL PAPER

Simulated annealing with asymptotic convergence for nonlinear constrained optimization

Benjamin W. Wah · Yixin Chen · Tao Wang

Received: 20 August 2005 / Accepted: 16 October 2006
© Springer Science+Business Media B.V. 2006

Abstract  In this paper, we present constrained simulated annealing (CSA), an algorithm that extends conventional simulated annealing to look for constrained local minima of nonlinear constrained optimization problems.
The algorithm is based on the theory of extended saddle points (ESPs) that shows the one-to-one correspondence between a constrained local minimum and an ESP of the corresponding penalty function. CSA finds ESPs by systematically controlling probabilistic descents in the problem-variable subspace of the penalty function and probabilistic ascents in the penalty subspace.
Based on the decomposition of the necessary and sufficient ESP condition into multiple necessary conditions, we present constraint-partitioned simulated annealing (CPSA) that exploits the locality of constraints in nonlinear optimization problems. CPSA leads to much lower complexity as compared to that of CSA by partitioning the constraints of a problem into significantly simpler subproblems, solving each independently, and resolving those violated global constraints across the subproblems.
We prove that both CSA and CPSA asymptotically converge to a constrained global minimum with probability one in discrete optimization problems. The result extends conventional simulated annealing (SA), which guarantees asymptotic convergence in discrete unconstrained optimization, to that in discrete constrained optimization. Moreover, it establishes the condition under which optimal solutions can be found in constraint-partitioned nonlinear optimization problems. Finally, we evaluate CSA and CPSA by applying them to solve some continuous constrained optimization benchmarks and compare their performance to that of other penalty methods.

B. W. Wah (B)
Department of Electrical and Computer Engineering and the Coordinated Science Laboratory, University of Illinois, Urbana-Champaign, Urbana, IL 61801, USA
e-mail: wah@uiuc.edu

Y. Chen
Department of Computer Science, Washington University, St. Louis, MO 63130, USA

T. Wang
Synopsys Inc., 700 East Middlefield Road, Mountain View, CA 94043, USA

Keywords  Asymptotic convergence · Constrained local minimum · Constraint partitioning · Simulated annealing · Dynamic penalty methods · Extended saddle points · Nonlinear constrained optimization

1 Problem definition

A general mixed-integer nonlinear programming problem (MINLP) is formulated as follows:

    (Pm):  min_z f(z)                                    (1)
           subject to h(z) = 0 and g(z) ≤ 0,

where z = (x, y)^T ∈ Z; x ∈ R^v and y ∈ D^w are, respectively, bounded continuous and discrete variables; f(z) is a lower-bounded objective function; g(z) = (g1(z), . . . , gr(z))^T is a vector of r inequality constraint functions;^1 and h(z) = (h1(z), . . . , hm(z))^T is a vector of m equality constraint functions. Functions f(z), g(z), and h(z) are general functions that can be discontinuous, non-differentiable, and not in closed form.

Without loss of generality, we present our results with respect to minimization problems, knowing that maximization problems can be converted to minimization ones by negating their objectives. Because there is no closed-form solution to Pm, we develop in this paper efficient procedures for finding locally optimal and feasible solutions to Pm, demonstrate that our procedures can lead to better solutions than existing methods, and prove that our procedures have well-behaved convergence properties. We first define the following basic terms.

Definition 1  A mixed neighborhood N^m(z) for z = (x, y)^T in the mixed space R^v × D^w is:

    N^m(z) = { (x', y)^T | x' ∈ N^c(x) } ∪ { (x, y')^T | y' ∈ N^d(y) },      (2)

where N^c(x) = { x' : ‖x' − x‖ ≤ ε and ε → 0 } is the continuous neighborhood of x, and the discrete neighborhood N^d(y) is a finite user-defined set of points { y' ∈ D^w } such that y' ∈ N^d(y) ⟺ y ∈ N^d(y') [1].
Here, ε → 0 means that ε is arbitrarily close to 0.

Definition 2  Point z of Pm is a feasible point iff h(z) = 0 and g(z) ≤ 0.

Definition 3  Point z* is a constrained local minimum (CLMm) of Pm iff z* is feasible, and f(z*) ≤ f(z) with respect to all feasible z ∈ N^m(z*).

Definition 4  Point z* is a constrained global minimum (CGMm) of Pm iff z* is feasible, and f(z*) ≤ f(z) for every feasible z ∈ Z. The set of all CGMm of Pm is Zopt.

^1 Given two vectors V1 and V2 of the same dimension, V1 ≥ V2 means that each element of V1 is greater than or equal to the corresponding element of V2; V1 > V2 means that at least one element of V1 is greater than the corresponding element of V2 and the other elements are greater than or equal to the corresponding elements of V2.

Note that a discrete neighborhood is a user-defined concept because it does not have any generally accepted definition.
Hence, it is possible for z = (x, y)^T to be a CLMm to a neighborhood N1^d(y) but not to another neighborhood N2^d(y). The choice, however, does not affect the validity of a search as long as one definition is consistently used throughout. Normally, one may choose N^d(y) to include discrete points closest to z, although a search will also be correct if the neighborhood includes "distant" points.

Finding a CLMm of Pm is often challenging. First, f(z), g(z), and h(z) may be nonconvex and highly nonlinear, making it difficult to even find a feasible point or a feasible region.
Moreover, it is not always useful to keep a search within a feasible region because there may be multiple disconnected feasible regions. To find high-quality solutions, a search may have to move from one feasible region to another. Second, f(z), g(z), and h(z) may be discontinuous or may not be differentiable, rendering it impossible to apply existing theories based on gradients.

A popular method for solving Pm is the penalty method (Sect. 2.1). It transforms Pm into an unconstrained penalty function and finds suitable penalties in such a way that a global minimum of the penalty function corresponds to a CGMm of Pm. Because it is computationally intractable to look for global minima when the penalty function is highly nonlinear, penalty methods are only effective for finding CGMm in special cases.

This paper is based on the theory of extended saddle points (ESPs) in mixed space [27,30] (Sect. 2.2), which shows the one-to-one correspondence between a CLMm of Pm and an ESP of the corresponding penalty function. The necessary and sufficient condition allows us to find a CLMm of Pm by looking for an ESP of the corresponding penalty function.

One way to look for those ESPs is to minimize the penalty function while gradually increasing its penalties until they are larger than some thresholds. The approach is not sufficient because it also generates stationary points of the penalty function that are not CLMm of Pm.
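To make the penalty idea concrete, the following is a minimal sketch of a penalty formulation for Pm, together with the feasibility test of Definition 2. The l1-style penalty, the function names, and the toy instance are illustrative assumptions, not the paper's exact formulation; the ESP theory uses its own penalty function defined in Sect. 2.2.

```python
# A minimal sketch of a penalty formulation for the problem Pm:
#   minimize f(z) subject to h(z) = 0 and g(z) <= 0.
# The l1-style penalty and all names here are illustrative assumptions;
# the paper's ESP theory uses its own penalty function (Sect. 2.2).

def penalty(f, h, g, z, alpha, beta):
    """L(z, alpha, beta) = f(z) + sum_i alpha_i*|h_i(z)| + sum_j beta_j*max(0, g_j(z))."""
    return (f(z)
            + sum(a * abs(hi) for a, hi in zip(alpha, h(z)))
            + sum(b * max(0.0, gj) for b, gj in zip(beta, g(z))))

def is_feasible(h, g, z, tol=1e-8):
    """Feasibility per Definition 2: h(z) = 0 and g(z) <= 0 (within tolerance)."""
    return all(abs(hi) <= tol for hi in h(z)) and all(gj <= tol for gj in g(z))

# Toy instance: minimize f(z) = z0^2 + z1^2
# subject to h(z) = z0 + z1 - 1 = 0 and g(z) = -z0 <= 0.
f = lambda z: z[0] ** 2 + z[1] ** 2
h = lambda z: [z[0] + z[1] - 1.0]
g = lambda z: [-z[0]]

print(is_feasible(h, g, [0.5, 0.5]))                 # a feasible point
print(penalty(f, h, g, [0.0, 0.0], [10.0], [1.0]))   # an infeasible point pays a penalty
```

For sufficiently large penalties, unconstrained minima of L coincide with constrained minima of the original problem on this toy instance; the difficulty the paper addresses is that finding such minima of a highly nonlinear L is itself intractable in general.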
To avoid those undesirable stationary points, it is possible to restart the search when such stationary points are reached, or to periodically decrease the penalties in order for the search to escape from such local traps. However, this simple greedy approach for updating penalties may not always work well across different problems.

Our goals in this paper are to design efficient methods for finding ESPs of a penalty formulation of Pm and to prove their convergence properties. We have made three contributions in this paper.

First, we propose in Sect. 3.1 a constrained simulated annealing algorithm (CSA), an extension of conventional simulated annealing (SA) [19], for solving Pm. In addition to probabilistic descents in the problem-variable subspace as in SA, CSA does probabilistic ascents in the penalty subspace, using a method that controls descents and ascents in a unified fashion. Because CSA is sample-based, it is inefficient for solving large problems. To this end, we propose in Sect. 3.2 a constraint-partitioned simulated annealing algorithm (CPSA). By exploiting the locality of constraints in many constrained optimization problems, CPSA partitions Pm into multiple loosely coupled subproblems that are related by very few global constraints, solves each subproblem independently, and iteratively resolves the inconsistent global constraints.

Second, we prove in Sect. 4 the asymptotic convergence of CSA and CPSA to a CGM with probability one in discrete constrained optimization problems, under a specific temperature schedule.
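The coupling of probabilistic descents in the variable subspace with probabilistic ascents in the penalty subspace can be sketched on a one-variable toy problem as follows. Everything below (the trial-move design, the geometric cooling, and all names) is an assumed simplification for illustration, not the CSA algorithm as specified in Sect. 3.1, which prescribes a particular neighborhood, acceptance probability, and cooling schedule.

```python
import math
import random

# Simplified sketch of the CSA idea on a toy one-variable problem:
#   minimize f(z) = z^2 subject to h(z) = z - 1 = 0,
# using the penalty function L(z, alpha) = z^2 + alpha*|z - 1|.
# The move design and cooling schedule are illustrative assumptions.

def L(z, alpha):
    """Penalty function: CSA seeks descents in z and ascents in alpha."""
    return z ** 2 + alpha * abs(z - 1.0)

def csa_sketch(T0=1.0, cooling=0.99, steps=20000, seed=0):
    rng = random.Random(seed)
    z, alpha, T = 0.0, 1.0, T0
    for _ in range(steps):
        if rng.random() < 0.5:
            # Variable-subspace move: accept downhill moves in L outright,
            # and uphill moves with Metropolis probability exp(-dL/T).
            z_new = z + rng.uniform(-0.5, 0.5)
            dL = L(z_new, alpha) - L(z, alpha)
            if dL <= 0 or rng.random() < math.exp(-dL / T):
                z = z_new
        else:
            # Penalty-subspace move: accept moves that *increase* L (ascents)
            # outright, and decreases with Metropolis probability exp(dL/T).
            a_new = max(0.0, alpha + rng.uniform(-0.5, 0.5))
            dL = L(z, a_new) - L(z, alpha)
            if dL >= 0 or rng.random() < math.exp(dL / T):
                alpha = a_new
        T *= cooling  # geometric cooling; the convergence proof needs a specific schedule
    return z, alpha

z, alpha = csa_sketch()
print(round(z, 2))  # ends near the constrained minimum z = 1
```

The ascents in alpha raise the cost of constraint violation until descents in z are driven toward the feasible point, which is the saddle-point behavior the paper formalizes through ESPs.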