Math. Meth. Oper. Res. (2006) 64: 353–362
DOI 10.1007/s00186-006-0082-4

ORIGINAL ARTICLE

Asymptotic convergence of a simulated annealing algorithm for multiobjective optimization problems

Mario Villalobos-Arias · Carlos A. Coello Coello · Onésimo Hernández-Lerma

Received: 18 July 2005 / Accepted: 15 May 2006 / Published online: 21 July 2006
© Springer-Verlag 2006

M. Villalobos-Arias · O. Hernández-Lerma
Departamento de Matemáticas, CINVESTAV-IPN, Apartado Postal 14-740, México D.F. 07000, Mexico
E-mail: mava@math.cinvestav.mx
E-mail: ohernand@math.cinvestav.mx

C. A. Coello Coello (corresponding author)
Sección de Computación, Depto. de Ingeniería Eléctrica, CINVESTAV-IPN, Av. IPN No. 2508, Col. San Pedro Zacatenco, México D.F. 07300, Mexico
E-mail: ccoello@cs.cinvestav.mx
Tel.: +52-55-50613800, Fax: +52-55-50613757

Abstract In this paper we consider a simulated annealing algorithm for multiobjective optimization problems. With a suitable choice of the acceptance probabilities, the algorithm is shown to converge asymptotically, that is, the Markov chain that describes the algorithm converges with probability one to the Pareto optimal set.

Keywords Simulated annealing · Multiple objective programming · Multiobjective optimization · Vector optimization · Convergence

Mathematics Subject Classification 90C29 · 90C59 · 68T20

1 Introduction

This paper concerns multiobjective optimization problems in which one wishes to optimize a vector function, say $F(x) = (f_1(x), \ldots, f_d(x))$.

A typical way to approach these problems is to transform the multiobjective optimization problem into a series of single-objective (or "scalar") problems. This approach indeed makes sense if the functions $f_1, \ldots, f_d$ are of the same type, but otherwise (for instance, if $f_1$ denotes distance, $f_2$ denotes time, and so on) the scalarized problem might be meaningless. For such cases, one can try a direct approach to the multiobjective optimization problem, such as evolutionary algorithms, simulated annealing, and other related heuristics (Coello Coello et al. 2002).

Although there have been some convergence proofs for multiobjective evolutionary algorithms (see Rudolph 1998; Rudolph and Agapie 2000), most heuristics used for multiobjective optimization have no such convergence proof reported in the literature. This paper intends to bridge this gap for a class of simulated annealing algorithms.

Here, we consider a simulated annealing algorithm (SAA) for solving multiobjective optimization problems.
Under mild assumptions and a suitable choice of the acceptance probabilities, our SAA is shown to converge asymptotically (with probability one) to the Pareto optimal set of the problem.

The remainder of this paper is organized as follows. The multiobjective optimization problem (MOP) is stated in Section 2. In Section 3 we introduce the SAA we are concerned with; we also briefly discuss the algorithm's acceptance probabilities, which are crucial for proving asymptotic convergence. Our main result is stated in Section 4. Finally, our conclusions are provided in Section 5, together with some general remarks.

2 The multiobjective optimization problem

To compare vectors in $\mathbb{R}^d$ we will use the standard Pareto order on $\mathbb{R}^d$, defined as follows. If $u = (u_1, u_2, \ldots, u_d)$ and $v = (v_1, v_2, \ldots, v_d)$ are vectors in $\mathbb{R}^d$, then

$$u \preceq v \iff u_i \le v_i \quad \forall i \in \{1, \ldots, d\}.$$

This relation is a partial order because it is reflexive, antisymmetric and transitive. We also write $u \prec v \iff u \preceq v$ and $u \neq v$.

The MOP we are concerned with is to find a vector $x^* \in X \subset \mathbb{R}^m$ such that

$$F(x^*) = \min_{x \in X} F(x) = \min_{x \in X} \,[f_1(x), \ldots, f_d(x)], \qquad (1)$$

where $F : X \subset \mathbb{R}^m \to \mathbb{R}^d$ is a given vector function with components $f_i : X \subset \mathbb{R}^m \to \mathbb{R}$ for each $i \in \{1, \ldots, d\}$, and the minimum is understood in the sense of the Pareto order.

Definition 1 (Pareto optimality) A vector $x^* \in X$ is called a Pareto optimal solution for the MOP (1) if there is no $x \in X$ such that $F(x) \prec F(x^*)$. The set $P^* = \{x \in X : x \text{ is a Pareto optimal solution}\}$ is called the Pareto optimal set, and its image under $F$, i.e.

$$F(P^*) := \{F(x) : x \in P^*\},$$

is the Pareto front.
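To make the Pareto order and Definition 1 concrete, here is a minimal Python sketch (ours, not part of the paper; the helper names dominates and pareto_optimal_set, as well as the example set S and function F, are purely illustrative) that checks dominance of objective vectors and extracts the Pareto optimal set of a finite feasible set:

```python
# Minimal sketch (not from the paper): Pareto order and Pareto optimal set
# for a finite feasible set, with F(x) = (f_1(x), ..., f_d(x)).

def dominates(u, v):
    """True if u strictly dominates v: u <= v componentwise and u != v."""
    return all(ui <= vi for ui, vi in zip(u, v)) and any(ui < vi for ui, vi in zip(u, v))

def pareto_optimal_set(S, F):
    """Pareto optimal elements of the finite set S under the vector function F."""
    return [x for x in S if not any(dominates(F(y), F(x)) for y in S if y is not x)]

# Illustrative example with d = 2 objectives on a small finite set S.
S = [(0.0,), (1.0,), (2.0,), (3.0,)]
F = lambda x: (x[0], (x[0] - 2.0) ** 2)   # f_1(x) = x, f_2(x) = (x - 2)^2
print(pareto_optimal_set(S, F))           # points whose images are non-dominated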
In the sequel we will use the following well-known "scalarization" result.

Proposition 1 Let $x^* \in X$ be a solution of the weighted problem

$$\min_{x \in X} \sum_{s=1}^{d} w_s f_s(x), \quad \text{where } w_s > 0 \ \forall s \in \{1, \ldots, d\} \ \text{and} \ \sum_{s=1}^{d} w_s = 1.$$

Then $x^* \in P^*$.

We omit the proof of this proposition because it is immediate: if some $x \in X$ satisfied $F(x) \prec F(x^*)$, then, since all the weights are strictly positive, the weighted sum at $x$ would be strictly smaller than at $x^*$, contradicting the optimality of $x^*$.

As we are concerned with computational aspects, in the remainder of the paper we will replace the set $X$ in (1) with a finite set $S \subset \mathbb{R}^m$.
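Proposition 1 can also be checked numerically on a finite set. The sketch below is our own illustration (it assumes the S, F and pareto_optimal_set of the previous snippet): a weighted-sum minimizer with strictly positive weights always lands in the Pareto optimal set.

```python
# Sketch: Proposition 1 on a finite set S -- a weighted-sum minimizer
# with strictly positive weights is Pareto optimal.

def weighted_sum_minimizer(S, F, w):
    """Return an element of S minimizing sum_s w_s * f_s(x)."""
    return min(S, key=lambda x: sum(ws * fs for ws, fs in zip(w, F(x))))

w = (0.3, 0.7)                                # w_s > 0, summing to 1
x_star = weighted_sum_minimizer(S, F, w)      # S and F as in the previous sketch
assert x_star in pareto_optimal_set(S, F)     # consequence of Proposition 1
```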
3 The simulated annealing algorithm

Metropolis et al. (1953) originally proposed an algorithm to simulate the evolution of a solid in a heat bath until it reached its thermal equilibrium. The process started from a certain thermodynamic state of the system, defined by a certain energy and temperature. Then the state was slightly perturbed. If the change in energy produced by this perturbation was negative, the new configuration was accepted; if it was positive, it was accepted with a certain probability. This process was repeated until a frozen state was achieved (Dowsland 1993; Sait and Youssef 1999).

About 30 years after the publication of Metropolis' approach, Kirkpatrick (1983) and Černý (1985) independently pointed out the analogy between this "annealing" process and combinatorial optimization. Such analogies led to the development of an algorithm called "simulated annealing", a heuristic search technique that has been quite successful in combinatorial optimization problems (see Aarts and Korst 1989; Laarhoven and Aarts 1987 for details).

The SAA generates a succession of possible solutions of the optimization problem. These possible solutions are the states of a Markov chain, and the "energy" of a state is the evaluation of the possible solution that it represents. The temperature is simulated with a sequence of positive control parameters $c_k$.

A transition of the Markov chain occurs in two steps, given the value $c_k$ of the control parameter. First, if the current state is $i$, a new state $j$ is generated with a certain probability $G_{ij}(c_k)$, defined below. Then an "acceptance rule" $A_{ij}(c_k)$ is applied to $j$.
Our main result hinges on a suitable selection of the acceptance rule, which we now discuss.

The generation probability. For each state $i$, let $S_i$ be a subset of $S \setminus \{i\}$ called the neighborhood of $i$. We shall assume that the number of elements of $S_i$ is the same, say $\Theta$, for all $i \in S$, and also that the neighbor relation is symmetric, that is, $j \in S_i$ if and only if $i \in S_j$. Then, denoting by $\chi_{S_i}$ the indicator function of $S_i$ (i.e. $\chi_{S_i}(j) := 1$ if $j \in S_i$ and $0$ otherwise), we define the generation probability

$$G_{ij}(c_k) := \frac{\chi_{S_i}(j)}{\Theta} \quad \text{for all } i, j \in S. \qquad (2)$$
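As a concrete instance of the generation step (2), the following sketch (our illustration, not the paper's construction) places the states on a ring so that every state has the same number of neighbors and the neighbor relation is symmetric, and then draws the proposal $j$ uniformly from $S_i$:

```python
import random

# Sketch of the generation step (2): each state has the same number of
# neighbors (2 * RADIUS on a ring), the relation is symmetric, and a
# proposal j is drawn uniformly from the neighborhood S_i of the current i.

RADIUS = 1
N_STATES = 10   # |S|; states are 0, ..., N_STATES - 1 (illustrative choice)

def neighborhood(i):
    """S_i: states within RADIUS of i on the ring (symmetric, constant size)."""
    return [(i + off) % N_STATES for off in range(-RADIUS, RADIUS + 1) if off != 0]

def generate(i):
    """Draw j with probability G_ij = chi_{S_i}(j) / |S_i|."""
    return random.choice(neighborhood(i))
```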
The acceptance probability. The acceptance probability, which is crucial for the behavior of the SAA, can be defined in several different ways. For instance, Serafini (1994) proposes a rule based on the $L_\infty$–Tchebycheff norm, given by

$$A_{ij}(c) = \min\left\{1,\ \exp\left(\max_{s \in \{1,\ldots,d\}} \frac{\lambda_s \,(f_s(i) - f_s(j))}{c}\right)\right\},$$

where the $\lambda_s$ are given positive parameters. This acceptance probability has the "inconvenience" that if a single entry improves (i.e. $f_s(i) > f_s(j)$ for some $s$) or keeps the same value, then the state $j$ is accepted, which is clearly undesirable. For example, in the situation of Fig. 1, in which $f_1(j) = f_1(i)$, we have $A_{ij} = 1$ even though $f_2(j)$ is much worse than $f_2(i)$.

Fig. 1 Graphical illustration of the "inconvenience" of the acceptance probability $A_{ij}(c)$ proposed by Serafini (1994)

On the other hand, Ulungu and coworkers (Ulungu 1993; Ulungu et al. 1995, 1998, 1999) use

$$A_{ij}(c) := \min\left\{1,\ \exp\left(\sum_{s=1}^{d} \frac{\lambda_s \,(f_s(i) - f_s(j))}{c}\right)\right\} = \exp\left(-\frac{\left[\sum_{s=1}^{d} \lambda_s \,(f_s(j) - f_s(i))\right]^+}{c}\right), \qquad (3)$$

where, as usual, $a^+$ denotes the positive part of a number $a \in \mathbb{R}$, namely

$$a^+ := \begin{cases} a & \text{if } a > 0, \\ 0 & \text{otherwise.} \end{cases}$$

But again, in this case it is possible to have the acceptance probability driven by the change in a single entry $f_s(i) - f_s(j)$, $s \in \{1, \ldots, d\}$.
Here we shall use the acceptance probability (Serafini 1994)

$$A_{ij}(c) := \prod_{s=1}^{d} \min\left\{1,\ \exp\left(\frac{f_s(i) - f_s(j)}{c}\right)\right\},$$

which can be expressed in the simpler form

$$A_{ij}(c) = \exp\left(-\frac{\sum_{s=1}^{d} (f_s(j) - f_s(i))^+}{c}\right). \qquad (4)$$

This acceptance probability is "better" than the one in (3) because only the entries that do not improve are taken into account to calculate the probability; it could be improved further by replacing $c$ with an individual $c_s$ for each entry $s = 1, \ldots, d$.
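The following sketch (our code, with hypothetical function names) implements the acceptance rule (4) in its simpler exponential form, together with the weighted-sum rule (3) for comparison. In the example, rule (3) accepts the move with probability one because the large improvement in $f_1$ hides the deterioration of $f_2$, while rule (4) penalizes the deterioration:

```python
import math

# Sketch: the two acceptance probabilities discussed above.

def positive_part(a):
    return a if a > 0 else 0.0

def accept_sum(Fi, Fj, c, lam):
    """Rule (3): positive part of the weighted sum of the changes."""
    delta = sum(l * (fj - fi) for l, fi, fj in zip(lam, Fi, Fj))
    return math.exp(-positive_part(delta) / c)

def accept_entrywise(Fi, Fj, c):
    """Rule (4): only the entries that get worse contribute to the exponent."""
    return math.exp(-sum(positive_part(fj - fi) for fi, fj in zip(Fi, Fj)) / c)

# Example: moving from i to j improves f_1 a lot but worsens f_2.
Fi, Fj, c = (5.0, 1.0), (0.0, 2.0), 1.0
print(accept_sum(Fi, Fj, c, lam=(1.0, 1.0)))  # 1.0: the improvement hides the loss
print(accept_entrywise(Fi, Fj, c))            # exp(-1): the worsening of f_2 counts
```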
The transition probability. Having the generation and the acceptance probabilities, we can now define the transition probability from $i$ to $j$ as

$$P_{ij}(c_k) := \begin{cases} G_{ij}(c_k)\, A_{ij}(c_k) & \text{if } j \neq i, \\ 1 - \sum_{l \in S,\, l \neq i} P_{il}(c_k) & \text{if } j = i, \end{cases} \qquad (5)$$

where $A_{ij}$ is as in (4) [or as in (3)].

Note that for theoretical purposes we can use $f_s(i) - f_s(j)$ instead of $\lambda_s (f_s(i) - f_s(j))$ or $(f_s(i) - f_s(j))/c_s$, because the last two expressions can be transformed into the first one via the changes of variable $g_s = \lambda_s f_s$ or $g_s = f_s / c_s$, respectively. Hence, in the remainder of this work we will use the first one.
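Putting (2), (4) and (5) together, one transition of the chain can be sketched as follows. This is our illustrative code, reusing the generate and accept_entrywise helpers from the previous sketches; the geometric cooling schedule is only a placeholder, not the schedule analyzed in the paper.

```python
# Sketch of one SAA transition at control parameter c_k, combining the
# generation step (2) with the acceptance rule (4); staying at i with the
# residual probability in (5) happens implicitly when the proposal is rejected.

def saa_step(i, F, c_k):
    j = generate(i)                              # proposal with probability G_ij(c_k)
    if random.random() < accept_entrywise(F(i), F(j), c_k):
        return j                                 # accepted with probability A_ij(c_k)
    return i                                     # rejected: the chain stays at i

def saa_run(x0, F, schedule):
    """Run the chain along a given sequence of control parameters c_k."""
    x = x0
    for c_k in schedule:
        x = saa_step(x, F, c_k)
    return x

# Illustrative example: the ring states from the generation sketch, a
# two-objective F on states, and a simple geometric cooling schedule.
F_state = lambda i: (abs(i - 3), abs(i - 6))
schedule = [1.0 * (0.95 ** k) for k in range(200)]
print(saa_run(0, F_state, schedule))
```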
4 Main result

Let

$$\text{opt} := \left\{ i \in S : \sum_{s=1}^{d} f_s(i) = m \right\}, \quad \text{where} \quad m := \min_{j \in S} \sum_{s=1}^{d} f_s(j). \qquad (6)$$

Then, by Proposition 1, the Pareto optimal set $P^*$ contains opt, i.e.

$$\text{opt} \subset P^*. \qquad (7)$$

We next present our main result, which in particular states the convergence of the SAA for the MOP (1). The convergence is understood in the following sense.

Definition 2 Let $P(c)$ be the transition matrix associated with the SAA defined by (2), (4), (5), and let $\{X_k(c),\ k = 0, 1, 2, \ldots$