This means that the CLMs whose penalties do not exist in (2.22) cannot be found by looking for local minima of (2.22). Moreover, the Lagrange methods and ℓ1-penalty methods work for continuous and differentiable problems only. To cope with these shortcomings, in the next chapter we propose an exact local-optimal penalty method based on an ℓ1-penalty function and prove that a constrained local minimum of a general MINLP always corresponds to a saddle point of the corresponding ℓ1-penalty function.

2.2 Existing Partitioning Methods for Mathematical Programming

Partitioning has been used in many existing methods for solving NLPs. Partitioning methods can be classified as subspace partitioning methods and constraint partitioning methods. We illustrate a comparison of these two approaches in Figure 2.2.

Subspace partitioning decomposes a problem by partitioning its variable space into subsets and by examining the subspaces one at a time until the problem is solved (Figure 2.2a). Although pruning and ordering strategies can make the search more efficient by not requiring the search of every subspace, the complexity of searching each subspace is very similar to that of the original.
In contrast, constraint partitioning decomposes the constraints of a problem into subproblems. Each subproblem is typically much more relaxed than the original and requires significantly less time to solve (see Figure 1.4 for examples). However, there are global constraints (S_G in Figure 2.2b) that may not be satisfied after solving each subproblem independently. As a result, the subproblems may need to be solved multiple times in order to resolve any violated global constraints. The number of times the subproblems have to be solved depends strongly on the difficulty of resolving the violated global constraints.

2.2.1 Subspace partitioning methods

Many MINLP solution methods are based on subspace partitioning and solve the subproblems obtained by decomposing the search space of a problem instance. MINLP methods generally apply subspace partitioning to decompose the search space of a MINLP problem into subproblems in such a way that, after fixing a subset of the variables, each subproblem is convex and easily solvable, or can be relaxed and approximated. There are several approaches.

To review the existing methods, we rewrite the MINLP problem P_m in (1.1) below:

\[
(P_m):\quad \min_{x,\,y}\; f(x, y) \quad \text{subject to} \quad h(x, y) = 0 \ \text{ and } \ g(x, y) \le 0, \tag{2.28}
\]

where x ∈ R^v are continuous variables and y ∈ D^w are integer variables.
a) Generalized Benders decomposition (GBD) [28] computes in each iteration an upper bound on the solution sought by solving a primal problem and a lower bound on a master problem. Here, the primal problem corresponds to the original problem with fixed discrete variables, and the master problem is derived through nonlinear duality theory.

Specifically, GBD alternates between a local phase and a global phase. The local phase solves an NLP subproblem with all integer variables y fixed:

\[
(NLP_k):\quad \min_{x}\; f(x, y^k) \quad \text{subject to} \quad h(x, y^k) = 0 \ \text{ and } \ g(x, y^k) \le 0, \tag{2.29}
\]

where x ∈ R^v are continuous variables, and y^k is the fixed integer assignment to y at iteration k.
The solutions of the NLP subproblem are feasible solutions to the original MINLP and provide an upper bound on the optimal objective value.

The global phase of GBD solves a mixed-integer linear programming (MILP) master problem, derived from duality theory and linear approximation, to predict a new lower bound of f for the MINLP problem and to generate new integer value assignments for y.
The search terminates when the predicted lower bound equals the current upper bound. GBD requires the continuous subspace to be a compact and convex set, and the objective and constraint functions to be convex and differentiable.

GBD is a subspace-partitioning method because it partitions the mixed-integer variable space by fixing the values of the integer variables. It explores multiple subspaces by defining different values for the integer variables at each iteration. Promising subspaces (integer value assignments) are found by solving a master problem based on a linear approximation.
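To make the fix-and-solve structure concrete, the sketch below runs GBD's local phase on a small, made-up MINLP. The problem data, the scipy-based NLP solve, and the replacement of the MILP master by naive enumeration over a small integer domain are all illustrative assumptions; a real GBD implementation derives its next integer assignment from dual-based Benders cuts.

```python
# Sketch of GBD's local phase on a made-up toy MINLP (all problem data are
# illustrative).  For each fixed integer assignment y, the continuous NLP
# (2.29) is solved and the best feasible objective is kept as an upper bound.
# A real GBD obtains its next y from a dual-based MILP master problem; naive
# enumeration over a small integer domain stands in for it here, preserving
# the subspace-partitioning structure but not GBD's convergence machinery.
from scipy.optimize import minimize

def f(x, y):                          # objective f(x, y) of the toy MINLP
    return (x[0] - y) ** 2 + 0.5 * y

def local_phase(y):                   # continuous NLP with y fixed (eq. 2.29)
    cons = [{"type": "ineq", "fun": lambda x: x[0] - 1.0}]   # g: 1 - x <= 0
    res = minimize(lambda x: f(x, y), x0=[1.0], constraints=cons)
    return res.fun if res.success else float("inf")

upper_bound, best_y = float("inf"), None
for y in range(4):                    # stand-in for the MILP master problem
    ub = local_phase(y)
    if ub < upper_bound:
        upper_bound, best_y = ub, y

print(f"best y = {best_y}, upper bound = {upper_bound:.4f}")  # y = 1, 0.5000
```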
b) Outer approximation (OA) [20] is similar to GBD and solves subproblems with fixed integer variables. The main difference between OA and GBD is that the master problem of OA is formulated using primal information and outer linearization, while the master problem of GBD is formulated using a dual representation. OA requires the continuous subspace to be a nonempty, compact, and convex set, and the objective and constraint functions to be convex. Similar to GBD, OA is a subspace-partitioning method.
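For concreteness, the outer linearization can be written as follows for an inequality-constrained convex MINLP; this is the textbook form rather than the exact notation of [20], and equality constraints require additional treatment. Given the primal points (x^k, y^k) visited so far, the master problem is

\[
\begin{aligned}
\min_{\alpha,\,x,\,y}\quad & \alpha\\
\text{subject to}\quad
& \alpha \,\ge\, f(x^k, y^k) + \nabla f(x^k, y^k)^T
\begin{pmatrix} x - x^k\\ y - y^k \end{pmatrix},\\
& 0 \,\ge\, g(x^k, y^k) + \nabla g(x^k, y^k)^T
\begin{pmatrix} x - x^k\\ y - y^k \end{pmatrix}, \qquad k = 1, 2, \ldots
\end{aligned}
\]

By convexity, each linearization underestimates the corresponding function, so the optimal value of the master problem is a valid lower bound on the MINLP optimum.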
c) Branch-and-bound methods [75, 76] solve MINLPs by performing a tree enumeration in which a subset of the integer variables is successively fixed at each node of the tree. For each node, the method relaxes the integrality requirements on the unfixed integer variables, forms a continuous NLP subproblem, and solves it. The solution of the corresponding NLP at each node provides a lower bound on the optimal MINLP objective value. In addition, if the integer variables in the solution to an NLP subproblem take on integer values, an upper bound is also obtained at the node.
The branch-and-bound method keeps track of the best lower and upper bounds and stops when the two bounds meet. A node is pruned when its lower bound exceeds the current upper bound or when its subproblem is infeasible. A popular MINLP solver, MINLP_BB [54], implements branch and bound and uses the sequential quadratic programming (SQP) algorithm to solve the continuous NLP subproblem at each node of the tree search.
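The following sketch shows the bounding-and-pruning logic on a toy convex problem. The problem, the closed-form box relaxation, and all constants are made up for illustration; a real solver such as MINLP_BB would call SQP here instead.

```python
# Hypothetical branch-and-bound sketch on a toy convex problem:
# minimize (y1 - 1.3)^2 + (y2 - 2.7)^2 over integer y in {0,...,3}^2.
# The continuous relaxation over a box is solved in closed form by clipping
# the unconstrained minimizer into the box.
import math

TARGET = (1.3, 2.7)

def relax(lo, hi):
    """Bound for the box [lo, hi]: clip the unconstrained minimizer."""
    y = [min(max(t, l), h) for t, l, h in zip(TARGET, lo, hi)]
    return sum((v - t) ** 2 for v, t in zip(y, TARGET)), y

best_ub, best_sol = math.inf, None
stack = [((0, 0), (3, 3))]                 # root node: the whole integer box
while stack:
    lo, hi = stack.pop()
    lb, y = relax(lo, hi)
    if lb >= best_ub:                      # prune: bound meets the incumbent
        continue
    frac = [i for i, v in enumerate(y) if abs(v - round(v)) > 1e-9]
    if not frac:                           # integral relaxation: new incumbent
        best_ub, best_sol = lb, tuple(int(round(v)) for v in y)
        continue
    i, fl = frac[0], math.floor(y[frac[0]])  # branch on a fractional variable
    stack.append((lo, tuple(h if j != i else fl for j, h in enumerate(hi))))
    stack.append((tuple(l if j != i else fl + 1 for j, l in enumerate(lo)), hi))

print(best_sol, round(best_ub, 2))         # (1, 3) 0.18
```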
One major approach for obtaining the lower bound in branch and bound is Lagrangian relaxation [29, 31, 27, 78, 7]. Lagrangian relaxation reformulates P_m into a dual problem. Based on duality theory [29, 31], the optimal solution to the dual problem has an objective value less than or equal to that of the optimal solution to the original problem. Therefore, Lagrangian relaxation can derive a lower bound for each node in branch and bound. It should be noted that, when the problem is linear or convex, Lagrangian relaxation can be used directly to find an optimal primal solution given an optimal dual solution, or vice versa. However, as pointed out in [81], it does not work well for nonlinear problems.
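The lower-bounding property is the standard weak-duality inequality, written here in the notation of (2.28): for any λ, any μ ≥ 0, and any point (x*, y*) that is feasible for (2.28),

\[
q(\lambda, \mu) \;=\; \inf_{x,\,y}\,\Bigl[\, f(x,y) + \lambda^T h(x,y) + \mu^T g(x,y) \,\Bigr]
\;\le\; f(x^*, y^*) + \lambda^T h(x^*, y^*) + \mu^T g(x^*, y^*)
\;\le\; f(x^*, y^*),
\]

since h(x*, y*) = 0 and μ^T g(x*, y*) ≤ 0 at any feasible point. Taking the infimum over a node's subspace instead of the whole space gives the per-node lower bound.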
Branch and bound is also a subspace-partitioning method. Each subproblem explores a subset of the search space by fixing the values of some integer variables. It uses lower and upper bounds to eliminate those subspaces in which no solution is contained. Many of the range-reduction techniques in branch and bound are applicable only when the relaxed problems are convex.
d) Branch and reduce [76] is a general search framework implemented in the BARON solver [76]. The branch-and-reduce optimization framework is a variation of branch and bound that partitions the subspace by branching on the values of discrete variables. The key difference between branch and reduce and branch and bound is in the derivation of lower bounds for subproblems. Instead of using conventional techniques, such as linear approximation and Lagrangian relaxation, BARON employs a variety of duality-based and interval-arithmetic-based range-contraction techniques to derive lower bounds of subproblems. These techniques are specialized modules designed for problems with special properties, such as bilinear programming, fixed-charge programming, indefinite quadratic programming, linear multiplicative programming, separable concave programs, and univariate polynomial programming. These specialized modules can derive tighter lower bounds than general conventional methods and lead to more effective pruning of the subspaces [76]. Like branch and bound, branch and reduce is also a subspace-partitioning method.
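As a minimal illustration of the interval-arithmetic flavor of such range contraction (the function and box below are made up, and BARON's actual modules are far more sophisticated): a bilinear term x·y attains its extreme values over a box at the box's corners, which yields a cheap, valid lower bound for pruning.

```python
# Illustrative interval bound (not BARON's actual module): over the box
# [xl, xu] x [yl, yu], a bilinear term x*y attains its extrema at corners,
# so the corner minimum is a valid lower bound usable for pruning a subspace.
def bilinear_bounds(xl, xu, yl, yu):
    corners = (xl * yl, xl * yu, xu * yl, xu * yu)
    return min(corners), max(corners)

lo, hi = bilinear_bounds(-1.0, 2.0, 0.5, 3.0)    # made-up box
print(f"x*y ranges over [{lo}, {hi}]")           # [-3.0, 6.0]
```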
e) Generalized cross decomposition (GCD) [38, 39, 74] is a variation of GBD that iterates between a phase solving the primal and dual subproblems and a phase solving the master problem. It differs from GBD only in the definition of the master problem. Similar to OA and GBD, GCD requires the objective and constraint functions of the subproblems to be proper convex functions.
f) Extended Cutting Plane (ECP) is a recent method for solving convex MINLPs [93] directly without partitioning. Unlike GBD, OA, and branch and bound, ECP does not rely on NLP subproblems and algorithms. Instead, it relies on the iterative solution of a series of approximate linear problems obtained by linearizing some of the constraint functions. In each step, it cuts the feasible region by adding a linearization of the most violated constraint at the current point.
The ECP method requires the objective function to be linear and the constraint functions to be convex.
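The sketch below reproduces this cut-and-resolve loop in the continuous, convex setting; the problem instance, bounds, and tolerances are illustrative, and a real ECP iteration also keeps integer variables in the master problem.

```python
# ECP-style cutting-plane loop on an illustrative convex problem: minimize
# the linear objective x1 + x2 subject to g(x) = x1^2 + x2^2 - 1 <= 0.
# Each iteration solves an LP over the cuts accumulated so far and, if the
# LP solution violates g, adds the linearization of g at that point as a
# new cut.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 1.0])

def g(x):                                  # the convex constraint function
    return x @ x - 1.0

def grad_g(x):                             # its gradient
    return 2.0 * x

A, b = [], []                              # accumulated cuts: A z <= b
x = np.array([-2.0, -2.0])                 # LP optimum with bound cuts only
for it in range(50):
    if g(x) <= 1e-6:                       # feasible; objective is linear: stop
        break
    a = grad_g(x)                          # cut: g(x) + a^T (z - x) <= 0
    A.append(a)
    b.append(a @ x - g(x))
    x = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                bounds=[(-2, 2), (-2, 2)]).x

print(it, x, c @ x)    # approaches (-0.707, -0.707), objective -> -sqrt(2)
```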
g) Direct-solution methods attack a problem without any transformation or partitioning. Examples include reject/discarding methods [41, 6, 73, 67], repair methods [43, 63], feasible-direction methods [9, 53, 58], preserving-feasibility methods [59], and strategic oscillation methods [33, 77]. These methods try to limit the probes to the feasible region or to transform infeasible probes into feasible points by repair operators. They are very limited in handling problems with nonlinear constraints and disconnected feasible regions.

In summary, existing MINLP methods solve a problem either as a whole or by subspace partitioning. They are not applicable for solving the general MINLPs in (1.1) because of their restrictive requirements on the decomposed subproblems. All these methods require the MINLP problem to have some special properties, such as nonempty and compact subspaces with convex objective and constraint functions.
2.2.2 Separable programming methods

Another class of decomposition methods are separable programming methods based on duality [10]. An extensive introduction to separable programming can be found in Chapters 5 and 6 of the nonlinear programming book by Bertsekas [10]. Dual methods solve a dual problem instead of the primal problem: they look for Lagrange multipliers that maximize the dual function, instead of minimizing the Lagrangian function in the original variable subspace. If the problem has some separable properties, maximizing the dual function can be decomposed into multiple much simpler subproblems, each involving only a subset of the constraints and variables.

A typical problem that can be solved by separable programming has the following form, where the variable x has m components x_1, ..., x_m of dimensions n_1, ..., n_m, respectively:

\[
\begin{aligned}
\text{minimize}\quad & \sum_{i=1}^{m} f_i(x_i) \\
\text{subject to}\quad & \sum_{i=1}^{m} g_{ij}(x_i) \le 0, \quad j = 1, \dots, r, \\
& x_i \in X_i, \quad i = 1, \dots, m.
\end{aligned} \tag{2.30}
\]

Here f_i and g_{ij} are given continuous and differentiable functions, and X_i is a given subset of R^{n_i}. Note that, if the coupling constraints Σ_{i=1}^m g_{ij}(x_i) ≤ 0 were not present in (2.30), it would be straightforward to decompose the problem into m independent subproblems. However, these constraints link all the subproblems together and can create global inconsistencies.

Separable programming methods consider the following dual problem of (2.30):

\[
\text{maximize}\quad q(\mu) \tag{2.31}
\]
\[
\text{subject to}\quad \mu \ge 0, \tag{2.32}
\]

where μ is the vector of Lagrange multipliers and the dual function q(μ) is formulated as:

\[
q(\mu) \;=\; \inf_{x_i \in X_i,\; i = 1, \dots, m} \; \sum_{i=1}^{m} \Bigl[\, f_i(x_i) + \sum_{j=1}^{r} \mu_j \, g_{ij}(x_i) \Bigr] \;=\; \sum_{i=1}^{m} q_i(\mu), \tag{2.33}
\]

and

\[
q_i(\mu) \;=\; \inf_{x_i \in X_i} \Bigl[\, f_i(x_i) + \sum_{j=1}^{r} \mu_j \, g_{ij}(x_i) \Bigr], \quad i = 1, \dots, m. \tag{2.34}
\]

Therefore, the minimization involved in computing the dual function q(μ) in (2.33) can be decomposed into m simpler subproblems in (2.34).
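The sketch below instantiates (2.33)-(2.34) on a tiny made-up instance of (2.30) with m = 2 and r = 1, solving each subproblem q_i(μ) in closed form and maximizing q(μ) by projected subgradient ascent; the step size and iteration count are arbitrary choices.

```python
# Dual decomposition with projected subgradient ascent on a tiny made-up
# instance of (2.30): minimize x1^2 + (x2 - 2)^2 subject to the coupling
# constraint (1.5 - x1) + (1.5 - x2) <= 0, i.e. x1 + x2 >= 3.  For a fixed
# multiplier mu, the two subproblems q_i(mu) in (2.34) separate and are
# solved in closed form; mu is then updated along the subgradient of q.
def argmin_q1(mu):                 # minimizes x1^2 + mu * (1.5 - x1)
    return mu / 2.0

def argmin_q2(mu):                 # minimizes (x2 - 2)^2 + mu * (1.5 - x2)
    return 2.0 + mu / 2.0

mu, step = 0.0, 0.5                # arbitrary initial multiplier and step
for _ in range(100):
    x1, x2 = argmin_q1(mu), argmin_q2(mu)      # m independent subproblems
    subgrad = (1.5 - x1) + (1.5 - x2)          # sum_i g_i(x_i(mu))
    mu = max(0.0, mu + step * subgrad)         # projected ascent on (2.31)

print(f"mu = {mu:.3f}, x = ({x1:.3f}, {x2:.3f})")  # mu -> 1, x -> (0.5, 2.5)
```

At μ = 1 the coupling constraint is tight and the subproblem solutions recover the primal optimum, illustrating the zero duality gap of the convex case.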
…and sufficient, and there is a one-to-one correspondence between the points satisfying our condition and the local optimal points.

2.2.3