
Lecture 11. Constraint Handling








  1. Lecture 11. Constraint Handling Learning objective: understand the methods for handling constraints in evolutionary optimization problems, divided into three broad categories

  2. Outline • Review of the previous two lectures • Unconstrained optimization • Search bias and search operators • Search step size and search operators • Examples: Gaussian mutation, Cauchy mutation, self-adaptation, quadratic recombination • Different types of constraints • Different types of constraint handling techniques • The penalty function approach • Summary
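The review bullets above mention Gaussian mutation and Cauchy mutation from the previous lectures. A minimal sketch of the two operators (function names and scale parameters are illustrative, not from the slides) shows the key difference: Cauchy noise has much heavier tails, so it produces longer jumps:

```python
import math
import random

def gaussian_mutation(x, sigma=1.0):
    """Perturb each component with N(0, sigma^2) noise."""
    return [xi + random.gauss(0.0, sigma) for xi in x]

def cauchy_mutation(x, scale=1.0):
    """Perturb each component with Cauchy noise, sampled by the
    inverse-CDF method: tan(pi * (u - 0.5)) is standard Cauchy.
    Heavier tails than the Gaussian mean occasional long jumps."""
    return [xi + scale * math.tan(math.pi * (random.random() - 0.5))
            for xi in x]

random.seed(0)
parent = [0.0, 0.0, 0.0]
print(gaussian_mutation(parent))
print(cauchy_mutation(parent))
```

The heavier tails of the Cauchy operator are what make it useful for escaping local optima, at the cost of more wasted long-distance moves near a good solution.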

  3. Problem Formulation • The general problem we consider here can be described as: minimize f(x) over x, subject to: g_i(x) <= 0, i = 1, 2, …, m; h_j(x) = 0, j = 1, 2, …, p, where x is the n-dimensional vector x = (x_1, x_2, …, x_n); f(x) is the objective function; the g_i(x) are the inequality constraints; the h_j(x) are the equality constraints. • Denote the whole search space by S • Denote the feasible search space by F • The global optimum in F may not be the same as that in S. F ⊆ S: the feasible space lies within the whole search space S
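The formulation above can be made concrete with a small feasibility check. This is a sketch under illustrative names (the example objective and constraint are not from the slides); equality constraints are relaxed to a tolerance `eps`, as is usual in numerical work:

```python
def is_feasible(x, ineq, eq, eps=1e-6):
    """x is feasible iff every g_i(x) <= 0 and every |h_j(x)| <= eps."""
    return (all(g(x) <= 0 for g in ineq)
            and all(abs(h(x)) <= eps for h in eq))

# Illustrative problem: minimize f(x) = x1^2 + x2^2
# subject to g1(x) = 1 - x1 - x2 <= 0  (i.e. x1 + x2 >= 1)
ineq = [lambda x: 1 - x[0] - x[1]]

print(is_feasible([0.6, 0.6], ineq, []))  # g1 = -0.2 <= 0, feasible
print(is_feasible([0.2, 0.2], ineq, []))  # g1 =  0.6 >  0, infeasible
```

Note that the unconstrained minimum of this f is (0, 0), which is infeasible, illustrating the slide's point that the global optimum in F may differ from that in S.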

  4. Different Types of Constraint Handling Techniques • The penalty function approach: It converts a constrained problem into an unconstrained one, by penalizing constraint violations • The repair approach: It maps (repairs) an infeasible solution into a feasible one • The pure approach: It is pure because it does not search in the infeasible space at all. Only feasible solutions will be generated and examined • The separatist approach: It considers objective functions and constraints separately in the evolution. There is no fixed single fitness to measure an individual • The hybrid approach: It usually combines the evolutionary approach with another existing approach in dealing with constraints

  5. The Penalty Function Approach: Introduction • The penalty approach converts a constrained problem into unconstrained optimization. The general formulation of the exterior penalty function is: φ(x) = f(x) ± (Σ_i r_i G_i(x) + Σ_j c_j H_j(x)) where φ(x) is the new objective function to be minimized; f(x) is the original objective function; "±" means it can be "+" or "−" depending on whether we minimize or maximize; G_i(x) = max(0, g_i(x))^β penalizes inequality-constraint violations; H_j(x) = |h_j(x)|^γ penalizes equality-constraint violations; β and γ are usually chosen as 1 or 2; r_i and c_j are penalty factors (coefficients). New Objective Function = Original Objective Function + Penalty Factor × Degree of Constraint Violation
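The exterior penalty formula above translates directly into code. A minimal sketch for the minimization case (the example problem and the default penalty factors are illustrative assumptions, not values from the slides):

```python
def penalized(x, f, ineq, eq, r=1000.0, c=1000.0, beta=2, gamma=2):
    """Exterior penalty function phi(x) = f(x) + r*sum(G_i) + c*sum(H_j),
    with G_i(x) = max(0, g_i(x))**beta and H_j(x) = abs(h_j(x))**gamma.
    Single penalty factors r and c are used here for simplicity."""
    G = sum(max(0.0, g(x)) ** beta for g in ineq)
    H = sum(abs(h(x)) ** gamma for h in eq)
    return f(x) + r * G + c * H

f = lambda x: x[0] ** 2 + x[1] ** 2
ineq = [lambda x: 1 - x[0] - x[1]]   # feasible iff x1 + x2 >= 1

print(penalized([0.6, 0.6], f, ineq, []))  # feasible: penalty term is zero
print(penalized([0.2, 0.2], f, ineq, []))  # infeasible: large penalty added
```

Feasible points keep their original fitness unchanged, while infeasible points are pushed up in proportion to how badly they violate the constraints, which is exactly the "tagline" equation on the slide.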

  6. The Penalty Function Approach: Overview • Static penalties • The penalty function is pre-defined and fixed during evolution • Dynamic penalties • The penalty function changes according to a pre-defined sequence (or schedule), which often depends on the generation number • Adaptive penalties and self-adaptive penalties • The penalty function changes adaptively according to the current or previous populations; there is no fixed sequence to follow
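To make the static/dynamic distinction concrete: a widely cited dynamic schedule (due to Joines and Houck) uses a penalty weight of the form (C·t)^α, where t is the generation number. The constants below are illustrative; the point is only the shape of the schedule:

```python
def dynamic_weight(t, C=0.5, alpha=2):
    """Dynamic penalty weight (C * t)**alpha: it grows with the
    generation number t, so early generations tolerate infeasible
    solutions while later generations punish them heavily."""
    return (C * t) ** alpha

for t in (1, 10, 100):
    print(t, dynamic_weight(t))   # 0.25, then 25.0, then 2500.0
```

A static penalty would instead return the same weight at every generation, while an adaptive penalty would compute the weight from population statistics (e.g. the fraction of feasible individuals) rather than from t.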

  7. Static Penalty Functions

  8. Dynamic Penalties: An Example

  9. Dynamic Penalties: Summary and Generalization

  10. Adaptive Penalties: An Example

  11. Adaptive Penalties: Another Example

  12. Self-Adaptive Penalties (I)

  13. Self-Adaptive Penalties (II)

  14. Summary • We mainly looked at numerical problems in this lecture • Constraints can be classified along two lines • Linear vs. nonlinear, equality vs. inequality • There are different constraint handling techniques; some of them are not unique to evolutionary algorithms • We examined different penalty methods • Static penalty, dynamic penalty, adaptive penalty, self-adaptive penalty • The key is how to balance the penalty term against the objective term • References • T. Bäck, D. B. Fogel and Z. Michalewicz (eds.), Handbook of Evolutionary Computation, IOP Publishing, 1997 (Sections C5.1–C5.6)
