
Presentation Transcript


  1. Simulated Annealing for Constrained Optimization
Duygun Fatih Demirel, Linet Özdamar
Department of Systems Engineering, Yeditepe University, Istanbul

  2. Outline • Introduction • HSA: Hybrid SA • HSAP: HSA with Penalty Method • HSAD: Dual Sequence HSA • Numerical Results • Conclusion

  3. Introduction
A Constrained Optimization Problem (COP) is defined as:
minimize f(x1, …, xn)
subject to:
gj(x1, …, xn) ≤ 0, j = 1, …, k;
hj(x1, …, xn) = 0, j = k+1, …, m;
xi ∈ [XLi, XUi], i = 1, …, n,
where the gj and hj are linear or nonlinear inequality and equality constraints.
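To make the formulation concrete, the following is a minimal Python sketch of one illustrative COP instance; the specific functions, bounds, and tolerance are invented for illustration and are not taken from the paper's testbed. TIF(x), the total infeasibility of a solution, is defined here because later slides refer to it.

    # Illustrative COP instance: two variables, one inequality constraint,
    # one equality constraint, and box bounds on each coordinate.
    def f(x):                    # objective to minimize
        return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

    def g1(x):                   # inequality: feasible when g1(x) <= 0
        return x[0] + x[1] - 2.5

    def h1(x):                   # equality: feasible when h1(x) == 0
        return x[0] - x[1] + 1.0

    bounds = [(-5.0, 5.0), (-5.0, 5.0)]     # [XL_i, XU_i] per coordinate

    def total_infeasibility(x, tol=1e-6):
        # TIF(x): total magnitude of constraint violation; 0 when feasible.
        return max(g1(x), 0.0) + max(abs(h1(x)) - tol, 0.0)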

  4. Simulated Annealing
• SA is a black-box algorithm that can be used to solve COPs.
• Special SA applications have been developed for:
• structural optimization problems [1, 2],
• economic dispatch [3],
• power generator scheduling [4, 5, 6],
• thermoelastic scaling behavior [7].
[1] Bennage, W.A. and Dhingra, A.K., Single and multi-objective structural optimization in discrete continuous variables using simulated annealing, International Journal of Numerical Methods in Engineering, Vol. 38, 1995, pp. 2753-2773.
[2] Leite, J.P.B. and Topping, B.H.V., Parallel simulated annealing for structural optimization, Computers and Structures, Vol. 73, 1999, pp. 545-564.
[3] Wong, K.P. and Fung, C.C., Simulated-annealing-based economic dispatch algorithm, IEE Proceedings Part C, Vol. 140, 1993, pp. 509-515.
[4] Wong, K.P. and Wong, Y.W., Thermal generator scheduling using hybrid genetic/simulated-annealing approach, IEE Proceedings Generation Transmission Distribution, Vol. 142, 1995, pp. 372-380.
[5] Wong, K.P. and Wong, S.Y.W., Combined genetic algorithm/simulated annealing/fuzzy set approach to short-term generation scheduling with take-or-pay fuel contract, IEEE Trans. on Power Systems, Vol. 11, 1996, pp. 112-118.
[6] Wong, K.P. and Wong, S.Y.W., Hybrid genetic/simulated annealing approach to short-term multiple-fuel-constrained generation scheduling, IEEE Trans. on Power Systems, Vol. 12, 1997, pp. 776-784.
[7] Wong, Y.C., Leung, K.S. and Wong, C.K., Simulated annealing-based algorithms for the studies of the thermoelastic scaling behavior, IEEE Transactions on Systems, Man, and Cybernetics, Part C, Vol. 30, 2000, pp. 506-516.

  5. A Hybrid SA Algorithm: HSA • HSA has various diversification and intensification schemes. • HSA also invokes Feasible Sequential Quadratic Programming (FSQP), a local search method, with an annealing probability while searching for the global optimum.

  6. A Hybrid SA Algorithm: HSA (cont.) • Two SA algorithms are developed based on HSA: • HSAP: HSA with the penalty method • HSAD: dual-sequence HSA

  7. HSAP: HSA With Penalty Method • HSAP uses a penalty augmented objective function in all solution assessments. • Various penalty forms are attempted here. • Some are based on the magnitude of total infeasibility, some depend on the number of constraints violated, some combine the progress of the search with both criteria, etc.

  8. Penalty Methods • Penalty methods convert the COP into an unconstrained problem where a penalty term reflecting the degree of infeasibility of the solution is added to the objective function. • Penalty methods proposed in the literature: • Static • Dynamic • Adaptive
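As a minimal sketch of this conversion, reusing f and total_infeasibility from the illustrative instance above, a static quadratic penalty looks like the following; the weight is an invented constant, and choosing it well is exactly the difficulty discussed next.

    PENALTY_WEIGHT = 100.0    # illustrative static weight, not a published value

    def augmented_objective(x):
        # f'(x) = f(x) + w * TIF(x)^2: feasible points keep their true
        # objective value; infeasible points pay a quadratic penalty.
        return f(x) + PENALTY_WEIGHT * total_infeasibility(x) ** 2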

  9. Penalty Methods (cont.) • Disadvantages: • Static methods: it is hard to decide on the magnitude of penalties: if too large, penalties may prevent the search from exploring infeasible regions; if too small, the search may fail to identify feasible solutions. • Dynamic and adaptive penalty functions can also be sensitive to certain run-time-related parameters.

  10. Consequences of Using Augmented Objective Functions in HSA • HSA generates a sequence of solutions where each solution is derived by perturbing the previous one. • The probability of accepting worse solutions typically depends on the difference between the objective function values of two consecutive solutions.

  11. Consequences of Using Augmented Objective Functions in HSA (cont.) • A feasible solution that is an immediate successor to an infeasible one might be accepted right away, just because it does not have a penalty term in its assessment criterion. • Similarly, if an infeasible solution succeeds a feasible one, it is less likely that it will be accepted because its probability of acceptance might be too small due to the penalty term.
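A worked toy example of this distortion (numbers invented for illustration): suppose an infeasible solution has true objective f = 10 and penalty 50, so its augmented value is 60, and its feasible successor has f = 12 with no penalty. The acceptance test sees a difference of 12 − 60 = −48, a large apparent improvement, so the feasible successor is accepted almost unconditionally even though the true objective worsened from 10 to 12. In the reverse order the difference is +48, and exp(−48/T) is tiny at moderate temperatures, so the infeasible successor is almost surely rejected.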

  12. HSAD: Dual-Sequence HSA • HSAD uses the original objective function. • Differentiates between feasible and infeasible solution sequences, treating them independently. • Aims at converging to as many local points as possible in a given feasible subspace while concurrently searching for other feasible subspaces that the current infeasible sequence can lead to. • Removes the need for augmenting the objective with penalties.

  13. HSAD avoids the problems caused by penalties In each iteration of HSAD: • if a candidate neighbor is feasible, then it is compared with the last feasible solution obtained in the feasible sequence, • and similarly, if it is infeasible, then it is compared with the last infeasible solution.

  14. HSA Algorithm
• Starts with a random initial solution.
• Selects one coordinate i at random and perturbs its value xi within its range [XLi, XUi]; then calculates f of the candidate solution x′i.
• Intensification phase:
• if f(x′i) < f(xi), then x′i replaces xi and the intensification counter is reset;
• else, the intensification counter is incremented. If the intensification cycle is completed, the counter is reset and an annealing probability of acceptance is calculated for the worse candidate x′i. If x′i is accepted, it replaces xi; otherwise xi is preserved and a new candidate x′i is formed.
Prob(accept) = exp(−(f(x′i) − f(xi)) / T), the standard Metropolis criterion, where T is the current temperature.
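A minimal sketch of one HSA iteration as just described, reusing f and bounds from the illustrative instance above; the cycle length, the full-range resampling used as the perturbation, and all helper names are assumptions rather than the authors' exact choices.

    import math
    import random

    def hsa_step(x, fx, T, intens_count, intens_cycle=20):
        # One HSA move: perturb a random coordinate, accept improvements
        # greedily, and apply the Metropolis test to a worse candidate
        # only when the intensification cycle completes.
        i = random.randrange(len(x))
        x_new = list(x)
        lo, hi = bounds[i]
        x_new[i] = random.uniform(lo, hi)   # resample coordinate i in [XL_i, XU_i]
        f_new = f(x_new)
        if f_new < fx:                      # improvement: accept, reset counter
            return x_new, f_new, 0
        intens_count += 1
        if intens_count >= intens_cycle:    # cycle complete: allow an uphill move
            intens_count = 0
            if random.random() < math.exp(-(f_new - fx) / T):
                return x_new, f_new, intens_count
        return x, fx, intens_count          # otherwise keep the current solution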

  15. HSA Algorithm (cont.)
• Diversification scheme:
• if a newly accepted solution x′i is feasible and better than the best feasible solution found so far, f*, then the diversification counter is reset and f* is replaced by f(x′i);
• else, the diversification counter is incremented. If the counter reaches its limit, f* is reset and a completely new random initial solution is generated to start a new sequence (see the sketch below).
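A sketch of the diversification bookkeeping under the same assumptions; Dmax, the counter limit, is a hypothetical parameter.

    def update_diversification(x_new, f_new, f_star, div_count, Dmax=500):
        # Reset the counter on a new best feasible solution; restart the
        # sequence from a fresh random point when progress stagnates.
        if total_infeasibility(x_new) == 0.0 and f_new < f_star:
            return f_new, 0, x_new          # new incumbent f*, counter reset
        div_count += 1
        if div_count >= Dmax:               # stagnation: reset f* and restart
            x_restart = [random.uniform(lo, hi) for lo, hi in bounds]
            return math.inf, 0, x_restart
        return f_star, div_count, x_new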

  16. HSA Algorithm (cont.)
• Invoking FSQP with an annealing probability (Lawrence et al., 1997):
• HSA provides the current feasible solution xi to FSQP as a starting point.
• FSQP seeks a local or global stationary point around xi and simply updates f* if it finds a better solution.
• Prob(call_FSQP) is an annealing probability computed from feas_obj_i, the objective value of the last feasible solution obtained up to the current iteration i (note: feas_obj_i, not f*).
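A sketch of the invocation pattern only. FSQP itself is a separate code (Lawrence et al., 1997), so SciPy's SLSQP solver stands in for it here, and p_call stands for the annealing probability whose exact formula on this slide did not survive extraction.

    import random
    from scipy.optimize import minimize

    def maybe_call_local_solver(x_feas, f_star, p_call):
        # With annealing probability p_call, polish the current feasible
        # point with a local constrained solver and update f* on success.
        if random.random() >= p_call:
            return f_star
        res = minimize(f, x_feas, method="SLSQP", bounds=bounds,
                       constraints=[{"type": "ineq", "fun": lambda x: -g1(x)},
                                    {"type": "eq", "fun": h1}])
        if res.success and total_infeasibility(res.x) == 0.0 and res.fun < f_star:
            return res.fun                  # better stationary point found
        return f_star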

  17. HSAP: Difference from HSA
• An HSAP iteration is the same as HSA's; the only difference is that instead of f(x), HSAP uses the augmented function f′(x) = f(x) + z(TIF(x)), where TIF(x) is the total infeasibility of x.
• Various penalty functions are used here (sketches of two of them follow):
1) MQ (Morales and Quezada, 1998): static penalty, a function of the number of violated constraints.
2) QP (quadratic): additive static function of squared TIF(x).
3) JH (Joines and Houck, 1994): dynamic function of both the progress of the search and TIF(x); smaller penalty in the early phase, higher later.
4) MA (Michalewicz and Attia, 1994): annealing penalty with a quadratic penalty weight divided by the temperature.
5) CSBA (Carlson et al., 1998): annealing penalty of multiplicative type, with f(x) multiplied by an exponential function with arguments TIF(x) and the temperature.
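Minimal sketches of two of these forms, in the spirit of MQ and JH; the weights and exponents are invented constants, not the published ones.

    # Positive value from any of these means the constraint is violated.
    violations = [g1, lambda x: abs(h1(x)) - 1e-6]

    def penalty_MQ(x, weight=50.0):
        # Static, count-based penalty in the spirit of MQ: grows with the
        # number of violated constraints, not with their magnitude.
        return weight * sum(1 for v in violations if v(x) > 0.0)

    def penalty_JH(x, iteration, C=0.5, alpha=2.0):
        # Dynamic penalty in the spirit of JH: the multiplier grows with
        # the iteration count, so the early search may roam infeasible
        # regions while the late search is pushed toward feasibility.
        return (C * iteration) ** alpha * total_infeasibility(x)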

  18. HSAD Algorithm
• HSAD traces two sequences at a time: a feasible one and an infeasible one.
• In any iteration i of HSAD:
• if x_i is feasible, then feas_obj_i = f(x_i), and the infeasibility degree of the last infeasible solution carries over, TIF_i = TIF(x_{i-1}), since there is no change in the infeasible sequence;
• otherwise, if x_i is infeasible and accepted, TIF_i = TIF(x_i) and feas_obj_i = feas_obj_{i-1}.
• The annealing probability of acceptance is revised accordingly: an infeasible candidate is judged against the last accepted infeasible solution (see the sketch below).
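A minimal sketch of the dual-sequence bookkeeping and acceptance dispatch. Judging an infeasible candidate by the change in TIF inside the Metropolis test is an assumption made for illustration, since the revised formula on this slide did not survive extraction.

    def hsad_accept(x_cand, T, feas_obj, last_tif):
        # Feasible candidates are compared with the last feasible solution
        # (by f); infeasible candidates with the last infeasible solution
        # (by TIF). The original objective is used; no penalty is added.
        tif = total_infeasibility(x_cand)
        if tif == 0.0:
            delta = f(x_cand) - feas_obj    # feasible sequence
        else:
            delta = tif - last_tif          # infeasible sequence (assumed metric)
        accepted = delta < 0 or random.random() < math.exp(-delta / T)
        if accepted:
            if tif == 0.0:
                feas_obj = f(x_cand)        # feas_obj_i = f(x_i)
            else:
                last_tif = tif              # TIF_i = TIF(x_i)
        return accepted, feas_obj, last_tif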

  19. HSAD Algorithm (cont.) • HSAD has an additional diversification feature. • We define a new counter F that counts the number of feasible solutions obtained so far. When F > Fmax, we reset TIF_i to a large number so that new and worse infeasible solutions can be accepted.

  20. Numerical Results • Testbed: 32 COPs collected from different sources in the literature. • HSAP and HSAD are each run 100 times on every test problem.

  21. Performance Measures
• absolute deviation from the optimum of the worst solution in 100 runs, averaged over the 32 problems;
• average absolute deviation over 100 runs, averaged over the 32 problems;
• number of optimal solutions found among the worst solutions;
• number of problems (out of 32) where a feasible solution could not be found in any of the 100 runs, i.e., total failure of the procedure;
• average ratio of unsolved problems, i.e., runs in which no feasible solution was found.

  22. HSAP – HSAD Results Average absolute deviation from optimum in the worst and average cases for HSAP and HSAD.

  23. HSAP – HSAD Results (cont.) • # of optimal solutions • # of unsolved problems • Average ratio of unsolved problems

  24. Summary of Results • HSAD is more reliable than HSAP: it produces much better worst-case results. • HSAD's superiority over HSAP is statistically significant in the average-case results.

  25. Conclusion • A new Simulated Annealing algorithm, HSA, is proposed for solving the COP. • HSA has a global diversification counter that detects stagnation in a sequence of solutions and avoids getting trapped in the same feasible region. • It also has a local counter that executes a hill-climbing approach, rejecting worse candidate neighbors for a given number of iterations; this enforces a thorough scan of a neighborhood before worse neighbors are accepted. • HSA is supported by the local search method FSQP throughout its exploration process.

  26. Conclusion (cont.) • Two versions of HSA are developed for the COP: the first (HSAP) incorporates penalty methods for constraint handling, and the second (HSAD) eliminates the need for an augmented objective function by tracing feasible and infeasible solution sequences independently. • HSAD avoids the near-certain rejection of infeasible solutions generated immediately after a feasible one, and it also avoids the almost unconditional acceptance of a feasible solution generated right after an infeasible one. • The performance of HSAD is superior to that of HSAP.

  27. Thanks for listening!
