
Approximation and Visualization of Interactive Decision Maps: Short course of lectures

Alexander V. Lotov, Dorodnicyn Computing Center of the Russian Academy of Sciences and Lomonosov Moscow State University

Lecture 7. Non-linear Feasible Goals Method and its applications







  1. Approximation and Visualization of Interactive Decision Maps. Short course of lectures. Alexander V. Lotov, Dorodnicyn Computing Center of the Russian Academy of Sciences and Lomonosov Moscow State University

  2. Lecture 7. Non-linear Feasible Goals Method and its applications. Plan of the lecture • Approximation for visualization of the feasible objective set • Application of the FGNL method to the conceptual design of future aircraft • Identification of economic systems by visualization of Y = f(X) • Approximation for visualization of the EPH • Hybrid methods for approximating the EPH in the non-convex case • Statistical tests • A simple hybrid method: a combination of the two-phase, three-phase and plastering methods • Study of cooling equipment in continuous casting of steel • Parallel computing

  3. The main problems that arise in the non-linear case: 1) non-convexity of the set Y = f(X); and 2) time-consuming algorithms for global scalar optimization.

  4. Approximation for visualization of the feasible objective set. An approximation of the feasible objective set f(X) may be needed in at least two cases: 1) the decision maker does not want to maximize or minimize the performance indicators; 2) an identification problem is considered. We approximate the set f(X) by simulating random decisions, filtering their outputs and approximating f(X) by a system of cubes. Then on-line visualization of the feasible objective set is possible. Thus we apply simulation-based multi-criteria optimization. Such an approach can also be applied in the case of models given by computational modules.
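The sampling-and-cubes scheme described above can be sketched in a few lines. This is a minimal illustration, not the lecture's implementation; all function and parameter names here are hypothetical:

```python
import random

def approximate_feasible_set(f, sample_x, n_samples, eps, seed=0):
    """Approximate the feasible objective set f(X) by a system of cubes.

    f        : decision -> tuple of objective values
    sample_x : rng -> random feasible decision
    eps      : edge length of the approximating cubes
    Returns the set of occupied grid cells (cube indices).
    """
    rng = random.Random(seed)
    cubes = set()
    for _ in range(n_samples):
        y = f(sample_x(rng))
        cubes.add(tuple(int(c // eps) for c in y))  # index of the cube containing y
    return cubes

def contains(cubes, y, eps):
    """Check whether an objective point y falls into an occupied cube."""
    return tuple(int(c // eps) for c in y) in cubes
```

Membership in the approximation then reduces to a hash-set lookup of a grid cell, which is what makes on-line visualization of slices of f(X) fast.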

  5. Example model: the well-known peaks function f(x, y) = 3(1 − x)^2 exp(−x^2 − (y + 1)^2) − 10(x/5 − x^3 − y^5) exp(−x^2 − y^2) − (1/3) exp(−(x + 1)^2 − y^2), where x and y are the decision variables.

  6. Let us consider an example with five criterion functions, all of which are to be maximized. Imagine that we want to locate a monitoring station at the point where the maximal pollution occurs, and let the i-th criterion be the pollution level forecast by the i-th expert. The different values of the criteria characterize the differences in the experts' knowledge of the pollution distribution.

  7. Let us consider several three-criterion graphs

  8. Software demonstration

  9. Application of the FGNL method to the conceptual design of future aircraft. Four construction parameters were considered: 1. Thrust-to-weight ratio of the engine (P0); 2. Overall drag coefficient of the aircraft (Cx0); 3. Induced drag coefficient of the aircraft (A); 4. Lift coefficient of the aircraft (CY). The aircraft was described by three flight characteristics: 1. Maximal speed at a given height (W_max5000); 2. Time of climb to a given height (TimeH); 3. Time of acceleration to a given speed (TimeV).

  10. Exploration of the decision space: the squeezed (compressed) variety of feasible values of the engine thrust (P0), the drag coefficient (CX0) and the lift coefficient (CY).

  11. Identification of economic systems by visualization of Y=f(X)

  12. Approximation for visualization of the EPH. The EPH (Edgeworth–Pareto hull) is approximated by the set T*, the union of the non-negative cones with apexes at a finite number of points y of the set Y = f(X). The set of such points y is called the approximation base and is denoted by T. Multiple slices of such an approximation can be computed and displayed fairly fast.
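A membership test for such a cone-union approximation is a simple dominance check. The sketch below assumes all criteria are minimized, so the non-negative cone with apex at a base point t is t + R^m_+ (for maximization the inequality flips); the function name is illustrative only:

```python
def in_eph(z, base):
    """Membership test for the EPH approximation T* (minimization case).

    T* is the union of non-negative cones with apexes at the base points,
    so z belongs to T* iff some base point is <= z in every criterion.
    """
    return any(all(t_i <= z_i for t_i, z_i in zip(t, z)) for t in base)
```

A two-criterion slice of T* can then be drawn by scanning a grid of objective points with the remaining criteria fixed and shading the points that pass this test.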

  13. Visualization example for 8 criteria

  14. Goal identification

  15. Demonstration of Pareto Front Viewer

  16. Hybrid methods for approximating the EPH in the non-convex case

  17. The models under study. Computation of the objective functions (the model) can be given by a computational module (a black box) provided by the user and unknown to the researcher; thus a very broad class of non-linear models can be studied. Our methods generate inputs (which depend on the method), and the module computes the corresponding outputs (by simulation, by solving a boundary value problem, or in some other way). Owing to this, we can even use simulation-based local optimization of random decisions or genetic optimization.

  18. The scheme of the methods

  19. We apply hybrid methods that include: 1) global random search; 2) adaptive local optimization; 3) importance sampling; 4) a genetic algorithm. Statistical tests of the approximation quality play the leading role in the approximation process.

  20. Statistical tests. The quality of an approximation T* is studied by using the concept of completeness hT = Pr{f(x) ∈ T* : x ∈ X}. We estimate Pr{hT > h*} for a given reliability χ by using a random sample HN = {x1, …, xN}. Let hT(N) = n/N, where n = |{xi : f(xi) ∈ T*}|. Then hT(N) is an unbiased estimate of hT. Moreover, hT(N) − (−ln(1 − χ)/(2N))^(1/2) is a lower confidence bound on hT, which describes the confidence interval.
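The sample estimate and its lower confidence bound can be computed directly from the formulas above. This is a sketch with hypothetical names; `in_Tstar` stands for the membership test of the current approximation T*:

```python
import math
import random

def completeness_estimate(f, in_Tstar, sample_x, N, chi, seed=0):
    """Unbiased estimate h_T(N) = n/N of the completeness h_T, together
    with the lower confidence bound h_T(N) - sqrt(-ln(1 - chi) / (2N))
    for a given reliability chi."""
    rng = random.Random(seed)
    n = sum(1 for _ in range(N) if in_Tstar(f(sample_x(rng))))
    h = n / N
    return h, h - math.sqrt(-math.log(1 - chi) / (2 * N))
```

Note that the bound shrinks like 1/sqrt(N): quadrupling the sample size halves the width of the confidence interval.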

  21. Completeness function. Let (T*)ε be the ε-neighborhood of T*. Then hT(ε) = Pr{f(x) ∈ (T*)ε : x ∈ X} is the completeness function. An important characteristic of the sample function hT(N)(ε) is the value εmax = δ(f(HN), T*).

  22. Two optimization-based completeness functions for different iterations (1 and 7)

  23. The optimization-based completeness. In problems with a high-dimensional decision variable it can happen that the sample completeness equals 1 while the approximation is still bad. In this case the optimization-based completeness function is used: hT(ε) = Pr{f(Φ(x)) ∈ (T*)ε : x ∈ X}, where Φ : X → X is the "improvement" mapping, which is usually based on local optimization of a scalar function of the criteria. The mapping moves the point f(Φ(x)) towards the Pareto frontier. As usual, a random sample HN = {x1, …, xN} is generated and hT(N)(ε) = n(ε)/N is computed, where n(ε) = |{xi : f(Φ(xi)) ∈ (T*)ε}|.
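A toy version of the improvement mapping Φ can be built from random local search on a scalarization of the criteria (here, their sum, to be minimized). The lecture's actual mapping uses proper local optimization of a scalarizing function; the names, step size and iteration count below are illustrative assumptions:

```python
import random

def improvement_mapping(f, x, scalar=sum, step=0.05, iters=100, seed=0):
    """A toy improvement mapping Phi: X -> X based on local optimization
    of a scalar function of the criteria (minimization).

    Random local search: perturb the current decision and accept the
    perturbed point whenever the scalarized criteria value improves."""
    rng = random.Random(seed)
    best, best_val = tuple(x), scalar(f(x))
    for _ in range(iters):
        cand = tuple(c + rng.uniform(-step, step) for c in best)
        val = scalar(f(cand))
        if val < best_val:     # accept only improving moves
            best, best_val = cand, val
    return best
```

Any mapping of this kind pushes sample images towards the Pareto frontier, which is all the optimization-based completeness test requires.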

  24. One-phase method. An iteration. A current approximation base T must be given. 1. Testing the base T. Generate a random sample HN ⊂ X and compute hT(N)(ε). If hT(N)(ε) (or, in automatic testing, values such as hT(N)(0) and εmax = δ(f(HN), T*)) satisfies the requirements, stop. 2. Forming a new base. Form a list that includes the points of T and the sample points that do not belong to T*, and exclude the dominated points. This yields the new approximation base. Start the next iteration.
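Step 2, forming the new base, is ordinary non-dominated (Pareto) filtering of the merged point list. A sketch for the minimization case, with illustrative names:

```python
def dominates_min(q, p):
    """q dominates p (minimization): q <= p in every criterion, < in at least one."""
    return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))

def form_new_base(base, new_points, dominates):
    """Merge the current base with the sample points not yet covered by T*,
    then exclude the dominated points (Pareto filtering)."""
    candidates = list(base) + list(new_points)
    return [p for p in candidates
            if not any(dominates(q, p) for q in candidates if q != p)]
```

The quadratic pairwise filter is enough for a sketch; for large bases a faster non-dominated sorting scheme would normally be used.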

  25. Two-phase method. An iteration. A current approximation base T must be given. 1. Testing the base T. Generate a random sample HN ⊂ X and compute Φ(HN). Construct hT(N)(ε). If the function hT(N)(ε) (or, in the case of automatic testing, values such as hT(N)(0) and εmax = δ(f(Φ(HN)), T*)) satisfies the requirements, stop. 2. Forming a new base. As usual. Start the next iteration.

  26. Three-phase method. An iteration. A current base T and a neighborhood B of the decisions whose images constitute T must be given. 1. Testing the base T. Generate two random samples H1 ⊂ X and H2 ⊂ B, and compute Φ(H1) and Φ(H2). Construct hT(N)(ε). If hT(N)(ε) satisfies the requirements, stop. 2. Forming a new base. As usual. 3. Forming a new neighborhood B using the statistics of extreme values. Start the next iteration.


  28. Forming the new neighborhood B. The neighborhood B is constituted of balls in the decision space centered at the current Pareto-optimal decisions; all balls have the same radius k. The value of the radius is computed using the statistics of extreme values. Namely, we consider the distances from the new Pareto-optimal decisions to the old Pareto-optimal decisions and order them in increasing order, …, d(N−l), …, d(N−1), d(N), where d(N) corresponds to the most distant point. Then k = d(N) + θ, where θ = r(l, χ)(d(N) − d(N−l)) and χ is the reliability, 0 < χ < 1. Here r(l, χ) = {[1 − (1 − χ)^(1/l)]^(−1/a) − 1}^(−1) and a = (ln l) / ln[(d(N) − d(N−l)) / (d(N) − d(N−1))], with l << N (we took l = 10).
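The radius formula can be transcribed directly. This sketch assumes the largest listed distances are pairwise distinct, so that the logarithms are well defined; the function name is hypothetical:

```python
import math

def neighborhood_radius(distances, chi, l=10):
    """Radius k = d(N) + theta from the statistics of extreme values.

    distances : distances from the new Pareto-optimal decisions to the old ones
    chi       : reliability, 0 < chi < 1
    l         : number of top order statistics used (the lecture takes l = 10)
    """
    d = sorted(distances)                       # d[-1] is d(N), the largest
    dN, dN_l, dN_1 = d[-1], d[-1 - l], d[-2]    # d(N), d(N-l), d(N-1)
    a = math.log(l) / math.log((dN - dN_l) / (dN - dN_1))
    r = 1.0 / ((1.0 - (1.0 - chi) ** (1.0 / l)) ** (-1.0 / a) - 1.0)
    return dN + r * (dN - dN_l)                 # k = d(N) + theta
```

By construction k exceeds the largest observed distance d(N), so with reliability χ the balls of radius k cover the region where further Pareto-optimal decisions are expected.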

  29. Plastering method. The "plastering" method, which has some properties of genetic algorithms (such as cross-over), is used at the very end of the approximation process. An iteration. A current approximation base T and numbers q, δ1, δ2 must be given. 1. Testing the base T. Let H be the set of inputs that result in the points of the approximation base T. Select N random pairs (hi, hj) from H that satisfy δ1 ≤ d(f(hi), f(hj)) ≤ δ2. Select q random points on the segment connecting hi and hj, and denote the resulting sets by Hl, l = 1, …, N. Compute the objective points for all x ∈ Hl, l = 1, …, N. Construct hT(N)(ε). If hT(N)(ε) satisfies the requirements, stop. 2. Forming a new base. 3. Filtering, if needed. Start the next iteration.
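The cross-over step of the plastering method, generating q points on the segment between each of N sampled pairs, can be sketched as follows. The names are hypothetical, and the sketch assumes at least one pair of base decisions satisfies the distance constraint:

```python
import random

def plastering_points(H, f, d, delta1, delta2, N, q, seed=0):
    """One testing step of the plastering method: pick N random pairs of
    base decisions whose images are between delta1 and delta2 apart in
    objective space, and return q random points on each connecting
    segment (a cross-over of the two parent decisions)."""
    rng = random.Random(seed)
    pairs = [(hi, hj) for i, hi in enumerate(H) for hj in H[i + 1:]
             if delta1 <= d(f(hi), f(hj)) <= delta2]
    offspring = []
    for _ in range(N):
        hi, hj = rng.choice(pairs)
        for _ in range(q):
            t = rng.random()    # random point on the segment [hi, hj]
            offspring.append(tuple(a + t * (b - a) for a, b in zip(hi, hj)))
    return offspring
```

The distance window [δ1, δ2] keeps the cross-over local: pairs whose images are too close add nothing new, while pairs that are too far apart are unlikely to produce near-Pareto offspring.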

  30. A simple hybrid method: a combination of the two-phase, three-phase and plastering methods • Iterations of the two-phase method are carried out until hT(N)(0) and εmax = δ(f(Φ(HN)), T*) are close to zero. • Iterations of the three-phase method are carried out until hT(N)(0) and εmax = δ(f(Φ(HN)), T*) for it are close to zero. • Iterations of the plastering (genetic) method are carried out until hT(N)(0) and εmax = δ(f(Φ(HN)), T*) for it satisfy the requirements.

  31. Study of cooling equipment in continuous casting of steel. The research was carried out jointly with Dr. Kaisa Miettinen at the University of Jyväskylä, Finland.

  32. Cooling in the continuous casting process

  33. Criteria. J1 is the original single optimization criterion: the deviation from the desired surface temperature of the steel strand must be minimized. J2 to J5 are penalty criteria introduced to describe violations of the constraints imposed on: J2 – the surface temperature; J3 – the gradient of the surface temperature along the strand; J4 – the temperature after point z3; and J5 – the temperature at point z5. J2 to J5 were considered in this study.

  34. Description of the module. The FEM/FDM module was developed in Finland by researchers from the University of Jyväskylä. Properties of the model: 325 control variables that describe the intensity of water application. Properties of the local simulation-based optimization: one local optimization required about 11–12 evaluations of the gradient and about 1000–2000 additional evaluations of the value of f(x).

  35. The next pictures demonstrate the approximation.

  36. Parallel computing

  37. Parallel computing (processor clusters and grid computing). The method has a form that is suitable for parallel computing; thus it can easily be implemented on parallel platforms – it is sufficient to separate data generation from data analysis (research in the framework of a contract with the Russian Federal Agency for Science).

  38. An important property of our hybrid methods. Since our methods are based on random sampling, a partial loss of results is not dangerous: it affects the reliability of the results but does not destroy the process. Because of this, application in a GRID network is possible.

  39. A two-platform implementation is needed
