This presentation outlines the Local Probabilistic Sensitivity Measure (LPSM), originally defined by R.M. Cooke and J. van Noortwijk. While this measure aligns with the FORM method for linear models, it presents significant challenges in computation for non-linear models. The talk reviews the problems associated with calculating LPSM, particularly when using Monte Carlo simulations, and introduces Isaco's method as a potential solution. The results and conclusions suggest that current outcomes are unreliable, emphasizing the need for further investigation into the methodology's issues.
Local Probabilistic Sensitivity Measure By M.J. Kallen, March 16th, 2001
Presentation outline • Definition of the LPSM • Problems with calculating the LPSM • Possible solution: Isaco’s method • Results • Conclusions
LPSM Definition The following local sensitivity measure was proposed by R.M. Cooke and J. van Noortwijk: For a linear model this measure agrees with the FORM method; it can therefore be used to capture the local sensitivity of a non-linear model to the variables Xi.
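The defining formula on this slide did not survive extraction. Judging from the derivative discussed on the following slides, the measure is presumably based on the quantity below (a reconstruction; the original may include a normalization that is lost here):

```latex
\left.\frac{\partial}{\partial z}\,E(X_i \mid Z = z)\right|_{z = z_0}
```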
Problem with calculating the LPSM • The derivative of the conditional expectation can be determined analytically for only a few simple models. • Using a Monte Carlo simulation introduces several problems that result in a significant error.
Using Monte Carlo • Algorithm: • Draw and save a large number of samples • Compute the difference E(X|Z = z0 + ε) − E(X|Z = z0 − ε) and divide by 2ε For good results ε needs to be small, but then the number of samples used in step 2 is small and a large error is introduced after dividing by 2ε.
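A minimal sketch of this finite-difference Monte Carlo estimator. It uses a linear Gaussian toy model (Z = X1 + X2, for which E(X1 | Z = z) = z/2, so the true derivative is exactly 0.5) purely so the result can be checked; the sample size, step and window width are assumed values, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear Gaussian model: Z = X1 + X2 with X1, X2 ~ N(0, 1) independent.
# Then E(X1 | Z = z) = z / 2, so d/dz E(X1 | Z = z) = 0.5 exactly.
n = 1_000_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
z = x1 + x2

z0 = 1.0      # point of interest
step = 0.2    # finite-difference half-step (assumed value)
eps = 0.05    # conditioning-window half-width (assumed value)

def cond_mean(center):
    """Approximate E(X1 | Z = center) by averaging samples with Z near center."""
    mask = np.abs(z - center) < eps
    return x1[mask].mean()

# Step 2 of the algorithm: central difference divided by twice the step
deriv = (cond_mean(z0 + step) - cond_mean(z0 - step)) / (2 * step)
print(deriv)  # close to the true value 0.5
```

Shrinking the step and window reduces the bias but leaves fewer samples in each conditioning window, which is exactly the error trade-off the slide describes.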
Alternative: Isaco’s method An alternative way to calculate the derivative of the conditional expectation was proposed by Isaco Meilijson. The idea is to expand E(X|Z) around z0 using a Taylor expansion:
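The expansion itself is lost from the extracted slide; a second-order Taylor expansion of the conditional expectation, consistent with the surrounding discussion, would read (a reconstruction, not the original formula):

```latex
E(X \mid Z) \approx E(X \mid Z = z_0)
  + (Z - z_0) \left.\frac{\partial E(X \mid Z = z)}{\partial z}\right|_{z_0}
  + \frac{(Z - z_0)^2}{2} \left.\frac{\partial^2 E(X \mid Z = z)}{\partial z^2}\right|_{z_0}
```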
Isaco’s method (cont.) We can then calculate the covariance:
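The covariance formula is also missing from the slide. Taking the covariance of the Taylor expansion of E(X|Z) with Z gives (a reconstruction):

```latex
\operatorname{Cov}(X, Z) \approx
  \left.\frac{\partial E(X \mid Z = z)}{\partial z}\right|_{z_0} \operatorname{Var}(Z)
  + \frac{1}{2} \left.\frac{\partial^2 E(X \mid Z = z)}{\partial z^2}\right|_{z_0}
    \operatorname{Cov}\!\big((Z - z_0)^2,\, Z\big)
```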
Isaco’s method (cont.) The main idea in this algorithm is to take a ‘local distribution’ Z* such that the second-order term is equal to zero. By doing this we get:
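The resulting formula is missing from the slide. If Z* is chosen so that the covariance term involving the second derivative vanishes, the derivative reduces to a regression coefficient under Z* (a reconstruction):

```latex
\left.\frac{\partial E(X \mid Z = z)}{\partial z}\right|_{z_0}
  \approx \frac{\operatorname{Cov}^*(X, Z^*)}{\operatorname{Var}^*(Z^*)}
```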
Choosing Z* • We want to take Z* such that the second-order term vanishes. • Z* should be as close as possible to Z; therefore we want to minimize the relative information. This results in an entropy optimization problem.
Relative information Definition: the relative information of Q with respect to P is given by: “The distribution with minimum information with respect to a given distribution under given constraints is the smoothest distribution which has a density similar to the given distribution.”
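The defining formula is missing from the slide; the standard definition of the relative information (Kullback–Leibler divergence) of a distribution Q with density q with respect to P with density p is:

```latex
I(Q \mid P) = \int q(x) \,\ln \frac{q(x)}{p(x)} \, dx
```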
Solving the EO problem There are a number of ways to implement this entropy optimization problem. We have tried the following: • Newton’s method • the MOSEK toolbox for MATLAB
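As an illustration only (not the talk's actual MATLAB/MOSEK or Newton implementation), the entropy optimization can be sketched in Python with scipy's SLSQP solver: reweight a sample cloud of Z values, minimizing relative information with respect to the uniform weights. The two moment constraints below are assumptions about what Z* must satisfy, chosen to match the "second-order term vanishes" idea:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Sample cloud of Z values; the local distribution Z* is a reweighting q
# of these samples (assumed setup, for illustration).
z = rng.normal(size=50)
n = len(z)
z0 = 0.5

def rel_info(q):
    # Relative information of weights q with respect to uniform weights 1/n
    return np.sum(q * np.log(q * n))

# Assumed constraints: q is a probability vector, E*(Z*) = z0, and the
# third central moment about z0 vanishes.
constraints = [
    {"type": "eq", "fun": lambda q: q.sum() - 1.0},
    {"type": "eq", "fun": lambda q: q @ z - z0},
    {"type": "eq", "fun": lambda q: q @ (z - z0) ** 3},
]

res = minimize(
    rel_info,
    x0=np.full(n, 1.0 / n),
    bounds=[(1e-9, 1.0)] * n,
    constraints=constraints,
    method="SLSQP",
)
q = res.x
print(q @ z)  # weighted mean, pushed to z0 by the constraint
```

In practice a dedicated entropy-optimization routine (as in MOSEK) would be preferred over a general-purpose solver like this.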
Newton’s method There are a number of reasons not to use Newton’s method for solving the EO problem: • The implementation of Newton’s method requires a lot of work. • Solving the resulting linear system requires inverting a matrix, which in many cases introduces large errors.
MOSEK A much easier way of solving the EO problem is by using MOSEK, created by Erling Andersen: • The MOSEK toolbox has a special function for entropy optimization problems, so the variables and constraints are easily set up. • No lengthy calculations are needed; constraints can be changed in a few seconds.
Attempts to fix Isaco’s method • We have tried many things to get better results; these attempts mostly consisted of adding and/or changing constraints. • Using only the samples from a small interval around z0. • A few different approaches have been tried, but they all seem to give similar results.
Conclusions • So far the results cannot be trusted; I therefore recommend not using this method. • We need to gain insight into what is going wrong and why the method behaves this way. • Maybe Isaco Meilijson has an idea!