
Model Discrimination




  1. Model Discrimination Martina Heitzig

  2. Outline of the presentation • Prior, posterior probabilities and Bayes' theorem • Model discrimination based on Bayes' theorem – Stewart • Case studies – Stewart • Confidence intervals

  3. Prior, posterior probability distribution and Bayes' theorem Prior probability distribution • Before designing and/or evaluating new experiments the modeler always has some prior knowledge about the system. • Prior knowledge can come from preliminary or already published experimental data, from expert knowledge, or from literature on parameter values, reaction mechanisms, modeling of similar systems, etc. • Using Bayesian methods this knowledge is taken into account by summarizing and expressing it in the a priori (prior) probability distributions. • For model discrimination and parameter estimation the a priori probabilities considered are: a) the prior probability distribution of the different model candidates, b) the prior probability distribution of the parameter values.

  4. Prior, posterior probability distribution and Bayes' theorem Posterior probability distribution • The posterior probability reflects the knowledge about the system after the newly available experimental data have been evaluated -> it represents the updated knowledge about the system! • The posterior probability density (for model candidates and parameter values) is obtained by weighting the prior knowledge about the system with the newly available information from the data (represented by the likelihood function). • It is calculated by applying Bayes' theorem. • The posterior probability distribution from the previous step becomes the prior probability for the next step -> as more and more datasets are evaluated, the influence of the initial prior probability decreases.
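The formula itself did not survive transcription; the slide presumably showed the standard parameter form of Bayes' theorem, reconstructed here in generic notation:

```latex
p(\theta \mid Y, M) \;=\; \frac{p(Y \mid \theta, M)\, p(\theta \mid M)}{p(Y \mid M)}
\;\propto\; p(Y \mid \theta, M)\, p(\theta \mid M)
```

Here $p(\theta \mid M)$ is the prior over the parameters of model $M$, $p(Y \mid \theta, M)$ the likelihood of the data $Y$, and $p(Y \mid M)$ the normalizing evidence.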

  5. Model discrimination based on posterior probability – Bayes' theorem Posterior probability distribution of model Mj. Models contain unknown parameters -> the effect of the parameters is integrated out. Stewart et al. (1996) make the following assumptions to evaluate the probabilities and the integral: • The experimental error is normally distributed. • The prior probability of the parameters is constant. • The likelihood function is linearized around the point estimates of the parameters. • The prior probability of the parameters is calculated based on prior knowledge (the probability of model Mj equals its prior probability; normal error distribution).
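The equations lost from this slide presumably expressed the model posterior and the marginalization over the unknown parameters; a standard reconstruction is:

```latex
p(M_j \mid Y) \;=\; \frac{p(M_j)\, p(Y \mid M_j)}{\sum_k p(M_k)\, p(Y \mid M_k)},
\qquad
p(Y \mid M_j) \;=\; \int p(Y \mid \theta_j, M_j)\, p(\theta_j \mid M_j)\, d\theta_j
```

The listed assumptions (normal errors, flat parameter prior, linearized likelihood) are what make the integral over $\theta_j$ tractable in closed form.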

  6. Model discrimination based on posterior probability – Stewart Posterior probability distribution of model Mj in terms of the a priori quantities: the model prior p(Mj), the parameter prior p(θj|Mj), and the likelihood p(Y|Mj, σ).

  7. Model discrimination based on posterior probability – Stewart Four cases are distinguished: Case I a) single-response data, known variance; Case I b) single-response data, unknown variance; Case II a) multi-response data, known covariance matrix; Case II b) multi-response data, unknown covariance matrix.

  8. Case studies - Stewart Case study 1: discrimination between 2 models (M1 and M2) implementing different reaction mechanisms. Experimental data: generated with model M1; 18 duplicate data points (-> 36 in total).

  9. Case studies - Stewart Discrimination between models M1 and M2 based on their posterior probabilities, Case I b) (single-response data, variance unknown): • minimized sum of least squares for each model • number of parameters: Model M1: 2, Model M2: 3 • degrees of freedom for the experimental error estimate: 36 - 18 = 18
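As an illustration of case I b), the sketch below computes relative posterior model probabilities from the minimized sums of squares. It keeps only the dominant term, p_j ∝ π_j · S_j^(-(n - p_j)/2), and drops the determinant and Gamma-function factors of Stewart's full expression; the function name and the numerical inputs are illustrative, not the slide's actual values.

```python
import math

def posterior_probabilities(sse, n_params, n_obs, priors=None):
    """Relative posterior model probabilities for single-response data with
    unknown variance. Only the dominant term of the criterion is kept:
    p_j ~ pi_j * S_j^(-(n - p_j)/2), where S_j is model j's minimized sum of
    squares and p_j its number of parameters (a simplification that drops
    the determinant and Gamma-function factors of the full expression)."""
    if priors is None:
        priors = [1.0 / len(sse)] * len(sse)
    # Work in log space to avoid under-/overflow for large exponents.
    log_w = [math.log(pi) - 0.5 * (n_obs - p) * math.log(s)
             for pi, s, p in zip(priors, sse, n_params)]
    shift = max(log_w)
    w = [math.exp(lw - shift) for lw in log_w]
    total = sum(w)
    return [x / total for x in w]

# Illustrative numbers only (not the slide's actual sums of squares):
# M1 has 2 parameters, M2 has 3, with 36 observations as in case study 1.
probs = posterior_probabilities(sse=[1.2, 1.0], n_params=[2, 3], n_obs=36)
```

The exponent n - p_j ties the reward for a good fit to the number of parameters spent achieving it, which is the trade-off the later results slides discuss.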

  10. Case studies - Stewart Results:

  11. Case studies - Stewart Case study 2: discrimination between 4 models (M1 - M4) implementing different reaction mechanisms based on multi-response data. Experimental data: • generated with model M1 • 6 duplicate data points for each of the 3 measured variables A1, A2, A3 (-> 36 in total)

  12. Case studies - Stewart Discrimination between models M1 - M4 based on their posterior probabilities, Case II b) (multi-response data, variance unknown): • minimized determinant of the residual moment matrix for each model • number of parameters: Model M1: 2, Model M2: 3, Model M3: 1, Model M4: 4 • degrees of freedom for the experimental error estimate: 12 - 6 = 6 per measured variable
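A parallel sketch for case II b): models are scored by the determinant of their residual moment matrix V_j = E_j^T E_j (the Box-Draper determinant criterion). The weight π_j · |V_j|^(-(n - p_j)/2) used here is a simplification (Stewart's derivation yields a different exact exponent and additional factors), and all matrices and parameter counts below are illustrative.

```python
import numpy as np

def det_criterion_probabilities(residuals, n_params, priors=None):
    """Relative posterior probabilities for multi-response models with
    unknown error covariance. Each model j supplies its (n runs x m
    responses) residual matrix E_j; it is scored by the determinant of the
    residual moment matrix V_j = E_j^T E_j, with the heuristic weight
    pi_j * |V_j|^(-(n - p_j)/2) as a simple parameter penalty."""
    n = residuals[0].shape[0]
    if priors is None:
        priors = [1.0 / len(residuals)] * len(residuals)
    log_w = []
    for pi, E, p in zip(priors, residuals, n_params):
        # slogdet avoids overflow for small/large determinants.
        sign, logdet = np.linalg.slogdet(E.T @ E)
        log_w.append(np.log(pi) - 0.5 * (n - p) * logdet)
    w = np.exp(np.array(log_w) - max(log_w))
    return (w / w.sum()).tolist()

# Illustrative residual matrices for two models, 6 runs x 3 responses;
# model 1's residuals are deliberately smaller.
rng = np.random.default_rng(0)
E1 = rng.normal(scale=0.1, size=(6, 3))
E2 = rng.normal(scale=0.2, size=(6, 3))
probs = det_criterion_probabilities([E1, E2], n_params=[2, 3])
```

With clearly smaller residuals, model 1's determinant is smaller and it receives the larger probability share, matching the ranking behaviour described on the results slide.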

  13. Case studies - Stewart Results: Model M1 has the highest probability; model M3 is almost as probable. For models M2 and M4, the improvement of the fit due to the extra parameters cannot overcome the penalty for the additional model parameters.

  14. Case studies - Stewart Case study 3: discrimination of models for diffusion in a catalyst.

  15. Confidence intervals and Bayes' theorem • Confidence levels can be calculated from the posterior probability of the parameters: integrating the posterior probability distribution over a parameter sub-space R gives the probability that the parameter values lie within R. • Posterior probability of the parameters (Bayes' theorem). • Model discrimination based on the posterior probability of model Mj: for a single candidate model -> model discrimination relates the confidence level to the prior probabilities of the candidate models.
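A reconstruction of the integral described in the first bullet (standard notation, not necessarily the slide's exact symbols):

```latex
P(\theta \in R \mid Y) \;=\; \int_{R} p(\theta \mid Y)\, d\theta,
\qquad
p(\theta \mid Y) \;=\; \frac{p(Y \mid \theta)\, p(\theta)}{\int p(Y \mid \theta')\, p(\theta')\, d\theta'}
```

Choosing $R$ as the smallest region containing a given probability mass (e.g. 0.95) yields a Bayesian credibility region, the analogue of a classical confidence interval.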
