Model Discrimination Martina Heitzig
Outline of the presentation • Prior, posterior probabilities and Bayes' theorem • Model discrimination based on Bayes' theorem – Stewart • Case Studies – Stewart • Confidence intervals
Prior, posterior probability distribution and Bayes' theorem Prior probability distribution • Before designing and/or evaluating new experiments, the modeler always has some prior knowledge about the system. • This prior knowledge can come from preliminary or already published experimental data, expert knowledge, or literature on parameter values, reaction mechanisms, the modeling of similar systems, etc. • Using Bayesian methods, this knowledge can be taken into account by summarizing and expressing it in the a priori (prior) probability distributions. • For model discrimination and parameter estimation, the prior probabilities considered are: a) the prior probability distribution of the different model candidates, and b) the prior probability distribution of the parameter values.
Prior, posterior probability distribution and Bayes' theorem Posterior probability distribution • The posterior probability reflects the knowledge about the system after the newly available experimental data have been evaluated -> it represents the updated knowledge on the system! • The posterior probability density (for model candidates and parameter values) is obtained by weighting the prior knowledge with the newly available information from the data (represented by the likelihood function). • It is calculated by applying Bayes' theorem: p(θ|Y) = p(Y|θ) · p(θ) / p(Y) • The posterior probability distribution from the previous step becomes the prior probability for the next step -> as more and more datasets are evaluated, the influence of the initial prior probability decreases. (A minimal numerical sketch of this sequential updating follows below.)
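A minimal sketch of this sequential updating, assuming only that Bayes' theorem is applied dataset by dataset; all likelihood values below are invented placeholders, not values from the slides:

```python
# Minimal sketch of sequential Bayesian updating of model probabilities.
# The likelihood values per dataset are hypothetical placeholders.

def update(prior, likelihoods):
    """Return posterior model probabilities via Bayes' theorem."""
    unnormalized = [p * L for p, L in zip(prior, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

prior = [0.5, 0.5]                    # equal prior probability for M1, M2
datasets = [[0.8, 0.3], [0.7, 0.4]]   # p(Y_k | Mj), invented numbers

for lik in datasets:
    prior = update(prior, lik)        # the posterior becomes the next prior
    print(prior)
```

With each dataset, the updated probabilities replace the prior, so the influence of the initial prior shrinks as data accumulate.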
Model discrimination based on posterior probability – Bayes' theorem Posterior probability distribution of model Mj: p(Mj|Y) = p(Mj) · p(Y|Mj) / p(Y) Models contain unknown parameters -> integrate out the effect of the parameters: p(Y|Mj) = ∫ p(Y|θj, Mj) · p(θj|Mj) dθj Stewart et al. (1996): assumptions made to evaluate the probabilities and the integral (a sketch of the resulting approximation follows this list): • The experimental error is normally distributed • The prior probability of the parameters is constant • The likelihood function is linearized around the point estimates of the parameters • The prior probability of model Mj is assigned based on prior knowledge (together with the normal error distribution, this fixes the form of the posterior)
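The linearization assumption amounts to a Laplace approximation of the marginal-likelihood integral. The block below sketches that standard textbook step; it is a generic reconstruction, not necessarily Stewart's exact notation:

```latex
% Laplace approximation of the marginal likelihood of model M_j:
% expand the log-integrand around the posterior mode \hat{\theta}_j.
\[
p(Y \mid M_j) = \int p(Y \mid \theta_j, M_j)\, p(\theta_j \mid M_j)\, d\theta_j
\approx p(Y \mid \hat{\theta}_j, M_j)\, p(\hat{\theta}_j \mid M_j)\,
(2\pi)^{p_j/2} \left| H_j \right|^{-1/2},
\]
% where p_j is the number of parameters of M_j and H_j is the Hessian of
% -ln[ p(Y | theta_j, M_j) p(theta_j | M_j) ] evaluated at \hat{\theta}_j.
```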
Model discrimination based on posterior probability – Stewart Posterior probability distribution of model Mj is built from: • p(Mj): the a priori probability of model Mj • p(θj|Mj): the a priori probability of the parameters θj of model Mj • p(Y|Mj, σ): the likelihood of the data Y given model Mj and the error variance σ
Model discrimination based on posterior probability – Stewart • Case I a): single-response data, known variance • Case I b): single-response data, unknown variance • Case II a): multi-response data, known covariance matrix • Case II b): multi-response data, unknown covariance matrix (The slide gives a posterior probability expression for each case; a hedged reconstruction of the leading terms follows below.)
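Since the slide's formulas did not survive extraction, the block below reconstructs the standard forms these cases take under the stated assumptions (flat parameter priors, linearized likelihood). Only the leading data-fit terms are shown; the exact constants and exponents should be checked against Stewart et al.:

```latex
% Hedged reconstruction of the leading terms (constants omitted):
% S_j = minimized sum of squares, p_j = number of parameters,
% n = number of observations, v_j = residual moment matrix.
\begin{align*}
\text{I a)}\;& p(M_j \mid Y) \propto p(M_j)\,
  \exp\!\left(-\tfrac{S_j(\hat{\theta}_j)}{2\sigma^2}\right) \\
\text{I b)}\;& p(M_j \mid Y) \propto p(M_j)\,
  S_j(\hat{\theta}_j)^{-(n-p_j)/2} \\
\text{II a)}\;& p(M_j \mid Y) \propto p(M_j)\,
  \exp\!\left(-\tfrac{1}{2}\,\mathrm{tr}\!\left[\Sigma^{-1} v_j(\hat{\theta}_j)\right]\right) \\
\text{II b)}\;& p(M_j \mid Y) \propto p(M_j)\,
  \bigl|v_j(\hat{\theta}_j)\bigr|^{-\nu/2},
\quad v_j(\theta) = \sum_{u=1}^{n}
  \bigl[y_u - f_u(\theta)\bigr]\bigl[y_u - f_u(\theta)\bigr]^{\top}
\end{align*}
% The exponent nu in case II b) plays the role of (n - p_j) above; its exact
% degrees of freedom follow Stewart et al. and are not reproduced here.
```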
Case studies - Stewart Case study 1: Discrimination between 2 models implementing different reaction mechanisms (the rate equations of Model M1 and Model M2 are given on the slide). Experimental data: • generated with model M1 • 18 duplicate data points (-> total of 36)
Case studies - Stewart Discrimination between models M1 and M2 based on their posterior probabilities: Case I b): single-response data, variance unknown: • minimized sum of squared residuals Sj for each model • number of parameters: Model M1: 2, Model M2: 3 • degrees of freedom for the experimental error estimate: 36 - 18 = 18 (A numerical sketch of this calculation follows the list.)
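A minimal numerical sketch of the case I b) calculation, assuming the posterior share takes the leading-term form p(Mj|Y) ∝ p(Mj) · Sj^(-(n-pj)/2) from above; the residual sums below are invented placeholders, not the values from the slide:

```python
# Case I b) posterior probability shares from minimized sums of squares.
# Assumed leading term: p(Mj|Y) ∝ p(Mj) * Sj ** (-(n - pj) / 2).
# The Sj values are hypothetical; the slide's actual numbers are not reproduced.

n = 36                      # total number of observations
models = {
    "M1": {"S": 1.9, "p": 2, "prior": 0.5},   # S: hypothetical residual sum
    "M2": {"S": 2.4, "p": 3, "prior": 0.5},
}

weights = {
    name: m["prior"] * m["S"] ** (-(n - m["p"]) / 2)
    for name, m in models.items()
}
total = sum(weights.values())
for name, w in weights.items():
    print(f"p({name}|Y) = {w / total:.3f}")
```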
Case studies - Stewart Results: (the table of posterior probabilities shown on the slide is not reproduced here)
Case studies - Stewart Case study 2: Discrimination between 4 models implementing different reaction mechanisms based on multi-response data (the rate equations of Models M1 through M4 are given on the slide). Experimental data: • generated with model M1 • 6 duplicate data points for the 3 measured variables A1, A2, A3 (-> total of 36)
Case studies - Stewart Discrimination between models M1 through M4 based on their posterior probabilities: Case II b): multi-response data, covariance unknown: • minimized determinant of the residual moment matrix vj for each model • number of parameters: Model M1: 2, Model M2: 3, Model M3: 1, Model M4: 4 • degrees of freedom for the experimental error estimate: 6 (12 duplicate runs - 6 settings) (A numerical sketch follows the list.)
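A sketch of the multi-response analogue, assuming the posterior share uses the determinant of the residual moment matrix as in case II b) above; how pj enters the exponent is taken over from the single-response pattern as a placeholder, and all determinant values are hypothetical:

```python
# Case II b) sketch: posterior shares from the determinant of the residual
# moment matrix v_j = sum_u (y_u - f_u)(y_u - f_u)^T at theta_hat.
# Assumed leading term (single-response pattern reused as a placeholder):
#   p(Mj|Y) ∝ p(Mj) * det(v_j) ** (-(n - pj) / 2)
# All det(v_j) values are hypothetical, not the slide's numbers.

n = 12                                    # number of experimental runs
models = {
    "M1": {"det_v": 0.700, "p": 2},
    "M2": {"det_v": 0.690, "p": 3},
    "M3": {"det_v": 0.740, "p": 1},
    "M4": {"det_v": 0.685, "p": 4},
}
prior = 1.0 / len(models)                 # equal prior probabilities

weights = {
    name: prior * m["det_v"] ** (-(n - m["p"]) / 2)
    for name, m in models.items()
}
total = sum(weights.values())
for name, w in weights.items():
    print(f"p({name}|Y) = {w / total:.3f}")
```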
Case studies - Stewart Results: Model M1 has the highest probability; model M3 is almost as probable. For models M2 and M4, the improvement in fit gained by adding extra parameters cannot overcome the penalty on the additional model parameters.
Case studies - Stewart Case study 3: Discrimination of models for diffusion in a catalyst (the candidate models and results are given on the slide).
Confidence intervals and Bayes' theorem • Confidence levels can be calculated from the posterior probability of the parameters: integrating the posterior probability distribution over a parameter sub-space R gives the probability that the parameter values lie within R: P(θ ∈ R | Y) = ∫_R p(θ|Y) dθ • Posterior probability of the parameters (Bayes' theorem): p(θ|Y) = p(Y|θ) · p(θ) / p(Y) • Model discrimination is based on the posterior probability of model Mj; for a single candidate model this connects to the confidence level above -> model discrimination thus relates confidence levels and the prior probabilities of the candidate models. (A sketch of the Gaussian-posterior confidence region follows below.)
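As a hedged illustration (not from the slide): when the posterior is approximated as Gaussian via the linearization used earlier, the integral over R has a closed form, giving the familiar ellipsoidal confidence region:

```latex
% Under a linearized (Gaussian) posterior theta | Y ~ N(\hat{\theta}, C),
% the 100(1-alpha)% confidence region is the ellipsoid
\[
R_\alpha = \left\{ \theta :
(\theta - \hat{\theta})^{\top} C^{-1} (\theta - \hat{\theta})
\le \chi^2_{p,\,1-\alpha} \right\},
\qquad
\int_{R_\alpha} p(\theta \mid Y)\, d\theta = 1 - \alpha,
\]
% where p is the number of parameters and C the posterior covariance matrix.
```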