
Statistics between Inductive Logic and Empirical Science


Presentation Transcript


  1. 3rd PROGIC Workshop, Canterbury Statistics between Inductive Logic and Empirical Science Jan Sprenger University of Bonn Tilburg Center for Logic and Philosophy of Science

  2. I. The Logical Image of Statistics

  3. Inductive Logic • Deductive logic discerns valid, truth-preserving inferences: P; P → Q ⊨ Q

  4. Inductive Logic • Deductive logic discerns valid, truth-preserving inferences: P; P → Q ⊨ Q • Inductive logic generalizes that idea to non-truth-preserving inferences: P; P supports Q ⊨ (more) probably Q

  5. Inductive Logic • Inductive logic: truth of premises indicates truth of conclusions. Main concepts: confirmation, evidential support

  6. Inductive Logic • Inductive logic: truth of premises indicates truth of conclusions • Inductive inference: objective and independent of external factors. Main concepts: confirmation, evidential support
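
A minimal numerical sketch of the probability-raising notion of confirmation behind these slides (all numbers are illustrative, not from the talk): evidence E supports hypothesis H exactly when P(H|E) > P(H).

```python
# Illustrative sketch of Bayesian confirmation as probability raising.
# All probabilities below are made-up numbers for illustration.

p_h = 0.3          # prior probability of hypothesis H
p_e_given_h = 0.8  # likelihood of evidence E given H
p_e = 0.5          # marginal probability of E

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_h_given_e = p_e_given_h * p_h / p_e

print(f"P(H) = {p_h}, P(H|E) = {p_h_given_e:.2f}")
print("E confirms H" if p_h_given_e > p_h else "E does not confirm H")
# P(H|E) = 0.48 > 0.30: the premise raises the probability of the
# conclusion without deductively guaranteeing its truth.
```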

  7. The Logical Image of Statistics • Statistics infers from particular data to general models

  8. The Logical Image of Statistics • Statistics infers from particular data to general models • Formal theory of inductive inference, governed by general, universally applicable principles

  9. The Logical Image of Statistics • Statistics infers from particular data to general models • Formal theory of inductive inference, governed by general, universally applicable principles • Separation of statistics and decision theory (statistics summarizes data in a way that makes a decision-theoretic analysis possible)

  10. The Logical Image of Statistics • Contains theoretical (mathematics, logic) as well as empirical elements (problem-based engineering of useful methods, interaction with "real science") Where is statistics located on that scale?

  11. The Logical Image of Statistics • Pro: mathematical, "logical" character of theoretical statistics

  12. The Logical Image of Statistics • Pro: mathematical, "logical" character of theoretical statistics • Pro: mechanical character of a lot of statistical practice (SPSS & Co.)

  13. The Logical Image of Statistics • Pro: mathematical, "logical" character of theoretical statistics • Pro: mechanical character of a lot of statistical practice (SPSS & Co.) • Pro: connection between Bayesian statistics and probabilistic logic

  14. The Logical Image of Statistics • Pro: mathematical, "logical" character of theoretical statistics • Pro: mechanical character of a lot of statistical practice (SPSS & Co.) • Pro: connection between Bayesian statistics and probabilistic logic • Cons: presented in this work...

  15. II. Parameter Estimation

  16. A Simple Experiment • Five random numbers are drawn from {1, 2, ..., N} (N unknown): • 21, 4, 26, 18, 12 • What is the optimal estimate of N on the basis of the data?

  17. A Simple Experiment • Five random numbers are drawn from {1, 2, ..., N} (N unknown): • 21, 4, 26, 18, 12 • What is the optimal estimate of N on the basis of the data? That depends on the loss function!
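
This is the classic "German tank" estimation problem. As a first sketch (assuming the draws are made without replacement, which the slide leaves open, and using two standard textbook estimators), reasonable point estimates already disagree before any loss function is fixed:

```python
# Two standard point estimates for N, given k draws without
# replacement from {1, ..., N}.
data = [21, 4, 26, 18, 12]
k, m = len(data), max(data)

mle = m                # maximum likelihood estimate: the sample maximum
umvu = m + m / k - 1   # minimum-variance unbiased estimate ("German tank" formula)

print(f"MLE: {mle}, unbiased estimate: {umvu:.1f}")  # MLE: 26, unbiased: 30.2
```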

  18. Estimation and Loss Functions • Aim: estimated parameter value close to true value • Loss function measures distance between estimated and true value

  19. Estimation and Loss Functions • Aim: estimated parameter value close to true value • Loss function measures distance between estimated and true value • Choice of loss function sensitive to external constraints
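
A minimal sketch of three common loss functions (the choice of these three as examples is mine, not the slides'): they penalize the same estimation error very differently, which is why the "optimal" estimate depends on the choice.

```python
# Three standard loss functions L(estimate, true value).
def squared_loss(est, true):   # penalizes large errors heavily
    return (est - true) ** 2

def absolute_loss(est, true):  # penalizes errors linearly
    return abs(est - true)

def zero_one_loss(est, true):  # only an exact hit avoids loss
    return 0 if est == true else 1

for loss in (squared_loss, absolute_loss, zero_one_loss):
    print(loss.__name__, loss(30, 26))  # estimate 30 vs. true value 26
```

Under a posterior distribution, squared loss is minimized by the posterior mean, absolute loss by the median, and zero-one loss by the mode, so each loss function singles out a different "optimal" estimate.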

  20. A Bayesian approach • Elicit prior distribution for the parameter N • Use incoming data for updating via conditionalization • Summarize data in a posterior distribution (credal set, etc.) • Perform a decision-theoretic analysis
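
A sketch of this recipe for the number-guessing experiment above (the uniform prior and the grid bound N_max = 200 are illustrative assumptions; the likelihood of an unordered sample without replacement is 1/C(N, k) for N ≥ max(data)):

```python
from math import comb

data = [21, 4, 26, 18, 12]
k, m = len(data), max(data)
N_max = 200  # illustrative upper bound for the parameter grid

# 1. Prior: uniform over {m, ..., N_max}.
Ns = range(m, N_max + 1)
prior = {N: 1 / (N_max - m + 1) for N in Ns}

# 2./3. Conditionalize on the data and normalize into a posterior.
unnorm = {N: prior[N] / comb(N, k) for N in Ns}
total = sum(unnorm.values())
posterior = {N: p / total for N, p in unnorm.items()}

# 4. Decision-theoretic analysis: the optimal point estimate
#    depends on the loss function chosen above.
mean = sum(N * p for N, p in posterior.items())  # optimal under squared loss
mode = max(posterior, key=posterior.get)         # optimal under zero-one loss
cum, median = 0.0, None
for N in Ns:                                     # optimal under absolute loss
    cum += posterior[N]
    if cum >= 0.5:
        median = N
        break

print(f"posterior mean {mean:.1f}, median {median}, mode {mode}")
```

Note that steps 1-3 produce the posterior without any reference to utilities; only step 4 brings the loss function in. This is the separation of statistics and decision theory claimed on slide 9.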

  21. III. Model Selection

  22. Model Selection • True model usually "out of reach" • Main idea: minimizing discrepancy between the approximating and the true model • Discrepancy can be measured in various ways • cf. choice of a loss function • Kullback-Leibler divergence, Gauss distance, etc.
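
A minimal sketch of one such discrepancy measure, the Kullback-Leibler divergence (the distributions below are toy examples):

```python
from math import log

def kl_divergence(p, q):
    """D(p || q) for discrete distributions over the same outcomes."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]   # the "true" model (illustrative)
q1 = [0.4, 0.4, 0.2]  # candidate model 1
q2 = [0.8, 0.1, 0.1]  # candidate model 2

print(f"{kl_divergence(p, q1):.3f}")  # ~0.025: closer to the truth
print(f"{kl_divergence(p, q2):.3f}")  # ~0.233: farther from the truth
```

Unlike a metric such as the Gauss (squared) distance, K-L divergence is not symmetric, so the choice of discrepancy measure already shapes the comparison, just as the choice of a loss function does.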

  23. Model Selection • Many model selection procedures focus on estimating the discrepancy between the candidate model and the true model • Choose the model with the lowest estimated discrepancy to the true model. That is easier said than done...
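
One standard way to cash this out is Akaike's information criterion, which estimates a model's expected K-L discrepancy to the truth up to a constant. A hedged sketch under illustrative assumptions (polynomial candidate models, Gaussian errors, simulated data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 1 + 2 * x + rng.normal(0, 0.1, size=x.size)  # data from a linear truth

def aic(x, y, degree):
    """AIC for a polynomial fit with Gaussian errors:
    n * log(RSS / n) + 2 * (number of free parameters)."""
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    n, p = x.size, degree + 2  # polynomial coefficients + error variance
    return n * np.log(rss / n) + 2 * p

for d in (1, 2, 3):
    print(f"degree {d}: AIC = {aic(x, y, d):.1f}")
# The lowest AIC marks the smallest estimated expected discrepancy
# to the true model (here, typically the linear fit).
```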

  24. Problem-specific Premises • Asymptotic behavior • Small or large candidate model set? • Nested vs. non-nested models • Linear vs. non-linear models • Random error structure

  25. Problem-specific Premises • Asymptotic behavior • Small or large candidate model set? • Nested vs. non-nested models • Linear vs. non-linear models • Random error structure Scientific understanding required to fix the premises!

  26. Bayesian Model Selection • Idea: search for the most probable model (or the model that has the highest Bayes factor) • Variety of Bayesian methods (BIC, intrinsic and fractional Bayes factors, ...)

  27. Bayesian Model Selection • Idea: search for the most probable model (or the model that has the highest Bayes factor) • Variety of Bayesian methods (BIC, intrinsic and fractional Bayes factors, ...) Does Bayes show a way out of the problems?
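
A sketch of the simplest method on that list, BIC, reusing the polynomial setup above (the exp((BIC2 - BIC1)/2) step is the standard large-sample approximation to the Bayes factor):

```python
import numpy as np

def bic(x, y, degree):
    """BIC for a polynomial fit with Gaussian errors:
    n * log(RSS / n) + (number of free parameters) * log(n)."""
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    n, p = x.size, degree + 2
    return n * np.log(rss / n) + p * np.log(n)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
y = 1 + 2 * x + rng.normal(0, 0.1, size=x.size)

b1, b2 = bic(x, y, 1), bic(x, y, 2)
# exp((BIC_2 - BIC_1) / 2) approximates the Bayes factor of the
# linear over the quadratic model for large samples.
print(f"approximate Bayes factor (linear vs. quadratic): {np.exp((b2 - b1) / 2):.2f}")
```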

  28. Bayesian Model Selection • If the true model is not contained in the set of candidate models: must Bayesian methods be justified by their distance-minimizing properties?

  29. Bayesian Model Selection • If the true model is not contained in the set of candidate models: must Bayesian methods be justified by their distance-minimizing properties? • It is not trivial that a particular distance function (e.g. K-L divergence) is indeed minimized by the model with the highest posterior! • Bayesian probabilities = probabilities of being close to the true model?

  30. Model Selection and Parameter Estimation • In the elementary parameter estimation case, posterior distributions were independent of decision-theoretic elements (utilities/loss functions) • The reasonableness of a posterior distribution in Bayesian model selection is itself relative to the choice of a distance/loss function

  31. IV. Conclusions

  32. Conclusions (I) • Quality of a model selection method subject to a plethora of problem-specific premises • Model selection methods must be adapted to a specific problem ("engineering")

  33. Conclusions (I) • Quality of a model selection method subject to a plethora of problem-specific premises • Model selection methods must be adapted to a specific problem ("engineering") • Bayesian methods in model selection should have an instrumental interpretation • Difficult to separate proper statistics from decision theory

  34. Conclusions (II) • Optimality of an estimator is a highly ambiguous notion • Statistics more akin to scientific modelling than to a branch of mathematics? • More empirical science than inductive logic?

  35. Thanks a lot for your attention!!! © by Jan Sprenger, Tilburg, September 2007
