
"Classical" Inference

"Classical" Inference. Two simple inference scenarios. Question 1: Are we in world A or world B?. Possible worlds: World A World B. Jerzy Neyman and Egon Pearson. D : Decision in favor of:. H 0 : Null Hypothesis. H 1 : Alternative Hypothesis.


Presentation Transcript


  1. "Classical" Inference

  2. Two simple inference scenarios • Question 1: Are we in world A or world B?

  3. Possible worlds: World A, World B

  4. Jerzy Neyman and Egon Pearson

  5. D: Decision in favor of H0 (Null Hypothesis) or H1 (Alternative Hypothesis). T: The truth of the matter: H0 (Null Hypothesis) or H1 (Alternative Hypothesis). Crossing D with T gives four outcomes: deciding for H1 when H0 is true is a Type I error; deciding for H0 when H1 is true is a Type II error.

  6. Definition. A subset C of the sample space is a best critical region of size α for testing the hypothesis H0 against the hypothesis H1 if • P(X ∈ C; H0) = α, and • for every subset A of the sample space, whenever P(X ∈ A; H0) = α, we also have P(X ∈ C; H1) ≥ P(X ∈ A; H1).

  7. Neyman-Pearson Theorem: Let L(θ; x) denote the likelihood of the sample. Suppose that for some k > 0: • L(θ0; x) / L(θ1; x) ≤ k for each x ∈ C, • L(θ0; x) / L(θ1; x) ≥ k for each x ∉ C, and • P(X ∈ C; H0) = α. Then C is a best critical region of size α for the test of the simple hypothesis H0: θ = θ0 against the simple alternative H1: θ = θ1.
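As a concrete illustration, a minimal sketch (my own Python example with NumPy/SciPy, not from the slides): for two simple Normal hypotheses the likelihood ratio L(θ0; x)/L(θ1; x) is decreasing in the sample mean, so the Neyman-Pearson region of size α takes the form {x : x̄ ≥ c}; a short simulation checks its size under H0 and its power under H1.

```python
import numpy as np
from scipy import stats

n, alpha = 25, 0.05
mu0, mu1, sigma = 0.0, 1.0, 1.0   # H0: mu = 0 vs. H1: mu = 1, sigma known

# For N(mu, sigma^2) data the likelihood ratio L(mu0)/L(mu1) decreases in x_bar,
# so the best critical region is {x_bar >= c} with P(X_bar >= c; H0) = alpha.
c = stats.norm.ppf(1 - alpha, loc=mu0, scale=sigma / np.sqrt(n))

# Monte Carlo check of the region's size (under H0) and power (under H1).
rng = np.random.default_rng(0)
xbar_H0 = rng.normal(mu0, sigma, size=(100_000, n)).mean(axis=1)
xbar_H1 = rng.normal(mu1, sigma, size=(100_000, n)).mean(axis=1)
print("size  ~", np.mean(xbar_H0 >= c))   # close to alpha = 0.05
print("power ~", np.mean(xbar_H1 >= c))   # close to 1 for this separation
```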

  8. When the null and alternative hypotheses are both Normal, the relation between the power of a statistical test (1 − β) and α is given by the formula 1 − β = Φ(√n (μ1 − μ0)/σ − q), where • Φ is the cdf of N(0,1), and q is the quantile determined by α (for a one-sided test, q = Φ⁻¹(1 − α)). • α fixes the type I error probability, but increasing n reduces the type II error probability β.
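A minimal sketch of that formula (my own Python, assuming SciPy; the test here is the one-sided z-test with known σ): α stays fixed while the power 1 − β rises with n.

```python
import numpy as np
from scipy.stats import norm

def power(n, mu0=0.0, mu1=0.5, sigma=1.0, alpha=0.05):
    """Power of the one-sided z-test of H0: mu = mu0 vs. H1: mu = mu1 > mu0."""
    q = norm.ppf(1 - alpha)                               # quantile determined by alpha
    return norm.cdf(np.sqrt(n) * (mu1 - mu0) / sigma - q)

for n in (10, 25, 50, 100):
    print(n, round(power(n), 3))   # alpha fixed at 0.05; type II error shrinks with n
```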

  9. Question 2: Does the evidence suggest our world is not like World A?

  10. World A

  11. Sir Ronald Aylmer Fisher

  12. Fisherian theory • Significance tests: their disjunctive logic, and p-values as evidence: • "[This very low p-value] is amply low enough to exclude at a high level of significance any theory involving a random distribution… The force with which such a conclusion is supported is logically that of the simple disjunction: Either an exceptionally rare chance has occurred, or the theory of random distribution is not true." (Fisher 1959, 39)

  13. Fisherian theory • "The meaning of 'H is rejected at level α' is 'Either an event of probability α has occurred, or H is false', and our disposition to disbelieve H arises from our disposition to disbelieve in events of small probability." (Barnard 1967, 32)

  14. Fisherian theory: Distinctive features • Notice that the actual data x is used to define the event whose significance is evaluated. • Also based on H0 and H1 • Can only reject H0; evidence cannot allow one to accept H0. • Many other theories besides H0 could also explain the data.
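To make the disjunctive logic concrete, a minimal sketch (a hypothetical coin example of my own, assuming a recent SciPy): the event "a result at least as extreme as the observed x" is defined by the actual data, and its probability under H0 is the p-value.

```python
from scipy import stats

# Hypothetical data: 36 heads in 100 tosses; H0: the coin is fair (p = 0.5).
result = stats.binomtest(36, n=100, p=0.5, alternative='two-sided')
print(result.pvalue)   # small: either an exceptionally rare chance occurred, or H0 is false
```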

  15. Common philosophical simplification: • Hypothesis space given qualitatively; • H0 vs. ¬H0, • Murderer was Professor Plum, Colonel Mustard, Miss Scarlett, or Mrs. Peacock • More typical situation: • Very strong structural assumptions • Hypothesis space given by unknown numeric 'parameters' • Test uses: • a transformation of the raw data, • a probability distribution for this transformation (≠ the original distribution of interest)

  16. Three Commonly Used Facts • Assume {X1, …, Xn} is a collection of independent and identically distributed (i.i.d.) random variables. • Assume also that the Xi share a mean of μ and a standard deviation of σ.

  17. Three Commonly Used Facts • For the mean estimator X̄ = (X1 + … + Xn)/n: • E[X̄] = μ • Var(X̄) = σ²/n, so SD(X̄) = σ/√n
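A minimal simulation sketch (my own, with NumPy) of these two facts about the mean estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 3.0, 2.0, 25, 200_000

# Draw many samples of size n and look at the distribution of the sample mean.
xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
print(round(xbar.mean(), 3))               # ~ mu = 3.0
print(round(xbar.var(), 4), sigma**2 / n)  # ~ sigma^2 / n = 0.16
```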

  18. Three Commonly Used Facts • The Central Limit Theorem. If {X1, …, Xn} are i.i.d. random variables from a distribution with mean μ and variance σ², then √n (X̄n − μ)/σ converges in distribution to N(0,1) as n → ∞. • Equivalently: for large n, X̄n is approximately distributed N(μ, σ²/n).
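A minimal sketch of the CLT in action (my own example, NumPy/SciPy): draws from a skewed Exponential(1) distribution, whose mean and standard deviation are both 1, still give a standardized sample mean that is approximately N(0, 1).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu = sigma = 1.0                      # Exponential(1): mean 1, sd 1
n, reps = 50, 100_000

x = rng.exponential(scale=1.0, size=(reps, n))
z = np.sqrt(n) * (x.mean(axis=1) - mu) / sigma

# Empirical quantiles of the standardized mean vs. the N(0,1) quantiles.
for p in (0.05, 0.5, 0.95):
    print(p, round(np.quantile(z, p), 2), round(stats.norm.ppf(p), 2))
```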

  19. Examples • Data: January 2012 CPS • Sample: PhDs, working full time, age 28–34 • H0: mean income is $75k

  20. Sample incomes (n = 32): 21996.00 89999.52 119999.9 40999.92 67600.00 68640.00 96999.76 77296.96 65000.00 71999.72 100100.0 45999.72 149999.7 19968.00 10140.00 37999.52 74999.60 69992.00 31740.80 65000.00 57512.00 87984.00 35999.60 38939.68 99999.64 74999.60 149999.7 47996.00 62920.00 62920.00 54999.88 104000.0

  21. Hyp.   Value        Probability
      H0     -1.024022    0.3138
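The reported test can be reproduced (a sketch assuming NumPy/SciPy rather than the software actually used for the slides) by running a one-sample t-test of H0: mean income = 75,000 on the 32 incomes listed on the previous slide:

```python
import numpy as np
from scipy import stats

incomes = np.array([
    21996.00, 89999.52, 119999.9, 40999.92, 67600.00, 68640.00, 96999.76, 77296.96,
    65000.00, 71999.72, 100100.0, 45999.72, 149999.7, 19968.00, 10140.00, 37999.52,
    74999.60, 69992.00, 31740.80, 65000.00, 57512.00, 87984.00, 35999.60, 38939.68,
    99999.64, 74999.60, 149999.7, 47996.00, 62920.00, 62920.00, 54999.88, 104000.0,
])

t_stat, p_value = stats.ttest_1samp(incomes, popmean=75_000)
print(round(t_stat, 6), round(p_value, 4))   # should match the slide: ~ -1.024, ~ 0.314
```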

  22. Comments • The background conditions (e.g., the i.i.d. condition behind the sample) are a clear example of 'Quine-Duhem' conditions. • When background conditions are met, "large samples" don't make inferences "more certain" • Multiple tests • Monitoring or "peeking" at data, etc.

  23. Point estimates and Confidence Intervals

  24. Many desiderata of an estimator: • Consistent • Maximum Likelihood • Unbiased • Sufficient • Minimum variance • Minimum MSE (mean squared error) • (most) efficient

  25. By CLT, approximately X̄ ~ N(μ, σ²/n). • Thus: P(−z(α/2) ≤ (X̄ − μ)/(σ/√n) ≤ z(α/2)) ≈ 1 − α, where z(α/2) is the upper α/2 quantile of N(0,1). • By algebra: P(X̄ − z(α/2)·σ/√n ≤ μ ≤ X̄ + z(α/2)·σ/√n) ≈ 1 − α. • So: X̄ ± z(α/2)·σ/√n is an approximate (1 − α) confidence interval for μ.

  26. Interpreting confidence intervals • The only probabilistic component that determines what occurs is X̄: the interval's endpoints are the random quantities. • Everything else (in particular μ) is a constant. • Simulations, examples • Question: Why "center" the interval?
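Since the slide points to simulations, a minimal coverage sketch (my own example, NumPy/SciPy): μ is held fixed while the interval's endpoints vary from sample to sample, and roughly 95% of the intervals cover μ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, reps, alpha = 10.0, 2.0, 40, 10_000, 0.05
z = stats.norm.ppf(1 - alpha / 2)

# Repeat the experiment many times; the endpoints are random, mu is not.
xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
moe = z * sigma / np.sqrt(n)
covered = (xbar - moe <= mu) & (mu <= xbar + moe)
print(covered.mean())   # ~ 0.95
```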

  27. Confidence Intervals • $68,898.16 ± $12,152.85 • "C.I. = mean ± m.o.e." • = ($56,745.32, $81,051.01)
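A sketch of the computation (assuming SciPy, and using the sample summary from slide 30: n = 32, mean $68,898.16, s.d. $33,707.49; the margin of error uses the t critical value with 31 degrees of freedom): it reproduces the interval above up to rounding.

```python
import numpy as np
from scipy import stats

n, mean, sd = 32, 68898.16, 33707.49                    # sample summary (slide 30)
moe = stats.t.ppf(0.975, df=n - 1) * sd / np.sqrt(n)    # margin of error
print(round(moe, 2))                                    # ~ 12,153 (slide: 12,152.85)
print(round(mean - moe, 2), round(mean + moe, 2))       # ~ (56,745, 81,051)
```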

  28. Using similar logic, but different computing formulae, one can extend these methods to address further questions • e.g., for standard deviations, equality of means across groups, etc.

  29. Equality of Means: BAs
      Sex   Count   Mean       Std. Dev.
      1     223     63619.54   31370.01
      2     209     51395.43   25530.66
      All   432     57705.56   29306.13
      Test of equality:  Value = 4.424943   Probability = 0.0000

  30. Equality of Means: PhDs
      Sex   Count   Mean       Std. Dev.
      –     21      66452.71   36139.78
      –     11      73566.76   29555.10
      All   32      68898.16   33707.49
      Test of equality:  Value = -0.560745   Probability = 0.5791
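Both equality-of-means results can be reproduced from the summary statistics above (a sketch assuming SciPy, not the original software; the reported values appear to match a pooled two-sample t-test up to rounding):

```python
from scipy import stats

# BAs: the two sex groups from slide 29 (mean, std. dev., count for each group).
t_ba, p_ba = stats.ttest_ind_from_stats(63619.54, 31370.01, 223,
                                        51395.43, 25530.66, 209, equal_var=True)
# PhDs: the two groups from slide 30.
t_phd, p_phd = stats.ttest_ind_from_stats(66452.71, 36139.78, 21,
                                          73566.76, 29555.10, 11, equal_var=True)
print(round(t_ba, 6), round(p_ba, 4))    # ~ 4.4249, 0.0000
print(round(t_phd, 6), round(p_phd, 4))  # ~ -0.5607, 0.5791
```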
