
STATS 730: Lectures 6



  1. STATS 730: Lectures 6
  • Exponential families
  • Minimal sufficiency
  • Mean squared error
  • Unbiased estimation
  • Cramér-Rao lower bound
  730 Lectures 5&6

  2. Exponential families
  • The mean is sufficient for the normal, Poisson, exponential, …
  • Is this a coincidence?
  • No: all are special cases of the exponential family of distributions.

  3. Exponential families (cont)
  These are joint distributions of the form
    f(x; θ) = exp{C(θ)S(x) + d(θ) + B(x)}.
  The range of X can't depend on θ. It is clear that S(x) is sufficient (why?).

  4. Exponential families (cont)
  The Poisson, normal and exponential are all members of this family. E.g. for an iid Exponential(θ) sample, the joint density is
    θⁿ exp(−θ Σxᵢ) = exp{−θ Σxᵢ + n log θ},
  so C(θ) = −θ, S(x) = Σxᵢ, d(θ) = n log θ and B(x) = 0.
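As a quick sanity check, the exponential log-likelihood above depends on the data only through S(x) = Σxᵢ: two samples with the same size and sum are indistinguishable to the likelihood. A minimal sketch (the helper name is illustrative, not course code):

```python
import math

def exp_loglik(xs, rate):
    """Log-likelihood of an iid Exponential(rate) sample:
    n*log(rate) - rate*sum(xs), i.e. exponential-family form with
    C(theta) = -theta, S(x) = sum(x_i), d(theta) = n*log(theta)."""
    n = len(xs)
    return n * math.log(rate) - rate * sum(xs)

# Two samples with the same size and sum give identical log-likelihoods
# at every rate: the data enter only through S(x).
a = [1.0, 2.0, 3.0]
b = [2.0, 2.0, 2.0]
same = all(abs(exp_loglik(a, r) - exp_loglik(b, r)) < 1e-12
           for r in (0.5, 1.0, 2.0))
```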

  5. Multiparameter exponential families
  • Sometimes θ is a vector parameter, e.g. N(μ, σ²) with θ = (μ, σ²).
  • Multiparameter exponential families are of the form
      exp{Σᵢ Cᵢ(θ)Sᵢ(x) + d(θ) + B(x)}.
  • The vector statistic (S₁(X), …, Sₖ(X)) is sufficient for θ.
  • Note: the dimensions of the parameter and the statistic don't have to be the same.

  6. Example: Normal
  Take N(μ, σ²), θ = (μ, σ²). Then the joint density is
    exp{ −(1/(2σ²)) Σxᵢ² + (μ/σ²) Σxᵢ − nμ²/(2σ²) − (n/2) log(2πσ²) },
  with C₁(θ) = −1/(2σ²), S₁(X) = ΣXᵢ², C₂(θ) = μ/σ², S₂(X) = ΣXᵢ, and d(θ) collecting the terms in θ alone.
  Hence (S₁(X), S₂(X)) = (ΣXᵢ², ΣXᵢ) is sufficient.
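The same point can be checked for the normal: any two samples sharing (ΣXᵢ², ΣXᵢ) produce identical likelihoods at every (μ, σ²). A minimal sketch (helper name is illustrative):

```python
import math

def normal_loglik(xs, mu, sigma2):
    """Joint log-density of an iid N(mu, sigma2) sample, written in the
    two-parameter exponential-family form:
    -(1/(2*sigma2))*sum(x_i^2) + (mu/sigma2)*sum(x_i)
      - n*mu^2/(2*sigma2) - (n/2)*log(2*pi*sigma2)."""
    n = len(xs)
    s1 = sum(x * x for x in xs)   # S1(X) = sum of squares
    s2 = sum(xs)                  # S2(X) = sum
    return (-s1 / (2 * sigma2) + mu * s2 / sigma2
            - n * mu * mu / (2 * sigma2)
            - n / 2 * math.log(2 * math.pi * sigma2))

# Samples sharing (S1, S2) are indistinguishable to the likelihood.
a = [0.0, 2.0]   # sums: (4, 2)
b = [2.0, 0.0]   # same sums, different order
ok = all(abs(normal_loglik(a, m, v) - normal_loglik(b, m, v)) < 1e-12
         for m, v in [(0.0, 1.0), (1.5, 2.0)])
```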

  7. Minimal sufficiency
  Suppose X₁, …, Xₙ are an iid sample from N(θ, 1). Then
  (a) X₁, …, Xₙ are sufficient
  (b) the order statistics X₍₁₎, …, X₍ₙ₎ are sufficient
  (c) the sample mean is sufficient
  (c) is a function of (b), and (b) is a function of (a), so (c) achieves the most "data reduction".

  8. Minimal sufficiency
  • A "minimal sufficient" statistic achieves the most data reduction. A statistic is minimal sufficient if it is a function of every other sufficient statistic.

  9. Minimal sufficiency (cont)
  • It's hard to recognise minimal sufficient statistics using the definition. Here is a theorem that is more useful.
  • See the notes, p. 23. The proof is not examinable.

  10. Minimal sufficiency theorem
  A statistic S is minimal sufficient if it has the following property:
    S(x) = S(y) iff f(x; θ)/f(y; θ) does not depend on θ.

  11. Example: Poisson
  The likelihood ratio is
    f(x; θ)/f(y; θ) = θ^(Σxᵢ − Σyᵢ) · ∏yᵢ! / ∏xᵢ!
  (the e^(−nθ) factors cancel). This does not depend on θ iff Σxᵢ = Σyᵢ.
  Thus the sample mean is minimal sufficient.
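The theorem can be checked mechanically for the Poisson: the sketch below (hypothetical helper, not course code) computes the ratio f(x; θ)/f(y; θ) directly and shows it is constant in θ exactly when the sums agree.

```python
import math

def poisson_ratio(xs, ys, theta):
    """f(x;theta)/f(y;theta) for iid Poisson samples of equal size:
    the e^{-n*theta} factors cancel, leaving
    theta^(sum(xs) - sum(ys)) * prod(y_i!) / prod(x_i!)."""
    fact = lambda ks: math.prod(math.factorial(k) for k in ks)
    return theta ** (sum(xs) - sum(ys)) * fact(ys) / fact(xs)

x, y = [1, 2, 3], [2, 2, 2]      # equal sums -> ratio is free of theta
r1 = poisson_ratio(x, y, 0.5)
r2 = poisson_ratio(x, y, 3.0)    # same value: 2/3 at every theta

x2, y2 = [1, 2, 3], [0, 0, 1]    # unequal sums -> ratio varies with theta
s1 = poisson_ratio(x2, y2, 0.5)
s2 = poisson_ratio(x2, y2, 3.0)
```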

  12. Small bias and small standard error
  • Now we return to the ideas of bias and standard error, and connect them with sufficiency.
  • Recall that we want both the bias and the standard error to be small. We can combine these in the idea of mean squared error:

  13. Mean squared error
  Defined as
    MSE(S) = E{(S − θ)²}.
  Can be written as
    MSE(S) = Var(S) + bias(S)², where bias(S) = E(S) − θ
  (the cross term 2 · bias(S) · E{S − E(S)} is zero!).
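The decomposition can be checked by simulation. The sketch below (illustrative helper names; the estimator is deliberately shrunk to create bias) estimates MSE, variance and bias for 0.9 × (sample mean) on N(θ, 1) samples and confirms MSE = var + bias²:

```python
import random

def mse_parts(estimator, theta, n, reps=20000, seed=1):
    """Monte-Carlo estimates of MSE, variance and bias of `estimator`
    applied to N(theta, 1) samples of size n."""
    rng = random.Random(seed)
    vals = [estimator([rng.gauss(theta, 1) for _ in range(n)])
            for _ in range(reps)]
    mean = sum(vals) / reps
    var = sum((v - mean) ** 2 for v in vals) / reps
    mse = sum((v - theta) ** 2 for v in vals) / reps
    bias = mean - theta
    return mse, var, bias

shrunk = lambda xs: 0.9 * sum(xs) / len(xs)   # deliberately biased
mse, var, bias = mse_parts(shrunk, theta=2.0, n=10)
gap = abs(mse - (var + bias ** 2))            # decomposition holds exactly
```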

  14. Mean squared error
  Ideally, we want an estimator with smaller mse than any other for all θ. But this is impossible! (why?)
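One way to see why (a sketch with hypothetical names): a constant estimator that ignores the data has zero mse at the θ it happens to equal, so no single estimator can have the smallest mse at every θ.

```python
def mse_constant(c, theta):
    """MSE of the estimator that always returns c:
    variance 0, bias^2 = (c - theta)^2."""
    return (c - theta) ** 2

def mse_mean(n):
    """MSE of the sample mean of n iid N(theta, 1) observations:
    unbiased, so MSE = variance = 1/n, for every theta."""
    return 1 / n

# The constant estimator beats the mean when theta is near c ...
near = mse_constant(2.0, 2.1) < mse_mean(10)   # 0.01 < 0.1
# ... but is far worse when theta is not.
far = mse_constant(2.0, 5.0) > mse_mean(10)    # 9.0 > 0.1
```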

  15. Mean squared error
  So either…
  • pick an estimator that has small mse for likely values of θ (Bayes estimator), or…
  • pick the estimator that has smallest mse within a restricted set of estimators.

  16. Mean squared error
  [Diagram: a restricted set of estimators shown inside the set of all estimators]

  17. Mean squared error
  The set we have in mind: unbiased estimators.
  [Diagram: the unbiased estimators of θ shown inside the set of all estimators of θ]

  18. Unbiased estimators
  An estimator S is unbiased if the bias is zero, i.e. if E(S) = θ.
  We want the unbiased estimator of θ that has smallest MSE (equivalently, smallest variance or standard error).
  This is called the UMVUE (uniformly minimum variance unbiased estimator).

  19. Finding UMVUEs
  • Find an estimator that attains the Cramér-Rao lower bound
  • Condition on sufficient statistics
  • Use the MLE

  20. CR lower bound
  • If S is an unbiased estimator of θ, then
      Var(S) ≥ 1/I(θ),
    where the score U and the information I(θ) are computed from the joint density f(x; θ).

  21. CR lower bound
  • U = ∂ log f(X; θ)/∂θ is the score function
  • I(θ) = E(UUᵀ) is the information matrix
  • θ is a vector parameter
  • Cov(S) ≥ I(θ)⁻¹ means that the matrix Cov(S) − I(θ)⁻¹ is non-negative definite.

  22. CR lower bound
  • Proof follows from the following steps:
  • Step 1:
      E(U) = ∫ (∂ log f/∂θ) f dx = ∫ ∂f/∂θ dx = ∂/∂θ ∫ f dx = 0.

  23. CR lower bound (cont)
  • Step 2:
      Cov(S, U) = E(SU) − E(S)E(U) = E(SU)   (the second term is 0, by Step 1!)
      = ∫ S ∂f/∂θ dx = ∂/∂θ E(S) = ∂θ/∂θ = I (the identity), since S is unbiased.

  24. CR lower bound (cont)
  • Step 3: Put T = (S, U). Then
      Cov(T) = [ Cov(S)   I    ]
               [ I        I(θ) ]
    is non-negative definite.
  • Thus the Schur complement Cov(S) − I(θ)⁻¹ is non-negative definite.
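The three steps can be checked numerically in the scalar case. Below is a minimal Monte-Carlo sketch (the `simulate` helper and the Knuth-style Poisson sampler are illustrative, not from the notes): for S = mean of n Poisson(θ) observations, E(U) ≈ 0, Cov(S, U) ≈ 1, and Var(S) equals the bound 1/I(θ) = θ/n, so the bound is attained here.

```python
import math
import random

def simulate(theta=2.0, n=5, reps=40000, seed=7):
    """Monte-Carlo check of the CR-bound ingredients for S = mean of n
    Poisson(theta) observations: E(U) = 0, Cov(S, U) = 1, and
    Var(S) = theta/n = 1/I(theta)."""
    rng = random.Random(seed)

    def pois():
        # Knuth's multiplicative method; fine for small theta
        limit, k, p = math.exp(-theta), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    S, U = [], []
    for _ in range(reps):
        xs = [pois() for _ in range(n)]
        S.append(sum(xs) / n)
        U.append(sum(xs) / theta - n)   # score of the joint density
    mS = sum(S) / reps
    mU = sum(U) / reps
    cov_su = sum((s - mS) * (u - mU) for s, u in zip(S, U)) / reps
    var_s = sum((s - mS) ** 2 for s in S) / reps
    return mU, cov_su, var_s

mU, cov_su, var_s = simulate()   # expect roughly 0, 1 and 0.4
```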

  25. Useful fact:
    I(θ) = E(UUᵀ) = −E(∂U/∂θᵀ) = −E(∂² log f/∂θ ∂θᵀ).
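The fact can be verified numerically for a single Poisson(θ) observation, where U = X/θ − 1 and both expressions equal 1/θ. The sketch below (illustrative names, not course code) computes each expectation by summing over a truncated support:

```python
import math

def poisson_pmf(k, theta):
    return math.exp(-theta) * theta ** k / math.factorial(k)

def info_two_ways(theta, kmax=80):
    """Fisher information for one Poisson(theta) observation, computed
    as E(U^2) and as -E(dU/dtheta), with U = X/theta - 1.
    Both should equal 1/theta."""
    e_u2 = sum(poisson_pmf(k, theta) * (k / theta - 1) ** 2
               for k in range(kmax))
    # dU/dtheta = -X/theta^2, so -E(dU/dtheta) = E(X)/theta^2
    e_negd = sum(poisson_pmf(k, theta) * k / theta ** 2
                 for k in range(kmax))
    return e_u2, e_negd

a, b = info_two_ways(2.0)   # both should be 1/theta = 0.5
```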

  26. Example
  • Consider a 2 × 2 contingency table with cell probabilities
      p11  p12
      p21  p22

  27. Example (cont)
  Assume a multinomial probability model, with cell probabilities
    pᵢⱼ = pᵢ qⱼ   (row probabilities pᵢ, column probabilities qⱼ),
  i.e. an independence model.

  28. Example (cont)
  The log likelihood is
    l = Σᵢⱼ nᵢⱼ log(pᵢ qⱼ) = Σᵢ nᵢ. log pᵢ + Σⱼ n.ⱼ log qⱼ.

  29. Example (cont)
  Writing p = p₁ (so p₂ = 1 − p), the score for p is
    ∂l/∂p = n₁./p − n₂./(1 − p).

  30. Example (cont)
  The information is
    I(p) = −E(∂²l/∂p²) = E(n₁.)/p² + E(n₂.)/(1 − p)² = n/p + n/(1 − p) = n/{p(1 − p)}.

  31. Example (cont)
  Similarly I(q) = n/{q(1 − q)}. Hence the CR lower bound for an unbiased estimator of p is p(1 − p)/n.

  32. Example (cont)
  Consider the usual estimate p̂ = n₁./n. It is unbiased, with Var(p̂) = p(1 − p)/n, which attains the CR lower bound. Must be the UMVUE!!
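The claim can be checked by simulation. Under the independence model the row total n₁. is Binomial(n, p), so p̂ = n₁./n has mean p and variance p(1 − p)/n. A minimal Monte-Carlo sketch (helper name is illustrative):

```python
import random

def simulate_phat(p=0.3, n=50, reps=30000, seed=3):
    """The row total n1. of the 2x2 table is Binomial(n, p) under the
    independence model, so p_hat = n1./n should have mean p and
    variance p(1-p)/n -- exactly the CR lower bound."""
    rng = random.Random(seed)
    vals = []
    for _ in range(reps):
        n1 = sum(rng.random() < p for _ in range(n))   # row-1 count
        vals.append(n1 / n)
    mean = sum(vals) / reps
    var = sum((v - mean) ** 2 for v in vals) / reps
    return mean, var

mean, var = simulate_phat()
bound = 0.3 * 0.7 / 50   # p(1-p)/n
```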
