
730 Lecture 7




Presentation Transcript


  1. 730 Lecture 7 • Today’s lecture: More on the Cramér-Rao (CR) bound (pp. 25-29) • Rao-Blackwellisation (pp. 29-33)

  2. CR lower bound • If S is an unbiased estimate of θ, then Cov(S) ≥ I(θ)⁻¹, where the score U and the information I(θ) are computed from the joint density f(x; θ) = f(x1; θ)⋯f(xn; θ) of the data.

  3. CR lower bound • U = ∂ log f(X; θ)/∂θ is the score function • I(θ) = Cov(U) is the information matrix • θ is a vector parameter • Cov(S) ≥ I(θ)⁻¹ means that the matrix Cov(S) − I(θ)⁻¹ is positive definite.
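A quick numerical sanity check (my own sketch, not part of the notes): for X1,…,Xn iid N(θ, 1) the information is I(θ) = n, so any unbiased estimate of θ has variance at least 1/n, and the sample mean attains it.

```python
import numpy as np

# CR bound sanity check for N(theta, 1) data: I(theta) = n, so the
# bound is 1/n, and the sample mean should sit right on it.
rng = np.random.default_rng(0)
n, theta, reps = 20, 2.0, 200_000
x = rng.normal(theta, 1.0, size=(reps, n))
xbar = x.mean(axis=1)
print("Var(sample mean):", xbar.var())  # ~ 0.05
print("CR bound 1/n:    ", 1 / n)       # 0.05
```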

  4. CR lower bound • Proof follows from the following steps: • Step 1: E(U) = 0, since E(U) = ∫ (∂f/∂θ) dx = ∂/∂θ ∫ f(x; θ) dx = ∂/∂θ (1) = 0.

  5. CR lower bound (cont) • Step 2: Cov(S, U) = E(SU′) − E(S)E(U)′. The second term: this is 0, by Step 1! And E(SU′) = ∂E(S)/∂θ′ = ∂θ/∂θ′ = I, the identity matrix.

  6. CR lower bound (cont) • Step 3: Put T = (S, U). By Steps 1 and 2,
     Cov(T) = [ Cov(S)   I
                  I     I(θ) ]
is PD. • Thus, by the useful fact on the next slide (with A = Cov(S), B = I, C = I(θ)), Cov(S) − I(θ)⁻¹ is PD.

  7. Useful fact: if the partitioned matrix
     M = [ A    B
           B′   C ]
is PD, then so is the Schur complement A − BC⁻¹B′.
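A small numeric illustration of this fact (my own addition): build a PD matrix, partition it, and check that the Schur complement has all-positive eigenvalues.

```python
import numpy as np

# Partition a positive definite matrix M as [[A, B], [B.T, C]] and
# verify that the Schur complement A - B C^{-1} B' is also PD.
rng = np.random.default_rng(1)
G = rng.normal(size=(5, 5))
M = G @ G.T + 5 * np.eye(5)   # PD by construction
A, B, C = M[:2, :2], M[:2, 2:], M[2:, 2:]
schur = A - B @ np.linalg.inv(C) @ B.T
print(np.linalg.eigvalsh(schur))  # all positive => PD
```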

  8. Examples • Text p. 27: Example 14 • Normal distribution • Text p. 28: Example 15 • Multinomial distribution • Review the 310 material as well

  9. Example • Consider a 2 × 2 contingency table with cell probabilities
        p11  p12
        p21  p22

  10. Example (cont) • Assume a multinomial probability model, with cell probabilities p11 = αβ, p12 = α(1 − β), p21 = (1 − α)β, p22 = (1 − α)(1 − β), i.e. an independence model (row probabilities α, 1 − α; column probabilities β, 1 − β).

  11. Example (cont) • With cell counts nij and n = Σ nij, the log likelihood is
     l(α, β) = Σ nij log pij = n1. log α + n2. log(1 − α) + n.1 log β + n.2 log(1 − β),
where n1., n2. are the row totals and n.1, n.2 the column totals.

  12. Example (cont) • The score function U is
     U = (∂l/∂α, ∂l/∂β)′ = ( n1./α − n2./(1 − α),  n.1/β − n.2/(1 − β) )′.

  13. Example (cont) • The information is I(α, β) = −E(∂²l/∂θ∂θ′). The (1,1) entry:
     −E(∂²l/∂α²) = E(n1.)/α² + E(n2.)/(1 − α)² = n/α + n/(1 − α) = n/(α(1 − α)).

  14. Example (cont) • Similarly, −E(∂²l/∂β²) = n/(β(1 − β)), and the cross term is 0. Hence
     I(α, β) = diag( n/(α(1 − α)),  n/(β(1 − β)) ),  so  I(α, β)⁻¹ = diag( α(1 − α)/n,  β(1 − β)/n ).

  15. Example (cont) • Consider the usual estimate (α̂, β̂) = (n1./n, n.1/n). It is unbiased, with Var(α̂) = α(1 − α)/n and Var(β̂) = β(1 − β)/n, so it attains the CR bound. Must be UMVUE!!
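A simulation check (my sketch, using the α, β parameterisation assumed above): the row proportion n1./n should be unbiased for α with variance α(1 − α)/n, exactly the CR bound.

```python
import numpy as np

# 2x2 independence multinomial: check that alpha_hat = n1./n is
# unbiased with variance alpha(1-alpha)/n (the CR bound).
rng = np.random.default_rng(2)
alpha, beta, n, reps = 0.3, 0.6, 50, 200_000
p = [alpha * beta, alpha * (1 - beta),
     (1 - alpha) * beta, (1 - alpha) * (1 - beta)]
counts = rng.multinomial(n, p, size=reps)      # columns: n11, n12, n21, n22
alpha_hat = (counts[:, 0] + counts[:, 1]) / n  # row-1 proportion
print("mean:", alpha_hat.mean(), " var:", alpha_hat.var())
print("CR bound alpha(1-alpha)/n:", alpha * (1 - alpha) / n)
```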

  16. CR lower bound (cont) • When is the bound attained? Consider Cov(S − I(θ)⁻¹U) = Cov(S) − I(θ)⁻¹ (expand, using Cov(S, U) = I and Cov(U) = I(θ)). If the bound is attained, Cov(S) = I(θ)⁻¹, so we get 0!!

  17. CR lower bound (cont) • Then: U = I(θ)(S − θ) + b. Take expected values: b = 0. Hence U = I(θ)(S − θ) if the bound is attained. Thus if the bound is attained, we must have S = I(θ)⁻¹U + θ. This can’t be an estimate unless it does not involve θ.

  18. CR lower bound (cont) • Conversely… easy to see that the bound is attained if U = I(θ)(S − θ). Because……
     I = Cov(S, U) = Cov(S, I(θ)(S − θ)) = Cov(S) I(θ),
so Cov(S) = I(θ)⁻¹.

  19. Example • Use this to see that no unbiased estimate of the Normal variance can attain the bound. • From Example 14 on p. 27, I(σ²) = n/(2σ⁴), so Var(S) ≥ 2σ⁴/n. • If the bound is attained, then we must have S = (1/n) Σ(Xi − μ)². • Not an estimate (it involves μ)!!! Bound not much help.
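A numeric illustration (mine): the usual unbiased estimate s² = Σ(Xi − X̄)²/(n − 1) has variance 2σ⁴/(n − 1), strictly above the bound 2σ⁴/n.

```python
import numpy as np

# The unbiased sample variance misses the CR bound: its variance is
# 2 sigma^4/(n-1), versus the bound 2 sigma^4/n.
rng = np.random.default_rng(3)
n, sigma, reps = 10, 2.0, 200_000
x = rng.normal(0.0, sigma, size=(reps, n))
s2 = x.var(axis=1, ddof=1)  # divide by n-1
print("Var(s^2):            ", s2.var())
print("2 sigma^4/(n-1):     ", 2 * sigma**4 / (n - 1))
print("CR bound 2 sigma^4/n:", 2 * sigma**4 / n)
```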

  20. How else? • How else can we find UMVUEs? • Suppose we have a sufficient statistic T, and we can find an unbiased estimate U. • Then E(U|T) is a function of T, is unbiased, and has smaller variance than U (or at least no bigger).

  21. Memory jog…. • E(E(X|Y)) = E(X) • Var(E(X|Y)) ≤ Var(X).
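Both facts are easy to check by simulation (my illustration, with Y ~ N(0, 1) and X = Y + Z for independent Z ~ N(0, 1), so that E(X|Y) = Y):

```python
import numpy as np

# With X = Y + Z (Y, Z independent standard normals), E(X|Y) = Y:
# check E(E(X|Y)) = E(X) and Var(E(X|Y)) <= Var(X).
rng = np.random.default_rng(4)
y = rng.normal(size=500_000)
x = y + rng.normal(size=500_000)
print("E(X):", x.mean(), " E(E(X|Y)) = E(Y):", y.mean())         # both ~ 0
print("Var(X):", x.var(), ">= Var(E(X|Y)) = Var(Y):", y.var())   # 2 >= 1
```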

  22. Complete statistics • A statistic T is complete if the only function g for which E(g(T)) = 0 for all θ is g = 0. • If a sufficient statistic T is complete, there is only one function of T that is unbiased. • Because…. • If u(T) and v(T) both have expectation θ, then g(T) = u(T) − v(T) has expectation 0 and so g = 0, i.e. u = v.

  23. Rao-Blackwellisation • The process of taking the conditional expectation of an unbiased statistic given a sufficient statistic is called…… Rao-Blackwellisation (invented by Rao and Blackwell)

  24. Lehmann-Scheffé theorem • If T is a complete sufficient statistic, and U is unbiased, then E(U|T) is the UMVUE. • For suppose S is unbiased and Var S ≤ Var E(U|T). Then Var E(S|T) ≤ Var S ≤ Var E(U|T). • But E(S|T) = E(U|T) by completeness. • Hence Var S = Var E(U|T). No unbiased estimator can have a smaller variance than E(U|T), so it must be the UMVUE.

  25. Example • For the Normal, the mean is sufficient and complete. Since it is unbiased, it must be the UMVUE. • Same argument for the exponential. • Same argument for the Poisson. • The CR lower bound argument works as well.

  26. Example • Find the UMVUE of Pr(X = 0) = e^(−θ) where X is Poisson(θ). • Put S = 1 if X1 = 0, and 0 otherwise. Then S is unbiased. • Consider T = X1 + … + Xn. It is a complete sufficient statistic.

  27. Example (cont) • We need E(S|T) = P(X1 = 0 | T = t). Consider
     P(X1 = 0 | T = t) = P(X1 = 0, X2 + … + Xn = t) / P(T = t)
                       = [e^(−θ) · e^(−(n−1)θ) ((n−1)θ)^t / t!] / [e^(−nθ) (nθ)^t / t!]
                       = ((n − 1)/n)^t.

  28. Example (cont) • Thus the UMVUE is ((n − 1)/n)^T. Can write this as (1 − 1/n)^T ≈ e^(−T/n) = e^(−X̄), the MLE. • Note: the MLE is biased but has smaller MSE for some values of θ.
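A simulation check (my own sketch): E[((n − 1)/n)^T] = e^(−θ) exactly (it is the Poisson probability generating function at s = (n − 1)/n), while the MLE e^(−T/n) is biased but can have smaller MSE.

```python
import numpy as np

# T = X1+...+Xn ~ Poisson(n*theta). Compare the UMVUE ((n-1)/n)^T
# with the MLE exp(-T/n) for estimating exp(-theta).
rng = np.random.default_rng(5)
n, theta, reps = 10, 1.5, 500_000
t = rng.poisson(n * theta, size=reps)
umvue = ((n - 1) / n) ** t
mle = np.exp(-t / n)
target = np.exp(-theta)
print("target e^-theta:", target)
print("UMVUE mean:", umvue.mean(), " MSE:", ((umvue - target) ** 2).mean())
print("MLE   mean:", mle.mean(),   " MSE:", ((mle - target) ** 2).mean())
```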

  29. Example • Exponential. What is the UMVUE of 1/θ? • Might suspect that 1/mean estimates 1/θ, since the mean estimates θ.

  30. Recall…. • If X1,…,Xn are Exponential with mean θ, then T = X1 + … + Xn is Gamma(n, θ). • Density is
     f(t) = t^(n−1) e^(−t/θ) / (Γ(n) θ^n),  t > 0.

  31. Example (cont) • We have
     E(1/T) = ∫ (1/t) t^(n−1) e^(−t/θ) / (Γ(n) θ^n) dt = Γ(n − 1) θ^(n−1) / (Γ(n) θ^n) = 1/((n − 1)θ).
• Thus (n − 1)/T is unbiased and, being a function of the complete sufficient statistic T, is the UMVUE.
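A quick simulation check (mine): E[(n − 1)/T] = 1/θ, whereas the naive 1/X̄ = n/T is biased upward.

```python
import numpy as np

# Exponential data with mean theta: (n-1)/T is unbiased for 1/theta,
# while the naive n/T (= 1/xbar) overshoots.
rng = np.random.default_rng(6)
n, theta, reps = 8, 2.0, 500_000
t = rng.exponential(theta, size=(reps, n)).sum(axis=1)
print("target 1/theta:     ", 1 / theta)
print("mean of (n-1)/T:    ", ((n - 1) / t).mean())
print("mean of n/T (naive):", (n / t).mean())
```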

  32. Example • Suppose X1,…,Xn are iid N(μ, σ²). • We have seen that (ΣXi, ΣXi²) is sufficient. It is also complete (hard). • Find the UMVUE of μ/σ.

  33. Solution • Try the sample version, i.e. sample mean/sample sd, X̄/s. • Note that for the Normal, the sample mean and sample sd are independent (310 stuff). • Thus E(X̄/s) = E(X̄) E(1/s) = μ E(1/s).

  34. Solution (cont) • Since (n − 1)s²/σ² ~ χ²(n−1), and E(W^(−1/2)) = Γ((k − 1)/2)/(√2 Γ(k/2)) for W ~ χ²(k), we get
     E(1/s) = k/σ, where k = √((n − 1)/2) · Γ((n − 2)/2)/Γ((n − 1)/2).
• Hence E(X̄/s) = k μ/σ.

  35. Solution (cont) • Thus (1/k) · X̄/s, with k = √((n − 1)/2) Γ((n − 2)/2)/Γ((n − 1)/2), is an unbiased estimate of μ/σ that is a function of a complete sufficient statistic and so is the UMVUE.
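A simulation check of the constant (my sketch; the formula for k is the reconstruction assumed on the previous two slides):

```python
import numpy as np
from scipy.special import gammaln

# Check that (1/k) * xbar/s is unbiased for mu/sigma, with
# k = sqrt((n-1)/2) * Gamma((n-2)/2) / Gamma((n-1)/2).
rng = np.random.default_rng(7)
n, mu, sigma, reps = 12, 3.0, 2.0, 200_000
x = rng.normal(mu, sigma, size=(reps, n))
ratio = x.mean(axis=1) / x.std(axis=1, ddof=1)
k = np.sqrt((n - 1) / 2) * np.exp(gammaln((n - 2) / 2) - gammaln((n - 1) / 2))
print("target mu/sigma:  ", mu / sigma)
print("mean of ratio / k:", (ratio / k).mean())
```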

  36. A final example • Unbiased is not the whole story! • Sample variance, normal data: divide by n − 1, n, or something else? • Let’s try dividing by n + c. What should c be??

  37. Sample variance (continued) • We have already seen (see Assignment 1) that Σ(Xi − X̄)²/σ² ~ χ²(n−1). Chi-square distn!!

  38. Sample variance (continued) • Thus, writing Sc = Σ(Xi − X̄)²/(n + c),
     E(Sc) = (n − 1)σ²/(n + c)  and  Var(Sc) = 2(n − 1)σ⁴/(n + c)².

  39. Sample variance (continued) • Calculate MSE = Var + Bias²:
     MSE(Sc) = 2(n − 1)σ⁴/(n + c)² + σ⁴(c + 1)²/(n + c)² = σ⁴ [2(n − 1) + (c + 1)²] / (n + c)².
• For what c is the MSE minimised?

  40. Sample variance (continued) • Differentiate with respect to c and set to 0:
     (c + 1)(n + c) = 2(n − 1) + (c + 1)²,  which reduces to  (c + 1)(n − 1) = 2(n − 1).
• Answer is c = 1.

  41. Sample variance (continued) • Hence minimum when c = 1. Does not depend on n!!! NB c ≠ −1!! • Thus the estimate with minimum MSE in this family is Σ(Xi − X̄)²/(n + 1), for all n!!
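A numeric comparison (my illustration) of the MSE of Σ(Xi − X̄)²/(n + c) for the three natural divisors; c = 1 (divide by n + 1) should win whatever n is.

```python
import numpy as np

# Compare MSE of sum((x - xbar)^2)/(n + c) for c = -1, 0, 1.
rng = np.random.default_rng(8)
n, sigma, reps = 10, 1.0, 500_000
x = rng.normal(0.0, sigma, size=(reps, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
for c in (-1, 0, 1):
    est = ss / (n + c)
    print(f"c = {c:2d}: MSE = {((est - sigma**2) ** 2).mean():.5f}")
# Theory: MSE = sigma^4 [2(n-1) + (c+1)^2] / (n+c)^2
#  -> 0.2222 (c=-1), 0.1900 (c=0), 0.1818 (c=1) for n = 10.
```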
