
Maximum Likelihood Estimation


Presentation Transcript


  1. Maximum Likelihood Estimation: Multivariate Normal Distribution

  2. The Method of Maximum Likelihood. Suppose that the data $x_1, \dots, x_n$ have joint density function $f(x_1, \dots, x_n; \theta_1, \dots, \theta_p)$, where $\vec\theta = (\theta_1, \dots, \theta_p)$ is a vector of unknown parameters assumed to lie in $\Omega$ (a subset of p-dimensional space). We want to estimate the parameters $\theta_1, \dots, \theta_p$.

  3. Definition: The Likelihood Function. Suppose that the data $x_1, \dots, x_n$ have joint density function $f(x_1, \dots, x_n; \theta_1, \dots, \theta_p)$. Then, given the data, the likelihood function is defined to be $L(\theta_1, \dots, \theta_p) = f(x_1, \dots, x_n; \theta_1, \dots, \theta_p)$, regarded as a function of the parameters. Note: the domain of $L(\theta_1, \dots, \theta_p)$ is the set $\Omega$.

  4. Definition: Maximum Likelihood Estimators. Suppose that the data $x_1, \dots, x_n$ have joint density function $f(x_1, \dots, x_n; \theta_1, \dots, \theta_p)$. Then the likelihood function is $L(\theta_1, \dots, \theta_p) = f(x_1, \dots, x_n; \theta_1, \dots, \theta_p)$, and the maximum likelihood estimators of the parameters $\theta_1, \dots, \theta_p$ are the values that maximize $L(\theta_1, \dots, \theta_p)$.

  5. i.e. the maximum likelihood estimators of the parameters $\theta_1, \dots, \theta_p$ are the values $\hat\theta_1, \dots, \hat\theta_p$ such that $L(\hat\theta_1, \dots, \hat\theta_p) = \max_{(\theta_1, \dots, \theta_p) \in \Omega} L(\theta_1, \dots, \theta_p)$. Note: maximizing $L(\theta_1, \dots, \theta_p)$ is equivalent to maximizing the log-likelihood function $\ell(\theta_1, \dots, \theta_p) = \ln L(\theta_1, \dots, \theta_p)$.
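The slide's equations were images, so here is a minimal numeric sketch of the idea: maximize the log-likelihood and recover a known closed-form MLE. The exponential model, the sample size, and the use of scipy.optimize.minimize are illustrative assumptions, not content from the slides.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=500)   # illustrative data, true scale = 2.0

def neg_log_likelihood(theta):
    """Negative log-likelihood of an exponential sample; minimizing it maximizes L."""
    scale = theta[0]
    if scale <= 0:
        return np.inf                      # outside the parameter space (Omega)
    return -np.sum(-np.log(scale) - x / scale)

res = minimize(neg_log_likelihood, x0=[1.0], method="Nelder-Mead")
# The numeric maximizer agrees with the closed-form MLE (the sample mean):
print(res.x[0], x.mean())
```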

  6. The Multivariate Normal Distribution: Maximum Likelihood Estimation

  7. Let $\vec x_1, \dots, \vec x_n$ denote a sample (independent) from the p-variate normal distribution with mean vector $\vec\mu$ and covariance matrix $\Sigma$. Note: each observation has density $f(\vec x) = \frac{1}{(2\pi)^{p/2}|\Sigma|^{1/2}}\, e^{-\frac{1}{2}(\vec x - \vec\mu)'\Sigma^{-1}(\vec x - \vec\mu)}$.

  8. The $n \times p$ matrix $X = \begin{bmatrix} \vec x_1' \\ \vdots \\ \vec x_n' \end{bmatrix}$, whose i-th row contains the i-th observation, is called the data matrix.

  9. The $np \times 1$ vector $\vec x = \begin{bmatrix} \vec x_1 \\ \vdots \\ \vec x_n \end{bmatrix}$, obtained by stacking the observations, is called the data vector.

  10. The mean vector: $\vec\mu = E(\vec x_i) = (\mu_1, \dots, \mu_p)'$.

  11. The vector $\bar{\vec x} = \frac{1}{n}\sum_{i=1}^n \vec x_i$ is called the sample mean vector. Note: its j-th component is the ordinary sample mean of the j-th variable, $\bar x_j = \frac{1}{n}\sum_{i=1}^n x_{ij}$.

  12. Also, in terms of the data matrix, $\bar{\vec x} = \frac{1}{n} X' \vec 1_n$, where $\vec 1_n$ is the $n \times 1$ vector of ones.

  13. In terms of the data vector, $\bar{\vec x} = A\vec x$, where $A = \frac{1}{n}\left(\vec 1_n' \otimes I_p\right) = \frac{1}{n}\begin{bmatrix} I_p & I_p & \cdots & I_p \end{bmatrix}$ and $\otimes$ denotes the Kronecker product.

  14. Graphical representation of the sample mean vector: the sample mean vector is the centroid of the data vectors.
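A short numpy sketch of slides 11–13 (the sizes n = 10, p = 3 and the random data are illustrative assumptions): the row-mean of the data matrix and the Kronecker-product form $\frac{1}{n}(\vec 1_n' \otimes I_p)\vec x$ give the same sample mean vector.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 10, 3                             # illustrative dimensions
X = rng.normal(size=(n, p))              # data matrix: row i is x_i'

xbar = X.mean(axis=0)                    # sample mean vector (slide 11)

vec_x = X.reshape(n * p)                 # data vector: x_1, ..., x_n stacked
A = np.kron(np.ones(n), np.eye(p)) / n   # (1/n)(1_n' ⊗ I_p), as on slide 13
print(np.allclose(A @ vec_x, xbar))      # True: both forms agree
```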

  15. The Sample Covariance Matrix

  16. The sample covariance matrix: $S = \frac{1}{n-1}\sum_{i=1}^n (\vec x_i - \bar{\vec x})(\vec x_i - \bar{\vec x})'$, where the (j, k) element is $s_{jk} = \frac{1}{n-1}\sum_{i=1}^n (x_{ij} - \bar x_j)(x_{ik} - \bar x_k)$.

  17. There are different ways of representing the sample covariance matrix, for example $S = \frac{1}{n-1}\sum_{i=1}^n (\vec x_i - \bar{\vec x})(\vec x_i - \bar{\vec x})' = \frac{1}{n-1}\left(X'X - n\,\bar{\vec x}\,\bar{\vec x}'\right)$.
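A quick numpy check of the two representations above (the data and dimensions are illustrative assumptions); both agree with numpy's own np.cov, which also uses the 1/(n−1) convention.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 3                                         # illustrative dimensions
X = rng.normal(size=(n, p))
xbar = X.mean(axis=0)

# Definition: S = (1/(n-1)) * sum_i (x_i - xbar)(x_i - xbar)'
S1 = (X - xbar).T @ (X - xbar) / (n - 1)

# Alternative representation: S = (1/(n-1)) * (X'X - n * xbar xbar')
S2 = (X.T @ X - n * np.outer(xbar, xbar)) / (n - 1)

print(np.allclose(S1, S2), np.allclose(S1, np.cov(X, rowvar=False)))
```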

  18. Maximum Likelihood Estimation: Multivariate Normal Distribution

  19. Let $\vec x_1, \dots, \vec x_n$ denote a sample (independent) from the p-variate normal distribution with mean vector $\vec\mu$ and covariance matrix $\Sigma$. Then the joint density function of $\vec x_1, \dots, \vec x_n$ is: $f(\vec x_1, \dots, \vec x_n) = \prod_{i=1}^n \frac{1}{(2\pi)^{p/2}|\Sigma|^{1/2}} e^{-\frac{1}{2}(\vec x_i - \vec\mu)'\Sigma^{-1}(\vec x_i - \vec\mu)} = \frac{1}{(2\pi)^{np/2}|\Sigma|^{n/2}} e^{-\frac{1}{2}\sum_{i=1}^n(\vec x_i - \vec\mu)'\Sigma^{-1}(\vec x_i - \vec\mu)}$.

  20. The likelihood function is: $L(\vec\mu, \Sigma) = \frac{1}{(2\pi)^{np/2}|\Sigma|^{n/2}} e^{-\frac{1}{2}\sum_{i=1}^n(\vec x_i - \vec\mu)'\Sigma^{-1}(\vec x_i - \vec\mu)}$, and the log-likelihood function is: $\ell(\vec\mu, \Sigma) = -\frac{np}{2}\ln(2\pi) - \frac{n}{2}\ln|\Sigma| - \frac{1}{2}\sum_{i=1}^n(\vec x_i - \vec\mu)'\Sigma^{-1}(\vec x_i - \vec\mu)$.

  21. To find the maximum likelihood estimators of $\vec\mu$ and $\Sigma$ we need to find $\hat{\vec\mu}$ and $\hat\Sigma$ to maximize $L(\vec\mu, \Sigma)$, or equivalently to maximize $\ell(\vec\mu, \Sigma)$.

  22. Note: a scalar equals its trace; thus $(\vec x_i - \vec\mu)'\Sigma^{-1}(\vec x_i - \vec\mu) = \operatorname{tr}\left[\Sigma^{-1}(\vec x_i - \vec\mu)(\vec x_i - \vec\mu)'\right]$; hence $\sum_{i=1}^n(\vec x_i - \vec\mu)'\Sigma^{-1}(\vec x_i - \vec\mu) = \operatorname{tr}\left[\Sigma^{-1}\sum_{i=1}^n(\vec x_i - \vec\mu)(\vec x_i - \vec\mu)'\right]$.
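A numeric sanity check of this trace identity (the inputs are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 20, 3                                     # illustrative dimensions
X = rng.normal(size=(n, p))
mu = np.zeros(p)
Sigma_inv = np.linalg.inv(np.eye(p) + 0.3 * np.ones((p, p)))

D = X - mu                                       # row i holds (x_i - mu)'
quad_sum = np.einsum("ij,jk,ik->", D, Sigma_inv, D)   # sum of quadratic forms
A = D.T @ D                                      # sum_i (x_i - mu)(x_i - mu)'
print(np.allclose(quad_sum, np.trace(Sigma_inv @ A)))   # True
```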

  23. Now, differentiating $\ell$ with respect to $\vec\mu$ and setting the derivative to zero: $\frac{\partial \ell}{\partial \vec\mu} = \Sigma^{-1}\sum_{i=1}^n(\vec x_i - \vec\mu) = \vec 0$, giving $\hat{\vec\mu} = \frac{1}{n}\sum_{i=1}^n \vec x_i = \bar{\vec x}$.

  24. Now, maximizing $\ell(\bar{\vec x}, \Sigma)$ over $\Sigma$ (differentiating with respect to the elements of $\Sigma^{-1}$ and setting the result to zero) gives $\hat\Sigma = \frac{1}{n}\sum_{i=1}^n(\vec x_i - \bar{\vec x})(\vec x_i - \bar{\vec x})'$.

  25. Summary: the maximum likelihood estimators of $\vec\mu$ and $\Sigma$ are $\hat{\vec\mu} = \bar{\vec x}$ and $\hat\Sigma = \frac{1}{n}\sum_{i=1}^n(\vec x_i - \bar{\vec x})(\vec x_i - \bar{\vec x})' = \frac{n-1}{n}S$.
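A sketch of these estimators in numpy/scipy (the true parameters and sample size are illustrative assumptions). Since $(\hat{\vec\mu}, \hat\Sigma)$ maximizes the likelihood over all parameter values, the log-likelihood at the MLEs is at least as large as at the true parameters.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)
n, p = 200, 2                                    # illustrative sample
mu_true = np.array([1.0, -2.0])
Sigma_true = np.array([[2.0, 0.5], [0.5, 1.0]])
X = rng.multivariate_normal(mu_true, Sigma_true, size=n)

mu_hat = X.mean(axis=0)                          # MLE of mu
Sigma_hat = (X - mu_hat).T @ (X - mu_hat) / n    # MLE of Sigma: note 1/n, not 1/(n-1)

def loglik(m, S):
    return multivariate_normal.logpdf(X, mean=m, cov=S).sum()

# The MLEs maximize the likelihood, so this comparison should print True:
print(loglik(mu_hat, Sigma_hat) >= loglik(mu_true, Sigma_true))
```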

  26. Sampling distribution of the MLEs

  27. Note: the joint density function of $\vec x_1, \dots, \vec x_n$, viewed as the density of the $np \times 1$ data vector $\vec x$, is: $f(\vec x) = \frac{1}{(2\pi)^{np/2}|\Sigma|^{n/2}} e^{-\frac{1}{2}\sum_{i=1}^n(\vec x_i - \vec\mu)'\Sigma^{-1}(\vec x_i - \vec\mu)}$.

  28. This distribution is np-variate normal with mean vector $\vec 1_n \otimes \vec\mu$ and covariance matrix $I_n \otimes \Sigma$; that is, $\vec x \sim N_{np}(\vec 1_n \otimes \vec\mu,\, I_n \otimes \Sigma)$.

  29. Thus the distribution of $\bar{\vec x} = \frac{1}{n}(\vec 1_n' \otimes I_p)\vec x$ is p-variate normal with mean vector $\vec\mu$ and covariance matrix $\frac{1}{n}\Sigma$.

  30. Summary: the sampling distribution of $\bar{\vec x}$ is p-variate normal with mean vector $\vec\mu$ and covariance matrix $\frac{1}{n}\Sigma$, i.e. $\bar{\vec x} \sim N_p(\vec\mu, \frac{1}{n}\Sigma)$.
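A Monte Carlo sketch of this sampling distribution (all constants are illustrative assumptions): across repeated samples, the sample mean vectors should average to $\vec\mu$ with covariance $\frac{1}{n}\Sigma$.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, reps = 25, 2, 20000                # illustrative constants
mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.4], [0.4, 2.0]])

xbars = np.array([
    rng.multivariate_normal(mu, Sigma, size=n).mean(axis=0)
    for _ in range(reps)
])
print(xbars.mean(axis=0))                # close to mu
print(np.cov(xbars, rowvar=False) * n)   # close to Sigma, i.e. Cov(xbar) = Sigma / n
```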

  31. The sampling distribution of the sample covariance matrix $S$ and of $\hat\Sigma = \frac{n-1}{n}S$

  32. The Wishart distribution: a multivariate generalization of the $\chi^2$ distribution

  33. Definition: the p-variate Wishart distribution. Let $\vec z_1, \dots, \vec z_k$ be k independent random p-vectors, each having a p-variate normal distribution with mean vector $\vec 0$ and covariance matrix $\Sigma$. Then $U = \sum_{i=1}^k \vec z_i \vec z_i'$ is said to have the p-variate Wishart distribution with k degrees of freedom, written $U \sim W_p(\Sigma, k)$.

  34. The density of the p-variate Wishart distribution. Suppose $U \sim W_p(\Sigma, k)$ with $k \ge p$. Then the joint density of $U$ is: $f(U) = \frac{|U|^{(k-p-1)/2}\, e^{-\frac{1}{2}\operatorname{tr}(\Sigma^{-1}U)}}{2^{kp/2}\,|\Sigma|^{k/2}\,\Gamma_p(k/2)}$, where $\Gamma_p(\cdot)$ is the multivariate gamma function. It can easily be checked that when $p = 1$ and $\Sigma = 1$ the Wishart distribution becomes the $\chi^2$ distribution with k degrees of freedom.
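The $p = 1$, $\Sigma = 1$ remark can be checked directly with scipy's wishart and chi2 distributions (the grid and k = 5 are illustrative assumptions):

```python
import numpy as np
from scipy.stats import wishart, chi2

k = 5                                    # illustrative degrees of freedom
x = np.linspace(0.1, 20.0, 200)
# For p = 1 and Sigma = 1 the Wishart density is the chi-squared density:
print(np.allclose(wishart.pdf(x, df=k, scale=1.0), chi2.pdf(x, df=k)))   # True
```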

  35. Theorem. Suppose $U \sim W_p(\Sigma, k)$ and $A$ is a fixed $q \times p$ matrix; then $AUA' \sim W_q(A\Sigma A', k)$. Corollary 1: $\vec a'U\vec a \sim (\vec a'\Sigma\vec a)\,\chi^2_k$ for any fixed p-vector $\vec a$. Corollary 2: each diagonal element satisfies $u_{jj} \sim \sigma_{jj}\,\chi^2_k$. Proof: (the proof on this slide was an equation image not preserved in the transcript).

  36. Theorem. Suppose $U_1 \sim W_p(\Sigma, k_1)$ and $U_2 \sim W_p(\Sigma, k_2)$ are independent; then $U_1 + U_2 \sim W_p(\Sigma, k_1 + k_2)$. Theorem. Suppose $U_1, \dots, U_m$ are independent and $U_i \sim W_p(\Sigma, k_i)$; then $U_1 + \cdots + U_m \sim W_p(\Sigma, k_1 + \cdots + k_m)$.

  37. Theorem. Let $\vec x_1, \dots, \vec x_n$ be a sample from $N_p(\vec\mu, \Sigma)$; then $\bar{\vec x} \sim N_p(\vec\mu, \frac{1}{n}\Sigma)$. Theorem. Let $\vec x_1, \dots, \vec x_n$ be a sample from $N_p(\vec\mu, \Sigma)$; then $(n-1)S \sim W_p(\Sigma, n-1)$.
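A Monte Carlo sketch of the Wishart result for $(n-1)S$ (the constants are illustrative assumptions), using the fact that a $W_p(\Sigma, k)$ matrix has mean $k\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, reps = 10, 2, 20000                # illustrative constants
mu = np.zeros(p)
Sigma = np.array([[1.0, 0.6], [0.6, 2.0]])

U_mean = np.zeros((p, p))
for _ in range(reps):
    X = rng.multivariate_normal(mu, Sigma, size=n)
    U_mean += (n - 1) * np.cov(X, rowvar=False) / reps   # (n-1)S for this sample

print(U_mean)             # close to (n-1) * Sigma, the mean of W_p(Sigma, n-1)
print((n - 1) * Sigma)
```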

  38. Theorem. (The statement and proof on this slide were equation images not preserved in the transcript.)

  39. Theorem. Let $\vec x_1, \dots, \vec x_n$ be a sample from $N_p(\vec\mu, \Sigma)$; then $\bar{\vec x}$ is independent of $S$. Proof: let $H$ be an $n \times n$ orthogonal matrix whose first row is $\frac{1}{\sqrt n}\vec 1_n'$, and consider the transformed data vector $\vec y = (H \otimes I_p)\vec x$.

  40. Note: $H^* = H \otimes I_p$ is also orthogonal.

  41. Properties of the Kronecker product: $(A \otimes B)' = A' \otimes B'$; $(A \otimes B)(C \otimes D) = AC \otimes BD$ whenever the products are defined; and $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$ when $A$ and $B$ are invertible.
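These identities are easy to confirm numerically with np.kron (the matrix sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
A, C = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))   # illustrative sizes
B, D = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))

# (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
print(np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D)))
# (A ⊗ B)' = A' ⊗ B'
print(np.allclose(np.kron(A, B).T, np.kron(A.T, B.T)))
```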

  42. Thus the distribution of $\vec y = (H \otimes I_p)\vec x$ is np-variate normal with mean vector $(H \otimes I_p)(\vec 1_n \otimes \vec\mu) = (H\vec 1_n) \otimes \vec\mu$ and covariance matrix $(H \otimes I_p)(I_n \otimes \Sigma)(H' \otimes I_p) = I_n \otimes \Sigma$.

  43. Thus the joint distribution of $\vec y_1, \dots, \vec y_n$ (the p-vector blocks of $\vec y$) is np-variate normal with mean vector $(\sqrt n\,\vec\mu', \vec 0', \dots, \vec 0')'$ and covariance matrix $I_n \otimes \Sigma$: the blocks are mutually independent, $\vec y_1 = \sqrt n\,\bar{\vec x} \sim N_p(\sqrt n\,\vec\mu, \Sigma)$, and $\vec y_i \sim N_p(\vec 0, \Sigma)$ for $i \ge 2$.

  44. Thus, since $(n-1)S = \sum_{i=2}^n \vec y_i\vec y_i'$ depends only on $\vec y_2, \dots, \vec y_n$ while $\bar{\vec x}$ depends only on $\vec y_1$, the sample mean vector $\bar{\vec x}$ is independent of $S$, and $(n-1)S \sim W_p(\Sigma, n-1)$.
