
The Multivariate Normal Distribution, Part 2


Presentation Transcript


  1. The Multivariate Normal Distribution, Part 2 BMTRY 726 1/14/2014

  2. Multivariate Normal PDF
  • Recall the pdf for the MVN distribution:
    f(x) = (2π)^(-p/2) |Σ|^(-1/2) exp{ -(1/2)(x − μ)′Σ⁻¹(x − μ) }
  • Where
  • x is a p-length vector of observed variables
  • μ is also a p-length vector and E(x) = μ
  • Σ is a p × p matrix, and Var(x) = Σ
  • Note, Σ must also be positive definite
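
A minimal numerical sketch of this density (not part of the slides; the mean vector, covariance matrix, and evaluation point below are invented for illustration), comparing the formula to scipy's built-in MVN density:

```python
# Evaluate the MVN pdf "by hand" and with scipy, for made-up parameters.
import numpy as np
from scipy.stats import multivariate_normal

p = 3
mu = np.array([1.0, -2.0, 0.5])                # hypothetical mean vector
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.5]])            # hypothetical positive-definite covariance
x = np.array([0.8, -1.5, 0.0])                 # hypothetical observation

# f(x) = (2*pi)^(-p/2) |Sigma|^(-1/2) exp{ -(1/2)(x - mu)' Sigma^{-1} (x - mu) }
d = x - mu
quad = d @ np.linalg.solve(Sigma, d)
f_manual = (2 * np.pi) ** (-p / 2) * np.linalg.det(Sigma) ** (-0.5) * np.exp(-0.5 * quad)

f_scipy = multivariate_normal(mean=mu, cov=Sigma).pdf(x)
print(f_manual, f_scipy)                       # the two values agree
```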

  3. Univariate and Bivariate Normal

  4. Contours of Constant Density
  • Recall that the sets along which f(x) is constant, projected onto the plane of the x’s, are called contours of constant density
  • Properties include:
  • p-dimensional ellipsoid defined by: (x − μ)′Σ⁻¹(x − μ) = c²
  • Centered at μ
  • Axes lengths: ±c√λi ei, where Σei = λiei, i = 1, 2, …, p
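
A small sketch of how these axes are computed in practice (the 2 × 2 covariance and contour level below are made up): the eigen-decomposition of Σ gives the axis directions ei and half-lengths c√λi.

```python
# Axes of the constant-density ellipse (x - mu)' Sigma^{-1} (x - mu) = c^2
# for a made-up bivariate covariance matrix.
import numpy as np

mu = np.array([1.0, 2.0])                # hypothetical center
Sigma = np.array([[4.0, 1.5],
                  [1.5, 1.0]])           # hypothetical covariance
c = 1.0                                  # hypothetical contour level

lam, E = np.linalg.eigh(Sigma)           # eigenvalues (ascending) and eigenvectors (columns)
for lam_i, e_i in zip(lam, E.T):
    # Half-axis of length c * sqrt(lambda_i) pointing along eigenvector e_i.
    print("half-axis length", c * np.sqrt(lam_i), "along direction", e_i)
```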

  5. Bivariate Examples

  6. Why Multivariate Normal? • Recall, statisticians like the MVN distribution because… • Mathematically simple • Multivariate central limit theorem applies • Natural phenomena are often well approximated by a MVN distribution • So what are some “fun” mathematical properties that make it so nice?

  7. Properties of MVN
  Result 4.2: If X ~ Np(μ, Σ), then any linear combination a′X = a1X1 + a2X2 + … + apXp has a univariate normal distribution with mean a′μ and variance a′Σa
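
A hypothetical numeric illustration of Result 4.2 (all values invented): with a chosen vector a, the mean and variance of a′X follow directly, and a quick simulation agrees.

```python
# Result 4.2: if X ~ N_3(mu, Sigma) and a is a fixed vector, then a'X is univariate normal.
import numpy as np

mu = np.array([1.0, 0.0, -1.0])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
a = np.array([1.0, -1.0, 2.0])

print(a @ mu, a @ Sigma @ a)             # mean a'mu = -1.0, variance a'Sigma a = 6.8

rng = np.random.default_rng(0)           # Monte Carlo sanity check of the same moments
X = rng.multivariate_normal(mu, Sigma, size=200_000)
print((X @ a).mean(), (X @ a).var())
```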

  8. Example

  9. Properties of MVN
  Result 4.3: Any linear transformation of a multivariate normal random vector also has a (multivariate) normal distribution.
  So if X ~ Np(μ, Σ) and Y = BX, where B is a k × p matrix of constants, then Y ~ Nk(Bμ, BΣB′)
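
A matching sketch for Result 4.3 with a made-up 2 × 3 matrix B (again, all numbers invented): the transformed vector BX has mean Bμ and covariance BΣB′.

```python
# Result 4.3: Y = BX is multivariate normal with mean B mu and covariance B Sigma B'.
import numpy as np

mu = np.array([1.0, 0.0, -1.0])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
B = np.array([[1.0, -1.0, 0.0],
              [0.0,  2.0, 1.0]])         # k = 2, p = 3

print(B @ mu)                            # B mu
print(B @ Sigma @ B.T)                   # B Sigma B'

rng = np.random.default_rng(1)           # simulated BX should show roughly these moments
Y = rng.multivariate_normal(mu, Sigma, size=200_000) @ B.T
print(np.round(Y.mean(axis=0), 2))
print(np.round(np.cov(Y.T), 2))
```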

  10. Spectral Decomposition
  Given Σ is a non-negative definite, symmetric, real matrix, Σ can be decomposed according to:
    Σ = λ1e1e1′ + λ2e2e2′ + … + λpepep′
  Where the eigenvalues are λ1 ≥ λ2 ≥ … ≥ λp ≥ 0
  The eigenvectors of Σ are e1, e2, …, ep
  And these satisfy the expression Σei = λiei, i = 1, 2, …, p
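
A short numerical sketch of the decomposition (covariance matrix invented): numpy's eigh returns the λi and ei, and summing λi ei ei′ rebuilds Σ.

```python
# Spectral decomposition: Sigma = sum_i lambda_i e_i e_i', with Sigma e_i = lambda_i e_i.
import numpy as np

Sigma = np.array([[3.0, 1.0, 0.5],
                  [1.0, 2.0, 0.2],
                  [0.5, 0.2, 1.0]])      # made-up symmetric, non-negative definite matrix

lam, E = np.linalg.eigh(Sigma)           # eigenvalues lam_i, eigenvectors e_i (columns of E)

Sigma_rebuilt = sum(l * np.outer(e, e) for l, e in zip(lam, E.T))
print(np.allclose(Sigma, Sigma_rebuilt))                   # True: Sigma = sum lam_i e_i e_i'
print(np.allclose(Sigma @ E[:, 0], lam[0] * E[:, 0]))      # True: Sigma e_1 = lam_1 e_1
```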

  11. Σ = PΛP′, where P = [e1, e2, …, ep] and Λ = diag(λ1, λ2, …, λp)
  Recall that P is orthogonal, i.e. PP′ = P′P = I
  Then Σ⁻¹ = PΛ⁻¹P′
  And Σ⁻¹ = (1/λ1)e1e1′ + (1/λ2)e2e2′ + … + (1/λp)epep′

  12. Definition: The square root of Σ is Σ^(1/2) = PΛ^(1/2)P′ = √λ1 e1e1′ + √λ2 e2e2′ + … + √λp epep′
  And Σ^(1/2) is symmetric
  Also Σ^(1/2)Σ^(1/2) = Σ

  13. From this it follows that the inverse square root of Σ is Σ^(-1/2) = PΛ^(-1/2)P′ = (1/√λ1)e1e1′ + … + (1/√λp)epep′
  Note Σ^(-1/2)Σ^(-1/2) = Σ⁻¹ and Σ^(-1/2)Σ^(1/2) = I
  This leads us to the transformation to the canonical form: If X ~ Np(μ, Σ), then Z = Σ^(-1/2)(X − μ) ~ Np(0, I)
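
A sketch of the square root, inverse square root, and canonical-form transformation for a made-up bivariate Σ (all numbers invented); the whitened vector Z = Σ^(-1/2)(X − μ) has covariance close to the identity in simulation.

```python
# Sigma^(1/2) = P Lambda^(1/2) P', Sigma^(-1/2) = P Lambda^(-1/2) P', Z = Sigma^(-1/2)(X - mu).
import numpy as np

mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

lam, E = np.linalg.eigh(Sigma)
Sigma_half     = E @ np.diag(np.sqrt(lam)) @ E.T
Sigma_neg_half = E @ np.diag(1 / np.sqrt(lam)) @ E.T

print(np.allclose(Sigma_half @ Sigma_half, Sigma))                          # = Sigma
print(np.allclose(Sigma_neg_half @ Sigma_neg_half, np.linalg.inv(Sigma)))   # = Sigma^(-1)

rng = np.random.default_rng(2)
X = rng.multivariate_normal(mu, Sigma, size=100_000)
Z = (X - mu) @ Sigma_neg_half                   # canonical form (Sigma^(-1/2) is symmetric)
print(np.round(np.cov(Z.T), 2))                 # approximately the 2 x 2 identity
```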

  14. Marginal Distributions
  Result 4.4: Consider subsets of the Xi’s in X. These subsets are also distributed (multivariate) normal.
  If X = (X1′, X2′)′ ~ Np(μ, Σ), with μ = (μ1′, μ2′)′ and Σ partitioned into blocks Σ11 (q × q), Σ12, Σ21, Σ22,
  Then the marginal distributions of X1 and X2 are X1 ~ Nq(μ1, Σ11) and X2 ~ Np−q(μ2, Σ22)
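
A hypothetical worked instance of Result 4.4 (the numbers are invented, not the slide's example): the marginal of the 1st and 3rd components is read off by keeping the corresponding entries of μ and the corresponding rows/columns of Σ.

```latex
X \sim N_3\!\left(
  \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix},
  \begin{pmatrix} 4 & 1 & 0.5 \\ 1 & 2 & 0 \\ 0.5 & 0 & 3 \end{pmatrix}
\right)
\;\Longrightarrow\;
\begin{pmatrix} X_1 \\ X_3 \end{pmatrix}
\sim N_2\!\left(
  \begin{pmatrix} 1 \\ 2 \end{pmatrix},
  \begin{pmatrix} 4 & 0.5 \\ 0.5 & 3 \end{pmatrix}
\right)
```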

  15. Example • Consider X ~ Np(μ, Σ); find the marginal distribution of the 1st and 3rd components

  16. Example • Consider X ~ Np(μ, Σ); find the marginal distribution of the 1st and 3rd components

  17. Marginal Distributions cont’d
  The converse of Result 4.4 is not always true; an additional assumption is needed.
  Result 4.5(c): If X1 ~ Nq(μ1, Σ11) and X2 ~ Np−q(μ2, Σ22), and X1 is independent of X2, then the joint distribution of (X1′, X2′)′ is Np with mean (μ1′, μ2′)′ and block-diagonal covariance matrix (i.e. Σ12 = 0)

  18. Result 4.5(a): If X1 (q × 1) and X2 ((p − q) × 1) are independent, then Cov(X1, X2) = 0
  (b) If X = (X1′, X2′)′ ~ Np(μ, Σ), then X1 (q × 1) and X2 ((p − q) × 1) are independent iff Σ12 = 0

  19. Example • Consider X ~ N3(μ, Σ) • Are x1 and x2 independent of x3?
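
A hypothetical covariance matrix (not the one on the slide) showing how Result 4.5(b) answers this kind of question: the off-diagonal block linking (x1, x2) to x3 is zero, so they are independent.

```latex
\Sigma =
\begin{pmatrix}
4 & 1 & 0 \\
1 & 3 & 0 \\
0 & 0 & 2
\end{pmatrix}
\quad\Longrightarrow\quad
\Sigma_{12} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},
\ \text{so } (X_1, X_2)' \text{ is independent of } X_3 .
```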

  20. Conditional Distributions
  Result 4.6: Suppose X = (X1′, X2′)′ ~ Np(μ, Σ), with μ = (μ1′, μ2′)′, Σ partitioned into blocks Σ11, Σ12, Σ21, Σ22, and |Σ22| > 0.
  Then the conditional distribution of X1 given that X2 = x2 is a normal distribution with
  mean μ1 + Σ12Σ22⁻¹(x2 − μ2) and covariance Σ11 − Σ12Σ22⁻¹Σ21
  Note the covariance matrix does not depend on the value of x2
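
A short sketch of Result 4.6 in practice, conditioning a made-up trivariate normal on its last two components (all values invented):

```python
# Conditional mean mu1 + Sigma12 Sigma22^{-1} (x2 - mu2) and
# conditional covariance Sigma11 - Sigma12 Sigma22^{-1} Sigma21.
import numpy as np

mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])

mu1, mu2 = mu[:1], mu[1:]                       # X1 = first component, X2 = last two
S11, S12 = Sigma[:1, :1], Sigma[:1, 1:]
S21, S22 = Sigma[1:, :1], Sigma[1:, 1:]

x2 = np.array([2.0, 0.0])                       # hypothetical observed value of X2

cond_mean = mu1 + S12 @ np.linalg.solve(S22, x2 - mu2)
cond_cov  = S11 - S12 @ np.linalg.solve(S22, S21)
print(cond_mean, cond_cov)    # the mean depends on x2; the covariance does not
```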

  21. Proof of Result 4.6

  22. Proof of Result 4.6

  23. Multiple Regression
  Consider the joint distribution of a response Y and a p × 1 vector of predictors X, with (Y, X′)′ multivariate normal, mean (μY, μX′)′, Var(Y) = σYY, Cov(X, Y) = σXY, and Var(X) = ΣXX.
  The conditional distribution of Y | X = x is univariate normal with
  mean μY + σXY′ΣXX⁻¹(x − μX) and variance σYY − σXY′ΣXX⁻¹σXY
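
Written out, the conditional mean is linear in x, which is exactly a multiple regression function; this is a sketch of that identification using the notation above (not the slide's own display):

```latex
E(Y \mid X = x)
  = \mu_Y + \sigma_{XY}' \, \Sigma_{XX}^{-1} (x - \mu_X)
  = \beta_0 + \beta' x,
\qquad
\beta = \Sigma_{XX}^{-1} \sigma_{XY},
\quad
\beta_0 = \mu_Y - \beta' \mu_X,
\qquad
\operatorname{Var}(Y \mid X = x) = \sigma_{YY} - \sigma_{XY}' \, \Sigma_{XX}^{-1} \sigma_{XY}.
```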

  24. Example Consider X ~ Np(μ, Σ); find the conditional distribution of the 1st and 3rd components

  25. Example

  26. Example

  27. Result 4.7: If X ~ Np(μ, Σ) and Σ is positive definite, then (X − μ)′Σ⁻¹(X − μ) ~ χ²p
  Proof:

  28. Result 4.7: If X ~ Np(μ, Σ) and Σ is positive definite, then (X − μ)′Σ⁻¹(X − μ) ~ χ²p
  Proof cont’d:
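
A Monte Carlo sanity check of Result 4.7 (parameters invented): the simulated squared distances (X − μ)′Σ⁻¹(X − μ) behave like a chi-square with p degrees of freedom.

```python
import numpy as np
from scipy import stats

mu = np.array([1.0, -1.0, 0.0])
Sigma = np.array([[2.0, 0.5, 0.2],
                  [0.5, 1.0, 0.1],
                  [0.2, 0.1, 1.5]])
p = len(mu)

rng = np.random.default_rng(3)
X = rng.multivariate_normal(mu, Sigma, size=100_000)
d = X - mu
m2 = np.einsum('ij,ij->i', d @ np.linalg.inv(Sigma), d)   # (X - mu)' Sigma^{-1} (X - mu)

print(m2.mean())                       # close to p = 3, the chi-square_3 mean
print(np.quantile(m2, 0.95))           # close to the chi-square_3 95th percentile
print(stats.chi2.ppf(0.95, df=p))      # about 7.81
```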

  29. Result 4.8: If X1, X2, …, Xn are mutually independent with Xj ~ Np(μj, Σ),
  Then V1 = c1X1 + c2X2 + … + cnXn ~ Np( ∑j cjμj, (∑j cj²)Σ )
  Where c = (c1, c2, …, cn)′ is a vector of constants, and c1, …, cn are n constants.
  Additionally, if we have A1, …, An, which are r × p matrices of constants, we can also say A1X1 + … + AnXn ~ Nr( ∑j Ajμj, ∑j AjΣAj′ )
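
A standard instance of Result 4.8 (anticipating the “sample means” topic on the last slide): if every Xj also has the same mean μ and we take every cj = 1/n, the result gives the distribution of the sample mean vector.

```latex
c_1 = \cdots = c_n = \tfrac{1}{n}
\;\Longrightarrow\;
\bar{X} = \tfrac{1}{n}\sum_{j=1}^{n} X_j \sim N_p\!\left(\mu,\; \tfrac{1}{n}\Sigma\right),
\qquad
\text{since } \sum_j c_j \mu_j = \mu
\ \text{and}\ \Big(\sum_j c_j^2\Big)\Sigma = \tfrac{1}{n}\Sigma .
```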

  30. Sample Data
  • Let’s say that X1, X2, …, Xn are i.i.d. random vectors
  • If the data vectors are sampled from a MVN distribution, i.e. each Xj ~ Np(μ, Σ), then their joint density is the product of the individual MVN densities

  31. Multivariate Normal Likelihood
  • We can also look at the joint likelihood of our random sample:
    L(μ, Σ) = ∏j (2π)^(-p/2) |Σ|^(-1/2) exp{ -(1/2)(xj − μ)′Σ⁻¹(xj − μ) }
            = (2π)^(-np/2) |Σ|^(-n/2) exp{ -(1/2) ∑j (xj − μ)′Σ⁻¹(xj − μ) }
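
A quick numerical sketch of this likelihood (sample and parameters invented): the log-likelihood computed from the formula matches the sum of log-densities from scipy.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)
mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])
X = rng.multivariate_normal(mu, Sigma, size=50)     # n = 50, p = 2
n, p = X.shape

# log L = -(np/2) log(2 pi) - (n/2) log|Sigma| - (1/2) sum_j (x_j - mu)' Sigma^{-1} (x_j - mu)
D = X - mu
quad = np.einsum('ij,ij->i', D @ np.linalg.inv(Sigma), D).sum()
loglik_manual = (-0.5 * n * p * np.log(2 * np.pi)
                 - 0.5 * n * np.log(np.linalg.det(Sigma))
                 - 0.5 * quad)

loglik_scipy = multivariate_normal(mu, Sigma).logpdf(X).sum()
print(loglik_manual, loglik_scipy)                  # the two agree
```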

  32. Some needed Results (1) Given A > 0, with λ1, λ2, …, λp the eigenvalues of A: (a) (b) (c) (2) From (c) we can show that:

  33. Some needed Results (2) Proof that:

  34. Some needed Results (2) Proof that:

  35. Some needed Results (1) Given A > 0, with λ1, λ2, …, λp the eigenvalues of A: (a) (b) (c) (2) From (c) we can show that: (3) Given S (p × p) > 0, B (p × p) > 0, and scalar b > 0
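
One identity that is routinely used at this point is the quadratic-form/trace rewrite of the exponent of the likelihood; this is a small numerical check of it (matrices and data invented, and offered only as an assumption about which result is meant here):

```python
# sum_j (x_j - mu)' A (x_j - mu) = tr( A * sum_j (x_j - mu)(x_j - mu)' )
import numpy as np

rng = np.random.default_rng(5)
A = np.array([[2.0, 0.3],
              [0.3, 1.0]])              # a made-up A > 0
X = rng.normal(size=(10, 2))            # 10 arbitrary data vectors
mu = np.array([0.5, -0.5])

D = X - mu
lhs = np.einsum('ij,ij->', D @ A, D)    # sum of the quadratic forms
rhs = np.trace(A @ (D.T @ D))           # trace form (D'D = sum_j d_j d_j')
print(np.isclose(lhs, rhs))             # True
```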

  36. MLE’s for .

  37. MLE’s for .

  38. MLE’s for .

  39. Next Time • Sample means and covariances • The Wishart distribution • Introduction to some basic statistical tests
