
Expectation

Expectation for multivariate distributions. Definition: let X1, X2, …, Xn denote n jointly distributed random variables with joint density function f(x1, x2, …, xn).





Presentation Transcript


  1. Expectation for multivariate distributions

  2. Definition Let X1, X2, …, Xn denote n jointly distributed random variables with joint density function f(x1, x2, …, xn). Then, for any function g, E[g(X1, X2, …, Xn)] = ∫ … ∫ g(x1, …, xn) f(x1, …, xn) dx1 … dxn (with the integrals replaced by sums in the discrete case).

  3. Example Let X, Y, Z denote 3 jointly distributed random variables with joint density function f(x, y, z). Determine E[XYZ].

  4. Solution:
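
Since the slide's joint density is not reproduced in this transcript, here is a minimal numerical sketch under an assumed density f(x, y, z) = 8xyz on the unit cube [0, 1]³ (a hypothetical choice, not the slide's example). Under that assumption E[XYZ] = ∫∫∫ xyz · 8xyz dx dy dz = 8/27, and a Monte Carlo estimate agrees:

```python
import numpy as np

# Hypothetical example (the slide's actual density is not in the transcript):
# assume f(x, y, z) = 8xyz on [0,1]^3, which factors as (2x)(2y)(2z), so
# X, Y, Z are independent, each with density 2t on [0, 1].
rng = np.random.default_rng(0)
n = 1_000_000

# If U ~ Uniform(0, 1), then sqrt(U) has density 2t on [0, 1].
x = np.sqrt(rng.uniform(size=n))
y = np.sqrt(rng.uniform(size=n))
z = np.sqrt(rng.uniform(size=n))

# For this assumed density, E[XYZ] = (E[X])^3 = (2/3)^3 = 8/27.
print("Monte Carlo E[XYZ]:", (x * y * z).mean())
print("Exact value 8/27  :", 8 / 27)
```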

  5. Multivariate Moments: Non-central and Central

  6. Definition Let X1 and X2 be jointly distributed random variables (discrete or continuous). Then for any pair of positive integers (k1, k2) the joint moment of (X1, X2) of order (k1, k2) is defined to be E[X1^k1 · X2^k2].

  7. Definition Let X1 and X2 be jointly distributed random variables (discrete or continuous). Then for any pair of positive integers (k1, k2) the joint central moment of (X1, X2) of order (k1, k2) is defined to be E[(X1 − μ1)^k1 (X2 − μ2)^k2], where μ1 = E[X1] and μ2 = E[X2].

  8. Note The joint central moment of order (1, 1), E[(X1 − μ1)(X2 − μ2)], is the covariance of X1 and X2, written Cov(X1, X2). Definition: For any two random variables X and Y define the correlation coefficient ρXY to be ρXY = Cov(X, Y) / (σX σY), where σX and σY are the standard deviations of X and Y.
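
A small simulation sketch of these definitions (the data-generating model and numpy usage are illustrative, not from the slides): estimate Cov(X, Y) = E[(X − μX)(Y − μY)] and ρXY = Cov(X, Y)/(σX σY) from samples and compare with numpy's built-in estimators.

```python
import numpy as np

# Illustrative data: y is constructed to be correlated with x.
rng = np.random.default_rng(1)
n = 200_000

x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=0.8, size=n)

cov_xy = np.mean((x - x.mean()) * (y - y.mean()))   # E[(X - mu_X)(Y - mu_Y)]
rho_xy = cov_xy / (x.std() * y.std())                # Cov(X, Y) / (sigma_X sigma_Y)

print("covariance :", cov_xy, "vs np.cov     :", np.cov(x, y, bias=True)[0, 1])
print("correlation:", rho_xy, "vs np.corrcoef:", np.corrcoef(x, y)[0, 1])
```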

  9. Properties of the correlation coefficient ρXY 1. −1 ≤ ρXY ≤ 1. 2. If X and Y are independent then ρXY = 0. The converse is not necessarily true, i.e. ρXY = 0 does not imply that X and Y are independent.

  10. More properties of the correlation coefficient ρXY = ±1 if and only if there exist a and b such that Y = a + bX (with probability 1), where ρXY = +1 if b > 0 and ρXY = −1 if b < 0. Proof: Let μX = E[X] and μY = E[Y]. Let g(b) = E[((Y − μY) − b(X − μX))²] ≥ 0 for all b. Consider choosing b to minimize g(b).

  11. Consider choosing b to minimize g(b) = σY² − 2b Cov(X, Y) + b² σX². Setting g′(b) = −2 Cov(X, Y) + 2b σX² = 0 gives bmin = Cov(X, Y) / σX². Since g(b) ≥ 0 for all b, then g(bmin) ≥ 0.

  12. Hence g(bmin) = σY² − 2 bmin Cov(X, Y) + bmin² σX² = σY² − Cov(X, Y)² / σX² ≥ 0. Hence Cov(X, Y)² ≤ σX² σY².

  13. or ρXY² = Cov(X, Y)² / (σX² σY²) ≤ 1, i.e. −1 ≤ ρXY ≤ 1. Note ρXY = ±1 if and only if g(bmin) = E[((Y − μY) − bmin(X − μX))²] = 0. This will be true if (Y − μY) − bmin(X − μX) = 0 with probability 1, i.e. Y = a + bX with b = bmin and a = μY − bmin μX.

  14. Summary ρXY = ±1 if and only if there exist a and b such that Y = a + bX, where ρXY = +1 if b > 0 and ρXY = −1 if b < 0.
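
A quick numerical check of the summary (the values of a and b are my own illustrative choices): when Y = a + bX exactly, the sample correlation coefficient comes out as +1 or −1 according to the sign of b.

```python
import numpy as np

# Illustrative check: exact linear relations give correlation +1 or -1.
rng = np.random.default_rng(2)
x = rng.normal(size=10_000)

for a, b in [(3.0, 2.0), (3.0, -2.0)]:
    y = a + b * x
    print(f"b = {b:+.1f}  ->  rho_XY = {np.corrcoef(x, y)[0, 1]:+.6f}")
```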

  15. Tchebychev’s Inequality Let X denote a random variable with mean μ = E(X) and variance Var(X) = E[(X − μ)²] = σ². Then for any k > 0, P[|X − μ| ≥ kσ] ≤ 1/k². Note: σ = √Var(X) is called the standard deviation of X.

  16. Proof (continuous case) σ² = ∫ (x − μ)² f(x) dx ≥ ∫_{|x−μ| ≥ kσ} (x − μ)² f(x) dx ≥ (kσ)² ∫_{|x−μ| ≥ kσ} f(x) dx = k²σ² P[|X − μ| ≥ kσ].

  17. thus k²σ² P[|X − μ| ≥ kσ] ≤ σ², or P[|X − μ| ≥ kσ] ≤ 1/k², hence the result.
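
A hedged simulation check of Tchebychev's inequality (the exponential distribution and the values of k are arbitrary choices, not from the slides): for any distribution with finite variance the tail probability P[|X − μ| ≥ kσ] should not exceed 1/k².

```python
import numpy as np

# Illustrative check of the bound on one particular distribution.
rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0, size=1_000_000)   # mu = 1, sigma = 1
mu, sigma = x.mean(), x.std()

for k in (1.5, 2.0, 3.0):
    tail = np.mean(np.abs(x - mu) >= k * sigma)
    print(f"k = {k}:  P[|X - mu| >= k*sigma] ~ {tail:.4f}  <=  1/k^2 = {1 / k**2:.4f}")
```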

  18. Some Rules for Expectation

  19. E[Xi] = ∫ … ∫ xi f(x1, …, xn) dx1 … dxn = ∫ xi fi(xi) dxi, where fi is the marginal density of Xi. Thus you can calculate E[Xi] either from the joint distribution of X1, …, Xn or the marginal distribution of Xi. Proof: integrating f over the remaining variables x1, …, xi−1, xi+1, …, xn gives the marginal density fi(xi).

  20. The Linearity property E[a1X1 + a2X2 + … + anXn] = a1E[X1] + a2E[X2] + … + anE[Xn] for any constants a1, …, an. Proof: the integral (or sum) defining the expectation splits term by term, and the constants factor out.

  21. (The Multiplicative property) Suppose X1, …, Xq are independent of Xq+1, …, Xk. Then E[g(X1, …, Xq) h(Xq+1, …, Xk)] = E[g(X1, …, Xq)] E[h(Xq+1, …, Xk)]. In the simple case when k = 2, if X and Y are independent then E[XY] = E[X] E[Y].

  22. Proof: By independence the joint density factors, f(x1, …, xk) = f1(x1, …, xq) f2(xq+1, …, xk), so the multiple integral defining the expectation of the product separates into the product of the two expectations.
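
An illustrative simulation of the multiplicative property (the particular distributions are my own choices): for independent X and Y the sample average of XY matches the product of the sample averages, while for dependent variables it generally does not.

```python
import numpy as np

# Illustrative check of E[XY] = E[X] E[Y] under independence.
rng = np.random.default_rng(4)
n = 1_000_000

x = rng.gamma(shape=2.0, size=n)      # independent of y
y = rng.normal(loc=3.0, size=n)
print("independent:", np.mean(x * y), "vs", x.mean() * y.mean())

z = x + rng.normal(size=n)            # z depends on x, so the equality fails
print("dependent  :", np.mean(x * z), "vs", x.mean() * z.mean())
```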

  23. Some Rules for Variance

  24. Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y). Proof Var(X + Y) = E[((X − μX) + (Y − μY))²] = E[(X − μX)²] + 2E[(X − μX)(Y − μY)] + E[(Y − μY)²]. Thus Var(X + Y) = Var(X) + 2 Cov(X, Y) + Var(Y).

  25. Note: If X and Y are independent, then Cov(X, Y) = 0 and hence Var(X + Y) = Var(X) + Var(Y).
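
A short numerical sketch of these variance rules (distributions chosen only for illustration): Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y) in general, and the covariance term drops out when X and Y are independent.

```python
import numpy as np

# Illustrative check of the variance-of-a-sum rule.
rng = np.random.default_rng(5)
n = 500_000

x = rng.normal(scale=2.0, size=n)
y = 0.7 * x + rng.normal(size=n)                  # dependent on x
cov = np.mean((x - x.mean()) * (y - y.mean()))

print("Var(X+Y)           :", np.var(x + y))
print("Var(X)+Var(Y)+2Cov :", np.var(x) + np.var(y) + 2 * cov)

y_ind = rng.normal(size=n)                        # independent of x
print("independent case   :", np.var(x + y_ind), "vs", np.var(x) + np.var(y_ind))
```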

  26. Definition: For any two random variables X and Y define the correlation coefficient ρXY to be ρXY = Cov(X, Y) / (σX σY). If X and Y are independent then Cov(X, Y) = 0 and ρXY = 0.

  27. Proof If X and Y are independent then, by the multiplicative property, Cov(X, Y) = E[(X − μX)(Y − μY)] = E[X − μX] E[Y − μY] = 0 · 0 = 0. Thus ρXY = 0.

  28. Some Applications Let X1, …, Xn be n mutually independent random variables each having mean μ and standard deviation σ (variance σ²). Let X̄ = (X1 + … + Xn)/n. Then E[X̄] = (E[X1] + … + E[Xn])/n = μ.

  29. Also Var(X̄) = (Var(X1) + … + Var(Xn))/n² = σ²/n, or σX̄ = σ/√n. Thus E[X̄] = μ and Var(X̄) = σ²/n. Hence the distribution of X̄ is centered at μ and becomes more and more compact about μ as n increases.
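
A simulation sketch of this application (the values of μ, σ and the normal model are illustrative assumptions): the sample mean of n independent draws has mean μ and variance σ²/n, so its spread shrinks as n grows.

```python
import numpy as np

# Illustrative check that Var(sample mean) = sigma^2 / n.
rng = np.random.default_rng(6)
mu, sigma = 5.0, 2.0

for n in (10, 100, 1000):
    xbar = rng.normal(loc=mu, scale=sigma, size=(20_000, n)).mean(axis=1)
    print(f"n = {n:4d}:  mean(xbar) = {xbar.mean():.3f},  "
          f"var(xbar) = {xbar.var():.4f}  (sigma^2/n = {sigma**2 / n:.4f})")
```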

  30. The Law of Large Numbers Let X1, …, Xn be n mutually independent random variables each having mean μ. Let X̄ = (X1 + … + Xn)/n. Then for any δ > 0 (no matter how small), P[|X̄ − μ| < δ] → 1 as n → ∞.

  31. Proof We will use Tchebychev’s inequality, which states P[|X − μ| ≥ kσ] ≤ 1/k² for any random variable X. Now E[X̄] = μ and Var(X̄) = σ²/n, i.e. σX̄ = σ/√n (where σ² is the common variance of the Xi).

  32. Thus, applying Tchebychev’s inequality to X̄ with kσX̄ = δ (i.e. k = δ√n/σ), P[|X̄ − μ| ≥ δ] ≤ σ²/(nδ²) → 0 as n → ∞. Thus P[|X̄ − μ| < δ] → 1 as n → ∞.

  33. A Special case Let X1, …, Xn be n mutually independent random variables each having a Bernoulli distribution with parameter p, i.e. Xi = 1 if the i-th trial is a success and Xi = 0 otherwise, so E[Xi] = p. Then X̄ is the proportion of successes in the n trials, and the Law of Large Numbers states P[|X̄ − p| < δ] → 1 as n → ∞.

  34. Thus the Law of Large Numbers states that the proportion of successes X̄ converges to the probability of success p. Some people misinterpret this to mean that if the proportion of successes is currently lower than p, then the proportion of successes in the future will have to be larger than p to counter this and ensure that the Law of Large Numbers holds true. Of course, if in the infinite future the proportion of successes is p, then this is enough to ensure that the Law of Large Numbers holds true.
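
An illustrative simulation of this special case (p = 0.3 and the trial counts are arbitrary choices): the running proportion of successes settles toward p as the number of Bernoulli trials grows.

```python
import numpy as np

# Illustrative Law of Large Numbers demonstration with Bernoulli(p) trials.
rng = np.random.default_rng(7)
p = 0.3
flips = rng.binomial(1, p, size=100_000)
running_prop = np.cumsum(flips) / np.arange(1, flips.size + 1)

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"after {n:6d} trials: proportion of successes = {running_prop[n - 1]:.4f}")
```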

  35. The mean and variance of a Binomial Random variable We have already computed these by other methods: • Using the probability function p(x). • Using the moment generating function mX(t). Suppose that we have observed n independent repetitions of a Bernoulli trial. Let X1, …, Xn be n mutually independent random variables each having a Bernoulli distribution with parameter p and defined by Xi = 1 if the i-th repetition is a success and Xi = 0 otherwise.

  36. Now X = X1 + … + Xn has a Binomial distribution with parameters n and p. X is the total number of successes in the n repetitions. Hence E[X] = E[X1] + … + E[Xn] = np and, by independence, Var(X) = Var(X1) + … + Var(Xn) = np(1 − p), since E[Xi] = p and Var(Xi) = p(1 − p).
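
A quick simulation check of these formulas (n, p and the sample size are illustrative choices): the sample mean and variance of Binomial(n, p) draws are close to np and np(1 − p).

```python
import numpy as np

# Illustrative check of the binomial mean and variance.
rng = np.random.default_rng(8)
n, p = 40, 0.25

x = rng.binomial(n, p, size=500_000)
print("mean    :", x.mean(), "vs n*p       =", n * p)
print("variance:", x.var(),  "vs n*p*(1-p) =", n * p * (1 - p))
```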

  37. The mean and variance of a Hypergeometric distribution The hypergeometric distribution arises when we sample without replacement n objects from a population of N = a + b objects. The population is divided into two groups (group A and group B). Group A contains a objects while group B contains b objects. Let X denote the number of objects in the sample of n that come from group A. The probability function of X is: p(x) = C(a, x) C(b, n − x) / C(a + b, n), where C(m, k) denotes the binomial coefficient “m choose k”.

  38. Let X1, …, Xn be n random variables defined by Xi = 1 if the i-th object selected comes from group A and Xi = 0 otherwise. Then X = X1 + … + Xn. Proof (of the mean and variance of X):

  39. Since each object in the population is equally likely to be the i-th one selected, P[Xi = 1] = a/N and P[Xi = 0] = b/N, where N = a + b. Therefore E[Xi] = 1·(a/N) + 0·(b/N) = a/N and E[Xi²] = a/N.

  40. Thus E[X] = E[X1] + … + E[Xn] = n(a/N) = na/(a + b).

  41. Also Var(Xi) = E[Xi²] − (E[Xi])² = a/N − (a/N)² = (a/N)(b/N), and Var(X) = Σ Var(Xi) + ΣΣ(i ≠ j) Cov(Xi, Xj). We need to also calculate Cov(Xi, Xj) for i ≠ j. Note: the Xi are not independent, since the sampling is without replacement.

  42. Thus E[XiXj] = P[Xi = 1, Xj = 1] = a(a − 1) / [N(N − 1)] and Cov(Xi, Xj) = E[XiXj] − E[Xi]E[Xj] = a(a − 1)/[N(N − 1)] − (a/N)². Note: this covariance is negative.

  43. Thus Cov(Xi, Xj) = −(a/N)(b/N)/(N − 1) for every pair i ≠ j, and there are n(n − 1) such ordered pairs.

  44. Thus Var(X) = n(a/N)(b/N) + n(n − 1)[−(a/N)(b/N)/(N − 1)], with N = a + b and a/N, b/N the proportions of the population in groups A and B.

  45. Thus Var(X) = n(a/N)(b/N)[1 − (n − 1)/(N − 1)] = n(a/N)(b/N)(N − n)/(N − 1).

  46. Thus if X has a hypergeometric distribution with parameters a, b and n then E[X] = na/(a + b) and Var(X) = n [a/(a + b)] [b/(a + b)] (a + b − n)/(a + b − 1).
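
A simulation sketch of the hypergeometric result (a, b and n chosen only for illustration, using numpy's hypergeometric sampler): the simulated mean and variance match na/N and n(a/N)(b/N)(N − n)/(N − 1).

```python
import numpy as np

# Illustrative check: a objects in group A, b in group B, sample n without replacement.
rng = np.random.default_rng(9)
a, b, n = 12, 18, 10
N = a + b

x = rng.hypergeometric(ngood=a, nbad=b, nsample=n, size=500_000)
mean_formula = n * a / N
var_formula = n * (a / N) * (b / N) * (N - n) / (N - 1)

print("mean    :", x.mean(), "vs", mean_formula)
print("variance:", x.var(),  "vs", var_formula)
```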

  47. The mean and variance of a Negative Binomial distribution The Negative Binomial distribution arises when we repeat a Bernoulli trial until k successes (S) occur. Then X = the trial on which the k-th success occurred. The probability function of X is: p(x) = C(x − 1, k − 1) p^k (1 − p)^(x − k) for x = k, k + 1, … Let X1 = the number of the trial on which the 1st success occurred, and Xi = the number of trials after the (i − 1)-st success on which the i-th success occurred (i ≥ 2).

  48. Then X = X1 + … + Xk, and X1, …, Xk are mutually independent; the Xi each have a geometric distribution with parameter p, so E[Xi] = 1/p and Var(Xi) = (1 − p)/p². Hence E[X] = k/p and Var(X) = k(1 − p)/p².
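
A hedged simulation of this result (k and p are illustrative choices): X is simulated as a sum of k independent geometric waiting times, and its sample mean and variance are compared with k/p and k(1 − p)/p².

```python
import numpy as np

# Illustrative check of the negative binomial mean and variance.
rng = np.random.default_rng(10)
k, p = 5, 0.3

# rng.geometric(p) counts the trials up to and including the first success,
# so summing k of them gives the trial on which the k-th success occurs.
x = rng.geometric(p, size=(500_000, k)).sum(axis=1)

print("mean    :", x.mean(), "vs k/p        =", k / p)
print("variance:", x.var(),  "vs k(1-p)/p^2 =", k * (1 - p) / p**2)
```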
