The proof that $b_2$ is an unbiased estimator of $\beta_2$ (Ref. Gujarati (2003), pp. 100-101)

The least squares formula (estimator) for the slope in the simple regression case is

$$ b_2 = \frac{\sum x_i y_i}{\sum x_i^2} = \sum k_i (Y_i - \bar Y) = \sum k_i Y_i - \bar Y \sum k_i = \sum k_i Y_i, \qquad k_i = \frac{x_i}{\sum x_i^2}, $$

where $x_i = X_i - \bar X$ and $y_i = Y_i - \bar Y$. The weights satisfy $\sum k_i = 0$ and $\sum k_i X_i = 1$.

Substitute the PRF $Y_i = \beta_1 + \beta_2 X_i + u_i$ into the $b_2$ formula:

$$ b_2 = \sum k_i (\beta_1 + \beta_2 X_i + u_i) = \beta_1 \sum k_i + \beta_2 \sum k_i X_i + \sum k_i u_i = \beta_2 + \sum k_i u_i. $$

Take the expectation on both sides, using the assumptions $E(u_i) = 0$, $E(u_i u_j) = 0$ for $i \ne j$, and $\operatorname{Var}(u_i) = \sigma^2$:

$$ E(b_2) = \beta_2 + \sum k_i E(u_i) = \beta_2 \quad \text{(unbiased estimator).} $$
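As a quick numerical illustration (not part of Gujarati's proof), the following minimal sketch simulates the PRF for assumed values $\beta_1 = 2$, $\beta_2 = 0.5$, $\sigma = 1$ and a fixed grid of $X$ values, and checks that the average of $b_2 = \sum k_i Y_i$ across many samples is approximately $\beta_2$:

```python
# Hypothetical Monte Carlo check: simulate Y_i = beta1 + beta2*X_i + u_i
# repeatedly and confirm the OLS slope b2 = sum(k_i * Y_i) averages to beta2.
import numpy as np

rng = np.random.default_rng(0)
beta1, beta2, sigma = 2.0, 0.5, 1.0         # assumed "true" parameters
X = np.linspace(1, 10, 20)                  # fixed regressor values (assumed)
x = X - X.mean()                            # deviations x_i = X_i - Xbar
k = x / np.sum(x**2)                        # weights k_i = x_i / sum(x_i^2)

slopes = []
for _ in range(20000):
    u = rng.normal(0.0, sigma, size=X.size) # E(u_i) = 0, Var(u_i) = sigma^2
    Y = beta1 + beta2 * X + u               # population regression function
    slopes.append(np.sum(k * Y))            # b2 = sum(k_i * Y_i)

print(np.mean(slopes))                      # ~0.5, i.e. E(b2) = beta2
```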
The proof of the variance of $b_2$ (Ref. Gujarati (2003), pp. 101-102)

(By the same argument, $E(b_1) = \beta_1$, so the intercept estimator is also unbiased.)

Using the assumptions $E(u_i) = 0$, $E(u_i u_j) = 0$ for $i \ne j$, and $\operatorname{Var}(u_i) = \sigma^2$:

$$
\begin{aligned}
\operatorname{Var}(b_2) &= E[b_2 - E(b_2)]^2 = E[b_2 - \beta_2]^2 = E\Big[\sum k_i u_i\Big]^2 \\
&= E[k_1^2 u_1^2 + k_2^2 u_2^2 + k_3^2 u_3^2 + \cdots + 2k_1 k_2 u_1 u_2 + 2k_1 k_3 u_1 u_3 + \cdots] \\
&= k_1^2 E(u_1^2) + k_2^2 E(u_2^2) + k_3^2 E(u_3^2) + \cdots + 2k_1 k_2 E(u_1 u_2) + 2k_1 k_3 E(u_1 u_3) + \cdots \\
&= k_1^2 \sigma^2 + k_2^2 \sigma^2 + k_3^2 \sigma^2 + \cdots + 0 + 0 + 0 + \cdots \\
&= \sigma^2 \sum k_i^2 = \sigma^2 \sum \Big(\frac{x_i}{\sum x_i^2}\Big)^2 = \frac{\sigma^2 \sum x_i^2}{(\sum x_i^2)^2} = \frac{\sigma^2}{\sum x_i^2}.
\end{aligned}
$$
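A companion sketch (same assumed parameters as above, not from the text) compares the simulated variance of $b_2$ with the closed form $\sigma^2 / \sum x_i^2$:

```python
# Hypothetical check: the Monte Carlo variance of b2 should match
# the theoretical value sigma^2 / sum(x_i^2).
import numpy as np

rng = np.random.default_rng(1)
beta1, beta2, sigma = 2.0, 0.5, 1.0        # assumed parameters
X = np.linspace(1, 10, 20)
x = X - X.mean()
k = x / np.sum(x**2)

b2_draws = np.array([
    np.sum(k * (beta1 + beta2 * X + rng.normal(0, sigma, X.size)))
    for _ in range(50000)
])

print(b2_draws.var())                      # simulated Var(b2)
print(sigma**2 / np.sum(x**2))             # theoretical sigma^2 / sum(x_i^2)
```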
The proof of the covariance of $b_1$ and $b_2$: $\operatorname{Cov}(b_1, b_2)$ (Ref. Gujarati (2003), p. 102)

By definition, $b_1 = \bar Y - b_2 \bar X$, and hence $E(b_1) = \bar Y - \beta_2 \bar X$ (with $\bar Y$ taken as given). Then

$$
\begin{aligned}
\operatorname{Cov}(b_1, b_2) &= E\{[b_1 - E(b_1)][b_2 - E(b_2)]\} \\
&= E\{[(\bar Y - b_2 \bar X) - (\bar Y - \beta_2 \bar X)][b_2 - \beta_2]\} \\
&= E\{[-\bar X (b_2 - \beta_2)][b_2 - \beta_2]\} \\
&= -\bar X\, E(b_2 - \beta_2)^2 \\
&= -\bar X\, \frac{\sigma^2}{\sum x_i^2}.
\end{aligned}
$$
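The same simulation setup (assumed parameters, not from the text) can be used to check the covariance formula by drawing many $(b_1, b_2)$ pairs:

```python
# Hypothetical check: sample covariance of (b1, b2) across simulated samples
# should match -Xbar * sigma^2 / sum(x_i^2).
import numpy as np

rng = np.random.default_rng(2)
beta1, beta2, sigma = 2.0, 0.5, 1.0        # assumed parameters
X = np.linspace(1, 10, 20)
x = X - X.mean()

b1s, b2s = [], []
for _ in range(50000):
    Y = beta1 + beta2 * X + rng.normal(0, sigma, X.size)
    b2 = np.sum(x * (Y - Y.mean())) / np.sum(x**2)   # OLS slope
    b1 = Y.mean() - b2 * X.mean()                    # OLS intercept
    b1s.append(b1)
    b2s.append(b2)

print(np.cov(b1s, b2s)[0, 1])                   # simulated Cov(b1, b2)
print(-X.mean() * sigma**2 / np.sum(x**2))      # theoretical value
```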
The proof of the minimum-variance property of OLS (Ref. Gujarati (2003), pp. 104-105)

The OLS estimator of $\beta_2$ is

$$ b_2 = \frac{\sum x_i y_i}{\sum x_i^2} = \sum k_i Y_i. $$

Now suppose another linear estimator of $\beta_2$ is $b_2^* = \sum w_i Y_i$, and assume $w_i \ne k_i$. Since

$$ b_2^* = \sum w_i Y_i = \sum w_i (\beta_1 + \beta_2 X_i + u_i) = \beta_1 \sum w_i + \beta_2 \sum w_i X_i + \sum w_i u_i, $$

taking the expectation of $b_2^*$ gives

$$ E(b_2^*) = \beta_1 \sum w_i + \beta_2 \sum w_i X_i, \qquad \text{since } E(u_i) = 0. $$

For $b_2^*$ to be unbiased, i.e. $E(b_2^*) = \beta_2$, we must have $\sum w_i = 0$ and $\sum w_i X_i = 1$.
If $b_2^*$ is an unbiased estimator, then $b_2^* = \sum w_i Y_i = \sum w_i(\beta_1 + \beta_2 X_i + u_i) = \beta_2 + \sum w_i u_i$, so $(b_2^* - \beta_2) = \sum w_i u_i$. The variance of $b_2^*$ is therefore

$$
\begin{aligned}
\operatorname{Var}(b_2^*) &= E[b_2^* - E(b_2^*)]^2 = E[b_2^* - \beta_2]^2 = E\Big(\sum w_i u_i\Big)^2 = \sum w_i^2 E(u_i^2) = \sigma^2 \sum w_i^2 \\
&= \sigma^2 \sum [(w_i - k_i) + k_i]^2 \\
&= \sigma^2 \sum (w_i - k_i)^2 + \sigma^2 \sum k_i^2 + 2\sigma^2 \sum (w_i - k_i) k_i \\
&= \sigma^2 \sum (w_i - k_i)^2 + \sigma^2 \sum k_i^2, \qquad \text{since } \sum (w_i - k_i) k_i = 0 \ \text{(using } \sum w_i = 0,\ \sum w_i X_i = 1\text{)} \\
&= \sigma^2 \sum (w_i - k_i)^2 + \frac{\sigma^2}{\sum x_i^2} \\
&= \sigma^2 \sum (w_i - k_i)^2 + \operatorname{Var}(b_2).
\end{aligned}
$$

Therefore $\operatorname{Var}(b_2^*) \ge \operatorname{Var}(b_2)$, with equality only if $w_i = k_i$. Hence the OLS estimator $b_2$ has the minimum variance among linear unbiased estimators; if it were not minimum-variance, OLS would not be the best (BLUE).
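A minimal sketch of this comparison (not from the text): construct alternative weights $w_i = k_i + d_i$ where the $d_i$ are chosen (here, hypothetically, as the residuals of $X_i^2$ regressed on a constant and $X_i$) so that $\sum d_i = 0$ and $\sum d_i X_i = 0$, keeping $b_2^*$ unbiased, and then compare simulated variances:

```python
# Hypothetical illustration of the Gauss-Markov result: any linear unbiased
# estimator b2* = sum(w_i * Y_i) with w_i != k_i has larger variance than OLS.
import numpy as np

rng = np.random.default_rng(3)
beta1, beta2, sigma = 2.0, 0.5, 1.0            # assumed parameters
X = np.linspace(1, 10, 20)
x = X - X.mean()
k = x / np.sum(x**2)                           # OLS weights

# d_i: X_i^2 with its components along 1 and X projected out, so that
# sum(d_i) = 0 and sum(d_i * X_i) = 0 and unbiasedness is preserved.
Z = np.column_stack([np.ones_like(X), X])
d = X**2 - Z @ np.linalg.lstsq(Z, X**2, rcond=None)[0]
w = k + 0.002 * d                              # any w != k will do

b2_ols, b2_alt = [], []
for _ in range(50000):
    Y = beta1 + beta2 * X + rng.normal(0, sigma, X.size)
    b2_ols.append(np.sum(k * Y))
    b2_alt.append(np.sum(w * Y))

print(np.mean(b2_alt))                         # still ~beta2 (unbiased)
print(np.var(b2_ols), np.var(b2_alt))          # Var(b2*) > Var(b2)
print(sigma**2 * np.sum(w**2))                 # theoretical sigma^2 * sum(w_i^2)
```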
The proof that $\hat\sigma^2$ is an unbiased estimator of $\sigma^2$ (Ref. Gujarati (2003), pp. 102-103)

We want an estimator of the error variance $\sigma^2 = \operatorname{Var}(u_i) = E(u_i^2)$; since the $u_i$ are unobserved, it must be built from the residuals $e_i$ (i.e. $\hat u_i$).

Since $Y_i = \beta_1 + \beta_2 X_i + u_i$ and $\bar Y = \beta_1 + \beta_2 \bar X + \bar u$, in deviation form

$$ y_i = \beta_2 x_i + (u_i - \bar u). $$

Also, $e_i = Y_i - b_1 - b_2 X_i$ and $0 = \bar Y - b_1 - b_2 \bar X$, so $e_i = y_i - b_2 x_i$. Hence

$$ e_i = \beta_2 x_i + (u_i - \bar u) - b_2 x_i = -(b_2 - \beta_2) x_i + (u_i - \bar u). $$

Take squares and sum on both sides:

$$ \sum e_i^2 = (b_2 - \beta_2)^2 \sum x_i^2 + \sum (u_i - \bar u)^2 - 2 (b_2 - \beta_2) \sum x_i (u_i - \bar u). $$

Take the expectation on both sides:

$$ E\Big(\sum e_i^2\Big) = \underbrace{E[(b_2 - \beta_2)^2] \sum x_i^2}_{\text{I}} + \underbrace{E\Big[\sum (u_i - \bar u)^2\Big]}_{\text{II}} \underbrace{- 2\, E\Big[(b_2 - \beta_2) \sum x_i (u_i - \bar u)\Big]}_{\text{III}}. $$
Using the OLS assumptions $E(u_i) = 0$, $E(u_i^2) = \sigma^2$, and $E(u_i u_j) = 0$ for $i \ne j$, the three terms evaluate to

$$ \text{I} = \sigma^2, \qquad \text{II} = (n-1)\sigma^2, \qquad \text{III} = -2\sigma^2. $$

Substituting these three terms, I, II, and III, into the equation gives

$$ E\Big(\sum e_i^2\Big) = \sigma^2 + (n-1)\sigma^2 - 2\sigma^2 = (n-2)\sigma^2. $$

And if we define $\hat\sigma^2 = \sum e_i^2 / (n-2)$, the expected value is

$$ E(\hat\sigma^2) = \frac{E(\sum e_i^2)}{n-2} = \frac{(n-2)\sigma^2}{n-2} = \sigma^2, $$

so $\hat\sigma^2$ is an unbiased estimator of $\sigma^2$.
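A final sketch (same assumed parameters as before, not from the text) checks this degrees-of-freedom correction: averaging $\hat\sigma^2 = \sum e_i^2 / (n-2)$ over many simulated samples should recover $\sigma^2$:

```python
# Hypothetical check: the mean of sigma_hat^2 = sum(e_i^2)/(n-2) across
# simulated samples should reproduce the true error variance sigma^2.
import numpy as np

rng = np.random.default_rng(4)
beta1, beta2, sigma = 2.0, 0.5, 1.0        # assumed parameters
X = np.linspace(1, 10, 20)
x = X - X.mean()
n = X.size

sigma2_hats = []
for _ in range(20000):
    Y = beta1 + beta2 * X + rng.normal(0, sigma, n)
    b2 = np.sum(x * (Y - Y.mean())) / np.sum(x**2)
    b1 = Y.mean() - b2 * X.mean()
    e = Y - b1 - b2 * X                    # OLS residuals
    sigma2_hats.append(np.sum(e**2) / (n - 2))

print(np.mean(sigma2_hats))                # ~1.0 = sigma^2
```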