
Stochastic Regressors



Presentation Transcript


  1. Stochastic Regressors If the X variables are random, then it is difficult to prove unbiasedness. For example, for the OLS estimator we have the usual decomposition of the estimator into the true coefficient plus a term involving both the X variables and the errors (sketched below). However, we cannot take the X variables outside the expectations operator, since they are themselves random variables, so the expectation of that term need not be zero.
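
The equations on this slide are not reproduced in the transcript; a standard textbook sketch of the argument, assuming the simple two-variable model \(Y_i = \beta_1 + \beta_2 X_i + u_i\), is:

\[
\hat{\beta}_2 = \beta_2 + \frac{\sum_i (X_i - \bar{X})\,u_i}{\sum_i (X_i - \bar{X})^2}
\qquad\Longrightarrow\qquad
E(\hat{\beta}_2) = \beta_2 + E\!\left[\frac{\sum_i (X_i - \bar{X})\,u_i}{\sum_i (X_i - \bar{X})^2}\right].
\]

Because the X terms are random, the expectation of the ratio is not the ratio of expectations, so the second term cannot be shown to vanish and unbiasedness does not follow.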

  2. To deal with stochastic regressors we need to introduce the idea of a probability limit. The probability limit of the estimator equals the true value if, for a sufficiently large sample, the probability that the estimator differs from the true value by more than any arbitrarily small number tends to zero. The shorthand notation for this is written out below, and in these circumstances we say that the estimator is consistent.
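
In symbols (a standard statement of the definition above, with \(\hat{\theta}_N\) denoting the estimator from a sample of size N and \(\theta\) the true value):

\[
\operatorname{plim}_{N\to\infty}\hat{\theta}_N = \theta
\quad\Longleftrightarrow\quad
\lim_{N\to\infty}\Pr\!\left(\left|\hat{\theta}_N - \theta\right| > \varepsilon\right) = 0
\quad\text{for every } \varepsilon > 0 .
\]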

  3. The diagram on this slide shows what must happen for an estimator to be consistent: the true value is 0.5 and, as the sample size gets large, the PDF of the estimator collapses towards a vertical line centred on this value.

  4. The big advantage of probability limits is that they obey certain rules which do not hold for the expectations operator. Using these rules it is often possible to show that an estimator is consistent even if we cannot show that it is unbiased.
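
The rules in question are the standard ones (often attributed to Slutsky). For two estimators \(a_N\) and \(b_N\) with finite probability limits:

\[
\operatorname{plim}(a_N + b_N) = \operatorname{plim} a_N + \operatorname{plim} b_N,
\qquad
\operatorname{plim}(a_N b_N) = \operatorname{plim} a_N \cdot \operatorname{plim} b_N,
\qquad
\operatorname{plim}\!\left(\frac{a_N}{b_N}\right) = \frac{\operatorname{plim} a_N}{\operatorname{plim} b_N}
\quad (\operatorname{plim} b_N \neq 0),
\]

whereas in general \(E(a_N b_N) \neq E(a_N)E(b_N)\) and \(E(a_N/b_N) \neq E(a_N)/E(b_N)\).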

  5. For the OLS estimator with stochastic regressors we assume the following conditions hold: the sample covariance of X and u and the sample variance of X both converge in probability to limits (written out below), and both of these limits are assumed to be constant.
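
A standard way to write these assumptions (using \(\sigma_{Xu}\) for the limit of the sample covariance of X and u, and \(\sigma_X^2\) for the limit of the sample variance of X, both constants):

\[
\operatorname{plim}\frac{1}{N}\sum_i (X_i - \bar{X})\,u_i = \sigma_{Xu},
\qquad
\operatorname{plim}\frac{1}{N}\sum_i (X_i - \bar{X})^2 = \sigma_X^2 .
\]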

  6. It therefore follows that the probability limit of the OLS estimator equals the true coefficient plus the ratio of these two constants (sketched below). Since σ²X ≠ 0 by assumption, the OLS estimator is consistent provided that the covariance of the X variable and the error term u is equal to zero.
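
Applying the plim rules to the decomposition from slide 1 gives (a standard derivation under the assumptions of slide 5):

\[
\operatorname{plim}\hat{\beta}_2
= \beta_2 + \frac{\operatorname{plim}\frac{1}{N}\sum_i (X_i - \bar{X})\,u_i}{\operatorname{plim}\frac{1}{N}\sum_i (X_i - \bar{X})^2}
= \beta_2 + \frac{\sigma_{Xu}}{\sigma_X^2},
\]

which equals \(\beta_2\) if and only if \(\sigma_{Xu} = 0\).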

  7. An estimator may be biased in small samples but still be consistent in large samples. Suppose, for example, that all the CLRM assumptions hold but we use the following estimator rather than OLS (sketched below). This estimator is clearly biased, because its expectation differs from the true value by a term involving 1/N. However, because plim(1/N) = 0, the estimator will still be consistent.
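
The estimator on the original slide is not reproduced in the transcript; a textbook example consistent with the plim(1/N) = 0 remark is the OLS estimator shifted by 1/N:

\[
\tilde{\beta}_2 = \hat{\beta}_2 + \frac{1}{N},
\qquad
E(\tilde{\beta}_2) = \beta_2 + \frac{1}{N} \neq \beta_2,
\qquad
\operatorname{plim}\tilde{\beta}_2 = \operatorname{plim}\hat{\beta}_2 + \operatorname{plim}\frac{1}{N} = \beta_2 + 0 = \beta_2 .
\]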

  8. Reasons why the X variable may be correlated with the error:
  • The X variable is measured with error – ‘the errors in variables model’.
  • The equation is one equation taken from a set of simultaneous equations – ‘the simultaneous equations model’.
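
To see why measurement error creates this correlation, here is a standard sketch of the errors-in-variables argument (notation assumed, not from the original slides): suppose the true model is \(Y_i = \beta_1 + \beta_2 X_i^* + \varepsilon_i\) but we only observe \(X_i = X_i^* + v_i\), with the measurement error \(v_i\) independent of \(X_i^*\) and \(\varepsilon_i\). Then

\[
Y_i = \beta_1 + \beta_2 X_i + (\varepsilon_i - \beta_2 v_i),
\qquad
\operatorname{cov}(X_i,\; \varepsilon_i - \beta_2 v_i) = -\beta_2 \sigma_v^2 ,
\]

so the observed regressor is correlated with the composite error term whenever \(\beta_2 \neq 0\) and \(\sigma_v^2 > 0\).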

  9. It is tempting to try to test cov(Xi,ui) = 0 by looking at the sample covariance of X and the regression residuals but… this will be equal to zero by construction.
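
This follows from the OLS normal equations: by construction the residuals are orthogonal to the regressors (and sum to zero when an intercept is included), so the sample covariance of X and the residuals is identically zero whatever the true value of cov(Xi, ui):

\[
\sum_i (X_i - \bar{X})\,\hat{u}_i = 0
\qquad\Longrightarrow\qquad
\widehat{\operatorname{cov}}(X, \hat{u}) = \frac{1}{N}\sum_i (X_i - \bar{X})(\hat{u}_i - \bar{\hat{u}}) = 0 .
\]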
