
Lecture Ten



Presentation Transcript


  1. Lecture Ten

  2. Where Do We Go From Here? • Contingency Tables • Regression: Properties, Assumptions, Violations, Diagnostics, Modeling • ANOVA • Count • Probability

  3. Lecture • Part I: Regression • properties of OLS estimators • assumptions of OLS • pathologies of OLS • diagnostics for OLS • Part II: Experimental Method

  4. Properties of OLS Estimators • Unbiased: E[b̂] = b • Note: y(i) = a + b*x(i) + e(i) • And summing over observations i and dividing by n: ȳ = a + b*x̄ + ē • Recall, the estimator for the slope is: b̂ = Σ[x(i) - x̄][y(i) - ȳ] / Σ[x(i) - x̄]²

  5. Subtracting the averaged equation from the equation for each observation gives y(i) - ȳ = b*[x(i) - x̄] + [e(i) - ē]. And substituting in this expression, the expression for the estimator becomes: b̂ = b + Σ[x(i) - x̄][e(i) - ē] / Σ[x(i) - x̄]² • And taking expectations: E[b̂] = b • Note: this uses E[e] = 0 and the independence of the error from x.

  6. So the variance of the slope estimator is VAR[b̂] = σ² / Σ[x(i) - x̄]² • The dispersion in the estimate for the slope depends upon the unexplained variance, and inversely on the dispersion in x. • In practice the estimate of σ², the unexplained mean square, is used for the variance of e.
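
A quick way to make slides 4–6 concrete is a simulation. The sketch below is not from the lecture; the sample size, parameter values, and variable names are my assumptions. It draws repeated samples from y(i) = a + b*x(i) + e(i) and checks that the average slope estimate is near b and that its variance is near σ² / Σ[x(i) - x̄]².

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, sigma, n, reps = 1.0, 2.0, 3.0, 50, 20_000

x = rng.uniform(0, 10, size=n)            # regressor held fixed across replications
sxx = np.sum((x - x.mean()) ** 2)

slopes = np.empty(reps)
for r in range(reps):
    e = rng.normal(0, sigma, size=n)      # errors satisfying the assumptions
    y = a + b * x + e
    # b-hat = sum[(x - xbar)(y - ybar)] / sum[(x - xbar)^2]
    slopes[r] = np.sum((x - x.mean()) * (y - y.mean())) / sxx

print("mean of b-hat:       ", slopes.mean())   # close to b = 2.0
print("variance of b-hat:   ", slopes.var())    # close to sigma^2 / Sxx
print("theoretical variance:", sigma ** 2 / sxx)
```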

  7. Other Properties of Estimators • Efficiency: makes optimum use of the sample information to obtain estimators with minimum dispersion • Consistency: as the sample size increases, the estimator approaches the population parameter
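
Consistency can be illustrated the same way: under the assumptions above, the spread of the slope estimator shrinks roughly as 1/√n. A minimal sketch, with all values assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
for n in (20, 200, 2000):
    est = []
    for _ in range(2000):
        x = rng.uniform(0, 10, n)
        y = 1.0 + 2.0 * x + rng.normal(0, 3.0, n)
        est.append(np.sum((x - x.mean()) * (y - y.mean()))
                   / np.sum((x - x.mean()) ** 2))
    # the standard deviation of b-hat falls roughly as 1/sqrt(n)
    print(f"n={n:5d}  sd(b-hat)={np.std(est):.4f}")
```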

  8. Outline: Regression • The Assumptions of Least Squares • The Pathologies of Least Squares • Diagnostics for Least Squares

  9. Assumptions • Expected value of the error is zero, E[e] = 0 • The error is independent of the explanatory variable, E{e[x - Ex]} = 0 • The errors are independent of one another, E[e(i)e(j)] = 0, i not equal to j • The variance is homoskedastic, E[e(i)²] = E[e(j)²] • The error is normal with mean zero and variance sigma squared: e ~ N(0, σ²)

  10. 18.4 Error Variable: Required Conditions • The error e is a critical part of the regression model. • Four requirements involving the distribution of e must be satisfied. • The probability distribution of e is normal. • The mean of e is zero: E(e) = 0. • The standard deviation of e is σe for all values of x. • The set of errors associated with different values of y are all independent.

  11. The Normality of e • From the first three assumptions we have: y is normally distributed with mean E(y) = b0 + b1x and a constant standard deviation σe. • [Figure: normal curves centered at E(y|x1) = b0 + b1x1, E(y|x2) = b0 + b1x2, and E(y|x3) = b0 + b1x3; the mean value changes with x but the standard deviation remains constant.]

  12. Pathologies • Cross-section data: error variance is heteroskedastic. Example: it could vary with firm size. Consequence: all the information available is not used efficiently, and better estimates of the standard errors of the regression parameters are possible. • Time series data: errors are serially correlated, i.e., autocorrelated. Consequence: inefficiency.

  13. Lab 6: Autocorrelation?

  14. Lab Six: Durbin-Watson Statistic

  15. Genr: error = resid • Genr: errorlag1 = resid(-1) • error(t) = a + b*error(t-1) + e(t)
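
The Genr commands above are EViews; a rough numpy equivalent (an assumption, not the course's code) builds the lagged residual series, regresses error(t) on error(t-1), and computes the Durbin-Watson statistic directly:

```python
import numpy as np

def durbin_watson(resid):
    # DW = sum[(e(t) - e(t-1))^2] / sum[e(t)^2]; near 2 means no autocorrelation
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(2)
rho, T = 0.7, 200                         # AR(1) errors as an example
e = np.empty(T)
e[0] = rng.normal()
for t in range(1, T):
    e[t] = rho * e[t - 1] + rng.normal()

error, errorlag1 = e[1:], e[:-1]          # mirrors Genr error / errorlag1
slope = (np.sum((errorlag1 - errorlag1.mean()) * (error - error.mean()))
         / np.sum((errorlag1 - errorlag1.mean()) ** 2))
print("slope of error(t) on error(t-1):", slope)   # near rho = 0.7
print("Durbin-Watson:", durbin_watson(e))          # well below 2: autocorrelation
```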

  16. Pathologies (Cont.) • Explanatory variable is not independent of the error. Consequence: inconsistency, i.e., larger sample sizes do not lead to lower standard errors for the parameters, and the parameter estimates (slope etc.) are biased. • The error is not distributed normally. Example: there may be fat tails. Consequence: use of the normal may understate the true 95% confidence intervals.

  17. Pathologies (Cont.) • Multicollinearity: The independent variables may be highly correlated. As a consequence, they do not truly represent separate causal factors, but instead a common causal factor.

  18. View/open selected/one window/one group • In Group Window: View/Correlations • View/open selected/one window/one group • In Group Window: View/Multiple Graphs/Scatter/Matrix of all pairs
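
Outside EViews, the same group correlations and scatter matrix could be produced with pandas. A hypothetical sketch (the column names echo the regressions on the next slides, but the data are simulated, with the size variables deliberately correlated):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix

rng = np.random.default_rng(3)
size = rng.uniform(800, 3000, 100)        # square feet, simulated
df = pd.DataFrame({
    "bedrooms": np.round(size / 700) + rng.integers(0, 2, 100),
    "house_size01": size,
    "lot_size01": 3 * size + rng.normal(0, 500, 100),
})

print(df.corr())          # high pairwise correlations suggest multicollinearity
scatter_matrix(df)        # matrix of all pairwise scatter plots
plt.show()
```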

  19. Price = a + b*bedrooms + c*house_size01 + d*lot_size01 + e

  20. Price = a*dummy2 + b*dummy34 + c*dummy5 + d*house_size01 + e
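
To make the two specifications concrete, here is a sketch that fits both by least squares on simulated data. The variable names follow the slides, while the data-generating values are my assumptions; dummy2, dummy34, and dummy5 are taken to be indicators for 2, 3–4, and 5 bedrooms.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
bedrooms = rng.integers(2, 6, n)          # 2 to 5 bedrooms
house_size01 = rng.uniform(8, 30, n)
lot_size01 = rng.uniform(30, 120, n)
price = (20 + 5 * bedrooms + 3 * house_size01 + 0.5 * lot_size01
         + rng.normal(0, 10, n))

# Price = a + b*bedrooms + c*house_size01 + d*lot_size01 + e
X1 = np.column_stack([np.ones(n), bedrooms, house_size01, lot_size01])
coef1, *_ = np.linalg.lstsq(X1, price, rcond=None)
print("a, b, c, d:", coef1)

# Price = a*dummy2 + b*dummy34 + c*dummy5 + d*house_size01 + e
dummy2 = (bedrooms == 2).astype(float)
dummy34 = np.isin(bedrooms, (3, 4)).astype(float)
dummy5 = (bedrooms == 5).astype(float)
X2 = np.column_stack([dummy2, dummy34, dummy5, house_size01])
coef2, *_ = np.linalg.lstsq(X2, price, rcond=None)
print("dummy-specification coefficients:", coef2)
```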

  21. 18.9 Regression Diagnostics - I • The three conditions required for the validity of the regression analysis are: • the error variable is normally distributed. • the error variance is constant for all values of x. • the errors are independent of each other. • How can we diagnose violations of these conditions?

  22. Residual Analysis • Examining the residuals (or standardized residuals) helps detect violations of the required conditions. • Example 18.2 – continued: • Nonnormality: use Excel to obtain the standardized residual histogram. Examine the histogram and look for a bell-shaped diagram with a mean close to zero.
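
The slide uses Excel; an equivalent histogram of standardized residuals can be drawn with matplotlib. A minimal sketch on simulated data (all names and values assumed):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 200)
y = 1 + 2 * x + rng.normal(0, 3, 200)

# fit the line, compute residuals, and standardize them
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()
resid = y - (a + b * x)
std_resid = resid / resid.std(ddof=2)

plt.hist(std_resid, bins=20)
plt.title("Standardized residuals: look for a bell shape centered near zero")
plt.show()
```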

  23. Diagnostics (Cont.) • Multicollinearity may be suspected if the t-statistics for the coefficients of the explanatory variables are not significant but the coefficient of determination is high. The correlation between the explanatory variables can then be calculated to see whether it is high.

  24. Diagnostics • Is the error normal? Using EViews, with the view menu in the regression window, a histogram of the distribution of the estimated error is available, along with the coefficients of skewness and kurtosis, and the Jarque-Bera statistic testing for normality.
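
The same residual diagnostics EViews reports (skewness, kurtosis, and the Jarque-Bera statistic) are available in scipy. A sketch on a stand-in residual series:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
resid = rng.normal(0, 1, 500)             # stand-in for the estimated errors

print("skewness:", stats.skew(resid))                     # near 0 if normal
print("kurtosis:", stats.kurtosis(resid, fisher=False))   # near 3 if normal
jb_stat, jb_pvalue = stats.jarque_bera(resid)
print("Jarque-Bera:", jb_stat, "p-value:", jb_pvalue)     # large p: don't reject normality
```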

  25. Lab 6

  26. View/Residual tests/Histogram-Normality Test

  27. Diagnostics (Cont.) • To detect heteroskedasticity: if there are sufficient observations, plot the estimated errors against the fitted dependent variable

  28. Heteroscedasticity • When the requirement of a constant variance is violated we have a condition of heteroscedasticity. • Diagnose heteroscedasticity by plotting the residual against the predicted ŷ. • [Figure: residuals plotted against the fitted values ŷ; the spread increases with ŷ.]
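
A minimal matplotlib sketch of this diagnostic, with heteroscedasticity built into the simulated data so the fan shape is visible (all names and values assumed):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
x = rng.uniform(1, 10, 300)
y = 1 + 2 * x + rng.normal(0, 0.5 * x)    # error spread grows with x

b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()
fitted = a + b * x

plt.scatter(fitted, y - fitted, s=10)
plt.axhline(0)
plt.xlabel("fitted y")
plt.ylabel("residual")
plt.title("Spread increasing with fitted y: heteroscedasticity")
plt.show()
```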

  29. Homoscedasticity • When the requirement of a constant variance is not violated we have a condition of homoscedasticity. • Example 18.2 - continued

  30. Diagnostics ( Cont.) • Autocorrelation: The Durbin-Watson statistic is a scalar index of autocorrelation, with values near 2 indicating no autocorrelation and values near zero indicating autocorrelation. Examine the plot of the residuals in the view menu of the regression window in EViews.

  31. Non-Independence of Error Variables • A time series is constituted when data are collected over time. • If the errors are independent, no pattern should be observed when the residuals are examined over time. • When a pattern is detected, the errors are said to be autocorrelated. • Autocorrelation can be detected by graphing the residuals against time.

  32. Non-Independence of Error Variables • Patterns in the appearance of the residuals over time indicate that autocorrelation exists. • [Figure: two plots of residuals against time. Left: runs of positive residuals replaced by runs of negative residuals. Right: oscillating behavior of the residuals around zero.]

  33. Fix-Ups • Error is not distributed normally. For example, regression of personal income on explanatory variables. Sometimes a transformation, such as regressing the natural logarithm of income on the explanatory variables, may make the error closer to normal.
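
A small simulated check of this fix-up (the income process and variable names are assumptions): with log-normal income, the residuals from the levels regression are strongly right-skewed, while the residuals from the log regression are roughly symmetric.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
educ = rng.uniform(8, 20, 1000)
income = np.exp(1.0 + 0.1 * educ + rng.normal(0, 0.5, 1000))  # log-normal income

for name, y in (("levels", income), ("logs", np.log(income))):
    b = (np.sum((educ - educ.mean()) * (y - y.mean()))
         / np.sum((educ - educ.mean()) ** 2))
    a = y.mean() - b * educ.mean()
    resid = y - (a + b * educ)
    print(name, "residual skewness:", stats.skew(resid))  # near 0 only for logs
```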

  34. Fix-ups (Cont.) • If the explanatory variable is not independent of the error, look for a substitute that is highly correlated with the dependent variable but is independent of the error. Such a variable is called an instrument.
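
A minimal simulated sketch of the instrument idea (all names and values are assumptions): x is built to be correlated with the error, so the OLS slope is biased, while the simple IV slope Σ(z - z̄)(y - ȳ) / Σ(z - z̄)(x - x̄) based on an instrument z recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(9)
n, b_true = 5000, 2.0
z = rng.normal(0, 1, n)                   # the instrument
e = rng.normal(0, 1, n)                   # the regression error
x = z + 0.8 * e + rng.normal(0, 1, n)     # x depends on e: endogenous
y = 1.0 + b_true * x + e

ols = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
iv = (np.sum((z - z.mean()) * (y - y.mean()))
      / np.sum((z - z.mean()) * (x - x.mean())))
print("OLS slope (biased):", ols)         # noticeably above 2
print("IV slope:", iv)                    # close to 2
```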

  35. Data Errors: May Lead to Outliers • Typos may lead to outliers, and looking for outliers is a good way to check for serious typos.

  36. Outliers • An outlier is an observation that is unusually small or large. • Several possibilities need to be investigated when an outlier is observed: • There was an error in recording the value. • The point does not belong in the sample. • The observation is valid. • Identify outliers from the scatter diagram. • It is customary to suspect an observation is an outlier if its |standardized residual| > 2.
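
A sketch of the screening rule in the last bullet, on simulated data with one planted outlier (all values assumed):

```python
import numpy as np

rng = np.random.default_rng(10)
x = rng.uniform(0, 10, 50)
y = 1 + 2 * x + rng.normal(0, 1, 50)
y[10] += 8                                # plant one outlier (e.g., a typo)

b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()
resid = y - (a + b * x)
std_resid = resid / resid.std(ddof=2)

print("suspect observations:", np.flatnonzero(np.abs(std_resid) > 2))
```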

  37. [Figure: two scatter diagrams with fitted lines, contrasting an ordinary outlier with an influential observation; the influential outlier causes a shift in the regression line.] • … but some outliers may be very influential.

  38. Procedure for Regression Diagnostics • Develop a model that has a theoretical basis. • Gather data for the two variables in the model. • Draw the scatter diagram to determine whether a linear model appears to be appropriate. • Determine the regression equation. • Check the required conditions for the errors. • Check the existence of outliers and influential observations • Assess the model fit. • If the model fits the data, use the regression equation.

  39. Part II: Experimental Method

  40. Outline • Critique of Regression

  41. Critique of Regression • Samples of opportunity rather than random sample • Uncontrolled Causal Variables • omitted variables • unmeasured variables • Insufficient theory to properly specify regression equation

  42. Experimental Method: Three Examples • Deterrence • Aspirin • Miles per Gallon

  43. Deterrence and the Death Penalty

  44. Isaac Ehrlich Study of the Death Penalty: 1933-1969 • Homicide Rate Per Capita • Control Variables • probability of arrest • probability of conviction given being charged • probability of execution given conviction • Causal Variables • labor force participation rate • unemployment rate • percent population aged 14-24 years • permanent income • trend
