# Chapter 6

##### Presentation Transcript

1. Chapter 6 Autocorrelation

2. What is in this Chapter? • How do we detect this problem? • What are the consequences? • What are the solutions?

3. What is in this Chapter? • Regarding the problem of detection, we start with the Durbin-Watson (DW) statistic and discuss its several limitations and extensions. We discuss Durbin's h-test for models with lagged dependent variables and tests for higher-order serial correlation. • We discuss (in Section 6.5) the consequences of serially correlated errors for OLS estimators.

4. What is in this Chapter? • The solutions to the problem of serial correlation are discussed in Section 6.3 (estimation in levels versus first differences), Section 6.9 (strategies when the DW test statistic is significant), and Section 6.10 (trends and random walks). • This chapter is very important, and its several ideas have to be understood thoroughly.

5. 6.1 Introduction • The order of autocorrelation • In the following sections we discuss how to: • 1. Test for the presence of serial correlation. • 2. Estimate the regression equation when the errors are serially correlated.

6.–10. 6.2 Durbin-Watson Test (slides 6–10 contain the equations for the test; their content was not captured in this transcript)
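As a rough illustration of the statistic these slides build on, the Durbin-Watson d can be computed directly from regression residuals. This is our own sketch, not the textbook's code; the function name and simulated series are illustrative:

```python
import numpy as np

def durbin_watson(residuals):
    """DW statistic: d = sum((e_t - e_{t-1})^2) / sum(e_t^2).

    d is near 2 when the residuals are serially uncorrelated, near 0
    under strong positive first-order autocorrelation, and near 4
    under strong negative autocorrelation.
    """
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Sanity check on simulated residuals: white noise vs. a random walk.
rng = np.random.default_rng(0)
white = rng.standard_normal(500)             # serially uncorrelated
walk = np.cumsum(rng.standard_normal(500))   # highly persistent
print(round(durbin_watson(white), 2))  # close to 2
print(round(durbin_watson(walk), 2))   # close to 0
```

Since d is approximately 2(1 - ρ̂), values well below 2 point to positive serial correlation.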

11. 6.3 Estimation in Levels Versus First Differences • A simple solution to the serial correlation problem: first differencing • If the DW test rejects the hypothesis of zero serial correlation, what is the next step? • In such cases one estimates the regression after transforming all the variables by ρ-differencing (quasi-first differencing) or first differencing
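The two transformations named above can be sketched in a few lines (a minimal illustration; the function names and example numbers are ours):

```python
import numpy as np

def first_difference(z):
    """First difference: z_t - z_{t-1}."""
    z = np.asarray(z, dtype=float)
    return z[1:] - z[:-1]

def quasi_difference(z, rho):
    """Rho-differencing (quasi-first difference): z_t - rho * z_{t-1}.
    With rho = 1 this reduces to the ordinary first difference."""
    z = np.asarray(z, dtype=float)
    return z[1:] - rho * z[:-1]

z = [1.0, 2.0, 4.0, 7.0]
print(first_difference(z))        # differences: 1, 2, 3
print(quasi_difference(z, 0.5))   # 1.5, 3.0, 5.0
```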

12.–13. 6.3 Estimation in Levels Versus First Differences (equation slides; content not captured in this transcript)

14. 6.3 Estimation in Levels Versus First Differences • When comparing equations in levels and in first differences, one cannot compare the R2 values because the dependent variables are different. • One can compare the residual sums of squares, but only after making a rough adjustment. (Please refer to p. 231.)

15.–16. 6.3 Estimation in Levels Versus First Differences (equation slides; content not captured in this transcript)

17. 6.3 Estimation in Levels Versus First Differences • Once we have comparable residual sums of squares (RSS), we can obtain comparable R2 values as well, using the relationship RSS = Syy(1 - R2)
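The conversion stated above is simple arithmetic; a sketch with hypothetical numbers (the RSS and Syy values are ours, chosen only for illustration):

```python
# Converting comparable RSS values into comparable R2 values using the
# relationship RSS = Syy * (1 - R2), i.e. R2 = 1 - RSS / Syy.
def r2_from_rss(rss, s_yy):
    return 1.0 - rss / s_yy

# Hypothetical numbers: two equations with comparable RSS, measured
# against the same total variation Syy of the dependent variable.
s_yy = 100.0
print(round(r2_from_rss(40.0, s_yy), 2))  # 0.6
print(round(r2_from_rss(70.0, s_yy), 2))  # 0.3
```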

18. 6.3 Estimation in Levels Versus First Differences

19. 6.3 Estimation in Levels Versus First Differences • Illustrative Examples

20.–23. 6.3 Estimation in Levels Versus First Differences (worked-example slides; content not captured in this transcript)

24. 6.3 Estimation in Levels Versus First Differences • Usually, with time-series data, one gets high R2 values if the regressions are estimated in the levels yt and xt, but low R2 values if the regressions are estimated in first differences (yt - yt-1) and (xt - xt-1) • Since a high R2 is usually considered proof of a strong relationship between the variables under investigation, there is a strong tendency to estimate the equations in levels rather than in first differences. • This is sometimes called the "R2 syndrome."

25. 6.3 Estimation in Levels Versus First Differences • However, if the DW statistic is very low, it often implies a misspecified equation, no matter what the value of the R2 is • In such cases one should estimate the regression equation in first differences and if the R2 is low, this merely indicates that the variables y and x are not related to each other.

26. 6.3 Estimation in Levels Versus First Differences • Granger and Newbold present some examples with artificially generated data where y, x, and the error u are each generated independently, so that there is no relationship between y and x • But the correlations between yt and yt-1, xt and xt-1, and ut and ut-1 are very high • Although there is no relationship between y and x, the regression of y on x gives a high R2 but a low DW statistic

27. 6.3 Estimation in Levels Versus First Differences • When the regression is run in first differences, the R2 is close to zero and the DW statistic is close to 2 • This demonstrates that there is indeed no relationship between y and x, and that the R2 obtained earlier is spurious • Thus regressions in first differences might often reveal the true nature of the relationship between y and x. • Further discussion of this problem is in Sections 6.10 and 14.7
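The Granger-Newbold experiment described above is easy to reproduce. A sketch with our own simulated data (the seed, sample size, and helper name are arbitrary choices, not from the textbook):

```python
import numpy as np

def ols_r2_dw(y, x):
    """Regress y on x (with intercept); return (R2, Durbin-Watson)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    r2 = 1.0 - np.sum(e ** 2) / np.sum((y - y.mean()) ** 2)
    dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
    return r2, dw

# Two independent random walks: no true relationship between y and x,
# but each series is very highly correlated with its own lagged value.
rng = np.random.default_rng(42)
n = 200
y = np.cumsum(rng.standard_normal(n))
x = np.cumsum(rng.standard_normal(n))

r2_lvl, dw_lvl = ols_r2_dw(y, x)                    # levels
r2_dif, dw_dif = ols_r2_dw(np.diff(y), np.diff(x))  # first differences
print(f"levels:      R2 = {r2_lvl:.2f}, DW = {dw_lvl:.2f}")
print(f"differences: R2 = {r2_dif:.2f}, DW = {dw_dif:.2f}")
```

The levels regression tends to show a sizable R2 with a DW far below 2, while the first-difference regression shows an R2 near zero and a DW near 2, exposing the levels R2 as spurious.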

28. Homework • Find the data • Y is the Taiwan stock index • X is the U.S. stock index • Run two equations • The equation in levels (log-based price) • The equation in the first differences • A comparison between the two equations • The beta estimate and its significance • The R square • The value of DW statistic • Q: Adopt the equation in levels or the first differences?

29. 6.3 Estimation in Levels Versus First Differences • For instance, suppose that we have quarterly data; then it is possible that the errors in any quarter this year are most highly correlated with the errors in the corresponding quarter last year rather than the errors in the preceding quarter • That is, ut could be uncorrelated with ut-1 but it could be highly correlated with ut-4. • If this is the case, the DW statistic will fail to detect it • What we should be using is a modified statistic defined as

30. 6.3 Estimation in Levels Versus First Differences • This modified statistic replaces the lag-1 difference in the DW formula with a lag-4 difference: d4 = Σ(et - et-4)2 / Σet2, where et are the OLS residuals
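A generalized DW statistic of this kind is straightforward to compute. Our own sketch (the helper name and the simulated quarterly errors are illustrative):

```python
import numpy as np

def dw_lag(residuals, lag=1):
    """Generalized Durbin-Watson statistic for lag-k serial correlation:
    d_k = sum((e_t - e_{t-k})^2) / sum(e_t^2); lag=1 is the ordinary DW.
    """
    e = np.asarray(residuals, dtype=float)
    return np.sum((e[lag:] - e[:-lag]) ** 2) / np.sum(e ** 2)

# Quarterly-style errors: correlated with the same quarter last year
# (lag 4) but not with the preceding quarter (lag 1).
rng = np.random.default_rng(1)
u = rng.standard_normal(400)
for t in range(4, 400):
    u[t] += 0.8 * u[t - 4]  # strong fourth-order autocorrelation

print(round(dw_lag(u, 1), 2))  # near 2: ordinary DW sees no problem
print(round(dw_lag(u, 4), 2))  # far below 2: the lag-4 version detects it
```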

31.–34. 6.4 Estimation Procedures with Autocorrelated Errors (equation slides; content not captured in this transcript)

35. 6.4 Estimation Procedures with Autocorrelated Errors • GLS (Generalized least squares)

36. 6.4 Estimation Procedures with Autocorrelated Errors

37. 6.4 Estimation Procedures with Autocorrelated Errors • In actual practice ρ is not known • There are two types of procedures for estimating ρ: • 1. Iterative procedures • 2. Grid-search procedures
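Both types of procedure can be sketched for the AR(1) case. This is our own minimal implementation, not the textbook's code; the simulated model, seed, and iteration counts are illustrative:

```python
import numpy as np

def ols(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

def quasi_diff(y, x, rho):
    """Quasi-difference y and x; the intercept column becomes (1 - rho)
    so that its coefficient stays the original intercept."""
    ys = y[1:] - rho * y[:-1]
    Xs = np.column_stack([np.full(len(ys), 1.0 - rho), x[1:] - rho * x[:-1]])
    return ys, Xs

def cochrane_orcutt(y, x, n_iter=20):
    """Iterative procedure: estimate rho from the levels residuals,
    quasi-difference, re-estimate the coefficients, and repeat."""
    X = np.column_stack([np.ones_like(x), x])
    _, e = ols(y, X)
    for _ in range(n_iter):
        rho = np.sum(e[1:] * e[:-1]) / np.sum(e[:-1] ** 2)
        beta, _ = ols(*quasi_diff(y, x, rho))
        e = y - X @ beta  # levels residuals for the next rho estimate
    return rho, beta

def hildreth_lu(y, x, step=0.01):
    """Grid-search procedure: try each rho on a grid and keep the value
    minimizing the RSS of the quasi-differenced regression."""
    best_rss, best_rho, best_beta = np.inf, None, None
    for rho in np.arange(-0.99, 1.0, step):
        ys, Xs = quasi_diff(y, x, rho)
        beta, e = ols(ys, Xs)
        rss = np.sum(e ** 2)
        if rss < best_rss:
            best_rss, best_rho, best_beta = rss, rho, beta
    return best_rho, best_beta

# Simulated model y_t = 1 + 2*x_t + u_t with AR(1) errors (rho = 0.7):
rng = np.random.default_rng(3)
n = 300
x = rng.standard_normal(n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + rng.standard_normal()
y = 1.0 + 2.0 * x + u

rho_co, beta_co = cochrane_orcutt(y, x)
rho_hl, beta_hl = hildreth_lu(y, x)
print(round(rho_co, 2), round(rho_hl, 2))  # both should be near 0.7
```

The two procedures normally agree to within the grid interval; the grid search is guaranteed to find the global RSS minimum over the grid, while the iterative procedure can in principle stop at a local one.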

38.–43. 6.4 Estimation Procedures with Autocorrelated Errors (equation slides; content not captured in this transcript)

44. Homework • Redo the example (see Table 3.11 for the data) in the textbook • OLS • C-O (Cochrane-Orcutt) procedure • H-L (Hildreth-Lu) procedure with a grid interval of 0.01 • Compare the R2 values (Note: please calculate the comparable R2 from the levels equation)

45. 6.5 Effect of AR(1) Errors on OLS Estimates • In Section 6.4 we described different procedures for the estimation of regression models with AR(1) errors • We will now answer two questions that might arise with the use of these procedures: • 1. What do we gain from using these procedures? • 2. When should we not use these procedures?

46. 6.5 Effect of AR(1) Errors on OLS Estimates • First, in the case we are considering (i.e., the case where the explanatory variable Xt is independent of the error ut), the OLS estimates are unbiased • However, they will not be efficient • Further, the tests of significance we apply, which will be based on the wrong covariance matrix, will be wrong.
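The two points above (OLS is unbiased but inefficient, and the usual standard errors mislead) can be seen in a small Monte Carlo sketch of our own; the AR(1) parameter, sample size, and replication count are arbitrary:

```python
import numpy as np

# Model: y_t = 1 + 2*x_t + u_t, with x_t and u_t independent AR(1)
# processes. OLS ignores the autocorrelation in u.
rng = np.random.default_rng(7)
n, reps, rho = 100, 1000, 0.7
slopes, naive_ses = [], []
for _ in range(reps):
    x = np.zeros(n)
    u = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
        u[t] = rho * u[t - 1] + rng.standard_normal()
    y = 1.0 + 2.0 * x + u  # true slope = 2
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    s2 = np.sum(e ** 2) / (n - 2)
    naive_ses.append(np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1]))
    slopes.append(beta[1])

print(round(float(np.mean(slopes)), 2))  # near 2: OLS is unbiased
# The true sampling spread exceeds the average conventional SE:
print(round(float(np.std(slopes)), 3), round(float(np.mean(naive_ses)), 3))
```

With positive autocorrelation in both x and u, the conventional OLS standard-error formula understates the true sampling variability of the slope, so t-tests based on it overstate significance.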

47. 6.5 Effect of AR(1) Errors on OLS Estimates • In the case where the explanatory variables include lagged dependent variables, we will have some further problems, which we discuss in Section 6.7 • For the present, let us consider the simple regression model

48.–50. 6.5 Effect of AR(1) Errors on OLS Estimates (equation slides; content not captured in this transcript)