
  1. Chapter 6 Autocorrelation

  2. What is in this Chapter? • How do we detect this problem? • What are the consequences? • What are the solutions?

  3. What is in this Chapter? • Regarding the problem of detection, we start with the Durbin-Watson (DW) statistic and discuss its several limitations and extensions: • Durbin's h-test for models with lagged dependent variables • tests for higher-order serial correlation • We also discuss (in Section 6.5) the consequences of serially correlated errors for the OLS estimators.

  4. What is in this Chapter? • The solutions to the problem of serial correlation are discussed in: • Section 6.3: estimation in levels versus first differences • Section 6.9: strategies when the DW test statistic is significant • Section 6.10: trends and random walks • This chapter is very important, and its several ideas have to be understood thoroughly.

  5. 6.1 Introduction • The order of autocorrelation: if the errors follow u_t = ρu_{t−1} + e_t, they are said to be autocorrelated of the first order, AR(1); u_t = ρu_{t−4} + e_t would be fourth-order autocorrelation, and so on. • In the following sections we discuss how to: 1. Test for the presence of serial correlation. 2. Estimate the regression equation when the errors are serially correlated.

  6. 6.2 Durbin-Watson Test
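  For reference, the statistic (a standard definition, with û_t denoting the OLS residuals and ρ̂ their estimated first-order autocorrelation) is

  $$ d = \frac{\sum_{t=2}^{n}(\hat u_t - \hat u_{t-1})^2}{\sum_{t=1}^{n}\hat u_t^2} \approx 2(1 - \hat\rho), $$

  so d ≈ 2 indicates no autocorrelation, values near 0 indicate strong positive autocorrelation, and values near 4 indicate strong negative autocorrelation.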

  7. 6.2 Durbin-Watson Test • The sampling distribution of d depends on the values of the explanatory variables, and hence Durbin and Watson derived a lower limit d_L and an upper limit d_U for the significance points of d. • There are tables to test the hypothesis of zero autocorrelation against the hypothesis of first-order positive autocorrelation. (For negative autocorrelation we use 4 − d and interchange the roles of d_L and d_U.)

  8. 6.2 Durbin-Watson Test • If d < d_L, we reject the null hypothesis of no autocorrelation. • If d > d_U, we do not reject the null hypothesis. • If d_L ≤ d ≤ d_U, the test is inconclusive.

  9. 6.2 Durbin-Watson Test Illustrative Example • Consider the data in Table 3.11. The estimated production function gives a DW statistic of d = 0.86 (see Section 6.4). • Referring to the DW tables with k′ = 2 and n = 39 at the 5% significance level, we see that d_L ≈ 1.38. • Since the observed d = 0.86 < d_L, we reject the hypothesis of zero autocorrelation at the 5% level. (A code sketch of the calculation follows.)
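  A minimal sketch of the same calculation in Python with statsmodels; the data are simulated for illustration and are not the Table 3.11 series:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Simulate a regression with AR(1) errors (rho = 0.7) to mimic
# the positive autocorrelation found in the production function data.
rng = np.random.default_rng(0)
n = 39
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

res = sm.OLS(y, sm.add_constant(x)).fit()
d = durbin_watson(res.resid)  # = sum(diff(e)^2) / sum(e^2)
print(f"DW statistic: {d:.2f}")  # compare against d_L and d_U from the tables
```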

  10. 6.2 Limitations of D-W Test • It tests for only first-order serial correlation. • The test is inconclusive if the computed value lies between d_L and d_U. • The test cannot be applied in models with lagged dependent variables.

  11. 6.3 Estimation in Levels Versus First Differences • A simple solution to the serial correlation problem: first differencing. • If the DW test rejects the hypothesis of zero serial correlation, what is the next step? • In such cases one estimates the regression after transforming all the variables by either • ρ-differencing (quasi-first differencing), or • first differencing, as written out below.
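  Written out for a model y_t = α + βx_t + u_t, the two transformations are

  $$ y_t^{*} = y_t - \rho y_{t-1}, \qquad x_t^{*} = x_t - \rho x_{t-1} \quad (\rho\text{-differencing}), $$
  $$ \Delta y_t = y_t - y_{t-1}, \qquad \Delta x_t = x_t - x_{t-1} \quad (\text{first differencing, the special case } \rho = 1), $$

  and in the first-difference case the intercept drops out of the transformed equation.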

  12. 6.3 Estimation in Levels Versus First Differences

  13. 6.3 Estimation in Levels Versus First Differences

  14. 6.3 Estimation in Levels Versus First Differences • When comparing equations in levels and first differences, one cannot compare the R² values because the explained variables are different. • One can compare the residual sums of squares, but only after making a rough adjustment (see p. 231), given on the next slide.

  15. 6.3 Estimation in Levels Versus First Differences • Let • R²_D = the R² from the first-difference equation • RSS_L = the residual sum of squares from the levels equation • RSS_D = the residual sum of squares from the first-difference equation • R²_comp = the comparable R² from the levels equation Then

  R²_comp = 1 − (RSS_L / RSS_D)(1 − R²_D)

  16. 6.3 Estimation in Levels Versus First Differences Illustrative Examples • Consider the simple Keynesian model discussed by Friedman and Meiselman. The equation estimated in levels is C_t = α + βA_t + u_t, where C_t = personal consumption expenditure (current dollars) and A_t = autonomous expenditure (current dollars).

  17. 6.3 Estimation in Levels Versus First Differences • The model fitted for the 1929–1939 period gave the following results (figures in parentheses are standard errors):

  18. 6.3 Estimation in Levels Versus First Differences • This is to be compared with the corresponding figure from the equation in first differences.

  19. 6.3 Estimation in Levels Versus First Differences • For the production function data in Table 3.11, the first-difference equation was estimated. • The comparable figures for the levels equation, reported earlier in Chapter 4, equation (4.24), are given for comparison.

  20. 6.3 Estimation in Levels Versus First Differences • This is to be compared with the corresponding figure from the equation in first differences.

  21. 6.3 Estimation in Levels Versus First Differences • Harvey gives a different definition of the comparable R². • His definition does not adjust for the fact that the error variances in the levels equation and the first-difference equation are not the same. • The arguments for his suggestion are given in his paper.

  22. 6.3 Estimation in Levels Versus First Differences • Usually, with time-series data, one gets high R² values if the regressions are estimated with the levels y_t and x_t, but low R² values if the regressions are estimated in first differences (y_t − y_{t−1}) and (x_t − x_{t−1}).

  23. 6.3 Estimation in Levels Versus First Differences • Since a high R² is usually considered proof of a strong relationship between the variables under investigation, there is a strong tendency to estimate the equations in levels rather than in first differences. • This is sometimes called the “R² syndrome.”

  24. 6.3 Estimation in Levels Versus First Differences • However, if the DW statistic is very low, it often implies a misspecified equation, no matter what the value of the R² is. • In such cases one should estimate the regression equation in first differences; if the R² is then low, this merely indicates that the variables y and x are not related to each other.

  25. 6.3 Estimation in Levels Versus First Differences • Granger and Newbold present some examples with artificially generated data where y, x, and the error u are each generated independently, so that there is no relationship between y and x. • But the correlations between y_t and y_{t−1}, x_t and x_{t−1}, and u_t and u_{t−1} are very high. • Although there is no relationship between y and x, the regression of y on x gives a high R² but a low DW statistic.

  26. 6.3 Estimation in Levels Versus First Differences • When the regression is run in first differences, the R² is close to zero and the DW statistic is close to 2, demonstrating that there is indeed no relationship between y and x and that the R² obtained earlier is spurious. • Thus regressions in first differences might often reveal the true nature of the relationship between y and x. • A simulated example is sketched below.
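  The Granger-Newbold experiment is easy to reproduce. A minimal sketch, with two independently generated random walks so that the true relationship is null:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(42)
n = 100
# Independent random walks: highly autocorrelated, mutually unrelated.
y = np.cumsum(rng.normal(size=n))
x = np.cumsum(rng.normal(size=n))

levels = sm.OLS(y, sm.add_constant(x)).fit()
diffs = sm.OLS(np.diff(y), sm.add_constant(np.diff(x))).fit()

print(f"levels:      R2={levels.rsquared:.2f}  DW={durbin_watson(levels.resid):.2f}")
print(f"differences: R2={diffs.rsquared:.2f}  DW={durbin_watson(diffs.resid):.2f}")
# Typically: sizable R2 and very low DW in levels (spurious regression),
# near-zero R2 and DW near 2 in first differences.
```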

  27. Homework • Find the data • Y is the Taiwan stock index • X is the U.S. stock index • Run two equations • The equation in levels (log-based prices) • The equation in first differences • Compare the two equations on: • the beta estimate and its significance • the R² • the value of the DW statistic • Q: Should we adopt the equation in levels or in first differences? (A starting sketch follows.)
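  A possible starting point for the homework, assuming the two index series have been saved in a CSV file; the file name indices.csv and the column names taiwan and us are placeholders, not a real dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

df = pd.read_csv("indices.csv")   # placeholder file with the two index series
y = np.log(df["taiwan"])          # log-based prices
x = np.log(df["us"])

def report(label, yy, xx):
    """Fit OLS of yy on a constant and xx; print beta, t, R2, DW."""
    res = sm.OLS(yy, sm.add_constant(xx)).fit()
    print(label,
          f"beta={res.params.iloc[1]:.3f}",
          f"t={res.tvalues.iloc[1]:.2f}",
          f"R2={res.rsquared:.3f}",
          f"DW={durbin_watson(res.resid):.2f}")

report("levels:     ", y, x)
report("differences:", y.diff().dropna(), x.diff().dropna())
```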

  28. 6.3 Estimation in Levels Versus First Differences • For instance, suppose that we have quarterly data; then it is possible that the errors in any quarter this year are most highly correlated with the errors in the corresponding quarter last year, rather than with the errors in the preceding quarter. • That is, u_t could be uncorrelated with u_{t−1} but highly correlated with u_{t−4}. • If this is the case, the DW statistic will fail to detect it.

  29. 6.3 Estimation in Levels Versus First Differences • What we should be using is a modified statistic, defined for quarterly data (e.g., GDP) as d_4 = Σ(û_t − û_{t−4})² / Σ û_t², and analogously d_12 for monthly data (e.g., an industrial production index). • A generalized helper is sketched below.
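  A small helper that generalizes the DW calculation to an arbitrary lag (a sketch; resid is any array of regression residuals):

```python
import numpy as np

def durbin_watson_lag(resid, lag=1):
    """Generalized Durbin-Watson statistic at the given lag.

    lag=1 reproduces the usual d; lag=4 (quarterly) or lag=12 (monthly)
    targets seasonal serial correlation of the kind described above.
    """
    resid = np.asarray(resid)
    num = np.sum((resid[lag:] - resid[:-lag]) ** 2)
    return num / np.sum(resid ** 2)
```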

  30. 6.4 Estimation Procedures with Autocorrelated Errors • GLS (generalized least squares) is the efficient procedure when the error covariance structure is known. For the AR(1) case the model is y_t = α + βx_t + u_t, with u_t = ρu_{t−1} + e_t and e_t iid(0, σ²_e).

  31. 6.4 Estimation Procedures with Autocorrelated Errors

  32. 6.4 Estimation Procedures with Autocorrelated Errors • In actual practice ρ is not known. • There are two types of procedures for estimating ρ: • 1. Iterative procedures • 2. Grid-search procedures

  33. 6.4 Estimation Procedures with Autocorrelated Errors Iterative Procedures • Among the iterative procedures, the earliest was the Cochrane-Orcutt (C-O) procedure. • In the Cochrane-Orcutt procedure we estimate equation (6.2) by OLS, get the estimated residuals û_t, and estimate ρ̂ = Σ û_t û_{t−1} / Σ û_{t−1}².

  34. 6.4 Estimation Procedures with Autocorrelated Errors • Durbin suggested an alternative method of estimating ρ. • In this procedure, we write equation (6.5) as y_t = α(1 − ρ) + ρy_{t−1} + βx_t − βρx_{t−1} + e_t • We regress y_t on y_{t−1}, x_t, and x_{t−1}, and take the estimated coefficient of y_{t−1} as an estimate of ρ.

  35. 6.4 Estimation Procedures with Autocorrelated Errors • Use equations (6.6) and (6.6′) and estimate a regression of y* on x*. • The only thing to note is that the slope coefficient in this equation is β, but the intercept is α(1 − ρ). • Thus, after estimating the regression of y* on x*, we have to divide the estimated constant term by (1 − ρ̂) to get an estimate of the intercept α of the original equation (6.2). (The full iteration is sketched in code below.)
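  A compact, hand-rolled sketch of the iterative C-O loop, assuming y and x are NumPy arrays (statsmodels' GLSAR(y, X, rho=1).iterative_fit() automates essentially the same steps):

```python
import numpy as np
import statsmodels.api as sm

def cochrane_orcutt(y, x, tol=1e-6, max_iter=50):
    """Iterative Cochrane-Orcutt estimation of y = a + b*x + u, u AR(1)."""
    rho = 0.0  # first pass is plain OLS
    for _ in range(max_iter):
        # Quasi-difference the data with the current rho (first obs dropped).
        y_star = y[1:] - rho * y[:-1]
        x_star = x[1:] - rho * x[:-1]
        res = sm.OLS(y_star, sm.add_constant(x_star)).fit()
        a_star, b = res.params
        a = a_star / (1 - rho)        # recover the original intercept
        # Re-estimate rho from the residuals of the original equation.
        u = y - a - b * x
        rho_new = np.sum(u[1:] * u[:-1]) / np.sum(u[:-1] ** 2)
        if abs(rho_new - rho) < tol:
            break
        rho = rho_new
    return a, b, rho
```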

  36. 6.4 Estimation Procedures with Autocorrelated Errors • Further, the standard errors we compute from the regression of y* on x* are now only asymptotic standard errors, because ρ has been estimated. • If there are lagged values of y among the explanatory variables, these standard errors are not correct even asymptotically. • The adjustment needed in this case is discussed in Section 6.7.

  37. 6.4 Estimation Procedures with Autocorrelated Errors Grid-Search Procedures • One of the first grid-search procedures is the Hildreth-Lu procedure, suggested in 1960. • The procedure is as follows: calculate y*_t and x*_t in equation (6.6) for different values of ρ at intervals of 0.1 in the range −1 < ρ < 1. • Estimate the regression of y* on x* and calculate the residual sum of squares (RSS) in each case.

  38. 6.4 Estimation Procedures with Autocorrelated Errors • Choose the value of ρ for which the RSS is a minimum. • Then repeat this procedure over smaller intervals of ρ around this value. • For instance, if the value of ρ for which the RSS is minimum is −0.4, repeat the search at intervals of 0.01 in the range −0.5 < ρ < −0.3 (coded below).
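  The grid search translates directly into code; a sketch of the coarse pass (the fine pass simply reruns it on a narrower grid):

```python
import numpy as np
import statsmodels.api as sm

def hildreth_lu(y, x, grid):
    """Return (rho, fitted OLS result) minimizing RSS over the rho grid."""
    best = None
    for rho in grid:
        y_star = y[1:] - rho * y[:-1]
        x_star = x[1:] - rho * x[:-1]
        res = sm.OLS(y_star, sm.add_constant(x_star)).fit()
        if best is None or res.ssr < best[1].ssr:
            best = (rho, res)
    return best

# Coarse pass at intervals of 0.1, then refine around the winner:
# rho1, _ = hildreth_lu(y, x, np.arange(-0.9, 1.0, 0.1))
# rho2, res = hildreth_lu(y, x, np.arange(rho1 - 0.1, rho1 + 0.1, 0.01))
```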

  39. 6.4 Estimation Procedures with Autocorrelated Errors • This procedure is not the same as the ML procedure. If the errors e_t are normally distributed, we can write the log-likelihood function as (derivation omitted)

  log L = const − (n/2) log Q + (1/2) log(1 − ρ²)

  where Q = (1 − ρ²)(y_1 − α − βx_1)² + Σ_{t=2}^{n} [(y_t − α − βx_t) − ρ(y_{t−1} − α − βx_{t−1})]²

  • Because of the extra term (1/2) log(1 − ρ²), minimizing Q is not the same as maximizing log L. • We can use the grid-search procedure to get the ML estimate, as sketched below.
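  The grid search needs only a small change to deliver the ML estimate: for each ρ, minimize Q over α and β by OLS on Prais-Winsten-transformed data (which keeps the first observation, implementing the (1 − ρ²) term in Q), then evaluate the concentrated log-likelihood. A sketch:

```python
import numpy as np
import statsmodels.api as sm

def ml_grid(y, x, grid):
    """Return the rho on the grid maximizing the concentrated log-likelihood."""
    n = len(y)
    best_rho, best_ll = None, -np.inf
    for rho in grid:
        w = np.sqrt(1 - rho ** 2)
        # Prais-Winsten transform: first observation scaled by w,
        # the rest quasi-differenced; intercept column transformed to match.
        y_star = np.concatenate(([w * y[0]], y[1:] - rho * y[:-1]))
        x_star = np.concatenate(([w * x[0]], x[1:] - rho * x[:-1]))
        c_star = np.concatenate(([w], np.full(n - 1, 1 - rho)))
        Q = sm.OLS(y_star, np.column_stack([c_star, x_star])).fit().ssr
        # log L = const - (n/2) log Q + (1/2) log(1 - rho^2), up to terms free of rho
        ll = -(n / 2) * np.log(Q) + 0.5 * np.log(1 - rho ** 2)
        if ll > best_ll:
            best_rho, best_ll = rho, ll
    return best_rho

# rho_ml = ml_grid(y, x, np.arange(-0.95, 0.96, 0.01))
```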

  40. 6.4 Estimation Procedures with Autocorrelated Errors • Consider the data in Table 3.11 and the estimation of the production function. • The OLS estimation gave a DW statistic of 0.86, suggesting significant positive autocorrelation. • Assuming that the errors were AR(1), two estimation procedures were used: the Hildreth-Lu grid search and the iterative Cochrane-Orcutt (C-O) procedure.

  41. 6.4 Estimation Procedures with Autocorrelated Errors • The Hildreth-Lu grid search and the iterative C-O procedure each gave an estimate of ρ. • For comparison, the DW statistic d = 0.86 itself implies ρ̂ ≈ 1 − d/2 = 0.57.

  42. 6.4 Estimation Procedures with Autocorrelated Errors • The estimates of the parameters (with standard errors in parentheses) were as follows:

  43. 6.5 Effect of AR(1) Errors on OLS Estimates

  44. 6.5 Effect of AR(1) Errors on OLS Estimates • 1. If ρ is known, it is true that one can get estimators better than OLS that take account of the autocorrelation. However, in practice ρ is not known and has to be estimated. In small samples it is not necessarily true that one gains (in terms of the mean-square error of β̂) by estimating ρ. This problem has been investigated by Rao and Griliches, who suggest the rule of thumb (for samples of size 20) that one can use the methods that take account of autocorrelation if ρ̂ ≥ 0.3, where ρ̂ is the estimated first-order serial correlation from an OLS regression. In samples of larger sizes it would be worthwhile using these methods for ρ̂ smaller than 0.3.

  45. 6.5 Effect of AR(1) Errors on OLS Estimates • 2. The discussion above assumes that the true errors are first-order autoregressive. If they have a more complicated structure (e.g., second-order autoregressive), it might be thought that it would still be better to proceed on the assumption that the errors are first-order autoregressive rather than to ignore the problem completely and use OLS. • Engle shows that this is not necessarily true (i.e., sometimes one can be worse off making the assumption of first-order autocorrelation than ignoring the problem completely).

  46. 6.5 Effect of AR(1) Errors on OLS Estimates • 3. In regressions with quarterly (or monthly) data, one might find that the errors exhibit fourth (or twelfth)-order autocorrelation because of not making adequate allowance for seasonal effects. In such cases, if one looks for only first-order autocorrelation, one might not find any. This does not mean that autocorrelation is not a problem. In this case the appropriate specification for the error term may be u_t = ρu_{t−4} + e_t for quarterly data and u_t = ρu_{t−12} + e_t for monthly data.

  47. 6.5 Effect of AR(1) Errors on OLS Estimates • 4. Finally, and most important, it is often possible to confuse misspecified dynamics with serial correlation in the errors. For instance, a static regression model with first-order autocorrelation in the errors, that is, y_t = βx_t + u_t with u_t = ρu_{t−1} + e_t, can be written as y_t = ρy_{t−1} + βx_t − βρx_{t−1} + e_t (6.11)

  48. 6.5 Effect of AR(1) Errors on OLS Estimates • 4. (continued) The model (6.11) is the same as y_t = β₁y_{t−1} + β₂x_t + β₃x_{t−1} + e_t (6.11′) with the restriction β₃ = −β₁β₂. We can estimate the model (6.11′) and test this restriction. If it is rejected, clearly it is not valid to estimate (6.11). (The test procedure is described in Section 6.8.)

  49. 6.5 Effect of AR(1) Errors on OLS Estimates • Suppose instead that (6.11′) is the true model and we estimate only the static regression of y on x. The errors would then be serially correlated, but not because the errors follow a first-order autoregressive process: rather, the terms x_{t−1} and y_{t−1} have been omitted. • This is what is meant by "misspecified dynamics." Thus significant serial correlation in the estimated residuals does not necessarily imply that we should estimate a serial correlation model.

  50. 6.5 Effect of AR(1) Errors on OLS Estimates • Some further tests are necessary (such as a test of the restriction β₃ = −β₁β₂ in the case above). • In fact, it is always best to start with an equation like (6.11′) and test this restriction before applying any tests for serial correlation. (A rough code sketch follows.)
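  One way to code such a check, as a rough illustration rather than the exact Section 6.8 procedure: fit the unrestricted dynamic regression (6.11′) by OLS, fit the restricted model (the static regression with AR(1) errors) by a Hildreth-Lu-style search, and compare residual sums of squares with a likelihood-ratio statistic. The function name and grid are illustrative:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def comfac_test(y, x):
    """LR-type test of the common-factor restriction b3 = -b1*b2 in
    y_t = const + b1*y_{t-1} + b2*x_t + b3*x_{t-1} + e_t (6.11')."""
    n = len(y) - 1  # effective sample after lagging
    # Unrestricted dynamic regression (6.11').
    X_u = sm.add_constant(np.column_stack([y[:-1], x[1:], x[:-1]]))
    rss_u = sm.OLS(y[1:], X_u).fit().ssr
    # Restricted model: static regression with AR(1) errors, fitted
    # by grid search over rho (the restriction is imposed automatically).
    rss_r = np.inf
    for rho in np.arange(-0.99, 1.0, 0.01):
        y_star = y[1:] - rho * y[:-1]
        x_star = x[1:] - rho * x[:-1]
        rss = sm.OLS(y_star, sm.add_constant(x_star)).fit().ssr
        rss_r = min(rss_r, rss)
    lr = n * np.log(rss_r / rss_u)  # asymptotically chi2(1) under the restriction
    return lr, stats.chi2.sf(lr, df=1)
```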
