Basic Econometrics


Presentation Transcript


  1. Basic Econometrics Chapter 5: TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing Prof. Himayatullah

  2. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-1. Statistical Prerequisites • See Appendix A for key concepts such as probability, probability distributions, Type I error, Type II error, level of significance, power of a statistical test, and confidence interval Prof. Himayatullah

  3. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-2. Interval Estimation: Some Basic Ideas • How “close” is, say, β̂₂ to β₂? Pr(β̂₂ − δ ≤ β₂ ≤ β̂₂ + δ) = 1 − α (5.2.1) • The random interval β̂₂ − δ ≤ β₂ ≤ β̂₂ + δ, if it exists, is known as a confidence interval • β̂₂ − δ is the lower confidence limit • β̂₂ + δ is the upper confidence limit Prof. Himayatullah

  4. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-2. Interval Estimation: Some Basic Ideas • (1 − α) is the confidence coefficient • α, with 0 < α < 1, is the level of significance • Equation (5.2.1) does not mean that the probability of β₂ lying between the given limits is (1 − α); rather, the probability of constructing an interval that contains β₂ is (1 − α) • (β̂₂ − δ, β̂₂ + δ) is a random interval Prof. Himayatullah

  5. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-2. Interval Estimation: Some Basic Ideas • In repeated sampling, the intervals will enclose the true value of the parameter in (1 − α)·100% of the cases • For a specific sample, one cannot say that the probability is (1 − α) that a given fixed interval includes the true β₂ • If the sampling or probability distributions of the estimators are known, one can make confidence-interval statements like (5.2.1) Prof. Himayatullah
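A minimal simulation of this repeated-sampling interpretation: the data-generating process below (true β₁, β₂, σ, and the X values) is entirely hypothetical and chosen only to illustrate that roughly (1 − α)·100% of the constructed intervals cover the true β₂.

```python
# Hypothetical two-variable model Y = beta1 + beta2*X + u used only to
# illustrate the coverage property of the random interval (5.2.1)/(5.3.5).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
beta1, beta2, sigma, n, alpha = 10.0, 0.5, 2.0, 30, 0.05
X = np.linspace(1, 30, n)
x = X - X.mean()                                    # deviations from the mean
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)       # t_{alpha/2} with n-2 df

covered, reps = 0, 5000
for _ in range(reps):
    Y = beta1 + beta2 * X + rng.normal(0, sigma, n)
    b2 = np.sum(x * (Y - Y.mean())) / np.sum(x**2)  # OLS slope beta2-hat
    b1 = Y.mean() - b2 * X.mean()                   # OLS intercept beta1-hat
    resid = Y - b1 - b2 * X
    sigma2_hat = np.sum(resid**2) / (n - 2)         # sigma^2-hat
    se_b2 = np.sqrt(sigma2_hat / np.sum(x**2))      # se(beta2-hat)
    lo, hi = b2 - t_crit * se_b2, b2 + t_crit * se_b2
    covered += (lo <= beta2 <= hi)                  # did the interval catch beta2?

print(f"Empirical coverage: {covered / reps:.3f} (nominal {1 - alpha:.2f})")
```

The printed coverage should sit near 0.95, which is the sense in which the interval, not the parameter, is random.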

  6. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-3. Confidence Intervals for Regression Coefficients • Z = (β̂₂ − β₂)/se(β̂₂) = (β̂₂ − β₂)·√(Σxᵢ²)/σ ~ N(0, 1) (5.3.1) • We do not know σ and have to use σ̂ instead, so: t = (β̂₂ − β₂)/se(β̂₂) = (β̂₂ − β₂)·√(Σxᵢ²)/σ̂ ~ t(n−2) (5.3.2) • ⇒ Interval for β₂: Pr[−t_{α/2} ≤ t ≤ t_{α/2}] = 1 − α (5.3.3) Prof. Himayatullah

  7. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-3. Confidence Intervals for Regression Coefficients • The confidence interval for β₂ is Pr[β̂₂ − t_{α/2}·se(β̂₂) ≤ β₂ ≤ β̂₂ + t_{α/2}·se(β̂₂)] = 1 − α (5.3.5) • The confidence interval for β₁ is Pr[β̂₁ − t_{α/2}·se(β̂₁) ≤ β₁ ≤ β̂₁ + t_{α/2}·se(β̂₁)] = 1 − α (5.3.7) Prof. Himayatullah
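As a quick sketch, these two intervals can be computed directly from the point estimates and standard errors reported later in slide 22, with df = n − 2 = 8; the 95% level is an assumed choice.

```python
# Confidence intervals (5.3.5) and (5.3.7) from reported estimates and standard errors.
from scipy import stats

b1_hat, se_b1 = 24.4545, 6.4138     # beta1-hat and se(beta1-hat) from slide 22
b2_hat, se_b2 = 0.5091, 0.0357      # beta2-hat and se(beta2-hat) from slide 22
df, alpha = 8, 0.05

t_half = stats.t.ppf(1 - alpha / 2, df)                     # t_{alpha/2, n-2}
ci_b2 = (b2_hat - t_half * se_b2, b2_hat + t_half * se_b2)  # (5.3.5)
ci_b1 = (b1_hat - t_half * se_b1, b1_hat + t_half * se_b1)  # (5.3.7)
print("95% CI for beta_2:", ci_b2)
print("95% CI for beta_1:", ci_b1)
```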

  8. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-4. Confidence Interval for σ² • Pr[(n−2)·σ̂²/χ²_{α/2} ≤ σ² ≤ (n−2)·σ̂²/χ²_{1−α/2}] = 1 − α (5.4.3) • The interpretation of this interval is: if we establish (1 − α) confidence limits on σ² and maintain a priori that these limits will include the true σ², we shall be right in the long run 100(1 − α) percent of the time Prof. Himayatullah
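A short sketch of (5.4.3), where the value of σ̂² is an assumed placeholder rather than a figure from the text; only n − 2 = 8 matches the worked example.

```python
# Confidence interval (5.4.3) for sigma^2; sigma2_hat is hypothetical.
from scipy import stats

n, alpha = 10, 0.05
sigma2_hat = 42.0                                          # assumed sigma^2-hat
chi2_upper_tail = stats.chi2.ppf(1 - alpha / 2, df=n - 2)  # chi^2_{alpha/2}
chi2_lower_tail = stats.chi2.ppf(alpha / 2, df=n - 2)      # chi^2_{1-alpha/2}
ci = ((n - 2) * sigma2_hat / chi2_upper_tail,
      (n - 2) * sigma2_hat / chi2_lower_tail)
print("95% CI for sigma^2:", ci)
```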

  9. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-5. Hypothesis Testing: General Comments • The stated hypothesis is known as the null hypothesis: H₀ • H₀ is tested against an alternative hypothesis: H₁ 5-6. Hypothesis Testing: The confidence interval approach • One-sided or one-tail test: H₀: β₂ ≤ β₂* versus H₁: β₂ > β₂* Prof. Himayatullah

  10. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing • Two-sided or two-tail test: H₀: β₂ = β₂* versus H₁: β₂ ≠ β₂* • β̂₂ − t_{α/2}·se(β̂₂) ≤ β₂ ≤ β̂₂ + t_{α/2}·se(β̂₂); values of β₂ lying in this interval are plausible under H₀ with 100(1 − α)% confidence • If β₂* lies in this interval, we do not reject H₀ (the finding is statistically insignificant) • If β₂* falls outside this interval, we reject H₀ (the finding is statistically significant) Prof. Himayatullah

  11. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-7. Hypothesis Testing: The test of significance approach • A test of significance is a procedure by which sample results are used to verify the truth or falsity of a null hypothesis • Testing the significance of a regression coefficient: the t-test Pr[β̂₂ − t_{α/2}·se(β̂₂) ≤ β₂ ≤ β̂₂ + t_{α/2}·se(β̂₂)] = 1 − α (5.7.2) Prof. Himayatullah
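A minimal sketch of the t-test of significance for H₀: β₂ = β₂*, reusing the slope and standard error reported in slide 22; the hypothesized value β₂* = 0 is the analyst's choice, not something fixed by the text.

```python
# t-test of significance for the slope; beta2_star is the value under H0.
from scipy import stats

b2_hat, se_b2, df = 0.5091, 0.0357, 8
beta2_star = 0.0                                  # hypothesized beta_2 under H0
t_stat = (b2_hat - beta2_star) / se_b2
p_value = 2 * stats.t.sf(abs(t_stat), df)         # two-tailed p-value
print(f"t = {t_stat:.4f}, two-tailed p = {p_value:.3g}")
```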

  12. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing • 5-7. Hypothesis Testing: The test of significance approach • Table 5-1: Decision rule for the t-test of significance Prof. Himayatullah

  13. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing • 5-7. Hypothesis Testing: The test of significance approach • Testing the significance of σ²: the χ² test • Under the normality assumption we have: χ² = (n−2)·σ̂²/σ² ~ χ²(n−2) (5.4.1) • From (5.4.2) and (5.4.3) on page 520 ⇒ Prof. Himayatullah
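A hedged sketch of the corresponding test of H₀: σ² = σ₀² based on (5.4.1); both σ̂² and σ₀² below are hypothetical numbers used only to show the mechanics of the two-tailed decision rule.

```python
# Chi-square test of significance for sigma^2 (two-tailed, 5% level).
from scipy import stats

n, sigma2_hat, sigma0_sq = 10, 42.0, 85.0         # assumed values
chi2_stat = (n - 2) * sigma2_hat / sigma0_sq      # (5.4.1) evaluated under H0
lower = stats.chi2.ppf(0.025, df=n - 2)           # chi^2_{1-alpha/2}
upper = stats.chi2.ppf(0.975, df=n - 2)           # chi^2_{alpha/2}
reject = (chi2_stat < lower) or (chi2_stat > upper)
print(f"chi2 = {chi2_stat:.3f}, acceptance region = ({lower:.3f}, {upper:.3f}), reject H0: {reject}")
```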

  14. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing • 5-7. Hypothesis Testing: The test of significance approach • Table 5-2: A summary of the χ² test Prof. Himayatullah

  15. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-8. Hypothesis Testing: Some practical aspects 1) The Meaning of “Accepting” or “Rejecting” a Hypothesis 2) The Null Hypothesis and the Rule of Thumb 3) Forming the Null and Alternative Hypotheses 4) Choosing α, the Level of Significance Prof. Himayatullah

  16. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-8. Hypothesis Testing: Some practical aspects 5) The Exact Level of Significance: The p-Value [See page 132] 6) Statistical Significance versus Practical Significance 7) The Choice between Confidence-Interval and Test-of-Significance Approaches to Hypothesis Testing [Warning: Read carefully pages 117-134] Prof. Himayatullah

  17. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-9. Regression Analysis and Analysis of Variance • TSS = ESS + RSS • F = [MSS of ESS]/[MSS of RSS] = β̂₂²·Σxᵢ²/σ̂² (5.9.1) • If the uᵢ are normally distributed and H₀: β₂ = 0 holds, then F follows the F distribution with 1 and n−2 degrees of freedom Prof. Himayatullah

  18. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing • 5-9. Regression Analysis and Analysis of Variance • F provides a test statistic for the null hypothesis that the true β₂ is zero: compare this F ratio with the critical F value obtained from the F tables at the chosen level of significance, or obtain the p-value of the computed F statistic, to reach a decision Prof. Himayatullah
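In the two-variable model the F statistic in (5.9.1) equals the square of the t statistic for β̂₂, so a sketch of the test can be built from the t = 14.2405 and df = 8 reported in slide 22; the 5% level is an assumed choice.

```python
# ANOVA F test of H0: beta_2 = 0 in the two-variable model (F = t^2 here).
from scipy import stats

t_stat, df_resid = 14.2405, 8
F = t_stat ** 2                                   # MSS(ESS) / MSS(RSS)
F_crit = stats.f.ppf(0.95, 1, df_resid)           # 5% critical value of F(1, 8)
p_value = stats.f.sf(F, 1, df_resid)
print(f"F = {F:.2f}, critical F(1,8) = {F_crit:.2f}, p = {p_value:.3g}")
```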

  19. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing • 5-9. Regression Analysis and Analysis of Variance • Table 5-3: ANOVA for the two-variable regression model Prof. Himayatullah

  20. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-10. Application of Regression Analysis: Problem of Prediction • Using the data of Table 3-2, we obtained the sample regression (3.6.2): Ŷᵢ = 24.4545 + 0.5091·Xᵢ, where Ŷᵢ is the estimator of the true E(Yᵢ) • There are two kinds of prediction, as follows: Prof. Himayatullah

  21. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-10. Application of Regression Analysis: Problem of Prediction • Mean prediction: prediction of the conditional mean value of Y corresponding to a chosen X, say X₀, that is, the point on the population regression line itself (see pages 137-138 for details) • Individual prediction: prediction of an individual Y value corresponding to X₀ (see pages 138-139 for details) Prof. Himayatullah
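A sketch of both kinds of prediction at a chosen X₀, using the standard variance formulas for the two-variable model; the summary quantities n, X̄, Σxᵢ², and σ̂² below are placeholders, not the actual Table 3-2 values.

```python
# Mean vs. individual prediction intervals at X0 (two-variable model).
import numpy as np
from scipy import stats

b1_hat, b2_hat = 24.4545, 0.5091                            # fitted coefficients (3.6.2)
n, X_bar, sum_x2, sigma2_hat = 10, 170.0, 33000.0, 42.0     # assumed summary statistics
X0, alpha = 100.0, 0.05

Y0_hat = b1_hat + b2_hat * X0
se_mean = np.sqrt(sigma2_hat * (1 / n + (X0 - X_bar) ** 2 / sum_x2))        # mean prediction
se_indiv = np.sqrt(sigma2_hat * (1 + 1 / n + (X0 - X_bar) ** 2 / sum_x2))   # individual prediction
t_half = stats.t.ppf(1 - alpha / 2, df=n - 2)
print("Mean prediction interval:      ", (Y0_hat - t_half * se_mean, Y0_hat + t_half * se_mean))
print("Individual prediction interval:", (Y0_hat - t_half * se_indiv, Y0_hat + t_half * se_indiv))
```

The individual interval is always wider because it adds the variance of a single disturbance to the uncertainty about the regression line itself.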

  22. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-11. Reporting the results of regression analysis • An illustration: Ŷᵢ = 24.4545 + 0.5091·Xᵢ (5.1.1); se = (6.4138) (0.0357), r² = 0.9621; t = (3.8128) (14.2405), df = 8; p = (0.002517) (0.000000289), F₁,₈ = 202.87 Prof. Himayatullah
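In practice such a report is produced by a regression routine; the sketch below uses statsmodels with stand-in y and X arrays (illustrative placeholders, not necessarily the Table 3-2 data behind (5.1.1)).

```python
# Producing the quantities reported above (coefficients, se, t, p, r^2, F) with statsmodels.
import numpy as np
import statsmodels.api as sm

X = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], dtype=float)  # stand-in regressor
y = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], dtype=float)     # stand-in regressand

res = sm.OLS(y, sm.add_constant(X)).fit()
print(res.params)                               # intercept and slope estimates
print(res.bse)                                  # standard errors
print(res.tvalues, res.pvalues)                 # t statistics and p-values
print(res.rsquared, res.fvalue, res.df_resid)   # r^2, F statistic, residual df
```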

  23. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-12. Evaluating the results of regression analysis • Normality test: the chi-square (χ²) goodness-of-fit test χ²_{N−1−k} = Σ(Oᵢ − Eᵢ)²/Eᵢ (5.12.1) • Oᵢ is the number of observed residuals (ûᵢ) in interval i; Eᵢ is the number of residuals expected in interval i; N is the number of classes or groups; k is the number of parameters to be estimated • If the p-value of the computed χ²_{N−1−k} is high (i.e., χ²_{N−1−k} is small), the normality hypothesis cannot be rejected Prof. Himayatullah

  24. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-12. Evaluating the results of regression analysis • Normality test: the chi-square (χ²) goodness-of-fit test • H₀: uᵢ is normally distributed; H₁: uᵢ is not normally distributed • Computed χ²_{N−1−k} = Σ(Oᵢ − Eᵢ)²/Eᵢ (5.12.1) • Decision rule: if the computed χ²_{N−1−k} exceeds the critical χ²_{N−1−k}, then H₀ can be rejected Prof. Himayatullah
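A rough sketch of this goodness-of-fit test: residuals are grouped into N classes, expected counts come from a normal distribution fitted to the residuals, and the statistic is referred to χ² with N − 1 − k df (here k = 2 for the estimated mean and variance). The residuals array is hypothetical.

```python
# Chi-square goodness-of-fit normality test (5.12.1) on stand-in residuals.
import numpy as np
from scipy import stats

resid = np.random.default_rng(1).normal(0, 5, 100)    # stand-in residuals u-hat_i
N, k = 6, 2                                           # classes; parameters estimated

edges = np.quantile(resid, np.linspace(0, 1, N + 1))  # roughly equal-count class limits
edges[0], edges[-1] = -np.inf, np.inf
observed, _ = np.histogram(resid, bins=edges)         # O_i
cdf = stats.norm.cdf(edges, loc=resid.mean(), scale=resid.std(ddof=1))
expected = np.diff(cdf) * len(resid)                  # E_i under the fitted normal

chi2_stat = np.sum((observed - expected) ** 2 / expected)
p_value = stats.chi2.sf(chi2_stat, df=N - 1 - k)
print(f"chi2 = {chi2_stat:.3f}, p = {p_value:.3f}")   # large p: do not reject normality
```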

  25. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-12. Evaluating the results of regression analysis • The Jarque-Bera (JB) test of normality • This test first computes the skewness (S) and kurtosis (K) and uses the following statistic: JB = n·[S²/6 + (K − 3)²/24] (5.12.2) • Mean = x̄ = Σxᵢ/n; SD² = Σ(xᵢ − x̄)²/(n − 1) • S = m₃/m₂^(3/2); K = m₄/m₂²; m_k = Σ(xᵢ − x̄)^k/n Prof. Himayatullah

  26. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-12. (Continued) • Under the null hypothesis H₀ that the residuals are normally distributed, Jarque and Bera show that in large samples (asymptotically) the JB statistic given in (5.12.2) follows the chi-square distribution with 2 df • If the p-value of the computed chi-square statistic in an application is sufficiently low, one can reject the hypothesis that the residuals are normally distributed; but if the p-value is reasonably high, one does not reject the normality assumption Prof. Himayatullah
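A minimal Jarque-Bera sketch following (5.12.2), again on hypothetical residuals; scipy.stats.jarque_bera offers a ready-made version for comparison.

```python
# Jarque-Bera normality test computed from sample skewness and kurtosis.
import numpy as np
from scipy import stats

resid = np.random.default_rng(2).normal(0, 5, 200)     # stand-in residuals
n = len(resid)
m = lambda k: np.mean((resid - resid.mean()) ** k)     # k-th central moment m_k
S = m(3) / m(2) ** 1.5                                 # skewness
K = m(4) / m(2) ** 2                                   # kurtosis
JB = n * (S ** 2 / 6 + (K - 3) ** 2 / 24)              # (5.12.2)
p_value = stats.chi2.sf(JB, df=2)                      # asymptotic chi^2 with 2 df
print(f"JB = {JB:.3f}, p = {p_value:.3f}")
```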

  27. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-13. Summary and Conclusions 1. Estimation and hypothesis testing constitute the two main branches of classical statistics 2. Hypothesis testing answers this question: is a given finding compatible with a stated hypothesis or not? 3. There are two mutually complementary approaches to answering the preceding question: confidence interval and test of significance. Prof. Himayatullah

  28. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-13. Summary and Conclusions 4. The confidence-interval approach has a specified probability of including within its limits the true value of the unknown parameter. If the null-hypothesized value lies in the confidence interval, H₀ is not rejected, whereas if it lies outside this interval, H₀ can be rejected Prof. Himayatullah

  29. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-13. Summary and Conclusions 5. The test-of-significance procedure develops a test statistic that follows a well-defined probability distribution (such as the normal, t, F, or chi-square). Once a test statistic is computed, its p-value can easily be obtained. The p-value of a test is the lowest significance level at which we would reject H₀; it gives the exact probability of obtaining the estimated test statistic under H₀. If the p-value is small, one can reject H₀; if it is large, one may not reject H₀. Prof. Himayatullah

  30. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-13. Summary and Conclusions 6. A Type I error is the error of rejecting a true hypothesis; a Type II error is the error of accepting a false hypothesis. In practice, one should be careful in fixing the level of significance α, the probability of committing a Type I error, at arbitrary values such as 1%, 5%, or 10%. It is better to quote the p-value of the test statistic. Prof. Himayatullah

  31. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-13. Summary and Conclusions 7. This chapter introduced normality tests to find out whether uᵢ follows the normal distribution. Since the t, F, and chi-square tests require the normality assumption in small samples, it is important that this assumption be checked formally Prof. Himayatullah

  32. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-13. Summary and Conclusions (ended) 8. If the model is deemed practically adequate, it may be used for forecasting purposes. But one should not go too far outside the sample range of the regressor values; otherwise, forecasting errors can increase dramatically. Prof. Himayatullah
