
ECONOMETRICS I

ECONOMETRICS I. CHAPTER 5: TWO-VARIABLE REGRESSION: INTERVAL ESTIMATION AND HYPOTHESIS TESTING. Textbook: Damodar N. Gujarati (2004) Basic Econometrics, 4th edition, The McGraw-Hill Companies. 5.2 Interval Estimation: Some Basic Ideas.


Presentation Transcript


  1. ECONOMETRICS I CHAPTER 5: TWO-VARIABLE REGRESSION: INTERVAL ESTIMATION AND HYPOTHESIS TESTING • Textbook: Damodar N. Gujarati (2004) Basic Econometrics, 4th edition, The McGraw-Hill Companies

  2. 5.2 Interval Estimation: Some Basic Ideas • Because of sampling fluctuations, a single estimate is likely to differ from the true value, although in repeated sampling its mean value is expected to be equal to the true value.

  3. 5.2 Interval Estimation: Some Basic Ideas • In statistics the reliability of a point estimator is measured by its standard error. Therefore, instead of relying on the point estimate alone, we may construct an interval around the point estimator, say within two or three standard errors on either side of the point estimator, such that this interval has, say, 95 percent probability of including the true parameter value. This is roughly the idea behind interval estimation.

  5. 5.2 Interval Estimation: Some Basic Ideas • Confidence coefficient = 0.95 = 95% • Level of significance = 0.05 = 5% • If α = 0.05, or 5 percent, (5.2.1) would read: The probability that the (random) interval shown there includes the true β2 is 0.95, or 95 percent. The interval estimator thus gives a range of values within which the true β2 may lie.
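An interval of the kind in (5.2.1) is just the point estimate plus or minus the critical t value times the standard error. A minimal sketch in Python (the estimate, standard error, and sample size below are hypothetical illustrations, not figures from the textbook; scipy assumed available):

```python
from scipy import stats

# Hypothetical values: OLS point estimate of beta2, its standard error, sample size
b2, se_b2, n = 0.51, 0.04, 10
alpha = 0.05                                   # level of significance
t_crit = stats.t.ppf(1 - alpha / 2, n - 2)     # critical t at alpha/2 with n-2 df

lower = b2 - t_crit * se_b2
upper = b2 + t_crit * se_b2
print(f"{100 * (1 - alpha):.0f}% CI for beta2: ({lower:.4f}, {upper:.4f})")
```

In repeated sampling, 95 percent of intervals constructed this way would contain the true β2.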

  6. 5.2 Interval Estimation: Some Basic Ideas • It is very important to know the following aspects of interval estimation:

  9. 5.3 CONFIDENCE INTERVALS FOR REGRESSION COEFFICIENTS β1 AND β2 • CONFIDENCE INTERVAL FOR β2 • With the normality assumption for ui, the OLS estimator is normally distributed.

  10. CONFIDENCE INTERVAL FOR β2 • We can use the normal distribution to make probabilistic statements about β2 provided the true population variance σ2 is known. If σ2 is known, an important property of a normally distributed variable with mean μ and variance σ2 is that the area under the normal curve between μ ± σ is about 68 percent, that between the limits μ ± 2σ is about 95 percent, and that between μ ± 3σ is about 99.7 percent.
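The 68–95–99.7 areas can be checked numerically with Python's standard library; a quick sketch:

```python
from statistics import NormalDist

z = NormalDist()                      # standard normal: mean 0, variance 1
for k in (1, 2, 3):
    area = z.cdf(k) - z.cdf(-k)       # area between mu - k*sigma and mu + k*sigma
    print(f"mu +/- {k} sigma: {area:.4f}")
# prints approximately 0.6827, 0.9545, 0.9973
```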

  12. CONFIDENCE INTERVAL FOR β2 • The t value in the middle of this double inequality is the t value given by (5.3.2), where tα/2 is the value of the t variable obtained from the t distribution for an α/2 level of significance and n − 2 df. It is often called the critical t value at the α/2 level of significance.
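The critical value and the probability statement behind this double inequality can be verified directly; a sketch assuming scipy is available (the significance level and degrees of freedom are illustrative):

```python
from scipy import stats

alpha, df = 0.05, 8                            # e.g. 5% level with n - 2 = 8 df
t_half = stats.t.ppf(1 - alpha / 2, df)        # critical t value at alpha/2
# P(-t_half <= T <= t_half) should equal 1 - alpha
prob = stats.t.cdf(t_half, df) - stats.t.cdf(-t_half, df)
print(round(t_half, 3), round(prob, 3))        # about 2.306 and 0.95
```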

  16. CONFIDENCE INTERVAL FOR β1

  18. 5.4 CONFIDENCE INTERVAL FOR σ2

  22. 5.5 HYPOTHESIS TESTING: GENERAL COMMENTS

  23. 5.5 HYPOTHESIS TESTING: GENERAL COMMENTS • Confidence-interval approach • Test-of-significance approach • Both these approaches predicate that the variable (statistic or estimator) under consideration has some probability distribution and that hypothesis testing involves making statements or assertions about the value(s) of the parameter(s) of such distribution.

  24. 5.6 HYPOTHESIS TESTING: THE CONFIDENCE-INTERVAL APPROACH • The 95% CI for β2 is (0.4268, 0.5914).
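Under the confidence-interval approach, a hypothesized value β2* is rejected at the 5 percent level exactly when it falls outside the 95 percent interval. A sketch using the interval quoted above (the null value 0.3 is a hypothetical illustration, not a figure from the slide):

```python
lower, upper = 0.4268, 0.5914   # 95% CI for beta2 quoted in the slide
beta2_null = 0.3                # hypothetical value of beta2 under H0

inside = lower <= beta2_null <= upper
print("do not reject H0" if inside else "reject H0")   # prints "reject H0"
```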

  25. In statistics, when we reject the null hypothesis, we say that our finding is statistically significant. On the other hand, when we do not reject the null hypothesis, we say that our finding is not statistically significant. Two-sided test vs. one-sided test • H1: β2 ≠ β2* → two-sided test • H1: β2 > β2* (or H1: β2 < β2*) → one-sided test

  26. 5.7 HYPOTHESIS TESTING: THE TEST-OF-SIGNIFICANCE APPROACH • Broadly speaking, a test of significance is a procedure by which sample results are used to verify the truth or falsity of a null hypothesis. The key idea behind tests of significance is that of a test statistic (estimator) and the sampling distribution of such a statistic under the null hypothesis. The decision to accept or reject H0 is made on the basis of the value of the test statistic obtained from the data at hand.

  27. 5.7 HYPOTHESIS TESTING: THE TEST-OF-SIGNIFICANCE APPROACH • The t variable of (5.3.2) follows the t distribution with n − 2 df.

  28. 5.7 HYPOTHESIS TESTING: THE TEST-OF-SIGNIFICANCE APPROACH • β2* is the value of β2 under H0, and −tα/2 and tα/2 are the values of t (the critical t values) obtained from the t table for the (α/2) level of significance and n − 2 df.
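The test statistic and the two-sided decision rule can be sketched as follows; the estimate and standard error are illustrative numbers chosen to be consistent with the 95 percent interval quoted earlier, not figures taken from the textbook:

```python
from scipy import stats

b2, se_b2 = 0.5091, 0.0357      # illustrative OLS estimate and standard error
beta2_null = 0.3                # hypothetical value of beta2 under H0
alpha, n = 0.05, 10

t_stat = (b2 - beta2_null) / se_b2             # t = (b2 - beta2*) / se(b2)
t_crit = stats.t.ppf(1 - alpha / 2, n - 2)     # critical t at alpha/2, n-2 df
reject = abs(t_stat) > t_crit                  # two-sided rejection rule
print(round(t_stat, 3), round(t_crit, 3), reject)
```

Since |t| far exceeds the critical value, H0 is rejected, matching the conclusion of the confidence-interval approach.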

  33. 5.7 HYPOTHESIS TESTING: THE TEST-OF-SIGNIFICANCE APPROACH • Since we use the t distribution, the preceding testing procedure is called appropriately the t test. In the language of significance tests, a statistic is said to be statistically significant if the value of the test statistic lies in the critical region. In this case the null hypothesis is rejected. By the same token, a test is said to be statistically insignificant if the value of the test statistic lies in the acceptance region. In this situation, the null hypothesis is not rejected. In our example, the t test is significant and hence we reject the null hypothesis.

  34. 5.7 HYPOTHESIS TESTING: THE TEST-OF-SIGNIFICANCE APPROACH • To test this hypothesis, we use the one-tail test (the right tail), as shown in Figure 5.5. • The test procedure is the same as before except that the upper confidence limit or critical value now corresponds to tα = t0.05, that is, the 5 percent level. As Figure 5.5 shows, we need not consider the lower tail of the t distribution in this case. • CI = (−∞, 0.3664)
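For the one-tail (right-tail) test the entire α goes into one tail, so the critical value is tα rather than tα/2; a sketch with 8 df (scipy assumed available):

```python
from scipy import stats

alpha, df = 0.05, 8
t_right = stats.t.ppf(1 - alpha, df)   # one-tail critical value t_alpha
print(round(t_right, 3))               # about 1.86; reject H0 if t > this value
```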

  35. TABLE 5.1 (page 133)

  36. Testing the Significance of σ2: The χ2 Test

  38. The Meaning of “Accepting” or “Rejecting” a Hypothesis

  39. The Exact Level of Significance: The p Value • Once a test statistic (e.g., the t statistic) is obtained in a given example, why not simply go to the appropriate statistical table and find out the actual probability of obtaining a value of the test statistic as much as or greater than that obtained in the example? This probability is called the p value (i.e., probability value), also known as the observed or exact level of significance or the exact probability of committing a Type I error. More technically, the p value is defined as the lowest significance level at which a null hypothesis can be rejected.
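The p value is computed as the tail probability beyond the observed test statistic (doubled for a two-sided alternative); a sketch with hypothetical values:

```python
from scipy import stats

t_obs, df = 5.86, 8                   # hypothetical observed t statistic and df
p_one = stats.t.sf(abs(t_obs), df)    # P(T > |t_obs|): one-tail p value
p_two = 2 * p_one                     # two-sided p value
print(f"one-tail p = {p_one:.6f}, two-tail p = {p_two:.6f}")
```

If p_two falls below the chosen α, H0 is rejected.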

  40. 5.9 REGRESSION ANALYSIS AND ANALYSIS OF VARIANCE • TSS = ESS + RSS • A study of these components of TSS is known as the analysis of variance (ANOVA) from the regression viewpoint.
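The decomposition TSS = ESS + RSS can be verified on a small two-variable OLS fit; a sketch with hypothetical data:

```python
import numpy as np

# Hypothetical data for a two-variable regression
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8])

b2 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b1 = y.mean() - b2 * x.mean()
y_hat = b1 + b2 * x                    # fitted values

tss = np.sum((y - y.mean()) ** 2)      # total sum of squares
ess = np.sum((y_hat - y.mean()) ** 2)  # explained sum of squares
rss = np.sum((y - y_hat) ** 2)         # residual sum of squares
print(round(tss, 4), round(ess + rss, 4))   # the two numbers agree
```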

  45. 5.11 REPORTING THE RESULTS OF REGRESSION ANALYSIS In Eq. (5.11.1) the figures in the first set of parentheses are the estimated standard errors of the regression coefficients, the figures in the second set are estimated t values computed from (5.3.2) under the null hypothesis that the true population value of each regression coefficient individually is zero (e.g., 3.8128 = 24.4545 ÷ 6.4138), and the figures in the third set are the estimated p values. Thus, for 8 df the probability of obtaining a t value of 3.8128 or greater is 0.0026 and the probability of obtaining a t value of 14.2605 or larger is about 0.0000003.
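Those figures can be reproduced directly: each t value is the coefficient divided by its standard error, and the p value is the tail area at 8 df. Using the intercept numbers quoted in the text (scipy assumed available):

```python
from scipy import stats

b1, se_b1 = 24.4545, 6.4138       # intercept estimate and standard error (from the text)
df = 8

t_b1 = b1 / se_b1                 # 24.4545 / 6.4138, about 3.8128
p_b1 = stats.t.sf(t_b1, df)       # P(T > t) at 8 df, about 0.0026
print(f"t = {t_b1:.4f}, p = {p_b1:.4f}")
```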
