
Inference for Regression



Presentation Transcript


  1. STAT E-150 Statistical Methods Inference for Regression

  2. We have discussed how to find a simple linear regression model to make predictions. Now we will investigate our model further: - How do we evaluate the effectiveness of the model? - How do we know that the relationship is significant? - How much of the variability in the response variable can be explained by its relationship to the predictor? The simple linear model is of the form y = β0 + β1x + ε, where ε ~ N(0, σε); that is, the errors are Normally distributed with mean 0 and standard deviation σε. Recall that inference methods can address questions about the population based on our sample data.
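As a quick illustration of this model, the sketch below simulates responses from y = β0 + β1x + ε. The parameter values and the age range here are arbitrary illustration choices, not estimates from the birthweight study.

```python
import random

# Arbitrary illustration values for the population parameters (NOT the study's estimates)
beta0, beta1, sigma = -1000.0, 250.0, 200.0

random.seed(1)
ages = [15, 16, 17, 18, 19, 20]
# y = beta0 + beta1*x + eps, with eps ~ N(0, sigma) at each x
weights = [beta0 + beta1 * x + random.gauss(0, sigma) for x in ages]

print(len(weights))  # one simulated response per x value
```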

  3. Consider our earlier example: Medical researchers have noted that adolescent females are more likely to deliver low-birthweight babies than are adult females. Because low-birthweight babies tend to have higher mortality rates, studies have been conducted to examine the relationship between birthweight and the mother’s age. We found the fitted regression model ŷ = -1163.45 + 245.15(age), where ŷ is the predicted birthweight in grams.

  4. How can we assess this model? If the slope β1 is equal to zero, then there is no change in the response variable associated with a change in the predictor. We will therefore test the value of the slope to investigate whether there is a linear relationship between the two variables.

  5. t-Test for the Slope of a Simple Linear Model H0: β1 = 0 Ha: β1 ≠ 0 The test statistic is t = b1 / SE(b1), with n - 2 degrees of freedom.
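A minimal sketch of this t-test in Python, plugging in the slope (245.15) and standard error (45.908) that appear later in the SPSS output, with n = 10 so df = 8:

```python
from scipy import stats

b1 = 245.15      # sample slope (from the SPSS coefficients table)
se_b1 = 45.908   # standard error of the slope
n = 10           # sample size, so df = n - 2 = 8

t_stat = (b1 - 0) / se_b1                         # test statistic for H0: beta1 = 0
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)   # two-sided p-value

print(round(t_stat, 2), round(p_value, 3))
```

The rounded results match the values highlighted on the slides that follow (t = 5.34, p = .001).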

  6. Assumptions for the model and the errors: • 1. Linearity Assumption • Straight Enough Condition: does the scatterplot appear linear? • Check the residuals to see if they appear to be randomly scattered • Quantitative Data Condition: Is the data quantitative?

  7. Assumptions for the model and the errors: • 2. Independence Assumption: the errors must be mutually independent • Randomization Condition: the individuals are a random sample • Check the residuals for patterns, trends, clumping

  8. Assumptions for the model and the errors: • 3. Equal Variance Assumption: the variability of y should be about • the same for all values of x • Does The Plot Thicken? Condition: Is the spread about the line nearly constant in the scatterplot? • Check the residuals for any patterns

  9. Assumptions for the model and the errors: • 4. Normal Population Assumption: the errors follow a Normal model at each value of x • Nearly Normal Condition: • Look at a histogram or NPP of the residuals

  10. We can check the conditions to see if all assumptions are true. If this is the case, the idealized regression line will have a distribution of y-values for each x-value, and these distributions will be approximately normal with equal variation and with means along the regression line:

  11. Here are the steps: • Create a scatterplot to see if the data is “straight enough”. • Fit a regression model and find the residuals (ε) and predicted values (ŷ). • Draw a scatterplot of the residuals vs. x or ŷ; this should show no pattern, bend, thickening, thinning, or outliers. • If the scatterplot is “straight enough”, create a histogram and NPP of the residuals to check the “nearly normal” condition. • Continue with the inference if all conditions are reasonably satisfied.
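The steps above can be sketched end-to-end. The data here are simulated, not the birthweight data, and the graphical checks are replaced by simple numeric summaries, so the numbers are for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(15, 20, size=50)                    # e.g. mothers' ages (simulated)
y = -1000 + 250 * x + rng.normal(0, 200, size=50)   # simulated responses

# Fit the least-squares line, then compute fitted values and residuals
b1, b0 = np.polyfit(x, y, deg=1)
y_hat = b0 + b1 * x
resid = y - y_hat

# Least-squares residuals always average to (numerically) zero; a rough
# equal-variance check compares residual spread in the two halves of x
order = np.argsort(x)
low, high = resid[order[:25]], resid[order[25:]]
print(round(float(np.mean(resid)), 6))
print(round(float(np.std(low) / np.std(high)), 2))  # near 1 suggests constant spread
```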

  12. Here are the results for our data: - The scatterplot of the data indicates a positive linear relationship between the variables

  13. - The scatterplot of the residuals shows no particular pattern

  14. - The Normal Probability Plot for the residuals indicates a Normal distribution

  15. Here is our data: H0: Ha:

  16. Here is our data: H0: β1 = 0 Ha:β1 ≠ 0

  17. The SPSS output included this table: What does this value represent?

  18. The SPSS output included this table: -1163.45 What does this value represent?

  19. The SPSS output included this table: -1163.45 What does this value represent? This is the observed (sample) value of the y-intercept

  20. The SPSS output included this table: What does this value represent?

  21. The SPSS output included this table: 245.15 What does this value represent?

  22. The SPSS output included this table: 245.15 What does this value represent? This is the observed (sample) value of the slope

  23. The SPSS output included this table: What does this value represent?

  24. The SPSS output included this table: 45.908 What does this value represent?

  25. The SPSS output included this table: 45.908 What does this value represent? This is the standard error for the slope; this is how much we expect the sample slope to vary from one sample to another.

  26. What is the value of the test statistic?

  27. 5.34 What is the value of the test statistic?

  28. 5.34 What is the value of the test statistic? 5.34

  29. What is the p-value?

  30. What is the p-value? p = .001

  31. What is the p-value? p = .001 What is your decision?

  32. What is the p-value? p = .001 What is your decision? Since p is small, reject the null hypothesis

  33. What is the p-value? p = .001 What is your decision? Since p is small, reject the null hypothesis What can you conclude?

  34. What is the p-value? p = .001 What is your decision? Since p is small, reject the null hypothesis What can you conclude? The data indicates that there is a linear relationship between the mother’s age and the baby’s birthweight.

  35. Confidence Interval for the Slope The slope of the population regression line, β1, is the rate of change of the mean response as the explanatory variable increases. The slope of the least squares line, b1, is an estimate of β1. A confidence interval for the slope will show how accurate the estimate is.

  36. What is the confidence interval for the slope of the regression line?

  37. What is the confidence interval for the slope of the regression line? (139.285, 351.015)

  38. Confidence Interval for the Slope of a Simple Linear Model The confidence interval has the form b1 ± t*·SE(b1), where t* is the critical value for the tn-2 density curve that gives the desired confidence level. How can we construct this?

  39. Confidence Interval for the Slope of a Simple Linear Model The confidence interval has the form b1 ± t*·SE(b1). We know that b1 = 245.15 and that SE(b1) = 45.908, but how do we find t*?

  40. Confidence Interval for the Slope of a Simple Linear Model The confidence interval has the form b1 ± t*·SE(b1). We know that df = n - 2 = 8. On the line for df = 8, the value of t* for a 95% confidence interval is 2.306.

  41. Calculate b1 ± t*·SE(b1) = 245.15 ± 2.306(45.908) = 245.15 ± 105.86 = (139.29, 351.01)

  42. Calculate b1 ± t*·SE(b1) = 245.15 ± 2.306(45.908) = 245.15 ± 105.86 = (139.29, 351.01) What is the confidence interval found by SPSS?

  43. Calculate b1 ± t*·SE(b1) = 245.15 ± 2.306(45.908) = 245.15 ± 105.86 = (139.29, 351.01) What is the confidence interval found by SPSS?

  44. Calculate b1 ± t*·SE(b1) = 245.15 ± 2.306(45.908) = 245.15 ± 105.86 = (139.29, 351.01) What does this interval tell us?

  45. Calculate b1 ± t*·SE(b1) = 245.15 ± 2.306(45.908) = 245.15 ± 105.86 = (139.29, 351.01) What does this interval tell us? Based on the sample data, we are 95% confident that the true average increase in the weight of the baby associated with a one-year increase in age of the mother is between 139.29 and 351.01 g.
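The interval computed above can be reproduced without a printed t table; here scipy's t distribution supplies the critical value:

```python
from scipy import stats

b1, se_b1, n = 245.15, 45.908, 10
t_star = stats.t.ppf(0.975, df=n - 2)   # 95% two-sided critical value, df = 8
margin = t_star * se_b1

lower, upper = b1 - margin, b1 + margin
print(round(t_star, 3), (round(lower, 2), round(upper, 2)))
```

Up to rounding, this matches both the hand calculation (139.29, 351.01) and the SPSS interval (139.285, 351.015).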

  46. Partitioning Variability - ANOVA ANOVA measures the effectiveness of the model by measuring how much of the variability in the response variable y is explained by the predictions based on the fitted model. We can partition this variability into two parts: the variability explained by the model, and the unexplained variability due to error, as measured by the residuals.

  47. In our SPSS output, SS(Model) = SS(Error) = SS(Total) =

  48. In our SPSS output, SS(Model) = 1201970.45 SS(Error) = 337212.45 SS(Total) = 1539182.9
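The partition on this slide can be checked arithmetically. The F statistic that appears on the next slide is MS(Model)/MS(Error) with 1 and n - 2 = 8 degrees of freedom, and SS(Model)/SS(Total) gives the share of variability explained (R²):

```python
ss_model = 1201970.45
ss_error = 337212.45
ss_total = 1539182.90
n = 10

# The explained and unexplained pieces must add back to the total variability
assert abs((ss_model + ss_error) - ss_total) < 1e-6

ms_model = ss_model / 1          # 1 df for a single predictor
ms_error = ss_error / (n - 2)    # n - 2 = 8 df for error
f_stat = ms_model / ms_error     # test statistic for the ANOVA F-test
r_squared = ss_model / ss_total  # proportion of variability explained

print(round(f_stat, 3), round(r_squared, 3))
```

The F value agrees with the 28.515 reported on the next slide.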

  49. What is the value of the test statistic? What is the p-value? Decision: Conclusion:

  50. What is the value of the test statistic? 28.515 What is the p-value? Decision: Conclusion:
