
Stat 112 Notes 9

Learn how to assess multicollinearity in multiple regression and measure the quality of predictions using data splitting. Understand the consequences of multicollinearity and explore methods for dealing with it.


Presentation Transcript


  1. Stat 112 Notes 9 • Today: • Multicollinearity (Chapter 4.6) • Multiple regression and causal inference

  2. Assessing Quality of Prediction (Chapter 3.5.3) • R squared is a measure of the fit of the regression to the sample data. It is not generally considered an adequate measure of the regression’s ability to predict the responses for new observations. • One method of assessing the ability of the regression to predict the responses for new observations is data splitting. • We split the data into two groups – a training sample and a holdout sample (also called a validation sample). We fit the regression model to the training sample and then assess the quality of the regression model’s predictions on the holdout sample.

  3. Measuring Quality of Predictions
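One way to make the data-splitting idea on the previous two slides concrete is sketched below, using the house price data introduced on the next slide as a stand-in. The course keeps the data in houseprice.JMP and does the split inside JMP; the file name houseprice.csv and the column names here are assumptions made only for illustration.

```python
# Sketch of data splitting to measure prediction quality.
# houseprice.csv and its column names are hypothetical stand-ins for houseprice.JMP.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("houseprice.csv")  # assumed columns: price, bedrooms, house_size, lot_size

# Randomly split the data into a training sample and a holdout (validation) sample.
rng = np.random.default_rng(112)
train_idx = rng.choice(len(df), size=len(df) // 2, replace=False)
train = df.iloc[train_idx]
holdout = df.drop(df.index[train_idx])

# Fit the regression model to the training sample only.
model = smf.ols("price ~ bedrooms + house_size + lot_size", data=train).fit()

# Assess prediction quality on the holdout sample, e.g. with root mean squared error.
pred = model.predict(holdout)
rmse = np.sqrt(np.mean((holdout["price"] - pred) ** 2))
print(f"Holdout RMSE: {rmse:.1f}")
```

A small holdout RMSE (relative to the spread of prices) indicates that the fitted regression predicts new observations well, which R squared on the training data alone does not guarantee.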

  4. Multicollinearity • DATA: A real estate agent wants to develop a model to predict the selling price of a home. The agent takes a random sample of 100 homes that were recently sold and records the selling price (y), the number of bedrooms (x1), the house size in square feet (x2) and the lot size in square feet (x3). Data is in houseprice.JMP.

  5. Note: These results illustrate how the F test is more powerful for testing whether a group of slopes in multiple regression are all zero than individual t tests.

  6. Multicollinearity • Multicollinearity: Explanatory variables are highly correlated with each other. It is often hard to determine their individual regression coefficients. • There is very little information in the data set to find out what would happen if we fix house size and change lot size.

  7. Since house size and lot size are highly correlated, for fixed house size, lot size does not change much. • The standard error for estimating the coefficient of lot size is large. Consequently the coefficient may not be significant. • Similarly for the coefficient of house size. • So, while it seems that at least one of the coefficients is significant (see ANOVA), you cannot tell which one is the useful one.
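The behavior described on slides 5–7 can be reproduced with a small simulation (illustrative only, not the course’s house price data): when two predictors are nearly copies of each other, the overall F test says the pair is jointly useful even though neither individual t statistic looks significant.

```python
# Simulation: highly correlated predictors give a significant overall F test
# but small individual t statistics. Illustrative only; not the course data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # x2 is almost a copy of x1
y = 2 * x1 + 2 * x2 + rng.normal(size=n)   # y depends on the shared signal

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()

print("Overall F test p-value:", fit.f_pvalue)        # essentially zero
print("Individual t test p-values:", fit.pvalues[1:])  # often not significant
```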

  8. Consequences of Multicollinearity • Standard errors of regression coefficients are large. As a result, t statistics for testing the population regression coefficients are small. • Regression coefficient estimates are unstable. Signs of coefficients may be opposite of what is intuitively reasonable (e.g., negative sign on lot size). Dropping or adding one variable in the regression causes large changes in the estimates of the coefficients of other variables.

  9. Detecting Multicollinearity • Pairwise correlations between explanatory variables are high. • Large overall F-statistic for testing usefulness of predictors but small t statistics. • Variance inflation factors

  10. Using VIFs • To obtain VIFs, after Fit Model, go to Parameter Estimates, right click, click Columns and click VIFs. • Detecting multicollinearity with VIFs: • Any individual VIF greater than 10 indicates multicollinearity.
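Outside of JMP, the same VIF diagnostic could be computed as sketched below; the CSV export and the column names are assumptions.

```python
# Sketch: variance inflation factors for the house price predictors.
# The course obtains these in JMP (Fit Model > Parameter Estimates > Columns > VIF);
# houseprice.csv and its column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("houseprice.csv")
X = sm.add_constant(df[["bedrooms", "house_size", "lot_size"]])

# VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j on the
# other predictors; any individual VIF above 10 signals multicollinearity.
for j, name in enumerate(X.columns):
    if name != "const":
        print(name, variance_inflation_factor(X.values, j))
```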

  11. Multicollinearity and Prediction • If interest is in predicting y, as long as the pattern of multicollinearity continues for those observations where forecasts are desired (e.g., house size and lot size are either both high, both medium or both small), multicollinearity is not particularly problematic. • If interest is in predicting y for observations where the pattern of multicollinearity is different from that in the sample (e.g., large house size, small lot size), there is no good solution (this would be extrapolation).

  12. Problems caused by multicollinearity • If interest is in predicting y, as long as the pattern of multicollinearity continues for those observations where forecasts are desired (e.g., house size and lot size are either both high, both medium or both small), multicollinearity is not particularly problematic. • If interest is in obtaining individual regression coefficients, there is no good solution in the face of multicollinearity. • If interest is in predicting y for observations where the pattern of multicollinearity is different from that in the sample (e.g., large house size, small lot size), there is no good solution (this would be extrapolation).

  13. Dealing with Multicollinearity • Suffer: If prediction within the range of the data is the only goal, not the interpretation of the coefficients, then leave the multicollinearity alone. • Omit a variable. Multicollinearity can be reduced by removing one of the highly correlated variables. However, if one wants to estimate the partial slope of one variable holding fixed the other variables, omitting a variable is not an option, as it changes the interpretation of the slope.
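As one illustration of the “omit a variable” remedy (continuing the hypothetical CSV version of the house price data), refitting without lot size and comparing standard errors shows the trade-off: the standard error of the house size coefficient typically shrinks, but its slope no longer holds lot size fixed.

```python
# Sketch of the "omit a variable" remedy; houseprice.csv and its columns are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("houseprice.csv")

full = smf.ols("price ~ bedrooms + house_size + lot_size", data=df).fit()
reduced = smf.ols("price ~ bedrooms + house_size", data=df).fit()

# Dropping the collinear lot_size usually shrinks the standard error of house_size,
# but the house_size slope now absorbs part of the lot size effect,
# so its interpretation as a partial slope changes.
print("SE(house_size), full model:   ", full.bse["house_size"])
print("SE(house_size), reduced model:", reduced.bse["house_size"])
```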

  14. California Test Score Data • The California Standardized Testing and Reporting (STAR) data set californiastar.JMP contains data on test performance, school characteristics and student demographic backgrounds from 1998-1999. • Average Test Score is the average of the reading and math scores for a standardized test administered to 5th grade students. • One interesting question: What would be the causal effect of decreasing the student-teacher ratio by one student per teacher?

  15. Multiple Regression and Causal Inference • Goal: Figure out what the causal effect on average test score would be of decreasing student-teacher ratio and keeping everything else in the world fixed. • Lurking variable: A variable that is associated with both average test score and student-teacher ratio. • In order to figure out whether a drop in student-teacher ratio causes higher test scores, we want to compare mean test scores among schools with different student-teacher ratios but the same values of the lurking variables, i.e. we want to hold the value of the lurking variable fixed. • If we include all of the lurking variables in the multiple regression model, the coefficient on student-teacher ratio represents the change in the mean of test scores that is caused by a one unit increase in student-teacher ratio.

  16. Omitted Variables Bias • Schools with many English learners tend to have worse resources. The multiple regression that shows how mean test score changes when student-teacher ratio changes but percent of English learners is held fixed gives a better idea of the causal effect of the student-teacher ratio than the simple linear regression that does not hold percent of English learners fixed. • Omitted variables bias: bias in estimating the causal effect of a variable from omitting a lurking variable from the multiple regression. • Omitted variables bias of omitting percentage of English learners = -2.28 - (-1.10) = -1.18.
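The bias arithmetic on this slide can be reproduced directly from the two regressions; a sketch follows, assuming californiastar.JMP has been exported to a CSV with hypothetical column names.

```python
# Sketch of the omitted variables bias computation.
# californiastar.csv and its column names are hypothetical stand-ins for californiastar.JMP.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("californiastar.csv")

# Simple regression: omits percent of English learners.
simple = smf.ols("avg_test_score ~ student_teacher_ratio", data=df).fit()

# Multiple regression: holds percent of English learners fixed.
multiple = smf.ols(
    "avg_test_score ~ student_teacher_ratio + pct_english_learners", data=df
).fit()

# Omitted variables bias = (simple-regression slope) - (multiple-regression slope),
# e.g. -2.28 - (-1.10) = -1.18 with the coefficients quoted on this slide.
bias = (simple.params["student_teacher_ratio"]
        - multiple.params["student_teacher_ratio"])
print("Omitted variables bias:", bias)
```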

  17. Key Warning About Using Multiple Regression for Causal Inference • Even if we have included many lurking variables in the multiple regression, we may have failed to include one or not have enough data to include one. There will then be omitted variables bias. • The best way to study causal effects is to do a randomized experiment.

  18. Path Diagram (figure showing Student-Teacher Ratio and Average Test Score together with the lurking variables: Percent English Learners, CalWorks %, and Other Lurking Variables)
