
  1. Lecture 9 - Chapter 19: Multiple regression

  2. 19.1 Introduction • In this chapter we extend the simple linear regression model and allow for any number of independent variables. • We expect to build a model that fits the data better than the simple linear regression model.

  3. We will use computer printout to • Assess the model • How well does it fit the data? • Is it useful? • Are any required conditions violated? • Employ the model • interpreting the coefficients • predictions using the prediction equation • estimating the expected value of the dependent variable

  4. 19.2 Model and required conditions • We allow for k independent variables to potentially be related to the dependent variable: y = b0 + b1x1 + b2x2 + … + bkxk + e, where y is the dependent variable, x1, …, xk are the independent variables, b0, b1, …, bk are the coefficients and e is the random error variable.

  5. The simple linear regression model allows for one independent variable x: y = b0 + b1x + e. The multiple linear regression model allows for more than one independent variable, e.g. y = b0 + b1x1 + b2x2 + e. Note how, in the plot of expected values, the straight line y = b0 + b1x becomes the plane y = b0 + b1x1 + b2x2, and ...

  6. ... a parabola becomes a parabolic surface.

  7. Required conditions for the error variable e • The mean of e is zero: E(e) = 0. • The standard deviation of e is a constant (se). • The errors are independent of one another. • The errors are independent of the independent variables. • The error e is normally distributed. • These conditions are required in order to • estimate the model coefficients with desirable properties • test hypotheses about the model coefficients • assess the resulting model.

  8. 19.3 Estimating the coefficients and assessing the model • The procedure • Obtain the model coefficients and statistics using statistical computer software. • Diagnose violations of required conditions. Try to remedy problems identified. • Assess the model fit and usefulness using the model statistics. • If the model passes the assessment tests, use it to interpret the coefficients and generate predictions.
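As a minimal sketch of step 1 of this procedure, here is how the same kind of printout could be produced in Python with statsmodels; the DataFrame and its columns are hypothetical stand-ins, not the lecture's dataset:

```python
# Fit a multiple regression and print the standard output
# (coefficients, R^2, F-test, t-tests). Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({"x1": rng.normal(size=100), "x2": rng.normal(size=100)})
df["y"] = 5 + 2 * df["x1"] - 1 * df["x2"] + rng.normal(scale=2, size=100)

X = sm.add_constant(df[["x1", "x2"]])  # adds the intercept column for b0
model = sm.OLS(df["y"], X).fit()       # least-squares coefficient estimates
print(model.summary())                 # the analogue of the Excel printout
```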

  9. Example • The Holiday Inns group is planning an expansion. • Management wishes to predict which sites are likely to be profitable. • Predictors of profitability can be identified in several areas: • competition • market awareness • demand generators • demographics • physical quality.

  10. Operating statistics and site characteristics: • Profitability – MARGIN: operating margin. • Competition – ROOMS: number of hotel/motel rooms within 3 km of the site. • Market awareness – NEAREST: distance to the nearest Holiday Inn. • Customers – OFFICE: office space; ENROLMENT: university enrolment. • Community – INCOME: median household income. • Physical – DISTTWN: distance to downtown.

  11. Data were collected from 100 randomly selected Holiday Inns, and the following suggested model was run: Margin = b0 + b1Rooms + b2Nearest + b3Office + b4Enrolment + b5Income + b6Distance to town + e

  12. The Excel output gives the sample regression equation (sometimes called the prediction equation): MARGIN = 72.455 – 0.0076ROOMS – 1.646NEAREST + 0.02OFFICE + 0.212ENROLMT – 0.413INCOME + 0.225DISTTWN. Let us assess this equation.

  13. Standard error of estimate • We need to estimate the standard error of estimate se = √[SSE/(n – k – 1)], where k is the number of X (independent) variables. • Compare se to the mean value of y: from the printout, standard error = 5.5121, and the mean value of y is calculated from the sample data. • It seems se is not particularly small (relative to the y values). • Can we conclude that the model does not fit the data well?
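The standard error of estimate is easy to compute from observed and fitted values. A small sketch (the y and ŷ arrays below are made-up placeholders, not the Holiday Inn data):

```python
# se = sqrt(SSE / (n - k - 1)): the estimate of the error standard deviation.
import numpy as np

y = np.array([45.2, 50.1, 38.7, 42.9, 47.5])      # hypothetical observed values
y_hat = np.array([44.0, 49.0, 40.2, 43.5, 46.1])  # hypothetical fitted values
n, k = len(y), 2                                  # k = number of x variables
sse = np.sum((y - y_hat) ** 2)                    # sum of squared residuals
se = np.sqrt(sse / (n - k - 1))                   # standard error of estimate
print(se)
```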

  14. Coefficient of determination R2 • The definition of R2 is R2 = 1 – SSE/SSy, where SSy = Σ(yi – ȳ)2 is the total variation in y. • From the printout, R2 = 0.5251: 52.51% of the variation in the measure of profitability is explained by the linear regression model formulated above. • When adjusted for degrees of freedom, adjusted R2 = 1 – [SSE/(n – k – 1)]/[SSy/(n – 1)] = 49.44%.
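Both versions of R2 follow directly from SSE and SSy. A sketch with the same placeholder arrays as above:

```python
# R^2 = 1 - SSE/SS_y; adjusted R^2 applies the degrees-of-freedom correction.
import numpy as np

y = np.array([45.2, 50.1, 38.7, 42.9, 47.5])      # hypothetical observed values
y_hat = np.array([44.0, 49.0, 40.2, 43.5, 46.1])  # hypothetical fitted values
n, k = len(y), 2
sse = np.sum((y - y_hat) ** 2)
ss_y = np.sum((y - y.mean()) ** 2)                # total variation in y
r2 = 1 - sse / ss_y
r2_adj = 1 - (sse / (n - k - 1)) / (ss_y / (n - 1))
print(r2, r2_adj)
```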

  15. Testing the utility of the model • We pose the question: is there at least one independent variable linearly related to the dependent variable? • To answer the question, we test the hypotheses H0: b1 = b2 = … = bk = 0; HA: at least one bi is not equal to zero. • If at least one bi is not equal to zero, the model is useful.

  16. The F-test • To test these hypotheses we perform an analysis of variance procedure. • Construct the F-statistic F = MSR/MSE, where MSR = SSR/k and MSE = SSE/(n – k – 1). • Rejection region: F > Fa,k,n–k–1. • Since SST = [variation in y] = SSR + SSE, a large F results from a large SSR; then much of the variation in y is explained by the regression model, the null hypothesis is rejected, and the model is useful. • Required conditions must be satisfied.
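A sketch of the F-test computation; the sums of squares below are hypothetical, and the p-value comes from scipy's F distribution:

```python
# F = MSR/MSE with (k, n-k-1) degrees of freedom; reject H0 for large F.
from scipy import stats

n, k = 100, 6                  # sample size and number of x variables
sst, sse = 7000.0, 3300.0      # hypothetical total and error sums of squares
ssr = sst - sse                # SST = SSR + SSE
msr, mse = ssr / k, sse / (n - k - 1)
F = msr / mse
p_value = stats.f.sf(F, k, n - k - 1)   # P(F_{k, n-k-1} > observed F)
print(F, p_value)
```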

  17. Example – continued • Excel provides the following ANOVA results (the table reports SSR, SSE, MSR, MSE and F = MSR/MSE).

  18. Example - continued • Fa,k,n–k–1 = F0.05,6,100–6–1 = 2.17 and F = 17.14 > 2.17. Also, the p-value (Significance F) = 3.03382 × 10^-13; clearly a = 0.05 > 3.03382 × 10^-13, and the null hypothesis is rejected. • Conclusion: there is sufficient evidence to reject the null hypothesis in favour of the alternative hypothesis. At least one of the bi is not equal to zero; thus, at least one independent variable is linearly related to y. This linear regression model is useful.

  19. Interpreting the coefficients • b0 = 72.455. This is the intercept, the value of y when all the variables take the value zero. Since the data ranges of the independent variables do not include zero, do not interpret the intercept. • b1 = –0.0076. In this model, for each additional 1 000 rooms within 3 km of the Holiday Inn, the operating margin decreases on average by 7.6% (assuming the other variables are held constant).

  20. • b2 = –1.646. In this model, for each additional km between the Holiday Inn and its nearest competitor, the average operating margin decreases by 1.65%. • b3 = 0.02. For each additional 1 000 square metres of office space, the average increase in operating margin will be 0.02%. • b4 = 0.212. For each additional thousand students, MARGIN increases by 0.21%. • b5 = –0.413. For each additional $1 000 increase in median household income, MARGIN decreases by 0.41%. • b6 = 0.225. For each additional km to downtown, MARGIN increases by 0.23% on average.

  21. Testing the coefficients • The hypotheses for each bi are H0: bi = 0 versus HA: bi ≠ 0. • Test statistic: t = bi/sbi (the coefficient estimate divided by its standard error), with d.f. = n – k – 1. • The Excel printout reports the t-statistic and p-value for each coefficient.
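Each coefficient's t-test is the estimate divided by its standard error, with n – k – 1 degrees of freedom. A sketch (the standard error below is a made-up illustration, not the printout's value):

```python
# t = b_i / s_bi; two-tail p-value from the t distribution.
from scipy import stats

n, k = 100, 6
b_i, s_bi = -1.646, 0.60       # NEAREST coefficient; s_bi is hypothetical
t = b_i / s_bi
p_value = 2 * stats.t.sf(abs(t), n - k - 1)
print(t, p_value)
```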

  22. Using the regression equation • The model can be used by: • producing a prediction interval for a particular value of y, for a given set of values of xi • producing an interval estimate for the expected value of y, for a given set of values of xi. • The model can also be used to learn about relationships between the independent variables xi and the dependent variable y, by interpreting the coefficients bi.

  23. Example – continued • Predict the MARGIN of an inn at a site with the following characteristics: • 3 815 rooms within 3 km • closest competitor 3.4 km away • 476 000 sq-metre of office space • 24 500 university students • $39 000 median household income • 3.6 km distance to downtown centre. MARGIN = 72.455 – 0.0076(3815) – 1.646(3.4) + 0.02(476) +0.212(24.5) – 0.413(39) + 0.225(3.6) = 37.1%
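The prediction is just the coefficient vector dotted with the site's characteristics. Recomputing it with the coefficients as printed on slide 12:

```python
# Plug the site's values into the sample regression equation.
b = [72.455, -0.0076, -1.646, 0.02, 0.212, -0.413, 0.225]   # from slide 12
x = [1, 3815, 3.4, 476, 24.5, 39, 3.6]  # 1 for the intercept, then ROOMS,
                                        # NEAREST, OFFICE, ENROLMT, INCOME, DISTTWN
margin = sum(bi * xi for bi, xi in zip(b, x))
print(round(margin, 1))   # about 37%; small differences from the slide's 37.1%
                          # come from rounding of the printed coefficients
```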

  24. 19.4 Regression diagnostics – II • The required conditions for the model assessment to apply must be checked. • Is the error variable normally distributed? Draw a histogram of the residuals or use a χ2 test for normality. • Is the error variance constant? Plot the residuals versus the predicted values ŷ. • Are the errors independent? Plot the residuals versus the time periods. • Can we identify outliers? • Is multicollinearity a problem? Calculate the paired correlation coefficients of the independent variables.

  25. Example 19.2 • A real estate agent believes that a house's selling price can be predicted using the house size, number of bedrooms and lot size. • A random sample of 100 houses was drawn and data recorded. • Analyse the relationship among the four variables.

  26. Solution • The proposed model is PRICE = b0 + b1BEDROOMS + b2H-SIZE + b3LOTSIZE + e. • The Excel solution shows the model is useful, but no variable is significantly related to the selling price!

  27. However, • when regressing the price on each independent variable alone, it is found that each variable is strongly related to the selling price. • Multicollinearity is the source of this problem. • Multicollinearity causes two kinds of difficulties: • The t statistics appear to be too small. • The b coefficients cannot be interpreted as ‘slopes’.
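The diagnosis itself is a one-liner: compute the pairwise correlations of the independent variables. A sketch with simulated house data (column names are stand-ins for the example's variables):

```python
# High pairwise correlations among the x variables signal multicollinearity.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
h_size = rng.normal(200, 40, 100)                 # house size drives the others
houses = pd.DataFrame({
    "BEDROOMS": np.round(h_size / 60 + rng.normal(0, 0.4, 100)),
    "H_SIZE": h_size,
    "LOTSIZE": 3 * h_size + rng.normal(0, 30, 100),
})
print(houses.corr())   # entries near 1 indicate strongly related predictors
```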

  28. Remedying violations of required conditions • Non-normality or heteroscedasticity can be remedied using transformations on the y variable. • The transformations can improve the linear relationship between the dependent variable and the independent variables. • Many computer software systems allow us to make the transformations easily.

  29. A brief list of transformations • y′ = log y (for y > 0): use when se increases with y, or when the error distribution is positively skewed. • y′ = y2: use when se2 is proportional to E(y), or when the error distribution is negatively skewed. • y′ = y1/2 (for y > 0): use when se2 is proportional to E(y). • y′ = 1/y: use when se2 increases significantly when y increases beyond some value.
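Each transformation is a single array operation on the dependent variable before refitting the model. A sketch with placeholder values:

```python
# The four transformations from the list, applied to a positive-valued y.
import numpy as np

y = np.array([23.0, 18.0, 35.0, 41.0])   # hypothetical positive responses
y_log = np.log(y)      # y' = log_e y
y_sq = y ** 2          # y' = y^2
y_sqrt = np.sqrt(y)    # y' = y^(1/2)
y_inv = 1 / y          # y' = 1/y
```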

  30. Example 19.3 • A statistics lecturer wanted to know whether the time limit affects the marks on a quiz. • A random sample of 100 students was split into five groups. • Each student did a quiz, but each group was given a different time limit. • Analyse these results and include diagnostics.

  31. The model tested: MARK = b0 + b1TIME + e This model is useful and provides a good fit. The errors seem to be normally distributed.

  32. The standard error of estimate seems to increase with the predicted value of y. Two transformations are used to remedy this problem: 1. y′ = loge y 2. y′ = 1/y

  33. Let us see what happens when a transformation is applied. In the original data, ‘mark’ is a function of ‘time’; in the modified data, LogMark is a function of ‘time’. For example, the point (40, 23) becomes (40, loge23 = 3.135), and the point (40, 18) becomes (40, loge18 = 2.89).

  34. The new regression analysis and diagnostics are: The model tested: LOGMARK = b′0 + b′1TIME + e′. Predicted LogMark = 2.1295 + 0.0217 time. This model is useful and provides a good fit.

  35. The errors seem to be normally distributed. The standard error still changes with the predicted y, but the change is smaller than before.

  36. How do we use the modified model to predict? Let TIME = 55 minutes: LogMark = 2.1295 + 0.0217 time = 2.1295 + 0.0217(55) = 3.323. To find the predicted mark, take the antilog: Mark = antiloge(3.323) = e^3.323 = 27.743.
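The back-transformation in code, reproducing the slide's arithmetic:

```python
# The model predicts log_e(mark); exponentiate to get the mark itself.
import numpy as np

log_mark = 2.1295 + 0.0217 * 55   # predicted LogMark at TIME = 55
mark = np.exp(log_mark)           # antilog: e^3.323
print(round(mark, 2))             # about 27.74, as on the slide
```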

  37. 19.5 Regression diagnostics – III (time series) • Durbin–Watson test • This test detects first-order autocorrelation between consecutive residuals in a time series: d = Σt=2..n (et – et–1)2 / Σt=1..n et2, where et is the residual at time t. • If autocorrelation exists, the error variables are not independent.
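The statistic is simple to compute from a residual series; statsmodels also provides it directly. A sketch with made-up residuals:

```python
# d = sum_{t=2..n} (e_t - e_{t-1})^2 / sum_{t=1..n} e_t^2.
import numpy as np
from statsmodels.stats.stattools import durbin_watson

e = np.array([1.2, 0.8, 1.1, -0.3, -0.9, -1.4, 0.2, 0.7])  # hypothetical residuals
d = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)               # by hand
print(d, durbin_watson(e))                                 # the two agree
```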

  38. Positive first-order autocorrelation occurs when consecutive residuals tend to be similar; then the value of d is small (< 2). Negative first-order autocorrelation occurs when consecutive residuals tend to differ markedly; then the value of d is large (> 2).

  39. One-tail test for positive first-order autocorrelation • If d < dL there is enough evidence to show that positive first-order correlation exists. • If d > dU there is not enough evidence to show that positive first-order correlation exists. • If d is between dL and dU the test is inconclusive. • One-tail test for negative first-order autocorrelation • If d > 4 – dL negative first-order correlation exists. • If d < 4 – dU negative first-order correlation does not exist. • If d is between 4 – dU & 4 – dL the test is inconclusive.
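The one-tail decision rules translate directly into a small helper; dL and dU come from a Durbin–Watson table for the given n, k and a. The call below uses the values from the lift-ticket example later in the lecture:

```python
# One-tail test for positive first-order autocorrelation.
def positive_autocorrelation(d: float, dL: float, dU: float) -> str:
    if d < dL:
        return "positive first-order autocorrelation exists"
    if d > dU:
        return "no evidence of positive first-order autocorrelation"
    return "test inconclusive"

print(positive_autocorrelation(0.59, dL=1.10, dU=1.54))  # -> exists
```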

  40. Two-tail test for first-order autocorrelation • If d < dL or d > 4 – dL, first-order autocorrelation exists. • If d falls between dL and dU or between 4 – dU and 4 – dL, the test is inconclusive. • If d falls between dU and 4 – dU, there is no evidence for first-order autocorrelation.

  41. Example • How does the weather affect the sales of lift tickets in a ski resort? • Data on the past 20 years' ticket sales, along with the total snowfall and the average temperature during Christmas week in each year, were collected. • The model hypothesised was TICKETS = b0 + b1SNOWFALL + b2TEMPERATURE + e. • Regression analysis yielded the following results:

  42. The model seems to be very poor: • the fit is very low (R2 = 0.12) • it is not valid (Significance F = 0.33) • no variable is linearly related to sales. Diagnosis of the required conditions resulted in the following findings:

  43. • Residuals vs. predicted ŷ: the error variance is constant. • The error distribution (histogram of residuals): the errors may be normally distributed. • Residuals over time: the errors are not independent.

  44. Test for positive first-order autocorrelation: n = 20, k = 2. From the Durbin–Watson table, dL = 1.10 and dU = 1.54. The statistic d = 0.59. Conclusion: because d < dL, there is sufficient evidence to infer that positive first-order autocorrelation exists. • Using the computer (Excel): Tools > Data Analysis > Regression (check the residuals option, then OK); Tools > Data Analysis Plus > Durbin–Watson statistic > highlight the range of residuals from the regression run > OK.

  45. The modified regression model: TICKETS = b0 + b1SNOWFALL + b2TEMPERATURE + b3YEARS + e. The autocorrelation has occurred over time, so a time-dependent variable added to the model may correct the problem. • All the required conditions are met for this model. • The fit of this model is high: R2 = 0.74. • The model is useful: Significance F = 5.93 × 10^-5. • SNOWFALL and YEARS are linearly related to ticket sales. • TEMPERATURE is not linearly related to ticket sales.
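A sketch of the remedy: append a time-trend variable and refit, then re-check the Durbin–Watson statistic. The data below are simulated stand-ins for the resort's 20 seasons, not the lecture's dataset:

```python
# Add YEARS (1..20) as a predictor to absorb the time trend in the residuals.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(7)
years = np.arange(1, 21)
df = pd.DataFrame({
    "SNOWFALL": rng.normal(80, 15, 20),
    "TEMPERATURE": rng.normal(-5, 3, 20),
    "YEARS": years,
})
df["TICKETS"] = 100 + 2 * df["SNOWFALL"] + 5 * years + rng.normal(0, 10, 20)

X = sm.add_constant(df[["SNOWFALL", "TEMPERATURE", "YEARS"]])
model = sm.OLS(df["TICKETS"], X).fit()
print(durbin_watson(model.resid))   # close to 2 once the trend is modelled
```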
