
MULTIPLE REGRESSION ANALYSIS




  1. MULTIPLE REGRESSION ANALYSIS

  2. CONTENTS • Table for the types of Multiple Regression Analysis • Ordinary Least Square (OLS) • Multiple Linear Regression with Dummy Variables • Multicollinearity • Goodness of Fit of Multiple Linear Regression (MLR) • Residual Analysis • Multiple Linear Regression Method / Procedure • Cross Validation • Binomial or Binary Logistic Regression • Multinomial Logistic Regression • Measures of Fit ENGR. DIVINO AMOR P. RIVERA, STATISTICAL COORDINATION OFFICER I

  3. Types of Multiple Regression Analysis

  4. Ordinary Least Square (OLS), also known as Multiple Linear Regression. Assumptions and Conditions • Linearity Assumption • If the model is true, then y is linearly related to each of the x's. Straight enough condition: scatterplots of y against each of the predictors are reasonably straight. It is a good idea to plot the residuals against the predicted values and check for patterns, especially for bends or other nonlinearities.

  5. Independence Assumption • The errors in the true underlying regression model must be mutually independent. There is no way to be sure that the independence assumption is true, so check for it indirectly. Randomness condition: the data should arise from a random sample or a randomized experiment. Check the regression residuals for evidence of patterns, trends, or clumping. • Equal Variance Assumption • The variability of y should be about the same for all values of every x. Check using a scatterplot.

  6. Normality Assumption • We assume that the errors around the regression model at any specified values of the x-variables follow a normal model. Nearly Normal condition: look at a histogram or Normal probability plot of the residuals; the Normal probability plot should be fairly straight. The Normality Assumption becomes less important as the sample size grows.
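
The slides carry out these residual checks in SPSS; the sketch below shows the same diagnostics in Python with statsmodels and matplotlib, using simulated data. The variable names x1, x2 and the generated values are assumptions for illustration only, not from the presentation.

```python
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

# Hypothetical data: y depends linearly on x1 and x2 plus noise.
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=100), rng.normal(size=100)
y = 2 + 1.5 * x1 - 0.8 * x2 + rng.normal(scale=0.5, size=100)

X = sm.add_constant(np.column_stack([x1, x2]))  # add the intercept column
fit = sm.OLS(y, X).fit()

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
# Straight-enough / equal-variance check: residuals vs fitted values
axes[0].scatter(fit.fittedvalues, fit.resid)
axes[0].axhline(0, linestyle="--")
axes[0].set(xlabel="Fitted values", ylabel="Residuals")
# Nearly-Normal check: Normal probability (Q-Q) plot of the residuals
sm.qqplot(fit.resid, line="45", fit=True, ax=axes[1])
plt.tight_layout()
plt.show()
```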

  7. Ordinary Least Square (OLS), also known as Multiple Linear Regression. Steps in applying the OLS Procedure • Variables. Name the variables, classify them with respect to level of measurement, and identify the kinds of variables. • Plan. Check the conditions. • Check multicollinearity. Lewis-Beck (1980) suggests regressing each independent variable on all the other independent variables, so that the relationship of each independent variable with all of the others is considered, or applying correlation procedures to all pairs of independent variables. If r is nearly 1.0, or, by the usual rule of thumb, greater than 0.7, multicollinearity exists (see the sketch below).
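
A minimal sketch of this pairwise-correlation screen in Python with pandas; the predictor names and values are made up for illustration, and the 0.7 cutoff is the rule of thumb quoted above.

```python
import numpy as np
import pandas as pd

predictors = pd.DataFrame({          # hypothetical independent variables
    "x1": [2.1, 3.4, 4.0, 5.2, 6.1],
    "x2": [1.0, 1.9, 2.2, 3.1, 3.5],
    "x3": [7.3, 6.1, 5.8, 4.9, 4.2],
})

corr = predictors.corr()             # pairwise Pearson correlations
print(corr.round(2))

# Flag any pair of predictors with |r| > 0.7
flags = (corr.abs() > 0.7) & ~np.eye(len(corr), dtype=bool)
print("Possible multicollinearity between:",
      [(a, b) for a in corr.index for b in corr.columns
       if flags.loc[a, b] and a < b])
```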

  8. Using the appropriate Correlation Analysis Procedure

  9. Choose your method. • Interpretation.

  10. Multiple Linear Regression with Dummy Variables Because categorical predictor variables cannot be entered directly into a regression model and be meaningfully interpreted, some other method of dealing with information of this type must be used. In general, a categorical variable with k levels is transformed into k-1 variables, each with two levels. For example, if a categorical variable has six levels, then five dichotomous variables can be constructed that contain the same information as the single categorical variable. Dichotomous variables have the advantage that they can be entered directly into the regression model. The process of creating dichotomous variables from categorical variables is called dummy coding. The researcher then follows the OLS procedure.
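
A small sketch of dummy coding with pandas. The six-level variable "region" and its values are hypothetical; dropping the first level yields the k-1 dichotomous variables described above.

```python
import pandas as pd

df = pd.DataFrame({"region": ["I", "II", "III", "IV", "V", "VI", "I", "III"],
                   "y": [10, 12, 9, 14, 11, 13, 10, 8]})

# Six levels -> five 0/1 indicator columns (the omitted level is the baseline)
dummies = pd.get_dummies(df["region"], prefix="region", drop_first=True)
print(dummies.head())   # these columns can be entered directly into the model
```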

  11. Multicollinearity In some cases, multiple regression results may seem paradoxical. Even though the overall P value is very low, all of the individual P values are high. This means that the model fits the data well, even though none of the X variables has a statistically significant impact on predicting Y. How is this possible? When two X variables are highly correlated, both convey essentially the same information. In this case, neither may contribute significantly to the model after the other one is included. But together they contribute a lot. If you removed both variables from the model, the fit would be much worse. So the overall model fits the data well, but neither X variable makes a significant contribution when it is added to your model last. When this happens, the X variables are collinear and the results show multicollinearity.

  12. To assess multicollinearity, InStat tells you how well each independent (X) variable is predicted from the other X variables. The results are shown both as an individual R square value (distinct from the overall R square of the model) and as a Variance Inflation Factor (VIF). When those R square and VIF values are high for any of the X variables, the fit is affected by multicollinearity. Multicollinearity can also be detected with the Condition Index. Belsley, Kuh and Welsch (1980) construct the Condition Indices as the square roots of the ratio of the largest eigenvalue to each individual eigenvalue, η_j = √(λ_max / λ_j). The Condition Number of the X matrix is defined as the largest Condition Index, κ = √(λ_max / λ_min). When this number is large, the data are said to be ill conditioned. A Condition Index of 30 to 300 indicates moderate to strong collinearity. A collinearity problem occurs when a component associated with a high condition index contributes strongly to the variance of two or more variables.
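
A sketch of the Belsley-Kuh-Welsch condition indices in Python, assuming a design matrix X that includes the constant column; the nearly collinear data are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=50)
x2 = x1 + rng.normal(scale=0.05, size=50)   # nearly collinear with x1
X = np.column_stack([np.ones(50), x1, x2])

# Scale each column to unit length, as Belsley, Kuh and Welsch recommend
Xs = X / np.sqrt((X ** 2).sum(axis=0))
eigvals = np.linalg.eigvalsh(Xs.T @ Xs)

# Condition index for each eigenvalue: sqrt(largest eigenvalue / eigenvalue)
condition_indices = np.sqrt(eigvals.max() / eigvals)
print(np.sort(condition_indices))   # the largest value is the condition number
```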

  13. Goodness of Fit in Multiple Linear Regression In assessing the goodness of fit of a regression equation, a slightly different statistic, called R²-adjusted or R²adj, is calculated: R²adj = 1 − (1 − R²)(N − 1) / (N − n − 1), where N is the number of observations in the data set (usually the number of people or households) and n is the number of independent variables or regressors. This allows for the extra regressors. R²adj will always be lower than R² if there is more than one regressor.
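
A small worked example of the adjusted R-square formula above; the values R² = 0.80, N = 50 and n = 4 are hypothetical.

```python
def adjusted_r2(r2: float, N: int, n: int) -> float:
    """R2adj = 1 - (1 - R2)(N - 1) / (N - n - 1)."""
    return 1 - (1 - r2) * (N - 1) / (N - n - 1)

print(adjusted_r2(0.80, 50, 4))   # about 0.782, always below the raw R-square
```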

  14. The VIF associated with any X-variable is found by regressing it on all the other X-variables. The resulting R² is used to calculate that variable's VIF. The VIF for any Xi represents the variable's influence on multicollinearity and is computed as VIFi = 1 / (1 − Ri²). In general, multicollinearity is not considered a significant problem unless the VIF of a single Xi is at least 10, or the sum of the VIFs for all Xi is at least 10.
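
A minimal sketch of the VIF computation in Python using the statsmodels helper, which regresses each column on the others exactly as described above; the correlated data are simulated for illustration.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
x1 = rng.normal(size=60)
x2 = 0.9 * x1 + rng.normal(scale=0.3, size=60)   # strongly correlated with x1
x3 = rng.normal(size=60)
X = sm.add_constant(np.column_stack([x1, x2, x3]))

# VIF_i = 1 / (1 - R_i^2), with R_i^2 from regressing X_i on the other X's
for i in range(1, X.shape[1]):                    # skip the constant column
    print(f"VIF for X{i}: {variance_inflation_factor(X, i):.2f}")
```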

  15. Residual Analysis (Durbin-Watson Test Statistic) The residuals are defined as the differences e_i = Y_i − Ŷ_i, where Y_i is the observed value of the dependent variable and Ŷ_i is the corresponding fitted value obtained from the fitted regression model. In the regression analysis, the residuals are assumed to have mean zero and a constant variance σ², and to follow a normal distribution. If the fitted model is correct, the residuals should exhibit tendencies that tend to confirm the assumptions made for the model, or at least should not exhibit a denial of the assumptions (Neter et al., 1983). Test statistic (DW): d = Σ_{t=2..N} (e_t − e_{t−1})² / Σ_{t=1..N} e_t².

  16. The Durbin-Watson coefficient, d, tests for autocorrelation. The value of d ranges from 0 to 4. Values close to 0 indicate extreme positive autocorrelation, values close to 4 indicate extreme negative autocorrelation, and values close to 2 indicate no serial autocorrelation. As a rule of thumb, d should be between 1.5 and 2.5 to indicate independence of observations. Positive autocorrelation means the standard errors of the b coefficients are too small; negative autocorrelation means the standard errors are too large.
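
A minimal sketch of the Durbin-Watson check in Python; statsmodels exposes the statistic directly, and the data here are simulated for illustration.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(3)
x = rng.normal(size=80)
y = 1 + 2 * x + rng.normal(size=80)
fit = sm.OLS(y, sm.add_constant(x)).fit()

d = durbin_watson(fit.resid)          # d = sum((e_t - e_{t-1})^2) / sum(e_t^2)
print(f"Durbin-Watson d = {d:.2f}")   # roughly 1.5-2.5 suggests independence
```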

  17. Multiple Linear Regression Method or Procedure • Forward Selection Multiple Regression Method The computer chooses the variable that gives the largest regression sum of squares when performing a simple linear regression with y, or equivalently, that gives the largest value of R². It then chooses the variable that, when inserted into the model, gives the largest increase in R². This continues until the most recently inserted variable fails to induce a significant increase in the explained regression. Such an increase can be determined at each step by using the appropriate F-test or t-test (a rough sketch is given below).
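
A rough Python sketch of forward selection. It uses the t-test p-value of each candidate variable, which for a single added variable is equivalent to the partial F-test mentioned above; the 0.05 entry threshold, the variable names and the simulated data are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_selection(y, X, alpha_enter=0.05):
    """Greedily add the predictor with the smallest p-value, stopping when
    no remaining variable is significant at alpha_enter."""
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {}
        for var in remaining:
            fit = sm.OLS(y, sm.add_constant(X[selected + [var]])).fit()
            pvals[var] = fit.pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha_enter:
            break                      # no significant increase in R^2
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical data: y truly depends on x1 and x2 only
rng = np.random.default_rng(4)
X = pd.DataFrame(rng.normal(size=(100, 3)), columns=["x1", "x2", "x3"])
y = 1 + 2 * X["x1"] - X["x2"] + rng.normal(size=100)
print(forward_selection(y, X))        # typically ['x1', 'x2']
```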

  18. Backward Elimination Multiple Regression Method Fit a regression equation that includes all the variables. Choose the variable that gives the smallest value of the regression sum of squares adjusted for the others; if it is not significant, remove it from the model. Then fit a regression equation using the remaining variables and again choose the variable with the smallest value of the regression sum of squares adjusted for the other remaining variables; once again, if it is not significant, the variable is removed from the model. At each step the variance (s²) used in the F-test is the error mean square of the regression. This process is repeated until, at some step, the variable with the smallest adjusted regression sum of squares gives a significant F-value at some predetermined significance level (for example, P-value to enter less than 0.05 and P-value to remove greater than 0.10).

  19. Stepwise Multiple Regression Method It is accomplished with a slight but important modification of the forward selection procedure. The modification involves further testing at each stage to ensure the continued effectiveness of variables that were inserted into the model at an earlier stage. This represents an improvement over forward selection, since it is quite possible that a variable entering the regression equation at an early stage might be rendered unimportant or redundant because of relationships between it and other variables entering at later stages. Therefore, at a stage in which a new variable has been entered into the regression equation through a significant increase in R² as determined by the F-test, all the variables already in the model are subjected to F-tests (or, equivalently, t-tests) in light of the new variable, and are deleted if they do not display a significant F-value. The procedure continues until a stage is reached at which no additional variables can be inserted or deleted (Walpole and Myers, 1989).

  20. Cross-Validation The multiple correlation coefficient of a regression equation can be used to estimate the accuracy of a prediction equation. However, this method usually overestimates the actual accuracy of the prediction. Cross-validation is used to test the accuracy of a prediction equation without having to wait for new data. The first step in this process is to divide the sample randomly into groups of equal size. Cross-validation then involves fitting the prediction equation in one group, applying it to the other group, and determining how well it discriminates between those who would and those who would not.
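
A minimal Python sketch of this split-sample cross-validation: fit the equation on one random half and score it on the other half. The data and split are simulated for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.normal(size=(120, 2))
y = 1 + x @ np.array([1.5, -0.7]) + rng.normal(size=120)

idx = rng.permutation(120)
train, test = idx[:60], idx[60:]      # two random groups of equal size

fit = sm.OLS(y[train], sm.add_constant(x[train])).fit()
pred = fit.predict(sm.add_constant(x[test]))

# Correlation between predicted and observed values in the hold-out group
# gives a less optimistic accuracy estimate than the original multiple R.
print(np.corrcoef(pred, y[test])[0, 1])
```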

  21. Binomial Logistic Regression (Binary Logistic Regression) Binomial logistic regression is employed when the dependent variable is a dichotomy and the independent variables are of any type (scale or categorical). This statistical tool is used to predict a dependent variable on the basis of the independents and to determine the percent of variance in the dependent variable explained by the independents; to rank the relative importance of the independents; to assess interaction effects; and to understand the impact of covariate control variables (Miller and Volker, 1985).

  22. Logistic regression applies maximum likelihood estimation after transforming the dependent variable into a logit variable (the natural log of the odds of the dependent occurring or not). In this way, logistic regression estimates the probability of a certain event occurring (Hosmer D. & Lemeshow S., 2000). After fitting the logistic regression model, questions about the suitability of the model, the variables to be retained, and goodness of fit are all considered (Pampel, 2000).

  23. Logistic regression does not face strict assumptions of multivariate normality and equal variance-covariance matrices across groups, assumptions that are not met in all situations. Furthermore, logistic regression accommodates all types of independent variables (metric and non-metric). Although assumptions regarding the distributions of the predictors are not required for logistic regression, multivariate normality and linearity among the predictors may enhance power, because a linear combination of predictors is used to form the exponent. Several tests of goodness of fit are run to further establish that the logistic equation adequately fits the data.
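
The slides fit this model in SPSS; the sketch below shows a binary logistic regression fitted by maximum likelihood in Python with statsmodels, on simulated data (the predictors and coefficients are assumptions, not the bankloan example).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
x = rng.normal(size=(200, 2))
logit_p = -0.5 + 1.2 * x[:, 0] - 0.8 * x[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # dichotomous response

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=False)  # maximum likelihood
print(fit.params)          # coefficients on the log-odds (logit) scale
print(np.exp(fit.params))  # odds ratios, often easier to interpret
```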

  24. Multinomial Logistic Regression It is similar to Binary Logistic Regression; the only difference is that the dependent variable has more than 2 categories, an example of which is the grading system of Don Mariano Marcos Memorial State University. Multinomial logit models treat the response counts at each combination of covariate levels as multinomial, and counts at different combinations as independent. The benefit of using the multinomial logit model is that it models the odds of each category relative to a baseline category as a function of covariates, and it can test the equality of coefficients even if confounders differ, unlike pair-wise logistic models, where testing the equality of coefficients requires assumptions about confounder effects. The Multinomial Logit Model is arguably the most widely used statistical model for polytomous (multi-category) response variables (Powers and Xie, 2000: Chapter 7; Fox, 1997: Chapter 15).
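
A minimal sketch of a baseline-category multinomial logit fit in Python with statsmodels MNLogit; the three-level outcome standing in for a grading scale and the covariates are simulated for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.normal(size=(300, 2))
score = x @ np.array([1.0, -0.5]) + rng.normal(size=300)
grade = np.digitize(score, bins=[-0.5, 0.5])     # categories 0, 1, 2

fit = sm.MNLogit(grade, sm.add_constant(x)).fit(disp=False)
# One column of coefficients per non-baseline category: the log-odds of that
# category relative to the baseline category (here, category 0).
print(fit.params)
```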

  25. Measures of Fit There are six (6) scalar measures of model fit: (1) Deviance, (2) Akaike Information Criterion, (3) Bayesian Information Criterion, (4) McFadden's R², (5) Cox and Snell Pseudo R² and (6) Nagelkerke Pseudo R². There is no convincing evidence that selecting the model that maximizes the value of a given measure necessarily results in a model that is optimal in any sense other than having a larger (or smaller) value of that measure (Long & Freese, 2001). However, it is still helpful to see any differences in their level of goodness of fit, and hence they provide some guidelines in choosing an appropriate model.

  26. Multinomial Logistic Regression • Deviance As a first measure of model fit, the researcher uses the Residual Deviance (D) of the model, defined as D = 2 Σ_{i=1..N} y_i ln(y_i / ŷ_i), where ŷ_i is a predicted value and y_i is an observed value for i = 1, …, N.

  27. Akaike Information Criterion As a second measure of fit, the Akaike (1973) Information Criterion is defined as AIC = −2 ln L + 2P, where L is the maximum likelihood of the model and P is the number of parameters in the model. A model having a smaller AIC is considered the better-fitting model.

  28. Bayesian Information Criterion As a third measure of fit, the Bayesian Information Criterion (BIC) (Raftery, 1995) is a simple and accurate large-sample approximation, especially when there are at least about 40 observations (Raftery, 1995). In the study, BIC is defined as BIC = D − df · ln(N), where D is the deviance of the model and df is the degrees of freedom associated with the deviance. The more negative the BIC, the better the fit. Raftery (1995) also provides guidelines for the strength of evidence favoring one model over another based on the difference in their BIC values.
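
A minimal sketch of these likelihood-based fit measures for a fitted logit model in Python; statsmodels reports the log-likelihood, AIC and BIC directly. Note that statsmodels' BIC penalizes each parameter by ln(N) rather than using Raftery's D − df·ln(N) form, and the data here are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
x = rng.normal(size=(150, 2))
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + x[:, 0]))))

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=False)
print("-2 ln L :", -2 * fit.llf)   # deviance vs the saturated model, which has
                                   # log-likelihood 0 for ungrouped binary data
print("AIC     :", fit.aic)        # -2 ln L + 2P
print("BIC     :", fit.bic)        # -2 ln L + P ln N (statsmodels' definition)
```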

  29. McFadden's R² McFadden's R², also known as the "likelihood-ratio index", compares the full model, with all parameters, to the model with just the intercept: R²_McF = 1 − ln L(M_full) / ln L(M_intercept). The adjusted version penalizes the number of parameters: adjusted R²_McF = 1 − (ln L(M_full) − K) / ln L(M_intercept), where K is the number of parameters.

  30. Cox and Snell Pseudo R² R²_CS = 1 − (L(0) / L(M))^(2/N), where L(M) is the likelihood of the current (fitted) model and L(0) is the likelihood of the initial (intercept-only) model.

  31. Nagelkerke Pseudo R² R²_N = R²_CS / R²_max, where R²_max = 1 − L(0)^(2/N).
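
A minimal sketch computing the three pseudo R-square measures defined above from a fitted logit model and its intercept-only counterpart in Python; the data are simulated for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
x = rng.normal(size=(200, 2))
y = rng.binomial(1, 1 / (1 + np.exp(-(0.4 + 1.1 * x[:, 0]))))

full = sm.Logit(y, sm.add_constant(x)).fit(disp=False)
null = sm.Logit(y, np.ones((200, 1))).fit(disp=False)   # intercept-only model

N = len(y)
mcfadden = 1 - full.llf / null.llf                        # likelihood-ratio index
cox_snell = 1 - np.exp((null.llf - full.llf) * 2 / N)     # 1 - (L0/LM)^(2/N)
nagelkerke = cox_snell / (1 - np.exp(null.llf * 2 / N))   # rescaled to max of 1
print(mcfadden, cox_snell, nagelkerke)
```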

  32. How to run Regression on Statistical Software Statistical Package for Social Sciences (SPSS) • From the menus choose: Analyze > Regression > Linear…

  33. Statistical Package for Social Sciences • Linear Regression dialog box • Method: Forward, Backward, Stepwise • Statistics: Estimates, Confidence Intervals, Model Fit, R-square Change

  34. Statistical Package for Social Sciences • Linear Regression Plots dialog box • Plots: SRESID against ZPRED • Check: Histogram, Normal probability plot, Produce all partial plots

  35. Statistical Package for Social Sciences • Linear Regression Statistics dialog box

  36. Statistical Package for Social Sciences • Example of Multiple Linear Regression

  37. Statistical Package for Social Sciences • Example of Multiple Linear Regression • From the menu choose: • Analyze • Regression • Linear …..

  38. Statistical Package for Social Sciences • Example of Multiple Linear Regression

  39.–43. Statistical Package for Social Sciences (output screenshots)

  44. How to run Regression on Statistical Software Statistical Package for Social Sciences (SPSS) • Binary Logistic Regression • From the menus choose: Analyze > Regression > Binary Logistic…

  45. Statistical Package for Social Sciences • Binary Regression dialog box • Method: Forward Conditional, Forward LR, Forward Wald, Backward Conditional, Backward LR, Backward Wald • Categorical • Covariates

  46. Statistical Package for Social Sciences • Binary Regression Statistics dialog box • Statistics & Plots: Classification Plot, Hosmer-Lemeshow goodness of fit, CI for Exp(B) • Residuals: Standardized, Deviance

  47. Statistical Package for Social Sciences • Example of Binary Regression (bankloan_cs.sav) • Data View ENGR. DIVINO AMOR P. RIVERA OIC- PROVINCIAL STATISTICS OFFICER

  48. Statistical Package for Social Sciences • Example of Binary Regression (bankloan.sav) • Variable View

  49. Statistical Package for Social Sciences
