
The Use of Dummy Variables

Presentation Transcript


  1. The Use of Dummy Variables

  2. In the examples so far, the independent variables have been continuous numerical variables. • Suppose that some of the independent variables are categorical. • Dummy variables are artificially defined variables designed to convert a model containing categorical independent variables into the standard multiple regression model.

  3. Example: Comparison of Slopes of k Regression Lines with Common Intercept

  4. Situation: • k treatments or k populations are being compared. • For each of the k treatments we have measured both • Y (the response variable) and • X (an independent variable) • Y is assumed to be linearly related to X with • the slope dependent on treatment (population), while • the intercept is the same for each treatment

  5. The Model:
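
The model equation on this slide was an image in the original presentation and did not survive transcription. A reconstruction consistent with the surrounding description (one common intercept, a slope that depends on the treatment) would be:

```latex
Y_{ij} = \beta_0 + \beta_i x_{ij} + \varepsilon_{ij},
\qquad i = 1, \dots, k, \quad j = 1, \dots, n_i,
```

where i indexes the treatments, j the observations within a treatment, and the error terms are assumed independent with mean zero.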

  6. This model can be artificially put into the form of the Multiple Regression model by the use of dummy variables to handle the categorical independent variable Treatments. • Dummy variables are variables that are artificially defined

  7. In this case we define a new variable for each category of the categorical variable. That is we will define Xi for each category of treatments as follows:
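
As a concrete illustration, here is a minimal sketch in plain Python (the helper name `slope_dummies` is my own, not from the slides): each dummy Xi equals the covariate value x for observations from treatment i, and is zero otherwise.

```python
def slope_dummies(treatment, x, k):
    """Return [X1, ..., Xk] for one observation.

    treatment -- 1-based index of the observation's treatment (1..k)
    x         -- observed value of the covariate X
    k         -- number of treatments
    """
    return [x if i == treatment else 0 for i in range(1, k + 1)]

# Brand B (treatment 2 of 3) observed at X = 4 units/hectare:
print(slope_dummies(2, 4, 3))  # [0, 4, 0]
```

This is exactly the coding used in the data file on slide 16, where brand B at X = 4 appears as X1 = 0, X2 = 4, X3 = 0.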

  8. Then the model can be written as follows: The Complete Model: where
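
The complete-model equation was likewise an image; given the dummy-variable definition just described, it takes the standard multiple regression form:

```latex
Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_k X_k + \varepsilon,
\qquad
X_i =
\begin{cases}
x & \text{if the observation comes from treatment } i,\\
0 & \text{otherwise.}
\end{cases}
```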

  9. In this case Dependent Variable: Y Independent Variables: X1, X2, ... , Xk

  10. In the above situation we would likely be interested in testing the equality of the slopes. Namely the Null Hypothesis (q = k – 1)
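
Written out (the hypothesis itself appeared as an image), the null hypothesis of equal slopes is

```latex
H_0 : \beta_1 = \beta_2 = \cdots = \beta_k,
```

which imposes q = k − 1 independent restrictions on the complete model.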

  11. The Reduced Model: Dependent Variable: Y Independent Variable: X = X1 + X2 + ... + Xk
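
A quick sanity check (plain Python sketch): since exactly one dummy is nonzero for any observation, the reduced model's single column X = X1 + X2 + ... + Xk simply recovers the original covariate value.

```python
# (treatment, X) pairs as in the pesticide example (3 brands; levels 2, 4, 8)
observations = [(1, 2), (2, 4), (3, 8), (2, 8)]

for t, x in observations:
    dummies = [x if i == t else 0 for i in range(1, 4)]  # [X1, X2, X3]
    assert sum(dummies) == x  # the reduced-model column equals X

print("X1 + X2 + X3 recovers X for every observation")
```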

  12. Example: In the following example we are measuring • Yield Y as it depends on • the amount (X) of a pesticide. Again we will assume that the dependence of Y on X will be linear. (I should point out that the concepts that are used in this discussion can easily be adapted to the non-linear situation.)

  13. Suppose that the experiment is going to be repeated for three brands of pesticide: • A, B and C. • The quantity, X, of pesticide in this experiment was set at 3 different levels: • 2 units/hectare, • 4 units/hectare and • 8 units/hectare. • Four test plots were randomly assigned to each of the nine combinations of brand and level of pesticide.

  14. Note that we would expect a common intercept for each brand of pesticide since when the amount of pesticide, X, is zero the three brands of pesticide would be equivalent.

  15. The data for this experiment is given in the following table (four replicate plots per brand at each level of X):

              X = 2    X = 4    X = 8
      A       29.63    28.16    28.45
              31.87    33.48    37.21
              28.02    28.13    35.06
              35.24    28.25    33.99
      B       32.95    29.55    44.38
              24.74    34.97    38.78
              23.38    36.35    34.92
              32.08    38.38    27.45
      C       28.68    33.79    46.26
              28.70    43.95    50.77
              22.67    36.89    50.21
              30.02    33.56    44.14

  16. The data as it would appear in a data file. The variables X1, X2 and X3 are the “dummy” variables.

      Pesticide   X (Amount)   X1   X2   X3      Y
      A                2        2    0    0   29.63
      A                2        2    0    0   31.87
      A                2        2    0    0   28.02
      A                2        2    0    0   35.24
      B                2        0    2    0   32.95
      B                2        0    2    0   24.74
      B                2        0    2    0   23.38
      B                2        0    2    0   32.08
      C                2        0    0    2   28.68
      C                2        0    0    2   28.70
      C                2        0    0    2   22.67
      C                2        0    0    2   30.02
      A                4        4    0    0   28.16
      A                4        4    0    0   33.48
      A                4        4    0    0   28.13
      A                4        4    0    0   28.25
      B                4        0    4    0   29.55
      B                4        0    4    0   34.97
      B                4        0    4    0   36.35
      B                4        0    4    0   38.38
      C                4        0    0    4   33.79
      C                4        0    0    4   43.95
      C                4        0    0    4   36.89
      C                4        0    0    4   33.56
      A                8        8    0    0   28.45
      A                8        8    0    0   37.21
      A                8        8    0    0   35.06
      A                8        8    0    0   33.99
      B                8        0    8    0   44.38
      B                8        0    8    0   38.78
      B                8        0    8    0   34.92
      B                8        0    8    0   27.45
      C                8        0    0    8   46.26
      C                8        0    0    8   50.77
      C                8        0    0    8   50.21
      C                8        0    0    8   44.14

  17. Fitting the complete model:

      Coefficients
        Intercept   26.24166667
        X1           0.981388889
        X2           1.422638889
        X3           2.602400794

      ANOVA
                    df    SS             MS            F             Significance F
        Regression   3    1095.815813    365.2719378   18.33114788   4.19538E-07
        Residual    32     637.6415754    19.92629923
        Total       35    1733.457389

  18. Fitting the reduced model:

      Coefficients
        Intercept   26.24166667
        X            1.668809524

      ANOVA
                    df    SS             MS            F             Significance F
        Regression   1     623.8232508   623.8232508   19.11439978   0.000110172
        Residual    34    1109.634138     32.63629818
        Total       35    1733.457389

  19. The ANOVA table for testing the equality of slopes:

                           df    SS             MS            F             Significance F
        Common slope zero   1     623.8232508   623.8232508   31.3065283    3.51448E-06
        Slope comparison    2     471.9925627   235.9962813   11.84345766   0.000141367
        Residual           32     637.6415754    19.92629923
        Total              35    1733.457389
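
The slope-comparison row can be reproduced directly from the residual sums of squares of the two fitted models above; this plain-Python check uses only numbers reported on the slides.

```python
# Residual SS from the two fits reported above
rss_reduced  = 1109.634138    # reduced model (common slope), df = 34
rss_complete = 637.6415754    # complete model (separate slopes), df = 32
q        = 2                  # q = k - 1 restrictions under H0
df_resid = 32                 # residual df of the complete model

F = ((rss_reduced - rss_complete) / q) / (rss_complete / df_resid)
print(round(F, 4))  # 11.8435, matching the "Slope comparison" row
```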

  20. Example: Comparison of Intercepts of k Regression Lines with a Common Slope (One-way Analysis of Covariance)

  21. Situation: • k treatments or k populations are being compared. • For each of the k treatments we have measured both Y (the response variable) and X (an independent variable) • Y is assumed to be linearly related to X with the intercept dependent on treatment (population), while the slope is the same for each treatment. • Y is called the response variable, while X is called the covariate.

  22. The Model:
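
As with slide 5, the model equation here was an image. A reconstruction consistent with the description (an intercept that depends on the treatment, one common slope) is:

```latex
Y_{ij} = \alpha_i + \beta x_{ij} + \varepsilon_{ij},
\qquad i = 1, \dots, k, \quad j = 1, \dots, n_i.
```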

  23. Equivalent Forms of the Model: 1) 2)

  24. This model can be artificially put into the form of the Multiple Regression model by the use of dummy variables to handle the categorical independent variable Treatments.

  25. In this case we define a new variable for each category of the categorical variable. That is, we will define Xi for categories i = 1, 2, …, (k – 1) of treatments as follows:
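
A minimal sketch in plain Python (the helper name `indicator_dummies` is my own): for the common-slope model the dummies are 0/1 indicators defined only for the first k − 1 categories, so the remaining category acts as the baseline with every dummy equal to zero.

```python
def indicator_dummies(category, k):
    """Return [X1, ..., X_{k-1}] for one observation (1-based category)."""
    return [1 if i == category else 0 for i in range(1, k)]

# Five workbooks A..E coded as categories 1..5:
print(indicator_dummies(1, 5))  # workbook A -> [1, 0, 0, 0]
print(indicator_dummies(5, 5))  # workbook E (baseline) -> [0, 0, 0, 0]
```

This matches the interpretation on slide 44: workbook E has X1 = … = X4 = 0, while workbook A has X1 = 1 and the rest zero.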

  26. Then the model can be written as follows: The Complete Model: where

  27. In this case Dependent Variable: Y Independent Variables: X1, X2, ... , Xk-1, X

  28. In the above situation we would likely be interested in testing the equality of the intercepts. Namely the Null Hypothesis (q = k – 1)
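
Written out, the null hypothesis of equal intercepts is

```latex
H_0 : \alpha_1 = \alpha_2 = \cdots = \alpha_k,
```

or equivalently, in the dummy-variable parameterization, that all k − 1 intercept-shift coefficients are zero (hence q = k − 1).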

  29. The Reduced Model: Dependent Variable: Y Independent Variable: X

  30. Example: In the following example we are interested in comparing the effects of five workbooks (A, B, C, D, E) on the performance of students in Mathematics. For each workbook, 15 students are selected (a total of n = 15 × 5 = 75). Each student is given a pretest (pretest score ≡ X) and later a final test (final score ≡ Y). The data is given on the following slide

  31. The data The Model:

  32. Graphical display of data

  33. Some comments • The linear relationship between Y (Final Score) and X (Pretest Score), models the differing aptitudes for mathematics. • The shifting up and down of this linear relationship measures the effect of workbooks on the final score Y.

  34. The Model:

  35. The data as it would appear in a data file.

  36. The data as it would appear in a data file with the dummy variables (X1, X2, X3, X4) added

  37. Here is the data file in SPSS with the dummy variables (X1, X2, X3, X4) added. These can be added within SPSS

  38. Fitting the complete model The dependent variable is the final score, Y. The independent variables are the Pre-score X and the four dummy variables X1, X2, X3, X4.

  39. The Output

  40. The Output - continued

  41. The interpretation of the coefficients The common slope

  42. The interpretation of the coefficients The intercept for workbook E

  43. The interpretation of the coefficients The changes in the intercept when we change from workbook E to other workbooks.

  44. The model can be written as follows: The Complete Model: • When the workbook is E then X1 = 0,…, X4 = 0 and • When the workbook is A then X1 = 1,…, X4 = 0 and hence d1 is the change in the intercept when we change from workbook E to workbook A.

  45. Testing for the equality of the intercepts The reduced model: the independent variable is only X (the pre-score)

  46. Fitting the reduced model The dependent variable is the final score, Y. The independent variable is only the Pre-score X.

  47. The Output for the reduced model Lower R2

  48. The Output - continued Increased RSS (residual sum of squares)

  49. The F Test
