Problem from the Literature:  Women in Politics


Presentation Transcript


1. Problem from the Literature: Women in Politics
In today's class we will focus on the research article on women in politics by Susan Welch and Lee Sigelman, "Changes in Public Attitudes toward Women in Politics," Social Science Quarterly, Volume 63, No. 2, June 1982, pages 312-322. The data file for this problem is WomenInPolitics.Sav, which contains the data recoded into the format used by the article. The data set contains the data for the year 1974.

2. Stage One: Define the Research Problem
In this stage, the following issues are addressed:
• Relationship to be analyzed
• Specifying the dependent and independent variables
• Method for including independent variables

Relationship to be analyzed
This article examines the relationship between attitudes toward women's participation in politics and a variety of demographic predictors, for three time periods in the 1970s.

3. Specifying the dependent and independent variables
The article incorporates three dependent variables:
• FEPOL 'Women Not Emotionally Suited for Politics'
• FEHOME 'Women Should Leave Running the Country to Men'
• FEPRES 'Vote for a Woman for President'
The independent variables are gender, employment status, marital status, race, political party identification, political ideology, age, education, income, size of city of residence, frequency of church attendance, and religion.
Each dependent variable is a two-group dichotomy with code values of 1 and 2 for the groups it contains.
The nonmetric independent variables (gender, labor force status, marital status, race, and religion) have been converted to dummy-coded variables. Note that the dummy coding for marital status and religion contains one new variable for each category of the original variable, contrary to the usual rule that the number of dummy-coded variables is one less than the number of categories. Since we will do stepwise discriminant analysis, this will not create a problem for us: SPSS will not enter variables that are a linear combination of other variables, which is what happens when we include the extra variable in the dummy coding. If we attempt direct entry of all variables, SPSS will fail the tolerance test for one of the variables in the dummy-coded set with the extra category.
The independent variables party identification, political ideology, age, education, income, size of residence, and church attendance will be treated as metric variables. We might challenge the metric status of some of these, e.g. political party, but we will include them to replicate the analysis conducted by the authors.

4. Method for including independent variables
The article states that the stepwise method was used for variable inclusion, using the Wilks's lambda criterion rather than the Mahalanobis distance criterion we used in previous analyses (page 315).

5. Stage 2: Develop the Analysis Plan: Sample Size Issues
In this stage, the following issues are addressed:
• Missing data analysis
• Minimum sample size requirement: 20+ cases per independent variable
• Division of the sample: 20+ cases in each dependent variable group

6. Run the MissingDataCheck Script
In the missing data analysis, we are looking for a pattern or process by which the missing data could influence the results of the statistical analysis.

7. Complete the 'Check for Missing Data' Dialog Box

8. Number of Valid and Missing Cases per Variable
Three independent variables have relatively large numbers of missing cases: POLPARTY 'Party Identification', POLVIEWS 'Political Ideology', and INCOME 'Family Income'. However, all variables have valid data for 90% or more of cases, so no variables will be excluded for an excessive number of missing cases.

9. Frequency of Cases that are Missing Variables
Next, we examine the number of missing variables per case. Of the 22 variables in the analysis (19 independent variables and 3 dependent variables), no case was missing half of the variables, and only one case was missing 4 of the 22. No cases will be removed for having an excessive number of missing variables.
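If the MissingDataCheck script is not available, the per-case count can be reproduced directly with the COUNT command. A minimal sketch: the variable list is abbreviated and should be replaced with the full list of 22 analysis variables from WomenInPolitics.Sav.

* Count the number of missing values for each case
* (variable list abbreviated; include all 22 analysis variables).
COUNT nmissing = FEPOL FEHOME FEPRES POLPARTY POLVIEWS INCOME (MISSING).
EXECUTE.

* Tabulate how many cases are missing 0, 1, 2, ... variables.
FREQUENCIES VARIABLES=nmissing.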

10. Distribution of Patterns of Missing Data
About 96.3% of the cases have no more than one missing variable. Of those cases missing two or more variables, the frequencies for the combinations are 4 or fewer. There is no evidence of a predominant missing data pattern that will impact the analysis.

11. Correlation Matrix of Valid/Missing Dichotomous Variables
None of the correlations among the missing/valid indicator variables rises above the weak level (the largest is 0.249), so we can delete missing cases without fear of distorting the solution.

12. Minimum sample size requirement: 20+ cases per independent variable
Restricting our sample to the cases from 1974 who were asked this question, we have 698 subjects for analysis. Using the authors' count of 19 independent variables, we have a ratio of roughly 37 cases per independent variable (698 / 19 ≈ 36.7), well above the minimum requirement of 20.

Division of the sample: 20+ cases in each dependent variable group
There are over 300 subjects in each subgroup of the dependent variable.

13. Stage 2: Develop the Analysis Plan: Measurement Issues
In this stage, the following issues are addressed:
• Incorporating nonmetric data with dummy variables
• Representing curvilinear effects with polynomials
• Representing interaction or moderator effects

Incorporating Nonmetric Data with Dummy Variables
Marital Status has already been recoded into:
• MARRIED 'Married'
• WIDOWED 'Widowed'
• DIVORCD 'Divorced or Separated'
• NEVMARR 'Never Married'
Similarly, Religion and Fundamentalist were combined and recoded into five dichotomous variables:
• FUNDPROT 'Fundamentalist Protestant'
• OTHRPROT 'Other Protestant'
• CATHOLIC 'Roman Catholic'
• JEWISH 'Jewish'
• NONE_OTH 'None or Other'
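For readers reconstructing the recodes, dummy coding of this kind can be done with RECODE ... INTO. A minimal sketch, assuming a source variable named MARITAL with codes 1 = married, 2 = widowed, 3 = divorced/separated, 4 = never married; both the name and the codes are assumptions, so check the codebook before using it.

* Dummy-code marital status (source variable name and codes assumed).
* MISSING=SYSMIS keeps missing cases from being recoded to 0 by ELSE.
RECODE MARITAL (MISSING=SYSMIS) (1=1) (ELSE=0) INTO MARRIED.
RECODE MARITAL (MISSING=SYSMIS) (2=1) (ELSE=0) INTO WIDOWED.
RECODE MARITAL (MISSING=SYSMIS) (3=1) (ELSE=0) INTO DIVORCD.
RECODE MARITAL (MISSING=SYSMIS) (4=1) (ELSE=0) INTO NEVMARR.
EXECUTE.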

14. Representing Curvilinear Effects with Polynomials
We do not have any evidence of curvilinear effects at this point in the analysis.

Representing Interaction or Moderator Effects
We do not have any evidence at this point in the analysis that we should add interaction or moderator variables.

15. Stage 3: Evaluate Underlying Assumptions
In this stage, the following issues are addressed:
• Nonmetric dependent variable and metric or dummy-coded independent variables
• Multivariate normality of metric independent variables: assess normality of individual variables
• Linear relationships among variables
• Assumption of equal dispersion for dependent variable groups

Nonmetric dependent variable and metric or dummy-coded independent variables
The dependent variable is nonmetric. All of the independent variables are metric or dichotomous dummy-coded variables.

Multivariate normality of metric independent variables
Since no direct test of multivariate normality is available, we assess the normality of the individual metric variables.

16. Run the 'NormalityAssumptionAndTransformations' Script

17. Complete the 'Test for Assumption of Normality' Dialog Box

18. Tests of Normality
We find that all of the independent variables fail the test of normality and that none of the transformations induced normality in any variable. We should note the failure to meet the normality assumption for possible inclusion in our discussion of findings.

Linear relationships among variables
Since our dependent variable is not metric, we cannot use it to test for linearity of the independent variables. As an alternative, we can plot each metric independent variable against all other independent variables in a scatterplot matrix to look for patterns of nonlinear relationships. If one of the independent variables shows multiple nonlinear relationships to the other independent variables, we consider it a candidate for transformation.
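The script automates these tests; an equivalent check can be run directly with the EXAMINE procedure, which produces Kolmogorov-Smirnov and Shapiro-Wilk tests along with normal Q-Q plots. A minimal sketch; the variable list is abbreviated, and names such as AGE, EDUC, and ATTEND are assumptions rather than names taken from the slides.

* Normality tests and Q-Q plots for the metric predictors.
EXAMINE VARIABLES=POLPARTY POLVIEWS AGE EDUC INCOME ATTEND
  /PLOT NPPLOT
  /STATISTICS DESCRIPTIVES.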

19. Requesting a Scatterplot Matrix

20. Specifications for the Scatterplot Matrix

21. The Scatterplot Matrix
Blue fit lines were added to the scatterplot matrix to improve interpretability. Having computed a scatterplot for all combinations of metric independent variables, we identify all of the variables that appear in any plot showing a nonlinear trend. We will call these variables our nonlinear candidates. To identify which of the nonlinear candidates is producing the nonlinear pattern, we look at all of the plots in each row or column containing the candidate variables. A candidate variable that is not linear should show up in a nonlinear relationship in several plots with other variables in its column or row. Ideally, the form of the plot will suggest the power term that best represents the relationship, e.g. a squared or cubed term. None of the scatterplots show evidence of any nonlinear relationships.
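The matrix requested through the dialog boxes above corresponds roughly to the following syntax; a sketch under the same naming assumptions as before, with the variable list abbreviated.

* Scatterplot matrix of the metric independent variables.
GRAPH
  /SCATTERPLOT(MATRIX)=POLPARTY POLVIEWS AGE EDUC INCOME ATTEND.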

22. Assumption of equal dispersion for dependent variable groups
Box's M tests for homogeneity of the dispersion matrices across the subgroups of the dependent variable. The null hypothesis is that the dispersion matrices are homogeneous. If the analysis fails this test, we can request separate-group dispersion matrices in the classification phase of the discriminant analysis to see if this improves our accuracy rate. Box's M is produced by the SPSS discriminant procedure, so we will defer this question until we have obtained the discriminant analysis output.

23. Stage 4: Estimation of Discriminant Functions and Overall Fit: The Discriminant Functions
In this stage, the following issues are addressed:
• Compute the discriminant analysis
• Overall significance of the discriminant function(s)

Compute the discriminant analysis
The steps to obtain a discriminant analysis are detailed on the following screens.

24. Requesting a Discriminant Analysis

25. Specifying the Dependent Variable

26. Specifying the Independent Variables

27. Specifying Statistics to Include in the Output

28. Specifying the Stepwise Method for Selecting Variables

29. Specifying the Classification Options

30. Complete the Discriminant Analysis Request
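For reference, the dialog-box steps above correspond roughly to the following pasted syntax. This is a sketch under assumptions: the /VARIABLES list is abbreviated (all 19 predictors belong there), the F-to-enter and F-to-remove values are the usual SPSS defaults, and names such as AGE, EDUC, and ATTEND are assumed.

* Stepwise discriminant analysis of FEPOL using the Wilks's lambda
* criterion, priors from group sizes, leave-one-out (cross-validated)
* classification, and the pooled within-groups covariance matrix.
DISCRIMINANT
  /GROUPS=FEPOL(1 2)
  /VARIABLES=POLPARTY POLVIEWS AGE EDUC INCOME ATTEND
  /METHOD=WILKS
  /FIN=3.84
  /FOUT=2.71
  /PRIORS=SIZE
  /STATISTICS=MEAN BOXM TABLE CROSSVALID
  /CLASSIFY=NONMISSING POOLED.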

31. Overall significance of the discriminant function(s)
The output to determine the overall statistical significance of the discriminant functions is shown below. As we can see, SPSS reports one statistically significant function, with a probability of significance less than 0.0001. The canonical correlation coefficient is 0.302, which is close to the value of 0.327 for this analysis in the journal article. We conclude that there is a relationship between the dependent variable and the independent variables, and that there is one statistically significant discriminant function for this problem.

32. Stage 4: Estimation of Discriminant Functions and Overall Fit: Assessing Model Fit
In this stage, the following issues are addressed:
• Assumption of equal dispersion for dependent variable groups
• Classification accuracy by chance criteria
• Press's Q statistic
• Presence of outliers

33. Assumption of equal dispersion for dependent variable groups
In discriminant analysis, the best measure of overall fit is classification accuracy. The appropriateness of using the pooled covariance matrix in the classification phase is evaluated by the Box's M statistic. For this problem, Box's M is not statistically significant, so we conclude that the dispersion of our two groups is homogeneous and that using the within-groups covariance matrix (pooled variance) for classification is justified. Had we failed this test, our remedy would be to re-run the discriminant analysis requesting the use of separate covariance matrices in classification.

34. Classification accuracy by chance criteria - 1
As shown below, the classification accuracy for our analysis is 62.5% (using the cross-validated accuracy rate in footnote C), which is close to the 64.6% reported in the article. (Note that when we use the stepwise method, SPSS may classify more cases than were included in the analysis used to derive the discriminant functions. Listwise deletion of missing cases is used in deriving the discriminant functions, but a case is excluded from classification only if it is missing data for one of the variables actually entered by the stepwise procedure. If this behavior is problematic, you can create a variable that counts the number of missing values for each case and select into the analysis only those cases that are not missing data on any variable, as sketched below. This forces SPSS to use the same set of cases for deriving the functions and for classification.)
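A minimal sketch of that workaround, using the NMISS function; the variable list is abbreviated and should include the dependent variable and all 19 predictors.

* Keep only cases with complete data on the analysis variables,
* so derivation and classification use the same cases.
* (Use a FILTER variable instead of SELECT IF to keep the
* excluded cases in the file.)
COMPUTE nmiss = NMISS(FEPOL, POLPARTY, POLVIEWS, AGE, EDUC, INCOME, ATTEND).
SELECT IF (nmiss = 0).
EXECUTE.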

35. Classification accuracy by chance criteria - 2
To compute the proportional chance criterion, we look at the output titled Prior Probabilities for Groups and see that the proportional sizes of our two groups are .455 and .545. The proportional chance criterion is (.455)^2 + (.545)^2 = .504. A twenty-five percent increase over .504 is about .630. Our accuracy rate of 62.5% falls just below this standard, so we might label our model as promising rather than definitive. The maximum chance criterion would set a target of 25% over the largest group size (.545), equal to .681; our computed accuracy falls short of this standard as well. Though the maximum chance criterion is a more rigorous requirement, we have some latitude in deciding when it applies. In this case, the two groups of the dependent variable are approximately the same size, so we can choose the proportional chance criterion; if there were a dominant subgroup, the requirement to use the maximum chance criterion would be more binding. Since the purpose of this article is to demonstrate the existence of relationships rather than to produce a predictive model, we can accept a small shortfall in our 25%-over-chance criterion and label this model as one worth pursuing.
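The same arithmetic as a worked sketch in syntax; the group proportions come from the Prior Probabilities for Groups table.

* Chance criteria for classification accuracy.
COMPUTE propchance = 0.455**2 + 0.545**2.   /* proportional chance = .504 */
COMPUTE proptarget = 1.25 * propchance.     /* target = .630 */
COMPUTE maxtarget  = 1.25 * 0.545.          /* maximum chance target = .681 */
EXECUTE.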

36. Press's Q statistic
Substituting the parameters for this problem into the formula for Press's Q, where N = 600 cases classified, n = 381 cases correctly classified, and K = 2 groups, we obtain [600 - (381 x 2)]^2 / (600 x (2 - 1)) = 43.7, which exceeds the critical value of 6.63. Our prediction accuracy is greater than expected by chance.
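The same computation in syntax, as a sketch:

* Press's Q = (N - n*K)**2 / (N*(K - 1)).
COMPUTE pressq = (600 - 381*2)**2 / (600*(2 - 1)).   /* yields 43.7 */
EXECUTE.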

37. Presence of outliers - 1
SPSS prints Mahalanobis distance scores for each case in the table of Casewise Statistics, so we can use this as a basis for detecting outliers.
According to the SPSS Applications Guide, p. 227, cases with large values of the Mahalanobis distance from their group mean can be identified as outliers. For large samples from a multivariate normal distribution, the square of the Mahalanobis distance from a case to its group mean is approximately distributed as a chi-square statistic with degrees of freedom equal to the number of variables in the analysis. The critical value of chi-square with 5 degrees of freedom (the stepwise procedure entered five variables into the function) and an alpha of 0.01 (we only want to detect major outliers) is 15.086.
We can request this figure from SPSS using the following compute command:
COMPUTE mahcutpt = IDF.CHISQ(0.99,5).
EXECUTE.
Here 0.99 is the cumulative probability up to the significance level of interest and 5 is the number of degrees of freedom. SPSS will create a column of values in the data set that contains the desired value, 15.086.
We scan the table of Casewise Statistics to identify any cases that have a squared Mahalanobis distance greater than 15.086 for the group to which the case is most likely to belong, i.e. under the column labeled 'Highest Group.'

38. Presence of outliers - 2
Scanning the Casewise Statistics for the Original sample, I do not find any cases that have a D² value this large for the highest classification group.

39. Stage 5: Interpret the Results
In this section, we address the following issues:
• Number of functions to be interpreted
• Relationship of functions to categories of the dependent variable
• Assessing the contribution of predictor variables
• Impact of multicollinearity on the solution

Number of functions to be interpreted
As indicated previously, there is one significant discriminant function to be interpreted.

Relationship of functions to categories of the dependent variable
With only one function, that function necessarily differentiates between the two groups of the dependent variable.

40. Assessing the contribution of predictor variables - 1
Identifying the statistically significant predictor variables
The summary table of variables entering and leaving the discriminant functions is shown below. Five variables were found to make a statistically significant contribution to distinguishing the two groups of the dependent variable: Political Ideology, Age in Years, Party Identification, Highest Grade Completed, and Frequency of Church Attendance.

41. Assessing the contribution of predictor variables - 2
Importance of Variables and the Structure Matrix
To determine which predictor variables are more important in predicting group membership when we use a stepwise method of variable selection, we can simply look at the order in which the variables entered, as shown in the following table. The three variables entered first in our analysis (Political Ideology, Age in Years, and Party Identification) are also reported in the authors' Table 1, along with Fundamentalist Protestant, which we did not find in our analysis. We also found Highest Grade Completed and Frequency of Church Attendance to be statistically significant, which the authors did not. I attribute the differences to a possible discrepancy between my coding of the religious groups and the authors' coding, which I am unable to reconcile.

42. Assessing the contribution of predictor variables - 3
The Structure Matrix, shown below, confirms the conclusions about the relative importance of the independent variables obtained above from the stepwise procedure. For a two-group problem, which has only one discriminant function, this information is redundant with the results of the stepwise inclusion of variables. When we have more than one discriminant function in an analysis, the structure matrix helps us identify which independent variables are associated with which discriminant functions separating which groups.

43. Assessing the contribution of predictor variables - 4
Comparing Group Means to Determine Direction of Relationships
We can examine the pattern of means on the significant independent variables for the two groups of the dependent variable to identify the role of the independent variables in predicting group membership. The following table contains an extract of the SPSS output for the variables included in the stepwise analysis. (I created this table with only these variables because the full output from the discriminant analysis was too large to view.)
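A compact table of group means like this can also be produced directly with the MEANS procedure; a sketch under the usual naming assumptions (FEPOL appears in the slides, while the predictor names are assumed).

* Means of the five entered predictors by dependent variable group.
MEANS TABLES=POLVIEWS AGE POLPARTY EDUC ATTEND BY FEPOL.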

44. Assessing the contribution of predictor variables - 5
The most important predictor variable, Political Ideology, was scaled so that the higher the score, the more conservative the respondent. The mean for Political Ideology was higher (more conservative) for respondents who agreed that women were not emotionally suited for politics. For the second most important predictor variable, Age in Years, respondents who agreed that women were not emotionally suited for politics were, on average, seven years older than their counterparts who disagreed. The third most important predictor variable, Party Identification, was scaled so that higher scores were associated with being a Republican rather than an Independent or a Democrat. The higher mean for the Agree group can be interpreted as indicating that Republicans tended to view women as less suited for politics, though I would want to look at a crosstabs table to make certain that the distribution was not bimodal or otherwise problematic. The final two predictor variables, Highest Grade Completed and Frequency of Church Attendance, indicate that those who believe women are less suited to politics than men tend to have lower levels of education and to attend church more often. All in all, the findings fit our image of the characteristics of groups that we would expect to be less supportive of women in politics.

45. Impact of multicollinearity on the solution
In SPSS discriminant analysis, multicollinearity is indicated by very small tolerance values for variables, e.g. less than 0.10 (0.10 is the size of the tolerance, not its significance value). If we look at the table of Variables Not in the Analysis, we see that multicollinearity is not a problem in this analysis.

46. Stage 6: Validate the Model
In this stage, we are normally concerned with the following issues:
• Conducting the validation analysis
• Generalizability of the discriminant model

Conducting the Validation Analysis
To validate the discriminant analysis, we can randomly divide our sample into two groups: a screening sample and a validation sample. The analysis is computed for the screening sample and used to predict membership on the dependent variable in the validation sample. If the model is valid, we would expect the accuracy rates for both samples to be about the same. In the double cross-validation strategy, we reverse the designation of the screening and validation samples and re-run the analysis. We can then compare the discriminant functions derived for both samples. If the two sets of functions contain very different sets of variables, it indicates that the variables might have achieved significance because of the sample size rather than because of the strength of the relationship, and our finding would be that the predictive utility of these predictors is not generalizable. The dialog-box steps follow; a syntax sketch of the whole procedure appears after them.

47. Set the Starting Point for Random Number Generation

48. Compute the Variable to Randomly Split the Sample into Two Halves

49. Specify the Cases to Include in the First Screening Sample

50. Specify the Value of the Selection Variable for the First Validation Analysis
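As promised above, a syntax sketch of the split-sample validation. Assumptions: the seed value is arbitrary, the /VARIABLES list is abbreviated, and the /SELECT subcommand (which derives the functions from the selected cases but classifies all cases) carries the validation.

* Make the random split reproducible (seed value is arbitrary).
SET SEED=20020101.

* Randomly assign each case to one of two halves (0 or 1).
COMPUTE split = (UNIFORM(1) > 0.5).
EXECUTE.

* Derive functions on the screening half; SPSS reports
* classification accuracy separately for the unselected cases.
DISCRIMINANT
  /SELECT=split(0)
  /GROUPS=FEPOL(1 2)
  /VARIABLES=POLPARTY POLVIEWS AGE EDUC INCOME ATTEND
  /METHOD=WILKS
  /PRIORS=SIZE
  /STATISTICS=TABLE
  /CLASSIFY=NONMISSING POOLED.

* For double cross-validation, repeat with /SELECT=split(1).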
