
Cronbach’s Alpha






  1. It is very common in psychological research to collect multiple measures of the same construct. For example, in a questionnaire designed to measure optimism, there are typically many items that collectively measure the construct of optimism. To have confidence in a measure such as this, we need to test its reliability: the degree to which it is error-free. The type of reliability we'll be examining here is called internal consistency reliability: the degree to which multiple measures of the same thing agree with one another. Cronbach’s Alpha

  2. As an example we consider the “Benevolent Sexism” scale, part of the Ambivalent Sexism Inventory developed by Peter Glick and Susan Fiske (1996). Details are given in the Appendix to the print version of the notes. Cronbach’s Alpha

  3. Most of the items are phrased so that strong agreement indicates a belief that men should protect women, that men need women, and that women have positive qualities that men lack. However, three of the items are phrased in the reverse: #3, #6, and #13. In order to make those items comparable to the other items, we will need to reverse score them. Reverse Scoring

  4. In this questionnaire, participants responded to the items using a 7-point Likert scale (the original scale had only 5 points) ranging from 1 (“Strongly Disagree”) to 7 (“Strongly Agree”). When we reverse score an item, we want 1's to turn into 7's, 7's to turn into 1's, and all the scores in between to become their appropriate opposite (6's into 2's, 5's into 3's, etc.). Reverse Scoring

  5. Fortunately, there is a simple mathematical rule for reverse scoring: reverse score(x) = max(x) + 1 – x, where max(x) is the maximum possible value for x. In our case, max(x) is 7 because the Likert scale only went up to 7. To reverse score, we take 7 + 1 = 8 and subtract each score from that: 8 – 7 = 1, 8 – 1 = 7. Reverse Scoring
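The reverse-scoring rule above can be sketched as a one-line Python function (purely illustrative; the slides themselves do the computation in SPSS):

```python
def reverse_score(x, max_val=7):
    """Reverse-score a Likert response using max(x) + 1 - x."""
    return max_val + 1 - x
```

For a 7-point scale, `reverse_score(7)` returns 1, `reverse_score(1)` returns 7, and the midpoint `reverse_score(4)` stays 4.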

  6. To reverse score in SPSS, select Transform > Compute Variable. Reverse Scoring

  7. You will be creating a new variable for each of the variables you need to reverse score: #3, #6, and #13. The original variables are called ASI3, ASI6 and ASI13. For simplicity, keep the same names for the new reverse-scored variables. Name the first variable (the “Target Variable”) ASI3, and set it equal to 8 - ASI3. Repeat the exercise for the remaining variables (ASI6 and ASI13). Reverse Scoring

  8. Reverse Scoring

  9. Now you're ready to compute the reliability of this scale. Select Analyze > Scale > Reliability Analysis. Reliability

  10. Move all the scored items into the 'Items' box. Reliability

  11. Click on the box labelled Statistics and select Scale if item deleted (explained below). Reliability

  12. Press 'Continue' and then 'OK.' You should get the following output. Reliability

  13. Look at the top of the output and you will see “.741” under “Cronbach's Alpha.” This is the most common statistic used to describe the internal consistency reliability of a set of items. If you are using a questionnaire in your research, your results should include a report of the Cronbach's alpha for your questionnaire. Reliability
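The alpha statistic SPSS reports can also be computed directly from the standard formula, α = k/(k−1) · (1 − Σ s²ᵢ / s²_total), where k is the number of items, s²ᵢ the variance of item i, and s²_total the variance of the total score. A minimal Python sketch:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) data matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

As a sanity check, a set of perfectly correlated items yields α = 1.0, since all the total-score variance is shared between items.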

  14. Reliability

  15. Reliability The first two columns (Scale Mean if Item Deleted and Scale Variance if Item Deleted) of the next table generally aren't all that useful.

  16. The third column is the correlation between a particular item and the sum of the rest of the items. This tells you how well a particular item “goes with” the rest of the items. Reliability

  17. In the output above, the best item appears to be ASI1, with an item-total correlation of r = .598. The item with the lowest item-total correlation is ASI9 (r = .255). Reliability
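The corrected item-total correlation in that column is simply the correlation between an item and the sum of the remaining items. A small Python sketch of the same quantity (illustrative, not the SPSS implementation):

```python
import numpy as np

def corrected_item_total(items):
    """Correlation of each item with the sum of the *other* items."""
    items = np.asarray(items, dtype=float)
    rs = []
    for j in range(items.shape[1]):
        rest = items.sum(axis=1) - items[:, j]     # total excluding item j
        rs.append(float(np.corrcoef(items[:, j], rest)[0, 1]))
    return rs
```

Items that track the rest of the scale closely give correlations near 1; an item near zero is measuring something else.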

  18. If this number is close to zero, then you should consider removing the item from your scale because it is not measuring the same thing as the rest of the items. Reliability

  19. Now look at the last column, “Alpha if Item Deleted.” This is a very important column. It estimates what the Cronbach's alpha would be if you got rid of a particular item. Alpha if Item Deleted

  20. For example, at the very top of this column, the number is .690. That means that the Cronbach's alpha of this scale would drop from .741 to .690 if you got rid of that item. Alpha if Item Deleted

  21. Because a higher alpha indicates more reliability, it would be a bad idea to get rid of the first item. Alpha if Item Deleted

  22. In fact, if you look down the "Alpha if item deleted" column, you will see that none of the values is greater than the current alpha of the whole scale: .741. This means that you don't need to drop any items. Alpha if Item Deleted
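"Alpha if item deleted" is exactly what it sounds like: recompute alpha on the data with one column dropped. A self-contained Python sketch (again illustrative, not SPSS's code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) data matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def alpha_if_item_deleted(items):
    """Alpha recomputed with each item removed in turn (needs >= 3 items)."""
    items = np.asarray(items, dtype=float)
    return [cronbach_alpha(np.delete(items, j, axis=1))
            for j in range(items.shape[1])]
```

If none of the returned values exceeds the full-scale alpha, dropping items cannot improve reliability, which is the decision rule the slides describe.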

  23. If you are using an accepted scale obtained from a published source, you do not need to worry about improving reliability. You should use the whole scale, even if it has problems, because if you start changing the scale you will be unable to compare your results to the results of others who have used the scale. You only want to improve the reliability of a scale if it is a scale you are developing. Improving Reliability

  24. If one of the “Alpha if item deleted” values is greater than the overall alpha, you should re-run Analyze > Scale > Reliability Analysis after moving the offending item from the “Items” box back over to the unused-items box. Repeat this process until there are no values in the “Alpha if item deleted” column that are greater than the alpha for the overall scale. Improving Reliability

  25. The goal of this whole procedure is to produce a single score for your questionnaire. Once you've used reliability analysis to identify the items that will produce the most reliable measure, you can use those items to create an average score for your questionnaire, as described below. Computing a mean score for a questionnaire

  26. To compute a mean score, select Transform > Compute. In the Target Variable box, type in the name of your scale, ASI. In the Numeric Expression box, type the word MEAN, followed by “(” and then a list of the variables you want to average together, separated by commas. Make sure you only put in the variables that you decided were the best for the scale. At the end, close the expression with a “)”. Press OK to compute the new variable. Computing a mean score for a questionnaire
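The SPSS MEAN() expression averages the listed items row by row, producing one score per participant. The equivalent in Python, with hypothetical response data (rows = participants, columns = the retained, already reverse-scored items):

```python
import numpy as np

# Hypothetical responses for two participants on three retained items.
data = np.array([[5.0, 6.0, 4.0],
                 [3.0, 2.0, 4.0]])

# One mean score per participant, like SPSS's MEAN(ASI1, ASI2, ...).
asi = data.mean(axis=1)
```

One caveat if you adapt this: SPSS's MEAN() skips missing values, whereas `data.mean(axis=1)` would propagate NaN; use `np.nanmean(data, axis=1)` for matching behavior with incomplete data.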

  27. Computing a mean score for a questionnaire

  28. Select Graphs > Legacy Dialogs > Histogram and put your new ASI variable into the variable box. Press OK. You should get output like this. Computing a mean score for a questionnaire

  29. Computing a mean score for a questionnaire

  30. A histogram is a plot of how often possible values occurred. It's one way to see if there is anything really strange in your data - any extreme values, or all the scores piled up on one side. If you've done everything correctly, you should find that the values on the right side of the image above correspond to the values in your output: a standard deviation of .851, a mean of 4.30, and N of 74. Computing a mean score for a questionnaire
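The same check can be done outside SPSS. This sketch simulates 74 scores with the mean and standard deviation the slide reports (the data here are made up for illustration) and draws the histogram with matplotlib:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

# Illustrative data only: 74 simulated ASI mean scores matching the
# summary statistics on the slide (mean 4.30, SD .851, N = 74).
scores = np.random.default_rng(0).normal(4.30, 0.851, 74)

counts, bin_edges, _ = plt.hist(scores, bins=10, edgecolor="black")
plt.xlabel("ASI mean score")
plt.ylabel("Frequency")
plt.savefig("asi_histogram.png")
```

Extreme values or a heavily skewed pile-up on one side would show up immediately in such a plot.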
