
Introduction to analysis of variance





Presentation Transcript


  1. Introduction to analysis of variance Chapter 13

  2. A new research situation • You want to know if psychology majors, physics majors, and math majors differ in their happiness • You can’t use any of the tests we’ve discussed so far, since you have three levels of major (i.e., three different majors a person could have) • What to do?

  3. Analyze the variance • Where does the difference lie? • Is it between all the majors? • Is it between one major and the other majors? •  Analysis of variance • ANOVA

  4. Key question • Where is there more variability – between groups or within groups? • If the null hypothesis were true, these would be equal • If there is more variability between groups than within groups, this provides support for the research hypothesis

  5. To calculate this • Need to calculate variance between groups and variance within groups • First, though, we’ll calculate SS • Then, we’ll divide each SS by its df to get the variance

  6. How to calculate this • Total SS = SS between groups + SS within groups • Total SS = computing SS for all data, regardless of group • Within SS = computing SS for each group of scores, and then adding those group SS’s together •  SS between groups = total SS – SS within groups
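
A minimal sketch of this SS decomposition in Python, using invented happiness scores for the three hypothetical groups of majors (the numbers are made up purely for illustration):

```python
import numpy as np

# Invented happiness scores for three hypothetical groups of majors
groups = {
    "psychology": np.array([6.0, 7.0, 5.0, 8.0]),
    "physics":    np.array([4.0, 5.0, 6.0, 5.0]),
    "math":       np.array([7.0, 8.0, 6.0, 7.0]),
}

all_scores = np.concatenate(list(groups.values()))
grand_mean = all_scores.mean()

# Total SS: squared deviations of every score from the grand mean, ignoring group
ss_total = ((all_scores - grand_mean) ** 2).sum()

# Within SS: squared deviations of each score from its own group mean, summed over groups
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())

# Between SS: whatever variability is left over
ss_between = ss_total - ss_within
```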

  7. Getting to df • df total = total number of participants minus 1 • df within groups = sum of df within each group • df between groups = df total – df within

  8. Putting it all together • Variance between groups = SS between/df between • AKA MS between (for mean square between) • Variance within groups = SS within/df within • AKA MS within (for mean square within)

  9. Figuring out where there’s more variability • MSbetween/MSwithin: AKA F ratio • If this is 1, there is the same amount of variability between groups as within groups • As this gets greater than 1, there is more variability between groups than within groups • such a large F is less likely to occur by chance if the null is true
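
Continuing the sketch above, the degrees of freedom, mean squares, and F ratio would be computed like this:

```python
n_total = all_scores.size                              # total number of participants
df_total = n_total - 1
df_within = sum(g.size - 1 for g in groups.values())   # sum of df within each group
df_between = df_total - df_within                      # equals number of groups minus 1

ms_between = ss_between / df_between                   # variance between groups
ms_within = ss_within / df_within                      # variance within groups

f_ratio = ms_between / ms_within                       # > 1 means more between- than within-group variability
```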

  10. How big is big enough for F? • Determined by critical F value • Found by using df for the numerator (df between) and df for the denominator (df within) • If calculated F > critical F, reject the null, since p < alpha
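
A sketch of the decision step, assuming SciPy is available; scipy.stats.f gives the critical value and p-value for the F distribution with these df:

```python
from scipy import stats

alpha = 0.05
f_crit = stats.f.ppf(1 - alpha, df_between, df_within)   # critical F for these df
p_value = stats.f.sf(f_ratio, df_between, df_within)     # area beyond the calculated F

reject_null = f_ratio > f_crit                           # equivalently, p_value < alpha
```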

  11. What about effect size? • Assessed with r²: how much of the variance in the outcome variable is explained by knowing which group someone is in? • Calculated as SSbetween/SStotal • Referred to as eta squared (η²)
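
From the same sketch, the effect size is a one-line calculation:

```python
eta_squared = ss_between / ss_total   # proportion of total variability explained by group membership
```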

  12. Telling the world in APA style • F (df numerator, df denominator) = calculated F value, p information, η² = X
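
One way to assemble that APA-style string from the values computed above (a formatting sketch, not an official template):

```python
report = (f"F({df_between}, {df_within}) = {f_ratio:.2f}, "
          f"p = {p_value:.3f}, eta-squared = {eta_squared:.2f}")
print(report)
```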

  13. Where is the difference? • The result of the ANOVA test tells you there’s a difference somewhere between groups, but not where • post hoc (after the fact) tests are used, if there’s a significant ANOVA, to figure out which groups are different from each other • (if multiple independent samples t-tests were used instead, there would be an inflated familywise Type I error)
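
To see why running separate t-tests inflates the familywise Type I error, a quick arithmetic sketch: with three groups there are three pairwise comparisons, and the chance of at least one false positive across independent tests is roughly 1 − (1 − alpha)^3.

```python
alpha = 0.05
n_comparisons = 3                     # psych vs physics, psych vs math, physics vs math
familywise_error = 1 - (1 - alpha) ** n_comparisons
print(familywise_error)               # about 0.14, well above the intended 0.05
```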

  14. Post hoc option 1: Tukey • Gives a number that captures how big the difference between group means needs to be in order for that difference to be considered significant
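
If a recent SciPy (1.8+) is available, scipy.stats.tukey_hsd runs Tukey's test on the sketch data directly; this is one convenient implementation, not the only way to get the HSD value:

```python
from scipy.stats import tukey_hsd

res = tukey_hsd(groups["psychology"], groups["physics"], groups["math"])
print(res.pvalue)     # matrix of p-values, one for each pairwise comparison of groups
```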

  15. Post hoc option 2: Scheffé • Recalculates a new F value for each comparison of two groups • Uses MS between from just those two groups • Is more conservative than an ANOVA with just those two groups, since it uses MS within from all groups, and uses df from all groups
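
A rough sketch of that comparison, continuing the example above and following the slide's description (SS between is recomputed for just the two groups, while MS within and the df come from the full ANOVA; other textbooks present Scheffé slightly differently):

```python
def scheffe_f(a, b, ms_within_all, df_between_all):
    """F for comparing two groups, per the slide's description of Scheffé's test."""
    both = np.concatenate([a, b])
    grand = both.mean()
    # SS between computed from just these two groups
    ss_between_pair = a.size * (a.mean() - grand) ** 2 + b.size * (b.mean() - grand) ** 2
    ms_between_pair = ss_between_pair / df_between_all   # df from all groups makes this conservative
    return ms_between_pair / ms_within_all               # compare to critical F(df between, df within)

f_psych_vs_physics = scheffe_f(groups["psychology"], groups["physics"], ms_within, df_between)
```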

  16. Points to take away • If you’re comparing more than two independent groups, you cannot use independent samples t-tests • Must use an ANOVA • This tells if there’s a difference somewhere • To figure out where, need follow-up (post hoc) tests
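
To sanity-check the hand calculation, scipy.stats.f_oneway runs the whole one-way ANOVA in one call (assuming SciPy is installed):

```python
from scipy.stats import f_oneway

f_stat, p = f_oneway(groups["psychology"], groups["physics"], groups["math"])
print(f_stat, p)      # should match the F ratio and p-value computed by hand above
```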
