ANOVA

Presentation Transcript


  1. ANOVA PSY440 June 26, 2008

  2. Clarification: Null & Alternative Hypotheses • Sometimes the null hypothesis is that some value is zero (e.g., the difference between groups), or that groups are equal • Sometimes the null hypothesis is the opposite of what you are “hoping” or “expecting” to find based on theory • It is usually (if not always) the claim that “nothing interesting, special, or unusual is going on here.” • If you are confused, look for the “nothing special” hypothesis - that is usually H0

  3. Clarification: Null & Alternative Hypotheses • Examples of the “nothing special” H0 • These proportions aren’t unusual; they are what previous literature has typically found. There’s nothing unusual about my sample (could be that proportions are equal or unequal; depends on what constitutes “nothing special”) • The mean score in my sample isn’t unusual - it is no different from the mean I would expect, based on what I know about the population (the expected mean could be 0, or some positive or negative number - depends on what constitutes “nothing special”) • The two groups have equal means - nothing special about the experimental condition (this is the more intuitive scenario) • Sometimes you “want” to reject H0; other times you are more interested in “ruling out” unexpected alternatives.

  4. Review: Assumptions of the t-test • Each of the population distributions follows a normal curve • The two populations have the same variance • If the variances are not equal, but the sample sizes are equal or very close, you can still use a t-test • If the variances are not equal and the samples are very different in size, use the corrected degrees of freedom provided after Levene’s test (see SPSS output)

  5. Using SPSS to conduct t-tests • One-sample t-test: Analyze => Compare Means => One sample t-test. Select the variable you want to analyze, and type in the expected mean based on your null hypothesis. • Paired or related samples t-test: Analyze => Compare Means => Paired samples t-test. Select the variables you want to compare and drag them into the “pair 1” boxes labeled “variable 1” and “variable 2” • Independent samples t-test: Analyze => Compare Means => Independent samples t-test. Specify the test variable and the grouping variable, and click on define groups to specify how the grouping variable will identify groups.

  6. Using Excel to compute t-tests • =TTEST(array1,array2,tails,type) (called =T.TEST in newer versions of Excel) • Select the arrays that you want to compare, specify the number of tails (1 or 2) and the type of t-test (1=dependent, 2=independent w/equal variance assumed, 3=independent w/unequal variance assumed). • Returns the p-value associated with the t-test.

  7. t-tests and the General Linear Model • Think of the grouping variable as x and the “test variable” as y in a regression analysis. Does knowing what group a person is in help you predict their score on y? • If you code the grouping variable as a binary numeric variable (e.g., group 1=0 and group 2=1) and run a regression analysis, you will get the same t statistic as you would get in an independent samples t-test! (try it and see for yourself)
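The equivalence described on this slide can be checked numerically. Below is a minimal sketch in Python (made-up scores, standard library only): it computes the pooled-variance independent-samples t statistic by hand, then refits the same data as a regression of y on a 0/1 dummy variable, and the two t values come out identical.

```python
from math import sqrt

# Made-up scores for two groups (illustration only)
g0 = [4.0, 5.0, 6.0, 5.0, 4.0]   # group coded x = 0
g1 = [7.0, 8.0, 6.0, 9.0, 8.0]   # group coded x = 1

def mean(v):
    return sum(v) / len(v)

# Independent-samples t-test with pooled variance, computed by hand
m0, m1, n0, n1 = mean(g0), mean(g1), len(g0), len(g1)
ss0 = sum((x - m0) ** 2 for x in g0)
ss1 = sum((x - m1) ** 2 for x in g1)
pooled_var = (ss0 + ss1) / (n0 + n1 - 2)
t_test = (m1 - m0) / sqrt(pooled_var * (1 / n0 + 1 / n1))

# The same data as a simple regression of y on a 0/1 dummy variable
x = [0] * n0 + [1] * n1
y = g0 + g1
mx, my = mean(x), mean(y)
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
b1 = sxy / sxx                      # slope = difference between group means
b0 = my - b1 * mx                   # intercept = mean of the x = 0 group
resid_ss = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
t_slope = b1 / sqrt((resid_ss / (len(y) - 2)) / sxx)

# The slope's t statistic equals the independent-samples t statistic
print(round(t_test, 6), round(t_slope, 6))
```

The slope b1 is exactly the difference between the two group means, which is why the regression and the t-test answer the same question.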

  8. Conceptual Preview of ANOVA • Thinking in terms of the GLM, the t-test is telling you how big the variance or difference between the two groups is, compared to the variance in your y variable (between vs. within group variance). • In terms of regression, how much can you reduce “error” (or random variability) by looking at scores within groups rather than scores for the entire sample?

  9. Effect Size for t Test for Independent Means • Estimated effect size after a completed study: estimated d = (M1 − M2) / S_pooled

  10. Statistical Tests Summary • One sample, σ known: Z test (standard error σ_M = σ/√N) • One sample, σ unknown: t test (estimated standard error S_M = S/√N) • Two related samples, σ unknown: t test for dependent means (estimated standard error of the difference scores) • Two independent samples, σ unknown: t test for independent means (pooled estimate of the standard error of the difference)

  11. New Topic: Analysis of Variance (ANOVA) • Basics of ANOVA • Why • Computations • ANOVA in SPSS • Post-hoc and planned comparisons • Assumptions • The structural model in ANOVA

  12. Example • Effect of knowledge of prior behavior on jury decisions • Dependent variable: rate how innocent/guilty • Independent variable: 3 levels • Criminal record • Clean record • No information (no mention of a record)

  13. More than two groups • Design: independent groups, one score per subject, 1 independent variable • The 1-factor between-groups ANOVA: statistical analysis follows design

  14. Analysis of Variance • Generic test statistic • More than two groups • Now we can’t just compute a simple difference score, since there is more than one difference

  15. Analysis of Variance • Test statistic: F-ratio = observed variance / variance from chance • More than two groups • Need a measure that describes several difference scores: variance • Variance is essentially an average squared difference

  16. Testing Hypotheses with ANOVA • Hypothesis testing: a five step program • Step 1: State your hypotheses • Null hypothesis (H0): all of the populations have the same mean • Alternative hypothesis (HA): not all of the populations have the same mean • There are several alternative hypotheses • We will return to this issue later

  17. Testing Hypotheses with ANOVA • Hypothesis testing: a five step program • Step 1: State your hypotheses • Step 2: Set your decision criteria • Step 3: Collect your data • Step 4: Compute your test statistics • Compute your estimated variances • Compute your F-ratio • Compute your degrees of freedom (there are several) • Step 5: Make a decision about your null hypothesis • Additional tests • Reconciling our multiple alternative hypotheses

  18. Step 4: Computing the F-ratio • Analyzing the sources of variance • Describe the total variance in the dependent measure • Why are these scores different? • Two sources of variability: within groups and between groups

  19. Step 4: Computing the F-ratio • Within-groups estimate of the population variance • Estimating the population variance from the variation within each sample • Not affected by whether the null hypothesis is true (different people within each group give different ratings either way)

  20. Step 4: Computing the F-ratio • Between-groups estimate of the population variance • Estimating the population variance from the variation between the means of the samples • Is affected by whether the null hypothesis is true (if there is an effect of the IV, the people in different groups give different ratings)

  21. Partitioning the variance • Note: we will start with SS (sums of squares), but will get to variance • Stage 1: Total variance = between-groups variance + within-groups variance

  22. Partitioning the variance • Total variance • Basically forgetting about separate groups • Compute the Grand Mean (GM) • Compute the squared deviations of every score from the Grand Mean: SS_total = Σ(X − GM)²

  24. Partitioning the variance • Within-groups variance • Basically the variability within each group • Add up the SS from all of the groups: SS_within = Σ(X − M)²

  26. Partitioning the variance • Between-groups variance • Basically how much each group mean differs from the Grand Mean • Subtract the GM from each group mean • Square the differences • Weight by the number of scores in the group: SS_between = Σ n(M − GM)²

  28. Partitioning the variance • Now we return to variance, but we call it Mean Square (MS) • Recall: variance = SS / df • Total variance = between-groups variance + within-groups variance

  29. Partitioning the variance • Mean Squares (variance) • MS_between = SS_between / df_between, where df_between = k − 1 (number of groups minus 1) • MS_within = SS_within / df_within, where df_within = N − k (total number of scores minus number of groups)
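As a quick numerical check on the partition above, here is a minimal sketch in Python (made-up ratings, standard library only) showing that the between-groups and within-groups sums of squares add up to the total:

```python
# Made-up ratings for three groups (illustration only)
groups = [[3, 4, 5], [6, 7, 8], [4, 6, 8]]

scores = [x for g in groups for x in g]
grand_mean = sum(scores) / len(scores)
group_means = [sum(g) / len(g) for g in groups]

# SS_total: squared deviations of every score from the grand mean
ss_total = sum((x - grand_mean) ** 2 for x in scores)
# SS_within: squared deviations of each score from its own group mean
ss_within = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)
# SS_between: squared deviations of the group means from the grand mean,
# weighted by group size
ss_between = sum(len(g) * (m - grand_mean) ** 2
                 for g, m in zip(groups, group_means))

# The partition: SS_total = SS_between + SS_within
print(round(ss_total, 6), round(ss_between + ss_within, 6))
```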

  30. Step 4: Computing the F-ratio • The F ratio: ratio of the between-groups to the within-groups population variance estimate • F-ratio = MS_between / MS_within (observed variance / variance from chance) • The F distribution • The F table • Do we reject or fail to reject the H0?

  31. Carrying out an ANOVA • The F table • Need two df’s • dfbetween (numerator) • dfwithin (denominator) • Values in the table correspond to critical F’s • Reject the H0 if your computed value is greater than or equal to the critical F • Separate tables for 0.05 & 0.01 • The F distribution

  32. Carrying out an ANOVA • The F table • Need two df’s • dfbetween (numerator) • dfwithin (denominator) • Values in the table correspond to critical F’s • Reject the H0 if your computed value is greater than or equal to the critical F • Separate tables for 0.05 & 0.01 Do we reject or fail to reject the H0? • From the table (assuming 0.05) with 2 and 12 degrees of freedom the critical F = 3.89. • So we reject H0 and conclude that not all groups are the same
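To make the decision rule concrete, here is a minimal sketch in Python (standard library only) of a full one-way ANOVA on made-up data for the jury example, with 3 groups of 5 subjects so that the degrees of freedom match the slide's df = (2, 12); the critical F of 3.89 is taken from the slide's table, not computed here.

```python
# Made-up jury ratings, three conditions with five subjects each
groups = [[2, 3, 2, 4, 3],    # criminal record
          [6, 7, 5, 8, 6],    # clean record
          [4, 5, 6, 5, 4]]    # no information

scores = [x for g in groups for x in g]
gm = sum(scores) / len(scores)
means = [sum(g) / len(g) for g in groups]

# Partition the sums of squares
ss_between = sum(len(g) * (m - gm) ** 2 for g, m in zip(groups, means))
ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)

df_between = len(groups) - 1            # k - 1 = 2
df_within = len(scores) - len(groups)   # N - k = 12

# F = MS_between / MS_within
f_ratio = (ss_between / df_between) / (ss_within / df_within)

# Critical F(2, 12) at alpha = .05 is 3.89 (from the table on the slide)
print(round(f_ratio, 2), f_ratio >= 3.89)  # reject H0 if F >= critical F
```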

  33. Assumptions in ANOVA • Populations follow a normal curve • Populations have equal variances

  34. Planned Comparisons • Reject null hypothesis • Population means are not all the same • Planned comparisons • Within-groups population variance estimate • Between-groups population variance estimate • Use the two means of interest • Figure F in usual way

  35. 1 factor ANOVA • Null hypothesis H0: all the groups are equal (XA = XB = XC) - the ANOVA tests this one!! • Alternative hypotheses HA: not all the groups are equal, e.g., XA ≠ XB ≠ XC, XA ≠ XB = XC, XA = XB ≠ XC, XA = XC ≠ XB

  36. 1 factor ANOVA • Planned contrasts and post-hoc tests: further tests used to rule out the different alternative hypotheses (XA ≠ XB ≠ XC, XA ≠ XB = XC, XA = XB ≠ XC, XA = XC ≠ XB) • Test 1: A ≠ B? • Test 2: A ≠ C? • Test 3: B = C?

  37. Planned Comparisons • Simple comparisons • Complex comparisons • Bonferroni procedure • Use more stringent significance level for each comparison

  38. Controversies and Limitations • Omnibus test versus planned comparisons • Conduct specific planned comparisons to examine • Theoretical questions • Practical questions • Controversial approach

  39. ANOVA in Research Articles • F(3, 67) = 5.81, p < .01 • Means given in a table or in the text • Follow-up analyses • Planned comparisons • Using t tests

  40. 1 factor ANOVA • Reporting your results • The observed difference • Kind of test • Computed F-ratio • Degrees of freedom for the test • The “p-value” of the test • Any post-hoc or planned comparison results • “The mean score of Group A was 12, Group B was 25, and Group C was 27. A 1-way ANOVA was conducted and the results yielded a significant difference, F(2,25) = 5.67, p < 0.05. Post hoc tests revealed that the differences between groups A and B and A and C were statistically reliable (respectively t(1) = 5.67, p < 0.05 & t(1) = 6.02, p < 0.05). Groups B and C did not differ significantly from one another.”

  41. The structural model and ANOVA • The structural model is all about deviations • Score (X), group mean (M), grand mean (GM) • Score’s deviation from the grand mean (X − GM) = score’s deviation from its group mean (X − M) + group mean’s deviation from the grand mean (M − GM)
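A tiny sketch in Python (made-up numbers) of the identity behind the structural model: each score's deviation from the grand mean splits into a within-group part and a between-group part.

```python
# Structural model identity: (X - GM) = (X - M) + (M - GM)
# X = a score, M = its group mean, GM = the grand mean (made-up values)
X, M, GM = 7.0, 6.5, 4.5

total_dev = X - GM      # score's deviation from the grand mean
within_dev = X - M      # score's deviation from its group mean
between_dev = M - GM    # group mean's deviation from the grand mean

print(total_dev == within_dev + between_dev)  # True
```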

  42. Why do the ANOVA? • What’s the big deal? Why not just run a bunch of t-tests instead of doing an ANOVA? • Experiment-wise error • The type I error rate of the family (the entire set) of comparisons • αEW = 1 − (1 − α)^c, where c = # of comparisons and α = the per-comparison alpha level • e.g., If you conduct two t-tests, each with an alpha level of 0.05, the combined chance of making a type I error is nearly 10 in 100 (1 − 0.95² = 0.0975, rather than 5 in 100) • Planned comparisons and post hoc tests are procedures designed to reduce experiment-wise error
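The experiment-wise error formula on this slide is easy to evaluate directly; a minimal sketch in Python:

```python
# Experiment-wise type I error rate: alpha_EW = 1 - (1 - alpha)^c
def experimentwise_alpha(alpha, c):
    """Chance of at least one type I error across c independent comparisons."""
    return 1 - (1 - alpha) ** c

# Two t-tests at alpha = .05: nearly 10 in 100, as the slide says
print(round(experimentwise_alpha(0.05, 2), 4))   # 0.0975
# The problem grows quickly: ten t-tests push it past 40 in 100
print(round(experimentwise_alpha(0.05, 10), 4))  # 0.4013
```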

  43. Which follow-up test? • Planned comparisons • A set of specific comparisons that you “planned” to do in advance of conducting the overall ANOVA • General rule of thumb: don’t exceed the number of conditions that you have (or even stick with one fewer) • Post-hoc tests • A set of comparisons that you decide to examine only after finding a significant ANOVA (i.e., after rejecting H0)

  44. Planned Comparisons • Different types • Simple comparisons - testing two groups • Complex comparisons - testing combined groups • Bonferroni procedure • Use more stringent significance level for each comparison • Basic procedure: • Within-groups population variance estimate (denominator) • Between-groups population variance estimate of the two groups of interest (numerator) • Figure F in usual way
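The Bonferroni procedure mentioned above amounts to dividing the overall alpha by the number of comparisons. A minimal sketch in Python (the p-values are made up for illustration):

```python
# Bonferroni procedure: test each planned comparison at alpha / (number of
# comparisons), keeping the experiment-wise error rate near the overall alpha.
alpha = 0.05
p_values = {"A vs B": 0.004, "A vs C": 0.020, "B vs C": 0.400}  # made up

per_test_alpha = alpha / len(p_values)   # 0.05 / 3, a stricter level
for label, p in p_values.items():
    verdict = "significant" if p < per_test_alpha else "not significant"
    print(f"{label}: p = {p} -> {verdict}")
```

Note that "A vs C" (p = 0.020) would pass at the usual .05 level but fails the stricter Bonferroni-corrected level, which is exactly the point of the procedure.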

  45. Post-hoc tests • Generally, you are testing all of the possible comparisons (rather than just a specific few) • Different types • Tukey’s HSD test • Scheffé test • Others (Fisher’s LSD, Newman-Keuls test, Duncan test) • Generally they differ with respect to how conservative they are.

  46. Planned Comparisons & Post-Hoc Tests as Contrasts • A contrast is basically a way of assigning numeric values to your grouping variable in a manner that allows you to test a specific difference between two means (or between one mean and a weighted average of two or more other means). • Discuss numbers • Discuss “orthogonal” contrasts

  47. Fixed vs. Random Factors in ANOVA • One-way ANOVAs can use grouping variables that are fixed or random. • Fixed: All levels of the variable of interest are represented by the variable (e.g., treatment and control, male and female). • Random: The grouping variable represents a random selection of levels of that variable, sampled from a population of levels (e.g., observers). • For one-way ANOVA, the math is the same either way, but the logic of the test is a little different. (Testing either that means are equal or that the between group variance is 0)

  48. Effect sizes in ANOVA • The effect size for ANOVA is R² • Sometimes called η² (“eta squared”): η² = SS_between / SS_total • The percent of the variance in the dependent variable that is accounted for by the independent variable • Size of effect depends, in part, on degrees of freedom
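Eta squared falls straight out of the sum-of-squares partition. A minimal sketch in Python (made-up ratings, standard library only):

```python
# Effect size: eta squared = SS_between / SS_total
groups = [[2, 3, 2, 4, 3], [6, 7, 5, 8, 6], [4, 5, 6, 5, 4]]  # made up

scores = [x for g in groups for x in g]
gm = sum(scores) / len(scores)
means = [sum(g) / len(g) for g in groups]

ss_between = sum(len(g) * (m - gm) ** 2 for g, m in zip(groups, means))
ss_total = sum((x - gm) ** 2 for x in scores)

# Proportion of variance in the DV accounted for by the grouping variable
eta_squared = ss_between / ss_total
print(round(eta_squared, 3))  # about 0.75 for these scores
```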

  49. ANOVA in SPSS • Let’s see how to do a between groups 1-factor ANOVA in SPSS (and the other tests too)

  50. Within groups (repeated measures) ANOVA • Basics of within groups ANOVA • Repeated measures • Matched samples • Computations • Within groups ANOVA in SPSS
