Why is this important? • Requirement • Understand research articles • Do research for yourself • Real world
The Three Goals of this Course • 1) Teach a new way of thinking • 2) Teach “factoids”
What you have learned! • Describing and Exploring Data / The Normal Distribution • Scales of measurement • Populations vs. Samples • Learned how to organize scores of one variable using: • frequency distributions • graphs
What you have learned! • Measures of central tendency • Mean • Median • Mode • Variability • Range • IQR • Standard Deviation • Variance
What you have learned! • Z Scores • Find the percentile of a given score • Find the score for a given percentile
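Both z-score lookups above can be done in code instead of a printed table; a minimal sketch using the standard normal CDF built from Python's `math.erf`, where the bisection inverse is our own illustrative helper, not something from the course:

```python
from math import erf, sqrt

def percentile_from_z(z):
    """Standard normal CDF: proportion of scores falling below z."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def z_from_percentile(p):
    """Invert the CDF by bisection (illustrative helper)."""
    lo, hi = -10.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if percentile_from_z(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A score 1 SD above the mean sits near the 84th percentile...
print(round(percentile_from_z(1.0) * 100, 1))   # 84.1
# ...and the 95th percentile sits near z = 1.645.
print(round(z_from_percentile(0.95), 3))        # 1.645
```

A raw score is then recovered from a percentile as x = mean + z * sd.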
What you have learned! • Sampling Distributions & Hypothesis Testing • Is this quarter fair? • Sampling distribution • CLT • The probability of a given score occurring
What you have learned! • Basic Concepts of Probability • Joint probabilities • Conditional probabilities • Different ways events can occur • Permutations • Combinations • The probability of winning the lottery • Binomial Distributions • Probability of winning the next 4 out of 10 games of Blingoo
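The counting rules above map directly onto Python's `math` module; a quick sketch, where the 6-of-49 lottery and the fair-game p = 0.5 for the Blingoo example are our illustrative assumptions (the deck does not give the actual numbers):

```python
from math import comb, perm

# Permutations: ordered arrangements of 3 finishers drawn from 10 runners.
print(perm(10, 3))          # 720
# Combinations: 6 numbers chosen from 49, order ignored (assumed lottery).
print(comb(49, 6))          # 13983816
# Binomial: P(exactly 4 wins in 10 games), assuming p = 0.5 per game.
p_4_of_10 = comb(10, 4) * 0.5**4 * 0.5**6
print(round(p_4_of_10, 4))  # 0.2051
```

Note the binomial term is just "ways to place the 4 wins" (a combination) times the probability of any one such sequence.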
What you have learned! • Categorical Data and Chi-Square • Chi square as a measure of independence • Phi coefficient • Chi square as a measure of goodness of fit
What you have learned! • Hypothesis Testing Applied to Means • One Sample t-tests • Two Sample t-tests • Equal N • Unequal N • Dependent samples
What you have learned! • Correlation and Regression • Correlation • Regression
What you have learned! • Alternative Correlational Techniques • Pearson Formulas • Point-Biserial • Phi Coefficient • Spearman’s rho • Non-Pearson Formulas • Kendall’s Tau
What you have learned! • Multiple Regression • Multiple Regression • Causal Models • Standardized vs. unstandardized • Multiple R • Semipartial correlations • Common applications • Mediator Models • Moderator Models
What you have learned! • Simple Analysis of Variance • ANOVA • Computation of ANOVA • Logic of ANOVA • Variance • Expected Mean Square • Sum of Squares
What you have learned! • Multiple Comparisons Among Treatment Means • What to do with an omnibus ANOVA • Multiple t-tests • Linear Contrasts • Orthogonal Contrasts • Trend Analysis • Controlling for Type I errors • Bonferroni t • Fisher’s Least Significant Difference • Studentized Range Statistic • Dunnett’s Test
What you have learned! • Factorial Analysis of Variance • Factorial ANOVA • Computation and logic of Factorial ANOVA • Interpreting Results • Main Effects • Interactions
What you have learned! • Factorial Analysis of Variance and Repeated Measures • Factorial ANOVA • Computation and logic of Factorial ANOVA • Interpreting Results • Main Effects • Interactions • Repeated measures ANOVA
The Three Goals of this Course • 1) Teach a new way of thinking • 2) Teach “factoids” • 3) Self-confidence in statistics
Remember • You just invented a “magic math pill” that will increase test scores. • On the day of the first test you give the pill to 4 subjects. When these same subjects take the second test, they do not get a pill. • Did the pill increase their test scores?
What if. . . • You just invented a “magic math pill” that will increase test scores. • On the day of the first test you give a full pill to 4 subjects. When these same subjects take the second test, they get a placebo. When these same subjects take the third test, they get no pill.
Note • You have more than 2 groups • You have a repeated measures design • You need to conduct a Repeated Measures ANOVA
What if. . . • You just invented a “magic math pill” that will increase test scores. • On the day of the first test you give a full pill to 4 subjects. When these same subjects take the second test, they get a placebo. When these same subjects take the third test, they get no pill.
Notice – the within variability of a group can be predicted from the other groups
Notice – the within variability of a group can be predicted from the other groups: Pill and Placebo r = .99; Pill and No Pill r = .99; Placebo and No Pill r = .99
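The deck shows only a handful of the raw scores, so the 4 × 3 table below is a hypothetical reconstruction chosen to match the subject means on the later slides (60.33, 72.33, 76.33, 93.66) and the treatment means (74, 75, 78); the exact r values it yields differ slightly from the slide's .99s, but the pattern of strong between-condition correlation is the same:

```python
def pearson_r(x, y):
    """Pearson correlation from the definitional formula."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical scores for the 4 subjects under each condition.
pill    = [57, 71, 72, 96]
placebo = [59, 72, 75, 94]
no_pill = [65, 74, 82, 91]

for name, a, b in [("Pill/Placebo", pill, placebo),
                   ("Pill/No Pill", pill, no_pill),
                   ("Placebo/No Pill", placebo, no_pill)]:
    print(name, round(pearson_r(a, b), 2))
```

All three correlations come out well above .9, because subject-to-subject differences dominate the scores.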
These scores are correlated because, in general, some subjects tend to do very well and others tend to do very poorly
Repeated Measures ANOVA • Some of the variability of the scores within a group occurs due to the mean differences between subjects. • We want to calculate, and then discard, the variability that comes from the differences between the subjects.
Sum of Squares • SS Total • The total deviation in the observed scores • Computed the same way as before
SStotal = (57-75.66)² + (71-75.66)² + . . . + (96-75.66)² = 1754.66 *What makes this value get larger?
SStotal = (57-75.66)² + (71-75.66)² + . . . + (96-75.66)² = 1754.66 *What makes this value get larger? *The variability of the scores!
Sum of Squares • SS Subjects • Represents the SS deviations of the subject means around the grand mean • It’s multiplied by k to give an estimate of the population variance (Central Limit Theorem)
SSSubjects = 3((60.33-75.66)² + (72.33-75.66)² + . . . + (93.66-75.66)²) = 1712 *What makes this value get larger?
SSSubjects = 3((60.33-75.66)² + (72.33-75.66)² + . . . + (93.66-75.66)²) = 1712 *What makes this value get larger? *Differences between subjects
Sum of Squares • SS Treatment • Represents the SS deviations of the treatment means around the grand mean • It’s multiplied by n to give an estimate of the population variance (Central Limit Theorem)
SSTreatment = 4((74-75.66)² + (75-75.66)² + (78-75.66)²) = 34.66 *What makes this value get larger?
SSTreatment = 4((74-75.66)² + (75-75.66)² + (78-75.66)²) = 34.66 *What makes this value get larger? *Differences between treatment groups
Sum of Squares • Have a measure of how much all scores differ • SSTotal • Have a measure of how much this difference is due to subjects • SSSubjects • Have a measure of how much this difference is due to the treatment condition • SSTreatment • To compute error simply subtract!
Sum of Squares • SSError = SSTotal - SSSubjects - SSTreatment = 1754.66 - 1712.00 - 34.66 = 8.0
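The whole partition can be checked in a few lines. The 4 × 3 table here is a hypothetical reconstruction (the deck never shows every cell), built to reproduce the slides' subject means and treatment means — it recovers SSSubjects = 1712 and SSTreatment = 34.66 exactly, though its SSTotal and SSError necessarily differ from the slide's values because the individual cells are invented:

```python
# Hypothetical scores: rows = subjects, cols = Pill, Placebo, No Pill.
scores = [
    [57, 59, 65],
    [71, 72, 74],
    [72, 75, 82],
    [96, 94, 91],
]
n = len(scores)         # 4 subjects
k = len(scores[0])      # 3 treatments
flat = [x for row in scores for x in row]
grand = sum(flat) / (n * k)

ss_total = sum((x - grand) ** 2 for x in flat)
subj_means = [sum(row) / k for row in scores]
ss_subjects = k * sum((m - grand) ** 2 for m in subj_means)
treat_means = [sum(row[j] for row in scores) / n for j in range(k)]
ss_treatment = n * sum((m - grand) ** 2 for m in treat_means)
ss_error = ss_total - ss_subjects - ss_treatment   # subtract, as above

print(round(ss_subjects, 2), round(ss_treatment, 2), round(ss_error, 2))
```

Discarding `ss_subjects` before forming the error term is exactly what distinguishes this from a between-subjects ANOVA.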
Compute df • df total = N - 1
Compute df • df total = N - 1 • df subjects = n - 1
Compute df • df total = N - 1 • df subjects = n - 1 • df treatment = k - 1
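Filling in the example's sizes (N = 12 scores, n = 4 subjects, k = 3 treatments) and the SS values from the slides, the remaining steps — df for error by subtraction, mean squares, and the F ratio — can be sketched as follows (comparing the result against a critical F(2, 6) is left out):

```python
N, n, k = 12, 4, 3                   # scores, subjects, treatments

df_total = N - 1                     # 11
df_subjects = n - 1                  # 3
df_treatment = k - 1                 # 2
df_error = df_total - df_subjects - df_treatment   # 6

ss_treatment, ss_error = 34.66, 8.0  # SS values from the slides
ms_treatment = ss_treatment / df_treatment
ms_error = ss_error / df_error
F = ms_treatment / ms_error
print(df_error, round(F, 1))         # 6 13.0
```

The F of about 13 on 2 and 6 df is what the omnibus test would then be evaluated on.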