
PSY 1950 Meta-analysis December 3, 2008



  1. PSY 1950 Meta-analysis December 3, 2008

  2. Definition “the analysis of analyses . . . the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings. It connotes a rigorous alternative to the casual, narrative discussions of research studies which typify our attempts to make sense of the rapidly expanding research literature.” — Glass (1976) • “Mega-silliness” — Eysenck (1977)

  3. History • Pre-history • Pearson (1904), Fisher (1948), Cochran (1955) • The Great Debate • 1952: Eysenck concluded that psychotherapy was bunk • 20 years of research did not settle the debate • 1978: Glass & Smith statistically aggregated findings from 375 studies, concluding that psychotherapy works • Necessity is the mother of invention • Meta-analysis abounds!

  4. Rationale • Meta-analyses avoid the limitations of qualitative/narrative/traditional reviews: • Weak effects overlooked • Meta-analyses are more powerful • Differences between studies over-interpreted • In meta-analysis, heterogeneity assessed statistically • Moderating variables overestimated or overlooked • In meta-analysis, moderators assessed statistically • Limited, subjective sampling of studies • In meta-analysis, exhaustive search and defined inclusion/exclusion criteria • Overwhelmed by large database • Meta-analyses can summarize hundreds of effects • Subjective assessment • In meta-analysis, any subjectivity is discernible

  5. Example: Finding Weak Effects • Cooper, H. M., & Rosenthal, R. (1980). Statistical versus traditional procedures for summarizing research findings. Psychological Bulletin, 87, 442-449. • 32 grad students, 9 faculty members • Randomly assigned to statistical or traditional review technique • Given 7 studies that examined sex differences in persistence • For 2 studies, females more persistent than males (ps = .005, .001) • For the other 5 studies, no significant difference • Does the evidence presented support the conclusion that females are more persistent?

  6. actual p = .016 (combining results across all 7 studies)
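The combined p on slide 6 comes from statistically pooling the seven studies' results rather than vote-counting which ones reached significance. A minimal sketch of one standard pooling method, Stouffer's Z (the function below is illustrative, not from the slides):

```python
from statistics import NormalDist

def stouffer_combined_p(p_values):
    """Combine one-tailed p-values with Stouffer's Z method:
    convert each p to a z-score, sum the z's, divide by sqrt(k)."""
    nd = NormalDist()
    zs = [nd.inv_cdf(1 - p) for p in p_values]
    z_combined = sum(zs) / len(zs) ** 0.5
    return 1 - nd.cdf(z_combined)
```

With only the two significant values, `stouffer_combined_p([.005, .001])` comes out far below .05; the slide's p = .016 also folds in the five nonsignificant studies, whose exact p-values are not reported here.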

  7. Example: Assessing Moderators Statistically

  8. Criticisms • Weak • Apples and oranges • Flat Earth society • Garbage in, garbage out • File-drawer problem • Strong • Post-hoc

  9. Apples and Oranges • Critique • Meta-analyses add together apples and oranges • Response • Glass: “in the study of fruit, nothing else is sensible” • Analogy with single experiments • Empirical question resolved through examination of moderating variables

  10. Flat Earth Society • Critique • Cronbach: "...some of our colleagues are beginning to sound like a kind of Flat Earth Society. They tell us that the world is essentially simple: most social phenomena are adequately described by linear relations; one-parameter scaling can discover coherent variables independent of culture and population; and inconsistences among studies of the same kind will vanish if we but amalgamate a sufficient number of studies.... The Flat Earth folk seek to bury any complex hypothesis with an empirical bulldozer.” • Response • Code and analyze moderating variables

  11. Garbage In, Garbage Out • Critique • The inclusion of flawed studies “dirties” the database, obscures the truth, and invalidates meta-analytic conclusions • Response • Glass: “I remain staunchly committed to the idea that meta-analyses must deal with all studies, good, bad, and indifferent, and that their results are only properly understood in the context of each other, not after having been censored by some a priori set of prejudices.” • Empirical question: study quality (or, better yet, related variables) can be coded and analyzed as moderators

  12. File-drawer Problem • Critique • The meta-analytic database is a biased sample of studies • Significant findings are more likely to be published than nonsignificant findings • Response • Less bias than narrative reviews • File-drawer analyses (e.g., funnel plots) can empirically address the presence and influence of missing studies
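Besides funnel plots, a classic file-drawer analysis is Rosenthal's fail-safe N: the number of unpublished null-result studies that would have to be sitting in file drawers to drag the combined result below significance. A minimal sketch (function name mine, not from the slides):

```python
from statistics import NormalDist

def fail_safe_n(z_scores, alpha=0.05):
    """Rosenthal's fail-safe N: how many unretrieved studies averaging
    z = 0 would be needed to raise the combined (Stouffer) p above alpha."""
    z_crit = NormalDist().inv_cdf(1 - alpha)  # one-tailed critical value
    k = len(z_scores)
    n_fs = (sum(z_scores) / z_crit) ** 2 - k
    return max(0, int(n_fs))
```

A large fail-safe N (e.g., 137 null studies needed to overturn ten observed studies with z = 2.0) suggests the file-drawer problem alone is unlikely to explain the result.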

  13. Post-hoc • Criticism • By definition, meta-analysis is a post-hoc endeavor, i.e., an observational study • Moderating variables may be confounded, sometimes extremely so • Effects may be correlational • Response • Confounding may be interesting in its own right • Statistical control • Hypothesis generation versus hypothesis testing

  14. Steps of a Meta-analysis • Define question • Search literature • Determine inclusion/exclusion criteria • Code moderating variables • Analyze data • This is an iterative process!

  15. Defining the Meta-analytic Question • Interestingness • Establish presence of effect • Determine magnitude of effect • Resolve differences in literature • Test competing theories • e.g., psychotherapy, imagery

  16. Inclusion/Exclusion Criteria • Theoretical considerations • Scope/generalizability • Quality • Practical considerations • Power • Missing data • Time

  17. Studies were included if they • had written reports, published or unpublished, in English, available by March 1, 2008 • presented original data from between-participants, within-participants (i.e., single-group pretest-posttest, or PP), or mixed-design (i.e., independent-groups pretest-posttest, or IG-PP) experiments or quasi-experiments • objectively, quantitatively evaluated performance on at least one cognitive task as a function of meditative experience or state • Studies were excluded if they • used a psychopathologically or neurologically disordered population • confounded meditation with other mental training (e.g., education), maturation, or practice, and used measures susceptible to such confounding (e.g., academic achievement tests) • did not report, or contain data allowing estimation of, participants’ age or meditative experience • did not contain basic methodological information (e.g., type of task administered)

  18. Literature Search • Types of searches • Keyword • Ancestor • Descendent • Available Resources • Electronic • e.g., PsycINFO, SCI, Google Scholar • Physical • Conference proceedings • Bibliographies • Key journals • Mental • Experts in the field

  19. Harvard’s Electronic Resources • SSCI/SCI (Social Sciences/Science Citation Index) • http://nrs.harvard.edu/urn-3:hul.eresource:scicitin • PsycINFO • http://nrs.harvard.edu/urn-3:hul.eresource:psycinfo • Google Scholar • http://scholar.google.com/ • E-Journals @ Harvard • http://sfx.hul.harvard.edu/sfx_local/az/ • HOLLIS • http://holliscatalog.harvard.edu/ • Interlibrary Loan • https://illiad.hcl.harvard.edu/ • Dissertations (ProQuest) • http://nrs.harvard.edu/urn-3:hul.eresource:psycinfo

  20. Coding • What to code • Anything possibly interesting • e.g., control group/condition, participant variables • Anything possibly confounding • e.g., publication year, journal impact factor • How you coded effect sizes • How to code • Using explicit coding scheme • Set measurement scale • Multiple coders • Calculate reliability
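The "calculate reliability" step for multiple coders is commonly done with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch for two coders (not from the slides):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders' category labels."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # Expected agreement if coders labeled independently at their base rates
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

Kappa of 1 means perfect agreement; 0 means agreement no better than chance. Disagreements flagged this way are then typically resolved by discussion before analysis.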

  21. Analysis • Calculate effect size • Weight effect size • Describe • Infer • Univariate analyses • Multivariate analyses

  22. Calculating Effect Size • Only one ES per construct per study • Balance between dependency and thoroughness • Typically d or r • Can be calculated in lots of ways (from raw data to graphs) • Effect size calculator
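For the typical d, the arithmetic is straightforward when means and SDs are reported, and d can also be recovered from a reported t statistic when they are not. A sketch under the usual pooled-SD definition (helper names mine):

```python
def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference, standardized by the pooled SD."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (mean1 - mean2) / pooled_var ** 0.5

def d_from_t(t, n1, n2):
    """Recover d from an independent-groups t statistic."""
    return t * (1 / n1 + 1 / n2) ** 0.5
```

Conversions like `d_from_t` are what let a meta-analyst extract a usable effect size even from studies that report only test statistics.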

  23. Weighting Effect Size • Why weight? • Studies vary greatly in size • Studies with large n have more reliable effect sizes than studies with small n • How to weight? • Simple approach: weight by sample size • Better approach: weight by precision • What is precision weighting? • Each effect size has an associated SE • Hedges showed that the best meta-analytic estimate weights each effect size by its inverse sampling variance
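The inverse-variance weighting described above reduces to a few lines: weight each effect by 1/v, average, and the summary's standard error falls out of the summed weights. A minimal fixed-effect sketch (function name mine):

```python
def fixed_effect_summary(effects, variances):
    """Inverse-variance weighted mean effect size and its SE
    (fixed-effect model)."""
    weights = [1 / v for v in variances]
    total_w = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total_w
    se = (1 / total_w) ** 0.5  # SE shrinks as precise studies accumulate
    return mean, se
```

A d of 0.8 from n = 200 thus pulls the summary estimate much harder than a d of 0.8 from n = 10, which is exactly the behavior precision weighting is meant to produce.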

  24. Describing Distribution • Central tendency • Spread • Shape

  25. Inferential Statistics • Select a model • Fixed effects • Random effects • Univariate analyses • Analogous to one-way ANOVA • Examine how much variation in effect sizes is explained by one (categorical) variable • Multivariate analyses • Analogous to multiple regression • Examine how much variation in effect sizes is explained by a set of (categorical or continuous) variables • Examine how much unique variation in effect sizes is explained by one (categorical or continuous) variable
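The fixed- versus random-effects choice usually hinges on heterogeneity: Cochran's Q tests whether effects vary more than sampling error allows, and the DerSimonian-Laird estimate of between-study variance (tau²) supplies the random-effects weights. A hedged sketch (function name mine, not from the slides):

```python
def q_and_tau_squared(effects, variances):
    """Cochran's Q heterogeneity statistic and the DerSimonian-Laird
    tau^2 (between-study variance for a random-effects model)."""
    w = [1 / v for v in variances]
    fixed_mean = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed_mean) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # truncated at zero
    return q, tau2
```

Random-effects weights are then 1 / (vᵢ + tau²); when tau² = 0 the two models coincide.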
