This meta-analysis critique examines the limitations and complexities in research methodologies, focusing on predictor-criterion models in various populations. Key topics include the 75 percent rule, unknown error rates, the 25 percent "junk" assumption, validity generalization, situation specificity, and the relevance of REVC in explaining effect sizes. The text delves into the significance of Situational Specificity, the File Drawer problem, and considerations for identifying the best evidence in meta-analytic studies. Discussions on study bias detection methods, publication quality, and the ongoing debates on defining 'best' practices are thoroughly explored.
Algera et al.
• Definition of rho: which predictor? which criterion? which population?
• The model is for a single predictor-criterion combination in a single population, yet it is applied across multiple predictors, multiple criteria, and unspecified population(s).
• Example: a meta-analysis in JAP (Journal of Applied Psychology).
Criterion Measures
• Homogeneity of predictors and criteria
• Criteria are mostly supervisory ratings
• Multidimensionality of criteria
Test of Situational Specificity
• 75 percent rule
  • Unknown Type I and Type II error rates
  • Depends heavily on N per study
  • Assumes the remaining 25 percent of variance is due to "junk" (unspecified artifacts)
• Q (chi-square) test
  • Power depends on k, the number of studies
  • Not worked out for corrected effect sizes
(Both checks are sketched below.)
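A minimal numerical sketch of the two checks, assuming purely illustrative correlations (rs) and sample sizes (ns): a bare-bones 75 percent rule with no artifact corrections, and a Q homogeneity test on Fisher-z transformed correlations. This illustrates the general technique, not any particular published analysis.

```python
import numpy as np
from scipy import stats

rs = np.array([0.25, 0.31, 0.18, 0.40, 0.22])   # observed validity coefficients (illustrative)
ns = np.array([120, 85, 200, 60, 150])          # sample sizes (illustrative)

# --- bare-bones 75 percent rule (no artifact corrections) ---
r_bar = np.sum(ns * rs) / np.sum(ns)                    # sample-size-weighted mean r
var_obs = np.sum(ns * (rs - r_bar) ** 2) / np.sum(ns)   # observed variance of r
var_err = (1 - r_bar ** 2) ** 2 / (ns.mean() - 1)       # expected sampling-error variance
print(f"sampling error explains {var_err / var_obs:.0%} of observed variance")
# rule of thumb: if >= 75%, attribute the remainder to other artifacts and reject SS

# --- Q (chi-square) homogeneity test on Fisher-z transformed rs ---
zs = np.arctanh(rs)
ws = ns - 3                          # inverse-variance weights for Fisher z
z_bar = np.sum(ws * zs) / np.sum(ws)
Q = np.sum(ws * (zs - z_bar) ** 2)
k = len(rs)
print(f"Q = {Q:.2f}, df = {k - 1}, p = {stats.chi2.sf(Q, df=k - 1):.3f}")
# power of this test rises and falls with k and the per-study Ns
```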
SS vs. VG
• Situational specificity (SS) is rejected if V(rho) = 0.
• Validity generalizes (VG) if V(rho) > 0 and the lower credibility value (CRlow) exceeds some cutoff.
• But generalizes across what? Which test (predictor)? Which criterion? Which population?
(The decision rule is sketched below.)
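A minimal sketch of that decision rule, again with bare-bones estimates (no artifact corrections), an illustrative cutoff of 0.10 for CRlow, and the 10th percentile of the rho distribution as the lower credibility value:

```python
import numpy as np

def vg_summary(rs, ns, cutoff=0.10):
    """Bare-bones SS vs. VG summary for a set of correlations (illustrative only)."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    r_bar = np.sum(ns * rs) / np.sum(ns)
    var_obs = np.sum(ns * (rs - r_bar) ** 2) / np.sum(ns)
    var_err = (1 - r_bar ** 2) ** 2 / (ns.mean() - 1)
    v_rho = max(var_obs - var_err, 0.0)          # estimated V(rho)
    cr_low = r_bar - 1.28 * np.sqrt(v_rho)       # 10th percentile of the rho distribution
    return {"rho_bar": round(r_bar, 3),
            "V(rho)": round(v_rho, 4),
            "CRlow": round(cr_low, 3),
            "SS_rejected": v_rho == 0.0,
            "VG_supported": v_rho > 0.0 and cr_low > cutoff}

print(vg_summary([0.25, 0.31, 0.18, 0.40, 0.22], [120, 85, 200, 60, 150]))
```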
Meanings of "Situation"
• Conditions outside the individual, e.g., working conditions, pay for performance
• Nature of job performance: dimensionality, criterion factor structure (treated as situational specificity by Schmidt and Hunter)
• Research design, e.g., time between measurements, reliability, range restriction
REVC is unsatisfactory
• The REVC (random-effects variance component) represents unexplained variability in effect sizes.
• Theory is all about explanation.
• A good theory of, e.g., the situation will ultimately result in a single estimate of rho.
(An REVC estimate is sketched below.)
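For concreteness, the REVC is commonly estimated as a between-study variance component such as tau-squared. A minimal DerSimonian-Laird sketch in the Fisher-z metric, on illustrative data, computes the quantity that the slide argues a good theory should ultimately drive to zero:

```python
import numpy as np

rs = np.array([0.25, 0.31, 0.18, 0.40, 0.22])   # illustrative correlations
ns = np.array([120, 85, 200, 60, 150])          # illustrative sample sizes

zs = np.arctanh(rs)
ws = ns - 3.0                                   # fixed-effect weights (1 / var of Fisher z)
z_bar = np.sum(ws * zs) / np.sum(ws)
Q = np.sum(ws * (zs - z_bar) ** 2)
df = len(rs) - 1
C = np.sum(ws) - np.sum(ws ** 2) / np.sum(ws)
tau2 = max(0.0, (Q - df) / C)                   # REVC (tau^2) in the Fisher-z metric
print(f"tau^2 (REVC) = {tau2:.4f}")
```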
Sharpe • Apples & Oranges • File Drawer • GIGO, study rigor
Apples & Oranges • Inclusion criteria • Homogeneity test • Not really helpful • The problem of moderators • May be sig moderator even if overall Q is n.s. • Quickly exhaust studies with multiple moderators
File Drawer
• Explain the search for studies
• Include published and unpublished studies, depending on the study's purpose
• Report the correlation between sample size and effect size
• Calculate fail-safe N
  • May not be very meaningful, though: it assumes the missing effect sizes are zero, but they could be negative
• Use more sophisticated bias-detection methods, e.g., trim and fill
(Two of these checks are sketched below.)
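Two of these checks are easy to sketch on illustrative data: the sample-size/effect-size correlation and Rosenthal's fail-safe N. Trim and fill needs a dedicated routine (e.g., the metafor package in R) and is not reproduced here.

```python
import numpy as np
from scipy import stats

rs = np.array([0.25, 0.31, 0.18, 0.40, 0.22])   # illustrative correlations
ns = np.array([120, 85, 200, 60, 150])          # illustrative sample sizes

# (1) sample size vs. effect size: a strong negative correlation suggests that
#     small studies report inflated effects (possible publication bias)
print("corr(N, r) =", round(float(np.corrcoef(ns, rs)[0, 1]), 3))

# (2) Rosenthal's fail-safe N from per-study z statistics (Fisher transform);
#     note it assumes the unretrieved studies average an effect of exactly zero
zs = np.arctanh(rs) * np.sqrt(ns - 3)
z_crit = stats.norm.isf(0.05)                   # 1.645, one-tailed alpha = .05
n_fs = (np.sum(zs) / z_crit) ** 2 - len(rs)
print(f"fail-safe N: roughly {n_fs:.0f} null-result studies to overturn p < .05")
```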
GIGO
• Are published studies really better?
• “Best-evidence” synthesis
  • Meta-analyze only the best studies
  • Major disagreements about what ‘best’ means
• Code for study features, e.g., random assignment, blind to condition
Other Issues
• Conclusions of different meta-analyses on the same question can disagree
• Premature closure of research areas