
A “Dose-Response” Strategy for Assessing Program Impact in Naturalistic Contexts


Presentation Transcript


  1. A “Dose-Response” Strategy for Assessing Program Impact in Naturalistic Contexts Megan Phillips & George Tremblay, Antioch University New England; Michael Duffin, Program Evaluation and Educational Research Associates, Inc. Presented at the Annual Convention of the Association for Behavioral and Cognitive Therapies, November 18, 2006

  2. RATIONALE • With increasing accountability pressures in virtually all service sectors (American Psychological Association, 2005), we need evaluation strategies that can be utilized outside of highly controlled research contexts (cf. Strosahl et al., 1998, on the “manipulated training research method”). • Evaluation of dose-response relationships, while not a “strong” form of causal evidence (McCabe, 2004), has nevertheless been recognized as providing some support for a causal relationship (O’Neill, 2002).

  3. REQUIREMENTS • Variability in exposure to intervention • Identification and measurement of targeted outcomes

  4. ADVANTAGES • Provides a relatively efficient probe for active program effects, which can warrant further, more rigorous controlled analyses. • Uses a single measurement event while allowing for the collection of a wide range of dose values. • Data can be readily aggregated across time or settings. • Can detect small effects that reach statistical significance.

  5. LIMITATIONS • Measurement of dose may be somewhat indirect (e.g., estimates of time exposed to intervention). Evaluators must be open to site-specific operationalization of the dose measure, which may complicate comparison across programs. • Requires a deeper level of understanding of statistics than users of the evaluation data may be accustomed to. • Evaluators need to provide users of the data with some benchmark for interpreting the significance of observed effect sizes.
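The last bullet above calls for a benchmark that helps users interpret effect sizes. One possible benchmark, not taken from the original evaluation, is Cohen's f², computed from a regression's R² and compared against the conventional small/medium/large cutoffs (0.02 / 0.15 / 0.35). The sketch below applies this to the R² values reported later in Figures 1 and 2; treating those cutoffs as the benchmark here is an illustrative assumption, not part of the PEEC analysis.

```python
# Illustrative benchmark: convert R^2 from a dose-response regression into
# Cohen's f^2 and label it against Cohen's conventional cutoffs.
# (Applying these cutoffs to the PEEC data is an assumption for this sketch.)

def cohens_f2(r_squared: float) -> float:
    """Cohen's f^2 effect size for a regression model: f^2 = R^2 / (1 - R^2)."""
    return r_squared / (1.0 - r_squared)

def label_effect(f2: float) -> str:
    """Label an f^2 value against Cohen's conventional benchmarks."""
    if f2 >= 0.35:
        return "large"
    if f2 >= 0.15:
        return "medium"
    if f2 >= 0.02:
        return "small"
    return "negligible"

for r2 in (0.19, 0.06):  # R^2 values reported in Figures 1 and 2
    f2 = cohens_f2(r2)
    print(f"R^2 = {r2:.2f} -> f^2 = {f2:.2f} ({label_effect(f2)})")
```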

  6. AN ILLUSTRATION • The Place-based Education Evaluation Collaborative (PEEC): • represents several innovative educational programs that share common themes, such as: • Enhanced community-school connections • Increased understanding of and connection to local place • Increased civic participation • maintains an ongoing, cross-program, multi-method evaluation effort.

  7. EVALUATION QUESTION: • Is variability in the dose of a place-based education program (independent variable) associated with variability in the behaviors and attitudes the program aims to affect (dependent variable, the response)? • Sample: • 338 educator and 721 student surveys from 55 schools, collected over one year. • Representative of a wide range of demographic characteristics, grade ranges, and program intensities.

  8. METHOD: Measures • Dose measures • Composite dose was calculated from survey items including: • extent of program implementation: measured on a scale of 0 to 4 • total number of hours of exposure to program elements: raw hours rescaled to a 0-4 metric comparable to the “program implementation” rating (see the sketch below) • The distribution of composite dose scores across the sample covered the entire range from 0 to 4, offering suitable variability in the independent variable for dose-response calculations. • Response measures • Broad conceptual categories (modules) were developed to match desired program outcomes. Each module was composed of indices designed to capture specific dimensions of the module. • Individual survey questions were developed for each index, drawing on items from existing surveys when possible to maximize the validity of comparing current and future results to previously collected data.
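A minimal sketch of how a composite dose score like the one described above could be computed: raw exposure hours are rescaled onto the same 0-4 metric as the implementation rating, and the two components are averaged. The rescaling ceiling (`max_hours`) and the simple averaging rule are assumptions for illustration; the poster does not specify the exact formula used in the PEEC surveys.

```python
import numpy as np

def rescale_hours(hours: np.ndarray, max_hours: float) -> np.ndarray:
    """Map raw exposure hours onto a 0-4 scale, capping at the assumed ceiling."""
    return np.clip(hours / max_hours, 0.0, 1.0) * 4.0

def composite_dose(implementation_0_4: np.ndarray,
                   exposure_hours: np.ndarray,
                   max_hours: float = 100.0) -> np.ndarray:
    """Average the 0-4 implementation rating and the rescaled exposure hours."""
    return (implementation_0_4 + rescale_hours(exposure_hours, max_hours)) / 2.0

# Example: three respondents with differing implementation and exposure
impl = np.array([0.0, 2.5, 4.0])
hours = np.array([0.0, 40.0, 120.0])
print(composite_dose(impl, hours))  # -> [0.   2.05 4.  ]
```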

  9. METHOD: Analysis Dose-response analysis: • Multiple regression analyses were used to estimate the percentage of variance in the outcome variables (modules & indices) accounted for by the predictor variable (program dose), as sketched below.
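A minimal sketch of the regression step, assuming a single composite-dose predictor and a simulated outcome: an ordinary least squares fit reports R² (the percent of variance explained by dose) and the p-value for the dose coefficient. The data, effect size, and sample size below are placeholders; only the analytic step mirrors the approach described above.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 338                                           # e.g., the educator sample size
dose = rng.uniform(0, 4, size=n)                  # composite dose on the 0-4 metric
outcome = 0.3 * dose + rng.normal(0, 1, size=n)   # simulated module score

X = sm.add_constant(dose)                         # intercept + dose predictor
model = sm.OLS(outcome, X).fit()

print(f"R^2 (percent variance explained by dose): {model.rsquared:.1%}")
print(f"p-value for the dose coefficient: {model.pvalues[1]:.4f}")
```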

  10. RESULTS • Statistically significant relationships (p<.01) were found between program dose and all outcome measures except two student-level indices and one educator-level index. • This analysis allowed for the identification of more and less active ingredients of the program (Figures 1 & 2).

  11. Indication of an active ingredient Figure 1. Overall educator practice was analyzed at the super-ordinate level by combining average Likert scale responses for 12 items. The best fit multiple regression line shows that 19% of the variability in survey response is predicted by program dose.

  12. Indication of a less active ingredient Figure 2. Student attachment to place was analyzed at the super-ordinate level by combining average Likert scale responses for 15 student survey items. The best fit multiple regression line shows that 6% of the variability in survey response is predicted by dose.
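A hypothetical end-to-end sketch of how comparisons like Figures 1 and 2 could be produced: average the Likert items in each module into a super-ordinate score, regress each score on dose, and compare the resulting R² values to distinguish more from less active ingredients. Module names, item counts, and data are invented for illustration; with purely random data the R² values will be near zero, so only the analytic pattern follows the slides above.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 338
dose = rng.uniform(0, 4, size=n)

# Simulated item-level Likert responses (1-5); 12 and 15 items per module,
# mirroring the item counts mentioned in Figures 1 and 2
modules = {
    "educator_practice": rng.integers(1, 6, size=(n, 12)).astype(float),
    "student_attachment_to_place": rng.integers(1, 6, size=(n, 15)).astype(float),
}

X = sm.add_constant(dose)
for name, items in modules.items():
    score = items.mean(axis=1)                    # super-ordinate module score
    r2 = sm.OLS(score, X).fit().rsquared
    print(f"{name}: {r2:.1%} of variance predicted by dose")
```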

  13. DISCUSSION • Benefits of the dose-response strategy in the PEEC evaluation context: • The data set can now be cumulative year-to-year. • Once an initial investment in survey instrument design & administration was made, future evaluation costs should decline. • Limitations of the dose-response strategy in the PEEC evaluation context: • Relied on self-report data as opposed to more empirically verifiable observations. • Psychometric properties of the survey instruments have yet to be validated.

  14. REFERENCES
American Psychological Association. (2005, August). Policy statement on evidence-based practice in psychology. Retrieved February 19, 2006, from http://www2.apa.org/practice/ebstatement.pdf
McCabe, O. L. (2004). Crossing the quality chasm in behavioral health care: The role of evidence-based practice. Professional Psychology: Research and Practice, 35, 571-579.
O’Neill, R. T. (2002, June). A perspective on exposure-response relationships. Paper presented at the annual meeting of the American Association of Pharmaceutical Scientists, Arlington, VA. Retrieved October 25, 2006, from http://www.fda.gov/cder/offices/biostatistics/oneill_364/oneill_364.ppt
Strosahl, K. D., Hayes, S. C., Bergan, J., & Romano, P. (1998). Assessing the field effectiveness of acceptance and commitment therapy: An example of the manipulated training research model. Behavior Therapy, 29, 35-64.
• The data presented here were collected as part of an evaluation conducted by Program Evaluation and Educational Research Associates, Inc., under the supervision of Michael Duffin. The project was undertaken with the support of the Place-Based Education Evaluation Collaborative (PEEC). For more information about PEEC, go to: http://www.PEECworks.org/
• An electronic version of this poster can be downloaded from: http://www.peecworks.org/PEEC/PEEC_Reports/S0112C7E1-0112C8A6
