
Assessing Program Impact: Randomized Field Experiments






1. Assessing Program Impact: Randomized Field Experiments
A Very Brief Summary of: "Evaluation: A Systematic Approach, 7th Ed." by P. H. Rossi et al. (2004), Chapter 8 (pp. 233-264)
Bonnie R. Patterson

2. Assessing Program Impact* Chapter 8: Outline
• When Is an Impact Assessment Appropriate?
• Key Concepts in an Impact Assessment
• Experimental Versus Quasi-Experimental Research Designs
• "Perfect" Versus "Good Enough" Impact Assessment
• Randomized Field Experiments:
  • Using Randomization to Establish Equivalence
  • Units of Analysis
  • The Logic of Randomized Experiments
  • Examples of Randomized Experiments in Impact Assessment
  • Prerequisites for Conducting Randomized Field Experiments
  • Approximations to Random Assignment
  • Data Collection Strategies for Randomized Experiments
  • Complex Randomized Experiments
  • Analyzing Randomized Experiments
• Outline continued…
*This material is adapted from the text: "Evaluation: A Systematic Approach, 7th Ed." by P. H. Rossi et al. (2004), Chapter 8, pp. 233-264

3. Assessing Program Impact* Chapter 8: Outline Continued…
• Limitations on the Use of Randomized Experiments:
  • Programs in Early Stages of Implementation
  • Ethical Considerations
  • Differences Between Experimental and Actual Intervention Delivery
  • Time and Costs
  • Integrity of Experiments
*This material is adapted from the text: "Evaluation: A Systematic Approach, 7th Ed." by P. H. Rossi et al. (2004), Chapter 8, pp. 233-264

4. Assessing Program Impact: Randomized Field Experiments*
A program impact study is an assessment of the effects a program is having on its intended targets and of any unintended effects it may produce. (p. 234)
• When Is an Impact Assessment Appropriate?: At any point in the life of the program. (p. 235)
• Key Concepts in an Impact Assessment
• Experimental Versus Quasi-Experimental Research Designs: A randomized field experiment is the "gold standard" of research designs; it is as close as an evaluation can come to a true scientific experiment. (pp. 237-248)
• "Perfect" Versus "Good Enough" Impact Assessment: Perfect ("gold standard") experimental designs are difficult and costly to carry out in the real world. Often they are not required, and a more relaxed ("good enough") alternative design can be used. (p. 238)
*This material is adapted from the text: "Evaluation: A Systematic Approach, 7th Ed." by P. H. Rossi et al. (2004), Chapter 8, pp. 233-264

5. Assessing Program Impact *Continued…*
• Randomized Field Experiments: Obtaining a control group is critical to conducting this type of experiment. When this is not possible, several techniques can be used to approximate an appropriate control group. (p. 239)
• Using Randomization to Establish Equivalence: Every unit has an equal probability of being assigned to each group (experimental or control). (pp. 239-241)
• Units of Analysis: The units on which outcome measures are taken in an impact assessment. (pp. 241-242)
• The Logic of Randomized Experiments: The average outcome for both the experimental and control groups is calculated before and after the intervention; any difference is attributed to the experimental factor (a minimal sketch follows this slide). (pp. 242-243)
• Examples of Randomized Experiments in Impact Assessment: See text pp. 243-246 for several examples of this experimental design.
Continued…
*This material is adapted from the text: "Evaluation: A Systematic Approach, 7th Ed." by P. H. Rossi et al. (2004), Chapter 8, pp. 233-264
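
To make the randomization logic concrete, here is a minimal Python sketch (not from the text; the pool size, group sizes, and outcome values are hypothetical). It assigns units to the two groups with equal probability and then estimates the program effect as the difference between group mean outcomes:

    import random
    import statistics

    # Hypothetical pool of 20 units (e.g., program applicants), numbered 0-19.
    units = list(range(20))

    # Randomization: shuffle, then split in half, so every unit has an
    # equal probability of landing in either group.
    random.shuffle(units)
    treatment_group = units[:10]   # receives the intervention
    control_group = units[10:]     # does not receive the intervention

    # Hypothetical post-intervention outcome scores for each unit.
    outcome = {u: random.gauss(50, 10) for u in units}
    # Simulate a program effect of +5 points for treated units.
    for u in treatment_group:
        outcome[u] += 5

    # The logic of randomized experiments: because assignment was random,
    # the groups are equivalent in expectation, so the difference in mean
    # outcomes estimates the program's effect.
    treatment_mean = statistics.mean(outcome[u] for u in treatment_group)
    control_mean = statistics.mean(outcome[u] for u in control_group)
    print(f"Estimated program effect: {treatment_mean - control_mean:.2f}")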

6. Assessing Program Impact *Continued…*
• Prerequisites for Conducting Randomized Field Experiments:
  • The present practice must need improvement;
  • The efficacy of the proposed intervention must be uncertain under field conditions;
  • There should be no simpler alternatives for evaluating the intervention;
  • The results must be potentially important for policy;
  • The design must be able to meet the ethical standards of both the researchers and the service providers. (p. 248)
• Approximations to Random Assignment: Ways of approximating the unbiased assignment of participants in non-randomized experiments; the text describes two such approaches. (pp. 248-249)
Continued…
*This material is adapted from the text: "Evaluation: A Systematic Approach, 7th Ed." by P. H. Rossi et al. (2004), Chapter 8, pp. 233-264

7. Assessing Program Impact *Continued…*
• Data Collection Strategies for Randomized Experiments:
  • Conduct the experiment more than once and average the results (a mean of means), which gives a good approximation of the true mean value over time;
  • Take samples (collect results) periodically during the course of the intervention, giving researchers a sense of how the program is functioning over time. (pp. 249-250) (A sketch of both strategies follows this slide.)
• Complex Randomized Experiments: An impact assessment can measure several factors or variants of an intervention, or several distinct interventions, in a single complex design. These designs are appropriate for new policies whose correct form is not clear in advance. (pp. 250-252)
Continued…
*This material is adapted from the text: "Evaluation: A Systematic Approach, 7th Ed." by P. H. Rossi et al. (2004), Chapter 8, pp. 233-264
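
As an illustration of these two strategies, here is a small Python sketch (the replication results and measurement waves are hypothetical, not from the text). It averages effect estimates across repeated runs of an experiment, and records an outcome measure at periodic waves during an intervention:

    import statistics

    # Strategy 1: mean of means. Hypothetical effect estimates from five
    # replications of the same experiment; their average approximates the
    # true effect better than any single run.
    replication_effects = [4.2, 5.8, 5.1, 4.7, 5.5]
    mean_of_means = statistics.mean(replication_effects)
    print(f"Mean of means across replications: {mean_of_means:.2f}")

    # Strategy 2: periodic measurement. Hypothetical mean outcome scores
    # for the treatment group, collected at successive waves during the
    # intervention, showing how the program is functioning over time.
    waves = {"month 1": 50.3, "month 3": 53.1, "month 6": 55.4}
    for wave, score in waves.items():
        print(f"{wave}: mean outcome = {score}")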

8. Assessing Program Impact *Continued…*
• Analyzing Randomized Experiments: Randomized experiments were originally designed for laboratory and agricultural experimentation, but they can also be applied to programs in the early stages of implementation, as long as the program has not changed too much over the measurement period (a minimal analysis sketch follows this slide). (pp. 252-259)
• Limitations on the Use of Randomized Experiments:
  • Programs in Early Stages of Implementation: (see above)
  • Ethical Considerations: It may be unethical for the control group to be deprived of the possible benefits of the intervention, and harm could potentially occur to either group. (pp. 259-260)
Continued…
*This material is adapted from the text: "Evaluation: A Systematic Approach, 7th Ed." by P. H. Rossi et al. (2004), Chapter 8, pp. 233-264
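
A common way to analyze a simple two-group randomized experiment is to compare group means and test whether the difference exceeds what chance alone would produce. The Python sketch below is illustrative only: the outcome data are hypothetical, and the two-sample t-test is one standard choice rather than the book's prescribed method.

    from scipy import stats

    # Hypothetical post-intervention outcome scores from a simple
    # two-group randomized experiment.
    treatment_scores = [55.1, 58.3, 52.7, 60.2, 56.9, 54.4, 59.0, 57.5]
    control_scores = [50.2, 49.8, 53.1, 48.7, 51.5, 52.0, 47.9, 50.6]

    # Estimated program effect: the difference in group means.
    effect = (sum(treatment_scores) / len(treatment_scores)
              - sum(control_scores) / len(control_scores))

    # Two-sample t-test: is the difference larger than within-group
    # variability would plausibly produce by chance?
    t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
    print(f"Estimated effect: {effect:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")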

9. Assessing Program Impact *Continued…*
• Differences Between Experimental and Actual Intervention Delivery: The experimental design may affect the way the treatments are actually delivered to participants. (pp. 260-261)
• Time and Costs: Randomized experiments are costly and time-consuming. (p. 261)
• Integrity of Experiments: Randomized experiments are sensitive to decay in the process, from the environment as well as from within the experiment itself. (pp. 261-262)
*This material is adapted from the text: "Evaluation: A Systematic Approach, 7th Ed." by P. H. Rossi et al. (2004), Chapter 8, pp. 233-264

10. Some Additional References with Links:
• Johnson (n.d.) Assessing the need for a program, RLF, Chapter 5. Accessed Feb. 13, 2009 at: http://www.southalabama.edu/coe/bset/johnson/660lectures/rlf4.doc
• Johnson (n.d.) Expressing and assessing program theory, RLF, Chapter 4. Accessed Feb. 13, 2009 at: http://209.85.173.132/search?q=cache:9h3cQyAUmPUJ:www.southalabama.edu/coe/bset/johnson/660lectures/rlf5.doc+assessing+program+impact&hl=en&ct=clnk&cd=1&gl=us
• Robert Wood Johnson Foundation (2008) RWJF assessment report 2008. Accessed Feb. 13, 2009 at: http://www.rwjf.org/files/research/3632rwjf.publicscorecard081211.pdf
• Nichols, T. C. (2004) Assessing program effects or impact in enterprise development programs, Southern Rural Sociology, Vol. 20, No. 2, pp. 110-120. Accessed Feb. 13, 2009 at: http://www.ag.auburn.edu/auxiliary/srsa/pages/Articles/SRS%202004%2020%202%20110-120.pdf
• Rowan-Szal, G. A. et al. (2007) Assessing program needs and planning change, J Subst Abuse Treat; author manuscript available in PMC Sept. 1, 2008. Accessed Feb. 13, 2009 at: http://alumnus.caltech.edu/~rouda/T2_NA.html
• Rouda, R. H., Kusy, M. E. Jr. (1995) Needs assessment: the first step, Journal of Substance Abuse, Sept. 2007, pp. 121-129. Accessed Feb. 13, 2009 at: http://www.sciencedirect.com/science/journal/07405472
