
Designing the Evaluation


Presentation Transcript


  1. Designing the Evaluation FCS 5470 Summer 2005

  2. An early thought… • “If we don’t know where we are going, we will likely end up somewhere else.”

  3. Steps in Designing the Evaluation Plan • Assess the evaluation needs • Focus the evaluation • Select an evaluation design • Determine data collection methods • Determine data analysis methods • Collect and analyze data • Communicate and report findings Russ-Eft & Preskill, 2001; Posavac & Carey, 2003

  4. Step 1: Assess the evaluation needs • Who wants it? • Why is it wanted? • When? • Who is going to do it? • Resources available? • Evaluability? • Focus of it? Posavac & Carey, 2003

  5. Step 2: Focus the Evaluation • Develop a complete description of the evaluand • Why important? • Background and history of the program • How to learn more? Weiss, 1998

  6. Step 2: Focus the Evaluation • Rationale and Purpose of Evaluation • Builds from history with the problems identified early on • Ends with a clear purpose that includes a statement of how the results will be utilized Russ-Eft & Preskill, 2001

  7. Step 2: Focus the Evaluation • Identify the stakeholders • Anyone who has a direct or indirect interest (stake) in the evaluand or its evaluation (Weiss, 1998) • Primary, secondary, and tertiary levels • Critical to identify all, categorizing isn’t as critical Russ-Eft & Preskill, 2001

  8. Stakeholder examples • Primary: funding agencies, designers, implementers, staff • Secondary: managers, administrators, students, participants, customers/clients, trainers, parents • Tertiary: potential users or adopters, professional colleagues, professional organizations, community members, governing boards, legislators Russ-Eft & Preskill, 2001

  9. Step 2: Focus the Evaluation • Develop key evaluation questions • Form the boundary and scope • May need to prioritize or group questions into themes • Need to know vs. nice to know • Types of questions asked Russ-Eft & Preskill, 2001; Weiss, 1998

  10. Types of Evaluation Questions • Program process • Program outcomes • Attributing outcomes to the program • Links between processes and outcomes • Explanations Weiss, 1998

  11. Step 3: Select an Evaluation Design • One-shot design • Retrospective pre-test design • One-group pretest-posttest design • Posttest-only control group design • Pretest-posttest control-group design • Time series design • Case study design Russ-Eft & Preskill, 2001

  12. One-shot design • Sample → Intervention → Posttest Russ-Eft & Preskill, 2001

  13. Retrospective Pretest Design • Sample → Intervention → Posttest (including retrospective KAB) Russ-Eft & Preskill, 2001

  14. One-group pretest-posttest • Sample → Pretest → Intervention → Posttest Russ-Eft & Preskill, 2001
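
The slides do not include an analysis example; as a minimal, hypothetical sketch, results from a one-group pretest-posttest design are often summarized with the mean gain and a paired t-test. The scores below are illustrative assumptions, not data from the presentation.

```python
# Minimal sketch: analyzing a one-group pretest-posttest design
# with a paired t-test. All scores are hypothetical.
from scipy import stats

pretest  = [62, 55, 70, 58, 64, 61, 67, 59]   # same participants,
posttest = [71, 63, 74, 66, 70, 65, 75, 68]   # measured twice

t_stat, p_value = stats.ttest_rel(posttest, pretest)
mean_gain = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)

print(f"mean gain = {mean_gain:.1f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```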

  15. Posttest-only Control Group • Treatment group: Sample → Intervention → Posttest • Control group: Sample → Posttest Russ-Eft & Preskill, 2001

  16. Pretest-Posttest Control-Group • Treatment group: Sample → Pretest → Intervention → Posttest • Control group: Sample → Pretest → Posttest Russ-Eft & Preskill, 2001
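
As a hypothetical sketch (not part of the original slides), one simple way to analyze a pretest-posttest control-group design is to compare gain scores between the two groups with an independent-samples t-test; ANCOVA on posttest scores with the pretest as a covariate is another common choice. All scores below are illustrative.

```python
# Minimal sketch: comparing gain scores (posttest - pretest) between
# treatment and control groups. All scores are hypothetical.
from scipy import stats

treatment_pre  = [60, 58, 65, 62, 59, 66]
treatment_post = [72, 69, 74, 70, 68, 77]
control_pre    = [61, 57, 64, 63, 60, 65]
control_post   = [63, 58, 66, 64, 62, 66]

treatment_gain = [b - a for a, b in zip(treatment_pre, treatment_post)]
control_gain   = [b - a for a, b in zip(control_pre, control_post)]

t_stat, p_value = stats.ttest_ind(treatment_gain, control_gain)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```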

  17. Time Series • Similar to a course, with several points of assessment within the intervention • Advantages (provides evidence over time) • Disadvantages (costly) Russ-Eft & Preskill, 2001
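
A minimal sketch of how the repeated assessments in a time-series design might be summarized; the participant-by-time score matrix below is a hypothetical illustration, not data from the presentation.

```python
# Minimal sketch: mean score at each assessment point in a
# time-series design. Rows = participants, columns = time points.
import numpy as np

scores = np.array([
    [55, 58, 63, 67, 70],
    [60, 61, 64, 69, 72],
    [52, 55, 57, 60, 64],
    [58, 60, 65, 68, 71],
])

point_means = scores.mean(axis=0)
for t, m in enumerate(point_means, start=1):
    print(f"assessment {t}: mean = {m:.1f}")
```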

  18. Case study • In-depth descriptive data collection and analysis of individuals • Most useful when you want to answer “how” and “why” Russ-Eft & Preskill, 2001

  19. Triangulation • Data • Methods • Investigator • Theory Russ-Eft & Preskill, 2001

  20. Step 4: Data Collection Methods • Archival Data • Observation Data • Surveys and Questionnaires • Knowledge tests • Individual interviews • Focus groups

  21. Factors influencing selection • Key evaluation questions • Evaluator skill • Available resources • Stakeholders’ preferences • Level of acceptable intrusiveness • Availability of data Russ-Eft & Preskill, 2001

  22. Factors influencing selection • Objectivity • Timeliness • Degree of desired structure • Validity and reliability issues Russ-Eft & Preskill, 2001

  23. Validity • Accuracy of data collection • Measures what it claims to measure • Types of validity • Content, construct, face, criterion, and predictive Leedy & Ormrod, 2001

  24. Threats to validity • Time or history • Maturation • Effects of testing • Statistical regression • Instrumentation • Mortality • Selection • Diffusion/imitation of treatments • Bias Leedy & Ormrod, 2001

  25. Sources of potential bias • Sample selection • Concealment of the truth • Lack of knowledge • Non-response • Processing errors • Conceptual problems Leedy & Ormrod, 2001

  26. Ensuring Validity • Random assignment to comparison groups (sketched below) • Ample number of survey items • Reduce response bias • Control over confounding variables • Use of multiple measures Russ-Eft & Preskill, 2001
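
Random assignment to comparison groups is simple to script; the sketch below uses hypothetical participant IDs and a plain shuffle-and-split, one of several reasonable approaches.

```python
# Minimal sketch: random assignment of participants to treatment
# and control groups. Participant IDs are hypothetical.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]
random.shuffle(participants)

half = len(participants) // 2
treatment_group = participants[:half]
control_group = participants[half:]

print("treatment:", treatment_group)
print("control:  ", control_group)
```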

  27. Reliability • An instrument that gives approximately the same results on repeated measurements • Results can be replicated • Types of reliability • Inter-rater, internal consistency, equivalent forms, and test-retest (internal consistency is sketched below) Leedy & Ormrod, 2001
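
As a hypothetical illustration of the internal-consistency type listed above, Cronbach's alpha compares the variance of individual items to the variance of total scores; the item responses below are made up for the example.

```python
# Minimal sketch: Cronbach's alpha as an internal-consistency estimate.
# Rows = respondents, columns = items on the instrument (hypothetical data).
import numpy as np

item_scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])

k = item_scores.shape[1]                          # number of items
item_var = item_scores.var(axis=0, ddof=1)        # variance of each item
total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of total scores

alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```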

  28. Threats to reliability • Fluctuations in mental alertness of participants • Variations in the conditions under which the instrument was administered • Differences in interpreting the results • Personal motivation of participants • Length of instrument Leedy & Ormrod, 2001

  29. Techniques for ensuring validity and reliability • Pilot testing • Checking different types of validity • Make repeated and persistent observations • Triangulation • Use of three different sources Russ-Eft & Preskill, 2001

  30. Qualitative, Quantitative, and Mixed Methods • Qualitative: ethnographic or naturalistic; evaluators can’t separate themselves from what is evaluated; purpose is to understand; inductive analysis • Quantitative: empirical; independent and apart from what is evaluated; statistics commonly used; deductive analysis • Mixed methods: combine qualitative and quantitative approaches Russ-Eft & Preskill, 2001

  31. Steps in Designing the Evaluation Plan • Assess the evaluation needs • Focus the evaluation • Select an evaluation design • Determine data collection methods • Determine data analysis methods • Collect and analyze data • Communicate and report findings Russ-Eft & Preskill, 2001; Posavac & Carey, 2003
