
Program Evaluation



Presentation Transcript


  1. Program Evaluation Using qualitative & quantitative methods

  2. Program evaluations measure: Program effectiveness, efficiency, quality, and participant satisfaction with the program.

  3. Program evaluation can also measure: How or why a program is effective or is not effective.

  4. Program evaluation looks at the program or a component of the program. It is not used to measure the performance of individual workers or teams. Consequently, it differs from performance evaluation.

  5. The program’s goals & objectives serve as the starting point for program evaluations. Objectives should:
• Be measurable, time-limited, and contain an evaluation mechanism.
• Be developed in relation to a specific program or intervention plan.
• Specify processes and tasks to be completed.
• Incorporate the program’s theory of action – describe how the program works and what it is expected to do (outcomes).
To start an evaluation, the evaluator must find out what program participants identify as the goal (an evaluability assessment).

  6. An example theory of action for a hunger program (diagram)

  7. Evaluations can measure process or outcomes:
• Qualitative methods are used to answer how and why questions (process).
• Quantitative methods are used to answer what questions – what outcomes were produced; was the program successful, effective, or efficient? (outcome)

  8. Differences between the two methods (comparison table)

  9. Quantitative & qualitative approaches include:
• Experimental designs
• Quasi-experimental designs
• Pre- & post-test studies
• Time series analysis
• Social indicator analysis
• Longitudinal studies
• Surveys
• Client satisfaction surveys
• Goal attainment
• Program monitoring
• Ethnographic studies
• Feminist research
• Constructivist evaluation
• Process analysis
• Implementation analysis

  10. Most common types:
• Outcome evaluation (quantitative – may or may not use control groups to measure effectiveness).
• Goal attainment (have objectives been achieved?).
• Process evaluation (qualitative – looks at how or why a program works or doesn’t work).
• Implementation analysis (mixed methods – was the program implemented in the manner intended?).
• Program monitoring (mixed methods – is the program meeting its goals? Conducted while the program is in progress).

  11. Outcome evaluations can include:
• Random experimental designs.
• Comparisons of the pre- and post-test scores for each participant on one or more outcome indicators.
• Using all members of pre-existing groups to serve as experimental and control groups.
• Using social indicator data collected by government agencies (for example, using U.S. Census data on poverty rates in a specific community to determine if an economic development program has been successful in increasing the income of neighborhood residents).
• Time series analysis, using repeated measures over a number of time periods to track social indicators or caseload data.
• Using statistical controls, such as cross-tabulation or regression analysis, to hold constant the effects of confounding variables.
• Using a quasi-experimental design in which participants are separated into groups and different levels of the intervention are compared (Chambers et al., 1992; Royce & Thyer, 1996).
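The pre/post comparison described above can be sketched in a few lines. This is a minimal illustration, not a method from the slides: the participant scores are entirely hypothetical, and the paired t statistic is computed by hand from the paired differences.

```python
# Hypothetical pre/post outcome scores for 8 program participants
# on a single outcome indicator (higher = better). Illustrative only.
from math import sqrt
from statistics import mean, stdev

pre = [42, 38, 51, 45, 36, 48, 40, 44]
post = [47, 41, 55, 44, 42, 53, 45, 49]

# Paired differences: each participant serves as their own comparison.
diffs = [b - a for a, b in zip(pre, post)]
d_mean = mean(diffs)          # average gain across participants
d_sd = stdev(diffs)           # sample standard deviation of the gains

# Paired t statistic: mean difference divided by its standard error.
t = d_mean / (d_sd / sqrt(len(diffs)))
print(f"mean gain = {d_mean:.2f}, paired t = {t:.2f}")
```

A statistic this large relative to the sample size would suggest the gain is unlikely to be chance alone, though with invented data the number itself means nothing.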

  12. Time Series Analysis Examines Data Trends: School Breakfast Program
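As a sketch of how repeated measures can reveal a trend, the following uses invented monthly participation counts (the program name and all numbers are assumptions for illustration, not data from the slides) and fits an ordinary least-squares slope:

```python
# Hypothetical monthly meals served by a school breakfast program
# over 12 months. All values are illustrative.
months = list(range(1, 13))
served = [210, 225, 240, 238, 255, 260, 272, 280, 291, 300, 305, 318]

n = len(months)
mx = sum(months) / n
my = sum(served) / n

# Ordinary least-squares slope: covariance(x, y) / variance(x).
slope = (sum((x - mx) * (y - my) for x, y in zip(months, served))
         / sum((x - mx) ** 2 for x in months))
print(f"average change: {slope:.1f} meals per month")
```

A positive slope sustained over many periods is the kind of trend a time series evaluation would examine, ideally alongside comparison data to rule out seasonal or external explanations.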

  13. Client satisfaction surveys are often used as one component of a program evaluation.
• They can provide valuable information about how clientele perceive the program and may suggest how the program can be changed to make it more effective or accessible.
• They also have methodological limitations.

  14. Limitations include:
• It is difficult to define and measure “satisfaction.”
• Few standardized satisfaction instruments that have been tested for validity and reliability exist.
• Most surveys find that 80–90% of participants are satisfied with the program. Most researchers are skeptical that such levels of satisfaction exist; hence, most satisfaction surveys are believed to be unreliable.
• Since agencies want to believe their programs are good, the wording of questions may be biased.
• Clients who are dependent on the program for services or who fear retaliation may not provide accurate responses.

  15. Problems with client satisfaction surveys can be addressed by:
• Pre-testing to ensure face validity and reliability.
• Asking respondents to indicate their satisfaction level with various components of the program.
• Ensuring that administration of the survey is separated from service delivery and that the confidentiality of clients/consumers is protected.
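The component-level reporting suggested above can be sketched briefly. The component names and 1–5 ratings below are assumptions invented for the example, not survey items from the slides:

```python
# Hypothetical client ratings on a 1-5 scale for three program
# components. Names and values are illustrative only.
responses = {
    "intake process": [4, 5, 3, 4, 4, 5, 2, 4],
    "staff courtesy": [5, 5, 4, 5, 4, 5, 5, 4],
    "service hours":  [3, 2, 4, 3, 2, 3, 4, 3],
}

# Reporting a mean per component, rather than one overall score,
# helps reveal which parts of the program clients are unhappy with.
means = {c: sum(s) / len(s) for c, s in responses.items()}
for component in sorted(means, key=means.get):
    print(f"{component}: {means[component]:.2f}")
```

Here a single overall average would mask the weak component; listing components from lowest to highest mean surfaces it immediately.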

  16. Process and most implementation evaluations:
• Assume that the program is a “black box” with input, throughput, and output.
• Use some mixture of interviews, document analysis, observations, or semi-structured surveys.
• Gather information from a variety of organization participants: administrators, front-line staff, and clients.
These evaluations also examine communication patterns, program policies, and the interaction among individuals, groups, programs, or organizations in the external environment.

  17. Use the following criteria to determine the type of evaluation:
• The research question to be addressed.
• The amount of resources and time that can be allocated for the research.
• Ethics (can you reasonably construct control groups or hold confounding variables constant?).
• Will the evaluation be conducted by an internal or external evaluator?
• Who is the audience for the evaluation?
• How will the data be used?
• Who will be involved in the evaluation?

  18. Types of evaluation approaches that involve organization constituents:
• Participatory action research
• Empowerment evaluation
• Self-evaluation

  19. Differences in the approaches (comparison table)

  20. Advantages of these methods:
• They increase feelings of participant ownership of the process/programs.
• They increase the likelihood that the data will be used.
• They increase the likelihood that the resulting program or intervention will meet the needs of stakeholders and be culturally appropriate.
• Participants develop skills and confidence; they gain knowledge and information and thus become empowered.

  21. Disadvantages of these methods:
• Distrust and conflict among participants.
• The length of time needed to develop consensus around goals, mission, and methods.
• The need for training in research methods, data collection, and analysis.
• The need for skilled facilitation, coordination, and follow-up on task completion.
• Money and an organizational structure are needed to do all of these things.
• The group must be able to apply findings in order to achieve an outcome.
