
PPA 502 – Program Evaluation






Presentation Transcript


  1. PPA 502 – Program Evaluation Lecture 3b – Outcome Monitoring

  2. Introduction • The routine and periodic monitoring of outcomes is an important development in the evolution of performance monitoring systems. • Outcome monitoring requires the routine measurement and reporting of important indicators of outcome-oriented results.

  3. What Is Outcome Monitoring? • Outcome monitoring is the regular (periodic, frequent) reporting of program results in ways that stakeholders can use to understand and judge those results. • The indicators measured should have some validity, some meaning that is closely tied to performance expectations. • The ways in which they are reported should also have utility, that is, they must be easily interpreted and focus attention on the key points.

  4. Other Forms of Monitoring • Program monitoring – site visits by experts for compliance-focused reviews of program operations, designed to remedy procedural deficiencies. • Outcome monitoring, by contrast, is outcome-focused or results-oriented. • It is built into the routines of data reporting within program operations. • It provides frequent and public feedback on performance. • Outcome monitoring is also not impact assessment, which measures whether, and to what extent, the program itself produced the observed outcomes.

  5. Why Do Outcome Monitoring? • The accountability mandate. • Modern demands for accountability require proof. • Examples: local government (North Carolina), human services (Florida). • http://www.iog.unc.edu/programs/perfmeas/. • http://www.oppaga.state.fl.us/reports/pdf/HealthHS_2007.pdf. • The Government Performance and Results Act. • The U.S. Government Accountability Office (GAO).

  6. Why Do Outcome Monitoring? • Directed performance improvements. • Outcome monitoring is a tool for making more efficient use of resources. • The essence of continuous quality improvement is the focused diagnosis of barriers to better performance, followed by the design of alternatives to remove or circumvent those barriers, the implementation of trials to test those alternatives, and finally the expansion of successful efforts to raise performance levels while shrinking variability in performance. • Florida example: http://www.oppaga.state.fl.us/default.asp.

  7. Why Do Outcome Monitoring? • Commitment to continuous performance improvement. • Provides a comparative snapshot of performance for all those who are responsible for outcomes. • Stimulates competition and unleashes creativity. • Makes more efficient use of support resources. • Performance assessment focuses diagnostic skills on specific, underperforming elements of the program. • Increases efficiency in the conduct of program evaluations. • Provides raw data for evaluation. • Focuses the evaluator's attention on the programs most relevant to stakeholders.

  8. Why Do Outcome Monitoring? • Growing confidence in organizational performance. • No system creates a PR nirvana. Critics will always find ammunition. • But a good outcome monitoring system can limit the damage by underscoring ongoing improvement efforts. • Internally, outcome monitoring provides perspective to officials burdened with program details.

  9. Design Issues for Outcome Monitoring • What to measure? • Measures must be appropriate. • Measures must sufficiently cover the range of intended outcomes. • Stakeholders should be involved in the identification of outcome measures. • How many measures? • A small number of highly relevant measures for upper management. • A more comprehensive set of measures to supplement the key indicators (see the catalog sketch following this slide).
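
One way to make the "how many measures, and for whom" question concrete is to think of the measures as a small indicator catalog tagged by audience. The sketch below is a hypothetical illustration; the indicator names, targets, and tiers are invented for the example and are not drawn from the lecture.

```python
# Illustrative only: a tiny indicator catalog that separates the handful of
# key measures reported to upper management from the fuller supplementary set.
# Every field value here is a hypothetical example, not lecture content.
from dataclasses import dataclass

@dataclass
class OutcomeIndicator:
    name: str              # what is measured
    intended_outcome: str  # which program outcome it covers
    target: float          # performance expectation
    frequency: str         # how often it is reported
    tier: str              # "key" (upper management) or "supplementary"

catalog = [
    OutcomeIndicator("Job placement rate", "Stable employment", 0.65, "quarterly", "key"),
    OutcomeIndicator("90-day job retention rate", "Stable employment", 0.50, "quarterly", "key"),
    OutcomeIndicator("Average wage at placement", "Adequate earnings", 12.50, "quarterly", "supplementary"),
    OutcomeIndicator("Client satisfaction score", "Service quality", 4.0, "semiannual", "supplementary"),
]

key_measures = [i for i in catalog if i.tier == "key"]
print(f"{len(key_measures)} key measures for upper management, "
      f"{len(catalog) - len(key_measures)} supplementary measures")
```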

  10. Design Issues for Outcome Monitoring • How (and how often) should performance be measured? • Automated measures allow more frequent assessment than labor-intensive data collection and reporting systems. • Some measures cannot be determined from automated systems; their collection should be built into program operations. • But data collection cannot put too many burdens on program staff. • Sampling can reduce the burden, but precision depends on sample size (see the sketch following this slide). • Data requirements can also be built into contracts or met by mobilizing outside groups. • Final answer: whatever it takes.
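
The sample-size point can be made concrete with the standard formula for estimating a proportion, n = z²p(1−p)/e². The sketch below is illustrative only: the 95% confidence level, the conservative p = 0.5 assumption, and the caseload of 1,200 cases are hypothetical values chosen for the example, not figures from the lecture.

```python
# Illustrative only: how sample size drives the precision of a sampled outcome
# indicator (e.g., the percentage of clients with a successful outcome).
import math

def required_sample_size(margin_of_error, population=None, p=0.5, z=1.96):
    """Sample size needed to estimate a proportion within +/- margin_of_error
    at roughly 95% confidence (z = 1.96), using p = 0.5 as the most
    conservative assumption about the true outcome rate."""
    n = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    if population is not None:
        # Finite population correction for small caseloads.
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

# A hypothetical program with 1,200 closed cases per quarter: about 292 case
# reviews are needed for +/-5 points, but 566 for +/-3 points.
for e in (0.05, 0.03):
    print(f"+/-{e:.0%}: {required_sample_size(e, population=1200)} cases")
```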

  11. Design Issues for Outcome Monitoring • How should the results be presented? • Presentation varies by message, sender, and receiver. • Use different levels of aggregation and different emphases for different audiences. • Use graphics. • Data should be comparative (a small reporting sketch follows this slide). • Review presentation standards periodically.
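
As one illustration of comparative reporting at different levels of aggregation, the sketch below rolls the same indicator up from site level to program level and shows each rate next to a target. The site names, counts, and the 70% target are hypothetical example data, not lecture content.

```python
# Illustrative only: report one indicator at two levels of aggregation,
# always alongside a comparison point (here, a hypothetical target rate).
sites = {
    "Site A": {"successes": 130, "cases": 180},
    "Site B": {"successes": 95,  "cases": 160},
    "Site C": {"successes": 150, "cases": 190},
}
TARGET = 0.70  # hypothetical performance target

print(f"{'Unit':<10}{'Rate':>8}{'Target':>8}{'Gap':>8}")
for name, s in sites.items():
    rate = s["successes"] / s["cases"]
    print(f"{name:<10}{rate:>8.1%}{TARGET:>8.1%}{rate - TARGET:>+8.1%}")

# Program-wide roll-up of the same indicator.
program_rate = sum(s["successes"] for s in sites.values()) / sum(s["cases"] for s in sites.values())
print(f"{'Program':<10}{program_rate:>8.1%}{TARGET:>8.1%}{program_rate - TARGET:>+8.1%}")
```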

  12. Pitfalls in Outcome Monitoring • Unrealistic expectations. • Outcome monitoring is not a panacea. • Data collection is not easy. • Its size and scope are often underestimated. • Avoiding a clear focus on outcomes. • It is easier to measure inputs, processes, and outputs than outcomes. • Some outcomes may not be measurable directly. • Persistence, good communication, and group facilitation skills can overcome resistance.

  13. Pitfalls in Outcome Monitoring • Irrelevance. • Measures far removed from program reality. • Changes in policy priorities without the requisite changes in performance measures. • Unwarranted conclusions. • Using results for program targeting rather than performance improvement.
