
Use of routinely collected service delivery and M&E indicator data for timely feedback

Use of routinely collected service delivery and M&E indicator data for timely feedback. Denis Nash, PhD, MPH Associate Professor of Epidemiology Director, ICAP M&E Unit Mailman School of Public Health, Columbia University, NYC, USA dn2145@columbia.edu.



Presentation Transcript


  1. Use of routinely collected service delivery and M&E indicator data for timely feedback Denis Nash, PhD, MPH Associate Professor of Epidemiology Director, ICAP M&E Unit Mailman School of Public Health, Columbia University, NYC, USA dn2145@columbia.edu

  2. Common M&E Challenges in scale-up (1)
  • Large number of sites, with relevant info residing with multiple individuals
    • e.g., sites, districts, partner country teams, partner HQ, etc.
  • Increasingly complex array of services to report on/evaluate
  • Collection, management, and use of indicator data within country
  • Traditionally siloed areas of reporting for program activities that are integrated at the site level (see the sketch below)
    • e.g., care and treatment, PMTCT, TB/HIV, testing & counseling
    • Separate M&E reports for each program area
    • Comprehensive program evaluation? Triangulation?
    • MOH vs. donor reporting requirements
  • Many important aspects of implementation and program quality are not captured in conventional, routinely collected M&E indicators
  • Generally, M&E systems do not take context into account
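The siloed-reporting point is easiest to see with data. Below is a minimal sketch, in Python with pandas, of combining separate program-area reports into a single site-level table that supports triangulation across program areas; the site IDs, column names, and indicator values are illustrative assumptions, not ICAP's actual reporting format.

```python
# Minimal sketch: integrating siloed program-area indicator reports into one
# site-level view. All names and values below are hypothetical placeholders.
import pandas as pd

# Each program area traditionally arrives as its own separate report.
care_tx = pd.DataFrame({
    "site_id": ["S001", "S002", "S003"],
    "currently_on_art": [820, 145, 410],
})
pmtct = pd.DataFrame({
    "site_id": ["S001", "S002", "S003"],
    "hiv_pos_pregnant_women": [34, 9, 21],
})
tb_hiv = pd.DataFrame({
    "site_id": ["S001", "S003"],          # S002 did not report this quarter
    "tb_patients_tested_for_hiv": [57, 40],
})

# Integrate at the site level so one table supports cross-program review.
integrated = (
    care_tx
    .merge(pmtct, on="site_id", how="outer")
    .merge(tb_hiv, on="site_id", how="outer")
)
print(integrated)
```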

  3. Common M&E Challenges in scale-up (2)
  • Providing timely data processing and feedback of information to implementation staff for program improvement
    • National level (i.e., technical and management staff, IPs)
    • District level
    • Site level (and below)
    • Program improvement ultimately happens, and most often starts, at the site level
  • Integrated data management
    • An adequate database to house M&E indicator data is essential
    • Capture/store/process/utilize reported data in a streamlined and efficient way
    • Dynamic and flexible to accommodate changes in indicators
  • Data quality (see the sketch below)
    • Missing or incomplete data
    • Incorrect data
  • Demand for indicators that reflect quality of care/program
    • M&E indicators do not typically measure quality of care/program
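To make the data-quality bullet concrete, here is a minimal sketch of the kind of automated checks a database tool can run on reported indicator data. The indicator names and the plausibility rule (newly initiated on ART should not exceed new enrollments) are assumptions for illustration, not indicators or rules taken from the presentation.

```python
# Minimal sketch of automated data-quality checks on reported indicator data.
# Indicator names, values, and the consistency rule are illustrative assumptions.
import pandas as pd

reported = pd.DataFrame({
    "site_id": ["S001", "S002", "S003", "S004"],
    "new_enrollments": [120, None, 85, 40],        # missing value at S002
    "newly_initiated_on_art": [95, 30, 110, 35],   # exceeds enrollments at S003
})

# Flag missing or incomplete indicator values.
missing = reported[reported["new_enrollments"].isna()]

# Flag internally inconsistent values (assumed rule: initiations <= enrollments).
inconsistent = reported[
    reported["newly_initiated_on_art"] > reported["new_enrollments"]
]

print("Sites with missing data:\n", missing[["site_id"]])
print("Sites with implausible values:\n", inconsistent[["site_id"]])
```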

  4. Feeding data back to programs in the form of information
  • Scale-up, the sheer number of sites, and geographic spread make regular and timely feedback challenging, especially at the site level
  • Need for information at multiple levels
    • For implementation teams at national and district levels
      • On which sites should scarce mentoring and implementation support resources be focused? (see the sketch below)
      • Are efforts to maximize quality of care having an impact?
    • For site staff
      • How is our site doing? Where can we improve?
      • Are our efforts to improve things working?
  • Difficult to do without some form of automation (e.g., reports) and decentralization of information (e.g., web-based)
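One simple way to answer "which sites should support resources focus on" is a ranked list generated from routinely reported indicators. The sketch below assumes a hypothetical 12-month retention indicator and benchmark; it illustrates the idea only and is not an ICAP tool.

```python
# Minimal sketch: ranking sites on one quality-of-care indicator so national
# or district teams can see where to focus mentoring visits.
# The retention indicator and benchmark are assumptions for illustration.
import pandas as pd

sites = pd.DataFrame({
    "site_id": ["S001", "S002", "S003", "S004", "S005"],
    "district": ["North", "North", "South", "South", "South"],
    "retention_12m": [0.84, 0.62, 0.91, 0.58, 0.77],
})

# Sites below the assumed benchmark are prioritized for support, worst first.
BENCHMARK = 0.70
priority = (
    sites[sites["retention_12m"] < BENCHMARK]
    .sort_values("retention_12m")
)
print(priority)
```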

  5. Number of sites by country, as of March 31, 2010
  Total number of sites supported by ICAP: 1,219
  Source: ICAP Site Census, March 2010

  6. Feeding data back to programs in the form of information
  • Scale-up, the sheer number of sites, and geographic spread make regular and timely feedback challenging, especially at the site level
  • Need for information at multiple levels
    • For implementation teams at national and district levels
      • On which sites should scarce mentoring and implementation support resources be focused?
      • Are efforts to maximize quality of care having an impact?
    • For site staff
      • How is our site doing? Where can we improve?
      • Are our efforts to improve things working?
  • Difficult to do without some form of automation (e.g., reports) and decentralization of information (e.g., web-based)

  7. Priority indicators by site

  8. Examples of feedback tools used by ICAP
  • Mainly aimed at providing feedback from ICAP-NY to ICAP country teams on reported data
  • But some tools can also be used to feed data back to districts and sites
  Examples:
  • ICAP URS dashboards and reports
  • Maps (static and interactive)
  • PFaCTS reports
  • Quarterly eUpdate
  • Patient-level data reports

  9. Patient-level data

  10. ICAP patient-level data warehouse elements
  • Enrollment Table: basic demographic information (age, sex), enrollment date, prior ARV use, point of entry, transfer
  • Visit Table: visit date, WHO stage, height, weight, Hb, ALT, next scheduled visit date
  • CD4 Table: CD4 test date, CD4 count, CD4 percent
  • ART Table: ART regimen, regimen start & end dates, reason(s) for switching ART regimen
  • Medication Table: TB screening date and result, TB medication reason (treatment or prophylaxis) and dates, CTX & fluconazole
  • Pregnancy Table: visit date, weeks gestation at visit, due date, actual pregnancy end date
  • Status Table: patient disposition status (dead, transferred, withdrew, LTF, stopped ART, etc.) and status date
  • Baseline data: 1 row per patient; follow-up data: 1 row per measure per patient
  • *Measures at key points of interest (e.g., enrollment, ART initiation) are calculated based on visit dates
  • Databases are anonymized using an automated tool; data use is governed by MOH-approved protocols
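A minimal sketch of how a few of these warehouse tables might be declared, using SQLite. The column names follow the slide's wording, but the exact schema, types, and keys are assumptions rather than ICAP's actual data warehouse definition, and patient identifiers are assumed to be pre-anonymized IDs.

```python
# Minimal sketch of three of the warehouse tables described above (SQLite).
# Schema details are illustrative assumptions, not the ICAP warehouse itself.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE enrollment (
    patient_id      TEXT PRIMARY KEY,   -- anonymized ID; baseline: 1 row per patient
    age             INTEGER,
    sex             TEXT,
    enrollment_date TEXT,
    prior_arv_use   TEXT,
    point_of_entry  TEXT,
    transfer_in     TEXT
);

CREATE TABLE visit (                    -- follow-up: 1 row per visit per patient
    patient_id      TEXT REFERENCES enrollment(patient_id),
    visit_date      TEXT,
    who_stage       INTEGER,
    height_cm       REAL,
    weight_kg       REAL,
    hb              REAL,
    alt             REAL,
    next_visit_date TEXT
);

CREATE TABLE cd4 (                      -- follow-up: 1 row per CD4 test per patient
    patient_id      TEXT REFERENCES enrollment(patient_id),
    cd4_test_date   TEXT,
    cd4_count       INTEGER,
    cd4_percent     REAL
);
""")
conn.close()
```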

  11. Patient-level data feedback reports
  • Multi-site feedback reports
    • Combine and compare data across multiple sites
    • One for adult patients and one for pediatric patients
  • Site-specific feedback reports
    • General feedback report: summary of information on currently enrolled patients
    • Standards of care (SOC) report: quality-of-care indicators
  • Reports are:
    • 100% automated and in PDF format (see the sketch below)
    • Generated and shared with sites within two weeks of submission of the database
    • Currently generated in NYC at ICAP HQ
  • Report generation tools can be deployed, owned, and maintained by MOHs where capacity exists or where it can be developed
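In the same spirit as the "100% automated, PDF format" reports described above, here is a minimal sketch of batch-generating a one-page site feedback PDF with matplotlib. The function name, indicator values, and layout are illustrative assumptions, not the actual ICAP report generation tool.

```python
# Minimal sketch of an automated, PDF-format site feedback report.
# Indicator names/values and layout are hypothetical placeholders.
import matplotlib
matplotlib.use("Agg")                     # no display needed for batch generation
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

def generate_site_report(site_id: str, indicators: dict, outfile: str) -> None:
    """Write a one-page PDF summarizing a site's indicators."""
    with PdfPages(outfile) as pdf:
        fig, ax = plt.subplots(figsize=(8.5, 11))
        ax.axis("off")
        ax.set_title(f"Site feedback report: {site_id}", loc="left")
        lines = [f"{name}: {value}" for name, value in indicators.items()]
        ax.text(0.0, 0.9, "\n".join(lines), va="top", family="monospace")
        pdf.savefig(fig)
        plt.close(fig)

# Example: generate one report per submitted site database.
generate_site_report(
    "S001",
    {"Currently enrolled": 1240, "On ART": 980, "12-month retention": "84%"},
    "S001_feedback_report.pdf",
)
```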

  12. Multi-site report

  13. Site-specific general feedback report

  14. PDF format, 100% automated

  15. Site-specific SOC report

  16. Dissemination of patient-level data reports

  17. M&E Indicator data

  18. Integrated data at site level

  19. Filterable home page and program area dashboards

  20. Example of care and treatment dashboard table

  21. Filterable home page and program area dashboards

  22. Conclusions
  • Timely feedback and dissemination of routinely collected service data and M&E data is an increasing challenge, especially as the number of sites increases (i.e., scale-up)
    • National, district, site, IPs
  • Database tools, automation, and decentralization of information are critical
    • They improve data quality and the utility of information!
  • Capacity building on interpreting and applying disseminated data for program improvement is needed

  23. Thank you!
