
Breast cancer screening evaluation. Main monitoring indicators for breast cancer screening according to EU guidelines. Dr. Ruta Grigiene, Dr. Egle Jonaitiene 2016 10 23 – 10 28.


Presentation Transcript


  1. Breast cancer screening evaluation. Main monitoring indicators for breast cancer screening according to EU guidelines. Dr. Ruta Grigiene, Dr. Egle Jonaitiene, 2016 10 23 – 10 28

  2. In the evaluation and monitoring of screening for cancer, the design of a programme cannot be separated from the analysis: the programme should be designed in such a way that it can be evaluated.

  3. Definition of Program Evaluation
  Evaluation is one of the essential steps of the screening programme and covers all of the main components of the program:
  • Recruitment
  • Screening and Diagnostic Services
  • Data Management
  • Professional Development
  • Partnerships
  • Program Management
  • Quality Assurance and Improvement

  4. Definition of Program Evaluation
  Evaluation is defined as the systematic documentation of the operations and outcomes of a program, compared to a set of explicit standards, objectives, or expectations.
  • Systematic implies that evaluation is carefully planned and implemented to ensure that its results are credible, useful, and used.
  • Information represents all the evaluation data collected about the program to help make judgments or decisions about program activities.
  • Activities and outcomes identify the actions of the program and the effects of the program.
  Programs are required to have an evaluation plan for all essential program components in order to determine whether the components are reaching the desired outcomes.

  5. Why is Evaluation Important?
  The purpose of program evaluation is to assess the program's implementation (processes), outcomes (effectiveness), and costs (efficiency). It gathers useful information to aid in planning, decision-making, and improvement. Evaluation aims to better serve programme participants, program partners, and your own program by:
  • maximizing your ability to improve the health status of women in your community through the program's activities;
  • demonstrating accountability and sound management of resources.

  6. Evaluation tells you: Is our program working?
  In short, evaluation helps you demonstrate that you are achieving outcomes. It helps your communities, funders, and partners see that your programme gets results. Best of all, it helps you demonstrate which aspects of your program work.

  7. Evaluating screening programme components helps you to:
  • Monitor compliance with EU guidance.
  • Learn about your program's strengths and successes.
  • Identify program needs and weaknesses.
  • Identify effective and ineffective activities.
  • Improve the quality, effectiveness, and efficiency of your program.
  • Improve program operations and outcomes.
  • Recognize gaps in the overall program.
  • Demonstrate program effectiveness to stakeholders.
  • Use findings for program planning, monitoring, and decision making.

  8. Types of Evaluation
  There are three main types of evaluation that you can conduct:
  1) Process (evaluation of activities)
  2) Impact (evaluation of the direct effects of the program)
  3) Outcome (evaluation of the longer-lasting benefits of the program)
  Ideally, any program will conduct a combination of all three types in order to get a well-rounded picture of what is actually going on. The use of a specific type of evaluation will depend on the purpose of the evaluation, your program's stage of development, and available resources. More established activities will have less need for basic process evaluation and may focus more on impact and outcome evaluation.

  9. Types of Evaluation: Process Evaluation
  A process evaluation focuses on:
  • how a program works to attain specific goals and objectives,
  • how well implementation is going, and
  • what barriers to implementation exist.
  Process evaluation is known as the "counting" or "documenting" of activities and should answer the question: are we doing what we said we would do? It can also help identify problems with program implementation early. Process evaluation is most appropriate when your program is already being implemented or maintained and you want to measure how well the program process is being conducted. These process data often provide insight into why outcomes are not reached.

  10. Types of Evaluation: Impact Evaluation
  An impact evaluation is implemented in order to determine what direct effects the program actually has on those directly and indirectly experiencing it: not only participants, but also program partners and the community. The impact evaluation also provides information about whether the program has been able to meet its short-term goals and objectives. Impact measures often focus on changes in awareness of breast cancer and the need for screening, attitudes toward preventive screening, and screening behaviors on the part of the participants.

  11. Types of Evaluation: Outcome Evaluation
  An outcome evaluation is implemented in order to discern whether a program has been able to meet its long-term goals and objectives. Outcome evaluation can track the maintenance of program effects (i.e., screening over time). It also documents longer-term effects on morbidity and cancer mortality. Sample outcome evaluation questions include:
  • Has the number of mammograms provided increased over time?
  • Has the program maintained enrollment of women in priority populations over time?
  • Has the program continued to detect breast cancer at earlier stages?

  12. Review of Types of Evaluation

  13. Using Program Outcomes in the Evaluation Process
  • For the breast cancer screening programme, there are major outcome measures for every program component.
  • It is important to keep your program's outcome measures in mind throughout the evaluation process:
  • at the development stage of your plan,
  • during data collection,
  • when interpreting your findings.

  14. Epidemiological program implementation quality indicators
  • Target population coverage
  • Number and share of women who responded
  • Number of women referred for further investigation (BI-RADS 4, 5, 0) in relation to all imaged women
  • Share of women who underwent further investigation in relation to those referred
  • Rate of invasive investigations (puncture cytology, core biopsy)*
  • Share of malignant lesions*
  • Share of image-guided cytology procedures with negative results*
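The first four of these process indicators reduce to simple ratios of routinely collected programme counts. A minimal sketch in Python; every count and variable name below is a hypothetical illustration, not programme data:

```python
def pct(numerator: int, denominator: int) -> float:
    """Share expressed as a percentage; guards against an empty denominator."""
    return 100.0 * numerator / denominator if denominator else 0.0

# Hypothetical single-round counts, for illustration only.
target_population = 120_000   # eligible women in the target population
invited = 110_000             # invitations actually sent
screened = 62_000             # women who attended and were imaged
referred = 3_100              # BI-RADS 0, 4 or 5: referred for work-up
worked_up = 2_850             # attended the further investigation

print(f"Invitation coverage:      {pct(invited, target_population):.1f}%")   # 91.7%
print(f"Participation rate:       {pct(screened, invited):.1f}%")            # 56.4%
print(f"Further-assessment rate:  {pct(referred, screened):.2f}%")           # 5.00%
print(f"Compliance with referral: {pct(worked_up, referred):.1f}%")          # 91.9%
```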

  15. Epidemiological program implementation quality indicators
  • Positive predictive value (PPV) of the mammography test, puncture cytology, and core biopsy*
  • Ratio of benign to malignant biopsy lesions*
  • Time interval between mammography and the mailed result*
  • Time interval between mammography and the first day of further investigations*
  • Time interval between mammography and the surgical procedure*
  • Share of women receiving the invitation to the next round within the defined screening interval (2 years ± 2 months)
  • Share of women receiving the invitation to the next round within the defined screening interval (2 years ± 6 months)
  * Indicators which we will be able to monitor after the new IT health system is implemented.
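The predictive-value and timeliness indicators on this slide are equally mechanical once the underlying counts and dates are recorded. A small sketch, again with invented numbers and dates used purely for illustration:

```python
from datetime import date

# Hypothetical biopsy outcomes, for illustration only.
core_biopsies = 900           # image-guided core biopsies performed
malignant = 540               # biopsies confirming cancer
benign = core_biopsies - malignant

ppv = malignant / core_biopsies            # PPV = true positives / all positives
print(f"PPV of core biopsy: {ppv:.1%}")                      # 60.0%
print(f"Benign : malignant = {benign / malignant:.2f} : 1")  # 0.67 : 1

# Timeliness indicator: days from mammography to the mailed result (assumed dates).
mammogram, result_mailed = date(2016, 3, 1), date(2016, 3, 9)
print(f"Mammography -> mailed result: {(result_mailed - mammogram).days} days")  # 8 days
```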

  16. Epidemiological program efficacy indicators
  • Number and share of women diagnosed with breast cancer
  • Rate of newly detected carcinomas per 1,000 examinations
  • Increase in the number of newly detected breast carcinomas within the Screening Program in relation to the incidence before screening was introduced
  • Stage of cancer spread at the moment of diagnosis*
  • Number and share of women in whom invasive carcinoma has been detected*
  • Rate of newly detected invasive carcinomas per 1,000 examinations*

  17. Epidemiological program efficacy indicators
  • Number and share of women in whom in situ breast cancer was detected*
  • Rate of newly diagnosed carcinoma in situ per 1,000 examinations*
  • Number and rate of interval carcinomas*
  • Share of newly detected invasive carcinomas with no metastases to the lymph nodes
  • Share of newly detected carcinomas of stage II or higher*
  • Share of newly detected invasive carcinomas sized 10 mm or smaller*
  • Share of newly detected invasive carcinomas smaller than 15 mm*
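The detection-rate indicators on slides 16 and 17 are conventionally expressed per 1,000 screening examinations. A hedged sketch with assumed counts:

```python
# Hypothetical counts; detection rates are conventionally per 1,000 screens.
screens = 62_000
invasive = 310        # newly detected invasive carcinomas
in_situ = 55          # newly detected carcinomas in situ
interval = 95         # cancers surfacing between rounds after a normal screen

def per_1000(n: int) -> float:
    return 1000.0 * n / screens

print(f"All carcinomas:      {per_1000(invasive + in_situ):.2f} per 1,000")  # 5.89
print(f"Invasive carcinomas: {per_1000(invasive):.2f} per 1,000")            # 5.00
print(f"Carcinoma in situ:   {per_1000(in_situ):.2f} per 1,000")             # 0.89
print(f"Interval cancers:    {per_1000(interval):.2f} per 1,000")            # 1.53
```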

  18. Interval cancer
  • Cancers detected or presenting within 12 months after a mammographic screening in which the findings were considered normal.
  • The term is a statistical benchmark used in conjunction with other parameters to assess the efficacy of breast imaging programmes and the statistics of mammogram readers.
  • It is a strong indicator of how successful your screening programme is.

  19. Interval cancer
  The definition is potentially confusing, both because it seems to imply that an imaging error was made and because, by convention, interval cancers appear in the statistics of the most recent previous mammogram reader. In fact, true negative interval cancers actually developed in the time period between the last mammogram (read as "normal") and the one on which the cancer was detected.
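Given the 12-month window in the definition on slide 18, flagging interval cancers in a screening register comes down to a date comparison. A minimal sketch; the function name, fields, and the 365-day default are illustrative assumptions (some programmes instead use the full inter-screening interval as the window):

```python
from datetime import date

def is_interval_cancer(screen_date: date, screen_read_normal: bool,
                       diagnosis_date: date, window_days: int = 365) -> bool:
    """Flag a cancer diagnosed within `window_days` of a screen read as normal."""
    if not screen_read_normal:
        return False
    days = (diagnosis_date - screen_date).days
    return 0 < days <= window_days

# Example: screen read as normal in March 2015; cancer diagnosed ten months later.
print(is_interval_cancer(date(2015, 3, 10), True, date(2016, 1, 20)))  # True
```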

  20. Interval cancers might be divided into a number of categories:
  • True negative interval cancer: no sign of disease can be detected on the previous screening mammogram; the lesion is new.
  • Interpreted-as-benign interval cancer: a lesion that proves to be malignant showed benign morphological characteristics on the previous mammogram.
  • Retrospectively visible interval cancer: a now-known lesion is seen on the previous screening mammogram; this is an interpretive error on the part of the reader.

  21. Interval cancers might be divided into a number of categories:
  • Single-reader interval cancer: a second reader would have discovered the lesion. Second reads in screening programmes yield up to 10% more cancers.
  • Technical-failure interval cancer: a technically poor image prevented the reader from discovering the abnormality. In theory, suboptimal images will not be submitted for interpretation, and if they are, they should not be read.

  22. Using Program Outcomes in the Evaluation Process
  Your program activities lead to outcomes that help reach programme goals. If you want to assess whether outcomes were met, your evaluation will focus on how they were achieved. From the evaluation results, you may learn that activities need to be revisited, revised, or added to meet program goals and objectives.
  [Table columns: BCSP Component | Program Activities | Program Outcome Measures | Short-term Goals | Intermediate Goals | Long-term Goals]

  23. Screening & Diagnostic Services Outcomes
  • Access to program services for eligible women.
  • Access to breast cancer screening, diagnostic, and treatment services.
  • Provision of services according to clinical guidelines approved by the grantee medical advisory board or consultants.
  • Use of current data for effective case management.

  24. Data Management Outcomes
  • Existence of data systems to collect, edit, manage, and continuously improve tracking of services provided.
  • Reduction or elimination of program data errors.
  • Existence of mechanisms for reviewing and assessing data quality.

  25. Quality Assurance & Quality Improvement Outcomes
  • Providers' use of current standards, accepted clinical guidelines, and program policies, as assessed by program staff.
  • Existence and maintenance of a continuous quality improvement committee.
  • Monitoring, assessing, and improving clinical services to meet or exceed BCS performance benchmarks for quality.
  • Program-eligible women's satisfaction with program services provided.
  • Program providers' satisfaction with the program.

  26. Evaluation Outcomes
  • Quality of evaluation plans for each program component.
  • Availability and quality of program data.
  • Conduct of evaluation activities in order to assess program effectiveness.
  • Use of evaluation results to inform program decision-making.

  27. Recruitment Outcomes
  • Implementation of evidence-based strategies to increase recruitment.
  • Recruitment of program-eligible women in priority populations.
  • Conduct of program activities to increase awareness about breast cancer.
  • Program-eligible women's attitudes toward screening.
  • General public's knowledge about the need for breast cancer screening.

  28. Partnerships Outcomes
  • Use of partnerships to recruit and retain providers.
  • Use of partnerships to educate and increase awareness of breast and cervical cancer.
  • Use of partnerships to promote and facilitate breast screening.
  • Use of partnerships to promote professional development activities.
  • Engagement of community partners in activities that promote breast cancer screening services.

  29. Engage Stakeholders The first step in the program evaluation process is to engage stakeholders. Stakeholders are those persons or organizations that have an investment in what will be learned from an evaluation and what will be done with the knowledge. It is important to involve stakeholders in the planning and implementation stages of evaluation to ensure that their perspectives are understood and that the evaluation reflects their areas of interest. Involving stakeholders increases awareness of different perspectives, integrates knowledge of diverse groups, increases the likelihood of utilization of findings, and reduces suspicion and concerns related to evaluation.

  30. Engage Stakeholders
  Here are the different types of stakeholders who are important to engage in all parts of the evaluation. Stakeholders can fall into different categories:
  • Decision-makers
  • Implementers
  • Program Partners
  • Participants
  The table provides examples of each category of stakeholders.

  31. Case Study
  You are interested in assessing how your program is doing with its screening activities. Screening activities include ensuring that: 1) women with abnormal screening test results receive timely diagnostic examinations, and 2) women receive a final diagnosis after an abnormal screening result. Stakeholders who may be interested in this question are:
  • the program director, screening coordinator, case management coordinator, or quality assurance coordinator;
  • provider networks that screen and provide services for the women.

  32. Describe the Program The second step in the program evaluation process is to describe the program. Without an agreed-upon program definition and purpose, it will be difficult to focus the evaluation efforts and results will be of limited use. Important aspects of the program description are: 1) need for the program, 2) resources, 3) component activities, and 4) expected outcomes.

  33. [Table columns: Short-term Outcomes | Intermediate Outcomes | Long-term Outcomes | Outputs]
  • Program services accessible to eligible women.
  • Access to breast cancer screening, diagnostic, and treatment services.
  • Recruitment of program-eligible women in priority populations.
  • Breast cancer screening among program-eligible women, with an emphasis on those aged 50–64.
  • A reduction in breast cancer related morbidity and mortality.
  • Evidence-based strategies implemented to increase recruitment.
  • Services provided according to clinical guidelines approved by the medical advisory committee.
  • Number of women diagnosed in the program who have an early stage of the disease.
  • Eligible women re-screened at appropriate intervals.
  • Timely and adequate service delivery and case management for women with abnormal screening results (or a diagnosis of cancer).
  • Treatment service delivery for women with cancer.
  • Case management provided to women with abnormal screening results.
  • Program-eligible women's and the public's awareness, attitudes, and knowledge of screening.
  • Use of current data for effective case management.
  • Improvement of rates of breast cancer rescreening per clinical guidelines.
  • Providers are knowledgeable about breast cancer screening.
  • Evidence-based clinical guidelines adopted by providers to improve services.
  • Assessment of the needs of providers.
  • Current standards, guidelines, and policies assessed by the medical advisory consultants.
  • Management: staff hired; number of staff meetings.
  • Evaluation: existence of an evaluation workplan.
  • Breast and cervical cancer early detection issues are addressed by sustained and effective partnerships.
  • Evidence-based practices used in service delivery.
  • Use of program data for program planning and decision making.
  • QA data provided for the clinical practice QA program.
  • Collaborations used to recruit providers and conduct professional development activities.
  • Sustainability and effectiveness of partnerships.
  • Reduction or elimination of program data errors.

  34. Focus the Evaluation
  Once the key players are involved and there is a clear understanding of the program, it is important to focus the evaluation design: determine the purpose, users, uses, and evaluation questions, and develop an evaluation plan. Focusing the evaluation design is important because it helps identify useful questions that you want to ask about your program.

  35. Focus the Evaluation
  Due to the large size, complexity, and cost of the program, it is not possible to evaluate every aspect of it. You should first examine the systems that provide valuable monitoring data to help identify problem areas for further investigation, and the program performance as noted in progress reports. Then, focus evaluation efforts on the areas of the program that are not working optimally. It is, however, important to look at the program as a whole and think about how you would evaluate each aspect of it. You should begin by addressing:
  • Any new initiative with resources allocated to it.
  • Any activity that consumes a high amount of resources.
  • Activities that are not successful at meeting their measures of success.
  • Program inconsistencies, to explore why they exist.
  • Any unevaluated activity (e.g., recruitment strategies, screening) that is employed frequently by the program.

  36. Focus the Evaluation Case Study
  Think about what your outcome measure would be for the provision of timely and appropriate diagnostic services to women receiving abnormal breast or cervical cancer screening results (follow-up). Here are activities that your program does to help track this outcome:
  • Reviewing all abnormal breast and cervical screening results weekly to determine appropriate referral and follow-up status.
  • Logging the name and total number of patient records for women with abnormal results.
  • Assigning case management for every client with an abnormal screening result.
  • Case managing women to ensure appropriate tracking, referral, and follow-up.

  37. Gather Credible Evidence Gathering credible and relevant data is important because it strengthens evaluation findings and the recommendations that follow from them, thereby enhancing the evaluation’s overall utility and accuracy. When selecting which processes or outcomes to measure, keep the evaluation purpose and uses of the data findings in mind. An evaluation’s overall credibility and relevancy can be improved by: 1) using multiple purposeful or systematic procedures for gathering, analyzing, and interpreting data, and 2) encouraging participation by stakeholders.

  38. Gather Credible Evidence In gathering data, you should start with the evaluation questions listed in the evaluation plan. The questions should identify the data indicator of interest. The data collection process should identify the source(s) to collect information on the indicator. The sources could include persons, documents, or observation. If evaluation was included as part of the planning process, your program will already include data collection activities. If existing data systems cannot answer your evaluation questions, you might consider developing your own system that will monitor or track what you are interested in evaluating.

  39. Gather Credible Evidence: Qualitative Versus Quantitative Data
  Quantitative data are information expressed as numbers. Surveys, patient logs, and an MDEs database are examples of quantitative data collection methods. However, not all data collection methods are quantitative. Some issues are better addressed through qualitative data, or information described in words or narratives. Qualitative data collection methods include observations, interviews, focus groups, and written program documents. Often a combination of the two methods provides the most accurate representation of the program. For example, you may find that interviews with providers in your provider network, or site visits to clinics, give you more information about service delivery than surveys can.

  40. Gather Credible Evidence
  Here are some examples of possible sources for both quantitative and qualitative data collection methods. Original information collected directly by you or your program is considered primary data.
  Primary Data, Quantitative Data Collection:
  • Enrollment, screening, and diagnosis service forms
  • Minimum Data Elements (MDEs)
  • Staff surveys and interviews
  • Site visit reports
  • Tracking of media
  • Provider surveys or interviews
  • Surveys of the intended audience
  • Logs
  • Chart audits
  • Progress reports
  • Needs assessments

  41. Gather Credible Evidence
  Qualitative Data Collection:
  • Exit interviews of participants
  • Focus groups
  • Topic expert interviews
  • Staff meeting minutes
  • Site visits
  • Observations of activities, staff, and clients

  42. Gather Credible Evidence
  Other options for collecting data to use in evaluating your program include sources of information produced by others, also known as secondary data.
  Secondary Data, Quantitative Data Collection:
  • Cancer registries
  • Medical records
  • Medical claims data
  • Census data
  • Vital statistics
  • National Health Interview Survey
  • Vital records
  Qualitative Data Collection:
  • Administrative reports
  • Publications and journals

  43. Gather Credible Evidence Case Study
  Going back to our screening example of tracking the timeliness of follow-up after an abnormal screening for women in your program: what data source would you use to locate this information? You would most likely use program records for participants, weekly screening records of women with abnormal results, and your case management data file.

  44. Evaluation Plan Case Study Now that you have decided on the data indicator of timeliness of follow-up and its source, here is how your evaluation plan would appear.

  45. Justify Conclusions After you have read through the evaluation findings, you should interpret and discuss preliminary evaluation findings with program staff and stakeholders. You can then develop action steps based on the evaluation findings.

  46. Justify Conclusions
  The major activities in justifying conclusions are to:
  • Analyze the evaluation data.
  • Interpret the results to discover what the data say about the program. Decide what the numbers, averages, and statistical tests tell you about the program activity.
  • Make judgments about the program data. Decide on the meaning of the data with a group of stakeholders by comparing evaluation results with EU standards or with the measures of success described in your workplan. Then, make a recommendation to address the evaluation results.
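One concrete way to make such a judgment is to compare an observed indicator with a benchmark while accounting for sampling uncertainty. A sketch using a Wilson score confidence interval for a proportion; the counts and the 5% recall-rate benchmark below are illustrative assumptions, not EU figures:

```python
from math import sqrt

def wilson_ci(x: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a proportion x/n."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

recalled, screened = 2_800, 62_000   # assumed counts
benchmark = 0.05                     # assumed standard: recall rate below 5%
lo, hi = wilson_ci(recalled, screened)
print(f"Recall rate {recalled / screened:.2%} "
      f"(95% CI {lo:.2%}-{hi:.2%}); benchmark met: {hi < benchmark}")
```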

  47. Justify Conclusions Once data (evidence) are analyzed and summarized, conclusions can be drawn about the component activities. Interpretation is attaching meaning to the evaluation data. It is important to gather stakeholders and discuss the data. The conclusions should lead to further recommendations for program action. For example, recommendations may include continuing, redesigning, expanding, or stopping a program activity.

  48. Justify Conclusions
  When analyzing your data, it is helpful to examine how you have met your program outcomes, i.e., how you justify your conclusions. But it is also important to think critically about what can be improved if outcomes are not met, so that during the next evaluation you will be on target. For example, suppose you analyzed screening data for the past fiscal year and only half of the eligible women in a priority population were screened for breast cancer. Are the results similar to what you expected? If not, why do you think they are different? How would you go about assessing why and how outcomes were not met?
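To make the case study concrete, here is the worked arithmetic for an assumed cohort; the eligible count and the 70% coverage target are illustrative assumptions, not programme standards:

```python
# Assumed cohort for the case above.
eligible = 8_400            # priority-population women eligible this year
screened = 4_200            # women actually screened ("only half")
target = 0.70               # assumed coverage target

coverage = screened / eligible
shortfall = round(target * eligible) - screened
print(f"Coverage: {coverage:.0%}; "
      f"{shortfall} more screens needed to reach the {target:.0%} target")
# Coverage: 50%; 1680 more screens needed to reach the 70% target
```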

  49. Justify Conclusions
  The following action steps and questions may be useful to consider when you are ready to analyze and interpret your findings. Remember to go back and review your evaluation question to make sure it is answered.
  • Against what standards are you evaluating your findings? Nationally set standards for screening? Past program results? National averages?
  • Review your recruitment strategies with program staff. Strengthen relationships with community organizations to build consensus for program involvement if need be.
  • Examine data quality. Did you find any incomplete or missing data that could push your enrollment numbers down? What could you do to remedy that?
  • Are your recommendations based on these findings? Why or why not? What are your findings' limitations?
  • Did you include various stakeholder viewpoints to analyze and interpret the findings? How can this help you or others improve evaluation plans in the future?
