
Performance indicators: good, bad, and ugly

The report of the Royal Statistical Society working party on performance monitoring in the public services, chaired by Professor Sheila Bird.





Presentation Transcript


  1. Performance indicators: good, bad, and ugly The report of the Royal Statistical Society working party on performance monitoring in the public services, chaired by Professor Sheila Bird

  2. “Performance monitoring done well is broadly productive for those concerned. Done badly, it can be very costly and not merely ineffective but harmful and indeed destructive - of morale, reputations and the public services.”

  3. Methodological rigour in selecting indicators
  • Sample surveys should be designed, conducted and analysed in accordance with statistical theory and best practice
  • Administrative data should be fully auditable
  • Concepts, questions, etc. should be comparable and harmonised where possible, conforming to national or international standards as appropriate
  • Indicators should be precise and accurate enough to show reliably when change has occurred (see the sketch below)
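To make the last point concrete, here is a minimal Python sketch, not part of the report, of how one might check whether a survey-based indicator is precise enough to detect a change of a given size. It uses the normal approximation for a sample proportion, and every figure in it (indicator level, survey size, size of change) is hypothetical.

```python
# A minimal sketch, not from the report: checking whether a survey-based
# indicator is precise enough to detect a change of a given size.
# All numbers are hypothetical, for illustration only.
from math import ceil, sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf(0.975)  # two-sided 95% critical value, ~1.96

def ci_half_width(p: float, n: int) -> float:
    """95% confidence-interval half-width for a sample proportion p at size n."""
    return z * sqrt(p * (1 - p) / n)

def n_to_detect(p: float, change: float) -> int:
    """Sample size at which the 95% half-width shrinks to the change of interest."""
    return ceil((z / change) ** 2 * p * (1 - p))

p, n = 0.30, 400  # hypothetical indicator level and survey size
print(f"Half-width at n={n}: ±{ci_half_width(p, n):.3f}")
print(f"n needed to resolve a 2-point change: {n_to_detect(p, 0.02)}")
```

With these illustrative numbers, a survey of 400 cannot reliably distinguish a two-percentage-point change from sampling noise; roughly 2,000 responses would be needed.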

  4. Definitions should be precise
  • Definitions of both indicators and targets should be:
  • Precise but practicable: useful definitions should be given for all the key concepts in the indicator or target
  • Consistent over time: any changes to definitions or methods should be fully documented
  • Unambiguous: there should be no possibility of disagreement about whether progress means the indicator going up or down

  5. Practitioners involved should have input
  • For targets to be ambitious but achievable, a good understanding is needed both of the practicalities of delivery on the ground and of the data
  • To understand the practicalities of delivery, practitioners should be consulted
  • Motivational but unrealistic targets may demoralise

  6. Monitor for perverse outcomes
  • Badly thought-through targets can lead practitioners to play the system rather than improve performance
  • An example from the report:
  • An indicator for prisons is the number of “serious” assaults on prisoners
  • “Serious” here means a proven prisoner-on-prisoner assault
  • The indicator would therefore improve if prisons simply reduced their investigations into assaults

  7. Do not ignore uncertainty or variability
  • Insistence on single numbers as answers to complex questions is to be resisted
  • Natural variability, outliers, recording errors and statistical error (i.e. confidence intervals around sample estimates) all need to be considered (see the sketch below)
  • All need to be clearly presented
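As an illustration of the point about confidence intervals, here is a minimal Python sketch with hypothetical data, not taken from the report: it reports a sampled indicator alongside its 95% interval, and where two units' intervals overlap, the apparent gap between their single-number scores may be nothing but sampling error.

```python
# A minimal sketch with hypothetical data: report a sampled indicator with
# its 95% confidence interval rather than as a bare single number.
from math import sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf(0.975)  # ~1.96

def proportion_ci(successes: int, n: int) -> tuple[float, float, float]:
    """Point estimate and normal-approximation 95% CI for a proportion."""
    p = successes / n
    hw = z * sqrt(p * (1 - p) / n)
    return p, p - hw, p + hw

# Two hypothetical units that a naive league table would rank as different:
for unit, (hits, n) in {"Unit A": (172, 200), "Unit B": (165, 200)}.items():
    p, lo, hi = proportion_ci(hits, n)
    print(f"{unit}: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
# The intervals overlap substantially, so the apparent gap between the two
# single-number scores may be nothing more than sampling error.
```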

  8. Do not set 100% targets
  • 100% targets can lead to perverse outcomes, demoralise staff when failure inevitably occurs, and draw a disproportionate share of resources
  • An example from the report:
  • “No patient shall wait in A&E for more than 4 hours”
  • The target becomes irrelevant as soon as one patient does wait more than 4 hours
  • A&E staff may have very sound reasons for making a small number of people wait longer

  9. Do not ignore the distribution
  • A performance indicator is a single number
  • Single-number summaries of data can be misleading
  • An example from the report:
  • “Number of patients waiting more than 4 hours”
  • The whole distribution needs viewing to understand the indicator, e.g. has progress been achieved by getting most people seen in 3 hours 59 minutes while some wait 10 hours? (see the sketch below)
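The following Python sketch uses fabricated waiting times (purely illustrative, not data from the report) to show how two very different distributions can produce the identical single-number indicator.

```python
# A minimal sketch with fabricated waiting times (minutes); both samples score
# exactly the same on the single-number indicator "% waiting over 4 hours".
from statistics import quantiles

before = [120, 150, 180, 200, 210, 220, 230, 235, 238, 300]
after = [225, 230, 232, 234, 236, 238, 239, 239, 239, 600]

for name, waits in [("before", before), ("after", after)]:
    over_4h = sum(w > 240 for w in waits) / len(waits)
    deciles = quantiles(waits, n=10)  # cut points of the full distribution
    print(f"{name}: {over_4h:.0%} over 4h, median {deciles[4]:.0f} min, "
          f"90th percentile {deciles[8]:.0f} min")
# Both samples show 10% "over 4 hours", yet in the second almost everyone is
# squeezed to just under the target while one patient waits 10 hours: the
# single-number indicator hides this completely.
```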

  10. Do not mistake statistical significance for practical importance
  • A difference that is not statistically significant cannot be claimed as practically important, because the difference might not be genuine (it could just be chance) BUT
  • A difference can be statistically significant yet not practically important, because statistical significance can be achieved simply by taking a huge sample (see the sketch below)
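A short Python sketch of the second point, using simulated figures rather than anything from the report: holding a trivially small difference fixed while the sample size grows makes it “statistically significant” without making it important.

```python
# A minimal sketch with simulated figures: a trivially small difference
# becomes "statistically significant" once the sample is huge enough.
from math import sqrt
from statistics import NormalDist

def two_proportion_p(p1: float, p2: float, n1: int, n2: int) -> float:
    """Two-sided p-value for a pooled two-sample z-test of proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return 2 * (1 - NormalDist().cdf(abs(p1 - p2) / se))

# The same 0.2 percentage-point gap, first in modest then in huge samples:
print(f"n = 2,000 per group:     p = {two_proportion_p(0.752, 0.750, 2_000, 2_000):.3f}")
print(f"n = 2,000,000 per group: p = {two_proportion_p(0.752, 0.750, 2_000_000, 2_000_000):.6f}")
# The effect never changed, only the sample size did; significance alone
# says nothing about whether the difference matters in practice.
```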

  11. Consider not setting a target until data are well understood
  • The statistical properties of an indicator will be much better understood after one or two rounds of analysis
  • It may therefore be sensible to wait before setting a target

  12. Document everything: others should be able to replicate procedures
  • All assumptions and methods should be fully documented so that others can fully understand and replicate results
  • A ‘PM Protocol’ should include:
  • Objectives
  • Definitions
  • Survey methods / information about data
  • Information about context
  • Risks of perverse outcomes
  • How the data will be analysed
  • Components of variation
  • Ethical, legal and confidentiality issues
  • How, when and where data will be published

  13. The report is available on the RSS website here: http://www.rss.org.uk/PDF/Performance%20monitoring%20231003.pdf
