
Eliciting Subjective Priors for Health Care Decision Making

This presentation explores the rationale for subjective elicitation of priors in health care decision making, previous applications, and the utilization of existing protocols. It also discusses principles, considerations, and areas for further research in determining a protocol for subjective elicitation in health care decision making.




Presentation Transcript


  1. Eliciting Subjective Priors for Health Care Decision Making Laura Bojke Centre for Health Economics University of York, UK

  2. What this presentation will cover • Rationale for subjective expert elicitation (SEE) in health care • Previous applications • Utilising existing protocols • Determining what is appropriate for HCDM • Principles • Things to consider • Determining a protocol for SEE in HCDM • Areas for further research

  3. Decision making in health care • Health economics (economic evaluation) is concerned with the allocation of scarce resources to improve health • What happens when a new technology comes along? • Fund from a fixed budget • Displacement • The new health outcome generated from the new technology is to some extent offset by the lost health outcome from the displaced spending

  4. Informing health care decision making • Health care decision problems are generally complex • Typically involve a number of alternative courses of action (e.g. tests and treatments) • Range of health outcomes and cost implications

  5. Modelling in economic evaluation • Trials alone often not sufficient for analytical decision making • Selective inclusion of comparators • May need to synthesise evidence from different sources • In particular costs and utilities (outcomes) • Insufficient time horizon • Extrapolate intermediate (observed) outcomes to long term QALYs • Decision models used to represent the disease process and capture any differences in costs and outcomes between competing interventions

  6. Uncertainty in HCDM • Specifically when developing decision models… • Evidence to populate models is often uncertain • Empirical data may be limited e.g., a cancer product licensed on the basis of progression-free survival, with limited evidence on survival impacts • Empirical data may be missing entirely e.g., when assessing the value of a future clinical trial for a medical technology  • Assessment much closer to launch has led to increased uncertainty in the evidence base • Uncertainty can have consequences • Wrong decision can lead to loss of outcomes, increased costs • Need to quantify uncertainty in cost-effectiveness

  7. Using experts' priors to characterise uncertainties • Additional information, often in the form of expert judgements reported as a distribution, is needed to reach a decision • To improve the accountability of the decision-making process, the procedure used to derive these judgements should be transparent • Subjective expert elicitation (SEE) can help to characterise uncertainties in cost-effectiveness decisions • Generating initial estimates of data - informing future data collection • Weights for alternative structural assumptions • Extrapolation beyond observed evidence • Bias correction weights for RCT/observational evidence (unobserved heterogeneity)

  8. Eliciting experts' priors in HCDM • Less formal procedures have historically been used in HCDM • Asking for opinions, e.g. lower and upper ranges, without a structured process • SEE has been used in various disciplines, including weather forecasting and food safety risk assessments • Existing methodological research on elicitation, both generic and discipline-specific, is inconsistent and noncommittal on many elements • No standard protocols exist for the conduct of elicitation to inform HCDM

  9. Existing protocols • Generic protocols (which have been applied in HCDM): • the Sheffield elicitation framework (SHELF) • Cooke’s classical method (Cooke R.M 1991) • Many elements of these protocols may be relevant in CEA • Other elements may be domain specific • Unclear how consistently these protocols have been applied in HCDM

  10. Experiences of elicitation in CE • Review to identify applied SEEs (eliciting uncertainty) in cost-effectiveness modelling • 21 studies found • Elicitation conducted mostly when evidence is absent • Generally poor reporting of methods • Main aims: • summarise the basis for methodological choices made in each application • record the difficulties and challenges reported by the authors in the design, conduct, and analyses.

  11. Summary of applied studies • 4/21 studies were conducted in an early modelling context, where there may not be direct clinical experience with the technology of interest • 8 evaluated a diagnostic or screening strategy • Majority sought to elicit event probabilities and/or relative effectiveness • A few elicited diagnostic accuracy and one time to event

  12. Summary of methods used

  13. Experiences of elicitation in CE • Variation in methods applied; only 3 used existing protocols • Judgements required on a large number of parameters • Substantive experts are not expected to have very strong quantitative skills • Between-expert variation is warranted and expected • Expectation that decision makers will seek assurance on validity • No integration with findings of behavioural psychology research • Further methodological research is important to define best practice, particularly for less normative experts • Guidance required to ensure consistency across evaluations

  14. Determining what might be most appropriate for HCDM • Consider the following principles: • 1: SEE to inform decision-making in healthcare should be transparent and reproducible • 2: Elicited information should be fit-for-purpose to be used as an input to further modelling • 3: The SEE needs to reflect the practical and logistic constraints faced by different contexts/decision-making bodies • 4: The SEE should reflect uncertainty at the individual expert level • 5: The SEE should recognise common biases in elicitation

  15. …cont • 6: The SEE should be suitable for experts who possess substantive skills, which in HCDM are less likely to be normative • 7: Recognise where adaptive skills are required of experts • 8: Recognise between-expert variation, explore its implications, and attempt to understand why it is present • 9: Enable experts to express their beliefs and promote high performance • The principles can then be used to identify choices within SEE that are not supported

  16. Key issues for SEE in HCDM • Some elements may be driven by the context • Who to elicit from • What to elicit • Methods to encode judgements • Validation

  17. Who to elicit from • Substantive and normative skills are the most frequently referenced • Adaptive skills may be relevant • New or emerging technologies in which experts do not have a great deal of experience • Heterogeneity in level of normative skills • Cannot accurately measure these skills • There are no validated strategies on how experts should be sourced and recruited • Smaller samples are recommended for face-to-face group consensus (5 to 10) • No established sample size for individual elicitation • Experts may display significant heterogeneity depending on clinical experience • Causes and implications of between-expert variation are poorly understood

  18. Implications for elicitation method • Skills of experts may influence the elicitation method used • Training imperative • Constraints in HCDM may impede extensive training • Elicitation method should capture any heterogeneity between experts • Understanding how experts formulate their beliefs, and why experts present heterogeneous beliefs, can potentially improve the validity of the SEE

  19. What to elicit? • Many different parameters to inform a CEA • Probabilities, costs, utilities, treatment effects • What quantities should be used to express these parameters? • Non-observables may be challenging for experts with lower levels of normative skills • ‘What to elicit’ becomes an important aspect of the design of elicitation exercises • Different quantities can be elicited that provide information on any single parameter of interest

  20. Example: Negative Pressure Wound Therapy • NPWT associated with limited and sparse evidence • Substantial practical experience • Model to assess cost-effectiveness of NPWT and inform a future trial • [Model diagram: states include closure surgery, unhealed, complications, treatment discontinuation, healed, dead, secondary healing]

  21. Alternatives to elicit • To inform transitions to healing we may elicit: • Direct transition probability • time to x% of patients healed, or • % of patients healed after y amount of time • Judgements defining a Markov process • probability of finding individuals in each of the set of N mutually exclusive states at any time point should sum to one • either elicit a Dirichlet (non-observable), • or N-1 conditionally independent Binomial variables
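As a sketch of the second option above: rather than eliciting a Dirichlet directly, each of the N-1 conditional binomial quantities can be elicited and fitted as a Beta distribution, and the mutually exclusive transition probabilities recovered by multiplication. All Beta parameters and state names below are illustrative, not taken from the NPWT model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical elicited Beta priors for N-1 = 2 conditional binomial
# quantities in a 3-state row of a transition matrix:
#   q1 = P(heal)
#   q2 = P(complication | not healed)
q1 = rng.beta(8, 12, size=10_000)
q2 = rng.beta(3, 17, size=10_000)

# Recover the mutually exclusive transition probabilities
p_heal = q1
p_complication = (1 - q1) * q2
p_unhealed = (1 - q1) * (1 - q2)  # remainder: stay unhealed

# Each sampled row of the Markov transition matrix sums to one
assert np.allclose(p_heal + p_complication + p_unhealed, 1.0)
```

The conditional factorisation lets experts answer observable, binomial-style questions while still yielding a coherent multinomial row.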

  22. Methods to encode judgements • Judgements should reflect epistemic uncertainties (imperfect knowledge) • Aim to represent the degree of belief experts have over an uncertain quantity • When reflecting on their own experiences, experts may instead include some level of variability in their priors • Methods based on probabilities • Variable interval method (VIM) • Fixed interval method (FIM) • Lack of evidence on which best enables experts to express their uncertainty

  23. Methods to elicit distributions • Variable interval method (quantile elicitation or CDF) • The facilitator specifies a probability p and asks the expert for a value x such that P[X ≤ x] = p • The simplest example is eliciting credible/confidence intervals, e.g. a 95% CI from x1 and x2: P[X ≤ x1] = 0.025 and P[X ≤ x2] = 0.975 • Overconfidence in eliciting extremes (tails) of the distribution • Bisection most commonly applied
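A minimal sketch of how bisection output might be turned into a usable prior: fit a Beta distribution to the expert's elicited quartiles by least squares. All numbers are invented for illustration; the slide does not prescribe a fitting method.

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical bisection output for a probability parameter:
# the expert's lower quartile, median and upper quartile.
probs = np.array([0.25, 0.50, 0.75])
quantiles = np.array([0.15, 0.22, 0.30])

# Least-squares fit of a Beta distribution to the elicited quantiles
def loss(log_params):
    a, b = np.exp(log_params)  # log scale keeps both shapes positive
    return np.sum((stats.beta.ppf(probs, a, b) - quantiles) ** 2)

res = optimize.minimize(loss, x0=np.log([2.0, 5.0]), method="Nelder-Mead")
a, b = np.exp(res.x)
# The fitted Beta(a, b) can then serve as the expert's prior
```

With three elicited quantiles and two shape parameters the fit is over-determined, so the fitted quantiles will be close to, but not exactly equal to, the elicited ones.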

  24. Methods to elicit distributions • Fixed interval method • The facilitator specifies intervals of the random variable (xlower and xupper) and asks the expert how much of the probability, p, should be allocated: P[xlower ≤ X ≤ xupper] = p • Leal et al, Value in Health, 2007

  25. Methods to elicit distributions • A variant of the fixed interval method: the bins and chips method (Johnson 2010)
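A hedged sketch of the fixed interval idea: the expert's chip allocation across bins defines a probability per interval, to which a parametric distribution can be fitted. The bin edges, chip counts, and choice of a Beta distribution below are all assumptions for illustration.

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical 'bins and chips' output: the expert places 20 chips
# across fixed intervals of a probability parameter.
bin_edges = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
chips = np.array([2, 8, 6, 3, 1])
elicited_mass = chips / chips.sum()  # probability allocated per bin

# Fit a Beta distribution whose bin probabilities match the chips
def loss(log_params):
    a, b = np.exp(log_params)
    fitted_mass = np.diff(stats.beta.cdf(bin_edges, a, b))
    return np.sum((fitted_mass - elicited_mass) ** 2)

res = optimize.minimize(loss, x0=np.log([2.0, 6.0]), method="Nelder-Mead")
a, b = np.exp(res.x)
```

The fixed interval method only asks the expert for probabilities over pre-set ranges, so the quantitative burden falls on the fitting step rather than on the expert.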

  26. VIM or FIM in HCDM? • Experiment to compare two methods of elicitation (bisection vs. chips and bins) • Based on a simulated learning process • The individual’s knowledge is determined by simulated observations which are recorded • Participants were students at the University of York with a clinical background/clinical training • Outcomes: • Bias - difference in the means of the true and elicited (and fitted) distributions • Uncertainty - ratio of the SDs of the two distributions • Kullback-Leibler divergence (KL) - information lost when one distribution is approximated by another • Participants’ preference for alternative methods
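The three outcome measures can be computed directly once the true and elicited distributions are summarised; treating both as normal distributions (purely illustrative values, not the experiment's data) gives a closed-form KL divergence:

```python
import numpy as np

# Hypothetical true vs elicited distributions, summarised as normals
mu_true, sd_true = 0.30, 0.05
mu_elic, sd_elic = 0.33, 0.04

bias = mu_elic - mu_true               # difference in means
uncertainty_ratio = sd_elic / sd_true  # < 1 suggests overconfidence

# Closed-form KL(true || elicited) for two normal distributions
kl = (np.log(sd_elic / sd_true)
      + (sd_true**2 + (mu_true - mu_elic)**2) / (2 * sd_elic**2)
      - 0.5)
```

A ratio below one flags an expert whose elicited distribution is tighter than the information they were given warrants, which matches the overconfidence pattern the experiment reports.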

  27. Experiment results • Participants generally did not exhibit bias • Did not seem to adjust their estimates of uncertainty sufficiently according to the level of uncertainty in the information provided • Both methods performed similarly, showing only a small difference favouring C&B when precision was higher

  28. Validation • Aim to promote high performance and distinguish between high/low-performing experts • Calibration is most commonly applied to assess adequacy (performance) • Degree of accordance between assessments and actual observed outcomes • Assumes recall of known parameters can be used to discriminate between experts • Identifying known parameters is challenging • Issue of multiple quantities to elicit • Further research is needed to support the use of calibration in HCDM • Can adequacy be determined using characteristics of experts?
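A minimal sketch of the calibration idea described above, assuming each expert has supplied 90% credible intervals for 'seed' quantities whose true values are known (all expert names, seeds, and intervals below are hypothetical):

```python
import numpy as np

# Known values of the seed quantities
seeds = np.array([0.12, 0.40, 0.25, 0.08])

# Each expert's elicited 90% credible intervals for the same seeds
intervals = {
    "expert_A": np.array([[0.05, 0.20], [0.30, 0.55],
                          [0.15, 0.35], [0.02, 0.15]]),
    "expert_B": np.array([[0.10, 0.13], [0.41, 0.44],
                          [0.24, 0.26], [0.09, 0.10]]),
}

# Coverage: share of seed values falling inside each expert's intervals
coverage = {name: np.mean((seeds >= ci[:, 0]) & (seeds <= ci[:, 1]))
            for name, ci in intervals.items()}
# Well-calibrated 90% intervals should cover roughly 90% of the seeds;
# expert_B's narrow intervals miss half, suggesting overconfidence
```

As the slide notes, the hard part in HCDM is finding seed quantities that resemble the target parameters closely enough for this comparison to be informative.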

  29. Case study: weighting using experts’ experience • 18 experienced radiologists ranked tumor characteristics in terms of their importance to detect malignancies • With reference to MRI, estimated the true positives and negatives of PAM using the variable interval method • Linear opinion pooling, weighted for individual experts’ experience Haakma W, Steuten L, Bojke L, Ijzermann M. Belief Elicitation to Populate Health Economic Models of Medical Diagnostic Devices in Development. Appl Health Econ Health Policy (2014)
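A sketch of the pooling step under stated assumptions: three invented experts, Beta distributions fitted to their elicited beliefs about a true-positive rate, and weights proportional to years of experience. None of these numbers come from the case study.

```python
import numpy as np
from scipy import stats

# Evaluation grid for a probability parameter (true-positive rate)
x = np.linspace(0.01, 0.99, 99)

# Hypothetical fitted Beta densities for three experts
densities = [stats.beta.pdf(x, 8, 2),
             stats.beta.pdf(x, 6, 3),
             stats.beta.pdf(x, 9, 2)]

# Experience-based weights, normalised to sum to one
years = np.array([20.0, 5.0, 12.0])
w = years / years.sum()

# Linear opinion pool: weighted average of the individual densities
pooled = sum(wi * d for wi, d in zip(w, densities))
```

Because the pool is a convex combination of proper densities, it is itself a proper density, and the weighting simply tilts it toward the more experienced experts' beliefs.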

  30. Validation in HCDM • Are such methods to calibrate experts appropriate in health care decision making? • Focus on substantiveness of experts • Makes assumptions regarding the relationships between characteristics and performance • How to assess adequacy of adaptive skills? • These skills may be more relevant to inform many cost-effectiveness models

  31. Protocol for SEE in HCDM? • There are many choices in SEE, for which there is no empirical support. • Principles may be unable to provide sufficient justification for discounting particular choices and/or preferring choices above others, e.g. to minimise bias, multiple approaches are available. • A lack of empirical comparison of the techniques, in the context of HCDM, makes it difficult to say conclusively which techniques may be most appropriate. • The specific application and constraints may be a driving factor in defining the choices for the SEE

  32. Excluded choices (selected) • Quantities – odds ratios, credible ranges, descriptions without decomposition/disaggregation, non-observables • Validation – calibration, informativeness scoring • Expert selection – generalists, requirement of normative skills, lack of piloting • Aggregation – not fitting distributions to beliefs • Documentation – lack of thorough documentation & rationales

  33. How the evidence can be used to generate a protocol • Important to recognise where there are choices that are emerging as ‘best practice’ in HCDM, and how these contribute towards the development of a protocol in this context. • Consider how decision makers determine the suitability of the choices in developing a protocol for their needs • Some areas in which there is too much uncertainty - recommendations on further research

  34. Important considerations in developing a protocol for HCDM

  35. *Some* areas for further research • Sample size for individual elicitation • Is it possible to employ more strategic expert recruitment methods in HCDM, for example a profile matrix? • Develop methods to assess the level of skills experts possess that are appropriate for the SEE task, in particular adaptive skills • What training strategies can be utilised to minimise bias? • Can calibration improve the validity of resulting distributions? • Which dependence methods work best for non-normative experts? • Where do consensus approaches work best – procedurally, practically? • Further research on fitting complex distributions • Which synthesis approaches are appropriate where there is significant expert variation? • *More applied examples*

  36. Acknowledgements • MRC elicitation project team • Marta Soares, Dina Jankovic, Aimee Fox, Karl Claxton (York) • Alec Morton, Abigail Coulson (Strathclyde) • Andrea Taylor (Leeds) • Linda Sharples (LSHTM) • Chris Jackson (Cambridge)
