
Consensus working group on the use of evidence in economic decision models: “Ideal” versus practical - issues of using all available evidence to inform decision model parameters. Paul Tappenden, Jim Chilcott, Health Economics and Decision Science (HEDS)





Presentation Transcript


  1. Consensus working group on the use of evidence in economic decision models: “Ideal” versus practical - issues of using all available evidence to inform decision model parameters. Paul Tappenden, Jim Chilcott, Health Economics and Decision Science (HEDS), School of Health and Related Research (ScHARR), 25th July 2005

  2. Presentation focus • Key questions • “How is evidence currently identified and used to inform model parameters within NICE assessments?” • “What would be the implications of implementing NICE’s recommendation regarding the use of all relevant evidence?” • “How should we search for, select and use evidence to inform model parameters?” • Illustrative case study – irinotecan, oxaliplatin and raltitrexed for advanced colorectal cancer • Update of 2000 assessment

  3. Key stages in assessment process • Systematic review of clinical effectiveness • Identification, selection and critical appraisal of existing economic analyses • Model structuring • Identification, selection and ‘appraisal’ of evidence to inform model parameters • Use of evidence extends beyond model parameterisation alone

  4. Systematic review of clinical effectiveness • Process • Identification, selection and critical appraisal of evidence relating to clinical effectiveness of novel therapies vs. standard treatment • Systematic searches to identify available evidence relating to intervention, disease domain, (comparator), RCTs. Routine searches of grey literature • Broader inclusion of non-RCTs in the absence of strong RCTs • Methodological quality appraised using accepted checklists • ACRC case study • Updated systematic review of clinical effectiveness of irinotecan, oxaliplatin and raltitrexed compared to 5-FU/FA in patients with advanced colorectal cancer • Phase 2 trials excluded • 17 RCTs of varying methodological quality • Randomisation to first-line / second-line / sequence

  5. Issues for systematic reviews • Issues • Definitions of “strong” RCTs? • Potentially relevant but weaker studies (RCTs or otherwise) may be excluded from the decision model due to internal and external biases • Should we really use these to inform parameters in the model? If so, how should we account for bias and confounding? • Broader inclusion criteria may lead to substantial increases in the time required from systematic reviewers and modellers

  6. Identification and use of existing economic studies • Process • Existing economic evidence is typically used by TAGs for 3 purposes • 1. Evaluating the need for independent economic assessment • 2. Informing structural assumptions within the model • 3. Informing model parameters • Identified by replacing RCT filter with economic filter • ACRC case study • 11 cost-effectiveness studies of irinotecan, oxaliplatin and raltitrexed in ACRC • Cost-effectiveness analyses all based on trials in which crossovers following progression were unplanned and unrecorded. • Effectiveness and cost-effectiveness estimates based on overall survival were confounded • Problematic outcomes e.g. cost per progression-free life year gained • Existing economic studies flawed but used to inform • types of parameter to be included • parameter values

  7. Issues for economic reviews • Issues • Systematic searches, and hence, cost-effectiveness reviews may be restricted to intervention under appraisal • Broader economic (or clinical) studies may inform structural assumptions and suggest data to inform model parameters • Additional searches → additional time and resource requirements

  8. Model structuring • Process • Parameters used depend on model structure • No guidance on the use of relevant evidence to inform model structure or types of parameters to be included • In practice, model structure guided by • What others have done before • What you’ve done before • What clinical experts say • ACRC case study • Economic model framework based upon criticisms of previous analyses • Led by economic outcomes to be evaluated • Suggested methodology formed from review of earlier studies • Survival modelling using indirect comparison to estimate cost per life year gained & cost per QALY gained
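The “survival modelling using indirect comparison” named on the slide above can be sketched as a Bucher-style adjusted indirect comparison, in which two treatments are compared through a common comparator arm (here 5-FU/FA). This is an illustrative sketch only: the function name and all hazard ratios and standard errors are hypothetical placeholders, not values from the assessment.

```python
import math

# Illustrative Bucher-style indirect comparison: treatments A and B have each
# been trialled against a common comparator C (e.g. 5-FU/FA), but not head to
# head. All inputs below are made-up placeholders.

def indirect_hr(hr_a_vs_c, se_log_a, hr_b_vs_c, se_log_b):
    """Indirect hazard ratio of A vs B via common comparator C.

    log HR(A vs B) = log HR(A vs C) - log HR(B vs C);
    the variances of the two log hazard ratios add.
    """
    log_hr = math.log(hr_a_vs_c) - math.log(hr_b_vs_c)
    se = math.sqrt(se_log_a ** 2 + se_log_b ** 2)
    return math.exp(log_hr), se

# Hypothetical trial results: A vs C and B vs C.
hr_ab, se_ab = indirect_hr(0.80, 0.10, 0.95, 0.12)
```

Note that the indirect estimate inherits the uncertainty of both trials, which is one reason the resulting cost-effectiveness estimates carry wider uncertainty than a head-to-head comparison would.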

  9. Issues for model structuring • Issues • May be guided by access to data • “Relevant evidence” for this stage is typically clinical opinion • Whose opinion(s)? • Dealing with lack of clinical consensus between stakeholders • Absence of formal methods for problem-structuring in health economic modelling

  10. Model parameterisation • Process • Evidence used to inform model parameters is typically obtained from a diverse range of sources • Parameters for treatment effect obtained directly from systematic review of clinical effectiveness • Other parameters may be identified by systematic/topic searches • Typical sources include • literature • clinical opinion • cross-sectional studies & surveys • registry data • audit studies etc. • Sources usually reported • Search methods not usually reported • This evidence is usually appraised only informally • Selection criteria for such evidence are not usually fully reported

  11. ACRC model parameters • Chemotherapies for ACRC • Effectiveness parameters • Overall survival Kaplan-Meier curves for baseline treatment (Weibull parameters) • Hazard ratios for other treatments • Utility scores • Resource use & cost parameters • Mean dosage per cycle / RDI • Number of cycles of each chemotherapy regimen actually received • Acquisition costs • Administration resource use & costs
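The effectiveness parameters listed above (a Weibull fit to the baseline Kaplan-Meier curve, plus hazard ratios for the other treatments) can be sketched as follows. Under a proportional-hazards assumption, applying a hazard ratio to a Weibull baseline simply rescales the Weibull scale parameter. The scale, shape and hazard-ratio values below are hypothetical, not the assessment's fitted parameters.

```python
import math

# Illustrative Weibull overall-survival model for the baseline regimen,
# with a proportional-hazards ratio applied for a comparator regimen.
# All parameter values are made-up placeholders.

def weibull_survival(t, scale, shape, hr=1.0):
    """S(t) for a Weibull baseline with hazard ratio hr (PH assumption)."""
    return math.exp(-hr * (t / scale) ** shape)

def mean_survival(scale, shape, hr=1.0):
    """Mean survival time: the HR rescales the Weibull scale parameter,
    and the Weibull mean is scale * Gamma(1 + 1/shape)."""
    return scale * hr ** (-1.0 / shape) * math.gamma(1.0 + 1.0 / shape)

baseline_mean = mean_survival(scale=14.0, shape=1.2)            # months
treated_mean = mean_survival(scale=14.0, shape=1.2, hr=0.8)     # months
life_years_gained = (treated_mean - baseline_mean) / 12.0
```

Mean survival from the fitted curves, multiplied by utility scores, then gives the quality-adjusted life-year estimates that feed the cost-per-QALY results.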

  12. Effectiveness parameters • 17 Phase 3 RCTs included in review • “Best available evidence” identified from systematic searches and review → 15 RCTs not used in model due to confounding of overall survival benefits and absence of resource use data • Overall survival and progression-free survival data made available to ScHARR-TAG by MRC CTU, and from trial publication (Tournigand et al)

  13. Resource use and cost parameters • Chemotherapy resource use (no. cycles, RDI) obtained directly from MRC presentations, Tournigand paper, personal communication • Evidence relating to costs of line insertion, AEs, consultation, diagnostic tests, pharmacy preparation and dispensing etc. identified from systematic search of cost and cost-effectiveness studies • Parameter estimates were usually hidden away in tables or text – not the subject of the study • Where multiple estimates were identified, the high-cost assumption was used • Evidence on inpatient/outpatient administration obtained from sponsor submission, checked with clinical advisors • Pharmacy cost revised following peer review comments • Drug costs from BNF • Administration resource use assumptions from advisors, costed using PSSRU • Results and review of systematic searches not formally reported
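The per-regimen cost build-up described above (acquisition cost scaled by relative dose intensity, plus administration cost, summed over the cycles actually received) might be computed along these lines. The function and all doses and unit costs are illustrative placeholders, not BNF or PSSRU figures.

```python
# Illustrative chemotherapy cost build-up per regimen; every number below
# is a made-up placeholder, not a value from the assessment.

def regimen_cost(n_cycles, planned_dose_mg, rdi, cost_per_mg, admin_cost_per_cycle):
    """Total cost = cycles received x (acquisition cost at delivered dose + admin cost).

    Delivered dose = planned dose x relative dose intensity (RDI).
    """
    drug_cost = planned_dose_mg * rdi * cost_per_mg
    return n_cycles * (drug_cost + admin_cost_per_cycle)

total = regimen_cost(n_cycles=8, planned_dose_mg=350, rdi=0.9,
                     cost_per_mg=1.5, admin_cost_per_cycle=200)
```

Using the number of cycles actually received, rather than the planned number, matters here because early discontinuation and dose reductions were common in the trials.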

  14. Issues for model parameterisation • Issues • Limited methods for the selection of non-clinical evidence • Difficult to gauge reliability • Subjective judgement • How well written the paper is • How recent the evidence is • Internal/external validity (?) • Consistency between multiple sources • Identifying all available evidence doesn’t necessarily solve the problem • Systematic searches and review of all model parameters could have a considerable impact on the time and resource requirements of the model population process

  15. Summary issues • Decreasing marginal return from additional review effort • At what point do we stop looking for evidence… • When we have found multiple sources? • When we have found nothing? • Parsimony • Definition of ‘relevant’ evidence is highly subjective • Data access • Identifying, selecting and appraising all relevant evidence is not a panacea. There is always a gap between • the parameters that the data tell us about; and • the parameters that we need to populate the model • need to allow for additional uncertainty
