
Appraisal, Extraction and Pooling of Quantitative Data for Reviews of Effects


Presentation Transcript


  1. Appraisal, Extraction and Pooling of Quantitative Data for Reviews of Effects - from experimental, observational and descriptive studies

  2. Introduction • Recap of the Introductory module: developing a question (PICO), inclusion criteria, search strategy, selecting studies for retrieval • This module considers how to appraise, extract and synthesize evidence from experimental, observational and descriptive studies.

  3. Program Overview

  4. Program Overview

  5. Session 1: The Critical Appraisal of Studies

  6. Why Critically Appraise? • Combining the results of poor quality research may lead to biased or misleading estimates of effectiveness • Example screening flow: 1004 references identified; 172 duplicates removed, leaving 832 references; titles/abstracts scanned, with 715 not meeting the inclusion criteria; 117 studies retrieved in full; 82 of these do not meet the inclusion criteria; 35 studies remain for critical appraisal

  7. The Aims of Critical Appraisal • To establish validity • To establish the risk of bias

  8. Internal & External Validity • Internal validity: is the observed relationship between the independent and dependent variables genuine? • External validity: can the results be used locally?

  9. Strength & Magnitude How large is the effect? How internally valid is the study? Strength Magnitude & Precision

  10. Clinical Significance and Magnitude of Effect • Pooling of homogeneous studies of effect or harm • Weigh the size of the effect against the cost/resources of change • Determine the precision of the estimate (see the pooling sketch below)
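As an illustration of pooling homogeneous effect estimates and gauging their precision, here is a minimal sketch of inverse-variance (fixed-effect) pooling. The effect sizes, standard errors, and number of studies are invented for illustration and are not taken from the module.

```python
# Minimal sketch: inverse-variance (fixed-effect) pooling of log risk ratios.
# The effect sizes and standard errors below are made up for illustration.
import math

# (log risk ratio, standard error) for three hypothetical homogeneous studies
studies = [(-0.22, 0.10), (-0.15, 0.08), (-0.30, 0.12)]

weights = [1 / se**2 for _, se in studies]               # inverse-variance weights
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# A 95% confidence interval describes the precision of the pooled estimate
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled RR: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(ci_low):.2f} to {math.exp(ci_high):.2f})")
```

With real studies, heterogeneity would need to be checked first, and a random-effects model considered; this sketch shows only the fixed-effect arithmetic.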

  11. Assessing the Risk of Bias • Numerous tools are available for assessing the methodological quality of clinical trials and observational studies. • JBI requires the use of a specific tool for assessing risk of bias in each included study. • ‘High quality’ research methods can still leave a study at important risk of bias (e.g. when blinding is impossible). • Some markers of quality are unlikely to have direct implications for risk of bias (e.g. ethical approval, sample size calculation).

  12. Sources of Bias • Selection • Performance • Detection • Attrition

  13. Selection Bias • Systematic differences between participant characteristics at the start of a trial • Systematic differences occur during allocation to groups • Can be avoided by concealment of allocation of participants to groups

  14. Performance Bias • Systematic differences in the intervention of interest, or the influence of concurrent interventions • Systematic differences occur during the intervention phase of a trial • Can be avoided by blinding of investigators and/or participants to group

  15. Detection Bias • Systematic differences in how the outcome is assessed between groups • Systematic differences occur at measurement points during the trial • Can be avoided by blinding of outcome assessor

  16. Attrition Bias • Systematic differences in withdrawals and exclusions between groups • Can be avoided by: • Accurate reporting of losses and reasons for withdrawal • Use of ITT analysis

  17. Ranking the “Quality” of Evidence of Effectiveness • To what extent does the study design minimize bias and demonstrate validity? • Generally linked to actual study design in ranking evidence of effectiveness • Thus, a “hierarchy” of evidence is most often used, with levels of quality equated with specific study designs

  18. Hierarchy of Evidence - Effectiveness: Example 1 • Grade I - systematic reviews of all relevant RCTs • Grade II - at least one properly designed RCT • Grade III-1 - controlled trials without randomisation • Grade III-2 - cohort or case control studies • Grade III-3 - multiple time series, or dramatic results from uncontrolled studies • Grade IV - opinions of respected authorities & descriptive studies (NH&MRC 1995)

  19. Hierarchy of Evidence - Effectiveness: Example 2 • Grade I - systematic review of all relevant RCTs • Grade II - at least one properly designed RCT • Grade III-1 - well designed pseudo-randomised controlled trials • Grade III-2 - cohort studies, case control studies, interrupted time series with a control group • Grade III-3 - comparative studies with historical control, two or more single-arm studies, or interrupted time series without a control group • Grade IV - case series (NH&MRC 2001)

  20. JBI Levels of Evidence - Effectiveness

  21. The Critical Appraisal Process • Every review must set out to use an explicit appraisal process. Essentially: • appraisers need a good understanding of research design; and • the use of an agreed checklist is usual.

  22. Session 2: Appraising RCTs and experimental studies

  23. RCTs • RCTs and quasi (pseudo) RCTs provide the most robust form of evidence for effects • The ideal design for experimental studies • They focus on establishing certainty through measurable attributes • They provide evidence on whether or not a causal relationship exists between a stated intervention and a specific, measurable outcome, and on the direction and strength of that relationship • These characteristics are associated with the reliability and generalizability of experimental studies

  24. Randomised Controlled Trials • Evaluate the effectiveness of a treatment/therapy/intervention • Randomization is critical • Properly performed RCTs reduce bias, confounding, and the likelihood that results are due to chance

  25. Experimental studies • Three essential elements • Randomisation (where possible) • Researcher-controlled manipulation of the independent variable • Researcher control of the experimental situation

  26. Other experimental studies • Quasi-experiments: studies without a true method of randomization to treatment groups • Quasi-experimental designs without control groups • Quasi-experimental designs that use control groups but not pre-tests • Quasi-experimental designs that use control groups and pre-tests

  27. Sampling • Selecting participants from population • Inclusion/exclusion criteria • Sample should represent the population

  28. Sampling Methods • Probabilistic (Random) sampling • Consecutive • Systematic • Convenience
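A minimal sketch contrasting the sampling methods listed above, drawn from one hypothetical sampling frame; the frame size, sample size, and selection rules are illustrative assumptions only.

```python
# Minimal sketch contrasting three sampling methods on one hypothetical frame.
import random

population = list(range(1, 101))   # sampling frame of 100 hypothetical participant IDs
n = 10

# Probabilistic (simple random) sampling: every member has an equal chance
random_sample = random.sample(population, n)

# Systematic sampling: every k-th member after a random start
k = len(population) // n
start = random.randrange(k)
systematic_sample = population[start::k][:n]

# Convenience sampling: whoever is easiest to reach (here, simply the first n)
convenience_sample = population[:n]

print(random_sample, systematic_sample, convenience_sample, sep="\n")
```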

  29. Randomization

  30. Randomization Issues • Simple methods (e.g. tossing a coin or rolling a die) may result in unequal group sizes • Block randomization keeps group sizes balanced • Chance imbalances can still introduce confounding factors • Stratification prior to randomization ensures that important baseline characteristics are even in both groups

  31. Block Randomization • All possible combinations, ignoring unequal allocation: 1 AABB, 2 ABAB, 3 ABBA, 4 BABA, 5 BAAB, 6 BBAA • Use a table of random numbers to generate the allocation sequence, e.g. 533 2871 • Minimize bias by changing the block size (see the sketch below)
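A minimal sketch of permuted-block randomization with 1:1 allocation; the block sizes (4 and 6) and the function name are illustrative assumptions, chosen to reflect the slide's advice to vary the block size.

```python
# Minimal sketch: permuted-block randomization with 1:1 allocation.
# Block sizes are varied at random (4 or 6) to make the sequence harder to predict.
import random

def block_randomize(n_participants, block_sizes=(4, 6)):
    allocation = []
    while len(allocation) < n_participants:
        size = random.choice(block_sizes)      # vary block size to minimize bias
        block = ["A"] * (size // 2) + ["B"] * (size // 2)
        random.shuffle(block)                  # permute treatments within the block
        allocation.extend(block)
    return allocation[:n_participants]

print(block_randomize(20))   # e.g. ['A', 'B', 'B', 'A', ...] with near-equal group sizes
```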

  32. Stratified Randomization

  33. Blinding • Method to eliminate bias arising from human behaviour • Applies to participants, investigators, assessors etc. • Blinding of allocation • Single, double and triple blinded

  34. Blinding (Schulz, 2002)

  35. Intention to Treat • ITT analysis is an analysis based on the initial treatment intent, not on the treatment eventually administered. • Avoids various misleading artifacts that can arise in intervention research. • E.g. if people who have a more serious problem tend to drop out at a higher rate, even a completely ineffective treatment may appear to be providing benefits if one merely compares those who finish the treatment with those who were enrolled in it. • Everyone who begins the treatment is considered to be part of the trial, whether they finish it or not.
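A minimal sketch of the point made above: analysing everyone in the group they were randomized to (ITT) versus comparing only those who finished treatment. The participant records and response rates are invented for illustration.

```python
# Minimal sketch: intention-to-treat vs completers-only analysis on invented data.
# Each record: (allocated_group, completed_treatment, improved)
participants = [
    ("treatment", True,  True), ("treatment", True,  True),
    ("treatment", False, False), ("treatment", False, False),
    ("control",   True,  True), ("control",   True,  False),
    ("control",   True,  True), ("control",   True,  False),
]

def response_rate(records):
    return sum(improved for _, _, improved in records) / len(records)

# ITT: everyone analysed in the group they were randomized to, dropouts included
itt_treatment = [p for p in participants if p[0] == "treatment"]
itt_control   = [p for p in participants if p[0] == "control"]

# Completers-only: dropouts excluded, which can exaggerate apparent benefit
completers_treatment = [p for p in itt_treatment if p[1]]

print("ITT treatment response:", response_rate(itt_treatment))                     # 0.5
print("ITT control response:  ", response_rate(itt_control))                       # 0.5
print("Completers-only treatment response:", response_rate(completers_treatment))  # 1.0
```

Here the treatment is no better than control under ITT, yet looks perfect if only completers are counted, which is the misleading artifact the slide describes.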

  36. Minimizing Risk of Bias • Randomization • Allocation • Blinding • Intention to treat (ITT) analysis

  37. Appraising RCTs/Quasi-Experimental Studies: the JBI-MAStARI Instrument

  38. Assessing Study Quality as a Basis for Inclusion in a Review • Studies are ranked from high quality to poor quality, and a quality cut-off point is agreed • Studies above the cut-off are included in the review; those below it are excluded
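A minimal sketch of applying an agreed quality cut-off to appraisal scores; the study names, scores, and cut-off value are invented and do not correspond to any JBI scoring rules.

```python
# Minimal sketch: applying an agreed quality cut-off to critical appraisal scores.
# Study names, scores, and the cut-off are invented for illustration.
appraisal_scores = {
    "Study A": 9, "Study B": 7, "Study C": 4, "Study D": 8, "Study E": 3,
}
cut_off = 6   # agreed by the reviewers before appraisal begins

included = {s: v for s, v in appraisal_scores.items() if v >= cut_off}
excluded = {s: v for s, v in appraisal_scores.items() if v < cut_off}

print("Included:", sorted(included))   # higher-quality studies enter the review
print("Excluded:", sorted(excluded))   # poor-quality studies are excluded
```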

  39. Group Work 1 • Working in pairs, critically appraise the two papers in your workbook • Reporting Back

  40. Session 3: Appraising Observational Studies

  41. Rationale and potential of observational studies as evidence • Account for the majority of published research studies • Need to clarify which designs to include • Need appropriate critical appraisal/quality assessment tools • Concerns about methodological issues inherent to observational studies: confounding, biases, differences in design • Precise but spurious results

  42. Appraisal of Observational Studies • Critical appraisal and assessment of quality is often more difficult than for RCTs • Using scales/checklists developed for RCTs may not be appropriate • Methods and tools are still being developed and validated • Some published tools are available

  43. Confounding • The apparent effect is not the true effect • May be other factors relevant to outcome in question • Can be important threat to validity of results • Adjustments for confounding factors can be made - multivariate analysis • Authors often look for plausible explanation for results
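A minimal sketch of adjusting for a confounder with a multivariable (logistic regression) model, using simulated data in which age drives both exposure and outcome so that the crude association is spurious; the variable names and effect sizes are invented, and pandas/statsmodels are assumed to be available.

```python
# Minimal sketch: adjusting for a confounder with multivariable logistic regression.
# The dataset, variable names, and effect sizes are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(60, 10, n)                              # confounder: older people more exposed
exposure = rng.binomial(1, 1 / (1 + np.exp(-(age - 60) / 10)))
# Outcome depends on age but NOT on exposure, so any crude association is spurious
outcome = rng.binomial(1, 1 / (1 + np.exp(-(age - 65) / 5)))
df = pd.DataFrame({"age": age, "exposure": exposure, "outcome": outcome})

crude = smf.logit("outcome ~ exposure", data=df).fit(disp=False)
adjusted = smf.logit("outcome ~ exposure + age", data=df).fit(disp=False)

print("Crude OR:   ", np.exp(crude.params["exposure"]).round(2))      # inflated by confounding
print("Adjusted OR:", np.exp(adjusted.params["exposure"]).round(2))   # closer to 1 after adjustment
```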

  44. Bias • Selection bias: the study sample may differ from the population with the same condition • Follow-up bias: attrition may be related to differences in outcome • Measurement/detection bias: knowledge of the outcome may influence assessment of the exposure, and vice versa

  45. Observational Studies - Types • Cohort studies • Case-control studies • Case series/case report • Cross-sectional studies

  46. Cohort Studies • A group of people who share a common characteristic • Useful to determine the natural history and incidence of a disorder or exposure • Two types: prospective (longitudinal) and retrospective (historic) • Aid in studying causal associations (a worked example follows below)
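A minimal worked example of the kind of effect estimate a cohort study yields: incidence in the exposed and unexposed groups and the resulting risk ratio with an approximate confidence interval. All counts are invented for illustration.

```python
# Minimal sketch: incidence and risk ratio from a hypothetical cohort 2x2 table.
# Counts are invented for illustration.
import math

exposed_cases, exposed_total = 30, 200        # exposed cohort
unexposed_cases, unexposed_total = 15, 300    # unexposed cohort

incidence_exposed = exposed_cases / exposed_total        # 0.15
incidence_unexposed = unexposed_cases / unexposed_total  # 0.05
risk_ratio = incidence_exposed / incidence_unexposed     # 3.0

# Approximate 95% CI for the risk ratio, computed on the log scale
se_log_rr = math.sqrt(1/exposed_cases - 1/exposed_total
                      + 1/unexposed_cases - 1/unexposed_total)
ci = (math.exp(math.log(risk_ratio) - 1.96 * se_log_rr),
      math.exp(math.log(risk_ratio) + 1.96 * se_log_rr))
print(f"Risk ratio {risk_ratio:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```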
