
Randomized Controlled Trials, Systematic Reviews and Meta-analysis



Presentation Transcript


  1. CORE Jun 11 2008 Randomized Controlled Trials, Systematic Reviews and Meta-analysis Achilleas Thoma, MD, MSc, FRCS(C) Division of Plastic Surgery, Departments of Surgery & Clinical Epidemiology and Biostatistics McMaster University

  2. Learning Objectives • What an RCT and a Systematic Review are. • Why we use them.

  3. Evidence-based Surgery Definition: • The conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients

  4. Hierarchy of Evidence (highest to lowest) • Systematic Reviews of Randomized Controlled Trials (Meta-analysis) • Single Randomized Controlled Trial (RCT) • Systematic Review of Observational Studies Addressing Patient-Important Outcomes • Single Observational Study Addressing Patient-Important Outcomes • Physiologic Studies • Unsystematic Clinical Observations

  5. Hierarchy of Evidence • A well-conducted systematic review or meta-analysis of well executed randomized controlled trials provides the highest level of evidence to support an answer to a surgical question

  6. Randomized Controlled Trial (RCT) Definition: • An experiment in which individuals are randomly allocated to receive or not receive an experimental preventative, therapeutic, or diagnostic procedure and then followed to determine the effect of the intervention

  7. RCTs in Surgery • Most scientifically rigorous study design to evaluate the effect of a new surgical intervention • Offers the maximum protection against biases • Balances both known and unknown prognostic factors across treatment groups

  8. RCTs in Surgery • Not all of the questions we face in surgery can be answered by the RCT design • Must consider the plausibility and feasibility of the research question

  9. Plausibility • Is the question answerable or not? For example: Population = children with a congenitally absent ear Intervention = Nagata technique Comparator = “genetic engineering method” Outcome = “new” ear What is the problem here?

  10. Feasibility • Can the study design we choose answer the research question? For example: Population = cosmetic abdominoplasty patients Intervention = intermittent lower extremity pump Comparator = low-molecular-weight heparin Outcome = prevention of fatal pulmonary embolism What do you think of this RCT?

  11. Questions of Harm • For questions of harm, appropriate study designs include case-control studies and cohort studies. For example: Population = patients with replanted digits Intervention = continue smoking Comparator = non-smoking Outcome = short-term survival of replanted digits What is the problem here?

  12. Conducting an RCT Considerations: • Surgical equipoise • Surgical learning curve • Differential care • Randomization • Concealment • Expertise-based design • Blinding • Intention to treat analysis • Loss to follow-up • Treatment effects and implications for sample size calculations.

  13. Surgical Equipoise • Equipoise = a state of genuine uncertainty regarding the benefits and harms that may result from each of two or more surgical procedures. • There is no scientific or ethical basis for preferring Surgery A over Surgery B for a particular patient

  14. Surgical Learning Curve • Surgeon’s cumulative experience • Continuous refinement of patient selection, operative technique, and post-operative care • What is the problem here?

  15. Differential Care • In OR: better hemostasis, give antibiotics to A and not B, staff person does surgery as compared to resident • Outside the OR: more frequent follow-up, physiotherapy to A but not to B • What is the problem here?

  16. Randomization • Even or odd number birth date • Flip a coin • Alternate chart numbers • Random number tables • Automated telephone system • Internet randomization system What are appropriate randomization techniques?
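As an illustration of what a computer-generated allocation sequence can look like, here is a minimal sketch of permuted-block randomization in Python; the block size, arm labels, seed, and function name are illustrative assumptions, not part of the lecture.

```python
# Minimal sketch of permuted-block randomization (illustrative only;
# block size, arm labels, and seed are hypothetical choices).
import random

def permuted_block_sequence(n_patients, block_size=4, arms=("A", "B"), seed=2008):
    rng = random.Random(seed)          # fixed seed makes the sequence reproducible for auditing
    per_arm = block_size // len(arms)
    sequence = []
    while len(sequence) < n_patients:
        block = list(arms) * per_arm   # e.g. ["A", "B", "A", "B"]
        rng.shuffle(block)             # random order within each block keeps the arms balanced
        sequence.extend(block)
    return sequence[:n_patients]

print(permuted_block_sequence(10))     # e.g. ['B', 'A', 'A', 'B', ...]
```

In practice such a sequence is generated and held centrally (for example by an automated telephone or internet randomization system), which also supports the allocation concealment discussed on slide 18.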

  17. Randomization • Randomize as close to surgery as possible • For example: Breast reduction RCT

  18. Concealment • Randomization by itself is not adequate. We need to conceal the randomization process. • In some trials, envelopes containing the treatment allocation are used • What is the problem here?

  19. Expertise-based Design • Patients are randomized to surgeons who do their operation of preference, as opposed to randomizing patients to a treatment group. • Each participating centre must have some surgeons doing each type of operation

  20. Blinding There are 6 potential levels of blinding: • The patients • The clinicians who administer the treatment • The clinicians who care for the patients during the trial • The individuals who assess the patients throughout the trial and collect the data • The data analyst • The investigators who interpret and write the results of the trial

  21. Blinding • Surgeons ? • Patients ?

  22. Blinding Placebo Effect • If a patient knows he/she received a treatment that he/she believes is better, then he/she may feel better even if there is no underlying benefit. • The same holds true for the surgeon or designated assessor for the study if they are not blinded

  23. Intention to Treat Analysis • The analysis of the outcomes is based on the treatment arm to which patients were randomized, and not on which surgical treatment they received. • Includes all patients, regardless of whether they actually satisfied the entry criteria, received the treatment to which they were randomly allocated, or deviated from the protocol.
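A small, hypothetical worked example of the idea (the patient records and outcome scores below are invented for illustration): patients who crossed over are still analysed in the arm to which they were randomized.

```python
# Hypothetical records: (assigned arm, arm actually received, outcome score).
patients = [
    ("A", "A", 72), ("A", "A", 65), ("A", "B", 58),   # one patient crossed over from A to B
    ("B", "B", 60), ("B", "B", 55), ("B", "A", 70),   # one patient crossed over from B to A
]

def mean_outcome_by_arm(records, arm_index):
    groups = {}
    for record in records:
        groups.setdefault(record[arm_index], []).append(record[2])
    return {arm: sum(scores) / len(scores) for arm, scores in groups.items()}

print("Intention to treat:", mean_outcome_by_arm(patients, 0))  # grouped by assigned arm
print("As treated:", mean_outcome_by_arm(patients, 1))          # contrast: grouped by arm actually received
```

The intention-to-treat comparison preserves the prognostic balance created by randomization; the "as treated" comparison does not.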

  24. Loss to Follow-Up • Failure to account for all patients at end of the study may invalidate the RCT and reduce study power • Researchers suggest that < 5% loss probably leads to little bias, whereas > 20% loss potentially threatens validity

  25. Treatment Effect and Sample Size Treatment Effect: • Effect size: The size of the difference that the study is designed to detect. The minimum clinically important difference (MCID) is the smallest difference between 2 groups that would be clinically worth detecting.

  26. Treatment Effect and Sample Size Sample Size: • Power increases with the number of participants. The larger the sample size, the greater the power and the more information about the true difference is obtained.

  27. Hulley SB, Cummings SR, Browner WS, Grady D, Hearst N, Newman TB. Designing Clinical Research, 2nd Edition. Lippincott Williams & Wilkins, Philadelphia, 2001

  28. Power Analysis • To make a statistical inference, we need to set two hypotheses: • Null hypothesis (there is no difference) • Alternate hypothesis (there is a difference).

  29. Power Analysis • Type I error, alpha (α): rejecting the null hypothesis when it is in fact true. • Type II error, beta (β): failing to reject the null hypothesis when it is in fact false.

  30. Understanding Power Analysis • Typically α = 0.05 and β = 0.2, and the power = 1 - β = 0.80

  31. Sample Size Calculation
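A minimal sketch of one common calculation, assuming a continuous outcome and two equal-sized groups: n per group = 2(z₁₋α/₂ + z₁₋β)² σ² / Δ², where Δ is the minimum clinically important difference and σ the outcome's standard deviation, using the conventional α = 0.05 and power = 0.80 from slide 30. The numeric values of σ and Δ below are illustrative, not from the lecture.

```python
# Sketch of the usual two-sample, equal-group-size calculation for a
# continuous outcome: n per group = 2 * (z_alpha/2 + z_beta)^2 * sigma^2 / delta^2.
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05 (two-sided)
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for power = 0.80
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Illustrative example: detect a minimum clinically important difference of
# 10 points when the outcome's standard deviation is 20 points.
print(n_per_group(delta=10, sigma=20))   # about 63 patients per group
```

The formula makes the relationships on the previous slides concrete: a smaller Δ or a larger σ drives the required sample size up, and demanding more power does the same.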

  32. Conducting an RCT in Surgery • In order to conduct high-quality, large RCTs, multi-centre collaboration is required. • Collaboration with biostatisticians, health economists, epidemiologists, and clinical trialists

  33. Beware! • Not all published RCTs are reported well or are of good methodological quality. • Guidelines exist for: • The reporting of RCTs (e.g. CONSORT) and • The appraisal of RCT methodological quality (e.g. the Detsky Quality Assessment Scale)

  34. Quality of Reporting • Consolidated Standards of Reporting Trials (CONSORT) • Checklist of essential items that should be included in reports of RCTs and a diagram for documenting the flow of participants through a trial

  35. CONSORT Checklist 1. How participants were allocated to interventions 2. Scientific background and explanation of rationale. 3a. Eligibility criteria for participants. 3b. The settings & locations where the data were collected. 4. Precise details of the interventions 5. Specific objectives and hypotheses 6a. Defined primary and secondary outcome measures. 6b. Methods used to enhance the quality of measurements 7a. How sample size was determined. 7b. Explanation of any interim analyses 8a. Method used to generate the random allocation sequence. 8b. Details of any restriction [of randomization] 9. Method used to implement the random allocation sequence 10. Who generated the allocation sequence, who enrolled participants, and who assigned participants to their groups.

  36. Assessing Methodological Quality • In a quality assessment scale, the responses to the individual items are summed to create an overall summary score representing trial quality • e.g. Detsky Quality Scale, Jadad Scale

  37. Jadad Scale Please read the article and try to answer the following questions (see attached instructions): • Was the study described as randomized (this includes the use of words such as randomly, random, and randomization)? • Was the study described as double blind? • Was there a description of withdrawals and dropouts?

  38. Jadad Scale cont. Scoring the items: • Give a score of 1 point for each “yes” and 0 points for each “no.” There are no in-between marks. • Give 1 additional point if: for question 1, the method used to generate the randomization sequence was described and was appropriate (table of random numbers, computer generated, etc.); and/or for question 2, the method of double blinding was described and was appropriate (identical placebo, active placebo, dummy, etc.) • Deduct 1 point if: for question 1, the method used to generate the randomization sequence was described and was inappropriate (patients were allocated alternately, or according to date of birth, hospital number, etc.); and/or for question 2, the study was described as double blind but the method of blinding was inappropriate (e.g., comparison of tablet versus injection with no double dummy)
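The scoring rules above can be written as a small function. This is a sketch under my own naming assumptions; the argument names and string labels are not part of the scale itself.

```python
# Sketch of the Jadad scoring rules from slide 38 (input names are hypothetical).
def jadad_score(randomized, double_blind, withdrawals_described,
                randomization_method="not described",   # "appropriate", "inappropriate", or "not described"
                blinding_method="not described"):
    # One point for each "yes" answer to the three questions.
    score = int(randomized) + int(double_blind) + int(withdrawals_described)
    if randomized:
        if randomization_method == "appropriate":
            score += 1                  # e.g. table of random numbers, computer generated
        elif randomization_method == "inappropriate":
            score -= 1                  # e.g. alternation, date of birth, hospital number
    if double_blind:
        if blinding_method == "appropriate":
            score += 1                  # e.g. identical placebo, double dummy
        elif blinding_method == "inappropriate":
            score -= 1                  # e.g. tablet vs. injection with no double dummy
    return score                        # ranges from 0 to 5

print(jadad_score(True, True, True, "appropriate", "appropriate"))  # 5
```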

  39. Jadad Scale cont. Thoma et al (Plast. Reconstr. Surg. 114: 1137, 2004.)

  40. Systematic Reviews Definition: • “The application of scientific strategies that limit bias to the systematic assembly, critical appraisal, and synthesis of all relevant studies on a specific topic”

  41. Systematic Reviews Aim to: • Summarize the existing literature • Resolve conflicts or controversies in the literature • Clarify the results of multiple studies • Evaluate the need for further studies

  42. Conducting Systematic Reviews • Before embarking on a RCT, you must be familiar with the “cutting edge” of knowledge for a health care problem • It is important to summarize this “cutting edge evidence” in the form of a systematic review and apply it to your clinical practice

  43. Narrative Vs. Systematic Reviews Cook DJ, Mulrow CD, Haynes RB. Ann Intern Med 1997;126(5):376-380

  44. Meta-Analysis • The results of the primary studies that meet the standards for inclusion in a review are mathematically pooled to give a result that is more precise because of the overall increase in numbers of study participants contributing data.
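As a hedged illustration of the pooling step, here is a minimal fixed-effect, inverse-variance sketch. The study estimates and standard errors are invented, and a real meta-analysis would also examine heterogeneity and may use a random-effects model instead.

```python
# Minimal sketch of fixed-effect, inverse-variance pooling (illustrative numbers).
from math import sqrt

# (effect estimate, standard error) for each included study
studies = [(0.40, 0.20), (0.25, 0.15), (0.55, 0.30)]

weights = [1 / se ** 2 for _, se in studies]                          # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))                                    # smaller than any single study's SE

print(f"pooled estimate = {pooled:.2f}")
print(f"95% CI = {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f}")
```

The pooled standard error is smaller than that of any individual study, which is why the combined result is more precise.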

  45. Meta-analysis Aims to: • Resolve controversy over whether a true effect exists, when results have been variable in single studies • Validate a statistically non-significant but clinically important result in a small study
