
Evaluating Medical Literature


Presentation Transcript


  1. Evaluating Medical Literature Dr. Shounak Das Feb. 28, 2006

  2. Objectives • Gain a basic framework for ways to use the medical literature (make sure I sign in so I don’t get in trouble) • Understand the basic steps behind analyzing any medical study (get a free lunch) • Have a step-by-step approach to analyze papers specifically geared towards therapy (catch up on the latest gossip) • Have a resource for further exploring effective use of medical literature (catch a few zzz’s) • Show pictures of Daniel (catch some more zzz’s)

  3. Outline I. Overview II. Evaluation of Primary Studies Assessing Therapies • Users’ Guides to the Medical Literature • Pictures of Daniel

  4. I. Overview • Ways of using the medical literature • Problem-solving using the medical literature

  5. Ia. Ways of Using the Medical Literature • Background and foreground questions • Background questions = background medical knowledge • “medical student” learning • textbooks • Foreground questions = patient management questions • “clinician” learning • literature searches

  6. Ia. Ways of Using the Medical Literature • Browsing and problem-solving • Browsing = subscribe, skim, and read • <10% of published studies are high quality + clinically relevant • the pile of journals grows until it is too intimidating to deal with and is then thrown out or shelved for display, after which the cycle restarts • consider secondary journals (e.g. ACP Journal Club) • Problem-solving = specific questions regarding patient management

  7. Ib. Problem-Solving Using the Medical Literature • Frame the question: • the population • who are the relevant patients? • the intervention or exposure • diagnostic test, drug, surgical intervention • the outcome • patient-relevant consequences we are interested in

  8. Ib. Problem-Solving Using the Medical Literature • Searching for the answer – 4 types of clinical questions: • Therapy • determine the effect of a treatment • Harm • ascertain the effect of exposure to a harmful agent • Diagnosis • establish the power of a test to distinguish those with and without the target condition • Prognosis • estimate the future course of a patient’s disease

  9. Ib. Problem-Solving Using the Medical Literature • Sources of evidence: • Prefiltered • authors have already accumulated the best published +/- unpublished evidence (e.g. a systematic review) • examples = Best Evidence, Cochrane Library, UpToDate, and Clinical Evidence • Unfiltered • MEDLINE = U.S. National Library of Medicine database (contains primary studies and reviews) • the World Wide Web

  10. II. Evaluation of Primary Studies Assessing Therapies • When using the medical literature to answer a clinical question, approach the study using three discrete steps* : • Are the results of the study valid? • What are the results? • How can I apply these results to patient care? *these 3 steps are applicable to primary studies assessing harm, diagnosis, and prognosis as well as therapies; they are also applicable for the evaluation of systematic reviews

  11. IIa. Are the Results of the Study Valid? • Assessing validity requires answering 2 further questions: • Did experimental and control groups begin the study with a similar prognosis? • Did experimental and control groups retain a similar prognosis after the study started?

  12. IIai. Did Experimental and Control Groups Begin the Study with a Similar Prognosis? • To determine whether or not the experimental and control groups began the study with similar prognoses, we ask 4 questions: • Were patients randomized? • Was randomization concealed (blinded or masked)? • Were patients analyzed in the groups to which they were randomized? • Were patients in the treatment and control groups similar with respect to known prognostic factors?

  13. IIai. Did Experimental and Control Groups Begin the Study with a Similar Prognosis? • Were patients randomized? • randomization is important so that the experimental and control groups are matched for both known and unknown prognostic factors

  14. IIai. Did Experimental and Control Groups Begin the Study with a Similar Prognosis? • There are many examples of RCTs contradicting the findings of observational studies, e.g.: • [table contrasting observational study findings with RCT findings for several clinical questions not reproduced in this transcript]

  15. IIai. Did Experimental and Control Groups Begin the Study with a Similar Prognosis? • Was randomization concealed (blinded or masked)? • Some years ago, a group of Australian investigators undertook a randomized trial of open vs laparoscopic appendectomy. • At night, the attending surgeon's presence was required for the laparoscopic procedure but not the open one. • When an eligible patient appeared, the residents checked the attending staff and the lineup for the operating room and, depending on the personality of the attending surgeon and the length of the lineup, held the translucent envelopes containing orders up to the light. As soon as they found one that dictated an open procedure, they opened that envelope. • If patients who presented at night were sicker than those who presented during the day, the residents' behavior would bias the results against the open procedure.

  16. IIai. Did Experimental and Control Groups Begin the Study with a Similar Prognosis? • Were patients analyzed in the groups to which they were randomized? • This principle of attributing all patients to the group to which they were randomized results in an intention-to-treat analysis, which is analysis of outcomes based on the treatment arm to which patients were randomized, rather than which treatment they actually received. This strategy preserves the value of randomization: prognostic factors that we know about--and those we do not know about -- will be, on average, equally distributed in the two groups; and the effect we see will result simply from the treatment assigned.

  17. IIai. Did Experimental and Control Groups Begin the Study with a Similar Prognosis? • Let us assume that the treatment is ineffective and that the true underlying event rate in both treatment and control patients is 20%. If the 20 nonadherent patients are sicker and their event rate (60%) is higher, the nonadherent patients will suffer 12 of the 20 events destined to occur in the treated patients. If one compares only the adherent patients (with an event rate of 8/80, or 10%) with the control group (event rate 20/100, or 20%), one will mistakenly conclude that treatment cuts the event rate in half. • Per-protocol comparison: 8/80 = 10% vs. 20/100 = 20%, a spurious 50% RRR in the treatment arm • Intention-to-treat comparison: 20/100 = 20% vs. 20/100 = 20%, the correct 0% RRR
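The slide's hypothetical can be sketched in a few lines of Python. The numbers are the slide's own (100 patients per arm, 20 sicker nonadherent patients with a 60% event rate); the helper function name is illustrative:

```python
# Sketch of the slide's hypothetical: an ineffective treatment, 100 patients
# per arm, with 20 sicker nonadherent patients in the treated arm.

def relative_risk_reduction(treated_events, treated_n, control_events, control_n):
    """RRR = (control event rate - treated event rate) / control event rate."""
    control_rate = control_events / control_n
    treated_rate = treated_events / treated_n
    return (control_rate - treated_rate) / control_rate

# Control group: 20 events in 100 patients (20%).
# Treated group, intention-to-treat: 8 events among 80 adherent patients
# plus 12 events among 20 nonadherent patients.
itt_rrr = relative_risk_reduction(8 + 12, 100, 20, 100)
# Per-protocol: only the 80 adherent patients (8 events) vs. all controls.
per_protocol_rrr = relative_risk_reduction(8, 80, 20, 100)

print(f"Intention-to-treat RRR: {itt_rrr:.0%}")   # the true (null) effect
print(f"Per-protocol RRR: {per_protocol_rrr:.0%}")  # spurious apparent benefit
```

Only the intention-to-treat comparison recovers the true null effect; comparing adherent patients against controls manufactures a halving of the event rate.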

  18. IIai. Did Experimental and Control Groups Begin the Study with a Similar Prognosis? • Were patients in the treatment and control groups similar with respect to known prognostic factors? • Clinicians should look for a display of prognostic features of the treatment and control patients at the study's commencement -- the baseline or entry prognostic features. Although we never will know whether similarity exists for the unknown prognostic factors, we are reassured when the known prognostic factors are well balanced.

  19. IIa. Are the Results of the Study Valid? • Assessing validity requires answering 2 further questions: • Did experimental and control groups begin the study with a similar prognosis? • Did experimental and control groups retain a similar prognosis after the study started?

  20. IIaii. Did Experimental and Control Groups Retain a Similar Prognosis After the Study Started? • To determine whether or not the experimental and control groups retain a similar prognosis, we ask 4 questions: • Were patients aware of group allocations? • Were clinicians aware of group allocations? • Were outcome assessors aware of group allocations? • Was follow-up complete?

  21. IIaii. Did Experimental and Control Groups Retain a Similar Prognosis After the Study Started? • Were patients aware of group allocations? • Patients who take a treatment that they believe is efficacious may feel and perform better than those who do not, even if the treatment has no biologic action – the placebo effect. • The best way to avoid the placebo effect skewing the results is to ensure that patients are unaware of whether they are receiving the experimental treatment.

  22. IIaii. Did Experimental and Control Groups Retain a Similar Prognosis After the Study Started? • Were clinicians aware of group allocations? • If randomization succeeds, treatment and control groups in a study begin with a very similar prognosis. However, randomization provides no guarantees that the two groups will remain prognostically balanced. Differences in patient care other than the intervention (cointerventions) under study can bias the results. • Effective blinding eliminates the possibility of either conscious or unconscious differential administration of effective (co)interventions to treatment and control groups.

  23. IIaii. Did Experimental and Control Groups Retain a Similar Prognosis After the Study Started? • Were outcome assessors aware of group allocations? • Unblinded study personnel who are measuring or recording outcomes such as physiologic tests, clinical status, or quality of life may provide different interpretations of marginal findings or may offer differential encouragement during performance tests, either one of which can distort results.

  24. IIaii. Did Experimental and Control Groups Retain a Similar Prognosis After the Study Started? • Was follow-up complete? • The greater the number of patients who are lost to follow-up, the more a study's validity is potentially compromised. The reason is that patients who are lost often have different prognoses from those who are retained. The situation is completely analogous to the reason for the necessity for an intention-to-treat analysis.

  25. IIaii. Did Experimental and Control Groups Retain a Similar Prognosis After the Study Started?

  26. IIa. Are the Results of the Study Valid? • The final assessment of validity is never a "yes" or "no" decision. Rather, think of validity as a continuum ranging from strong studies that are very likely to yield an accurate estimate of the treatment effect to weak studies that are very likely to yield a biased estimate of effect.

  27. II. Evaluation of Primary Studies Assessing Therapies • When using the medical literature to answer a clinical question, approach the study using three discrete steps : • Are the results of the study valid? • What are the results? • How can I apply these results to patient care?

  28. IIb. What are the Results? • Once validity of a study is assessed, the next step is to analyze the results. This involves looking at: • How large was the treatment effect? • How precise was the estimate of the treatment effect?

  29. IIbi. How Large Was the Treatment Effect? • Most frequently, randomized clinical trials carefully monitor how often patients experience some adverse event or outcome. Examples of these dichotomous outcomes ("yes" or "no" outcomes -- ones that either happen or do not happen) include cancer recurrence, myocardial infarction, and death. Patients either do or do not suffer an event, and the article reports the proportion of patients who develop such events.

  30. IIbi. How Large Was the Treatment Effect? • Even if the outcome is not one of these dichotomous variables, investigators sometimes elect to present the results as if this were the case. For example, in a study of the use of forced expiratory volume in 1 second (FEV1) in the assessment of the efficacy of oral corticosteroids in patients with chronic stable airflow limitation, investigators defined an event as an improvement in FEV1 over baseline of more than 20%. • The investigators' choice of the magnitude of change required to designate an improvement as "important" can affect the apparent effectiveness of the treatment

  31. IIbi. How Large Was the Treatment Effect? • When clinicians consider the results of clinical trials, they are interested in the association between a treatment and an outcome. Consider three different treatments that reduce mortality administered to three different populations. Although all three treatments reduce the risk of dying by a third, this piece of information is not adequate to fully capture the impact of treatment. Expressing the strength of the association as a relative risk (RR), a relative risk reduction (RRR), an absolute risk reduction (ARR) or risk difference (RD), an odds ratio (OR), or a number needed to treat (NNT) or number needed to harm (NNH) conveys a variety of different information.

  32. IIbi. How Large Was the Treatment Effect?

  33. IIbi. How Large Was the Treatment Effect? • In this study, the relative risk (RR) = 28% ÷ 45% ≈ 63% • the RR tells us the proportion of the original risk (in this case, the risk of death with sclerotherapy) that is still present when patients receive the experimental treatment (in this case, ligation) • Relative risk reduction (RRR) = (45% - 28%) ÷ 45% ≈ 37% • the RRR tells us the proportion of baseline risk that is removed by the therapy • using nontechnical language, we would say that ligation decreases the relative risk of death by 37% compared to sclerotherapy.

  34. IIbi. How Large Was the Treatment Effect? • The absolute risk reduction (ARR) = 45% – 28% ≈ 16% • the ARR tells us what proportion of patients are spared the adverse outcome if they receive the experimental therapy rather than the control therapy • The number needed to treat (NNT) = 1/ARR ≈ 6 • the number of patients one would need to treat to prevent one adverse event

  35. IIbi. How Large Was the Treatment Effect? • The odds ratio (OR) = 18/46 ÷ 29/36 = 0.49 • the OR is the odds of the target event in the treatment group (patients with the event divided by patients without it, 18/46) divided by the corresponding odds in the control group (29/36) • when events are infrequent, odds and risks (and therefore the OR and RR) are approximately equal
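As a quick check of slides 33–35, all of these measures follow from the underlying counts: 18 deaths among 64 ligation patients and 29 among 65 sclerotherapy patients (counts inferred here from the 18/46 and 29/36 odds quoted on this slide). A minimal sketch:

```python
# Effect measures from the deck's ligation-vs-sclerotherapy example.
# Counts are inferred from the odds (18/46 and 29/36) given on the slide.

deaths_tx, n_tx = 18, 64      # ligation (experimental arm)
deaths_ctl, n_ctl = 29, 65    # sclerotherapy (control arm)

risk_tx = deaths_tx / n_tx            # ~28%
risk_ctl = deaths_ctl / n_ctl         # ~45%

rr = risk_tx / risk_ctl               # relative risk, ~0.63
rrr = 1 - rr                          # relative risk reduction, ~0.37
arr = risk_ctl - risk_tx              # absolute risk reduction, ~0.16
nnt = 1 / arr                         # number needed to treat, ~6

odds_tx = deaths_tx / (n_tx - deaths_tx)        # 18/46
odds_ctl = deaths_ctl / (n_ctl - deaths_ctl)    # 29/36
or_ = odds_tx / odds_ctl                        # odds ratio, ~0.49

print(f"RR={rr:.2f} RRR={rrr:.2f} ARR={arr:.2f} NNT={nnt:.0f} OR={or_:.2f}")
```

Note how the slide's rounded percentages (28% and 45%) reproduce each figure once the exact counts are used.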

  36. IIbi. How Large Was the Treatment Effect? • Relative Risk and Odds Ratio vs Absolute Risk Reduction: Why the Fuss? • distinguishing between OR and RR will seldom have major importance • we must pay much more attention to distinguishing between the OR and RR vs the ARR (or its reciprocal the NNT)

  37. IIbi. How Large Was the Treatment Effect? • Forrow and colleagues demonstrated that clinicians were less inclined to treat patients after presentation of trial results as the absolute change in the outcome (ARR) compared with the relative change in the outcome. • The pharmaceutical industry's awareness of this phenomenon may be responsible for their propensity to present physicians with treatment-associated relative risk reductions.

  38. IIb. What are the Results? • Once validity of a study is assessed, the next step is to analyze the results. This involves looking at: • How large was the treatment effect? • How precise was the estimate of the treatment effect?

  39. IIbii. How Precise Was the Estimate of the Treatment Effect? • Realistically, the true risk reduction can never be known. The best we have is the estimate provided by rigorous controlled trials, and the best estimate of the true treatment effect is that observed in the trial. This estimate is called a point estimate, a single value calculated from observations of the sample that is used to estimate a population value or parameter.

  40. IIbii. How Precise Was the Estimate of the Treatment Effect? • Investigators often tell us the neighborhood within which the true effect likely lies by calculating confidence intervals. • We usually (though arbitrarily) use the 95% confidence interval. • You can consider the 95% confidence interval as defining the range that includes the true relative risk reduction 95% of the time.

  41. IIbii. How Precise Was the Estimate of the Treatment Effect? • In this hypothetical situation, 2 different-sized studies with the same RRR of 25% have widely varying confidence intervals: • a study enrolling 1000 patients: RRR = 0.25, 95% confidence interval 0.09 to 0.41 • a study enrolling 100 patients: RRR = 0.25, 95% confidence interval -0.38 to 0.59 • It is evident that the larger the sample size, the narrower the confidence interval.
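The sample-size effect on precision can be reproduced numerically. The event counts below are illustrative assumptions, not the slide's actual data: two hypothetical trials with similar underlying event rates but a tenfold difference in size, with the confidence interval computed by the standard log-relative-risk approximation.

```python
import math

def rr_confidence_interval(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """Approximate 95% CI for the relative risk via the log-RR method."""
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    se = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctl - 1/n_ctl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Similar underlying rates in both trials; only the sample size differs.
big = rr_confidence_interval(75, 500, 100, 500)   # 1000 patients total
small = rr_confidence_interval(8, 50, 10, 50)     # 100 patients total

for label, (rr, lo, hi) in [("n=1000", big), ("n=100", small)]:
    # RRR = 1 - RR; the interval narrows sharply as the sample grows
    print(f"{label}: RRR = {1-rr:.2f}, 95% CI ({1-hi:.2f} to {1-lo:.2f})")
```

With these assumed counts, the large trial's interval excludes no effect while the small trial's interval straddles it, mirroring the slide's point that the same point estimate can carry very different precision.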

  42. IIbii. How Precise Was the Estimate of the Treatment Effect? • When is the sample size big enough? • In a positive study -- a study in which the authors conclude that the treatment is effective -- if the lowest RRR that is consistent with the study results is still important (that is, it is large enough for you to recommend the treatment to the patient), then the investigators have enrolled sufficient patients. • In a negative study -- in which the authors have concluded that the experimental treatment is no better than control therapy -- if the RRR at the upper boundary would, if true, be clinically important, the study has failed to exclude an important treatment effect (i.e. has not enrolled sufficient patients).

  43. II. Evaluation of Primary Studies Assessing Therapies • When using the medical literature to answer a clinical question, approach the study using three discrete steps : • Are the results of the study valid? • What are the results? • How can I apply these results to patient care?

  44. IIc. How Can I Apply These Results to Patient Care? • Before embarking on therapy based on a study for a particular patient, ask 3 questions: • Were the study patients similar to the patient in my practice? • Were all clinically important outcomes considered? • Are the likely treatment benefits worth the potential harm and costs?

  45. IIc. How Can I Apply These Results to Patient Care? • Were the study patients similar to the patient in my practice? • If the patient had qualified for enrollment in the study -- that is, if she had met all inclusion criteria and had violated none of the exclusion criteria -- you can apply the results with considerable confidence. Even here, however, there is a limitation. Treatments are not uniformly effective in every individual. Conventional randomized trials estimate average treatment effects. Applying these average effects means that the clinician will likely be exposing some patients to the cost and toxicity of the treatment without benefit. • A better approach than rigidly applying the study's inclusion and exclusion criteria is to ask whether there is some compelling reason why the results should not be applied to the patient.

  46. IIc. How Can I Apply These Results to Patient Care? • Were all clinically important outcomes considered? • Treatments are indicated when they provide important benefits. Demonstrating that a bronchodilator produces small increments in forced expired volume in patients with chronic airflow limitation does not provide a sufficient reason for administering this drug. What is required is evidence that the treatment improves outcomes that are important to patients, such as reducing shortness of breath during the activities required for daily living. We can consider forced expired volume as a substitute or surrogate outcome. Investigators choose to substitute these variables for those that patients would consider important, usually because they would have had to enroll many more patients and follow them for far longer periods of time.

  47. IIc. How Can I Apply These Results to Patient Care? • Were all clinically important outcomes considered?

  48. IIc. How Can I Apply These Results to Patient Care? • Were all clinically important outcomes considered? • Even when investigators report favorable effects of treatment on one clinically important outcome, you must consider whether there may be deleterious effects on other outcomes. For example, a cancer chemotherapeutic agent may lengthen life but decrease its quality.

  49. IIc. How Can I Apply These Results to Patient Care? • Are the likely treatment benefits worth the potential harm and costs? • If you can apply the study's results to a patient, and its outcomes are important, the next question concerns whether the probable treatment benefits are worth the effort that you and the patient must put into the enterprise. • the NNT can help clinicians judge the degree of benefit and the degree of harm patients can expect from therapy.

  50. IIc. How Can I Apply These Results to Patient Care? • Are the likely treatment benefits worth the potential harm and costs? • As a result of taking aspirin, patients with hypertension without known coronary artery disease can expect a reduction of approximately 15% in their relative risk of cardiovascular-related events. • For an otherwise low-risk woman with hypertension and a baseline risk of a cardiovascular-related event of between 2.5% and 5%, this translates into an NNT of approximately 200 during a 5-year period. • For every 161 patients treated with aspirin, one would experience a major hemorrhage (NNH = 161). • Thus, in 1000 of these patients, aspirin would be responsible for preventing five cardiovascular events, but it would also be responsible for causing approximately six serious bleeding episodes.
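The aspirin trade-off on this slide reduces to simple arithmetic. A back-of-envelope sketch, taking the 15% RRR and NNH of 161 from the slide and assuming a baseline risk of 1/30 (within the slide's 2.5%–5% range, chosen so the NNT lands near 200):

```python
# Benefit-vs-harm arithmetic from the aspirin example.
# Assumption: baseline 5-year cardiovascular risk of 1/30 (~3.3%).
baseline_risk = 1 / 30
rrr = 0.15                    # relative risk reduction from the slide
arr = baseline_risk * rrr     # absolute risk reduction
nnt = 1 / arr                 # number needed to treat, ~200 over 5 years
nnh = 161                     # number needed to harm (one major hemorrhage)

per_1000_prevented = 1000 / nnt   # ~5 cardiovascular events prevented
per_1000_harmed = 1000 / nnh      # ~6 serious bleeds caused

print(f"NNT = {nnt:.0f}; per 1000 treated: "
      f"{per_1000_prevented:.0f} events prevented, {per_1000_harmed:.0f} bleeds caused")
```

Framing both benefit and harm per 1000 patients treated puts the NNT and NNH on the same scale, which is what makes the trade-off on the slide immediately visible.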
