


Presentation Transcript


  1. Effectiveness: Overview of Current Approaches and Emerging Trial Designs. Doug Taylor, PhD, Director of Biostatistics, Family Health International

  2. Extra References Fleming and Richardson (2004). Some Design Issues in Trials of Microbicides for the Prevention of HIV Infection. JID. Trussell and Dominik (2005). Will Microbicide Trials Yield Unbiased Estimates of Microbicide Efficacy? Contraception. Pocock and Abdalla (1998). The Hope and the Hazards of Using Compliance Data in Randomized Controlled Trials. Statistics in Medicine, 17(3): 303-317.

  3. OUTLINE • Efficacy vs. Effectiveness • Basic Design of Effectiveness Trials • Choice of Control Group • Study Populations • Strength of Evidence • Power, Study Size and Adherence • Future Directions

  4. Efficacy vs. Effectiveness Efficacy: reduction in risk of acquiring HIV when the microbicide is used as intended for • Every act during the study (coitally dependent) • Every day during the study (daily product) • Following any other specified regimen (twice-daily, etc.)

  5. Efficacy vs. Effectiveness Effectiveness: reduction in risk, recognizing that • Participants don’t use microbicide for all acts/days • Microbicide may be withdrawn for AE or pregnancy • Condom use not independent of microbicide use • Possibility of infection due to other exposure routes

  6. Basic Design A Phase 2b or 3 randomized, controlled trial powered to make a conclusive statement about effectiveness. The chance of concluding effectiveness depends on… • What we mean by a “conclusive statement” • The efficacy of the microbicide • Adherence to product use • How much information (events) is observed

  7. Basic Design • HIV-negative participants randomized to either the microbicide or control • Study staff and participants blind to treatment assignment (where possible) • Intensive counseling on HIV risk-reduction and use of product; condom promotion; treatment of STIs • Monitoring of AEs • Independent Data Monitoring Committee (IDMC)

  8. Basic Design • Testing for HIV at regular intervals for a year or more of follow-up • Primary Outcome: estimated time to HIV infection • Primary Analysis: compare the rate of infection between treatment groups using the Intent-to-Treat (ITT) principle. Note on ITT: all participants/outcomes are included, regardless of whether participants continue to use the product (e.g., even if withdrawn due to pregnancy)

  9. Choice of Control Group Placebo • Ideally an ‘inert’ microbicide • Blinding possible: risk-taking behaviors may be comparable between groups Arguments against relying on a placebo control • Placebo may not be inert • Fail to capture effect of condom migration or other behavior changes that occur in real world

  10. Choice of Control Group Condom-only • Attempts to evaluate the effect of adding a microbicide to real world settings • Risk-taking behaviors (including condom use) will almost certainly differ between groups Arguments against relying on a condom-only arm • Differences in behaviors may overwhelm any effect of microbicide on HIV • Clinical trials are not the real world

  11. Study Populations Minimal Requirements • Risk of HIV exposure • Minimal exposure to HIV via routes not protected by microbicide • Willingness to use product

  12. Study Populations Participants recruited from • General, sexually active populations • STI clinics • Discordant couples • Sex workers

  13. Strength of Evidence Refers to how confident we are that an apparent effect observed in a clinical trial is real and not due to chance. Related to the concepts of • Type I error (‘α level’) of a statistical hypothesis test, when planning a trial, and • Observed p-value or confidence interval for relative risk, when interpreting trial results

  14. Strength of Evidence - One Trial Typical trials are designed to have no more than a 2.5% chance of concluding the treatment is effective when in reality there is no effect (type I error), e.g. “the one-sided p-value for the test of effect must be less than 0.025,” or “the upper 95% confidence bound for the Relative Risk of HIV must be less than 1.0.” A study designed to meet this requirement can be thought of as targeting the strength of evidence of one trial
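The equivalence between the two criteria on this slide can be checked numerically. A minimal sketch, using the illustrative 95% CI (0.40, 0.99) that appears on the single-trial forest-plot slide (the point estimate and standard error are reconstructed from the CI endpoints, a standard back-calculation, not a value stated in the deck):

```python
from math import exp, log
from statistics import NormalDist

nd = NormalDist()

# Illustrative 95% CI for the relative risk, as on the forest-plot slide.
lo, hi = 0.40, 0.99
z975 = nd.inv_cdf(0.975)                       # ≈ 1.96

# Back out the point estimate and SE of log(RR) from the CI endpoints.
se = (log(hi) - log(lo)) / (2 * z975)
rr = exp((log(hi) + log(lo)) / 2)

# One-sided p-value for H0: RR >= 1 vs H1: RR < 1.
p_one_sided = nd.cdf(log(rr) / se)

print(f"RR = {rr:.2f}, one-sided p = {p_one_sided:.4f}")
# Upper 95% bound below 1.0 is the same criterion as one-sided p < 0.025.
assert (hi < 1.0) == (p_one_sided < 0.025)
```

With these numbers the estimated RR is about 0.63 and the one-sided p-value is just under 0.025, which is why the CI upper bound sits just below 1.0.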

  15. Strength of Evidence of Single Trial [Forest plot: estimated RR with 95% CI (0.40, 0.99); one-sided p-value < 0.025]

  16. Strength of Evidence - Two Trials • FDA has traditionally required two trials, each demonstrating a protective effect at the one-sided α=0.025 level • Trials could be conducted sequentially or in parallel • Protects against a spurious result • Helps to ensure that the product will be effective in different settings/populations

  17. Strength of Evidence - Two Trials [Forest plot: RR1 with 95% CI (0.20, 0.80) and RR2 with 95% CI (0.40, 0.94); each one-sided p-value < 0.025]

  18. Strength of Evidence It could be unethical to conduct a second trial. A single microbicide study could suffice if • Well conducted • Multi-site • Large strength of evidence (p-value < 0.005, equivalent to 1.5-2 trials) • Results are consistent across sites and within important subgroups (e.g. age) • Strong safety data
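The "equivalent to 1.5-2 trials" claim can be given a back-of-envelope reading (my illustration, not a calculation from the deck): two independent trials each significant at one-sided α = 0.025 have joint false-positive probability 0.025², and on that log scale a single p < 0.005 result is worth roughly one and a half trials:

```python
from math import log

# Joint type I error of two independent trials, each at one-sided 0.025.
alpha_one_trial = 0.025
alpha_two_trials = alpha_one_trial ** 2        # 0.000625

# "Trials' worth" of evidence in a single p < 0.005 result, measured on
# the log-alpha scale (a rough heuristic, not a formal equivalence).
trials_worth = log(0.005) / log(0.025)

print(f"two trials: alpha = {alpha_two_trials:.6f}")
print(f"p < 0.005 is worth about {trials_worth:.2f} trials on this scale")
```

This comes out near 1.4 trials, broadly consistent with the slide's "1.5-2 trials" range.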

  19. Single Study with Larger Strength of Evidence [Forest plot: estimated RR with 99.9% CI (0.60, 0.99); one-sided p-value < 0.005]

  20. Resource Management • Achieving the strength of evidence of more than one trial could require 5,000-10,000+ participants and $50+ million • Difficult to commit those kinds of resources without plans for stopping the trial early if the product appears unlikely to achieve the desired effect → futility analysis

  21. Futility Analysis - Example A Phase 3 study designed to have a 90% chance of detecting a 40% reduction in risk with α=0.005 needs to observe ~240 events. Half-way through the trial (120 events), the IDMC finds that the estimated treatment effect is zero. Even if the product is truly 40% effective (and the interim result is simply very bad luck), the chance of observing a final p-value < 0.005 is only 20% (but was the interim result due to poor adherence?)
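The "only 20%" figure is a conditional-power calculation. A sketch of one way to reproduce it, using the standard Brownian-motion approximation for a log-rank statistic (my reconstruction; the deck does not show its method):

```python
from math import log, sqrt
from statistics import NormalDist

nd = NormalDist()

D_final = 240          # planned number of events
frac = 0.5             # interim look at 120 of 240 events
rr_true = 0.60         # true 40% reduction in risk
alpha = 0.005          # one-sided significance level

# Expected final Z-statistic under the truth ("drift"): sqrt(D)*|log RR|/2.
drift = sqrt(D_final) * abs(log(rr_true)) / 2

# The interim estimated effect is zero, so the interim B-value is 0.
# Given B(frac) = 0, the final B-value is Normal(drift*(1-frac), 1-frac).
z_crit = nd.inv_cdf(1 - alpha)                 # ≈ 2.576
mean_final = drift * (1 - frac)
cond_power = 1 - nd.cdf((z_crit - mean_final) / sqrt(1 - frac))

print(f"conditional power ≈ {cond_power:.2f}")  # ≈ 0.20, as on the slide
```

Even assuming the true effect is the full 40% reduction, a null interim result leaves only about a one-in-five chance of final success, which is why an IDMC might stop for futility here.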

  22. Resource Management Alternatively, could consider a “2b screening trial” (HPTN/MTN 035; MTN 003 VOICE): • One-third the size of the single Phase 3 trial • Clear rules for when/how to proceed to second trial

  23. [Diagram relating trial-size determinants: strength of evidence (alpha level), power, number of events, effectiveness, adherence, efficacy, incidence, follow-up, number enrolled]

  24. Number of Events and Power • In treatment trials, every participant may have an outcome measure (e.g. viral load, CD4 count) and contribute to power to detect an effect of treatment • In a prevention trial, very few participants have the outcome (HIV) and so very few contribute directly to the precision of the estimate of effectiveness • Prevention trials are “event driven”

  25. Events Required to Achieve 90% Power (Strength of Evidence of One Trial, one-sided α = 0.025) [table]
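Event counts like those tabulated here can be approximated with Schoenfeld's formula for a 1:1 randomized survival trial. A sketch (the formula is standard, but treating it as the deck's exact method is an assumption):

```python
from math import log
from statistics import NormalDist

def events_required(rr, power=0.90, alpha=0.025):
    """Approximate events needed in a 1:1 trial (Schoenfeld's formula):
    D = 4 * (z_{1-alpha} + z_{power})^2 / (log RR)^2."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha) + nd.inv_cdf(power)
    return 4 * z**2 / log(rr)**2

# 40% effectiveness (RR = 0.6), 90% power, one-sided alpha = 0.025:
d = events_required(0.60)
print(round(d))   # ≈ 161
```

This gives about 160 events for 40% effectiveness, matching the figure quoted on the adherence-and-power slide later in the deck; smaller true effects drive the requirement up sharply because the denominator shrinks as (log RR)².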

  26. Estimated Study Size for 90% Power (one-sided α = 0.025) [table]

  27. Adherence and Power When designing a study, we state an effectiveness level that would be important to detect. For example, we might design a study to have 90% power to detect a 40% effectiveness level, assuming participants use the product for 80% of acts/days. Such a study would require 160 events to achieve the strength of evidence of one trial

  28. Effect of Non-Adherence on Power (160-Event Trial) [chart]
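The lost chart's message can be sketched numerically. Assuming the simple dilution model effectiveness ≈ adherence × efficacy (my assumption; real dilution depends on which acts carry exposure), the power of a fixed 160-event trial falls quickly as adherence drops:

```python
from math import log, sqrt
from statistics import NormalDist

nd = NormalDist()

def power_at(effectiveness, events=160, alpha=0.025):
    """Approximate power of a fixed-events trial (Schoenfeld approximation):
    the expected Z is sqrt(D)*|log RR|/2, compared to z_{1-alpha}."""
    drift = sqrt(events) * abs(log(1 - effectiveness)) / 2
    return 1 - nd.cdf(nd.inv_cdf(1 - alpha) - drift)

# Hypothetical 50% per-act efficacy, so 80% adherence gives the design's
# 40% effectiveness target (simple dilution model -- an assumption).
efficacy = 0.50
for adherence in (0.8, 0.6, 0.4):
    eff = adherence * efficacy
    print(f"adherence {adherence:.0%}: "
          f"effectiveness {eff:.0%}, power ≈ {power_at(eff):.2f}")
```

At the design assumption of 80% adherence the trial retains its planned ~90% power, but halving adherence roughly cuts effectiveness in half and drops power below 50%, which is the sense in which poor adherence can "doom an otherwise efficacious microbicide."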

  29. Challenges • Poor adherence will doom an otherwise efficacious microbicide • Condom use and risk-reduction counseling may significantly reduce the incidence of HIV in the control arm • We do not know the infection status of partners • We do not know whether the microbicide was used for acts with exposure to HIV, or the route of exposure for individual acts leading to infection • N-9 (nonoxynol-9): heterogeneity of effect across sites and studies; the same is possible for other products

  30. Future Directions - Adherence • Target something closer to efficacy by monitoring of adherence (IPM) • Better identify participants who will use the product • Develop products or delivery systems (e.g. rings) that people will use • Develop products that take adherence largely out of the hands of participants (e.g. injectables) or that participants are less likely to forget (e.g. rings)

  31. Future Directions – Design & Analysis • Still have to perform the study • Adaptive designs for screening out products, re-estimating power, futility and safety analyses • Bayesian methods for combining phase 2 and phase 3 evidence • Non-inferiority trials (when we have an effective microbicide)

  32. Discussion Questions?
