
Clinical Trials


Presentation Transcript


  1. Clinical Trials Jean Bourbeau, MD Respiratory Epidemiology and Clinical Research Unit McGill University Clinical Epidemiology (679) June 17, 2005

  2. Clinical Trial Objectives • Define an experimental study and distinguish the major types • Define and distinguish the populations involved in planning and conducting experimental studies, and the impact they may have on generalization and recruitment • Discuss the role of randomization in experimental studies, and distinguish individual vs. group randomization • Define unblinded and blinded studies, and distinguish the types of blinded studies and their advantages and disadvantages

  3. Clinical Trial Objectives • Define and discuss the consequence of withdrawal • Discuss various issues in data analysis • Discuss ethical considerations • Define and distinguish efficacy and effectiveness • Describe the factors that might influence the response to a treatment or an intervention

  4. Reading • Fletcher, Chapter 7

  5. Study Design • Fundamental point • Sound scientific clinical investigation almost always demands that a control group be used against which the new intervention can be compared. Randomization is the preferred way of assigning participants to control and intervention groups

  6. Experimental/Clinical trial • Definition: • Prospective study comparing the effect and value of intervention technique(s) against a control in human subjects

  7. Experimental/Clinical trial • Employ one or more intervention techniques (prophylactic, diagnostic or therapeutic agents, devices, regimens, procedures, etc.); • Contain a control group against which the intervention group is compared.

  8. Randomized clinical trial The RCT remains the research methodology of choice whenever randomization is feasible, but a study’s use of this methodology does not necessarily confer certainty on its conclusions

  9. Randomized clinical trial RCTs have become the sine qua non for the proof of efficacy that the Food and Drug Administration requires for marketing new drugs

  10. Clinical trial phases (drugs) Phase I Studies: Pharmaco/Toxicity • Participants have already tried and failed to improve on the existing standard intervention. • Maximally tolerated dose (MTD). Phase II Studies: Treatment effect • Once the MTD is established, the next goal is to evaluate whether the drug has any biologic activity or effect and to estimate the rate of adverse events.

  11. Clinical trial phases (drugs) Phase III/IV Studies: Full-scale evaluation/Postmarketing surveillance • Clinical trial (Phase III): generally designed to assess the effectiveness of the new intervention, and thereby, its role in clinical practice. • Long term studies, which do not involve control groups, are referred to as Phase IV Studies

  12. Types of Clinical Trial • Randomized control • Non-randomized: concurrent control, historical control • Others: cross-over, withdrawal, factorial, group allocation design, studies of equivalency

  13. Def : Randomized Control Trial Comparative study with an intervention group and a control group; the assignment of subjects to a group is determined by a formal procedure of randomization.

  14. Def : Non-Randomized Control Trial Comparative study with an intervention group and a control group where subjects in either group are treated at approximately the same time; the assignment is not done by a random process.

  15. Def : Historical Control Trial Comparative study with an intervention and a control group where a new intervention is used in a series of subjects and the results are compared to the outcome in a previous series of comparable subjects; this type of study is non-randomized and non-concurrent.

  16. Def : Cross-Over Design Special case of an RCT; it allows each subject to serve as his or her own control. In the two-period cross-over design, each subject receives either the intervention or the control in the first period and the alternative in the succeeding period; the order in which the intervention and the control treatments are given is randomized.
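As an illustration of how that order randomization could be implemented (not part of the original slides; the subject IDs and A/B labels are hypothetical), a minimal Python sketch:

```python
import random

def assign_crossover_sequences(subject_ids, seed=None):
    """Two-period cross-over design: every subject gets both treatments,
    and only the order (AB or BA) is randomized."""
    rng = random.Random(seed)
    # A = intervention, B = control (hypothetical labels).
    return {sid: rng.choice(["AB", "BA"]) for sid in subject_ids}

print(assign_crossover_sequences(["S01", "S02", "S03", "S04"], seed=1))
```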

  17. Def : Withdrawal Studies Withdrawal studies have been conducted in which subjects on a particular treatment for a chronic disease are taken off the treatment or have the dosage reduced.

  18. Def : Factorial Design Factorial design attempts to evaluate two interventions compared to control in a single experiment.
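A minimal sketch of how subjects might be allocated in a 2×2 factorial design, assuming simple randomization across the four cells (the intervention labels are hypothetical):

```python
import random
from itertools import product

def factorial_assignment(n_subjects, seed=None):
    """2x2 factorial design: each subject lands in one of four cells --
    both interventions, A only, B only, or neither (double control)."""
    rng = random.Random(seed)
    cells = list(product(["A", "placebo A"], ["B", "placebo B"]))
    return [rng.choice(cells) for _ in range(n_subjects)]

print(factorial_assignment(6, seed=4))
```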

  19. Def : Group Allocation Design In group allocation or cluster randomization design, a group of individuals, a clinic, or a community is randomized to a particular intervention or control.

  20. Def : Study of Equivalency In a study of equivalency, or trial with a positive control, the objective is to test whether a new intervention is as good as an established one; the investigator must specify what is meant by equivalence.

  21. Study population • Fundamental point • The study population should be defined in advance, stating unambiguous inclusion (eligibility) criteria. The impact that these criteria will have on study design, ability to generalize, and participant recruitment must be considered.

  22. Study population Defining the study population is an integral part of posing the primary question • It is not enough to claim that a treatment is or is not effective • The description requires specification of the criteria for subject eligibility

  23. Study population (selection flow) • Population at large → definition of condition → population with condition (vs. population without condition) • Entry criteria → study population (vs. with condition but ineligible) • Enrollment → study sample (vs. eligible but not enrolled)

  24. Eligibility criteria • In general, eligibility criteria relate to participant safety and anticipated effect of the intervention.

  25. Eligibility criteria • These criteria will have an impact on: • Study design • Ability to generalize • Participant recruitment

  26. Generalization • Generalize to a broader population: • Study subjects are usually non-randomly chosen from the study population, which in turn is defined by eligibility criteria. • Selective participation in a trial entails the risk that the findings may not be generalizable.

  27. Generalization • It is often forgotten that participants must agree to enrol in a study: • What sort of person volunteers for a study? • Why do some agree to participate while others do not?

  28. Randomization in Experimental Studies • Fundamental point • Randomization tends to: • produce study groups comparable with respect to known and unknown risk factors • remove investigator bias in the allocation of subjects • guarantee that statistical tests will have valid significance levels

  29. Experimental bias • Two forms: • Selection bias occurs if the allocation process is predictable • Accidental bias can arise if the randomization procedure does not achieve balance on risk factors or prognostic covariates

  30. Types of randomization • Individual randomization • Simple • Blocked • Stratified • Group randomization

  31. Simple randomization • The most elementary form of randomization: • toss an unbiased coin each time a subject is eligible; • use a random number producing algorithm (a more convenient method for large studies).
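A minimal sketch of the second approach above (a random-number algorithm standing in for the coin toss); the 0.5 cut-off and the A/B group labels are assumptions for illustration:

```python
import random

def simple_randomization(n_subjects, seed=None):
    """Simple randomization: each subject is assigned independently,
    with probability 0.5, to group A (intervention) or B (control)."""
    rng = random.Random(seed)
    # A uniform random number below 0.5 -> group A, otherwise group B.
    return ["A" if rng.random() < 0.5 else "B" for _ in range(n_subjects)]

print(simple_randomization(10, seed=42))
```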

  32. Def : Blocked randomization Blocked randomization, sometimes called permuted block randomization, makes assignments in blocks so that within each block equal numbers of subjects are allocated to each group, keeping the group sizes balanced throughout enrollment.

  33. Blocked randomization For example, in the case of block size 4, there are 6 possible combinations of group assignments: AABB, ABAB, BAAB, BABA, BBAA, and ABBA.
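A sketch of this first method under the same assumptions (block size 4, two groups A and B): enumerate the six admissible blocks and draw one at random each time a new block is needed.

```python
import random
from itertools import permutations

def blocked_randomization(n_subjects, block_size=4, seed=None):
    """Permuted-block randomization: within every block, exactly half of
    the assignments are A and half are B, keeping group sizes balanced."""
    rng = random.Random(seed)
    half = block_size // 2
    # The 6 distinct orderings of AABB when block_size is 4.
    blocks = sorted(set(permutations("A" * half + "B" * half)))
    assignments = []
    while len(assignments) < n_subjects:
        assignments.extend(rng.choice(blocks))
    return assignments[:n_subjects]

print("".join(blocked_randomization(12, seed=7)))
```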

  34. Blocked randomization For example, another method: assign a random number to each pre-set assignment in the block and deliver the assignments in the rank order of those numbers.

Assignment   Random number   Rank
A            0.069           1
A            0.734           3
B            0.867           4
B            0.312           2

Following the ranks, the delivered sequence is A, B, A, B.
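A sketch of this second method, assuming the same fixed block AABB: pair each pre-set assignment with a uniform random number and release the assignments in rank order.

```python
import random

def ranked_block(block="AABB", seed=None):
    """Permuted-block randomization by ranking: each pre-set assignment
    gets a random number, and sorting by that number permutes the block."""
    rng = random.Random(seed)
    pairs = [(assignment, rng.random()) for assignment in block]
    # With the numbers from the slide (0.069, 0.734, 0.867, 0.312),
    # ranks 1-4 would yield the order A, B, A, B.
    return [assignment for assignment, _ in sorted(pairs, key=lambda p: p[1])]

print(ranked_block(seed=3))
```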

  35. Stratified randomization • Def : For any single study, especially a small study, there is no guarantee that all baseline characteristics will be similar in the 2 groups; stratified randomization addresses this by performing the randomization separately within strata defined by important baseline factors.

  36. Stratified randomization

Age            Sex          Smoking Hx
1. 40-49 yr    1. Male      1. Current sm.
2. 50-59 yr    2. Female    2. Ex-sm.
3. 60-69 yr                 3. Never sm.

In this example, there will be 18 strata (3 age groups × 2 sexes × 3 smoking categories)…

  37. Stratified randomization with block size of four
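A minimal sketch of how such a scheme could be generated, assuming the strata of slide 36 and permuted blocks of size four kept separately per stratum (the class and variable names are hypothetical):

```python
import random
from collections import defaultdict
from itertools import permutations

class StratifiedBlockRandomizer:
    """Stratified randomization with permuted blocks of size four:
    every stratum (age group x sex x smoking history) keeps its own
    independent sequence of blocks."""

    def __init__(self, block_size=4, seed=None):
        self.rng = random.Random(seed)
        half = block_size // 2
        self.blocks = sorted(set(permutations("A" * half + "B" * half)))
        self.queues = defaultdict(list)  # stratum -> assignments left in block

    def assign(self, age_group, sex, smoking):
        stratum = (age_group, sex, smoking)
        if not self.queues[stratum]:     # open a new block for this stratum
            self.queues[stratum] = list(self.rng.choice(self.blocks))
        return self.queues[stratum].pop(0)

r = StratifiedBlockRandomizer(seed=5)
print(r.assign("40-49", "Male", "Current"))  # first subject in this stratum
print(r.assign("40-49", "Male", "Current"))  # second subject, same stratum
```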

  38. Group randomization • For some interventions (psychosocial, educational, etc.) random assignment of individuals can be detrimental, because of the potential risk of interaction among subjects. • A group of individuals, a clinic, or a community is randomized to a particular intervention or control; in this design, the basic sampling units are groups, not subjects. • Because the basic sampling units are groups, the design is not as efficient as the traditional one
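A sketch of cluster (group) randomization, assuming clinics are the sampling units and an even number of clinics; the clinic names are hypothetical:

```python
import random

def randomize_clusters(clinics, seed=None):
    """Group (cluster) randomization: whole clinics, not individual
    subjects, are shuffled and split between the two arms."""
    rng = random.Random(seed)
    shuffled = clinics[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

print(randomize_clusters(["Clinic A", "Clinic B", "Clinic C", "Clinic D"], seed=2))
```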

  39. Blindness • Fundamental point • A clinical trial should, ideally, have a double-blind design to avoid potential problems of bias during data collection and assessment. In a study where such a design is not possible, a single-blind approach and other measures to reduce potential bias are favored

  40. Unblinded and Blinded Studies • Definition • Bias can occur at a number of places in a clinical study, and it can be caused by conscious factors, subconscious factors, or both • The general solution to the problem of bias is to keep the subject and the investigator blinded, or masked, to the identity of the assigned intervention

  41. Types of Blinded Studies • Single Blind • Only the investigator is aware of which intervention each subject is receiving. • Double Blind • Neither the subjects nor the investigators responsible for following the subjects know the identity of the intervention assignment. • Triple-Blind • An extension of the double-blind design; the committee monitoring response variables is not told the identity of the groups.

  42. Importance of Blindness in a study Example : benefits of ascorbic acid (vitamin C) in the common cold (Lewis et al. Ann NY Acad Sci 1975; 258: 505-12) Participants: medical staff, some of whom discovered whether they were on vitamin C or placebo Evaluation: severity and duration of colds self-reported by the participants

  43. Importance of Blindness in a study • Results: • Participants who claimed not to know the identity of the Rx: vitamin C no better than placebo • Participants who claimed to know the identity of the Rx: vitamin C better than placebo

  44. Sample size • Fundamental point • Clinical trials should have sufficient statistical power to detect differences between groups considered to be of clinical interest. Therefore, calculation of sample size with provision for adequate levels of significance and power is an essential part of planning a trial.
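As an illustration of that calculation, a sketch of the usual approximate formula for comparing two proportions; the event rates, the two-sided alpha of 0.05, and the power of 0.80 are assumed values, not figures from the lecture:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a difference between
    two proportions with a two-sided test at the given alpha and power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # about 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical example: control event rate 30%, intervention 20%.
print(n_per_group(0.30, 0.20))   # roughly 290 per group
```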

  45. Baseline assessment • Fundamental point • Relevant baseline data should be measured in all study participants before the start of the intervention

  46. Baseline assessment • Use of baseline data: • analysis of baseline comparability • stratification and subgrouping • evaluation of change • natural history analysis

  47. Data collection and quality control • Major types of problems: • missing data (one indicator of the quality of the trial) • erroneous data (errors will not necessarily be recognized) • variability in the observed characteristics (reduces the opportunity to detect real changes): random, systematic, or a combination of both

  48. Adverse Effects in RCT • Difficulties in using clinical trials: • Most clinical trials are too small and of too short duration to detect uncommon adverse effects • Patients are highly selected (those more likely to develop adverse effects are excluded)

  49. Issues in Data analysis • Fundamental point • Excluding randomized participants or observed outcomes from analysis on the basis of outcome or response variables can lead to biased results of unknown magnitude or direction

  50. Issues in Data analysis Exclusions are people who are screened as potential participants but who do not meet all of the entry criteria; this does not affect the internal validity of the study, only its external validity (the ability to generalize). Withdrawals are participants who have been randomized but are deliberately not included in the analysis; this can bias the results of the study (the participants remaining may no longer be comparable)
