
Practical Methods for Pharmacy Practice Research


Presentation Transcript


  1. Practical Methods for Pharmacy Practice Research Lee Vermeulen, R.Ph., M.S. Director, Center for Drug Policy University of Wisconsin Hospital and Clinics Clinical Associate Professor UW-Madison School of Pharmacy LC.Vermeulen@hosp.wisc.edu

  2. Objectives • Improve your understanding of research methods • Guide your application of research methods in a practical manner • Maximize rigor with minimum resources and time

  3. Terminology Sidebar • Methods – techniques used to perform tasks, such as research • Methodology – the study of methods • Unless you’re doing a study that examines the use of methods (e.g., finding a method of more accurately measuring cost), you describe the methods of your study, not “methodology”

  4. Objectives of Research Design • Design is the most important part of research • While a well-designed study does not ensure good results, a poorly designed study guarantees poor results • A guide to answering research questions in an accurate, objective, and valid manner • Design characteristics are chosen that best establish a causal relationship between the intervention being studied and the outcome being measured, given available resources

  5. Criteria for Causality • The cause and effect relationship between intervention and observed outcome is established using the following criteria: • Concomitant variation (association) • Temporal relationship • All other possible causes are eliminated

  6. Maximizing Rigor (1) • Efficacy vs effectiveness • Efficacy = can the intervention work? (maximizes internal validity) • Effectiveness = does the intervention work? (maximizes external validity) • Design decisions made to maximize causality will reduce threats to the internal validity of a study, but those design components generally also limit external validity (the ability to generalize findings) • Eliminate bias • Bias: systematic errors, factors other than the intervention under study that influence the outcome • Can be eliminated by careful study design

  7. Maximizing Rigor (2) • Deal with confounding • Confounders: phenomena that cannot be designed out of a study but that influence outcome measures • May not even be measurable • Dealt with most often by randomization • Some statistical techniques can help manage confounding • Confounders are confounding… • Accept this right now… there has never been, nor will there ever be, a perfect study

  8. Steps in Designing a Study (1) • Initial conception steps: • Specify the study hypothesis and research question • Identify the intervention to be studied • Select study design to best identify causality given the time and resources available • Establish characteristics that define group membership (inclusion, exclusion criteria)

  9. Steps in Designing a Study (2) • Select a comparator group • Consider other design features • Identify outcome measures of interest (dependent variables) • Establish sample size (discussion later in stats section) • Select appropriate data sources • Plan for data analysis
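
Slide 9 defers sample size to the stats section, but a rough planning calculation can be done early. Below is a minimal sketch, not from the presentation, using the standard normal-approximation formula for comparing two means; the effect size, standard deviation, alpha, and power values are hypothetical placeholders.

```python
import math
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate patients per group to detect a mean difference delta
    with standard deviation sigma, using the normal approximation:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sigma / delta)^2"""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided critical value (~1.96)
    z_beta = norm.ppf(power)           # quantile for the desired power (~0.84)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

# Hypothetical inputs: detect a 10-point difference when the SD is 25
print(n_per_group(delta=10, sigma=25))  # 99 per group
```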

  10. Study Design Summary: Basic Study Designs

  11. Descriptive “Study” • Reports of cases (individual or a series) that highlight observations • May be used to gather data • No comparisons made • No causality established • Can serve as a launching point for further investigation • Weakest rigor of all designs

  12. Observational Studies • Various “epidemiological” designs, including: • Case-control • Cohort • Cross-sectional

  13. Observational Studies • Investigators are “bystanders” • No active intervention • Study natural course of disease • Gather data • Offers insights into course of disease • May utilize comparisons to examine or explain medical problems

  14. Observational Studies: Case-Control vs Cohort • Case-control: retrospective; begins with the outcome; compares with patients without the outcome; evaluates risks in the past • Cohort: prospective; begins with patients who do not have the outcome; evaluates risk factors; follows groups and evaluates development of the outcome
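
The two designs also yield different effect measures: case-control studies report odds ratios (outcome status is fixed by sampling), while cohort studies can report relative risks. A minimal sketch follows, not from the presentation; the 2x2 counts are hypothetical.

```python
def odds_ratio(a, b, c, d):
    """Case-control: odds of prior exposure among cases vs controls.
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls"""
    return (a * d) / (b * c)

def relative_risk(a, b, c, d):
    """Cohort: risk of developing the outcome, exposed vs unexposed.
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without"""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical 2x2 counts
print(odds_ratio(30, 10, 70, 90))     # ~3.86
print(relative_risk(30, 70, 10, 90))  # 3.0
```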

  15. Terminology Sidebar • Prevalence – proportion of existing cases of a medical condition in the population at risk at a specified point in time • Incidence – ratio of the number of new cases of a condition to the total number of persons in the population at risk for the condition during a specified period
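
To make the two definitions concrete, here is a short worked example with hypothetical population figures (not from the presentation):

```python
# Hypothetical figures for one population over one year
population_at_risk = 10_000
existing_cases = 250   # cases present at the specified point in time
new_cases = 40         # new cases arising during the follow-up year

prevalence = existing_cases / population_at_risk
incidence = new_cases / (population_at_risk - existing_cases)  # among those still at risk

print(f"Prevalence: {prevalence:.1%}")         # 2.5%
print(f"Incidence: {incidence:.2%} per year")  # 0.41% per year
```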

  16. Observational Studies: Case-Control

  17. Observational Studies: Case-Control • Advantages: generates hypotheses; provides initial explanations; efficient (no need to wait for the outcome to develop); can evaluate rare diseases; avoids ethical dilemmas • Disadvantages: retrospective (prone to patient recall bias and investigator bias); cases and controls must be chosen carefully from the same population; past documentation may be inconsistent; weak design

  18. Observational Studies: Cohort

  19. Observational Studies: Cohort • Investigator plays a more active role, but does NOT intervene • Also known as a “follow-up” design • Longitudinal • Assesses incidence • Begins with patients who have yet to experience the disease or outcome

  20. Observational Studies: Cohort • Advantages: fewer inherent biases than case-control; selection bias slight; less recall bias; data collection defined prospectively; investigator has greater control; stronger design than case-control • Disadvantages: patients may be lost to follow-up; risk factors may change; takes time; expensive; subject selection bias; surveillance bias; habits may change

  21. Observational Studies: Cross-Sectional • Ability to measure prevalence • Assesses a study population at a given point in time • Evaluates for presence of disease or outcome • May explore for risk factors

  22. Observational Studies: Cross-Sectional

  23. Observational Studies: Cross-Sectional • Advantages: efficient; saves time and money; simple to perform; establishes prevalence • Disadvantages: subjects may not be representative of the population; participation bias; demonstrates association (NOT cause and effect); does not evaluate historical risks

  24. Experimental Studies • Distinguishing feature: active intervention by the investigator • Designed to measure the effect of a specific intervention • Comparator (control) group usually measured and treated in the same way as the intervention group, with the exception of not receiving the intervention • Differences in observed measures between treatment and control groups establish causality • Explanations for observed differences, other than the intervention, are excluded by design • Quasi-experimental designs are not generally randomized and are less rigorous

  25. Randomized Experimental Design

  26. Strengths of Randomized Experimental Design • Random assignment/allocation manages confounding (balancing underlying characteristics) • Blinding possible • Prospective observation • Ability to manipulate care delivery process • Measurement selection • Statistical strengths (assumptions)
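
To illustrate the first point, here is a minimal sketch of simple 1:1 random allocation; it is not from the presentation, and the patient IDs and seed are hypothetical. Real trials often use block or stratified randomization to keep small arms balanced.

```python
import random

def randomize_1_to_1(patient_ids, seed=42):
    """Shuffle and split patients 1:1; a fixed seed makes the
    allocation reproducible for auditing."""
    rng = random.Random(seed)
    ids = list(patient_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"intervention": ids[:half], "control": ids[half:]}

arms = randomize_1_to_1(f"PT{i:03d}" for i in range(1, 21))  # hypothetical IDs
print(arms["intervention"])
```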

  27. Common Pharmacy Practice Research Designs Quasi-Experimental • Similar to epidemiological designs, and similarly limited in establishing causality • Distinguished from epidemiological design in that interventions are actively applied • Quasi-experimental designs • Post-test only • Post-test with contemporaneous control • Pre-post • Net benefit

  28. Post-test Only Design [Timeline: intervention, followed by a measurement period] • Intervention performed prospectively or retrospectively • Measurement occurs thereafter • No comparator group • Exceptionally common design • Exceptionally weak design

  29. Post-test with Contemporaneous Control [Timeline: Group 1 receives the intervention and is then measured; Group 2 receives no intervention and is measured over the same period] • Intervention performed prospectively in one of two groups • Measurement occurs in both groups • Effect size determined by comparing measures in the two groups • Temporal relationship optional (measurement time frames can differ, although this weakens the design) • Requires similar groups • Somewhat stronger design, but still prone to confounding
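
The effect-size comparison in this design reduces to contrasting the two groups' post-intervention measures. Below is a minimal analysis sketch, not from the presentation, assuming continuous outcomes and a two-sample t-test; the scores are hypothetical.

```python
from scipy import stats

# Hypothetical post-intervention measurements (e.g., symptom scores)
group1 = [4.1, 3.8, 4.5, 3.9, 4.2, 4.0, 4.4, 3.7]  # received the intervention
group2 = [4.9, 5.1, 4.6, 5.3, 4.8, 5.0, 4.7, 5.2]  # no intervention

t, p = stats.ttest_ind(group1, group2)
diff = sum(group1) / len(group1) - sum(group2) / len(group2)
print(f"Mean difference: {diff:.2f}, p = {p:.4f}")
```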

  30. Pre-post Test Design [Timeline: pre-measurement period, intervention, post-measurement period] • Intervention performed prospectively • Measurement occurs before and after the intervention in a single group • Effect size determined by comparing measures from the pre and post periods • No comparator group • Stronger than post-test with or without control • Prone to temporal biases and other influences
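
Because the same patients are measured twice, the pre-post comparison calls for a paired analysis. A minimal sketch, not from the presentation, assuming continuous outcomes and a paired t-test; the scores are hypothetical.

```python
from scipy import stats

# Hypothetical scores for the same patients before and after the intervention
pre  = [62, 58, 71, 65, 60, 68, 63, 59]
post = [55, 52, 66, 60, 57, 61, 58, 54]

t, p = stats.ttest_rel(pre, post)  # paired test: same subjects, two time points
mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(f"Mean change: {mean_change:.1f}, p = {p:.4f}")
```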

  31. Net Benefit Design (1) [Timeline: Group 1 is measured before and after the intervention; Group 2 receives no intervention and is measured over the same pre and post periods] • Intervention performed prospectively in one of two groups • No intervention performed in the comparator group • Measurement occurs before and after the intervention in one group and simultaneously in the comparator group • Effect size determined by comparing the difference in measures (post vs pre) between the two groups

  32. Net Benefit Design (2) [Timeline as in the previous slide] • NB = (Post 1 – Pre 1) – (Post 2 – Pre 2) • Confounders assumed to influence both groups equally • Differences between groups accounted for in the analysis (no direct comparison of groups) • Strongest quasi-experimental design
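
The NB formula on this slide is a difference-in-differences calculation. A minimal sketch follows; the VAS scores are hypothetical, but the formula is the one given above.

```python
def net_benefit(pre1, post1, pre2, post2):
    """NB = (Post 1 - Pre 1) - (Post 2 - Pre 2), computed on group means.
    The control group's change estimates what would have happened without
    the intervention (confounders assumed to influence both groups equally)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(post1) - mean(pre1)) - (mean(post2) - mean(pre2))

# Hypothetical VAS nausea scores (0-100 mm, lower is better)
nb = net_benefit(pre1=[60, 55, 70], post1=[35, 30, 45],
                 pre2=[58, 62, 66], post2=[54, 60, 63])
print(nb)  # -22.0: 22 mm more improvement in the intervention group
```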

  33. Limitations of Observational and Quasi-Experimental Designs • Lack of randomization prevents control of confounding and bias; groups may differ in unmeasurable ways (other than presence of exposure or intervention) • Adjustment for acuity of illness is difficult • Causality not established • Net benefit design logistically challenging

  34. Next Steps… • Selection of study groups (intervention and control) • Other key design features • Selection of measures • Research design “reality check”

  35. Establish Study Group Characteristics • Sample inclusion and exclusion criteria established a priori to best represent population • Homogeneous sample desirable • Less variability in outcome measures • Higher likelihood of statistically significant effect (demonstration of efficacy) • But, less generalizable to dissimilar patients • Heterogeneous sample less desirable but common • Greater variability in outcome measures • Less likely to find statistically significant result • But, easier to generalize to all patients (demonstration of effectiveness)

  36. Selection of Comparator Group (1) • Identification of comparator to establish internal validity • Selection of control group comparable to treatment group in as many ways as possible • Active control alternatives (if placebo not used): • Do nothing (natural disease course) • Usual or routine care • Some other non-routine active control • Historical control • Literature control

  37. Selection of Comparator Group (2) • Placebo appropriate when: disease remits spontaneously; there is a psychosomatic component; assessments are subjective; comparison to the natural course of disease is needed • Placebo inappropriate when: disease morbidity or mortality is too high; it would damage MD-patient trust; the placebo cannot be blinded; there are outcomes research implications

  38. Other Design Features to Consider • Setting of study • Ex: Inpatient vs outpatient clinic • Useful to limit scope of study • Other scope-limiting strategies • Prospective vs retrospective data collection • Blinding • Use of multi-center design

  39. Selection of Measurements • Patient demographics • “Table 1” descriptions • Necessary for confounder analysis • Covariates and other data of interest • Surrogate vs “ultimate” outcomes • Ideally measure the ultimate outcome (fracture rate vs BMD) • In most cases you are stuck with a surrogate; that is acceptable as long as its association with the ultimate outcome is clearly documented • Primary outcome categories (ECHO) • Economic outcomes • Clinical outcomes • Humanistic outcomes

  40. Examples of Economic Outcomes • Direct costs • Acquisition cost associated with providing care • Labor cost associated with providing care • Cost to treat ADRs, acute and long term • Cost of treatment failure, acute and long term • Cost of emergency room and clinic visits • Indirect costs • Patient out-of-pocket costs • Lost workplace productivity

  41. Examples of Clinical Outcomes • Clinical events • Ex: AMI, stroke, ADR • Physiological measures • Ex: BP, LDL • Mortality • All cause vs cause specific • Length of stay • Hospital admission and readmission • ER or clinic visit • Important Note: Outcomes of hospitalization don’t happen in the hospital

  42. Examples of Humanistic Outcomes • Quality-of-life (QoL) measures • Index vs profile • General vs disease-specific • SF-36, EuroQoL common • Functional status (easier to use in practice) • Karnofsky Performance Index, FLIC • Symptom scores • AUA BPH score • Patient satisfaction • Instruments must be validated in the population, setting, and conditions in which they are applied

  43. Check for Appropriate Measurement Selection • List each element selected for measurement • For each element describe: • Why do you need this data element? • Is there another measure that is easier to collect that you could select instead? • How will you analyze this element? • What statistical test will you apply to it? • How will you present the result graphically or in a table?

  44. Research Design “Reality Check” • Is the scope of your study reasonable? • Do you have all the resources you need? • Design assistance? • Data collection assistance? • Information systems support? • Access to data sources? • Analytic support? • Do you need a grant?

  45. Study Design Example • Objective: Evaluate impact of antiemetic protocol implementation in ambulatory oncology clinic. • Intervention: Protocol implementation guiding antiemetic care delivery process. • Design: Net Benefit • Study Groups: Adult ambulatory oncology patients who are chemotherapy naïve

  46. Net Benefit Design [Timeline: Group 1 is measured before and after the intervention; Group 2 receives no intervention and is measured over the same periods] • Group 1: Oncology clinic at health system A in one city • Group 2: Oncology clinic at health system B in another city • Clinics are similar with respect to overall patient mix, volume, and acuity of care • Measures collected for 3 months before implementation of the intervention and for 3 months afterwards

  47. Outcome Measures • Clinical • Proportion of patients who experience zero episodes of vomiting • Visual analog scale for nausea, recorded on a 100 mm line anchored at 0 (no nausea) and 100 (the worst nausea possible) • Economic • Cost of antiemetic therapy • Total cost of care • Humanistic • Satisfaction with care graded on a 5-point scale, anchored at 1 (totally dissatisfied) and 5 (completely satisfied)
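
As one illustration of how the primary clinical endpoint might be analyzed, here is a sketch comparing the proportion of patients with zero vomiting episodes across the two clinics using a chi-square test; the counts are hypothetical, not study results.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: zero vomiting episodes vs. one or more
#                 zero  any
group1_counts = [ 78,   22 ]  # clinic with the antiemetic protocol
group2_counts = [ 61,   39 ]  # comparator clinic

chi2, p, dof, expected = chi2_contingency([group1_counts, group2_counts])
print(f"78% vs 61% with zero episodes, p = {p:.3f}")
```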

  48. Conclusion
