
Research design: basic principles and specific designs

Learn about different types of control and validity in research design, as well as the characteristics and procedures for randomized controlled trials. Understand how to control variables through manipulation, elimination or inclusion, randomization, and statistical controls. Explore the different types of validity and the threats to internal and external validity.


Presentation Transcript


  1. Research design: basic principles and specific designs
Learning outcomes
• Distinguish different types of control and validity
• State different threats to validity
• State characteristics of and procedures for randomised controlled trials

  2. Outline
• Control
• Validity
• Randomised controlled trial
• Experimental design

  3. Control
• Explanatory research: try to explain the variability of a phenomenon (dependent variable) by attributing its variation to its presumed causes (independent variables)
• However, there are countless other variables that may be affecting the phenomenon, potentially leading to incorrect conclusions
• Researchers use various techniques to control the effect of these other variables
• Explanatory variables: IVs, DV
• Extraneous variables:
  • Controlled variables – controlled by the researcher
  • Confounding variables – remain uncontrolled and are confounded with the explanatory variables

  4. How to control (1) – Manipulation
The researcher exercises control by manipulating the IV
• Only possible in experimental and quasi-experimental research
• Decisions on the choice of manipulations depend on theoretical and practical considerations
• Researchers may differ in the definition of the same variable or in the manipulation of an agreed variable
• The chosen manipulation needs to be applied consistently
• Complex instructions may result in poor or different comprehension by participants

  5. How to control (2) – Elimination or inclusion
• Elimination: confounding variables are converted to a constant/held constant
• Inclusion: an extraneous variable (e.g. gender) is included in the design in order to study its effects on the DV
  • The separate effects of the IV and the included variable, and their joint effect (‘interaction’), can be studied (‘factorial designs’)
  • Control through inclusion produces more generalisable results

  6. How to control (3) – Randomization
• Random assignment: give each unit (participant) in the sample an equal probability of being assigned to one of the treatments/levels of an IV (this is different from random selection: the use of random samples drawn from a population)
• Random-number tables can be used for random assignment (see the sketch below)
• Because of random assignment, all groups are expected to be equally distributed with respect to all variables (gender, race, etc.)
• However, this expectation refers to the long run and not to a specific event
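
As an illustration of random assignment, here is a minimal Python sketch that plays the role of a random-number table; the participant labels, group sizes and seed are invented for this example:

```python
# Hypothetical example: randomly assign 20 participants to two
# treatment levels with equal probability.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]
random.seed(42)               # fixed seed so the allocation is reproducible
random.shuffle(participants)  # random order = random assignment

# First half -> treatment, second half -> control (balanced groups)
treatment = participants[:10]
control = participants[10:]
print("Treatment:", treatment)
print("Control:  ", control)
```

In the long run such allocations balance the groups on all variables, measured or not, which is exactly the point of the final two bullets above.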

  7. How to control (4) – Statistical controls in data analysis
• Example of a statistical control: partial correlation
  • The correlation between two variables after controlling for one or more other variables
  • Example: height and academic achievement are correlated because both are correlated with maturation
  • A correlation between two variables may be mainly due to a common cause – a ‘spurious correlation’; partial correlation can ‘remove’ a spurious correlation
• Another example of a statistical control: analysis of covariance
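
A sketch of the height–achievement example, with data simulated so that maturation (age) is the common cause; the partial correlation is computed by correlating the residuals after the control variable has been regressed out:

```python
# Simulated illustration of a spurious correlation and its removal.
import numpy as np

rng = np.random.default_rng(0)
age = rng.normal(10, 2, 500)                      # maturation: common cause
height = 100 + 5 * age + rng.normal(0, 5, 500)
achievement = 20 + 3 * age + rng.normal(0, 5, 500)

def residuals(y, x):
    """Residuals of y after removing a linear effect of x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (intercept + slope * x)

r_raw = np.corrcoef(height, achievement)[0, 1]
r_partial = np.corrcoef(residuals(height, age),
                        residuals(achievement, age))[0, 1]
print(f"raw r = {r_raw:.2f}, partial r (age controlled) = {r_partial:.2f}")
```

The raw correlation is substantial, while the partial correlation is close to zero: controlling for the common cause ‘removes’ the spurious association.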

  8. Validity
• Statistical conclusion validity: correctness of using and interpreting results of statistical tests of significance
• Construct validity: correspondence between a measure (DV) or manipulation (IV) and the construct to be measured or manipulated
• Internal validity: validity of assertions regarding the effects of the IV(s) on the DV
• External validity: generalisability of findings to or across, for example, target populations, settings and times

  9. Threats to internal validity
• History: events that took place during a study that might affect its outcome
• Maturation: changes that participants undergo with the passage of time
• Testing: effect of repeatedly measuring participants on the same variable(s)
• Instrumentation: differences in outcomes between different treatments as a result of aspects of the instruments used

  10. Threats to internal validity (2)
Regression towards the mean:
• tendency of participants to score closer to the mean on a second variable, for example a re-test
• occurs when two variables are not perfectly correlated, for example scores on two different test occasions
• due to measurement error or factors unique to each variable
• particularly critical when participants are studied for their extreme score on a particular (first) variable, for example a first test (see the simulation below)
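
A small simulation (assumed data, not from the slides) makes regression towards the mean concrete: true ability is constant, each test adds independent measurement error, and a group selected for extreme first-test scores looks less extreme on the re-test:

```python
# Simulated demonstration of regression towards the mean.
import numpy as np

rng = np.random.default_rng(1)
ability = rng.normal(100, 10, 10_000)           # stable true scores
test1 = ability + rng.normal(0, 10, 10_000)     # first test occasion
test2 = ability + rng.normal(0, 10, 10_000)     # re-test

extreme = test1 > 125                           # selected for extreme scores
print(f"test1 mean of extreme group: {test1[extreme].mean():.1f}")
print(f"test2 mean of same group:    {test2[extreme].mean():.1f}")  # closer to 100
```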

  11. Threats to internal validity (3)
• Selection: non-random assignment of participants to treatments
• Mortality: attrition of participants during a study; a self-selection process
• Diffusion or imitation of treatments: participants in a particular treatment or control group learn about another treatment not meant for them and may succeed in getting (another) treatment
• Compensatory rivalry or resentful demoralisation (as a result of diffusion): participants deliberately perform better or worse

  12. External validity
• Internal validity is a necessary, but not sufficient, condition for external validity
• Generalising from representative samples to populations; validity depends on the sample-selection procedures used
• Probability sampling:
  • (1) Every element of the population of interest has a probability > 0 of being selected
  • (2) Sampling is by random selection
  • Because of (1) and (2): protection against selection bias, and obtained sample statistics are valid estimates of population parameters
• Generalising across populations: there is no sound basis for this
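
To keep random selection apart from the random assignment sketched earlier, here is a minimal sketch of drawing a simple random sample from a hypothetical sampling frame; every element has the same non-zero probability of selection:

```python
# Simple random sampling from a population frame (illustrative only).
import random

population = list(range(100_000))          # stand-in for a sampling frame
random.seed(7)
sample = random.sample(population, k=400)  # selection without replacement
print(len(sample), "sampled; first five:", sample[:5])
```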

  13. Threats to external validity
• Treatment-attribute interaction: the effect of a treatment depends on participant variables (e.g. gender, education)
• Treatment-setting interaction: the effect of a treatment depends on the environment in which the research takes place
• Multiple-treatment interference:
  • Simultaneous administration of more than one treatment; the effect of one treatment on the DV may affect that of the other treatment
  • Subsequent administration of different treatments; carry-over effect – one treatment has an effect lasting into subsequent occasions
• Over-use of special participant groups

  14. Single-stage models
[Path diagram (1): SES, MA and MOT as intercorrelated predictors of AA, with an error term; path diagram (2): MOT predicting AA with M as moderator, with an error term]
• DV affected by a set of intercorrelated IVs
• Exogenous variable(s): variability assumed to be determined by causes outside the model
• Endogenous variable: variability explained by exogenous variables and possibly other endogenous variables
• Data analysis (1): (standard) multiple regression analysis
• Data analysis (2): moderation analysis (see the sketch below)
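
A sketch of data analysis (2), moderation, in Python with statsmodels; the data are simulated, the variable names follow the slide's path diagram, and the coefficients are arbitrary:

```python
# Moderation analysis: does MOT moderate the effect of SES on AA?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
ses = rng.normal(0, 1, n)
mot = rng.normal(0, 1, n)
# An interaction term (0.2 * ses * mot) builds moderation into the data
aa = 0.4 * ses + 0.3 * mot + 0.2 * ses * mot + rng.normal(0, 1, n)
df = pd.DataFrame({"SES": ses, "MOT": mot, "AA": aa})

# 'SES * MOT' expands to both main effects plus their interaction;
# the SES:MOT coefficient is the test of moderation
model = smf.ols("AA ~ SES * MOT", data=df).fit()
print(model.params)
```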

  15. Multi-stage models
[Path diagrams (3) and (4): SES affecting AA via MA (and MOT), with error terms on each endogenous variable]
• One or more exogenous variables and two or more endogenous variables
• Stages: the number of endogenous variables
• Data analysis (3, 4): mediation analysis (see the sketch below)
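
A sketch of mediation analysis under the same assumptions (simulated data, slide variable names); the indirect effect is estimated as the product of the two path coefficients:

```python
# Mediation analysis: is the SES -> AA effect transmitted through MA?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
ses = rng.normal(0, 1, n)
ma = 0.5 * ses + rng.normal(0, 1, n)             # path a: SES -> MA
aa = 0.4 * ma + 0.2 * ses + rng.normal(0, 1, n)  # path b: MA -> AA, plus a direct effect
df = pd.DataFrame({"SES": ses, "MA": ma, "AA": aa})

a = smf.ols("MA ~ SES", data=df).fit().params["SES"]
fit = smf.ols("AA ~ MA + SES", data=df).fit()
b = fit.params["MA"]
direct = fit.params["SES"]
print(f"indirect effect a*b = {a * b:.2f}, direct effect = {direct:.2f}")
```

In practice the indirect effect would be tested with a bootstrap confidence interval rather than read off directly.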

  16. Experimental research
Experiments test the cause-effect relationship between two (or more) variables by collecting evidence to demonstrate the effect of one (set of) variable(s) on another
• Independent variables are manipulated by the researcher
• The experimental hypothesis proposes that the independent variable changes the behaviour being measured
• Example: randomised controlled trial (Matthews, 2006)

  17. Between-subjects designs
• With between-subjects designs it is crucial that as few differences as possible exist between the (two or more) groups
• A number of techniques can be used to ensure this:
  • Randomization: participants/entities are randomly allocated to conditions
  • Matching: participants/entities are matched in terms of major characteristics that are correlated with the dependent variable, for example age or experience (see the sketch below)
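
A sketch of matching combined with randomization: participants are ordered on a matching variable (age, invented here), paired with their nearest neighbour, and then randomly allocated within each pair:

```python
# Matched random assignment on a hypothetical age variable.
import random

random.seed(5)
participants = [(f"P{i:02d}", random.randint(18, 65)) for i in range(1, 21)]
participants.sort(key=lambda p: p[1])        # order by the matching variable

treatment, control = [], []
for i in range(0, len(participants), 2):     # walk through matched pairs
    pair = [participants[i], participants[i + 1]]
    random.shuffle(pair)                     # coin flip within the pair
    treatment.append(pair[0][0])
    control.append(pair[1][0])
print("Treatment:", treatment)
print("Control:  ", control)
```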

  18. Within-subjects designs
• Problems of matching may also be overcome by using within-subjects (or repeated-measures) designs
• Each participant/entity acts as her/his own control
• This type of design can lead to other problems of validity:
  • Order effects
  • Carry-over effects
• These confounds can be resolved by counterbalancing (see the sketch below)
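
A minimal counterbalancing sketch: with three hypothetical conditions, full counterbalancing cycles all 3! = 6 orders across participants so that order and carry-over effects are spread evenly over conditions:

```python
# Full counterbalancing of three within-subjects conditions.
from itertools import permutations

conditions = ["A", "B", "C"]
orders = list(permutations(conditions))      # all 6 possible orders

participants = [f"P{i:02d}" for i in range(1, 13)]
for i, p in enumerate(participants):
    # Cycle through the orders so each is used equally often
    print(p, "->", orders[i % len(orders)])
```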

  19. Randomised controlled trials (RCTs)
“A study in which a number of similar people are randomly assigned to two (or more) groups to test a specific drug or treatment. One group (the experimental group) receives the treatment being tested; the other (the comparison or control group) receives an alternative treatment, a dummy treatment (placebo) or no treatment at all. The groups are followed up to see how effective the experimental treatment was. Outcomes are measured at specific times and any difference in response between the groups is assessed statistically. This method is also used to reduce bias.” (https://www.nice.org.uk/glossary?letter=r)

  20. RCT, possible outcomes (Haynes et al., 2012)

  21. RCTs, examples
• The impact of text-messaging on fine repayments
• The effect of three programmes on getting people back to work
• The effect of remedial education on learning
• Improving the user experience of websites through RCTs

  22. RCTs, advantages
• We don’t necessarily know what works (without doing an RCT)
• RCTs don’t have to be costly
• Ethical advantages
• RCTs don’t have to be complicated or difficult to run

  23. Procedure for conducting an RCT
Test
• Identify two or more policy interventions to compare
• Determine the outcome that the policy is intended to influence and how it will be measured
• Decide on the unit of randomisation to intervention and control groups (individual, institution, geographical area)
• Determine how many units are required
• Assign each unit to one of the interventions by randomisation
• Introduce the interventions to the assigned groups
Learn
• Measure the outcome variables and determine the impact of the intervention(s)
Adapt
• Adapt the policy intervention to reflect the findings
• Return to Step 1 to continually improve understanding of what works

  24. 1 Identify intervention(s)
• Consider using current practice as the control or comparison
• Analyse existing research on the effectiveness of relevant interventions
• Conduct the intervention the way it would be done in the ‘real world’
• The intervention must be representative

  25. 2 Define the outcome variable(s)
• Fix the outcome measure(s) at the design stage
• Instrumentation: measure outcome(s) consistently
• Select the (most) relevant outcome(s) for practice
  • Example: an effect on re-offending takes years to establish; second-best surrogate measure: alcohol-service attendance; third-best: referrals by probationers
• Decide on the timing of measurement
  • How much time should be allowed for the intervention(s) to show an effect?

  26. 3 Decide on the randomisation unit
• Randomisation unit in delivering the intervention:
  • Individual people
  • Groups of people within an institution
  • People living within a geographical area
• Choose the lowest possible level to avoid or reduce selection bias
• Choose a higher level:
  • to avoid other threats to internal validity (e.g., resentful demoralisation)
  • if the intervention is practically best delivered at a higher level (see the sketch below)
• Participants must be recruited in advance of randomisation, to avoid selection bias
• Randomisation unit in measuring outcome(s):
  • Choose the lowest possible level, for accuracy
  • Choose a higher level, for practicality
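
A sketch of randomisation at a higher level: whole institutions (hypothetical schools) are allocated to arms, so everyone inside a cluster receives the same treatment:

```python
# Cluster randomisation with invented school names.
import random

clusters = [f"school_{i:02d}" for i in range(1, 11)]
random.seed(11)
random.shuffle(clusters)

# First five clusters -> intervention, rest -> control
assignment = {c: ("intervention" if i < 5 else "control")
              for i, c in enumerate(clusters)}
for school, arm in sorted(assignment.items()):
    print(school, "->", arm)
```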

  27. 4 Determine the required number of units
• Randomisation at a higher level normally requires more people than randomisation at the level of the individual
• Use a power analysis to decide on the sample size (see the sketch below)
• Factors to consider:
  • Larger effect sizes need smaller sample sizes to be detected
  • Many (public-policy) interventions have relatively small effects!
  • Attrition/drop-out reduces the effective sample size
  • Cost of recruitment
  • A modest effect may be justified by the low cost of an intervention
  • A high-cost intervention can be justified if a larger effect is expected
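
A sketch of a power analysis with statsmodels; the effect size (Cohen's d = 0.2, a small effect of the kind typical of policy interventions) and the attrition rate are assumptions chosen purely for illustration:

```python
# Sample-size calculation for a two-group comparison of means.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"required n per group: {n_per_group:.0f}")      # roughly 394

# Attrition reduces the effective sample size, so recruit more
attrition = 0.15
print(f"recruit per group:    {n_per_group / (1 - attrition):.0f}")
```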

  28. 5 Assign each unit to a treatment
• Randomly allocate each unit to a (policy) intervention or control treatment
• This avoids selection bias – the two (or more) groups are expected to be equal on major variables (e.g., gender, education, SES)

  29. 6 Introduce treatments to groups
• Process evaluation: monitor each treatment (in particular the intervention[s]) to make sure it is being delivered as planned
• Deliver the intervention(s) the way they would be done in ‘real life’ (irrespective of this particular RCT) – ‘le mieux est l'ennemi du bien’ (the best is the enemy of the good)

  30. 7 Data analysis and evaluation
• Main outcome measure(s) (see the sketch below)
  • E.g., re-offending
• Process measures
  • Can provide an explanation for the effect on the main outcome measure(s)
  • Can be used to develop new hypotheses for further trials
  • E.g., referrals to agencies
• Qualitative data (optional)
  • Explain findings
  • Support future implementation
  • Guide further research
  • Guide improvement of the intervention(s)
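
A sketch of the main outcome analysis for a binary outcome such as re-offending, comparing the two arms with a two-proportion z-test; the counts below are invented:

```python
# Compare re-offending rates between intervention and control arms.
from statsmodels.stats.proportion import proportions_ztest

reoffended = [45, 62]     # events in intervention, control (hypothetical)
followed_up = [200, 200]  # participants with outcome data per arm

z, p = proportions_ztest(count=reoffended, nobs=followed_up)
print(f"z = {z:.2f}, p = {p:.3f}")
```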

  31. 8 Adapt intervention(s)
• “Any trial that is conducted, completed, and analysed should be deemed successful”, whether it shows:
  • No effect
  • A harmful effect
  • A beneficial effect
• Act on ineffective interventions by:
  • ‘rational disinvestment’
  • finding other, effective, interventions
• Full report:
  • Demonstrate the RCT was a ‘fair test’
  • Document the intervention(s) for others to implement in the future
  • Use a standardised format for RCTs (CONSORT): http://www.consort-statement.org/consort-statement/checklist
  • Publish the trial protocol in advance to allow for improvements to the trial design before the trial starts

  32. 9 Continual improvement
• RCTs as (part of) a continual process of policy innovation and improvement
• Replication of results (e.g., in different populations)
• Identify new ways of improving outcomes:
  • Refinement of the intervention(s) – e.g., improve the design of text messages to (further) increase tax revenue
  • Alternative intervention(s) – e.g., identify which aspects of a policy have the greatest impact

  33. RCT and experimental design
• RCT with randomisation at the level of the individual:
  • (special case of) experimental design
  • Cause: independent variable; effect: dependent variable
• RCT with randomisation at a higher level:
  • (special case of) quasi-experimental design
  • Cause: independent variable; effect: dependent variable
• Correlational (non-experimental) design:
  • No manipulation, no randomisation
  • Therefore, difficult to establish cause and effect
  • Therefore, ‘cause’: predictor; ‘effect’: criterion (outcome)

  34. Preparation for next week
• Study the main concepts in research design:
  • Field and Hole (2003); Pedhazur and Pedhazur-Schmelkin (1991): Ch. 7, 8, 9, 10
  • Lecture notes
  • Practical exercises (research design)
• Study Chapter 3 and do all practical exercises (Field, 2013, Chapter 3)
• References:
  • Field, A., & Hole, G. (2003). How to design and report experiments. London: Sage.
  • Pedhazur, E. J., & Pedhazur-Schmelkin, L. (1991). Measurement, design, and analysis: An integrated approach. Hillsdale, NJ: Erlbaum.

  35. Summary
• Researchers use various techniques to control the effect of extraneous variables, in order to avoid:
  • their effects on outcomes, and
  • incorrect conclusions
• There are various types of validity of research studies
• Threats to validity need to be considered and, if possible, eliminated when designing a study
• Experimental research
• Randomised controlled trial (RCT)

