
Selecting a Research Design


Presentation Transcript


  1. Selecting a Research Design

  2. Research Design • Refers to the outline, plan, or strategy specifying the procedure to be used in answering research questions • Determines the when (a procedural sequence) but not the how of:

  3. 1. Sampling Techniques and representativeness of data sources 2. Data Collection • Time frame of measurement • Methods of measurement 3. Data Analysis

  4. Three Major Approaches to Research Designs 1. Descriptive Approach 2. Experimental Approach 3. Quasi-Experimental Approach

  5. 1. Descriptive Approach • Represent or provide accurate characterization of phenomenon under investigation • Provide a “picture” of a phenomenon as it naturally occurs

  6. Key Features: • Not designed to provide information on cause-effect relationships, so internal validity is not a concern • Because the focus is on describing some population or phenomenon, external validity is the major concern

  7. Variations: • Exploratory • Goal: to generate ideas in a field of inquiry that is relatively unknown • Least structured, in order to gather broader descriptive information • Frequently used as the first in a series of studies on a specific topic

  8. Process evaluation • Goal: to identify the extent to which a program (or policy) has been implemented, how that has occurred, and what barriers have emerged • Program as implemented vs. program as intended • Designed to guide the development of a new program (normative), summarize the structure of a program prior to studying its effects, or assess the implementation of a pre-existing program

  9. Strengths (Descriptive): • Generally lower costs (depending upon sample size, number of data sources, and complexity of data collection methods) • Relative ease of implementation • Ability to yield results in a relatively short amount of time • Straightforward data analysis • Results easy to communicate to non-technical audiences

  10. Limitations (Descriptive): • Cannot answer questions of a causal nature

  11. 2. Experimental Approach • Primary purpose is to empirically test the existence of a causal relationship among two or more variables • Systematically vary the independent variable (IV) and measure its effects on the dependent variable (DV)

  12. Kerlinger (1973) MAX-MIN-CON Approach • MAXimize systematic variance (make experimental conditions as different as possible) • MINimize error variance (accuracy of assessment) • CONtrol extraneous systematic variance (homogeneity of conditions)

  13. Key Features: • Random assignment of individuals or entities to the levels or conditions of the study • Control biases at time of assignment • Ensure only independent variable(s) differs between conditions

  14. Key Features (cont): • Emphasis placed on maximizing internal validity by controlling potentially confounding variables • Creation of highly controlled conditions may reduce external validity

  15. Variations: 1. Between-Group Designs A. Post-only design • Subjects randomly assigned to experimental/control groups • Introduction of IV in experimental condition • Measurement of DV (single or multiple instances)

  16. Post-only design (R = random assignment, X = treatment/IV, O = observation of DV)
  Group 1:  R   X   O
  Group 2:  R       O
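A minimal Python sketch (not part of the original slides) of how a post-only randomized comparison might be analyzed: subjects are randomly split into two groups, only the experimental group receives the treatment, and the single post-treatment observation is compared across groups. The group sizes and the +0.5 treatment effect are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pool of 100 subjects, randomly assigned (R) to two conditions.
n = 100
assignment = rng.permutation(n) < n // 2          # True -> experimental group

# Simulated DV (O): the experimental group (X) receives a +0.5 treatment effect.
dv = rng.normal(loc=0.0, scale=1.0, size=n) + 0.5 * assignment

# Post-only comparison: a single between-group test on the post-treatment DV.
t, p = stats.ttest_ind(dv[assignment], dv[~assignment])
print(f"t = {t:.2f}, p = {p:.3f}")
```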

  17. B. Pre- and post- design: • Subjects randomly assigned to experimental/control groups • Preliminary measurement of DV before treatment (check of random assignment) • Introduction of IV in experimental condition • Measurement of DV (single or multiple instances)

  18. Pre- and post- design
  Group 1:  R   O1   X   O2
  Group 2:  R   O1       O2
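A hedged sketch of the pre/post variant, assuming a continuous DV: the pretest (O1) serves as a check on random assignment, and an ANCOVA-style regression adjusts the posttest (O2) for the pretest. The variable names and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 100
treated = rng.permutation(n) < n // 2                     # R: random assignment

pre = rng.normal(size=n)                                  # O1: pretest, before treatment
post = 0.6 * pre + rng.normal(size=n) + 0.5 * treated     # O2: posttest with a +0.5 effect

df = pd.DataFrame({"pre": pre, "post": post, "treated": treated.astype(int)})

# Check of random assignment: pretest means should not differ between groups.
print(df.groupby("treated")["pre"].mean())

# ANCOVA-style analysis: posttest regressed on treatment, adjusting for pretest.
model = smf.ols("post ~ treated + pre", data=df).fit()
print(model.params["treated"])                            # estimated treatment effect
```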

  19. C. Multiple Levels of a Single IV: • Subjects randomly assigned to experimental/control groups • Introduction of multiple levels of the IV in the experimental conditions • Measurement of DV across the different conditions

  20. Multiple levels design
  Group 1:  R   X1   O
  Group 2:  R   X2   O
  Group 3:  R   X3   O
  Group 4:  R        O
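A sketch of how the multiple-levels design might be analyzed, assuming three treatment levels (X1, X2, X3) plus a control and a one-way ANOVA across the four randomized conditions; the level effects and group sizes are made up for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical levels of a single IV: three treatment levels plus a control.
effects = {"X1": 0.2, "X2": 0.5, "X3": 0.8, "control": 0.0}
groups = {name: rng.normal(loc=eff, scale=1.0, size=30)   # O: DV per randomized group
          for name, eff in effects.items()}

# One-way ANOVA across all four conditions.
f, p = stats.f_oneway(*groups.values())
print(f"F = {f:.2f}, p = {p:.3f}")
```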

  21. D. Multiple Experimental and Control Groups (Solomon Four-Group design): • Subjects randomly assigned to experimental/control groups • Preliminary measurement of DV in one experimental/control pair • Introduction of IV in both experimental conditions • Measurement of DV (to assess effects of the pretest)

  22. Solomon Four-Group design
  Group 1:  R   O1   X   O2
  Group 2:  R   O1       O2
  Group 3:  R        X   O2
  Group 4:  R            O2

  23. E. Multiple IVs (Factorial Design): • Subjects randomly assigned to experimental/control groups • Introduction of multiple levels of multiple IVs in the experimental conditions • Measurement of DV across the different conditions (cells)

  24. Multiple IVs (factorial) design
  Group 1:  R   X1   Y1   O
  Group 2:  R   X2   Y2   O
  Group 3:  R   X1   Y2   O
  Group 4:  R   X2   Y1   O
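A sketch of a 2x2 factorial analysis, assuming two hypothetical IVs (X and Y), each with two levels, so subjects are randomized into four cells; a two-way ANOVA estimates both main effects and their interaction. Cell sizes and effect values are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_per_cell = 25

rows = []
# Two hypothetical IVs, X and Y, each with two levels -> four randomized cells.
for x in ("X1", "X2"):
    for y in ("Y1", "Y2"):
        effect = 0.5 * (x == "X2") + 0.3 * (y == "Y2") + 0.4 * (x == "X2") * (y == "Y2")
        dv = rng.normal(loc=effect, scale=1.0, size=n_per_cell)
        rows += [{"X": x, "Y": y, "dv": v} for v in dv]

df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of each IV plus their interaction.
model = smf.ols("dv ~ C(X) * C(Y)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```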

  25. 2. Within-Group Designs • Repeated Measures • Each Subject is presented with two or more experimental conditions • Comparisons are made between conditions within the same group of subjects

  26. Within-subjects design
  Subject 1:  R   X1   O1   X2   O2
  Subject 2:  R   X1   O1   X2   O2
  Subject 3:  R   X1   O1   X2   O2
  Subject 4:  R   X1   O1   X2   O2
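A sketch of the within-subjects (repeated measures) comparison, assuming every subject is measured under both conditions so that subject-level variability is shared across the two measurements; a paired test then compares the conditions within subjects. The +0.4 condition effect and sample size are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_subjects = 40

# Each subject is measured under both conditions (X1 then X2); the subject's
# baseline is shared across the two measurements.
baseline = rng.normal(size=n_subjects)
o1 = baseline + rng.normal(scale=0.5, size=n_subjects)          # DV under X1
o2 = baseline + 0.4 + rng.normal(scale=0.5, size=n_subjects)    # DV under X2 (+0.4 effect)

# Within-subjects comparison: paired test on the two conditions.
t, p = stats.ttest_rel(o1, o2)
print(f"t = {t:.2f}, p = {p:.3f}")
```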

  27. Strengths (Experimental): • Experimental control over threats to internal validity • Ability to rule out possible alternative explanations of effects

  28. Limitations (Experimental): • The more a study resembles a controlled experiment, the less it resembles a usual real-world intervention (decreased external validity) • Experimental Realism = engagement of the subject in the experimental situation • Mundane Realism = correspondence of the experimental situation to ‘real-world’ or common experience

  29. Limitations (Experimental, cont): • Randomized experiments can be difficult to implement with integrity (for practical or ethical reasons) • Attrition over time can turn them into non-equivalent designs

  30. 3. Quasi-Experimental Approach • Primary purpose is to empirically test the existence of a causal relationship among two or more variables • Employed when random assignment and experimental control over the IV are impossible or impractical

  31. Key Features: • Other design features are substituted for the randomization process • Quasi-experimental comparison base: • Addition of non-equivalent comparison groups • Addition of pre- and post-treatment observations

  32. Variations: A. Non-equivalent Comparison Group • Post-only, pre-post, or multiple-treatment versions • No random assignment into experimental/control conditions • “Create” comparison groups via selection criteria or an eligibility protocol • Partial out confounding variance (statistical control; see the sketch below)
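A sketch of statistical control in a non-equivalent comparison group design, assuming a single measured confounder that drives both program participation and the outcome; comparing the naive and covariate-adjusted estimates shows how known confounding variance is partialled out. Variable names and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 300

# Hypothetical confounder (e.g., prior need) that drives BOTH selection into the
# program and the outcome -- no random assignment here.
confound = rng.normal(size=n)
in_program = (confound + rng.normal(scale=0.5, size=n)) > 0
outcome = 0.5 * in_program + 0.8 * confound + rng.normal(size=n)

df = pd.DataFrame({"outcome": outcome,
                   "in_program": in_program.astype(int),
                   "confound": confound})

# The naive comparison is biased by selection; adding the measured confounder
# "partials out" the known confounding variance (statistical control).
naive = smf.ols("outcome ~ in_program", data=df).fit()
adjusted = smf.ols("outcome ~ in_program + confound", data=df).fit()
print(naive.params["in_program"], adjusted.params["in_program"])
```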

  33. B. Interrupted Time Series • Multiple observations before and after a treatment or intervention is introduced • Examine changes in data trends (slope and intercept; see the regression sketch below) • Investigate effects of both onset and offset of interventions

  34. [Figure: Interrupted Time Series. Outcome plotted over time, with the intervention point marked where the trend is interrupted]
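A sketch of a segmented (interrupted time series) regression, assuming evenly spaced observations and an intervention at a known time point; the model separates the change in level (intercept) from the change in slope. The series and effect sizes are simulated for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_obs = 48                      # e.g., monthly observations
t = np.arange(n_obs)
after = (t >= 24).astype(int)   # intervention introduced at time 24

# Simulated series: pre-intervention trend, then a level drop and a slope change.
y = 10 + 0.10 * t - 2.0 * after - 0.15 * after * (t - 24) + rng.normal(scale=0.5, size=n_obs)

df = pd.DataFrame({"y": y, "t": t, "after": after, "t_since": after * (t - 24)})

# Segmented regression: 'after' captures the change in level (intercept),
# 't_since' the change in slope after the intervention.
model = smf.ols("y ~ t + after + t_since", data=df).fit()
print(model.params[["after", "t_since"]])
```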

  35. C. Regression Discontinuity • Separate the sample based on some criterion (a pre-test cut-off) • One group is administered the treatment, the other serves as the control group • Examine trends in both groups; absent a treatment effect, the trends are hypothesized to be equivalent at the cut-off (see the sketch below)

  36. [Figure: Regression Discontinuity. Post-test scores plotted against pre-test scores, with a cut-off point separating the no-treatment and treatment groups]
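A sketch of a regression discontinuity analysis, assuming assignment is determined entirely by a pre-test cut-off; centering the running variable at the cut-off and allowing separate slopes on each side lets the treatment coefficient estimate the jump at the cut-off. The cut-off value, slopes, and jump are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 500

pretest = rng.uniform(0, 100, size=n)
cutoff = 50
treated = (pretest >= cutoff).astype(int)     # assignment determined solely by the cut-off

# Outcome depends smoothly on the pretest; the treatment adds a jump at the cut-off.
posttest = 20 + 0.5 * pretest + 5.0 * treated + rng.normal(scale=5, size=n)

df = pd.DataFrame({"post": posttest,
                   "centered": pretest - cutoff,   # center the running variable at the cut-off
                   "treated": treated})

# Allow separate slopes on each side; the 'treated' coefficient estimates the
# discontinuity (treatment effect) at the cut-off.
model = smf.ols("post ~ treated * centered", data=df).fit()
print(model.params["treated"])
```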

  37. Strengths (Quasi): • Approximates an experimental design, thereby allowing causal inference • Gains internal validity through statistical control rather than experimental control • Usable where experimental designs are impractical or unethical

  38. Limitations (Quasi): • Uncertainty about the comparison base: is it biased? • Statistical control is based on known factors; unknown or unmeasurable factors remain a threat to validity • Data collection schedule and measures are very important
