Presentation Transcript


  1. Experiments: Validity, Reliability and Other Design Considerations Lecture 6

  2. Controls • At least three different meanings: • Controlled Studies • Control in an Experiment • Controls used in a study or analysis

  3. Experiments and Observational Studies • Assigning people to groups vs. observing people who ‘assign’ themselves • Example of pitfalls in experimental assignment: • Portacaval Shunt Example • Examples of key pitfalls of observational studies: • Cervical cancer and circumcision study • Alcohol consumption and lung cancer
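
A minimal sketch (an added illustration, not from the original slides) of the random-assignment step that distinguishes a true experiment from an observational study: subjects are placed into treatment and control groups by chance rather than by their own choices. The function name random_assign and the subject labels are hypothetical; Python is used purely for illustration.

    import random

    def random_assign(subjects, seed=42):
        """Randomly split subjects into a treatment and a control group.

        Random assignment balances unmeasured differences across groups
        in expectation, which self-selection cannot guarantee.
        """
        rng = random.Random(seed)
        shuffled = list(subjects)       # copy so the caller's list is untouched
        rng.shuffle(shuffled)
        midpoint = len(shuffled) // 2
        return shuffled[:midpoint], shuffled[midpoint:]   # (treatment, control)

    # Example: 20 hypothetical subjects split 10 / 10 at random.
    treatment, control = random_assign([f"subject_{i}" for i in range(20)])
    print(len(treatment), len(control))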

  4. Measurement and Design Validity • Measurement Concerns • Construct Validity • Design Concerns • Internal Validity • External Validity • Ecological Validity

  5. Construct Validity How do we know that our independent variable is reflecting the intended causal construct and nothing else? • “Face” validity rests on a subjective judgement that the operationalization looks appropriate • “Content” validity is a more direct check against the relevant content domain for the given construct.

  6. Internal Validity Internal Validity deals with questions about whether changes in the dependent variable were caused by the treatment.

  7. [Diagram: Cause → Effect, surrounded by question marks: did the treatment, and not something else, produce the observed change?]

  8. Threats to Internal Validity • History • An outside event (an unintended additional independent variable) occurring between pre-test and post-test • Maturation • Subjects simply get older or otherwise change during the experiment • Testing • Subjects “get used” to being tested • Regression to the Mean • An issue when subjects are selected for extreme scores on some variable; their scores tend to drift back toward the average on re-measurement, even without any treatment
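
To make the last threat concrete, the simulation sketch below (an added illustration, not from the original slides) selects the lowest-scoring subjects on a noisy pre-test and shows that their post-test average drifts back toward the population mean even though no treatment is applied. All numbers (population size, means, noise levels) are arbitrary choices for the illustration.

    import random

    random.seed(0)

    # Each observed score = stable true ability + transient noise.
    true_ability = [random.gauss(100, 10) for _ in range(10_000)]
    pretest = [t + random.gauss(0, 10) for t in true_ability]
    posttest = [t + random.gauss(0, 10) for t in true_ability]  # no treatment given

    # Select the 500 subjects with the most extreme (lowest) pre-test scores.
    worst = sorted(range(len(pretest)), key=lambda i: pretest[i])[:500]

    mean_pre = sum(pretest[i] for i in worst) / len(worst)
    mean_post = sum(posttest[i] for i in worst) / len(worst)

    # The selected group "improves" on the post-test only because the noise that
    # made their pre-test scores extreme does not repeat: regression to the mean.
    print(f"pre-test mean of lowest scorers:  {mean_pre:.1f}")
    print(f"post-test mean of lowest scorers: {mean_post:.1f}")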

  9. Contamination and Internal Validity • Demand Characteristics • Anything in the experiment that could guide subjects toward the expected outcome • Experimenter Expectancy • Researcher behavior that guides subjects toward the expected outcome (a self-fulfilling prophecy)

  10. General Demand Characteristics • Evaluation Apprehension • “Hawthorne Effect” • Temporary improvement in performance simply because subjects know they are being observed • Solutions • Double-blind experiments • Experiments in natural settings (i.e., subjects do not know they are in an experiment) • Cover stories • Hidden measurements

  11. Reducing the role of the experimenter: solving expectancy effects • Naïve experimenter • Those conducting the study are not aware of the theory or hypotheses in the experiment • Blind • Researcher is unaware of the experimental condition that he/she is administering • Standardization • Experimenter follows a script, and only the script • “Canned” experimenter • Audio/video/print material gives the instructions
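
As an added illustration of the “blind” point above (not part of the original slides), the sketch below replaces real condition names with opaque codes so the researcher administering sessions cannot tell which condition a subject is in; the key is held by someone not involved in running the study. The helper name blind_labels and the condition names are hypothetical.

    import random

    def blind_labels(conditions, seed=7):
        """Map real condition names to opaque codes.

        The administering researcher sees only the codes; the mapping (the
        'key') is kept by a third party until the data are collected.
        """
        rng = random.Random(seed)
        codes = [f"C{i}" for i in range(1, len(conditions) + 1)]
        rng.shuffle(codes)
        return dict(zip(conditions, codes))

    key = blind_labels(["treatment", "placebo"])
    print(key)   # e.g. {'treatment': 'C2', 'placebo': 'C1'}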

  12. And More! • Selection Bias • An issue with non-random selection of subjects • Mortality • Subjects drop out of the experiment before it ends • Diffusion, Sharing of Treatments • The control group unexpectedly obtains the treatment • Other ‘social’ threats? • Compensatory rivalry, resentful demoralization, etc.

  13. Three threats to external validity (generalizability) in experiments External validity: how far does the given experiment generalize to similar groups, individuals, and settings? • Setting • Population • History

  14. Ecological Validity • How closely the experimental setting, materials, and procedures resemble the real-world context they are meant to represent

  15. The Validity Tradeoff: Truth and Myth [Diagram: internal validity weighed against external and ecological validity.] Balance between the types of validity is important, but internal validity is usually (if not always) the more important factor.

  16. True Experiments in the Field • Some experiments can be conducted in a real-world setting while maintaining random assignment and manipulation of treatments • Milliman (1986): study of music tempo and restaurant customer behavior • Cheshire and Antin (2008): study of incentives and contributions of information in an online setting

  17. Natural Experiments • 1998 Total Solar Eclipse: measuring the temperature of the sun’s corona

  18. Pros and Cons of Experiments • Pros • Gives the researcher tight control over independent factors • Allows the researcher to test key relationships with as few confounding factors as possible • Allows for direct causal testing • Cons • Often a very small N: enough for statistical purposes but not ideal for generalizability • Sometimes gives up large amounts of external validity in favor of construct validity and direct causal analysis • Requires a large amount of planning, training, and time, sometimes just to test the relationship between only two factors!

  19. Additional considerations before using experiments • Cost and Effort • Is the effort worth it to test the concepts you are interested in? • Manipulation and Control • Will you actually be able to manipulate the key concept(s)? • Importance of Generalizability • Are you testing theory, or trying to establish a result that holds in the real world?
