
Experiment Basics: Variables



Presentation Transcript


  1. Experiment Basics: Variables Psych 231: Research Methods in Psychology

  2. Journal Summary 1 due in labs this week • Don’t forget Quiz 6 (due Fri) Reminders

  3. Scales of measurement • Errors in measurement • Reliability & Validity • Sampling error Measuring your dependent variables

  4. Example: Measuring intelligence? • How do we measure the construct? • How good is our measure? • How does it compare to other measures of the construct? • Is it a self-consistent measure? Internet IQ tests: Are they valid? (The Guardian Nov. 2013) Measuring the true score

  5. In search of the “true score” • Reliability • Do you get the same value with multiple measurements? • Consistency – getting roughly the same results under similar conditions • Validity • Does your measure really measure the construct? • Is there bias in our measurement? (systematic error) Errors in measurement

  6. Bull's eye = the "true score" for the construct (e.g., a person's intelligence) • Dart throw = a measurement (e.g., trying to measure that person's intelligence) Dartboard analogy

  7. Reliability = consistency • Validity = measuring what is intended • Bull's eye = the "true score" for the construct • Estimate of the true score = the average of all of the measurements • An unreliable, invalid measure: the dots (measurements) are spread out, and the estimate differs from the true score (measurement error) Dartboard analogy

  8. Bull's eye = the "true score" • Reliability = consistency • Validity = measuring what is intended • Three panels: reliable & valid; unreliable & invalid; reliable but invalid (biased) Dartboard analogy

  9. In search of the “true score” • Reliability • Do you get the same value with multiple measurements? • Consistency – getting roughly the same results under similar conditions • Validity • Does your measure really measure the construct? • Is there bias in our measurement? (systematic error) Errors in measurement

  10. True score + measurement error • A reliable measure will have a small amount of error • Multiple “kinds” of reliability • Test-retest • Internal consistency • Inter-rater reliability Reliability
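To make the true-score idea concrete, here is a minimal Python sketch (an editorial addition, not part of the original deck) of the model observed score = true score + measurement error; the true score of 100, the error spread, and the number of measurements are all made-up values.

```python
import random
import statistics

random.seed(231)

true_score = 100   # the construct's "bull's eye" (made-up, IQ-like value)
error_sd = 5       # spread of the random measurement error (assumed)

# Each observed score = true score + random measurement error.
measurements = [true_score + random.gauss(0, error_sd) for _ in range(50)]

# A reliable measure has a small error component, and the average of many
# measurements is a good estimate of the true score (the dartboard's bull's eye).
print("one measurement:           ", round(measurements[0], 1))
print("average of 50 measurements:", round(statistics.mean(measurements), 1))
```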

  11. Test-retest reliability • Test the same participants more than once • Measurement from the same person at two different times • Should be consistent across different administrations Reliability
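A hedged illustration of how test-retest reliability is usually quantified: correlate the same people's scores from the two administrations. The simulated scores and the pearson() helper below are invented for this sketch.

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(231)

# Simulated true scores for 30 participants, plus measurement error at each administration.
true_scores = [random.gauss(100, 15) for _ in range(30)]
time1 = [t + random.gauss(0, 5) for t in true_scores]   # small error -> reliable measure
time2 = [t + random.gauss(0, 5) for t in true_scores]

# Test-retest reliability: correlation between the two administrations.
print("test-retest r:", round(pearson(time1, time2), 2))
```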

  12. Internal consistency reliability • Multiple items testing the same construct • Extent to which scores on the items of a measure correlate with each other • Cronbach’s alpha (α) • Split-half reliability • Correlation of score on one half of the measure with the other half (randomly determined) Reliability
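As a rough sketch of the two statistics named on this slide, the code below computes Cronbach's alpha and a split-half correlation from simulated item scores; the 4-item scale and its noise levels are assumptions, not course data.

```python
import random
import statistics

random.seed(231)

# Simulated 4-item scale for 20 people: every item reflects the same underlying
# construct plus item-specific noise (all numbers invented for illustration).
people = [random.gauss(0, 1) for _ in range(20)]
items = [[p + random.gauss(0, 0.5) for p in people] for _ in range(4)]

# Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
k = len(items)
totals = [sum(scores) for scores in zip(*items)]
alpha = (k / (k - 1)) * (1 - sum(statistics.variance(it) for it in items)
                         / statistics.variance(totals))
print("Cronbach's alpha:", round(alpha, 2))

# Split-half reliability: correlate one half of the items with the other half
# (statistics.correlation requires Python 3.10+).
half1 = [items[0][i] + items[1][i] for i in range(len(people))]
half2 = [items[2][i] + items[3][i] for i in range(len(people))]
print("split-half r:", round(statistics.correlation(half1, half2), 2))
```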

  13. Inter-rater reliability • At least 2 raters observe behavior • Extent to which raters agree in their observations • Are the raters consistent? • Requires some training in judgment • (Example figure: two raters judging the same video clip, "Funny" vs. "Not very funny") Reliability
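One common way to quantify inter-rater agreement is percent agreement, optionally corrected for chance with Cohen's kappa (kappa is not named on the slide, so treat it as an added illustration). The ratings below are made up.

```python
from collections import Counter

# Two raters categorize the same 10 behaviors (ratings invented for illustration).
rater_a = ["funny", "funny", "not", "funny", "not", "not", "funny", "funny", "not", "funny"]
rater_b = ["funny", "not",   "not", "funny", "not", "funny", "funny", "funny", "not", "funny"]
n = len(rater_a)

# Percent agreement: how often the two raters give the same category.
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Cohen's kappa corrects agreement for chance: kappa = (p_o - p_e) / (1 - p_e)
count_a, count_b = Counter(rater_a), Counter(rater_b)
p_expected = sum((count_a[c] / n) * (count_b[c] / n) for c in set(rater_a) | set(rater_b))
kappa = (p_observed - p_expected) / (1 - p_expected)

print("percent agreement:", p_observed)
print("Cohen's kappa:", round(kappa, 2))
```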

  14. In search of the “true score” • Reliability • Do you get the same value with multiple measurements? • Consistency – getting roughly the same results under similar conditions • Validity • Does your measure really measure the construct? • Is there bias in our measurement? (systematic error) Errors in measurement

  15. Does your measure really measure what it is supposed to measure (the construct)? • There are many “kinds” of validity Validity

  16. Many kinds of Validity • Construct • Internal • External • Face • Criterion-oriented • Predictive • Convergent • Concurrent • Discriminant Types of Validity (~19 mins)

  17. Many kinds of Validity • Internal validity: "The degree to which a study provides causal information about behavior." • External validity: "The degree to which the results of a study apply to individuals and realistic behaviors outside of the study." • (Other kinds: construct, face, criterion-oriented, predictive, convergent, concurrent, discriminant)

  18. At the surface level, does it look as if the measure is testing the construct? “This guy seems smart to me, and he got a high score on my IQ measure.” Face Validity

  19. Usually requires multiple studies, a large body of evidence that supports the claim that the measure really tests the construct Construct Validity Research summary - Construct validity

  20. Did the change in the DV result from the changes in the IV or does it come from something else? • The accuracy of the results Internal Validity

  21. Experimenter bias & reactivity • History – an event happens during the experiment • Maturation – participants get older (and change in other ways) • Selection – nonrandom selection may lead to biases • Mortality (attrition) – participants drop out or can't continue • Regression toward the mean – extreme performance is often followed by performance closer to the mean (see the simulation sketch below); e.g., the SI cover jinx | Madden Curse Threats to internal validity Nerd out Wednesdays: Threats to Internal Validity video series
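Regression toward the mean can be seen in a simple simulation (an editorial addition with made-up score distributions): select the people who did best on a first test and, with no intervention at all, their retest average falls back toward the overall mean.

```python
import random
import statistics

random.seed(231)

# Each observed score = stable true score + luck (random error); values are invented.
true_scores = [random.gauss(100, 10) for _ in range(1000)]
test1 = [t + random.gauss(0, 10) for t in true_scores]
test2 = [t + random.gauss(0, 10) for t in true_scores]

# Take the people who scored in roughly the top 5% on test 1 ("extreme performance").
cutoff = sorted(test1)[-50]
extreme = [i for i, score in enumerate(test1) if score >= cutoff]

print("extreme group, test 1 mean:", round(statistics.mean(test1[i] for i in extreme), 1))
print("same people,  test 2 mean:", round(statistics.mean(test2[i] for i in extreme), 1))
# Without any treatment, the retest mean drifts back toward the overall mean of 100,
# because part of each extreme test-1 score was good luck that does not repeat.
```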

  22. Are experiments “real life” behavioral situations, or does the process of control put too much limitation on the “way things really work?” Example: Measuring driving while distracted External Validity

  23. Variable representativeness • Relevant variables for the behavior studied along which the sample may vary • Subject representativeness • Characteristics of sample and target population along these relevant variables • Setting representativeness • Ecological validity - are the properties of the research setting similar to those outside the lab External Validity

  24. Scales of measurement • Errors in measurement • Reliability & Validity • Sampling error Measuring your dependent variables

  25. Errors in measurement • Sampling error • Population = everybody that the research is targeted to be about • Sample = the subset of the population that actually participates in the research Sampling

  26. Sampling makes data collection manageable (population → sample) • Inferential statistics are used to generalize from the sample back to the population • This allows us to quantify the sampling error Sampling
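A small sketch of sampling error, using an invented population: each random sample's mean misses the population mean by a different amount, and that typical miss shrinks as the sample size grows.

```python
import random
import statistics

random.seed(231)

# A made-up population of 10,000 scores; the population mean is the target we
# would like to know but usually cannot measure directly.
population = [random.gauss(100, 15) for _ in range(10_000)]
print("population mean:", round(statistics.mean(population), 1))

# Draw many samples of each size and see how far their means scatter around the
# population mean: that scatter is the sampling error.
for n in (25, 100, 400):
    sample_means = [statistics.mean(random.sample(population, n)) for _ in range(200)]
    print(f"n={n}: typical sampling error is about {statistics.stdev(sample_means):.1f}")
```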

  27. Goals of “good” sampling: • Maximize Representativeness: • To what extent do the characteristics of those in the sample reflect those in the population • Reduce Bias: • A systematic difference between those in the sample and those in the population • Key tool: Random selection Sampling

  28. Probability sampling – has some element of random selection: • Simple random sampling • Systematic sampling • Stratified sampling • Non-probability sampling – susceptible to biased selection: • Convenience sampling • Quota sampling Sampling Methods (a code sketch of these methods follows slide 33)

  29. Every individual has an equal and independent chance of being selected from the population Simple random sampling

  30. Selecting every nth person Systematic sampling

  31. Step 1: Identify groups (clusters) • Step 2: Randomly select from each group (sampling individuals from every group this way is usually called stratified sampling; cluster sampling proper randomly selects whole groups) Cluster sampling

  32. Use the participants who are easy to get Convenience sampling

  33. Step 1: Identify the specific subgroups • Step 2: Take participants from each group until the desired number of individuals is reached Quota sampling
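The sketch below (toy data, not course material) walks through the sampling methods from slides 28-33: simple random, systematic, and stratified selection all involve randomness, while the quota/convenience version just takes whoever is easiest to get.

```python
import random

random.seed(231)

# A toy sampling frame: 100 hypothetical students, half juniors and half seniors.
population = [{"id": i, "year": "junior" if i < 50 else "senior"} for i in range(100)]

# Simple random sampling: every individual has an equal chance of selection.
simple = random.sample(population, 10)

# Systematic sampling: start at a random point, then take every nth person.
n = 10
start = random.randrange(n)
systematic = population[start::n]

# Stratified sampling: identify the groups, then randomly sample from each one.
juniors = [p for p in population if p["year"] == "junior"]
seniors = [p for p in population if p["year"] == "senior"]
stratified = random.sample(juniors, 5) + random.sample(seniors, 5)

# Convenience/quota sampling has no random selection: e.g., take the first five of
# each year who happen to show up (here, the lowest ids) -- susceptible to bias.
quota = juniors[:5] + seniors[:5]

print("systematic sample ids:", [p["id"] for p in systematic])
```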

  34. Independent variables • Dependent variables • Measurement • Scales of measurement • Errors in measurement • Extraneous variables • Control variables • Random variables • Confound variables Variables

  35. Control variables • Holding things constant – controls for excessive random variability • Random variables – may freely vary, to spread variability equally across all experimental conditions • Randomization – a procedure that assures that each level of an extraneous variable has an equal chance of occurring in all conditions of observation (see the sketch below) • Confound variables – variables that haven't been accounted for (manipulated, measured, randomized, controlled) that can impact changes in the dependent variable(s); they co-vary with both the dependent AND an independent variable Extraneous Variables
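As a rough illustration of the randomization procedure described above, the following sketch randomly assigns hypothetical participants to two conditions and checks that an extraneous variable (age, invented here) ends up spread roughly evenly across them rather than co-varying with the IV.

```python
import random
import statistics

random.seed(231)

# 40 hypothetical participants whose age is an extraneous variable we are not
# manipulating or controlling, just letting vary.
participants = [{"id": i, "age": random.randint(18, 40)} for i in range(40)]

# Randomization: shuffle, then split, so every participant (and thus every level
# of the extraneous variable) has an equal chance of landing in each condition.
random.shuffle(participants)
condition_a = participants[:20]
condition_b = participants[20:]

print("mean age, condition A:", statistics.mean(p["age"] for p in condition_a))
print("mean age, condition B:", statistics.mean(p["age"] for p in condition_b))
# On average the two conditions end up with similar ages, so age spreads its
# variability across conditions instead of co-varying with the IV (a confound).
```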

  36. Divide into two groups: • men • women • Instructions: Read aloud the COLOR that the words are presented in. When done raise your hand. • Women first. Men please close your eyes. • Okay ready? Colors and words

  37. Blue Green Red Purple Yellow Green Purple Blue Red Yellow Blue Red Green List 1

  38. Okay, now it is the men’s turn. • Remember the instructions: Read aloud the COLOR that the words are presented in. When done raise your hand. • Okay ready?

  39. Blue Green Red Purple Yellow Green Purple Blue Red Yellow Blue Red Green List 2

  40. So why the difference between the results for men versus women? • Is this support for a theory that proposes: • “Women are good color identifiers, men are not” • Why or why not? Let’s look at the two lists. Our results

  41. List 1 (Women) and List 2 (Men) contain the same words – Blue Green Red Purple Yellow Green Purple Blue Red Yellow Blue Red Green – but in one list the ink colors matched the words (Matched) and in the other they did not (Mis-Matched)

  42. Confound: the IV and the other variable co-vary together • What resulted in the performance difference? • Our manipulated independent variable (men vs. women)? • Or the other variable (match/mis-match)? • Because the two variables are perfectly correlated, we can't tell • This is the problem with confounds

  43. What DIDN'T result in the performance difference? • Extraneous variables • Control – # of words on the list; the actual words that were printed • Random – age of the men and women in the groups • These are not confounds, because they don't co-vary with the IV

  44. Pilot studies • A trial run-through • Don't plan to publish these results, just try out the methods • Manipulation checks • An attempt to directly measure whether the IV really affects the DV • Look for correlations with other measures of the desired effects "Debugging your study"
