
Chapter 4 – Reliability



  1. Chapter 4 – Reliability • Observed Scores and True Scores • Error • How We Deal with Sources of Error: • Domain sampling – test items • Time sampling – test occasions • Internal consistency – traits • Reliability in Observational Studies • Using Reliability Information • What To Do about Low Reliability

  2. Chapter 4 – Reliability • Measurement of human ability and knowledge is challenging because: • ability is not directly observable – we infer ability from behavior • all behaviors are influenced by many variables, only a few of which matter to us

  3. Observed Scores • O = T + e • O = observed score • T = true score • e = error

  4. Reliability – the basics • A true score on a test does not change with repeated testing • A true score would be obtained if there were no error of measurement • We assume that errors are random (equally likely to increase or decrease any test result)

  5. Reliability – the basics • Because errors are random, if we test one person many times, the errors will cancel each other out (positive errors cancel negative errors) • The mean of many observed scores for one person will be that person’s true score
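A minimal sketch of this idea in Python (not from the slides; the true score of 80 and error SD of 5 are invented values): each observed score is the fixed true score plus zero-mean random error, so the mean of many observations converges on the true score.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

true_score = 80.0   # hypothetical true spelling skill (percent correct)
error_sd = 5.0      # spread of the random measurement error

# O = T + e, with e drawn from a zero-mean normal distribution
observed = true_score + rng.normal(loc=0.0, scale=error_sd, size=1000)

print(f"True score:            {true_score:.2f}")
print(f"Mean of 1000 observed: {observed.mean():.2f}")  # close to 80
```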

  6. Reliability – the basics • Example: we want to measure Sarah’s ability to spell English words • We can’t ask her to spell every word in the OED, so… • We ask Sarah to spell a subset of English words • Her % correct estimates her true English spelling skill • But which words should be in our subset?

  7. Estimating Sarah’s spelling ability… • Suppose we choose 20 words randomly… • What if, by chance, we get a lot of very easy words – cat, tree, chair, stand… • Or, by chance, we get a lot of very difficult words – desiccate, arteriosclerosis, numismatics?

  8. Estimating Sarah’s spelling ability… • Sarah’s observed score varies as the difficulty of the random sets of words varies • But presumably her true score (her actual spelling ability) remains constant

  9. Reliability – the basics • Other things can produce error in our measurement • E.g., on the first day that we test Sarah she’s tired • But on the second day, she’s rested… • This would lead to different scores on the two days

  10. Estimating Sarah’s spelling ability… • Conclusion: O = T + e, but e1 ≠ e2 ≠ e3 … • The variation in Sarah’s scores is produced by measurement error • How can we measure such effects – how can we measure reliability?

  11. Reliability – the basics • In what follows, we consider various sources of error in measurement • Different ways of measuring reliability are sensitive to different sources of error

  12. How do we deal with sources of error? • Error due to test items: domain sampling error

  13. How do we deal with sources of error? • Error due to test items • Error due to testing occasions: time sampling error

  14. How do we deal with sources of error? • Error due to test items • Error due to testing occasions • Error due to testing multiple traits: internal consistency error

  15. Domain Sampling error • A knowledge base or skill set containing many items is to be tested, e.g., the chemical properties of foods • We can’t test the entire set of items, so we select a sample of items • That produces domain sampling error, as in Sarah’s spelling test

  16. Domain Sampling error • There is a “domain” of knowledge to be tested • A person’s score may vary depending upon what is included in or excluded from the test

  17. Domain Sampling error • Smaller sets of items may not test the entire knowledge base • Larger sets of items should do a better job of covering the whole knowledge base • As a result, the reliability of a test increases with the number of items on that test

  18. Domain Sampling error • Parallel forms reliability: choose 2 different sets of test items • These 2 sets give you “parallel forms” of the test • Across all people tested, if the correlation between scores on the 2 parallel forms is low, then we probably have domain sampling error (a sketch of the computation follows)
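As an illustrative sketch (the scores below are invented), parallel-forms reliability is just the Pearson correlation between the two forms across test-takers. Test-retest reliability, covered on the next slides, is computed the same way, correlating scores from two occasions.

```python
import numpy as np

# Hypothetical scores for 8 people on two parallel forms of the same test
form_a = np.array([12, 15, 9, 18, 14, 11, 16, 13])
form_b = np.array([11, 16, 10, 17, 13, 12, 15, 14])

# Pearson correlation between the two forms = parallel-forms reliability
r = np.corrcoef(form_a, form_b)[0, 1]
print(f"Parallel-forms reliability: r = {r:.2f}")
```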

  19. Time Sampling error • Test-retest reliability: a person taking a test might be having a very good or very bad day – due to fatigue, emotional state, preparedness, etc. • Give the same test repeatedly & check correlations among scores • High correlations indicate stability – less influence of bad or good days

  20. Time Sampling error • The test-retest approach is only useful for traits – characteristics that don’t change over time • Not all low test-retest correlations imply a weak test • Sometimes, the characteristic being measured varies with time (as in learning)

  21. Time Sampling error • The interval over which the correlation is measured matters • E.g., for young children, use a very short period (< 1 month, in general) • In general, the interval should not be > 6 months • Not all low test-retest correlations imply a weak test • Sometimes, the characteristic being measured varies with time (as in learning)

  22. Time Sampling error • Test-retest approach advantage: easy to evaluate, using correlation • Disadvantage: carryover & practice effects • Carryover: the first testing session influences scores on the next session • Practice: when the carryover effect involves learning

  23. Internal Consistency error • Suppose a test includes both items on social psychology and items requiring mental rotation of abstract visual shapes • Would you expect much correlation between scores on the two parts? • No – because the two ‘skills’ are unrelated

  24. Internal Consistency Approach • A low correlation between scores on 2 halves of a test suggests that the test is tapping two different abilities or traits • A good test has high correlations between scores on its two halves • But how should we divide the test in two to check that correlation?

  25. Internal Consistency error • Split-half method • Kuder-Richardson formula • Cronbach’s alpha • All of these assess the extent to which items on a given test measure the same ability or trait

  26. Split-half Reliability • After testing, divide test items into halves A & B that are scored separately • Check for correlation of results for A with results for B • There are various ways of dividing the test in two – randomly, first half vs. second half, odd-even… (an odd-even sketch follows)
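A sketch of the odd-even split, using a made-up people × items matrix of right/wrong scores:

```python
import numpy as np

# Hypothetical item-level data: rows = people, columns = 10 items (1 = correct)
items = np.array([
    [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],
    [1, 0, 1, 1, 0, 1, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 0, 0, 1, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 0, 0, 1, 0],
    [1, 1, 0, 1, 1, 0, 1, 1, 1, 1],
])

# Odd-even split: score the two halves separately, then correlate
half_a = items[:, 0::2].sum(axis=1)   # items 1, 3, 5, 7, 9
half_b = items[:, 1::2].sum(axis=1)   # items 2, 4, 6, 8, 10
r_halves = np.corrcoef(half_a, half_b)[0, 1]
print(f"Raw split-half correlation: {r_halves:.2f}")
```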

  27. Split-half Reliability – a problem • Each half-test is smaller than the whole • Smaller tests have lower reliability (domain sampling error) • So, we shouldn’t use the raw split-half reliability to assess reliability for the whole test

  28. Split-half Reliability – a problem • We correct the reliability estimate using the Spearman-Brown formula: • re = 2rc / (1 + rc) • re = estimated reliability for the whole test • rc = computed reliability (correlation between scores on the two halves A and B)
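The correction itself is a one-liner. For example, a computed half-test correlation of 0.70 implies an estimated whole-test reliability of about 0.82:

```python
def spearman_brown(r_half: float) -> float:
    """Estimate whole-test reliability from the split-half correlation."""
    return 2 * r_half / (1 + r_half)

print(f"Corrected reliability: {spearman_brown(0.70):.2f}")  # 0.82
```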

  29. Kuder & Richardson (1937): an internal-consistency measure that doesn’t require arbitrary splitting of test into 2 halves. KR-20 avoids problems associated with splitting by simultaneously considering all possible ways of splitting a test into 2 halves. Kuder-Richardson 20

  30. Kuder-Richardson 20 • The formula contains two basic terms: • a measure of all the variance in the whole set of test results

  31. Kuder-Richardson 20 • The formula contains two basic terms: • “item variance” – when items measure the same trait, they co-vary (the same people get them right or wrong) • More co-variance = less “item variance”

  32. Internal Consistency – Cronbach’s α • KR-20 can only be used with test items scored as 1 or 0 (e.g., right or wrong, true or false) • Cronbach’s α (alpha) generalizes KR-20 to tests with multiple response categories • α is a more generally useful measure of internal consistency than KR-20
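A sketch of both coefficients, assuming a people × items score matrix (the data below are invented). For 0/1 items the two formulas agree, which is the sense in which α generalizes KR-20:

```python
import numpy as np

def kr20(items: np.ndarray) -> float:
    """KR-20 for a people x items matrix of 0/1 scores."""
    k = items.shape[1]
    p = items.mean(axis=0)                 # proportion correct per item
    q = 1 - p
    total_var = items.sum(axis=1).var()    # variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a people x items matrix (any scoring scale)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0)         # variance of each item
    total_var = scores.sum(axis=1).var()   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# For 0/1 data the two agree (an item's variance is exactly p*q)
data = np.array([[1, 1, 1, 0], [1, 0, 1, 1], [0, 1, 0, 0], [1, 1, 1, 1], [0, 0, 1, 0]])
print(f"KR-20: {kr20(data):.2f}   alpha: {cronbach_alpha(data):.2f}")
```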

  33. Review: How do we deal with sources of error?

  Approach          Measures                              Issues
  Test-Retest       Stability of scores                   Carryover
  Parallel Forms    Equivalence & stability               Effort
  Split-half        Equivalence & internal consistency    Shortened test
  KR-20 & α         Equivalence & internal consistency    Difficult to calculate

  34. Reliability in Observational Studies • Some psychologists collect data by observing behavior rather than by testing • This approach requires time sampling, leading to sampling error • Further error is due to: observer failures and inter-observer differences

  35. Reliability in Observational Studies • Deal with the possibility of failure in the single-observer situation by having more than 1 observer • Deal with inter-observer differences using: inter-rater reliability and the kappa statistic

  36. Reliability in Observational Studies • Inter-rater reliability: % agreement between 2 or more observers • Problem: in a 2-choice case, 2 judges have a 50% chance of agreeing even if they guess! • This means that % agreement may over-estimate inter-rater reliability
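A quick sketch with invented ratings: raw percent agreement is just the share of trials on which the raters match, and it carries the chance-agreement problem described above.

```python
import numpy as np

# Hypothetical ratings from two observers over 10 trials (2 categories)
rater_1 = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
rater_2 = np.array([1, 0, 1, 0, 0, 1, 1, 1, 1, 0])

agreement = (rater_1 == rater_2).mean()
print(f"Percent agreement: {agreement:.0%}")  # 80% -- but chance alone gives ~50%
```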

  37. Reliability in Observational Studies • The kappa statistic (Cohen, 1960) estimates actual inter-rater agreement as a proportion of potential inter-rater agreement, after correction for chance
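A sketch of the kappa computation, reusing the invented ratings from the previous example; chance agreement is estimated from each rater's marginal category rates:

```python
import numpy as np

def cohens_kappa(r1: np.ndarray, r2: np.ndarray) -> float:
    """Cohen's kappa for two raters' category labels."""
    p_o = (r1 == r2).mean()                  # observed agreement
    cats = np.union1d(r1, r2)
    # chance agreement: product of the raters' marginal rates, summed over categories
    p_e = sum((r1 == c).mean() * (r2 == c).mean() for c in cats)
    return (p_o - p_e) / (1 - p_e)

r1 = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
r2 = np.array([1, 0, 1, 0, 0, 1, 1, 1, 1, 0])
print(f"kappa = {cohens_kappa(r1, r2):.2f}")  # 0.58: 80% raw agreement, chance-corrected
```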

  38. Using Reliability Information • The standard error of measurement (SEM) estimates the extent to which a test score misrepresents a true score • SEM = S√(1 – r), where S is the standard deviation of the test scores and r is the test’s reliability

  39. Standard Error of Measurement • We use the SEM to compute a confidence interval for a particular test score • The interval is centered on the test score • We have confidence that the true score falls in this interval • E.g., 95% of the time the true score will fall within 1.96 SEM either way of the test (observed) score
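A worked sketch with invented numbers (SD = 15, reliability r = 0.91, observed score = 104):

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SEM = S * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

s = sem(15, 0.91)                      # 15 * sqrt(0.09) = 4.5
low, high = 104 - 1.96 * s, 104 + 1.96 * s
print(f"SEM = {s:.1f}; 95% CI for the true score: {low:.1f} to {high:.1f}")
```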

  40. Standard Error of Measurement • A simple way to think of the SEM: • Suppose we gave one student the same test over and over • Suppose, too, that no learning took place between tests and the student did not memorize questions • The standard deviation of the resulting set of test scores (for this one student) would be the standard error of measurement

  41. What to do about low reliability • Increase the number of items • To find how many you need, use the Spearman-Brown formula (a sketch follows) • Using more items may introduce new sources of error such as fatigue and boredom
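The Spearman-Brown formula can be solved for the lengthening factor n. A sketch with invented numbers (a 20-item test with reliability 0.60, targeting 0.80):

```python
def lengthening_factor(r_current: float, r_target: float) -> float:
    """How many times longer the test must be (Spearman-Brown solved for n)."""
    return (r_target * (1 - r_current)) / (r_current * (1 - r_target))

n = lengthening_factor(0.60, 0.80)
print(f"Lengthen by a factor of {n:.2f} -> about {round(20 * n)} items")
```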

  42. What to do about low reliability • Discriminability analysis: • Find correlations between each item and the whole test • Delete items with low correlations
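A sketch of this item screen with invented 0/1 data; note it uses the simple (uncorrected) item-total correlation, which includes the item in its own total, and the 0.30 cutoff is an arbitrary choice for illustration:

```python
import numpy as np

# Hypothetical people x items matrix of 0/1 scores
items = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 1, 0, 1],
    [1, 1, 0, 1, 0],
])
total = items.sum(axis=1)

for i in range(items.shape[1]):
    # Correlate each item with the total score; low (or negative)
    # correlations flag items that don't measure the same thing
    r = np.corrcoef(items[:, i], total)[0, 1]
    flag = "  <- candidate for deletion" if r < 0.30 else ""
    print(f"item {i + 1}: r = {r:+.2f}{flag}")
```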
