
RELIABILITY


Presentation Transcript


  1. RELIABILITY consistency or reproducibility of a test score (or measurement)

  2. Common approaches to estimating reliability • Classical True Score Theory • test-retest, alternate forms, internal consistency • useful for estimating relative decisions • intraclass correlation • useful for estimating absolute decisions • Generalizability Theory • can estimate both relative & absolute

  3. Reliability is a concept central to all the behavioral sciences. To some extent, all measures are unreliable. This is especially true of psychological measures and measurements based on human observation.

  4. Sources of Error • Random • fluctuations in the measurement based purely on chance. • Systematic • measurement error that affects a score because of some particular characteristic of the person or the test that has nothing to do with the construct being measured.

  5. Classical True Score Theory (CTST) • X = T + E (observed score = true score + error) • Recognizes only two sources of variance • test-retest (stability) • alternate forms (equivalence in item sampling) • test-retest with alternate forms (stability & equivalence, but these are confounded) • Cannot adequately estimate individual sources of error influencing a measurement
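The X = T + E decomposition can be illustrated with a small simulation: when two administrations of a test share the same true score T but have independent errors E, their correlation approaches var(T) / (var(T) + var(E)). This is a minimal sketch; the true-score variance (9), error variance (4), and sample size are assumed for illustration and are not from the slides.

```python
import random

random.seed(0)

# Hypothetical CTST simulation: observed score X = T + E.
# Theoretical reliability = var(T) / (var(T) + var(E)) = 9 / 13 ~ 0.69.
n = 10_000
T = [random.gauss(50, 3) for _ in range(n)]       # true scores, var = 9
X1 = [t + random.gauss(0, 2) for t in T]          # test (error var = 4)
X2 = [t + random.gauss(0, 2) for t in T]          # retest, independent error

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

print(round(corr(X1, X2), 2))  # close to 9/13
```

The test-retest correlation estimates reliability only because the two error terms are independent with equal variance; correlated errors would inflate the estimate.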

  6. Intraclass Correlation (ICC) • Uses ANOVA to partition variance into between-subjects and within-subjects components • Has some ability to accommodate multiple sources of variance • Does not provide an integrated approach to estimating reliability under multiple conditions
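As a sketch of how an ICC is built from ANOVA mean squares, the following computes ICC(2,1) in Shrout & Fleiss notation (two-way random effects, single rater) for a subjects × raters table. The data matrix is made up for illustration, and ICC(2,1) is only one of several ICC forms.

```python
# Hypothetical subjects x raters ratings (rows = subjects, cols = raters).
scores = [
    [9, 2, 5, 8],
    [6, 1, 3, 2],
    [8, 4, 6, 8],
    [7, 1, 2, 6],
    [10, 5, 6, 9],
    [6, 2, 4, 7],
]
n, k = len(scores), len(scores[0])
grand = sum(map(sum, scores)) / (n * k)
row_means = [sum(r) / k for r in scores]
col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

# Two-way ANOVA sums of squares.
ss_rows = k * sum((m - grand) ** 2 for m in row_means)
ss_cols = n * sum((m - grand) ** 2 for m in col_means)
ss_total = sum((x - grand) ** 2 for r in scores for x in r)
ss_err = ss_total - ss_rows - ss_cols

ms_r = ss_rows / (n - 1)              # between-subjects mean square
ms_c = ss_cols / (k - 1)              # between-raters mean square
ms_e = ss_err / ((n - 1) * (k - 1))   # residual mean square

# ICC(2,1): two-way random effects, absolute agreement, single rating.
icc_2_1 = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
print(round(icc_2_1, 2))  # ~ 0.29 for this table
```

Note that ICC(2,1) penalizes systematic rater differences (through ms_c); a consistency-type ICC would ignore them.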

  7. Generalizability Theory • The Dependability of Behavioral Measures (1972), Cronbach, Gleser, Nanda, & Rajaratnam

  8. Dependability The accuracy of generalizing from a person’s observed score on a measure to the average score that person would have received under all possible testing conditions the tester would be willing to accept.

  9. The Decision Maker • The score on which the decision is to be based is only one of many scores that might serve the same purpose. The decision maker is almost never interested in the response given at the particular moment of testing. • Ideally, the decision should be based on that person’s mean score over all possible measurement occasions.

  10. Universe of Generalization • Definition & establishment of the universe of admissible observations: • observations that the decision maker is willing to treat as interchangeable. • all sources of influence acting on the measurement of the trait under study. • What are the sources of ERROR influencing your measurement?

  11. Generalizability Issues • Facet of Generalization: raters, trials, days, clinics, therapists • Facet of Determination: usually people, but can vary (e.g., raters)

  12. Types of Studies • Generalizability Study (G-Study) • Decision Study (D-Study)

  13. G-Study • Purpose is to anticipate the multiple uses of a measurement. • To provide as much information as possible about the sources of variation in the measurement. • The G-Study should attempt to identify and incorporate into its design as many potential sources of variation as possible.

  14. D-Study • Makes use of the information provided by the G-Study to design the best possible application of the measurement for a particular purpose. • Planning a D-Study: • defines the Universe of Generalization • specifies the proposed interpretation of the measurement. • uses G-Study information to evaluate the effectiveness of alternative designs for minimizing error and maximizing reliability.
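The planning step above can be sketched in a few lines of code: loop over candidate designs and see how the relative generalizability coefficient responds. The variance components below are assumed for illustration only (they stand in for output of a prior G-study, not values from the slides).

```python
# Hypothetical D-study planning: compare alternative designs using
# assumed variance components from a subjects x raters x observations G-study.
var_s, var_sr, var_so, var_sro_e = 4.0, 0.6, 0.4, 1.0  # assumed values

for n_r in (1, 2, 3):           # candidate numbers of raters
    for n_o in (1, 2, 3):       # candidate numbers of observations
        rel_err = var_sr / n_r + var_so / n_o + var_sro_e / (n_r * n_o)
        g = var_s / (var_s + rel_err)   # relative G coefficient
        print(f"raters={n_r} observations={n_o} G={g:.3f}")
```

A table like this makes the design trade-off concrete: adding raters mainly shrinks the rater-linked error terms, adding observations the observation-linked ones, so the cheaper facet to expand depends on which components dominate.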

  15. Design Considerations • Fixed Facets • Random Facets

  16. Fixed Facet • When the levels of the facet exhaust all possible conditions in the universe to which the investigator wants to generalize. • When the levels of the facet represent a convenient sub-sample of all possible conditions in the universe.

  17. Random Facets • When it is assumed that the levels of the facet represent a random sample of all possible levels described by the facet. • If you are willing to EXCHANGE the conditions (levels) under study for any other set of conditions of the same size from the universe.

  18. Types of Decisions • Relative • establish a rank order of individuals (or groups). • the comparison of a subject’s performance against others in the group. • Absolute • to index an individual’s (or group’s) absolute level of measurement. • measurement results are to be made independent of the performance of others in the group.

  19. Statistical Modeling: ANOVA • Just as ANOVA partitions a dependent variable into effects for the independent variables (main effects & interactions), G-theory uses ANOVA to partition an individual’s measurement score into an effect for the universe score and an effect for each source of error and their interactions in the design.

  20. Statistical Modeling • In ANOVA we were driven to test specific hypotheses about our independent variables, and thus sought out the F statistic and p-value. • In G-theory we will use ANOVA to partition the different sources of variance and then to estimate the magnitude of each (its variance component).

  21. One Facet Design • 4 Sources of Variability • systematic differences among subjects (the object of measurement) • systematic differences among raters (or occasions, items) • subjects × raters interaction • random error (confounded with the interaction)
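Given mean squares from a subjects × raters ANOVA, the variance components of this one-facet design follow from the expected mean squares. A minimal sketch; the sample sizes and mean squares are made up for illustration.

```python
# Hypothetical one-facet G-study (subjects x raters, fully crossed):
# variance components derived from expected mean squares.
n_s, n_r = 20, 3                       # assumed: 20 subjects, 3 raters
ms_s, ms_r, ms_sr = 11.2, 4.5, 2.1     # assumed ANOVA mean squares

var_sr_e = ms_sr                       # interaction, confounded with error
var_s = (ms_s - ms_sr) / n_r           # systematic subject differences
var_r = (ms_r - ms_sr) / n_s           # systematic rater differences

print(f"subjects={var_s:.3f} raters={var_r:.3f} sr,e={var_sr_e:.3f}")
```

Because each subject is rated only once per rater, the subjects × raters interaction cannot be separated from random error, which is exactly the confounding the slide notes.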

  22. Two Facet Design: Components of Variance • Example of a fully crossed two-facet design (Kroll et al.) • Seven sources of variance are estimated: • subjects (s) • raters (r) • observations (o) • s×r • s×o • r×o • s×r×o,e

  23. Variance Components [Venn diagram partitioning total variance among subjects (s), raters (r), and observations (o), with the interactions s×r, s×o, r×o, and s×r×o + error in the overlaps]

  24. Relative Error (Facet of Determination: Subjects) [variance-component diagram repeated from slide 23] σ²rel = σ²sr/nr + σ²so/no + σ²sro,e/(nr·no)

  25. Absolute Error (Facet of Determination: Subjects) [variance-component diagram repeated from slide 23] σ²abs = σ²r/nr + σ²o/no + σ²sr/nr + σ²so/no + σ²ro/(nr·no) + σ²sro,e/(nr·no)

  26. Generalizability Coefficients (AKA: Reliability Coefficients) • Relative generalizability coefficient for subjects: Eρ² = σ²s / (σ²s + σ²rel) • Absolute generalizability coefficient for subjects: Φ = σ²s / (σ²s + σ²abs)
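The relative and absolute error terms and both coefficients from the preceding slides can be computed together. All variance components and facet sample sizes below are assumed for illustration; they are not values from the slides.

```python
# Hypothetical two-facet (subjects x raters x observations) D-study:
# relative and absolute error variances and the corresponding
# generalizability coefficients, using assumed variance components.
var = {"s": 4.0, "r": 0.5, "o": 0.3,
       "sr": 0.6, "so": 0.4, "ro": 0.2, "sro_e": 1.0}  # assumed values
n_r, n_o = 2, 3   # assumed design: 2 raters, 3 observations per subject

# Relative error: only subject-linked interaction components.
rel_err = var["sr"] / n_r + var["so"] / n_o + var["sro_e"] / (n_r * n_o)
# Absolute error adds the rater, observation, and rater x observation terms.
abs_err = (rel_err + var["r"] / n_r + var["o"] / n_o
           + var["ro"] / (n_r * n_o))

e_rho2 = var["s"] / (var["s"] + rel_err)  # relative coefficient, E-rho-squared
phi = var["s"] / (var["s"] + abs_err)     # absolute (dependability) coefficient Phi
print(f"rel={rel_err:.3f} abs={abs_err:.3f} Erho2={e_rho2:.3f} Phi={phi:.3f}")
```

Since σ²abs includes every term in σ²rel plus the main effects of the facets, Φ can never exceed Eρ²: absolute decisions are always at least as hard to make reliably as relative ones.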
