
Assessment


Presentation Transcript


  1. Assessment: Why do it? What kinds are there?

  2. Why assess?

  3. Reliability
  • Observed score = true score + error (we can’t measure true scores directly).
  • Reported as a correlation (“r”).
  • E.g., a correlation of .80 between two administrations of a test is interpreted as meaning that 80% of the observed-score variance is true-score variance.
  • Methods of establishing reliability (sketched in code below):
    • Test-retest
    • Alternate-forms
    • Split-half
    • Internal consistency
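A minimal sketch of how three of these estimates can be computed, assuming item scores in a NumPy array (the function names and data layout are illustrative, not from the slides):

```python
import numpy as np

def test_retest(scores_t1, scores_t2):
    """Test-retest reliability: Pearson r between two administrations."""
    return np.corrcoef(scores_t1, scores_t2)[0, 1]

def split_half(items):
    """Split-half reliability with the Spearman-Brown step-up.
    items: 2-D array, rows = examinees, columns = item scores."""
    odd = items[:, 0::2].sum(axis=1)    # total on odd-numbered items
    even = items[:, 1::2].sum(axis=1)   # total on even-numbered items
    r_half = np.corrcoef(odd, even)[0, 1]
    return 2 * r_half / (1 + r_half)    # correct to full test length

def cronbach_alpha(items):
    """Internal consistency (Cronbach's alpha)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)
```

(Alternate-forms reliability is computed like test-retest, but correlating scores on two parallel forms rather than two administrations of the same form.)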

  4. Improving reliability (aim for r > 0.70)
  • The more items, the greater the reliability (quantified by the Spearman-Brown sketch below).
  • Avoid ceiling and floor effects (tests that are too easy or too hard).
  • The wider the range of abilities among test-takers, the greater the reliability.
  • If the people taking the test have similar backgrounds, the reliability will be higher. (Note: even with similar backgrounds there will still be a range of abilities.)
  • The more objective the scoring, the greater the reliability.
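The relationship between test length and reliability in the first bullet is commonly quantified with the Spearman-Brown prophecy formula; a small illustrative sketch (the example numbers are invented):

```python
def spearman_brown(r_current, length_factor):
    """Predicted reliability after lengthening a test by length_factor
    (e.g., 2.0 means doubling the number of comparable items)."""
    n = length_factor
    return n * r_current / (1 + (n - 1) * r_current)

# Example: a test with reliability .60, doubled in length with comparable items
print(round(spearman_brown(0.60, 2.0), 2))  # 0.75
```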

  5. Improving reliability
  • Use straightforward, clearly worded questions.
  • Good directions can increase reliability.
  • Reliability is increased if the people taking the test are rested, calm, well, and taking the test seriously.
  • Handout: Writing test questions

  6. Validity of a psychological test
  • Content validity
  • Criterion validity (estimated in the sketch below)
    • Predictive
    • Concurrent
  • Construct validity
    • Convergent
    • Discriminant
  • Face validity
  • Consequential validity
  Note: A criterion is an extra-test variable – something outside of the test that the test should be able to predict.
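Criterion-related validity is typically reported as the correlation between test scores and a criterion measure; a minimal sketch with invented data (the variable names and values are hypothetical):

```python
import numpy as np

# Hypothetical: an admissions test (predictor) and later first-year GPA (criterion)
test_scores = np.array([52, 61, 47, 70, 58, 66, 43, 75, 55, 63])
first_year_gpa = np.array([2.8, 3.1, 2.5, 3.6, 2.9, 3.3, 2.4, 3.7, 2.7, 3.2])

# Predictive validity coefficient: correlation of test with the criterion
validity_r = np.corrcoef(test_scores, first_year_gpa)[0, 1]
print(f"predictive validity r = {validity_r:.2f}")
```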

  7. Relationship between reliability and validity
  [Diagram: reliability is necessary but not sufficient for validity – a test can be reliable without being valid, but cannot be valid unless it is reliable.]

  8. Don’t get confused
  • Reliability and validity apply both to assessments and to research studies.
  • E.g., the validity of psychological experiments:
    • Internal validity
    • External validity

  9. Types of Tests
  • Published achievement tests
  • Standardized, empirically documented tests
  • Non-standardized, not empirically documented tests
  • Teacher-made tests

  10. Norm-referenced
  • “Interpret a student’s assessment performance by comparing it to the performance of a well-defined group of other students who have also taken the same assessment” (p. 393).
  • The norm group needs to be well-defined to ensure validity.
  • The “numbers” compare a student to the norming group (see the sketch below).
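Typical norm-referenced “numbers” are standard scores and percentile ranks; a minimal sketch against a hypothetical norm group (the data are invented for illustration):

```python
import numpy as np

# Hypothetical norm group: raw scores from a well-defined reference sample
norm_scores = np.array([34, 41, 38, 45, 29, 50, 44, 36, 40, 47])

def z_score(raw, norms):
    """How many standard deviations the student is from the norm-group mean."""
    return (raw - norms.mean()) / norms.std(ddof=1)

def percentile_rank(raw, norms):
    """Percent of the norm group scoring at or below the student's raw score."""
    return 100.0 * np.mean(norms <= raw)

print(z_score(45, norm_scores), percentile_rank(45, norm_scores))
```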

  11. Criterion-referenced
  • “Infer the kinds of performances the student can do in a domain, rather than the student’s relative standing in a norm group” (p. 393).
  • These lose validity when the domain is not well-defined or when the assessment is a poor sample of the domain.
  • The “numbers” compare the student to a standard (see the sketch below).
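By contrast, a criterion-referenced “number” is judged against a fixed standard rather than against other students; a sketch with an invented cut score:

```python
def criterion_referenced(n_correct, n_items, cut=0.80):
    """Proportion of the domain sample answered correctly, judged against
    a fixed cut score rather than against other students' performance."""
    pct = n_correct / n_items
    return pct, ("meets standard" if pct >= cut else "does not meet standard")

print(criterion_referenced(34, 40))  # (0.85, 'meets standard')
```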

  12. Real test scores

  13. Be sure to review the following
  • Appropriate use of test results
  • Inappropriate use of test results

  14. Teacher-made tests
  • How can you strengthen reliability and validity?
  • Your tasks:
    • Exam questions in EP
    • Feedback rubric/scale for MA I
