
Validity


Presentation Transcript


    1. Validity (Popham, 2005, Ch. 3)

    2. What is Validity? The accuracy of an assessment: whether or not the assessment measures what it is supposed to measure.

    3. Why do we assess? To determine students' status with respect to educationally relevant variables, such as: what a student has learned about a particular concept or subject, and how we decide to instruct students. Above all, to make informed decisions about our students' learning and our own instruction.

    4. In order to make better decisions, we must have accurate assessments: VALID ASSESSMENTS.

    5. Assessment Domain: the set of knowledge, skills, and/or affective dispositions represented by a test (content standards). In order to measure an assessment domain such as reading comprehension, we use a sample (i.e., sampling) of the content (e.g., a 10-question test about a story students have read) to make inferences about the students' status with respect to the entire assessment domain (e.g., reading comprehension).
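The sampling idea can be made concrete with a short sketch. This is an illustrative Python toy, not anything from Popham: the item-bank size, the student's true mastery level, and the 10-item test are all invented assumptions.

```python
# Illustrative sketch: sampling items from an assessment domain (invented data).
import random

random.seed(7)

# Pretend the domain is represented by a large bank of comprehension items,
# and this student would answer 80% of the full bank correctly (unknown to us).
DOMAIN_SIZE = 200
student_knows = set(random.sample(range(DOMAIN_SIZE), k=int(0.8 * DOMAIN_SIZE)))

# We can only administer a 10-question test sampled from the domain.
test_items = random.sample(range(DOMAIN_SIZE), k=10)
score = sum(item in student_knows for item in test_items)

print(f"Sampled test score: {score}/10")
print(f"Inferred domain mastery: {score / 10:.0%} (true value: 80%)")
# The leap from the 10-item sample to the whole domain is exactly the
# inference that validity evidence has to support.
```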

    6. Inferences: test-based inferences are interpretations of what the test scores mean (with regard to the particular domain). Tests, themselves, hold no validity. Validity hinges on the accuracy of our inferences about students' status with respect to an assessment domain (pp. 51-52). Thus, the accuracy of our inferences depends on the accuracy of a test. Validity centers on the accuracy of the inferences that teachers make about their students.

    7. Validity Argument: validity is an overall evaluation of the degree to which a specific interpretation of test results is supported. We need to assemble evidence in order to make an argument showing that tests permit the inferences that the test makers claim.

    8. Validity Evidence (a variety of evidence is needed): Content-Related Evidence, the extent to which the test matches instructional objectives; Criterion-Related Evidence, the extent to which scores on the test are in agreement with or predict an external criterion; Construct-Related Evidence, the extent to which an assessment corresponds to other variables, as predicted by some rationale or theory.

    9. Content-Related Evidence: the extent to which the assessment procedure adequately represents the content (standards) of the assessment domain being sampled. Gathered through test development, external review, and self-evaluation and peer review.
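One routine piece of content-related evidence is a simple coverage check: map each item to the standard it is meant to measure and see whether any standard is left unrepresented. The standards, the item-to-standard mapping, and the item count below are hypothetical, sketched only to illustrate the kind of review involved.

```python
# Illustrative content-coverage check (hypothetical standards and item mapping).
from collections import Counter

content_standards = [
    "main idea", "supporting details", "vocabulary in context",
    "author's purpose", "making inferences",
]

# Which standard each of the 10 test items was written to measure (assumed).
item_to_standard = {
    1: "main idea", 2: "main idea", 3: "supporting details",
    4: "vocabulary in context", 5: "vocabulary in context",
    6: "making inferences", 7: "making inferences",
    8: "main idea", 9: "supporting details", 10: "vocabulary in context",
}

counts = Counter(item_to_standard.values())
uncovered = [s for s in content_standards if counts[s] == 0]

print("Items per standard:", dict(counts))
print("Standards with no items:", uncovered)  # here: "author's purpose"
# An external reviewer would judge whether this distribution adequately
# represents the assessment domain, not just whether every standard appears.
```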

    10. Criterion-Related Evidence: the extent to which scores on the test are in agreement with (concurrent validity) or predict (predictive validity) an external criterion. Predictive validity: ACT/SAT (aptitude test) scores predict GPA in college. Concurrent validity: if the end-of-year math tests in 4th grade correlate highly with the statewide math tests, they would have high concurrent validity.
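Criterion-related evidence is usually summarized as a correlation (a validity coefficient) between test scores and the criterion. The sketch below is a minimal Python illustration with invented scores for ten hypothetical 4th graders; the pearson_r helper and the data are assumptions for illustration, not part of Popham's text.

```python
# Minimal sketch of a concurrent-validity check (illustrative data only).
# Assumption: both score lists describe the same students, in the same order.
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical scores for ten 4th-grade students.
classroom_math = [72, 85, 90, 64, 78, 95, 70, 88, 60, 82]   # end-of-year test
statewide_math = [70, 88, 93, 61, 75, 97, 72, 85, 58, 80]   # external criterion

r = pearson_r(classroom_math, statewide_math)
print(f"Concurrent validity coefficient: r = {r:.2f}")
# A coefficient near 1.0 supports the inference that the classroom test agrees
# with the external criterion; a predictive-validity study would instead
# correlate earlier scores (e.g., an aptitude test) with a later outcome
# (e.g., college GPA).
```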

    11. Construct-Related Evidence: the extent to which an assessment corresponds to other variables (writing ability, anxiety, sleep deprivation, language differences, etc.), as predicted by some rationale or theory. If you can correctly hypothesize that ESOL students will perform differently on a reading test than English-speaking students (because of theory), the assessment may have construct validity. Usually determined by a series of studies (e.g., intervention, differential-population, and related-measures studies).
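A differential-population study of this kind amounts to checking whether a theory-predicted group difference actually shows up in the scores. The sketch below uses invented scores and a simple Welch t statistic as one way to summarize the difference; the group labels, sample sizes, and data are all hypothetical.

```python
# Illustrative differential-population check (invented data, hypothetical groups).
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for comparing two independent group means."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical reading-test scores; theory predicts the groups will differ.
english_speaking = [82, 75, 90, 68, 77, 85, 73, 80]
esol_students    = [60, 72, 55, 66, 58, 70, 63, 61]

t = welch_t(english_speaking, esol_students)
print(f"Group means: {mean(english_speaking):.1f} vs {mean(esol_students):.1f}, "
      f"Welch t = {t:.2f}")
# If the observed difference matches the theory-based prediction, that is one
# piece of construct-related evidence for the intended score-based inference.
```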

    12. Face Validity: the test appears to measure what it is intended to measure (the main thing to remember is that this may not always be the case, so tests need to be reviewed). Consequential Validity: the validity of the uses of test results (the main thing to remember here is that our tests, and our inferences about their scores, have consequences for our students).

    13. RELIABILITY: when a test is valid (or yields valid score-based inferences), it is most likely reliable (consistent). However, a reliable test may not always be valid (in other words, a teacher can consistently measure something that he/she never intended to measure).
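The "reliable but not valid" case can be shown numerically: scores can correlate almost perfectly across two administrations (consistent) while relating only weakly to the construct the teacher meant to measure. All numbers below are invented, and the quiz/comprehension scenario is a hypothetical example, not one from the chapter.

```python
# Reliable-but-not-valid sketch (all data invented for illustration).
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# A vocabulary quiz given twice, a week apart: scores are highly consistent.
quiz_week1 = [40, 55, 62, 38, 70, 48, 59, 66]
quiz_week2 = [41, 54, 63, 39, 69, 49, 58, 67]

# But the intended inference was about reading comprehension, and the quiz
# scores track the comprehension criterion only loosely.
comprehension = [66, 59, 63, 54, 67, 58, 61, 53]

print(f"Reliability (test-retest r): {pearson_r(quiz_week1, quiz_week2):.2f}")
print(f"Validity coefficient vs. comprehension: "
      f"{pearson_r(quiz_week1, comprehension):.2f}")
# The first coefficient is near 1.0, the second is low: the quiz consistently
# measures something the teacher never intended to measure.
```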
