
VALIDITY AND RELIABILITY




Presentation Transcript


  1. VALIDITY AND RELIABILITY Chapter Four

  2. CHAPTER OBJECTIVES • Define validity and reliability • Understand the purpose for needing valid and reliable measures • Know the most utilized and important types of validity seen in special education assessment • Know the most utilized and important types of reliability seen in special education assessment

  3. VALIDITY • Denotes the extent to which an instrument is measuring what it is supposed to measure.

  4. Criterion-Related Validity • A method for assessing the validity of an instrument by comparing its scores with another criterion known already to be a measure of the same trait or skill.

  5. VALIDITY COEFFICIENT Criterion-related validity is usually expressed as a correlation between the test in question and the criterion measure. The correlation coefficient is referred to as a validity coefficient.
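The validity coefficient is simply a Pearson correlation between the two sets of scores. A minimal sketch in Python, using invented scores for ten students on a hypothetical new test and an established criterion measure:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative scores only: a hypothetical new reading test vs. an
# established criterion measure for the same ten students.
new_test  = [55, 62, 70, 48, 90, 81, 66, 73, 58, 77]
criterion = [52, 65, 68, 50, 88, 79, 70, 71, 60, 80]
print(round(pearson_r(new_test, criterion), 3))  # the validity coefficient
```

A coefficient near 1.0, as here, would suggest the new test ranks students much as the established criterion does.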

  6. CONCURRENT VALIDITY The extent to which a procedure correlates with the current behavior of subjects

  7. PREDICTIVE VALIDITY The extent to which a procedure allows accurate predictions about a subject’s future behavior

  8. CONTENT VALIDITY Whether the individual items of a test represent what you actually want to assess

  9. CONSTRUCT VALIDITY The extent to which a test measures a theoretical construct or attribute. CONSTRUCT Abstract concepts such as intelligence, self-concept, motivation, aggression and creativity that can be observed by some type of instrument.

  10. A test’s construct validity is often assessed by its convergent and discriminant validity.

  11. FACTORS AFFECTING VALIDITY • Test-related factors • The criterion to which you compare your instrument may not be well enough established • Intervening events • Reliability

  12. RELIABILITY The consistency of measurements A RELIABLE TEST Produces similar scores across various conditions and situations, including different evaluators and testing environments.

  13. How do we account for an individual who does not get exactly the same test score every time he or she takes the test? • Test-taker’s temporary psychological or physical state • Environmental factors • Test form • Multiple raters

  14. RELIABILITY COEFFICIENTS • The statistic for expressing reliability. • Expresses the degree of consistency in the measurement of test scores. • Denoted by the letter r with two identical subscripts (rxx)

  15. TEST-RETEST RELIABILITY Suggests that subjects tend to obtain the same score when tested at different times.

  16. Split-Half Reliability • Sometimes referred to as internal consistency • Indicates that subjects’ scores on one half of the test consistently match their scores on the other half • Obtained by dividing the test in half (commonly odd- vs. even-numbered items) and correlating the two halves
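A common way to compute this, sketched below with invented data: correlate each subject's total on the odd-numbered items with their total on the even-numbered items, then apply the Spearman-Brown correction (r_full = 2r / (1 + r)) to estimate reliability for the full-length test.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x))
                  * sqrt(sum((b - my) ** 2 for b in y)))

def split_half_reliability(item_scores):
    """item_scores: one list of item scores per subject.
    Correlates odd-item and even-item totals, then applies the
    Spearman-Brown correction for the full test length."""
    odd  = [sum(subject[0::2]) for subject in item_scores]
    even = [sum(subject[1::2]) for subject in item_scores]
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)

# Invented 0/1 item responses for six subjects on an eight-item test.
responses = [
    [1, 1, 1, 1, 1, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 1, 0],
    [0, 0, 1, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 0, 1, 0, 0],
    [1, 0, 0, 1, 0, 0, 1, 0],
]
print(round(split_half_reliability(responses), 3))
```

The Spearman-Brown step matters because a half-length test is less reliable than the full test; the correction estimates what the full test's consistency would be.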

  17. INTERRATER RELIABILITY Involves having two raters independently observe and record specified behaviors, such as hitting, crying, yelling, and getting out of the seat, during the same time period TARGET BEHAVIOR A specific behavior the observer is looking to record
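A simple way to quantify interrater reliability is percent agreement across observation intervals. The sketch below uses invented interval-by-interval records (1 = target behavior observed); note that more rigorous analyses often use Cohen's kappa, which corrects for chance agreement.

```python
# Invented records from two independent observers watching the same
# student over ten intervals; 1 = target behavior (e.g., getting out
# of the seat) occurred in that interval.
rater_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(percent_agreement)  # 0.9 -> the raters agreed on 9 of 10 intervals
```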

  18. ALTERNATE FORMS RELIABILITY • Also known as equivalent forms reliability or parallel forms reliability • Obtained by administering two equivalent tests to the same group of examinees • Items are matched for difficulty on each test • It is necessary that the time frame between giving the two forms be as short as possible

  19. STANDARD ERROR of MEASUREMENT (SEM) Gives the margin of error that you should expect in an individual test score because of the imperfect reliability of the test OBTAINED SCORE • The score you get when you administer a test • Consists of two parts: the true score and the error score
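Given a test's standard deviation and its reliability coefficient, the SEM is computed as SD × √(1 − rxx). A quick sketch with illustrative numbers (an IQ-style scale with SD 15 and reliability .91):

```python
from math import sqrt

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - r_xx): the expected margin of error in an
    individual's obtained score due to imperfect test reliability."""
    return sd * sqrt(1 - reliability)

# Illustrative values only: a scale with SD 15 and r_xx = .91.
print(round(standard_error_of_measurement(15, 0.91), 1))  # 4.5
```

Note how the formula behaves at the extremes: a perfectly reliable test (rxx = 1.0) has an SEM of zero, and the SEM grows as reliability falls.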

  20. Evaluating the Reliability Coefficients • The test manual should indicate why a certain type of reliability coefficient was reported. • The manual should indicate the conditions under which the data were obtained • The manual should indicate the important characteristics of the group used in gathering reliability information

  21. FACTORS AFFECTING RELIABILITY • Test length • Test-retest interval • Variability of scores • Guessing • Variation within the test situation

  22. CHAPTER OBJECTIVES • Define validity and reliability • Understand the purpose for needing valid and reliable measures • Know the most utilized and important types of validity seen in special education assessment • Know the most utilized and important types of reliability seen in special education assessment

  23. THE END
