
Chapter 17 Assessing Measurement Quality in Quantitative Studies



  1. Chapter 17: Assessing Measurement Quality in Quantitative Studies

  2. Measurement • The assignment of numbers to represent the amount of an attribute present in an object or person, using specific rules • Advantages: • Removes guesswork • Provides precise information • Less vague than words

  3. Errors of Measurement • Obtained score = True score + Error • Obtained score: an actual data value for a participant (e.g., an anxiety scale score) • True score: the score that would be obtained with an infallible measure • Error: the error of measurement, caused by factors that distort measurement
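The obtained score = true score + error decomposition can be illustrated with a small simulation. The data below are hypothetical: each participant is assigned a fixed "true" anxiety score, and the obtained score adds normally distributed measurement error (the error standard deviation of 5 is an arbitrary assumption).

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical true anxiety scores for five participants.
true_scores = [30, 45, 50, 62, 75]

# Obtained score = true score + random measurement error (~ N(0, 5)).
obtained = [t + random.gauss(0, 5) for t in true_scores]

for t, o in zip(true_scores, obtained):
    print(f"true={t:>3}  obtained={o:6.1f}  error={o - t:+6.1f}")
```

With an infallible measure the error column would be all zeros; in practice the factors listed on the next slide push obtained scores away from true scores.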

  4. Factors That Contribute to Errors of Measurement • Situational contaminants • Transitory personal factors • Response-set biases • Administration variations • Problems with instrument clarity • Item sampling • Instrument format

  5. Key Criteria for Evaluating Quantitative Measures • Reliability • Validity

  6. Reliability • The consistency and accuracy with which an instrument measures the target attribute • Reliability assessments involve computing a reliability coefficient • Most reliability coefficients are based on correlation coefficients

  7. Correlation Coefficients • Correlation coefficients indicate direction and magnitude of relationships between variables • Range • from –1.00 (perfect negative correlation) • through 0.00 (no correlation) • to +1.00 (perfect positive correlation)

  8. Three Aspects of Reliability Can Be Evaluated • Stability • Internal consistency • Equivalence

  9. Stability • The extent to which scores are similar on 2 separate administrations of an instrument • Evaluated by test–retest reliability • Requires participants to complete the same instrument on two occasions • A correlation coefficient between scores on 1st and 2nd administration is computed • Appropriate for relatively enduring attributes (e.g., self-esteem)

  10. Internal Consistency • The extent to which all the instrument’s items are measuring the same attribute • Evaluated by administering instrument on one occasion • Appropriate for most multi-item instruments • Evaluation methods: • Split-half technique • Coefficient alpha
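Coefficient alpha (Cronbach's alpha) can be computed by hand from item and total-score variances: alpha = k/(k-1) × (1 − Σ item variances / total-score variance). A sketch with hypothetical data for a 3-item scale completed by five respondents:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Coefficient alpha. `items` is a list of item-score lists,
    one list per item, aligned by respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    item_var_sum = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Hypothetical 3-item scale, five respondents (rows = items, columns = people).
items = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 3],
]
print(f"alpha = {cronbach_alpha(items):.2f}")  # alpha = 0.87
```

The higher the alpha, the more the items appear to be measuring the same underlying attribute; items that do not covary with the rest of the scale pull the coefficient down.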

  11. Equivalence • The degree of similarity between alternative forms of an instrument or between multiple raters/observers using an instrument • Most relevant for structured observations • Assessed by comparing observations or ratings of 2 or more observers (interobserver/interrater reliability) • Numerous formulas and assessment methods exist
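One of the many interrater formulas is Cohen's kappa, which corrects observed agreement between two raters for the agreement expected by chance (the slide does not single out kappa; it is used here only as a representative index). The codes below are hypothetical:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over categorical codes."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected chance agreement from each rater's marginal code frequencies.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two observers to ten observed events.
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(f"kappa = {cohen_kappa(a, b):.2f}")  # kappa = 0.58
```

Here the raters agree on 8 of 10 events (80%), but after removing chance agreement the kappa of about .58 paints a more modest picture of equivalence.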

  12. Reliability Coefficients • Represent the proportion of true-score variability to obtained-score variability: r = VT / VO • Should be at least .70; .80 is preferable • Can be improved by making the instrument longer (adding items) • Are lower in homogeneous than in heterogeneous samples
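The gain from lengthening an instrument can be projected with the Spearman-Brown prophecy formula (not named on the slide, but the standard way to quantify the "adding items" point):

```python
def spearman_brown(r, factor):
    """Projected reliability when an instrument with reliability `r`
    is lengthened by `factor` (e.g., factor=2 doubles the item count),
    assuming the new items behave like the existing ones."""
    return factor * r / (1 + (factor - 1) * r)

# A scale with reliability .70, doubled in length:
print(f"{spearman_brown(0.70, 2):.2f}")  # 0.82
```

Doubling a scale at the .70 threshold lifts the projected reliability to about .82, past the preferred .80 mark, provided the added items sample the same attribute equally well.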

  13. Validity • The degree to which an instrument measures what it is supposed to measure • Four aspects of validity: • Face validity • Content validity • Criterion-related validity • Construct validity

  14. Face Validity • Refers to whether the instrument looks as though it is measuring the appropriate construct • Based on judgment, no objective criteria for assessment

  15. Content Validity • The degree to which an instrument has an appropriate sample of items for the construct being measured • Evaluated by expert evaluation, via the content validity index (CVI)
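The CVI computation is straightforward: each item's CVI (I-CVI) is the proportion of experts rating it content-relevant (typically 3 or 4 on a 4-point relevance scale), and a scale-level CVI can be taken as the average of the item CVIs. The expert ratings below are hypothetical:

```python
def item_cvi(ratings, relevant=frozenset({3, 4})):
    """I-CVI: proportion of experts rating the item relevant (3 or 4 of 1-4)."""
    return sum(r in relevant for r in ratings) / len(ratings)

# Hypothetical relevance ratings from five experts for three items.
ratings_by_item = [
    [4, 4, 3, 4, 3],   # all five experts rate it relevant -> I-CVI = 1.0
    [4, 3, 2, 4, 3],   # four of five                       -> I-CVI = 0.8
    [2, 3, 1, 4, 2],   # two of five                        -> I-CVI = 0.4
]
icvis = [item_cvi(r) for r in ratings_by_item]
scale_cvi = sum(icvis) / len(icvis)  # average-based scale CVI
print(icvis, f"S-CVI = {scale_cvi:.2f}")
```

Items with low I-CVIs (like the third one here) are candidates for revision or deletion before the instrument is finalized.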

  16. Criterion-Related Validity • The degree to which the instrument correlates with an external criterion • Validity coefficient is calculated by correlating scores on the instrument and the criterion

  17. Criterion-Related Validity (cont’d) Two types of criterion-related validity: • Predictive validity: the instrument’s ability to distinguish people whose performance differs on a future criterion • Concurrent validity: the instrument’s ability to distinguish individuals who differ on a present criterion

  18. Construct Validity Concerned with the questions: • What is this instrument really measuring? • Does it adequately measure the construct of interest?

  19. Methods of Assessing Construct Validity • Known-groups technique • Relationships based on theoretical predictions • Multitrait-multimethod matrix method (MTMM) • Factor analysis

  20. Multitrait-Multimethod Matrix Method Builds on two types of evidence: • Convergence • Discriminability

  21. Convergence • Evidence that different methods of measuring a construct yield similar results • Convergent validity comes from the correlations between two different methods measuring the same trait

  22. Discriminability • Evidence that the construct can be differentiated from other similar constructs • Discriminant validity assesses the degree to which a single method of measuring two constructs yields different results
