
Questionnaire Development


Presentation Transcript


  1. Questionnaire Development Measuring Validity & Reliability James A. Pershing, Ph.D. Indiana University

  2. Definition of Validity • Instrument measures what it is intended to measure: • Appropriate • Meaningful • Useful • Enables a performance analyst or evaluator to draw correct conclusions

  3. Types of Validity • Face • Content • Criterion (concurrent and predictive) • Construct

  4. Face Validity • It looks OK: appears to measure what it is supposed to measure • Items reviewed for appropriateness by the client and by sample respondents • Least scientific measure of validity ("Looks good to me")

  5. Content-Related Validity • Organized review of the format and content of the instrument by subject matter experts • Comprehensiveness: adequate number of questions per objective, no voids in content • Balance across definition, sample, content, and format

  6. Criterion-Related Validity • How one measure stacks up against another • Independent sources that measure the same phenomena • Concurrent = at the same time • Predictive = now and in the future • Seeking a high correlation, usually expressed as a correlation coefficient (0.70 or higher is generally accepted as representing good validity)

  Subject   Instrument A (Task Observation)   Instrument B (Inventory Checklist)
  John      yes                               no
  Mary      no                                no
  Lee       yes                               no
  Pat       no                                no
  Jim       yes                               yes
  Scott     yes                               yes
  Jill      no                                yes
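To make the criterion correlation concrete, here is a minimal sketch (pure illustration) that encodes the seven hypothetical subjects from the table as 0/1 data and computes Pearson's r between the two instruments; for dichotomous yes/no data this equals the phi coefficient.

```python
# Sketch: correlating two dichotomous instruments (phi coefficient).
# The 0/1 data encode the seven hypothetical subjects from the table
# above (1 = yes, 0 = no).
from statistics import correlation  # Python 3.10+

instrument_a = [1, 0, 1, 0, 1, 1, 0]  # John, Mary, Lee, Pat, Jim, Scott, Jill
instrument_b = [0, 0, 0, 0, 1, 1, 1]

r = correlation(instrument_a, instrument_b)
print(f"phi (Pearson r on 0/1 data) = {r:.2f}")
```

On these toy data r ≈ 0.17, far below the 0.70 benchmark, matching the slide's picture of two instruments that largely disagree.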

  7. Construct-Related Validity • A theory exists explaining how the concept being measured relates to other concepts • Look for the positive or negative correlations the theory predicts • Often tested over time and in multiple settings • Usually expressed as a correlation coefficient (0.70 or higher is generally accepted as representing good validity) • Diagram: the theory yields predictions 1 through n, each confirmed by the measure

  8. Definition of Reliability • The degree to which measures obtained with an instrument are consistent measures of what the instrument is intended to measure • Sources of error • Random error = unpredictable error, primarily affected by sampling; reduce it by selecting more representative samples and larger samples • Measurement error = error arising from the performance of the instrument itself
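The slide's point about random error can be illustrated with a small simulation (the population mean and standard deviation below are invented for the example): the spread of sample means shrinks as the sample size grows.

```python
# Sketch: larger samples reduce random (sampling) error.
# The population parameters below are invented for illustration.
import random
import statistics

random.seed(42)
POP_MEAN, POP_SD = 50.0, 10.0

def sample_mean(n: int) -> float:
    """Mean of one random sample of size n from the population."""
    return statistics.fmean(random.gauss(POP_MEAN, POP_SD) for _ in range(n))

for n in (10, 100, 1000):
    # Spread of sample means across repeated draws = random error.
    means = [sample_mean(n) for _ in range(200)]
    print(f"n={n:4d}  sd of sample means = {statistics.stdev(means):.2f}")
```

The printed spread falls roughly as 1/√n, which is why the slide recommends larger samples.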

  9. Types of Reliability • Test-Retest • Equivalent Forms • Internal Consistency • Split-Half Approach • Kuder-Richardson Approach • Cronbach Alpha Approach

  10. Test-Retest Reliability • Administer the same instrument twice to the exact same group after a time interval has elapsed • Calculate a reliability coefficient (r) to indicate the relationship between the two sets of scores • r of +.51 to +.75 = moderate to good • r over +.75 = very good to excellent
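A minimal sketch of the computation (the scores are hypothetical; real data would be the same group's scores at the two administrations):

```python
# Sketch: test-retest reliability as a Pearson correlation
# between scores from two administrations of the same instrument.
from statistics import correlation  # Python 3.10+

scores_time1 = [12, 18, 15, 22, 9, 17, 20, 14]   # first administration
scores_time2 = [14, 17, 15, 21, 10, 18, 19, 13]  # after the time interval

r = correlation(scores_time1, scores_time2)
print(f"test-retest r = {r:.2f}")  # over +.75 = very good to excellent
```

The same correlation computation serves for equivalent forms reliability (slide 11), with the two score sets coming from the parallel forms rather than from two points in time.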

  11. Equivalent Forms Reliability • Also called alternate or parallel forms • Two forms of the instrument administered to the same group at the same time • The forms vary in stem (order, wording) and response set (order, wording) • Calculate a reliability coefficient (r) to indicate the relationship between the two sets of scores • r of +.51 to +.75 = moderate to good • r over +.75 = very good to excellent

  12. Internal Consistency Reliability • Split-Half: break the instrument (or its sub-parts) in half, like two instruments, and correlate the scores on the two halves • Kuder-Richardson (KR): treats the instrument as a whole; compares the variance of total scores with the sum of the item variances • Cronbach Alpha: like the KR approach, but for scaled or ranked data • Best to consult a statistics book and a consultant, and use computer software to do the calculations for these tests (a sketch follows below)
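The slide recommends software for these calculations; as one illustration, here is a minimal Cronbach's alpha sketch on invented item-level data. The formula compares the sum of the item variances with the variance of the total scores, exactly as described for the KR approach.

```python
# Sketch: Cronbach's alpha on invented item scores.
# Rows = respondents, columns = items (e.g., 1-5 ratings).
from statistics import variance

data = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 3, 4],
]

k = len(data[0])                                   # number of items
item_vars = [variance(col) for col in zip(*data)]  # per-item variance
total_var = variance([sum(row) for row in data])   # variance of total scores

# alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```

A split-half estimate would instead correlate scores on the two halves and apply the Spearman-Brown correction, r_full = 2·r_half / (1 + r_half), to estimate the reliability of the full-length instrument.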

  13. Reliability and Validity • Five targets illustrate the combinations: so unreliable as to be invalid; fair reliability and fair validity; fair reliability but invalid; good reliability but invalid; good reliability and good validity • The bulls-eye in each target represents the information that is desired • Each dot represents a separate score obtained with the instrument • A dot in the bulls-eye indicates that the information obtained (the score) is the information the analyst or evaluator desires

  14. Comments and Questions
