Cut Points

Presentation Transcript


  1. Cut Points ITE - 695

  2. Section One • What are Cut Points?

  3. I. Introduction A. The more critical the issue (task) the more critical the cut point (example: programming a machine). 1. Interpretation of readouts. 2. Tolerances in measurement. B. Assumption: Test has both of these: 1. Validity. 2. Reliability. C. Select instrument that best measures action needed (performance vs. explanation).

  4. II. Types A. Norm-Referenced Testing (NRT) 1. Significance - Accepted reliability & validity 2. Measurement a. Common Averages: - mode - median - mean

  5. II. Types (cont.) b. Variability: - range - quartile deviation - standard deviation 3. Reliability - Historical acceptance

  6. II. Types (cont.) B. Criterion-Referenced Testing (CRT) 1. Significance a. Testing b. Distribution 2. Measurement a. Judgements b. Variables

  7. II. Types (cont.) 3. Reliability a. Criterion not based on a normal distribution. b. Data are dichotomous (mastery/non-mastery).

  8. NORM REFERENCED TESTING 1. Separate test-takers 2. Seek Normal Distribution Curve

  9. NORM REFERENCED TESTING 1. Test items separate test-takers from one another. 2. Normal Distribution Curve.

  10. MEASURES of CENTRAL TENDENCY • MODE • MEDIAN • MEAN • MEASURES of VARIABILITY or SCATTER • RANGE • DEVIATION (QUARTILE) • DEVIATION (STANDARD)
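The measures listed above can be computed directly with Python's standard library. The scores below are hypothetical, and the quartile deviation is taken as half the interquartile range (the semi-interquartile range):

```python
import statistics

scores = [70, 72, 75, 75, 78, 80, 82, 85, 90, 93]  # hypothetical test scores

# Measures of central tendency
mode = statistics.mode(scores)      # most frequent score
median = statistics.median(scores)  # middle score
mean = statistics.mean(scores)      # arithmetic average

# Measures of variability (scatter)
score_range = max(scores) - min(scores)         # spread between extremes
q1, q2, q3 = statistics.quantiles(scores, n=4)  # quartile cut points
quartile_deviation = (q3 - q1) / 2              # semi-interquartile range
std_dev = statistics.stdev(scores)              # sample standard deviation
```

Note that `statistics.stdev` uses the sample (n - 1) formula; `statistics.pstdev` would give the population value instead.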

  11. CRITERION REFERENCED TESTING 1. Test items based on specific objectives. 2. Mastery Curve

  12. Standard normal curve with standard deviations SEE HANDOUT

  13. CRITERION REFERENCED TEST 1. Test Compares to Objectives 2. Mastery Distribution

  14. Norm-Referenced Testing vs. Criterion-Referenced Testing

                            Norm-Referenced          Criterion-Referenced
      GOALS                 Test Achievement         Test Performance Mastery
      RELIABILITY           Usually High             Usually Unknown
      VALIDITY              Usually High             Instruction Dependent
      ADMINISTRATION        Standard                 Variable
      STANDARD              Averages-Based           Performance-Levels Based
      MOTIVATION            Avoidance of Failure     Likelihood of Success
      COMPETITION           Student to Student       Student to Criterion
      INSTRUCTIONAL DOMAIN  Low-Level Cognitive      Cognitive or Psychomotor

  15. Comparison Models

      Model for NRT Construction:
         DESIGN TEST -> INPUT (Instruction) -> PRODUCT (NRT Results)

      Model for CRT Construction:
         DESIGN TEST -> INPUT (Instruction) -> PRODUCT (CRT Results) -> MODIFY?
            YES: revise the Test, Objectives, or Instruction, then repeat
            NO: done

  16. Mastery curve SEE HANDOUT

  17. Frequency distributions with standard deviations of various sizes SEE HANDOUT

  18. Section II Establishing Cut Points Three Primary Procedures

  19. ESTABLISHING CUT-POINT 1. Informed Judgement 2. Conjectural Approach 3. Contrast Group

  20. I. Informed Judgement A. Significance: Separates mastery from non-mastery B. Procedure: 1. Analyze consequences of misclassification (political, legal, or operational). 2. Gather previous test-taker data. 3. Ask other stakeholders. 4. Make decision.

  21. II. Conjecture Method A. Significance: “Angoff-Nedelsky Method” - most useful. B. Procedure: 1. Select three informed judges. 2. Estimate probability of correct response. 3. Chosen cut-off is the average of the three judges' estimates.
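The conjecture procedure above can be sketched in a few lines. The probability estimates are hypothetical: each judge rates, for every test item, the probability that a minimally competent test-taker answers it correctly; a judge's predicted score is the sum of those probabilities, and the cut-off is the average across the three judges:

```python
# Hypothetical estimates: one row per judge, one column per test item.
# Each value = judged probability that a minimally competent
# test-taker answers that item correctly.
judge_estimates = [
    [0.90, 0.70, 0.60, 0.80, 0.50],  # judge 1
    [0.80, 0.60, 0.70, 0.90, 0.40],  # judge 2
    [0.85, 0.65, 0.55, 0.80, 0.60],  # judge 3
]

# Each judge's predicted raw score for a borderline test-taker
predicted_scores = [sum(est) for est in judge_estimates]

# The cut-off is the average of the three judges' predicted scores
cut_off = sum(predicted_scores) / len(predicted_scores)
```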

  22. III. Contrast Group Method A. Significance: Single strongest technique; should still use human judgement. B. Procedure: 1. Select judges to identify mastery/non-mastery. 2. Select equal groups (15 minimum, 30 optimum). 3. Administer mastery/non-mastery test to both groups. 4. Plot scores on distribution chart. 5. Make critical cut-off where the two distributions intersect. 6. Adjust the cut score between the highest non-master score and the lowest master score.
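Step 5 above (cutting where the two distributions intersect) is often operationalized as choosing the score that minimizes total misclassifications: masters falling below the cut plus non-masters falling at or above it. A minimal sketch with hypothetical judged groups (names and data are illustrative only):

```python
def contrast_group_cutoff(master_scores, nonmaster_scores):
    """Pick the cut-off score that minimizes total misclassification:
    masters scoring below the cut plus non-masters scoring at or
    above it. This approximates the point where the two score
    distributions intersect."""
    candidates = sorted(set(master_scores) | set(nonmaster_scores))
    best_cut, best_errors = None, float("inf")
    for cut in candidates:
        errors = (sum(1 for s in master_scores if s < cut)
                  + sum(1 for s in nonmaster_scores if s >= cut))
        if errors < best_errors:
            best_cut, best_errors = cut, errors
    return best_cut

# Hypothetical judge-identified groups (15 per group, the stated minimum)
masters = [78, 80, 82, 83, 85, 85, 86, 88, 88, 90, 91, 92, 93, 95, 97]
nonmasters = [55, 58, 60, 62, 64, 65, 67, 68, 70, 72, 73, 75, 76, 79, 81]

cut = contrast_group_cutoff(masters, nonmasters)
```

Per step 6, a practitioner would then adjust this value by judgement within the overlap between the highest non-master score and the lowest master score.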

  23. Establishing A Criterion Cut-Point Mastery Level - (Separates master from non-master) 1. Informed Judgement 2. Conjectural Approach 3. Contrast Group Method

  26. Contrasting group method of cut-off score selection chart. SEE HANDOUT

  27. Section Three: Reliability

  28. I. Types A. Internal Consistency 1. Kuder-Richardson Method. 2. Computer Statistical Package. 3. Problem: Lack of variance. 4. Problem: Excludes items that measure unrelated objectives. B. Test-Retest Score Consistency.
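The Kuder-Richardson method (KR-20) named above can be computed by hand for dichotomous mastery/non-mastery item data, without a statistical package. The response matrix below is hypothetical:

```python
def kr20(item_matrix):
    """Kuder-Richardson Formula 20 for dichotomous (0/1) item data.
    item_matrix: one row per test-taker, one column per item."""
    n = len(item_matrix)     # number of test-takers
    k = len(item_matrix[0])  # number of items
    # Proportion answering each item correctly (p) and incorrectly (q = 1 - p)
    p = [sum(row[i] for row in item_matrix) / n for i in range(k)]
    sum_pq = sum(pi * (1 - pi) for pi in p)
    # Population variance of the total scores
    totals = [sum(row) for row in item_matrix]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - sum_pq / var_total)

# Hypothetical data: 1 = item answered correctly, 0 = incorrectly
data = [
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 0],
]
reliability = kr20(data)
```

This also illustrates the slide's first problem: when scores cluster around the mastery criterion, `var_total` shrinks and KR-20 drops, even if classification decisions are consistent.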

  29. Review
      Types of Validity: 1. Content 2. Construct 3. Criterion-related
      Methods of Establishing Cut-Points: 1. Informed Judgement 2. Conjecture Method 3. Contrast Group Method
      Types of Reliability: 1. Test-Retest 2. Internal Consistency 3. Equivalent Forms 4. Inter-rater Reliability

  30. Section Four: Review Questions • Validity cannot exist without reliability. (True or False) • Since CRT relies on judgment rather than a normal distribution for scoring, how is reliability assured? • If it becomes necessary for you to establish a cut-point for your training program, which of the three methods would you use and why? (Informed judgement, Conjecture method, or Contrast group method)

  31. Norm-Referenced Testing vs. Criterion-Referenced Testing

                            Norm-Referenced          Criterion-Referenced
      GOALS                 Test Achievement         Test Performance Mastery
      RELIABILITY           Usually High             Usually Unknown
      VALIDITY              Usually High             Instruction Dependent
      ADMINISTRATION        Standard                 Variable
      STANDARD              Averages-Based           Performance-Levels Based
      MOTIVATION            Avoidance of Failure     Likelihood of Success
      COMPETITION           Student to Student       Student to Criterion
      INSTRUCTIONAL DOMAIN  Low-Level Cognitive      Cognitive or Psychomotor
