Iowa’s Application of Rubrics to Evaluate Screening and Progress Tools

Presentation Transcript


  1. Iowa’s Application of Rubrics to Evaluate Screening and Progress Tools John L. Hosp, PhD University of Iowa

  2. Overview of this Webinar • Share rubrics for evaluating screening and progress tools • Describe process Iowa Department of Education used to apply rubrics

  3. Purpose of the Review • Survey of universal screening and progress tools currently being used by LEAs in Iowa • Review these tools for technical adequacy • Incorporate one tool into new state data system • Provide access to tools for all LEAs in state

  4. Collaborative Effort • The National Center on Response to Intervention

  5. Structure of the Review Process

  6. Overview of the Review Process • The work group was divided into 3 groups • Within each group, members worked in pairs

  7. Overview of the Review Process • Each pair: • had a copy of the materials needed to conduct the review • reviewed and scored their parts together and then swapped with the other pair in their group • Pairs within each group met only if there were discrepancies in scoring • A lead person from one of the other groups participated to mediate reconciliation • This allowed each tool to be reviewed by every work group member

  8. Overview of the Review Process • All reviews will be completed and brought to a full work group meeting • Results will be compiled and shared • Final determinations across groups for each tool will be shared with the vetting group two weeks later • The vetting group will have one month to review the information and provide feedback to the work group

  9. Structure and Rationale of Rubrics • Separate rubrics for universal screening and progress monitoring • Many tools reviewed for both • Different considerations • Common header and descriptive information • Different criteria for each group (A, B, C)

  10. Universal Screening Rubric • Header on cover page

  11. Group A

  12. Group B

  13. Judging Criterion Measure Additional Sheet for Judging the External Criterion Measure (Revised 10/24/11) • An appropriate criterion measure is: • External to the screening or progress monitoring tool • A broad skill rather than a specific skill • Technically adequate for reliability • Technically adequate for validity • Validated on a broad sample that would also represent Iowa’s population

  14. Judging Criterion Measure (cont)

  15. Judging Criterion Measure (cont)

  16. Judging Classification Accuracy

  17. Judging Classification Accuracy (cont)

  18. Sensitivity and Specificity Considerations and Explanations • Key: + = proficiency/mastery; - = nonproficiency/at-risk; 0 = unknown • Explanation: “True” means “in agreement between screening and outcome,” so a true result can be negative to negative in terms of student performance (i.e., negative meaning at-risk or nonproficient). This could be considered either positive or negative prediction, depending on which the developer intends the tool to predict. For example, a tool whose primary purpose is identifying students at risk for future failure would probably use “true positives” to mean students who were accurately predicted to fail the outcome test. • Sensitivity = true positives / (true positives + false negatives) • Specificity = true negatives / (true negatives + false positives)
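  The two formulas above are simple ratios over the four cells of a screening-by-outcome table. As a minimal sketch in Python, with hypothetical student counts chosen only for illustration (not drawn from any reviewed tool):

  # Sensitivity and specificity from a 2x2 screening-by-outcome table.
  # All counts below are hypothetical, for illustration only.
  true_positives = 40    # screen flagged at-risk, outcome was nonproficient
  false_negatives = 10   # screen did not flag, but outcome was nonproficient
  true_negatives = 120   # screen did not flag, outcome was proficient
  false_positives = 30   # screen flagged at-risk, but outcome was proficient

  sensitivity = true_positives / (true_positives + false_negatives)   # 40/50 = 0.80
  specificity = true_negatives / (true_negatives + false_positives)   # 120/150 = 0.80

  print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")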

  19. Consideration 1: Determine whether the developer is predicting a positive outcome (i.e., proficiency, success, mastery, at or above a criterion or cut score) from a positive performance on the screening tool (i.e., at or above a benchmark, criterion, or cut score), or a negative outcome (i.e., failure, nonproficiency, below a criterion or cut score) from a negative performance on the screening tool (i.e., below a benchmark, criterion, or cut score). Prediction is almost always positive to positive or negative to negative; however, in rare cases it might be positive to negative or negative to positive.
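  Consideration 1 determines which cell of the table counts as a “true positive.” A small sketch of the common negative-to-negative case (predicting nonproficiency from below-benchmark screening performance); the function name and labels are illustrative, not part of the rubric:

  def classify(screen_below_benchmark: bool, outcome_nonproficient: bool) -> str:
      # Label one screening/outcome pair when the tool predicts a negative
      # outcome from negative screening performance ("positive" = flagged at-risk).
      if screen_below_benchmark and outcome_nonproficient:
          return "true positive"    # accurately predicted to struggle on the outcome
      if screen_below_benchmark and not outcome_nonproficient:
          return "false positive"
      if not screen_below_benchmark and outcome_nonproficient:
          return "false negative"
      return "true negative"

  print(classify(screen_below_benchmark=True, outcome_nonproficient=True))   # true positive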

  20. Group B (cont)

  21. Group C

  22. Group C (cont)

  23. Group C (cont)

  24. Progress Monitoring Rubric • Header on cover page • Descriptive info on each work group’s section

  25. Group A

  26. Group B

  27. Group B (cont)

  28. Group B (cont)

  29. Group C

  30. Group C (cont)

  31. Group C (cont)

  32. Findings • Many of the tools reported are not sufficient (or appropriate) for universal screening or progress monitoring • Some tools are appropriate for both • No tool (so far) is “perfect” • There are alternatives from which to choose

  33. Live Chat • Thursday, April 26, 2012 • 2:00-3:00 EDT • Go to rti4success.org for more details
