
Race to the Top Assessment Program: Public Hearing on Common Assessments January 20, 2010


Presentation Transcript


  1. Aab-sad-nov08item09, Attachment 1, Page 1 of 20
     Race to the Top Assessment Program: Public Hearing on Common Assessments
     January 20, 2010, Washington, DC
     Presenter: Lauress L. Wise, HumRRO

  2. Overview
     • Through-Course Summative Assessments
       • Key evidence required
     • Common High School End-of-Course Exams
       • Comparability across assessments
     • Challenges for Computer-Based Testing
       • Comparability with paper-and-pencil and other alternatives
     • Continuous Process Improvement
       • Support for continued improvements after the initial grants
     • Further Research Needs
       • Value-added, use of performance tasks, and ???

  3. Through-Course Accountability Systems
     • Alternative models for through-course assessments (a small scoring sketch follows this slide)
       • Parallel forms of the same test at different times (best, last, or average score)
         • Oregon assessment model
         • Does not really support increased depth of assessment
         • Students are tested in areas not yet covered in their curriculum
       • Segmented assessments plus a summative end-of-course test
         • Each assessment covers a unique piece of the curriculum; timing may not matter (states must ensure curricular coverage)
         • Provides the greatest depth of coverage of particular objectives
         • Does not provide mid-term measures of "catching up"
       • Cumulative assessments
         • Each assessment covers current and prior curriculum
         • Gives students chances to demonstrate improved mastery of earlier objectives (impact of remediation)
         • Less coverage of specific topics than with segmented assessments
         • More (but less clear) options for obtaining overall scores
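
The three models above differ mainly in how a summative score is assembled from the separate administrations. The following minimal sketch (Python, with invented scores, weights, and function names; none of it comes from the presentation) shows one plausible way each model could combine a student's results.

# Hypothetical sketch (not from the presentation): how a summative score might be
# derived under the three through-course models described on the slide.
# All data, weights, and function names are illustrative assumptions.

def parallel_forms_score(scores, rule="best"):
    """Parallel forms given at different times; keep the best, last, or average score."""
    if rule == "best":
        return max(scores)
    if rule == "last":
        return scores[-1]
    return sum(scores) / len(scores)          # "avg"

def segmented_score(segment_scores):
    """Segmented model: each segment covers unique content, so segment scores are summed."""
    return sum(segment_scores)

def cumulative_score(admin_scores, weights=None):
    """Cumulative model: later administrations re-cover earlier content, so one of
    several possible options is a weighted average that favors later tests."""
    weights = weights or list(range(1, len(admin_scores) + 1))
    total = sum(w * s for w, s in zip(weights, admin_scores))
    return total / sum(weights)

if __name__ == "__main__":
    fall, winter, spring = 52, 61, 70          # made-up scale scores for one student
    print(parallel_forms_score([fall, winter, spring], "best"))   # 70
    print(segmented_score([18, 22, 25]))                          # 65 (segment points)
    print(cumulative_score([fall, winter, spring]))               # 64.0, later tests weighted more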

  4. Through-Course Accountability Systems
     • Establishing the validity of summative scores (a correlation sketch follows this slide)
       • Content-related validity evidence: alignment studies
       • Correlation with other indicators of achievement
         • Teacher ratings and course grades
         • Cognitive lab analysis of student knowledge and skill
       • Predictive evidence
         • Correlation with achievement at next grade
         • Correlation with subsequent postsecondary preparedness
       • Consequences: impact on instruction and learning
         • Surveys and observation of curriculum and instruction
         • Analysis of achievement trends over time
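
Much of the evidence listed on this slide is correlational. The sketch below, using invented student-level data, shows the basic computation for the concurrent (course grades) and predictive (next-grade achievement) cases; real validity studies would of course use full datasets and additional analyses.

# Hypothetical sketch: correlational validity evidence as Pearson correlations
# between summative scores and external indicators. All data are invented.
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

summative     = [310, 325, 298, 340, 355, 312, 330]   # through-course summative scores
course_grades = [2.7, 3.0, 2.3, 3.6, 3.9, 2.9, 3.3]   # teacher-assigned grades (concurrent evidence)
next_grade    = [315, 332, 305, 349, 362, 318, 338]   # next-grade scores (predictive evidence)

print(f"summative vs. course grades: r = {pearson(summative, course_grades):.2f}")
print(f"summative vs. next-grade achievement: r = {pearson(summative, next_grade):.2f}")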

  5. Common High School End-of-Course Exams
     • Comparability of exams across courses
       • Agreement on content must precede common exams
       • A similar process for each course for establishing content specs
       • Evaluate the effectiveness of different state curricula in supporting mastery of the targeted content and skills for each subject
     • High school accountability (and student accountability)
       • Example: the percent of all students who (see the sketch below):
         • Pass all core EOC exams (e.g., Algebra, English II)
         • Pass at least some number of other EOC exams
       • A better indicator of high school "value added" than is obtained with core exams alone
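
A minimal sketch of how the suggested indicator could be computed. The course names, pass/fail flags, and the threshold for "some number of other EOC exams" are illustrative assumptions, not values from the presentation.

# Hypothetical sketch of the accountability indicator described on the slide:
# the percent of students who pass all core EOC exams and at least N other EOC exams.

CORE = {"Algebra", "English II"}
N_OTHER = 2   # minimum number of additional EOC exams that must be passed (assumed)

# Each record maps exam name -> passed (True/False) for one student (made-up data).
students = [
    {"Algebra": True,  "English II": True,  "Biology": True,  "US History": True},
    {"Algebra": True,  "English II": False, "Biology": True,  "US History": True},
    {"Algebra": True,  "English II": True,  "Biology": False, "US History": True},
]

def meets_indicator(record):
    core_ok = all(record.get(exam, False) for exam in CORE)
    others_passed = sum(passed for exam, passed in record.items() if exam not in CORE)
    return core_ok and others_passed >= N_OTHER

rate = 100 * sum(meets_indicator(s) for s in students) / len(students)
print(f"{rate:.0f}% of students meet the EOC accountability indicator")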

  6. Challenges for Computer-Based Testing
     • Technology (including computer platforms and software systems) will continue to evolve rapidly
       • Target testing systems to the best that can be imagined today (they may still be obsolete tomorrow)
     • Some tasks (e.g., extensive simulations) assessed on the computer cannot be covered in a paper test
       • Where adaptation or accommodation is needed for some students, look at how they learn the targeted skills in the classroom
     • Universal Design principles apply to computer tests
       • Eliminating inappropriate sources of variation increases the likelihood of comparability across testing modes or platforms
     • There are plenty of examples of good comparability and equating studies of mode differences to guide the studies needed wherever comparability across modes is important (a linear-equating sketch follows this slide)
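
One common form of mode-comparability study is a linear (mean/sigma) equating between randomly equivalent groups taking the two modes. The sketch below uses invented scores; it illustrates the general technique rather than any particular study cited by the presenter.

# Hypothetical sketch of a mode-comparability check: mean/sigma (linear) equating
# that places computer-based scores on the paper-and-pencil scale, assuming
# randomly equivalent groups. The score data are invented for illustration.
from statistics import mean, pstdev

paper    = [22, 25, 28, 30, 31, 33, 35, 38, 40, 42]   # paper-and-pencil group
computer = [20, 24, 26, 29, 30, 31, 34, 36, 39, 41]   # computer-based group

# Linear (mean/sigma) equating: y_paper = A * x_computer + B
A = pstdev(paper) / pstdev(computer)
B = mean(paper) - A * mean(computer)

def to_paper_scale(cbt_score):
    """Convert a computer-based score to the paper-and-pencil scale."""
    return A * cbt_score + B

print(f"slope A = {A:.3f}, intercept B = {B:.3f}")
print(f"CBT score 30 equates to paper score {to_paper_scale(30):.1f}")
# A slope near 1 and an intercept near 0 would suggest the modes are comparable.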

  7. Continuous Process Improvement
     • A process for identifying items that do not work, finding out why, and revising item writing and review (an item-analysis sketch follows this slide)
       • Test/item specifications
       • Item-writing (exercise development) guides
       • Item and test review processes
     • A process for evaluating impact on teaching and learning
       • Are changes in the desired directions?
     • Analysis of the predictive power of assessment results
       • Success at the next grade
       • College preparedness by the end of high school
       • Are some essential skills not covered?
     • Ongoing develop-test-revise cycles are needed
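
Identifying items that "do not work" typically starts with classical item statistics. The sketch below computes item difficulty (proportion correct) and item-total point-biserial discrimination on an invented response matrix; the 0.20 flagging threshold is a common rule of thumb, not a value from the presentation.

# Hypothetical sketch of the item-flagging step in a develop-test-revise cycle.
from statistics import mean, pstdev

# rows = students, columns = items (1 = correct, 0 = incorrect); invented data
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]

totals = [sum(row) for row in responses]   # total score per student

def point_biserial(item_index):
    """Item-total correlation for one item (discrimination)."""
    item = [row[item_index] for row in responses]
    mi, mt = mean(item), mean(totals)
    cov = sum((i - mi) * (t - mt) for i, t in zip(item, totals)) / len(item)
    return cov / (pstdev(item) * pstdev(totals))

for j in range(len(responses[0])):
    difficulty = mean(row[j] for row in responses)          # proportion correct
    discrimination = point_biserial(j)
    flag = " <-- review" if discrimination < 0.20 else ""
    print(f"item {j + 1}: p = {difficulty:.2f}, r_pb = {discrimination:.2f}{flag}")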

  8. Further Research Needs
     • Value-Added
       • Focus on the goal more than on the method (a residual-gain sketch follows this slide)
       • Really about how to coach, evaluate, and improve teacher (and principal) effectiveness
     • Performance Tasks
       • Really about how to assess and encourage performance not well covered by other modes
         • Complex inquiry, extended problem-solving
         • Team skills?
     • Preparedness and Prerequisite Skills
       • The initial choice of common standards is not necessarily final
       • Develop-test-revise learning trajectory models
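
Since the slide stresses goals over methods, the following is only one crude illustration of the value-added idea, not the presenter's recommended approach: regress current scores on prior scores and average each teacher's student residuals. All scores and teacher labels are invented.

# Hypothetical sketch of a residual-gain value-added estimate (illustration only).
from statistics import mean

prior   = [300, 310, 320, 330, 340, 350, 360, 370]
current = [305, 322, 326, 342, 345, 362, 366, 381]
teacher = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Ordinary least squares fit: current = a + b * prior
mp, mc = mean(prior), mean(current)
b = sum((p - mp) * (c - mc) for p, c in zip(prior, current)) / sum((p - mp) ** 2 for p in prior)
a = mc - b * mp

residuals = [c - (a + b * p) for p, c in zip(prior, current)]

for t in sorted(set(teacher)):
    effect = mean(r for r, lab in zip(residuals, teacher) if lab == t)
    print(f"teacher {t}: mean residual (crude value-added) = {effect:+.1f}")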

  9. Summary of Recommendations
     • Through-Course Summative Assessments
       • Use cumulative or segmented models to increase depth
       • Require multiple types of validity and impact evidence
     • Common High School End-of-Course Exams
       • Use in developing high school accountability models
     • Challenges for Computer-Based Testing
       • Count on continuing technological advances
       • Avoid extraneous features (universal design) for comparability
     • Continuous Process Improvement
       • Require feedback systems for improving item/test development
       • Ongoing analyses of impact to support improvement of content standards, test specifications, and reporting
     • Further Research Needs
       • Start from goals rather than methods
       • Add analyses and improvement of learning trajectory models
