
Race to the Top Assessment Competition


Presentation Transcript


  1. Race to the Top Assessment Competition Public & Expert Input Meetings General & Technical Assessment Washington, DC January 20, 2010

  2. Race to the Top Applications In… • Alabama • Arizona • Arkansas • California • Colorado • Connecticut • Delaware • District of Columbia • Florida • Georgia • Hawaii • Idaho • Illinois • Indiana • Iowa • Kansas • Kentucky • Louisiana • Massachusetts • Michigan • Minnesota • Missouri • Nebraska • New Hampshire • New Jersey • New Mexico • New York • North Carolina • Ohio • Oklahoma • Oregon • Pennsylvania • Rhode Island • South Carolina • South Dakota • Tennessee • Utah • Virginia • West Virginia • Wisconsin • Wyoming

  3. Race to the Top Assessment Competition • Race to the Top Assessment Competition: $350M to support consortia of States in implementing common standards by funding the development of a new generation of common assessments aligned to those standards • Applicants: Consortia of States • Timeline: • March 2010 Release notice inviting applications • June 2010 Applications due • September 2010 Grants awarded

  4. Goals of the Assessment Program • Support States in delivering a system of more effective and instructionally useful assessments: • More accurate information about what students know and can do: • Achievement of standards • Growth • On track to be college- and career-ready by the time of high school graduation • Reflects and supports good instructional practice • Includes all students, including English language learners and students with disabilities • Usable to inform: • Teaching, learning, and program improvement • Determinations of school effectiveness • Determinations of principal and teacher effectiveness for the purposes of evaluation and support • Determinations of individual student college and career readiness

  5. Other Requirements • Subjects and Grades – at a minimum: • Reading/language arts and mathematics • Grades 3-8 and high school • Summative assessments – at a minimum – but: • Not necessarily end-of-year • Not necessarily administered only once during the year • Not necessarily one test • May replace rather than add to assessments currently in use • Must be valid, reliable, and fair

  6. Goals for the Input Meetings • Paint a vision of what the next generation of assessment systems could and should look like. • Provide concrete expert and public guidance to ED staff, in response to questions asked in the notice. • Help prepare States to develop the highest quality proposals with the greatest likelihood of impact.

  7. Where Are We Today? • Heard input from 42 experts and 79 members of the public, and received over 50 pieces of written input • As we put pen to paper…questions arose in these areas: • “Through-course” summative assessments (good idea? validity/reliability for accountability purposes?) • HS end-of-course assessments (ensuring consistent, high levels of rigor) • Use of technology (issues with requiring it) • Need for innovation and additional research (other areas this competition could/should support in order to advance the field)

  8. Questions for Final Expert Meeting • The Department is considering requiring “a through-course summative assessment system” – that is, a system that includes components of assessments delivered periodically throughout the school year whose results are aggregated to produce summative results (a toy sketch of this aggregation appears below). If we do this, how should we ask applicants to describe their approaches and/or plans for such a system, including any special considerations related to “through-course summative assessments” on the issues outlined below? What evidence should we request if such summative results are part of an accountability system? • Validity – including construct, content, consequential, and predictive validity • External validity for postsecondary preparedness • Reliability – including inter-rater reliability if human scored • Fairness • Precision across the full performance continuum (e.g., from low to high performers) • Comparability across years • If States administer components of the “through-course assessments” at different times or in a different sequence, but the aggregated summative results are part of an accountability system, what are the issues around validity, equating, or comparability that we should be aware of?
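To make the “through-course” idea concrete, here is a minimal sketch of how component scores could be aggregated into one summative result. This is not the Department's specification: the component names, weights, and 0-100 scale below are all hypothetical, and a real system would also need evidence on equating, scaling, and the reliability of the composite.

```python
# Minimal sketch: combining through-course component scores into a single
# summative result. Component names, weights, and the 0-100 scale are
# hypothetical; they are not part of the Department's notice.

components = {
    "fall_performance_task": 72.0,   # hypothetical component scores
    "winter_interim": 68.0,          # for one student, already placed
    "spring_end_of_year": 81.0,      # on a common 0-100 metric
}

weights = {
    "fall_performance_task": 0.2,    # hypothetical policy weights;
    "winter_interim": 0.3,           # later components often count more
    "spring_end_of_year": 0.5,       # because they cover more content
}

def summative_score(components, weights):
    """Weighted aggregate of through-course components (weights sum to 1)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * components[k] for k in components)

print(summative_score(components, weights))  # ≈ 75.3
```

The validity and comparability questions above are exactly about whether a composite like this, assembled from components given at different times or in a different order across States, can support accountability uses.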

  9. Questions for Final Expert Meeting • The Department is considering inviting applicants to create a “system” for developing and certifying the quality and rigor of a set of common end-of-course summative exams in multiple high school subjects. What evidence should we ask applicants to provide to ensure that, across a consortium, their proposed “system” will ensure consistent and high levels of rigor? • If the Department requires computer-based test administration, are there specific implementation challenges that we should ask applicants to consider and address in their proposals? In particular, what evidence or strategies should we require of applicants to ensure that the computer-based and any needed paper-and-pencil versions assess comparable levels of student knowledge and skill while preserving the full power of the computer-based item types? (One classical technique for putting two forms on a common scale is sketched below.) Are there special challenges related to computer-based testing for students with disabilities, and what additional evidence or strategies should we require of applicants to ensure that computer-based tests yield valid results for this population of students?
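One classical psychometric technique relevant to the paper/computer comparability question is linear (mean-sigma) equating, which maps scores from one form onto the scale of another so that the means and standard deviations line up. The sketch below uses made-up scores; an operational mode-comparability study would use a randomly equivalent-groups or common-item design and report standard errors of equating.

```python
import statistics

# Linear (mean-sigma) equating sketch: map computer-based scores onto the
# paper-form scale. The score lists are made up for illustration only.
paper_scores = [48, 52, 55, 60, 61, 63, 67, 70, 74, 80]
computer_scores = [45, 50, 53, 57, 59, 62, 64, 69, 71, 77]

mu_p, sd_p = statistics.mean(paper_scores), statistics.stdev(paper_scores)
mu_c, sd_c = statistics.mean(computer_scores), statistics.stdev(computer_scores)

def equate_to_paper(x):
    """Place a computer-based score x on the paper-form scale."""
    return mu_p + (sd_p / sd_c) * (x - mu_c)

# A computer-based score of 62 expressed on the paper scale:
print(round(equate_to_paper(62), 1))
```

Whether such a statistical adjustment is even appropriate, as opposed to designing items so that no adjustment is needed, is part of what the question above asks applicants to address.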

  10. Questions for Final Expert Meeting • The Department wants to encourage ongoing innovation and improvement of assessment design, development, administration, and use. However, given that we are proposing four-year grants, what should we ask of applicants to ensure that they have structured a process and/or approach that will lead to innovation and improvement over time? • With the help of experts, we identified two issues that seem to require additional, focused research. Have we described the issues correctly? Are there other issues that need additional, focused research? • Use of value-added methodology for teacher and school accountability (a toy sketch of the basic idea appears below) • Comparability, generalizability, and growth modeling for assessments that include performance tasks
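For readers unfamiliar with the term, the toy sketch below shows the core of the value-added idea: predict each student's current score from prior scores, then summarize each teacher's mean residual. All data here are invented, and operational value-added models use many more covariates, multiple years of data, and statistical shrinkage; whether such estimates are valid for accountability is precisely the research question flagged above.

```python
import statistics
from collections import defaultdict

# Toy value-added sketch (invented data, illustrative only).
# (prior_score, current_score, teacher) for six students:
students = [
    (50, 55, "A"), (60, 66, "A"), (70, 74, "A"),
    (52, 54, "B"), (62, 61, "B"), (72, 70, "B"),
]

prior = [s[0] for s in students]
current = [s[1] for s in students]

# Simple OLS of current scores on prior scores (Python 3.10+).
fit = statistics.linear_regression(prior, current)

# A teacher's mean residual serves as a crude "value-added" estimate.
residuals = defaultdict(list)
for p, c, t in students:
    predicted = fit.intercept + fit.slope * p
    residuals[t].append(c - predicted)

for teacher, res in sorted(residuals.items()):
    print(teacher, round(statistics.mean(res), 2))
```

The second research issue raises analogous questions: residual-based growth estimates depend on the reliability and comparability of the underlying scores, which is harder to establish for human-scored performance tasks.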

  11. Agenda • 10:00-10:15 Welcome/Setting the Stage • 10:15-12:15 Expert Presentations • 12:15-1:15 Lunch (on your own) • 1:15-2:15 Expert Presentations • 2:15-3:45 Round Table Discussion • 3:45-4:00 Break (public speakers queue up) • 4:00-5:00 Public Speakers • 5:00 Conclusion

  12. Housekeeping • Submitting your questions • Time keeping • Cell phones on vibrate please • Today’s session will be transcribed and posted to www.ed.gov, together with the presentations • Additional written input may be submitted TODAY to racetothetop.assessmentinput@ed.gov

  13. States Attending Today • Arizona • Colorado • Connecticut • Delaware • Florida • Georgia • Idaho • Illinois • Kentucky • Louisiana • Maryland • Massachusetts • Minnesota • Montana • New Hampshire • New York • North Dakota • Ohio • Oregon • Rhode Island • South Carolina • Tennessee • Virginia • Wisconsin • Wyoming. States in italics are participating by phone and/or WebEx.

  14. On the Panel Invited Experts • Jamal Abedi, Professor of Education, University of California Davis School of Education • Randy Bennett, Distinguished Scientist, Research and Development Division, Educational Testing Service (ETS) • Lizanne DeStefano, Professor of Education, University of Illinois, College of Education • Scott Marion, Associate Director, National Center for the Improvement of Educational Assessment (NCIEA) • Jeff Nellhaus, Deputy Commissioner of Education, Massachusetts Department of Elementary and Secondary Education • Laurie Wise, Principal Scientist, Human Resources Research Organization (HumRRO) From the U.S. Department of Education • John Easton, Director, Institute of Education Sciences • Joanne Weiss, Director of Race to the Top, Office of the Secretary • Ann Whalen, Special Assistant to the Secretary • Judy Wurtzel, Deputy Assistant Secretary, Office of Planning, Evaluation and Policy Development
