
Determining Informative Student Growth on the Florida Assessments for Instruction in Reading


Presentation Transcript


  1. Determining Informative Student Growth on the Florida Assessments for Instruction in Reading Yaacov Petscher, Ph.D. Director of Research Florida Center for Reading Research

  2. Common Questions • How much growth should occur? • What score type should be used for growth? • Why are students’ PRS scores in Grade 1 decreasing? • What do the Reading Comprehension and Word Analysis Ability scores really tell us?

  3. Things to Discuss • Review Goals and FAIR Psychometrics • Review of FAIR & Score Types • How they were derived • 2010-2011 changes to FAIR • Growth in FAIR subtests • Comparison of Growth

  4. Goals in Assessment – K-2 • What do we want to maximize in a screen? • Correct classification or base rates • Focus on correct classification • Cost = higher false positives/false negatives • Focus on predictive power • Cost = lower sensitivity
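
As a concrete illustration of this trade-off, here is a minimal sketch with invented counts (not FAIR data) computing correct classification, sensitivity, specificity, and positive predictive power from a 2x2 screen-by-outcome table:

```python
# Hypothetical 2x2 table: screen decision (at risk / not at risk) crossed with
# end-of-year outcome (below / at-or-above grade level). Counts are invented.
true_pos = 80   # flagged at risk, ended below grade level
false_pos = 40  # flagged at risk, ended at/above grade level
false_neg = 20  # not flagged, ended below grade level
true_neg = 260  # not flagged, ended at/above grade level

sensitivity = true_pos / (true_pos + false_neg)  # struggling readers the screen catches
specificity = true_neg / (true_neg + false_pos)  # on-track readers correctly passed over
ppp = true_pos / (true_pos + false_pos)          # positive predictive power
correct = (true_pos + true_neg) / (true_pos + false_pos + false_neg + true_neg)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPP={ppp:.2f} correct classification={correct:.2f}")
```

Tightening the cut point to raise predictive power necessarily misses more true positives, which is the sensitivity cost named on the slide.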

  5. Reliability of Broad Screen *The Letter Sounds task was more reliable than Letter Names at AP 1; however, due to the restricted range for high risk, a policy decision was made to use the Letter Name task in order to better capture the floor of the distribution. Because the Broad Screen in Grade 2 is a timed task, precision estimates are not reported; however, test-retest reliability was strong (.79-.84).
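
For readers unfamiliar with the test-retest figure quoted above, here is a minimal sketch of how such a coefficient is computed; the scores are invented, and the calculation assumes Python 3.10+ for statistics.correlation:

```python
import statistics

# Test-retest reliability: the Pearson correlation between two administrations
# of the same timed task. These scores are illustrative, not FAIR data.
time1 = [12, 18, 25, 31, 9, 22, 27, 15, 20, 34]
time2 = [14, 17, 27, 29, 11, 21, 30, 13, 22, 33]

r = statistics.correlation(time1, time2)  # Pearson r (Python 3.10+)
print(f"test-retest r = {r:.2f}")
```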

  6. Predictive Validity of Broad Screen

  7. Goals of Assessment – 3-12 • What do we want to maximize? • Reliable estimate of student ability • Computer adaptive test (CAT) • Allows for individual test creation • Limits form effects • Difficult to teach to the test • Starting students in the CAT
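
A toy sketch of the adaptive logic, assuming a Rasch model with a hypothetical item bank and a crude, clamped maximum-likelihood update; this is illustrative only, not FAIR's actual CAT algorithm:

```python
import math

BANK = [-1.5, -0.5, 0.0, 0.6, 1.2, 2.0]  # hypothetical item difficulties (b)

def p_correct(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(bs, rs):
    """Crude maximum-likelihood ability estimate, clamped to [-3, 3]."""
    theta = 0.0
    for _ in range(25):
        ps = [p_correct(theta, b) for b in bs]
        grad = sum(r - p for r, p in zip(rs, ps))
        info = sum(p * (1.0 - p) for p in ps) or 1e-9
        theta = max(-3.0, min(3.0, theta + grad / info))
    return theta

theta, taken, responses = 0.0, [], [1, 0, 1, 1]  # pretend response pattern
for r in responses:
    # Select the unused item whose difficulty best matches the current estimate
    b = min((b for b in BANK if b not in taken), key=lambda b: abs(b - theta))
    taken.append(b)
    theta = estimate_theta(taken, responses[:len(taken)])
    print(f"item b={b:+.1f} response={r} -> theta={theta:+.2f}")
```

Because each student gets items matched to their own running ability estimate, no two students need see the same form, which is what limits form effects and makes teaching to the test difficult.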

  8. FAIR-FCAT Relationship

  9. FAIR K-2 Broad Screen • Each grade has different tasks • Kindergarten • Grade 1 • Grade 2 • Probability of Reading Success (PRS) • What does this mean? • How can it be used?

  10. “The Probability of Reading Success (PRS) score predicts the student’s percent chance of being at or above grade level by the end of the year based on the performance for that assessment period (AP) and time of year. The 40th percentile on the SESAT (K) or SAT-10 (grades 1 and 2) is the cut point for grade-level performance. The PRS can be used descriptively to compare class, school, or district level performance from one AP to the next.”
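
One hedged sketch of how a PRS-style probability can be produced: a logistic model linking the Broad Screen score to the end-of-year criterion. The intercept and slope below are invented placeholders, not FAIR's estimated parameters:

```python
import math

# Invented coefficients for illustration only; FAIR's model also conditions
# on assessment period and time of year, which this sketch omits.
INTERCEPT = -2.0
SLOPE = 0.08

def prs(broad_screen_raw):
    """Predicted percent chance of ending the year at/above the cut point."""
    logit = INTERCEPT + SLOPE * broad_screen_raw
    return 100.0 / (1.0 + math.exp(-logit))

for raw in (10, 25, 40, 55):
    print(f"raw={raw:2d} -> PRS={prs(raw):5.1f}%")
```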

  11. State Median PRS K-2

  12. Analyzing Student Progress • Make descriptive comparisons • <85% PRS • Was there a change in PRS? • Yes – YAY! • Did they shift zones? • No – Look at TDI information and examine progress • >=85% PRS • Did the student remain in the “green zone”? • Grade 1 question…
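
The decision rules above can be summarized as a small function; the 85% threshold comes from the slide, while the wording of the recommendations is illustrative:

```python
def next_step(prs_now, prs_prev, still_green):
    """Rough encoding of the K-2 decision rules on this slide (illustrative)."""
    if prs_now < 85:
        if prs_now > prs_prev:
            return "PRS improved: check whether the student shifted zones"
        return "No PRS gain: review TDI information and examine progress"
    return ("Student remained in the green zone" if still_green
            else "PRS >= 85 but the zone changed: keep monitoring")

print(next_step(prs_now=72, prs_prev=64, still_green=False))
print(next_step(prs_now=90, prs_prev=88, still_green=True))
```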

  13. Previous Grade 1 PRS

  14. New Grade 1 PRS

  15. Purpose of Each 3-12 Assessment • RC Screen: helps us identify students who may not be able to meet the grade-level literacy standards at the end of the year, as assessed by the FCAT, without additional targeted literacy instruction. • Mazes: helps us determine whether a student has more fundamental problems in the area of text reading efficiency and low-level reading comprehension. • Word Analysis: helps us learn more about a student's fundamental literacy skills, particularly those required to decode unfamiliar words and read accurately.

  16. FAIR 3-12 Score Types

  17. Percentiles • Raw score transformation that indicates the rank of the student compared to others in the same grade • Does not denote mastery (criterion-referencing) but relative performance
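
A minimal sketch of one common percentile-rank definition (percent of same-grade scores at or below the student's score); the grade distribution is invented:

```python
def percentile_rank(score, grade_scores):
    """Percent of same-grade scores at or below this score (one common definition)."""
    at_or_below = sum(1 for s in grade_scores if s <= score)
    return 100.0 * at_or_below / len(grade_scores)

# Invented grade-level distribution
grade = [8, 12, 15, 15, 18, 20, 22, 25, 27, 30]
print(percentile_rank(20, grade))  # 60.0 -> at or above 60% of peers
```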

  18. Standard Scores • Standardized scores are derived from raw scores to compare one student’s performance on a test to the mean of all other students at that grade • Mean = 100, SD = 15, Range = 55-145
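
A sketch of the rescaling this implies, assuming the grade-level mean and SD are known and scores are clipped to the reported 55-145 range:

```python
def standard_score(raw, grade_mean, grade_sd):
    """Rescale a raw score to a mean-100, SD-15 metric, clipped to 55-145."""
    z = (raw - grade_mean) / grade_sd
    return max(55, min(145, round(100 + 15 * z)))

# Invented grade statistics: one SD above the mean lands at 115
print(standard_score(32, grade_mean=25, grade_sd=7))
```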

  19. Ability Scores • Similar to standard scores… but different! • Mean = 500, SD = 100, Range = 200-800
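
The slide does not spell out the transformation, but assuming a linear rescaling of an IRT ability estimate (theta) onto this metric, a sketch looks like this:

```python
def ability_score(theta):
    """Map an ability estimate (theta, mean 0 / SD 1) onto a mean-500,
    SD-100 metric, clipped to the reported 200-800 range. Illustrative only."""
    return max(200, min(800, round(500 + 100 * theta)))

for theta in (-3.5, -1.0, 0.0, 1.5):
    print(f"theta={theta:+.1f} -> ability score {ability_score(theta)}")
```

The key difference from standard scores is the source: ability scores come from the IRT model behind the CAT, so they sit on a common developmental scale rather than a grade-normed one.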

  20. Why we use Ability Scores

  21. Reading Comprehension – AP score: FSP, %ile & SS, student Lexile score; PM score: RCAS, student Lexile score • Mazes – AP score: Percentile rank, Adj. Maze SS; PM score: Adj. Maze SS • Word Analysis – AP score: Percentile rank, WAAS; PM score: WAAS

  22. Why not FSP? • FSP includes previous FCAT • Differential calculation of FSP • Student may gain in FAIR Reading but not change FSP

  23. Gain Score Analyses • Simple difference scores • Only students who were in all APs • Only within the testing window • Only for consistent grade students
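
A sketch of these filters, using invented records; the testing-window filter is omitted here for brevity:

```python
# Toy records: (student_id, grade, AP, ability_score); values invented.
records = [
    ("s1", 4, 1, 480), ("s1", 4, 2, 510), ("s1", 4, 3, 545),
    ("s2", 4, 1, 520), ("s2", 4, 3, 560),                     # missing AP 2 -> excluded
    ("s3", 4, 1, 450), ("s3", 5, 2, 470), ("s3", 5, 3, 500),  # grade change -> excluded
]

by_student = {}
for sid, grade, ap, score in records:
    by_student.setdefault(sid, []).append((grade, ap, score))

gains = {}
for sid, rows in by_student.items():
    grades = {g for g, _, _ in rows}
    aps = {ap for _, ap, _ in rows}
    # Keep only students tested at every AP and enrolled in one grade all year
    if aps == {1, 2, 3} and len(grades) == 1:
        scores = {ap: s for _, ap, s in rows}
        gains[sid] = scores[3] - scores[1]  # simple difference score

print(gains)  # {'s1': 65}
```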

  24. Analyzing Student Progress • Unlike K-2 we have Ability Scores • Determine if score type is an AP or PM score • AP score • <85% FSP • Was there a change in FSP? • >=85% FSP • Did the student remain in the “green zone”? • PM score • Examine AS in light of state results • Did the ability score increase for the student?
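
A sketch of the PM-score branch, comparing a student's ability-score change to a hypothetical state reference value (the constant below is invented):

```python
STATE_MEDIAN_GAIN = 20  # hypothetical state reference for the PM window

def pm_progress(as_now, as_prev):
    """Judge a progress-monitoring ability-score change against the state result."""
    gain = as_now - as_prev
    if gain <= 0:
        return f"gain={gain}: ability score did not increase; intensify support"
    if gain < STATE_MEDIAN_GAIN:
        return f"gain={gain}: growing, but below the state reference"
    return f"gain={gain}: at or above the state reference"

print(pm_progress(540, 505))
```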

  25. Changes to RCAS • New passages and linking • Range of scores for RCAS & Lexile changing • RCAS 2009-2010 • 200-800 • RCAS 2010-2011 • 150-1000 • Uncapped Lexile 2009-2010 • 220L – 1735L • Uncapped Lexile 2010-2011 • 225L – 2105L

  26. Next Steps • Analyzing specific growth targets • Is there merit in knowing the gain scores? • Working with JRF to provide guidelines to districts/schools
