
Longitudinal Analysis of Effects of Reclassification, Reporting Methods, and Analytical Techniques on Trends in Math Performance of Students with Disabilities. Yi-Chen Wu, Martha Thurlow, & Sheryl Lazarus, National Center on Educational Outcomes, University of Minnesota.


Presentation Transcript


  1. Longitudinal Analysis of Effects of Reclassification, Reporting Methods, and Analytical Techniques on Trends in Math Performance of Students with Disabilities Yi-Chen Wu, Martha Thurlow, & Sheryl Lazarus National Center on Educational Outcomes University of Minnesota This paper was developed, in part, with support from the U.S. Department of Education, Office of Special Education Programs grants (#H373X070021 and #H326G110002). Opinions expressed herein do not necessarily reflect those of the U.S. Department of Education or Offices within it.

  2. NCEO Web site (http://www.cehd.umn.edu/nceo/)

  3. Outline • Background • Achievement gap • Explanations • Ysseldyke and Bielinski (2002) study • Questions • Method • Data source • Analytical Techniques • Results • Conclusions

  4. Achievement gap • Achievement gap research has focused on race/ethnicity or poverty • Less attention has been paid to the achievement gap between SPED and non-SPED students • Research on the achievement gap (Chudowsky, Chudowsky, & Kober, 2009a, 2009b) • Examined gaps for subgroups by proficiency rate and mean scaled score, but did not compare SPED and non-SPED • Examined achievement over time for SWD, but not the gap between SWD and SWOD over time

  5. Explanations for the gap increasing over time between SWD and SWOD • SPED students dropping out of school leaves higher achievement among those who remain (McMillen & Kaufman, 1997) • Tests given in higher grades are less valid for SWD (Thurlow & Ysseldyke, 1999; Thurlow, Bielinski, Minnema, & Scott, 2002) • Students with lower performance move into SPED and students with higher performance move out of SPED (Ysseldyke & Bielinski, 2002)

  6. Ysseldyke and Bielinski (2002) study • Explored the extent to which reclassification impacts the size of the achievement gap between general education and SPED across grades • Aimed to compare the effects of different reporting methods and to examine the effects of reclassification • Argued that fair comparisons require clearly defined and consistent comparison groups, and that special education status complicates reporting because status changes over time

  7. Ysseldyke and Bielinski (2002) study • Used three methods to analyze trends in performance (cross-sectional, cohort-static, and cohort-dynamic) and found that gap trends depended on the method used • Also examined how scaled scores and effect sizes could be used for reporting results

  8. Purpose • The Ysseldyke and Bielinski (2002) study • did not use proficiency levels to examine reporting results • is now more than a decade old • was completed prior to the implementation of ESEA 2001 • There is a need to take a new look at how achievement gap trends are affected by the method used to calculate them.

  9. Research Questions • Reporting Methods: How does the use of cross-sectional, cohort-static, and cohort-dynamic data analysis methods affect interpretation of trends in the performance of students with disabilities? • Analytical Techniques: How does the score used in the analyses (proficiency level, scaled score, effect size) affect interpretation of trends and achievement gaps? • Reclassification: To what extent do students move in and out of special education each year, and what are the achievement characteristics of those who do and do not move?

  10. Method • Data source • Math assessment data for grades 3-8 from a midwestern state • Cross-sectional • 2005-06 to 2009-10 • 305,819 records • Cohort • 2005-06 to 2009-10, plus 2004-05 (grades 3-8) • 8,231 students with six-year records

  11. Method: Methods Used to Measure the Gap • Cross-sectional • Five years of data were used to calculate average performance, reducing the year-to-year variation that could affect results if data from a single year were selected • Cohort-static • A cohort followed across six years • Group membership stayed the same across years • Cohort-dynamic • Group membership was redefined every year
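
The three grouping approaches can be illustrated with a short data-manipulation sketch. The following Python/pandas snippet is only a hypothetical illustration of the definitions above; the file name and column names (student_id, year, sped, proficient, scale_score) are assumptions, not variables from the study.

```python
import pandas as pd

# Hypothetical longitudinal file: one row per student per year.
# Assumed columns: student_id, year, grade, sped (0/1), scale_score, proficient (0/1)
df = pd.read_csv("math_records.csv")

# Cross-sectional: all records in each year, with SPED membership taken from
# that same year; the students behind each year's numbers can differ.
cross_sectional = (
    df.groupby(["year", "sped"])["proficient"].mean().reset_index()
)

# Cohort-static: follow the students who have a record in the first year and
# fix their SPED / non-SPED membership at the value observed that year.
first_year = df["year"].min()
baseline = (
    df.loc[df["year"] == first_year, ["student_id", "sped"]]
      .rename(columns={"sped": "sped_baseline"})
)
cohort = df.merge(baseline, on="student_id", how="inner")
cohort_static = (
    cohort.groupby(["year", "sped_baseline"])["proficient"].mean().reset_index()
)

# Cohort-dynamic: the same cohort of students, but membership is redefined
# every year from that year's SPED status.
cohort_dynamic = (
    cohort.groupby(["year", "sped"])["proficient"].mean().reset_index()
)
```

In the actual study the cohort was restricted to the 8,231 students with complete six-year records, which would require an additional filter on the number of years observed per student.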

  12. Method: Analytical Techniques
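
The slide's table of analytical techniques does not appear in the transcript, but the three scores named in the research questions (proficiency level, scaled score, effect size) can be sketched as gap calculations. The function below is a hypothetical illustration; in particular, the transcript does not specify which effect-size formula was used, so a pooled-standard-deviation standardized mean difference is shown as one common choice.

```python
import numpy as np

def gap_metrics(non_sped_scores, sped_scores, non_sped_prof, sped_prof):
    """Achievement gap (non-SPED minus SPED) for one year or grade,
    expressed three ways. Inputs are arrays of scaled scores and of
    0/1 proficiency indicators for each group (hypothetical names)."""
    # 1. Proficiency level: difference in the percentage of students proficient.
    prof_gap = 100 * (np.mean(non_sped_prof) - np.mean(sped_prof))

    # 2. Scaled score: difference in mean scaled scores.
    score_gap = np.mean(non_sped_scores) - np.mean(sped_scores)

    # 3. Effect size: standardized mean difference using a pooled SD
    #    (assumed formula; the study's exact definition is not given here).
    n1, n2 = len(non_sped_scores), len(sped_scores)
    pooled_sd = np.sqrt(
        ((n1 - 1) * np.var(non_sped_scores, ddof=1)
         + (n2 - 1) * np.var(sped_scores, ddof=1)) / (n1 + n2 - 2)
    )
    effect_size = score_gap / pooled_sd

    return {"proficiency_gap_pct": prof_gap,
            "scaled_score_gap": score_gap,
            "effect_size": effect_size}
```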

  13. Results—RQ1 • How does the use of cross-sectional, cohort-static, and cohort-dynamic data analysis methods affect interpretation of trends in the performance of students with disabilities? • Percent proficient is used to show the trend over time for each of the three methods used to measure the gap

  14. Results—Comparing reporting methods • Figure 1. Cross-sectional method: Percentage of students above the proficiency level on the math assessment, by SPED and non-SPED status (21 → 47)

  15. Results—Comparing reporting methods • Figure 2. Cohort-static method: Percentage of students above the proficiency level on the math assessment, by SPED and non-SPED status (22 → 21)

  16. Results—Comparing reporting methods • Figure 3. Cohort-dynamic method: Percentage of students above the proficiency level on the math assessment, by SPED and non-SPED status (22 → 45)

  17. Results—Comparing reporting methods • Cross-sectional: quite different • Cohort-dynamic: quite similar • Cohort-static: steady

  18. Results—RQ2 • How does the score used in the analyses (proficiency rate, scaled score, effect size) affect interpretation of trends and achievement gaps?

  19. Results—Comparing Analytical Techniques Figure 4. Percent proficient: Achievement gap (difference between non-SPED and SPED) in percent proficient on math assessment

  20. Results—Comparing Analytical Techniques Figure 5. Scaled score: Achievement gap (difference between non-SPED and SPED) in mean scaled score on math assessment

  21. Results—Comparing Analytical Techniques Figure 6. Effect size: Achievement gap (difference between non-SPED and SPED) in effect size on math assessment

  22. Results—Comparing analytical techniques • Proficiency level: quite different • Effect size: quite similar • Scaled score: steady

  23. Results—RQ3 • To what extent do students move in and out of special education each year, and what are the achievement characteristics of those who do and do not move?

  24. Results—Reclassification Figure 7. Mean math scaled scores by special education status across years Note: NS1 = Students who remained in non-special education in both of two consecutive years; NS2 = Students who moved from non-special education to special education in the second of two consecutive years; S1 = Students who remained in special education in both of two consecutive years; S2 = Students who moved from special education to non-special education in the second of two consecutive years.

  25. Results—Reclassification • Non-SPED only • Students stayed in non-SPED for six years • Non-SPED to SPED • Students moved from non-SPED to SPED only once over six years • SPED to Non-SPED • Students moved from SPED to non-SPED only once over six years • Back and forth • Students moved between SPED and non-SPED more than once over six years • SPED only • Students stayed in SPED for six years
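
A small sketch of how these five groups could be derived from each student's six-year sequence of special education status codes; the 0/1 coding and the function name are hypothetical, not the study's actual variables.

```python
def classify_reclassification(statuses):
    """Assign a reclassification group from a six-year sequence of
    SPED status values (0 = non-SPED, 1 = SPED)."""
    changes = sum(a != b for a, b in zip(statuses, statuses[1:]))
    if changes == 0:
        return "SPED only" if statuses[0] == 1 else "Non-SPED only"
    if changes == 1:
        return "Non-SPED to SPED" if statuses[0] == 0 else "SPED to Non-SPED"
    return "Back and forth"

# Example: non-SPED for three years, then SPED for three years.
print(classify_reclassification([0, 0, 0, 1, 1, 1]))  # -> Non-SPED to SPED
```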

  26. Results—Reclassification • Figure 8. Effect sizes for the reclassification groups on the math assessment, using the non-SPED only group as the reference group
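
A brief sketch of how such reference-group effect sizes could be computed, using the non-SPED only group as the baseline in each year. The group column, score column, and the use of the reference group's standard deviation as the denominator are all assumptions for illustration, not the study's documented procedure.

```python
import pandas as pd

def reference_effect_sizes(df, group_col="reclass_group",
                           score_col="scale_score", ref="Non-SPED only"):
    """Effect size of each reclassification group relative to the reference
    group, computed separately for each year (hypothetical column names)."""
    rows = []
    for year, year_df in df.groupby("year"):
        ref_scores = year_df.loc[year_df[group_col] == ref, score_col]
        for group, g in year_df.groupby(group_col):
            if group == ref:
                continue
            d = (g[score_col].mean() - ref_scores.mean()) / ref_scores.std(ddof=1)
            rows.append({"year": year, "group": group, "effect_size": d})
    return pd.DataFrame(rows)
```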

  27. Discussion and Conclusion • Different methods of reporting data present different pictures of the gap between SPED and non-SPED • This study was undertaken to update the work done more than a decade ago by Ysseldyke and Bielinski (2002) • Replicated the original analyses, adding proficiency level • Confirmed the earlier findings • Suggestions

  28. Discussion and Conclusion • Suggestions • The choice of method affects what the results look like and how the findings may be interpreted • Tracking individual student performance provides a better indication of how well schools are educating their students than cross-sectional models, in which the grade remains constant but the students change • Cross-sectional models should not be used when examining trends across grades • Cohort-static and cohort-dynamic methods enable educators to make comparisons among individual students

  29. Discussion and Conclusion • Specific situations for each reporting method • If the goal is to know how well students do each year, without regard to the changing makeup of the student group => cross-sectional • http://www.schooldigger.com/go/MN/schools/3243001386/school.aspx • If states and districts want to account precisely for the reclassification of students each year => cohort-dynamic • If the goal is to account for individual student performance over time without regard to the nature of services received => cohort-static
