
The IES RTI Evaluation Study: What Can We Conclude and Not Conclude about It?



  1. The IES RTI Evaluation Study: What Can We Conclude and Not Conclude about It? Douglas Fuchs and Lynn Fuchs, Vanderbilt University. Presented at the Annual Project Directors Meeting, Office of Special Education Programs, August 1, 2016

  2. Preface • This talk is not a comprehensive review or critique of the IES RTI evaluation. • Instead, it’s a relatively brief explanation of what Lynn and I believe can be concluded—and should not be concluded—from the study. • And it’s also a brief reflection on where we’re at with respect to RTI more generally.

  3. Part I: RTI Evaluation. Overview of Lynn’s Talk • She begins by highlighting some important features of the IES evaluation we should all keep in mind when thinking about the study’s findings. • Then she uses those study features to make 3 points about what we can and cannot conclude from the evaluation. • She ends with a short summary of our “take” on the IES report.

  4. Overview • Five features of the IES evaluation we should all keep in mind while thinking about the study’s findings. • Three points about what we can and cannot conclude from the evaluation. • Short summary of our “take” on the IES report.

  5. Study Features to Keep in Mind: #1 • This quasi-experimental evaluation involved only the “Impact” schools: schools that self-reported they were fully implementing RTI. • The “Reference” schools, discussed in the IES report, were not part of the evaluation study. • The report describes similarities and differences in the RTI practices of Impact vs. Reference schools. • So it’s easy to mistakenly assume that the evaluation compared results for these 2 groups of schools. • The IES evaluation study did not compare results for Impact vs. Reference schools.

  6. Study Features to Keep in Mind: #2 • The independent variable was the use of screening tools to designate students in need of intervention. • The independent variable was not whether students received intervention. • The regression discontinuity design (RDD) compared students whose screening scores placed them just above the screening cut-point against students whose screening scores placed them just below it.
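
To make the design concrete, here is a minimal sketch in Python of the kind of local-linear estimate a sharp RDD uses. The names (`screen`, `outcome`, `cutoff`, `bandwidth`) are hypothetical, and the IES study’s actual estimation model is more elaborate; this only illustrates the above/below comparison the slide describes.

```python
import numpy as np
import statsmodels.api as sm

def rdd_designation_effect(screen, outcome, cutoff, bandwidth):
    """Sketch of a sharp RDD: students just below vs. just above the cut-point.

    screen    -- fall screening scores (1-D array)
    outcome   -- end-of-year reading scores (1-D array)
    cutoff    -- the school's screening cut-point
    bandwidth -- how far on either side of the cut-point to look
    """
    near = np.abs(screen - cutoff) <= bandwidth   # keep only students near the cut
    score = screen[near] - cutoff                 # center scores at the cut-point
    below = (score < 0).astype(float)             # 1 = designated for intervention

    # Local linear model with separate slopes on each side of the cut-point.
    X = sm.add_constant(np.column_stack([below, score, below * score]))
    fit = sm.OLS(outcome[near], X).fit()

    # The coefficient on `below` estimates the effect of being designated,
    # for students right at the cut-point -- not the effect of intervention.
    return fit.params[1], fit.bse[1]
```

Note what the comparison identifies: the effect of falling on one side of the cut-point (designation), for students at the cut-point itself. That is exactly the distinction the next slides press.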

  7. Study Features to Keep in Mind: #3 • We don’t know who did and did not actually receive intervention. • The IES database does not have any data about which students actually received intervention. • But the report does indicate that the RDD analysis included • Some students who were below the screening cut-point but did not receive intervention, and • Some students who were above the cut-point but did receive intervention.
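
If per-student intervention-receipt data existed, the two kinds of slippage the slide lists could be quantified directly. A minimal sketch, assuming hypothetical boolean arrays `below_cut` (designated by the screen) and `got_intervention` (actually served); neither exists in the IES database:

```python
import numpy as np

# Hypothetical per-student indicators (not in the IES database):
below_cut = np.array([True, True, True, False, False, False])         # designated
got_intervention = np.array([True, False, True, True, False, False])  # served

# Designated students who were never served ("under-treatment") ...
under_rate = np.mean(~got_intervention[below_cut])
# ... and non-designated students who were served anyway ("over-treatment").
over_rate = np.mean(got_intervention[~below_cut])

print(f"under-treated: {under_rate:.0%}, over-treated: {over_rate:.0%}")
```

With this kind of two-sided slippage, the above/below comparison can speak only to designation, not to receipt of intervention.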

  8. Study Features to Keep in Mind: #4 • The IES document states that, to be included in the analysis, Impact schools had to report they had at least one student at Tier 1, one at Tier 2, and one at Tier 3. • Only 89 of 143 schools met this criterion at 1st grade (report doesn’t say how many at other grades, but it does say this pattern was similar at all three grades). • Given the inclusion criterion of 1 student at each tier, it’s not clear how many Impact schools in the analysis adhered to their screening rules and actually provided intervention to students whose screening scores indicated need for intervention.

  9. Study Features to Keep in Mind: #5 • Only one piece of information about RTI practice was verified: Impact schools did apply a screening cut-point to their reading screening data. • This was the study’s independent variable. • So conclusions are about • The effects of conducting screening and applying a cut-point to designate students with vs. without need for intervention • Conclusions are not about • The effects of delivering intervention to students who, according to the screening system, require it.

  10. What We Can and Cannot Conclude: #1 On the basis of this evaluation • Can conclude that screening for the purpose of designating students for intervention does not appear to improve reading outcomes. • Cannot conclude anything about • The effects of actually providing intervention or • The effects of RTI as a system that integrates the use of assessment data with intervention.

  11. The independent variable is the use of screening to designate students in need of intervention. • The independent variable is not providing intervention to those students. • The evaluation does not have the data required to know which students in the analysis did and did not receive intervention. • So the IES study cannot assess whether students who needed and actually received intervention did better (or worse) than expected.

  12. To Further Raise Caution, As Per the Report • 45% of Impact schools reported providing intervention to students above the schools’ screening cut-point. So, it’s likely some students in the analysis who were just above the screening cut-point (and should not have received intervention) did receive intervention. • Only 62% of schools had at least one first-grade student at each of Tiers 1, 2, and 3. So, it’s also likely some students in the analysis who were just below the screening cut-point did not receive intervention.

  13. The study provides information on a limited question: Does the use of screening to designate students in need of intervention improve reading outcomes, regardless of whether schools adhere to the screening rules and whether they adhere to the RTI intervention component?

  14. What We Can and Cannot Conclude: #2 On the basis of this evaluation • We can draw conclusions only about students at or around the 40th percentile. We cannot draw conclusions about students much below the 40th percentile. • This is because • The RDD included only students just above and just below the schools’ screening cut-point for designating students in need of intervention. • The functional screening cut-point across the Impact study schools was the 41st percentile (i.e., according to the schools’ screening cut-points, 41% of students in the Impact schools met criteria for Tier 2 or Tier 3; a sketch of this calculation appears below). • By contrast, most RTI models and most school-based RTI research reserve Tier 2 and Tier 3 intervention for the lowest-performing 20-25% of students. • So, results cannot be used to determine whether RTI is effective for the students many researchers, practitioners, and policymakers believe RTI is meant to help.
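
The “functional” cut-point is simply the share of students falling at or below the schools’ screening cut-point, expressed as a percentile. A minimal illustration with fabricated scores; the distribution and the cut-point value of 96.6 are invented for the example, not taken from the study:

```python
import numpy as np

def functional_cutpoint(screen, cutoff):
    """Percentile rank of the cut-point: the share of students it designates."""
    return 100.0 * np.mean(screen <= cutoff)

# Fabricated screening scores for 1,000 students (mean 100, SD 15).
rng = np.random.default_rng(42)
screen = rng.normal(100, 15, size=1_000)

# A cut-point of 96.6 designates roughly 41% of these students -- about
# double the lowest-performing 20-25% most RTI models reserve for Tiers 2/3.
print(f"functional cut-point: ~{functional_cutpoint(screen, 96.6):.0f}th percentile")
```

Because the RDD looks only at students clustered near this percentile, its estimates say nothing about the far weaker readers RTI is usually meant to serve.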

  15. This surprisingly high functional cut-point raises two problematic aspects of RTI implementation in the Impact schools. • The unexpectedly large percentage of students who met the screening cut-point for Tiers 2/3 may reflect poor-quality Tier 1 instruction. • Impact schools’ screening systems were designed to provide intervention to many students who didn’t require it. • Over-identification of students is highly undesirable because it • Dilutes the effectiveness of intervention for students who do require it • Carries potentially negative effects for students who don’t require the intervention (they don’t need work on the foundational skills addressed in intervention). This issue is especially relevant in the IES study because, in 67% of Impact schools, intervention supplanted at least part of the Tier 1 program. Children near the 40th percentile probably needed the multi-component focus of Tier 1 more than they needed work on phonics and phonemic awareness. • If the Impact sample’s practices represent many RTI schools, then many schools are not adhering to recommended practices, and RTI practice needs to be improved.

  16. What We Can and Cannot Conclude: #3 On the basis of this evaluation • We can draw conclusions about students with IEPs whose reading performance is around the 40th percentile. We cannot draw conclusions about students with IEPs whose reading performance is below the 40th percentile. • At grade 1, students with IEPs in the RDD analysis (i.e., IEP students who scored just above or just below the screening cut-point, at or around the 41st percentile) did more poorly at the end of the year if they were below rather than above that cut-point. But IEP students who perform around the 40th percentile are highly unusual. Thus, this finding generalizes poorly to students with IEPs.

  17. Unfortunately, the Education Week article interpreted this result to mean that “students who were already in special education … performed particularly poorly if they received intervention” (p. 1). • This is an incorrect or misleading comment on three counts. • The IES report generalizes only to students with IEPs who read around the 40th percentile. • The study authors have no way of knowing whether students with IEPs around the 40th percentile received RTI intervention. • This finding was at grade 1, but not at grades 2 and 3.

  18. Summary • Since the release of the IES evaluation, many in the media (e.g., Sarah Sparks of Education Week) have accepted it as an evaluation of RTI. • Yet there is broad agreement among virtually all researchers and many practitioners and policymakers that RTI refers to a multi-dimensional system of service delivery: at the least, the combined use of a screening measure, assessments of responsiveness to instruction, and multiple tiers of increasingly intensive, evidence-based intervention.

  19. Summary • By contrast, the IES evaluation focused on something much narrower. It asked the relatively simple question, “Does the application of a screening measure that produces a continuous distribution of scores, and a cut-point to distinguish those who perform above and below it, result in 2 groups that perform reliably differently from each other at year’s end?” • At first grade, the answer was “yes.” Children whose screening scores were just below the 40th percentile ended the year below students whose screening scores were just above that percentile. • At second and third grades, there were no between-group differences. • Such findings should have little bearing on how we think about RTI, at least for those of us who think RTI is more than the use of a screening measure and cut-point.

  20. In Closing • Because of RTI’s popularity, and RTI’s potential importance to the lives of children and youth, RTI’s value should be evaluated. It should be evaluated comprehensively, rigorously, and fairly. • For whatever reasons, the IES evaluation, in our view, was not comprehensive. • If results are wrongly interpreted to mean that “RTI doesn’t work” • The hard work of practitioners, researchers, and policymakers to make RTI effective may stop. • The 10-15 years of research on related assessments and interventions, much of it funded by IES, may be incorrectly viewed as ineffective. • The point here is that we must all be very clear on what we can and cannot conclude from this evaluation.

  21. Part II: Better Questions to Ask about RTI Than “Does It Work?” Lessons from 15 Years of Implementation

  22. But Let’s Start with “Does It Work?”

  23. Why It’s Wrong to Say RTI Doesn’t Work: “Micro” Reasons • Researchers have developed universal screens with validated cut-points for practitioners to identify “at-risk” students. • Progress monitoring systems permit teachers to see if children identified by screens are not sufficiently responsive to classroom instruction and require more intensive intervention.

  24. Why It’s Wrong to Say RTI Doesn’t Work • There are stronger core reading and math textbooks today than 15 years ago: texts based on research-validated principles of instruction. • Teachers may choose from an array of validated supplemental instructional programs (e.g., TAI, CWPT, PALS) that give students an opportunity to practice basic skills and give teachers a means of differentiating instruction.

  25. Why It’s Wrong to Say RTI Doesn’t Work • Stronger texts and supplemental instructional programs together represent a better tool kit than was previously available: a better chance to make classrooms more accommodating of greater academic diversity. • Researchers have developed and validated effective Tier 2 programs to be used with progress monitoring systems for students who are not sufficiently responsive to classroom instruction.

  26. Why It’s Wrong to Say RTI Doesn’t Work: “Macro” Reasons • More teachers, administrators, and policymakers recognize that many children and youth are performing poorly and that many SWD are performing abysmally (NAEP, SEELS, NLTS, state report cards). • There is growing recognition that classrooms need strengthening and that multi-tiered systems of support (RTI) are necessary.

  27. Why It’s Wrong to Say RTI Doesn’t Work • Accountability (for better and worse) has sharpened a focus on student achievement. • Micro and macro considerations together are a reasonable basis to be hopeful about RTI. • There is considerable anecdotal evidence that hardworking educators around the country have succeeded with parts of RTI (e.g., screening or benchmarking or small-group instruction).

  28. BUT…

  29. It’s Also Wrong to Say RTI Works • There are no experimental or quasi-experimental studies of RTI’s efficacy as a comprehensive, integrated system. • There are a handful of state- or district-run evaluations of RTI: • Instructional Support Teams in Pennsylvania • Intervention-Based Assessments in Ohio • Building Assessment Teams, Heartland, IA • Problem-Solving Model, Minneapolis, MN

  30. Why It’s Wrong to Say RTI Works • Evaluators of such programs (some of which were pre-referral interventions, not RTI) should be commended for their efforts. • But… they infrequently collected achievement or fidelity data. Rather, they tracked • Referrals • Special education placements • Subjective judgments about student behavior. • Results may not be seen as supportive of RTI’s effectiveness or its overall purpose.

  31. Quiz • True or False: RTI Doesn’t Work?

  32. Quiz • False

  33. Quiz • True or False: RTI Works!

  34. Quiz • False

  35. Quiz • For 100 bonus points, “If RTI isn’t a failure and isn’t a success, what is it?”

  36. Should We Be Optimists or Pessimists about RTI/MTSS? • Cause for optimism: We have the knowledge and tools to make at least part of RTI work. We have the way, if not the will. • Cause for pessimism: Evidence suggests many practitioners do not: • Consistently choose the correct tools • Implement RTI with fidelity • Know what to do with students who require the most intensive supports • Have the resources for intensive intervention.
