
RtI: Big Ideas and Key Elements


Presentation Transcript


  1. RtI: Big Ideas and Key Elements VPA 2011 Julie J. Benay

  2. The Big Ideas of RtI • High-quality instruction • Frequent assessment • Data-based decision making

  3. Key Components of RtI • High-quality, research-based core instruction • Universal screening and benchmark testing • Continuous progress monitoring • Research-based interventions • Interventions adjusted based on data, including frequency, intensity, and fidelity • Collaboration, teaming, and shared responsibility

  4. RtI: Procedures • Universal benchmark screening • Ongoing progress monitoring • Interventions provided with sufficient frequency, fidelity, and intensity • Instructional adjustments made based on data

  5. Why Universal Screening and Benchmarks? • Universal screening in the fall provides a quick measure of gains or losses over the summer months • The winter benchmark prevents students from “falling through the cracks” while there is still time to intervene • The spring benchmark provides a picture of annual growth, useful information for summer programming, and a starting point for comparison in the fall

  6. Research on “Curriculum Based Measures” • 30 years of strong research indicates the reliability and predictive value of CBM (Fuchs and Fuchs) • More than 200 empirical studies published in peer-reviewed journals (a) provide evidence of CBM’s reliability and validity for assessing the development of competence in reading, spelling, and mathematics, and (b) document CBM’s capacity to help teachers improve student outcomes at the elementary grades

  7. Mastery Measurement • With mastery measurement, teachers test for mastery of a single skill and, after mastery is demonstrated, they assess mastery of the next skill in a sequence • Scores in mastery measurement cannot be compared over the course of a year, making it impossible to quantify rates of progress • Many tests of mastery are teacher-designed and lack validity and reliability

  8. Curriculum Based Measures • Each CBM test assesses all the different skills covered in the annual curriculum. • CBM samples the many skills in the annual curriculum in such a way that each weekly test is an alternate form (with different test items, but of equivalent difficulty). • Therefore, scores earned at different times during the school year can be compared to determine whether a student’s competence is increasing.

  9. Curriculum Based Measures • CBM makes no assumptions about instructional hierarchy for determining measurement • CBM incorporates automatic tests of retention and generalization • CBM is distinctive: each CBM test is of equivalent difficulty and samples the year-long curriculum, and CBM is highly prescriptive and standardized, yielding reliable and valid scores

  10. CBM Basics • CBM monitors student progress throughout the school year • Students are given probes at regular intervals (weekly or bi-weekly, depending on the intervention) • Teachers use CBM data along with other data to set short- and long-term goals that build toward end-of-year goals

  11. Using CBM • CBM tests are brief and easy to administer • All tests are different but assess the same skills at the same difficulty level • CBM scores are graphed so teachers can make decisions about instructional programs and teaching methods for each student (see the sketch below) • CBM data management solutions are available commercially
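To make the graphing step concrete, here is a minimal sketch in Python with matplotlib; the student scores and the goal line are invented for illustration, not real data or published targets:

```python
import matplotlib.pyplot as plt

# Hypothetical weekly CBM probe scores (e.g., words read correctly per minute)
weeks = list(range(1, 9))
scores = [42, 45, 44, 48, 51, 50, 54, 57]

plt.plot(weeks, scores, marker="o", label="CBM probe scores")
plt.axhline(60, color="gray", linestyle="--", label="End-of-period goal (illustrative)")
plt.xlabel("Week")
plt.ylabel("Words read correctly per minute")
plt.title("CBM progress monitoring (hypothetical data)")
plt.legend()
plt.show()
```

Whether on paper or on screen, the point of the graph is the same: a teacher can see at a glance whether the trend of probe scores is on track to reach the goal.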

  12. Taking the Temperature • CBMs are highly sensitive to learning, but they are not perfect assessments • They do one thing, and they do it well: they take the temperature of student learning • One value of CBM is that a probe can be given again if the student had a bad day or testing procedures were invalidated • CBM results should be triangulated with other forms of assessment to develop a well-rounded approach to making instructional decisions

  13. CBM Results • False positive: the student’s CBM score indicates a learning problem, but all other data sources give the team confidence that no problem exists • False negative: the student performs within the average range on the CBM, but other data sources indicate the student is not fully understanding learning objectives

  14. Using CBMs: Devil in the Details • Determine who, what, where, and when to administer • Train staff to administer and prepare to monitor fidelity (less of an issue with computer-based administration such as Renaissance) • Decide how to organize and access data • Establish times to meet as teams to discuss data • Train leaders to guide discussions

  15. Screening Tools Chart • The National Center on RtI has developed a “tools” chart to assist schools in selecting benchmark screening and progress monitoring tools. • The chart allows schools to consider validity and reliability in measuring both proximal and distal results • Other factors to consider include expense of the product and complexity of administration

  16. Aimsweb • Aimsweb started in Eden Prairie, MN with 9 employees in a small company called Edformation • The tool was so useful that it was quickly purchased by Harcourt Assessment • Following that purchase, Pearson purchased the product and now owns it • This year, Aimsweb became SIF-compatible, allowing the information to be integrated with PowerSchool or other SIF-compatible SIS modules

  17. Aimsweb Demonstration The easiest way for me to illustrate the use of CBMs in planning for instruction is to demonstrate using Aimsweb. However, all the principles of practice I will demonstrate can be done with pencil and graph paper! The principles and practices include: • Establishing and recording benchmark scores • Determining which students need supplemental (Tier II) instruction • Establishing a goal or target to be achieved within the timeframe (usually 10-20 weeks) and setting the aimline; rate of improvement (ROI) can be used to calculate the goal mathematically, as in the sketch below • Setting up progress monitoring charts for each student • Tracking data using progress monitoring probe scores
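The goal-setting arithmetic the slide refers to can be done by hand or in a few lines of code. A minimal sketch in Python follows; the baseline, weekly ROI, and timeframe are invented for illustration, not Aimsweb norms:

```python
# Goal-setting with rate of improvement (ROI): goal = baseline + ROI * weeks.
# All numbers here are illustrative, not published norms.

baseline = 40.0      # benchmark score (e.g., words read correctly per minute)
roi_per_week = 1.5   # expected weekly gain for students receiving intervention
weeks = 12           # intervention timeframe (usually 10-20 weeks)

goal = baseline + roi_per_week * weeks
print(f"Goal after {weeks} weeks: {goal:.0f}")  # 58

# The aimline is the straight line from (week 0, baseline) to (week 12, goal);
# its weekly values are the targets each progress monitoring probe is compared against.
aimline = [baseline + roi_per_week * w for w in range(weeks + 1)]
print([round(x) for x in aimline])
```

A common decision rule is to adjust the intervention when several consecutive probe scores fall below the aimline; scores at or above it suggest the plan is working.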

  18. The Really Essential Question Access to another tier of instruction is a potentially life-changing decision for a student. How and when do you make that decision?

  19. RtI Decision Making Models

  20. Decision Rules in a Problem Solving Model We need guidance in regard to decision rules. We should NEVER be encouraged to use one source of data, and we need to ask: • What measures are you using, and what information are you getting from that data? • What source of data did you examine first? • What was the target, and who established it? • When you decided which students would be with an intervention teacher, how did you decide? Think about how you considered your boundaries (e.g., group size) and how you prioritized the needs of the learners.

  21. Using CBM for Systems Analysis “Screening and progress monitoring data can be aggregated and used to compare and contrast the adequacy of the core curriculum as well as the effectiveness of different instructional and behavioral strategies for various groups of students within a school. For example, if 60% of the students in a particular grade score below the cut point on a screening test at the beginning of the year, school personnel might consider the appropriateness of the core curriculum or whether differentiated learning activities need to be added to better meet the needs of the students in that grade.”
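As a sketch of that kind of aggregation, here is a minimal Python example; the scores and cut point are invented for illustration:

```python
# Percentage of a grade-level cohort scoring below a screening cut point.
# Scores and cut point are illustrative only.

scores = [35, 52, 48, 61, 29, 44, 57, 38, 41, 66]
cut_point = 50

below = sum(1 for s in scores if s < cut_point)
pct_below = 100 * below / len(scores)
print(f"{pct_below:.0f}% of students scored below the cut point")  # 60%

# If a large share of the cohort (e.g., 60%) falls below the cut point,
# the team examines the core curriculum rather than treating each
# student as a separate intervention case.
```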

  22. Response to Instruction “How will we respond when a student isn’t learning?” – Mike Schmoker “Find out what the child is thinking and intervene accordingly” – Dr. Lillian Katz RtI is more than “response to intervention.” Digging deeper means considering the quality of instruction at all tiers, building in quality assessments, and helping teachers to have meaningful conversations about their practice.

  23. How do we respond to the data? “Students in the same school who experience difficulty in learning will be subject to very different responses based upon the beliefs and practices of individual teachers.” (DuFour et al.) “In short, a primary difference between the historical and contemporary approaches is the emphasis on proper instruction first, rather than believing there is something wrong within the student.” (John Hoover)

  24. RtI: The Really Big Ideas Shared Ownership for Learning: How can we work together to design a coherent, cohesive plan of instruction? How can all teachers share ownership for student success across grade levels and tiers of instruction? Accountability Among Adults: How do we create a climate where it is safe to admit what you don’t know? How can we create communities of professional practice that capitalize on the strengths within our own system? Response to Instruction: How do we help all teachers become better at “finding out what the child is thinking and responding accordingly” (Dr. Lillian Katz)? How do we move from checklists of symptoms and a focus on eligibility to identifying and implementing effective teaching strategies?

  25. The Big Ideas • High-quality instruction: deep curriculum and assessment work, supported by teacher learning communities where teachers openly and honestly participate in collaborative work; coordinated, coherent instructional plans for striving learners • Frequent assessment: universal screening and benchmark testing to ensure that no student “falls through the cracks”; reliable progress monitoring data from more than one source; ongoing, quality formative assessments • Data-based decision making: thoughtful decisions focused on coherent plans of instruction rather than eligibility; using the data to change instruction at all tiers rather than spending time admiring or discussing the data itself • Shared ownership • Adults are accountable • Response to instruction
