
Child Find and Eligibility Determination for AEA Special Education Support Staff


Presentation Transcript


  1. Fall 2011 Child Find and Eligibility Determination for AEA Special Education Support Staff Day 2

  2. Overview of Day 2 Discrepancy Needs Exclusionary Factors Decision Making

  3. Discrepancy

  4. Discrepancy The difference between the individual’s current level of performance and peers’ level of performance or other expected standards at a single point in time. -- Iowa Special Education Eligibility Standards, 2006

  5. Rigor of Decisions • A basic tenet of problem solving is that as the intensity of a problem rises, the amount of resources we use in solving the problem also rises. • Similarly, as the intensity of a problem rises, the rigor of our discrepancy information also needs to increase.

  6. What Makes Data More or Less Rigorous? • Technical Adequacy • Objectivity • Amount • Directness of Measure

  7. Technical Adequacy • Standardized administration: administered under conditions that specify where, when, how, and for how long children may respond to the questions or “prompts.” • Reliability and validity of the data source. High rigor example: # of letters written: all students receive 10 minutes to write the alphabet on standard paper during writing class at their desks. Low rigor example: # of letters written: students use their own writing paper to write all letters during the day.

  8. Technical Adequacy • Reliability: consistency or repeatability • Validity: the test measures what it is intended to measure • Meaningful measure of a targeted skill. High rigor example: assessing math skills for multiplication with a permanent product of a classroom multiplication test administered in a standardized fashion. Low rigor example: assessing math skills for the unit with total grades for the unit made up of homework, tests, and quizzes.

  9. Technical Adequacy: So What? Data must be able to be compared to a performance standard to be useful for making decisions. High rigor example: peer comparison. Low rigor example: teacher expectation.

  10. Technical Adequacy: DIBELS Oral Reading Fluency (High rigor)

  11. Technical Adequacy: Teacher-designed math rubric (Low rigor)

  12. Objectivity • Data that refer to observable and measurable characteristics of the problem • Objective data can be assessed quantitatively or qualitatively. High rigor example: Aggression measured as the number of incidents of aggression. Low rigor example: Aggression measured as the number of suspensions (suspensions ≠ aggression).

  13. Objectivity: BASC (behavior rating scale) used to measure a specific behavior of concern (Low rigor)

  14. Objectivity: Parent log of the number of ounces consumed (High rigor)

  15. Amount • Multiple data sources • Consistent data collected at different times • Consistency across data provides more confidence in our decisions. High rigor example: a median of 20 blurt-outs during 30-minute observation periods across multiple settings with multiple teachers. Low rigor example: 1 data point indicating 20 blurt-outs during a 30-minute period.
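The "median of 20 blurt-outs" on this slide implies repeated observations summarized by their median, which resists one unusually high or low session. A minimal sketch, with the individual counts invented for illustration:

```python
from statistics import median

# Hypothetical blurt-out counts from three 30-minute observations
# (invented numbers; the slide reports only the median of 20)
observations = [18, 20, 23]
print(median(observations))  # 20
```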

  16. Amount: Baseline collected once (Low rigor)

  17. Amount: Stable and representative baseline collected 3 times across 3 days (High rigor)

  18. Directness • Measures what you intend and need to measure • Skill specific • Example methods: direct observation/assessment, review of permanent product, parent checklist, teacher rating, teacher/parent report. High rigor example: Health measured via blood pressure, urinalysis, blood level assessments, etc. Low rigor example: Health measured via WebMD’s checklist of symptoms.

  19. Directness: Teacher tallied the student’s incidences of hitting based on the definition in the intervention plan (High rigor)

  20. Directness: Aggression measured through a teacher’s report completed at the end of the day based on memory (Low rigor)

  21. Checklist • Not all data sources will meet all elements of TOAD (Technical adequacy, Objectivity, Amount, Directness) • Multiple measures and data sources help assure all elements of TOAD can be addressed • Some measures weigh more heavily in the decisions than others

  22. Checklist: Let’s Do One Together Ms. K made a checklist of morning routine tasks. She asked the TA to complete it based on Katie’s independence with her morning routine. The TA completed it one time, during her break at lunchtime.

  23. Comparing Data • Is the 1st ranked team twice as good as the 2nd ranked team? • Is the difference between the skills of the 3rd and 4th ranked teams the same as the difference between the skills of the 18th and 19th ranked teams?

  24. Four Types of Data: nominal, ordinal, interval, and ratio. Each type supports progressively more arithmetic: interval data support + and −; ratio data also support × and ÷.

  25. Nominal • A scale of measurement in which numbers stand for names • Allows only for classification • Examples: 1 = proficient, 2 = non-proficient; 1 = true, 2 = neutral, 3 = false

  26. Think About “Our school meets the unique needs of all of its students.” On a Likert scale of 1-5 (1=strongly disagree and 5=strongly agree), the average score was 2.8. Thus, more than ½ of the community stakeholders have positive perceptions regarding this question. Is this an accurate statement?
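The statement on this slide is not accurate: Likert categories are ordinal, so the mean hides the shape of the responses. A minimal sketch with an invented response set (not the survey's actual data) shows that a 2.8 average can coexist with mostly negative answers:

```python
from statistics import mean

# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree);
# these values are invented for illustration only
responses = [1, 1, 2, 2, 2, 3, 3, 4, 5, 5]
print(mean(responses))  # 2.8 -- matches the reported average

favorable = sum(1 for r in responses if r >= 4)
print(favorable / len(responses))  # 0.3 -- only 30% answered positively
```

Half the respondents here chose "disagree" or "strongly disagree," so the mean alone cannot support the claim that more than half hold positive perceptions.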

  27. Ordinal A scale of measurement that ranks (orders) values on a variable. The differences between ranks need not be equal (unequal intervals between units of measure). • Examples: • Percentile rank • Class rank • Rubric scores • Grade and age equivalents

  28. Grade/Age Equivalent Scores • If a 5th grade student receives a grade equivalent score of 7.4 this DOES NOT mean that student can perform 7th grade work. • It suggests that a typical 7th grader in the fourth month of school would receive the same score if 7th graders had taken the 5th grade test.

  29. Interval A scale of measurement that describes variables in such a way that the distance between any two adjacent units of measure (or intervals) is the same, but in which there is no meaningful zero point. • Examples: • Year (A.D.) • Fahrenheit • Celsius • Standard scores

  30. Ratio A scale of measurement in which any two adjoining values are the same distance apart and in which there is a meaningful zero point. • Examples: • ITBS National Standard Score (NSS) • MAP-RIT scores • Percent • Frequency, duration (raw scores) • Lexile scores

  31. Thumbs Up or Thumbs Down? Using national norms, the average 2nd grade student in the fall of the school year reads at a rate of 44 correct words per minute. In the spring of the year, the average 2nd grade student reads at a rate of 90 correct words per minute. To meet this goal in 24 weeks, the student must gain approximately 1.9 words per minute per week.
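Thumbs up: because correct words per minute is ratio data, the subtraction and division here are legitimate. The slide's arithmetic checks out:

```python
# Checking the slide's arithmetic: weekly growth needed to move from the
# fall norm to the spring norm over 24 weeks
fall_wcpm = 44    # average 2nd-grade rate, fall (correct words per minute)
spring_wcpm = 90  # average 2nd-grade rate, spring
weeks = 24

growth_per_week = (spring_wcpm - fall_wcpm) / weeks
print(round(growth_per_week, 1))  # 1.9
```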

  32. Thumbs Up or Thumbs Down? Beth obtained an ITED National Standard Score (NSS) of 250 during her 8th grade year. She obtained a NSS of 260 during her 9th grade year. Given that average students are to grow 10 NSS points between these two years, Beth demonstrated average growth during this time.

  33. What Types of Data Do You Use? Rubric Scores % ITBS NSS Percentile Rank CBM Scores What type of data do you frequently use? Are you using data appropriately? Are you reporting data appropriately?

  34. Discrepancy During Evaluation AEA Special Education Procedures Manual (July, 2011), p.44

  35. Discrepancy: Multiple Methods and Data Sources

  36. Multiple Sources of Data Using RIOT Methods There must be at least two sources of data for each area of concern.

  37. Discrepancy During Evaluation AEA Special Education Procedures (July, 2011), p.44

  38. Discrepancy: Peer/Expected Performance

  39. Performance Standards A performance standard (or standard of comparison) is used as a rule or basis of comparison in measuring or judging performance. Data Based Decision-Making Manual, Heartland AEA 11, 2008

  40. Performance Standards First Consider: • Iowa Core Curriculum Essential Concepts and Skills • Iowa Early Learning Standards • Iowa Core Content Standards Then Consider: • District measure of peer performance • District/AEA/state/national norms • Developmental norms • Classroom expectations • School policies

  41. Performance Standards Norm Referenced Comparisons • Individual’s performance is compared with the performance of a normed group • e.g. local, national, user Criterion Referenced Comparisons • Individual's performance is compared to an established standard of performance • e.g. research, developmental, parent, medical, teacher

  42. Expected Level of Performance • The individual would be able to perform at the “floor” of an expected range. • When using percentiles, the expected range would be one standard deviation below or above the “middle” = 16th percentile to the 84th percentile. • Record the range, not just a score.
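The 16th-to-84th percentile range above follows from assuming scores are normally distributed: one standard deviation on either side of the mean lands at roughly those percentiles. A minimal sketch, using a mean-100/SD-15 standard-score scale as an assumed example:

```python
from statistics import NormalDist

# Assumed scale for illustration: standard scores with mean 100, SD 15
scores = NormalDist(mu=100, sigma=15)

floor = scores.cdf(100 - 15) * 100    # one SD below the mean
ceiling = scores.cdf(100 + 15) * 100  # one SD above the mean
print(round(floor), round(ceiling))   # 16 84
```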

  43. Discrepancy During Evaluation AEA Special Education Procedures (July, 2011), p.44

  44. Discrepancy: Magnitude of Discrepancy

  45. Determining Magnitude Read the Magnitude of Discrepancy information in the Portfolio. This is a section of the Special Education Procedures Manual. Discuss the reading with your table partners.

  46. Magnitude of the Discrepancy • Size of the difference between the standard and current performance • Ways to measure the magnitude: • Absolute difference • Percentile ranks • Discrepancy ratios

  47. Absolute Difference • Absolute difference is the difference between the current performance and the performance standard • Example: percentage of points earned on an objectively defined behavior point sheet • Peers: 95%; Student: 65% • 95 − 65 = 30 percentage points (absolute difference)

  48. Absolute Difference Magnitude Q: How do you determine if an absolute difference is significant? A: Convert the absolute difference into a percentage, then use the guideline of 25% or more difference.
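Putting the two slides above together with the point-sheet example, a minimal sketch of the comparison (the 25% threshold is the guideline stated on this slide, not a universal rule):

```python
def absolute_difference(standard, current):
    """Difference between the performance standard and current performance."""
    return standard - current

# Example from the earlier slide: peers earn 95% of points, the student 65%
diff = absolute_difference(95, 65)
print(diff)        # 30 percentage points
print(diff >= 25)  # True -- meets the 25%-or-more guideline
```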

  49. Percentile Rank • Describes how a score compares to other scores on the same assessment • Examples: • 30th percentile means a student scored as well as or better than 30% of the comparison group • 50th percentile means a student scored as well as or better than 50% of the comparison group
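The "as well as or better than" definition on this slide can be sketched directly; the comparison group below is invented for illustration:

```python
def percentile_rank(score, comparison_group):
    """Percent of the comparison group that the score equals or exceeds."""
    at_or_below = sum(1 for s in comparison_group if s <= score)
    return 100 * at_or_below / len(comparison_group)

# Hypothetical comparison group of 10 scores
group = [40, 45, 50, 55, 60, 65, 70, 75, 80, 85]
print(percentile_rank(55, group))  # 40.0 -- as well as or better than 40%
```

Note that the result depends entirely on the comparison group, which is why percentile rank is a relative (ordinal) measure.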

  50. Percentile Rank vs. Percent Percent: • Portion of the whole thing • Answers the question, “how much?” or “what part of 100?” • Absolute. Percentile rank: • Describes how a score fits into the distribution of scores • Answers the question, “how well compared to…?” • Relative (dependent on how everyone else performs)
