
Oklahoma School Psychologists Association Fall Conference 2008


Presentation Transcript


  1. Oklahoma School Psychologists Association Fall Conference 2008 Pattern of Strengths and Weaknesses In SLD Evaluations: What’s It All About? Jim Hanson, M.Ed. Oregon School Psychologists Association (OSPA) JaBrHanson@yahoo.com This PowerPoint uses materials from previous presentations by Suzy Harris, Attorney at Law; David Guardino, Oregon Department of Education (ODE); and Betsy Ramsey, Oregon Parent Training Initiative (ORPTI).

  2. Objectives • Review requirements for SLD eligibility, including changes in IDEA 2004 & OARs • Review two types of SLD evaluation – • Response to Intervention (RTI) • Pattern of Strengths and Weaknesses (PSW) • English Language Learners • Step by Step Implementation Process

  3. Changes to SLD Eligibility Requirements (34 CFR 300.307-311 & OAR 581-015-2170) • Changed from “severe discrepancy” to “pattern of strengths and weaknesses” • Added option of RTI (OAR - based on district model) • Added progress monitoring component for both RTI and PSW evaluations • Observation – conducted before referral or during the evaluation • Exclusionary factors remain

  4. SLD Evaluation Components – Both (if needed) • Developmental history • Assessment of cognition, fine motor, perceptual motor, communication, social-emotional, memory (if student exhibits impairment in one or more of these areas) • Medical statement • Impact of disability on educational performance

  5. Oregon Department of Education • Like any other disability determination under IDEA, SLD can’t be based on any single criterion – meaning a single test, assessment, observation, or report. • An evaluation of a student suspected of having SLD must include a variety of assessment tools and strategies. • The evaluation must include input from the student’s parents and an observation of the student’s academic performance and behavior in the general education classroom.

  6. Eligibility Team (OAR 581-015-2170(2)) • Group of qualified professionals • Parents • Regular classroom teacher • Person qualified to conduct individual diagnostic evaluations using instruments that meet OAR requirements (school psychologist, speech pathologist, etc.)

  7. Qualified Evaluators (OAR 581-015-2110(4)(a)(D)&(E)) Assessments and other evaluation materials must be: • “administered by trained and knowledgeable personnel” and • “administered in accordance with any instructions provided by the producer of the assessments.”

  8. Federal Definition Unchanged “A disorder in one or more of the basic psychological processes involved in understanding or using language, spoken or written, which manifests itself in the imperfect ability to listen, think, speak, read, write, spell, or do mathematical calculations. Such term includes such conditions as perceptual disabilities, brain injury, minimal brain dysfunction, dyslexia, and developmental aphasia.”

  9. Order of the Presentation • Not the IQ/Achievement Discrepancy • Response to Intervention • Pattern of Strengths & Weaknesses • Complementary, not exclusive approaches for SLD • Other disabilities?

  10. Professional Position Statements: No Discrepancy, Yes to RTI & PSW • National Association of School Psychologists 2007 • Oregon School Psychologists Association 2006 • Oregon Branch of the International Dyslexia Association 2007 • National Joint Committee on Learning Disabilities 2005 • U.S. Department of Education Office of Special Education Programs (OSEP) 2007

  11. President’s Message “I would hope that the goal here is to expand the methods of assessment available to the practitioner and not to limit them. It seems possible that these two very valuable approaches can be utilized along a continuum of collecting information about a child that would culminate in a very clear and comprehensive evaluation that would be of value to all.” Huff, L. (2005, February). President’s Message. NASP Communique, 33, 2-3.

  12. WE CAN ALL GET ALONG

  13. Weaknesses of the Old IQ/Achievement Discrepancy Model • Does not address the federal definition of SLD • Does not discriminate between disabled and non-disabled readers, or among children found to be easy or difficult to remediate (Vellutino et al., 2000, p. 235) • Results in a “wait to fail” model: students are not identified early • Fails to rule out lack of instruction or lack of effective curriculum as a causal factor for underachievement • Not consistently applied • Does not explain why a student is struggling to read or point to research-based interventions

  14. Why Not Full Scale IQ? – Prediction • Full Scale IQ explains only 10-20% of the variance in specific areas of achievement • Specific cognitive abilities explain 50-70% of the variance in specific areas of achievement • (Flanagan, Ortiz & Alfonso, 2007)
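A quick worked conversion makes the contrast concrete (standard psychometrics, not part of the slide): percent of variance explained is the square of the correlation, so these figures correspond to the following correlations between predictor and achievement.

```latex
% Converting variance explained (R^2) to a correlation: r = \sqrt{R^2}
\[
\text{Full Scale IQ: } R^{2} = .10 \text{ to } .20 \;\Rightarrow\; r \approx .32 \text{ to } .45
\]
\[
\text{Specific cognitive abilities: } R^{2} = .50 \text{ to } .70 \;\Rightarrow\; r \approx .71 \text{ to } .84
\]
```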

  15. What is Response to Intervention? • Tier 1: Research-based general education reading curriculum with universal screening (for all students) on the Big Ideas of reading (phonemic awareness, phonics, vocabulary, fluency, comprehension) • Tier 2: Small group interventions based on students’ needs for the lowest 20%; monitor progress to determine if they respond. If they don’t respond: • Tier 3: Comprehensive special education assessment, with small group or individualized instruction based on results

  16. Phoneme Segmentation Fluency

  17. ASHA Guidance for SLPs at All Tiers • http://www.asha.org/members/slp/schools/prof-consult/NewRolesSLP.htm • New roles: program design (selecting reading curricula); collaboration (universal screening, interpreting screenings, the language base of literacy); serving individual students (sound error screening, cut points, norm-referenced assessment, evidence-based practices for speech and language services in RTI or PSW)

  18. Speech Pathologists at Tier 1 • Phoneme Segmentation Fluency (DIBELS): phonological awareness, grades K-1 – hear and manipulate sounds in spoken words. Benchmark: 35 and above by fall of first grade. Correlation of .68 between spring kindergarten PSF and spring first grade WJ III ACH Total Reading. • Word Use Fluency (DIBELS): vocabulary and oral language, grades K-3 – use a target word in a sentence. Currently no benchmark goal; correlations of .44-.48 with the TOLD-3.

  19. Efficacy and Effectiveness • Randomized Controlled Trials (RCTs) establish efficacy: treatments are effective under specific conditions for specific populations when delivered with fidelity, in a standardized, replicable fashion • Progress Monitoring – serial independent assessment of achievement skills (Fletcher, 2005) – means measuring the same academic skills over time with tests not directly aligned with the curriculum (to reduce contextual variables in determining response to intervention) • Effectiveness shown through progress monitoring does not mean just looking at chapter test scores that measure different skills

  20. Rousseau, the READ dog • Reading Education Assistance Dogs • Effective based on single case studies (progress monitoring) • Efficacy with RCTs not established

  21. Tier Two CBM Basics • Identify skills in the year-long curriculum • Determine weight of skills in the curriculum • Create multiple alternate test forms • each test samples the entire year’s curriculum • each test contains the same types of problems • Give tests frequently (weekly/monthly) • Review results • Modify instruction as appropriate
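A minimal sketch of that blueprint logic, assuming hypothetical skill names, weights, and item pools (none of these come from the presentation): each alternate form samples the same skill mix from the year-long curriculum but draws different items.

```python
import random

# Hypothetical year-long curriculum blueprint: skill -> weight.
SKILLS = {
    "add_2digit": 0.30,
    "sub_2digit": 0.30,
    "mult_facts": 0.25,
    "word_problems": 0.15,
}

def make_form(n_items=20, seed=None):
    """Build one alternate probe form: same skill mix every time,
    different randomly drawn items (item IDs are placeholders)."""
    rng = random.Random(seed)
    counts = {skill: round(weight * n_items) for skill, weight in SKILLS.items()}
    form = [f"{skill}_item_{rng.randint(1, 500)}"
            for skill, n in counts.items() for _ in range(n)]
    rng.shuffle(form)  # mix skills so every form contains the same problem types
    return form

# Weekly alternate forms: identical blueprint, non-identical items.
week1, week2 = make_form(seed=1), make_form(seed=2)
print(len(week1), len(week2), week1[:3])
```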

  22. Different PM devices • Dynamic Indicators of Basic Early Literacy Skills (DIBELS) • Aimsweb reading, writing and math probes • EasyCBM • Home-made Curriculum-Based Measures • New idea for supplementing CBM: Item Response Theory (IRT)

  23. Advantages of commercial products • K-1 screenings predictive of success on state reading assessments at third grade • Universal screener tied to a progress monitor • Computer format is friendly for school-based teams and for parent communication • Avoids “circularity” in intervention and diagnosis (Suhr, 2008)

  24. Progress Monitoring

  25. Research findings • CBM with a “goal raising rule” for students responding well: effect size .52 SD (moderate) • CBM with a “change the program rule” for students not responding well: effect size .72 SD (moderate; .80 = large) • Results in teachers planning more comprehensive reading programs (Fletcher et al., 2007)
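A minimal sketch of how these two decision rules could operate on progress-monitoring data (the scores and the 1.5 words-per-week aimline are hypothetical, not values from the research cited above): fit a trend line to weekly probe scores and compare its slope to the expected growth rate.

```python
def trend_slope(scores):
    """Ordinary least-squares slope of scores across equally spaced weeks."""
    n = len(scores)
    mean_x = (n - 1) / 2
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(scores))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Eight weeks of hypothetical oral reading fluency scores (words correct/min).
scores = [22, 24, 23, 27, 28, 31, 30, 34]
aimline = 1.5  # hypothetical expected gain in words correct per minute per week

slope = trend_slope(scores)
print(f"observed growth: {slope:.2f} wcpm/week vs. aimline {aimline}")
if slope < aimline:
    print("below aimline -> apply the 'change the program' rule")
else:
    print("at or above aimline -> continue; consider the 'goal raising' rule")
```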

  26. MLC example • Kindergarten Phoneme Segmentation Fluency (DIBELS) information along with teacher nomination • Twelve-week phoneme segmentation intervention group led by a speech pathologist • Progress monitoring by an educational assistant trained in DIBELS administration and interpretation • Team determination of student progress and referral for comprehensive evaluation

  27. Technically adequate measures • The standard error of measurement (SEM) on reading fluency measures can vary by up to fifteen words correct per minute depending on grade and testing conditions (Christ & Silberglitt, 2007) • Issues in determining “gain scores” under RTI are potentially more complex than under severe discrepancy models (Reynolds, 2008) • Standards for Educational and Psychological Testing (1999)
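A worked example of why a 15-point SEM matters (the observed score of 48 words correct per minute is hypothetical): the confidence interval around a single fluency score can span several instructional decision points, which is why single data points should not drive eligibility decisions.

```latex
% Confidence interval around an observed score X: CI = X \pm z \cdot SEM
\[
\text{68\% CI: } 48 \pm 1.00 \times 15 = [33, 63] \text{ wcpm}
\]
\[
\text{95\% CI: } 48 \pm 1.96 \times 15 \approx [19, 77] \text{ wcpm}
\]
```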

  28. CBM and W: Meets ODE requirement for RTI with PM • Using CBM scores coupled with equal-interval scores from standardized tests in a test-retest format (e.g., the WJ III “W score”) • Form A for pre-test, then Form B for post-test, for children in the intervention group • The W score (a) provides a basis for comparison to the performance of a national sample of peers – it compares the child to “normal” development of skills (Decker, S. L., 2008, pp. 7-8) – and (b) is appropriate to the child’s age and grade placement (ODE sample forms for SLD, p. 8) • W is based on Item Response Theory (IRT) and theta parameters, unlike home-made CBM based only on scope or CBM of dubious technical quality • The W score could be integrated into CBM for RTI and for RTI after eligibility (e.g., IEP evaluation procedures) (Weiss, 2008)
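The W scale is commonly described as a Rasch (one-parameter IRT) ability scale rescaled so that 10 W units equal ln(3) logits, centered at 500; the sketch below illustrates that logic and its equal-interval property, but the WJ III technical manual is the authoritative source for the actual scaling.

```python
import math

W_PER_LOGIT = 10 / math.log(3)  # ~9.1024, assuming the scaling described above

def w_score(theta_logits):
    """Convert a Rasch ability estimate (in logits) to the W scale."""
    return 500 + W_PER_LOGIT * theta_logits

def p_success(person_w, item_w):
    """Rasch probability that a person at person_w passes an item at item_w."""
    logit_diff = (person_w - item_w) / W_PER_LOGIT
    return 1 / (1 + math.exp(-logit_diff))

# Equal-interval property: a 10-point W advantage implies ~75% success
# anywhere on the scale, which is what makes pre/post W gains comparable.
print(round(p_success(510, 500), 2))  # 0.75
print(round(p_success(460, 450), 2))  # 0.75
```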

  29. RTI Benefits (Feifer & Della Toffalo, 2007) • Ecological validity • Quicker and cheaper • Does not rely on teacher nomination • Linked to curricular decision-making • Encourages scientific interventions • Gets kids help earlier • De-emphasizes labels • Reduces “curriculum casualties” • Pro-active, not reactive

  30. RTI Concerns • Does not address the federal definition of a learning disability • Does not answer why a student is not responding to intervention • Lack of technically adequate measures • Lack of evidence-based third-tier interventions • One-size-fits-all approach to intervention: two students can fail at the same task for very different reasons • Misses the bright dyslexic kids • Potential to miss co-morbid conditions

  31. The use of RTI-only models “…not only reasonable but a desirable and expected outcome of RTI that a child would be considered learning disabled in one teacher’s classroom but not in a different classroom where the general achievement level and progress rate of other students was different” (Reschly, 2005)

  32. RTI Issues • According to the National Reading Panel Report (2000), teaching of phonics, a best practice, accounts for approximately 10% of the variance in reading treatment outcomes (Hammill & Swanson, 2006)

  33. Case Example • Student was identified by DIBELS and the teacher as in the lowest 20% in alphabet knowledge (phonics, not phonological) • Student did not respond to research-based phonics interventions • Student evaluated by an outside agency using an IQ/achievement discrepancy – not good enough for us • Student evaluated by the school-based team: SLD with a specific memory deficit (MA) • Ruled out ADHD and depression • Interventions prioritized

  34. Oregon Experience • U of O, Bethel, Tigard-Tualatin, Oak Grove • Reading First – NCLB funds, K-3, high-poverty/low-achieving schools; Cohort A – 33 schools in 14 districts, 3 yrs; Cohort B – 17 schools in 8 districts, 1 yr; Cohort C – 6 non-RF schools matched for comparison • Oregon RTI Initiative – IDEA funds, district-wide reform, TTS contract years/numbers of schools; 5 districts – 1 yr; 9 additional districts 2006-2007; secondary preparation grants • Support for All Students Reading – SIG funds, emphasis on secondary – Bethel contract • Parent Education – SIG funds, ORPTI contract

  35. From RTI to PSW and Neurological Theory

  36. Oregon options (either or both) • Response to Intervention: research-based general education curriculum; curriculum-based assessment of progress; tiered interventions; part of a comprehensive evaluation • Pattern of Strengths & Weaknesses: norm-referenced assessment based; academic comparison; academic-cognitive comparison; part of a comprehensive evaluation

  37. The definition of PSW (34 CFR 300.311(a)(5)), (34 CFR 300.309(a)(2)(ii)) • Evaluation documentation must consider whether the student exhibits a pattern of strengths and weaknesses • In performance, achievement, or both • Relative to age, State-approved grade-level standards, or intellectual development • That is determined by the group to be relevant to the identification of SLD, using appropriate instruments

  38. A six-box interpretation

  39. OSEP allows Teams to Choose • §300.309(a)(2)(ii) permits, but does not require, consideration of a pattern of strengths or weaknesses, or both, relative to intellectual development, if the evaluation group considers that information relevant to an identification of SLD.

  40. Main Idea of PSW • Many academic and cognitive abilities in the average range • Specific academic weaknesses • Specific cognitive weaknesses • Research-based links between the academic and cognitive weaknesses • Unrelated cognitive abilities are average or above • Full Scale IQ is irrelevant, except for ruling out MR

  41. Not Full Scale IQ • Explosive growth of scientific knowledge about the true “processes” that enable acquisition of reading, math and writing • Cattell-Horn-Carroll (CHC) Theory of Cognitive Abilities: the cognitive “Table of the Elements” • PASS theory based on Luria • All major tests revised to incorporate CHC, even those based on Luria (KABC), except the CAS

  42. Four Major Research-Based Models • Cognitive-academic approaches: • Flanagan, Ortiz & Alfonso, 2007 • Naglieri, 1999 • Fiorello & Hale, 2004 • Academics-only approach: • Fletcher, Lyon, Fuchs, & Barnes, 2007 • Comments from Fuchs, 2007: the academics-only approach is based on studies that he conducted and that he feels are no longer valid. Therefore, the academics-only approach is not recommended for Oregon schoolchildren.

  43. Academics Only Approach: Not Recommended • Word recognition & spelling <90 (phonological poor, spatial & motor skills good) • Reading fluency <90, accuracy good (automaticity problem: RAN poor) • Reading comprehension <90, 7 points below word reading (vocabulary, working memory & attention poor, phonics good) • Math computations <90, all reading good (executive functioning, working memory & attention poor, phonics and vocabulary good) • Spelling <90 (residuals of poor phonics, fluency often impaired) • Word recognition, fluency, comprehension, spelling & math <90 (language and working memory poor)

  44. Empirical multivariate statistical methods: Morris (1998) • Rate (affects fluency and comprehension) • Rate & Phonology • Rate, Phonology, & Verbal Short-Term Memory (VSTM) (big group) • Phonology, VSTM, & Spatial • Phonology, VSTM, & Lexical • Global & Language • Global • No longer valid categories, according to co-author Fletcher (2007)

  45. Consistency-Discrepancy (Naglieri) • Processing Strength to Academic Strength (no significant difference) • Processing Strength to Academic Weakness (significant difference) • Processing Weakness to Academic Weakness (no significant difference) • Processing Strength to Processing Weakness (significant difference)
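A minimal sketch of this four-comparison logic (the 12-point critical difference and all standard scores are hypothetical placeholders; in practice, test-specific critical values at a chosen significance level are used):

```python
CRITICAL_DIFF = 12  # hypothetical critical difference in standard-score units

def differs(a, b):
    """Treat a gap at or beyond the critical value as a significant difference."""
    return abs(a - b) >= CRITICAL_DIFF

# Hypothetical standard scores (mean 100, SD 15).
processing_strength = 104   # e.g., a strong PASS processing scale
processing_weakness = 82    # e.g., a weak PASS processing scale
academic_strength = 101
academic_weakness = 80

pattern = (
    not differs(processing_strength, academic_strength)      # consistency
    and differs(processing_strength, academic_weakness)      # discrepancy
    and not differs(processing_weakness, academic_weakness)  # consistency
    and differs(processing_strength, processing_weakness)    # discrepancy
)
print("consistency-discrepancy pattern present:", pattern)  # True here
```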

  46. Concordance-Discordance (Hale & Fiorello) • Cognitive Hypothesis Testing • Examine results from cross-battery assessment • If there are differences between scores on a similar construct (e.g., working memory), use task demands analysis and the method of input and output of particular tasks • Use other methods such as checklists, dynamic assessment, and observations to document strengths and weaknesses for curricular planning

  47. Hale and Fiorello (p. 135) write: Using an intellectual/cognitive measure (e.g., the Woodcock-Johnson III [WJ-III]), a fixed battery (e.g., the Halstead-Reitan), and additional hypothesis-testing measures (e.g., subtests from the Comprehensive Test of Phonological Processing [CTOPP]) might be the ultimate approach for conducting CHT.

  48. Flanagan, Ortiz, & Alfonso’s Aptitude-Achievement Consistency (2007) • After RTI and/or documentation of instruction and progress monitoring, and after ruling out exclusionary factors • Documentation of underachievement on a norm-referenced achievement test (standard score < 85) • Measure all cognitive abilities that research shows support the specific area of achievement at the specific age of the child • At least one of those abilities must be below 85 and have documented ecological correlates • Cognitive abilities that don’t relate are average or above: an “otherwise normal ability profile” • Computer program: “SLD Assistant” • Essentials of Cross-Battery Assessment (2nd ed.). New York: Wiley.
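Reduced to its score thresholds, the model’s core decision logic can be sketched as follows (all scores and ability labels are hypothetical; the full model also requires the exclusionary, ecological, and instructional steps listed above):

```python
NORMATIVE_WEAKNESS = 85  # standard-score cutoff used in the model

def aptitude_achievement_consistency(achievement, related_cog, unrelated_cog):
    """Inputs are lists of standard scores (mean 100, SD 15)."""
    underachievement = min(achievement) < NORMATIVE_WEAKNESS
    related_deficit = min(related_cog) < NORMATIVE_WEAKNESS
    otherwise_normal = all(s >= NORMATIVE_WEAKNESS for s in unrelated_cog)
    return underachievement and related_deficit and otherwise_normal

# Hypothetical case: weak basic reading, weak phonological processing,
# other broad abilities intact -> consistent with the model's criteria.
print(aptitude_achievement_consistency(
    achievement=[78],             # basic reading skills
    related_cog=[82, 88],         # abilities research links to reading (e.g., Ga)
    unrelated_cog=[96, 102, 99],  # unrelated abilities (e.g., Gv, Gf)
))  # True with these hypothetical scores
```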

  49. What is CHC Intelligence Theory? • Cattell, Horn and Carroll • 7 Broad Categories of Intelligence • Clean, Not Mixed Factors (No Sharing) • Many Narrow Categories of Intelligence Underneath Each Broad Factor • Less Emphasis on a Full-Scale Score
