
Written Language



  1. Written Language Tobin White, Frisco ISD; Lloyd Kinnison, Texas Woman’s University; 2008

  2. Literacy Components • The primary components are reading and writing. • Reading has received the most attention in current discussions of Response to Intervention. • Writing is necessary for success in a technological society.

  3. Written Expression Assessment • Assessment of written language skills received little research attention until written expression was included in the definition of learning disabilities under PL 94-142.

  4. Why Assess Written Expression? • To monitor student progress • To make instructional decisions • THEREFORE, ASSESSMENTS MUST: • be easy to use • permit frequent administration • be affordable • support data-based instructional decisions

  5. Commonly Used Assessment Tools • Test of Written Language (TOWL) • Written expression subtests of: • WJ III ACH • KTEA-II • WIAT-II • PIAT-III • CBM

  6. Curriculum-Based Measures: Written Expression • Production-dependent (fluency) measures • Studies show adequate reliability • Significant correlations with the TOWL and the Stanford Achievement Test (spelling, correct letter sequence, correct word sequence) (Deno, Marston, & Mirkin, 1982; Gansle, Noell, VanDerHeyden, Naquin, & Slider, 2002).

  7. CBM, cont’d • Production-independent (accuracy) measures • Percentage of words spelled correctly • Percentage of legible words • Percentage of correct word sequences • Strongly correlated with teachers’ holistic ratings of middle school students’ writing (Tindal & Parker, 1989; Espin, Shin, Deno, Skara, Robinson, & Benner, 2000).
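
  As a rough illustration only, here is a minimal Python sketch of these production-independent percentages. The word list, the sample, and the simplified definition of a correct word sequence (an adjacent pair of correctly spelled words) are assumptions for demonstration; real CWS scoring also judges syntax, capitalization, and punctuation, and calls for a trained scorer.

# Hypothetical word list standing in for a real dictionary.
DICTIONARY = {"the", "dog", "ran", "down", "street", "and", "saw", "a", "cat"}

def pct_words_spelled_correctly(words):
    # Production-independent accuracy: share of words spelled correctly.
    correct = sum(w.lower() in DICTIONARY for w in words)
    return 100 * correct / len(words)

def pct_correct_word_sequences(words):
    # Simplified CWS: adjacent pairs in which both words are spelled
    # correctly; real scoring also checks syntax and mechanics.
    pairs = list(zip(words, words[1:]))
    ok = sum(a.lower() in DICTIONARY and b.lower() in DICTIONARY
             for a, b in pairs)
    return 100 * ok / len(pairs)

sample = "The dog ran dwon the street and saw a cat".split()
print(f"% words spelled correctly: {pct_words_spelled_correctly(sample):.0f}")  # 90
print(f"% correct word sequences: {pct_correct_word_sequences(sample):.0f}")    # 78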

  8. CBM/CBA • Curriculum-based measures/assessment provide a model within which skill hierarchies, environmental variables, or instructional variables may be examined with specific probes. • When the data suggest that a student’s fluency is not increasing with instruction, a problem is considered to exist and a change in instructional procedures is needed.

  9. CBM/CBA • A general limitation of curriculum-based assessment is the technical adequacy of the measures and the skill of the professional interpreting the data. • CBMs in written expression pose a greater challenge than CBMs in reading or mathematics.

  10. Challenge • Assessment of written expression using CBM generally presents an incomplete stimulus (a sentence fragment or story starter) and asks students to create their own prose in response to this ambiguous stimulus. • The challenge is that writing is a multidimensional task that is frequently assessed with a one-dimensional, production-based measure.

  11. Gansle et al. (2004) • Gansle and colleagues reported that measures such as correct capitals, punctuation marks, complete sentences, and words in complete sentences have superior technical characteristics to words in correct sequence, total words written, and words spelled correctly. • Traditional CBMs use total words written, correct word sequences, and words spelled correctly, and these have adequate reliability and validity (Jewell & Malecki, 2005).
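
  A naive Python sketch of counting some of these alternate metrics; the regular-expression definition of a "complete sentence" (starts with a capital letter, ends with terminal punctuation) is an assumption for illustration, since real scoring requires judging correctness by hand.

import re

def alternate_metrics(text):
    # Punctuation marks (not checked for correctness here).
    punctuation = len(re.findall(r"[.!?,;:]", text))
    # Naive "complete sentence": capital letter through terminal punctuation.
    sentences = re.findall(r"[A-Z][^.!?]*[.!?]", text)
    words_in_sentences = sum(len(s.split()) for s in sentences)
    return punctuation, len(sentences), words_in_sentences

text = "The dog ran. he fell down. Then he got up!"
print(alternate_metrics(text))  # -> (3, 2, 7): "he fell down." lacks a capital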

  12. Purpose • The purpose of this study was to assess the written expression of 4th-grade students using curriculum-based assessment. • The participants were 20 fourth-grade students in a general education setting that included students with disabilities.

  13. Probes • The study used the procedures and probes recommended by Jim Wright (2007): • Provide the students with a story starter; • Give the students 1 minute to plan a response; • Give the students 3 minutes to write as much as they can; • Collect the writing probes from the students.
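
  A minimal timing sketch of these administration steps, assuming a console setting; the story starter shown is a hypothetical example, not one of the study’s probes.

import time

STORY_STARTER = "One day, I found a mysterious box on my doorstep and..."  # hypothetical

def administer_probe():
    print("Story starter:", STORY_STARTER)
    print("Plan your response.")
    time.sleep(60)    # 1 minute to plan
    print("Begin writing; write as much as you can.")
    time.sleep(180)   # 3 minutes to write
    print("Stop writing. Probes will now be collected.")

administer_probe()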

  14. Methods • Five writing probes were administered over one week to establish a baseline. • One writing sample was collected each week for the next eight weeks. • Scoring procedures followed the recommendations of Wright (2007).

  15. Participants • Population • 20 students in a 4th-grade general education class • 2 students were served in special education: 1 student had a Learning Disability in Written Expression and the other had an Other Health Impairment for ADHD • Why chosen? • Because the scope of the study did not include a specific intervention, fourth grade was chosen: substantial writing instruction (including after-school tutoring sessions) was already being implemented in preparation for the writing TAKS.

  16. Scoring • The writing probes were scored based on total words written. • Why was this method selected? • It yields objective data. • It is the least time-intensive way to measure writing. • Teachers need a simple, practical way to score the writing samples if they are to use CBM writing probes consistently. • All teachers can use this scoring method with little to no training.
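
  A minimal sketch of this scoring rule, assuming a word is any whitespace-separated token (an actual scorer might handle hyphenation or stray punctuation differently):

def total_words_written(sample: str) -> int:
    # Count every whitespace-separated token, spelled correctly or not.
    return len(sample.split())

probe = "The dog ran down the street and then he saw a cat"
print(total_words_written(probe))  # -> 12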

  17. Why Not Use Total Words Spelled Correctly? • An analysis of the spelling errors revealed that fluent writers took risks with larger vocabulary words but did not usually spell these words correctly. Their number of misspelled words thus rivaled the number of misspelled grade-level words produced by less fluent writers. • We did not want to penalize students for taking risks in their writing. • Spelling is more closely correlated with reading ability; total words written is more closely correlated with written expression standards.

  18. Analyzing the Data • Each week a class average was calculated. • Each student’s score (total number of words written) was then compared to the class average to rank student performance. • Individual student data were plotted from week to week to measure growth in writing fluency.
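
  A sketch of this weekly analysis in Python; the scores below are hypothetical stand-ins, not the study’s data.

scores = {  # total words written per week (hypothetical)
    "Student A": [61, 70, 55, 80, 66, 90, 75, 85],
    "Student B": [30, 25, 33, 20, 28, 35, 22, 31],
}

n_weeks = len(next(iter(scores.values())))
class_avgs = [sum(s[w] for s in scores.values()) / len(scores)
              for w in range(n_weeks)]

def trend_slope(ys):
    # Least-squares slope: words gained (or lost) per week.
    xs = range(1, len(ys) + 1)
    mx, my = sum(xs) / len(ys), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

for student, ys in scores.items():
    above = sum(y > a for y, a in zip(ys, class_avgs))
    print(f"{student}: {trend_slope(ys):+.1f} words/week trend, "
          f"above class average in {above}/{n_weeks} weeks")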

  19. Case Study #1 • Student A • Baseline of 61 (average of 5 data points) • Range: 45-103

  20. Individual Data Versus Class Average: Student A (Total Words Written)

  21. Case Study #2 • Student B: • Baseline of 30 (average of 5 data points) • Range: 9-39

  22. Individual Data Versus Class Average: Student B (Total Words Written)

  23. Comparing Student Performance (Total Words Written)

  24. What Does This Tell Us? • It is clear that Student A is not at risk and Student B is struggling; but what information can we glean from graphing an individual’s performance against the class average? • We can quickly see how individual students compare to the class average and select the students who need additional interventions. • We can determine whether student scores are increasing over the course of an intervention by looking at the trend line. • Is this enough data to determine which students are “at risk”? No!

  25. Rate of Growth Dilemma • Looking at the class average, there was no clear trend line indicating growth in total number of words written over the course of the 8-week study.

  26. Why Wasn’t There Growth? Possible explanations: • Some story starters may have been more appealing than others and inadvertently given students more ideas for writing, which would feasibly affect writing fluency. • Students’ focus, energy, and effort varied from week to week. • An 8-week period may not be long enough for students to grow in writing fluency.

  27. Why Is This Problematic? • 8 weeks should be sufficient time for determining which students are at risk. • We cannot afford to wait until significant growth occurs to put interventions in place. • It is difficult to tell which students are at risk.

  28. A Better Way? • Use of national norms allows us to compare an individual student’s average performance to other fourth graders around the nation. • This provides a more concrete way to determine who is considered “at risk.”
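
  A hedged sketch of this norm comparison; the percentile cut scores below are made-up placeholders, not actual published norms (real fourth-grade figures would come from a source such as interventioncentral.org).

# Hypothetical percentile cut scores: percentile -> total words written.
NORMS_GRADE4 = {25: 28, 50: 41, 75: 54}

def risk_band(avg_words: float) -> str:
    if avg_words < NORMS_GRADE4[25]:
        return "below 25th percentile: at risk"
    if avg_words < NORMS_GRADE4[50]:
        return "25th-50th percentile: monitor"
    return "at or above 50th percentile: not at risk"

for student, avg in {"Student A": 68.0, "Student B": 26.5}.items():
    print(student, "->", risk_band(avg))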

  29. The Importance of Using an Average • Significant variation occurred from week to week for individual students (a wide range of scores). • Using an average gives a more accurate overall picture of student capability than looking at an individual score that may be unusually high or low for that student.

  30. Does It Work? • Twelve of the twenty students in the class (more than half) scored above the 50th percentile. • Two students in the class scored below the 25th percentile compared with national norms: • These two students were… the two students in special education! • The lowest student was… the student with an LD in Written Expression!

  31. How Might We Use This Method? • 1. For RTI, teachers can compare their students’ performance with national norms for a concrete “at risk” measurement. (Districts may want to develop local norms that hold students to higher standards than national norms do.) • 2. This method helps us identify who needs additional interventions and, ultimately, who would be a good candidate for a special education referral.

  32. References • Deno, S., Marston, D., & Mirkin, P. K. (1982). Valid measurement procedures for continuous evaluation of written language expression. Exceptional Children, 48, 368-371. • Espin, C., Shin, J., Deno, S., Skara, S., Robinson, S., & Benner, B. (2000). Identifying indicators of written expression proficiency for middle school students. Journal of Special Education, 34, 140-153. • Gansle, K., Noell, G., VanDerHeyden, A., Naquin, G., & Slider, N. J. (2002). Moving beyond total words written: The reliability, criterion validity, and time cost of alternate measures for curriculum-based measurement in writing. School Psychology Review, 31(4), 477-497. • Gansle, K., VanDerHeyden, A., Noell, G., Resetar, J., & Williams, K. (2006). The technical adequacy of curriculum-based and rating-based measures of written expression for elementary school students. School Psychology Review, 35(4), 435-450. • Jewell, J., & Malecki, C. (2005). The utility of CBM written language indices: An investigation of production-dependent, production-independent, and accurate-production scores. School Psychology Review, 34, 27-44. • Tindal, G., & Parker, R. (1989). Assessment of written expression for students in compensatory and special education programs. Journal of Special Education, 23, 169-183. • Wright, J. (2007). Curriculum-Based Measurement Warehouse. Retrieved June 24, 2007, from www.interventioncentral.org
