
Writing assessment items and instructional texts for English Second Language speakers

Marise Ph. Born, Erasmus University Rotterdam, The Netherlands & Cheryl Foxcroft, Nelson Mandela Metropolitan University, South Africa. born@fsw.eur.nl, cheryl.foxcroft@nmmu.ac.za


Presentation Transcript


  1. Writing assessment items and instructional texts for English Second Language speakers. Marise Ph. Born, Erasmus University Rotterdam, The Netherlands & Cheryl Foxcroft, Nelson Mandela Metropolitan University, South Africa. born@fsw.eur.nl, cheryl.foxcroft@nmmu.ac.za

  2. Part 1: General Challenges • English is a global language and a widely used medium in business and higher education. • Most instructional texts and journals are published in English, and many higher education institutions use English as the main medium of instruction and assessment. • Given the high cost of adapting tests, together with the argument that a certain level of English proficiency is needed in many countries to function effectively in the workplace, many assessment measures (tests) are available and/or administered only in English. • This can pose difficulties for people whose first language is not English, because English can be confusing at times.

  3. Part 1: General Challenges (cont.) • The same word can mean different things. For example, “tire” could mean that you are getting tired, or, if the American spelling is used, it could mean the round rubber wheel on a motor car (and the differences between British and American spellings of a word add to the confusion). • A word used as part of a term can take on a different meaning. For example, “random” usually refers to a chance/accidental/haphazard happening, but “random sampling” is not haphazard: it is a systematically planned probability sampling technique that gives everyone an equal chance of being selected.

  4. Part 1: General Challenges (cont.) • Unfortunately, many writers of texts and developers of tests have English first-language speakers in mind when they pen (write) texts and test items, yet many second- and third-language (etc.) English speakers will want to derive meaning and knowledge from those texts, or their futures may depend on answering the test items correctly. Do such texts and test items not immediately put non-native English speakers at a disadvantage?

  5. Part 1: General Challenges (cont.) • According to Nell (2000), in a multilingual country such as South Africa, “language is generally regarded as the most important single moderator of test performance”. This is because “performance on assessment measures could be the product of language difficulties and not ability factors if a measure is administered in a language other than the test-taker’s home language” (in Foxcroft & Roodt, 2005, p. 230).

  6. Part 1: General Challenges (cont.) • “Language rarely constitutes the trait being assessed directly. Instead, language commonly is used as a vehicle of communication between the test (i.e., through written text or presented orally by an examiner) and the person taking the test. Although a test may not be designed to assess language, scores may be attenuated by a test-taker’s deficiencies in listening or reading … or in speaking or writing. These and other qualities that may artificially depress test performance have the potential to promote construct irrelevant variance and thus decrease the test’s validity. Efforts to minimize the impact of language-related qualities, including reading, when designing or adapting tests that assess other traits are needed” (Oakland & Lane, 2004, p. 242).

  7. Readability defined • Among other things, there is a relationship between the reading difficulty of a test and the extent to which there could be construct-irrelevant variance. Thus, knowledge of the methods used to assess reading difficulty may lead to the development/adaptation of better texts and tests for non-native English speakers. • “Readability is the sum total (including the interactions) of all those elements within a given piece of printed material that affect the success a group of readers have with it. The success is the extent to which they understand it, read it at an optimum speed, and find it interesting” (Dale & Chall, 1949, p. 23).

  8. Factors that contribute to text difficulty (Oakland & Lane, 2004, p. 248). [Diagram: text factors (syntax, vocabulary, idea density, cognitive load) and reader factors (reading fluency, background knowledge, language, motivation & engagement) together determine text difficulty.]

  9. Part 2: Readability Study. Readability of instructional material in English for first- and second-language readers. Marise Ph. Born, Erasmus University Rotterdam, The Netherlands & Cheryl Foxcroft, Nelson Mandela Metropolitan University, South Africa

  10. Overview • Context of the study: the ITC ORTA project • Readability: matching reader and text • Factors determining readability • First- and second-language readers • Measuring readability • Purpose of the study • Study among South African students: method and results • Implications for follow-up research and for practice

  11. On-Line Readings in Testing and Assessment (ORTA) Project. Cheryl Foxcroft (South Africa) and Marise Born (The Netherlands). Goal: to provide on-line, free-of-charge readings on aspects of testing and assessment, to students and scholars from developing countries in particular. How: by inviting knowledgeable authors in the domain of testing and assessment to provide extremely reader-friendly texts. • No royalties are involved; authors contribute to advance knowledge across the globe. • A framework of topics, an invitation letter and a guide for authors are available from cheryl.foxcroft@upe.ac.za, born@fsw.eur.nl. • ORTA is a dynamic project: texts are added to the website as they become available, and the framework of topics may be expanded.

  12. Readability • How should the readability of the ORTA instructional material be determined? • One possibility: the Flesch Reading Ease formula, derived from sentence length and syllables per word. The material receives a reading-ease score, which measures surface-structure characteristics. • Problem: this assumes that readability is an inherent property of the text. • That assumption runs counter to the meaning of readability as a match between reader and text: how is a particular text responded to by different readers?
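A minimal Python sketch of the Flesch Reading Ease computation: the formula constants are the standard published ones, but the sentence splitter and the vowel-group syllable counter below are crude simplifying assumptions, so scores will only approximate those of dedicated tools.

    import re

    def count_syllables(word):
        # Crude heuristic: one syllable per run of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        # Rough sentence and word segmentation.
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        # Standard formula: higher scores indicate easier text.
        return (206.835
                - 1.015 * (len(words) / len(sentences))
                - 84.6 * (syllables / len(words)))

Note that the formula looks only at sentence length and word length, which is exactly the surface-structure limitation the slide points out.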

  13. Readability (cont’d) We can distinguish between: • a collection of individuals with given interests and reading skills, and • a collection of reading materials, differing in content, style and complexity. The match between the two sides determines the extent to which the material can be read with profit. Profit: not only comprehensible but also compelling information!

  14. Factors determining readability. Readers: • Goals: interest in & motivation to read the text • Background knowledge of the topic • Reading fluency • Language: knowledge of the words in the text. Instructional text: • Surface-structure features of the text (paragraphing, titles, sentence length, text aids such as tables and graphs …) • Word ease/frequency • Causal structure of, and inferences required in, the text

  15. First- and second-language readers. Have first-language (FL) and second-language (SL) reader differences been an issue at all in the readability literature until now? A gap in the research: • Linguists focus on language learning (e.g., Modern Language Journal, Foreign Language Annals) • Psychologists ask: how can instructional texts be improved? • The Marsh et al. studies (2000; 2002) among Chinese students: SL instruction in the early high school years had negative effects on academic self-concept and academic achievement, and was particularly problematic in non-language subjects.

  16. Measuring readability. Many measures have been used (Wagenaar et al., 1987). Objective ones: the Flesch score (a surface-level feature); reading time; eye movements during reading; recall tests; sentence-completion tests; the number of required inferences per 100 words (a structure-level feature); etc. Subjective ones: perceived difficulty of the text; assessment of text readability; compellingness; etc.

  17. Purpose of the present study. St Paul’s first letter to the Corinthians 14:9: “Except ye utter by the tongue words easy to be understood, how shall it be known what is spoken?” Purpose: determining readability differences for first- and second-language readers of English university-level instructional texts.

  18. Participants: undergraduate students at the NMMU South Campus. Characteristics of the sample (N = 59): 3rd-year students in an introductory psychological assessment module.

  19. Stimulus Material • Participants were given a 6-page passage to read on the history of psychological testing. • Some statistics: • Counts: words = 1847; characters = 10433; paragraphs = 25; sentences = 78 • Averages: sentences per paragraph = 4.5; words per sentence = 22.5 • Readability: passive sentences = 25%; Flesch Reading Ease = 20.4 (scores of 60-70 are the usual target); grade level = 12. • Why we did not use these as objective measures: readability measures are not sensitive to academic writing style.
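These are the kind of counts a word processor’s readability statistics produce. A rough Python re-creation is sketched below; the paragraph and sentence splitting rules are simplifying assumptions, so the exact numbers will differ slightly from any particular tool’s.

    import re

    def text_statistics(text):
        # Paragraphs: blank-line-separated blocks; sentences: runs
        # ending in ., ! or ? (both rough assumptions).
        paragraphs = [p for p in re.split(r"\n\s*\n", text) if p.strip()]
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        return {
            "words": len(words),
            "characters": sum(len(w) for w in words),
            "paragraphs": len(paragraphs),
            "sentences": len(sentences),
            "sentences_per_paragraph": round(len(sentences) / len(paragraphs), 1),
            "words_per_sentence": round(len(words) / len(sentences), 1),
        }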

  20. Procedure • Two classes, with counterbalanced order of presentation. • Class 1 (n = 21): pretest; cloze test; read the text (stimulus material); answer questions regarding word ease, word meaning, drawing inferences, and rating of surface features; post-test. • Class 2 (n = 38): pretest; read the text (stimulus material); answer questions regarding word ease, word meaning, drawing inferences, and rating of surface features; cloze test; post-test.

  21. Results • The pretest correlates significantly with some of the readability measures (e.g., perceived difficulty r = .3 and cloze r = .6).

  22. Results (cont’d): Post-test • All groups showed significant improvement, but the ESL-Xhosa group showed the least. • Significant differences among the 3 language groups: EFL mean > ESL-Xhosa mean.

  23. Results (cont’d): Cloze Test • NB: only half the sample could complete the task. • For example: “A further milestone in the development of ……… psychological assessment came from the work of …….., a professor of philosophy in Germany. …….. to McReynolds (1986), Thomasius made two …….. contributions to the emerging field of assessment.”
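The slides do not state how the blanks were chosen; the classic cloze procedure simply deletes every n-th word. A minimal sketch of that fixed-ratio scheme (the interval n = 7 is purely illustrative):

    import re

    def make_cloze(text, n=7, blank="........"):
        # Replace every n-th word with a blank, preserving surrounding
        # punctuation; return the gapped text and the answer key.
        words = text.split()
        answers = []
        for i in range(n - 1, len(words), n):
            m = re.match(r"(\W*)(.*?)(\W*)$", words[i])
            answers.append(m.group(2))
            words[i] = m.group(1) + blank + m.group(3)
        return " ".join(words), answers

A test-taker’s score is then the proportion of blanks restored correctly, which is how cloze performance is typically quantified.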

  24. Results (cont’d): Perceived Difficulty (Word Ease) • [Charts: EFL, ESL-Afrikaans & ESL-Xhosa groups; EFL and ESL comparison.]

  25. Results (cont’d): Word Meaning

  26. Results (cont’d): Rating of the Importance of Visual/Surface Features

  27. Results (cont’d): Causal Inferences

  28. Example: implications for practice. Differences in causal inferences. • Implication: enhance instructional texts by leaving the inferences in the text (Britton & Gulgoz, 1991). • Do not use a different word when mentioning a concept for the second time. • Recognize that the inferences a subject-matter expert (SME) makes automatically when writing an instructional text are the very ones a novice must work through. • Is training writers possible (Britton et al., 1989; T.M. Duffy et al., 1989)? • Implication: enhance underdeveloped inference-making skills in certain ESL groups.

  29. Implications for follow-up research and the writing of instructional texts • Investigate the validity of readability measures: is the cloze test measuring something different in the ESL group? • Do not treat all ESL people as the same. • Use further measures of readability (reading rate, free-recall tests). • Use covariates (motivation, interest: the Matthew effect; Stanovich, 1986). • Use an experimental design with several enhanced conditions (improved causal structure, surface structure, word difficulty …). • Information on writing instructional texts for ESL students is available from Marise and Cheryl.
