Teacher Assessment versus Exams



Presentation Transcript


  1. Teacher Assessment versus Exams Peter Tymms CEM, Durham University www.cemcentre.org

  2. Overview • The Issue • The importance of LAs, Schools and teachers • Fairness and bias • Coverage and sampling • Teacher assessment • Exams and tests • Predictive validity • Conclusions

  3. The Issue • Teacher assessment is unfair because it is unreliable and biased. • Exams are simply snapshots and are unrepresentative of the work that has really been done.

  4. Which matters most? • LA • School • Teacher • Pupil

  5. Newcastle Commission: Data Sources • Several national datasets including • ASPECTS, PIPS, MidYIS & YELLIS • KS1, KS2, KS3 & GCSE • Looked at value-added using 3-level multilevel models
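
The deck does not include code, but the 3-level value-added model mentioned above (pupils nested in schools nested in LAs) can be sketched in Python with statsmodels. This is a minimal illustration under stated assumptions: the data file and its columns (ks2, ks1, school, la) are hypothetical, not the Newcastle Commission's actual data or code.

```python
# Sketch of a 3-level value-added model: pupils within schools within LAs.
# Hypothetical columns: ks2 (outcome), ks1 (prior attainment), school, la.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pupils.csv")  # hypothetical dataset

model = smf.mixedlm(
    "ks2 ~ ks1",                             # value-added: KS2 adjusted for KS1
    data=df,
    groups="la",                             # random intercept for each LA
    re_formula="1",
    vc_formula={"school": "0 + C(school)"},  # schools nested within LAs
)
result = model.fit()
print(result.summary())                      # variance components per level
```

The pupil-level variation is left in the residual, so the fitted model yields one variance estimate per level, which is what the following slides compare.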

  6. Example using KS2 English

  7. Willms’ Diagram

  8. The Teacher Effect

  9. Which matters most? • LA • School • Teacher • Pupil

  10. Conclusion • Pupils vary enormously • Teachers have the greatest impact • Schools are relevant • Authorities hardly vary at all • Proximate variables dominate
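
The ranking on slide 10 comes from comparing the variance sitting at each level. A toy calculation of variance partition coefficients makes the point concrete; the component values below are invented for illustration and are not the study's estimates.

```python
# Variance partition coefficients (share of total variance at each level).
# The component values are made-up illustrative numbers only.
var_la, var_school, var_pupil = 0.1, 1.4, 8.5

total = var_la + var_school + var_pupil
for level, v in (("LA", var_la), ("School", var_school), ("Pupil", var_pupil)):
    print(f"{level:6s} {v / total:6.1%}")
# With numbers like these, LAs hardly vary, schools matter somewhat,
# and pupil-level differences dominate - the pattern the slide describes.
```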

  11. Hypothesis • The best teachers will be best at judging their students

  12. What is bias? • Bias appears in a test when part of an assessment is harder for a particular group. • Or when an assessor systematically downgrades a group or an individual for construct-irrelevant reasons.
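
The first kind of bias on slide 12 - an item that is harder for one group at the same overall ability - is commonly screened for with logistic-regression DIF (differential item functioning). A hedged sketch follows; the data file and column names are hypothetical.

```python
# Logistic-regression DIF screen for one item: does group membership
# predict the item response once overall ability (total score) is held
# constant? Hypothetical columns: item (0/1), total, group (0/1).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")  # hypothetical item-response data

fit = smf.logit("item ~ total + group", data=df).fit()
print(fit.summary())
# A significant coefficient on group indicates uniform DIF: the item is
# systematically harder for one group at matched ability.
```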

  13. Example of item bias [figure: test items picturing a pigeon and a turtle]

  14. Examples of teacher bias • Anecdote • By Sex (e.g. baseline & page 17 of Harlen) • By ability – judgement anchored by experience • By Ethnicity – assault experiments • By social class • By behaviour (the origin of ability testing; Binet) • By Age (EPICure study) • By incident – e.g. spilling a glass of water • The halo (or horns) effect (e.g. P scales)

  15. P Scales in 2004

  16. Teacher reliability • How should reliability be assessed? • By looking at the internal consistency of judgements? • By looking at the link to external assessments? • By comparing over time? • By comparing one teacher with others? • Facets model within Rasch measurement
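
Two of the routes listed on slide 16 are easy to illustrate: agreement between teachers (e.g. Cohen's kappa) and the link to an external assessment (a simple correlation). The ratings and scores below are invented for the sketch.

```python
# (1) Agreement between two teachers rating the same pupils: Cohen's kappa.
# (2) Link between teacher levels and an external test: Pearson correlation.
# All numbers are hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score

teacher_a = [3, 4, 4, 2, 5, 3, 4, 2, 3, 5]   # teacher-assessed levels
teacher_b = [3, 4, 3, 2, 5, 4, 4, 3, 3, 4]   # second rater, same pupils
print("kappa:", round(cohen_kappa_score(teacher_a, teacher_b), 2))

test = np.array([31, 45, 38, 22, 55, 40, 44, 30, 33, 49], dtype=float)
print("r with test:", round(np.corrcoef(np.array(teacher_a, float), test)[0, 1], 2))
```

The Facets (many-facet Rasch) approach named on the slide goes further, modelling rater severity directly, though it is typically done with specialist software.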

  17. Trusting teachers’ judgement (Harlen 2005) “The findings of the review by no means constitute a ringing endorsement of teachers’ assessment; there was evidence of low reliability and bias in teachers’ judgements”

  18. 5-14, Portfolios & single level tests • 5-14 assessments • What about portfolios? • inter-rater very low for maths and writing • English teacher levels in SATs • early 1990s “considerable error” • later quite common to find teacher = test results • single level tests compromised by teacher judgement

  19. Is it OK for teachers to assess their own pupils for high-stakes exams? • How does the power to grade affect relationships? • Would you give McEnroe a B?

  20. Exam/test reliability • Typically around 0.9 but … • Distinguish the assessment of • Convergent questions • Divergent questions
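
A reliability of around 0.9 sounds high, but it still leaves a noticeable standard error of measurement. A quick worked calculation, assuming (hypothetically) scores scaled to a standard deviation of 15:

```python
# SEM implied by reliability r: SEM = SD * sqrt(1 - r).
# SD = 15 is an assumed scale, not taken from the slides.
import math

r, sd = 0.90, 15.0
sem = sd * math.sqrt(1 - r)
print(f"SEM = {sem:.1f} points; 95% band = +/-{1.96 * sem:.1f} points")
# About +/-9 points at 95% - enough to matter at grade boundaries.
```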

  21. Exam/test bias • Pre-tests are often used to address issues of bias • But we place much reliance on judgement • England’s major exams are largely not pre-tested.

  22. Are exams inappropriate snapshots? • Issue 1: Questions must be representative samples of the course under exam conditions • Issue 2: Constraint on the nature of the assessment • The multitrait-multimethod challenge (Campbell & Fiske, 1959) • Issue 3: Impact of stress on performance • Positive & negative (links to introversion)

  23. Introverts and extroverts [figure: effort versus stimulus curves]

  24. We need to match format to content • Some things must be assessed by judgement: • Social interactions • Quality of research • Poetry • Art • Some things are best left to tests: • Mental arithmetic • Spelling • Phonological awareness • Diagnostic assessments (e.g. INCAS) • Even so, perhaps there is a final arbiter

  25. Predictive validity [diagram: developed ability tests (MidYIS/IQ/etc.), attainment tests (Standard Grade/Highers) and teacher grades as predictors of later success – degree, salary, etc.]

  26. We need the evidence, but … • Prediction is often poor • Two major reasons

  27. Prediction of Educational Achievement

  28. Correlation = 0.7

  29. Select top 15%

  30. Correlation = 0.39

  31. Cream top 3%; r=0.19

  32. So, poor prediction because of • Prior selection • Variable outcome measures
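
Slides 27–32 argue that prior selection shrinks the predictor–outcome correlation. A small simulation reproduces the effect (the exact restricted values depend on how selection is done, so they will not match the slides' figures exactly):

```python
# Range restriction: start with true r = 0.7, then recompute r after
# selecting the top 15% and top 3% on the predictor. Simulated data.
import numpy as np

rng = np.random.default_rng(0)
n, r = 200_000, 0.7
x = rng.standard_normal(n)                               # predictor (e.g. ability test)
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)   # later outcome

print("full sample: r =", round(np.corrcoef(x, y)[0, 1], 2))
for top in (0.15, 0.03):
    keep = x >= np.quantile(x, 1 - top)
    print(f"top {top:.0%} selected: r = {np.corrcoef(x[keep], y[keep])[0, 1]:.2f}")
```

The correlation falls sharply as the selected group gets narrower, which is the deck's first reason for poor prediction; adding noise to the outcome measure (the second reason) would depress it further.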

  33. Conclusion: Judgements or tests? • Should we do both? (Profiles) • But how do we ensure that judgements and tests are independent? • How can judgements be kept free from bias? • Virtually impossible in high-stakes tests • Essential for formative work

  34. No easy solutions Thank you

  35. References • Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105. • Cooper, B. (1998). Using Bernstein and Bourdieu to understand children's difficulties with "realistic" mathematics testing: an exploratory study. Qualitative Studies in Education, 11(4), 511-532. • Eysenck, H. J. (2006). The Biological Basis of Personality. Transaction Publishers. • Harlen, W. (2005). Trusting teachers' judgement: research evidence of the reliability and validity of teachers' assessment used for summative purposes. Research Papers in Education, 20(3), 245-270. • Johnson, S., Hennessy, E., Smith, R., Trikic, R., Wolke, D., & Marlow, N. (2009). The EPICure Study: Academic attainment and special educational needs in extremely preterm children at 11 years. London: Nottingham/London/Warwick. • Koretz, D., Stecher, B. M., Klein, S. P., & McCaffrey, D. (1994). The Vermont Portfolio Assessment Program: findings and implications. Educational Measurement: Issues & Practice, 13, 5-16. • Tymms, P. (1997). Value-added Key Stage 1 to Key Stage 2. London: School Curriculum and Assessment Authority. • Tymms, P., Jones, P., Albone, S., & Henderson, B. (2009). The first seven years at school. Educational Assessment, Evaluation and Accountability, 21, 67-80. • Tymms, P., Merrell, C., Heron, T., Jones, P., Albone, S., & Henderson, B. (2008). The importance of districts. School Effectiveness and School Improvement, 19(3), 261-274. • Tymms, P., Merrell, C., & Jones, P. (2004). Using baseline assessment data to make international comparisons. British Educational Research Journal, 30(5), 673-689. • Willms, J. D. (1987). Differences between Scottish education authorities in their examination attainment. Oxford Review of Education, 13(2), 211-232.
