
The Efficacy of Student Evaluations of Teaching Effectiveness

This presentation examines the reliability and validity of student evaluations of teaching (SET) as a basis for decisions about faculty retention, promotion, and pay. It reviews challenges to the reliability, validity, data analysis, and conclusions drawn from student evaluations, discusses the variables that can influence SET scores, and offers best practices and recommendations for using student evaluations in faculty reviews.


Presentation Transcript


  1. The Efficacy of Student Evaluations of Teaching Effectiveness. Megan O’Neill, Assistant Professor, Department of Humanities. Presented to the Faculty Senate, October 25, 2018.

  2. Claim: “The current practice of using student evaluations as summative measures to determine decisions for retention, promotion, and pay for faculty members is improper and, depending on the circumstances, could be argued to be illegal.” (Hornstein, 2017)

  3. What are “Student Evaluations of Teaching” (SET)? History • 1970s- A formative measure of student opinions, used to give feedback to the individual instructor. • 1980s- Adopted by administrations to evaluate “teaching effectiveness” due to ease of data collection. • 1990s to Now- A few studies argue that there are small-to-moderate correlations between SET scores and student achievement; when those studies are reanalyzed to account for variations in sample size, little-to-no correlation remains (Uttl, White, & Gonzalez, 2017).

  4. Current Literature on SET • Challenges to Reliability • While inter-item consistency exists, agreement between students is no greater than chance (see the sketch below), indicating that students do not agree on what they are being asked to evaluate; the tool has no inter-rater reliability. • Patterns of reliability indicate that the tool provides information about students, not instructors (Clayson, 2018). • Challenges to Validity • The tool measures student satisfaction only. • By what valid criteria are students able to judge instructor knowledge, pedagogical practices, etc.?
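To make the reliability point concrete, here is a minimal Python sketch of what “agreement no greater than chance” means. The ratings and names are invented for illustration, not drawn from Clayson’s data; the calculation follows the chance-correction logic behind kappa-style agreement statistics.

```python
# Hypothetical data: compare observed pairwise rater agreement with the
# agreement expected if students rated at random from the same overall
# score distribution (the logic behind Cohen's/Fleiss' kappa).
from collections import Counter
from itertools import combinations

ratings = [4, 5, 3, 4, 2, 5, 1, 3, 4, 5]  # ten students rating one instructor

# Observed agreement: fraction of rater pairs giving the same score.
pairs = list(combinations(ratings, 2))
observed = sum(a == b for a, b in pairs) / len(pairs)

# Chance agreement: probability two random raters pick the same category.
n = len(ratings)
expected = sum((c / n) ** 2 for c in Counter(ratings).values())

kappa = (observed - expected) / (1 - expected)
print(f"observed={observed:.2f}, chance={expected:.2f}, kappa={kappa:.2f}")
# A kappa near zero means students agree with each other no better than chance.
```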

  5. Current Literature on SET • Challenges to Data Analysis • Statistical averages cannot be meaningfully computed from categories that differ in quality rather than in quantity or magnitude (Stark & Freishtat, 2014); a sketch follows this slide. • Challenges to Conclusions • Current data do not support the assumption that students learn more from higher-rated instructors (Uttl, White, & Gonzalez, 2017).
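The data-analysis challenge can be illustrated with a short Python sketch using invented ratings: two hypothetical courses share the same mean score while their distributions tell very different stories, which is why Stark and Freishtat recommend reporting distributions rather than averages.

```python
# Invented 1-5 ratings for two hypothetical courses.
from collections import Counter

course_a = [3, 3, 3, 3, 3, 3, 3, 3]  # everyone is lukewarm
course_b = [1, 1, 1, 1, 5, 5, 5, 5]  # polarized: half love it, half hate it

for name, ratings in [("A", course_a), ("B", course_b)]:
    mean = sum(ratings) / len(ratings)
    dist = dict(sorted(Counter(ratings).items()))
    print(f"Course {name}: mean={mean:.1f}, distribution={dist}, n={len(ratings)}")

# Both courses report mean=3.0, yet the teaching situations differ sharply.
# Reporting the distribution preserves the information the mean destroys.
```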

  6. What Variables Affect SET Scores? Research shows SET scores are affected by: • Discipline • Student interest and motivation • Class size • Class time (of day) • Classroom space • Gender • Race • Instructor accent (Uttl, White, & Gonzalez, 2017)

  7. Gender & Race In double-blind studies of online teaching, women and people of color are rated lower than white men for the same teaching (Wagner, Rieger, & Voorvelt, 2016). When instructor effectiveness is evaluated using holistic methods, the data show that less competent male instructors receive higher SET scores than more effective female instructors (Boring, 2017).

  8. Best Practices & Recommendations • Drop omnibus items about “overall teaching effectiveness” and “value of the course” from SET; they are misleading. • Do not average student ratings or compare averages; doing so is statistically unsound. Instead, report the distribution, the number of responders, and the response rate (see the sketch after this slide). • Understand the limitations of student comments: while students are the authority on their own experiences, they are usually not well suited to evaluate pedagogy. • Avoid comparing teaching in courses of different types, levels, sizes, functions, or disciplines. • Use teaching portfolios as part of the review process. • Use classroom observations as part of milestone reviews. • Reconsider the role of SET in tenure and promotion decisions; the current model rewards instructors for not challenging students or disciplinary teaching paradigms.
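As a hedged illustration of the reporting recommendation above, the following Python sketch summarizes a course’s SET results by distribution, responder count, and response rate instead of a mean. The function name and data are hypothetical, not part of the presentation.

```python
from collections import Counter

def summarize_set(ratings: list[int], enrolled: int) -> dict:
    """Report distribution, responder count, and response rate (no mean)."""
    dist = Counter(ratings)
    return {
        "distribution": {score: dist.get(score, 0) for score in range(1, 6)},
        "responders": len(ratings),
        "response_rate": len(ratings) / enrolled if enrolled else 0.0,
    }

# Hypothetical data: 12 responses from a class of 30 (a 40% response rate,
# itself a caution against over-interpreting the numbers).
print(summarize_set([5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 5, 1], enrolled=30))
# {'distribution': {1: 1, 2: 1, 3: 2, 4: 4, 5: 4},
#  'responders': 12, 'response_rate': 0.4}
```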

  9. Works Cited
Boring, A. (2017). Gender biases in student evaluations of teaching. Journal of Public Economics, 145, 27-41.
Clayson, D. (2018). Student evaluations of teaching and matters of reliability. Assessment and Evaluation in Higher Education, 43(4), 666-681.
Hornstein, H. (2017). Student evaluations of teaching are an inadequate assessment tool for evaluating faculty. Cogent Education, 4, 1-8.
Stark, P. B., & Freishtat, R. (2014). An evaluation of course evaluations. Retrieved from https://www.stat.berkeley.edu/~stark/Preprints/teachEval14.pdf
Uttl, B., White, C., & Gonzalez, D. (2017). Meta-analysis of faculty's teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42.
Wagner, N., Rieger, M., & Voorvelt, K. (2016). Gender, ethnicity and teaching evaluations: Evidence from mixed teaching teams. Economics of Education Review, 54, 79-94.
