
Panel

Student Evaluation of Teaching: What Do We Know? 2/26/10, Talley North Gallery. Co-sponsors: University Planning and Analysis, Evaluation of Teaching Committee, Office of Faculty Development.





Presentation Transcript


  1. Student Evaluation of Teaching: What Do We Know? 2/26/10, Talley North Gallery. Co-sponsors: University Planning and Analysis, Evaluation of Teaching Committee, Office of Faculty Development. Panel: Gary Roberson, EOT Committee Chair, Panel Facilitator; Karen Helm, Director, UPA; Paul Umbach, Leadership, Policy & Adult and Higher Education; Gerald Ponder, College of Education, Associate Dean

  2. EOT Committee: Laura Sremaniak, Ted Emigh, Donna Petherbridge, Maria Bonto-Kane, Mike Carter, John Ambrose, Karen Helm, Betsy Brown, Gary Roberson, Susan Miller-Cochran, Rick Lemaster, Natalie Ames, Paul Franzon, Afsaneh Rabiei, Christina Moss, James Bartlett, Phil Sannes

  3. Current Work of EOT Committee • Review of ClassEval instrument • Strategies for promotion of ClassEval

  4. Introduction of UPA Staff Karen Helm, Director; Kay Stewart Newman, Melissa House, Trey Standish, Lewis Carson, Nancy Whelchel

  5. Agenda Gary Roberson, facilitator: Introductions, agenda, and note card instructions. Karen Helm: What we know about ClassEval. Paul Umbach: What the research tells us about student evaluations. Gerald Ponder: • Other types of evaluation of instruction • How to respond to issues raised by student evaluations. All presenters: Q and A

  6. Note cards Pink cards: Questions for the panel. Please pass them to the outer aisle at any time during the presentation and discussion. Yellow cards: 1. Suggested revisions to ClassEval 2. Suggestions for improving student participation in evaluation of teaching

  7. Myths & Biases in Students’ Evaluations of Teaching: Paul D. UmbachAssociate ProfessorLeadership, Policy, and Adult and Higher Education

  8. Common myths • Students cannot consistently and accurately judge their instructor and instruction because they are immature, lack experience, and are capricious • Student ratings are based on nothing more than popularity, with friendly, humorous instructors getting the highest ratings • Harder courses requiring more effort are rated lower than easier courses • Students cannot make accurate judgments until they have distance from the course

  9. Common myths (continued) • Time and day of the course affect student ratings • Students cannot contribute meaningfully to instructional improvement • Gender of the student is related to ratings • Student ratings are unreliable and invalid Based on the following reviews: Abrami, Leventhal, and Perry (1982); Cohen (1980); Feldman (1977, 1978, 1987, 1989a, 1989b, 2007); Levinson-Rose and Menges (1981); Marsh (1984, 1987, 2007); Marsh and Dunkin (1992)

  10. In fact, most research suggests that students’ evaluations of teaching are: • Reliable and stable • Primarily a function of the instructor rather than the course that is taught • Relatively valid against a variety of indicators of effective teaching • Relatively unaffected by a variety of variables hypothesized as potential biases

  11. Bias in students’ evaluations of teaching • “In essence, the question is whether a condition or influence actually affects teachers and their instruction, which is accurately reflected in students’ evaluations (a case of nonbias), or whether in some way this condition or influence only affects students’ attitudes toward the course or students’ perceptions of instructors (and their teaching) such that the evaluations do not accurately reflect the instruction that students receive (a case of bias)” (Feldman, 2007, p. 96).

  12. In other words, “Bias exists when a student, teacher, or course characteristic affects the evaluations made, either positively or negatively, but is unrelated to any criteria of good teaching, such as increased learning” (Marsh, 2007, p. 350).

  13. Potential bias • Slightly higher ratings for… • Smaller classes (the relationship is nonlinear) • Teachers of upper-level courses • Teachers of higher rank • Elective courses • Courses in the student’s major • Courses of greater interest to the student • These patterns might not indicate bias

  14. Potential bias (continued) • Modest or small correlations between grades and evaluations • Usually between .10 and .30, whether the unit of analysis is the individual or the class (Feldman, 1976, 1977, 2007) • The association need not be bias • “Validity effect” • “Student characteristics effect” • Or it could be related to bias • Attributional bias and retributional bias • Grading leniency effect • Disciplinary differences
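To make the unit-of-analysis point concrete, here is a minimal sketch in Python (my illustration, not the panel's; the data are simulated and every effect size is made up) that computes the grade-evaluation correlation once per student and once per class:

```python
import numpy as np
import pandas as pd

# Simulate hypothetical student-level records: each row is one student's
# expected grade (0-4 scale) and overall instructor rating (1-5 scale).
rng = np.random.default_rng(0)
n_classes, n_students = 40, 30
frames = []
for c in range(n_classes):
    class_effect = rng.normal(0, 0.3)  # shared instructor/course quality
    grades = np.clip(rng.normal(3.0 + class_effect, 0.6, n_students), 0, 4)
    ratings = np.clip(
        rng.normal(3.8 + class_effect + 0.1 * (grades - 3.0), 0.7, n_students),
        1, 5)
    frames.append(pd.DataFrame({"class_id": c, "grade": grades,
                                "rating": ratings}))
df = pd.concat(frames, ignore_index=True)

# Individual as the unit of analysis: one point per student.
r_individual = df["grade"].corr(df["rating"])

# Class as the unit of analysis: one point per class (class means).
class_means = df.groupby("class_id")[["grade", "rating"]].mean()
r_class = class_means["grade"].corr(class_means["rating"])

print(f"individual-level r = {r_individual:.2f}")
print(f"class-level r      = {r_class:.2f}")
```

With these made-up parameters the class-level correlation comes out considerably higher than the student-level one; the two need not agree, which is why reviews such as Feldman's are careful to state the unit of analysis.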

  15. A comment on research on SETs and their use • Should SETs be multidimensional? • Flaws in some previous research • Formative and summative uses • Should personnel decisions rely on single global rating items, a single score representing a weighted average, or a profile of multiple components? • Should institutions offer normative comparisons? • Should they control for potential biases? • Should they construct a normative comparison group for similar courses? • Should we be concerned about possible non-response bias?

  16. [Diagram: the total survey error framework, from Groves, Couper, Lepkowski, Singer, & Tourangeau (2009, p. 49). On the measurement side, the path runs construct → measurement → response → edited response, accumulating validity, measurement error, and processing error along the way. On the representation side, it runs target population → sampling frame → sample → respondents → postsurvey adjustments, accumulating coverage, sampling, nonresponse, and adjustment error. The two sides come together in the survey statistic.]

  17. So Your Course Evaluations Aren’t So Hot… So What? And Now What?

  18. So What? • Means? Range/Variability? Course History? Item analysis? Before you worry too much about your evals, examine how yours compare with the department's and with the history of the course, and look at whether specific items can tell you anything. • Consultation/Mentoring About Evaluation Results Having a colleague help interpret the results and assign meaning to them helps. • And…?
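As a rough illustration of that kind of check (a sketch only: the file name, column names, and course identifier below are hypothetical, not part of ClassEval), one might compare a section's item means against the department and the course's own history:

```python
import pandas as pd

# Hypothetical export of item-level means: one row per course section,
# identifier columns plus one column per evaluation item.
evals = pd.read_csv("classeval_export.csv")  # hypothetical file

items = [c for c in evals.columns if c.startswith("item_")]

# Department-wide mean and spread for each item.
dept_stats = evals[items].agg(["mean", "std"])

# Course history: this instructor's course across past semesters.
mine = evals[evals["course"] == "ABC 101"]  # hypothetical course code
history = mine.groupby("semester")[items].mean()

# Item analysis: how far is the latest section from the department mean,
# in standard-deviation units? Large negative values flag items worth a
# closer look, ideally with a colleague or mentor.
latest = history.iloc[-1]
z = (latest - dept_stats.loc["mean"]) / dept_stats.loc["std"]
print(z.sort_values())
```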

  19. And Now What?

  20. And Now What? • Needs Assessment What do they know? How do I adjust? Who are they?

  21. And Now What? • How’s it going? Formative Assessment of Course @ 4 Weeks (Seldin, 1997) Peter Seldin has data showing that administering a course evaluation, or even simply asking students how things are going, at 4 weeks gives a good picture of what end-of-semester evaluations will look like. It is also soon enough to correct any big problems in the course, so ratings at the end will be improved.

  22. And Now What? • Teach in cycles

  23. And Now What? • Give shorter and more frequent tests/projects/performances Shorter and more frequent tests give more valid results and seem less daunting.
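One standard way to see why more, shorter assessments help (this framing is my addition rather than the slide's, and it speaks to reliability in particular) is the Spearman-Brown prophecy formula, sketched here with made-up numbers:

```python
# Spearman-Brown prophecy formula: reliability of the average of k
# parallel assessments, each with single-assessment reliability r.
# Illustrative only; the input values are hypothetical.
def composite_reliability(r: float, k: int) -> float:
    return k * r / (1 + (k - 1) * r)

r_single = 0.55  # hypothetical reliability of one short quiz
for k in (1, 3, 6, 10):
    print(f"{k:2d} short assessments -> composite reliability "
          f"{composite_reliability(r_single, k):.2f}")
```

Averaging over several short assessments also spreads the stakes around, which is the "less daunting" half of the slide's point.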

  24. And Now What? • “Not yet” formative feedback (minute papers, short quizzes, practice assignments, revision to mastery as a course goal) Providing students with formative feedback that does not count toward a grade increases learning and yields greater satisfaction and engagement in courses.

  25. And Now What? • Don’t forget Active Learning • Be “Fox-y” The “Dr. Fox” studies of some decades ago pointed out that course instruction and student perceptions benefit greatly from energy, enthusiasm, expressiveness, and apparent organization.

  26. Questions and Discussion

  27. Send comments to: teach_learn@ncsu.edu
