
Best Practices in Faculty Evaluation and Professional Enrichment


Presentation Transcript


  1. Best Practices in Faculty Evaluation and Professional Enrichment. A presentation for faculty and administrators at Sinclair Community College, October 21, 2011. Michael Theall, Ph.D.

  2. Basic Premise: Evaluation without development is punitive. Development without evaluation is guesswork.

  3. Basic Premise: Development and evaluation systems will not be complete until they are based on an understanding of the work that faculty are expected to do, and the skills that are required to do that work successfully!

  4. Basic Premise: ALL DEVELOPMENT and EVALUATION ARE LOCAL!

  5. [Image-only slide; no recoverable text.]

  6. 8 Steps to Develop a Comprehensive Faculty Evaluation System (Arreola, 2007). Determine:
  • the faculty role model
  • the faculty role model parameter values
  • the definitions of the roles
  • the role component weights
  • the appropriate sources of information
  • the source weights
  • the data gathering methods
  • the design & selection of forms
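In Arreola's model these weighting steps reduce to a doubly weighted average: each role carries a weight, each information source within a role carries a weight, and the weighted ratings roll up into one composite score. A minimal Python sketch with hypothetical roles, weights, and ratings (the real values are set locally, per slide 4):

    # A minimal sketch (hypothetical weights and ratings) of an Arreola-style
    # composite: role weights x source weights x ratings, rolled into one score.
    roles = {
        # role: (role_weight, {source: (source_weight, rating on a 1-5 scale)})
        "teaching":    (0.60, {"students": (0.50, 4.2), "peers": (0.30, 3.8), "self": (0.20, 4.5)}),
        "scholarship": (0.25, {"peers": (0.70, 3.5), "administrator": (0.30, 4.0)}),
        "service":     (0.15, {"administrator": (1.00, 4.1)}),
    }

    composite = 0.0
    for role_weight, sources in roles.values():
        role_score = sum(w * r for w, r in sources.values())  # source weights within a role sum to 1
        composite += role_weight * role_score                 # role weights sum to 1

    print(f"composite rating: {composite:.2f} / 5")  # -> composite rating: 4.01 / 5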

  7. Source – Impact Matrix (Arreola, 2007). [The matrix itself appeared as an image on this slide.]

  8. 8 Steps to Develop a Comprehensive Professional Enrichment System (Theall, 2008). Determine:
  • the needs for enrichment functions;
  • the principal clients of services;
  • the configuration & location of programs;
  • the allocation of resources;
  • the intended impact of programs;
  • the connections to other campus programs;
  • the leadership structure;
  • the processes to create & implement programs.

  9. Function – Client Matrix (Theall, 2008). [The matrix itself appeared as an image on this slide.]

  10. Sources of data:
  • Student Ratings
  • Peer Evaluation
  • Administrator Evaluation
  • Self-Evaluation
  • External Expert Evaluation
  • Alumni Ratings
  • Teaching Awards & SoTL
  • Media Documentation (videos)
  • Employer Opinions of Graduates
  • Student Learning

  11. Berk's "MULTISOURCE" evaluation array. [Diagram: the professor at the center, ringed by rating sources: colleagues, peers, self, dept. head, students, admin. asst., external evaluators, and others.]

  12. What information do stakeholders need? EVALUATION INFORMATION MATRIX: DEVELOPING A SYNERGY FOR IMPROVED PRACTICE.

  13. [The evaluation information matrix appeared as an image on this slide.]

  14. BASIC ISSUES affecting data decisions:
  • Reliability
  • Validity
  • Generalizability
  • Feasibility
  • Skulduggery

  15. Evaluation Purposes and Data. [Diagram: FORMATIVE (for improvement & revision) vs. SUMMATIVE (for making decisions about merit & worth), crossed with INSTRUMENTAL (process & activities) vs. CONSEQUENTIAL (outcomes & effects).]

  16. Research Findings:
  • Student ratings research
  • Research on other sources of evaluation data
  • Berk's 13 sources of evidence for evaluating teaching

  17. Ratings are:
  • Multidimensional
  • Reliable and stable
  • Primarily a function of the instructor
  • Relatively valid as evidence of effective teaching
  • Relatively unaffected by a number of variables posed as biases
  • Useful as teaching feedback
  (Marsh, 1987, 2007)

  18. Uses of Ratings Data:
  • IMPROVING INSTRUCTION
  • UNDERSTANDING CONTEXT & OUTCOMES
  • RESEARCH / SoTL
  • DECISION-MAKING

  19. SEVEN MYTHS ABOUT STUDENT RATINGS. Myth 1: Students are not qualified to make judgments about teaching competence. (1 = agree, 2 = disagree, 3 = unsure)

  20. SEVEN MYTHS ABOUT STUDENT RATINGS. Myth 1: Students are not qualified to make judgments about teaching competence. Response: Students are qualified to rate (report) certain dimensions of teaching.

  21. SEVEN MYTHS ABOUT STUDENT RATINGS. Myth 2: Student ratings are simply popularity contests. (1 = agree, 2 = disagree, 3 = unsure)

  22. SEVEN MYTHS ABOUT STUDENT RATINGS. Myth 2: Student ratings are simply popularity contests. Response: Students do discriminate among dimensions of teaching and do not judge solely on the personal popularity of instructors. (PS: popularity is not necessarily a bad thing.)

  23. SEVEN MYTHS ABOUT STUDENT RATINGS. Myth 3: Students are not able to make accurate judgments until after they have been away from the course for several years. (1 = agree, 2 = disagree, 3 = unsure)

  24. SEVEN MYTHS ABOUT STUDENT RATINGS. Myth 3: Students are not able to make accurate judgments until after they have been away from the course for several years. Response: Ratings by current students are highly correlated with their later ratings and with ratings of the same instructors by other students.

  25. SEVEN MYTHS ABOUT STUDENT RATINGS. Myth 4: Student ratings are unreliable. (1 = agree, 2 = disagree, 3 = unsure)

  26. SEVEN MYTHS ABOUT STUDENT RATINGS. Myth 4: Student ratings are unreliable. Response: Student ratings are reliable in terms of both agreement (similarity among students rating a course and the instructor) and stability (the extent to which the same student rates the course and the instructor similarly at two different times).
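A rough illustration of those two senses of reliability, using made-up ratings (not from the presentation; requires Python 3.10+ for statistics.correlation). Agreement is the spread among raters at one time; stability is the correlation between the same students' ratings on two occasions:

    from statistics import pstdev, correlation  # correlation requires Python 3.10+

    ratings_now   = [4, 5, 4, 4, 3, 4, 5, 4]  # one class, end of term (made-up data)
    ratings_later = [4, 5, 3, 4, 3, 4, 5, 4]  # same students, a year later (made-up data)

    # Agreement: a small spread means the raters largely concur.
    print(f"agreement: SD across raters = {pstdev(ratings_now):.2f}")       # -> 0.60
    # Stability: same students, two occasions, highly correlated ratings.
    print(f"stability: r = {correlation(ratings_now, ratings_later):.2f}")  # -> 0.88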

  27. SEVEN MYTHS ABOUT STUDENT RATINGS. Myth 5: Student ratings are invalid. (1 = agree, 2 = disagree, 3 = unsure)

  28. SEVEN MYTHS ABOUT STUDENT RATINGS. Myth 5: Student ratings are invalid. Response: Student ratings are valid, as measured against a number of criteria, particularly students' learning.

  29. SEVEN MYTHS ABOUT STUDENT RATINGS. Myth 6: Students rate instructors on the basis of the grades they receive. (1 = agree, 2 = disagree, 3 = unsure)

  30. SEVEN MYTHS ABOUT STUDENT RATINGS. Myth 6: Students rate instructors on the basis of the grades they receive. Response: Student ratings are not unduly influenced by the grades students receive or expect to receive.

  31. SEVEN MYTHS ABOUT STUDENT RATINGS. Myth 7: Extraneous variables and conditions affect student ratings. (1 = agree, 2 = disagree, 3 = unsure)

  32. SEVEN MYTHS ABOUT STUDENT RATINGS. Myth 7: Extraneous variables and conditions affect student ratings. Response: Student ratings are not unduly affected by such factors as student characteristics, course characteristics, and teacher characteristics.

  33. Other relationship findings:
  • Class size: slight negative (curvilinear)
  • Prior interest in subject: positive
  • Elective vs. required courses: more positive for electives
  • Expected grade: slight positive (r = .20)
  • Work/difficulty: slight positive (curvilinear)
  • Course level: slight positive for upper division & graduate
  • Rater anonymity: more positive if violated

  34. Other relationship findings:
  • Purpose of evaluation: more positive if manipulated
  • Instructor rank: none
  • Teacher/student gender: none
  • Teacher ethnicity/race: none
  • Research productivity: none
  • Student locus & performance attributions: none
  • Student personality: none

  35. Research on other sources of data:
  • Peer Evaluation: best for teacher knowledge, certain course or curricular issues, assessment issues, and currency/accuracy of content (especially when used along with student ratings). Peer ratings of classroom teaching itself are less reliable, and run higher on average, than student ratings.
  • Administrator Evaluation: necessary as part of the process, but shares the same problems as peer review of teaching (criteria, process, instruments, validation, etc.).

  36. Research on other sources of data:
  • Self-Evaluation: provides the most complete picture of teacher thinking and instructional decisions/practices, but is difficult to interpret and use reliably.
  • External Expert Evaluation: useful, but requires process cautions and careful use/interpretation; having a clear purpose is important.

  37. Research on other sources of data:
  • Alumni Ratings: can be useful, but generally mirror student ratings when the same instrument is used; can shed light on teaching in terms of content, process, or curricular issues for formative purposes.
  • Media Documentation (videos): excellent for formative purposes; unambiguous if used carefully to assess low-inference behaviors; guidelines are needed for use by anyone beyond the teacher.

  38. Research on other sources of data:
  • Teaching Awards & SoTL: awards are unreliable due to a lack of standard criteria and decision processes; SoTL is valid and important IF recognized within the department/college/university.
  • Employer Opinions of Graduates: limited use; better for program evaluation and curricular issues.

  39. Research on other sources of data:
  • Student Learning Outcomes: useful for formative (individual) or program purposes (if aggregated for assessment); not recommended for summative decisions. Test scores are not as reliable as ratings from a validated instrument, and criteria vary considerably (e.g., what does "All her students got A's" mean?).

  40. Statistical analysis possibilities for reports of results:
  • Descriptive statistics (item distributions in # and %)
  • Central tendency: mean, mode, median. Examples (each data set followed by its mean, mode, median):
      1 2 3 3 4 5 -> (3, 3, 3)
      1 1 2 3 4 5 5 -> (3, 1 & 5, 3)
      1 2 3 4 5 5 -> (3.33, 5, 3.5)
      1 2 3 4 5 5 5 -> (3.57, 5, 4)
  • Standard deviations (sampling error)
  • Enrolled/responded counts and their ratio
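The parenthesized triples above are (mean, mode, median) for each data set; a quick check with Python's standard library:

    from statistics import mean, median, multimode

    datasets = [
        [1, 2, 3, 3, 4, 5],     # -> mean 3.00, mode 3,      median 3.0
        [1, 1, 2, 3, 4, 5, 5],  # -> mean 3.00, modes 1 & 5, median 3
        [1, 2, 3, 4, 5, 5],     # -> mean 3.33, mode 5,      median 3.5
        [1, 2, 3, 4, 5, 5, 5],  # -> mean 3.57, mode 5,      median 4
    ]
    for data in datasets:
        print(f"mean={mean(data):.2f}  modes={multimode(data)}  median={median(data)}")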

  41. SAMPLING CRITERIA
  Class size   Minimum acceptable response*
  5-20         at least 80%
  20-30        at least 75%
  30-50        at least 60% (75% or more recommended)
  50-100       at least 50% (66% or more recommended)
  >100         more than 50%
  *Provided there is no systematic reason for absence or non-response that might bias results.
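In code, the table is a simple threshold lookup. A sketch (the helper names are hypothetical) that flags whether a class's response sample is adequate:

    def minimum_response_rate(class_size: int) -> float:
        """Minimum acceptable response proportion from the sampling-criteria table."""
        if class_size <= 20:
            return 0.80
        if class_size <= 30:
            return 0.75
        if class_size <= 50:
            return 0.60   # 75% or more recommended
        if class_size <= 100:
            return 0.50   # 66% or more recommended
        return 0.50       # >100: "more than 50%"

    def sample_adequate(enrolled: int, responded: int) -> bool:
        return responded / enrolled >= minimum_response_rate(enrolled)

    print(sample_adequate(120, 63))  # True: 52.5% clears the >100-student threshold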

  42. INSTRUCTIONAL REPORT of EDUCATIONAL SATISFACTION (I.R.E.S.), Universitas pro Omnibus Discipuli et Facultitas in Excelcis.
  Instructor: U.N. Fortunate. Course #: HIS123. Course name: History of Everything. Term/year: Spring, 1994.
                     A    B    C    D    E    F    O
  amount learned     3   16   46   21   14    0    1
  overall teacher    1   12   40   29   18    0    0
  overall course     2    8   49   20   11    0    0
  Note: (A) = 5 = Best; (F) = 6 = Worst ... Enrolled: 120; Responded: 53

  43. INSTRUCTIONAL REPORT of EDUCATIONAL SATISFACTION (I.R.E.S.), Universitas pro Omnibus Discipuli et Facultitas in Excelcis.
  Instructor: U.N. Fortunate. Course #: HIS123. Course name: History of Everything. Term/year: Spring, 1994.
  (% / # responses)    A      B      C      D      E     F    O    mean  s.d.  T   group
  amount learned      3/2   16/10  46/29  21/13  14/9   0/0  1/1   2.64  0.88  27  low
  overall teacher     1/1   12/8   40/25  29/18  18/11  0/0  0/0   2.43  0.96  24  low
  overall course      2/1   18/11  49/31  20/13  11/7   0/0  0/0   2.81  0.93  33  low
  Raw score: (A) = 5 = Best; (E) = 1 = Worst; F = not applicable; O = omitted. Enrolled = 120; Responded = 63 (sample adequate).
  T-score: standardized score with mean 50 and standard deviation 10, so 40-60 is within one standard deviation of the mean.
  Group score: 0-10% = low; 10-30% = low middle; 30-70% = middle; 70-90% = high middle; 90-100% = high.
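The T column is a standard linear transform onto a scale with mean 50 and SD 10, computed against a comparison group. A sketch with hypothetical comparison-group norms (the report's actual norms are not shown), chosen here so the "amount learned" row reproduces:

    def t_score(raw: float, group_mean: float, group_sd: float) -> float:
        """Standardize a raw mean onto the T scale (mean 50, SD 10)."""
        return 50 + 10 * (raw - group_mean) / group_sd

    # Hypothetical norms: comparison-group mean 4.0, SD 0.6.
    print(round(t_score(2.64, 4.0, 0.6)))  # -> 27, the 'amount learned' T above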

  44. Two evaluations of HIS 345.
  Spring, 1995 (instr = U.N. Fortunate; course = HIS 345; resp/enr = 29/61; % resp = 48):
                     mean  s.d.  T   group
  amount learned     3.35  0.87  45  low-mid
  overall teacher    2.76  0.76  35  low
  overall course     2.85  0.90  37  low
  Fall, 1995 (instr = U.N. Fortunate; course = HIS 345; resp/enr = 20/42; % resp = 48):
                     mean  s.d.  T   group
  amount learned     3.97  1.40  56  hi-mid
  overall teacher    3.57  1.30  47  mid
  overall course     3.63  1.24  50  mid

  45. Enrollment profiles for HIS 345 in two semesters.
  Spring, 1995 (instr = U.N. Fortunate; course = HIS 345; resp/enr = 29/51; % resp = 57):
                      Fr  So  Jr  Sr  Tot
  original enr.        6  17  15  23   61
  final enr.           5  14  12  20   51
  eval respondents     5  13  11   0   29
  Fall, 1995 (instr = U.N. Fortunate; course = HIS 345; resp/enr = 20/29; % resp = 69):
                      Fr  So  Jr  Sr  Tot
  original enr.        3  11  12  16   42
  final enr.           2   7   8  12   29
  eval respondents     2   4   5   9   20

  46. Two evaluations of HIS 345 (the slide 44 table, shown again for comparison with the enrollment profiles above).

  47. Graphic display of 95% confidence intervals for individuals vs. comparison groups. [Chart: a 1-5 rating scale with personal, department, and institutional ranges plotted as intervals for Teacher A and Teacher B.]
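A minimal sketch of the interval behind such a display: the 95% confidence interval for a mean rating, given its standard deviation and the number of raters (all figures here are hypothetical). Overlapping intervals are the argument against fine-grained ranking of teachers:

    from math import sqrt

    def ci95(mean: float, sd: float, n: int) -> tuple[float, float]:
        """95% confidence interval for a mean rating: mean +/- 1.96 * standard error."""
        se = sd / sqrt(n)
        return (round(mean - 1.96 * se, 2), round(mean + 1.96 * se, 2))

    print(ci95(3.6, 0.9, 25))  # Teacher A -> (3.25, 3.95)
    print(ci95(3.9, 1.1, 30))  # Teacher B -> (3.51, 4.29); overlaps Teacher A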

  48. Guidelines for Good Evaluation Practice. Guideline #1 (do your homework):
  • Establish the purpose of the evaluation and the uses and users of ratings beforehand;
  • Include all stakeholders in decisions about evaluation process and policy;
  • Keep a balance between individual and institutional needs in mind;
  • Build a real "system" for evaluation, not a haphazard and unsystematic process.

  49. Guidelines for Good Evaluation Practice. Guideline #2 (establish protection for all):
  • Publicly present clear information about the evaluation criteria, process, and procedures;
  • Establish a legally defensible process and a system for grievances;
  • Establish clear lines of responsibility/reporting for those who administer the system;
  • Produce reports that can be easily and accurately understood.

  50. Guidelines for Good Evaluation Practice. Guideline #3 (make it positive, not punitive):
  • Absolutely include resources for improvement and support of teaching and teachers;
  • Educate the users of ratings results to avoid misuse and misinterpretation;
  • Keep formative evaluation confidential and separate from summative decision making;
  • In summative decisions, compare teachers on the basis of data from similar situations;
  • Consider the appropriate use of evaluation data for assessment and other purposes.
