
Growth, Value-Added and Teacher Effectiveness Measures






Presentation Transcript


  1. Growth, Value-Added and Teacher Effectiveness Measures Philip R. Fletcher Senior Research Scientist Pearson

  2. Teacher opinion • A recent international survey of teachers shows: • --That the vast majority of teachers welcome appraisal and feedback on their work. • --That it improves their job satisfaction and effectiveness as teachers. • --But too many teachers do not receive any feedback on their work at all. • --Moreover, evaluation is perceived to be an instrument of compliance rather than development.

  3. Teacher ratings • Most school districts use pass-fail ratings where nearly all teachers pass. • 99% of teachers in districts using binary ratings are rated satisfactory. • 94% of teachers in districts using multiple points are in the top two categories. • As Arne Duncan noted, “Ninety-nine percent of our teachers are above average.”

  4. Teacher salaries • Teacher compensation is very predictable. • Based on the teacher’s highest degree and years of seniority. • Almost completely unrelated to variations in teacher effectiveness.

  5. Effectiveness varies • Anecdotal and empirical evidence suggests that teachers differ dramatically in effectiveness. • An effective teacher will raise student test scores by ten percentiles per year. • Three years of effective teaching can raise test scores by thirty percentiles. • Traditional teacher evaluation systems fail to recognize these differences.

  6. Teacher recognition • The need to recognize teachers who make magnificent contributions to student learning. • The need to motivate people to gain expertise. • And the need to leverage expert teachers and reward them for their efforts. • To ensure that students are taught successfully, there is a need to differentiate teacher effectiveness in terms of impact on student learning.

  7. Status, growth and effectiveness • Student achievement is the status of accumulated subject matter knowledge at one point in time—a lagging indicator. • Student learning is growth in subject matter knowledge over time—a leading indicator. • It is student learning—not student achievement—that is most relevant in defining and assessing teaching effectiveness.

  8. Status, growth and effectiveness • Achievement provides evidence of the status of student knowledge and understanding at one point in time. • Learning is demonstrated by growth in student achievement from one point in time to another–not by status at either point in time alone. • Effectiveness is demonstrated by above-average student learning and growth.

  9. Status, growth and effectiveness • Schematically: • Status = Achievement • Growth = Learning • Relative Growth = Effectiveness
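
The schematic above can be made concrete with a small numeric sketch (all scores hypothetical):

```python
# Hypothetical test scores for five students at two points in time.
fall   = [410, 430, 450, 470, 490]   # status in the fall (achievement)
spring = [440, 450, 485, 480, 530]   # status in the spring

# Growth = learning: the change in achievement between the two points.
growth = [s - f for f, s in zip(fall, spring)]          # [30, 20, 35, 10, 40]

# Relative growth = effectiveness: each student's growth compared
# with the average growth of the group.
mean_growth = sum(growth) / len(growth)                 # 27.0
relative = [round(g - mean_growth, 1) for g in growth]
```

A student with relative growth above zero grew faster than the group average; the same logic, aggregated to a classroom, underlies the effectiveness measures discussed in the following slides.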

  10. Status and growth

  11. Relative growth and effectiveness

  12. Why growth? • Growth reflects learning, and we care about student learning. • The principal role of teachers is to enhance student learning. • Teacher effectiveness should therefore be reflected in how much their students learn.

  13. Official incentives • Teacher Incentive Fund (TIF) grants require school districts to evaluate teachers. • Race to the Top (RttT) funds require a state commitment to measuring teacher effectiveness. • No Child Left Behind (NCLB) required testing of all students in reading and mathematics, leading to the development of longitudinal data systems linked to individual teachers.

  14. Student testing • Most states have test data linked to specific schools and teachers that can be used to track student growth. • Many assessment systems are based on student test score growth over time: • Value-added models • Student growth percentiles • Both address effectiveness in terms of learning rather than status.

  15. Value-added assessment • Value-added models are designed to assess school and teacher contributions to student growth. • A value-added assessment model is designed to demonstrate the impact of individual schools and teachers. • It is designed to distinguish between teacher effects and other outside influences.

  16. Value-added assessment • Value-added captures the growth that classes of students achieve during a single year of schooling. • To estimate classroom effects, student data include only the students enrolled in a particular class.

  17. Value-added assessment • Key idea is to statistically isolate the contribution of individual teachers from all other sources of influence. • Value-added analyses attempt to determine the amount of student growth that can be attributed to an individual teacher. • Value-added models quantify teacher effectiveness—the teacher’s contribution to student learning and growth.
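
A much-simplified residual-based sketch of the key idea above, not any operational value-added model; the data, class sizes, and teacher effects are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 3 classrooms x 20 students.  True teacher effects
# of +5, 0 and -5 scale-score points are assumed for illustration.
n, true_effect = 20, {"A": 5.0, "B": 0.0, "C": -5.0}
rows = []
for teacher, effect in true_effect.items():
    prior = rng.normal(450, 30, n)                       # prior-year scores
    current = 50 + 0.9 * prior + effect + rng.normal(0, 8, n)
    rows += [(teacher, p, c) for p, c in zip(prior, current)]

teachers = np.array([r[0] for r in rows])
prior = np.array([r[1] for r in rows])
current = np.array([r[2] for r in rows])

# Step 1: predict current scores from prior scores (OLS with intercept),
# i.e. the expected score given where each student started.
X = np.column_stack([np.ones_like(prior), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
residual = current - X @ beta

# Step 2: a teacher's value-added estimate is the mean residual of that
# teacher's students -- growth beyond what prior scores predict.
value_added = {t: residual[teachers == t].mean() for t in true_effect}
```

Operational models add many refinements (multiple prior years, student covariates, shrinkage toward the mean), but the residual-averaging logic is the same: isolate the growth that prior achievement does not explain and attribute the classroom-level remainder to the teacher.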

  18. Value-added assessment • Value-added attributes causality to the teacher. • Teachers are held responsible for the learning and growth of their students. • Under conditions of high-stakes accountability, attention to student growth has shifted toward questions of cause and responsibility.

  19. Value-added assessment • Some statisticians would argue that value-added models are unsuited for drawing the causal inference that a given teacher is responsible for an increase in student test scores. • “We do not think that their analyses are estimating causal quantities, except under extreme and unrealistic assumptions.” –Rubin, Stuart, and Zanutto (2004). • “…it does not appear possible to separate teacher and school effects using currently available accountability data.” –Raudenbush (2004).

  20. Value-added assessment • Policymakers and school administrators generally express no such reservations and offer strong support for value-added measures. • “If quality instruction is essential for student learning, then student learning should tell us something about the quality of instruction.”

  21. Descriptive accountability • Accountability system results may have value without making causal inferences. • From this perspective, accountability results should not be used to sanction teachers or schools. • Instead, they should be used to make sound judgments about quality and needed improvements. • They provide descriptive information and identify schools, teachers, and students that may require further attention.

  22. Describing student growth • The Colorado Growth Model was designed to describe student growth and learning. • Quantile regression is used to model the complete distribution of student achievement over time. • The model quantifies distance = growth rate × time, probabilistically. • Growth percentiles describe the rarity of a student’s current growth, given their prior achievement.
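
A toy illustration of the growth-percentile idea: condition on prior achievement, then rank the student's current score among conditional peers. This is a deliberately crude stand-in (a fixed ±10-point peer window, invented scores) for the quantile-regression machinery the Colorado Growth Model actually uses:

```python
from bisect import bisect_left

def growth_percentile(prior, current, peers):
    """Percentile of `current` among peers with similar prior achievement.

    `peers` is a list of (prior_score, current_score) pairs.  Conditioning
    on prior achievement is approximated by keeping only peers whose prior
    score is within +/- 10 points, then ranking `current` among them.
    """
    group = sorted(c for p, c in peers if abs(p - prior) <= 10)
    if not group:
        return None
    rank = bisect_left(group, current)      # peers scoring below `current`
    return round(100 * rank / len(group))

# Hypothetical peer data: (prior score, current score) for five students.
peers = [(450, 440), (455, 470), (448, 455), (452, 460), (460, 490)]
sgp = growth_percentile(450, 465, peers)    # ranks 465 among conditional peers
```

Here a current score of 465 exceeds three of the five conditional peers, so the student's growth percentile is 60: typical-to-high growth given where the student started.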

  23. Student growth percentiles

  24. Student growth percentiles • Examining growth together with achievement sheds new light on school performance. • Median growth above the 50th percentile identifies best practices and sources that can offer support. • Median growth below the 50th percentile identifies the greatest needs and targets that should receive support. • A gap-closing strategy is built around a consensus on school improvement.

  25. Student growth percentiles

  26. Common yardstick • Most states have administrative data that can be used as a common yardstick to identify the 25% most effective teachers. • Supervisor ratings and classroom observations provide no such common yardstick. • Local implementation of these other measures varies across 1,600 school districts nationwide. • More importantly, they do not directly reflect student learning.

  27. Value-added and growth limitations • Value-added and growth percentiles are only available for teachers in certain subject matter areas. • Value-added and growth percentiles are available for only a small subset of teachers. • Value-added and growth percentiles are limited by the test. • Growth metrics are too narrow to provide information about how teachers can improve.

  28. Value-added and growth shortcomings • Value-added metrics and growth percentiles for individual teachers fluctuate from year to year. • They can be influenced by factors beyond the teacher’s control. • They are imperfect measures with a relatively large error component.

  29. Concern • How well does value-added predict the top 25% from year-to-year? • How well do alternative measures of teacher effectiveness predict the same top 25% from year to year? • Classroom observations? • Principals’ ratings? • Student surveys?

  30. Value-added and growth compare favorably • Value-added metrics and growth percentiles compare favorably with performance measures in other fields. • The correlation between SAT test scores and freshman success in college is 0.35. • The correlation in batting averages between years in professional baseball is 0.36. • The correlation between value-added estimates this year and next lies between 0.20 and 0.60. • Most value-added estimates correlate between 0.30 and 0.40 across years.

  31. Value-added and growth prognosis • We recommend the use of value-added measures and growth percentiles, principally because they are related to student learning and growth. • We are mindful of their limitations and imperfections. • We strive to continually improve these growth measures.

  32. Suggestion • Use multiple measures—not only value-added metrics and growth percentiles. • Alternate measures should meaningfully supplement state test score data and increase prediction. • Alternate measures should be applicable to a broader range of teachers. • Provide direct information and feedback suggesting how teachers can improve teaching.

  33. Suggestion • Use core and non-core measures to validate the full range of teacher effectiveness for a broader range of teachers. • Growth measures serve as the benchmark for the reliability of other teacher effectiveness measures. • The key idea is to predict the benchmark growth measures. • Weight different measures based on their power to predict student learning and growth.
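
One simple way to implement "weight measures by their power to predict growth" is to regress the benchmark growth measure on the other measures and use the fitted coefficients as weights. A sketch on simulated data (the measure names, coefficients, and sample size are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # hypothetical teachers

# Hypothetical standardized alternate measures for each teacher.
observation = rng.normal(size=n)   # classroom-observation score
survey = rng.normal(size=n)        # student-survey score

# Benchmark growth measure: assume observations predict it more strongly
# than surveys (true coefficients 0.5 and 0.2 are made up).
growth = 0.5 * observation + 0.2 * survey + rng.normal(scale=0.8, size=n)

# Regress the benchmark on the alternate measures; the fitted
# coefficients, normalized, become the relative weights.
X = np.column_stack([np.ones(n), observation, survey])
coef, *_ = np.linalg.lstsq(X, growth, rcond=None)
weights = coef[1:] / coef[1:].sum()   # normalized relative weights
```

Because observation scores were simulated to predict growth more strongly, they receive the larger weight; the same regression applied to real data would let each measure earn its weight empirically rather than by fiat.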

  34. Observational measures • What is needed is not so much an accounting of teacher time or a rating of teacher performance, but rather higher level inferences about the teacher’s ultimate purposes and effects. • Making holistic judgments requires higher levels of inference. • In short, we need a method to obtain holistic rankings reliably and validly. • Procedures must minimize rater effects and coding errors.

  35. Classroom Interactions • A complex situation, difficult to characterize unassisted. • Teacher practice and student-teacher interactions—from the participants’ point of view. • How do students and teachers interact in a practical and personal sort of way? • How do they approach and solve problems together? • Are there different classroom profiles?

  36. Concourse of meaning • The first challenge is to figure out what makes great teaching. • This is difficult and controversial from an educational perspective. • Yet relatively straightforward from a managerial perspective. • Find the best educators and give them an opportunity to debate and create the best pedagogy and teaching practice.

  37. Danielson Framework • Charlotte Danielson’s Framework serves as a source of statements about teacher effectiveness. • The Framework is divided into: • --4 Domains • --23 Components • --76 Elements • --304 Items

  38. Danielson Framework • The 4 Domains include: • --Planning and Preparation • --The Classroom Environment • --Instruction • --Professional Responsibilities

  39. Danielson Framework • The 2 Domains that students actually see: • --The Classroom Environment • --Instruction

  40. Danielson Framework • Scoring rubrics:

  41. Danielson Framework • Items:

  42. Danielson Framework • The Danielson Framework is prescriptive. • Unsatisfactory and basic performance are often just the negation of proficient and distinguished performance. • It offers no guide to what teachers do when under stress. • It assumes good behavior follows rules, lacking insight from control theory and negative feedback. • “Students help set high standards.”

  43. Danielson Framework • A good basis for a limited number of items. • These items can be readily supplemented with items from other sources, by other authors. • Use these sources and create new items to fully cover what students and teachers actually do.

  44. Growth, value-added and teacher effectiveness measures
