
Human Capital Policies in Education: Further Research on Teachers and Principals

5th Annual CALDER Conference, January 27, 2012.





Presentation Transcript


  1. Human Capital Policies in Education: Further Research on Teachers and Principals 5th Annual CALDER Conference January 27th, 2012

  2. ASSESSING TEACHER PREPARATION IN WASHINGTON STATE BASED ON STUDENT ACHIEVEMENT* Dan Goldhaber, Stephanie Liddle, & Roddy Theobald, Center for Education Data and Research, University of Washington Bothell. The research presented here utilizes confidential data from the Washington State Office of the Superintendent of Public Instruction (OSPI). We gratefully acknowledge the receipt of these data. We wish to thank the Carnegie Corporation of New York for support of this research. This paper has benefitted from helpful comments from Joe Koski, John Krieg, Jim Wyckoff, & Dale Ballou. We thank Jordan Chamberlain for editorial assistance, and Margit McGuire for thoughtful feedback. The views expressed in this paper do not necessarily reflect those of UW Bothell, Washington State, or the study’s sponsor. Responsibility for any and all errors rests solely with the authors. CALDER Conference, January 27th, 2012. www.cedr.us

  3. Context • “Under the existing system of quality control, too many weak programs have achieved state approval and been granted accreditation” (Levine, 2006, p. 61) • U.S. DOE is debating reporting requirements that would include value-added assessments of teacher preparation programs’ graduates (Sawchuk, 2012) • All 2010 RttT grantees committed to public disclosure of student-achievement outcomes of program graduates • Recent research (e.g., Boyd et al., 2009) has used administrative data to study teacher training and, in some cases, to rank institutions (Noell et al., 2008)

  4. The Questions… and (Quick) Answers • How much of the variation in teacher effectiveness is associated with different teacher training programs in Washington State? • Less than 1% in both reading and math • Do training effects decay over time? • Yes—a bit faster for reading than math • What are the magnitudes of differences in student achievement effects associated with initial training programs? • The majority of programs produce teachers who cannot be distinguished from one another, but there are some educationally meaningful differences in the effectiveness of teachers who received training from different programs • Do we see evidence of institutional change or specialization? • Yes—some evidence of institutional change • No—little evidence of specialization (geographic or by student subgroups)

  5. Caveats • It’s not necessarily appropriate to consider program effects to be an indicator of the value of training: • Selection of teacher candidates into programs • May not matter for program accountability • Teacher candidates who graduate from different training programs may be selected into school districts, schools, or classrooms that are systematically different from each other in ways that are not accounted for by statistical models • But district or school fixed effects may (Mihaly et al., 2011) or may not (Jim Wyckoff) be appropriate • Precision of training program estimates is contingent on sample size; we also have to worry about non-random attrition from the teacher labor market

  6. Data and Sample • Information on teachers and students is derived from five administrative databases prepared by Washington State’s Office of the Superintendent of Public Instruction (OSPI) • Sample includes 8,732 elementary (4th, 5th, & a few 6th grade) teachers who received a credential at any time between 1959 and 2011 and were teaching in Washington during the 2006-07 to 2009-10 school years • These teachers are linked (mainly through proctors) to 293,994 students (391,922 student-years) who have valid WASL scores in both reading and math for at least two consecutive years
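
A minimal sketch of how such a linked analysis file might be assembled. The file and column names below are hypothetical stand-ins, not the actual OSPI schemas:

```python
import pandas as pd

# Hypothetical inputs standing in for the OSPI administrative databases.
teachers = pd.read_csv("teachers.csv")     # teacher_id, credential_year, program, experience
proctors = pd.read_csv("proctors.csv")     # teacher_id, student_id, year (proctor-based link)
scores   = pd.read_csv("wasl_scores.csv")  # student_id, year, math, reading, frl, ell, sped

# Elementary teachers credentialed between 1959 and 2011.
teachers = teachers[teachers["credential_year"].between(1959, 2011)]

# Require valid WASL math and reading scores in two consecutive years so a
# prior-year score is available for the value-added models.
scores = scores.dropna(subset=["math", "reading"])
prior = scores[["student_id", "year", "math", "reading"]].rename(
    columns={"math": "math_prior", "reading": "reading_prior"})
prior["year"] = prior["year"] + 1          # align prior-year scores with the current year
gains = scores.merge(prior, on=["student_id", "year"], how="inner")

# Link students to teachers (mainly through test proctors) and attach teacher data.
analysis = (gains.merge(proctors, on=["student_id", "year"], how="inner")
                 .merge(teachers, on="teacher_id", how="inner"))
print(analysis.shape)                      # one row per student-year
```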

  7. Analytic Approach • We estimate training program effects in two stages • Stage 1: estimate VAMs designed to net out student background factors from student gains and use these to obtain teacher-classroom-year effect estimates • Teacher-classroom-year effects are shrunk toward the mean using Empirical Bayes methods • Stage 2: model stage 1 teacher-classroom-year effects as a function of teacher credentials, including training program, and district/school covariates • Standard errors are corrected by clustering at the program level • In some specifications we include district/school fixed effects in stage 2 • An innovation of the model is that we allow the effects of teacher training to decay exponentially with a teacher’s time in the labor market
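
A compact sketch of the two-stage procedure, reusing the hypothetical analysis file from the previous sketch. The covariates, shrinkage formula, and grid search over the decay rate are simplified illustrations, not the authors' exact specification:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# --- Stage 1: value-added model netting student background out of achievement;
# the mean classroom residual serves as a raw teacher-classroom-year effect.
stage1 = smf.ols("math ~ math_prior + frl + ell + sped", data=analysis).fit()
analysis["resid"] = stage1.resid
tcy = (analysis.groupby(["teacher_id", "year"])["resid"]
               .agg(effect="mean", n="size").reset_index())

# Empirical Bayes shrinkage toward the mean: rough plug-in estimates of the
# noise variance and the between-teacher variance set the shrinkage weight.
sigma2_e = stage1.resid.var()
tau2 = max(tcy["effect"].var() - sigma2_e / tcy["n"].mean(), 1e-6)
tcy["eb_effect"] = tcy["effect"] * tau2 / (tau2 + sigma2_e / tcy["n"])

# Attach training program and experience for stage 2.
tcy = tcy.merge(analysis[["teacher_id", "year", "program", "experience"]]
                .drop_duplicates(), on=["teacher_id", "year"])

# --- Stage 2: regress shrunken effects on program indicators scaled by
# exp(-delta * experience), with standard errors clustered at the program level.
# One program is the omitted reference category (the paper uses teachers
# trained out of state as the referent).
def fit_stage2(delta):
    X = pd.get_dummies(tcy["program"], prefix="prog", drop_first=True, dtype=float)
    X = sm.add_constant(X.mul(np.exp(-delta * tcy["experience"]), axis=0))
    return sm.OLS(tcy["eb_effect"], X).fit(
        cov_type="cluster", cov_kwds={"groups": tcy["program"]})

# Profile the decay rate over a grid and keep the best-fitting value.
best = min((fit_stage2(d) for d in np.linspace(0.0, 0.5, 51)), key=lambda r: r.ssr)
print(best.params.filter(like="prog_"))    # program effect estimates
```

In some specifications the paper also adds district or school fixed effects at this stage; those are omitted here for brevity.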

  8. Program Estimates: MATH • The difference between the average program and the top program is 4% of a standard deviation of student performance • The difference between the top program and the bottom program is 10% of a standard deviation of student performance

  9. Program Estimates: READING • The difference between the average program and the top program is 9% of a standard deviation of student performance • The difference between the top program and the bottom program is 16% of a standard deviation of student performance

  10. Findings (1) • Training program effects do decay over time • The half-life of program effects varies by specification (9-50 years) • The half-life in reading is smaller for each specification • Individual training program estimates are directionally robust to model specification • Programs graduating teachers who are effective in math also tend to graduate teachers who are effective in reading (r = 0.4) • Training program indicators explain only a small percentage of the variation in teacher effectiveness • The effect size of a one standard deviation change in program effectiveness is roughly a quarter of the effect size of a one standard deviation change in teacher effectiveness • There is much more variation within than between programs, but even the small percentage explained is comparable to the percentage explained by teachers’ experience and degree level
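
To connect the decay specification to the reported half-lives: if a program effect is scaled by exp(-δt) after t years in the labor market, it falls to half its initial size at t½ = ln 2 / δ. The reported 9-to-50-year range therefore corresponds to yearly decay rates of roughly 0.077 down to 0.014 (back-of-the-envelope values, not estimates reported in the paper):

```latex
e^{-\delta t_{1/2}} = \tfrac{1}{2}
\;\Longrightarrow\;
t_{1/2} = \frac{\ln 2}{\delta},
\qquad
\delta \approx 0.077 \Rightarrow t_{1/2} \approx 9 \text{ years},
\qquad
\delta \approx 0.014 \Rightarrow t_{1/2} \approx 50 \text{ years}.
```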

  11. Findings (2) • Differences between program indicators are educationally meaningful • In math, the largest differences (0.10) are roughly twice the difference in: • Student achievement explained by limited English proficiency (0.06) • Productivity gains associated with early career experience (0.06) • In reading, the largest differences (0.16) are roughly two or three times the difference in: • Student achievement explained by student poverty status (0.08) • Productivity gains associated with early career experience (0.06) • There is not much evidence of programs specializing, either geographically or in the students they serve • There is evidence that teachers trained in Washington State within the last five to ten years are relatively more effective than those credentialed in-state prior to 2000, at least as compared to teachers trained out of state

  12. Assessing Program Specialization: Proximity to Training Institution • Teachers teaching in districts that are close to where they received their initial credential are not found to be differentially effective relative to teachers who teach in districts further from their training program.

  13. Assessing Institutional Change • Recent cohorts of in-state trained teachers are relatively more effective than prior cohorts (both measured relative to teachers credentialed by OSPI) • Some program indicator estimates have changed substantially over time for both math and reading • All programs are measured relative to teachers who received training out of state, so we cannot say whether the findings reflect the effectiveness of in-state credentialing or a change in the quality of teachers coming in from out of state

  14. Summary/Conclusions/Feedback • Program indicators do provide some information about teacher effectiveness • Magnitudes of effects are sensitive to specification • The study is a starting point for further conversation and research • Admissions requirements, sequences of courses, different strands of teacher education programs, student teaching, etc. • We are modeling decay toward out-of-state teachers (close to the mean teacher); can we better capture acculturation (decay toward localized peers)?

  15. Backup Slides

  16. Comparison Across Model Specifications: MATH

  17. Comparison Across Model Specifications: READING

  18. Tension with School FE model • [Figure: Training Program A vs. Training Program B; mean difference in program effects]

  19. Tension with School FE model • [Figure: Training Program A vs. Training Program B; what if a school only hires teachers from a narrow range of the performance distribution?]

  20. Tension with School FE model • [Figure: Training Program A vs. Training Program B; estimated difference based upon within-school comparison]
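
A small simulation can make this tension concrete. The sketch below (with purely illustrative parameter values, not numbers from the paper) draws teachers from two programs whose true effectiveness differs by 0.5 SD, lets each school hire only from a narrow slice of the quality distribution, and compares the raw program gap with the gap recovered from within-school (school fixed effects) comparisons:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values: two training programs whose graduates differ by 0.5 SD.
n = 5000
program = rng.integers(0, 2, n)               # 0 = Program A, 1 = Program B
quality = rng.normal(0.5 * program, 1.0)      # true teacher effectiveness

# Each school hires only from a narrow band of the quality distribution:
# sort teachers by quality and group neighbors into schools of 10.
order = np.argsort(quality)
school = np.empty(n, dtype=int)
school[order] = np.arange(n) // 10

# Raw program gap recovers (roughly) the true 0.5 difference.
raw_gap = quality[program == 1].mean() - quality[program == 0].mean()

# School fixed effects: demean quality and the program dummy within schools,
# then regress demeaned quality on the demeaned dummy (the within estimator).
def demean_by(x, groups):
    means = np.bincount(groups, weights=x) / np.bincount(groups)
    return x - means[groups]

q_w = demean_by(quality, school)
p_w = demean_by(program.astype(float), school)
fe_gap = (p_w @ q_w) / (p_w @ p_w)

print(f"raw program gap:        {raw_gap:.2f}")   # close to 0.5
print(f"within-school (FE) gap: {fe_gap:.2f}")    # much closer to 0
```

Because each school's hires span only a sliver of the quality distribution, the program dummy has little within-school variation that lines up with quality, so the fixed-effects estimate understates the true mean difference in program effects, which is the tension these slides illustrate.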

  21. ANOVA Results

  22. Main Program Effects

  23. Examples of Program Cohort Effects (Referent group: Out-of-state teachers credentialed before 2000)

  24. Pearson/Spearman Rank Correlations of Program Effects by Subject and Model

  25. Decay Curves • Dots represent math models • Lines represent reading models • Colors represent model specifications
