
Human Capital Policies in Education: Further Research on Teachers and Principals






Presentation Transcript


  1. Human Capital Policies in Education: Further Research on Teachers and Principals 5th Annual CALDER Conference January 27th, 2012

  2. Where You Come From Or Where You Go? Distinguishing Between School Quality And The Effectiveness Of Teacher Preparation Programs. January 27, 2012. Kata Mihaly, RAND; Daniel McCaffrey, RAND; Tim Sass, Georgia State University; J. R. Lockwood, RAND

  3. Introduction • Improving teacher effectiveness is one of four pillars of education reform under the Obama Administration • States are using evidence-based techniques to evaluate the effectiveness of teacher preparation programs • One technique links student achievement to the preparation program where the teacher was trained and certified • Among the many concerns is that school context could affect preparation program estimates

  4. Table 1 – Characteristics of Schools Where Graduates from a Sample Program in Florida were Hired (N=22)

  5. Introduction • There are many potential problems with linking preparation programs to student achievement • selection of teachers into and out of programs • selection of program graduates into teaching positions • how teacher performance is measured • Here we consider the problem of distinguishing preparation program effects from the environment of the schools where graduates teach • We estimate preparation program effectiveness using a value-added model of student achievement with data from Florida (a schematic version of the model is sketched below)
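
A schematic version of the gain-score specification described on the modeling slide later in the deck; the notation here is ours, not the authors':

```latex
% Achievement gain of student i taught by teacher j in year t;
% p(j) is the preparation program that trained teacher j.
\[
  A_{ijt} - A_{ij,t-1} = X_{it}\beta + E_{jt}\lambda + \pi_g + \tau_t
    + \gamma_{p(j)} + \varepsilon_{ijt}
\]
% X_{it}: student characteristics; E_{jt}: teacher-experience terms;
% \pi_g, \tau_t: grade and year indicators; \gamma_{p(j)}: program effect.
% The augmented specification adds a school fixed effect \delta_{s(j,t)}.
```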

  6. Overview of Research Questions • Can school fixed effects be included in the value-added model? • If yes, does the inclusion of school fixed effects change preparation program estimates? • What are the implications of including school fixed effects for the precision of estimates? • Are fixed effects suitable in this setting? • What is the impact of sample restrictions? • years of data • restricting to inexperienced teachers

  7. Prior Research Comparing Value-Added of Teachers Across Preparation Programs • Models with Program and School Fixed Effects • New York City -- Boyd, et al. (2008) • Florida -- Sass (2008) • Kentucky -- Kukla-Acevedo, et al. (2009) • HLM Models • Texas – Mellor, et al. (2010) • Louisiana – Noell, et al. (2009)

  8. Data • Analyze recent graduates (< 3 years of experience) from Florida elementary education teacher preparation programs • teaching in grades 4 and 5 during the 2000/01–2004/05 period • 33 preparation programs • 1 to 496 teacher graduates in tested grades/subjects • Graduates from a single program working in 1 to 271 schools • Over 550,000 students • The sample also includes experienced teachers and teachers certified out of state or in Florida through alternative pathways

  9. Fixed Effects Identification • School fixed effects estimation is only possible if all preparation programs are linked to one another through the schools where their graduates teach • Preparation programs do not need to be linked directly • as long as there are some new teachers in the school who graduated from other programs • Regional clustering could lead to stratification of the network • Work by Boyd et al. (2005) on the “draw of home” suggests graduates tend to teach in schools close to where they grew up or where they went to college (a simple connectivity check is sketched below)
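
A minimal sketch of the connectivity requirement, assuming placement records are available as (program, school) pairs; the data layout and function names are illustrative, not from the paper. Programs and schools form a bipartite graph, and school fixed effects are identifiable only if the program nodes fall in a single connected component:

```python
# Check whether all preparation programs are connected through shared
# schools, using a small union-find over (program, school) placements.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def programs_connected(placements):
    """placements: iterable of (program_id, school_id) teacher placements."""
    parent = {}
    for prog, school in placements:
        p, s = ("prog", prog), ("school", school)
        parent.setdefault(p, p)
        parent.setdefault(s, s)
        rp, rs = find(parent, p), find(parent, s)
        if rp != rs:
            parent[rs] = rp  # merge the program and school components
    roots = {find(parent, n) for n in parent if n[0] == "prog"}
    return len(roots) == 1  # one component => every program is linked

# Toy example: programs A and B share school 1, B and C share school 2.
print(programs_connected([("A", 1), ("B", 1), ("B", 2), ("C", 2)]))  # True
```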

  10. Figure 1 – Estimated Probability of a Preparation Program Graduate Teaching at a School with at Least One Graduate from Another Program, as a Function of Distance from Program to School The negative relationship indicates graduates are more likely to teach in schools closer to where they graduated

  11. Figure 2 – Preparation Program and School Connections The shade of each line represents the strength of the connection: the number of graduates from a program teaching at that school

  12. Figure 3 – Preparation Program Network Using a 5-Year Data Window • All preparation programs are connected, so school fixed effect estimation is possible

  13. Model – Preparation Program Effectiveness • We estimate a model of student achievement gains as a function of student characteristics, teacher experience, grade and year indicators, and program fixed effects • Program effects are estimated relative to the average preparation program in Florida using the Stata felsdvregdm command • We rank programs on effectiveness and divide the rankings into quartiles • We compare the rankings and ranking quartiles with and without school fixed effects (a simplified sketch follows)
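
A simplified numpy/pandas sketch of the two specifications. The toy data, column names, and the demeaning step are illustrative stand-ins for what felsdvregdm does in Stata (effects relative to the average program):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "gain":    rng.normal(size=n),               # achievement gain
    "frl":     rng.integers(0, 2, size=n),       # stand-in student covariate
    "program": rng.integers(0, 6, size=n),
    "school":  rng.integers(0, 40, size=n),
})

def program_effects(df, school_fe=False):
    X = [df[["frl"]].to_numpy(float),
         pd.get_dummies(df["program"]).to_numpy(float)]
    if school_fe:
        # drop one school dummy to avoid exact collinearity with the
        # full set of program dummies
        X.append(pd.get_dummies(df["school"], drop_first=True).to_numpy(float))
    beta, *_ = np.linalg.lstsq(np.hstack(X), df["gain"].to_numpy(), rcond=None)
    g = beta[1:1 + df["program"].nunique()]
    return g - g.mean()  # effects relative to the average program

no_fe   = program_effects(df)
with_fe = program_effects(df, school_fe=True)
# Rank programs and compare quartile placement across specifications
q_no = pd.qcut(pd.Series(no_fe).rank(), 4, labels=False)
q_fe = pd.qcut(pd.Series(with_fe).rank(), 4, labels=False)
print((q_no != q_fe).sum(), "programs change ranking quartile")
```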

  14. Table 2 – Top Tier Preparation Programs

  15. Table 3 – Bottom Tier Preparation Programs

  16. Results – Preparation Program Rankings • Rankings are significantly affected by the inclusion of school fixed effects in the value-added model • Of the 12 programs that appear in the top quartile of rankings under either specification, 8 fall in a different quartile under the other specification • The bottom quartile of program rankings is more stable, with 6 programs in this quartile under both specifications • 2 programs switch from the bottom quartile of rankings under one specification to the top quartile under the other

  17. Results – Variance Inflation • Schools where all new teachers came from a single program (~32% of our teacher sample) do not help identify preparation program effects in the school fixed effects model • The loss of these teachers can greatly inflate the standard errors of the estimated program effects for some programs • The standard errors of the preparation program estimates increase by an average of 16.5% after including school fixed effects • The variance inflation is severe for 10 of the 33 programs, with standard errors increasing by over 20% (flagging these single-program schools is sketched below)
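
A short sketch of the flagging step, assuming a teacher-level frame with 'school' and 'program' columns (names are ours); teachers in single-program schools drop out of the school fixed effects comparison:

```python
import pandas as pd

teachers = pd.DataFrame({
    "teacher": [1, 2, 3, 4, 5],
    "school":  ["s1", "s1", "s2", "s3", "s3"],
    "program": ["A", "A", "B", "B", "C"],
})
# Number of distinct programs represented among each school's new hires
n_programs = teachers.groupby("school")["program"].transform("nunique")
share_lost = (n_programs == 1).mean()
print(f"{share_lost:.0%} of teachers sit in single-program schools")
# Comparing standard errors across specifications then quantifies the
# inflation, e.g. se_with_fe / se_without_fe - 1 for each program.
```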

  18. Homogeneity Assumption • School fixed effects can only yield consistent estimates of program effectiveness if the teachers and schools that create the connections among programs do not differ systematically from other teachers and schools in the state • Three tests of the homogeneity assumption (one is sketched below): • Are schools with graduates from 1 program different from schools with graduates from more than 4 programs? • Are teachers who teach in schools with graduates from 1 program different from teachers who teach in schools with graduates from more than 4 programs? • Are central schools, the ones that do the most to connect preparation programs, different from other schools in the state?
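
One way to operationalize the first test, assuming SciPy is available and a school-level frame with the (hypothetical) columns n_programs and pct_frl; the grouping thresholds follow the slide:

```python
import pandas as pd
from scipy.stats import ttest_ind

def compare_groups(schools, trait):
    """Welch two-sample t-test: single-program vs. 4+-program schools."""
    single = schools.loc[schools["n_programs"] == 1, trait]
    many   = schools.loc[schools["n_programs"] >= 4, trait]
    return ttest_ind(single, many, equal_var=False)

# Made-up school data for illustration only:
schools = pd.DataFrame({
    "n_programs": [1, 1, 1, 4, 5, 6],
    "pct_frl":    [0.35, 0.40, 0.30, 0.62, 0.70, 0.66],
})
print(compare_groups(schools, "pct_frl"))  # small p-value => groups differ
```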

  19. Figure 4 – Central School Locations Central schools have a disproportionately high influence in identifying program effects

  20. Results – Homogeneity Assumption • Three tests of the homogeneity assumption: • Schools different? • YES: schools with new teachers from 4+ preparation programs are larger and have higher % black and Hispanic students, lower test scores, and higher % free-lunch students • Teachers different? • YES: on average, teachers in schools with graduates from 4+ preparation programs are more likely to be black or Hispanic and have lower test scores and lower SAT scores • Central schools different? • YES: larger schools with higher % Hispanic, immigrant, and LEP students • Conclusion: the homogeneity assumption is likely violated

  21. Years of Data • The length of the data window affects program connections • programs are tied through a school if each has a graduate teaching in the school at some time during the window • As we lengthen the window • more programs will have ties, making estimation possible • however, this requires the assumption that both school and program effects are constant over the entire window • Other implications • variance inflation • the homogeneity assumption (the connectivity check can be re-run per window, as sketched below)
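
Re-running the earlier connectivity sketch for different window lengths is straightforward if each placement record carries a school year (an assumption on our part), reusing the programs_connected() helper sketched above:

```python
def connected_for_window(placements, start, length):
    """placements: iterable of (program_id, school_id, year) records."""
    window = [(p, s) for p, s, yr in placements
              if start <= yr < start + length]
    return programs_connected(window)

# e.g. placements = [("A", 1, 2001), ("B", 1, 2003), ...]
# for length in (5, 4, 3, 2):
#     print(length, connected_for_window(placements, 2000, length))
```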

  22. Results – Years of Data • Identification is robust to a shorter window length • Even with 3 years of data the school fixed effects can be identified • Restricting to 2 years of data leaves 3 preparation programs disconnected • Variance inflation is worse • due to an increase in the proportion of teachers in schools with graduates from a single program, who are not used to estimate program effects • The characteristics of schools that contribute to program estimates with school fixed effects are very similar across window lengths • somewhat larger schools with a higher % Hispanic

  23. Sample of Teachers • In the results reported so far, only inexperienced teachers (< 3 years of experience) were included in the analysis • This restriction is warranted if the impact of the preparation program dissipates as the teacher gains on-the-job experience • We can include experienced teachers in the sample and restrict program effects to exclude these teachers • but experienced teachers change the connections between schools and preparation programs • Experienced teachers help identify the school fixed effects • This can reduce the variance of the program effects • Our ability to compare programs could then rest on differences in experienced teachers across schools

  24. Results – Sample of Teachers • Preparation program ranking quartiles are unaffected by the inclusion of experienced teachers in models without school fixed effects • Ranking quartiles in models with school fixed effects change for 8 out of 33 programs • 3 of these 8 changes span more than 2 quartiles • Including experienced teachers lowers program effect variances in models with school fixed effects by 13% on average

  25. Summary and Conclusions • Good news for school fixed effects models: • Despite regional clustering, Florida preparation programs are connected, so the use of school fixed effects is feasible • There is significant variation in school characteristics among graduates of any preparation program, so the use of school fixed effects is desirable • Bad news for models with school fixed effects: • Including school fixed effects inflates the variance of estimated program effects • The homogeneity assumption is likely violated • Preparation program effectiveness rankings differ significantly with and without school fixed effects

  26. Summary and Conclusions • A 3-year data window combined with school fixed effects may provide a reasonable compromise between bias and variance inflation • However, there is no clean empirical method to identify a model with no bias, or the model that yields program effect estimates with the smallest MSE • States will need to make a choice knowing that it may affect preparation program rankings and may yield biased estimates unless untestable assumptions hold • States may need to consider whether value-added modeling alone can provide useful information about preparation programs
