1. A Quasi-experimental Approach for Studying the Impact of the Tennessee State Improvement Grant’s (TN SIG) Professional Development (PD) on the Pre-school Classroom Environment Chithra Perumal, MS, MPA; Brent Garrett, PhD; and Kathy Strunk, PhD
2007 OSEP Project Director Conference
July 16, 2007
2. Purpose of Presentation To illustrate how evaluators and evaluation activities can be catalysts for change in the management and implementation of large-scale professional development (PD) projects, as demonstrated by a quasi-experimental analysis of a preschool literacy initiative.
3. Tennessee State Improvement Grant Funded in 2003
Collaborative effort between the TN Department of Education, the Mid-South Regional Resource Center, and other partners.
Partners include: (1) IHEs: the University of TN, East TN State, the University of Memphis, and Vanderbilt University; (2) parent organizations: Family Voices and STEP; and (3) the University of Kentucky as the external evaluator, with consulting from the Mid-South Regional Resource Center.
Three initiatives:
School
Preschool
Family
4. Where is the TN SIG Now? - Organizationally Project management ranked very highly by project partners and stakeholders.
Strong and active collaboration across IHEs and other partners.
Program staff and evaluators are working in a proactive, collaborative manner.
Partners no longer throw things at the evaluators.
5. Where is the TN SIG Now? - Programmatically Response to Intervention (RTI) component is leading state efforts in RTI implementation.
Formalization of a literacy PD package wrapping around existing Reading First activities, focused on pre-school and late elementary/middle school grades.
Family involvement from STEP, TN’s Parent Training Information Center, and Family Voices, another statewide family advocacy organization.
6. Where Was The TN SIG 2 Years Ago? Transition of project leadership and project evaluators.
Eager and willing partners, but not fully informed about:
Purpose of SIGs
Project goals and objectives
Partners’ roles
Little monitoring or control over what was being implemented in the field (poor fidelity).
7. Evaluation as a Catalyst
8. Evaluation Activities Still a lot of process evaluation, focusing on collaboration and management activities. This information has been critical to improving project focus and implementation.
Greater attention to workshop/seminar evaluations, relying more on pre/post assessments.
Two-month follow-up surveys used with some initiatives.
Using focus groups and other strategies to improve dissemination of products developed by the TN SIG.
Working with the state to access school-level data in a meaningful manner to report on school-level outcomes.
Implemented a small-scale, quasi-experimental study to demonstrate the impact of the early childhood literacy initiative.
9. TN SIG Pre-school Initiative Creation of a literacy rich classroom environment
Strengthening home school connection
Developing age appropriate literacy skills
20 pre-schools
Among the many supports provided by the TN SIG is PD for pre-schools. PD is provided in three main areas:
creation of a literacy-rich classroom environment,
strengthening the home-school connection,
and developing age-appropriate literacy skills.
Currently, 20 pre-schools statewide participate in this initiative. One of the intermediate outcomes of the PD is the presence of a literacy-rich classroom environment.
10. Purpose of the Pre-School Evaluation To assess the impact of PD on classroom environment.
To answer the ‘so-what’ question.
11. Evaluation Design Quasi-experimental approach
Non-random group design
Convenience sampling
10 SIG schools and 10 comparison schools
Matched by curriculum, setting, and amount of PD
To assess the impact of TA on the classroom environment, a quasi-experimental method was implemented. The sample included 10 of the 20 pre-schools that receive TN SIG TA and 10 comparison schools (i.e., schools not part of the TN SIG initiative). The TN SIG pre-schools in the evaluation had recently joined the SIG initiative and had previously received little or no TA; at the same time, they showed a high degree of willingness to receive PD and be part of the SIG initiative. This is crucial for the implementation of the design and for assessing the impact of PD. The ten comparison schools were identified based on pre-school setting, star ratings, type of curriculum used, location, and observation feasibility. The matching was made as close as possible, but observation feasibility was also taken into account, and at times schools were unwilling to participate as comparison schools. The comparison schools were given books worth $50 as an incentive to participate. They were also asked to sign a written contract stating that there would be two observations in their schools, approximately a year apart. Finding comparison schools proved to be a long process. We were lucky to have a pre-school consultant who found the comparison schools; she spent almost a week (around 40 hours) recruiting them.
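The matching described above amounts to pairing each SIG school with the closest available candidate on a handful of categorical factors. The sketch below is a minimal Python illustration of that idea; the school records, field names, and greedy strategy are all hypothetical, not the project's actual procedure.

```python
# Hypothetical sketch of the matching step described above: for each SIG
# pre-school, pick the unused candidate that agrees on the most matching
# factors (setting, curriculum, STAR rating, region).

SIG_SCHOOLS = [  # illustrative records, not the actual study data
    {"id": "sig_01", "setting": "Head Start", "curriculum": "DLM",
     "star": 3, "region": "East"},
    {"id": "sig_02", "setting": "Child Care", "curriculum": "Creative",
     "star": 2, "region": "West"},
]

CANDIDATES = [  # pool of potential comparison schools (also made up)
    {"id": "cmp_07", "setting": "Head Start", "curriculum": "DLM",
     "star": 3, "region": "East"},
    {"id": "cmp_12", "setting": "Child Care", "curriculum": "High Scope",
     "star": 2, "region": "West"},
]

FACTORS = ("setting", "curriculum", "star", "region")

def match_score(a, b):
    """Number of factors on which two schools agree (0-4)."""
    return sum(a[f] == b[f] for f in FACTORS)

def greedy_match(sig_schools, candidates):
    """Pair each SIG school with its best remaining candidate."""
    unused = list(candidates)
    pairs = []
    for sig in sig_schools:
        best = max(unused, key=lambda c: match_score(sig, c))
        unused.remove(best)
        pairs.append((sig["id"], best["id"], match_score(sig, best)))
    return pairs

if __name__ == "__main__":
    for sig_id, cmp_id, score in greedy_match(SIG_SCHOOLS, CANDIDATES):
        print(f"{sig_id} -> {cmp_id} (matched on {score}/4 factors)")
```

In practice the consultant's recruiting had to balance closeness of match against feasibility and willingness, which is why the slide lists observation feasibility alongside the matching factors.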
12. Factors used for Matching Setting
Curriculum
STAR Ratings
Location

Intervention Schools
Funding: Head Start 4, Voluntary Pre-K 1, Child Care Centers 5
STAR: Three 8, Two 1
Region: East 4, West 3, Middle 3
Curriculum: DLM 6, Inclusive 3, Creative 1

Comparison Schools
Funding: Head Start 1, Voluntary Pre-K 3, Child Care Centers 6
STAR: Three 6, Two 1
Region: East 8, West 2
Curriculum: DLM 1, High Scope 2, Abeka 1, Creative 1, No fixed curriculum 5
Setting: Child care centers (9), voluntary pre-K (6), Head Start (5) – BG: Balanced between intervention and comparison sites.
Curriculum: 13 centers using DLM to varying degrees, 7 using other curricula such as Creative, OWL, etc.
STAR Ratings: TN has a 3-star rating program, assessed on various aspects of the pre-school (health and safety, discipline, curriculum, etc.). Voluntary pre-K schools are exempt from the Star Program. 12 three-star pre-schools, 1 two-star, 1 zero-star – BG: Balanced between intervention and comparison sites.
Location: 7 Middle TN, 7 East, 6 West
13. Pre-schools The map shows the geographical locations of the SIG and comparison pre-schools. As you can see, we could not find comparison schools in Middle TN, and also, as most of our observers …
14. Assessing the Classroom Environment Early Language & Literacy Classroom Observation (ELLCO)
Literacy Environment Checklist
Classroom observation and Teacher interview
Literacy Activities Rating Scale
Six observers
Reliability checks
Inter-rater agreement
The field-tested observation toolkit—designed for prekindergarten to third-grade classrooms—helps administrators, principals, supervisors, and program directors gather the data schools need to build better classrooms and literacy programs, both by improving PD and by comparing teachers’ practices with others. Trained observers complete the assessment in 1 to 1½ hours (it took us about half a day, since we had to follow the pre-school schedule), using three tools in sequential steps:
Literacy Environment Checklist: In 15-20 minutes, users examine classroom layout and content and the diversity of reading, writing, and listening materials. Very objective.
Classroom Observation and Teacher Interview: In 20-45 minutes, users observe teachers interacting with children and rate the quality of classroom supports for literacy through 14 observation elements. After the observation is complete, users clarify aspects of it with a 10-minute teacher interview. Subjective: it depends on the observer and is only a snapshot. We agreed to use only the probes in the booklet for interviews.
Literacy Activities Rating Scale: In 10 minutes, observers record how many times and for how long nine literacy behaviors occurred in two categories, Book Reading and Writing. Very objective.
Six observers: the evaluator, a pre-school consultant, 3 SIG partners from the Center for Literacy Studies, UT Knoxville, and Dr. Strunk. All six were trained on the ELLCO. Before formal observations began, inter-rater agreement was assessed among the six observers. The observers made independent observations in one pre-school classroom in Nashville; afterward, they discussed the scores where there was marked variation among observers, with an ELLCO trainer (Jana Crosby) facilitating the discussion. On a five-point ranking scale, all six observers either marked the same response or were off by only one point for 94% of the variables in the ELLCO instrument.
Formal assessments were made between February and March 2007.
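The 94% figure above reflects a within-one-point agreement criterion: for each ELLCO variable, the six observers count as agreeing when their ratings span at most one point on the five-point scale. A minimal sketch of that computation follows; the item names and ratings are made up for illustration.

```python
# Within-one-point agreement check, as described above: an item counts as
# "in agreement" when the spread between the highest and lowest of the six
# observers' ratings is at most one point. Illustrative data only.

ratings_by_item = {            # item -> one 5-point rating per observer
    "book_area":         [4, 4, 5, 4, 4, 5],
    "writing_materials": [3, 3, 3, 4, 3, 3],
    "teacher_talk":      [2, 4, 3, 3, 3, 3],  # spread of 2: disagreement
}

def within_one_point(scores):
    """True when all observers are within one point of each other."""
    return max(scores) - min(scores) <= 1

agree = [item for item, s in ratings_by_item.items() if within_one_point(s)]
pct = 100 * len(agree) / len(ratings_by_item)
print(f"Items with within-one-point agreement: {pct:.0f}%")
# The study reported 94% of ELLCO variables meeting this criterion.
```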
15. Pre-PD Results Overall, SIG pre-schools had higher literacy environment scores than the comparison pre-schools
A common problem with non-random group designs is a higher mean score for one group at baseline. One possible reason is that the monetary benefit was perceived as a large incentive by pre-schools with few resources, so those schools chose to participate; by the nature of the design, then, the comparison schools were of lower literacy quality. But both the SIG and comparison schools still have work to do on the literacy environment. There is room for improvement.
Subtotals for Classroom Observation
Subscale     Group     Mean     Std. Deviation   Std. Error Mean
GenRoomEnv   Control   3.1200   .63386           .20044
GenRoomEnv   SIG       4.4000   .62539           .19777
LLC_total    Control   2.7760   .54386           .17198
LLC_total    SIG       3.7935   .63425           .20057
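For readers who want to gauge the size of the pre-PD gap, the reported means and standard deviations are enough to run an independent-samples t-test from summary statistics. The sketch below assumes n = 10 classrooms per group, which is consistent with the reported standard errors (SE ≈ SD/√10); it is an illustrative re-analysis, not the study's own analysis.

```python
# Illustrative t-test computed from the summary statistics on this slide.
# Assumes n = 10 per group (consistent with the reported standard errors).
from scipy.stats import ttest_ind_from_stats

subscales = {
    #              (SIG mean, SIG SD,  control mean, control SD)
    "GenRoomEnv": (4.4000, 0.62539, 3.1200, 0.63386),
    "LLC_total":  (3.7935, 0.63425, 2.7760, 0.54386),
}

N = 10  # assumed classrooms per group

for name, (m_sig, sd_sig, m_ctl, sd_ctl) in subscales.items():
    t, p = ttest_ind_from_stats(m_sig, sd_sig, N, m_ctl, sd_ctl, N)
    print(f"{name}: SIG - control = {m_sig - m_ctl:+.3f}, "
          f"t({2 * N - 2}) = {t:.2f}, p = {p:.4f}")
```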
16. Next Steps Inter-rater agreement
Post Measure
Adjusting for high pre-mean scores
Additional qualitative data
Second reliability check/inter-rater agreement before post observations
Post measure: February 08
Adjusting for high pre-mean scores before analysis (one common approach is sketched below)
Interviews with teachers, to get qualitative data on the impact of PD and what they have learned about the classroom environment.
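One common way to adjust for high pre-mean scores is an ANCOVA-style regression: model the post-PD score on group membership with the pre-PD score as a covariate, so the group coefficient estimates the SIG effect at equal starting points. The sketch below shows the general technique with made-up data; it is not the project's actual analysis plan.

```python
# ANCOVA-style adjustment for baseline differences: regress post scores on
# group with the pre score as a covariate. Data below are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "group": ["SIG"] * 5 + ["control"] * 5,
    "pre":  [4.1, 4.5, 4.3, 4.6, 4.0, 3.0, 3.2, 2.9, 3.3, 3.1],
    "post": [4.6, 4.8, 4.7, 4.9, 4.5, 3.4, 3.6, 3.2, 3.7, 3.5],
})

# The group coefficient estimates the SIG effect at equal pre scores.
model = smf.ols("post ~ pre + C(group, Treatment('control'))", data=data).fit()
print(model.summary().tables[1])
```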
17. Limitations Possible observer bias
Use of convenience sampling
Only one observation in each pre-school by one observer
Advantages
Collaborative observer training
Inter-rater agreement
Feasibility
Increased buy-in from pre-schools
Ability to study statewide initiative on limited budget
18. Lessons Learned Collaborate with local resources
Pre-school is different from K-12
Inter-rater agreement proved to be crucial
19. Impact of Evaluation Internally
Aids strategic planning
Statewide Initiatives and System Change
Lessons learned shared
Suggestions made on ways grant activities can be folded into existing structures
Future Directions
Critical in helping refine focus and increase impact
Useful in thinking about direction of next grant proposal
1) The evaluation process both legitimizes the direction I take with SIG partners and serves as a map for me to steer activities.
Evaluation gives me a clearer and more objective way to encourage more effective communication and to steer our activities toward a more refined focus.
For example, we learned from evaluation results that some workgroup members wanted more awareness of other SIG workgroups’ activities and ideas. This gave us an opportunity to openly discuss how to improve this area and to formalize regular conference calls with those members who work just on family involvement. We decided that during our first call we would discuss our approach to presenting our family literacy toolkit and workshops. Some members noted that we were not serving as many families of children with special needs as we originally intended; we want to look at alternatives that might promote more participation from these families.
Another example is our decision to hold two strategic planning sessions in the summer to identify specific activities that would lead to a formalized package of Tennessee SIG Literacy Products. Last year’s sessions led us to design PD activities and products with more focus and wider reach. For example, we have an Interactive Teacher Forum on our website now. This year’s sessions will focus not only on further formalization of products, but also on how to incorporate them into the TN DOE infrastructure.
2) Evaluation results have offered a tool to build collaboration and discussion with key TN DOE policy-makers and decision-makers. One evaluation finding last year was to concentrate on increased collaboration with existing state structures and other agencies. SIG results are regularly presented to the education commissioners: lessons learned are shared, and suggestions are made about ways grant activities could be folded into existing structures. I recently held a WebEx with both Special Ed and General Ed Resource Centers across the state to train these leaders on our RTI initiative so that they can carry on the work begun by the SIG.
3) Evaluation has provided a guide to continue to direct my own thinking about what I need to focus on when I continue to push our efforts. When I first began, the grant was beginning its 3rd year. There was very little reach at that time. Evaluation results were critical in helping me refine our focus and increase our impact. I was able to see clearly that unless we made some critical decisions for change, we were not going to bring much improvement.
Our RTI initiative has been exciting because its direction is allowing us to gain a deeper, more beneficial evaluation. The first year, we developed RTI online modules and held 2 Trainings of Trainers. Our evaluation data included some process data involving participants’ experiences. Now, as school systems begin to actually implement RTI, we will be able to follow those who were trained into their school settings and evaluate how well they are able to train others and implement RTI. Our data will look more quantitative, and we may be able to do some fun things, like case studies.
Finally, the evaluation has helped me think about the direction of our next grant proposal. Because we have used the evaluation process to grow our grant, I have a deeper understanding of PD, grant processes, systems change, what works, what does not. This knowledge helps me know what makes sense for next time around and what I hope we can do even better next time. Evaluation has really been the engine of the TN SIG.
20. Acknowledgements Alissa Ongie, East TN State
Center for Literacy Studies, UT Knoxville
Dr. Connie White
Dr. Reggie Curran
Lisa Crawford
Dr. Kathy Strunk, TN Department of Education
Jana Crosby, Read to Succeed, United Way
21. Chithra Perumal Chithra.Perumal@uky.edu
Dr. Brent Garrett garrett@win.net
Dr. Kathy Strunk Kathy.Strunk@state.tn.us