
Recruitment and Retention: the evaluator’s role






Presentation Transcript


  1. Session 4: Trial management. Recruitment and retention: the role of evaluators. Meg Wiggins (IoE)

  2. Recruitment and Retention: the evaluator’s role. Meg Wiggins – Institute of Education, London

  3. Recruiting schools

  4. Retaining schools

  5. Project examples

  6. Example 1 • Intervention: 30 hours of primary school classroom chess teaching, delivered by external CSC tutors • Cluster trial, randomised at school level • Evaluation team at IoE: John Jerrim (Lead), Lindsey Macmillan, John Micklewright • Process evaluation: Meg Wiggins, Mary Sawtell, Anne Ingold

  7. Chess in Schools – Recruitment • Community organisation – small central staff team • Recruitment expectations – return to known ground • Recruitment reality – IoE provided lists of schools selected on FSM % criteria, in their chosen LAs • Capacity issues, limited understanding about RCTs, huge enthusiasm for the evaluation

  8. Chess in Schools – Recruitment 2 • Nearly reached target of 100 primary schools within tight timeframe • Succeeded by tenacious, labour-intensive direct contact by phone • Often before school; strategies for speaking directly to head teachers • Ditched letters and emails as the first approach • Brought in a dedicated person to recruit

  9. Chess in Schools – Recruitment 3 • As evaluators we assisted recruitment by: • Providing extra schools from which to recruit • Providing extra time for recruitment • Channeling enthusiasm - providing focus

  10. Chess in Schools – Retention in study • Study designed to limit retention challenges • Influenced by learning from earlier IoE EEF evaluations • No testing within schools; use of NPD data • Collection of UPNs before randomisation

  11. Chess in Schools – Retention in study • Pre-randomisation baseline head teachers’ survey • Showed some confusion about the trial and intervention • Limited evaluation involvement in development of materials used in recruitment of schools • How much were they used? • Lack of forum for cascading study information beyond head/SLT

  12. Chess in Schools – Retention in intervention • Most intervention schools adopted the programme • CSC tell us that nearly all have completed the full 30-week intervention • End-of-intervention survey of tutors & teachers pending to confirm this • Case study work flagged variation between schools in which lessons were replaced by the intervention • Important to the study; not critical for schools/Chess tutors

  13. Chess in Schools – Lessons learnt • Beyond recruitment – importance of a forum for cementing the key study messages within schools • Tension between the role of impartial evaluator observing from a distance and that of partner in achieving a successful intervention and evaluation • Plan some interim formal means of assessing implementation and intervention retention • Design of the study means that retention issues remain minimal

  14. Example 2: Early Language Learning & Literacy (ELLL) Project • Intervention: Training primary class teachers to deliver a curriculum of French lessons as well as follow-up activities linking the learning of French to English literacy • Cluster trial, randomised within schools at class level, across two year groups (3 & 4) • IoE evaluation team: Meg Wiggins (Lead), John Jerrim, Shirley Lawes, Helen Austerberry, Anne Ingold

  15. Early Language Learning – Recruitment • Design of study influenced by: • Tight study timeline – curriculum changes – required post-intervention testing • Extremely short recruitment window prior to commencement of teacher training • Capacity to deliver intervention to limited numbers • Challenges in determining inclusion criteria for schools • Key issues around specialist language teachers and within-schools randomisation design • Overburdening of London schools – EEF issue

  16. Early Language Learning – Recruitment • Compromises reached: • Outside organisation brought in to recruit • London schools allowed • Relaxation of ban on specialist teachers (slight!) • Close liaison between CfBT and evaluation team • Recruitment handled on a case-by-case basis • Development of detailed recruitment materials – FAQs • Minimum target of 30 schools exceeded – 46 randomised

  17. Early Language Learning - Retention • Immediate post randomisation drop out: 9 schools • 2 couldn’t attend teacher training dates • 2 schools disagreed with randomisation • 5 never responded to invitation to teacher training • Additionally, 4 schools dropped one year group, but stayed in trial with other year group • Within one week – 46 schools reduced to 37!

  18. Early Language Learning – Retention • Evaluation team attended each training session and explained study to intervention teachers • Found almost no knowledge of study had been cascaded down by heads • Emphasised randomisation and no diffusion • Answered many questions! Learnt from them! • Provided teachers with an FAQ sheet • Explained plans for end-of-year testing

  19. Early Language Learning - Retention • Used additional training events to continue evaluation presence • All 37 schools have delivered (most of) the intervention • Organising testing dates (mostly by email) has been fairly straightforward • Lots of messages back and forth to finalise • Testing begins Tuesday

  20. Early Language Learning – Lessons Learnt • Tight recruitment period led to inclusion of schools that weren’t committed. Role of external recruitment agency? • Tension between confusing schools with contacts from both programme and evaluation teams vs. not having evaluation messages clearly conveyed • Need to ensure evaluation messages reach those who deliver interventions, not just Heads • Allowing time and resources for communicating with schools at every stage – no shortcuts to personal contact

  21. Do our experiences tally with yours? Audience discussion

  22. Task – table discussion and feedback. What one top tip or suggestion would you make for recruitment, retention or communication with schools?

  23. My conclusions • Design with recruitment and retention at the fore • There is no substitute for evaluation team direct contact with schools – allocate resources accordingly • Be flexible – balance rigour with practicality. Choose your battles!

  24. Session 4: Analysis and reporting • Analysis methods and calculating effect sizes – Ben Styles (NFER) • Analysis Plans: A cautionary tale – Michael Webb (IFS)

  25. Analysis and effect size. Ben Styles. Education Endowment Foundation, June 2014

  26. Analysis and effect size • How design determines analysis methods • Brief consideration of how to deal with missing data • How to calculate effect size

  27. ‘Analyse how you randomise’ • Pupil randomised • The ideal trial: t-test on attainment • Usually have a covariate: regression (ANCOVA) • Stratified randomisation: regression with stratifiers as covariates
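A minimal sketch of these pupil-randomised analyses, assuming a pupil-level dataset with illustrative columns post_score, baseline, group (1 = intervention, 0 = control) and stratum; the file and column names are hypothetical placeholders, not from the talk.

```python
# Sketch: 'analyse how you randomise' for a pupil-randomised trial.
# All column and file names below are illustrative placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

pupils = pd.read_csv("trial_data.csv")  # hypothetical pupil-level file

# The ideal trial: t-test on attainment
treated = pupils.loc[pupils["group"] == 1, "post_score"]
control = pupils.loc[pupils["group"] == 0, "post_score"]
print(stats.ttest_ind(treated, control))

# Usually have a covariate: regression (ANCOVA) on the baseline score
ancova = smf.ols("post_score ~ group + baseline", data=pupils).fit()

# Stratified randomisation: add the stratifiers as covariates
stratified = smf.ols("post_score ~ group + baseline + C(stratum)", data=pupils).fit()
print(ancova.params["group"], stratified.params["group"])
```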

  28. ‘Analyse how you randomise’ • Cluster randomised (think about an imaginary very small trial to understand why) • t-test on cluster means • Regression of cluster means with baseline means as a covariate • ‘It’s the number of schools that matters’
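A matching sketch for the cluster-randomised case, collapsing to one row per school before analysis; it reuses the illustrative columns above plus a hypothetical school identifier.

```python
# Sketch: cluster-level analysis for a school-randomised trial.
# Column and file names are illustrative placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

pupils = pd.read_csv("trial_data.csv")  # hypothetical pupil-level file

# One row per school: 'it's the number of schools that matters'
schools = (pupils.groupby("school")
                 .agg(post_mean=("post_score", "mean"),
                      baseline_mean=("baseline", "mean"),
                      group=("group", "first"))
                 .reset_index())

# t-test on cluster means
print(stats.ttest_ind(schools.loc[schools["group"] == 1, "post_mean"],
                      schools.loc[schools["group"] == 0, "post_mean"]))

# Regression of cluster means with baseline means as a covariate
print(smf.ols("post_mean ~ group + baseline_mean", data=schools).fit().params["group"])
```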

  29. BUT • If we have an adequate number of schools in the trial, say 40 or more • We have a pupil-level baseline measure • We can use the baseline to explain much of the school-level variance • Multi-level analysis
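A hedged sketch of that multilevel alternative: a random intercept for school with the pupil-level baseline as a covariate, using the same illustrative column names as above.

```python
# Sketch: multilevel (pupils within schools) analysis with a pupil-level baseline.
# Column and file names are illustrative placeholders.
import pandas as pd
import statsmodels.formula.api as smf

pupils = pd.read_csv("trial_data.csv")  # hypothetical pupil-level file

# Random intercept for school; the baseline soaks up much of the school-level variance
mlm = smf.mixedlm("post_score ~ group + baseline",
                  data=pupils, groups=pupils["school"]).fit()

print(mlm.params["group"])                           # intervention effect
print("School-level variance:", float(mlm.cov_re.iloc[0, 0]))
print("Pupil-level variance: ", mlm.scale)
```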

  30. Missing data • Prevention is better than cure • Attrition is running at about 15% on average in EEF trials • Using ad hoc methods to address the problem can lead to misleading conclusions • http://educationendowmentfoundation.org.uk/uploads/pdf/Randomised_trials_in_education_revised.pdf • Baseline characteristics of analysed groups • Baseline effect size
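One simple way to act on the last two bullets is to compare baseline scores for the pupils who actually end up in the analysis. A rough sketch follows, again with illustrative column names (the 'analysed' flag is an assumption, not from the talk).

```python
# Sketch: baseline balance check on the analysed (post-attrition) sample.
# 'analysed' is a hypothetical flag for pupils with outcome data.
import numpy as np
import pandas as pd

pupils = pd.read_csv("trial_data.csv")  # hypothetical pupil-level file
analysed = pupils[pupils["analysed"] == 1]

t = analysed.loc[analysed["group"] == 1, "baseline"]
c = analysed.loc[analysed["group"] == 0, "baseline"]

pooled_sd = np.sqrt(((len(t) - 1) * t.var(ddof=1) + (len(c) - 1) * c.var(ddof=1))
                    / (len(t) + len(c) - 2))
print("Baseline effect size among analysed pupils:", (t.mean() - c.mean()) / pooled_sd)
```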

  31. Effect size • We need a measure that is universal • The difference between intervention group mean and control group mean • As measured in standard deviations

  32. Effect size • See EEF analysis guidance at http://educationendowmentfoundation.org.uk/uploads/pdf/Analysis_for_EEF_evaluations_REVISED3.pdf • Write a spreadsheet that does it for you
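In place of a spreadsheet, here is a small helper that computes the difference in means in pooled standard-deviation units; this is one common pooled-SD form, and the EEF guidance linked above specifies the exact calculation an evaluation should follow.

```python
# Sketch: effect size as mean difference divided by the pooled standard deviation.
import numpy as np

def effect_size(intervention, control):
    """Difference between group means, expressed in pooled standard deviations."""
    intervention = np.asarray(intervention, dtype=float)
    control = np.asarray(control, dtype=float)
    n_i, n_c = len(intervention), len(control)
    pooled_var = (((n_i - 1) * intervention.var(ddof=1)
                   + (n_c - 1) * control.var(ddof=1))
                  / (n_i + n_c - 2))
    return (intervention.mean() - control.mean()) / np.sqrt(pooled_var)

# Toy example with made-up scores
print(effect_size([24, 30, 28, 35], [22, 25, 27, 26]))
```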

  33. But what about multi-level models? • Difference in means is still the model coefficient for intervention • But the variance is partitioned – which do we use? • And the magnitude of the variance components changes depending on whether we have covariates in the model – with or without?

  34. Arrggh!

  35. We want comparability • Always think of any RCT as a departure from the ideal trial • We want to be able to compare cluster trial effect sizes with those of pupil-randomised trials • We want to meta-analyse

  36. Which variance to use • Pupil-level • Before covariates

  37. This is controversial • Before or after covariates means two different things • At York on Monday the leaning was towards total variance, but pupil-level is better for meta-analysis • Report all the variances and say what you do
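A sketch of 'report all the variances': pull the school- and pupil-level components from the multilevel model fitted in the earlier sketch (here from the model with covariates; the 'before covariates' variant would refit without the baseline term) and show the effect size under each candidate denominator.

```python
# Sketch: report the variance components and the effect size under each denominator.
# 'mlm' is the MixedLM result from the earlier multilevel sketch (illustrative).
import numpy as np

school_var = float(mlm.cov_re.iloc[0, 0])   # between-school variance
pupil_var = mlm.scale                       # within-school (pupil-level) variance
total_var = school_var + pupil_var
coef = mlm.params["group"]                  # difference in means for intervention

print("School-level variance:", school_var)
print("Pupil-level variance: ", pupil_var)
print("ES using pupil-level SD:", coef / np.sqrt(pupil_var))
print("ES using total SD:      ", coef / np.sqrt(total_var))
```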

  38. Conclusions • A well designed RCT usually leads to a relatively simple analysis • Some of the missing data methods are the domain of statisticians • Be clear how you calculate your effect size

  39. Analysis Plans: A cautionary tale. Michael Webb (IFS)
