SREE Annual Conference, March 6, 2010

Presentation Transcript


  1. SREE Annual Conference, March 6, 2010. Using RCTs to determine the impact of reading interventions on struggling readers. Newark Public Schools. Jennifer Hamilton, Senior Study Director, jenniferhamilton@westat.com; Matthew Carr, Analyst, matthewcarr@westat.com

  2. Overview of Presentation • Context – Striving Readers in Newark, NJ • Fidelity of implementation • Adherence • Exposure • Discussion • For more information…

  3. Context – Newark, NJ • 35% of children living in poverty (compared to 18% nationally) • Largest school district in the state of NJ • A ‘district in need of improvement’ for the last 4 years • State took over the district in 1995 (limited control given back in 2008) • Only ~50% of students in grades 6, 7, & 8 are proficient readers

  4. Importance of Fidelity. Fidelity is the extent to which the intervention as implemented is faithful to the pre-stated model. The little black dress is in; the little black box is out. • Internal validity – helps to explain failure • External validity – helps to make the treatment more stable and replicable (the treatment has to be well defined) • Helps ensure the treatment is absent from the control condition

  5. Components of Fidelity – Theory of Change: Adherence • Exposure

  6. Establishing Fidelity (Adherence). Four steps: (1) identify, (2) measure, (3) score, (4) analyze. Step 1: Identify critical components • Adaptation issue. Step 2: Measure • Multiple sources of data, range of methodologies • Extant data (training receipt, class size, SRI, computer use) • Classroom observations • Practical considerations – $$$$ • Qualifications of data collection staff • Number of points in time (cost)

  7. Establishing Fidelity (Adherence). Step 3: Score • Assign sub-scores (e.g., number of sessions per week using instructional software) • Combine into a single score • Equal weighting
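
A minimal sketch of the scoring step above, assuming hypothetical component names and 0–1 sub-scores combined with equal weights (not the study's actual instrument):

```python
# Equal-weight combination of adherence sub-scores (hypothetical components).

def adherence_score(sub_scores):
    """Average 0-1 sub-scores into a single adherence score, equally weighted."""
    return sum(sub_scores.values()) / len(sub_scores)

classroom = {
    "software_sessions_per_week": 0.80,  # e.g., 4 of 5 expected sessions
    "materials_received": 1.00,
    "class_size_within_target": 0.75,
    "teacher_training_completed": 0.85,
}

print(f"Adherence: {adherence_score(classroom):.0%}")  # Adherence: 85%
```

Equal weighting keeps the score transparent; a weighted combination would require defensible weights for each critical component.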

  8. Newark - Single Adherence Score • Year 1 = 88% • Year 2 = 82% • Year 3 = 89%

  9. Establishing Fidelity (Adherence). Step 4: Analysis • Descriptive • But profoundly unsatisfying, given all the effort and expense • Generally should not be used as a mediating variable • Fidelity is usually related to the error term as well as to the outcome • The error term contains unmeasured factors, such as teacher quality/charisma and student engagement • Non-experimental/exploratory • Fidelity as a predictor (with lots of covariates) • Correlational
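
Because fidelity is not randomly assigned, the slide treats fidelity-as-predictor models as exploratory. A sketch of that kind of correlational model, with hypothetical file and variable names:

```python
# Exploratory, non-experimental analysis: adherence as a predictor of the
# outcome alongside covariates. File and variable names are hypothetical,
# and the coefficient on `adherence` is descriptive, not causal.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("classroom_year2.csv")  # hypothetical classroom-level file

model = smf.ols(
    "posttest ~ adherence + pretest + class_size + C(grade)",
    data=df,
).fit()
print(model.summary())
```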

  10. Newark – Descriptive Adherence Data

  11. Exposure (You are here)

  12. Exposure – Student Receipt of Intervention. Components: • Attrition • Attendance • No-Shows

  13. Exposure – Attrition. WWC (2008) benchmarks for attrition tolerance. Newark: 19.6% overall attrition, 5.6% differential attrition
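
The two quantities the WWC (2008) standard combines can be computed directly from randomized and analyzed counts. The counts below are illustrative only, chosen to reproduce the rates on the slide; they are not the study's actual sample sizes:

```python
# Overall and differential attrition, the two inputs to the WWC (2008)
# attrition boundary. Counts are illustrative, not the Newark Ns.

t_randomized, t_analyzed = 1000, 776   # treatment (hypothetical)
c_randomized, c_analyzed = 1000, 832   # control (hypothetical)

t_attrition = 1 - t_analyzed / t_randomized
c_attrition = 1 - c_analyzed / c_randomized

overall = 1 - (t_analyzed + c_analyzed) / (t_randomized + c_randomized)
differential = abs(t_attrition - c_attrition)

print(f"Overall attrition:      {overall:.1%}")       # 19.6%
print(f"Differential attrition: {differential:.1%}")  # 5.6%
```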

  14. Exposure – Attendance. Number of unexcused absences by analytic group:
  • Group 1 = 1 year of potential exposure (grades 6, 7, 8 in year 1; grade 6 in year 2)
  • Group 2 = 1 year of potential exposure – 6th graders only (years 1, 2)
  • Group 3 = 2 years of potential exposure – 7th graders only (year 2)
  • Group 4 = 2 years of potential exposure – 8th graders only (year 2)
  • Group 5 = 2 years of potential exposure – 7th and 8th graders (grades 7, 8 in year 2)
  No significant differences between Treatment and Control students
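
One way to run the treatment-vs-control comparison reported above for a single analytic group; the column names and the use of a Welch t-test are assumptions, since the slide does not say which test was used:

```python
# Compare unexcused absences between Treatment and Control in one analytic
# group. Column names and the Welch t-test are assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("attendance.csv")  # hypothetical: group, condition, unexcused_absences
g1 = df[df["group"] == 1]

t_stat, p_value = stats.ttest_ind(
    g1.loc[g1["condition"] == "treatment", "unexcused_absences"],
    g1.loc[g1["condition"] == "control", "unexcused_absences"],
    equal_var=False,  # Welch's t-test; does not assume equal variances
)
print(f"Group 1: t = {t_stat:.2f}, p = {p_value:.3f}")
```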

  15. Exposure – No-Shows. Intention to Treat (ITT) vs. Treatment on the Treated (TOT) • Removing treatment students who didn't receive the treatment would bias the impact estimates • But keeping them in underestimates effects • Issue of real-world implementation vs. ideal implementation. Policymakers want to know TOT; researchers need to report ITT. Solution – the Bloom Adjustment

  16. The Bloom Adjustment. Adjusts the effect of an intervention upward by the treatment-group no-show rate (γ):
  AllSubjectEffect = γ · NoShowEffect + (1 − γ) · TreatSubjectEffect
  Assuming the effect per no-show is zero, then:
  AS = γ · 0 + (1 − γ) · TS
  AS = (1 − γ) · TS
  Therefore: TS = AS / (1 − γ)
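
The algebra above translates directly into a one-line adjustment; the numbers in the usage example are illustrative only, not the Newark results:

```python
# Bloom (1984) no-show adjustment: rescale the ITT (all-subject) effect by the
# treatment-group no-show rate, assuming zero effect for no-shows.

def bloom_adjust(itt_effect, no_show_rate):
    """Return the treatment-on-the-treated effect: TS = AS / (1 - gamma)."""
    if not 0.0 <= no_show_rate < 1.0:
        raise ValueError("no_show_rate must be in [0, 1)")
    return itt_effect / (1 - no_show_rate)

# Illustrative only: an ITT effect size of 0.10 with a 20% no-show rate
# implies a TOT effect size of 0.125.
print(bloom_adjust(itt_effect=0.10, no_show_rate=0.20))  # 0.125
```

Note that the adjustment rescales the point estimate and its standard error by the same factor, so it does not change the statistical significance of the ITT result.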

  17. Example: Striving Readers. Student sample divided into 5 analytic groups

  18. Striving Readers Example. ITT effect sizes compared to Bloom-adjusted effect sizes (year 2)

  19. Review • Adherence = receipt of materials + accurate delivery • 4 steps: identify, measure, score, analyze • Exposure = student receipt of the intervention • Attrition • Attendance • No-Shows – Bloom Adjustment

  20. For more information… • Bloom, H. (1984). Accounting for no-shows in experimental evaluation designs. Evaluation Review, 8, 225-246. • Durlak, J.A., & DuPre, E.P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327-350. • Hill, L.G., Maucione, K., & Hood, B.K. (2007). A focused approach to assessing program fidelity. Prevention Science, 8, 25-34. • Mowbray, C., Holter, M., Teague, G., & Bybee, D. (2003). Fidelity criteria: Development, measurement, and validation. American Journal of Evaluation, 24, 315-340. • What Works Clearinghouse. (2008). WWC Procedures and Standards Handbook. Available online at http://ies.gov/ncee/wwc/references

  21. On the Web. Department of Elementary and Secondary Education, Striving Readers webpage: http://www.ed.gov/programs/strivingreaders/index.html
