
Assessing the Effectiveness of a Science and Mathematics Teacher Development Program through Use of Virtual Comparison Groups. John Cronin, the Kingsbury Center at Northwest Evaluation Association; Jeff C. Marshall, Clemson University; Y. Xiang, the Kingsbury Center at NWEA. NARST 2009 Conference.


Presentation Transcript


  1. Assessing the Effectiveness of a Science and Mathematics Teacher Development Program through Use of Virtual Comparison Groups John Cronin, the Kingsbury Center at Northwest Evaluation Association Jeff C. Marshall, Clemson University Y. Xiang, the Kingsbury Center at NWEA NARST 2009 Conference

  2. Challenge • Context: By increasing the quantity and quality of inquiry-based instruction being facilitated, we expect student achievement to increase as well. • Goal: To find sound measures of student achievement that adequately control for confounding variables.

  3. Professional Development Overview • Yearlong PDI for middle school math and science teachers • 2 cohort schools per year • 8 days in summer • 4 days of follow-up during the year • Weekly observations of and interactions with individual teachers • PDI-2: a second-year leadership institute to encourage sustainability in the schools

  4. Strengths of Using MAP • MAP is aligned to state standards in mathematics and science (minimizes noise introduced when an assessment is not well aligned to the learning objectives). • A single version of MAP is used throughout our state (permits use of results across districts). • Predictive validity between the MAP assessment and state assessments is generally quite high (Cronin, Kingsbury, Dahlin, Adkins, & Bowe, 2007; Northwest Evaluation Association, 2005a). • MAP is an adaptive assessment (NWEA, 2003). This also makes it easier to study sub-domains of a content area, because a group of several hundred students will provide sufficient item responses across a large array of items to produce rich, meaningful results. • MAP uses a Rasch-scaled (Smith, 2001) item pool rather than a scaled test form. • MAP uses a cross-grade scale with robust growth norms (NWEA, 2005b), which facilitates more accurate measurement of student growth over time.
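As an aside on the Rasch scaling mentioned in this slide, the sketch below illustrates the Rasch (one-parameter logistic) model, which places student ability and item difficulty on a single scale. The function name and toy values are our own for illustration; they are not NWEA code or actual RIT-scale arithmetic.

```python
import math

def rasch_p_correct(theta: float, b: float) -> float:
    """Probability that a student with ability theta answers an item
    of difficulty b correctly, under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals item difficulty, the probability is exactly 0.5;
# an adaptive test exploits this by selecting items near the student's
# current ability estimate, where each response is most informative.
print(rasch_p_correct(0.0, 0.0))             # 0.5
print(round(rasch_p_correct(1.0, 0.0), 3))   # 0.731
```

This is why a Rasch-scaled item pool supports both adaptive administration and sub-domain analysis: every item's difficulty lives on the same scale as every student's ability, so responses to different items remain comparable.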

  5. Evaluation Design • Study group • Students taught by teachers from the participating district (a large, diverse urban district) who take part in the PDI. New cohorts of teachers will be followed as additional study groups. • Two comparison groups • #1: students of teachers from the participating district who did not take part in the PDI. • #2: a Virtual Comparison Group of students matched to the students of the study group teachers.

  6. Criteria for the VCG (Virtual Comparison Group) • Each study group student is matched to up to 51 virtual students (a minimum of 21 students per VCG). • Each VCG student must • have an overall scaled score within one point of their study group student's. • have been tested within +/- 7 days of their study group student. • come from a school whose Free and Reduced Lunch participation rate is within 5% of the study group student's school. • come from a school with the same urban/rural designation. • have the same gender and ethnic designation as the study group student.
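The matching rules on this slide amount to a simple filter over a candidate pool. A minimal sketch follows; the record layout, field names, and thresholds-as-code below are our own rendering of the criteria, not NWEA's actual matching implementation or schema.

```python
from dataclasses import dataclass

@dataclass
class Student:
    scaled_score: float   # overall MAP scaled score
    test_day: int         # day of year the pretest was taken
    school_frl: float     # school Free/Reduced Lunch rate, 0.0-1.0
    school_locale: str    # "urban" or "rural"
    gender: str
    ethnicity: str

def is_vcg_match(study: Student, cand: Student) -> bool:
    """True if `cand` satisfies every VCG criterion for `study`."""
    return (abs(cand.scaled_score - study.scaled_score) <= 1.0
            and abs(cand.test_day - study.test_day) <= 7
            and abs(cand.school_frl - study.school_frl) <= 0.05
            and cand.school_locale == study.school_locale
            and cand.gender == study.gender
            and cand.ethnicity == study.ethnicity)

def build_vcg(study: Student, pool: list[Student],
              cap: int = 51, minimum: int = 21):
    """Return up to `cap` matched students, or None if fewer
    than `minimum` candidates satisfy all criteria."""
    matches = [c for c in pool if is_vcg_match(study, c)]
    return matches[:cap] if len(matches) >= minimum else None
```

Because every criterion is a hard constraint, a study group student with too few qualifying candidates simply gets no VCG, which is why the design specifies a minimum group size rather than relaxing the match.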

  7. Power of Design • Pre-post intervention measurement of the study group • helps control for any effect introduced by the study group teachers’ prior performance. • Pre-post intervention measurement of the students of non-participating educators • helps control for a school or school-system effect. • Use of Virtual Comparison Groups • helps control for effects that might be a product of variance in the student cohorts. • Continued collection of data for two years after completion of the program • checks whether any effect found for the program is persistent. • permits investigation of whether there is a “J-curve” effect associated with this kind of intervention (Erb & Stevenson, 1999). If the reform is effective, student outcomes will improve in the long run, provided that sufficient time is allowed to overcome the J-curve effect (Yore, Anderson, & Shymansky, 2005).
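The pre-post logic above reduces to comparing mean growth between a study group and its matched comparison group. The toy gain-score contrast below illustrates that arithmetic only; the scores are invented for illustration and are not study results, and the actual analysis may well use a more elaborate model.

```python
def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

def gain_contrast(study_pre: list[float], study_post: list[float],
                  vcg_pre: list[float], vcg_post: list[float]) -> float:
    """Mean pre-to-post gain of the study group minus the mean gain
    of its virtual comparison group; positive values favor the PDI."""
    study_gain = mean([post - pre for pre, post in zip(study_pre, study_post)])
    vcg_gain = mean([post - pre for pre, post in zip(vcg_pre, vcg_post)])
    return study_gain - vcg_gain

# Hypothetical scaled scores for three matched students:
effect = gain_contrast([210, 215, 220], [218, 222, 229],
                       [210, 215, 220], [216, 221, 226])
print(effect)  # 2.0
```

Differencing each group against its own pretest removes level differences between cohorts, and differencing the two gains removes growth that both groups would have shown anyway, which is the sense in which the design "controls" for cohort variance.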

  8. Contact Info • John Cronin: john.cronin@nwea.org • Jeff C. Marshall: marsha9@clemson.edu • Y. Xiang: yun.xiang@nwea.org • Websites: www.clemson.edu/iim; www.nwea.org/research.asp
