
Student Growth Models for Principal and Student Evaluation
A Model Description by Benjamin Ditkowsky, Ph.D.

Presentation Transcript


  1. Student Growth Models for Principal and Student Evaluation A Model Description By Benjamin Ditkowsky, Ph.D.

  2. It’s Not Always Easy To See What Our Students Can And Cannot Do

  3. In Addition, Sometimes Some Things Just Don’t Match

  4. Can We Just Use The Same Test At Two Points In Time (Use Gain Scores)? • There are some problems with gain scores • Every observed score is made up of a true score and measurement error: x_i = x_i^TRUE + e_i • Error is always present, but we don’t know how much error is present • When we subtract one score from another (i.e., compute a gain score), the error in the result is greater than in either of the tests from which it originates • x_gain = x_2^TRUE − x_1^TRUE + (e_2 − e_1) (the true scores subtract cleanly, but both error terms propagate into the gain)
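The error-propagation claim above can be checked with a short simulation. This is a minimal sketch, assuming independent, normally distributed measurement error with the same spread on both tests; all numbers are illustrative:

```python
import random
import statistics

random.seed(0)

ERROR_SD = 5          # hypothetical measurement error (SD) on each test
N_STUDENTS = 100_000  # simulated score-pairs

single_test_errors = []
gain_score_errors = []
for _ in range(N_STUDENTS):
    e1 = random.gauss(0, ERROR_SD)      # error in the time-1 score
    e2 = random.gauss(0, ERROR_SD)      # error in the time-2 score
    single_test_errors.append(e1)
    gain_score_errors.append(e2 - e1)   # error carried into the gain score

# The spread of the gain-score error is about sqrt(2) times the spread of
# the error in either single test, even though the true gain is unchanged.
print(round(statistics.stdev(single_test_errors), 1))
print(round(statistics.stdev(gain_score_errors), 1))
```

With independent errors the variances add, so the gain-score error SD comes out near sqrt(2) × 5 ≈ 7.1 points, larger than the 5-point error of either test alone.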

  5. Error! There Are Many Reasons Other Than The Test That Student Scores Vary From One Test To Another

  6. Complex Solutions Exist, But With Complexity Come Other Concerns • Do complex solutions improve teaching and learning? • Do complex solutions maintain a focus on the connection to standards? • Do complex solutions, applied to our data, violate basic statistical assumptions? • Do complex solutions focus on details or big ideas? • Overly complex statistical models have not been shown to be substantially more effective than less complex models

  7. Accountability Should Be • Trustworthy • For a solution to be trustworthy, it should meet all of the assumptions upon which it is based • Usable • For a solution to be usable, it should help teachers and principals adjust instruction and intervention in real time • Accessible • For a solution to be accessible, it should be transparent

  8. We Will Use Student Data For Evaluation

  9. We Have Lots Of Data Although we don’t necessarily have the same data for all students… • Complex solutions have complex algorithms to estimate missing information. • For most students… • we have enough information • we can infer the rest.

  10. Complex Statistical Analysis Is Unnecessary If it looks like a duck, swims like a duck, walks like a duck, and quacks like a duck, it probably is a duck. Given sufficient evidence, complex analysis is unnecessary.

  11. We Know What We Expect We Use Cut Scores To Categorize Level Of Achievement And Progress • Cut scores are based on expected performance • Test performance is categorized as • Exceeding Standards, • Meeting Standards, • Below Standards, • Academic Warning • Progress is defined by our assignment of the value or worth of improvement in the categorical performance of test scores from one time to the next
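Categorizing a test score against cut scores is a simple lookup. A minimal sketch; the cut values here are illustrative placeholders, not the actual ISAT cuts (which vary by grade and subject):

```python
# Illustrative cut scores only; real ISAT cuts differ by grade and subject.
CUTS = [
    (120, "Academic Warning"),   # scores below 120
    (160, "Below Standards"),    # 120-159
    (200, "Meets Standards"),    # 160-199
]

def categorize(score):
    """Map a raw score onto a performance category via cut scores."""
    for cut, label in CUTS:
        if score < cut:
            return label
    return "Exceeds Standards"   # 200 and above

print(categorize(150))   # Below Standards
print(categorize(210))   # Exceeds Standards
```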

  12. Cut Scores Separate Performance Levels • ISAT scores are broken down into performance designations (Academic Warning, Below Standards, Meets Standards, Exceeds Standards) • Typical growth in language development on ACCESS has been established (W.I.D.A. Consortium, 2009) and is calculated from the entry performance level and the change in composite score (typical low- and high-range growth) • R-CBM scores reliably predict state test scores (Silberglitt & Hintze, 2005); cut scores predicting outcomes have been established by aimsweb and within Illinois (i.e., Below Basic, Questionable, Proficient, Confidently Proficient) • M-CAP scores predict state test scores, and typical performance is established across the township • Benchmark assessments can divide scores into performance categories

  13. We Are Explicit About Our Values Growth is defined as a change in performance category from Time 1 to Time 2 • Exceptional Growth (2): maintenance of exceptional performance is highly valued, so any score-pair that moves up two or more categories, or that ends in the highest performance category, is worth two points. • Proficient Growth (1): any score-pair that moves up one category or is maintained within the Meeting Expectations category (i.e., grows at a rate commensurate with the increasing expectations of that category). • Inadequate Growth (-1): a score-pair in which the performance category at Time 1 is higher than at Time 2, or in which growth from Time 1 to Time 2 is not sufficient to move a student from the Below Expectations category into the Meeting Expectations category. • Unsatisfactory Growth (-2): a score-pair in which the performance category drops by two categories from Time 1 to Time 2, or ends in the Academic Warning category.

  14. A Value Table Weights Growth
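The value table can be expressed directly from the definitions on slide 13. A sketch, assuming the four ISAT-style category names; the rule ordering encodes the stated precedence (score-pairs ending at the top or bottom are classified first):

```python
# Ordered performance categories, lowest to highest (ISAT designations).
LEVELS = ["Academic Warning", "Below Standards",
          "Meets Standards", "Exceeds Standards"]

def growth_value(cat1, cat2):
    """Weight a score-pair by its change in performance category."""
    i1, i2 = LEVELS.index(cat1), LEVELS.index(cat2)
    if i2 - i1 >= 2 or cat2 == "Exceeds Standards":
        return 2    # exceptional: up two+ categories, or ends at the top
    if i2 - i1 <= -2 or cat2 == "Academic Warning":
        return -2   # unsatisfactory: drops two categories, or ends at the bottom
    if i2 < i1:
        return -1   # inadequate: dropped one category
    if i2 > i1 or cat2 == "Meets Standards":
        return 1    # proficient: up one category, or maintained at Meets
    return -1       # inadequate: stalled in Below Standards

print(growth_value("Below Standards", "Meets Standards"))    # 1
print(growth_value("Academic Warning", "Meets Standards"))   # 2
```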

  15. Miguel Is A 2nd Grade Student The Data we have for him include: • Type I • ACCESS from grade K and 1 • Aimsweb - Fall and Winter • RCBM • MCAP (not in example) • Type II • Vocabulary Matching Fall and Winter • Avenues Pre-Post Assessments for ELLs • Type III • Pre-Post Classroom Assessments

  16. Miguel Is A 2nd Grade Student Each ACCESS pair is assigned a growth value based on the comparison of scores and the expectations for growth • Miguel is in second grade. • Baseline: in K his language proficiency level was 1.8 • Expectations: the low to high range for growth was 44 to 90 • Scores: 297 − 209 = 88; his gain of 88 is less than the high-range value of 90 • Category designation: his growth level is considered in the proficient range (Growth categories: Excellent growth, Proficient growth, Needs Improvement, Unsatisfactory) (WIDA, March 2009)

  17. Miguel Is A 2nd Grade Student Fall performance was below basic expectations; in Winter, Miguel’s performance was in the proficient range • Miguel is in second grade. • Baseline: in Fall his score of 15 WRC indicated Below Basic performance • Expectations: the Winter proficient score is 65 WRC • Growth: in Winter, Miguel scored in the proficient range; his movement up two categories (Below to Proficient) is considered excellent growth • Category designation: his growth level is considered in the excellent range (MeasuredEffects.com, 2010; ISAT cut scores)

  18. Miguel Is A 2nd Grade Student Relatively low scores are in the proficient range in the Fall; Miguel’s score of 2 is considered proficient. In Winter, expectations for VM are higher, and Miguel’s score increased sufficiently to remain on target. • Miguel is in second grade. • Baseline: in Fall his score of 2 indicated his performance was typical of the Meets category • Expectations: the Winter proficient range is from 7 to 10 • Growth: in Winter, Miguel scored in the proficient range; his performance indicated that his growth was consistent with expectations for his grade level (Meets to Meets) • Category designation: his growth level is considered in the proficient range (Local normative values)

  19. Miguel Is A 2nd Grade Student The Avenues pre-test is broken into 6 levels grouped into 3 bands (Beginning, Intermediate, and Advanced) • Miguel is in second grade. • Baseline: in Fall his proficiency level was 1, indicating early Beginning language proficiency • Expectations: the Winter Intermediate language range is from 3 to 4 • Growth: in Winter, Miguel’s language level increased from level 1 to 2; though not yet in the Intermediate range, his scores demonstrated growth • Category designation: his growth level is considered in the proficient range

  20. A Demonstration Of Proficient Growth • Each score for each second grade student in the targeted ELL subgroup has been examined and categorized based on cut scores. • Each available score-pair has been reviewed, and growth has been categorized and weighted. • Individual ratings were calculated: m_Miguel = m[1, 2, 1, 1] = 1 • The overall group rating was calculated: Rating = m[1, −1, 1, 2, 1] = 1 → Proficient Growth
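The aggregation on this slide appears to be a mean over growth values, rounded to the nearest whole rating; treating the m[...] notation as a rounded mean is an assumption, not something the slide states explicitly:

```python
def mean_rating(values):
    """Average a list of growth values; round to the nearest whole rating
    (assumed interpretation of the slide's m[...] notation)."""
    return round(sum(values) / len(values))

# Miguel's four score-pairs: m[1, 2, 1, 1] -> 1 (Proficient Growth)
print(mean_rating([1, 2, 1, 1]))

# The group rating pools one overall value per student: m[1, -1, 1, 2, 1] -> 1
print(mean_rating([1, -1, 1, 2, 1]))
```

Both the individual mean (1.25) and the group mean (0.8) round to 1, matching the slide's Proficient Growth result.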

  21. Alternate Illustration: Why Use More Than Two Administrations Of The Same Test From One Source? • It is possible that different tests indicate different patterns without changing the overall rating. • For example: English language proficiency is growing at a rate above what is expected; automaticity with basic skills in reading may need to improve; automaticity with basic skills in math may be on track; instructional vocabulary may be insufficient for continued growth in grade-level material. • The differences in patterns by test may be diagnostically important, but are insufficient for high-stakes evaluative purposes. • While there may be some areas where progress is worth further investigation, for the purposes of evaluating and categorizing, overall, across measures, academic growth is occurring at an acceptable rate.

  22. Timmy Is A 4th Grade Student What Data Might We Use For Him? The data we might have for him include: • Type I • ISAT from grade 3 and 4 • Aimsweb - Fall and Winter • RCBM • MCAP • Type II • Vocabulary Matching • Type III • Pre-Post Classroom Assessments

  23. Gertrude Is A 7th Grade Student What Data Might We Use For Her? The data we might have for her include: • Type I • ISAT from grade 3 and 4 • Aimsweb - Fall and Winter • RCBM • MCAP • Type II • Vocabulary Matching • Prentice Hall Benchmark Assessments • Type III • Pre-Post Classroom Assessments

  24. Setting Student Data Goals For Principal Evaluation By February 201x, given available Type I and Type II assessments administered at two points in time school wide, students at school name will demonstrate an increase in the proportion making adequate progress from 35% to 40%. Goals can be set to increase the amount of progress made

  25. Setting Student Data Goals For Principal Evaluation By February 201x, given available Type I and Type II assessments administered at two points in time school wide, students identified as define cohort at school name will demonstrate an increase in the proportion making adequate progress from 68% to 80%. Goals can be set to increase the proportion of students making adequate progress for a particular subgroup

  26. Setting Student Data Goals For Principal Evaluation By February 201x, given available Type I and Type II assessments administered at two points in time school wide, students identified as define cohort at school name will demonstrate a decrease in the proportion making unsatisfactory progress from 8% to 4%. Goals can be set to decrease the proportion of students not making adequate progress for a particular subgroup

  27. Setting Student Data Goals For Principal Evaluation By February 201x, given available Type I and Type II assessments administered at two points in time school wide, students at school name will demonstrate proficient or excellent growth as defined by the convergence and magnitude of data classified with district defined value tables. Goals can be set to achieve an overall rating for the demonstration of student growth
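Checking progress against goals like these reduces to counting proportions over student growth ratings. A sketch with made-up ratings, assuming a value of 1 (proficient) or 2 (excellent) counts as adequate progress and -2 as unsatisfactory:

```python
# Illustrative overall growth ratings, one per student (not real data).
ratings = [2, 1, -1, 1, 1, -2, 1, 2, -1, 1]

adequate = sum(1 for r in ratings if r >= 1) / len(ratings)
unsatisfactory = sum(1 for r in ratings if r == -2) / len(ratings)

print(f"{adequate:.0%} making adequate progress")        # 70% making adequate progress
print(f"{unsatisfactory:.0%} unsatisfactory progress")   # 10% unsatisfactory progress
```

Comparing these proportions across two points in time gives the before/after figures used in the goal statements (e.g., 35% to 40%).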
