
Going Green – Jigsaw Approach



Presentation Transcript


  1. Going Green – Jigsaw Approach
  Each Analysis Team will present a summary of the feedback on their topic from the pilot, with recommendations. Individually, we will consider the information just presented and brainstorm ways to Go Green.

  2. Going Green – Jigsaw Approach
  After the last presentation and reflection, we will take a break and then use the NCDPI Retreat Planning Doc (http://bit.ly/NCDPIRetreat) for Going Green. As we work, please join a planning group and start prioritizing for Year 1.

  3. Evaluator Survey and Descriptive Feedback
  • Evaluator Survey (Questions 1-9)
  • Descriptive Feedback (Rounds 1 and 2)
  • 27 respondents for the survey
  • Team members:
    • Christie Lynch Ebert (Lead)
    • Mike Martin
    • Nadine McBride
    • Tara Patterson

  4. Question 1 (N=27)

  5. Question 2 (74% spent 2-5 hours evaluating submissions)

  6. Question 3 (85% spent 0-3 hours)

  7. Question 4 (70.3% confident or very confident)

  8. Question 5
  • What signs should an Evaluator look for to score a submission as Exceeds Expected Growth? (25 responses) – see narrative
  • Summary indicators from narrative:
    • Exemplary/significant growth in relation to the CO
    • Demonstration of exceeding standards
    • Quality of submission, meaningful tasks, details in submission descriptions
    • Concordance between goals/standards; topic/resources/activities; time and results
    • Comparison to unpacking documents
    • Connection from teacher to student's growth is very clear (extraordinary and beyond meeting the expectation of the CO – cannot be quantified in data alone)
    • Complete analysis of how teaching strategies impacted student growth (rubric or checklist)
    • Evidence of going above and beyond the scope of what was taught in the lesson

  9. Question 5
  • What signs should an Evaluator look for to score a submission as Exceeds Expected Growth? (25 responses) – see narrative
  • Summary indicators from narrative (continued):
    • National Board-quality descriptions in writing and significant evidence in student performance
    • Aligned instruction and assessment with the objective, leading to the student surpassing normal expectations
    • Analysis of: pre-assessment and gaps; teaching strategies and articulation of their purposeful use; how teaching strategies impacted student growth (not just the end product as evidence without meaningful, authentic analysis)
    • Depends on the CO – e.g., a move from Novice Mid to Novice High, not just growth within the same proficiency level
    • Depth to the evaluation
    • Goes beyond the norm according to the standard
    • Alignment: all required information, especially the narrative, is included; identification of the Clarifying Objective
    • Targeted objective was exceeded and thoroughly explained and demonstrated, with and without words

  10. Question 5
  • What signs should an Evaluator look for to score a submission as Exceeds Expected Growth? (25 responses) – see narrative
  • Summary indicators from narrative (continued):
    • Videos, pictures, student work, self-assessments, teacher rubrics pre and post
    • Comparison between more than one student – how students were pushed to grow, how the middle of the class grew, and how the low student grew. Look at more than 2-3 students for credibility. It would be nice to see a whole-class rubric score rather than picking out 3-4 students in the class to highlight and show work.
    • Look for growth over time and compare to context information and work samples; view the submission holistically
    • Look at every aspect of evidence 1 and 2, how excellent they are, and how clearly the growth can be determined

  11. Question 6 (85% spent 0-10 hours)

  12. Question 7

  13. Question 7 (What issues did you have with the submissions you evaluated?)
  • Narrative responses:
    • Examples did not adequately address the CO
    • Too many log-in issues; could not get to submissions
    • Everything needs to be on one page – evaluators need to see the evidence while reading descriptions; flipping back and forth was frustrating
    • Student sample had incorrect answers or information as a result of the teacher providing incorrect instruction
    • Too many COs identified to focus on growth
    • Able to see growth, but a lack of meaningful analysis of how the teacher influenced the student's growth
    • Unclear measurement devices
    • Submitter not skilled in writing about evidence
    • Some COs not developmentally appropriate
    • Submitter provided only one piece of evidence, which makes it hard to evaluate

  14. Question 8: guidelines
  • What additional information or materials would have helped you better evaluate the submissions? (23 responses – see narrative)
  • Guidelines:
    • Complete background information on the duration of the course, minutes or hours per day in the course, number of days of instruction per week, and hours completed prior to collecting pre and post evidence
    • Concise instructions on what information to submit and how to submit it
    • Standard form of questions
    • A running monologue is not conducive to evaluation
    • Suggest specific questions asked of each teacher, and then space for additional information as needed

  15. Question 8: guidelines
  • What additional information or materials would have helped you better evaluate the submissions? (23 responses – see narrative)
  • Guidelines:
    • Training on what to focus on and a checklist for feedback – writing anything down in feedback allows too much leeway for untrained teachers/evaluators
    • More segments of student submissions (beginning, mid-point, end)
    • Information asked for is more than sufficient if the submitter provides enough data and detail; teachers should not expect to meet expectations based on the narrative alone
    • Consistent tools to measure every student
    • Too vague, too many COs – helpful to have continuity of COs and proficiency levels

  16. Question 8: guidelines
  • What additional information or materials would have helped you better evaluate the submissions? (23 responses – see narrative)
  • Guidelines:
    • Rubric with examples that meet, exceed, or do not meet expectations
    • Checklist of things to look for when evaluating
    • Clear criteria for submitters – work samples are not enough; how did activities lead to student growth? Articulation of learning styles and a complete checklist or rubric for the post-assessment (missing from samples)
    • Clearly defined parameters for what "exceeds" would look like; more guidelines for "not met"
    • Printable guide on evaluations
    • Content-specific points related to scoring and agreement on how to score

  17. Question 8: format
  • What additional information or materials would have helped you better evaluate the submissions? (23 responses – see narrative)
  • Format:
    • Samples need a common format (jpg, doc, pdf, etc.)
    • Consistency in acceptable formats (e.g., length of submission and file formats)
    • Need video to see growth in dance (not pictures)
    • Easier format to look up submissions and evidence
    • All materials in one location
    • Samples labeled correctly
    • Samples need to be viewed side by side to judge growth
    • Accessibility to samples (blocked YouTube site)
    • Easier format

  18. Question 9
  • Now that you have completed the pilot, what would you do differently to improve the evaluation process? (23 responses – see narrative)
    • Specific training for evaluators
    • Clear, concise instructions on what constitutes evidence; how in-depth should evidence be?
    • What duration of time is a fair measuring stick for a WL (Arts Ed, HL) teacher?
    • Can evidence of student growth include products with teacher feedback?
    • Software – make access easier, linear, and step-by-step
    • Process – clear examples needed throughout the process
    • Size and validity of evidence in the portfolio – even in the worst case, isn't it possible to show that 3 or so students have grown?

  19. Question 9 (continued)
  • Now that you have completed the pilot, what would you do differently to improve the evaluation process? (23 responses – see narrative)
    • Who will train principals/administrators?
    • Improve the platform, make the process anonymous, provide evaluator training, and provide specific feedback items
    • Utilize trained reviewers for success
    • Reviewers must be content-specific (e.g., you cannot ask a choral music teacher to review a band portfolio)
    • I am very concerned about K-8 teachers with heavy loads (700-900 students weekly) – how do we adjust for inequities across the state in student contact time, resources, etc.?

  20. Question 9 (continued)
  • Now that you have completed the pilot, what would you do differently to improve the evaluation process? (23 responses – see narrative)
    • The platform should have been evaluated prior to this pilot – very frustrating and upsetting; improve user-friendliness
    • It is all very subjective
    • Assign COs at the beginning of the year and mandate that all submissions for that level of class focus on those COs – the teacher could pick students and work samples for pre-determined areas
    • Guidelines more in-depth; nice to have a specific CO or several different COs to choose from
    • Process needs to be clearer and subject-specific

  21. Descriptive feedback (Rounds 1 and 2)
  • A document was created and posted on the Google site with the scoring guide and descriptive feedback
  • We have received numerous inquiries from pilot participants desiring feedback on their submissions, so one action should be to determine how and what might be shared
  • Based on analysis and conversation with the team, it is recommended that we choose one of the following (see examples on following slides):
    • 1. Optional descriptive feedback
    • 2. No descriptive feedback (this was advised by legal counsel in TN)
    • 3. Pull-down menu of descriptive feedback or rationale (may need to continue to be built over time) – current data does not support generalized options, so this will need continued work
    • 4. Slider scale with criteria listed for each rating

  22. 1. Optional descriptive feedback grid
  • This grid may be completed by the evaluator and would be accessible to the submitter. The feedback does not impact the score.
  • [Reminder about professional language and clear, objective statements]

  23. 2. No descriptive feedback
  • This was an issue discussed with Dru on May 20, 2013.
  • TN legal counsel advised against providing descriptive feedback.
  • Some effort may be underway to collect feedback to create a generalized drop-down or similar means of providing feedback.

  24. 3. Pull-down menu: Descriptive feedback/rationale

  25. 4. Slider with criteria
  • Growth Range:
    • Does Not Meet Expected Growth – The student sample did not provide adequate evidence of student growth in relationship to the identified Clarifying Objective(s).
    • Meets Expected Growth – The student sample provided adequate evidence of student growth in relationship to the identified Clarifying Objective(s).
    • Exceeds Expected Growth – The student sample significantly exceeded growth expectations in relationship to the identified Clarifying Objective(s).
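  For illustration only, here is a minimal sketch (hypothetical, not taken from the pilot platform) of how the slider's three rating levels and their criteria text could be encoded so an evaluation tool can display the criteria for whichever position the evaluator selects; the names GROWTH_RANGE and criteria_for are invented for this example.

```python
# Hypothetical sketch of the proposed slider's rating levels; the criteria text
# is quoted from the slide above, everything else is illustrative only.
GROWTH_RANGE = [
    ("Does Not Meet Expected Growth",
     "The student sample did not provide adequate evidence of student growth "
     "in relationship to the identified Clarifying Objective(s)."),
    ("Meets Expected Growth",
     "The student sample provided adequate evidence of student growth "
     "in relationship to the identified Clarifying Objective(s)."),
    ("Exceeds Expected Growth",
     "The student sample significantly exceeded growth expectations "
     "in relationship to the identified Clarifying Objective(s)."),
]

def criteria_for(position: int) -> str:
    """Return the label and criteria shown for a slider position (0, 1, or 2)."""
    label, criteria = GROWTH_RANGE[position]
    return f"{label}: {criteria}"

# Example: the middle slider position shows the "Meets Expected Growth" criteria.
print(criteria_for(1))
```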

  26. Pros and cons of providing feedback

  27. Training recommendations
  • Submission:
    • Recommend using the descriptive feedback to inform training components for developing evidence collections.
    • Questions: Is this, or will it be, tied to Pay for Performance? Are we setting our teachers up to be unable to receive additional pay because "exceeds" is so difficult to define? Is it possible to receive an exceeds rating in extremely limited teaching situations?
  • Evaluation:
    • Recommend giving examples for evaluators to score and then compare with an expert rating.
    • Include the rationale for why the score was selected. (Your score, the expert's score, and why?)

  28. Examples with expert rationale

  29. Recommendations based on analysis
  • Evaluation:
    • 74% of evaluators spent 2-5 hours reviewing submissions during the pilot
    • Estimate approx. 10 hours of reviewer time for each teacher's collection of 5 pieces of evidence (2 points in time) to be reviewed by 2 separate reviewers × approx. 8,500 teachers = 85,000 hours of review
    • Example: if each reviewer reviewed 8 collections, this would equal 40 hours per reviewer
    • State or regional review committees will need to take into account the proportions of teachers in each content area and grade span:
      • 1,900 – World Language teachers
      • 1,200 – Healthful Living teachers
      • 5,400 – Arts teachers
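  As a rough check on the figures above, here is a small hypothetical sketch of the workload arithmetic, using only the slide's own estimates and assuming (our assumption, not stated on the slide) that the approx. 10 hours per collection is split evenly between its 2 reviewers, which is what makes 8 collections come to 40 hours per reviewer.

```python
# Reviewer-workload estimate using the figures from the slide above.
teachers = 8_500                 # approx. teachers submitting collections
hours_per_collection = 10        # approx. total review hours per collection (both reviewers combined)
reviewers_per_collection = 2     # separate reviewers per collection
collections_per_reviewer = 8     # example reviewer load from the slide

total_review_hours = teachers * hours_per_collection                  # 85,000 hours
hours_per_reviewer = (hours_per_collection / reviewers_per_collection
                      * collections_per_reviewer)                     # 40.0 hours

# Proportional allocation by content area (teacher counts from the slide).
counts = {"World Language": 1_900, "Healthful Living": 1_200, "Arts": 5_400}
total = sum(counts.values())                                          # 8,500 teachers
shares = {area: round(n / total, 3) for area, n in counts.items()}

print(total_review_hours, hours_per_reviewer, shares)
```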

  30. Recommendations based on analysis

  31. Example: TN

  32. Recommendations based on analysis
  • Narrative comments identify the need for decision making on various topics, such as:
    • Content-specific evaluation
    • Consideration for various teaching situations (K-8)
    • Clear, precise process (training materials for submitters and evaluators, with content-specific examples)
    • Functional online platform
    • Guidance for selecting/aligning with COs
    • Guidance on the focus of the narrative in the submission process
    • Parameters for rating categories and subject-specific examples of each rating
    • Other

  33. Going Green – Jigsaw Approach
  Individually, consider the feedback and recommendations just presented and brainstorm ways to Go Green with the Analysis of Student Work (ASW) Evaluation Process.

  34. Planning Groups: Form & Prioritize for Year 1 Implementation Needs

  35. Year 1 Implementation

  36. 1:00 – 3:45 p.m. Work Sessions for Planning Groups. Join a group at http://bit.ly/NCDPIRetreat
