Summative Assessment: Rubrics and Tests


Presentation Transcript


    1. Summative Assessment: Rubrics and Tests
    Effective Teaching and Learning, Baker College

    2. Outcomes
    - Apply a systematic process for creating a test blueprint
    - Identify attributes of effective test questions
    - Explain the advantages and disadvantages of different types of test questions
    - Assess the quality of tests and test items
    - Create samples of effective questions

    3. What type of assessment?
    - Procedural knowledge
    - Declarative knowledge

    4. Test Writing Process
    - Planning the test
    - Writing test items
    - Selecting test items
    - Formatting the test
    - Assessing the test
    - Revising the test
    - Using the test
    - After the test

    5. Planning the Test
    - Content blueprint
    - Learning outcomes
    - Weight
    - Length
    - Types of items
    - Number of items

    6. Test Blueprint
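
    The blueprint table itself appeared as an image on the original slide. As an illustration only, here is a minimal Python sketch of what a blueprint might capture; the outcomes, weights, and item mixes are invented, not taken from the slide.

        # A hypothetical test blueprint: each learning outcome gets a weight
        # (its share of the test) and a mix of item types.
        # All names and numbers are illustrative, not from the original slide.
        blueprint = {
            "Define key assessment terms":    {"weight": 0.2, "types": {"multiple-choice": 8, "true-false": 4}},
            "Apply the test writing process": {"weight": 0.5, "types": {"multiple-choice": 10, "short-answer": 3}},
            "Evaluate sample test items":     {"weight": 0.3, "types": {"essay": 2}},
        }

        total_items = sum(n for row in blueprint.values() for n in row["types"].values())
        total_weight = sum(row["weight"] for row in blueprint.values())
        assert abs(total_weight - 1.0) < 1e-9, "outcome weights should sum to 1"
        print(f"{total_items} items across {len(blueprint)} outcomes")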

    7. Types of Items
    Recognition:
    - True-false
    - Multiple-choice
    - Multiple-answer
    - Matching
    - Ordering
    Recall:
    - Short answer
    - Completion
    - Essay

    8. Average Response Time

    9. Writing Test Items
    - Simple and direct wording
    - Avoid jargon
    - Avoid trivia items
    - Match items to learning outcomes
    - Each item has an agreed-upon correct answer
    - Write more questions than you will need

    10. Multiple-choice Items
    Stem:
    - Direct question
    - Incomplete statement
    Responses:
    - One correct answer
    - Multiple distracters

    11. Stem
    - Clearly worded
    - One idea
    - Avoid the use of negatives
    - Enough information to answer the question
    - Direct questions preferred
    - Blanks at the end of the stem
    - Move words repeated in all responses into the stem

    12. Responses
    - 3-5 per item
    - Avoid "all of the above" and "none of the above"
    - Grammatically consistent with the stem
    - Similar length and structure
    - Avoid absolute words
    - Listed in a logical order
    - Mutually exclusive and not overlapping

    13. Distracters
    - Plausible
    - Common misconceptions
    - Logical misinterpretations
    - Clichés
    - Partial answers
    - Technical terms or jargon

    14. Example
    What is the minimum number of responses for a multiple-choice item?
    A) 2
    B) 3
    C) 4
    D) 5

    15. Application Example
    What problem exists in the following multiple-choice stem: "________ is the most common type of test item."
    A) Absolute words should be avoided in the stem.
    B) The stem contains more than one idea or concept.
    C) Not enough information is presented to answer the question.
    D) The fill-in-the-blank should come at the end.

    16. Analysis and Evaluation Example: Stem

    17. Analysis and Evaluation Example: Responses
    Based on the process described in Effective Classroom Tests, how would you judge this answer?
    A) EXCELLENT (all steps in the right order with correct, clear, and complete descriptions)
    B) GOOD (all stages correct in the right order, but the descriptions are not as complete as they should be)
    C) MEDIOCRE (one or two stages are missing, OR the stages are in the wrong order, OR the explanations are not complete, OR the explanations are irrelevant)
    D) UNACCEPTABLE (one or more stages are missing AND the explanations are not complete AND/OR they are irrelevant)

    18. Poor Question 1
    Good multiple choice items:
    A) are easy to write
    B) can only test memorized content
    C) are better than essay items
    D) there is no such thing
    E) can test a wide range of content

    19. Poor Question 2
    Which of the following characteristics is not true of completion test items but is an important distinguishing attribute of matching tests, multiple-choice questions, and true-false items?
    A) They are objective test items.
    B) They require knowledge recognition but not production.
    C) Much more difficult to construct.

    20. Poor Question 3
    Which of the following statements is FALSE?
    A) Misfeasance is the improperly doing of an illegal act.
    B) Nonfeasance is improperly doing a legal act.
    C) Nonfeasance is the failure to do an act that one must do legally.
    D) Misfeasance is the failure to PROPERLY do an act that one has a duty to perform.
    E) None of the above.

    21. Poor Question 4
    __________ is/are the best method to determine if students have learned something.
    A) Comprehensive Exam
    B) Homework Assignments
    C) Pop Quizzes
    D) Research Paper

    22. Selecting Test Items
    Outcome weight x number of questions by type = number of questions of each type for the outcome
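
    A minimal sketch of that arithmetic in Python; the outcome weights and per-type question counts are invented for illustration:

        # Allocate each question type across outcomes by outcome weight.
        # Weights and totals are illustrative, not from the original slide.
        outcome_weights = {"Outcome 1": 0.5, "Outcome 2": 0.3, "Outcome 3": 0.2}
        questions_by_type = {"multiple-choice": 20, "short-answer": 5}

        for outcome, weight in outcome_weights.items():
            for qtype, total in questions_by_type.items():
                # outcome weight x questions of this type
                #   = questions of this type for this outcome
                n = round(weight * total)
                print(f"{outcome}: {n} {qtype} question(s)")
        # Note: rounding can make per-type totals drift by a question or
        # two; adjust the final counts by hand so they match the blueprint.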

    23. Formatting the Test
    - Group items by type
    - Sort items by increasing difficulty
    - Add instructions
    - Review layout and pagination
    - Write the answer key

    24. Assessing the Test
    Self (2-3 days after writing the test):
    - Clarity
    - Clues in items to other items
    Non-expert:
    - Clarity
    - Contextual clues
    Peer:
    - Content
    - Weighting to outcomes
    - Answer key
    Students:
    - Clarity
    - Content

    25. Test Taking Procedures
    - Use of notes or other materials
    - Time limits

    26. After the Test
    - Item analysis
    - Areas for review
    - Test revisions
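
    The slide names item analysis without spelling out the statistics. Two common choices are the difficulty index (the proportion of students who answered an item correctly) and the upper-lower discrimination index (how much better top scorers did on the item than bottom scorers). A sketch, assuming items scored 0/1 and a list of total test scores:

        # Item analysis sketch: difficulty and discrimination for one item.
        # These two statistics are a common choice; the original slide does
        # not prescribe a particular method.
        def item_analysis(item_scores, total_scores, group_frac=0.27):
            """item_scores: 1/0 per student for this item;
            total_scores: each student's total test score."""
            n = len(item_scores)
            difficulty = sum(item_scores) / n  # proportion answering correctly

            # Order item scores by total test score, best students first.
            ranked = [s for s, _ in sorted(zip(item_scores, total_scores),
                                           key=lambda pair: pair[1], reverse=True)]
            k = max(1, int(n * group_frac))
            upper = sum(ranked[:k]) / k   # top group's success rate
            lower = sum(ranked[-k:]) / k  # bottom group's success rate
            # Near-zero or negative discrimination flags an item for review.
            return difficulty, upper - lower

        p, d = item_analysis([1, 1, 1, 0, 1, 0, 1, 0, 0, 0],
                             [95, 90, 88, 85, 80, 72, 70, 65, 60, 55])
        print(f"difficulty={p:.2f}, discrimination={d:.2f}")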

    27. Activity
    - Write two test questions on any topic
    - One question should be an example of a good test item
    - One question should be an example of a poorly written test item

    28. Share
    - Share your two questions with a partner
    - Can they determine which is good and which is bad?
    - Can they explain what makes one poorly written?
    - As a team, how can you fix the poorly written question?

    29. Intermission

    30. Outcomes
    - Determine what characteristics are important in evaluating student work
    - Evaluate rubrics, analytic scales, and other evaluation methods
    - Describe the contents of a good rubric
    - Identify rubrics already in use at Baker College
    - Begin work on a rubric for a class

    31. What is a Rubric? A rubric is a scoring tool or guide that lists the specific criteria and the ranges for multiple levels of achievement for a piece of work or performance. A rubric consists of a set of well-defined factors and criteria describing the dimensions of an assignment to be assessed or evaluated.

    32. Parts of a Rubric
    - Scale (columns)
    - Dimensions (rows)
    - Criteria descriptions (cells)
    (Reference a sample rubric handout at this point.)

    33. Benefits of Rubrics
    - Communicates the instructor's expectations
    - Streamlines the process for feedback to the student
    - Facilitates equitable grading
    - Standardizes assessment across different instructors

    34. Uses for Rubrics
    - Papers
    - Presentations
    - Projects
    - Essays
    - Homework
    - Case studies
    - Participation/class discussion
    - Portfolios

    35. Types of Rubrics
    - Analytic (page 11)
    - Holistic (pages 12 and 13)
    - Checklist (page 14)
    - Scoring guide (page 15)
    (Show examples of all 4.)

    36. Creating a Rubric
    - Identify components/outcomes of the assignment
    - Determine a scale
    - Add criteria
    - Assign points
    - Set component weights (optional)
    - Assess the rubric
    - Test and revise

    37. Activity
    - Split into groups of 3-4
    - Determine team roles
    - Select an assignment that needs a rubric
      - Can be a specific assignment, such as a research paper for ENG 102
      - Can be of a more general nature, such as a class presentation

    38. Step 1: Identify Components
    - List 5 major objectives/outcomes of the assignment
    - Write these items as the row headers of the sheet provided

    39. Step 2: Determine a Scale
    - Aim for 3-5 levels
    - Can use an odd or even number of levels
    - Use the headings on the next slide for ideas
    - Write these as column headings on the sheet provided

    40. Potential Column Headings
    - Outstanding | Accomplished | Proficient | Developing | Beginning
    - Accomplished | Average | Developing | Beginning
    - Excellent | Good | Needs Improvement | Unsatisfactory
    - Exceptional | Acceptable | Marginal | Unacceptable
    - Expert | Practitioner | Apprentice | Novice
    - Professional | Adequate | Needs Work | You're Fired
    - Exceeds Expectation | On Target | Beginning
    - Exemplary | Competent | Developing
    - High | Medium | Low
    - Outstanding | Proficient | Shows Potential

    41. Step 3: Add Criteria
    - Create descriptions for each level of performance for each criterion
      - Bullet points or short paragraphs both work
    - Write these criteria in the cells of the sheet provided

    42. Step 4: Assign Points
    - Assign points for each level of performance, using either:
      - Discrete values (5, 4, 3, 2, 1)
      - Ranges (10-9) for each level
    - Indicate the point value on the sheet provided (normally placed with the scale)

    43. Step 5: Set Component Weights
    - Allows for different levels of importance (is spelling/grammar more or less important than content?)
    - Determine if weights are necessary for your rubric
    - Assign weights accordingly
    - See the example on pages 17-18 of the handout
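
    A minimal sketch of weighted rubric scoring in Python; the dimensions, weights, and level scores are invented for illustration:

        # Weighted rubric score: each dimension's level score (here 1-5)
        # is scaled by its weight. Dimensions and numbers are illustrative.
        weights = {"Content": 0.5, "Organization": 0.3, "Spelling/Grammar": 0.2}
        scores  = {"Content": 4,   "Organization": 5,   "Spelling/Grammar": 3}

        weighted = sum(weights[dim] * scores[dim] for dim in weights)
        print(f"Weighted score: {weighted:.1f} / 5")  # 0.5*4 + 0.3*5 + 0.2*3 = 4.1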

    44. Step 6: Assess the Rubric
    - Assess your rubric using a metarubric (see examples on pages 11-15 of your handout)
    - Conduct a peer review: ask one or two other instructors to review your rubric
    - Provide time for student review: allow students to ask questions and make comments

    45. Group Project
    - Trade rubrics with another group
    - Assess the rubric using a metarubric from pages 11-15

    46. Discussion
    - What metarubric(s) did you use? Why?
    - What did you see on the other team's rubric that you liked?
    - Could you understand the assignment easily by reviewing the rubric?

    47. Step 7: Implement and Refine
    - Refine your rubric based on feedback from other instructors and students
    - Make notes each time you use the rubric for continuous improvement purposes
    - Share with others

    48. Rubric Reliability & Validity
    Reliability: the likelihood that a given measurement procedure will yield the same description of a given phenomenon if the measurement is repeated.
    Validity: the extent to which a specific measurement provides data that relate to commonly accepted meanings of a particular concept.
    (Babbie, 1986)

    49. Reliability Requires
    - The instructor should reach the same conclusion each time
    - Different instructors should reach similar conclusions (interrater reliability)

    50. Interrater Reliability
    - Independently score a set of student samples
    - Review responses for consistent and inconsistent responses
    - Discuss and reconcile inconsistencies
    - Repeat with a second group of samples
    (Maki, 2004, p. 127)
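
    The slide describes the calibration process rather than a statistic. One common way to quantify the agreement step is percent agreement plus Cohen's kappa, which corrects for chance agreement. A sketch, assuming two raters scoring the same samples on a shared scale; the scores are invented:

        from collections import Counter

        # Interrater agreement for two raters scoring the same samples.
        rater_a = [4, 3, 5, 2, 4, 3, 5, 4]
        rater_b = [4, 3, 4, 2, 4, 2, 5, 4]

        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

        # Chance agreement, from each rater's marginal score distribution.
        count_a, count_b = Counter(rater_a), Counter(rater_b)
        expected = sum(count_a[s] * count_b[s] for s in count_a) / n**2

        kappa = (observed - expected) / (1 - expected)
        print(f"agreement={observed:.2f}, kappa={kappa:.2f}")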

    51. Validity Requires
    - Reliability
    - Comprehensiveness: cover all outcomes
    - Economy: space is usually limited, so be selective about what goes into the rubric
    - Balanced scoring and weighting

    52. Discussion and Questions
