
Improving Student Learning Through the Use of Multisource Assessment & Feedback


Presentation Transcript


  1. Improving Student Learning Through the Use of Multisource Assessment & Feedback
     Jack McGourty and Peter Dominick, Columbia University
     Mary Besterfield-Sacre, Larry Shuman, and Harvey Wolfe, University of Pittsburgh

  2. Multisource Assessment
     A formal process that provides critical information from several sources, such as peers, self, and instructors, on student learning outcomes and specific behaviors and skills, affording the student a better understanding of personal strengths and areas in need of improvement.

  3. Theoretical Considerations
     • Control Theory (Carver & Scheier, 1981)
     • Goal Setting Theory (Locke & Latham, 1990)
     • Both theories focus on:
       • Goal-directed behaviors
       • Use of feedback to achieve goals
       • Self-regulating/monitoring behaviors
       • Use of external comparisons

  4. Classroom Application
     • Computerized survey with student and administrator guidebooks (Team Developer, published by John Wiley & Sons)
     • Competency-based, with two default databases: basic team skills and EC 2000 learning outcomes
     • Self/peer ratings of the frequency with which learning outcomes are observed
     • Provides individual, team, and class feedback
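To make the rating structure concrete, here is a minimal sketch of how a frequency-based self/peer rating could be represented; the class and field names are hypothetical, and this is not the Team Developer implementation.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    """One self or peer observation of a behavioral item (hypothetical model)."""
    rater: str      # student providing the rating
    ratee: str      # student being rated
    dimension: str  # competency dimension, e.g. "Communication"
    item: str       # behavioral item, e.g. "Listens attentively without interrupting"
    frequency: int  # how often the behavior is observed, e.g. on a 1-5 scale

def is_self_rating(r: Rating) -> bool:
    # A self rating is one in which the rater evaluates themselves.
    return r.rater == r.ratee
```

Individual, team, and class feedback can then be produced by aggregating such records at the corresponding level.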

  5. Team Developer Screen

  6. Survey Process
     • Instructor establishes student teams
     • Instructor creates survey questions, typically 40 or more items
     • Best practice is to give students an understanding of the learning outcomes in general, and of the Team Developer process, prior to administration
     • Administer twice per semester: after 3-4 weeks of team interaction and at the end of the project
     • Reports distributed after each administration
     • Work with students on action plans for improvement

  7. Feedback Reports
     • Students
       • Individualized report comparing self-ratings with aggregate team-member ratings
       • Improvement between the interim and final administrations
     • Faculty
       • Aggregate data for teams
       • Comparisons by section/class

  8. Student Report
     ITEM RATINGS BY DIMENSION (Self / Team)
     COMMUNICATION: Helping to sustain an environment where people feel free to speak candidly, articulating ideas clearly and concisely, listening and demonstrating an understanding of others' perspectives.
     • Active Listener (3.8 / 3.8)
       • Listens attentively to other team members without interrupting (4.0 / 4.1)
       • Conveys interest in what others are saying (5.0 / 3.3)
       • Provides others with constructive feedback (4.0 / 4.1)
       • Restates what has been said to show understanding (3.0 / 3.7)
       • Clarifies what others have said to ensure understanding (3.0 / 3.4)
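The Self and Team columns in a report like this pair the student's own rating of each item with the mean of the ratings received from teammates. A minimal sketch of that aggregation, using hypothetical data and function names rather than the published tool:

```python
from statistics import mean

# Each tuple: (rater, ratee, item, frequency score on a 1-5 scale); hypothetical data.
ratings = [
    ("ann", "ann", "Conveys interest in what others are saying", 5),
    ("bob", "ann", "Conveys interest in what others are saying", 3),
    ("cal", "ann", "Conveys interest in what others are saying", 4),
    ("dee", "ann", "Conveys interest in what others are saying", 3),
]

def self_vs_team(ratings, student, item):
    """Return (self rating, mean of peer ratings) for one student and one item."""
    self_scores = [s for rater, ratee, it, s in ratings
                   if ratee == student and rater == student and it == item]
    peer_scores = [s for rater, ratee, it, s in ratings
                   if ratee == student and rater != student and it == item]
    return mean(self_scores), round(mean(peer_scores), 1)

print(self_vs_team(ratings, "ann", "Conveys interest in what others are saying"))
# (5, 3.3): a self rating well above the aggregate peer view, as in the item above
```

A dimension-level row such as Active Listener can then be computed as the average of its item-level values.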

  9. Research Areas
     • Psychometric properties
     • Impact on student learning
     • Triangulation with other methods
       • Correlation among several measures for the same targeted cohort (a three-course sequence) of University of Pittsburgh IE students
       • Self/peer scores on Team Developer, Pre-Post Attitudinal Survey responses, Course Evaluations, Intellectual Development, Concept Maps, and the Learning Styles Inventory

  10. Psychometric Properties
      • Reliability
        • Inter-rater: .37 to .72
        • Internal consistency: .78 to .94
      • Validity
        • Content: expert ratings of team items
        • Criterion: correlations with grades, .21 to .54
        • Convergent/discriminant: .55 and .34
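Internal-consistency figures in this range are conventionally reported as Cronbach's alpha, computed from the item variances and the variance of the summed scale. A minimal generic sketch of that calculation (not the authors' analysis code):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a k-item scale.

    item_scores: one list of scores per item, aligned by respondent,
    e.g. [[4, 3, 5, 4], [4, 4, 3, 5], ...].
    """
    k = len(item_scores)
    sum_item_var = sum(pvariance(scores) for scores in item_scores)
    totals = [sum(per_respondent) for per_respondent in zip(*item_scores)]
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))
```

The criterion-validity figures are correlations between Team Developer scores and course grades across students, as noted above.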

  11. Student Results
      • Selected data from classroom applications:
        • Self-ratings tend to be inflated in the first administration of the instrument; some concern about overall variation
        • Early correlation between self and peer ratings is minimal (r = .31)
        • The gap between self and peer ratings narrows over multiple administrations (r = .67), which is taken as evidence of learning
        • Correlation between faculty and peer ratings: r = .74
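The correlations here relate each student's self rating to the aggregate peer rating for the same student, computed across the class. A minimal sketch of such a computation with hypothetical numbers, assuming a plain Pearson coefficient (which statistics.correlation provides in Python 3.10+):

```python
from statistics import correlation  # Pearson r, available in Python 3.10+

# Hypothetical per-student overall means from a first administration:
self_means = [4.6, 4.8, 4.2, 4.9, 4.4, 4.7]  # self view, typically inflated early on
peer_means = [3.9, 4.1, 3.8, 4.3, 3.5, 4.2]  # aggregate peer view of the same students

r = correlation(self_means, peer_means)
print(f"self-peer agreement: r = {r:.2f}")
# A rise in r across administrations (e.g. from roughly .3 to roughly .7)
# indicates increasing agreement between self and peer views.
```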

  12. Learning Styles, Gender, & Self/Peer Ratings
      • Sensing and intuitive learners rate their opposites lower on various learning attributes
      • Males rate females higher on team attributes than they rate other males; females rate other females higher than they rate males

  13. Implications for the Classroom
      • Need to link course objectives to assessment processes
      • Using pre-formulated items for a specific course (attribute research)
      • Communication regarding the process to all participants
      • Post-feedback processes
      • Use of results for grading purposes

  14. Benefits for Learning: Multisource Assessment & Feedback
      • Provides multiple measures, adding to the validity of the information
      • Students are motivated to decrease gaps between self-assessment and external evaluation
      • Significant increases in skills and behaviors precipitated by feedback
      • Reinforces the criticality of learning outcomes
      • Prepares students for industry assessment practices
      • Develops critical evaluation skills for lifelong learning
