
PICA Response Rates




Presentation Transcript


  1. PICA Response Rates Mark Troy – Data and Research Services – metroy@tamu.edu

  2. Three issues about response rates in online evaluations • How low are they? • How concerned should faculty and administration be about low rates? • Can response rates be improved?

  3. What are the response rates?

  4. Response rates: paper vs. PICA • PICA average: 50%–60% • Paper average: 70%–80% • 100% response rate: fewer than 10% of courses, whether evaluated with paper forms or with PICA, achieve a 100% response rate. • Extremely low response rate: fewer than 10% of courses, whether evaluated with paper forms or with PICA, have response rates below 10%.

  5. PICA Response Rate by Department

  6. How concerned should faculty and administration be about low response rates?

  7. The problems with low response rates • Non-response bias • The bias that arises when students who do not respond have different course experiences from those who do respond. • Random effects • If the response rate is low, the mean is susceptible to the influence of extreme scores, either high or low.

  8. Common expressions of non-response bias • Only those who really like you and those who really hate you will do the evaluations • You will get responses only from the students who like you. • Only the students who hate you will bother to respond. • The students who really hate you will choose not to play the game at all.

  9. The effects of non-response bias • If only those who like you and those who hate you respond, the means might not change (assuming equal numbers of both groups), but the shape of the distribution and the variance of the scores would change. • If only those who like you respond, the mean should increase and the distribution and variance would change. • If only those who hate you respond, the mean should decrease and the distribution and variance would change.
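The arithmetic behind the first scenario can be sketched in a few lines of Python. The ratings below are invented for illustration (they are not actual PICA data): a symmetric full-class distribution on a 1–5 scale, compared with a sample in which only equal numbers of "strongly disagree" (1) and "strongly agree" (5) students respond.

```python
from statistics import fmean, pvariance

# Hypothetical ratings on a 1-5 scale (illustrative only, not PICA data):
# a symmetric full-class distribution centred on "undecided" (3).
full_class = [1]*10 + [2]*20 + [3]*40 + [4]*20 + [5]*10

# Scenario from the slide: only the extremes respond, in equal numbers.
extremes_only = [1]*10 + [5]*10

print(f"full class:    mean={fmean(full_class):.2f}  variance={pvariance(full_class):.2f}")
print(f"extremes only: mean={fmean(extremes_only):.2f}  variance={pvariance(extremes_only):.2f}")
# The mean is unchanged (3.00 vs 3.00), but the variance jumps (1.20 -> 4.00),
# and the distribution collapses to two spikes at 1 and 5.
```

If only one extreme group responded, the same calculation would show the mean shifting toward that group, as the second and third bullets describe.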

  10. Are there consistent mean differences between paper and online ratings? No significant mean differences were observed. An upward trend is visible in the chart, but that began before the switch from paper to PICA.

  11. Are there consistent mean differences between paper and online ratings? [Chart: paper vs. PICA ratings] No mean differences were observed. The standard deviations did not change significantly after the switch to PICA.

  12. Are there distributional differences? [Chart: response distributions for the same department in consecutive semesters on the item "Overall, this was a good course"; x-axis: Strongly agree, Agree, Undecided, Disagree, Strongly disagree; y-axis: percent of students responding]

  13. TAMUG response distribution [Chart: Fall 2011 (paper) vs. Spring 2012 (PICA)]

  14. What differences will instructors be able to see in their ratings after switching to PICA? • A comparison of courses taught by the same instructor in consecutive semesters showed that half of the ratings went up after the switch to PICA and half went down. • Overall, there was no significant difference between the means of paper evaluations and PICA evaluations. • Significant mean differences were observed in 3 out of 70 courses (about what would be expected by chance alone). • All three significant differences occurred in courses with PICA response rates below 30%.

  15. Class Size and Response Rates • The correlation between response rate and class size for paper administration = -.161 (Fall 2009) • The correlation between response rate and class size for PICA administration = -.056 (Fall 2009) • Conclusion: Although the relationship between class size and response rate is weak in both cases, response rates tend to decline more as class size increases under paper administration than under PICA.

  16. Random effects • Most differences between means can be attributed to chance variation and not to the method of evaluation. • Response rate does not appear to have an impact on means. • EXCEPTION: Extreme scores, whether high or low, exert greater influence on the mean when the number of responses is low. • Equally true for 100% response rate from a class of ten as for 10% response rate from a class of 100.
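The exception in the last two bullets is easy to verify numerically. This hypothetical example (not PICA data) adds a single extreme "strongly disagree" rating (1) to an otherwise uniform batch of "agree" ratings (4), at two response counts:

```python
from statistics import fmean

# Hypothetical illustration: n-1 ratings of 4 plus one extreme rating of 1.
for n in (10, 100):
    ratings = [4] * (n - 1) + [1]
    print(f"n={n:3d}: mean={fmean(ratings):.2f}")
# With 10 responses the single extreme pulls the mean from 4.00 down to 3.70;
# with 100 responses it moves the mean only to 3.97.
```

The same arithmetic holds whether the 10 responses are 100% of a class of ten or 10% of a class of 100: only the number of responses matters for this effect.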

  17. Summary: the effects of low response rates • 1. Non-response bias is a systematic effect that, if present, would appear as a difference in mean ratings, an increase in the variance of the ratings, or a change in the distribution of the scores. Such differences are not seen (although that does not mean non-response bias does not exist). • 2. Random effects. In any evaluation, some ratings will be higher than others and some will be lower. The more responses there are, the less influence the extremes, high or low, have on the mean. A low response rate allows an unusually high or low rating to pull the average in its direction, so higher response rates are better than lower ones. These random effects are often seen with low response rates.

  18. Can response rates be increased? • YES • The average response rate per department tends to increase over time as faculty and students attain familiarity with the system. • There are some strategies faculty can use to increase response rate.

  19. The big differences between online and paper evaluations • Motivation • Students need to be motivated to do online evaluations • Students do not need to be motivated to do paper evaluations • Attendance • Students need to be in class to do paper evaluations • Students do not need to be in class to do online evaluations

  20. Students’ motivations to do evaluations • Research on student evaluations in higher education shows: • Students’ most desired outcome from evaluations is improvement in teaching. • Students’ second most desired outcome is improvement in course content. • Students’ motivation to submit evaluations is strongly shaped by their expectation that faculty and administration will pay attention to the evaluations. • The bottom line is that students submit evaluations if they believe their opinions will be valued and considered.

  21. Getting Students to Respond: What works A survey was conducted of a random sample of students in classes that used PICA in Fall 2008: 95 students submitted an appraisal online (response); 94 students did not submit an appraisal although requested to do so (non-response).

  22. Observations • Receiving an invitation from MARS to do the evaluation has no impact on the response rate. • By a factor of 3 to 1, students are more likely to submit an evaluation: • If the request comes from their instructor. • If their instructor discusses the importance of the evaluation. • If their instructor tells how the evaluations have been used or will be used to improve the course. • Incentives are less effective than discussing the importance and the use of the evaluations.

  23. Why students fail to submit online evaluations • A survey of students who had not submitted evaluations online in PICA, though requested to do so, found the following reasons for not submitting: • Forgot — 48% • Missed the deadline — 26% • No other reason was given by more than 10% of students. • The bottom line is that frequent reminders are important.

  24. Mid-term evaluations • Texas A&M faculty have an opportunity to conduct mid-term evaluations (actually fifth-week evaluations) in PICA. • The purpose of the mid-terms is to provide formative information for improving the course at a point when changes are still possible. • An examination of response rates at the end of the term shows that courses that had a mid-term evaluation have, on average, a 10% higher response rate on end-of-term evaluations than other courses. • The likely explanation is that faculty who do mid-terms can demonstrate to students that their opinions are considered.

  25. Incentives • Incentives appear to be less effective motivators than the intrinsic motivation to help improve teaching and the course. • A token incentive, however, can reinforce intrinsic motivation by communicating the importance of the evaluation.

  26. Incentives, continued • The Faculty Senate (Resolution FS 27.122) opposes granting academic credit for evaluations because such credit is not related to any learning outcomes, so it could have the effect that two students who learned the same amount would receive different grades. • Some faculty offer a token incentive to the entire class if some threshold of response rate is reached, thereby communicating the importance of the evaluation, but not disadvantaging a student who chooses not to do an evaluation.

  27. 3 Steps To Increasing Response Rate
