  1. The Impact of Conversational Navigational Guides on the Learning, Use, and Perceptions of Users of a Web Site
  Art Graesser, Tanner Jackson, Matthew Ventura, James Mueller, Xiangen Hu, and Natalie Person
  This research was directly supported by contracts from ONR and IDA, and was partially supported by grants from NSF and the DoD.

  2. Outline (the next 20 minutes)
  • HURAA
  • Experiment
  • Results
  • Conclusions

  3. What is HURAA?
  • HURAA (Human Use Regulatory Affairs Advisor) is a web-based facility that provides help and training on the ethical use of human subjects in research.
  • HURAA is based on documents and regulations from United States Federal agencies.

  4. HURAA (Features)
  • Hypertext
  • Multimedia
  • Animated navigational guide
  • Lessons with case-based and explanation-based reasoning
  • Help modules
  • Context-sensitive FAQs (Point & Query)
  • Glossaries
  • Archives
  • Natural language queries (see the retrieval sketch below)
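
The slide lists natural language queries as a feature but does not say how HURAA matches a query to its documents (related systems from this group have often used latent semantic analysis). Purely as an illustrative stand-in, the sketch below ranks passages by TF-IDF cosine similarity; the passages and all function names are hypothetical.

```python
# Minimal retrieval sketch, assuming a simple TF-IDF bag-of-words model.
# This is NOT HURAA's documented method; passages and names are illustrative.
import math
from collections import Counter

def tokenize(text):
    return [w.lower().strip(".,;:!?()") for w in text.split()]

def tfidf_vectors(docs):
    """Build one {term: weight} dict per document."""
    tokenized = [tokenize(d) for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))  # document frequency
    n = len(docs)
    return [{t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf}
            for tf in (Counter(doc) for doc in tokenized)]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = (math.sqrt(sum(w * w for w in u.values()))
            * math.sqrt(sum(w * w for w in v.values())))
    return dot / norm if norm else 0.0

def rank_documents(query, docs):
    """Return docs sorted by similarity to the query, best first."""
    vecs = tfidf_vectors(docs + [query])
    qvec = vecs[-1]
    return sorted(zip(docs, vecs[:-1]),
                  key=lambda pair: cosine(qvec, pair[1]), reverse=True)

# Hypothetical IRB passages, for illustration only.
passages = [
    "Informed consent must be obtained from each human subject.",
    "An IRB must review all research involving human subjects.",
    "Risks to subjects must be minimized and reasonable relative to benefits.",
]
best, _ = rank_documents("How do I obtain informed consent?", passages)[0]
print(best)  # -> the informed-consent passage
```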

  5. HURAA (Modules)
  • Introduction
  • Historical Overview
  • Lessons
  • Explore Issues
  • Explore Cases
  • Decision Consequences
  • Query IRB Documents

  6. Outline
  • HURAA
  • Experiment
  • Results
  • Conclusions

  7. Experiment
  • 155 students (see the assignment sketch below)
  • Between subjects: Navigational Guides
    • Full
    • Voice
    • Text
    • None
  • Within subjects: Two Phases
    • Acquisition phase
    • Test phase
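
As a quick sanity check on the design, 155 students split across the four between-subjects guide conditions gives groups of 38 or 39. The talk does not describe the assignment procedure; the sketch below assumes simple random assignment (the condition names come from slide 7, everything else is hypothetical).

```python
# Sketch of the between-subjects assignment, assuming simple random
# assignment; the actual procedure is not reported in the talk.
import random

CONDITIONS = ["full", "voice", "text", "none"]  # guide conditions from slide 7

def assign(n_participants, seed=0):
    """Shuffle participant IDs and deal them round-robin into conditions,
    keeping group sizes as equal as n allows (here 39/39/39/38)."""
    rng = random.Random(seed)
    ids = list(range(n_participants))
    rng.shuffle(ids)
    return {cond: ids[i::len(CONDITIONS)] for i, cond in enumerate(CONDITIONS)}

groups = assign(155)
print({cond: len(members) for cond, members in groups.items()})
# -> {'full': 39, 'voice': 39, 'text': 39, 'none': 38}
```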

  8.–11. Experiment (Guides), built up one bullet per slide:
  • Full guide (agent and voice)
  • Voice guide (synthesized speech only)
  • Print guide (text message only)
  • No guide (no navigational guidance)

  12. Experiment (Predictions)
  • If guidance is important: Full, Print, Voice > None
  • Given guidance:
    • If the speech medium is most effective: Full, Voice > Print
    • If the text medium is most effective: Print > Full, Voice
    • If the agent/persona effect is beneficial: Full > Voice
    • If the agent/persona effect is distracting: Voice > Full
  (Each prediction maps onto a planned contrast; see the sketch below.)
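
Each prediction above is a directional comparison among the four guide conditions, which an analysis would typically express as planned contrasts. The talk does not report its statistical procedure, so the sketch below only illustrates plausible contrast weights; the group means are placeholders, not the study's data.

```python
# Illustrative planned-contrast weights for the predictions on slide 12.
# Weights sum to zero; a positive contrast value favors the left-hand side
# of the stated inequality. Means below are placeholders, not real results.

# One weight per condition, in the order (full, voice, print, none).
CONTRASTS = {
    "guidance matters (Full, Voice, Print > None)": (1, 1, 1, -3),
    "speech beats print (Full, Voice > Print)":     (1, 1, -2, 0),
    "agent/persona effect (Full > Voice)":          (1, -1, 0, 0),
}

def contrast_value(means, weights):
    """Weighted combination of condition means; the sign gives the direction."""
    return sum(w * m for w, m in zip(weights, means))

placeholder_means = (0.61, 0.60, 0.62, 0.59)  # hypothetical proportions correct
for name, weights in CONTRASTS.items():
    print(f"{name}: {contrast_value(placeholder_means, weights):+.2f}")
```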

  13. Experiment (Acquisition phase)
  • Introduction: a Flash intro used to hook users and acquaint them with the system.
  • Lessons: 4 example cases used to teach and test the seven critical issues.

  14.–16. Lessons (screens 1–3 of the Lessons module; visuals not preserved in the transcript)

  17. Experiment (Acquisition phase, continued)
  • Search Task: users answer 4 specific questions designed to exercise different modules of the system (specifically the Query Documents module).

  18. Search IRB Documents (screen from the Query IRB Documents module)

  19. Experiment (Test phase)
  • Memory: free and cued recall tested core ideas from the Introduction and Lesson material; a cloze procedure was used to test memory for key words (see the cloze sketch below).
  • Issue Comprehension: as a transfer test from the Lessons module, users read two sample cases and rated how problematic each of the critical issues was.
  • Perception Ratings: users rated the system on statements such as "You learned a lot about human subjects protections."
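
A cloze procedure deletes key words from a passage and asks the learner to fill in each blank. The exact materials are not given in the talk; below is a minimal sketch that blanks a supplied list of key terms and scores exact-match answers (the passage and terms are hypothetical).

```python
# Minimal cloze-test sketch: blank out chosen key terms, then score
# free-text answers. Passage and key terms are illustrative only, not
# the materials used in the experiment.
import re

def make_cloze(passage, key_terms):
    """Replace each key term with a numbered blank; return text and answer key."""
    answers, text = {}, passage
    for i, term in enumerate(key_terms, start=1):
        text = re.sub(rf"\b{re.escape(term)}\b", f"__({i})__", text,
                      count=1, flags=re.IGNORECASE)
        answers[i] = term.lower()
    return text, answers

def score(responses, answers):
    """Proportion of blanks filled with exactly the deleted key word."""
    hits = sum(1 for i, ans in answers.items()
               if responses.get(i, "").strip().lower() == ans)
    return hits / len(answers)

text, key = make_cloze(
    "Informed consent must be obtained before subjects enroll in research.",
    ["consent", "subjects"],
)
print(text)   # Informed __(1)__ must be obtained before __(2)__ enroll ...
print(score({1: "consent", 2: "people"}, key))  # -> 0.5
```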

  20. Outline
  • HURAA
  • Experiment
  • Results
  • Conclusions

  21. Results (Core ideas)
  Table 1. Proportion correct

  22. Results (Problematic issues)
  Table 2. Proportion correct

  23. Results (Search task)
  Table 3. Proportion correct
  Table 4. Completion time (min)

  24. Results (Time on task)
  Table 5. Mean minutes on task

  25. Results (Perception ratings)
  Table 6. Mean ratings (1 to 6, higher is better)

  26. Outline
  • HURAA
  • Experiment
  • Results
  • Conclusions

  27. Conclusions
  • There were no significant differences on any of the dependent measures.
  • This null result is inconsistent with all of our stated predictions.
  • A practical implication is that the animated conversational agent did not facilitate learning, usage, or perceptions of the interface.

  28. Conclusions
  • Perhaps, given the well-structured nature of the site, a navigational guide was superfluous.
  • An agent could possibly provide help in a more complex environment.
  • The value of a navigational guide may increase as a function of the complexity, ambiguity, and perplexity of the system.

  29. Conclusions
  • Animated conversational agents have proven effective when they deliver information and learning material in monologues and tutorial dialogues.
  • There may be special conditions under which a navigational guide of some form, whether print, voice, or a talking head, is helpful.
  • However, those conditions have yet to be identified and precisely specified in the literature.

  30. The Impact of Conversational Navigational Guides on the Learning, Use, and Perceptions of Users of a Web Site
  Art Graesser, Tanner Jackson, Matthew Ventura, James Mueller, Xiangen Hu, and Natalie Person
  This research was directly supported by contracts from ONR and IDA, and was partially supported by grants from NSF and the DoD.
