


  1. (Speech and Affect in Intelligent Tutoring) Spoken Dialogue Systems Diane Litman Computer Science Department and Learning Research and Development Center www.cs.pitt.edu/~litman

  2. Outline • Introduction • The ITSPOKE System and Corpora • Spoken versus Typed Dialogue Tutoring • Recognizing and Adapting to Student State • Current Directions and Summary

  3. Natural Language Processing • The field of Natural Language Processing (NLP), or Computational Linguistics (CL), or Human Language Technology (HLT), is primarily concerned with the creation of computer programs that perform useful and interesting tasks with human languages • enable computers to interact with humans using natural language (Prof. Litman) • serve as useful adjuncts to humans in tasks involving language, by providing services such as automatic translation (Prof. Hwa), summarization, and question-answering (Prof. Wiebe) • The foundations of the field are in computer science, artificial intelligence, linguistics, mathematics and statistics, electrical engineering, and psychology • Studying NLP involves studying natural languages, formal representations, and algorithms for their manipulation • See nlp.cs.pitt.edu for further details on NLP at Pitt

  4. Spoken Dialogue Research Group • www.cs.pitt.edu/~litman/itspoke.html • Current Projects • Monitoring Student State in Tutorial Spoken Dialogue • Adding Speech to a Text-Based Dialogue Tutor • Tutoring Scientific Explanations via Natural Language Dialogue • TuTalk: Infrastructure for authoring and experimenting with natural language dialogue in tutoring systems and learning research • Natural Language Processing Technology for Guided Study of Bioinformatics

  5. Spoken Dialogue Research Group (cont.) • PhD Students (CS and ISP) • Hua Ai (simulated users for reinforcement learning) • Amruta Purandare (unsupervised clustering for topic tracking) • Mihai Rotaru (machine learning, speech analysis, affective dialogue systems) • Art Ward (dialogue coherency and learning) • Also 1 undergraduate, 2 postdocs, and a programmer • Alumni • Beatriz Maeireizo-Tokeshi (Computer Science MS Project: Applying Co-training for Predicting Student Emotions with Spoken Dialogue Data, 2005)

  6. Spoken Dialogue Tutoring: Motivation • Working hypothesis regarding learning gains • Human Dialogue > Computer Dialogue > Text • Most human tutoring involves face-to-face spoken interaction, while most computer dialogue tutors are text-based • Can the effectiveness of dialogue tutorial systems be further increased by using spoken interactions?

  7. Potential Benefits of Speech • Self-explanation correlates with learning and occurs more in speech • Speech contains prosodic information, providing new sources of information for dialogue adaptation • Spoken computational environments may prime a more social interpretation that enhances learning • Potential for hands-free interaction

  8. Spoken Tutorial Dialogue Systems • Recent tutoring systems have begun to add spoken language capabilities • However, there has been little empirical analysis of the learning ramifications of using speech

  9. Outline • Introduction • The ITSPOKE System and Corpora • Spoken versus Typed Dialogue Tutoring • Recognizing and Adapting to Student State • Current Directions and Summary

  10. ITSPOKE:Intelligent Tutoring SPOKEn Dialogue System • Back-end is text-based Why2-Atlas tutorial dialogue system (VanLehn et al., 2002) • Student speech digitized from microphone input; Sphinx2 speech recognizer • Tutor speech played via headphones/speakers; Cepstral text-to-speech synthesizer
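The slide describes a standard spoken dialogue loop wrapped around a text-based back-end. Below is a minimal, hypothetical sketch of that loop; the microphone, recognizer, back-end, and synthesizer interfaces are placeholders for illustration, not the actual Sphinx2, Why2-Atlas, or Cepstral APIs.

```python
def tutoring_loop(microphone, recognizer, why2_atlas, synthesizer, session_over):
    """Hypothetical one-turn-at-a-time spoken tutoring loop (placeholder interfaces)."""
    while not session_over():
        audio = microphone.record_turn()           # digitize student speech
        hypothesis = recognizer.transcribe(audio)  # ASR output (noisy text)
        reply = why2_atlas.respond(hypothesis)     # text-based dialogue back-end
        synthesizer.speak(reply)                   # play tutor turn via TTS
```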

  11. Spoken Tutoring Corpora • Human-Human Corpus • 14 students, 128 physics problems (dialogues), 5948 student turns, 5505 tutor turns • Student and tutor turns were manually transcribed for content and segmented at turn boundaries

  12. Spoken Tutoring Corpora (cont.) • Computer-Human Corpus • 20 students, 100 physics problems (dialogues), 2445 student turns, 2967 tutor turns • Noisy student turn boundaries and transcriptions extractable from ITSPOKE logs • Content of student turns also manually transcribed

  13. ITSPOKE Corpora Comparison

Human-Computer excerpt (…3.5 minutes into session…):
Tutor: I didn't understand. What force acts on all objects within close proximity to earth?
Student: uh the force of gravity
Tutor: Fine. Besides the Earth's gravitational force, are there any other forces acting on the person, keys, and elevator after the release?
Student: no
Tutor: Yep. Thus, keep in mind that after the release, the only force on the person, keys, and elevator is the force of gravity. If gravity is the only force acting on an object, will it be moving or staying still?
Student: it will be moving
Tutor: Fine. Not only are the person, keys, and elevator moving, they have only gravitational forces on them. When an object is falling and has only gravitational force on it, it is said to be in what?
Student: freefall
Tutor: Good. Thus, after the release the only force on the keys and man is the downward force of earth's gravity, so they are in freefall. Before the release, what is the relationship between the man's velocity and the keys' velocity?
Student: they're equal
Tutor: Correct. We know that after the release, the keys, the person, and the elevator are in freefall. Is freefall acceleration near earth a constant? Or does it vary depending upon which object is in freefall?
Student: it's a constant

Human-Human excerpt (…1.3 minutes into session…):
Student: Can I ask you questions?
Tutor: Yes
Student: Is that ok?
Tutor: Mm-hm
Student: Um from what we learned from the last problem they're gonna have the same- the keys and you have the same acceleration right?
Tutor: Yes
Student: So that means they'll both hit the bottom at the same time. But I don't understand where- I don't understand if you're in the elevator-
Tutor: You see
Student: Where are you going to-?
Tutor: The uh let me uh the key uh- the person holds the key in front of-
Student: Their face yeah-
Tutor: Uh his or her face and then lets it uh uh it is let go so the the the question relates to the relative position of the key and the face uh-
Student: So the key and the face-
Tutor: Uh how will they- as they go- as they both fall what will be their relative position? That is the question.
Student: (sigh)

  14. Outline • Introduction • The ITSPOKE System and Corpora • Spoken versus Typed Dialogue Tutoring • Recognizing and Adapting to Student State • Current Directions and Summary

  15. Project: Adding Spoken Language to a Text-Based Dialogue Tutor • Spoken Versus Typed Human and Computer Dialogue Tutoring. Diane Litman, Carolyn Penstein Rosé, Kate Forbes-Riley, Kurt VanLehn, Dumisizwe Bhembe, and Scott Silliman • Proceedings of the Seventh International Conference on Intelligent Tutoring Systems (2004) • International Journal of Artificial Intelligence in Education (to appear)

  16. Research Questions • Given that natural language tutoring systems are becoming more common, is it worth the extra effort to develop spoken rather than text-based systems? • Given the current limitations of speech and natural language processing technologies, how do computer tutors compare to the upper-bound performance of human tutors?

  17. Common Experimental Aspects • Students take a physics pretest • Students read background material • Students use web interface to work through up to 10 problems with either a computer or a human tutor • Students take a posttest • 40 multiple choice questions, isomorphic to pretest

  18. Human Tutoring: Experiment 1 • Same human tutor, subject pool, physics problems, web interface, and experimental procedure across two conditions • Typed dialogue condition (20 students, 171 dialogues/physics problems) • Strict turn-taking enforced • Spoken dialogue condition (14 students, 128 dialogues/physics problems) • Interruptions and overlapping speech permitted • Dialogue history box remains empty

  19. Typed versus Spoken Tutoring: Overview of Analyses • Tutoring and Dialogue Evaluation Measures • learning gains • efficiency • Correlation of Dialogue Characteristics and Learning • do dialogue means differ across conditions? • which dialogue aspects correlate with learning in each condition?

  20. Learning and Training Time • [table of test scores and training times omitted; the slide's key distinguished statistical trends from statistically significant differences]

  21. Discussion • Students in both conditions learned during tutoring (p=0.000) • The adjusted posttest scores suggest that students learned more in the spoken condition (p=0.053) • Students in the spoken condition completed their tutoring in less than half the time (p=0.000)

  22. Dialogue Characteristics Examined • Motivated by previous learning correlations with student language production and interactivity (Core et al., 2003; Rose et al.; Katz et al., 2003) • Average length of turns (in words) • Total number of words and turns • Initial values and rate of change • Ratios of student and tutor words and turns • Interruption behavior (in speech)
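As a sketch of how such per-dialogue measures might be computed, assuming each dialogue is a list of (speaker, text) turn pairs (the function and key names below are illustrative, not the study's actual code):

```python
def dialogue_measures(turns):
    """Compute word/turn counts and ratios for one dialogue.

    turns: list of (speaker, text) pairs, speaker in {"student", "tutor"}.
    """
    words = {"student": 0, "tutor": 0}
    counts = {"student": 0, "tutor": 0}
    for speaker, text in turns:
        counts[speaker] += 1
        words[speaker] += len(text.split())
    return {
        "student_words": words["student"],
        "student_turns": counts["student"],
        "avg_student_turn_length": words["student"] / counts["student"],
        "student_tutor_word_ratio": words["student"] / words["tutor"],
        "student_tutor_turn_ratio": counts["student"] / counts["tutor"],
    }
```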

  23. Human Tutoring Dialogue Characteristics (means) • [table of per-condition means omitted]

  24. Discussion • For every measure examined, the means across conditions are significantly different • Students and the tutor take more turns in speech, and use more total words • Spoken turns are on average shorter • The ratio of student to tutor language production is higher in text

  25. Learning Correlations after Controlling for Pretest • [table of partial correlations omitted]
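Correlating a dialogue measure with learning while controlling for pretest is standardly done as a partial correlation: regress the pretest out of both the measure and the posttest, then correlate the residuals. A minimal NumPy/SciPy sketch (variable names illustrative):

```python
import numpy as np
from scipy.stats import pearsonr

def residualize(y, x):
    """Residuals of y after least-squares regression on x (with intercept)."""
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def partial_correlation(measure, posttest, pretest):
    """Correlation of a dialogue measure with posttest, controlling for pretest."""
    return pearsonr(residualize(measure, pretest),
                    residualize(posttest, pretest))
```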

  26. Discussion • Measures correlating with learning in the typed condition do not correlate in the spoken condition • Typed results suggest that students who give longer answers, or who are inherently verbose, learn more • Deeper analyses are needed (requiring manual coding) • e.g., do longer student turns reveal more explanation? • results need to be further examined for student question types, substantive contributions, etc.

  27. Computer Tutoring: Experiment 2 • Same as Experiment 1, except • only 5 problems (dialogues) per student • pretest taken after background reading • strict turn-taking enforced in both conditions • Typed dialogue condition (23 students, 115 dialogues) • Why2-Atlas • Spoken dialogue condition (20 students, 100 dialogues) • ITSPOKE • (noisy) speech recognition output rather than actual student utterances

  28. Results: Learning and Training Time • Students in both conditions learned during tutoring (p=0.000) • Students learned the same in both conditions (p=0.950) • Students in the typed condition completed their tutoring in less time than in the spoken condition (p=0.004)

  29. Results: Dialogue Characteristics and Correlations with Learning • Means across conditions are no longer significantly different for many measures • total words produced by students • average length of student turns and initial verbosity • ratios of student to tutor language production • Different measures again correlate with learning • Speech: student language production • Text: fewer subdialogues per KCD • Degradation due to speech recognition does not correlate with learning

  30. Recap • Human Tutoring: spoken dialogue yielded significant performance improvements • Greater learning gains • Reduced dialogue time • Many differences in superficial dialogue characteristics • Computer Tutoring: spoken dialogue made little difference • No change in learning • Increased dialogue time • Fewer dialogue differences

  31. Outline • Introduction • The ITSPOKE System and Corpora • Spoken versus Typed Dialogue Tutoring • Recognizing and Adapting to Student State • Current Directions and Summary

  32. Project: Monitoring Student State in Tutorial Spoken Dialogue Predicting Student Emotions in Computer-Human Tutoring Dialogues Diane J. Litman and Kate Forbes-Riley • Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (2004) • Speech Communication (to appear)

  33. Monitoring Student State (motivation)
Tutor26: Let me ask my question in a more general way. What happens to the velocity of a body when there is no force acting on it?
Student27: dammit (ASR: it is)
Tutor28: Could you please repeat that?
Student29: same (ASR: i same)
Tutor30: Right. And what happens to the HORIZONTAL velocity of a body when there is no HORIZONTAL force acting on it?
Student31: zero (ASR: the zero)
Tutor32: Well... OK, let's consider how forces in the vertical direction affect the motion of a body in a horizontal direction in a different scenario <…omitted…>
Student33: oh gravity you already said this (ASR: beats gravity you're exert less)
Tutor34: Fine. Are there any other forces acting on the apple as it falls?
Student35: no why are you doing this again (ASR: no y and to it yes)
Tutor36: Correct. The only force acting on the apple as it falls to the earth is the earth's gravitational force. In which direction does gravitational force act?
Student37: downward you computer (ASR: downward you computer)

  34. Methodology • Emotion Annotation • Machine Learning Experiments • extract linguistic features from student turns • use different feature sets to predict emotions • significant reduction of baseline error
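A minimal sketch of this kind of experiment, assuming a feature matrix X (one row per student turn) and emotion labels y; "reduction of baseline error" is taken here as relative error reduction over the majority-class baseline, and the decision tree is an illustrative stand-in for whatever learner is actually used:

```python
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def relative_error_reduction(X, y, folds=10):
    """Cross-validated error of a learned model vs. the majority-class baseline."""
    baseline_acc = cross_val_score(
        DummyClassifier(strategy="most_frequent"), X, y, cv=folds).mean()
    model_acc = cross_val_score(DecisionTreeClassifier(), X, y, cv=folds).mean()
    base_err, model_err = 1 - baseline_acc, 1 - model_acc
    return (base_err - model_err) / base_err  # fraction of baseline error removed
```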

  35. Emotion Annotation Scheme • ‘Emotion’: emotions/attitudes that may impact learning • Annotation of Student Turns • Emotion Classes • negative, e.g. uncertain, bored, irritated, confused, sad • positive, e.g. confident, enthusiastic • neutral: no weak or strong expression of negative or positive emotion
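For concreteness, a hypothetical encoding of this scheme as a lookup that collapses fine-grained annotations into the three classes (labels as listed on the slide; anything unlisted defaults to neutral):

```python
EMOTION_CLASS = {
    # negative: emotions/attitudes that may impede learning
    "uncertain": "negative", "bored": "negative", "irritated": "negative",
    "confused": "negative", "sad": "negative",
    # positive
    "confident": "positive", "enthusiastic": "positive",
}

def emotion_class(annotation):
    """Map a fine-grained turn annotation to negative/positive/neutral."""
    return EMOTION_CLASS.get(annotation, "neutral")
```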

  36. Example Annotated Excerpt
ITSPOKE: What happens to the velocity of a body when there is no force acting on it?
Student: dammit (NEGATIVE) (ASR: it is)
ITSPOKE: Could you please repeat that?
Student: same (NEUTRAL) (ASR: i same)

  37. Feature Extraction per Student Turn • Three feature types • Acoustic-prosodic • Lexical • Identifiers • Research questions • Relative predictive utility of acoustic-prosodic, lexical, and identifier features • Impact of speech recognition • Comparison across computer and human tutoring

  38. Feature Types (1): Acoustic-Prosodic Features • 4 pitch (f0): max, min, mean, standard dev. • 4 energy (RMS): max, min, mean, standard dev. • 4 temporal: turn duration (seconds), pause length preceding turn (seconds), tempo (syllables/second), internal silence in turn (fraction of zero f0 frames) • all features are available to ITSPOKE in real time
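A sketch of how these twelve per-turn features might be computed, assuming the pitch (f0) and energy (RMS) contours have already been extracted as per-frame NumPy arrays (e.g., by a pitch tracker), with unvoiced frames marked as f0 = 0; the frame rate and all names are assumptions, not ITSPOKE's actual code:

```python
import numpy as np

FRAME_RATE = 100  # assumed contour frames per second

def turn_features(f0, rms, pause_before_sec, n_syllables):
    """The 12 acoustic-prosodic features for one student turn."""
    voiced = f0[f0 > 0]              # pitch stats over voiced frames only
    duration = len(f0) / FRAME_RATE  # turn duration in seconds
    return {
        # 4 pitch features (assumes at least one voiced frame)
        "f0_max": voiced.max(), "f0_min": voiced.min(),
        "f0_mean": voiced.mean(), "f0_std": voiced.std(),
        # 4 energy features
        "rms_max": rms.max(), "rms_min": rms.min(),
        "rms_mean": rms.mean(), "rms_std": rms.std(),
        # 4 temporal features
        "duration_sec": duration,
        "pause_before_sec": pause_before_sec,
        "tempo_syl_per_sec": n_syllables / duration,
        "internal_silence": float(np.mean(f0 == 0)),  # fraction of zero-f0 frames
    }
```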

  39. Feature Types (2): Word Occurrence Vectors • Human-transcribed lexical items in the turn • ITSPOKE-recognized lexical items
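A sketch of the word-occurrence representation using scikit-learn's binary CountVectorizer; building it from ASR hypotheses instead of human transcripts yields the ITSPOKE-recognized variant (the example turns are illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer

# One string per student turn; swap in ASR output for the recognized version.
turns = ["uh the force of gravity", "it will be moving", "freefall"]

vectorizer = CountVectorizer(binary=True)  # 1 iff the word occurs in the turn
X = vectorizer.fit_transform(turns)
print(vectorizer.get_feature_names_out(), X.toarray())
```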

  40. Feature Types (3): Identifier Features • student number • student gender • problem number

  41. Summary of Results (Computer Tutoring) • [results table omitted]

  42. Comparison with Human Tutoring • In human tutoring dialogues, emotion prediction (and annotation) is more accurate and based on somewhat different features

  43. Recap • Recognition of annotated student emotions in spoken computer and human tutoring dialogues, using multiple knowledge sources • Significant improvements in predictive accuracy compared to majority class baselines • A first step towards implementing emotion prediction and adaptation in ITSPOKE

  44. Outline • Introduction • The ITSPOKE System and Corpora • Spoken versus Typed Dialogue Tutoring • Recognizing and Adapting to Student State • Current Directions and Summary

  45. Recent Directions • Manual coding of “deeper” dialogue phenomena • Proceedings of Artificial Intelligence in Education (2005) • Analysis beyond the turn level • Natural Language Engineering (to appear) • Learning, emotion, and speech recognition • Proceedings of Interspeech (2005) • System adaptation to user emotion • Proceedings of Discourse and Dialogue (2005) • Pre-recorded (human) versus synthesized (machine) voice • submitted

  46. Summary • Goal: an empirically-based understanding of the implications of adding speech and affective computing to dialogue tutors • Accomplishments • ITSPOKE • Collection and analysis of two spoken tutoring corpora • Comparisons of typed and spoken tutorial dialogues • Models for emotion prediction • Results will impact the design of future systems that incorporate speech, by highlighting the performance gains that can be expected and the requirements for achieving them

  47. Thank You! Questions? Interested? Take my seminar this spring.
