
Inferring Tutorial Dialogue Structure with Hidden Markov Modeling


Presentation Transcript


  1. Inferring Tutorial Dialogue Structure with Hidden Markov Modeling Kristy Elizabeth Boyer, Eun Young Ha, Robert Phillips, Michael D. Wallis, Mladen A. Vouk, James C. Lester

  2. Introduction: Dialogue Structure Photo by Doc Ross

  3. Learning Dialogue Structure • Useful for task and dialogue act prediction and classification (e.g. Bangalore, Di Fabbrizio & Stent 2008) • Topic modelling in multi-party discourse (Purver, Kording, Griffiths, & Tenenbaum 2006) • Probabilistic content models (Barzilay & Lee 2004)

  4. Dialogue Structure in Tutoring • Inform design of systems (e.g. Forbes-Riley, Rotaru, Litman & Tetreault 2007) • Find effective dialogue policies (Tetreault & Litman 2008) • Manual approaches still used (e.g. Cade, Copeland, Person, D’Mello 2008)

  5. Human Tutorial Dialogue • Highly effective • Natural model for informing tutorial dialogue system policies • Holds insight into cognitive and affective processes during learning

  6. Problem Statement Given: • Dialogue Corpus • Dialogue Act Annotation (Manual) Construct: • Learned Dialogue Structure Model

  7. Corpus Collection • 43 tutoring sessions • 4864 utterances

  8. Corpus

  9. Dialogue Structure • Tutor Lecture • Tutor Evaluation • Collaborative Problem Solving • Student Reflection

  10. Model Structure • Hidden states (dialogue modes): m_t → m_t+1 → m_t+2 • Observations (dialogue acts): a_t, a_t+1, a_t+2
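A toy sketch of the generative structure on this slide: hidden dialogue modes m_t follow a Markov chain and each mode emits an observed dialogue act a_t. The mode names, act names, and probabilities below are illustrative placeholders, not values estimated from the corpus.

```python
import numpy as np

# Illustrative dialogue modes (hidden states) and dialogue acts (observations).
modes = ["TutorLecture", "CollaborativeProblemSolving"]
acts = ["Statement", "QuestionS", "PositiveFeedback"]

trans = np.array([[0.8, 0.2],          # P(m_t+1 | m_t), one row per current mode
                  [0.3, 0.7]])
emit = np.array([[0.7, 0.1, 0.2],      # P(a_t | m_t), one row per mode
                 [0.3, 0.4, 0.3]])

rng = np.random.default_rng(0)
m = 0                                  # start in the first mode
for t in range(5):
    a = rng.choice(len(acts), p=emit[m])       # emit a dialogue act from the current mode
    print(f"t={t}: mode={modes[m]:<27} act={acts[a]}")
    m = rng.choice(len(modes), p=trans[m])     # transition to the next mode
```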

  11. Dialogue Act Tagging • Question – Where should I declare i? • Evaluation Question – How does that look? • Statement – You need a closing brace. • Grounding – Ok. • Extra-Domain – You may use your book. • Positive Feedback – Yes, that’s right. • Lukewarm Feedback – Sort of. • Negative Feedback – No, that’s not right.
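Before HMM training, a tag set like the one above has to be mapped to integer observation symbols. A minimal sketch of one possible encoding; the constant names and index order are ours, not from the paper:

```python
# Illustrative integer encoding of the dialogue act tag set.
DIALOGUE_ACTS = [
    "Question", "EvaluationQuestion", "Statement", "Grounding",
    "ExtraDomain", "PositiveFeedback", "LukewarmFeedback", "NegativeFeedback",
]
ACT_TO_ID = {act: i for i, act in enumerate(DIALOGUE_ACTS)}   # {"Question": 0, ...}

def encode(session_acts):
    """Map one session's dialogue act tags to integer symbols."""
    return [ACT_TO_ID[a] for a in session_acts]
```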

  12. General HMM Structure [state-transition diagram over hidden states 0, 1, and 2]

  13. Learning N = # of hidden states • For N ranging from 2 to 15, train many HMMs • For each random initialization of an HMM, run ten-fold cross-validation on the corpus • Compute the Akaike Information Criterion (AIC) from the average log-likelihood fit for each N • Choose the N with the best (lowest) AIC; take the best-fit model from among all models of size N
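A minimal sketch of that model-selection loop, assuming hmmlearn >= 0.3 (CategoricalHMM; older releases named the discrete-observation model differently) and sessions already encoded as integer dialogue-act sequences. The ten-fold cross-validation is simplified here to a single fit-and-score pass per random restart, and the function names and restart count are illustrative.

```python
import numpy as np
from hmmlearn import hmm

def aic(log_likelihood, n_states, n_symbols):
    # Free parameters: initial distribution, transitions, and emissions.
    k = (n_states - 1) + n_states * (n_states - 1) + n_states * (n_symbols - 1)
    return 2 * k - 2 * log_likelihood

def select_model(sessions, n_symbols, state_range=range(2, 16), restarts=10):
    """Train HMMs of each size with several random restarts; keep the model
    whose AIC is best (lowest)."""
    X = np.concatenate(sessions).reshape(-1, 1)        # stacked symbol sequences
    lengths = [len(s) for s in sessions]               # session boundaries
    best_score, best_model = np.inf, None
    for n in state_range:
        for seed in range(restarts):
            model = hmm.CategoricalHMM(n_components=n, n_iter=100, random_state=seed)
            model.fit(X, lengths)
            score = aic(model.score(X, lengths), n, n_symbols)
            if score < best_score:
                best_score, best_model = score, model
    return best_model
```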

  14. Best-Fit HMM

  15. Model Structure • Hidden states (dialogue modes): m_t, m_t+1, m_t+2 • Observations (dialogue acts): a_t, a_t+1, a_t+2 • Example observations: Student: “Where is i?” / Tutor: “On line 3.” / Tutor: “Yep, you found it.”

  16. Model Structure • Hidden states (dialogue modes): m_t, m_t+1, m_t+2 • Observations (adjacency structures*): o_t, o_t+1, o_t+2 • *Adjacency Structure = Adjacency Pair (Schegloff & Sacks 1973) ∨ Individual Dialogue Act

  17. Identifying Significant Adjacency Pairs For every pair of dialogue acts with different speakers, apply χ² test across corpus to determine whether P(act_i+1 | act_i) > P(act_i+1 | ¬act_i)
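A sketch of that test using SciPy's chi-square on a 2x2 contingency table. Here `sessions` is assumed to be a list of (speaker, act) sequences, and the function name and guard conditions are ours; the later slides report keeping pairs with p < 0.0001.

```python
from scipy.stats import chi2_contingency

def adjacency_pair_significant(sessions, act_a, act_b, alpha=0.0001):
    """Does act_b follow act_a (across a speaker change) more often than it
    follows any other act, at the given significance level?"""
    follows_a = other_after_a = follows_other = other_after_other = 0
    for tagged in sessions:
        for (spk1, a1), (spk2, a2) in zip(tagged, tagged[1:]):
            if spk1 == spk2:
                continue                      # adjacency pairs need a speaker change
            if a1 == act_a:
                follows_a += (a2 == act_b)
                other_after_a += (a2 != act_b)
            else:
                follows_other += (a2 == act_b)
                other_after_other += (a2 != act_b)
    if follows_a + other_after_a == 0 or follows_other + other_after_other == 0:
        return False                          # pair never observable; nothing to test
    table = [[follows_a, other_after_a], [follows_other, other_after_other]]
    _, p, _, _ = chi2_contingency(table)
    rate_a = follows_a / (follows_a + other_after_a)
    rate_other = follows_other / (follows_other + other_after_other)
    return p < alpha and rate_a > rate_other
```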

  18. Statistically Significant Adjacency Pairs

  19. Significant Adjacency Pair Examples (final S = student speaker, T = tutor speaker) • EvaluationQuestionS, PositiveFeedbackT • GroundingS, GroundingT • Extra-DomainS, Extra-DomainT • EvaluationQuestionT, StatementS • QuestionS, StatementT • NegativeFeedbackS, GroundingT All pairs: p < 0.0001

  20. Adjacency Pair Joining
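One plausible reading of the joining step: a greedy left-to-right merge of dialogue acts into adjacency structures, where consecutive acts that form a significant adjacency pair become a single observation symbol and all other acts pass through unchanged. The helper name and merged-symbol notation are ours, not from the paper.

```python
def join_adjacency_pairs(acts, significant_pairs):
    """Merge significant adjacency pairs into single symbols, left to right."""
    joined, i = [], 0
    while i < len(acts):
        if i + 1 < len(acts) and (acts[i], acts[i + 1]) in significant_pairs:
            joined.append(acts[i] + "+" + acts[i + 1])   # merged adjacency pair
            i += 2
        else:
            joined.append(acts[i])                        # individual dialogue act
            i += 1
    return joined

# Example:
# join_adjacency_pairs(["QuestionS", "StatementT", "GroundingS"],
#                      {("QuestionS", "StatementT")})
# -> ["QuestionS+StatementT", "GroundingS"]
```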

  21. Best-Fit HMM on Adjacency Structures

  22. Discussion • HMMs trained in unsupervised fashion • Meaningful dialogue modes emerged • Provide concise probabilistic summary of the nature of the tutorial interaction • Adjacency structure model more intuitive; slightly better log-likelihood fit • Captures certain dependencies while allowing probabilistic transitions too

  23. Future Work • Compare this HMM to other types of dialogue structure models on problems of interest (e.g., prediction) • Create different HMM “profiles” that summarize more- or less-effective tutoring sessions (Soller & Stevens 2007) • Enhance the HMM with task state information or utterance features

  24. Conclusion • Models of human-human tutorial dialogue structure are valuable for • Informing design of tutorial dialogue management systems • Gaining insight into the processes at work in learning through tutoring • Unsupervised HMM learning can provide descriptive insight into this dialogue structure • Adjacency pair analysis may enhance such probabilistic models

  25. Acknowledgments
