
( current & future work )

[Figure: estimated probability of task success vs. average word-error rate (0–100), plotted separately for natives and non-natives; one curve per average likelihood of the correct hypothesis (avg. lik. from 0.9 down to 0.5), comparing the proposed model against the current heuristic]
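The figure comes from the poster's "estimated impact on task success" analysis, which relates task success to word-error rate (WER) and the average likelihood of the correct hypothesis (AVG-LIK) through a logistic regression, P(Task Success = 1) ← α + β·WER + γ·AVG-LIK. A minimal sketch of that model; the coefficient values below are illustrative assumptions, not the fitted values from the poster's 443 sessions:

```python
import math

def p_task_success(wer: float, avg_lik: float,
                   alpha: float = 1.0, beta: float = -0.05,
                   gamma: float = 2.0) -> float:
    """Logistic model of task success: sigmoid(alpha + beta*WER + gamma*AVG_LIK).

    wer is a percentage (0-100), avg_lik is in [0, 1]. The coefficients
    here are made up for illustration; on real data they would be fit by
    logistic regression over per-session (WER, AVG-LIK, success) triples.
    """
    z = alpha + beta * wer + gamma * avg_lik
    return 1.0 / (1.0 + math.exp(-z))
```

With β < 0 and γ > 0, the sketch reproduces the qualitative shape of the figure: success falls as recognition degrades and rises as the system's beliefs become more accurate.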





Presentation Transcript


constructing accurate beliefs in spoken dialog systems
Dan Bohus, Alexander I. Rudnicky
Computer Science Department, Carnegie Mellon University

1. abstract

We propose a data-driven approach for constructing more accurate beliefs over concept values in spoken dialog systems by integrating information across multiple turns in the conversation. The approach bridges existing work in confidence annotation and correction detection and provides a unified framework for belief updating. It significantly outperforms heuristic rules currently used in most spoken dialog systems.

2. problem

As a prerequisite for increased robustness and for making better decisions, dialog systems must be able to accurately assess the reliability of the information they use. Typically, recognition confidence scores provide an initial assessment of the reliability of the information obtained from the user. Ideally, a system should leverage information available in subsequent turns to update and improve the accuracy of its beliefs.

belief updating problem: given an initial belief over a concept Belief_t(C), a system action SA(C), and a user response R, compute the updated belief Belief_t+1(C).

system actions:
• explicit confirmation
• implicit confirmation
• unplanned implicit confirmation
• request [system asks for the value of a concept]
• unexpected update [system receives a value for a concept without asking for it, e.g. as a result of a misrecognition, the user over-answering, or an attempted topic shift]

example: explicit confirmation (correct value)
S: starting at what time do you need the room?
U: [STARTING AT TEN A M / 0.45] starting at ten a.m. → start-time = {10:00 / 0.45}
S: did you say you wanted the room starting at ten a.m.?
U: [GUEST UNTIL ONE / 0.89] yes until noon → start-time = {?}

example: implicit confirmation (correct value)
S: for when do you need the room?
U: [NEXT THURSDAY / 0.75] next Thursday → date = {2004-08-26 / 0.75}
S: a room for Thursday, August 26th … starting at what time do you need the room?
U: [FIVE TO SEVEN P_M / 0.58] five to seven p.m. → date = {?}

example: implicit confirmation (incorrect value)
S: how may I help you?
U: [THREE TO RESERVE A ROOM / 0.65] I'd like to reserve a room → start-time = {15:00 / 0.65}
S: starting at three p.m. … for which day do you need the conference room?
U: [CAN YOU DETAILS TIME / NONUNDERSTANDING (0.0)] I need a different time → start-time = {?}

belief representation:
• most accurately: a probability distribution over the set of possible values
• but: the system is not likely to "hear" more than 3 or 4 conflicting values; in our data, the maximum number of hypotheses accumulated for a concept through the interaction was 3, and the system heard more than 1 hypothesis for a concept in only 6.9% of cases
• compressed belief representation: k hypotheses + other
• for now, k = 1: top hypothesis + other [see current and future work for extensions]
• for now, only updates after system confirmation actions

compressed belief updating problem: given an initial confidence score Conf_init(th_C) for the top hypothesis h of a concept C, construct an updated confidence score Conf_upd(th_C) in light of the system confirmation action SA(C) and the follow-up user response R.

3. dataset

• user study with the RoomLine spoken dialog system
• phone-based, mixed-initiative system for conference room reservations
• access to live schedules for 13 rooms in 2 buildings (size, location, a/v equipment)
• 46 participants (first-time users), 10 scenario-based interactions each
• 449 dialogs, 8278 turns; corpus transcribed and annotated

models

model: Conf_upd(th_C) ← M(Conf_init(th_C), SA(C), R)
• logistic model tree [one for each system action]
• 1-level deep; root splits on answer type (YES / NO / other)
• leaves contain stepwise logistic regression models
• sample-efficient, with feature selection
• good probability outputs (minimize the cross entropy between model predictions and reality)

features:
• added prior information on concepts
• priors constructed manually

4. results

evaluation conditions:
• initial — error rate in system beliefs before the update
• heuristic — error rate in system beliefs after the update, using the heuristic update rules
• proposed model — error rate of the proposed logistic model tree
• oracle — oracle error rate

[Charts: belief error rates per system action (explicit confirmation, implicit confirmation, unplanned implicit confirmation, request, unexpected updates) under the evaluation conditions; for explicit confirmation the transcript lists 30.83%, 16.17%, 7.86%, 6.06%, and 5.52% in legend order (initial, heuristic, proposed model with basic feature set, proposed model with basic features + priors, oracle); the remaining bar values scattered in the transcript cannot be reliably paired with their bars]

5. conclusion

• proposed a data-driven approach for constructing more accurate beliefs in task-oriented spoken dialog systems
• bridges insights from the detection of misunderstandings and corrections into a unified belief updating framework
• the model significantly outperforms the heuristics currently used in most spoken dialog systems

( current & future work )

a. user response analysis
• how do users respond to correct and incorrect confirmations?
• how often do users correct the system?
[Charts: distributions of user responses following each system action; individual bar values are not reliably recoverable from the transcript]

b. estimated impact on task success
• how does the accuracy of the belief updating model affect task success?
• relates the accuracy of the belief updates to overall task success through a logistic regression model
• accuracy of belief updates measured as the average likelihood (AVG-LIK) of the correct hypothesis
• word-error rate (WER) acts as a confounding factor
• model: P(Task Success = 1) ← α + β·WER + γ·AVG-LIK
• model fitted on 443 data points (dialog sessions); β and γ capture the impact of WER and AVG-LIK on overall task success

c. using information from n-best lists
• currently: using only the top hypothesis from the recognizer
• next: extract more information from the n-best list or lattices

other extensions:
• belief representation: k hypotheses + other, via a multinomial generalized linear model
• system actions: extend from confirmation actions to all actions
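The compressed belief representation (top hypothesis + other, k = 1) and the update interface Conf_upd(th_C) ← M(Conf_init(th_C), SA(C), R) can be sketched as below. The accept/reject rule shown is a generic confidence heuristic of the kind the poster uses as its baseline, not the authors' exact rules; all names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """Compressed belief over a concept: top hypothesis + 'other' (k = 1)."""
    concept: str          # e.g. "start-time"
    top_hypothesis: str   # e.g. "10:00"
    confidence: float     # Conf(th_C): probability mass on the top hypothesis

def heuristic_update(belief: Belief, system_action: str,
                     answer_type: str) -> Belief:
    """A typical rule-based update after a confirmation action (illustrative):
    accept the hypothesis on 'YES', reject it on 'NO', otherwise keep it.
    """
    conf = belief.confidence
    if system_action in ("explicit_confirmation", "implicit_confirmation"):
        if answer_type == "YES":
            conf = 1.0
        elif answer_type == "NO":
            conf = 0.0
    return Belief(belief.concept, belief.top_hypothesis, conf)

# The start-time example from the poster: heard as 10:00 with confidence
# 0.45, then explicitly confirmed and answered "yes".
b = Belief("start-time", "10:00", 0.45)
updated = heuristic_update(b, "explicit_confirmation", "YES")
```

The point of the poster is that such hard accept/reject rules discard the initial confidence and any graded evidence in the response, which is what the learned model recovers.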
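The proposed model is a logistic model tree, one per system action: a 1-level tree whose root splits on answer type (YES / NO / other) and whose leaves hold stepwise logistic regressions over features such as the initial confidence score. A dependency-free sketch; the weights below are hand-set assumptions for illustration, whereas the real leaves are fit from data:

```python
import math

def _sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

class LogisticModelTree:
    """1-level tree: root splits on answer type, leaves are logistic models.

    leaf_weights maps an answer type to (bias, weight on the initial
    confidence). In the poster each leaf is a stepwise logistic regression
    over a richer feature set, trained to minimize cross entropy.
    """
    def __init__(self, leaf_weights):
        self.leaf_weights = leaf_weights

    def updated_confidence(self, conf_init: float, answer_type: str) -> float:
        bias, w = self.leaf_weights.get(answer_type,
                                        self.leaf_weights["other"])
        return _sigmoid(bias + w * conf_init)

# Illustrative tree for explicit confirmations: "yes" pushes the updated
# confidence up, "no" pushes it down, other responses stay near the prior.
explicit_confirm_model = LogisticModelTree({
    "YES":   (2.0, 3.0),
    "NO":    (-3.0, 2.0),
    "other": (-1.0, 2.5),
})
conf = explicit_confirm_model.updated_confidence(0.45, "YES")
```

Unlike the heuristic, the output stays a graded probability in (0, 1) and remains monotone in the initial confidence, which is what lets the updated beliefs stay calibrated.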
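The poster trains its leaves to produce "good probability outputs" by minimizing the cross entropy between model predictions and reality. A small sketch of that evaluation criterion for binary (top hypothesis correct / incorrect) labels; the function name and the example confidences are my own:

```python
import math

def cross_entropy(predicted, actual):
    """Mean binary cross entropy between predicted confidences and 0/1
    labels (1 = the top hypothesis was in fact correct). Lower is better;
    a well-calibrated belief model minimizes this quantity.
    """
    eps = 1e-12  # guard against log(0)
    total = 0.0
    for p, y in zip(predicted, actual):
        p = min(max(p, eps), 1.0 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(predicted)

# Confident-and-right beats unsure, which beats confident-and-wrong.
good = cross_entropy([0.9, 0.1], [1, 0])
unsure = cross_entropy([0.5, 0.5], [1, 0])
bad = cross_entropy([0.1, 0.9], [1, 0])
```

Cross entropy, unlike raw error rate, penalizes a belief model for being confidently wrong, which is why it is the natural training objective when the updated confidences feed later dialog decisions.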
