
Does History Help?


Presentation Transcript


  1. Does History Help? An Experiment on How Context Affects Crowdsourcing Dialogue Annotation. Elnaz Nouri, Computer Science Department, University of Southern California; Natural Dialogue Group, Institute for Creative Technologies

  2. Crowdsourcing Annotation: Faster (?) Cheaper (?) Quality (?) (Snow et al., 2008)

  3. In Crowdsourcing Dialogue Annotation Tasks • Dialogue data is sequential by nature. • Does providing context from previous parts of the dialogue (e.g. earlier turns) affect the annotation of the target part? Example: Judge the sentiment of the following turn of the dialogue: Person 1: Come on out, honey! I'm telling you, you look good! Tell her she looks good, tell her she looks good. Person 2: Oh my God, you look so good!

  4. From Seinfeld… WAITRESS: Tuna on toast, coleslaw, cup of coffee. GEORGE: Yeah. No, no, no, wait a minute, I always have tuna on toast. Nothing's ever worked out for me with tuna on toast. I want the complete opposite of tuna on toast. Chicken salad, on rye, un-toasted, with a side of potato salad ... and a cup of tea. JERRY: You know chicken salad is not the opposite of tuna, salmon is the opposite of tuna, 'cuz salmon swim against the current, and the tuna swim with it. GEORGE: Good for the tuna! Link to Video

  5. Interesting Questions • General aspect: • Do annotators need context to do each instance of the annotation? • Can we present them with only the needed previous context? • How does context affect the stability of the annotation? • Crowdsourcing aspect: • Should we present the whole dialogue to the annotator if the compensation rate is low? • Can we consider each annotation task as a stand-alone micro-task? • Do annotators on Amazon Mechanical Turk read the instructions or the context provided?

  6. So we ran an experiment…

  7. The Idea: A Variable Context Window Size. Example dialogue:
  • How is it going? I am Bronson from the Hill restaurant.
  • I am Milton. I am from the Valley restaurant.
  • Alright, cool. So looks like we got some good resources on the table. And, uh, we want to find a way that works for both of us.
  • Uh, yeah, I agree. I just want to, we want to maximize both of our profits. So what do we have right here?
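A minimal sketch of how such a variable-context stimulus could be built, assuming the dialogue is stored as a plain list of turn strings; the function name and the "(context)" / "(annotate this turn)" labels are hypothetical, not taken from the slides:

```python
# Hypothetical sketch: build an annotation stimulus for one target turn,
# prepending up to `window_size` previous turns as context.

def build_stimulus(turns, target_index, window_size):
    """Return the text shown to the annotator for one instance."""
    start = max(0, target_index - window_size)
    lines = [f"(context) {t}" for t in turns[start:target_index]]
    lines.append(f"(annotate this turn) {turns[target_index]}")
    return "\n".join(lines)

dialogue = [
    "How is it going? I am Bronson from the Hill restaurant.",
    "I am Milton. I am from the Valley restaurant.",
    "Alright, cool. So looks like we got some good resources on the table.",
    "Uh, yeah, I agree. I just want to maximize both of our profits.",
]

# Window size 0 shows only the target turn; window size 3 shows up to
# three previous turns, as in the example stimuli on slide 10.
print(build_stimulus(dialogue, target_index=3, window_size=3))
```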

  8. The Data Set • The “Farmers Market” negotiation dataset • 41 dyadic sessions of negotiation based on instructions • Two restaurant owners are trying to divide some items among themselves

  9. The Task: Sentiment Analysis • 3 dialogues used: D1 (31 turns), D2 (16 turns), D3 (30 turns) = 77 turns • 5 annotators for each instance: A1, A2, A3, A4, A5 • annotators recruited on Amazon Mechanical Turk • $0.02 for annotating each instance • "Sentiment Annotation Task" on the turns of the dialogue
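For scale, the cost per context-window condition follows directly from the numbers on this slide (a back-of-the-envelope calculation, not something stated in the presentation):

```python
turns = 31 + 16 + 30          # D1 + D2 + D3 = 77 turns
annotators_per_turn = 5       # A1 through A5
pay_per_instance = 0.02       # USD per annotated instance on Mechanical Turk

cost = turns * annotators_per_turn * pay_per_instance
print(f"{turns} turns -> ${cost:.2f} per context-window condition")
# 77 turns -> $7.70 per context-window condition
```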

  10. Example Stimuli: Previous Context Window Size = 3

  11. Example Annotation Result. Gold annotation: the whole dialogue was presented to the annotator.

  12. Evaluation Method 1: Distance to the Gold Annotation (* marks the minimum distance from the Gold annotation)
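The slides do not spell out how the distance is computed; the sketch below assumes sentiment labels on a numeric scale (e.g. -1, 0, +1) and uses the mean absolute difference to the gold labels, which is one plausible instantiation:

```python
def distance_to_gold(annotations, gold):
    """Mean absolute difference between one annotator's labels and the
    gold labels obtained with the whole dialogue as context."""
    assert len(annotations) == len(gold)
    return sum(abs(a - g) for a, g in zip(annotations, gold)) / len(gold)

gold = [1, 0, -1, 0, 1]        # whole-dialogue (gold) condition
annotator = [1, 0, 0, 0, 1]    # one annotator, one context-window condition
print(distance_to_gold(annotator, gold))   # 0.2
```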

  13. Evaluation Method 2: Inter-annotator Agreement. Hypothesis: higher inter-annotator reliability implies more stability and is an indicator of the optimal context window size. The differences between window sizes were not significant according to a t-test, except for window size 0. (* marks the maximum agreement)
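The agreement statistic is not named on the slide; as one simple possibility, the sketch below computes pairwise observed agreement among the five annotators for a single window-size condition (a chance-corrected coefficient such as Krippendorff's alpha would be a natural alternative):

```python
from itertools import combinations

def pairwise_agreement(label_matrix):
    """label_matrix[i][j] = label that annotator i gave to turn j."""
    pairs = list(combinations(range(len(label_matrix)), 2))
    n_items = len(label_matrix[0])
    agreements = sum(
        label_matrix[a][j] == label_matrix[b][j]
        for a, b in pairs for j in range(n_items)
    )
    return agreements / (len(pairs) * n_items)

# Five annotators (A1..A5) labeling the same five turns (toy values).
labels = [
    [1, 0, -1, 0, 1],
    [1, 0, -1, 0, 0],
    [1, 0, -1, 1, 1],
    [1, 0, -1, 0, 1],
    [0, 0, -1, 0, 1],
]
print(round(pairwise_agreement(labels), 3))  # agreement for this window size
```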

  14. Conclusions (and considerations) Our results imply that: • the number of previous turns doesn't really affect the annotation of the target turn, so it is not necessary to show a large number of previous turns or the whole dialogue. • a context window size of 3 is perhaps enough to do the job. Considerations: • the sample size is very small • the nature of the dialogues and the negotiation task might have affected the results • our dataset wasn't very emotional! • these are not real negotiations or conversations • the annotation task itself can also affect the outcome

  15. Future Work Further investigation is needed: • Different datasets • Different annotation tasks • Appropriate metrics for measuring the effect of context • A suitable baseline annotation for comparison. Questions? Please tell me what you think! Your feedback and ideas are sincerely appreciated!
