
Analyzing Argumentative Discourse Units in Online Interactions


Presentation Transcript


  1. Analyzing Argumentative Discourse Units in Online Interactions. Debanjan Ghosh, Smaranda Muresan, Nina Wacholder, Mark Aakhus and Matthew Mitsui. First Workshop on Argumentation Mining, ACL, June 26, 2014

  2. when we first tried the iPhone it felt natural immediately, (User1) That's very true. With the iPhone, the sweet goodness part of the UI is immediately apparent. After a minute or two, you're feeling empowered and comfortable. (User2) I disagree that the iPhone just “felt natural immediately”… in my opinion it feels restrictive and over simplified, sometimes to the point of frustration. (User3) Argumentative Discourse Units (ADU; Peldszus and Stede, 2013): Segmentation, Segment Classification, Relation Identification

  3. Annotation Challenges • A complex annotation scheme seems infeasible • The problem of high *cognitive load* (annotators have to read all the threads) • High complexity demands two or more annotators • Use of expert annotators for all tasks is costly

  4. Our Approach: Two-tiered Annotation Scheme • Coarse-grained annotation • Expert annotators (EAs) • Annotate entire thread • Fine-grained annotation • Novice annotators (Turkers) • Annotate only text labeled by EAs

  5. Our Approach: Two-tiered Annotation Scheme • Coarse-grained annotation • Expert annotators (EAs) • Annotate entire thread • Fine-grained annotation • Novice annotators (Turkers) • Annotate only text labeled by EAs

  6. Coarse-grained Expert Annotation [figure: Callout/Target links across Post1–Post4] Annotation based on Pragmatic Argumentation Theory (PAT; Van Eemeren et al., 1993)

  7. ADUs: Callout and Target • A Callout is a subsequent action that selects all or some part of a prior action (i.e., Target) and comments on it in some way. • A Target is a part of a prior action that has been called out by a subsequent action.
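As a side note, here is a minimal sketch of one way to represent these two ADU types and the link between them in code. The field names, character offsets, and dataclass layout are illustrative assumptions, not the paper's annotation format.

```python
# Hypothetical representation of Callout/Target ADUs and their link.
from dataclasses import dataclass

@dataclass
class ADU:
    post_id: str   # which post in the thread the span comes from
    start: int     # character offsets of the annotated span within that post
    end: int
    label: str     # "Callout" or "Target"

@dataclass
class CalloutTargetLink:
    callout: ADU   # the subsequent action commenting on a prior span
    target: ADU    # the prior span being called out

# Example: User3's Callout selecting part of User1's post as its Target.
target = ADU(post_id="post1", start=0, end=52, label="Target")
callout = ADU(post_id="post3", start=0, end=61, label="Callout")
link = CalloutTargetLink(callout=callout, target=target)
print(link)
```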

  8. Target: when we first tried the iPhone it felt natural immediately, (User1) Callout: That's very true. With the iPhone, the sweet goodness part of the UI is immediately apparent. After a minute or two, you're feeling empowered and comfortable. (User2) Callout: I disagree that the iPhone just “felt natural immediately”… in my opinion it feels restrictive and over simplified, sometimes to the point of frustration. (User3)

  9. More on Expert Annotations and Corpus • Five annotators were free to choose any text segment to represent an ADU • Four blogs and the first one hundred comments of each are used as our argumentative corpus • Android (iPhone vs. Android phones) • iPad (usability of iPad as a tablet) • Twitter (use of Twitter as a micro-blog platform) • Job Layoffs (layoffs and outsourcing)

  10. Inter-Annotator Agreement (IAA) for Expert Annotations • P/R/F1-based IAA (Wiebe et al., 2005) • exact match (EM) • overlap match (OM) • Krippendorff's α (Krippendorff, 2004)
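The span-level P/R/F1 agreement can be made concrete with a small sketch: one annotator's spans are treated as gold, the other's as predicted, under exact and overlap matching. The character offsets and the overlap criterion below are illustrative assumptions, not the authors' exact implementation, and Krippendorff's α is omitted.

```python
# Sketch of span-level P/R/F1 agreement in the spirit of Wiebe et al. (2005).
def span_agreement(gold, pred, mode="exact"):
    """Precision/recall/F1 between two annotators' (start, end) span sets."""
    def match(x, y):
        if mode == "exact":
            return x == y
        return x[0] < y[1] and y[0] < x[1]  # spans share at least one character

    prec = sum(any(match(p, g) for g in gold) for p in pred) / len(pred) if pred else 0.0
    rec = sum(any(match(g, p) for p in pred) for g in gold) / len(gold) if gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# Annotator B's second Callout only partially overlaps annotator A's.
ann_a = [(0, 52), (120, 180)]
ann_b = [(0, 52), (130, 200)]
print(span_agreement(ann_a, ann_b, mode="exact"))    # exact match (EM): 0.5 / 0.5 / 0.5
print(span_agreement(ann_a, ann_b, mode="overlap"))  # overlap match (OM): 1.0 / 1.0 / 1.0
```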

  11. Issues • Different IAA metrics have different outcomes • It is difficult to infer from IAA which segments of the text are easier or harder to annotate

  12. Our solution: Hierarchical Clustering We utilize a hierarchical clustering technique to cluster ADUs that are variants of the same Callout • Clusters covering five or four annotators contain Callouts that are plausibly easier to identify • Clusters selected by only one or two annotators are harder to identify
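A minimal sketch of how Callouts chosen by different annotators could be grouped into clusters that plausibly denote the same Callout. The 1 − overlap distance, average linkage, and 0.5 cut-off are illustrative assumptions; the paper does not specify its exact clustering configuration.

```python
# Hypothetical clustering of annotator Callout spans by character overlap.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Each Callout is (annotator_id, start, end) character offsets in the thread.
callouts = [("A1", 0, 60), ("A2", 5, 58), ("A3", 0, 62), ("A4", 200, 260), ("A5", 210, 255)]

def overlap_distance(x, y):
    inter = max(0, min(x[2], y[2]) - max(x[1], y[1]))
    union = max(x[2], y[2]) - min(x[1], y[1])
    return 1.0 - inter / union  # 0.0 = identical spans, 1.0 = disjoint spans

n = len(callouts)
dist = np.array([[overlap_distance(callouts[i], callouts[j]) for j in range(n)] for i in range(n)])
labels = fcluster(linkage(squareform(dist, checks=False), method="average"), t=0.5, criterion="distance")

# Clusters covering four or five annotators suggest "easy" Callouts; singletons suggest hard ones.
for c in sorted(set(labels)):
    print(f"cluster {c}:", [callouts[i][0] for i in range(n) if labels[i] == c])
```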

  13. Example of a Callout Cluster

  14. Motivation for a finer-grained annotation • What is the nature of the relation between a Callout and a Target? • Can we identify finer-grained ADUs in a Callout?

  15. Our Approach: Two-tiered Annotation Scheme • Coarse-grained annotation • Expert annotators (EAs) • Annotate entire thread • Fine-grained annotation • Novice annotators (Turkers) • Annotate only text labeled by EAs

  16. Novice Annotation: task 1 [figure: Target/Callout pairs labeled Agree/Disagree/Other] This is related to research on annotation of agreement/disagreement (Misra and Walker, 2013; Andreas et al., 2012).

  17. Target: when we first tried the iPhone it felt natural immediately, (User1) Callout: That's very true. With the iPhone, the sweet goodness part of the UI is immediately apparent. After a minute or two, you're feeling empowered and comfortable. (User2) Callout: I disagree that the iPhone just “felt natural immediately”… in my opinion it feels restrictive and over simplified, sometimes to the point of frustration. (User3)

  18. More on the Agree/Disagree Relation Label • For each Target/Callout pair we employed five Turkers • Fleiss' Kappa shows moderate agreement between the Turkers • 143 Agree / 153 Disagree / 50 Other data instances • We ran preliminary experiments for predicting the relation label (rule based, BoW, lexical features…) • Best results (F1): 66.9% (Agree), 62.9% (Disagree)
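A sketch of how Fleiss' kappa over the five Turkers' relation labels could be computed with statsmodels; the label matrix below is made-up toy data, not the paper's annotations.

```python
# Hypothetical Fleiss' kappa computation over Turker relation labels.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

AGREE, DISAGREE, OTHER = 0, 1, 2
# One row per Target/Callout pair, one column per Turker (five Turkers per pair).
turker_labels = np.array([
    [AGREE, AGREE, AGREE, DISAGREE, AGREE],
    [DISAGREE, DISAGREE, OTHER, DISAGREE, DISAGREE],
    [OTHER, AGREE, OTHER, OTHER, DISAGREE],
    [AGREE, AGREE, AGREE, AGREE, OTHER],
])
counts, _ = aggregate_raters(turker_labels)  # (n_pairs, n_categories) count table
print("Fleiss' kappa:", fleiss_kappa(counts))
```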

  19. Novice Annotation: task 2: Identifying Stance vs. Rationale and judging difficulty [figure: Target/Callout with Stance (S) and Rationale (R) segments] This is related to the justification identification task (Biran and Rambow, 2011)

  20. Stance: “That's very true.” Rationale: “With the iPhone, the sweet goodness part of the UI is immediately apparent. After a minute or two, you're feeling empowered and comfortable.” (User2) Stance: “I disagree that the iPhone just ‘felt natural immediately’.” Rationale: “… in my opinion it feels restrictive and over simplified, sometimes to the point of frustration.” (User3)

  21. Examples of Callout/Target pairs with difficulty level (majority voting)

  22. Difficulty judgment (majority voting)

  23. Conclusion • We propose a two-tiered annotation scheme for argument annotation in online discussion forums • Expert annotators detect Callout/Target pairs, while crowdsourcing is employed to discover finer units such as Stance/Rationale • Our study also helps detect text that is easy/hard to annotate • Preliminary experiments to predict agreement/disagreement among ADUs

  24. Future Work • Qualitative analysis of the Callout phenomenon to support finer-grained analysis • Study the different uses of ADUs in different situations • Annotate different domains (e.g., healthcare forums) and adjust our annotation scheme • Predictive modeling of the Stance/Rationale phenomenon

  25. Thank you!

  26. Example from the discussion thread [figure: User2 and User3 posts with Stance and Rationale segments highlighted]

  27. Predicting the Agree/Disagree Relation Label • Training data (143 Agree / 153 Disagree) • Salient features for the experiments • Baseline: rule based (`agree', `disagree') • Mutual Information (MI): MI is used to select words that represent each category • LexFeat: lexical features based on sentiment lexicons (Hu and Liu, 2004), lexical overlaps, initial words of the Callouts… • 10-fold CV using SVM
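A sketch of this setup using scikit-learn (an assumption; the paper does not name its toolkit): bag-of-words features, mutual-information word selection, and a linear SVM under cross-validation. The four toy texts, the labels, k=5, and 2-fold CV are placeholders; the paper reports 10-fold CV on the 143/153 Agree/Disagree instances with richer lexical features.

```python
# Hypothetical BoW + MI feature selection + linear SVM pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

callout_texts = [
    "That's very true, the sweet goodness of the UI is immediately apparent to everyone.",
    "I disagree that the iPhone just felt natural immediately, it feels restrictive and oversimplified.",
    "Exactly right, after a minute or two you are feeling empowered and comfortable with it.",
    "No, in my opinion the interface is frustrating and far too limited for real work.",
]
labels = ["agree", "disagree", "agree", "disagree"]

clf = make_pipeline(
    CountVectorizer(lowercase=True),          # bag-of-words (BoW) features
    SelectKBest(mutual_info_classif, k=5),    # MI-based selection of category words
    LinearSVC(),                              # linear SVM classifier
)
scores = cross_val_score(clf, callout_texts, labels, cv=2, scoring="f1_macro")
print("cross-validated macro-F1:", scores.mean())
```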

  28. Predicting the Agree/Disagree Relation Label (preliminary results) • Lexical features result in F1 scores between 60% and 70% for the Agree/Disagree relations • Ablation tests show that the initial words of the Callout are the strongest feature • The rule-based system shows very low recall (7%), which indicates that many Target-Callout relations are *implicit* • Limitation: lack of data (we are currently annotating more data…)

  29. # of Clusters for each Corpus • Clusters covering five or four annotators contain Callouts that are plausibly easier to identify • Clusters selected by only one or two annotators are harder to identify

  30. [figure: Target (User1) linked to Callout1 (User2) and Callout2 (User3)]

  31. [figure: Target (User1) linked to Callout1 (User2) and Callout2 (User3)]

  32. Fine-Grained Novice Annotation [figure: Target/Callout pairs] • Relation Identification (e.g., Agree/Disagree/Other) • Finer-Grained Annotation (e.g., Stance & Rationale)

  33. Motivation and Challenges [figure: thread of Post1–Post4] • Segmentation • Segment Classification • Relation Identification • Argumentative Discourse Units (ADU; Peldszus and Stede, 2013)

  34. Why do we propose a two-layer annotation? • A two-layer annotation schema • Expert Annotation • Five annotators who received extensive training for the task • Primary task includes selecting discourse units from users' posts (argumentative discourse units: ADUs), following Peldszus and Stede (2013) • Novice Annotation • Use of the Amazon Mechanical Turk (AMT) platform to detect the nature and role of the ADUs selected by the experts

  35. Annotation Schema for Expert Annotators • Callout • A Callout is a subsequent action that selects all or some part of a prior action (i.e., Target) and comments on it in some way. • Target • A Target is a part of a prior action that has been called out by a subsequent action.

  36. Motivation and Challenges • User generated conversational data provides a wealth of naturally generated arguments • Argument mining of such online interactions, however, is still in its infancy…

  37. Detail on Corpora • Four blog posts and their responses (the first 100 comments) from Technorati between 2008 and 2010. • We selected blog postings on the general topic of technology, which contain many disputes and arguments. • Together they are denoted as the argumentative corpus

  38. Motivation and Challenges (cont.) • A detailed single annotation scheme seems infeasible • The problem of high *cognitive load* (e.g. annotators have to read all the threads) • Use of expert annotators for all tasks is costly • We propose a scalable and principled two-tier scheme to annotate corpora for arguments

  39. Annotation Schema(s) • A two-layer annotation schema • Expert Annotation • Five annotators who received extensive training for the task • Primary tasks include a) segmentation, b) segment classification, and c) relation identification of discourse units selected from users' posts (argumentative discourse units: ADUs) • Novice Annotation • Use of the Amazon Mechanical Turk (AMT) platform to detect the nature and role of the ADUs selected by the experts

  40. Example from the discussion thread

  41. A picture is worth…

  42. Motivation and Challenges • Segmentation • Segment Classification • Relation Identification • Argument annotation includes three tasks (Peldszus and Stede, 2013)

  43. Summary of the Annotation Schema(s) • First stage of annotation • Annotators: expert (trained) annotators • A coarse-grained annotation scheme inspired by Pragmatic Argumentation Theory (PAT; Van Eemeren et al., 1993) • Segment, label, and link Callout and Target • Second stage of annotation • Annotators: novice (crowd) annotators • A finer-grained annotation to detect Stance and Rationale of an argument

  44. Expert Annotation (coarse-grained annotation) • Expert annotators: segmentation, labeling, linking (Peldszus and Stede, 2013) • Five expert (trained) annotators detect two types of ADUs • ADUs: Callout and Target

  45. The Argumentative Corpus [figure: four blog threads] Blogs and comments extracted from Technorati (2008-2010)

  46. Novice Annotations: Identifying Stance and Rationale Crowdsourcing • Identify the task-difficulty (very difficult….very easy) • Identify the text segments (Stance and Rationale)

  47. Novice Annotations: Identifying the relation between ADUs Crowdsourcing

  48. More on Expert Annotations • Annotators were free to choose any text segment to represent an ADU [figure: Splitters vs. Lumpers]

  49. Novice Annotation: task 1 (Identifying the relation: agree/disagree/other). This is related to annotation of agreement/disagreement (Misra and Walker, 2013; Andreas et al., 2012) and classification of stances (Somasundaran and Wiebe, 2010) in online forums.

  50. ADUs: Callout and Target
