

Can you believe what you read online?: Modeling and Predicting Trustworthiness of Online Textual Information. Ph.D. Defense, V.G. Vinod Vydiswaran, Department of Computer Science, University of Illinois at Urbana-Champaign, November 30th, 2012.





Presentation Transcript


  1. Can you believe what you read online?: Modeling and Predicting Trustworthiness of Online Textual Information Ph.D. Defense V.G. Vinod Vydiswaran Department of Computer Science University of Illinois at Urbana-Champaign November 30th, 2012

  2. Thanks…

  3. Web content: structured and free-text Are all these pieces of information equally trustworthy?

  4. Reputed sources can make mistakes Killian documents controversy Anyone can make a mistake…

  5. Sources/claims may mislead on purpose Abortion Services 3%

  6. Many (manual) fact verification sites… … but they may not agree!!!

  7. Obama removes U.S. flag from his plane Majority voting is not a solution. Understanding text matters!

  8. Spin doctors on climate change

  9. Every coin has two sides Presenting contrasting viewpoints may help. People tend to be biased and may be exposed to only one side of the story (confirmation bias, effects of the filter bubble). For intelligent choices, it is wiser to also know about the other side. What is considered trustworthy may depend on the person’s viewpoint.

  10. Thesis statement • Goal: Verify free-text claims. • Approach: Build algorithmic and computational models to compute trustworthiness of textual claims, by: • Incorporating the context in which a source expresses a claim to improve trust computation • Utilizing evidence from other sources in the network • Exploring use of various data sources to achieve this goal, including user-generated content • Helping users validate claims through credible information

  11. Is drinking alcohol good for the body? “A review article of the latest studies looking at red wine and cardiovascular health shows drinking two to three glasses of red wine daily is good for the heart.” Dr. Bauer Sumpio, M.D., Prof., Yale School of Medicine, Journal of the American College of Surgeons, 2005. “Women who regularly drink a small amount of alcohol — less than a drink a day — are more likely to develop breast cancer in their lifetime than those who don’t.” Dr. Wendy Chen, Asst. Prof. (Medicine), Harvard Medical School, Journal of the American Medical Association (JAMA), 2011. Which one is more trustworthy? What factors affect this decision? What do other sources/documents say about this issue?

  12. Actors in the trustworthiness story: a Claim (“Drinking alcohol is good for the body.”), its Sources (Dr. Bauer Sumpio, Dr. Wendy Chen), supporting Evidence (the two quoted passages above), the underlying Data (news corpus, medical sites, forums, blogs), and the Users of the ClaimVerifier system.

  13. Thesis contribution: for free-text claims, (a) understand the factors that affect users’ decisions, (b) incorporate textual evidence into trust models, and (c) provide the building blocks to design ClaimVerifier, a novel system for users to validate textual claims.

  14. Building a claim verification system. Algorithmic/computational issues [KDD’11, ECIR’12]: How to assign truth values to textual claims? Are sources trustworthy? How to build trust models that make use of evidence? Data/language understanding [KDD-DMH’11]: How to find relevant pieces of evidence? What kind of data can be utilized? HCI issues [CIKM’12, ASIS&T’12]: How to present evidence? How to address user bias?

  15. Key takeaway messages: 1. Text can be used to learn source trustworthiness, which may be used as priors. 2. Information from blogs and forums is helpful too. 3. Knowing why a claim is true is important. 4. Source expertise and contrasting viewpoints help.

  16. Thesis components (and talk outline): 1. Measuring source trustworthiness. 2. Using forums as data source. 3. Content-based trust propagation models. 4. Interface design factors affecting credibility and learning. 5. Building ClaimVerifier.

  17. 1 Identify trustworthy websites (sources) Case study: Medical websites Joint work with Parikshit Sondhi and ChengXiang Zhai [ECIR 2012]

  18. Problem Statement • For a (medical) website • What features indicate trustworthiness? • How can you automate extracting these features? • Can you learn to distinguish trustworthy websites from others?

  19. Trustworthiness of medical websites HON code Principles • Authoritative • Complementarity • Privacy • Attribution • Justifiability • Transparency • Financial disclosure • Advertising policy Our model (automated) • Link-based features • Transparency • Privacy Policy • Advertising links • Page-based features • Commercial words • Content words • Presentation • Website-based features • Page Rank

  20. Research questions, revisited • What features indicate trustworthiness? HON code principles. • How can you automate extracting these features? Link, page, and site features. • Can you learn to distinguish trustworthy websites from others? Yes. • Can you bias results to prefer trustworthy websites? Learned an SVM and used it to re-rank results.

  21. Use classifier to re-rank results: +8.5% relative improvement
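The classifier-based re-ranking can be sketched as follows: blend the search engine's relevance ranking with a trustworthiness score. The feature names, weights, and blending rule below are illustrative placeholders standing in for the learned SVM over link, page, and site features; they are not the actual model from the thesis.

```python
# Sketch: re-rank search results by a trustworthiness score.
# Features and weights are invented for illustration only.

# Hypothetical features: privacy-policy presence, ad-link ratio, commercial-word ratio
WEIGHTS = {"has_privacy_policy": 1.0, "ad_link_ratio": -2.0, "commercial_word_ratio": -1.5}

def trust_score(features):
    """Linear score, standing in for an SVM decision function."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def rerank(results, alpha=0.7):
    """Blend the original relevance rank with the trust score (alpha weights relevance)."""
    n = len(results)
    def combined(item):
        url, rank, feats = item
        relevance = (n - rank) / n  # rank 0 = top result; higher value is better
        return alpha * relevance + (1 - alpha) * trust_score(feats)
    return sorted(results, key=combined, reverse=True)

# Each result: (url, original rank, feature vector); URLs are made up.
results = [
    ("site-a.example", 0, {"has_privacy_policy": 0, "ad_link_ratio": 0.6, "commercial_word_ratio": 0.4}),
    ("site-b.example", 1, {"has_privacy_policy": 1, "ad_link_ratio": 0.1, "commercial_word_ratio": 0.1}),
]
print([url for url, _, _ in rerank(results)])  # the more trustworthy site moves up
```

A real system would learn the weights from labeled trustworthy/untrustworthy sites (e.g., via an SVM) rather than set them by hand.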

  22. 2 Understanding what is written (evidence) Case study: Scoring medical claims based on health forums [KDD 2011 Workshop on Data Mining for Medicine and Healthcare]

  23. Scoring claims via community knowledge. Example claims: “Essiac tea is an effective treatment for cancer.” “Chemotherapy is an effective treatment for cancer.” Claims from a Claim DB are scored against an Evidence & Support DB.

  24. Treatment effectiveness based on forums. QUESTION: Which treatments are more effective than others for a disease? Claims: treatment claims. Evidence: forum posts describing effectiveness of the treatment. Data: health forums and medical message boards. (The Source and Users components are ignored in this case study.)

  25. Key steps. A. Collect relevant evidence for claims: relation retrieval, query formulation. B. Analyze the evidence documents: parse each result snippet, find the sentiment expressed in the snippet. C. Score and aggregate evidence: score snippets, then aggregate them to get a claim score. Output: ranked treatment claims.
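Steps B and C (scoring snippets by the sentiment they express, then aggregating into a claim-level score) can be sketched as below. The tiny opinion lexicon and the aggregation rule are illustrative stand-ins, not the actual lexicon or scoring function used in the study.

```python
# Sketch: score forum snippets by opinion words, aggregate per claim.
# The lexicon below is a tiny illustrative stand-in.

POSITIVE = {"effective", "helped", "improved", "relief"}
NEGATIVE = {"worse", "useless", "ineffective", "harmful"}

def score_snippet(snippet):
    """Net sentiment of a snippet: +1 per positive opinion word, -1 per negative."""
    words = snippet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def claim_score(snippets):
    """Aggregate snippet scores into one claim-level score (mean of nonzero votes)."""
    votes = [score_snippet(s) for s in snippets]
    votes = [v for v in votes if v != 0]  # drop neutral snippets
    return sum(votes) / len(votes) if votes else 0.0

snippets = [
    "chemotherapy was effective and improved my prognosis",
    "the side effects made things worse for a while",
]
print(claim_score(snippets))
```

Claims whose aggregated score is higher rank above others, yielding the ranked treatment list.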

  26. Treatment claims considered

  27. Results: Ranking valid treatments • Datasets • Skewed: 5 random valid + all invalid treatments • Balanced: 5 random valid + 5 random invalid treatments • Finding: Our approach improves the ranking of valid treatments, with significant gains on the Skewed dataset.

  28. Measuring site “trustworthiness”: as the ratio of degradation (noise) increases, the database score, and hence the measured trustworthiness, should decrease. [Figure: database score vs. ratio of degradation]

  29. Over all six disease test sets • As noise added to the claim database, the overall score reduces. • Exception: Arthritis, because it starts off with a negative score

  30. Conclusion: Scoring claims using forums 2 It is feasible to score the trustworthiness of claims using signal from millions of patient posts: the “wisdom of the crowd”. We scored treatment posts based on the subjectivity of opinion words, and extended the notion to score databases. It is possible to reliably leverage this general idea of validating knowledge through crowd-sourcing.

  31. 3 Content-Driven Trust Propagation Framework Case study: News Trustworthiness [KDD 2011]

  32. Problem: How to verify claims? Claims (e.g., “Essiac tea treats cancer.”, “Obama removed U.S. flag.”) are linked to passages that give evidence for them, which in turn come from web sources; trust flows across sources, evidence, and claims.

  33. Incorporating text in trust models. Sources, evidence, and claims (Claim 1, Claim 2, …, Claim n). 1. Textual evidence. 2. Supports adding IE accuracy, relevance, and similarity between texts. Handles free-text claims; structured data is a special case.

  34. Computing Trust scores • Veracity of a claim depends on • the evidence documents for the claim and their sources. • Trustworthiness of a source is based on the claims it supports. • Confidence in an evidence document depends on source trustworthiness and confidence in other similar documents. Trust scores computed iteratively
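The iterative computation above can be sketched as follows. The update rules are simplified, TruthFinder-style illustrations of the mutual reinforcement between source trustworthiness and claim veracity; the source names are hypothetical, and the full model additionally propagates confidence among similar evidence documents.

```python
# Sketch: iterative trust propagation between sources and claims.
# Simplified TruthFinder-style updates; not the exact thesis model.

def iterate_trust(evidence, n_iters=20):
    """evidence: list of (source, claim, relevance) triples, relevance in (0, 1)."""
    sources = {s for s, _, _ in evidence}
    claims = {c for _, c, _ in evidence}
    trust = {s: 0.5 for s in sources}      # source trustworthiness
    veracity = {c: 0.5 for c in claims}    # claim truth value

    for _ in range(n_iters):
        # A claim looks true if at least one trusted, relevant source
        # supports it (probabilistic OR over its evidence).
        for c in claims:
            miss = 1.0
            for s, c2, r in evidence:
                if c2 == c:
                    miss *= 1.0 - trust[s] * r
            veracity[c] = 1.0 - miss
        # A source is trustworthy if the claims it supports look true.
        for s in sources:
            supported = [veracity[c] for s2, c, _ in evidence if s2 == s]
            trust[s] = sum(supported) / len(supported)
    return trust, veracity

# Hypothetical example: two sources corroborate claim1, one supports claim2.
evidence = [("Dr. Sumpio", "claim1", 0.9),
            ("JAMA", "claim1", 0.9),
            ("forum-post", "claim2", 0.9)]
trust, veracity = iterate_trust(evidence)
```

Claims corroborated by multiple trusted sources gain veracity, sources supporting such claims gain trust, and the scores stabilize after a few iterations.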

  35. Relationship to other trust models • TruthFinder [Yin, Han, & Yu, 2007]; Sums, Investment [Pasternack & Roth, 2010]: structured claims, no context • Mutually exclusive claims, constraints [Pasternack & Roth, 2011] • Structure on sources, groups [Pasternack & Roth, 2011] • Source copying [Dong, Srivastava, et al., 2009]

  36. Application: Trustworthiness in news. Sources: news media (or reporters). Evidence: news stories. Claims: e.g., is news coverage on a particular topic or genre biased? Questions: Which news stories can you trust? Whom can you trust? How true is a claim?

  37. Using the trust model to boost retrieval: +6.3% relative improvement. Documents are scored on a 1–5 star scale by NewsTrust users; this is used as the gold judgment to compute NDCG values.
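The NDCG numbers follow the standard discounted-cumulative-gain formula over graded (star) judgments; a minimal sketch, using made-up star values rather than NewsTrust data:

```python
# NDCG over graded relevance judgments (e.g., 1-5 stars).
# Star values in the example are invented for illustration.
import math

def dcg(stars):
    """Discounted cumulative gain: higher stars early in the ranking count more."""
    return sum((2 ** rel - 1) / math.log2(i + 2) for i, rel in enumerate(stars))

def ndcg(stars, k=None):
    """DCG of the given ranking divided by DCG of the ideal (sorted) ranking."""
    k = k or len(stars)
    ideal = sorted(stars, reverse=True)
    return dcg(stars[:k]) / dcg(ideal[:k])

print(round(ndcg([3, 5, 4, 1]), 3))  # a perfectly ordered ranking would give 1.0
```

A ranking that places the 5-star document first scores NDCG = 1.0; a relative improvement like the +6.3% above compares NDCG with and without the trust model.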

  38. Conclusion: Content-driven trust models 3 • The truth value of a claim depends on its source as well as on evidence. • Evidence documents influence each other and have different relevance to claims. • We developed a computational framework that associates relevant stories (evidence) with claims and sources. • Global analysis of this data, taking into account the relations between stories, their relevance, and their sources, allows us to determine trustworthiness values over sources and claims. • Experiments with news trustworthiness show promising results on incorporating evidence in trustworthiness computation.

  39. Where are we? 1. Measuring source trustworthiness. 2. Using forums as data source. 3. Content-based trust propagation models. 4. Interface design factors affecting credibility and learning.

  40. Do human biases affect trustworthiness? 4 Questions for the Users component: How to present evidence? How to address user bias? (BiasTrust, within the ClaimVerifier framework.)

  41. 4 BiasTrust: Understanding how users perceive information … How presentation of evidence documents affects readership and learning Joint work with Peter Pirolli, PARC [CIKM ’12, ASIS&T ’12]

  42. Can claim verification be automated? Traditional search: users search for a claim, and pieces of evidence are looked up based only on relevance. Evidence search (ClaimVerifier): pieces of evidence both supporting and opposing the claim are looked up.

  43. Evidence Search • Original goal: Given a claim c, find the truth value of c. • Modified goal: Given a claim c, don’t tell me whether it is true or not, but show the evidence • Simple claims: “Drinking milk is unhealthy for humans.” • Find relevant documents • Determine polarity • Evaluate trust • Baselines: popularity, expertise ranking • Via information network (who says what else…)

  44. 5 Challenges in presenting evidence: natural language understanding, information retrieval, human-computer interaction • What makes good evidence? • Simply written, addresses claims directly? • Avoids redundancy? • Helps with polarity classification? • Helps in evaluating trust? • How to present results that best satisfy users? • What do users prefer: information from credible sources, or information that closely aligns with their viewpoint? • Does the judgment change if credibility/bias information is visible to the user?

  45. BiasTrust: Research Questions What do people trust when learning about a topic – information from credible sources or information that aligns with their bias? Does display of contrasting viewpoints help? Are (relevance) judgments on documents affected by user bias? Do the judgments change if credibility/ bias information is visible to the user?

  46. BiasTrust: User study task setup • Participants asked to learn more about a “controversial” topic • Participants are shown quotes (documents) from “experts” on the topic • Expertise varies, is subjective • Perceived expertise varies much more • Participants are asked to judge if quotes are biased, informative, interesting • Pre- and post-surveys measure extent of learning

  47. Many “controversial” topics Health Science Politics Education • Is milk good for you? • Is organic milk healthier? Raw? Flavored? • Does milk cause early puberty? • Are alternative energy sources viable? • Different sources of alternative energy • Israeli – Palestinian Conflict • Statehood? History? Settlements? • International involvement, solution theories • Creationism vs. Evolution? • Global warming

  48. Factors studied in the user study. Contrastive viewpoint scheme vs. single viewpoint scheme (options: “Show me more passages”, “Show me a passage from an opposing viewpoint”, “Quit”). Multiple documents per screen vs. single document per screen (options: “Show me more passages”, “Quit”). Questions: Does contrastive display help or hinder learning? Do multiple documents per page have any effect? Does sorting results by topic help?

  49. Factors studied in the user study (2) • Effect of display of source expertise on • readership • which documents subjects consider biased • which documents subjects agree with Experiment 1: Hide source expertise Experiment 2: Vary source expertise • Uniform distribution: Source expertise between 1 and 5 stars • Bi-modal distribution: Source expertise either 1 star or 3 stars

  50. Interface variants • Possible to study them in groups • SINgle vs. MULtiple documents/screen • BIModal vs. UNIform rating scheme
