Search and the New Economy Session 5 Mining User-Generated Content



  1. Search and the New Economy, Session 5: Mining User-Generated Content. Prof. Panos Ipeirotis

  2. Today’s Objectives • Tracking preferences using social networks • Facebook API • Trend tracking using Facebook • Mining positive and negative opinions • Sentiment classification for product reviews • Feature-specific opinion tracking • Economic-aware opinion mining • Reputation systems in marketplaces • Quantifying sentiment using econometrics

  3. Top-10, Zeitgeist, Pulse, … • Tracking top preferences has been around forever

  4. Online Social Networking Sites • Preferences listed and easily accessible

  5. Facebook API • Content easily extractable • Easy to “slice and dice” • List the top-5 books for 30-year-old New Yorkers • List the book that had the highest increase across the female population last week • …
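
To make the “slice and dice” idea concrete, here is a minimal Python sketch. It assumes the profile data has already been pulled through the Facebook API into plain dictionaries; the field names (age, city, books) and the sample records are illustrative assumptions, not the actual API response format.

```python
from collections import Counter

# Hypothetical profile records, assumed to have already been fetched via the
# Facebook API; the fields (age, city, books) are illustrative only.
profiles = [
    {"age": 30, "city": "New York", "books": ["Freakonomics", "The Tipping Point"]},
    {"age": 30, "city": "New York", "books": ["Freakonomics", "Moneyball"]},
    {"age": 42, "city": "Boston",   "books": ["The Tipping Point"]},
]

def top_books(profiles, age=None, city=None, k=5):
    """Count favorite books among profiles matching the given demographic slice."""
    counts = Counter()
    for p in profiles:
        if age is not None and p["age"] != age:
            continue
        if city is not None and p["city"] != city:
            continue
        counts.update(p["books"])
    return counts.most_common(k)

# Top-5 books for 30-year-old New Yorkers
print(top_books(profiles, age=30, city="New York"))
```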

  6. Demo

  7. Today’s Objectives • Tracking preferences using social networks • Facebook API • Trend tracking using Facebook • Mining positive and negative opinions • Sentiment classification for product reviews • Feature-specific opinion tracking • Economic-aware opinion mining • Reputation systems in marketplaces • Quantifying sentiment using econometrics

  8. Customer-generated Reviews • Amazon.com started with books • Today there are review sites for almost everything • In contrast to “favorites” we can get information for less popular products

  9. Questions • Are reviews representative? • How do people express sentiment?

  10. [Figure: an Amazon product review, annotated with the review text, its rating (1 … 5 stars), and its helpfulness votes from other customers]

  11. Do People Trust Reviews? • Law of large numbers: a single review, no; multiple reviews, yes • Peer feedback: number of “useful” votes • Perceived usefulness is affected by: • Identity disclosure: users trust real people • Mixture of objective and subjective elements • Readability, grammaticality • Negative reviews that are useful may increase sales! (Why?)

  12. Are Reviews Representative? • What is the shape of the distribution of the number of stars (1 … 5)? Guess? • [Figure: blank histograms of counts over the 1-5 star ratings]

  13. Observation 1: Reporting Bias • [Figure: histogram of counts over the 1-5 star ratings] • Why? Implications for word-of-mouth (WOM) strategy?

  14. Possible Reasons for Biases • People don’t like to be critical • People do not post if they do not feel strongly about the product (positively or negatively)

  15. Observation 2: The SpongeBob Effect • [Figure: SpongeBob SquarePants versus the Oscar winners]

  16. Oscar Winners 2000-2005: Average Rating 3.7 Stars

  17. SpongeBob DVDs: Average Rating 4.1 Stars

  18. And the Winner is… SpongeBob! • If the SpongeBob effect is common, then ratings do not accurately signal the quality of the resource

  19. What is Happening Here? • People choose movies they think they will like, and often they are right • Ratings only tell us that “fans of SpongeBob like SpongeBob” • Self-selection • Oscar winners draw a wider audience • Rating is much more representative of the general population • When SpongeBob gets a wider audience, his ratings drop

  20. Effect of Self-Selection: Example • 10 people see SpongeBob’s 4-star ratings • 3 are already SpongeBob fans, rent the movie, award 5 stars • 6 already know they don’t like SpongeBob, do not see the movie • The last person doesn’t know SpongeBob, is impressed by the high ratings, rents the movie, rates it 1 star • Result: • Average rating remains unchanged: (5+5+5+1)/4 = 4 stars • 9 of the 10 consumers did not really need the rating system • The only consumer who actually used the rating system was misled

  21. Bias-Resistant Reputation System • We want P(S), but we collect data on P(S|R) • S = consumer is satisfied with the resource • R = resource was selected (and reviewed) • However, P(S|E) ≈ P(S|E,R) • E = consumer expects to like the resource • Likelihood of satisfaction depends primarily on the expectation of satisfaction, not on the selection decision • If we can collect the prior expectation, the gap between the evaluation group and the feedback group disappears • Whether you select the resource or not doesn’t matter
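
One way to write out the argument on this slide, using the notation defined above (S, R, E); this is a restatement of the slide, not an additional result.

```latex
% S = satisfied, R = resource selected (and reviewed), E = expected to like the resource.
% Ratings estimate the conditional, not the marginal, probability of satisfaction:
\underbrace{P(S \mid R)}_{\text{what ratings measure}} \;\neq\; \underbrace{P(S)}_{\text{what a new consumer needs}}
% Conditioning on the prior expectation makes the selection decision nearly irrelevant:
P(S \mid E, R) \;\approx\; P(S \mid E)
% So if the system also records E, ratings can be reported per expectation group,
% closing the gap between the evaluation group and the feedback group.
```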

  22. Bias-Resistant Reputation System • Before viewing: “I think I will: love this movie / like this movie / it will be just OK / somewhat dislike this movie / hate this movie” • After viewing: “I liked this movie: much more than expected / more than expected / about the same as I expected / less than I expected / much less than I expected” • [Figure: responses grouped into big fans, skeptics, and everyone else]

  23. Conclusions • Reporting bias and self-selection bias exist in most cases of consumer choice • Bias means that user ratings do not reflect the distribution of satisfaction in the evaluation group • Consumers have no idea what “discount” to apply to ratings to get a true idea of quality • Many current rating systems may be self-defeating • Accurate ratings promote self-selection, which leads to inaccurate ratings • Collecting prior expectations may help address this problem

  24. OK, we know the biases • Can we get more knowledge? • Can we dig deeper than the numeric ratings? • “Read the reviews!” • “There are too many!”

  25. Independent Sentiment Analysis • Often we need to analyze opinions • Can we provide review summaries? • What should the summary be?

  26. Basic Sentiment Classification • Classify full documents (e.g., reviews, blog postings) based on the overall sentiment • Positive, negative, and (possibly) neutral • Similar to, but also different from, topic-based text classification • In topic-based classification, topic words are important • Diabetes, cholesterol → health • Election, votes → politics • In sentiment classification, sentiment words are more important, e.g., great, excellent, horrible, bad, worst, etc. • Sentiment words are usually adjectives or adverbs, or specific expressions (“it rocks”, “it sucks”, etc.) • Useful when doing aggregate analysis
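
To make the document-level setup concrete, here is a minimal sketch of a bag-of-words sentiment classifier in Python with scikit-learn. This is a generic illustration, not the specific classifier used in the lecture, and the tiny training set is made up.

```python
# Document-level sentiment classification: TF-IDF features over unigrams and
# bigrams, with logistic regression on top. Toy data for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "Great camera, excellent pictures, it rocks",
    "Horrible battery life, worst purchase ever",
    "Excellent zoom and very clear pictures",
    "Bad screen, it sucks, terrible value",
]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
model.fit(reviews, labels)

print(model.predict(["The pictures are great but the battery is horrible"]))
```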

  27. Can we go further? • Sentiment classification is useful, but it does not find what the reviewer liked and disliked • Negative sentiment does not mean that the reviewer does not like anything about the object • Positive sentiment does not mean that the reviewer likes everything • Go to the sentence level and the feature level

  28. Extraction of features • Two types of features: explicit and implicit • Explicit features are mentioned and evaluated directly • “The pictures are very clear.” • Explicit feature: picture • Implicit features are evaluated but not mentioned • “It is small enough to fit easily in a coat pocket or purse.” • Implicit feature: size • Extraction: frequency-based approach • Focusing on frequent features (main features) • Infrequent features can be listed as well
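
A simplified sketch of the frequency-based idea: treat nouns that are mentioned frequently across review sentences as candidate explicit features. It uses NLTK's stock tokenizer and part-of-speech tagger (the corresponding models must be downloaded first); this is a rough approximation of the slide's approach, and implicit features would need additional mapping rules.

```python
# Frequency-based extraction of candidate explicit features: count the nouns
# that appear across review sentences. Requires one-time downloads:
#   nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
from collections import Counter
import nltk

sentences = [
    "The pictures are very clear.",
    "The battery life is too short.",
    "Great pictures and a decent battery.",
    "The zoom is slow but the pictures look sharp.",
]

noun_counts = Counter()
for sent in sentences:
    tokens = nltk.word_tokenize(sent)
    for word, tag in nltk.pos_tag(tokens):
        if tag in ("NN", "NNS"):        # singular and plural common nouns
            noun_counts[word.lower()] += 1

# Frequent nouns become the candidate "main" features; rare ones can be listed too.
print(noun_counts.most_common(5))
```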

  29. Identify opinion orientation of features • Using sentiment words and phrases • Identify words that are often used to express positive or negative sentiments • There are many ways (dictionaries, WordNet, collocation with known adjectives, …) • Use the orientation of opinion words near a feature as the sentence orientation for that feature, e.g., • Sum: a negative word near the feature contributes -1, a positive word near the feature contributes +1
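
The "sum of nearby opinion words" rule on this slide can be sketched directly in Python. The tiny positive/negative word lists and the window size are placeholders; a real system would use a full lexicon (e.g., one derived from WordNet).

```python
# Orientation of an opinion about a feature: +1 for each positive word near the
# feature mention, -1 for each negative word near it; the sign of the sum gives
# the sentence-level orientation for that feature. Toy lexicons for illustration.
POSITIVE = {"great", "excellent", "clear", "sharp", "good"}
NEGATIVE = {"bad", "horrible", "worst", "blurry", "poor"}

def feature_orientation(sentence, feature, window=4):
    tokens = [t.strip(".,!?").lower() for t in sentence.split()]
    if feature not in tokens:
        return 0
    i = tokens.index(feature)
    nearby = tokens[max(0, i - window): i + window + 1]
    return sum(+1 for t in nearby if t in POSITIVE) + \
           sum(-1 for t in nearby if t in NEGATIVE)

print(feature_orientation("The pictures are very clear and sharp.", "pictures"))   # positive (+1)
print(feature_orientation("Horrible battery, drains within an hour.", "battery"))  # negative (-1)
```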

  30. Two types of evaluations • Direct opinions: sentiment expressions on some objects/entities, e.g., products, events, topics, individuals, organizations, etc. • E.g., “the picture quality of this camera is great” • Subjective • Comparisons: relations expressing similarities, differences, or ordering of more than one object • E.g., “car x is cheaper than car y.” • Objective or subjective • Compares feature quality • Compares feature existence

  31. Visual Summarization & Comparison • [Figure: feature-level opinion summary (Picture, Battery, Zoom, Size, Weight) with positive and negative counts for Digital camera 1, and a side-by-side comparison of Digital camera 1 and Digital camera 2]

  32. Example: iPod vs. Zune

  33. Today’s Objectives • Tracking preferences using social networks • Facebook API • Trend tracking using Facebook • Mining positive and negative opinions • Sentiment classification for product reviews • Feature-specific opinion tracking • Economic-aware opinion mining • Reputation systems in marketplaces • Quantifying sentiment using econometrics

  34. Comparative Shopping in e-Marketplaces

  35. Customers Rarely Buy Cheapest Item

  36. Are Customers Irrational? • BuyDig.com gets a price premium of $11.04 (customers pay more than the minimum price)

  37. Price Premiums @ Amazon • Are customers irrational?

  38. Why Not Buy the Cheapest? You buy more than a product • Customers do not pay only for the product • Customers also pay for a set of fulfillment characteristics • Delivery • Packaging • Responsiveness • … • Customers care about the reputation of sellers! • Reputation systems are review systems for humans

  39. Example of a reputation profile

  40. Basic idea • Conjecture: price premiums measure reputation • Reputation is captured in text feedback • Examine how text affects price premiums (and do sentiment analysis as a side effect)

  41. Outline • How we capture price premiums • How we structure text feedback • How we connect price premiums and text

  42. Data Overview • Panel of 280 software products sold by Amazon.com × 180 days • Data from the “used goods” market • Amazon Web Services facilitates capturing transactions • No need for any proprietary Amazon data

  43. Data: Secondary Marketplace

  44. Data: Capturing Transactions • We repeatedly “crawl” the marketplace using Amazon Web Services • While the listing appears → the item is still available → no sale • [Figure: daily crawl timeline, Jan 1 - Jan 8]

  45. Data: Capturing Transactions • We repeatedly “crawl” the marketplace using Amazon Web Services • When the listing disappears → the item was sold • [Figure: daily crawl timeline, Jan 1 - Jan 10]

  46. Data: Transactions • Capturing transactions and “price premiums” • When an item is sold, its listing disappears • [Figure: timeline, Jan 1 - Jan 10; item sold on Jan 9]
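
The crawl-and-diff logic of the last three slides can be sketched as follows: compare the sets of listing IDs seen on consecutive days, and treat a listing that was present yesterday but missing today as sold. The snapshot format and the sample numbers are assumptions for illustration.

```python
# Infer sales by diffing consecutive daily snapshots of the used-goods listings.
# Each snapshot maps a listing id to its price; ids missing from the next
# snapshot are treated as sold on that day (the heuristic from the slides).
def infer_sales(snapshot_yesterday, snapshot_today, sale_date):
    sold = []
    for listing_id, price in snapshot_yesterday.items():
        if listing_id not in snapshot_today:
            sold.append({"listing": listing_id, "price": price, "date": sale_date})
    return sold

jan_8 = {"A1": 45.99, "B2": 57.03, "C3": 49.50}
jan_9 = {"A1": 45.99, "C3": 49.50}            # B2 disappeared -> sold on Jan 9

print(infer_sales(jan_8, jan_9, "Jan 9"))
```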

  47. Data: Variables of Interest • Price Premium: the price charged by the seller minus the price listed by a competitor • Price Premium = (Seller Price – Competitor Price) • Calculated for each seller-competitor pair, for each transaction • Each transaction generates M observations (M = number of competing sellers) • Alternative definitions: • Average Price Premium (one per transaction) • Relative Price Premium (relative to the seller’s price) • Average Relative Price Premium (combination of the above)
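
A minimal sketch of the price-premium definitions above, computed for a single transaction against the competing listings that were live at the same time; the prices are made-up numbers.

```python
# Price premiums for one transaction: one observation per competing seller,
# plus the aggregate variants defined on the slide. Illustrative numbers only.
def price_premiums(seller_price, competitor_prices):
    premiums = [round(seller_price - c, 2) for c in competitor_prices]  # M observations
    avg_premium = sum(premiums) / len(premiums)                         # one per transaction
    rel_premiums = [p / seller_price for p in premiums]                 # relative to seller price
    avg_rel_premium = sum(rel_premiums) / len(rel_premiums)
    return premiums, avg_premium, rel_premiums, avg_rel_premium

premiums, avg_p, rel_p, avg_rel_p = price_premiums(57.03, [45.99, 49.50, 52.00])
print(premiums)          # [11.04, 7.53, 5.03]
print(round(avg_p, 2))   # 7.87
```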

  48. Price premiums @ Amazon

  49. Average price premiums @ Amazon
