

  1. Autonomous Detection of Collaborative Link Spam. Andrew G. West. Wikimania '11, August 5, 2011

  2. Big Idea
  • Design a framework to detect link spam additions to wikis/Wikipedia, including spam employing:
    (1) Subtlety; aims at link persistence (status quo)
    (2) The vulnerabilities identified in recent literature [1]
  • And use this tool's functionality to:
    (1) Autonomously undo obvious spam (i.e., a bot)
    (2) Feed non-obvious but questionable instances to human patrollers in a streamlined fashion

  3. Outline
  • Motivation
  • Corpus construction
  • Features
  • Performance
  • Live implementation
  • Demonstration

  4. External Link Spam
  • Any external link (i.e., not a wikilink) that violates the subjective link policy [2] is external link spam:
    • Issues with the destination URL (the landing site). For example: commercial intent, shock sites, non-notable sources.
    • Issues with presentation. For example: putting a link atop the article.
  • Wikitext: [http://www.google.com Visit Google!] renders in the article as: Visit Google! (a link-extraction sketch follows below)
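
Below is a minimal, hypothetical sketch (not the talk's code) of how bracketed external links might be pulled out of wikitext; the regular expression and function name are illustrative only.

```python
import re

# Bracketed external links in wikitext look like [http://example.com Optional label];
# wikilinks use double brackets and are deliberately not matched here.
EXTERNAL_LINK = re.compile(r'\[(https?://[^\s\]]+)(?:\s+([^\]]*))?\]')

def extract_external_links(wikitext):
    """Return (url, label) pairs for bracketed external links."""
    return [(m.group(1), m.group(2) or "") for m in EXTERNAL_LINK.finditer(wikitext)]

print(extract_external_links("See [http://www.google.com Visit Google!] for details."))
# [('http://www.google.com', 'Visit Google!')]
```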

  5. Research Motivations

  6. Motive: Social Spam
  • Not entirely different from link spam in other UGC/collaborative applications
  • Much literature exists [3]
  • But wikis/WP are unique:
    • Not append-only; fewer formatting constraints
    • Massive traffic in a single installation
    • Large topic space: spam relevant to the landing site
    • Mitigation is entirely community-driven

  7. Motive: Incentive
  • Much research on wiki vandalism (link spam is a subset)
  • But most vandalism is "offensive" or "nonsense" [4]
    • Not well incentivized; whereas link spam likely serves the poster's self-interest (e.g., monetary, lobbying, etc.)
    • Thus, more aggressive/optimal attack tactics are expected
  • In [1], we examined the status quo nature of WP link spam:
    • "Nuisance": an order of magnitude less frequent than vandalism. See also [[WP:WPSPAM]].
    • Less sophistication than seen in other spam domains
    • Subtlety: even spam links follow conventions, perhaps an attempt to deceive patrollers/watchlists. It doesn't work.

  8. Motive: Vulnerabilities
  • Novel attack model [1] exploits human latency and appears economically viable:
    • High-traffic page targets
    • Prominent placement
    • Script-driven operation via autonomously attained privileged accounts
    • Distributed
  • Other recent concerns:
    • Declining labor force [5]
    • XRumer (blackhat SEO) [6]
  [Chart: peak views/second for the articles with the highest average views/day, calculated Jan.-Aug. 2010]

  9. Corpus Construction

  10. Corpus: Spam/Ham
  • SPAM edits are those that:
    (1) Added exactly one link, pointing to an HTML document
    (2) Made no changes other than the link addition and its immediate supporting "context"
    (3) Were rolled back by a privileged editor, where rollback is a tool used only for "blatantly unproductive edits"
  • HAM edits are those that:
    • Were made by a privileged user
    • Meet criteria (1) and (2) above
  • By relying on these implicit actions, we save time and allow privileged users to define spam on a case-by-case basis (labeling logic sketched below)
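
To make the labeling rules above concrete, here is a minimal sketch of the corpus-labeling decision, assuming a hypothetical edit record with fields links_added, other_changes, rolled_back, and author_privileged (these are not the actual STiki/corpus schema).

```python
def label_edit(edit):
    """Return 'SPAM', 'HAM', or None (edit excluded from the corpus)."""
    single_html_link = (len(edit["links_added"]) == 1
                        and edit["links_added"][0]["is_html_doc"])
    only_link_and_context = not edit["other_changes"]
    if not (single_html_link and only_link_and_context):
        return None                  # criteria (1)/(2) not met
    if edit["rolled_back"]:
        return "SPAM"                # rollback implies a blatantly unproductive edit
    if edit["author_privileged"]:
        return "HAM"                 # a trusted editor added the link
    return None                      # neither implicit signal applies
```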

  11. Corpus: Context (1) Because the link was the ONLY change made, the privileged user's decision to roll back that edit speaks DIRECTLY to that link's inappropriateness.

  12. Corpus: Context (2)

  13. Corpus: Collection
  [Diagram: corpus filtering pipeline. All links collected: 238,371; links to an HTML doc.: 188,210. SPAM branch: was rolled back (2,865), then the "only link + context" human check, yielding 1,095. HAM branch: added by a privileged user (50,108), then the same human check, yielding 4,867.]
  LBL    NUM#    PER%
  SPAM   1,095   18.4%
  HAM    4,867   81.6%
  • ≈2 months of data collection in early 2011 (en.wiki)
  • Done in real time via the STiki framework [7]
  • Also retrieved the landing site for each link
  • Be careful of biasing features!

  14. Features

  15. Features
  • 55+ features implemented and described in [8]
  • For brevity, focus on features that: (1) are strong indicators, and (2) have intuitive presentation
  • Three major feature groups:
    (1) Wikipedia-driven
    (2) Landing-site driven
    (3) Third-party based

  16. Features: Wiki (1)
  • Examples of Wikipedia features (a small extraction sketch follows below):
    • URL: length, sub-domain quantity, TLD (.com, .org, etc.)
    • Article: length, popularity, age
    • Presentation: where in the article, citation or not, description length
    • URL history: addition quantity, diversity of adding users
    • Metadata: revision comment length, time-of-day
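
As an illustration only, the sketch below computes a few of the URL and metadata features named above (URL length, sub-domain quantity, TLD, revision-comment length); the real system computes 55+ features [8], and these names and signatures are not its actual ones.

```python
from urllib.parse import urlparse

def url_features(url, revision_comment=""):
    """Toy extraction of a handful of Wikipedia-driven features."""
    host = urlparse(url).netloc.lower()
    labels = host.split(".") if host else []
    return {
        "url_length": len(url),
        "subdomain_count": max(len(labels) - 2, 0),   # e.g., a.b.example.com -> 2
        "tld": labels[-1] if labels else "",
        "comment_length": len(revision_comment),
    }

print(url_features("http://www.example.com/doc.html", "added reference"))
```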

  17. Features: Wiki (2)
  • Long URLs are good URLs:
    • www.example.com vs. www.example.com/doc.html
    • The former is more likely to be promotional
  • Spam is rarely used in citations
  • Advanced syntax implies advanced editors

  18. Features: Wiki (3)
  • TLDs with greater administrative control tend to be good; this also correlates well with registration cost
  • ≈85% of spam edits leave no revision comment, vs. <20% of ham

  19. Features: Wiki (4) An article's non-citation links "stabilize" with time (non-citation links tend to have their own section at the article's bottom)

  20. Features: Site
  • We fetch and process the HTML source of the landing site (a fetch-and-count sketch follows below)
  • Spam destinations are marginally more profane/commercial (SEO?)
  • Re-implemented features from a study of spam web pages [9]
    • Obtained the opposite results from that work
  • TAKEAWAY: subtlety and link diversity impair this analysis
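
A rough sketch of the landing-site idea, assuming a simple keyword-count feature: fetch the destination page and count hits against a small commercial-term list. The word list, byte limit, timeout, and function name are illustrative; the actual content features follow [9].

```python
import re
import urllib.request

COMMERCIAL_TERMS = ("buy now", "discount", "free shipping", "viagra")

def commercial_term_count(url, timeout=5):
    """Count commercial keywords on the landing page; None if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            html = resp.read(500_000).decode("utf-8", errors="replace").lower()
    except Exception:
        return None                            # dead link / fetch failure: omit feature
    text = re.sub(r"<[^>]+>", " ", html)       # crude tag stripping
    return sum(text.count(term) for term in COMMERCIAL_TERMS)
```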

  21. Features: 3rd Party (1)
  • Two third-party sources queried:
    • Alexa WIS [10]: data from web-crawling and the Alexa toolbar, including traffic data, WHOIS, load times, etc.
    • Google Safe Browsing Project: lists suspected phishing/malware hosts
  • The Google lists produce NO corpus matches:
    • So they are worthless during learning/training
    • But a match is unambiguously bad...
    • ... so they are brought into force via statically authored rules (sketched below)
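
The sketch below shows the statically-authored-rule idea under stated assumptions: a blacklist hit never appears in training data, so rather than being learned it simply overrides the model score. The override constant and names are hypothetical.

```python
from urllib.parse import urlparse

SPAM_OVERRIDE = float("-inf")          # arbitrarily spam-certain score

def final_score(model_score, url, blacklisted_hosts):
    """Apply a static rule on top of the learned model score."""
    host = urlparse(url).netloc.lower()
    if host in blacklisted_hosts:      # unambiguously bad, regardless of features
        return SPAM_OVERRIDE
    return model_score
```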

  22. Features: 3rd Party (2)
  • Alexa backlink (BL) count is the #1 weighted feature
    • At the median, ham has 850 BLs and spam has 20 BLs (a 40× difference)
    • Intuitive: backlinks are a basis for search-engine rank
  • Continent of WHOIS registration
    • Asia especially poor
  • Other good, non-intuitive Alexa features

  23. Performance

  24. Perform: ADTrees
  • Features are turned into an ML model via the ADTree algorithm
    • Human-readable; enumerated features
  • In practice...
    • Tree depth of 15
    • About 35 features
  • Evaluation performed using standard 90/10 cross-validation
  [Example ADTree: start at val = 0.0, then BACKLINKS > 200 (Y: +0.8, N: -0.4); IS_CITE == TRUE (Y: +0.2, N: -0.1); COMM_LEN > 0 (Y: +0.6, N: -0.6); ... Sum the values along the branches taken: if final_value > 0 then HAM, if final_value < 0 then SPAM. A scoring sketch follows below.]
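
The following is a simplified scoring sketch that mirrors the toy tree above by summing one prediction value per node; a real ADTree nests decision nodes under branches, and the tests and weights here are the example's, not the deployed model's.

```python
TOY_TREE = {
    "root": 0.0,
    "nodes": [
        {"test": lambda f: f["BACKLINKS"] > 200, "yes": +0.8, "no": -0.4},
        {"test": lambda f: f["IS_CITE"],         "yes": +0.2, "no": -0.1},
        {"test": lambda f: f["COMM_LEN"] > 0,    "yes": +0.6, "no": -0.6},
    ],
}

def adtree_score(features, tree=TOY_TREE):
    """Start from the root value and add the prediction of each branch taken."""
    score = tree["root"]
    for node in tree["nodes"]:
        score += node["yes"] if node["test"](features) else node["no"]
    return score                               # > 0: HAM, < 0: SPAM

print(adtree_score({"BACKLINKS": 850, "IS_CITE": True,  "COMM_LEN": 14}))   # ham-like
print(adtree_score({"BACKLINKS": 20,  "IS_CITE": False, "COMM_LEN": 0}))    # spam-like
```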

  25. Perform: Piecewise
  • Obviously much better than random prediction (the status quo)
  • Wikipedia-driven features were weighted as most helpful
  • But robustness issues must also be considered

  26. Perform: All

  27. Live Implementation

  28. Live: Architecture
  [Diagram: the spam-scoring engine (scoring/ADTree plus maintenance) listens to the en.wiki IRC feed, fetches each edit and article from the wiki DB, retrieves the landing site's HTML, classifies using 3rd-party data (Alexa, Safe Browsing), and, if spam, a bot handler reverts and warns; otherwise scores populate an edit queue that STiki clients display from "likely vandalism" down to "likely innocent".]
  • Bringing this live is trivial via IRC and the implemented ADTree
  • But how to apply the scores? (see the sketch below)
    • Autonomously (i.e., a bot, approval pending): if score < threshold, then REVERT
    • Prioritized human review via STiki [7]: a priority queue used in crowd-sourced fashion
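
A minimal sketch of the two score-application paths described above, assuming a hypothetical threshold and queue (lower scores are spammier, as in the ADTree slide); it is not the deployed bot's configuration.

```python
import heapq

REVERT_THRESHOLD = -1.0                # illustrative cut-off for "obvious spam"

def handle_scored_edit(edit_id, score, review_queue):
    """Autonomously revert obvious spam; queue the rest for human review."""
    if score < REVERT_THRESHOLD:
        return ("REVERT_AND_WARN", edit_id)            # bot path
    heapq.heappush(review_queue, (score, edit_id))     # worst (lowest) score first
    return ("QUEUED_FOR_REVIEW", edit_id)

def next_edit_to_review(review_queue):
    """STiki-style clients pop the most suspicious edit first."""
    return heapq.heappop(review_queue)[1] if review_queue else None
```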

  29. Live: Demonstration. Software demonstration; open-source [7], [[WP:STiki]]

  30. Live: Discussion
  • Practical implementation notes (aggregation sketched below):
    • Multiple links: score individually, assign the worst
    • Dead links (i.e., HTTP 404): reported to a special page
    • Non-HTML destinations: omit the corresponding features
  • Static rules needed to capture novel attack vectors
    • Features are in place: page placement, article popularity, link histories, diversity quotients, Safe Browsing lists
    • Pattern matches result in arbitrarily high scores
  • Offline vs. online performance
    • Bot performance is immediate; should be equal
    • Difficult to quantify decline with prioritized humans (STiki)
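
A small sketch of the per-edit aggregation notes above, again assuming lower scores mean spammier; the helper name and the -inf override value are illustrative.

```python
def score_edit(link_scores, static_rule_hit=False):
    """link_scores: one model score per link added by the edit (links to non-HTML
    or dead destinations are scored upstream with those features omitted)."""
    if static_rule_hit:
        return float("-inf")           # pattern match forces a maximally spammy score
    return min(link_scores)            # the worst-scoring link decides the edit

print(score_edit([0.7, -0.3]))         # -> -0.3
```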

  31. Live: Gamesmanship
  • How would an attacker circumvent our system?
    • Content optimization (hence the need for robust features)
    • TOCTTOU attacks (i.e., redirect after inspection)
      • Re-checking on an interval is very expensive
      • But a practical concern; LTA case [[WP:UNID]]
    • Crawler redirection: determine our system's IP; serve it better content
    • A more distributed operational base
    • Denial-of-service (overloading the system with links) + STiki
  • Solutions to these kinds of problems remain future work

  32. References
  [1] A. G. West, J. Chang, K. Venkatasubramanian, O. Sokolsky, and I. Lee. Link spamming Wikipedia for profit. In CEAS '11, September 2011.
  [2] Wikipedia: External links. http://en.wikipedia.org/wiki/Wikipedia:External_links.
  [3] P. Heymann, G. Koutrika, and H. Garcia-Molina. Fighting spam on social web sites: A survey of approaches and future challenges. IEEE Internet Computing, 11(6), 2007.
  [4] R. Priedhorsky, J. Chen, S. K. Lam, K. Panciera, L. Terveen, and J. Riedl. Creating, destroying, and restoring value in Wikipedia. In GROUP '07, 2007.
  [5] E. Goldman. Wikipedia's labor squeeze and its consequences. Journal of Telecommunications and High Technology Law, 8, 2009.
  [6] Y. Shin, M. Gupta, and S. Myers. The nuts and bolts of a forum spam automator. In LEET '11: 4th Workshop on Large-Scale Exploits and Emergent Threats, 2011.
  [7] A. G. West. STiki: A vandalism detection tool for Wikipedia. Software, 2010. http://en.wikipedia.org/wiki/Wikipedia:STiki.
  [8] A. G. West, A. Agarwal, P. Baker, B. Exline, and I. Lee. Autonomous link spam detection in purely collaborative environments. In WikiSym '11, 2011.
  [9] A. Ntoulas, M. Najork, M. Manasse, and D. Fetterly. Detecting spam web pages through content analysis. In WWW '06: World Wide Web Conference, 2006.
  [10] Alexa Web Information Service. http://aws.amazon.com/awis/.

  33. Backup slides (1)

  34. Backup slides (2)
  [LEFT: Pipeline of typical Wikipedia link spam detection, including both actors and latency: blacklists (prevention; latency: immediate) → patrollers (detection; seconds) → watchlisters (mins./days) → readers (∞).]
  [RIGHT: log-log plot showing average daily article views versus article popularity rank. Observe the power-law distribution. A spammer could reach large percentages of viewership via few articles.]

  35. Backup slides (3)
  [ABOVE: Evaluating time-of-day (TOD) and day-of-week (DOW) features.]
  [AROUND: Evaluating feature strength. All features here are "Wikipedia-driven".]
