Presentation Transcript

  1. Mining a Large Web Corpus. International Internet Preservation Consortium, General Assembly 2014, Paris. Robert Meusel, Christian Bizer

  2. The Common Crawl

  3. Hyperlink Graphs. Knowledge about the structure of the Web can be used to improve crawling strategies, to support SEO experts, and to understand social phenomena.

  4. HTML-embedded Data on the Web. Several million websites semantically mark up the content of their HTML pages. Markup syntaxes: Microformats, RDFa, Microdata. Data snippets appear within info boxes.

  5. Relational HTML Tables. In a corpus of 14B raw tables, 154M are "good" relations (1.1%): HTML tables over semi-structured data which can be used to build up or extend knowledge bases such as DBpedia. Cafarella et al.: WebTables: Exploring the Power of Tables on the Web. VLDB 2008.

  6. The Web Data Commons Project. Goal: offer an easy-to-use, cost-efficient, distributed extraction framework for large web crawls, as well as datasets extracted from the crawls. The project has developed an Amazon-based framework for extracting data from large web crawls, capable of running on any cloud infrastructure, and has applied this framework to the Common Crawl data; it is adaptable to other crawls. Results and framework are publicly available: http://webdatacommons.org

  7. Extraction Framework. The master fills an AWS SQS queue with file references (1) and launches AWS EC2 instances (2). Each EC2 instance requests a file reference from the queue (3), downloads the file from AWS S3 (4), and extracts the data and uploads the results (5). Finally, the master collects the results (6). The diagram distinguishes automated from manual steps.
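The queue-driven pattern on this slide can be sketched in a few lines. This is an illustrative stand-in, not the WDC code (which is Java on AWS): `queue.Queue` plays the role of SQS, threads play the role of EC2 instances, and the "download" is faked.

```python
import queue
import threading

def run_extraction(file_refs, num_workers=3):
    """Minimal sketch of the master/worker pattern: fill queue (1),
    launch workers (2), workers pull references (3), download and
    process files (4-5), master collects results (6)."""
    tasks = queue.Queue()
    results = queue.Queue()
    for ref in file_refs:                  # step 1: fill queue
        tasks.put(ref)

    def worker():
        while True:
            try:
                ref = tasks.get_nowait()   # step 3: request file reference
            except queue.Empty:
                return
            data = f"contents-of-{ref}"    # step 4: download (faked here)
            results.put((ref, len(data)))  # step 5: extract & upload
            tasks.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:                      # step 2: launch "instances"
        t.start()
    for t in threads:
        t.join()
    return dict(results.queue)             # step 6: collect results

stats = run_extraction(["seg-00.warc", "seg-01.warc", "seg-02.warc"])
print(sorted(stats))
```

Because each file reference is an independent unit of work, adding workers scales the extraction almost linearly, which is what makes the SQS/EC2 design cheap.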

  8. Extraction Worker. Each worker downloads a .(w)arc file from AWS S3, applies filters to reduce runtime (a MIME-type filter and regex detection of content or meta-information), runs the WDC extractor, and uploads the output file back to AWS S3. The worker is written in Java, processes one page at a time, and is independent of other files and workers.
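The two-stage filter can be sketched as follows. The function and regex names are illustrative assumptions, not the actual WDC filter: a page is skipped early unless its MIME type is HTML and its body matches a cheap regex hinting at embedded markup.

```python
import re

# Illustrative hint regex: cheap signals for Microdata, RDFa, and
# Microformats markup (assumption; the real WDC patterns may differ).
MARKUP_HINT = re.compile(r'itemscope|itemtype=|property="|class="vcard', re.I)

def should_extract(mime_type, body):
    """Two-stage filter sketch: MIME-type filter first, then a
    regex scan of the content, so most pages are rejected cheaply."""
    if not mime_type.startswith("text/html"):   # MIME-type filter
        return False
    return bool(MARKUP_HINT.search(body))       # regex content detection

print(should_extract("text/html",
                     '<div itemscope itemtype="http://schema.org/Person">'))  # True
print(should_extract("image/png", ""))                                        # False
```

Filtering before parsing is what keeps runtime (and thus AWS cost) low: the expensive extractor only ever sees pages that plausibly contain structured data.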

  9. Web Data Commons – Extraction Framework. Written in Java and mainly tailored to Amazon Web Services; fault tolerant and cheap (300 USD to extract 17 billion RDF statements from 44 TB). Easily customizable: only the worker has to be adapted; the worker is a single process method handling one file at a time, and scaling is automated by the framework. Open-source code: https://www.assembla.com/code/commondata/ Alternative: a Hadoop version, which can run on any Hadoop cluster without Amazon Web Services.

  10. Extracted Datasets: Hyperlink Graph, HTML-embedded Data, Relational HTML Tables.

  11. Hyperlink Graph. Extracted from the Common Crawl 2012 dataset: over 3.5 billion pages connected by over 128 billion links; the graph files total 386 GB. http://webdatacommons.org/hyperlinkgraph/ http://wwwranking.webdatacommons.org/
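A quick back-of-the-envelope from the figures on this slide shows how densely linked the crawled Web is:

```python
# Average out-degree of a page in the 2012 graph, from the slide's
# rounded figures (3.5 billion pages, 128 billion links).
pages = 3.5e9
links = 128e9
print(round(links / pages, 1))  # 36.6 links per page on average
```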

  12. Hyperlink Graph. Discovery of changes in the global structure of the World Wide Web: degrees do not follow a power law; detection of spam pages. Further insights: WWW'14: Graph Structure in the Web – Revisited (Meusel et al.); WebSci'14: The Graph Structure of the Web Aggregated by Pay-Level Domain (Lehmberg et al.).

  13. Hyperlink Graph. Discovery of important and interesting sites using different popularity rankings or website categorization libraries. (Figure: websites connected by at least half a million links.)

  14. HTML-embedded Data. More and more websites semantically mark up the content of their HTML pages. Markup syntaxes: RDFa, Microformats, Microdata.

  15. Websites Containing Structured Data (2013). The Web Data Commons Microformat, Microdata and RDFa corpus contains 17 billion RDF triples extracted from Common Crawl 2013; the next release will be in winter 2014. 585 million of the 2.2 billion pages contain Microformat, Microdata or RDFa data (26.3%); 1.8 million websites (pay-level domains) out of 12.8 million provide such data (13.9%). http://webdatacommons.org/structureddata/
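To make the Microdata syntax concrete, here is a toy extractor for the simplest case (one `itemprop` per tag, text values only). This is an illustrative stdlib sketch under those assumptions; the actual WDC extractor is Java-based and handles the full Microdata, RDFa, and Microformats syntaxes.

```python
from html.parser import HTMLParser

class MicrodataParser(HTMLParser):
    """Toy Microdata reader: collects itemprop -> text pairs.
    Ignores nesting, itemref, and attribute-valued properties."""
    def __init__(self):
        super().__init__()
        self.props = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemprop" in attrs:
            self._current = attrs["itemprop"]

    def handle_data(self, data):
        if self._current and data.strip():
            self.props[self._current] = data.strip()
            self._current = None

page = ('<div itemscope itemtype="http://schema.org/Person">'
        '<span itemprop="name">Robert Meusel</span></div>')
p = MicrodataParser()
p.feed(page)
print(p.props)  # {'name': 'Robert Meusel'}
```

Markup like this is what the 26.3% figure counts: pages whose HTML carries at least one such machine-readable annotation.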

  16. Top Classes, Microdata (2013). schema = Schema.org; dv = Google's Rich Snippet vocabulary.

  17. HTML Tables. In a corpus of 14B raw tables, 154M are "good" relations (1.1%) (Cafarella 2008); classification precision: 70-80%. Cafarella et al.: WebTables: Exploring the Power of Tables on the Web. VLDB 2008. Crestan, Pantel: Web-Scale Table Census and Classification. WSDM 2011.

  18. WDC Web Tables Corpus. A large corpus of relational Web tables for public download, extracted from Common Crawl 2012 (3.3 billion pages): 147 million relational tables, selected out of 11.2 billion raw tables (1.3%). The download includes the HTML pages of the tables (1 TB zipped). Table statistics show very high heterogeneity. http://webdatacommons.org/webtables/
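The idea of selecting "good" relational tables from raw HTML tables can be illustrated with a simple heuristic. This is an assumption-laden sketch, not the WebTables/WDC classifier (which uses trained models with 70-80% precision, per slide 17): it only checks for a header row, enough data rows, and a consistent column count.

```python
def looks_relational(table):
    """Heuristic sketch: a table (list of rows) is a candidate
    relation if it has a header plus at least two data rows, at
    least two columns, and every row has the same width. Layout
    tables and navigation bars typically fail these checks."""
    if len(table) < 3:                     # header + >= 2 data rows
        return False
    width = len(table[0])
    return width >= 2 and all(len(row) == width for row in table)

good = [["country", "capital"], ["France", "Paris"], ["Italy", "Rome"]]
layout = [["nav"], ["home", "about", "contact"]]
print(looks_relational(good), looks_relational(layout))  # True False
```

Even crude structural checks like these discard most of the raw tables, which is why only about 1% of the 11.2 billion tables survive into the published corpus.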

  19. WDC Web Tables Corpus. Attribute statistics: 28 million different attribute labels. Subject attribute values: 1.74 billion rows, 253 million different subject labels.

  20. Conclusion. Three factors are necessary to work with web-scale data: availability of crawls (thanks to Common Crawl, this data is available); availability of cheap, easy-to-use infrastructures (like Amazon or other on-demand cloud services); and easy-to-adopt, scalable extraction frameworks (the Web Data Commons framework, or standard tools like Pig; costs must be evaluated per task, but the WDC framework has turned out to be cheaper).

  21. Questions. Please visit our website: www.webdatacommons.org. Data and framework are available as a free download. Web Data Commons is supported by: