
Data Intensive Query Processing for Large RDF Graphs Using Cloud Computing Tools

Mohammad Farhan Husain, Latifur Khan, Murat Kantarcioglu and Bhavani Thuraisingham. Department of Computer Science, The University of Texas at Dallas. IEEE 2010 Cloud Computing. Presented on May 11, 2011 by Taikyoung Kim.


Presentation Transcript


  1. Data Intensive Query Processing for Large RDF Graphs Using Cloud Computing Tools Mohammad Farhan Husain, Latifur Khan, Murat Kantarcioglu and Bhavani Thuraisingham Department of Computer Science The University of Texas at Dallas IEEE 2010 Cloud Computing May 11, 2011 Taikyoung Kim SNU IDB Lab.

  2. Outline
  • Introduction
  • Proposed Architecture
  • MapReduce Framework
  • Results
  • Conclusions and Future Work

  3. Introduction
  • With the explosion of Semantic Web technologies, the need to store and retrieve large amounts of data is common
  • The most prominent standards are RDF and SPARQL
  • Current frameworks do not scale to large RDF graphs
  • E.g. Jena, Sesame, BigOWLIM
  • They are designed for a single-machine scenario
  • Jena's in-memory model (2 GB), for example, can process only about 10 million triples

  4. Introduction
  • A distributed system can be built to overcome the scalability and performance problems of current Semantic Web frameworks
  • However, there is no distributed repository for storing and managing RDF data
  • Distributed database systems and relational databases are available, but they have performance and scalability issues
  • It is possible to construct a distributed system from scratch
  • A better way is to use a Cloud Computing framework or a generic distributed storage system
  • Just tailor it to meet the needs of Semantic Web data

  5. Introduction
  • Hadoop is an emerging Cloud Computing tool
  • Open source
  • High fault tolerance
  • Great reliability
  • MapReduce programming model
  • We introduce a schema to store RDF data in Hadoop
  • Our goal is to answer SPARQL queries as efficiently as possible using summary statistics about the data
  • Choose the best plan based on a cost model
  • The plan determines the number of jobs as well as their sequence and inputs

  6. Introduction
  • Contributions
  • Design a storage scheme to store RDF data in HDFS (the Hadoop Distributed File System)
  • Devise an algorithm which determines the best query plan for a SPARQL query
  • Build a cost model for query processing plans
  • Demonstrate that our approach performs better than Jena

  7. Outline
  • Introduction
  • Proposed Architecture
  • MapReduce Framework
  • Results
  • Conclusions and Future Work

  8. Proposed Architecture
  • Data Generation and Storage
  • Use the LUBM dataset (a benchmark dataset)
  • Generate the RDF/XML serialization format
  • Convert the data to N-Triples for storage
  • One RDF triple per line of a file
  • File Organization
  • Do not store all the data in a single file, since
  • A file is the smallest unit of input to a MapReduce job
  • A file is always read from disk (no cache)
  • Divide the data into multiple smaller files
  Example ontology fragment from the slide:
  <owl:Class rdf:ID="AdministrativeStaff">
    <rdfs:label>administrative staff worker</rdfs:label>
    <rdfs:subClassOf rdf:resource="#Employee" />
  </owl:Class>
  i.e. AdministrativeStaff rdfs:subClassOf Employee

  9. Proposed Architecture
  • Predicate Split (PS)
  • Divide the data according to the predicates
  • Can cut down the search space if the query has no variable predicate
  • Name the files after the predicates (see the sketch below)
  • e.g. triples with predicate p1:pred go into a file named p1-pred
  Example: the triples
    John | rdf:type | Student
    James | rdf:type | Professor
    John | ub:advisor | James
    John | ub:takesCourse | DB
  are split into a "rdf-type" file (John | Student; James | Professor), a "ub-advisor" file (John | James), and a "ub-takesCourse" file (John | DB)
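A minimal sketch of the PS step in Python, assuming N-Triples input with one statement per line; the file-name mangling (replacing ':' and '/' so predicates map to legal file names) is an illustrative choice, not the paper's scheme:

```python
import os

def predicate_split(ntriples_path, out_dir):
    """Predicate Split (PS): route each triple into a file named after its
    predicate; the predicate itself is dropped from the stored line."""
    os.makedirs(out_dir, exist_ok=True)
    handles = {}
    with open(ntriples_path) as src:
        for line in src:
            parts = line.strip().rstrip(" .").split(None, 2)
            if len(parts) != 3:
                continue                      # skip blank/malformed lines
            subj, pred, obj = parts
            # e.g. triples with predicate ub:advisor go into file "ub-advisor"
            fname = pred.strip("<>").replace(":", "-").replace("/", "_")
            if fname not in handles:
                handles[fname] = open(os.path.join(out_dir, fname), "w")
            handles[fname].write(f"{subj}\t{obj}\n")
    for h in handles.values():
        h.close()
```

Dropping the predicate from each stored line is what yields the space benefits noted on slide 11.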

  10. Proposed Architecture
  • Predicate Object Split (POS)
  • Split using explicit type information of the object
  • Divide the rdf-type file into as many files as the number of distinct objects the rdf:type predicate has
  • e.g. rdf-type_Student (John) and rdf-type_Professor (James, URI1)
  • Split using implicit type information of the object
  • Keep all literal objects intact
  • URI objects move into their respective files named predicate_type
  • e.g. John | URI1 in the ub-advisor file moves to ub-advisor_Professor once URI1 is known to be a Professor
  • The type information of a URI object can be retrieved from the rdf-type_* files
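Continuing the sketch, a hedged two-pass version of the POS step over the PS output; the file layout and the append-per-line writes are simplifications for readability:

```python
import os

def predicate_object_split(ps_dir, out_dir):
    """Predicate Object Split (POS) over the PS output. Pass 1 splits the
    rdf-type file by object; pass 2 moves URI objects with known types into
    predicate_type files. Literals and unknown URIs stay where they are."""
    os.makedirs(out_dir, exist_ok=True)
    type_of = {}
    # Pass 1: explicit type info, e.g. "John<TAB>Student" -> rdf-type_Student
    with open(os.path.join(ps_dir, "rdf-type")) as src:
        for line in src:
            subj, obj = line.rstrip("\n").split("\t")
            type_of[subj] = obj
            with open(os.path.join(out_dir, f"rdf-type_{obj}"), "a") as out:
                out.write(subj + "\n")        # only the subject is stored
    # Pass 2: implicit type info for every other predicate file
    for fname in os.listdir(ps_dir):
        if fname == "rdf-type":
            continue
        with open(os.path.join(ps_dir, fname)) as src:
            for line in src:
                subj, obj = line.rstrip("\n").split("\t")
                suffix = type_of.get(obj)     # None for literals/unknown URIs
                target = f"{fname}_{suffix}" if suffix else fname
                with open(os.path.join(out_dir, target), "a") as out:
                    out.write(f"{subj}\t{obj}\n")
```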

  11. Proposed Architecture
  • Space benefits
  • Special case: when the type in a query is not a leaf of the ontology tree
  • Search all the files having leaf types of the subtree rooted at that type node
  • E.g. rdf-type_FullProfessor, rdf-type_AssociateProfessor, etc.

  12. Outline
  • Introduction
  • Proposed Architecture
  • MapReduce Framework
  • Results
  • Conclusions and Future Work

  13. MapReduce Framework
  • Challenges
  • Determine the number of jobs needed to answer a query
  • Minimize the size of intermediate files
  • Determine the number of reducers
  • Use the Map phase for selection and the Reduce phase for join (see the sketch below)
  • A query often requires more than one job
  • There is no inter-process communication
  • Each job may depend on the output of the previous job
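The "selection in Map, join in Reduce" split can be illustrated in Hadoop Streaming style; the tagged tab-separated input format below is an assumption of this sketch, not the paper's actual job layout:

```python
import sys

def mapper():
    """Selection: each input line is "tag<TAB>subject<TAB>object", where the
    tag identifies which triple pattern the tuple satisfies (an assumed
    format). Emit the binding of the shared join variable as the key."""
    for line in sys.stdin:
        tag, subj, obj = line.rstrip("\n").split("\t")
        print(f"{subj}\t{tag}:{obj}")

def reducer():
    """Join: Hadoop groups and sorts map output by key, so every tuple that
    binds the join variable to the same value arrives together."""
    current, tags, values = None, set(), []
    for line in sys.stdin:
        key, tagged = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None and len(tags) > 1:
                print(current, values)        # key matched both patterns
            current, tags, values = key, set(), []
        tags.add(tagged.split(":", 1)[0])
        values.append(tagged)
    if current is not None and len(tags) > 1:
        print(current, values)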

  14. MapReduce Framework: Input File Selection
  (P: predicate, O: object)
  • Select all files when
  • P is a variable and O is a variable with no type information, or
  • O is concrete
  • Select all predicate files having objects of that type when
  • P is a variable and O has type information
  • Select all files for the predicate when
  • P is concrete and O is a variable with no type information
  • Select the predicate file having objects of that type when
  • The query has type information of the object
  • Select all subclasses which are leaves in the subtree rooted at the type node when
  • The type associated with a predicate is not a leaf in the ontology tree
  A decision-procedure sketch of these rules appears below.
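Read as a decision procedure, the rules above might look like this; `files_by_pred` and `leaf_types` are hypothetical helpers standing in for the file listing and the ontology lookup:

```python
def select_input_files(pred, obj_has_type, obj_type, files_by_pred, leaf_types):
    """Decision procedure for the input-selection rules above. files_by_pred
    maps predicate -> its POS file names; leaf_types(t) returns [t] when t is
    a leaf of the ontology tree, else the leaf types of the subtree rooted
    at t. Both helpers are assumptions of this sketch."""
    if pred is None:                          # variable predicate
        if not obj_has_type:                  # untyped or concrete object
            return [f for fs in files_by_pred.values() for f in fs]
        wanted = set(leaf_types(obj_type))    # typed object: match the type
        return [f for fs in files_by_pred.values() for f in fs
                if f.rsplit("_", 1)[-1] in wanted]
    if not obj_has_type:                      # concrete predicate, untyped O
        return list(files_by_pred[pred])
    return [f"{pred}_{t}" for t in leaf_types(obj_type)]
```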

  15. MapReduce Framework: Cost Estimation for Query Processing
  [Example query with four triple patterns, referred to below as Line 1 through Line 4]
  • Definition 1 (Conflicting Joins, CJ)
  • A pair of joins on different variables sharing a triple pattern
  • E.g. JoinA(Line1 & Line3) and JoinB(Line3 & Line4) → CJ: they conflict on Line 3
  • Definition 2 (Non-Conflicting Joins, NCJ)
  • A pair of joins not sharing any triple pattern, or
  • A pair of joins sharing a triple pattern, where both joins are on the same variable
  • E.g. Join1(Line1 & Line3) and Join2(Line2 & Line4) → NCJ
  A small classifier for these definitions follows.
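Definitions 1 and 2 reduce to a two-line test once a join is encoded as its variable plus the set of triple patterns it touches; the encoding is an illustrative assumption:

```python
def classify(join_a, join_b):
    """Definitions 1 and 2: a join is (variable, set of triple pattern ids)."""
    (var_a, pats_a), (var_b, pats_b) = join_a, join_b
    if pats_a & pats_b and var_a != var_b:
        return "CJ"     # different variables sharing a triple pattern
    return "NCJ"        # disjoint patterns, or same join variable

# The slide's examples:
print(classify(("X", {1, 3}), ("Y", {3, 4})))   # CJ, conflict on Line 3
print(classify(("X", {1, 3}), ("Y", {2, 4})))   # NCJ, no shared pattern
```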

  16. MapReduce Framework: Cost Estimation for Query Processing
  (MI: cost of the Map Input phase, MO: Map Output, RI: Reduce Input, RO: Reduce Output)
  • Map Input phase (MI)
  • Read the triple patterns from the selected input files
  • The cost equals the total number of triples in the selected files
  • Map Output phase (MO)
  • No bound variable case (e.g. [?X ub:worksFor ?Y])
  • MO cost = MI cost (all of the input triples are transformed into key-value pairs)
  • Bound variable case (e.g. [?Y ub:subOrganizationOf <http://www.U0.edu>])
  • Use summary statistics for selectivity
  • The cost is the result of the bound-component selectivity estimation

  17. MapReduce Framework: Cost Estimation for Query Processing
  • Reduce Input phase (RI)
  • Read the Map output via HTTP and sort it by key values
  • RI cost = MO cost
  • Reduce Output phase (RO)
  • Performs the joins
  • Could use the join triple pattern selectivity summary statistics (no longer used)
  • For intermediate jobs, an upper bound on the Reduce Output is taken instead
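Putting the four phases together, the cost of a plan is the per-job sum accumulated over all jobs; written out (a reconstruction from the phase definitions above, using RI = MO):

```latex
\mathrm{cost}(\text{job}) = MI + MO + RI + RO = MI + 2\,MO + RO,
\qquad
\mathrm{cost}(\text{plan}) = \sum_{i=1}^{n} \mathrm{cost}(\text{job}_i)
```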

  18. MapReduce Framework: Query Plan Generation
  • Need to determine the best query plan
  • Different plans that answer the same query have different performance (time & space)
  • Plan Generation
  • Greedy approach
  • Simple
  • Generates a plan very quickly
  • No guarantee of the best plan
  • Exhaustive search approach (ours)
  • Generates all possible plans

  19. MapReduce Framework: Query Plan Generation
  [Figure: Triple Pattern Graph (Lines 1 through 4 with join variables X and Y) and the corresponding Join Graph, with joins grouped into job1 and job2]
  • Plan Generation by Graph Coloring
  • Generate all combinations
  • For each job, select a subset of NCJ joins
  • Dynamically determine the number of jobs
  • Once a plan is generated, determine its cost using the cost model (see the enumeration sketch below)
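A sketch of the exhaustive enumeration: partition the joins into an ordered sequence of jobs, where each job may contain only pairwise non-conflicting joins. Rewriting of joins between jobs (merging the triple patterns a job consumes) is omitted, so this shows only the shape of the enumeration, not the paper's full algorithm:

```python
from itertools import chain, combinations

def non_empty_subsets(items):
    return chain.from_iterable(combinations(items, r)
                               for r in range(1, len(items) + 1))

def plans(remaining, conflicting):
    """Enumerate job sequences over a set of hashable join descriptors;
    conflicting(a, b) is True when the pair is a CJ (cannot share a job)."""
    if not remaining:
        yield []
        return
    for subset in non_empty_subsets(list(remaining)):
        if any(conflicting(a, b) for a, b in combinations(subset, 2)):
            continue        # a subset containing a CJ pair is not a valid job
        for rest in plans(remaining - set(subset), conflicting):
            yield [set(subset)] + rest
```

Each enumerated plan would then be priced with the cost model above and the cheapest one executed.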

  20. Outline
  • Introduction
  • Proposed Architecture
  • MapReduce Framework
  • Results
  • Conclusions and Future Work

  21. Results: Comparison with Other Frameworks
  • Performance comparison between
  • Our framework
  • Jena In-Memory and SDB models
  • BigOWLIM
  • System for testing Jena and BigOWLIM
  • 2.80 GHz quad-core processor
  • 8 GB main memory (BigOWLIM needed 7 GB for the billion-triples dataset)
  • 1 TB disk space
  • Our cluster of 10 nodes, each with
  • Pentium IV 2.80 GHz processor
  • 4 GB main memory
  • 640 GB disk space

  22. Results: Comparison with Other Frameworks
  • The Jena In-Memory model worked well for small datasets
  • It became slower as the dataset size grew and eventually ran out of memory
  • BigOWLIM has significantly higher loading time than our framework
  • It builds indexes and pre-fetches triples into main memory
  • The Hadoop cluster takes less than 1 minute to start up
  • Excluding loading time, ours is faster whenever there is no bound object

  23. Results: Comparison with Other Frameworks
  • As the size of the dataset grows, the time to answer a query does not grow proportionately

  24. Results: Experiment with the Number of Reducers
  • As the number of reducers increases, queries are answered faster
  • The map outputs of queries 1, 12 and 13 are so small that they can be processed with one reducer

  25. Conclusions and Future Work
  • We proposed
  • A schema to store RDF data in plain text files
  • An algorithm to determine the best processing plan to answer a SPARQL query
  • A cost model to be used by the algorithm
  • Our system is highly scalable
  • Query answering time does not increase as fast as the data size grows
  • We will extend the work in the future
  • Build a cloud service based on the framework
  • Investigate the skewed distribution of the data
  • Experiment with heterogeneous clusters

  26. Thank you. Questions?
