
Deep Learning of RDF rules






Presentation Transcript


  1. Semantic Machine Learning Deep Learning of RDF rules

  2. Outline • Deep Learning motivation • Semantic Reasoning • Deep learning of RDFS rules • Results • Discussion

  3. Deep Learning motivation After a few decades of low popularity relative to other machine learning methods, neural networks are trending again with the deep learning movement. While other machine learning techniques rely on hand-selecting pertinent features of the data, deep neural networks learn the best features for each task.

  4. Deep Learning Auto-encoder • Input data = target data • Learning efficient encodings. • Typically used for dimensionality reduction.
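The auto-encoder idea can be sketched in a few lines of NumPy. This is an illustrative toy, not the network from the slides: a linear autoencoder trained by plain gradient descent, where the input is its own target and 4-D data is squeezed through a 2-D bottleneck.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 4-D that secretly lie on a 2-D subspace.
Z = rng.normal(size=(200, 2))
X = Z @ rng.normal(size=(2, 4))

# Linear autoencoder 4 -> 2 -> 4; "input data = target data".
W_enc = rng.normal(scale=0.1, size=(4, 2))
W_dec = rng.normal(scale=0.1, size=(2, 4))

def mse():
    # Reconstruction error between decode(encode(X)) and X itself.
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

loss_before = mse()
lr = 0.01
for _ in range(500):
    H = X @ W_enc                     # encode to the 2-D bottleneck
    G = 2 * (H @ W_dec - X) / X.size  # dMSE / d(reconstruction)
    g_dec = H.T @ G                   # gradient w.r.t. decoder weights
    g_enc = X.T @ (G @ W_dec.T)       # gradient w.r.t. encoder weights
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
loss_after = mse()
```

Because the data really has only 2 intrinsic dimensions, the reconstruction error shrinks as training proceeds; that is exactly the sense in which the bottleneck "learns an efficient encoding".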

  5. Deep Learning MNIST example • MNIST (Mixed National Institute of Standards and Technology database) is a large database of handwritten digits. • Learning pen strokes with an auto-encoder

  6. Deep Learning Panoply of Applications: Computer vision • Convolutional nets: loosely based on what little we know about the visual cortex • Deeper layers learn more complex features: • Edges → Shapes → Objects

  7. Deep Learning Panoply of Applications: ImageNet Challenge • ImageNet is an image database organized according to the WordNet hierarchy • Classification error dropped from 25.8% in 2011 to 3.46% in 2015

  8. Deep Learning Recurrent neural networks • Convolutional nets are more suitable for static images. To learn from sequential data (text, speech, video), we typically use Recurrent Neural Networks. • Neural networks with loops, so that previous inputs affect the current output. • Trained with BackPropagation Through Time (BPTT)
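A minimal NumPy sketch of the recurrent loop (illustrative only: the weights here are random, not trained with BPTT). The hidden state carries information forward, so changing an early input changes the final output.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.5, size=(3, 3))  # hidden-to-hidden weights: the "loop"
U = rng.normal(scale=0.5, size=(3, 2))  # input-to-hidden weights

def run(sequence):
    h = np.zeros(3)                 # hidden state carried between steps
    for x in sequence:
        h = np.tanh(W @ h + U @ x)  # previous state feeds into current output
    return h

seq_a = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
seq_b = [np.array([0.0, 1.0])] + seq_a[1:]  # differs only in the FIRST input

out_a, out_b = run(seq_a), run(seq_b)
```

The two final states differ even though the last two inputs are identical, which is the property that makes RNNs suitable for sequences.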

  9. Deep Learning Sequence-To-Sequence Learning • When both input and target are sequences, as in machine translation, we use sequence-to-sequence training. • An encoding RNN and a decoding RNN

  10. Rules Learning Main Idea • Input: (a, transitive_property, b), (b, transitive_property, c) • Target: (a, transitive_property, c)
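Outside the network, the entailment being learned is just a pattern-matching rule; a small Python sketch (with hypothetical triple tuples) makes the input/target relationship explicit:

```python
def entail_transitive(t1, t2):
    """If t1 = (a, p, b) and t2 = (b, p, c) for a transitive p,
    return the entailed triple (a, p, c); otherwise None."""
    (s1, p1, o1), (s2, p2, o2) = t1, t2
    if p1 == p2 and o1 == s2:   # same property, object of t1 chains into t2
        return (s1, p1, o2)
    return None

# The slide's input/target pair:
source = (("a", "transitive_property", "b"),
          ("b", "transitive_property", "c"))
target = entail_transitive(*source)
```

The seq2seq network is asked to learn this mapping purely from (input sequence, target sequence) examples, without ever seeing the rule in symbolic form.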

  11. Semantic Web • Semantic Web: extends the Web of documents to the Web of Data, where computers can reason about the data. • Semantic Web Stack: a set of standards and technologies that aim to realize this vision

  12. RDFS reasoning • Inferring new triples from existing facts. Example:
  <http://swat.cse.lehigh.edu/onto/univ-bench.owl#advisor> <http://www.w3.org/2000/01/rdf-schema#range> <http://swat.cse.lehigh.edu/onto/univ-bench.owl#Professor> .
  <https://tw.rpi.edu/web/person/bassemmakni> <http://swat.cse.lehigh.edu/onto/univ-bench.owl#advisor> <https://tw.rpi.edu//web/person/JimHendler> .
  Inferred: <https://tw.rpi.edu//web/person/JimHendler> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://swat.cse.lehigh.edu/onto/univ-bench.owl#Professor> .
  • RDFS Materialization: Given an RDF graph, generate all the inferred triples.
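The rdfs:range entailment in this example (RDFS rule rdfs3) can be written as a small Python function. This sketch reuses the slide's URIs and is an illustration of the rule itself, not of the neural reasoner:

```python
RDF_TYPE = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"
RDFS_RANGE = "<http://www.w3.org/2000/01/rdf-schema#range>"

def apply_range_rule(schema_triple, data_triple):
    """rdfs3: (p rdfs:range C) and (s p o)  =>  (o rdf:type C)."""
    p, rel, cls = schema_triple
    s, p2, o = data_triple
    if rel == RDFS_RANGE and p2 == p:
        return (o, RDF_TYPE, cls)
    return None

advisor = "<http://swat.cse.lehigh.edu/onto/univ-bench.owl#advisor>"
professor = "<http://swat.cse.lehigh.edu/onto/univ-bench.owl#Professor>"

inferred = apply_range_rule(
    (advisor, RDFS_RANGE, professor),
    ("<https://tw.rpi.edu/web/person/bassemmakni>", advisor,
     "<https://tw.rpi.edu//web/person/JimHendler>"),
)
```

Applying such rules exhaustively over a graph is exactly what "RDFS materialization" means.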

  13. Approach • Goal: • Design recurrent networks able to learn the entailments of RDFS rules. • Use these networks to generate the materialization of an RDF graph. • Steps: • Preparing the RDFS ground-truth • Recurrent network design • Encoding/Decoding • Incremental materialization

  14. Preparing the RDFS ground-truth For each pattern we collected 10 thousand samples from DBpedia and the BBC SPARQL endpoint.
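The slides do not show the collection queries themselves; a hypothetical SPARQL query for the rdfs:range pattern might look like the following (embedded here as a Python string; the actual queries against DBpedia and the BBC endpoint are an assumption):

```python
# Hypothetical query: collect (property, class, subject, object) tuples
# matching the rdfs:range rule pattern, capped at 10 thousand samples.
RANGE_SAMPLES = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?p ?c ?s ?o
WHERE {
  ?p rdfs:range ?c .
  ?s ?p ?o .
}
LIMIT 10000
"""
```

Each result row yields an (input, target) training pair: the schema triple plus the data triple as input, the entailed rdf:type triple as target.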

  15. Recurrent network design Using Keras: a deep learning framework that facilitates the design of neural network models and can run them using Theano or TensorFlow as a back end.

  16. Encoding/decoding • One-hot encoding (global): We assign an ID to each RDF resource in the full ground truth and use these IDs to generate one-hot encoded vectors for each input and output. This leads to very big vectors, and learning becomes very slow. • Word2Vec: We encode the full ground truth using the Word2Vec algorithm, treating each triple as one sentence. Good performance on the training set, but it suffers from overfitting. • One-hot encoding (local): A simpler approach that performs much better, with good accuracy and learning speed, is to encode each input separately.
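A minimal sketch of the local one-hot scheme. The exact details are an assumption: here, IDs are assigned per sample rather than over the full ground truth, so the vector length tracks the sample's small vocabulary instead of the whole graph's.

```python
def encode_local(tokens):
    # Assign IDs only within this one sample (e.g. the 6 resources of an
    # input pair of triples), not across the full ground truth.
    ids = {}
    for t in tokens:
        ids.setdefault(t, len(ids))
    dim = len(ids)
    # One-hot vector of length `dim` for each token, repeats share a vector.
    return [[1 if ids[t] == j else 0 for j in range(dim)] for t in tokens]

# One transitivity input sample flattened to a token sequence:
vectors = encode_local(["a", "p", "b", "b", "p", "c"])
```

Because the rule patterns are position-based rather than vocabulary-based, the network can still learn them from these small, sample-relative vectors, which is what keeps training fast.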

  17. Incremental materialization • After training our network on the DBpedia+BBC ground truth, we use it to materialize the LUBM graph. • LUBM: a benchmark ontology describing academic-world concepts and relations. • Algorithm: • Running a SPARQL query against an RDF store hosting the LUBM graph to collect all rule patterns. • Encoding the collected triples. • Using the trained network to generate new triple encodings. • Decoding the output to obtain RDF triples. • Inserting the generated triples back into the RDF store. • We repeat these steps until no new inferences are generated.
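The fixpoint loop above can be sketched in Python, with a symbolic transitivity rule standing in for the trained network (the real algorithm queries an RDF store and runs the encode/predict/decode steps instead):

```python
def entail_transitive(t1, t2):
    # Stand-in for the network: chains (a,p,b)+(b,p,c) into (a,p,c),
    # treating p as transitive for the purposes of this sketch.
    (s1, p1, o1), (s2, p2, o2) = t1, t2
    if p1 == p2 and o1 == s2:
        return (s1, p1, o2)
    return None

def materialize(triples, rules):
    graph = set(triples)
    while True:                       # repeat until no new inferences
        new = set()
        for rule in rules:
            for t1 in graph:
                for t2 in graph:
                    t = rule(t1, t2)
                    if t is not None and t not in graph:
                        new.add(t)
        if not new:
            return graph              # stagnation reached
        graph |= new                  # insert generated triples back

chain = {("a", "p", "b"), ("b", "p", "c"), ("c", "p", "d")}
closure = materialize(chain, [entail_transitive])
```

On this 3-triple chain the loop needs two productive iterations plus one empty one, mirroring the "repeat until stagnation" behaviour reported for LUBM.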

  18. Incremental materialization

  19. Results • Training accuracy: From the data collected in the DBpedia ground truth, we use 20% as a validation set. The training process takes less than 10 minutes with 10 iterations over the data, and we reach 0.998 validation accuracy.

  20. Results • LUBM1 is generated using one university and contains 100 thousand triples. OWLIM generates 41,113 inferred triples. • When running our incremental materialization, we needed 3 iterations to reach stagnation of the materialization. We missed 369 triples (RDFS axioms).

  21. Conclusions • Our prototype shows that sequence-to-sequence neural networks can learn RDFS rules and be used for RDF graph materialization. • The advantages of our approach: • Our algorithm is natively parallel and can benefit from advances in deep learning frameworks and GPU speeds. • Deployment on neuromorphic chips: our lab is in the process of getting access to the IBM TrueNorth chip, and we plan to run our reasoner on it.
