
Question Classification using Support Vector Machine


Presentation Transcript


  1. Question Classification using Support Vector Machine. Dell Zhang and Wee Sun Lee, National University of Singapore. SIGIR 2003.

  2. Introduction (1/3) • What a current information retrieval system or search engine can do is just "document retrieval". • Given some keywords, it only returns the relevant documents that contain those keywords. • However, what a user really wants is often a precise answer to a question. • Given "What was the first American in space?", the user wants the answer "Alan Shepard", not to read through lots of documents containing the words "first", "American", "space", etc.

  3. Introduction (2/3) • In order to correctly answer a question, one usually needs to understand what the question asks for. • Question classification, i.e., putting questions into several semantic categories, can not only impose constraints on the plausible answers but also suggest different processing strategies. • For instance, if the system understands that the question "What was the first American in space?" asks for a person name, the search space of plausible answers will be significantly reduced. • The accuracy of question classification is therefore very important to the overall performance of a question answering system.

  4. Introduction (3/3) • Although it is possible to manually write heuristic rules for question classification, doing so usually requires a tremendous amount of tedious work. • In contrast, a machine learning approach can automatically construct a high-performance question classifier that leverages thousands or more features of questions. • However, only a few papers describe machine learning approaches to question classification, and some of them report pessimistic results.

  5. Question Classification • Question classification means putting questions into several semantic categories. • Here only TREC-style questions, i.e., open-domain factual questions, are considered. • We follow the two-layered question taxonomy, which contains 6 coarse-grained categories and 50 fine-grained categories (the coarse-grained categories are listed in the sketch below). • Most question answering systems use a coarse-grained category definition.
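For concreteness, here is a minimal sketch of the six coarse-grained categories of that taxonomy (due to Li and Roth); the example questions are illustrative additions, not taken from the datasets:

    # The six coarse-grained categories of the two-layered question taxonomy;
    # each one expands into fine-grained subcategories (50 in total).
    COARSE_CATEGORIES = [
        "ABBREVIATION",  # e.g. "What does NASA stand for?"
        "DESCRIPTION",   # e.g. "What is an atom?"
        "ENTITY",        # e.g. "What color is the sky?"
        "HUMAN",         # e.g. "Who was the first American in space?"
        "LOCATION",      # e.g. "Where is the Louvre?"
        "NUMERIC",       # e.g. "How many continents are there?"
    ]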

  6. Datasets • We use the publicly available training and testing datasets provided by USC, UIUC, and TREC. • All these datasets have been manually labeled by UIUC according to the coarse- and fine-grained categories in Table 1. • There are about 5,500 labeled questions, randomly divided into 5 training datasets of sizes 1000, 2000, 3000, 4000, and 5500. • The testing dataset contains 500 labeled questions from the TREC 10 QA track. • For each question, we extract two kinds of features: bag-of-words and bag-of-ngrams. Every question is represented as a binary feature vector, because the term frequency (tf) of each word or ngram in a question is usually 0 or 1.
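A minimal sketch of this binary feature extraction, assuming simple whitespace tokenization (the paper does not specify its tokenizer):

    from itertools import chain

    def bag_of_ngrams(question, n_max=2):
        # Binary bag-of-words / bag-of-ngrams features for one question.
        # The feature vector is 1 exactly on this set, matching the
        # observation that tf is usually 0 or 1 in a short question.
        words = question.lower().split()
        ngrams = chain.from_iterable(
            (" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
            for n in range(1, n_max + 1))
        return set(ngrams)

    features = bag_of_ngrams("What was the first American in space ?")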

  7. Support Vector Machine (1/2) • Support Vector Machines are linear functions of the form f(x) = ⟨w, x⟩ + b, where ⟨w, x⟩ is the inner product between the weight vector w and the input vector x. • The SVM can be used as a classifier by setting the class to +1 if f(x) > 0 and to -1 otherwise. • The main idea of the SVM is to select a hyperplane that separates the positive and negative examples while maximizing the minimum margin, where the margin for example x_i is y_i · f(x_i) and y_i ∈ {-1, +1}. • This corresponds to minimizing ⟨w, w⟩ subject to y_i · (⟨w, x_i⟩ + b) ≥ 1 for all i. • Large-margin classifiers are known to have good generalization properties.
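In code, the decision rule is just a thresholded inner product; a minimal NumPy sketch (in practice w and b come from training):

    import numpy as np

    def svm_predict(w, b, x):
        # Class is +1 if f(x) = <w, x> + b > 0, and -1 otherwise.
        return 1 if np.dot(w, x) + b > 0 else -1

    # Toy hyperplane that separates points by their first coordinate.
    w, b = np.array([1.0, 0.0]), -0.5
    assert svm_predict(w, b, np.array([1.0, 0.0])) == 1
    assert svm_predict(w, b, np.array([0.0, 1.0])) == -1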

  8. Support Vector Machine (2/2) • To deal with cases where there may be no separating hyperplane, the soft-margin SVM has been proposed. • The soft-margin SVM minimizes ⟨w, w⟩ + C · Σ_i ξ_i subject to y_i · (⟨w, x_i⟩ + b) ≥ 1 - ξ_i and ξ_i ≥ 0 for all i, where the ξ_i are slack variables and C is a parameter that controls the amount of training error allowed.
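In practice one rarely solves this optimization by hand; a hedged sketch using scikit-learn's LinearSVC (our choice of toolkit, not necessarily the paper's), where C plays exactly the role described above:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import LinearSVC

    # Hypothetical toy data; the real experiments use the UIUC/TREC datasets.
    questions = ["What was the first American in space ?",
                 "Where is the Louvre ?"]
    labels = ["HUMAN", "LOCATION"]

    vec = CountVectorizer(binary=True)   # binary bag-of-words features
    X = vec.fit_transform(questions)
    clf = LinearSVC(C=1.0)               # C bounds the training error allowed
    clf.fit(X, labels)
    print(clf.predict(vec.transform(["Who invented the telephone ?"])))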

  9. Other Learning Algorithms (1/2) • Besides the SVM, we tried four other machine learning algorithms: Nearest Neighbors (NN), Naïve Bayes (NB), Decision Tree (DT), and Sparse Network of Winnows (SNoW). • The NN algorithm is a simplified version of the well-known kNN algorithm, which has been successfully applied to document classification. • Given an unlabeled instance, the NN algorithm finds its nearest (most similar) neighbors among the training examples and uses the dominant class label of these nearest neighbors as its class label. • The similarity between two instances is simply defined as the number of overlapping features between them.
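A direct sketch of this NN rule, with similarity measured as feature overlap (reusing bag_of_ngrams from the earlier sketch):

    from collections import Counter

    def nn_classify(query_feats, train_set, k=1):
        # train_set is a list of (feature_set, label) pairs; similarity is
        # the number of overlapping features. Ties are broken arbitrarily.
        scored = sorted(train_set,
                        key=lambda ex: len(query_feats & ex[0]),
                        reverse=True)
        votes = Counter(label for _, label in scored[:k])
        return votes.most_common(1)[0][0]

    train = [(bag_of_ngrams("Who was the first man on the moon ?"), "HUMAN"),
             (bag_of_ngrams("Where is the Louvre ?"), "LOCATION")]
    print(nn_classify(bag_of_ngrams("Who invented the telephone ?"), train))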

  10. Other Learning Algorithms (2/2) • The Naïve Bayes algorithm is regarded as one of the top-performing methods for document classification. • Its basic idea is to estimate the parameters of a multinomial generative model and then find the most probable class for a given instance using Bayes' rule. • The Decision Tree (DT) algorithm is a method for approximating discrete-valued target functions, in which the learned function is represented by a tree of arbitrary degree that classifies instances. • The SNoW algorithm is specifically tailored for learning in the presence of a very large number of features and can be used as a general-purpose multi-class classifier. The learned classifier is a sparse network of linear functions.
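For comparison, the Naïve Bayes and Decision Tree baselines can be sketched with scikit-learn (again an assumption about tooling; the paper does not name its implementations):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.tree import DecisionTreeClassifier

    questions = ["Who was the first American in space ?",
                 "Where is the Louvre ?"]
    labels = ["HUMAN", "LOCATION"]
    X = CountVectorizer(binary=True).fit_transform(questions)

    # NB: a multinomial generative model queried via Bayes' rule.
    nb = MultinomialNB().fit(X, labels)
    # DT: a tree of feature tests approximating the discrete target.
    dt = DecisionTreeClassifier().fit(X, labels)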

  11. Experiment Result (1/2) • We trained the above learning algorithms on the 5 training datasets of different sizes and tested them on the TREC 10 questions. • For the SVM, we observed that the bag-of-ngrams features are not much better than the bag-of-words features.

  12. Experiment Result (2/2) • In summary, the experimental results show that with only surface text features, the SVM outperforms the other four methods on this task.

  13. Tree Kernel's Concept (1/4) • We propose to use a special kernel function called the tree kernel to enable the SVM to take advantage of the syntactic structures of questions. • For a question, we first parse it into a syntactic tree using a parser, and then represent it as a vector in a high-dimensional space defined over tree fragments. • The tree fragments of a syntactic tree are all of its sub-trees that include at least one terminal symbol (word) or one production rule. • Implicitly, we enumerate all possible tree fragments 1, 2, …, m. These tree fragments are the axes of this m-dimensional space.
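To make this concrete, a syntactic tree can be modeled with a minimal node structure (a sketch; a real system would use its parser's own tree type):

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        label: str                      # non-terminal (e.g. "NP") or a word
        children: list = field(default_factory=list)

        def production(self):
            # The production rule at this node, e.g. ("NP", ("DT", "NN")).
            return (self.label, tuple(c.label for c in self.children))

    # A simplified parse of "What is an atom ?", for illustration only.
    tree = Node("SBARQ", [
        Node("WHNP", [Node("WP", [Node("What")])]),
        Node("SQ", [Node("VBZ", [Node("is")]),
                    Node("NP", [Node("DT", [Node("an")]),
                                Node("NN", [Node("atom")])])])])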

  14. Tree Kernel's Concept (2/4) • Every syntactic tree T is represented by an m-dimensional vector V(T). • The i-th element V_i(T) is the weight of the i-th tree fragment if that fragment occurs in T, and zero otherwise. • The tree kernel of two syntactic trees T1 and T2 is the inner product ⟨V(T1), V(T2)⟩. • Each tree fragment i is weighted by α^s(i) · β^d(i), where s(i) is its size, α is the weighting factor for size, d(i) is its depth, and β is the weighting factor for depth. • The size of a tree fragment is defined as the number of production rules it contains. The depth of a tree fragment is defined as the depth of the fragment's root in the entire syntactic tree.
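The weight itself is a one-liner; a sketch, with default values taken from the evaluation section below (the exact exponent convention is an assumption):

    def fragment_weight(size, depth, alpha=0.1, beta=0.9):
        # alpha**size * beta**depth: with both factors below 1, larger and
        # deeper fragments contribute less, which is what focuses the kernel
        # on material near the root of the question.
        return (alpha ** size) * (beta ** depth)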

  15. Tree Kernel's Concept (3/4) • We use the question "What is an atom?" to illustrate the above concepts.

  16. Tree Kernel's Concept (4/4) • The advantage of the tree kernel is that it can weight the tree fragments according to their depth so as to identify the question focus. • For example, in "Which university did the president graduate from?", the focus is "university", not "president", since the depth of "university" is less than that of "president".

  17. Tree Kernel's Computation (1/3) • We describe how the tree kernel can be computed with polynomial time complexity by dynamic programming. • Consider a syntactic tree T and let N denote the set of nodes in T. Define the binary indicator function I_i(n) to be 1 if tree fragment i is seen rooted at node n and 0 otherwise, and let d(n) denote the depth of node n. Thus V_i(T) = Σ_{n∈N} I_i(n) · α^s(i) · β^d(n). • Given two syntactic trees T1 and T2, the tree kernel k(T1, T2) = ⟨V(T1), V(T2)⟩ can then be computed as follows: k(T1, T2) = Σ_{n1∈N1} Σ_{n2∈N2} β^(d(n1)+d(n2)) · D(n1, n2), where D(n1, n2) = Σ_i α^(2·s(i)) · I_i(n1) · I_i(n2) sums over the fragments rooted at both n1 and n2.

  18. Tree Kernel's Computation (2/3) • The value of D(n1, n2) can be computed recursively, in the style of Collins and Duffy's tree-kernel recursion: if the production rules at n1 and n2 are different, D(n1, n2) = 0; if they are the same and n1, n2 are pre-terminals, D(n1, n2) = α²; otherwise D(n1, n2) = α² · Π_j (1 + D(ch_j(n1), ch_j(n2))), where ch_j(n) is the j-th child of n.

  19. Tree Kernel's Computation (3/3) • Following the dynamic programming idea, we can traverse T1 and T2 in post-order, fill in a |N1| × |N2| matrix of D(n1, n2) values, and then sum up the weighted entries. The time complexity of this algorithm therefore turns out to be at most O(|N1| · |N2|).
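A sketch of this dynamic program over the Node type from earlier, with memoization standing in for the explicit post-order matrix (the weighting scheme follows the reconstruction above and is an assumption):

    def _nodes_with_depth(root, depth=0):
        # Enumerate every node of a tree together with its depth d(n).
        yield root, depth
        for child in root.children:
            yield from _nodes_with_depth(child, depth + 1)

    def tree_kernel(t1, t2, alpha=0.1, beta=0.9):
        memo = {}

        def D(n1, n2):
            # Size-weighted count of common fragments rooted at (n1, n2);
            # single-word fragments are omitted for brevity.
            key = (id(n1), id(n2))
            if key not in memo:
                if not n1.children or n1.production() != n2.production():
                    memo[key] = 0.0
                else:
                    r = alpha ** 2
                    for c1, c2 in zip(n1.children, n2.children):
                        r *= 1.0 + D(c1, c2)   # each child: stop or extend
                    memo[key] = r
            return memo[key]

        # k(T1, T2) = sum over node pairs of beta**(d(n1)+d(n2)) * D(n1, n2);
        # memoization keeps the cost at most O(|N1| * |N2|).
        return sum((beta ** (d1 + d2)) * D(n1, n2)
                   for n1, d1 in _nodes_with_depth(t1)
                   for n2, d2 in _nodes_with_depth(t2))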

  20. Evaluation (1/2) • To find appropriate values for the tree kernel's parameters (α, β), we ran 4-fold cross-validation on 1000 training examples, which gave α = 0.1 and β = 0.9. • Under the coarse-grained category definition, the SVM based on the tree kernel brings about a 20% error reduction.
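A hedged sketch of such a parameter search, plugging the tree_kernel above into an SVM as a precomputed Gram matrix (scikit-learn is our assumption, not the paper's toolkit):

    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.svm import SVC

    def cv_accuracy(trees, labels, alpha, beta, n_folds=4):
        # Cross-validation accuracy for one (alpha, beta) setting.
        n = len(trees)
        K = np.array([[tree_kernel(a, b, alpha, beta) for b in trees]
                      for a in trees])
        y, scores = np.asarray(labels), []
        for tr, te in KFold(n_splits=n_folds, shuffle=True).split(np.arange(n)):
            clf = SVC(kernel="precomputed")
            clf.fit(K[np.ix_(tr, tr)], y[tr])           # train-vs-train kernel
            scores.append(clf.score(K[np.ix_(te, tr)], y[te]))
        return float(np.mean(scores))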

  21. Evaluation (2/2) • Under the fine-grained category definition, the SVM based on the tree kernel brings only slight improvements. • After carefully checking the misclassifications, we believe that the syntactic tree does not normally contain the information required to distinguish among the various fine categories within a coarse category. • Nevertheless, if we first classify questions into coarse-grained categories and then classify the questions within each coarse-grained category into its fine-grained categories, we may still expect better results from the SVM based on the tree kernel.

  22. Conclusion • The main contributions of this paper are as follows. • We show that with only surface text features, the SVM outperforms four other machine learning methods (NN, NB, DT, SNoW) for question classification. • We find that the syntactic structures of questions are genuinely helpful for question classification. • We propose to use a special kernel function called the tree kernel to enable the SVM to take advantage of the syntactic structures of questions, and we describe how the tree kernel can be computed efficiently by dynamic programming. • The main future direction of this research is to exploit semantic knowledge (such as WordNet) for question classification.
