
Question Answering as Question-Biased Term Extraction: A New Approach toward Multilingual QA



  1. Question Answering as Question-Biased Term Extraction: A New Approach toward Multilingual QA Yutaka Sasaki ATR NLP LAB ACL 2005

  2. Abstract • This paper replaces the conventional Question Answering (QA) pipeline with Question-Biased Term Extraction (QBTE). • QA vs. QBTE • The system uses Maximum Entropy Models as its ML technique and is evaluated with 10-fold cross validation. • Experimental results: • MRR: 0.36 • Top5: 0.47

  3. Terminology • MRR: Mean Reciprocal Rank. Take the top n answers, find the rank of the first correct one; the score is the reciprocal of that rank. • Top5: the proportion of questions whose top five answers contain at least one correct answer.
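The two metrics above can be sketched in a few lines of Python; this is a minimal illustration with hypothetical function names, not the paper's evaluation code:

```python
def reciprocal_rank(ranked_answers, gold, n=5):
    """1/rank of the first correct answer in the top n, else 0."""
    for rank, ans in enumerate(ranked_answers[:n], start=1):
        if ans in gold:
            return 1.0 / rank
    return 0.0

def evaluate(all_ranked, all_gold, n=5):
    """MRR: mean reciprocal rank over all questions.
    Top5: fraction of questions with a correct answer in the top n."""
    rrs = [reciprocal_rank(r, g, n) for r, g in zip(all_ranked, all_gold)]
    mrr = sum(rrs) / len(rrs)
    topn = sum(1 for rr in rrs if rr > 0) / len(rrs)
    return mrr, topn
```

For example, a question whose correct answer is ranked 2nd contributes a reciprocal rank of 0.5 to the MRR.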

  4. A typical QA architecture • Question Analyzer • Document Retriever • Answer Candidate Extractor • Answer Selector

  5. Question types are composed of • Named entities • Numerical expressions • Class names • etc.

  6. Drawbacks of conventional QA • The system is constrained by its predefined question types • Multilingual QA is a challenge: extracting named entities, numerical expressions, and class names for another language takes considerable time and effort

  7. Building QA components automatically • Build the components from scratch with machine learning • Drawback: not enough training data • Requires redesigning the question types • For example, Chinese or Greek QA

  8. QBTE • Uses no question types, only question features • Uses Maximum Entropy Models (MEM) to extract answers from question features, document features, and combined features (question features paired with document features)

  9. CRL QA Data • Articles from the 1995 Mainichi newspaper (Japan) • 2,000 pairs of Japanese questions and correct answers • 115 question types in total • Each entry contains: • QUESTION, Q_TYPE, NE_TYPE, CENTER_WORD, LEVEL, ANSWER, DOCNO

  10. <QA> <QAID>CRL-QA2002-00006-01</QAID> <QUESTION>一九五三年十月、日韓会談で日本側の首席代表を務めたのは誰か?</QUESTION> <Q_TYPE>誰</Q_TYPE> <NE_TYPE>人名</NE_TYPE> <CENTER_WORD>首席代表</CENTER_WORD> <LEVEL>A</LEVEL> <A_SET> <ANSWER>久保田貫一郎</ANSWER> <DOCNO>951111007</DOCNO> </A_SET> </QA>

  11. <QA> <QAID>CRL-QA2002-00344-01</QAID> <QUESTION>鉄鋼大手各社の要員削減では転籍、早期退職者への割り増し退職金支払いに伴い特別退職損失が生じたが、三百十億円を計上したのはどこか?</QUESTION> <Q_TYPE>どこ</Q_TYPE> <NE_TYPE>企業名</NE_TYPE> <CENTER_WORD>首席代表</CENTER_WORD> <LEVEL>A</LEVEL> <A_SET> <ANSWER>新日鉄</ANSWER> <DOCNO>951111190</DOCNO> <DOCNO>951111079</DOCNO> </A_SET> </QA>

  12. Data preparation • Document Set: the 1995 Mainichi newspaper (Japan) • Q/A Set: CRL QA data • The data include question-type information, but this experiment does not use it.

  13. Maximum Entropy Models
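The slide presumably showed the standard maximum entropy (log-linear) formulation; as a sketch, the conditional model over the IOB2 labels is usually written as:

```latex
p_\Lambda(y \mid x) = \frac{1}{Z_\Lambda(x)} \exp\Big(\sum_{i} \lambda_i f_i(x, y)\Big),
\qquad
Z_\Lambda(x) = \sum_{y' \in \{I, O, B\}} \exp\Big(\sum_{i} \lambda_i f_i(x, y')\Big)
```

where each f_i(x, y) is a binary feature function over a word's feature vector x and its label y, and the weights λ_i are estimated by maximum likelihood on the training data.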

  14. Feature Function

  15. Retrieval • Given a user question, the system runs two steps • Document Retrieval • Retrieves the top N articles or paragraphs • QBTE • Creates input data by combining the question features and document features, evaluates the input data, and outputs the top 5 answers.

  16. IOB2 • I: the word is inside an answer • O: the word is not in an answer • B: the word begins an answer • Example: New/O Orleans/O and/O the/B Gulf/I Coast/I more/O than/O six/O months/O
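The tagging scheme above can be sketched as a small function that marks an answer span in a word sequence; function and variable names here are illustrative, not from the paper:

```python
def iob2_tags(words, answer):
    """Tag each word: B at the answer start, I inside the answer, O elsewhere."""
    tags = ["O"] * len(words)
    n = len(answer)
    for i in range(len(words) - n + 1):
        if words[i:i + n] == answer:   # first occurrence of the answer span
            tags[i] = "B"
            for j in range(i + 1, i + n):
                tags[j] = "I"
            break
    return tags
```

Applied to the slide's sentence with the answer "the Gulf Coast", this reproduces the tag sequence O O O B I I O O O O.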

  17. Feature extraction • Question Feature Set (QF) • Document Feature Set (DF) • Combined Feature Set (CF)

  18. QF • qw: an enumeration of the word n-grams (1 ≤ n ≤ N) • qq: interrogative words • qm1: POS1 of words in the question • qm2: POS2 of words in the question • qm3: POS3 of words in the question • qm4: POS4 of words in the question

  19. QF-qw • If N=2 • What is CNN ? • What, is, CNN, What-is, is-CNN • 5 features in total • The paper uses up to 4-grams
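The n-gram enumeration on this slide can be sketched as follows (a minimal illustration; the function name is hypothetical):

```python
def qw_features(question_words, max_n=4):
    """Enumerate word n-grams (1 <= n <= max_n) as qw features."""
    feats = []
    for n in range(1, max_n + 1):
        for i in range(len(question_words) - n + 1):
            feats.append("-".join(question_words[i:i + n]))
    return feats
```

With max_n=2 and the question "What is CNN", this yields the 5 features listed on the slide.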

  20. QF-qq • Who • Where • What • How

  21. QF-qm1 • What is CNN ? • wh-adv, verb, noun • 3 features in total

  22. QF-qm2 ~ QF-qm4 • POS1-POS4: IPA POS tags (from the Japanese morphological analyzer ChaSen) • Tokyo • POS1=noun • POS2=proper noun • POS3=location • POS4=general

  23. DF • dw–k,. . .,dw+0,. . .,dw+k: k preceding and following words of the word wi • dm1–k,. . .,dm1+0,. . .,dm1+k: POS1 of k preceding and following words of the word wi • dm2–k,. . .,dm2+0,. . .,dm2+k: POS2 of k preceding and following words of the word wi • dm3–k,. . .,dm3+0,. . .,dm3+k: POS3 of k preceding and following words of the word wi • dm4–k,. . .,dm4+0,. . .,dm4+k: POS4 of k preceding and following words of the word wi

  24. DF-dw • Here k=3, so dw has 7 features. • If k=1 • Twelve men and five youths said to have been inspired by al Qaeda were arrested • At i=1: dw+0=Twelve, dw+1=men • At i=2: dw+0=men, dw-1=Twelve, dw+1=and • At i=3: dw+0=and, dw-1=men, dw+1=five
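The windowed word features above can be sketched like this (illustrative names; positions here are 0-based, so the slide's i=1 corresponds to index 0):

```python
def dw_features(words, i, k=3):
    """dw-k .. dw+k: the words in a window of size k around position i."""
    feats = {}
    for offset in range(-k, k + 1):
        j = i + offset
        if 0 <= j < len(words):          # skip positions outside the document
            sign = "+" if offset >= 0 else ""
            feats[f"dw{sign}{offset}"] = words[j]
    return feats
```

The same pattern, with POS tags in place of surface words, gives the dm1 through dm4 feature sets.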

  25. DF-dm1 • Police chief describes the power of the plot • noun noun verb article noun preposition article noun • If k=1 (dm1-k ~ dm1+k) • At i=1: dm1+0=noun, dm1+1=noun • At i=2: dm1+0=noun, dm1-1=noun, dm1+1=verb

  26. DF-dm2 ~ DF-dm4 • Same as DF-dm1, but with POS2 ~ POS4

  27. CF • cw–k,. . .,cw+0,. . .,cw+k: matching results (true/false) between each of dw-k,...,dw+k features and any qw feature, • cm1–k,. . .,cm1+0,. . .,cm1+k: matching results (true/false) between each of dm1-k,...,dm1+k features and any POS1 in qm1 features, • cm2–k,. . .,cm2+0,. . .,cm2+k: matching results (true/false) between each of dm2-k,...,dm2+k features and any POS2 in qm2 features, • cm3–k,. . .,cm3+0,. . .,cm3+k: matching results (true/false) between each of dm3-k,...,dm3+k features and any POS3 in qm3 features, • cm4–k,. . .,cm4+0,. . .,cm4+k: matching results (true/false) between each of dm4-k,...,dm4+k features and any POS4 in qm4 features • cq–k,. . .,cq+0,. . .,cq+k: combinations of each of dw–k,...,dw+k features and qw features

  28. CF-cw • If qw contains President and dw-1=President, then cw-1=True • If qw contains President and dw+0=Chen, then cw+0=False • Question: What is CNN ? • Passage: CNN is a television station • At i=2, k=1: cw-1=True, cw+0=True, cw+1=False
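The cw matching above can be sketched as comparing each windowed document word against the set of question unigrams (illustrative names, not the paper's code):

```python
def cw_features(dw_feats, question_words):
    """cw-k .. cw+k: True iff the document word at that offset
    matches any word of the question."""
    return {key.replace("dw", "cw"): (word in question_words)
            for key, word in dw_feats.items()}
```

With the slide's example, the window around "is" in "CNN is a television station" matches the question "What is CNN ?" on the first two positions but not the third.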

  29. CF-cm1 ~ CF-cm4 • POS1 ~ POS4 • dm1±k matches some qm1 → True (POS1) • dm2±k matches some qm2 → True (POS2) • dm3±k matches some qm3 → True (POS3) • dm4±k matches some qm4 → True (POS4)

  30. CF-cq • dw-1=President • qw-1=Who • cq-1=President&Who

  31. Recap • Features • QF (Question Feature Set) • qw, qq, qm1, qm2, qm3, qm4 • DF (Document Feature Set) • dw, dm1, dm2, dm3, dm4 • CF (Combined Feature Set) • cw, cm1, cm2, cm3, cm4, cq

  32. Training phase

  33. Training phase • Each word's features (dw, dm1 ~ dm4, cw, cm1 ~ cm4, cq) form an input vector x(1), x(2), ..., each paired with its IOB2 label y(1), y(2), ... (e.g., y(1) = O)

  34. Computing probabilities with Maximum Entropy Models

  35. Run-time phase

  36. Run-time phase • For each word, compute the probabilities of the labels I, O, B from its feature vector x'(1), x'(2), ..., and output the most likely label sequence y, e.g.: O O B I I I I O O O O
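Once each word is labeled, answer candidates are read off the predicted tag sequence; a minimal sketch of that decoding step (hypothetical names):

```python
def extract_answers(words, tags):
    """Collect answer candidates from a predicted IOB2 tag sequence:
    each B starts a new candidate, I extends it, O closes it."""
    answers, current = [], []
    for word, tag in zip(words, tags):
        if tag == "B":
            if current:
                answers.append(" ".join(current))
            current = [word]
        elif tag == "I" and current:
            current.append(word)
        else:
            if current:
                answers.append(" ".join(current))
            current = []
    if current:
        answers.append(" ".join(current))
    return answers
```

In the full system, candidates would then be ranked by their label probabilities to produce the top 5 answers.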

  37. Experiments • 10-fold cross validation • Scores: Top5 and MRR • Answer correctness is judged both automatically and manually • Automatic judgment includes • exact matching • partial matching

  38. Automatic judgment • Example: Who is the Minister of Education? (誰是教育部長?) • Exact matching (lower bound) • 杜正勝 → correct • 正勝 → incorrect • Partial matching (upper bound) • 杜正勝部長 → correct • 正勝 → correct
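A minimal sketch of the two judgment modes, assuming exact matching means string equality with a gold answer and partial matching means string overlap (containment in either direction); the function names are illustrative:

```python
def exact_match(system_answer, gold_answers):
    """Lower-bound judgment: the extracted string equals a gold answer."""
    return system_answer in gold_answers

def partial_match(system_answer, gold_answers):
    """Upper-bound judgment: the extracted string contains,
    or is contained in, a gold answer."""
    return any(g in system_answer or system_answer in g for g in gold_answers)
```

Under these definitions, 杜正勝部長 and 正勝 both pass partial matching against the gold answer 杜正勝, but only 杜正勝 itself passes exact matching.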

  39. Experimental results

  40. Further experiments • To test whether QBTE can rank correct answers higher, the number of retrieved passages (articles) was varied: N=1, 3, 5, 10 • Originally only N=1 was used

  41. Comparison • QBTE: MRR=0.36, Top5=0.47 • SAIQA2: MRR=0.4, Top5=0.55 • In SAIQA2, the question analysis, answer candidate extraction, and answer selection modules were independently built from a QA dataset and an NE dataset • it is limited to eight named entities • and its QA dataset is not publicly available

  42. System characteristics • How many times bigger ... ? • The correct answer is two times, but two alone is also judged correct • If John Kerry is the prepared correct answer, then in this case Senator John Kerry is also judged correct • This is because the system is not constrained by a predefined extraction unit

  43. A further experiment • Adding the question type to the qw features ... • yields roughly the same results
