
  1. SIMS 290-2: Applied Natural Language Processing Marti Hearst November 29, 2006

  2. Question Answering • Today: • Introduction to QA • A typical full-fledged QA system • A very simple system, in response to this • An intermediate approach

  3. A Spectrum of Search Types: Question/Answer, Browse and Build, Text Data Mining • What is the typical height of a giraffe? • What are some good ideas for landscaping my client’s yard? • What are some promising untried treatments for Raynaud’s disease?

  4. Beyond Document Retrieval • Document Retrieval • Users submit queries corresponding to their information needs. • System returns (voluminous) list of full-length documents. • It is the responsibility of the users to find information of interest within the returned documents. • Open-Domain Question Answering (QA) • Users ask questions in natural language. What is the highest volcano in Europe? • System returns list of short answers. … Under Mount Etna, the highest volcano in Europe, perches the fabulous town… • A real use for NLP Adapted from slide by Surdeanu and Pasca

  5. Questions and Answers • What is the height of a typical giraffe? • The result can be a simple answer, extracted from existing web pages. • Can specify with keywords or a natural language query • However, most web search engines are not set up to handle questions properly. • Get different results using a question vs. keywords

  6. The Problem of Question Answering Natural language question, not keyword queries What is the nationality of Pope John Paul II? … stabilize the country with its help, the Catholic hierarchy stoutly held out for pluralism, in large part at the urging of Polish-born Pope John Paul II. When the Pope emphatically defended the Solidarity trade union during a 1987 tour of the… Short text fragment, not URL list Adapted from slide by Surdeanu and Pasca

  7. Question Answering from text • With massive collections of full-text documents, simply finding relevant documents is of limited use: we want answers • QA: give the user a (short) answer to their question, perhaps supported by evidence. • An alternative to standard IR • The first problem area in IR where NLP is really making a difference. Adapted from slides by Manning, Harabagiu, Kushmerick, and ISI

  8. People want to ask questions… • Examples from AltaVista query log • who invented surf music? • how to make stink bombs • where are the snowdens of yesteryear? • which english translation of the bible is used in official catholic liturgies? • how to do clayart • how to copy psx • how tall is the sears tower? • Examples from Excite query log (12/1999) • how can i find someone in texas • where can i find information on puritan religion? • what are the 7 wonders of the world • how can i eliminate stress • What vacuum cleaner does Consumers Guide recommend Adapted from slides by Manning, Harabagiu, Kushmerick, and ISI

  9. A Brief (Academic) History • In some sense question answering is not a new research area • Question answering systems can be found in many areas of NLP research, including: • Natural language database systems • A lot of early NLP work on these • Problem-solving systems • STUDENT (Winograd ’77) • LUNAR (Woods & Kaplan ’77) • Spoken dialog systems • Currently very active and commercially relevant • The focus on open-domain QA is new • First modern system: MURAX (Kupiec, SIGIR’93): • Trivial Pursuit questions • Encyclopedia answers • FAQFinder (Burke et al. ’97) • TREC QA competition (NIST, 1999–present) Adapted from slides by Manning, Harabagiu, Kushmerick, and ISI

  10. AskJeeves • AskJeeves is probably the most hyped example of “Question answering” • How it used to work: • Do pattern matching to match a question to their own knowledge base of questions • If a match is found, returns a human-curated answer to that known question • If that fails, it falls back to regular web search • (Seems to be more of a meta-search engine now) • A potentially interesting middle ground, but a fairly weak shadow of real QA Adapted from slides by Manning, Harabagiu, Kushmerick, and ISI

  11. Question Answering at TREC • The question answering competition at TREC consists of answering a set of 500 fact-based questions, e.g., • “When was Mozart born?” • Has really pushed the field forward. • The document set • Newswire textual documents from LA Times, San Jose Mercury News, Wall Street Journal, NY Times, etc.: over 1M documents now. • Well-formed lexically, syntactically and semantically (reviewed by professional editors). • The questions • Hundreds of new questions every year; the total is ~2400 • Task • Initially extract at most 5 answers: long (250B) and short (50B). • Now extract only one exact answer. • Several other sub-tasks added later: definition, list, biography. Adapted from slides by Manning, Harabagiu, Kushmerick, and ISI

  12. Sample TREC questions 1. Who is the author of the book, "The Iron Lady: A Biography of Margaret Thatcher"? 2. What was the monetary value of the Nobel Peace Prize in 1989? 3. What does the Peugeot company manufacture? 4. How much did Mercury spend on advertising in 1993? 5. What is the name of the managing director of Apricot Computer? 6. Why did David Koresh ask the FBI for a word processor? 7. What is the name of the rare neurological disease with symptoms such as: involuntary movements (tics), swearing, and incoherent vocalizations (grunts, shouts, etc.)?

  13. TREC Scoring • For the first three years, systems were allowed to return 5 ranked answer snippets (50/250 bytes) for each question. • Mean Reciprocal Rank (MRR) scoring: • Each question is assigned the reciprocal rank of the first correct answer: if the first correct answer is at position k, the score is 1/k (1, 0.5, 0.33, 0.25, 0.2 for positions 1–5; 0 if no correct answer is returned in the top 5). • Mainly Named Entity answers (person, place, date, …) • From 2002 on, the systems are only allowed to return a single exact answer and the notion of confidence has been introduced.
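
A minimal sketch of MRR scoring as described on this slide; the ranked answer lists and correctness checks below are toy placeholders, not TREC data.

```python
def reciprocal_rank(ranked_answers, is_correct, max_rank=5):
    """Score one question: 1/k for the first correct answer at (1-indexed)
    position k, 0 if no correct answer appears in the top max_rank."""
    for k, answer in enumerate(ranked_answers[:max_rank], start=1):
        if is_correct(answer):
            return 1.0 / k
    return 0.0

def mean_reciprocal_rank(runs):
    """Average the reciprocal ranks over all questions.
    runs is a list of (ranked_answers, is_correct) pairs."""
    return sum(reciprocal_rank(answers, check) for answers, check in runs) / len(runs)

# Toy run: correct answer at rank 1, rank 3, and missing -> (1 + 1/3 + 0) / 3
runs = [
    (["1756", "1856"], lambda a: a == "1756"),
    (["Bonn", "Vienna", "Salzburg"], lambda a: a == "Salzburg"),
    (["Paris", "Rome"], lambda a: a == "Prague"),
]
print(round(mean_reciprocal_rank(runs), 3))  # 0.444
```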

  14. Top Performing Systems • In 2003, the best performing systems at TREC could answer approximately 60-70% of the questions • Approaches and successes have varied a fair deal • Knowledge-rich approaches, using a vast array of NLP techniques, stole the show in 2000-2003 • Notably Harabagiu, Moldovan et al. (SMU/UTD/LCC) • Statistical systems starting to catch up • The AskMSR system stressed how much could be achieved by very simple methods with enough text (and now various copycats) • People are experimenting with machine learning methods • A middle ground is to use a large collection of surface matching patterns (ISI) Adapted from slides by Manning, Harabagiu, Kushmerick, ISI

  15. Example QA System • This system contains many components used by other systems, but is more complex in some ways • Most work completed in 2001; there have been advances by this group and others since then. • Next slides based mainly on: • Paşca and Harabagiu, High-Performance Question Answering from Large Text Collections, SIGIR’01. • Paşca and Harabagiu, Answer Mining from Online Documents, ACL’01. • Harabagiu, Paşca, Maiorano: Experiments with Open-Domain Textual Question Answering. COLING’00

  16. QA Block Architecture [diagram]: Q → Question Processing → Passage Retrieval → Answer Extraction → A. Question Processing captures the semantics of the question and selects keywords for passage retrieval; Passage Retrieval extracts and ranks passages using surface-text techniques, with the keywords feeding a Document Retrieval back end; Answer Extraction extracts and ranks answers using NL techniques, guided by the question semantics. Supporting resources: WordNet, parser, NER. Adapted from slide by Surdeanu and Pasca

  17. Question Processing Flow [diagram]: Q → question parsing → construction of the question representation; from this representation, answer type detection produces the AT category and keyword selection produces the keywords, which together form the question semantic representation. Adapted from slide by Surdeanu and Pasca

  18. Question Stems and Answer Types • Identify the semantic category of expected answers • Other question stems: Who, Which, Name, How hot... • Other answer types: Country, Number, Product... Adapted from slide by Surdeanu and Pasca

  19. Detecting the Expected Answer Type • In some cases, the question stem is sufficient to indicate the answer type (AT) • Why → REASON • When → DATE • In many cases, the question stem is ambiguous • Examples • What was the name of Titanic’s captain? • What U.S. Government agency registers trademarks? • What is the capital of Kosovo? • Solution: select additional question concepts (AT words) that help disambiguate the expected answer type • Examples • captain • agency • capital Adapted from slide by Surdeanu and Pasca

  20. Answer Type Taxonomy • Encodes 8707 English concepts to help recognize the expected answer type • Mapping to parts of WordNet done by hand • Can connect to Noun, Adj, and/or Verb subhierarchies

  21. Answer Type Detection Algorithm • Select the answer type word from the question representation. • Select the word(s) connected to the question. Some “content-free” words are skipped (e.g. “name”). • From the previous set, select the word with the highest connectivity in the question representation. • Map the AT word into a previously built AT hierarchy • The AT hierarchy is based on WordNet, with some concepts associated with semantic categories, e.g. “writer” → PERSON. • Select the AT(s) from the first hypernym(s) associated with a semantic category. Adapted from slide by Surdeanu and Pasca
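
A minimal sketch of the last two steps, assuming NLTK's WordNet interface: walk up the hypernym chain of the AT word until a synset associated with a semantic category is found. The three category seeds below are illustrative stand-ins for the system's hand-built hierarchy of 8707 concepts.

```python
from nltk.corpus import wordnet as wn  # assumes the WordNet corpus is installed

# Hypothetical seed synsets standing in for the hand-built AT hierarchy.
CATEGORY_SEEDS = {
    "PERSON": wn.synset("person.n.01"),
    "LOCATION": wn.synset("location.n.01"),
    "ORGANIZATION": wn.synset("organization.n.01"),
}

def answer_type(at_word):
    """Map an answer-type word (e.g. 'oceanographer') to a semantic category
    by checking its WordNet hypernyms, nearest sense first."""
    for synset in wn.synsets(at_word, pos=wn.NOUN):
        for hypernym in synset.closure(lambda s: s.hypernyms()):
            for category, seed in CATEGORY_SEEDS.items():
                if hypernym == seed:
                    return category
    return "UNKNOWN"

# Expected behaviour (exact results depend on the WordNet version and sense order):
print(answer_type("oceanographer"))  # PERSON
print(answer_type("agency"))         # ORGANIZATION
```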

  22. Answer Type Hierarchy [diagram]: a fragment of the PERSON subhierarchy, with branches such as scientist / man of science (researcher, chemist, oceanographer), performer / performing artist (dancer, actor, ballet dancer, tragedian, actress), and inhabitant / dweller / denizen (American, westerner, islander / island-dweller). Both example questions map to PERSON: “What researcher discovered the vaccine against Hepatitis-B?” (AT word: researcher) and “What is the name of the French oceanographer who owned Calypso?” (AT word: oceanographer). Adapted from slide by Surdeanu and Pasca

  23. Evaluation of Answer Type Hierarchy • This evaluation done in 2001 • Controlled the variation of the number of WordNet synsets included in the answer type hierarchy. • Test on 800 TREC questions. • Hierarchy coverage vs. precision score (50-byte answers): 0% → 0.296, 3% → 0.404, 10% → 0.437, 25% → 0.451, 50% → 0.461 • The derivation of the answer type is the main source of unrecoverable errors in the QA system Adapted from slide by Surdeanu and Pasca

  24. Keyword Selection • The Answer Type indicates what the question is looking for, but provides insufficient context to locate the answer in a very large document collection • Lexical terms (keywords) from the question, possibly expanded with lexical/semantic variations, provide the required context. Adapted from slide by Surdeanu and Pasca

  25. Lexical Terms Extraction • Questions approximated by sets of unrelated words (lexical terms) • Similar to bag-of-words IR models Adapted from slide by Surdeanu and Pasca

  26. Keyword Selection Algorithm • Select all non-stopwords in quotations • Select all NNP words in recognized named entities • Select all complex nominals with their adjectival modifiers • Select all other complex nominals • Select all nouns with adjectival modifiers • Select all other nouns • Select all verbs • Select the AT word (which was skipped in all previous steps) Adapted from slide by Surdeanu and Pasca
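
A much-simplified sketch of this priority-ordered selection. The original system relies on a full parser and named entity recognizer; here plain NLTK POS tags stand in, so named entities and complex nominals are approximated by proper nouns and ordinary nouns, and no lemmatization is done, so the example output is only approximate.

```python
import re
import nltk  # assumes the punkt and averaged_perceptron_tagger models are available

STOPWORDS = {"the", "a", "an", "of", "is", "was", "are", "were", "do", "does", "did"}

def select_keywords(question, at_word=None):
    """Return keywords in rough priority order; the AT word is added last."""
    tagged = nltk.pos_tag(nltk.word_tokenize(question))
    quoted = [w for phrase in re.findall(r'"([^"]+)"', question) for w in phrase.split()]
    skip = {at_word.lower()} if at_word else set()
    keywords, seen = [], set()

    def add(words):
        for w in words:
            wl = w.lower()
            if wl not in seen and wl not in skip and wl not in STOPWORDS:
                seen.add(wl)
                keywords.append(wl)

    add(quoted)                                        # 1. non-stopwords inside quotations
    add(w for w, t in tagged if t.startswith("NNP"))   # 2. proper nouns (stand-in for NE words)
    add(w for w, t in tagged if t.startswith("NN"))    # 3-6. remaining nouns (no complex-nominal handling)
    add(w for w, t in tagged if t.startswith("VB"))    # 7. verbs
    if at_word and at_word.lower() not in seen:
        keywords.append(at_word.lower())               # 8. finally, the answer-type word
    return keywords

print(select_keywords("What researcher discovered the vaccine against Hepatitis-B?",
                      at_word="researcher"))
# roughly: ['hepatitis-b', 'vaccine', 'discovered', 'researcher']
```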

  27. Keyword Selection Examples • What researcher discovered the vaccine against Hepatitis-B? • Hepatitis-B, vaccine, discover, researcher • What is the name of the French oceanographer who owned Calypso? • Calypso, French, own, oceanographer • What U.S. government agency registers trademarks? • U.S., government, trademarks, register, agency • What is the capital of Kosovo? • Kosovo, capital Adapted from slide by Surdeanu and Pasca

  28. Passage Retrieval [same QA block architecture diagram as slide 16, now highlighting the Passage Retrieval stage]: the keywords selected by Question Processing drive Document Retrieval, and passages are extracted and ranked using surface-text techniques before being handed to Answer Extraction. Adapted from slide by Surdeanu and Pasca

  29. Passage Extraction Loop • Passage Extraction Component • Extracts passages that contain all selected keywords • Passage size dynamic • Start position dynamic • Passage quality and keyword adjustment • In the first iteration use the first 6 keyword selection heuristics • If the number of passages is lower than a threshold → query is too strict → drop a keyword • If the number of passages is higher than a threshold → query is too relaxed → add a keyword Adapted from slide by Surdeanu and Pasca
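
A minimal sketch of this keyword-adjustment loop, assuming a hypothetical retrieve_passages() back end and illustrative thresholds; the real system's policy for choosing which keyword to drop or add is more involved.

```python
MIN_PASSAGES, MAX_PASSAGES = 10, 500   # illustrative thresholds

def retrieve_passages(keywords):
    """Placeholder for the retrieval back end: passages containing every keyword."""
    raise NotImplementedError

def passage_extraction_loop(primary_keywords, extra_keywords, max_iters=10):
    """primary_keywords: picks from heuristics 1-6 (used in the first iteration);
    extra_keywords: later picks (verbs, AT word) that can be added if needed."""
    active, spare = list(primary_keywords), list(extra_keywords)
    passages = retrieve_passages(active)
    for _ in range(max_iters):
        if len(passages) < MIN_PASSAGES and len(active) > 1:
            spare.insert(0, active.pop())    # too strict: drop the last-selected keyword
        elif len(passages) > MAX_PASSAGES and spare:
            active.append(spare.pop(0))      # too relaxed: add the next keyword back
        else:
            break                            # passage count acceptable
        passages = retrieve_passages(active)
    return passages
```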

  30. Passage Retrieval Architecture [diagram]: keywords → Document Retrieval → documents → Passage Extraction → passages → passage quality check; if the passage set is inadequate, Keyword Adjustment updates the keywords and retrieval repeats, otherwise the passages go through Passage Scoring and Passage Ordering to produce ranked passages. Adapted from slide by Surdeanu and Pasca

  31. Passage Scoring • Passages are scored based on keyword windows • For example, if a question has a set of keywords {k1, k2, k3, k4}, and in a passage k1 and k2 are matched twice, k3 is matched once, and k4 is not matched, four windows are built over the passage fragment “k1 k2 k3 k2 k1” [diagram: Windows 1-4, one for each combination of the two k1 matches and the two k2 matches] Adapted from slide by Surdeanu and Pasca

  32. Passage Scoring • Passage ordering is performed using a radix sort that involves three scores: • SameWordSequenceScore (largest) • Computes the number of words from the question that are recognized in the same sequence in the window • DistanceScore (largest) • The number of words that separate the most distant keywords in the window • MissingKeywordScore (smallest) • The number of unmatched keywords in the window Adapted from slide by Surdeanu and Pasca
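
A small sketch of the three scores and the ordering, under the assumption that a window is just a list of tokens; the scoring helpers approximate the one-line descriptions above rather than the original implementation, and a Python tuple sort stands in for the radix sort.

```python
def same_word_sequence_score(window, question_terms):
    """Number of question terms matched in the window in question order (greedy)."""
    qi = 0
    for token in window:
        if qi < len(question_terms) and token == question_terms[qi]:
            qi += 1
    return qi

def distance_score(window, question_terms):
    """Number of words separating the most distant matched keywords in the window."""
    positions = [i for i, token in enumerate(window) if token in question_terms]
    return max(positions) - min(positions) if positions else 0

def missing_keyword_score(window, question_terms):
    """Number of question keywords with no match in the window."""
    return sum(1 for term in question_terms if term not in window)

def order_passages(windows, question_terms):
    """Order as on the slide: SameWordSequenceScore (largest first), then
    DistanceScore (largest first), then MissingKeywordScore (smallest first)."""
    return sorted(windows, key=lambda w: (-same_word_sequence_score(w, question_terms),
                                          -distance_score(w, question_terms),
                                          missing_keyword_score(w, question_terms)))
```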

  33. Answer Extraction [same QA block architecture diagram as slide 16, now highlighting the Answer Extraction stage]: the ranked passages and the question semantics feed the extraction and ranking of answers using NL techniques. Adapted from slide by Surdeanu and Pasca

  34. Ranking Candidate Answers Q066: Name the first private citizen to fly in space. • Answer type: Person • Text passage: “Among them was Christa McAuliffe, the first private citizen to fly in space. Karen Allen, best known for her starring role in “Raiders of the Lost Ark”, plays McAuliffe. Brian Kerwin is featured as shuttle pilot Mike Smith...” • Best candidate answer: Christa McAuliffe Adapted from slide by Surdeanu and Pasca

  35. Features for Answer Ranking • relNMW: number of question terms matched in the answer passage • relSP: number of question terms matched in the same phrase as the candidate answer • relSS: number of question terms matched in the same sentence as the candidate answer • relFP: flag set to 1 if the candidate answer is followed by a punctuation sign • relOCTW: number of question terms matched, separated from the candidate answer by at most three words and one comma • relSWS: number of terms occurring in the same order in the answer passage as in the question • relDTW: average distance from candidate answer to question term matches SIGIR ‘01 Adapted from slide by Surdeanu and Pasca

  36. Answer Ranking based on Machine Learning • Relative relevance score computed for each pair of candidates (answer windows): relPAIR = wSWS relSWS + wFP relFP + wOCTW relOCTW + wSP relSP + wSS relSS + wNMW relNMW + wDTW relDTW + threshold • If relPAIR is positive, then the first candidate from the pair is more relevant • Perceptron model used to learn the weights • Scores in the 50% MRR range for short answers, in the 60% MRR range for long answers Adapted from slide by Surdeanu and Pasca
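
A compact sketch of this pairwise ranking, assuming each candidate is a dictionary of the seven rel* features from the previous slide and that relPAIR is computed on the difference of the two candidates' feature vectors; the training loop is a plain perceptron applied to each pair in both orientations. This is an illustrative reading of the slide, not the original system's code.

```python
FEATURES = ["relSWS", "relFP", "relOCTW", "relSP", "relSS", "relNMW", "relDTW"]

def rel_pair(weights, threshold, cand_a, cand_b):
    """relPAIR for the ordered pair (a, b): positive means a is more relevant."""
    return sum(weights[f] * (cand_a[f] - cand_b[f]) for f in FEATURES) + threshold

def perceptron_train(pairs, epochs=10, lr=0.1):
    """pairs: list of (better_candidate, worse_candidate) feature dictionaries."""
    weights = {f: 0.0 for f in FEATURES}
    threshold = 0.0
    for _ in range(epochs):
        for better, worse in pairs:
            # Use each pair in both orders so the threshold alone cannot
            # trivially satisfy every training example.
            for a, b, label in ((better, worse, 1.0), (worse, better, -1.0)):
                if label * rel_pair(weights, threshold, a, b) <= 0:   # misranked
                    for f in FEATURES:
                        weights[f] += lr * label * (a[f] - b[f])
                    threshold += lr * label
    return weights, threshold
```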

  37. Evaluation on the Web • test on 350 questions from TREC (Q250-Q600) • extract 250-byte answers Adapted from slide by Surdeanu and Pasca

  38. Can we make this simpler? • One reason systems became so complex is that they have to pick out one sentence within a small collection • The answer is likely to be stated in a hard-to-recognize manner. • Alternative Idea: • What happens with a much larger collection? • The web is so huge that you’re likely to see the answer stated in a form similar to the question Adapted from slides by Manning, Harabagiu, Kushmerick, ISI

  39. Can we make this simpler? • Goal: make the simplest possible QA system by exploiting this redundancy in the web • Use this as a baseline against which to compare more elaborate systems. • The next slides based on: • Web Question Answering: Is More Always Better? Dumais, Banko, Brill, Lin, Ng, SIGIR’02 • An Analysis of the AskMSR Question-Answering System, Brill, Dumais, and Banko, EMNLP’02. Adapted from slides by Manning, Harabagiu, Kushmerick, ISI

  40. AskMSR System Architecture [diagram showing the pipeline as numbered steps 1-5; steps 1-4 (rewrite the question, query the search engine, gather n-grams, filter n-grams) are described on the following slides] Adapted from slides by Manning, Harabagiu, Kushmerick, ISI

  41. Step 1: Rewrite the questions • Intuition: The user’s question is often syntactically quite close to sentences that contain the answer • Where is the Louvre Museum located? • The Louvre Museum is located in Paris • Who created the character of Scrooge? • Charles Dickens created the character of Scrooge. Adapted from slides by Manning, Harabagiu, Kushmerick, ISI

  42. Query rewriting • Classify question into seven categories • Who is/was/are/were…? • When is/did/will/are/were …? • Where is/are/were …? • a. Hand-crafted category-specific transformation rules, e.g.: For Where questions, move ‘is’ to all possible locations and look to the right of the query terms for the answer: “Where is the Louvre Museum located?” → “is the Louvre Museum located”, “the is Louvre Museum located”, “the Louvre is Museum located”, “the Louvre Museum is located”, “the Louvre Museum located is” • b. Expected answer “Datatype” (e.g., Date, Person, Location, …): When was the French Revolution? → DATE • Nonsense, but ok: it’s only a few more queries to the search engine. Adapted from slides by Manning, Harabagiu, Kushmerick, ISI
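
A tiny sketch of rule (a) for this kind of Where question, also attaching the per-rewrite weights discussed on the next slide. The weighting values and the +keyword fallback are illustrative assumptions, not the AskMSR system's actual rule set.

```python
def rewrite_where_is(question):
    """Generate (query, weight) rewrites for questions of the form 'Where is X ...?'."""
    words = question.rstrip("?").split()
    if len(words) < 3 or [w.lower() for w in words[:2]] != ["where", "is"]:
        return []
    rest = words[2:]                          # e.g. ['the', 'Louvre', 'Museum', 'located']
    rewrites = []
    for i in range(len(rest) + 1):            # move 'is' to every possible position
        phrase = " ".join(rest[:i] + ["is"] + rest[i:])
        rewrites.append(('"%s"' % phrase, 5)) # exact-phrase rewrite: high weight
    keywords = " ".join("+" + w for w in rest if w.lower() != "the")
    rewrites.append((keywords, 1))            # bag-of-keywords fallback: low weight
    return rewrites

for query, weight in rewrite_where_is("Where is the Louvre Museum located?"):
    print(weight, query)
# 5 "is the Louvre Museum located"
# 5 "the is Louvre Museum located"
# ... (three more phrase rewrites) ...
# 1 +Louvre +Museum +located
```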

  43. Query Rewriting - weighting • Some query rewrites are more reliable than others. Where is the Louvre Museum located? • Weight 5: +“the Louvre Museum is located” (if a match, probably right) • Weight 1: +Louvre +Museum +located (lots of non-answers could come back too) Adapted from slides by Manning, Harabagiu, Kushmerick, ISI

  44. Step 2: Query search engine • Send all rewrites to a Web search engine • Retrieve top N answers (100-200) • For speed, rely just on search engine’s “snippets”, not the full text of the actual document Adapted from slides by Manning, Harabagiu, Kushmerick, ISI

  45. Step 3: Gathering N-Grams • Enumerate all N-grams (N=1,2,3) in all retrieved snippets • Weight of an n-gram: occurrence count, each weighted by “reliability” (weight) of rewrite rule that fetched the document • Example: “Who created the character of Scrooge?” → Dickens 117, Christmas Carol 78, Charles Dickens 75, Disney 72, Carl Banks 54, A Christmas 41, Christmas Carol 45, Uncle 31 Adapted from slides by Manning, Harabagiu, Kushmerick, ISI
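
A minimal sketch of this step: every unigram, bigram and trigram in each retrieved snippet is credited with the weight of the rewrite that fetched it. The two snippets and their weights below are made up for illustration, and no normalization (case, punctuation) is done.

```python
from collections import Counter

def mine_ngrams(snippets_with_weights, max_n=3):
    """snippets_with_weights: iterable of (snippet_text, rewrite_weight) pairs."""
    scores = Counter()
    for snippet, weight in snippets_with_weights:
        tokens = snippet.split()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                scores[" ".join(tokens[i:i + n])] += weight
    return scores

snippets = [
    ("Charles Dickens created the character of Scrooge", 5),    # from a weight-5 rewrite
    ("Scrooge appears in A Christmas Carol by Charles Dickens", 1),
]
scores = mine_ngrams(snippets)
print(scores["Charles Dickens"], scores["Dickens"], scores["character of Scrooge"])  # 6 6 5
```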

  46. Step 4: Filtering N-Grams • Each question type is associated with one or more “data-type filters” = regular expressions; When…, Where…, What…, Who… questions map to filters such as Date, Location, and Person • Boost score of n-grams that match the regexp • Lower score of n-grams that don’t match the regexp • Details omitted from paper… Adapted from slides by Manning, Harabagiu, Kushmerick, ISI
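
A toy sketch of this filtering step: boost the mined score of n-grams that match the question type's regular expression and penalize the rest. The patterns and the boost/penalty factors are illustrative stand-ins for the details omitted from the paper.

```python
import re

FILTERS = {
    "when": re.compile(r"\b(1[0-9]{3}|20[0-9]{2})\b"),      # year-like token -> Date
    "who": re.compile(r"^[A-Z][a-z]+( [A-Z][a-z]+)*$"),     # capitalized name-like n-gram -> Person
}

def filter_ngrams(scores, question_type, boost=2.0, penalty=0.5):
    """scores: n-gram -> mined score (e.g. the output of the sketch above)."""
    pattern = FILTERS.get(question_type)
    if pattern is None:
        return dict(scores)                  # no filter known for this question type
    return {ngram: score * (boost if pattern.search(ngram) else penalty)
            for ngram, score in scores.items()}

print(filter_ngrams({"Charles Dickens": 75, "christmas carol": 78}, "who"))
# {'Charles Dickens': 150.0, 'christmas carol': 39.0}
```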
