
Lecture 15: Text Databases & Information Retrieval: Part I


Presentation Transcript


  1. Lecture 15: Text Databases & Information Retrieval: Part I Oct. 18, 2006 ChengXiang Zhai

  2. The Special Role of Textual Information • The most natural way of encoding knowledge • Think about scientific literature • The most common type of information • How much textual information do you produce and consume every day? • The most basic form of information • It can be used to describe other media of information • Most natural to query

  3. What is Text Retrieval (TR)? • There exists a collection of text documents • User gives a query to express the information need • A retrieval system returns relevant documents to users • More often called “information retrieval” or IR • But IR tends to be much broader • May include non-textual information • May include text categorization or summarization…

  4. TR vs. Database Retrieval • Information • Unstructured/free text vs. structured data • Ambiguous vs. well-defined semantics • Query • Ambiguous vs. well-defined semantics • Incomplete vs. complete specification • Answers • Relevant documents vs. matched records • TR is an empirically defined problem

  5. History of TR on One Slide • Birth of TR • 1945: V. Bush’s article “As We May Think” • 1957: H. P. Luhn’s idea of word counting and matching • Indexing & Evaluation Methodology (1960’s) • SMART system (G. Salton’s group) • Cranfield test collection (C. Cleverdon’s group) • Indexing: automatic can be as good as manual • TR Models (1970’s & 1980’s) … • Large-scale Evaluation & Applications (1990’s) • TREC (D. Harman & E. Voorhees, NIST) • Google

  6. Formal Formulation of TR • Vocabulary V = {w1, w2, …, wN} of the language • Query q = q1,…,qm, where qi ∈ V • Document di = di1,…,dimi (a sequence of mi terms), where dij ∈ V • Collection C = {d1, …, dk} • Set of relevant documents R(q) ⊆ C • Generally unknown and user-dependent • Query is a “hint” on which docs are in R(q) • Task = compute R’(q), an “approximate R(q)”

  7. Computing R(q) • Strategy 1: Document selection • R(q) = {d ∈ C | f(d,q) = 1}, where f(d,q) ∈ {0,1} is an indicator function or classifier • System must decide if a doc is relevant or not (“absolute relevance”) • Strategy 2: Document ranking • R(q) = {d ∈ C | f(d,q) > θ}, where f(d,q) is a real-valued relevance measure function and θ is a cutoff • System must decide if one doc is more likely to be relevant than another (“relative relevance”)
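A minimal sketch (not from the slides) of the two strategies in Python; the toy word-overlap scorer and the cutoff value are placeholders standing in for a real relevance function f(d,q):

```python
from typing import Callable, List

# Strategy 1: document selection -- f(d,q) is a binary classifier in {0,1}
def select(docs: List[str], query: str,
           classify: Callable[[str, str], int]) -> List[str]:
    return [d for d in docs if classify(d, query) == 1]

# Strategy 2: document ranking -- f(d,q) is a real-valued relevance measure,
# theta is the cutoff, and the user scans the list top-down
def rank(docs: List[str], query: str,
         score: Callable[[str, str], float], theta: float = 0.0) -> List[str]:
    scored = sorted(((score(d, query), d) for d in docs), reverse=True)
    return [d for s, d in scored if s > theta]

# Toy relevance function: word overlap with the query (purely illustrative)
def overlap(d: str, q: str) -> float:
    return float(len(set(d.lower().split()) & set(q.lower().split())))

docs = ["information retrieval search engine information",
        "travel information map travel",
        "government president congress"]
print(rank(docs, "information retrieval", overlap))
```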

  8. Document Selection vs. Ranking [Figure: the same collection of + (relevant) and - (non-relevant) documents handled two ways. Doc selection: a binary f(d,q) ∈ {0,1} carves out R’(q) as a set, which only approximates the true R(q). Doc ranking: a real-valued f(d,q) orders the documents (e.g., d1 0.98 +, d2 0.95 +, d3 0.83 -, d4 0.80 +, d5 0.76 -, …) and the user decides where to stop reading.]

  9. Problems of Doc Selection • The classifier is unlikely to be accurate • “Over-constrained” query (terms are too specific): no relevant documents found • “Under-constrained” query (terms are too general): over-delivery • It is extremely hard to find the right position between these two extremes • Even if the classifier is accurate, not all relevant documents are equally relevant

  10. Ranking is often preferred • Relevance is a matter of degree • A user can stop browsing anywhere, so the boundary is controlled by the user • High recall users would view more items • High precision users would view only a few • Theoretical justification: Probability Ranking Principle [Robertson 77]

  11. Probability Ranking Principle [Robertson 77] • As stated by Cooper: “If a reference retrieval system’s response to each request is a ranking of the documents in the collections in order of decreasing probability of usefulness to the user who submitted the request, where the probabilities are estimated as accurately as possible on the basis of whatever data is made available to the system for this purpose, then the overall effectiveness of the system to its users will be the best that is obtainable on the basis of that data.” • Robertson provides two formal justifications • Assumptions: independent relevance and sequential browsing

  12. According to the PRP, all we need is a relevance measure function f which satisfies: for all q, d1, d2, f(q,d1) > f(q,d2) iff p(Rel|q,d1) > p(Rel|q,d2). Most IR research is centered on finding a good f…

  13. The Notion of Relevance [Figure: a map of how “relevance” has been modeled]
  • Relevance as similarity of (Rep(q), Rep(d)): different representations & similarity measures • Vector space model (Salton et al., 75) • Prob. distr. model (Wong & Yao, 89)
  • Relevance as P(r=1|q,d), r ∈ {0,1}: probability of relevance • Regression model (Fox 83) • Generative model: doc generation gives the classical prob. model (Robertson & Sparck Jones, 76); query generation gives the LM approach (Ponte & Croft, 98; Lafferty & Zhai, 01a)
  • Relevance as P(d→q) or P(q→d): probabilistic inference, with different inference systems • Inference network model (Turtle & Croft, 91) • Prob. concept space model (Wong & Yao, 95)

  14. Model 1: Relevance = Similarity • Assumptions • Query and document are represented similarly • A query can be regarded as a “document” • Relevance(d,q) ∝ similarity(d,q) • R(q) = {d ∈ C | f(d,q) > θ}, where f(q,d) = similarity(Rep(q), Rep(d)) • Key issues • How to represent the query/document? • How to define the similarity measure?

  15. Vector Space Model • Represent a doc/query by a term vector • Term: basic concept, e.g., word or phrase • Each term defines one dimension • N terms define a high-dimensional space • Element of vector corresponds to term weight • E.g., d=(x1,…,xN), xi is “importance” of term i • Measure relevance by the distance between the query vector and document vector in the vector space
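As a small illustration (not from the slides), a document or query can be stored as a sparse term vector keyed by its words; here the element values are just raw counts, with better weighting schemes discussed on the following slides:

```python
from collections import Counter
from typing import Dict

def term_vector(text: str) -> Dict[str, int]:
    """Represent a doc/query as a sparse term vector (term -> raw count)."""
    return dict(Counter(text.lower().split()))

doc = term_vector("information retrieval search engine information")
query = term_vector("information retrieval")
print(doc)    # {'information': 2, 'retrieval': 1, 'search': 1, 'engine': 1}
```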

  16. VS Model: Illustration [Figure: documents D1-D11 and the query plotted in a term space with axes “Java”, “Starbucks”, and “Microsoft”; documents close to the query vector are candidate answers, with question marks indicating uncertainty about which ones are truly relevant.]

  17. What the VS model doesn’t say • How to define/select the “basic concept” • Concepts are assumed to be orthogonal • How to assign weights • Weight in query indicates importance of term • Weight in doc indicates how well the term characterizes the doc • How to define the similarity/distance measure

  18. What’s a good “basic concept”? • Orthogonal • Linearly independent basis vectors • “Non-overlapping” in meaning • No ambiguity • Weights can be assigned automatically and hopefully accurately • Many possibilities: Words, stemmed words, phrases, “latent concept”, …

  19. How to Assign Weights? • Very very important! • Why weighting • Query side: Not all terms are equally important • Doc side: Some terms carry more content • How? • Two basic heuristics • TF (Term Frequency) = Within-doc-frequency • IDF (Inverse Document Frequency) • TF normalization

  20. TF Weighting • Idea: A term is more important if it occurs more frequently in a document • Formulas: Let f(t,d) be the frequency count of term t in doc d • Raw TF: TF(t,d) = f(t,d) • Log TF: TF(t,d) = 1 + ln(1 + ln(f(t,d))) • “Okapi/BM25 TF”: TF(t,d) = k*f(t,d) / (f(t,d) + k*(1 - b + b*doclen/avgdoclen)), where k and b are parameters • Normalization of TF is very important!
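A small sketch of these TF variants; the parameter values k = 1.2 and b = 0.75 are common illustrative defaults, not values given on the slide:

```python
import math

def raw_tf(f: int) -> float:
    return float(f)

def log_tf(f: int) -> float:
    # Double-log dampening: repeated occurrences add less and less weight
    return 1 + math.log(1 + math.log(f)) if f > 0 else 0.0

def bm25_tf(f: int, doclen: float, avgdoclen: float,
            k: float = 1.2, b: float = 0.75) -> float:
    # Okapi/BM25-style TF with pivoted document-length normalization;
    # the value saturates below k no matter how often the term occurs
    return k * f / (f + k * (1 - b + b * doclen / avgdoclen))

for f in (1, 2, 5, 20):
    print(f, raw_tf(f), round(log_tf(f), 3), round(bm25_tf(f, 100, 100), 3))
```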

  21. TF Normalization • Why? • Document length variation • “Repeated occurrences” are less informative than the “first occurrence” • Two views of document length • A doc is long because it uses more words • A doc is long because it has more content • Generally penalize long docs, but avoid over-penalizing (pivoted normalization)

  22. TF Normalization (cont.) [Figure: normalized TF plotted against raw TF for different normalization settings] • “Pivoted normalization”: use the average doc length to regularize normalization, dividing by 1 - b + b*doclen/avgdoclen, where b varies from 0 to 1 • Warning: Normalization may be affected by the similarity measure

  23. IDF Weighting • Idea: A term is more discriminative if it occurs in fewer documents • Formula: IDF(t) = 1 + log(n/k), where n is the total number of docs and k is the number of docs containing term t (doc frequency)

  24. TF-IDF Weighting • TF-IDF weighting: weight(t,d) = TF(t,d)*IDF(t) • Common in doc → high TF → high weight • Rare in collection → high IDF → high weight • Imagine a word count profile; what kind of terms would have high weights?
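Combining the two heuristics, a minimal sketch of weight(t,d) = TF(t,d)*IDF(t) with raw TF; the collection size and document frequencies below are made up for illustration, and natural log is assumed:

```python
import math
from collections import Counter
from typing import Dict

def tf_idf_vector(doc: str, doc_freq: Dict[str, int], n_docs: int) -> Dict[str, float]:
    """weight(t,d) = TF(t,d) * IDF(t), with raw TF and IDF(t) = 1 + log(n/k)."""
    counts = Counter(doc.lower().split())
    # Unseen terms fall back to doc frequency 1 to avoid division by zero
    return {t: f * (1 + math.log(n_docs / doc_freq.get(t, 1)))
            for t, f in counts.items()}

# Hypothetical document frequencies for a 1000-doc collection
df = {"information": 500, "retrieval": 100, "the": 990}
print(tf_idf_vector("the information retrieval information", df, 1000))
```

Even with only one occurrence, the rare term "retrieval" ends up weighted nearly as heavily as the twice-occurring but common "information", while the near-ubiquitous "the" gets almost no weight.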

  25. How to Measure Similarity?

  26. VS Example: Raw TF & Dot Product • query = “information retrieval” • doc1 = “information retrieval search engine information” • doc2 = “travel information map travel” • doc3 = “government president congress” • Entries below are raw TF, with the TF*IDF weight in parentheses:

  Term         info     retrieval  travel   map      search   engine   govern   president  congress
  IDF (faked)  2.4      4.5        2.8      3.3      2.1      5.4      2.2      3.2        4.3
  doc1         2 (4.8)  1 (4.5)                      1 (2.1)  1 (5.4)
  doc2         1 (2.4)             2 (5.6)  1 (3.3)
  doc3                                                                 1 (2.2)  1 (3.2)    1 (4.3)
  query        1 (2.4)  1 (4.5)

  • Sim(q,doc1) = 4.8*2.4 + 4.5*4.5 • Sim(q,doc2) = 2.4*2.4 • Sim(q,doc3) = 0
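A short sketch that reproduces the slide’s numbers, using raw TF times the faked IDF values and a dot product:

```python
from collections import Counter

idf = {"information": 2.4, "retrieval": 4.5, "travel": 2.8, "map": 3.3,
       "search": 2.1, "engine": 5.4, "government": 2.2, "president": 3.2,
       "congress": 4.3}

def weights(text):
    # Raw TF times the (faked) IDF for each term in the text
    return {t: f * idf[t] for t, f in Counter(text.split()).items()}

def dot(u, v):
    return sum(u[t] * v.get(t, 0.0) for t in u)

q = weights("information retrieval")
for d in ["information retrieval search engine information",
          "travel information map travel",
          "government president congress"]:
    print(round(dot(q, weights(d)), 2))   # 31.77, 5.76, 0.0
```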

  27. What Works the Best? • Use single words • Use statistical phrases • Remove stop words • Stemming • Others(?) [Figure: relative error of each technique (Singhal 2001)]

  28. IR Evaluation: Criteria • Effectiveness/Accuracy • Precision, Recall • Efficiency • Space and time complexity • Usability • How useful for real user tasks?

  29. Methodology: Cranfield Tradition • Laboratory testing of system components • Precision, Recall • Comparative testing • Test collections • Set of documents • Set of questions • Relevance judgments

  30. The Contingency Table (rows: the doc’s true status; columns: the system’s action)

                 Retrieved              Not Retrieved
  Relevant       Relevant Retrieved     Relevant Rejected
  Not relevant   Irrelevant Retrieved   Irrelevant Rejected
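From these four cells, precision and recall (the effectiveness criteria from slide 28) follow directly; a minimal sketch with hypothetical counts:

```python
def precision(rel_retrieved: int, irrel_retrieved: int) -> float:
    # Fraction of retrieved documents that are relevant
    return rel_retrieved / (rel_retrieved + irrel_retrieved)

def recall(rel_retrieved: int, rel_rejected: int) -> float:
    # Fraction of relevant documents that are retrieved
    return rel_retrieved / (rel_retrieved + rel_rejected)

# Hypothetical counts: 30 relevant retrieved, 20 irrelevant retrieved,
# 70 relevant rejected
print(precision(30, 20), recall(30, 70))   # 0.6 0.3
```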

  31. How to measure a ranking? • Compute the precision at every recall point • Plot a precision-recall (PR) curve [Figure: two PR curves (precision vs. recall); which is better?]

  32. Summarize a Ranking • Given that n docs are retrieved • Compute the precision at the rank where each (new) relevant document is retrieved • If a relevant document never gets retrieved, we assume its precision = 0 • Compute the average over all the relevant documents • This gives us (non-interpolated) average precision, which captures both precision and recall and is sensitive to the rank of each relevant document • Mean Average Precision (MAP) • MAP = arithmetic mean of average precision over a set of topics • gMAP = geometric mean of average precision over a set of topics (more affected by difficult topics)

  33. Precision-Recall Curve [Figure: precision-recall curve for one run; out of 4728 relevant docs, the system retrieved 3212; marked on the curve are Precision@10docs (about 5.5 of the top 10 docs are relevant), the breakeven point (precision = recall), and the Mean Avg. Precision (MAP)] • Worked example: ranked list D1 +, D2 +, D3 -, D4 -, D5 +, D6 -; total # rel docs = 4; system returns 6 docs; Average Prec = (1/1 + 2/2 + 3/5 + 0)/4
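A minimal sketch of non-interpolated average precision that reproduces the worked example; the MAP helper simply averages AP over a hypothetical set of topics:

```python
def average_precision(ranking, total_relevant):
    """Non-interpolated AP: precision at each relevant doc's rank,
    averaged over all relevant docs (missed relevant docs count as 0)."""
    hits, prec_sum = 0, 0.0
    for rank, is_rel in enumerate(ranking, start=1):
        if is_rel:
            hits += 1
            prec_sum += hits / rank
    return prec_sum / total_relevant

ranking = [True, True, False, False, True, False]   # D1 + D2 + D3 - D4 - D5 + D6 -
print(average_precision(ranking, 4))                # (1/1 + 2/2 + 3/5 + 0)/4 = 0.65

def mean_average_precision(runs):
    # Arithmetic mean of AP over a set of topics; each run is (ranking, #rel)
    return sum(average_precision(r, n) for r, n in runs) / len(runs)
```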

  34. What You Should Know • Difference and similarity between text retrieval and database query • Why ranking is often preferred • How the vector space model works • How TF-IDF weighting works • How to evaluate an IR system
