
Probabilistic Structured Query Methods


Presentation Transcript


  1. Probabilistic Structured Query Methods Kareem Darwish, Douglas W. Oard Electrical and Computer Engineering Department, College of Information Studies and UMIACS

  2. Abstract • Structured methods for query term replacement rely on separate estimates of term frequency and document frequency, computed from the set of replacements for each query term. • These statistics are combined to compute a weight for each query term.

  3. Abstract • This paper reviews prior work on structured query techniques and introduces three new variants that leverage estimates of replacement probabilities. • Improvements in retrieval effectiveness are demonstrated for cross-language retrieval and for retrieval based on optical character recognition (OCR) when replacement probabilities are used to estimate both term frequency and document frequency.

  4. Introduction • There are many situations in which it’s desirable to match a query term with different terms in a document, such as stemming, thesaurus expansion and cross-language retrieval. • When the mappings among matching terms are known in advance, the usual approach is to conflate the alternatives during indexing.

  5. Introduction • Query-time implementations are necessary when appropriate matching decisions depend on the nature of the query. • Here, presently known techniques for query-time replacement are reviewed, new techniques that leverage estimates of replacement probabilities are introduced, and experimental results that demonstrate improved retrieval effectiveness in two applications are presented.

  6. Introduction • CLIR has received more attention than any other query-time replacement problem in recent years. • Query translation research has developed along two broad directions: “dictionary-based” and “corpus-based” techniques. • TF is a measure of aboutness, which has beneficial effects on both precision and recall. • DF is a measure of specificity, and its principal effect is on precision.

  7. Replacement Techniques • Pirkola appears to have been the first to try separately estimating TF and DF for query terms in CLIR, using the InQuery synonym operator to implement what he called “structured queries”. • InQuery’s synonym operator was originally designed to support monolingual thesaurus expansion, so it estimates TF and DF as follows:
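The formulas from the original slide are not reproduced in this transcript. A reconstruction in the notation defined on the next slide, where T(Qi) is the set of known replacements of query term Qi, would be:

TF_j(Q_i) = \sum_{D_k \in T(Q_i)} TF_j(D_k)

DF(Q_i) = \left| \bigcup_{D_k \in T(Q_i)} \{ d_j : D_k \in d_j \} \right|

That is, the term frequencies of all replacements are summed within each document, while the document frequency is the number of documents that contain at least one replacement.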

  8. Replacement Techniques • where Qi is a query term, Dk is a document term, TFj(Qi) is the term frequency of Qi in document j, and T(Qi) is the set of known replacements (in CLIR, translations) for the term Qi. • This represents a very cautious strategy in which a high DF for any replacement will result in a high “joint DF” for that query term.

  9. Replacement Techniques • Kwok was the first to introduce a variant of Pirkola’s method: • Another alternative, not previously explored, would be to use the maximum document frequency of any replacement (MDF):
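Neither formula survives in the transcript. Reconstructed from the surrounding description, with both variants keeping Pirkola’s TF estimate and changing only the DF estimate, they would be:

DF_{Kwok}(Q_i) = \sum_{D_k \in T(Q_i)} DF(D_k)

DF_{MDF}(Q_i) = \max_{D_k \in T(Q_i)} DF(D_k)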

  10. Replacement Techniques • All three techniques treat every known replacement as equally likely. • This risks a somewhat counterintuitive result: introducing a translation dictionary with improved coverage of rare translations could actually harm retrieval effectiveness. • This situation arises often with dictionaries built from aligned corpora using statistical methods.

  11. Replacement Techniques • One way to address this problem would be to use a weighted variant of Kwok’s method: • For the experiments reported below, the weight is set to the best available estimate of the replacement probabilities.
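The weighted formulas are likewise missing from the transcript. Writing w_k for the estimated replacement probability p(D_k | Q_i), a reconstruction consistent with the description is:

TF_j(Q_i) = \sum_{D_k \in T(Q_i)} w_k \, TF_j(D_k)

DF(Q_i) = \sum_{D_k \in T(Q_i)} w_k \, DF(D_k)

The combinations listed on slide 13 presumably mix weighted and unweighted estimates; for example, the WTF/DF combination named in the conclusion would pair the weighted TF with an unweighted DF.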

  12. Replacement Techniques • Another way of leveraging information about replacement probabilities would be to simply ignore the least likely replacements. • For the experiments reported below, a greedy method was used, with replacements retained in order of decreasing probability until a preset threshold on the cumulative probability was first exceeded.
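As an illustration of that greedy pruning step, here is a minimal Python sketch; the function name, its arguments, and the 0.9 threshold are assumptions for illustration, not values taken from the paper:

```python
def prune_replacements(replacements, cumulative_threshold=0.9):
    """Keep replacements in decreasing probability order until the
    cumulative probability first exceeds the threshold.

    replacements: dict mapping each candidate replacement term to its
    estimated replacement probability (assumed to sum to at most 1.0).
    """
    kept = {}
    cumulative = 0.0
    for term, prob in sorted(replacements.items(), key=lambda kv: kv[1], reverse=True):
        kept[term] = prob
        cumulative += prob
        if cumulative >= cumulative_threshold:
            break  # threshold first exceeded: drop the remaining rare replacements
    return kept

# Example: the rarest candidate translation is dropped.
print(prune_replacements({"bank": 0.55, "shore": 0.30, "bench": 0.10, "pew": 0.05}))
# {'bank': 0.55, 'shore': 0.30, 'bench': 0.10}
```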

  13. Replacement Techniques • The following combinations were tried:

  14. CLIR • The experiments use the TREC 2002 CLIR track collection, which contains 383,872 articles from the Agence France Presse (AFP) Arabic newswire, 50 topic descriptions written in English, and associated relevance judgments. • Five translation resources of three types were combined for this application.

  15. CLIR • The resources were: • Two bilingual term lists that were constructed using two Web-based machine translation systems (Tarjim and Al-Misbar). The two lists covered about 15% of the unique Arabic stems in the TREC collection. • The Salmone Arabic-to-English dictionary, from which we extracted only the translations. The coverage was about 7%. • Two translation probability tables, one for English-to-Arabic and one for Arabic-to-English. The coverage was 29%.

  16. CLIR • These translation resources were combined in the following manner: • All resources that were originally provided as Arabic-to-English were inverted. This process likely introduced some error, particularly when inverting the translation probability table. • A uniform distribution was used to assign probabilities to the translations obtained from the machine translation systems and the Salmone dictionary. • A uniform distribution was then assumed over the translation resources containing each English term.
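A minimal Python sketch of this combination step follows; the data layout and function names are assumptions for illustration, and inversion of the Arabic-to-English resources is taken to have happened beforehand:

```python
from collections import defaultdict

def uniform(translations):
    """Assign a uniform distribution to translations from a resource
    (term list or dictionary) that provides no probabilities of its own."""
    return {t: 1.0 / len(translations) for t in translations}

def combine_resources(resources):
    """Merge several English-to-Arabic resources, each a dict mapping an
    English term to {arabic_term: probability}, assuming a uniform
    distribution over the resources that contain each English term."""
    combined = {}
    all_terms = {term for resource in resources for term in resource}
    for english in all_terms:
        present = [resource[english] for resource in resources if english in resource]
        weight = 1.0 / len(present)            # uniform over resources
        merged = defaultdict(float)
        for translations in present:
            for arabic, prob in translations.items():
                merged[arabic] += weight * prob
        combined[english] = dict(merged)
    return combined
```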

  17. CLIR Results • Baseline: one-best query translation

  18. CLIR Results

  19. OCR-Based Retrieval • Previous approaches to the OCR-based retrieval problem have focused primarily on correcting OCR errors or on fuzzy matching techniques. • The experiments use the Zad collection, which was developed at the University of Maryland. • The collection consists of 2,730 documents extracted from Zad AlMe’ad, a printed book for which an accurately character-coded electronic version (the “clean text”) is also available.

  20. OCR-Based Retrieval • The test set includes 25 written topic descriptions. • Term replacement probabilities were estimated using a position-sensitive unigram character distortion model trained on 5,000 words of automatically aligned clean and OCR-degraded text from the Zad collection.

  21. OCR-Based Retrieval • Given a clean word with characters C1..Ci..Cn and the corresponding word after OCR degradation with characters D1..Dj..Dm, three edit-operation probabilities are modeled after alignment:
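The slide’s equations are not in the transcript. A plausible reconstruction of the three edit-operation probabilities, estimated from counts over the aligned training text (the normalization shown for insertions is an assumption), is:

P(C_i \to D_j) = count(C_i \to D_j) / count(C_i)   (substitution, including the identity case D_j = C_i)

P(C_i \to \varepsilon) = count(C_i \to \varepsilon) / count(C_i)   (deletion)

P(\varepsilon \to D_j) = count(\varepsilon \to D_j) / count(alignment positions)   (insertion)

In the position-sensitive variant described on the previous slide, these counts would additionally be conditioned on the character’s position within the word.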

  22. OCR-Based Retrieval • The clean text and the OCR-degraded text are aligned by SCLITE (using a dynamic programming string alignment algorithm). • The alignment is then back-traced to identify the three kinds of edit operations. • The resulting counts are used to estimate replacement probabilities.
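A minimal Python sketch of that estimation step, assuming the SCLITE alignment has already been converted to a list of (clean_char, ocr_char) pairs in which None marks a deletion or insertion (all names are illustrative):

```python
from collections import Counter

def estimate_edit_probabilities(aligned_pairs):
    """Estimate substitution/match, deletion and insertion probabilities
    from character-aligned (clean_char, ocr_char) pairs.

    (c, d) is a substitution (or a match when c == d), (c, None) is a
    deletion, and (None, d) is an insertion.
    """
    pair_counts = Counter()    # (clean, ocr) -> count; ocr is None for deletions
    clean_counts = Counter()   # clean -> count (denominator for substitution/deletion)
    insert_counts = Counter()  # ocr -> count of insertions
    total = len(aligned_pairs)

    for clean, ocr in aligned_pairs:
        if clean is not None:
            clean_counts[clean] += 1
            pair_counts[(clean, ocr)] += 1
        else:
            insert_counts[ocr] += 1

    p_sub_del = {pair: n / clean_counts[pair[0]] for pair, n in pair_counts.items()}
    p_insert = {d: n / total for d, n in insert_counts.items()}
    return p_sub_del, p_insert
```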

  23. OCR Results

  24. OCR Results

  25. Conclusion • This paper introduced a family of methods for query term replacement that exploit estimates of replacement probabilities. • Inclusion of rare translations in a CLIR application was shown to be problematic for all three methods. • Of the three probabilistic structured query methods, WTF/DF was the winner, yielding both the greatest retrieval effectiveness and the least sensitivity to threshold tuning.

  26. Future Work • Term weight tuning • Other applications • Structured document indexing (e.g. translation based indexing)
