
A Machine Learning Approach for Improved BM25 Retrieval

Krysta M. Svore, Microsoft Research, One Microsoft Way, Redmond, WA 98052, ksvore@microsoft.com. Christopher J. C. Burges, Microsoft Research, One Microsoft Way, Redmond, WA 98052, cburges@microsoft.com.



Presentation Transcript


  1. A Machine Learning Approach for Improved BM25 Retrieval Krysta M. Svore Microsoft Research One Microsoft Way Redmond, WA 98052 ksvore@microsoft.com Christopher J. C. Burges Microsoft Research One Microsoft Way Redmond, WA 98052 cburges@microsoft.com CIKM’09, November 2–6, 2009, Hong Kong, China. Presenter: SHIH KAI-WUN

  2. Outline • 1. Introduction • 2. Related Work • 3. Document Fields • 4. BM25 • 5. Learning A BM25-Style Function • 6. Data And Evaluation • 7. Experiments And Results • 8. Conclusions And Future Work

  3. Introduction (1/2) • BM25 is arguably one of the most important and widely used information retrieval functions. BM25F is an extension of BM25 that prescribes how to compute BM25 across a document description over several fields. • Recently, it has been shown that LambdaRank is empirically optimal for NDCG and other IR measures.

  4. Introduction (2/2) • Our primary contributions are threefold: • We empirically determine the effectiveness of BM25 for different fields. • We develop a data-driven machine learning model called LambdaBM25 that is based on the attributes of BM25 and the training method of LambdaRank. • We extend our empirical analysis to a document description over various field combinations.

  5. Related Work • A drawback of BM25 and BM25F is the difficulty in optimizing the function parameters for a given information retrieval measure. • There have been extensive studies on how to set term frequency saturation and length normalization parameters. • Our work provides both an extensive study of the contributions of different document fields to information retrieval and a framework for improving BM25-style retrieval.

  6. Document Fields (1/2) • A Web document description is composed of several fields of information. • The document frequency for term t is the number of documents in the collection that contain term t in their document descriptions. • Term frequency is calculated per term and per field by counting the number of occurrences of term t in field F of the document. Field length is the number of terms in the field.
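These three field statistics can be sketched in a few lines; the toy documents and field names below are hypothetical, for illustration only:

```python
from collections import Counter

# Hypothetical toy documents, each described by named fields.
docs = [
    {"title": "bm25 retrieval", "body": "bm25 is a ranking function for retrieval"},
    {"title": "neural ranking", "body": "learning to rank with neural nets"},
]

def term_frequency(doc, term, field):
    """Occurrences of `term` in one field of a document description."""
    return Counter(doc[field].split())[term]

def field_length(doc, field):
    """Number of terms in the field."""
    return len(doc[field].split())

def document_frequency(docs, term):
    """Number of documents whose description (any field) contains `term`."""
    return sum(any(term in d[f].split() for f in d) for d in docs)

print(term_frequency(docs[0], "bm25", "body"))  # 1
print(field_length(docs[0], "title"))           # 2
print(document_frequency(docs, "bm25"))         # 1
```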

  7. Document Fields (2/2) • The query click field is built from query session data extracted from one year of a commercial search engine’s query log files. It is represented by a set of query-score pairs (q, Score(d, q)), where q is a unique query string and Score(d, q) is derived from the raw session click data. • The term frequency of term t for the query click field is calculated over the set of query-score pairs for document d.

  8. BM25 (1/2) • BM25 is a function of term frequencies, document frequencies, and the field length for a single field. BM25F is an extension of BM25 to a document description over multiple fields; we refer to both functions as BM25F. • BM25F is computed for document d with description over fields F and query q as BM25F(d, q) = \sum_{t \in q} TF_t \cdot IDF_t, where the sum is over all terms t in query q, and IDF_t is the Robertson-Sparck-Jones inverse document frequency of term t: IDF_t = \log \frac{N - n_t + 0.5}{n_t + 0.5}, where N is the number of documents in the collection and n_t is the document frequency of term t.

  9. BM25 (2/2) • We calculate document frequency over the body field for all document frequency attributes. TF_t is a term frequency saturation formula: TF_t = \frac{\bar{f}_t}{k + \bar{f}_t}, where \bar{f}_t is calculated as \bar{f}_t = \sum_F \frac{w_F \cdot tf_{t,F}}{\beta_F}, and \beta_F accounts for varying field lengths: \beta_F = (1 - b_F) + b_F \cdot \frac{len_F}{avg\_len_F}. • BM25F requires the tuning of 2K + 1 parameters when calculated across K fields, namely k, b_F, and w_F.
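The formulas above can be combined into a short sketch of BM25F scoring. The parameter defaults, field names, and toy document below are illustrative assumptions, not the paper's tuned values:

```python
import math

def bm25f(query, doc, stats, k=1.9, w=None, b=None):
    """Sketch of BM25F under the standard formulation.

    doc:   {field: list of terms}
    stats: {"N": collection size, "df": {term: document frequency},
            "avg_len": {field: average field length}}
    """
    w = w or {f: 1.0 for f in doc}    # per-field weights w_F
    b = b or {f: 0.75 for f in doc}   # per-field length normalization b_F
    score = 0.0
    for t in query:
        # Weighted, length-normalized term frequency across fields.
        f_bar = 0.0
        for field, terms in doc.items():
            beta = (1 - b[field]) + b[field] * (len(terms) / stats["avg_len"][field])
            f_bar += w[field] * terms.count(t) / beta
        tf = f_bar / (k + f_bar)      # term frequency saturation
        n_t = stats["df"].get(t, 0)
        idf = math.log((stats["N"] - n_t + 0.5) / (n_t + 0.5))
        score += tf * idf             # Robertson-Sparck-Jones IDF weighting
    return score

# Toy example: two fields, illustrative statistics.
doc = {"title": ["bm25", "retrieval"], "body": ["bm25", "ranking"]}
stats = {"N": 100, "df": {"bm25": 5}, "avg_len": {"title": 2.0, "body": 2.0}}
print(bm25f(["bm25"], doc, stats))
```

A term absent from every field contributes zero, since its saturated term frequency is zero regardless of its IDF.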

  10. Learning A BM25-Style Function (1/2) • We now describe our simple machine learning ranking model that uses the input attributes of BM25F and the training method of LambdaRank. • LambdaRank is trained on pairs of documents per query, where the documents in a pair have different relevance labels. • LambdaRank leverages the fact that neural net training only needs the gradients of the measure with respect to the model scores, and not the function itself, thus avoiding the problem of direct optimization.

  11. Learning A BM25-Style Function (2/2) • We directly address these challenges by introducing a new machine learning approach to BM25-like retrieval called LambdaBM25. • LambdaBM25 has the flexibility to learn complex relationships between attributes directly from the data. • We develop our model as follows. We train our model using LambdaRank and the same input attributes as BM25. • We train single- and two-layer LambdaRank neural nets to optimize for NDCG, with varying numbers of hidden nodes chosen using the validation set.
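A minimal sketch of the pairwise gradients ("lambdas") that drive LambdaRank training, assuming the standard NDCG-swap weighting. This is a simplified, unnormalized illustration of the idea, not the paper's production trainer:

```python
import math

def lambda_gradients(scores, labels, sigma=1.0):
    """For one query: each document pair with different labels contributes
    a gradient on the model scores, scaled by the NDCG change that swapping
    the pair in the current ranking would produce."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: -scores[i])  # current ranking
    rank = {doc: r for r, doc in enumerate(order)}      # 0-based ranks

    def gain(label):  return 2 ** label - 1
    def disc(r):      return 1.0 / math.log2(r + 2)

    lambdas = [0.0] * n
    for i in range(n):
        for j in range(n):
            if labels[i] <= labels[j]:
                continue  # only pairs where i should rank above j
            # |delta NDCG| from swapping i and j (ideal DCG omitted here)
            delta = abs((gain(labels[i]) - gain(labels[j]))
                        * (disc(rank[i]) - disc(rank[j])))
            rho = 1.0 / (1.0 + math.exp(sigma * (scores[i] - scores[j])))
            lambdas[i] += sigma * rho * delta   # push i up
            lambdas[j] -= sigma * rho * delta   # push j down
    return lambdas

# A relevant document (label 2) currently scored below an irrelevant one
# receives a positive lambda, i.e. a force pushing it up the ranking.
print(lambda_gradients([0.1, 0.9], [2, 0]))
```

This is the sense in which the training only needs gradients of the measure with respect to the scores: the lambdas play the role of gradients, and the IR measure itself is never differentiated.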

  12. Data And Evaluation • Our train/validation/test data contains 67683/11911/12185 queries, respectively. • Each query-URL pair has a human-generated label between 0, meaning d is not relevant to q, and 4, meaning document d is the most relevant to query q. • We evaluate using Normalized Discounted Cumulative Gain (NDCG) at truncation levels 1, 3, and 10. Mean NDCG is defined for query q as NDCG@L(q) = \frac{1}{Z} \sum_{r=1}^{L} \frac{2^{l(r)} - 1}{\log(1 + r)}, where l(r) is the label of the document at rank r and Z normalizes the measure so that a perfect ranking scores 1; mean NDCG averages this over all queries.
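NDCG at a truncation level can be sketched in a few lines (labels 0-4, higher means more relevant; the example ranking is made up):

```python
import math

def ndcg_at(labels_in_ranked_order, L):
    """NDCG@L: gain-discounted sum over the top L results,
    normalized by the DCG of the ideal (label-sorted) ordering."""
    def dcg(labels):
        return sum((2 ** l - 1) / math.log2(r + 2)
                   for r, l in enumerate(labels[:L]))
    ideal = dcg(sorted(labels_in_ranked_order, reverse=True))
    return dcg(labels_in_ranked_order) / ideal if ideal > 0 else 0.0

print(ndcg_at([4, 3, 0], 3))  # perfect ordering -> 1.0
print(ndcg_at([0, 3, 4], 3))  # misordered -> below 1.0
```

A query with no relevant documents has an ideal DCG of zero; the convention here is to score it 0.0 rather than divide by zero.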

  13. Experiments And Results (1/3) • In the upper first three columns of Table 1, we find that Title (T), URL (U), and Body (B) are equally effective, and that the popularity fields achieve higher NDCG. In particular, the query click field achieves the highest NDCG accuracy. • Table 1 contains results for single-layer LambdaBM25F. Table 1: Mean NDCG accuracy results on the test set for BM25F, 1-layer LambdaBM25F, and 2-layer LambdaBM25F for single fields and multiple field combinations. Statistical significance is determined at the 95% confidence level using a t-test. Bold indicates statistical significance over the corresponding BM25F model. Italic indicates statistical significance of the corresponding BM25F model over the LambdaBM25F model. Parentheses indicate no statistically significant difference.

  14. Experiments And Results (2/3) • We train our two-layer LambdaBM25F model and determine whether it can outperform BM25F. We find the following numbers of hidden nodes to be best: Title (10), URL (15), Body (15), Anchor (15), Click (5). • The lower first three columns of Table 1 list the results of BM25F on various field combinations.

  15. Experiments And Results (3/3) • As shown in the lower middle columns of Table 1, we find that BM25F performs as well as or better than single-layer LambdaBM25F for all field combinations. • Finally, we train our two-layer LambdaBM25F models using 15 hidden nodes. • For every field combination, as shown in Table 1, LambdaBM25F achieves gains with statistical significance over the corresponding BM25F model.

  16. Conclusions And Future Work • Our main contribution is a new information retrieval model, trained using LambdaRank and the input attributes of BM25, called LambdaBM25F. • LambdaBM25F optimizes directly for the chosen target IR evaluation measure and avoids the necessity of parameter tuning. • Our model is general and can potentially act as a framework for modelling other retrieval functions. • In the future we would like to perform more extensive studies to determine the relative importance of attributes in our model.
