
Towards a Game-Theoretic Framework for Information Retrieval


Presentation Transcript


  1. Towards a Game-Theoretic Framework for Information Retrieval ChengXiang Zhai, Department of Computer Science, University of Illinois at Urbana-Champaign, http://www.cs.uiuc.edu/homes/czhai, Email: czhai@illinois.edu. CCIR 2014, Aug. 10, 2014, Kunming, China

  2. Search is everywhere, and part of everyone’s life: Web Search, Desktop Search, Enterprise Search, Social Media Search, Site Search, …

  3. Search is also important for big data: it makes big data small, but more useful (Big Raw Data → Information Retrieval → Small Relevant Data → Text Mining / Decision Support)

  4. Search accuracy matters! Extra time spent by users if each query takes 1 or 10 seconds longer:
     # Queries/Day      × 1 sec            × 10 sec
     4,700,000,000      ~1,300,000 hrs     ~13,000,000 hrs
     1,600,000,000      ~440,000 hrs       ~4,400,000 hrs
     …                  …                  …
     2,000,000          ~550 hrs           ~5,500 hrs
     How can we optimize all search engines in a general way? Sources: Google, Twitter: http://www.statisticbrain.com/ PubMed: http://www.ncbi.nlm.nih.gov/About/tools/restable_stat_pubmed.html
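
A quick sanity check on the first row (a sketch, assuming roughly 4.7 billion queries per day and one extra second per query):

```latex
4.7\times 10^{9}\ \tfrac{\text{queries}}{\text{day}} \times 1\ \text{s}
\;=\; 4.7\times 10^{9}\ \tfrac{\text{s}}{\text{day}}
\;\approx\; 1.3\times 10^{6}\ \tfrac{\text{hours}}{\text{day}}
```

With 10 extra seconds per query the same calculation gives roughly 1.3×10^7 hours per day.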

  5. How can we optimize all search engines in a general way? However, this is an ill-defined question! What is a search engine? What is an optimal search engine? What should be the objective function to optimize?

  6. Current-generation search engines • Setting: a document collection C and a stream of k queries; for each query Q, the retrieval task = rank documents for the query • Interface = a ranked list (“10 blue links”) • Machine Learning → Retrieval Model: Score(Q,D), with minimum NLP • Optimal search engine = optimal Score(Q,D) • Objective = ranking accuracy on training data

  7. Current search engines are well justified • Probability ranking principle [Robertson 77]: returning a ranked list of documents in descending order of probability that a document is relevant to the query is the optimal strategy under two assumptions: • The utility of a document (to a user) is independent of the utility of any other document • A user would browse the results sequentially

  8. Two Justifications of PRP • Optimization of traditional retrieval effectiveness measures • Given an expected level of recall, ranking based on PRP maximizes the precision • Given a fixed rank cutoff, ranking based on PRP maximizes precision and recall • Optimal decision making • Regardless of the tradeoffs (e.g., favoring high precision vs. high recall), ranking based on PRP optimizes the expected utility of a binary (independent) retrieval decision (i.e., to retrieve or not to retrieve a document) • Intuition: if a user sequentially examines one doc at a time, we’d like the user to see the very best ones first
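
A hedged sketch of the expected-utility argument (notation mine, not from the slide): suppose retrieving a relevant document yields utility u_R and a non-relevant one u_N, with u_R > u_N. Then for a single document d,

```latex
\mathbb{E}[U(d)]
= p(R \mid d, Q)\,u_R + \bigl(1 - p(R \mid d, Q)\bigr)\,u_N
= u_N + (u_R - u_N)\,p(R \mid d, Q),
```

which is increasing in p(R | d, Q); so for any cutoff k, the expected utility of the retrieved set is maximized by taking the k documents with the highest probability of relevance, which is exactly the PRP ordering.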

  9. Success of Probability Ranking Principle • Vector Space Models: [Salton et al. 1975], [Singhal et al. 1996], … • Classic Probabilistic Models: [Maron & Kuhns 1960], [Harter 1975], [Robertson & Sparck Jones 1976], [van Rijsbergen 1977], [Robertson 1977], [Robertson et al. 1981], [Robertson & Walker 1994], … • Language Models: [Ponte & Croft 1998], [Hiemstra & Kraaij 1998], [Zhai & Lafferty 2001], [Lavrenko & Croft 2001], [Kurland & Lee 2004], … • Non-Classic Logic Models: [van Rijsbergen 1986], [Wong & Yao 1995], … • Divergence from Randomness: [Amati & van Rijsbergen 2002], [He & Ounis 2005], … • Learning to Rank: [Fuhr 1989], [Gey 1994], ... • Axiomatic retrieval framework: [Fang et al. 2004], [Fang et al. 2011], … • … Most information retrieval models aim to optimize score(Q,D)

  10. Limitations of PRP = limitations of optimizing Score(Q,D) • Assumptions made by PRP don’t hold in practice • Utility of a document depends on others • Users don’t strictly follow sequential browsing • As a result • Redundancy can’t be handled (duplicated docs have the same score!) • Collective relevance can’t be modeled • Heuristic post-processing of search results is inevitable

  11. Improvement: instead of scoring one document, score a whole ranked list • Instead of scoring an individual document, score an entire candidate ranked list of documents [Zhai 02; Zhai & Lafferty 06] • A list with redundant documents on the top can be penalized • Collective relevance can also be captured • Powerful machine learning techniques can be used [Cao et al. 07] • However, scoring is still for just one query: score(Q, ranked list); Optimal SE = optimal score(Q, ranked list); Objective = ranking accuracy on training data

  12. Limitations of single query scoring • No consideration of past queries and history • No modeling of users • Can’t optimize the utility over an entire session • …

  13. Heuristic solutions → emerging topics in IR • No consideration of past queries and history → implicit feedback (e.g., [Shen et al. 05]), personalized search (see, e.g., [Teevan et al. 10]) • No modeling of users → intent modeling (see, e.g., [Shen et al. 06]), task inference (see, e.g., [Wang et al. 13]) • Can’t optimize the utility over an entire session → active feedback (e.g., [Shen & Zhai 05]), exploration-exploitation tradeoff (e.g., [Agarwal et al. 09], [Karimzadehgan & Zhai 13]) • Can we solve all these problems in a more principled way with a unified formal framework?

  14. Going back to the basic questions… • What is a search engine? • What is an optimal search engine? • What should be the objective function to optimize? • How can we solve such an optimization problem?

  15. Proposed Solution: A Game-Theoretic Framework for IR • Retrieval process = cooperative game-playing • Players: Player 1 = search engine; Player 2 = user • Rules of the game: • Each player takes turns to make “moves” • The user makes the first move; the system makes the last move • For each move of the user, the system makes a response move • Current search engine: • User’s moves = {query, click}; system’s moves = {ranked list, show doc} • Objective: multiple possibilities • Satisfy the user’s information need with minimum user effort and minimum resource overhead of the system • Given a constant effort of a user, subject to constraints of system resources, maximize the utility of delivered information to the user • Given a fixed “budget” for system resources, and an upper bound of user effort, maximize the utility of delivered information

  16. Search as a Sequential Game • User’s goal: satisfy an information need with minimum effort; system’s goal: satisfy the user’s information need with minimum user effort and minimum resource • The game alternates moves: • User A1: enter a query → System: which information items to present? how to present them? → Ri: results (i=1, 2, 3, …) • User A2: view an item → System: which aspects/parts of the item to show? how? → R': item summary/preview • User A3: scroll down or click the “Back”/“Next” button → System: view more?

  17. Retrieval Task = Sequential Decision-Making • The user U issues actions A1, A2, …, At-1, At (e.g., query “light laptop”, click on the “Next” button); the system answers with responses R1, R2, …, Rt-1 over an info item collection C • History H = {(Ai, Ri)}, i = 1, …, t-1 • Decision at step t: given U, C, At, and H, choose the best Rt from r(At), the set of all possible responses to At • For a query, r(At) = all possible rankings of items in C and the best Rt = the best ranking for the query; for a “Next” click, r(At) = all possible rankings of unseen items and the best Rt = the best ranking of unseen items

  18. Formalization based on Bayesian Decision Theory: Risk Minimization Framework [Zhai & Lafferty 06, Shen et al. 05] • Observed: user U, interaction history H, current user action At, document collection C • All possible responses: r(At) = {r1, …, rn} • Inferred: user model M = (S, U, …) covering the information need and the seen items S • Loss function L(ri, At, M) measures how bad response ri is for a user described by M • Optimal response r* = the response with minimum Bayes risk: r* = arg min over r in r(At) of the expected loss ∫ L(r, At, M) P(M | U, H, At, C) dM
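
A minimal sketch of this selection rule, assuming a discrete set of candidate user models and caller-supplied posterior and loss functions (all names here are illustrative, not from a reference implementation):

```python
from typing import Callable, Sequence, TypeVar

Response = TypeVar("Response")
UserModel = TypeVar("UserModel")

def bayes_optimal_response(
    candidate_responses: Sequence[Response],        # r(A_t) = {r_1, ..., r_n}
    candidate_models: Sequence[UserModel],           # discretized space of user models M
    posterior: Callable[[UserModel], float],         # P(M | U, H, A_t, C), assumed normalized
    loss: Callable[[Response, UserModel], float],    # L(r, A_t, M) with A_t held fixed
) -> Response:
    """Return the response that minimizes the expected loss (Bayes risk) over user models."""
    def risk(r: Response) -> float:
        return sum(posterior(m) * loss(r, m) for m in candidate_models)
    return min(candidate_responses, key=risk)
```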

  19. A Simplified Two-Step Decision-Making Procedure • Approximate the Bayes risk by the loss at the mode of the posterior distribution • Two-step procedure • Step 1: Compute an updated user model M* based on the currently available information • Step 2: Given M*, choose a response to minimize the loss function
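
In symbols, a sketch of the two steps consistent with the slide’s description:

```latex
\text{Step 1: } M^{*} \;=\; \arg\max_{M} P(M \mid U, H, A_t, C),
\qquad
\text{Step 2: } r^{*} \;=\; \arg\min_{r \in r(A_t)} L(r, A_t, M^{*}).
```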

  20. Optimal Interactive Retrieval • The loop over collection C: user U issues A1 → the system infers M*1 from P(M1 | U, H, A1, C) → responds with R1 minimizing L(r, A1, M*1) → user issues A2 → infer M*2 from P(M2 | U, H, A2, C) → respond with R2 minimizing L(r, A2, M*2) → A3 → … • Many possible actions: type in a query character, scroll down a page, click on any button, … • Many possible responses: query completion, display adaptive summaries, recommendation/advertising, clarification, … • M can be regarded as states in an MDP or POMDP, so reinforcement learning will be very useful (see the SIGIR’14 tutorial on dynamic IR modeling [Yang et al. 14])

  21. Refinement of Risk Minimization Framework • r(At): decision space (At dependent) • r(At) = all possible rankings of items in C • r(At) = all possible rankings of unseen items • r(At) = all possible summarization strategies • r(At) = all possible ways to diversify top-ranked items • r(At) = all possible ways to mix results with query suggestions (or a topic map) • M: user model • Essential component: U = user information need • S = seen items • n = “new topic?” (or “never purchased such a product before?”) • t = user’s task • L(Rt, At, M): loss function • Generally measures the utility of Rt for a user modeled as M • Often encodes relevance criteria, but may also capture other preferences • Can be based on long-term gain (i.e., winning the whole “game” of info service) • P(M | U, H, At, C): user model inference • Often involves estimating the information need U • May also involve inference of other variables (e.g., task, exploratory vs. fixed item search)

  22. Case 1: Context-Insensitive IR • At = “enter a query Q” • r(At) = all possible rankings of docs in C • M = θU, a unigram language model (word distribution) representing the information need U • p(M | U, H, At, C) = p(θU | Q)
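
As a concrete (illustrative) instance of this case, scoring each document independently against the inferred unigram model reduces to standard query-likelihood retrieval; a minimal sketch with Dirichlet-smoothed document language models:

```python
import math
from collections import Counter
from typing import Dict, List

def query_likelihood(query: List[str], doc: List[str],
                     collection_lm: Dict[str, float], mu: float = 2000.0) -> float:
    """log p(Q | doc) under a Dirichlet-smoothed document language model; higher is better."""
    tf = Counter(doc)
    dlen = len(doc)
    score = 0.0
    for w in query:
        p_bg = collection_lm.get(w, 1e-9)               # background (collection) probability
        p_w = (tf.get(w, 0) + mu * p_bg) / (dlen + mu)  # smoothed document model
        score += math.log(p_w)
    return score

def rank_documents(query: List[str], docs: Dict[str, List[str]],
                   collection_lm: Dict[str, float]) -> List[str]:
    """Independent scoring: rank doc ids by descending query likelihood."""
    return sorted(docs, key=lambda d: query_likelihood(query, docs[d], collection_lm),
                  reverse=True)
```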

  23. Optimal Ranking for Independent Loss: the “risk ranking principle” [Zhai 02, Zhai & Lafferty 06] • Decision space = {rankings} • Under the assumptions of sequential browsing and an independent (per-document) loss, the Bayes risk decomposes into independent per-document risks • Independent risk = independent scoring: ranking documents by their individual risk (score) is optimal
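
A sketch of the resulting per-document score (notation mine): with an independent loss L(d, θU) and a posterior p(θU | Q) over the information-need model,

```latex
s(Q, d) \;=\; \int_{\theta_U} L(d, \theta_U)\, p(\theta_U \mid Q)\, d\theta_U ,
```

and the optimal ranking presents documents in ascending order of this expected loss (equivalently, descending order of the corresponding relevance score).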

  24. Case 2: Implicit Feedback • At = “enter a query Q” • r(At) = all possible rankings of docs in C • M = θU, a unigram language model (word distribution) • H = {previous queries} + {viewed snippets} • p(M | U, H, At, C) = p(θU | Q, H)

  25. Case 3: General Implicit Feedback • At = “enter a query Q”, or the “Back” button, or the “Next” button • r(At) = all possible rankings of unseen docs in C • M = (θU, S), where S = seen documents • H = {previous queries} + {viewed snippets} • p(M | U, H, At, C) = p(θU | Q, H)

  26. Case 4: User-Specific Result Summary • At = “enter a query Q” • r(At) = {(D, s)}: a document subset D ⊆ C with |D| = k plus a summary type s ∈ {“snippet”, “overview”} • M = (θU, n), n ∈ {0,1} indicating “topic is new to the user” • p(M | U, H, At, C) = p(θU, n | Q, H), with mode M* = (θ*, n*) • Decision: choose the k most relevant docs; if the topic is new (n* = 1), give an overview summary, otherwise a regular snippet summary
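
A toy sketch of this decision rule (the names, the 0.5 threshold, and the relevance-score input are hypothetical, purely for illustration):

```python
from typing import Dict, List, Tuple

def user_specific_result_page(relevance: Dict[str, float],
                              p_new_topic: float, k: int = 10) -> Tuple[List[str], str]:
    """Pick the k most relevant doc ids and a presentation strategy (overview vs. snippet)."""
    top_docs = sorted(relevance, key=relevance.get, reverse=True)[:k]
    summary_type = "overview" if p_new_topic >= 0.5 else "snippet"
    return top_docs, summary_type
```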

  27. Case 5: Modeling Different Notions of Diversification • Redundancy reduction → reduce user effort • Diverse information needs (e.g., overview, subtopic retrieval) → increase the immediate utility • Active relevance feedback → increase future utility

  28. Risk Minimization for Diversification • Redundancy reduction: Loss function includes a redundancy measure • Special case: list presentation + MMR [Zhai et al. 03] • Diverse information needs: loss function defined on latent topics • Special case: PLSA/LDA + topic retrieval [Zhai 02] • Active relevance feedback: loss function considers both relevance and benefit for feedback • Special case: hard queries + feedback only [Shen & Zhai 05]

  29. Subtopic Retrieval [Zhai et al. 03] • Query: What are the applications of robotics in the world today? Find as many DIFFERENT applications as possible. • Example subtopics: A1: spot-welding robotics; A2: controlling inventory; A3: pipe-laying robots; A4: talking robot; A5: robots for loading & unloading memory tapes; A6: robot [telephone] operators; A7: robot cranes; … • Subtopic judgments form a binary document-by-subtopic matrix, e.g.:
         A1  A2  A3       …   Ak
     d1   1   1   0   0   …   0   0
     d2   0   1   1   1   …   0   0
     d3   0   0   0   0   …   1   0
     …
     dk   1   0   1   0   …   0   1
  • This is a non-traditional retrieval task …

  30. 5.1 Diversify = Remove Redundancy • Greedy algorithm for ranking: Maximal Marginal Relevance (MMR) • The loss function encodes a “willingness to tolerate redundancy” • Cost constraint: C2 < C3, since a redundant relevant doc is better than a non-relevant doc
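
A generic sketch of the MMR greedy selection (not the exact loss-function instantiation from [Zhai et al. 03]; `rel`, `sim`, and the trade-off weight `lam` are caller-supplied stand-ins for the relevance score, the inter-document similarity, and the willingness-to-tolerate-redundancy parameter):

```python
from typing import Callable, List, TypeVar

Doc = TypeVar("Doc")

def mmr_rank(candidates: List[Doc],
             rel: Callable[[Doc], float],
             sim: Callable[[Doc, Doc], float],
             lam: float = 0.7, k: int = 10) -> List[Doc]:
    """Greedily build a ranking that trades off relevance against redundancy."""
    selected: List[Doc] = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def marginal(d: Doc) -> float:
            # Penalize similarity to anything already selected.
            redundancy = max((sim(d, s) for s in selected), default=0.0)
            return lam * rel(d) - (1.0 - lam) * redundancy
        best = max(pool, key=marginal)
        selected.append(best)
        pool.remove(best)
    return selected
```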

  31. 5.2 Diversity = Satisfy Diverse Info. Need [Zhai 02] • Need to directly model latent aspects and then optimize results based on aspect/topic matching • Reducing redundancy doesn’t ensure complete coverage of diverse aspects

  32. Aspect Loss Function: Illustration • Desired coverage: p(a|Q) • “Already covered” by previously selected documents: p(a|θ1), …, p(a|θk-1) • New candidate: p(a|θk) • The combined coverage after adding the candidate is compared against the desired coverage; the illustration contrasts a perfect candidate, a redundant one, and a non-relevant one

  33. 5.3 Diversify = Active Feedback [Shen & Zhai 05] • Decision problem: decide which subset of documents to present for relevance judgment

  34. Independent Loss

  35. Independent Loss (cont.) • Two special cases: Top-K (present the k documents judged most likely relevant) and Uncertainty Sampling (present the k documents whose relevance is most uncertain)

  36. Dependent Loss • Heuristics: consider relevance first, then diversity • Select the top N documents, then diversify: • Gapped Top-K • K-Cluster Centroid (cluster the N docs into K clusters and take the centroids) • MMR
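
Hedged sketches of two of these heuristics (the gap size, the clustering routine, and the representative-selection rule below are illustrative choices, not necessarily those used in [Shen & Zhai 05]):

```python
from typing import Callable, List, Sequence, TypeVar

Doc = TypeVar("Doc")

def gapped_top_k(ranked: Sequence[Doc], k: int, gap: int = 2) -> List[Doc]:
    """Pick documents at ranks 1, 1+(gap+1), 1+2*(gap+1), ... so `gap` docs are skipped
    between consecutive picks."""
    return list(ranked[::gap + 1])[:k]

def k_cluster_representatives(ranked: Sequence[Doc], n: int, k: int,
                              distance: Callable[[Doc, Doc], float]) -> List[Doc]:
    """Cluster the top-N documents into K groups and return one representative (medoid) each."""
    top_n = list(ranked[:n])
    # Seed K clusters with evenly spaced documents from the top-N ranking.
    seeds = top_n[::max(1, n // k)][:k]
    clusters: List[List[Doc]] = [[s] for s in seeds]
    for d in top_n:
        if d in seeds:
            continue
        nearest = min(range(len(seeds)), key=lambda i: distance(d, seeds[i]))
        clusters[nearest].append(d)
    # Representative = the member closest on average to the rest of its cluster.
    return [min(members, key=lambda m0: sum(distance(m0, m) for m in members))
            for members in clusters]
```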

  37. Illustration of Three AF Methods • Top-K (normal feedback), Gapped Top-K, and K-cluster centroid select different subsets of documents from ranks 1–16 in the illustration • Experiment results show that Top-K is worse than all others [Shen & Zhai 05]

  38. Suggested answers to the basic questions • Search Engine = Game System • Optimal Search Engine = Optimal Game Plan/Strategy • Objective function: based on 3 factors and at the session level • Utility of information delivered to the user • Effort needed from the user • System resource overhead • How can we solve such an optimization problem? • Bayesian decision theory in general, partially observable Markov decision process (POMDP) [Luo et al. 14] • Reinforcement learning • ...

  39. Major benefits of IR as game playing • Naturally optimizes performance over an entire session instead of a single query (optimizing the chance of winning the entire game) • Optimizes the collaboration of machines and users (maximizing collective intelligence) • Opens up many interesting new research directions (e.g., crowdsourcing + interactive IR)

  40. An interesting new problem: crowdsourcing relevance-judgment collection to users • Assumption: approximate relevance judgments with clickthroughs • Question: how to optimize the exploration-exploitation tradeoff when leveraging users to collect clicks on low-ranked (“tail”) documents? • Where to insert a candidate? • Which user should get this “assignment”? • A potential solution must include a model of the user’s behavior
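
One illustrative (not prescriptive) way to frame the tradeoff is an epsilon-greedy insertion policy: with small probability, promote a tail candidate into a probed slot of the result list so that it can collect clicks. All parameter names below are hypothetical:

```python
import random
from typing import List

def epsilon_greedy_insert(ranked: List[str], tail_candidates: List[str],
                          epsilon: float = 0.1, probe_slot: int = 5,
                          rng: random.Random = random.Random()) -> List[str]:
    """With probability epsilon, place a randomly chosen tail candidate at probe_slot."""
    results = list(ranked)
    if tail_candidates and rng.random() < epsilon:
        candidate = rng.choice(tail_candidates)
        if candidate in results:
            results.remove(candidate)
        results.insert(min(probe_slot, len(results)), candidate)
    return results
```

In a full solution the exploration probability and the probe slot would themselves depend on a model of the user’s click behavior, as the slide notes.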

  41. General Research Questions Suggested by the Game-Theoretic Framework • How should we design an IR game? • How to design “moves” for the user and the system? • How to design the objective of the game? • How to go beyond search to support access and task completion? • How to formally define the optimization problem and compute the optimal strategy for the IR system? • To what extent can we directly apply existing game theory? Does Nash equilibrium matter? • What new challenges must be solved? • How to evaluate such a system? MOOC?

  42. Some Relevant Challenges in NLP • How can we turn partial understanding into an additional dimension of scoring? • Readability • Trustworthiness • How can we perform syntactic and semantic analysis of queries? • How can we generate adaptive explanatory summaries of documents? • How can we generate coherent previews of search results? • How can we generate a topic map to enable users to browse freely?

  43. Intelligent IR System in the Future: Optimizing multiple games simultaneously • One intelligent IR system plays many games at once (Game 1, Game 2, …, Game k), e.g., mobile service search, a medical advisor, a learning engine (MOOC), all sharing the documents and the interaction log • Support the whole workflow of a user’s task (multimodal info access, info analysis, decision support, task support) • Minimize user effort (maximum relevance, natural dialogue) • Minimize system resource overhead • Learn to adapt & improve over time from all users/data

  44. Action Item: future research requires integration of multiple fields • At the core, traditional information retrieval connects a user model (user understanding) with a document representation (document understanding) through an interactive service (search, browsing, recommendation, …) over a document collection • Fields to integrate: human-computer interaction and psychology (user actions and the interaction log), game theory from economics (system responses as moves), natural language processing (for both user and document understanding), and machine learning (particularly reinforcement learning) • Additional signals: external user info (social network) and external doc info (structures)

  45. References Note: the references are inevitably incomplete due to the breadth of the topic; if you know of any important missing references, please email me at czhai@illinois.edu. • [Salton et al. 1975] A theory of term importance in automatic text analysis. G. Salton, C. S. Yang and C. T. Yu. Journal of the American Society for Information Science, 1975. • [Singhal et al. 1996] Pivoted document length normalization. A. Singhal, C. Buckley and M. Mitra. SIGIR 1996. • [Maron & Kuhns 1960] On relevance, probabilistic indexing and information retrieval. M. E. Maron and J. L. Kuhns. Journal of the ACM, 1960. • [Harter 1975] A probabilistic approach to automatic keyword indexing. S. P. Harter. Journal of the American Society for Information Science, 1975. • [Robertson & Sparck Jones 1976] Relevance weighting of search terms. S. Robertson and K. Sparck Jones. Journal of the American Society for Information Science, 1976. • [van Rijsbergen 1977] A theoretical basis for the use of co-occurrence data in information retrieval. C. J. van Rijsbergen. Journal of Documentation, 1977. • [Robertson 1977] The probability ranking principle in IR. S. E. Robertson. Journal of Documentation, 1977.

  46. References (cont.) • [Robertson et al. 1981] Probabilistic models of indexing and searching. S. E. Robertson, C. J. van Rijsbergen and M. F. Porter. Information Retrieval Research, 1981. • [Robertson & Walker 1994] Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. S. E. Robertson and S. Walker. SIGIR 1994. • [Ponte & Croft 1998] A language modeling approach to information retrieval. J. Ponte and W. B. Croft. SIGIR 1998. • [Hiemstra & Kraaij 1998] Twenty-One at TREC-7: ad-hoc and cross-language track. D. Hiemstra and W. Kraaij. TREC-7, 1998. • [Zhai & Lafferty 2001] A study of smoothing methods for language models applied to ad hoc information retrieval. C. Zhai and J. Lafferty. SIGIR 2001. • [Lavrenko & Croft 2001] Relevance-based language models. V. Lavrenko and B. Croft. SIGIR 2001. • [Kurland & Lee 2004] Corpus structure, language models, and ad hoc information retrieval. O. Kurland and L. Lee. SIGIR 2004. • [van Rijsbergen 1986] A non-classical logic for information retrieval. C. J. van Rijsbergen. The Computer Journal, 1986. • [Wong & Yao 1995] On modeling information retrieval with probabilistic inference. S. K. M. Wong and Y. Y. Yao. ACM Transactions on Information Systems, 1995.

  47. References (cont.) • [Amati & van Rijsbergen 2002] Probabilistic models of information retrieval based on measuring the divergence from randomness. G. Amati and C. J. van Rijsbergen. ACM Transactions on Information Systems, 2002. • [He & Ounis 2005] A study of the Dirichlet priors for term frequency normalization. B. He and I. Ounis. SIGIR 2005. • [Fuhr 1989] Optimal polynomial retrieval functions based on the probability ranking principle. N. Fuhr. ACM Transactions on Information Systems, 7(3): 183-204, 1989. • [Gey 1994] Inferring probability of relevance using the method of logistic regression. F. Gey. SIGIR 1994. • [Fang et al. 2004] A formal study of information retrieval heuristics. H. Fang, T. Tao and C. Zhai. SIGIR 2004. • [Fang et al. 2011] Diagnostic evaluation of information retrieval models. H. Fang, T. Tao and C. Zhai. ACM Transactions on Information Systems, 29(2), 2011. • [Zhai & Lafferty 06] A risk minimization framework for information retrieval. ChengXiang Zhai and John D. Lafferty. Information Processing & Management, 42(1): 31-55, 2006. • [Zhai 02] Risk Minimization and Language Modeling in Information Retrieval. ChengXiang Zhai. Ph.D. thesis, Carnegie Mellon University, 2002. • [Cao et al. 07] Learning to rank: from pairwise approach to listwise approach. Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai and Hang Li. ICML 2007, pp. 129-136.

  48. References (cont.) • [Shen et al. 05] Implicit User Modeling for Personalized Search. Xuehua Shen, Bin Tan and ChengXiang Zhai. CIKM 2005, pp. 824-831. • [Zhai et al. 03] Beyond Independent Relevance: Methods and Evaluation Metrics for Subtopic Retrieval. ChengXiang Zhai, William W. Cohen and John Lafferty. SIGIR 2003, pp. 10-17. • [Shen & Zhai 05] Active Feedback in Ad Hoc Information Retrieval. Xuehua Shen and ChengXiang Zhai. SIGIR 2005, pp. 59-66. • [Teevan et al. 10] Potential for personalization. Jaime Teevan, Susan T. Dumais and Eric Horvitz. ACM Transactions on Computer-Human Interaction, 17(1), 2010. • [Shen et al. 06] Building bridges for web query classification. Dou Shen, Jian-Tao Sun, Qiang Yang and Zheng Chen. SIGIR 2006, pp. 131-138. • [Wang et al. 13] Learning to extract cross-session search tasks. Hongning Wang, Yang Song, Ming-Wei Chang, Xiaodong He, Ryen W. White and Wei Chu. WWW 2013, pp. 1353-1364.

  49. References (cont.) • [Agarwal et al. 09] Explore/Exploit Schemes for Web Content Optimization. Deepak Agarwal, Bee-Chung Chen and Pradheep Elango. ICDM 2009. • [Karimzadehgan & Zhai 13] A Learning Approach to Optimizing Exploration-Exploitation Tradeoff in Relevance Feedback. Maryam Karimzadehgan and ChengXiang Zhai. Information Retrieval, 16(3): 307-330, 2013. • [Luo et al. 14] Win-Win Search: Dual-Agent Stochastic Game in Session Search. J. Luo, S. Zhang and G. H. Yang. SIGIR 2014. • [Yang et al. 14] Dynamic Information Retrieval Modeling. G. H. Yang, M. Sloan and J. Wang. SIGIR 2014 Tutorial; http://www.slideshare.net/marcCsloan/dynamic-information-retrieval-tutorial

  50. Thank You! Questions/Comments?
