

  1. Implicit Acquisition of Context for Personalization of Information Retrieval Systems
     Chang Liu, Nicholas J. Belkin
     School of Communication and Information, Rutgers University
     imliu@gmail.com, belkin@rutgers.edu

  2. Research Goal
     • Propose a model to personalize search results according to the user's search context, in particular the type of task that led the user to engage in information-seeking behavior, and the behaviors that the user has engaged in during the search.
     • The personalization predicts potentially useful documents based on the type of task and on behaviors indicative of document usefulness.

  3. Sources of Evidence of Usefulness for Implicit Relevance Feedback
     • Measures of behaviors on content pages
       • Time-related measures: display (dwell) time
       • Amount of actions: number of scrolls, mouse movements and clicks, number of visits
       • Further usage of content pages: print, bookmark, save, etc.
       • Patterns of eye movements
     • Measures of behaviors on search result pages
       • Click-through, click order, click position, number of clicks
       • Time on result list before first click, total time on result lists
       • Query reformulation interval time
       • Query reformulation type
     • Query logs
       • Previous query issuing and results browsing behaviors
     • Combination of multiple behavioral measures (a feature-aggregation sketch follows)
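To make these measures concrete, here is a minimal sketch of how per-page behavioral features could be aggregated from a raw interaction log. The event names, log fields, and feature set are illustrative assumptions, not part of the PoODLE instrumentation.

```python
# Minimal sketch (assumed log format): aggregating implicit-feedback measures
# per content page from a stream of interaction events.
from collections import defaultdict

def page_features(events):
    """events: iterable of dicts such as
    {'url': ..., 'type': 'visit_start' | 'visit_end' | 'scroll' | 'click' | 'save',
     'time': seconds since session start}
    Returns per-URL measures: dwell time, scroll/click counts, visit count, saved flag."""
    feats = defaultdict(lambda: {'dwell': 0.0, 'scrolls': 0, 'clicks': 0,
                                 'visits': 0, 'saved': False})
    open_visits = {}                      # url -> time the current visit started
    for e in events:
        f = feats[e['url']]
        if e['type'] == 'visit_start':
            open_visits[e['url']] = e['time']
            f['visits'] += 1
        elif e['type'] == 'visit_end' and e['url'] in open_visits:
            f['dwell'] += e['time'] - open_visits.pop(e['url'])
        elif e['type'] == 'scroll':
            f['scrolls'] += 1
        elif e['type'] == 'click':
            f['clicks'] += 1
        elif e['type'] == 'save':
            f['saved'] = True
    return dict(feats)
```

Features like these, possibly combined with result-page measures and query-log history, form the input to the predictive models described in the following slides.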

  4. Task Type as Context
     • Task type can influence:
       • the interpretation of user behaviors for implicit relevance feedback (White and Kelly, 2006; Liu and Belkin, 2010)
       • users' search behaviors (Li, 2008; Toms et al., 2008; Kim, 2009)
       • the type of information objects users expect (Freund, 2008)
     • The performance of personalization algorithms may vary for different types of information goals (Teevan, Dumais and Horvitz, 2010)
     • Classification: task product, complexity, goal (quality), level of document judgment, etc.
     • Detect task type from user behaviors (a classification sketch follows)
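As a hedged illustration of the last bullet (detecting task type from user behaviors), the sketch below fits an off-the-shelf classifier to session-level behavioral features. The feature set, the task labels, and the choice of logistic regression are assumptions made for illustration, not the authors' method.

```python
# Illustrative only: classifying task type from session-level behavioral features.
# Feature names, labels, and the tiny data set are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Columns: queries per session, mean dwell time (s), clicks per query, total scrolls
X = np.array([[3, 45.0, 1.2, 20],
              [8, 12.0, 3.5, 55],
              [2, 60.0, 0.8, 10],
              [7, 15.0, 2.9, 48]])
y = np.array(['known-item', 'exploratory', 'known-item', 'exploratory'])

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=2))  # rough check that the features carry signal
```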

  5. PoODLE Project (Design)
     • Task-Cognitive Experiment (TCE)
       • A lab-based user study (journalism domain)
       • Four search tasks designed based on task facets
       • 32 participants
     • Domain-Knowledge Experiment (DKE)
       • A lab-based user study (medical domain)
       • Five search tasks from the 2004 TREC Genomics track
       • Questionnaire about participants' background information, domain knowledge, search knowledge, etc.
       • 40 participants
     • Both experiments recorded a variety of searcher behaviors: eye gaze, various interactions with the search systems and information objects, saving and deleting pages, evaluation of the usefulness of the saved pages, etc.

  6. PoODLE Project (Some Results)
     • Task type and search behaviors
       • Task type affected average decision time and the ratio of reading to scanning in participants' reading models
       • Some within-session behaviors could indicate the difficulty of search tasks
     • Search behaviors and document usefulness
       • Query reformulation intervals were related to whether users found useful document(s) in those intervals
       • Query reformulation type
       • Knowledge of task stage can help in inferring document usefulness from decision time, especially in the parallel task (a toy illustration follows)
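To ground the last point, here is a toy sketch of how knowledge of task stage might condition the interpretation of decision time when inferring usefulness. The stage labels and thresholds are invented for illustration and are not results from the study.

```python
# Toy illustration (hypothetical thresholds): interpreting decision time
# differently depending on the task stage when inferring document usefulness.
def likely_useful(decision_time_s, task_stage):
    # Earlier in a task, users may dwell longer on many pages while orienting,
    # so a longer decision time is required as evidence of usefulness.
    thresholds = {'early': 40.0, 'middle': 25.0, 'late': 15.0}
    return decision_time_s >= thresholds.get(task_stage, 25.0)

print(likely_useful(30.0, 'early'))  # False: not long enough for an early stage
print(likely_useful(30.0, 'late'))   # True
```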

  7. Investigation Plan
     • Three-stage plan
       • First stage
         • Identify salient behaviors
         • Generate predictive models
         • Test
       • Second stage
         • Implement personalized IR based on the first stage
         • Test
       • Third stage
         • Evaluate personalization in an experimental setting

  8. Research design (flowchart): Analyze measures of user behaviors in the user studies to generate predictive models: a predictive model of task type, and predictive models of the usefulness of content pages (a general default model plus specific models for each type of task). At search time, observe user behaviors and ask whether the task type can be predicted; if yes, activate the task-specific usefulness model; if not yet, fall back to the default model. The personalized IR system then re-ranks search results / reformulates queries, and is evaluated against a non-personalized IR system in a comparative study.
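The diagram's re-ranking step could look roughly like the sketch below. The interfaces (a task-type predictor, per-task usefulness models, a default model, and an equal-weight score combination) are assumptions made for illustration, not a specification from the project.

```python
# Sketch of the re-ranking flow from the diagram; all interfaces and the
# 50/50 score combination are assumed for illustration.
def personalize(results, session_behaviors, task_models, default_model,
                predict_task_type):
    """results: list of (doc_id, base_score) from the underlying IR system.
    Returns the list re-ranked by a blend of base score and predicted usefulness."""
    task = predict_task_type(session_behaviors)        # may return None if undetected
    usefulness_model = task_models.get(task, default_model)
    def score(item):
        doc_id, base = item
        return 0.5 * base + 0.5 * usefulness_model(doc_id, session_behaviors)
    return sorted(results, key=score, reverse=True)
```

If no task type can be detected, the general model serves as the fallback, mirroring the "not yet" branch of the diagram.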

  9. Conclusion
     • We propose a program for developing and evaluating a personalized IR model:
       • collect user behaviors
       • predict task type and documents useful to the tasks
       • personalize search results
       • evaluate to see if this leads to a better search experience

  10. Acknowledgments
      The research that led to this proposal was supported by IMLS Grant LG-06-07-0105-07. We owe a great debt to our colleagues in the PoODLE project*: Michael Cole, Jingjing Liu, Ralf Bierig, Jacek Gwizdka, Jun Zhang and Xiangmin Zhang.
      *http://comminfo.rutgers.edu/imls/poodle
