
Effectiveness of Implicit Rating Data on Characterizing Users in Complex Information Systems

This study explores the effectiveness of using implicit rating data to characterize users in complex information systems, such as digital libraries. It examines the attributes of user activity, proposes a user grouping model based on implicit data, and discusses the benefits of collecting user interests for user grouping.


Presentation Transcript


  1. Effectiveness of Implicit Rating Data on Characterizing Users in Complex Information Systems 9th ECDL 2005 Vienna, Austria Sep. 20, 2005 Seonho Kim, Uma Murthy, Kapil Ahuja, Sandi Vasile, Edward A. Fox Digital Library Research Laboratory (DLRL) Virginia Tech, Blacksburg, VA 24061 USA

  2. Acknowledgements (Selected) • Sponsors: AOL; NSF grants DUE-0121679 DUE-0435059; Virginia Tech; … • Faculty/Staff: Lillian Cassel, Manuel Perez, … • VT (Former) Students: Aaron Krowne, Ming Luo, Hussein Suleman, …

  3. Overview • Introduction • Prior Work • Web Trends and DL • Data for User Studies • Problem of Explicit Rating Data • Implicit Rating Data in DLs • Attributes of User Activity • User Tracking Interface and User Model DB • Questions and Experiments • Questions to Solve • Experiments, Hypothesis Tests, Data, Settings • Results of Hypothesis Testing • Data Types and Characterizing Users • Future Work • Conclusions • References

  4. Prior Work • User study, user feedback • Pazzani et al. [1]: learned user profiles from user feedback on the interestingness of Web sites. • Log analysis & standardization efforts • Jones et al. [2]: a transaction log analysis of a DL • Gonçalves et al. [3]: defined an XML log standard for DLs. • Implicit rating data • Nichols [4]: suggested the use of implicit data as a check on explicit ratings. • GroupLens [5]: employed a “time consuming” factor for personalization.

  5. Web Trends & DL • WWW Trends • One-way → two-way services • e.g., blogs, wikis, online journals, forums, etc. • Passive anonymous observers → visible individuals with personalities • Same situation in Digital Libraries • Research emphasis on “User Study” • Collaborative filtering • Personalization • User modeling • Recommender systems, etc.

  6. Data for User Studies • Explicit Ratings • User interview • User preference survey: demographic info, research area, majors, learning topics, publications • User rating for items • Implicit Ratings • “User activities”, e.g., browsing, clicking, reading, opening, skipping, etc. • Time

  7. Problem of Explicit Rating Data in Digital Libraries • Expensive to obtain • Patrons feel bothered • Limited questions • Terminology problems in describing research interests and learning topics • Research areas are too broad, while personal interests are too narrow • Term ambiguity • New terminology in new areas • Multiple terms for the same area, multiple meanings of a term → Hard to figure out users’ interests and topics

  8. Implicit Rating Data in Complex Information Systems • Easy to obtain • Patrons don’t feel bothered and can concentrate on their tasks • No terminology issues • Potential knowledge is included in the data • More effective when hybridized with explicit rating data (Nichols [4], GroupLens [5])

  9. User Tracking Interface and User Model DB • [Diagram] The user tracking interface sits between the user and the Digital Library retrieval system: activities such as typing a query, clicking, opening, browsing, expanding, reading, and ignoring are captured as tracking info, which the interface loads, updates, saves, and creates in the User Model DB

  10. Attributes of User Activity • DGG (Domain Generalization Graph) for user activity attributes in a DL • Attributes: Type (User Interest, Document Topic → ANY), Frequency (High, Low → ANY), Direction (implicit, explicit → ANY), Intention (Perceiving, Rating → ANY) • Direction: Entering a query, Sending a query, Reading, Skipping, Selecting, Expanding a node, Scrolling, Dragging → implicit; Entering user info. → explicit • Intention: Entering a query, Reading, Scrolling, Dragging → perceiving; Sending a query, Skipping, Selecting, Expanding a node, Entering user info. → rating
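The activity-to-attribute mappings on this slide amount to a small lookup table. A minimal sketch (the dictionary and function names are illustrative, not from the paper's system):

```python
# Sketch of the DGG attribute mapping: each user activity is classified
# by Direction (implicit/explicit) and Intention (perceiving/rating).
# Activity names follow the slide; the data structure is illustrative.
ACTIVITY_ATTRIBUTES = {
    "entering a query":   ("implicit", "perceiving"),
    "sending a query":    ("implicit", "rating"),
    "reading":            ("implicit", "perceiving"),
    "skipping":           ("implicit", "rating"),
    "selecting":          ("implicit", "rating"),
    "expanding a node":   ("implicit", "rating"),
    "scrolling":          ("implicit", "perceiving"),
    "dragging":           ("implicit", "perceiving"),
    "entering user info": ("explicit", "rating"),
}

def classify(activity):
    """Return the (direction, intention) attributes of a logged activity."""
    return ACTIVITY_ATTRIBUTES[activity.lower()]
```

Note that every activity except entering user info is implicit, which is why tracking these interactions yields rating data without bothering the patron.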

  11. Overview • Introduction • Prior Work • Web Trends and DL • Data for User Studies • Problem of Explicit Rating Data • Implicit Rating Data in DLs • Attributes of User Activity • User Tracking Interface and User Model DB • Questions and Experiments • Questions to Solve • Experiments, Hypothesis Tests, Data, Settings • Results of Hypothesis Testing • Data Types and Characterizing Users • Future Work • Conclusions • References

  12. Proposed User Grouping Model • User grouping is the most critical procedure for a recommender system. • Suitable for dynamic and complex information systems like DLs • Overcomes data sparseness • Uses implicit rating data rather than explicit rating data • User-oriented recommender algorithm • User interest-based community finding • User modeling • The user model (UM) contains complete statistics for the recommender system. • Enhanced interoperability

  13. Collecting User Interests for User Grouping • Users with similar interests are grouped • Employs a document clustering algorithm, LINGO [10], to collect document topics • Users’ interests are collected implicitly during searching and browsing. • A User Model (UM) contains the user’s interests and document topics. • A user’s interests are a subset of the document topics proposed to her by document clustering.
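The slide does not spell out the grouping algorithm itself. One minimal sketch of interest-based grouping uses set overlap between users' topic sets; the greedy strategy and the threshold below are assumptions for illustration, not the paper's method:

```python
def overlap_ratio(topics_i, topics_j):
    """Fraction of user i's topics that user j also referred to."""
    return len(topics_i & topics_j) / len(topics_i) if topics_i else 0.0

def group_users(user_topics, threshold=0.5):
    """Greedily place each user into the first group containing a member
    whose topic overlap with the user is mutual and above the threshold;
    otherwise start a new group.  Illustrative sketch only."""
    groups = []
    for u, topics in user_topics.items():
        for g in groups:
            if any(overlap_ratio(topics, user_topics[v]) >= threshold
                   and overlap_ratio(user_topics[v], topics) >= threshold
                   for v in g):
                g.add(u)
                break
        else:
            groups.append({u})
    return groups
```

For example, two users sharing most of their topics land in one group, while a user with disjoint topics starts her own.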

  14. Interest-based Recommender System

  15. System Analysis with 5S Model • [Diagram] The interest-based recommender system for a DL, represented with the 5S model: • Society: users (researcher, teacher, learner) who participate in interest groups, class groups, and communities, sharing a collaboration space • Structure: the User Model schema — user description, statistics, user interests, document topics, user groups • Space: probability space, recommendation space, vector space • Scenario: group selection, individual selection, filtering, ranking, highlighting, presentation, push service • Stream: text, video, audio, personalized pages displayed via the user interface

  16. User Model (UM) • User ID • User Description (explicit data, obtained from questionnaire): name, e-mail, address, publications, user interests • Groups (implicit data, generated by the recommender): group ID, score • Statistics (implicit data, generated by the user interface and recommender): document topic score, user interest score

  17. Experiment - Tasks • Subjects are asked to • answer a questionnaire to collect demographic information • list research interests, to help us collect the explicit rating data used for evaluation in the experiment • search for documents in their research interests and browse the result documents, to help us collect implicit rating data

  18. Experiment - Participants • 22 Ph.D. and M.S. students majoring in Computer Science • CITIDEL [6] is used as a DL in the “Computing” field • Data from 4 students were excluded because their research domains are not included in CITIDEL

  19. Experiment - Interfaces • Specially designed user interfaces are required to capture users’ interactions • Implemented with JavaScript and a Java application

  20. Results - Collected Data • Example <Semi Structured Data<Cross Language Information Retrieval CLIR<Translation Model<Structured English Query<TREC Experiments at Maryland<Structured Document<Evaluation<Attribute Grammars<Learning<Web<Query Processing<Query Optimisers<QA<Disambiguation<Sources<SEQUEL<Fuzzy<Indexing<Inference Problem<Schematically Heterogeneity<Sub Optimization Query Execution Plan<Generation<(Other)(<Cross Language Information Retrieval CLIR)(<Structured English Query)(<TREC Experiments at Maryland)(<Evaluation)(<Query Processing)(<Query Optimisers)(<Disambiguation) <Cross Language Information Retrieval CLIR<Machine Translation<English Japanese<Based Machine<TREC Experiments at Maryland<Approach to Machine<Natural Language<Future of Machine Translation<Machine Adaptable Dynamic Binary<CLIR Track<Systems<New<Tables Provide<Design<Statistical Machine<Query Translation<Evaluates<Chinese<USA October Proceedings<Interlingual<Technology<Syntax Directed Transduction<Interpretation<Knowledge<Linguistic<Divergences<(Other)(<Cross Language Information Retrieval CLIR)(<Machine Translation)(<English Japanese)(<TREC Experiments at Maryland)(<CLIR Track)(<Query Translation) • Parenthesized topics were rated positively
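The record format above (topics delimited by '<', with positively rated topics repeated as '(<Topic)') can be parsed with a short routine. This sketch infers the format from the example alone, not from a published specification:

```python
import re

def parse_topics(record):
    """Parse one collected-data record: topics are '<'-delimited, and a
    positively rated topic is repeated as '(<Topic)'.  The format is
    inferred from the example on the slide."""
    # Positively rated topics appear inside '(<...)' groups.
    positive = {m.strip() for m in re.findall(r"\(<([^)]*)\)", record)}
    # Drop all parenthesized runs, including '(Other)', then split topics.
    body = re.sub(r"\([^)]*\)", "", record)
    topics = [t.strip() for t in body.split("<") if t.strip()]
    return topics, positive
```

Applied to a record like `<A<B<C<(Other)(<A)(<C)`, this yields the topic list `["A", "B", "C"]` with `{"A", "C"}` rated positively.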

  21. Questions to Solve • Is implicit rating data really effective for user studies and for characterizing users, especially in complex information systems like DLs? • If we are to prove it statistically, what are the right hypotheses, and what are the right settings for hypothesis testing?

  22. Two Experiments in this Study • Two hypothesis tests to prove the effectiveness of Implicit Rating Data on characterizing users in DL • An ANOVA test for comparing implicit rating data types on distinguishing users in DL

  23. Hypothesis Tests • Hypotheses • H1: For any serious user with her own research interests and topics, the document collections referred to by the user show repeated (consistent) output. • H2: For serious users who share common research interests and topics, the document collections referred to by them show overlapping output. • H3: For serious users who don’t share any research interests and topics, the document collections referred to by them show different output.

  24. Data Used for Hypothesis Tests • Users’ learning topics and research interests are obtained “implicitly” by tracking users’ activities with the user tracking interface, without users needing to be aware. • Data collected by the user tracking system for 18 students at the Ph.D. and M.S. levels, majoring in CS, while using CITIDEL [6]

  25. Setting for Hypothesis Test 1 • Let H0 be the null hypothesis of H1; thus H0 is: Means (μ) of the frequency of document topics ‘proposed’ by the document clustering algorithm are NOT consistent for a user. • Simplified → a test of whether the population mean μ is statistically significantly greater than the hypothesized mean μ0.

  26. Setting for Hypothesis Test 2 • [Diagram] Users a, b, c, d, e, f are partitioned into user groups within the DL system • Let H0 be the null hypothesis of H2; thus H0 is: A user’s average ratio of topics overlapping with other persons in her groups, over her total referred topics (the in-group overlapping ratio, μ1), is the same as the average ratio of topics overlapping with other persons outside her groups, over her total referred topics (the out-group overlapping ratio, μ2)

  27. Setting for Hypothesis Test 2 • Oi,j : user i’s topic ratio overlapped with user j’s topics, over i’s total topics • G : total number of user groups • nK : total number of users in group K • N : total number of users in the system • In-group overlapping ratio: μ1 = (1/N) ΣK Σi∈K [ Σj∈K, j≠i Oi,j / (nK − 1) ] • Out-group overlapping ratio: μ2 = (1/N) ΣK Σi∈K [ Σj∉K Oi,j / (N − nK) ]
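Per-user versions of the two ratios can be computed directly from topic sets and a group partition. A sketch following the slide's definitions (averaging across users is left to the caller):

```python
def overlap(topics_i, topics_j):
    """O_ij: user i's topics overlapped with user j's, over i's total topics."""
    return len(topics_i & topics_j) / len(topics_i) if topics_i else 0.0

def overlap_ratios(groups, topics):
    """For each user, return (in-group, out-group) average overlapping
    ratios.  groups is a partition of users into sets; topics maps each
    user to her set of referred topics."""
    all_users = set(topics)
    result = {}
    for g in groups:
        for i in g:
            in_peers, out_peers = g - {i}, all_users - g
            mu1 = (sum(overlap(topics[i], topics[j]) for j in in_peers)
                   / len(in_peers)) if in_peers else 0.0
            mu2 = (sum(overlap(topics[i], topics[j]) for j in out_peers)
                   / len(out_peers)) if out_peers else 0.0
            result[i] = (mu1, mu2)
    return result
```

If grouping works, the first component should systematically exceed the second, which is exactly what hypothesis test 2 checks.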

  28. Setting for Hypothesis Test 2 • Simplified → a test of whether μ1 is statistically significantly greater than μ2 • Hypothesis 3 can be proven and estimated together with hypothesis test 2

  29. Results of Test 1 • Conditions: 95% confidence (test size α = 0.05), sample size n < 25, standard deviation σ unknown, i.i.d. random samples, normal distribution → one-sample T-test with estimated σ • Test statistics: sample mean ỹ = 1.1429 and sample standard deviation s = 0.2277 are observed from the experiment • Rejection rule: reject H0 if ỹ > μ0 + tα,n−1 s/√n • From the experiment, ỹ = 1.1429 > μ0 + tα,n−1 s/√n = 1.0934 • Therefore the decision is to reject H0 and accept H1 • 95% confidence interval for μ: 1.0297 ≤ μ ≤ 1.2561 • P-value (confidence of H0) = 0.0039
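The reported interval is consistent with the 18 retained subjects (22 participants minus 4 exclusions). A quick arithmetic check, with the two-sided critical value t0.025,17 ≈ 2.110 hardcoded from a t-table:

```python
import math

# Reported statistics from the slide; n = 18 retained subjects is an
# inference from the participants slide (22 minus 4 exclusions).
n, ybar, s = 18, 1.1429, 0.2277
t_crit = 2.110                      # t_{0.025, 17}, from a t-table

margin = t_crit * s / math.sqrt(n)
lo, hi = ybar - margin, ybar + margin
print(round(lo, 4), round(hi, 4))   # ≈ 1.0297 1.2561, matching the slide
```

The endpoints reproduce the slide's interval to four decimal places, which supports the inferred sample size.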

  30. Results of Test 2 • Conditions: 95% confidence (test size α = 0.05), two i.i.d. random samples from a normal distribution, two sample sizes n1 = n2 < 25, standard deviations σ1 and σ2 of each sample unknown → two-sample Welch T-test • From the experiment: sample mean of μ1, ỹ1 = 0.103; sample mean of μ2, ỹ2 = 0.0215; Satterthwaite’s degrees-of-freedom approximation dfs = 16.2; and Welch score w0 = 4.64 > t16.2, 0.05 = 1.745 • Therefore the decision is to reject H0 and accept H2 • 95% confidence intervals for μ1, μ2, and μ1 − μ2 are 0.0659 ≤ μ1 ≤ 0.1402, 0.0183 ≤ μ2 ≤ 0.0247, and 0.0468 ≤ μ1 − μ2 ≤ 0.1163, respectively • P-value (confidence of H0) = 0.0003
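The Welch statistic and Satterthwaite degrees of freedom used here can be computed as follows. This is a generic sketch: the slide's w0 = 4.64 and dfs = 16.2 came from the experiment's raw per-user overlapping ratios, which are not reproduced on the slide:

```python
import math

def welch_t(y1, y2):
    """Welch two-sample t statistic and Satterthwaite degrees of freedom,
    for samples with unknown, possibly unequal variances."""
    n1, n2 = len(y1), len(y2)
    m1, m2 = sum(y1) / n1, sum(y2) / n2
    v1 = sum((x - m1) ** 2 for x in y1) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in y2) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df
```

Welch's form is the right choice here because the in-group and out-group ratios have no reason to share a common variance.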

  31. Results of Hypothesis Testing • Statistically proved that implicit rating data is effective in characterizing users in complex information systems.

  32. Data Types and Characterizing Users • Previous similar studies were based on explicit user answers to surveys on their preferences, research, and learning topics → a basic flaw caused by the variety of academic terms. • Purpose: compare the effectiveness of different data types in characterizing users, using only automatically obtained objective data without subjective user answers.

  33. Data Types and Characterizing Users • Topics: noun phrases logged in User Models, generated by the document clustering system ‘LINGO’ from documents to which users referred • Terms: single words found in user queries and topics • ANOVA statistics: F(3,64) = 4.86, p-value = 0.0042, LSD = 1.7531
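The F(3,64) statistic corresponds to a one-way ANOVA across four data types. A generic sketch of that computation (the experiment's raw per-user values are not on the slide):

```python
def one_way_anova_F(samples):
    """One-way ANOVA F statistic for a list of samples (one per data
    type), the test behind the reported F(3,64).  Generic sketch."""
    k = len(samples)
    n = sum(len(g) for g in samples)
    grand = sum(sum(g) for g in samples) / n
    means = [sum(g) / len(g) for g in samples]
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(samples, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(samples, means))
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within), df_between, df_within
```

With 4 data types and 68 observations, the degrees of freedom come out as (3, 64), matching the reported statistic's shape.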

  34. Data Types and Characterizing Users • A higher ratio of in-group overlapping to out-group overlapping means a data type is more effective in characterizing users. • “Proposed topics” that appeared during use of the digital library were most effective; however, the differences between data types were not significant, except for “proposed terms”.

  35. Future Work • Large-scale experiment on NDLTD [7] • User Model DB visualization • Utilize implicit rating data for personalization and recommendation

  36. Conclusions • Built a user tracking system to collect implicit rating data in a DL. • Statistically proved that implicit rating data is effective in characterizing users in complex information systems like DLs. • Compared the effectiveness of data types in characterizing users without depending on users’ subjective answers.

  37. References • [1] Michael Pazzani, Daniel Billsus: Learning and Revising User Profiles: The Identification of Interesting Web Sites. Machine Learning 27, 1997, 313-331 • [2] Steve Jones, Sally Jo Cunningham, Rodger McNab: An Analysis of Usage of a Digital Library. In Proceedings of the 2nd ECDL, 1998, 261-277 • [3] Marcos André Gonçalves, Ming Luo, Rao Shen, Mir Farooq Ali, Edward A. Fox: An XML Log Standard and Tools for Digital Library Logging Analysis. In Proceedings of the 6th European Conference on Research and Advanced Technology for Digital Libraries, Rome, Italy, September 16-18, 2002 • [4] David M. Nichols: Implicit Rating and Filtering. In Proceedings of the 5th DELOS Workshop on Filtering and Collaborative Filtering, Budapest, Hungary, November 1997, 31-36 • [5] Joseph A. Konstan, Bradley N. Miller, David Maltz, Jonathan L. Herlocker, Lee R. Gordon, John Riedl: GroupLens: Applying Collaborative Filtering to Usenet News. Communications of the ACM, Vol. 40, No. 3, 1997, 77-87 • [6] CITIDEL: Available at http://www.citidel.org/, 2005 • [7] NDLTD: Available at http://www.ndltd.org/, 2005

  38. Review • Introduction • Prior Work • Web Trends and DL • Data for User Studies • Problem of Explicit Rating Data • Implicit Rating Data in DLs • Attributes of User Activity • User Tracking Interface and User Model DB • Questions and Experiments • Questions to Solve • Experiments, Hypothesis Tests, Data, Settings • Results of Hypothesis Testing • Data Types and Characterizing Users • Future Work • Conclusions • References
