Leveraging ... User Models
This paper explores how systems that employ Bayesian networks to model users can most effectively exploit data about users in general alongside data about the individual user, leading to more accurate learned user models. By comparing general and individual user models, it examines approaches ranging from collaborative filtering to adaptive methodologies. The study emphasizes the importance of representing causal relationships and contextual factors explicitly when predicting user behaviour. Experimental results demonstrate the effectiveness of adaptive models over general ones, particularly on speech metrics such as articulation rate, showcasing differential adaptation in a realistic scenario.
Presentation Transcript
Leveraging Data About Users in General in the Learning of Individual User Models* • Anthony Jameson, PhD (Psychology) • Adjunct Professor of HCI • Frank Wittig • CS Researcher • Saarland University, Saarbrücken, Germany *i.e. pooling knowledge to improve learning accuracy
Their Contributions • Answer the question: • How can systems that employ Bayesian networks to model users most effectively exploit data about users in general and data about the individual user? • Most previous approaches looked only at: • Learning general user models • Applying the model to users in general • Learning individual user models • Applying each model to its particular user
Collaborative Filtering and Bayesian Networks • Collaborative filtering systems can make individualised predictions based on a subset of users determined to be similar to the current user U • But sometimes we want a more interpretable model • Causal relationships are represented explicitly • Can predict the behaviour of U based on contextual factors • Can make inferences about unobserved contextual factors • Bayesian networks are more straightforwardly applied to this type of task
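The two capabilities above (predicting behaviour from context, and inferring unobserved context from behaviour) can be sketched with a tiny two-node Bayesian network. This is an illustrative toy, not the paper's actual network, and all probabilities here are invented for the example:

```python
# Hidden contextual factor (time pressure) causally influences an
# observable speech feature (fast articulation).  Probabilities are
# illustrative only, not taken from the paper.

# Prior over the unobserved context factor P(time pressure).
p_pressure = {True: 0.5, False: 0.5}

# CPT: P(fast articulation | time pressure).
p_fast_given_pressure = {True: 0.8, False: 0.3}

def posterior_pressure(fast_observed: bool) -> float:
    """Infer P(time pressure | articulation observation) via Bayes' rule."""
    def likelihood(pressure: bool) -> float:
        p = p_fast_given_pressure[pressure]
        return p if fast_observed else 1.0 - p
    joint = {h: p_pressure[h] * likelihood(h) for h in (True, False)}
    return joint[True] / (joint[True] + joint[False])

# Observing fast articulation raises the belief in time pressure
# from the 0.5 prior to about 0.73.
print(round(posterior_pressure(True), 3))
```

Because the arcs are causal, the same network supports both directions: forward prediction of speech features from context, and backward inference of unobserved context from observed speech.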
Collaborative Filtering Example – Recommending Products • Each user rates a subset of products • These ratings reflect the user's tastes as well as product quality • To recommend a CD for user U • First look for users especially similar to U • i.e. who have rated similar items in a similar way • Compute the average rating within this subset of users • Recommend products with high ratings • Used by Amazon.com, CDNow.com and MovieFinder.com [Herlocker et al. 1999]
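The recipe on this slide (find similar raters, average their ratings) can be sketched in a few lines. The ratings data, user names, and the choice of cosine similarity are all illustrative assumptions, not details from the paper:

```python
import math

# Hypothetical ratings: user -> {item: rating on a 1-5 scale}.
ratings = {
    "U":     {"cd1": 5, "cd2": 1, "cd3": 4},
    "alice": {"cd1": 5, "cd2": 2, "cd3": 4, "cd4": 5},
    "bob":   {"cd1": 1, "cd2": 5, "cd3": 2, "cd4": 1},
}

def cosine_sim(a, b):
    """Similarity computed over the items both users have rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    na = math.sqrt(sum(a[i] ** 2 for i in common))
    nb = math.sqrt(sum(b[i] ** 2 for i in common))
    return dot / (na * nb)

def predict(user, item, k=1):
    """Predict a rating as the average over the k most similar
    users who have rated the item."""
    neighbours = sorted(
        (u for u in ratings if u != user and item in ratings[u]),
        key=lambda u: cosine_sim(ratings[user], ratings[u]),
        reverse=True,
    )[:k]
    return sum(ratings[u][item] for u in neighbours) / len(neighbours)

# alice rates like U, bob rates the opposite way, so the prediction
# for the unrated cd4 follows alice.
print(predict("U", "cd4"))
```

Note the contrast with the Bayesian-network approach on the previous slide: this prediction is individualised but offers no explicit causal or contextual explanation.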
Their Experiment - Inferring Psychological States of the User • Simulated on a computer workstation • Navigating through a crowded airport while asking a mobile assistant questions via speech • Pictures appeared to prompt questions • Some participants were instructed to work under time pressure • Finish each utterance as quickly as possible • Some were instructed to perform a secondary task • “navigate” through the terminal (using arrow keys) • Speech input was later coded semi-automatically to extract features
Learning Models Used • Model #1 - General Model • Learned from the pooled experimental data via the maximum-likelihood method (not adapted to individual users) • Model #2 - Parametrised Model • Like the general model, but with per-user baselines for each speech metric • Model #3 - Adaptive (Differential) Model • Uses the AHUGIN method (next slide) • Model #4 - Individual Model • Learned entirely from the individual user's data
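For the general model, maximum-likelihood learning of a conditional probability table reduces to relative frequencies in the pooled data. A minimal sketch with invented observations (the variable names and data are illustrative, not the paper's):

```python
from collections import Counter

# Hypothetical pooled observations across all users:
# (time_pressure, articulation) pairs.
data = [
    (True, "fast"), (True, "fast"), (True, "slow"),
    (False, "slow"), (False, "slow"), (False, "fast"),
]

def ml_cpt(observations):
    """Maximum-likelihood CPT P(child | parent): the relative
    frequency of each child value within each parent value."""
    counts = Counter(observations)
    parent_totals = Counter(p for p, _ in observations)
    return {(p, c): counts[(p, c)] / parent_totals[p]
            for (p, c) in counts}

cpt = ml_cpt(data)
print(cpt[(True, "fast")])  # 2 of the 3 time-pressure utterances are fast
```

Because the counts are pooled over everyone, this model captures users in general but cannot reflect any one user's idiosyncrasies; that is exactly the gap the adaptive model is meant to close.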
A Tangent – AHUGIN [Olesen et al. 1992] • Adaptive HUGIN • No explicit dimensional representation for how users differ • The conditional probability tables (CPTs) of the Bayesian network are adapted with each observation • Thus a variety of individual differences can be adapted to, without the designer of the BN anticipating their nature
Equivalent Sample Size (ESS) • However, you also need to address the speed at which the CPTs adapt • The ESS represents the extent of the system's reliance on the initial general model, relative to each user's new data • This paper contributes a principled method of estimating the optimal ESS, which is generally not obvious a priori, nor consistent across the parts of the BN • Hence “differential” adaptation: different parts of the network adapt at different speeds
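The interplay between the general model and the ESS can be sketched as a Dirichlet-style count update on one CPT row. This is a simplified illustration of the adaptation idea, not the full AHUGIN algorithm (which, among other things, also handles cases where the parent configuration is not directly observed), and the numbers are invented:

```python
def adapt_row(probs, observed_index, ess):
    """One adaptation step on a CPT row: treat the row as pseudo-counts
    totalling `ess`, add the new observation, and renormalise."""
    counts = [p * ess for p in probs]
    counts[observed_index] += 1.0
    total = sum(counts)
    return [c / total for c in counts]

row = [0.8, 0.2]  # general-model row, e.g. P(fast, slow | time pressure)

# A small ESS lets the individual user's data dominate quickly ...
fast_adapt = adapt_row(row, observed_index=1, ess=2)
# ... while a large ESS keeps the row close to the general baseline.
slow_adapt = adapt_row(row, observed_index=1, ess=50)

print([round(p, 3) for p in fast_adapt])  # moves far toward "slow"
print([round(p, 3) for p in slow_adapt])  # barely moves
```

Giving each part of the network its own ESS, estimated from data rather than guessed, is what the slide calls differential adaptation.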
Speech Metrics; Results • Articulation Rate • Syllables articulated per second of speaking • General performs worst, other three on par • Individual takes a while to catch up, as with all metrics • Number of Syllables • The number of syllables in the utterance • Again, General is poor, Parametrised OK, Individual and Adaptive best • Disfluencies and Silent Pauses • Any of four types of disfluency; e.g. failing to complete a sentence • Duration of silent pauses relative to word number • All about equal (perhaps because these events occur so infrequently)
Summary • Now Dave can rip into it