
Context Learning Can Improve User Interaction



1. Context Learning Can Improve User Interaction
Sushil J. Louis, Anil K. Shankar
Evolutionary Computing Systems Lab (ECSL)
Department of Computer Science and Engineering, University of Nevada, Reno
http://www.cs.unr.edu/~anilk
anilk@cs.unr.edu, sushil@cs.unr.edu

2. Current UIs can be improved
• Hardware
  • Keyboard, mouse, clock
• Software
  • GUI
  • Little personalization, no long-term memory
  • Little use of context
• Advances in speech, vision, and text analysis have not been well integrated

3. Can extended context improve UI?
• What sensors should we use?
• How do we use extended context to improve user interaction?
• Can we personalize interaction?
• Personalized, transportable UI: the PC is a stationary robot

4. Simple sensors provide context
• Good vision, speech recognition, and image or speech understanding are hard AI problems
• What can we do with simple sensors?
  • Object recognition versus motion detection
  • Speech recognition versus speech detection
  • Keyboard activity
  • Mouse activity
  • Selected processes

5. Simple context allows richer user interaction
…But every user has different answers!…
• If there is no one in the room, should I pop up a scheduled appointment?
• If there is someone in the room, should I remind Jane?
• Should I turn down my music player when the telephone rings?
• Should I pause the current song when Jane leaves the room?

6. Sycophant uses ML techniques to learn context-to-action mappings
• Sycophant is a calendaring application that learns to predict preferred reminder actions
• Sycophant stores user interaction and context
• Sycophant learns to predict reminder type

7. Related Work
• Reba (Kulkarni 1992): the PC is a stationary robot
• Bailey and Adamczyk, 2004: interruptions disrupt the user's emotional state and task performance
• Hudson, Fogarty, et al., 2003: predict interruptibility from context; a Wizard of Oz study (simulated sensors) achieved 82.4% accuracy
• Sycophant learns whether or not to interrupt the user as well as how to interrupt the user
• Sycophant uses real sensors

8. Sycophant uses simple context to predict action
• Sensors for context
  • Keyboard, mouse
  • Motion: http://motion.sourceforge.net and a cheap Logitech webcam
  • Speech: http://www.speech.cs.cmu.edu, the Sphinx speech recognition engine; we only DETECT speech
  • Five processes: java, bash, terminal, xscreensaver, mozilla
• Sycophant reminder actions (four classes)
  • Visual (popup), Speech (TTS), Neither, Both
• The user has to provide feedback on action suitability
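As a concrete illustration, here is a minimal Python sketch of how one context observation and the four reminder classes might be represented. All names here (ReminderAction, SensorSnapshot) are hypothetical, not from Sycophant's actual code.

```python
# A minimal sketch of one context observation and the four reminder classes.
# The class and field names are hypothetical assumptions.
from dataclasses import dataclass, field
from enum import Enum

class ReminderAction(Enum):
    """The four reminder classes Sycophant predicts."""
    NEITHER = "neither"
    VISUAL = "popup"       # visual popup
    SPEECH = "tts"         # text-to-speech
    BOTH = "popup+tts"

@dataclass
class SensorSnapshot:
    """Raw activity flags for one 15-second sampling interval."""
    keyboard: bool
    mouse: bool
    motion: bool                                   # motion detector (webcam)
    speech: bool                                   # speech detected, not recognized
    processes: dict = field(default_factory=dict)  # e.g. {"java": True, "bash": False}
```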

9. Sycophant stores sensor data
• For each sensor and process, we record whether it was activated (sampled at 15-second intervals) and derive:
  • Any5: any activation in the last 5-minute interval
  • All5: active for all of the last 5 minutes
  • Any1: any activation in the last 1-minute interval
  • All1: active for all of the last 1 minute
  • Immed: active in the last 15 seconds
  • Count: number of times the sensor was active in the last 5 minutes
• (4 sensors + 5 processes) × 6 derived values + 1 user = 55 total features
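The six derived values fall directly out of the stream of 15-second samples. A minimal sketch, assuming 15-second sampling (so 5 minutes = 20 samples, 1 minute = 4 samples); the function and key names are hypothetical:

```python
# Derive the six per-sensor values from a boolean activation series.
def derive_features(samples):
    """samples: list of booleans, one per 15-second interval, newest last."""
    last5 = samples[-20:]  # last 5 minutes
    last1 = samples[-4:]   # last 1 minute
    return {
        "Any5": any(last5),    # any activation in the last 5 minutes
        "All5": all(last5),    # active throughout the last 5 minutes
        "Any1": any(last1),    # any activation in the last minute
        "All1": all(last1),    # active throughout the last minute
        "Immed": samples[-1],  # active in the last 15 seconds
        "Count": sum(last5),   # number of active intervals in 5 minutes
    }

print(derive_features([False] * 16 + [True, True, False, True]))
```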

10. Sycophant uses WEKA ML tools
• Zero-R: predicts the majority class
• One-R: one-level decision tree testing one attribute
• J48: decision tree learner, like C4.5
• Bagging: voting over N decision trees
• LogitBoost: additive logistic regression boosting
• Naïve Bayes: probabilistic classifier assuming feature independence
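A hedged sketch of this kind of comparison using scikit-learn stand-ins for the WEKA learners named above: scikit-learn has no exact LogitBoost, so GradientBoostingClassifier serves as a rough analogue, and a depth-1 tree stands in for One-R. The data set here is synthetic, not Sycophant's real logs.

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Stand-in data: 55 features, 4 reminder classes (synthetic placeholder).
X, y = make_classification(n_samples=200, n_features=55, n_informative=10,
                           n_classes=4, random_state=0)

learners = {
    "Zero-R": DummyClassifier(strategy="most_frequent"),
    "One-R-like stump": DecisionTreeClassifier(max_depth=1),
    "J48-like tree": DecisionTreeClassifier(),
    "Bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=10),
    "LogitBoost-like": GradientBoostingClassifier(),
    "Naive Bayes": GaussianNB(),
}

for name, clf in learners.items():
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
    print(f"{name}: {scores.mean():.2%} (+/- {scores.std():.2%})")
```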

11. Results
• Performance of the decision tree inducer with different numbers of features
  • Run J48 on all features, then choose the N most significant features
  • Show performance of J48 on those N features
• Not much difference in performance with fewer features
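A minimal sketch of this screening procedure, with scikit-learn in place of J48 and the tree's impurity-based importances as the significance ranking (an assumption; WEKA's notion of "most significant" may differ):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in data: 55 features as on the slide above (synthetic placeholder).
X, y = make_classification(n_samples=200, n_features=55, n_informative=10,
                           n_classes=4, random_state=0)

def top_n_accuracy(X, y, n):
    """Train a tree on all features, keep the n most important, re-evaluate."""
    tree = DecisionTreeClassifier(random_state=0).fit(X, y)
    top = np.argsort(tree.feature_importances_)[-n:]  # indices of the top-n features
    return cross_val_score(DecisionTreeClassifier(random_state=0),
                           X[:, top], y, cv=10).mean()

for n in (10, 25, 55):
    print(f"{n} features: {top_n_accuracy(X, y, n):.2%}")
```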

12. Results: Predict user action
• Performance of different ML algorithms on the 25-feature data set, on the four-class problem
• Small differences in performance

13. Results: Two-class problem (Class 1: Remind, Class 2: No reminder)
• Significant increase in performance
• From 65% to 80%
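Collapsing the four reminder classes into this two-class problem is a simple relabeling. A sketch, with hypothetical label strings standing in for Sycophant's actual encoding:

```python
# Map the four-class labels onto Remind / No reminder.
def to_two_class(action):
    """Popup, Speech, or Both become "Remind"; Neither becomes "No reminder"."""
    return "No reminder" if action == "Neither" else "Remind"

print([to_two_class(a) for a in ["Popup", "Speech", "Neither", "Both"]])
# -> ['Remind', 'Remind', 'No reminder', 'Remind']
```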

14. Results
• Sycophant performs at 65% on the four-class problem
• Sycophant performs at 80% on the two-class problem
• Removing the motion and speech detectors results in a statistically significant decrease in performance
• Sample rules:
  • IF keyboard Any5 AND speech Count > 2 AND no motion in the last 1 minute AND appointment time > 1220 THEN generate Speech AND Popup reminders
  • IF keyboard Any5 AND speech Count > 2 AND keyboard Any1 THEN generate Speech only
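The two sample rules transcribe directly into code. A sketch with hypothetical feature names; the fall-through default when neither rule fires is an assumption, not part of the slide:

```python
# The two sample rules above, as a Python sketch.
def reminder_action(ctx):
    """ctx: dict of derived context features for the current appointment."""
    if (ctx["keyboard_any5"] and ctx["speech_count"] > 2
            and not ctx["motion_any1"] and ctx["appt_time"] > 1220):
        return "Speech + Popup"
    if ctx["keyboard_any5"] and ctx["speech_count"] > 2 and ctx["keyboard_any1"]:
        return "Speech"
    return "Neither"  # assumed default when no rule fires
```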

15. Summary
• Sycophant uses machine learning tools to learn a mapping from user context to user actions
• Simple context provides good features
• Motion and speech sensors lead to a statistically significant performance improvement
• 65% accuracy on the four-class problem
• 80% accuracy on the two-class problem

16. Future work
• We are developing a general architectural framework for a context-learning layer for all applications
• Improve performance
  • We need more studies with other users and different types of users
  • Feature subset selection
  • Classifier systems

17. Acknowledgements
• Office of Naval Research, Contract Number N00014030104
• Evolutionary Computing Systems Lab (ECSL): Chris Miles, Kai Xu, Ryan Leigh
  • http://ecsl.cs.unr.edu
• Anil K. Shankar: http://www.cs.unr.edu/~anilk (code, other papers)
