Future Information Access
Joo-Hwee Lim, IPAL @ I2R, Singapore
Current Information Access
• Web and desktop search engines (the war!)
• Google, Yahoo, Microsoft (all US giant companies)
• Why do we search? [EU Workshop Report on Challenges of Future Search Engines, Brussels, 15 Sep 2005]
• Search effectively replaces traditional ways of engaging with information
• In both cultural and commercial terms, search and retrieval is becoming the single most significant facilitating technology, as important as electricity or transport
• Context: e-government, e-commerce, social life, relationships, leisure, security and intelligence, etc.
• A distinct commercial sector: EUR 40 billion a year by 2008
• Franco-German project "Quaero" (Latin for "I seek": hard to spell, and hard to seek)
Current Search Paradigm
• A TASK generates an Information Need; the user formulates and translates it into keywords; a search engine (e.g. Google Web, Image, Earth) returns relevant information for decision making
• Potential problems:
• semantic distortion
• non-ubiquitous
• input constraint
Future Information Access
We live and move around in the physical world; we want information and computing to become a more "continuous" part of our real lives
Location-Based Reminder
• Recall using current location and associative memory: "Did we like the food when we were here last time?"
• Date: Dec 5 2004 - "We just had a great sushi appetizer at this place!"
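A minimal sketch of how such a location-triggered recall could work, assuming past notes are stored with GPS coordinates. The `Reminder` fields, the 200 m trigger radius, and the coordinates are illustrative assumptions, not details from the talk:

```python
import math
from dataclasses import dataclass

@dataclass
class Reminder:
    lat: float    # where the note was captured
    lon: float
    date: str
    note: str

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def reminders_near(reminders, lat, lon, radius_m=200.0):
    """Return past notes captured within radius_m of the current position."""
    return [rm for rm in reminders if haversine_m(rm.lat, rm.lon, lat, lon) <= radius_m]

# Example: arriving back near the sushi place surfaces the old note.
log = [Reminder(1.3000, 103.8500, "Dec 5 2004",
                "We just had a great sushi appetizer at this place!")]
for rm in reminders_near(log, 1.3001, 103.8501):
    print(rm.date, "-", rm.note)
```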
New "Information", New Query
• Content (multimedia): enrich experience in life
• Query (closer to the task): invoke relevant experience
• New characteristics of the query: proactive, transparent, multimodal
• e.g. current location, time, weather, network bandwidth (image or video), device (small display), body state (hungry), emotion (fun seeking), images of the object/scene of interest, personal preference, calendar, relationships (extended preferences), etc.
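One way to picture such a proactive query is as a structured bundle of context signals rather than a keyword string. The sketch below is purely illustrative; every field name and default is an assumption, not part of any design described in the talk:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ContextQuery:
    """A query assembled from implicit context instead of typed keywords."""
    location: tuple[float, float]          # current GPS position
    time: str                              # local timestamp
    weather: Optional[str] = None
    bandwidth_kbps: Optional[int] = None   # choose between image and video results
    display: str = "small"                 # adapt the result layout to the device
    body_state: Optional[str] = None       # e.g. "hungry"
    emotion: Optional[str] = None          # e.g. "fun seeking"
    query_images: list[bytes] = field(default_factory=list)  # snapped object/scene
    preferences: dict = field(default_factory=dict)  # personal + "extended" (social)

# The system fills most fields transparently; the user at most snaps a photo.
q = ContextQuery(location=(1.2966, 103.7764),
                 time="2005-09-15T12:30",
                 body_state="hungry")
```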
Some possible scenarios
• [commerce] on the way to a business meeting: Swatch store nearby; wife's birthday; she is a Swatch collector; receive the latest design on the phone; send an image of a colleague's watch as a query; etc.
• [relationship] revisiting the honeymoon location: receive images/video of the first date, the wedding, the honeymoon
• [culture and education] a tourist in Vienna: a virtual experience of history, Mozart and his music, etc.; get more information by sending an image of the scene
• [security] camera phone from a terrorist suspect: images plus a location trace to identify potential threats
Collaborative Annotation
• metadata production and sharing by social networks
• e.g. propagate an annotation between two images captured in the same vicinity, based on their visual similarity: the caption "The Merlion Statue is a symbol of Singapore. It guards the entrance to the Singapore River." is copied onto an as-yet-unannotated photo of the same scene
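A hedged sketch of the propagation rule the example suggests: copy a caption only when two photos were taken in the same vicinity and look visually alike. The `Photo` fields, both thresholds, and the pluggable similarity/distance functions are illustrative assumptions, not the project's actual method:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Photo:
    location: tuple[float, float]   # where it was captured
    features: object                # visual descriptor (matcher-specific)
    caption: Optional[str] = None

def propagate_annotation(src: Photo, dst: Photo,
                         similarity: Callable[[Photo, Photo], float],
                         distance_m: Callable[[Photo, Photo], float],
                         max_dist_m: float = 100.0,
                         min_sim: float = 0.8) -> bool:
    """Copy src's caption to dst only if both photos come from the same
    vicinity AND are visually similar (thresholds are assumptions)."""
    if src.caption is not None \
            and distance_m(src, dst) <= max_dist_m \
            and similarity(src, dst) >= min_sim:
        dst.caption = src.caption
        return True
    return False
```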
Scientific Axis 1: Contextual Multimodal Interaction for Mobile Information Access (CMIMIA)
PI Singapore: Joo-Hwee Lim
PI France: Jean-Pierre Chevallet
Motivation for CMIMIA
• People are on the move
• Mobile communication infrastructure and devices are becoming pervasive
• Ubiquitous mobile information access will be a key information activity in our daily lives
• Current information retrieval technologies (web search, desktop search) cannot provide adequate solutions
• Context: task, user profile, current location, etc.
• Multimodal: images, audio, text descriptions
• Interaction: small display, multimedia, selection and adaptation of information
Proposed Framework for mobile content query, consumption, and enhancement
[Architecture diagram: mobile clients exchange multimodal queries/results with the CMIMIA server, which draws annotations from Trusted Sites (ontology-based annotation), Other Bloggers (collaborative annotation), and the General Web (annotation by examples); server-side modules handle content/context representation and learning, content selection, and interaction history/profiles]
• Convergence and beyond: research on contextual IR, multimodal query and fusion, ontology adaptation, and information selection for mobile interaction
Snap2Tell (IR on a small mobile device)
• From image to text: a reversed IR paradigm (instead of using text to search images)
• the index is a set of images "describing" each object/scene
• use image matching to select the object, then return the related text and audio description
• Image matching issues (see the sketch after this list):
• image processing on the phone
• contextual pruning, backward reasoning, etc.
• robust image matching (invariant to scale, rotation, perspective, illumination, occlusion, etc.)
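The slide leaves the matcher unspecified; one standard way to get tolerance to scale and rotation is local-feature matching, e.g. OpenCV's ORB descriptors with a ratio test. This is a sketch of that generic technique, not necessarily what Snap2Tell implemented:

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING)

def match_score(query_path: str, indexed_path: str) -> int:
    """Count feature matches (surviving the ratio test) between two images."""
    img1 = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(indexed_path, cv2.IMREAD_GRAYSCALE)
    _, des1 = orb.detectAndCompute(img1, None)
    _, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0
    good = 0
    for pair in bf.knnMatch(des1, des2, k=2):
        # Keep a match only if it is clearly better than the runner-up.
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good += 1
    return good

def best_scene(query_path: str, index: dict) -> str:
    """Pick the object/scene whose indexed images best match the query photo;
    its stored text/audio description would then be returned to the phone."""
    return max(index, key=lambda name: max(match_score(query_path, p)
                                           for p in index[name]))
```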
What's Next?
• New related projects:
• Snap2Go: image-based navigation assistant
• MobiLog: context-aware mobile blog producer
• New research challenges:
• precise relevance versus surprise and discovery
• discriminative image semantics discovery
• personal context modeling: adaptive ontology
• multimodal interaction: query and data fusion (see the sketch below)
• social networks to improve relevance: collaborative annotation
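For the "query and data fusion" challenge, a common baseline is late fusion: score each modality separately and combine the normalised scores with a weighted sum. The modality names and weights below are illustrative assumptions, not results from the talk:

```python
def fuse(scores: dict, weights: dict) -> float:
    """Weighted late fusion of normalised per-modality scores in [0, 1]."""
    return sum(weights.get(m, 0.0) * s for m, s in scores.items())

# One candidate result, scored independently per modality, then fused
# into a single ranking score.
candidate = {"image_match": 0.82, "text_relevance": 0.55, "context_prior": 0.90}
weights = {"image_match": 0.5, "text_relevance": 0.3, "context_prior": 0.2}
print(fuse(candidate, weights))
```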
Summing up…
"Information is becoming pervasive in real space, enchanting physical spaces with multimedia content that can deepen our experiences of them and making query and search a more 'continuous' part of our real lives"