
Supporting Video Library Exploratory Search: When Storyboards are not Enough


Presentation Transcript


  1. Supporting Video Library Exploratory Search: When Storyboards are not Enough
  Mike Christel, christel@cs.cmu.edu
  School of Computer Science, Carnegie Mellon University
  CIVR, July 8, 2008

  2. Talk Outline
  • Strengths of storyboards for the TRECVID interactive search task (quick review)
  • Types of search: beyond fact-finding
  • Exploratory search through multiple views
  • Evaluation hurdles
  • Discussion

  3. Storyboards: TRECVID Search Success
  • For the shot-based directed search information retrieval task evaluated at TRECVID, storyboards have consistently and overwhelmingly produced the best performance (see references in paper, e.g., [Snoek et al. 2007])
  • Motivated users can navigate through thousands of shot thumbnails in storyboards, better even than with “extreme video retrieval” interfaces: 2487 shots on average per 15-minute topic for TRECVID 2006 [Christel/Yan CIVR 2007]
  • Storyboard benefits: a packed visual overview, with only trivial interactive control needed for Shneiderman’s Visual Information-Seeking Mantra, “overview, zoom and filter, details on demand” (sketched below)
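To make that interaction concrete, here is a minimal sketch, not the Informedia implementation, of how a query-filtered storyboard grid could support overview, zoom, and filter, with details on demand left to a click handler; the Shot record, its fields, and the grid dimensions are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Shot:
    """One shot surrogate; fields are illustrative, not Informedia's actual schema."""
    shot_id: str
    thumbnail_path: str
    query_score: float  # relevance to the current query, 0..1

def storyboard_pages(shots: List[Shot], min_score: float = 0.0,
                     cols: int = 8, rows: int = 6) -> List[List[Shot]]:
    """Filter shots by score ('filter'), order them best-first ('overview'),
    and chunk them into fixed-size grid pages the user can page/zoom through."""
    kept = sorted((s for s in shots if s.query_score >= min_score),
                  key=lambda s: s.query_score, reverse=True)
    per_page = cols * rows
    return [kept[i:i + per_page] for i in range(0, len(kept), per_page)]

# 'Details on demand' would be a click handler showing metadata/playback for one shot.
```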

  4. Beyond Fact-Finding
  • CACM April 2006 special issue on this topic
  • G. Marchionini (“Exploratory Search: From Finding to Understanding,” CACM 49, April 2006) breaks search down into 3 types of activities:
  • Lookup (fact-finding; solving a stated/understood need)
  • Learn
  • Investigate
  • Computer scientists and information retrieval specialists emphasize evaluation of lookup activities (NIST TREC)
  • Real-world interest in learn/investigate: at a SUNY Buffalo workshop on an oral history collection, library science and humanities participants were quite interested in learn/investigate activities

  5. Exploratory Search (Demonstrations)
  • Examples where storyboards are still useful: visual review, e.g., of disaster field footage
  • Where storyboards fail:
  • Showing other facets such as time, space, co-occurrence, and named entities (When did disasters occur? Which ones? Where? A small facet-counting sketch follows this slide)
  • Providing collection understanding, a holistic view of what is in, say, hundreds of segments or thousands of matching shots
  • Providing a window into visually homogeneous results, e.g., results from a color search, a corpus of nothing but lecture slides, or head-and-shoulder interview shots
  • Claim: Storyboards are not sufficient, but they are part of a useful suite of tools/interfaces for interactive video search
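As a rough illustration of the facet questions above (When did disasters occur? Where?), the following sketch counts matching segments by year and by place so that a timeline or map view could summarize them; the segment dictionaries and their "year"/"places" keys are hypothetical, not the actual Informedia metadata schema.

```python
from collections import Counter
from typing import Iterable, Mapping, Tuple

def facet_counts(segments: Iterable[Mapping]) -> Tuple[Counter, Counter]:
    """Aggregate two facets (year, place) over a result set; each segment is
    assumed to carry optional 'year' and 'places' metadata."""
    by_year, by_place = Counter(), Counter()
    for seg in segments:
        if seg.get("year") is not None:
            by_year[seg["year"]] += 1
        for place in seg.get("places", []):
            by_place[place] += 1
    return by_year, by_place

# Example: the resulting counts would feed a timeline view and a map view.
results = [{"year": 2005, "places": ["New Orleans"]},
           {"year": 2004, "places": ["Banda Aceh", "Phuket"]}]
years, places = facet_counts(results)
print(years.most_common(), places.most_common(3))
```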

  6. Anecdotal Support for Claim
  • Collected 2006-2007 from:
  • Government analysts with news data
  • History students and faculty with oral history data
  • Views tested:
  • Timeline
  • Visualization By Example (VIBE) plot (query terms)
  • Map view
  • Named entity view (people, places, organizations)
  • Text-dominant views:
  • Nested Lists (pre-defined clusters by contributor)
  • Common Text (on-the-fly grouping of common phrases; see the sketch after this slide)
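The Common Text view groups results under phrases they share. A minimal on-the-fly approximation (my sketch, not the deployed view, which would also drop stopwords and handle phrase containment) is to index word n-grams from each segment's transcript and keep the phrases shared by several segments.

```python
from collections import defaultdict
from typing import Dict, List

def common_phrase_groups(transcripts: Dict[str, str],
                         n: int = 2, min_segments: int = 2) -> Dict[str, List[str]]:
    """Group segment ids under word n-grams shared by at least min_segments segments."""
    phrase_to_segs = defaultdict(set)
    for seg_id, text in transcripts.items():
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            phrase_to_segs[" ".join(words[i:i + n])].add(seg_id)
    groups = {p: sorted(ids) for p, ids in phrase_to_segs.items()
              if len(ids) >= min_segments}
    # Most widely shared phrases first, as a Common Text listing would show them.
    return dict(sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True))
```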

  7. Anecdotal Results
  • 38 HistoryMakers corpus users (mostly students, 15 female, average age 24): experienced web searchers with modest digital video experience
  • 6 intelligence analysts (1 female; 2 older than 40, 3 in their 30s, 1 in their 20s): very experienced text searchers, experienced web searchers, novice video searchers
  • View use was minimal aside from Common Text
  • Text titling and text transcripts were used frequently
  • Some evidence of collection understanding (e.g., differences in topics between New York and Chicago), but overall users stayed cautiously with default settings for their initial trial(s)

  8. Evaluation Hurdles
  • How does one evaluate information visualization for promoting exploratory video search?
  • Low-level simple tasks vs. complex real-world tasks
  • Even the traditional measures of effectiveness, efficiency, and satisfaction are problematic: is a “fast” interface for exploration good or bad?
  • HCI discount usability techniques offer some support, but ecological validity may limit the impact of conclusions (e.g., HCII students found Common Text well suited for History students)
  • Look to the field of Visual Analytics for help, e.g., Plaisant
  • “First hour with the system” studies, or “developer as user” insights, are too limiting; rather, consider Multi-dimensional In-depth Long-term Case studies (MILC)

  9. Concluding Points - 1
  • “Interactive” allows user direction to compensate for automation shortcomings and user vagaries
  • Interactive fact-finding is better than automated fact-finding in visual shot retrieval (TRECVID)
  • Interactive computer vision has successes (Harry Shum at Microsoft, Michael Brown et al. at NUS)
  • Interactive video summaries would allow the user to switch between coverage and detail emphasis (see Christel et al. CIVR 2008)
  • Interactive view/facet control == ??? (too early to tell)
  • Users need scaffolding/support to get started
  • Evaluations need to run longer term, in depth, with case studies to see what has benefit (MILC)

  10. Concluding Points - 2
  • Storyboards work well for visual overview
  • There are more tasks than just visual overview, and some of them require more than what storyboards afford (sports highlights/dynamics, BBC rushes review, collection understanding and association mining across multiple facets)
  • Future interactive video search interfaces are likely to include a mix of:
  • general interface capabilities (like dynamic query sliders for information visualization; a minimal slider sketch follows this slide)
  • specialized ones (like specific sports interfaces) to support interactive video search, built up from facets specific to a domain or user community
  • “Video” is a challenging area because grazing (as in TV viewing) has quite passive (or no) interaction requirements
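As an example of the “general interface capability” mentioned above, a dynamic query slider simply re-filters and re-renders the result set on every value change; the sketch below assumes a callback-driven UI and hypothetical shot metadata fields (score, year), not any particular toolkit.

```python
from typing import Callable, Dict, List

def make_slider_filter(results: List[Dict],
                       render: Callable[[List[Dict]], None]) -> Callable[[float, int], None]:
    """Return the callback a pair of UI sliders would invoke on every value change:
    filter by relevance threshold and latest broadcast year, then re-render."""
    def on_change(min_score: float, max_year: int) -> None:
        visible = [r for r in results
                   if r["score"] >= min_score and r["year"] <= max_year]
        render(visible)  # immediate visual feedback is the point of dynamic queries
    return on_change

# Usage: wire on_change to two sliders; each drag updates the view instantly.
on_change = make_slider_filter(
    [{"score": 0.9, "year": 1998}, {"score": 0.4, "year": 2006}],
    render=lambda rows: print(len(rows), "shots shown"))
on_change(0.5, 2008)  # prints "1 shots shown"
```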

  11. An Aside
  • From a keynote at ACM/IEEE JCDL 2008 in June: “Shakespeare, God, and Lonely Hearts: Transforming Data Access with Many Eyes” by Viégas and Wattenberg
  • Information visualization for lay people and as a social artifact (see http://www.many-eyes.com)
  • “Instead of scaling the data, scale the audience”
  • Leverage the web to get a crowd perspective on what works and why by fielding interactive video search mechanisms for use by lay people and as a social artifact
  • See the Bungee View work by Mark Derthick (Carnegie Mellon University) on the web for faceted browsing of image/video resources

  12. Credits
  Many members of the Informedia Project, the CMU research community, and The HistoryMakers contributed to this work, including:
  Informedia Project Director: Howard Wactlar
  The HistoryMakers Executive Director: Julieanna Richardson
  HistoryMakers Beta Testers: Joe Trotter (CMU History Dept.), SUNY at Buffalo and all UB Workshop participants, Schomburg Center for Research in Black Culture (NY Public Library), Randforce Associates, University of Illinois (3 campuses)
  Informedia User Interface: Ron Conescu, Neema Moraveji
  Informedia Processing: Alex Hauptmann, Ming-yu Chen, Wei-Hao Lin, Rong Yan, Jun Yang
  Informedia Library Essentials: Bob Baron, Bryan Maher
  This work was supported by the National Science Foundation under Grant Nos. IIS-0205219 and IIS-0705491.
