
High-Performance Distributed Multimedia Computing


Presentation Transcript


  1. High-Performance Distributed Multimedia Computing. Frank Seinstra, Jan-Mark Geusebroek. MultimediaN (BSIK Project), Intelligent Systems Lab Amsterdam, Informatics Institute, University of Amsterdam

  2. MultimediaN and DAS-3

  3. MultimediaN and high-performance computing. Van Essen et al., Science 255, 1999.

  4. A Real Problem, part 1… (automatic analysis?)
  • News broadcast, September 21, 2005 (see video1.wmv)
  • Police investigating over 80,000 (!) CCTV recordings
  • First match found no earlier than 2.5 months after the July 7 attacks

  5. Image/Video Content Analysis
  • Lots of research + benchmark evaluations:
    • PASCAL-VOC (10,000+ images), TRECVID (200+ hours of video)
  • A problem of scale (see the back-of-envelope sketch below):
    • At least 30-50 hours of processing time per hour of video!
    • Beeld & Geluid => 20,000 hours of TV broadcasts per year
    • NASA => over 850 GB of hyper-spectral image data per day
    • London Underground => over 120,000 years of processing … !!!
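
To make the scale problem concrete, here is a quick back-of-envelope calculation that only re-uses the figures on this slide; the 40x factor is simply the midpoint of the quoted 30-50x range.

```python
# Back-of-envelope scale estimate, re-using the figures quoted on the slide.
hours_per_year = 20_000      # Beeld & Geluid: hours of TV broadcast archived per year
processing_factor = 40       # midpoint of the quoted 30-50 hours of compute per hour of video

cpu_hours = hours_per_year * processing_factor
years_sequential = cpu_hours / (24 * 365)
print(f"{cpu_hours:,} CPU-hours per archive year "
      f"~= {years_sequential:.0f} years on a single machine")
# -> 800,000 CPU-hours, i.e. roughly 91 years of sequential processing
```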

  6. High Performance Computing
  • Solution:
    • Very, very large scale parallel and distributed computing
  • New problem:
    • Very, very complicated software
  • Solution: a tool to make parallel & distributed computing transparent to the user
  [Diagram: since 1998, “Parallel-Horus” connects the user to Beowulf-type clusters (familiar programming, easy execution) and wide-area Grid systems]

  7. Parallel-Horus: Features (1)
  • Sequential programming: the Parallel-Horus sequential API rests on +/- 18 parallelizable patterns (MPI); see the sketch below
  Seinstra et al., Parallel Computing, 28(7-8):967-993, August 2002
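
The pattern idea can be illustrated outside Parallel-Horus: the mpi4py sketch below shows one data-parallel scatter/operate/gather pattern of the kind the sequential API builds on. The image size, threshold operation, and row-block decomposition are illustrative assumptions, not Parallel-Horus code.

```python
# Minimal illustration of one data-parallel pattern (scatter -> ImageOp -> gather),
# written with mpi4py; run e.g. with: mpiexec -n 4 python pattern.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

H, W = 512, 512                      # toy image size; H must be divisible by `size` here
image = np.random.rand(H, W) if rank == 0 else None

# Scatter row blocks of the image over all nodes.
block = np.empty((H // size, W), dtype=np.float64)
comm.Scatter(image, block, root=0)

# "ImageOp": any pixel-wise operation, here a simple threshold.
block = (block > 0.5).astype(np.float64)

# Gather the partial results back on the root node.
result = np.empty((H, W), dtype=np.float64) if rank == 0 else None
comm.Gather(block, result, root=0)

if rank == 0:
    print("processed image shape:", result.shape)
```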

  8. Parallel-Horus: Features (2)
  • Lazy parallelization (applied on the fly):
    • Don’t do this: Scatter, ImageOp, Gather, Scatter, ImageOp, Gather
    • Do this: Scatter, ImageOp, ImageOp, Gather (avoid the intermediate communication); see the sketch below
  Seinstra et al., IEEE Trans. Par. Dist. Syst., 15(10):865-877, October 2004
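
A minimal sketch of the lazy-parallelization idea, under the assumption that it can be modelled as a per-image distribution state that only triggers communication when the state has to change; the class and method names below are invented for illustration and are not the published algorithm.

```python
# Simplified sketch of lazy parallelization: consecutive parallelizable
# operations reuse the scattered image, so the redundant gather/scatter
# pair between them is elided.
class DistributedImage:
    def __init__(self, data):
        self.data = data
        self.state = "full"          # "full" on one node, or "scattered" over nodes

    def _ensure(self, needed):
        if self.state != needed:
            # Placeholder for the real scatter/gather communication step.
            print(f"communicate: {self.state} -> {needed}")
            self.state = needed

    def image_op(self, fn):
        self._ensure("scattered")    # parallel op needs the scattered state
        self.data = fn(self.data)
        return self

    def full_image(self):
        self._ensure("full")         # gather only when the result is needed
        return self.data

img = DistributedImage([1, 2, 3])
img.image_op(lambda d: [2 * x for x in d]).image_op(lambda d: [x + 1 for x in d])
print(img.full_image())              # one scatter, one gather, two ImageOps
```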

  9. Extensions for Distributed Computing
  • Wide-Area Multimedia Services (see the sketch below)
  [Diagram: Parallel-Horus clients connected to one or more Parallel-Horus servers]
  • User transparency?
  • Abstractions & techniques?
  • Grid connectivity problems?
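
As a rough illustration of the client/servers layout (and only that), the sketch below farms frames out to a few simulated servers via a thread pool; the endpoint names and dispatch policy are assumptions, and the real grid connectivity, transparency, and abstraction issues are exactly the open questions listed above.

```python
# Illustrative client that distributes frames over several (simulated) servers.
from concurrent.futures import ThreadPoolExecutor

SERVERS = ["cluster-a", "cluster-b", "cluster-c"]      # hypothetical endpoints

def process_on_server(server, frame_id):
    # Stand-in for a remote call to a Parallel-Horus style compute server.
    return f"{server}: processed frame {frame_id}"

def client(frame_ids):
    with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
        futures = [pool.submit(process_on_server, SERVERS[i % len(SERVERS)], fid)
                   for i, fid in enumerate(frame_ids)]
        return [f.result() for f in futures]

print("\n".join(client(range(6))))
```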

  10. Color-Based Object Recognition (1)
  • Our solution:
    • Place a ‘retina’ over the input image
    • Each of 37 ‘retinal areas’ serves as a ‘receptive field’
  • For each receptive field:
    • Obtain a set of local histograms, invariant to shading / lighting
    • Estimate Weibull parameters β and γ for each histogram
  • Hence: scene description by a set of 37 x 4 x 3 = 444 parameters (see the sketch below)
  Geusebroek, British Machine Vision Conference, 2006.
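
A hedged sketch of the per-receptive-field measurement, assuming a Gaussian-derivative filter and a maximum-likelihood Weibull fit; the actual filters and estimator used in the BMVC 2006 work may differ.

```python
# Hedged sketch: fit Weibull scale (beta) and shape (gamma) parameters to the
# distribution of filter responses in one receptive field / color channel.
# The derivative filter and the MLE fit below are illustrative choices only.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.stats import weibull_min

def weibull_params(field):
    """field: 2-D array holding one color channel of one receptive field."""
    dx = gaussian_filter1d(field, sigma=1.0, axis=1, order=1)   # edge responses
    responses = np.abs(dx).ravel() + 1e-6                       # Weibull needs x > 0
    gamma, _, beta = weibull_min.fit(responses, floc=0)         # (shape, loc, scale)
    return beta, gamma

# Repeating this per receptive field (37), per parameter set (4) and per color
# channel (3) builds up the 37 x 4 x 3 = 444-dimensional scene description.
rng = np.random.default_rng(0)
print(weibull_params(rng.random((64, 64))))
```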

  11. Color-Based Object Recognition (2)
  • Learning phase:
    • The set of 444 parameters is stored in a database
    • So: learning from 1 example (“a hedgehog”), under a single visual setting
  • Recognition phase (see the matching sketch below):
    • Validation by showing objects under at least 50 different conditions:
      • Lighting direction
      • Lighting color
      • Viewing position
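
Put as code, the two phases reduce to storing one 444-dimensional descriptor per object and matching a query descriptor against the database; the Euclidean nearest-neighbour rule below is an assumption standing in for whatever similarity measure the real system uses.

```python
# Minimal sketch of learning (store one descriptor per object) and
# recognition (nearest-neighbour match); the distance measure is an assumption.
import numpy as np

database = {}                                  # object label -> 444-d descriptor

def learn(label, descriptor):
    database[label] = np.asarray(descriptor, dtype=float)

def recognize(descriptor):
    q = np.asarray(descriptor, dtype=float)
    return min(database, key=lambda label: np.linalg.norm(database[label] - q))

rng = np.random.default_rng(1)
learn("hedgehog", rng.random(444))                       # learning from a single example
query = database["hedgehog"] + 0.01 * rng.random(444)    # same object, new conditions
print(recognize(query))                                  # -> "hedgehog"
```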

  12. Amsterdam Library of Object Images (ALOI)
  • In a laboratory setting:
    • 300 objects correctly recognized under all (!) visual conditions
    • 700 remaining objects ‘missed’ under extreme conditions only
  Geusebroek et al., Int. J. Comput. Vis., 61(1):103-112, January 2005

  13. Example: Object Recognition
  See also: http://www.science.uva.nl/~fjseins/aibo.html

  14. Example: Object Recognition (see video2.wmv)
  Demonstrated live (among others) at ECCV 2006, June 8-11, 2006, Graz, Austria

  15. Performance / Speedup on DAS-2
  [Figures: single-cluster and four-cluster client-side speedup]
  • Recognition on a single machine: +/- 30 seconds
  • Using multiple clusters: up to 10 frames per second (see the arithmetic sketch below)
  • Insightful: even ‘distant’ clusters can be used effectively for close to ‘real-time’ recognition
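
Making the implied speedup explicit, under the assumption that the ~30 second single-machine figure refers to one frame:

```python
# Rough speedup arithmetic from the figures on the slide (illustrative only,
# assuming the ~30 s single-machine figure refers to one frame).
single_machine_seconds_per_frame = 30
multi_cluster_frames_per_second = 10

effective_speedup = single_machine_seconds_per_frame * multi_cluster_frames_per_second
print(f"effective speedup ~= {effective_speedup}x over a single machine")   # ~300x
```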

  16. Current & Future Work
  • Very Large-Scale Distributed Multimedia Computing:
    • Overcome practical annoyances:
      • Software portability, firewall circumvention, authentication, …
    • Optimization and efficiency:
      • Tolerance to dynamic Grid circumstances, …
      • Systematic integration of MM-domain-specific knowledge, …
    • Deal with non-trivial communication patterns:
      • Heavy intra- & inter-cluster communication, …
    • Reach the end users:
      • Programming models, execution scenarios, …
  • Collaboration with VU (Prof. Henri Bal) & GridLab
    • Ibis: www.cs.vu.nl/ibis/
    • Grid Application Toolkit: www.gridlab.org

  17. Conclusions
  • Effective integration of results from two largely distinct research fields
  • Ease of programming => quick solutions
  • With DAS-3 / StarPlane we can start to take on much more complicated problems
  • But most of all: DAS-3 is very significant for future MM research

  18. The End (see video3.avi)
