
Presentation Transcript


1. Project 35: Visual Surveillance of Urban Scenes

2. Principal Investigators
• David Clausi, Waterloo
• Geoffrey Edwards, Laval
• James Elder, York (Project Leader)
• Frank Ferrie, McGill (Deputy Leader)
• James Little, UBC

3. Partners
• Honeywell (Jeremy Wilson)
• CAE (Ronald Kruk)
• Aimetis (Mike Janzen)

4. Participants

5. Goals
• Visual surveillance of urban scenes can potentially be used to enhance human safety and security, to detect emergency events, and to respond appropriately to these events.
• Our project investigates the development of intelligent systems for detecting, identifying, tracking and modeling dynamic events in an urban scene, as well as automatic methods for inferring the three-dimensional static or slowly-changing context in which these events take place.

6. Results
• Here we demonstrate new results in the automatic estimation of 3D context and automatic tracking of human traffic from urban surveillance video.
• The CAE S-Mission real-time distributed computing environment is used as a substrate to integrate these intelligent algorithms into a comprehensive urban awareness network.

7. CAE STRIVE Architecture [Architecture diagram: dispatchers, HLA logic, historic data logs and other types of logs. Proprietary, CAE Inc. 2007]

  8. 3D Urban Awareness from Single-View Surveillance Video

9. 3D Urban Awareness
• 3D scene context (e.g., ground-plane information) is crucial for the accurate identification and tracking of human and vehicular traffic in urban scenes.
• 3D scene context is also important for human interpretation of urban surveillance data.
• Limited static 3D scene context can be estimated manually, but this is time-consuming and cannot adapt to slowly-changing scenes.

10. [Image-only slide: example of manually specified 3D scene context]

11. Ultimate Goal • Our ultimate goal is to automate this process!

12. Immediate Goal • Automatic estimation of the three vanishing points corresponding to the “Manhattan directions”.

13. Manhattan Frame Geometry • An edge is aligned to a vanishing point if its interpretation-plane normal is orthogonal to the vanishing-point vector on the Gaussian sphere (i.e., their dot product is 0), as expressed below.
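In symbols (the notation is ours, not from the slides): writing $\mathbf{n}_{ij}$ for the unit interpretation-plane normal of edge $E_{ij}$ and $\mathbf{v}_k$ for the unit vanishing-point vector of Manhattan direction $k$, the alignment condition is

$$\mathbf{n}_{ij} \cdot \mathbf{v}_k = 0, \qquad \|\mathbf{n}_{ij}\| = \|\mathbf{v}_k\| = 1.$$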

14. Mixture Model
• Each edge Eij in the image is generated by one of four possible kinds of scene structure:
• m1-m3: a line in one of the three Manhattan directions
• m4: non-Manhattan structure
• The observable properties of each edge Eij are its position and angle.
• The likelihoods of these observations are co-determined by:
• the causal process (m1-m4)
• the rotation Ψ of the Manhattan frame relative to the camera
[Figure: graphical model relating the rotation Ψ and causes mi to edges E11, E12, E21, E22]
The mixture likelihood is written out below.
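As a hedged formalization (notation consistent with the alignment condition above, but assumed by us): the likelihood of an edge under frame rotation $\Psi$ is the four-component mixture

$$p(E_{ij} \mid \Psi) = \sum_{k=1}^{4} p(m_k)\, p(E_{ij} \mid m_k, \Psi),$$

where the $p(m_k)$ are mixture priors and $p(E_{ij} \mid m_4, \Psi)$ is a broad outlier term for non-Manhattan structure.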

15. Mixture Model • Our goal is to estimate the Manhattan frame Ψ from the observable data Eij. [Figure: same graphical model as slide 14]

16. E-M Algorithm: E Step • Given an estimate of the Manhattan coordinate frame, calculate the mixture probabilities for each edge. [Figure: per-edge probabilities for m1]

17. E-M Algorithm: E Step • As above, for component m2. [Figure: per-edge probabilities for m2]

18. E-M Algorithm: E Step • As above, for component m3. [Figure: per-edge probabilities for m3]

19. E-M Algorithm: E Step • As above, for component m4. [Figure: per-edge probabilities for m4]

20. E-M Algorithm: M Step • Given estimates of the mixture probabilities for each edge, update our estimate of the Manhattan coordinate frame. A sketch of the full E-M loop follows.
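A minimal NumPy sketch of the loop, assuming an exponential penalty on the squared dot product between edge normals and candidate directions; the error model, priors, and concentration value are our assumptions, not taken from the slides:

```python
import numpy as np

def em_manhattan(edge_normals, psi0, n_iters=20,
                 priors=(0.3, 0.3, 0.3, 0.1), kappa=50.0):
    """edge_normals: (N, 3) unit interpretation-plane normals, one per edge.
    psi0: initial 3x3 rotation whose columns are the Manhattan directions."""
    psi = np.asarray(psi0, dtype=float)
    edge_normals = np.asarray(edge_normals, dtype=float)
    for _ in range(n_iters):
        # E step: responsibilities p(m_k | E_ij, psi) for each edge.
        # The alignment likelihood peaks where n . v_k = 0 (orthogonality).
        dots = edge_normals @ psi                         # (N, 3)
        lik = np.exp(-kappa * dots ** 2)                  # m1..m3
        outlier = np.full((len(lik), 1), np.exp(-kappa * 0.25))  # m4
        w = np.hstack([lik, outlier]) * np.asarray(priors)
        w /= w.sum(axis=1, keepdims=True)

        # M step: for each direction, take the unit vector most orthogonal
        # to the weighted normals (smallest-eigenvalue eigenvector of the
        # weighted scatter matrix), then snap back to the nearest rotation.
        v = np.empty((3, 3))
        for k in range(3):
            scatter = (edge_normals * w[:, k:k + 1]).T @ edge_normals
            v[:, k] = np.linalg.eigh(scatter)[1][:, 0]
        u, _, vt = np.linalg.svd(v)
        psi = u @ vt
    return psi, w
```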

21. Results

22. Results • Convergence of the E-M algorithm for an example test image. [Figure: test image and convergence plot]

23. Results • Example: lines through top 10 edges in each Manhattan direction

  24. Tracking Human Activity

  25. Single-Camera Tracking

26. Tracking Using Only Colour / Grey Scale • Tracking using only grey scale or colour features can lead to errors

27. Tracking Using Dynamic Information • Incorporating dynamic information enables successful tracking

  28. Tracking over Multi-Camera Network

29. Goal • Integrate tracking of human activity from multiple cameras into a world-centred activity map

30. Input: left and right sequences

31. Independent Tracking
• Each person is tracked independently in each camera using Boosted Particle Filters.
• Background subtraction identifies possible detections of people, which are then tracked with a particle filter using brightness histograms as the observation model.
• Tracks are projected via a homography to the street map, and then Kalman filtered independently based on the error model (see the sketch after this list).
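A minimal sketch of the projection and filtering steps, assuming a known camera-to-map homography H and a constant-velocity motion model; the noise magnitudes are placeholders for the per-camera error model, which the slides do not specify:

```python
import numpy as np

def to_map(H, p):
    """Project an image point p = (u, v) to the street map via the 3x3
    homography H (homogeneous divide at the end)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

class KalmanCV:
    """Constant-velocity Kalman filter on map coordinates (x, y, vx, vy)."""
    def __init__(self, xy, q=1e-2, r=1.0):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4)
        self.Q = q * np.eye(4)                  # assumed process noise
        self.R = r * np.eye(2)                  # assumed measurement noise
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)

    def step(self, z, dt=1.0):
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], float)
        # Predict under the constant-velocity model.
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        # Update with the homography-projected detection z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

In use, each camera would run to_map on its per-frame detections and feed the results to its own KalmanCV instance, keeping the per-camera tracks independent as the slide describes.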

32. Independent tracks

33. Integration • Tracks are averaged to approximate joint estimation of composite errors, as formalized below.
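The slides do not say how the average is weighted; one standard choice (our assumption) is the inverse-covariance weighted average of the per-camera map estimates $\mathbf{x}_c$ with error covariances $\Sigma_c$:

$$\hat{\mathbf{x}} = \Big(\sum_c \Sigma_c^{-1}\Big)^{-1} \sum_c \Sigma_c^{-1}\,\mathbf{x}_c,$$

which reduces to the plain average when all cameras are equally reliable.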

34. Merged trajectories

35. Future Work
• Integrated multi-camera background subtraction
• Integrated particle filter in world coordinates using joint observation model over all sensors in network.

  36. Tracking in Dynamic Background Settings

37. Foreground Extraction and Tracking in Dynamic Background Settings
• Extracting objects from dynamic backgrounds is challenging.
• Numerous applications:
• Human surveillance
• Customer counting
• Human safety
• Event detection
• In this example, the problem is to extract people from surveillance video as they enter a store through a dynamic sliding door.

38. Methodology Overview
• Video sequences are pre-processed and corner feature points are extracted.
• Corners are tracked to obtain trajectories of the moving background.
• Background trajectories are learned and a classifier is formed.
• Trajectories of all moving objects in the test image sequences are classified by the learned model into either background or foreground trajectories.
• Foreground trajectories are kept in the image sequence, and the objects corresponding to those trajectories are tagged as foreground.
• A sketch of the classification step follows this list.
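A minimal sketch of the trajectory classifier, assuming a simple displacement-statistics feature and a Mahalanobis-distance test against the learned background model; the actual features and classifier used are not specified on the slides:

```python
import numpy as np

def traj_features(traj):
    """Illustrative trajectory descriptor: mean and spread of the
    per-frame displacements of a tracked corner."""
    d = np.diff(np.asarray(traj, dtype=float), axis=0)   # (T-1, 2)
    return np.hstack([d.mean(axis=0), d.std(axis=0)])

def learn_background(background_trajs):
    """Offline step: model background trajectories by the mean and
    covariance of their features."""
    F = np.array([traj_features(t) for t in background_trajs])
    return F.mean(axis=0), np.cov(F.T) + 1e-6 * np.eye(F.shape[1])

def classify(traj, mu, cov, thresh=9.0):
    """Online step: trajectories far (in Mahalanobis distance) from the
    background model are tagged foreground. The threshold is assumed."""
    f = traj_features(traj) - mu
    d2 = f @ np.linalg.solve(cov, f)
    return "foreground" if d2 > thresh else "background"
```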

39. Demo 1: Successful Tracking and Classification
• This demo illustrates a case of successful tracking and classification of an entering person.
• The person is classified as foreground based on the extracted trajectories.

40. Demo 2: Failed Tracking but Successful Classification
• Demo 2 shows a case where the tracker loses track of the person after a few frames.
• However, the classification is still correct, since only a small number of frames is required to identify the trajectory.

  41. Recognizing Actions using the Boosted Particle Filter

42. Motivation [Figure: example input and output, frames 682 and 814]

43. System Diagram [Figure: extracted image patches feed the SPPCA template updater, which predicts new templates and is updated each frame]

44. HSV Color Histogram
• The HSV color histogram is composed of:
• a 2D histogram of Hue and Saturation
• a 1D histogram of Value
[Figure: 2D Hue-Saturation histogram plus 1D Value histogram] A hedged compute sketch follows.
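A minimal OpenCV sketch of this descriptor; the bin counts and normalization are our assumptions, since the slides give only the H-S / V decomposition:

```python
import cv2
import numpy as np

def hsv_histogram(patch_bgr, h_bins=10, s_bins=10, v_bins=10):
    """2D Hue-Saturation histogram concatenated with a 1D Value
    histogram, per slide 44. Bin counts are assumed values."""
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
    hs = cv2.calcHist([hsv], [0, 1], None, [h_bins, s_bins],
                      [0, 180, 0, 256])                      # H and S jointly
    v = cv2.calcHist([hsv], [2], None, [v_bins], [0, 256])   # V alone
    hist = np.concatenate([hs.ravel(), v.ravel()])
    return hist / max(hist.sum(), 1e-9)                      # sum to 1
```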

45. The HOG Descriptor [Figure: image gradients are pooled into SIFT-like descriptors to form the HOG descriptor] A short compute sketch follows.
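For reference, a short scikit-image sketch of computing a HOG descriptor; the parameters shown are conventional defaults, not values from the slides:

```python
from skimage.color import rgb2gray
from skimage.feature import hog

def hog_descriptor(patch_rgb):
    """HOG: local gradient-orientation histograms pooled over cells
    and block-normalized (all parameters are assumed defaults)."""
    return hog(rgb2gray(patch_rgb),
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm='L2-Hys')
```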

46. Template Updating: Motivation
• Tracking: search for the location in the image whose image patch is similar to a reference image patch, the template.
• Template updating: templates should be updated because the players change their pose.
[Figure: matching candidate patches between frame 677 and frame 687]

47. Template Updating: Operations
• Offline
• Learning: learn the template model from training data
• Online
• Prediction: predict the new template used in the next frame
• Updating: update the template model using the current observation
A simplified sketch follows.
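A deliberately simplified stand-in for these operations, using a single incremental PCA subspace in place of the switching SPPCA model, so this illustrates the offline/online split rather than the authors' actual method:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

class TemplateUpdater:
    """Single-subspace stand-in for the SPPCA updater of slides 47-48."""

    def __init__(self, training_patches, n_components=10):
        # Offline learning: fit the template subspace from training data
        # (requires at least n_components training patches).
        X = np.asarray(training_patches, dtype=float).reshape(
            len(training_patches), -1)
        self.shape = np.asarray(training_patches[0]).shape
        self.pca = IncrementalPCA(n_components=n_components)
        self.pca.fit(X)

    def predict(self, patch):
        # Online prediction: project the current patch onto the subspace
        # and reconstruct; the reconstruction is the next frame's template.
        x = np.asarray(patch, dtype=float).reshape(1, -1)
        return self.pca.inverse_transform(
            self.pca.transform(x)).reshape(self.shape)

    def update(self, patches):
        # Online updating: refine the subspace with current observations
        # (each batch must also contain at least n_components patches).
        X = np.asarray(patches, dtype=float).reshape(len(patches), -1)
        self.pca.partial_fit(X)
```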

48. SPPCA Template Updater [Figure: the updater predicts new templates for the tracker and is updated from the new observations]

49. Graphical Model of SPPCA [Figure: a discrete switch selects an eigenspace; a continuous coordinate on that eigenspace generates the continuous observation] The generative equations are written out below.
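In equations (a standard switching-PPCA form; the symbols are ours, not from the slides): a discrete switch $s_t$ selects an eigenspace, a continuous coordinate $\mathbf{z}_t$ on that eigenspace is drawn, and the continuous observation $\mathbf{y}_t$ is generated as

$$\mathbf{z}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I}), \qquad \mathbf{y}_t = \mathbf{W}_{s_t}\,\mathbf{z}_t + \boldsymbol{\mu}_{s_t} + \boldsymbol{\epsilon}_t, \qquad \boldsymbol{\epsilon}_t \sim \mathcal{N}(\mathbf{0}, \sigma_{s_t}^2 \mathbf{I}).$$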

50. Action Recognizer
• Input: a sequence of image patches
• Output: action labels (e.g., skating down, skating left, skating right, skating up)
