
Real-Time Decentralized Articulated Motion Analysis and Object Tracking From Videos


Presentation Transcript


  1. Real-Time Decentralized Articulated Motion Analysis and Object Tracking From Videos Wei Qu, Member, IEEE, and Dan Schonfeld, Senior Member, IEEE

  2. OUTLINE • INTRODUCTION • DAOT FRAMEWORK • HAOT FRAMEWORK • EXPERIMENTAL RESULTS

  3. INTRODUCTION • Articulated object tracking is challenging because the computational complexity grows exponentially with the object's degrees of freedom and because of frequent self-occlusions. • In this paper, we present two new articulated motion analysis and object tracking approaches: DAOT and HAOT.

  4. DECENTRALIZED FRAMEWORK FOR ARTICULATED MOTION ANALYSIS AND OBJECT TRACKING • A. Articulated Object Representation: An articulated object can be represented by a graphical model, as shown in Fig. 1.
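
As a rough illustration of this representation (not the paper's implementation), the sketch below encodes parts as graph nodes and physical links as edges; the part names and the per-part state layout are assumptions.

```python
# Minimal sketch of an articulated object as a graphical model (assumed part
# names and state layout): nodes are parts, undirected edges link physically
# connected parts, and each part carries its own low-dimensional state.

from dataclasses import dataclass

@dataclass
class PartState:
    x: float = 0.0       # image-plane position
    y: float = 0.0
    theta: float = 0.0   # in-plane rotation of the part
    scale: float = 1.0

parts = {1: "torso", 2: "left upper arm", 3: "left forearm",
         4: "right upper arm", 5: "right forearm"}
edges = [(1, 2), (2, 3), (1, 4), (4, 5)]

neighbors = {p: set() for p in parts}
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)

states = {p: PartState() for p in parts}
print(neighbors[1])  # parts directly linked to the torso: {2, 4}
```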

  5. In order to describe the motion of an articulated object, we capture the state dynamics with a dynamical graphical model, as shown in Fig. 2.

  6. In order to facilitate the analysis and achieve real-time implementation, we adopt a decentralized framework. • Fig. 3(a) shows the decomposition result for part 3 in Fig. 2.
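
The decomposition idea can be sketched as follows (again only an illustration, reusing the assumed edge list from the previous sketch): each part is analyzed in a local subgraph formed by the part and its directly connected neighbors.

```python
# Sketch of the decentralized decomposition: every part gets a small local
# subgraph consisting of the part itself and its directly connected neighbors,
# and tracking is run on these subgraphs instead of the full joint model.

edges = [(1, 2), (2, 3), (1, 4), (4, 5)]   # assumed upper-body links

neighbors = {}
for a, b in edges:
    neighbors.setdefault(a, set()).add(b)
    neighbors.setdefault(b, set()).add(a)

def decompose(neighbors):
    """Return, for every part, its local subgraph node set (part + neighbors)."""
    return {p: {p} | nbrs for p, nbrs in neighbors.items()}

local_graphs = decompose(neighbors)
print(local_graphs[3])  # {2, 3}: part 3 together with its only neighbor, part 2
```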

  7. B. Bayesian Conditional Density Propagation: In this section, we formulate the motion estimation problem; in other words, given the observations, we want to determine the underlying object state. Applying the Markov properties yields a recursive propagation of the posterior density.
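
For orientation, the standard single-object form of this recursion under the Markov assumptions is sketched below; the paper's decentralized version propagates an analogous density for each decomposed part and additionally involves its neighbors.

```latex
% Generic Bayesian recursive filtering under the Markov assumptions
% (a reference sketch, not the paper's decentralized per-part recursion).
\begin{equation}
p(x_t \mid y_{1:t}) \;\propto\; p(y_t \mid x_t)
  \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1})\, \mathrm{d}x_{t-1}
\end{equation}
```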

  8. C. Sequential Monte Carlo Approximation: The basic idea of SMC approximation is to use a weighted sample set to estimate the posterior density. The importance density q(·) is chosen in a factorized form.
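
The sketch below shows a generic bootstrap-style SMC step for a single part; the dynamics and likelihood functions are placeholders, and the paper's actual importance density additionally factorizes to account for neighboring parts.

```python
# Generic sequential Monte Carlo step for one part (placeholder dynamics and
# likelihood; the neighbor-dependent factorization used in the paper is omitted).

import numpy as np

rng = np.random.default_rng(0)

def propagate(samples, noise_std=2.0):
    """Dynamics proposal: a simple random-walk model on the part state."""
    return samples + rng.normal(0.0, noise_std, size=samples.shape)

def local_likelihood(samples, observation):
    """Placeholder image likelihood: Gaussian in the distance to the observation."""
    d2 = np.sum((samples - observation) ** 2, axis=1)
    return np.exp(-0.5 * d2 / 25.0)

def smc_step(samples, weights, observation):
    samples = propagate(samples)
    weights = weights * local_likelihood(samples, observation)
    weights /= weights.sum()
    # Resample when the effective sample size gets too small.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        samples, weights = samples[idx], np.full(len(weights), 1.0 / len(weights))
    return samples, weights

N = 200
samples = rng.normal(0.0, 5.0, size=(N, 2))        # (x, y) samples for one part
weights = np.full(N, 1.0 / N)
samples, weights = smc_step(samples, weights, observation=np.array([3.0, -1.0]))
print(np.average(samples, axis=0, weights=weights))  # weighted posterior mean
```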

  9. By substituting (6) and (8) into (7), we obtain the following. • The interaction term models the interaction between two neighboring parts' samples. • The local likelihood acts as a weight on the associated interaction.
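
A toy sketch of this interaction idea (the functional form of the pairwise potential is an assumption, since equations (6)-(8) are not reproduced in the transcript): each sample of a part is scored against the neighboring part's samples, and each neighbor sample's contribution is weighted by its local observation likelihood.

```python
# Sketch: neighbor interaction weighted by the neighbor's local likelihood.
# The preferred-distance potential below is an assumed example.

import numpy as np

def interaction(xi, xj, preferred_dist=20.0, sigma=5.0):
    """Assumed pairwise potential: neighboring parts prefer a fixed distance."""
    d = np.linalg.norm(xi - xj)
    return np.exp(-0.5 * ((d - preferred_dist) / sigma) ** 2)

def neighbor_message(xi, neighbor_samples, neighbor_local_lik):
    """Score one sample of part i against all samples of a neighboring part j,
    weighting each neighbor sample's interaction by its local likelihood."""
    return sum(lik * interaction(xi, xj)
               for xj, lik in zip(neighbor_samples, neighbor_local_lik))

rng = np.random.default_rng(1)
xj_samples = rng.normal([20.0, 0.0], 3.0, size=(100, 2))
xj_lik = np.full(100, 1.0 / 100)                 # toy, uniform local likelihoods
print(neighbor_message(np.array([0.0, 0.0]), xj_samples, xj_lik))
```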

  10. HIERARCHICAL DECENTRALIZED FRAMEWORK FOR ARTICULATED MOTION ANALYSIS AND OBJECT TRACKING • A. Hierarchical Graphical Modeling: we define a group of parts as a unit, with the units indexed from 1 up to the total number of units.
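
A small sketch of the grouping (the particular unit assignment below, for a walking person's legs, is an assumption):

```python
# Sketch of hierarchical grouping: parts are collected into units, and HAOT
# reasons both at the part level and at the unit level. The assignment below
# is an assumed example for a walking person.

parts = {1: "torso", 2: "left thigh", 3: "left shin",
         4: "right thigh", 5: "right shin"}
units = {1: {1},        # unit 1: torso
         2: {2, 3},     # unit 2: left leg
         3: {4, 5}}     # unit 3: right leg
num_units = len(units)

unit_of = {p: u for u, members in units.items() for p in members}
print(parts[3], "belongs to unit", unit_of[3])  # left shin -> unit 2
```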

  11. Similar to DAOT, we adopt a decentralized framework and, therefore, decompose the graphical model for each part.

  12. B. Hierarchical Bayesian Conditional Density Propagation: • Similar to DAOT, we present a Bayesian conditional density propagation framework for each decomposed graphical model.

  13. Apply the Markov properties

  14. Apply the Markov properties

  15. C. Sequential Monte Carlo Implementation: • In HAOT, the importance density q(·) is chosen analogously to DAOT. • The sample weights can then be updated recursively.

  16. By substituting (15), (19), and (20) into (21) and approximating the integrals by summations, we obtain the resulting expression.

  17. The likelihood term in (22) can be further approximated by a product of all parts' local observation likelihoods in the unit. • By first calculating all parts' local observation likelihoods, we do not have to calculate the interunit observation likelihood.
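
In code, this approximation is simply a product over the unit's parts (the numbers below are illustrative):

```python
# Sketch of the approximation on slide 17: a unit's observation likelihood is
# approximated by the product of its parts' local observation likelihoods.

import math

def unit_likelihood(local_likelihoods):
    """Product of the local observation likelihoods of the parts in one unit."""
    return math.prod(local_likelihoods)

print(unit_likelihood([0.8, 0.6, 0.9]))  # ~0.432
```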

  18. D. High-Level Interaction Model: • We used a Gaussian mixture model in our experiments to estimate the density from training data for a walking person.
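
A minimal sketch of fitting such a model with scikit-learn (the feature choice, here two assumed joint angles, and the number of mixture components are assumptions):

```python
# Sketch: fit a Gaussian mixture model to training data for a walking person
# and evaluate it as a density. The training data here are synthetic stand-ins.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Stand-in training data: rows are, e.g., (left hip angle, right hip angle)
# collected over a walking cycle; in practice these come from labeled frames.
training = rng.normal([0.3, -0.3], 0.1, size=(500, 2))

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(training)

# score_samples returns log densities; exponentiate to obtain density values.
print(np.exp(gmm.score_samples([[0.25, -0.35]])))
```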

  19. EXPERIMENTAL RESULTS • The tracking performance of the two proposed methods was compared both qualitatively and quantitatively with multiple independent trackers (MIT), the joint particle filter (JPF), mean field Monte Carlo (MFMC), and loose-limbed people tracking (LLPT).

  20. Qualitative Tracking Results: • The video GIRL contains a girl moving her arms. It has 122 frames and was captured at 25 fps with a resolution of 320 x 240 pixels.

  21. The video 3D-FINGER has a finger bending into the image plane. It was captured at 15 fps with a resolution of 240 x 180 pixels and has 345 frames.

  22. The sequence WALKING contains a person walking forward inside a classroom. It has 66 frames and was captured at 25 fps with a resolution of 320 x 240 pixels.

  23. The video sequence GYM was captured in a gym from a side view of a person on a walking machine. Compared with the WALKING sequence, this video is much longer (1716 frames) and has a very cluttered background.

  24. Quantitative Performance Analysis and Comparisons: • With synthetic data:

  25. In Fig. 10, we compare the root-mean-square error (RMSE) of MIT, JPF, and DAOT on the synthetic video.
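
The error metric can be sketched as follows (measuring the RMSE on estimated versus ground-truth part positions per frame is an assumption):

```python
# Sketch of an RMSE metric over tracked positions (assumed formulation).

import numpy as np

def rmse(estimates, ground_truth):
    """Root-mean-square error over frames; both arrays have shape (T, D)."""
    e = np.asarray(estimates, dtype=float)
    g = np.asarray(ground_truth, dtype=float)
    return float(np.sqrt(np.mean(np.sum((e - g) ** 2, axis=1))))

print(rmse([[0, 0], [1, 1]], [[0, 1], [1, 0]]))  # 1.0
```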

  26. With real data: we compare the tracking accuracy of the different approaches by defining the false position rate (FPR) and the false label rate (FLR).
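
The precise FPR/FLR definitions are given in the paper; the sketch below only illustrates the general idea under assumed definitions (FPR as the fraction of frames whose position error exceeds a threshold, FLR as the fraction of frames with a wrong part label).

```python
# Assumed, illustrative versions of the FPR and FLR metrics.

import numpy as np

def false_position_rate(est, gt, threshold=10.0):
    """Assumed definition: fraction of frames with position error above `threshold`."""
    err = np.linalg.norm(np.asarray(est, float) - np.asarray(gt, float), axis=1)
    return float(np.mean(err > threshold))

def false_label_rate(est_labels, gt_labels):
    """Assumed definition: fraction of frames where the part label is wrong."""
    return float(np.mean(np.asarray(est_labels) != np.asarray(gt_labels)))

print(false_position_rate([[0, 0], [30, 0]], [[0, 0], [0, 0]]))  # 0.5
print(false_label_rate(["arm", "leg"], ["arm", "arm"]))          # 0.5
```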

  27. In Table III, we compare both the speed and the accuracy of different particle filter-based approaches on the WALKING sequence.
