
Head pose estimation without manual initialization


Presentation Transcript


  1. Head pose estimation without manual initialization. Paul Fitzpatrick

  2. Life without manual initialization
  • Pose parameters: yaw*, pitch*, roll*, translation in X, Y, Z
  • Manual initialization gives the mapping from relative pose to absolute pose (sketched below):
    • Initial orientation
    • Shape parameters
  • If automated, the replacement is likely to be the weakest link
  • Otherwise, it would be a complete pose estimation system in itself
  • So initialization drives the design
  * Nomenclature varies
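
The mapping from relative to absolute pose is just composition with the reference pose. Below is a minimal numpy sketch of that composition, assuming a yaw/pitch/roll (Z-Y-X) rotation convention and a 6-DOF rigid pose; the function names and the axis convention are illustrative, not taken from the talk.

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Rotation from yaw (about Z), pitch (about Y), roll (about X), in radians.
    Axis conventions vary, as the slide's footnote warns."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def absolute_pose(R_init, t_init, R_rel, t_rel):
    """Compose a tracked relative pose (head motion since the reference frame)
    with the reference pose. With manual initialization the reference comes
    from a person; without it, it must come from the automatic frontal-view
    detection described on the later slides."""
    R_abs = R_rel @ R_init
    t_abs = R_rel @ t_init + t_rel
    return R_abs, t_abs
```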

  3. Opportunistic initialization
  • Use the frontal view as the reference pose
  • Eyes and nose are well-defined, stable, symmetric
  • The head outline is less well-defined, but always present and relatively easy to find
  • Use the outline to guide the search for the eye region (see the sketch after this list)
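
As a concrete illustration of using the outline to constrain the eye search, here is a sketch that derives an eye-region bounding box from a fitted head-outline ellipse and skips the search when the in-plane orientation is too far from upright. The band fractions and tilt threshold are made-up illustrative values, not parameters from the talk.

```python
import numpy as np

def eye_search_roi(center, semi_axes, angle_deg, max_tilt_deg=20.0):
    """Given a head-outline ellipse (center (cx, cy), semi-axes (a, b),
    in-plane angle in degrees), return a bounding box over the expected
    eye band, or None if the outline is too tilted to trust a
    frontal-view search. Band fractions are illustrative guesses."""
    if abs(angle_deg) > max_tilt_deg:
        return None          # orientation not well determined; skip eye search
    cx, cy = center
    a, b = semi_axes         # a: horizontal semi-axis, b: vertical semi-axis
    x0, x1 = cx - 0.7 * a, cx + 0.7 * a
    y0, y1 = cy - 0.35 * b, cy + 0.05 * b   # band just above the ellipse centre
    return int(x0), int(y0), int(x1), int(y1)
```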

  4. Head outline and eye location
  • Track the head using:
    • Color histogram
    • Grouped motion
  • Match against an ellipse model: "ellipse-specific direct least-square fitting" (M. Pilu et al.); a sketch follows below
  • Histogram initialized from the grouped motion of the ellipse
  • Eyes located using a blob detector
    • Trade off detection rate for low false positives
    • Only search for eyes where the orientation can be determined
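
A rough OpenCV sketch of the outline step under stated assumptions: moving pixels are grouped with simple frame differencing, the largest moving blob is fitted with cv2.fitEllipseDirect (OpenCV's implementation of the Fitzgibbon/Pilu/Fisher direct least-square ellipse fit cited on the slide), and the color histogram is then initialized from pixels inside that ellipse. The thresholds and histogram bins are illustrative, not values from the original system.

```python
import cv2
import numpy as np

def head_ellipse_from_motion(prev_gray, gray, motion_thresh=25, min_area=2000):
    """Group moving pixels by frame differencing and fit an ellipse to the
    largest moving blob. Returns ((cx, cy), (major, minor), angle) or None."""
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, motion_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) > min_area and len(c) >= 5]
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    return cv2.fitEllipseDirect(blob)

def init_color_histogram(frame_bgr, ellipse):
    """Initialize the color histogram from pixels inside the motion-derived
    ellipse; later frames can then be tracked by histogram back-projection."""
    mask = np.zeros(frame_bgr.shape[:2], np.uint8)
    cv2.ellipse(mask, ellipse, 255, thickness=-1)
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], mask, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist
```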

  5. Tracking pose changes
  • Establish a mesh of trackers on the face (one update step is sketched below)
  • Prune the mesh by head outline and comparative motion
  • As the head rotates, occluded trackers are destroyed and new ones created
  • The tracking mesh gives 2D translation, in-plane rotation, and relative scale
  • When eyes are detected, establish a coordinate system on nearby trackers
  • The purpose of the mesh is to allow the eye location to be maintained across large excursions in yaw and pitch
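
Below is a sketch of one mesh-update step built from common OpenCV pieces (sparse Lucas-Kanade flow plus a RANSAC similarity fit). It stands in for whatever trackers the original system used; the point is only to show how a pruned point mesh yields 2D translation, in-plane rotation, and relative scale.

```python
import cv2
import numpy as np

def update_tracker_mesh(prev_gray, gray, points, ellipse_mask):
    """One mesh update: propagate points with pyramidal Lucas-Kanade optical
    flow, prune points that fail or leave the head outline, and summarise the
    surviving motion as a similarity transform. `points` is an (N, 1, 2)
    float32 array; `ellipse_mask` is the filled head-outline mask."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    keep = status.ravel() == 1
    old, new = points[keep], new_pts[keep]
    # Prune trackers that drift outside the tracked head outline.
    xy = new.reshape(-1, 2)
    h, w = gray.shape
    xs = np.clip(xy[:, 0].astype(int), 0, w - 1)
    ys = np.clip(xy[:, 1].astype(int), 0, h - 1)
    inside = ellipse_mask[ys, xs] > 0
    old, new = old[inside], new[inside]
    if len(new) < 3:
        return new, np.zeros(2), 0.0, 1.0
    # Robustly estimate the similarity transform implied by the mesh motion.
    M, _ = cv2.estimateAffinePartial2D(old, new, method=cv2.RANSAC)
    if M is None:
        return new, np.zeros(2), 0.0, 1.0
    scale = float(np.hypot(M[0, 0], M[1, 0]))
    rotation = float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))
    translation = M[:, 2]
    # Trackers for newly exposed regions would be seeded here, e.g. with
    # cv2.goodFeaturesToTrack restricted to the outline mask.
    return new, translation, rotation, scale
```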

  6. Mesh coordinate system
  • The mesh coordinate system is that of a "flattened head" surface
  • Ideally it interacts with image coordinates only where the surface is parallel to the image plane
  • It represents the remaining two dimensions
  • It must lead to inconsistencies, especially without good knowledge of head size
  • Still, it is useful in two important cases (see the sketch after this list):
    • When part of the mesh remains visible throughout, serving as a landmark
    • When movement is somewhat degenerate, e.g. look away, then look back
  • The point of interaction shifts when the head rotates in depth
  • If movement lies within a 2D strip, Euclidean geometry survives
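
One way to make the landmark case concrete is to record a detected eye position as offsets from its nearest mesh trackers, then re-predict it from whichever neighbours survive, compensating with the in-plane rotation and relative scale the mesh reports. This is hypothetical bookkeeping, assuming mesh point indices stay aligned across frames; it illustrates the idea rather than the talk's exact scheme.

```python
import numpy as np

def anchor_to_mesh(eye_xy, mesh_pts, k=6):
    """Attach a detected eye location to its k nearest mesh trackers by
    remembering their indices and offsets (a flattened-head style record).
    Illustrative; assumes mesh point indices remain stable."""
    d = np.linalg.norm(mesh_pts - eye_xy, axis=1)
    idx = np.argsort(d)[:k]
    return idx, eye_xy - mesh_pts[idx]

def predict_from_mesh(idx, offsets, mesh_pts, rot_deg=0.0, scale=1.0):
    """Re-predict the eye position from surviving neighbours, compensating
    for the in-plane rotation and relative scale reported by the mesh.
    Useful when part of the mesh stays visible as a landmark, e.g. after a
    look-away / look-back excursion."""
    th = np.radians(rot_deg)
    R = scale * np.array([[np.cos(th), -np.sin(th)],
                          [np.sin(th),  np.cos(th)]])
    return (mesh_pts[idx] + offsets @ R.T).mean(axis=0)
```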

  7. Video
