
Real-Time Template Tracking








  1. Real-Time Template Tracking Stefan Holzer, Computer Aided Medical Procedures (CAMP), Technische Universität München, Germany

  2. Real-Time Template Tracking: Motivation • Object detection is comparably slow: detect objects once and then track them • Robotic applications often require a lot of steps: the less time we spend on object detection/tracking, • the more time we can spend on other things, or • the faster the whole task finishes

  3. Real-Time Template Tracking: Overview Real-time template tracking splits into feature-based tracking and intensity-based tracking; the intensity-based branch divides into analytic approaches (LK, IC, ESM, …) and learning-based approaches (JD, ALPs, …)

  4. Intensity-based Template Tracking: Goal Find the parameters p of a warping function W(x; p) such that I(W(x; p)) = T(x) for all template points x

  5. Intensity-based Template Tracking: Goal Find the parameters p of a warping function W(x; p) such that I(W(x; p)) = T(x) for all template points x. Reformulate the goal using a prediction p̂ as approximation of p: • Find the parameter increment Δp such that I(W(x; p̂ + Δp)) = T(x) • by minimizing Σx [I(W(x; p̂ + Δp)) − T(x)]² This is a non-linear minimization problem.

  6. Intensity-based Template TrackingLukas-Kanade UsestheGauss-Newton methodforminimization: • Applies a first-order Taylor seriesapproximation • Minimizesiteratively B. D. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision, 1981.

  7. Lucas-Kanade: Approximation by linearization First-order Taylor series approximation: I(W(x; p̂ + Δp)) ≈ I(W(x; p̂)) + J(x) Δp, where the Jacobian matrix J(x) = ∇I (∂W/∂p) combines the gradient images ∇I (Jacobian of the current image) with the Jacobian of the warp ∂W/∂p

  8. Lucas-Kanade: Iterative minimization The following steps are applied iteratively: • Minimize the sum of squared differences Σx [I(W(x; p̂)) + J(x) Δp − T(x)]², where the parameter increment has a closed-form solution (Gauss-Newton): Δp = (Σx J(x)ᵀ J(x))⁻¹ Σx J(x)ᵀ [T(x) − I(W(x; p̂))] • Update the parameter approximation: p̂ ← p̂ + Δp This is repeated until convergence is reached.
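As an illustration (not part of the original slides), the Gauss-Newton loop above can be sketched in NumPy for the simplest case of a pure-translation warp W(x; p) = x + p; the synthetic Gaussian-blob images and all function names here are my own assumptions:

```python
import numpy as np

def bilinear_sample(img, xs, ys):
    """Sample img at float coordinates with bilinear interpolation."""
    x0 = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
    wx, wy = xs - x0, ys - y0
    return ((1 - wy) * ((1 - wx) * img[y0, x0] + wx * img[y0, x0 + 1])
            + wy * ((1 - wx) * img[y0 + 1, x0] + wx * img[y0 + 1, x0 + 1]))

def lucas_kanade_translation(T, I, n_iter=50):
    """Estimate p such that I(x + p) ~= T(x) via Gauss-Newton iterations."""
    ys, xs = np.mgrid[0:T.shape[0], 0:T.shape[1]].astype(float)
    p = np.zeros(2)
    for _ in range(n_iter):
        Iw = bilinear_sample(I, xs + p[0], ys + p[1])       # warped current image
        # Jacobian of I(x + p) w.r.t. p: image gradients at the warped points
        gx = bilinear_sample(I, xs + p[0] + 0.5, ys + p[1]) - \
             bilinear_sample(I, xs + p[0] - 0.5, ys + p[1])
        gy = bilinear_sample(I, xs + p[0], ys + p[1] + 0.5) - \
             bilinear_sample(I, xs + p[0], ys + p[1] - 0.5)
        J = np.stack([gx.ravel(), gy.ravel()], axis=1)
        e = (T - Iw).ravel()                                # error image
        dp, *_ = np.linalg.lstsq(J, e, rcond=None)          # closed-form GN step
        p += dp
        if np.linalg.norm(dp) < 1e-6:
            break
    return p

# Usage: a smooth blob, shifted by a known translation
yy, xx = np.mgrid[0:40, 0:40].astype(float)
I = np.exp(-((xx - 20)**2 + (yy - 20)**2) / 50.0)
T = np.exp(-((xx + 1.5 - 20)**2 + (yy + 0.8 - 20)**2) / 50.0)  # T(x) = I(x + (1.5, 0.8))
p = lucas_kanade_translation(T, I)
```

For this smooth, small-displacement example the loop converges to roughly (1.5, 0.8) in a few iterations; real images need a coarse-to-fine scheme and a richer warp.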

  9. Lucas-Kanade: Illustration At each iteration: • Warp the current frame • Compute the increment Δp • Update the parameters (figure: template image and current frame)

  10. Lucas-Kanade: Improvements • Inverse Compositional (IC): reduces time per iteration • Efficient Second-Order Minimization (ESM): improves convergence • Approach of Jurie & Dhome (JD): reduces time per iteration and improves convergence • Adaptive Linear Predictors (ALPs): learn and adapt the template online

  11. Inverse Compositional: Overview Differences to the Lucas-Kanade algorithm • Reformulation of the goal: minimize Σx [T(W(x; Δp)) − I(W(x; p̂))]², so the Jacobian combines the Jacobian of the template image (instead of the current image) with the Jacobian of the warp

  12. Inverse Compositional: Overview Differences to the Lucas-Kanade algorithm • Reformulation of the goal: the Jacobian can be precomputed; only the error vector needs to be computed at each iteration • The parameter update changes to an inverse composition: W(x; p̂) ← W(x; p̂) ∘ W(x; Δp)⁻¹ S. Baker and I. Matthews. Equivalence and efficiency of image alignment algorithms, 2001.

  13. Efficient Second-Order Minimization: Short Overview • Uses a second-order Taylor approximation of the cost function: fewer iterations needed to converge, larger area of convergence, avoids local minima close to the global one • The Jacobian needs to be computed at each iteration S. Benhimane and E. Malis. Real-time image-based tracking of planes using efficient second-order minimization, 2004.

  14. Jurie & Dhome: Overview • Motivation: • Computing the Jacobian in every iteration is expensive • Good convergence properties are desired • Approach of JD: pre-learn the relation A between image differences and the parameter update: Δp = A δi • the relation can be seen as a linear predictor • A is fixed for all iterations • learning enables jumping over local minima F. Jurie and M. Dhome. Hyperplane approximation for template matching, 2002.

  15. Jurie & Dhome: Template and Parameter Description • The template consists of sample points • distributed over the template region • used to sample the image data • The deformation is described using the 4 corner points of the template • Image values are normalized (zero mean, unit standard deviation) • The error vector specifies the differences between the normalized image values
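The normalization and error vector described above can be sketched as follows (the sample values are made up for illustration):

```python
import numpy as np

def normalize(values):
    """Zero-mean, unit-standard-deviation normalization of sampled image values."""
    return (values - values.mean()) / values.std()

# Hypothetical image values at the template's sample points
template_values = np.array([10.0, 12.0, 9.0, 14.0, 11.0])
current_values  = np.array([11.0, 13.5, 9.5, 15.0, 12.0])

# Error vector: difference of the normalized values
error_vector = normalize(current_values) - normalize(template_values)
```

Normalizing both value vectors makes the error vector invariant to global brightness and contrast changes between template and current frame.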

  16. Jurie & Dhome: Learning phase • Apply a set of random transformations to the initial template • Compute the corresponding image differences • Stack the training data together: the parameter updates into a matrix Y and the image differences into a matrix H

  17. Jurie & Dhome: Learning phase • Apply a set of random transformations to the initial template • Compute the corresponding image differences • Stack the training data together • The linear predictor A should relate these matrices by Y = A H • So the linear predictor can be learned as A = Y Hᵀ (H Hᵀ)⁻¹

  18. Jurie & Dhome: Tracking phase • Warp the sample points according to the current parameters • Use the warped sample points to extract image values and to compute the error vector δi • Compute the update Δp = A δi • Update the current parameters • Improve tracking accuracy: • apply multiple iterations • use multiple predictors trained for different amounts of motion
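The learning and tracking phases above can be sketched as follows. The blob image, the sample-point grid, and the restriction to a two-parameter translation are my own assumptions; the normalization step is omitted for brevity, and `pinv` stands in for the textbook normal-equation inverse (H Hᵀ is near-singular for such a low-dimensional motion):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: a smooth blob sampled at a grid of template sample points
ys, xs = np.mgrid[5:25, 5:25].astype(float)
def sample(dx, dy):
    """Image values at the sample points shifted by (dx, dy)."""
    return np.exp(-((xs + dx - 15.0)**2 + (ys + dy - 15.0)**2) / 40.0).ravel()

i0 = sample(0.0, 0.0)                      # reference template values

# Learning phase: random translations -> stacked parameters Y, differences H
n_train = 500
Y = np.empty((2, n_train))
H = np.empty((i0.size, n_train))
for k in range(n_train):
    dp = rng.uniform(-2.0, 2.0, size=2)    # random warp parameters
    Y[:, k] = dp
    H[:, k] = sample(dp[0], dp[1]) - i0    # corresponding image difference

# Linear predictor relating Y = A H (pinv regularizes Y H^T (H H^T)^-1)
A = Y @ np.linalg.pinv(H)

# Tracking phase: one matrix-vector product predicts the parameter update
di = sample(1.0, -0.5) - i0
dp_pred = A @ di                           # approximately (1.0, -0.5)
```

Note that tracking needs no Jacobian and no iteration-time least squares: the whole update is one multiplication, which is what makes the approach fast at run time.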

  19. Jurie & Dhome: Limitations • Learning large templates is expensive: not possible online • The shape of the template cannot be adapted: the template has to be relearned after each modification • Tracking accuracy is inferior to LK, IC, …: use JD as initialization for one of those

  20. Adaptive Linear Predictors: Motivation • Distribute the learning of large templates over several frames while tracking • Adapt the template shape with regard to • suitable texture in the scene and • the current field of view S. Holzer, S. Ilic and N. Navab. Adaptive Linear Predictors for Real-Time Tracking, 2010.

  21.–23. Adaptive Linear Predictors: Motivation (illustrations)

  24. Adaptive Linear Predictors: Subsets • Sample points are grouped into subsets of 4 points: only whole subsets are added or removed • Normalization is applied locally, not globally: each subset is normalized using its neighboring subsets; this is necessary since the global mean and standard deviation change when the template shape changes

  25. Adaptive Linear Predictors: Template Extension Goal: extend the initial template by an extension template • Standard approach for single templates: learn each predictor separately as A = Y Hᵀ (H Hᵀ)⁻¹ • Standard approach for the combined template: relearn A from the stacked training data, which requires inverting the full, now larger matrix H Hᵀ

  26. Adaptive Linear Predictors: Template Extension Goal: extend the initial template by an extension template • The inverse matrix of the combined template can be represented as a 2×2 block matrix • Using the formulas of Henderson and Searle leads to a block-wise expression for the inverse • Only the inversion of one small matrix is necessary, since the inverse of the block belonging to the initial template is already known
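A minimal numerical check of this block-inversion idea (with synthetic well-conditioned matrices, not the slides' actual template data): given the inverse of the large block A11, the inverse of the full 2×2 block matrix only requires inverting the small Schur complement.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 2                                   # large initial block, small extension

# Symmetric positive-definite blocks keep the example well-conditioned
B = rng.normal(size=(n, n)); A11 = B @ B.T + 10.0 * np.eye(n)
C = rng.normal(size=(m, m)); A22 = C @ C.T + 10.0 * np.eye(m)
A12 = rng.normal(size=(n, m)); A21 = A12.T

A11_inv = np.linalg.inv(A11)                  # assumed known from the initial template

# Henderson & Searle block inversion: only the m x m Schur complement is inverted
S_inv = np.linalg.inv(A22 - A21 @ A11_inv @ A12)
top_left  = A11_inv + A11_inv @ A12 @ S_inv @ A21 @ A11_inv
top_right = -A11_inv @ A12 @ S_inv
bot_left  = -S_inv @ A21 @ A11_inv
block_inv = np.block([[top_left, top_right], [bot_left, S_inv]])
```

Since m (the extension) is much smaller than n (the existing template), the cost of the update is dominated by matrix products rather than a full re-inversion.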

  27. Adaptive Linear Predictors: Learning Time (figure)

  28. Adaptive Linear Predictors: Adding new samples • The number of random transformations must be greater than or at least equal to the number of sample points: the presented approach is limited by the number of transformations used for the initial learning • This restriction can be overcome by adding new training data on-the-fly • This can be accomplished in real-time using the Sherman-Morrison formula: (B + u vᵀ)⁻¹ = B⁻¹ − (B⁻¹ u vᵀ B⁻¹) / (1 + vᵀ B⁻¹ u)
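The rank-one inverse update can be verified numerically; the matrices here are synthetic stand-ins (in the tracker, B would play the role of H Hᵀ and the update would come from a new training sample):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
B = rng.normal(size=(n, n)) + 5.0 * n * np.eye(n)   # well-conditioned stand-in for H H^T
B_inv = np.linalg.inv(B)                            # maintained inverse
u = rng.normal(size=(n, 1))                         # rank-one update vectors
v = rng.normal(size=(n, 1))                         # (a new training sample)

# Sherman-Morrison: (B + u v^T)^-1 = B^-1 - (B^-1 u v^T B^-1) / (1 + v^T B^-1 u)
denom = 1.0 + (v.T @ B_inv @ u).item()
B_upd_inv = B_inv - (B_inv @ u @ v.T @ B_inv) / denom
```

The update costs O(n²) instead of the O(n³) of a fresh inversion, which is what makes adding training data on-the-fly feasible in real time.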

  29. Thank you for your attention! Questions?
