
Using linking features in learning Non-parametric part models *

Ammar Kamal Hattab, ENGN2560 Mid Project Presentation, April 16, 2013.

* Leonid Karlinsky, Shimon Ullman, ECCV (3) 2012.




Presentation Transcript


  1. Ammar Kamal Hattab. ENGN2560 Mid Project Presentation, April 16, 2013. Using linking features in learning non-parametric part models * (* Leonid Karlinsky, Shimon Ullman, ECCV (3) 2012)

  2. Project Goal • Implement the Linking Features algorithm to detect a set of parts of a deformable object. • Examples: detect human parts (head, torso, upper/lower limbs); detect facial landmarks (eyes, nose, mouth outlines, etc.); detect animal parts; …

  3. Linking Features Method • Use local features at strategic locations to provide evidence on the connectivity of the part candidates. • Example: how do we choose the right lower-arm candidate? Features from the elbow are the "linking features" for the arm parts: the elbow appearance "links" the correct arm part candidates.

  4. Training Steps (steps completed so far)

  5. Step 1: Generating the Training Dataset • Convert a 13-second video file to a sequence of 397 images. • Manually add annotations to each image using an SVG editor. • Read the annotations from the SVG images into a text file.
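The annotation-reading step might look like the sketch below. The SVG schema is an assumption (the slides do not show it): each part is taken to be an SVG `<line>` element whose `id` names the part; `parse_stickman_svg` and `write_annotations` are hypothetical helper names.

```python
import xml.etree.ElementTree as ET

def parse_stickman_svg(svg_text):
    """Parse part annotations out of an SVG string.

    Assumes each annotated part is an SVG <line> whose `id` names the
    part (a hypothetical format; the actual annotation schema is not
    given on the slide).  Returns {part_id: (x1, y1, x2, y2)}.
    """
    root = ET.fromstring(svg_text)
    parts = {}
    for el in root.iter():
        if el.tag.rsplit('}', 1)[-1] == 'line':  # ignore any XML namespace
            pid = el.get('id', 'unnamed')
            parts[pid] = tuple(float(el.get(k)) for k in ('x1', 'y1', 'x2', 'y2'))
    return parts

def write_annotations(parts, path):
    """Dump annotations as 'part x1 y1 x2 y2' lines, one per part."""
    with open(path, 'w') as f:
        for pid, (x1, y1, x2, y2) in sorted(parts.items()):
            f.write(f"{pid} {x1} {y1} {x2} {y2}\n")
```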

  6. Step 2: Extracting SIFT Features • For each image, find SIFT descriptors on a dense grid in a specific ROI, using the VLFeat open-source Matlab library with 20 × 20 pixel patches. • Grid step: 3 pixels (gives ~10,300 SIFT features in the region shown).
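The dense-grid geometry can be sketched as follows (a Python stand-in for VLFeat's dense SIFT; only the grid is computed, not the descriptors). The ROI size used in the usage note is a guess chosen to reproduce the feature count quoted on the slide, not a figure from the project.

```python
def dense_grid(roi, patch=20, step=3):
    """Upper-left corners of patch-by-patch windows tiling an ROI.

    roi = (x0, y0, x1, y1); a window is kept only if it fits entirely
    inside the ROI.  At each kept location a SIFT descriptor would be
    computed (here we return just the grid, as a stand-in for the
    VLFeat dense-SIFT call used in the project).
    """
    x0, y0, x1, y1 = roi
    return [(x, y)
            for y in range(y0, y1 - patch + 1, step)
            for x in range(x0, x1 - patch + 1, step)]
```

For example, a hypothetical 320 × 323 pixel ROI yields 101 × 102 = 10,302 grid locations, on the order of the ~10,300 features quoted.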

  7. Step 3: Connecting Features with Parts • Read and draw the stickman annotation file. • Enlarge each stick by 20 pixels into a rectangle. • For each part, assign to it the SIFT features that fall inside its rectangle. • Store all feature info and learned parts in an external file.
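The assignment test above amounts to point-in-enlarged-rectangle checks. A sketch, assuming "enlarge by 20 pixels" means padding the stick's rectangle by 20 px on every side (the function name is mine):

```python
import math

def in_stick_rect(pt, stick, pad=20.0):
    """True if point pt lies inside the stick's rectangle padded by `pad`.

    A stick (x1, y1)-(x2, y2) is widened into a rectangle by padding
    `pad` pixels on every side; membership is tested by projecting pt
    onto the stick axis and onto its perpendicular.
    """
    (px, py), (x1, y1, x2, y2) = pt, stick
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    if length == 0:                       # degenerate stick: a disc
        return math.hypot(px - x1, py - y1) <= pad
    ux, uy = dx / length, dy / length          # unit vector along the stick
    t = (px - x1) * ux + (py - y1) * uy        # along-axis coordinate
    d = abs(-(px - x1) * uy + (py - y1) * ux)  # perpendicular distance
    return -pad <= t <= length + pad and d <= pad
```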

  8. Step 4: Finding Linking Features • Find linking features in the training images using a circle of radius 15 pixels centered between the two parts (different from the paper). • For each feature in the image, define a variable Ai = 1 if it is a linking feature and Ai = 0 otherwise, and store these as well.
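A minimal sketch of the indicator computation. Placing the circle at the midpoint of the two part centers is my interpretation of "centered between the two parts"; the exact anchor point is not spelled out on the slide.

```python
import math

def linking_indicators(features, part_j_center, part_k_center, radius=15.0):
    """Return A_i for every feature: 1 inside the linking circle, else 0.

    The radius-15 circle is centered at the midpoint between the two
    part centers (an assumed interpretation of the slide's wording).
    `features` is a list of (x, y) locations.
    """
    cx = (part_j_center[0] + part_k_center[0]) / 2.0
    cy = (part_j_center[1] + part_k_center[1]) / 2.0
    return [1 if math.hypot(x - cx, y - cy) <= radius else 0
            for (x, y) in features]
```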

  9. Testing steps

  10. Step 1: Extract SIFT features • For a new test image, extract all SIFT features on a dense grid. * Image from the referenced paper

  11. Step 2: Building the Nearest-Neighbor Tree • Load all training data from files. • Build ANN trees from the training data so that, given a test feature, we can efficiently find its nearest-neighbor features in the training images. • The nearest neighbors are used to estimate kernel densities (KDE) for different variables. • Trees built: one ANNall for all features, and one ANNj for the features assigned to the j-th part (e.g., an ANN for the lower left arm). • Using C++ code from Mount with a Matlab wrapper from Bagon.
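The interface those trees provide (descriptor in, k indices out) can be mimicked by a brute-force search, sketched below. This is only a functional stand-in for Mount's approximate nearest-neighbor (ANN) library, not its algorithm.

```python
def knn(query, database, k=25):
    """Indices of the k nearest database descriptors to `query`.

    Brute-force Euclidean search standing in for the ANN trees the
    project builds with Mount's C++ library; the interface
    (query descriptor -> k training-feature indices) is the same.
    """
    dists = [(sum((q - d) ** 2 for q, d in zip(query, vec)), i)
             for i, vec in enumerate(database)]
    dists.sort()
    return [i for _, i in dists[:k]]
```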

  12. Step 3: Generalized Hough Transform • For each part (e.g., the head): for each feature Fi in the test image, find its 25 nearest neighbors using the trained ANNs, and accumulate their offsets to the part center to get a KDE of Pj(Lj | Fi), the conditional probability of the part's center location given Fi. • Summing these KDEs gives the GHT: each feature votes for the part's center location with its probability.
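The voting loop above can be sketched as below. The 1/k vote weight is a placeholder delta-kernel KDE so that every feature contributes equal total mass; the paper's actual kernel is not given on the slide, and all names here are mine.

```python
def ght_votes(test_features, train_offsets, knn_fn, shape, k=25):
    """Accumulate Hough votes for one part's center location.

    For each test feature, look up its k nearest training features via
    knn_fn and add each neighbor's stored offset-to-part-center at the
    feature's location, weighted 1/k.

    test_features : list of (x, y, descriptor)
    train_offsets : train_offsets[i] = (dx, dy) for training feature i
    knn_fn        : descriptor -> list of k training-feature indices
    shape         : (height, width) of the voting matrix V
    """
    H, W = shape
    V = [[0.0] * W for _ in range(H)]
    for (x, y, desc) in test_features:
        for i in knn_fn(desc):
            dx, dy = train_offsets[i]
            cx, cy = int(round(x + dx)), int(round(y + dy))
            if 0 <= cx < W and 0 <= cy < H:   # drop votes outside the image
                V[cy][cx] += 1.0 / k
    return V
```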

  13. Step 4: Part Candidates • Take the top 20 maxima of the voting matrix V; these give the candidate part locations. • Taking the top 3 orientations and lengths at each, we get 60 part candidates. * Image from the referenced paper
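Extracting the top maxima can be sketched with a greedy non-maximum suppression. The slides do not say how nearby maxima are merged; suppressing a small square around each accepted peak is one common choice, assumed here.

```python
def top_maxima(V, n=20, suppress=5):
    """Greedy top-n maxima of voting matrix V with local suppression.

    Repeatedly take the global maximum and zero a (2*suppress+1)-wide
    square around it, so the n candidates are distinct locations.
    Returns a list of (x, y, value), strongest first.
    """
    V = [row[:] for row in V]          # work on a copy
    H, W = len(V), len(V[0])
    peaks = []
    for _ in range(n):
        val, x, y = max((V[y][x], x, y) for y in range(H) for x in range(W))
        if val <= 0:                   # nothing left to accept
            break
        peaks.append((x, y, val))
        for yy in range(max(0, y - suppress), min(H, y + suppress + 1)):
            for xx in range(max(0, x - suppress), min(W, x + suppress + 1)):
                V[yy][xx] = 0.0
    return peaks
```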

  14. Step 5: Finding Linking Features • Take the part candidates for a pair of parts. • For each pair of part candidates: as in the previous step, for each feature Fi in the image, compute the probability that the feature links the two parts, P(Fi | Lj, Lk) (find 25 NN → KDE). * Image from the referenced paper
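One way to read "find 25 NN → KDE" is sketched below: search the test descriptor's neighbors among training features marked as linking features (Ai = 1) for this part pair, and average a Gaussian kernel over the descriptor distances. The kernel, bandwidth, and function name are all placeholders, not the paper's actual estimator.

```python
import math

def link_probability(feat_desc, linking_train_descs, knn_fn, sigma=0.5):
    """Rough KDE-style estimate of P(F_i | L_j, L_k) for one feature.

    linking_train_descs holds the descriptors of training features with
    A_i = 1 for this part pair; knn_fn returns neighbor indices into it.
    The Gaussian kernel and its bandwidth are assumptions.
    """
    idxs = knn_fn(feat_desc)
    total = 0.0
    for i in idxs:
        d2 = sum((a - b) ** 2 for a, b in zip(feat_desc, linking_train_descs[i]))
        total += math.exp(-d2 / (2.0 * sigma ** 2))
    return total / len(idxs)
```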

  15. Step 6: Inference • By maximizing the previous probability (the linking-features probability) we can infer the correct choice of part candidates. • Example: comparing the probabilities of two part candidates for the upper and lower left arm: P1 = 0.0164 vs. P2 = 0.0868.

  16. Next Steps • Complete the implementation: use a greedy approach to find the optimal parts configuration (the one that maximizes the probability from the previous step). • Solve problems: computing the GHT and the spectral clustering takes too much time, and there is a problem in voting for orientations. • Evaluate the algorithm and test it on different data sets.

  17. Missed Step: Spectral Clustering • Use a spectral clustering algorithm to cluster the features into 20 clusters, based on how much each feature contributes to the maxima of the voting matrix. • Using code from: http://books.nips.cc/papers/files/nips14/AA35.pdf
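The linked algorithm (Ng, Jordan and Weiss, NIPS 2001) can be sketched minimally as below. The project clusters features by their contributions to the voting-matrix maxima; here 2-D points stand in for those contribution vectors, and the Gaussian bandwidth and the k-means initialization are my choices, not the project's.

```python
import numpy as np

def spectral_cluster(points, k, sigma=1.0, iters=50):
    """Minimal Ng-Jordan-Weiss spectral clustering sketch.

    Builds a Gaussian affinity matrix, embeds points via the top-k
    eigenvectors of the normalized affinity, row-normalizes, then runs
    k-means (with deterministic farthest-point seeding) on the rows.
    """
    X = np.asarray(points, dtype=float)
    n = len(X)
    # Affinity matrix with a Gaussian kernel, zero diagonal.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    # Normalized matrix L = D^{-1/2} A D^{-1/2}; take its top-k eigenvectors.
    dinv = 1.0 / np.sqrt(A.sum(1) + 1e-12)
    L = A * dinv[:, None] * dinv[None, :]
    w, V = np.linalg.eigh(L)           # eigenvalues ascending
    Y = V[:, -k:]
    Y /= np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12
    # k-means on embedded rows, farthest-point initialization.
    centers = Y[[0]]
    for _ in range(1, k):
        d = ((Y[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(1)
        centers = np.vstack([centers, Y[d.argmax()]])
    for _ in range(iters):
        labels = ((Y[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = Y[labels == j].mean(0)
    return labels
```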
