
Interactive image segmentation
Sara Vicente 1 (s.vicente@adastral.ucl.ac.uk)
Supervised by Vladimir Kolmogorov 1 and Carsten Rother 2
1 University College London, 2 Microsoft Research Cambridge


Presentation Transcript


The aim of interactive image segmentation is to extract an object from an image by segmenting the image into two regions: background and foreground. To reduce the problems of fully automatic segmentation, the user imposes some hard constraints: a lasso or rectangle around the object, or the specification of regions that must belong to the background or to the foreground.

GrabCut overview

Goal: assign to each pixel a label (0 = background, 1 = foreground), dividing the image into two regions.

User input: a trimap with three regions: TF (foreground), TU (unknown region) and TB (background).

Model:
• Colour agreement: the colour of a pixel should agree with the colour model of the label assigned to it (colour models are computed for background and foreground).
• Regional coherence: neighbouring pixels should be assigned the same label, especially if their colours are similar.
Different weights can be given to the two components of the model, producing very distinct results; extreme settings (an exaggerated colour-agreement weight or an exaggerated regional-coherence weight) lead to very different segmentations.

Iterative algorithm (from the first to the last iteration):
• Computes the segmentation using a standard minimum cut algorithm.
• Updates, in each iteration, the colour models for background and foreground based on the segmentation from the previous iteration.
A code sketch of one such iteration, and a usage example of an off-the-shelf implementation, are given after the references.

Improving GrabCut: introducing flux

GrabCut "shrinking" effect: for some images, the GrabCut algorithm has a shrinking effect, cutting off elongated structures.

Results with flux: it was proven in [1] that it is possible to integrate the optimization of flux into the GrabCut framework. This integration should prevent the shrinking effect. The vector field for which the flux is optimized must be chosen carefully in order to achieve the desired results.

Future work
• Development and testing of new vector fields that can be used for flux optimization.
• Learning the parameters of the model: the weights of the different components (agreement with data, regional coherence and flux).
• Evaluation of the new model using a more complete database of images.

References:
[1] Vladimir Kolmogorov and Yuri Boykov. What metrics can be approximated by geo-cuts, or global optimization of length/area and flux. In ICCV '05, 2005.
[2] C. Rother, V. Kolmogorov and A. Blake. "GrabCut": Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics (SIGGRAPH '04), 2004.
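Code sketch: one GrabCut-style iteration. The following is a minimal sketch, not the authors' implementation, of a single iteration: the colour models are refit to the current segmentation, and a minimum cut then combines the colour-agreement (unary) and regional-coherence (pairwise) terms. It assumes a float RGB image with values in [0, 1], scikit-learn for the Gaussian mixture colour models and PyMaxflow for the minimum cut; the function name grabcut_iteration, the 5 mixture components and gamma = 50 are illustrative choices, and the hard constraints from TB/TF (very large terminal capacities on those pixels) are omitted for brevity.

```python
import numpy as np
import maxflow  # PyMaxflow
from sklearn.mixture import GaussianMixture

def grabcut_iteration(img, fg_mask, gamma=50.0, n_components=5):
    """One illustrative iteration. img: float RGB in [0, 1], shape (h, w, 3).
    fg_mask: boolean array with the current foreground estimate."""
    h, w, _ = img.shape
    pixels = img.reshape(-1, 3)

    # Update the colour models from the previous segmentation
    # (one GMM for foreground, one for background).
    fg_gmm = GaussianMixture(n_components).fit(pixels[fg_mask.ravel()])
    bg_gmm = GaussianMixture(n_components).fit(pixels[~fg_mask.ravel()])

    # Unary terms ("colour agreement"): negative log-likelihood of each
    # pixel under each colour model.
    D_fg = -fg_gmm.score_samples(pixels).reshape(h, w)
    D_bg = -bg_gmm.score_samples(pixels).reshape(h, w)
    # Shift so all terminal capacities are non-negative (only differences matter).
    m = min(D_fg.min(), D_bg.min())
    D_fg, D_bg = D_fg - m, D_bg - m

    # Build the graph: source = foreground, sink = background.
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes((h, w))
    # Cutting the sink edge labels a pixel foreground, so it carries D_fg;
    # cutting the source edge labels it background, so it carries D_bg.
    g.add_grid_tedges(nodes, D_bg, D_fg)

    # Pairwise terms ("regional coherence"): contrast-sensitive smoothness,
    # weight = gamma * exp(-beta * ||I_p - I_q||^2).
    diffs = np.concatenate([
        ((img[:, 1:] - img[:, :-1]) ** 2).sum(-1).ravel(),
        ((img[1:, :] - img[:-1, :]) ** 2).sum(-1).ravel()])
    beta = 1.0 / (2.0 * diffs.mean() + 1e-8)
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    d2 = ((img[y, x] - img[ny, nx]) ** 2).sum()
                    wgt = gamma * np.exp(-beta * d2)
                    g.add_edge(nodes[y, x], nodes[ny, nx], wgt, wgt)

    # Standard minimum cut gives the new segmentation.
    g.maxflow()
    return ~g.get_grid_segments(nodes)  # True = foreground
```

In the full algorithm this function would be called repeatedly, feeding each new segmentation back in as fg_mask until the labelling stops changing.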

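Usage example: rectangle-initialised GrabCut in OpenCV. OpenCV ships an implementation of the algorithm of [2], so the rectangle-based workflow described above can be reproduced in a few lines; the file name, rectangle coordinates and iteration count below are placeholders.

```python
import numpy as np
import cv2

img = cv2.imread("input.jpg")  # placeholder file name
mask = np.zeros(img.shape[:2], dtype=np.uint8)

# Temporary arrays OpenCV uses to store the two colour (GMM) models.
bgd_model = np.zeros((1, 65), dtype=np.float64)
fgd_model = np.zeros((1, 65), dtype=np.float64)

# User input: a rectangle (x, y, width, height) around the object.
# Everything outside is treated as definite background (TB);
# everything inside starts as unknown (TU).
rect = (50, 50, 300, 400)  # placeholder coordinates

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels labelled definite or probable foreground form the extracted object.
fg = ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)
cv2.imwrite("foreground.png", img * fg[:, :, None])
```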