
Rigid and Non-Rigid Classification Using Interactive Perception


Presentation Transcript


  1. Rigid and Non-Rigid Classification Using Interactive Perception Bryan Willimon, Stan Birchfield, Ian Walker Department of Electrical and Computer Engineering Clemson University IROS 2010

  2. What is Interactive Perception? • Interactive Perception is the concept of gathering information about a particular object through interaction • Raccoons and cats use this technique to learn about their environment using their front paws.

  3. What is Interactive Perception? • The information gathered either • complements information obtained through vision • or adds new information that cannot be determined through vision alone

  4. Previous Related Work on Interactive Perception • Adding new information: learning about prismatic and revolute joints on planar rigid objects (D. Katz and O. Brock. Manipulating articulated objects with interactive perception. ICRA 2008) • Complementing: segmentation through image differencing (P. Fitzpatrick. First Contact: an active vision approach to segmentation. IROS 2003) • Previous work focused on rigid objects

  5. Goal of Our Approach • Pipeline: Isolated Object → Learn about Object → Classify Object

  6. Color Histogram Labeling • Use the color values (RGB) of the object to create a 3-D histogram • Each histogram is normalized by the number of pixels in the object to create a probability distribution • Each histogram is then compared to the histograms of previously seen objects using histogram intersection to find a match • The white (object) area is found using the same technique as in the graph-based segmentation and is used as a binary mask to locate the object in the image
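A minimal sketch of this step in Python/NumPy; the bin count per channel and the function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def color_histogram(image_rgb, mask, bins=8):
    """Normalized 3-D RGB histogram over the masked object pixels.

    image_rgb: HxWx3 uint8 image; mask: HxW boolean array (True on the object).
    The bin count per channel (8) is an assumption for illustration.
    """
    pixels = image_rgb[mask]                                  # Nx3 object pixels
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=((0, 256),) * 3)
    return hist / max(pixels.shape[0], 1)                     # probability distribution

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: sum of element-wise minima of two normalized histograms."""
    return np.minimum(h1, h2).sum()
```

A query object would then be matched to the stored histogram with the largest intersection score.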

  7. Skeletonization • Use the binary mask from the previous step to create a skeleton of the object • The skeleton is a single-pixel-wide medial representation of the object region • Prairie-fire analogy: fronts ignited at the boundary burn inward until they meet at the skeleton (Figure: thinning iterations 1, 3, 5, 7, 9, 10, 11, 13, 15, 17, and 47)
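A minimal sketch, assuming scikit-image as a stand-in for whatever thinning routine the authors actually used; only the idea of reducing the mask to a one-pixel-wide skeleton comes from the slide.

```python
import numpy as np
from skimage.morphology import skeletonize

def object_skeleton(mask):
    """Reduce the binary object mask to a one-pixel-wide skeleton.

    skeletonize() iteratively removes boundary pixels, which mirrors the
    prairie-fire idea of boundary fronts burning inward until they meet.
    mask: HxW boolean array from the segmentation step.
    """
    return skeletonize(mask.astype(bool))
```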

  8. Monitoring Object Interaction • Use KLT feature points to track the movement of the object as the robot interacts with it • Only feature points on the object are considered; all other points are disregarded • Calculate the distance between each pair of feature points every flength frames (flength = 5)
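A rough sketch of the tracking step, assuming OpenCV's KLT tracker; the detector parameters and the handling of lost features are illustrative, not from the paper.

```python
import cv2
import numpy as np

FLENGTH = 5  # sample feature positions every 5 frames, as stated on the slide

def track_object_features(frames, mask):
    """Track KLT features inside the object mask across a sequence of frames.

    frames: list of grayscale uint8 images; mask: HxW boolean object mask.
    Returns feature positions sampled every FLENGTH frames.  In practice,
    features whose status flag is 0 (lost) should be discarded.
    """
    pts = cv2.goodFeaturesToTrack(frames[0], maxCorners=200, qualityLevel=0.01,
                                  minDistance=5, mask=mask.astype(np.uint8) * 255)
    samples = [pts.reshape(-1, 2).copy()]
    prev = frames[0]
    for i in range(1, len(frames)):
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, frames[i], pts, None)
        prev = frames[i]
        if i % FLENGTH == 0:
            samples.append(pts.reshape(-1, 2).copy())
    return samples
```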

  9. Monitoring Object Interaction (cont.) • Idea: features on the same rigid part keep a roughly constant pairwise distance, while features on different parts have pairwise distances that vary as the object moves • Features are separated into groups by measuring how much each pairwise distance changes after flength frames • If the distance between two features changes by less than a threshold, they are placed in the same group • Otherwise, they belong to different groups • Separate groups correspond to separate parts of the object
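A minimal sketch of the grouping idea; the threshold value and the flood-fill grouping over the "same part" relation are illustrative assumptions.

```python
import numpy as np

def group_features(p_before, p_after, threshold=3.0):
    """Group features whose pairwise distance stays nearly constant.

    p_before, p_after: Nx2 positions of the same features, FLENGTH frames apart.
    threshold (pixels) is an illustrative value, not taken from the paper.
    Returns one integer label per feature; each label is one rigid part.
    """
    n = len(p_before)
    d0 = np.linalg.norm(p_before[:, None] - p_before[None, :], axis=2)
    d1 = np.linalg.norm(p_after[:, None] - p_after[None, :], axis=2)
    same = np.abs(d1 - d0) < threshold        # pairs that moved rigidly together

    labels = -np.ones(n, dtype=int)           # flood fill over the 'same' relation
    group = 0
    for i in range(n):
        if labels[i] < 0:
            stack = [i]
            labels[i] = group
            while stack:
                j = stack.pop()
                for k in np.where(same[j] & (labels < 0))[0]:
                    labels[k] = group
                    stack.append(k)
            group += 1
    return labels
```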

  10. Labeling Revolute Joints using Motion • For each feature group, fit an ellipse that encloses all of the group's features • Calculate the major axis of the ellipse using PCA • The two endpoints of the major axis correspond to a revolute joint and to the tip of the extremity
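A minimal sketch of the PCA step; placing the endpoints at the extent of the points along the principal direction is an assumption for illustration.

```python
import numpy as np

def major_axis_endpoints(points):
    """Return the two endpoints of a feature group's major axis via PCA.

    points: Nx2 array of feature positions belonging to one group.
    """
    mean = points.mean(axis=0)
    centered = points - mean
    # Principal direction = eigenvector of the covariance with the largest eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    direction = eigvecs[:, np.argmax(eigvals)]
    # Half-length taken as the extent of the points along that direction.
    extent = np.abs(centered @ direction).max()
    return mean - extent * direction, mean + extent * direction
```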

  11. Labeling Revolute Joints using Motion (cont.) • Using the skeleton, locate intersection points and end points • Intersection points (Red) = Rigid or Non-rigid joints • End points (Green) = Interaction points • Interaction points are locations that the robot uses to “push” or “poke” the object
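A minimal sketch of how the red intersection points and green end points might be located on the skeleton by counting 8-connected skeleton neighbors; this neighbor-count rule is a standard heuristic, assumed here rather than taken from the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def skeleton_keypoints(skel):
    """Find end points and intersection points of a one-pixel-wide skeleton.

    skel: HxW boolean skeleton image.  A skeleton pixel with exactly one
    skeleton neighbor is an end point (interaction point); a pixel with
    three or more neighbors is an intersection point (candidate joint).
    """
    skel = skel.astype(np.uint8)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbors = convolve(skel, kernel, mode='constant')
    end_points = (skel == 1) & (neighbors == 1)
    intersections = (skel == 1) & (neighbors >= 3)
    return end_points, intersections
```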

  12. Labeling Revolute Joints using Motion (cont.) • Map the estimated revolute joint from the major axis of the ellipse to the actual joint in the skeleton • After multiple interactions by the robot, a final skeleton is produced with the revolute joints labeled (red)
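One plausible way to do this mapping is to snap each estimated joint to the nearest skeleton intersection point; the nearest-point rule is an assumption for illustration.

```python
import numpy as np

def snap_joint_to_skeleton(joint_xy, intersection_mask):
    """Map an estimated revolute joint onto the nearest skeleton intersection.

    joint_xy: (x, y) estimate from the ellipse major axis;
    intersection_mask: boolean image of skeleton intersection points.
    """
    ys, xs = np.nonzero(intersection_mask)
    if len(xs) == 0:
        return None                                   # nothing to snap to
    d2 = (xs - joint_xy[0]) ** 2 + (ys - joint_xy[1]) ** 2
    i = int(np.argmin(d2))
    return int(xs[i]), int(ys[i])
```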

  13. Experimental Results • Sorting: socks and shoes • Articulated rigid object: pliers • Classification experiment: toys

  14. Results: Articulated rigid object (Pliers) • Comparing objects of the same type to those of similar work* • Pliers from our results compared to shears in their results* (Figure: revolute joint found by our approach vs. the Katz-Brock approach) *D. Katz and O. Brock. Manipulating articulated objects with interactive perception. ICRA 2008

  15. Results (cont.): Classification Experiment (Toys) (Figure: final skeleton used for classification)

  16. Results (cont.): Classification Experiment (Toys) (Figure: toy objects 1–4)

  17. Results (cont.): Classification Experiment (Toys) (Figure: toy objects 5–8)

  18. Results (cont.): Classification Experiment • Classification experiment without use of the skeleton (Figure: match matrix with the misclassification marked; rows = query image, columns = database image)

  19. Results (cont.): Classification Experiment • Classification experiment with use of the skeleton (Figure: match matrix with the earlier misclassification corrected; rows = query image, columns = database image)

  20. Results (cont.): Sorting using socks and shoes (Figure: sock and shoe objects 1–5)

  21. Results (cont.): Sorting using socks and shoes • Classification experiment without use of the skeleton (Figure: match matrix with the misclassification marked)

  22. Results (cont.): Sorting using socks and shoes • Classification experiment with use of the skeleton (Figure: the misclassification is corrected)

  23. Conclusion • The results demonstrated that our approach provides a way to classify rigid and non-rigid objects and label them for sorting and/or pairing purposes • Most previous work considers only planar rigid objects • This approach builds on and goes beyond previous work in the scope of “interactive perception” • We gather more information through interaction, such as the object's skeleton, color, and movable joints • Other works only look to segment the object or to find revolute and prismatic joints

  24. Future Work • Create a 3-D environment instead of a 2-D environment • Modify classification area to allow for interactions from more than 2 directions • Improve the gripper of the robot for more robust grasping • Enhance classification algorithm and learning strategy • Use more characteristics to properly label a wider range of objects

  25. Questions?
