
How do we know that we solved vision?



Presentation Transcript


  1. How do we know that we solved vision? 16-721: Learning-Based Methods in Vision A. Efros, CMU, Spring 2009

  2. Columbia Object Image Library (COIL-100) (1996)

  3. Corel Dataset

  4. Yu & Shi, 2004

  5. Average Caltech categories (Torralba)

  6. { } all photos Flickr.com

  7. Flickr Paris

  8. Real Paris

  9. Automated Data Collection Kang, Efros, Hebert, Kanade, 2009

  10. Something More Objective? • Famous Tsukuba Image • Middlebury Stereo Dataset

  11. Issue 1 • We might be testing too soon… • Need to evaluate the entire system: • Give it enough data • Ground it in the physical world • Allow it to affect / manipulate its environment • Do we need to solve Hard AI? • Maybe not. We don’t need Human Vision per se – how about Rat Vision?

  12. Issue 2 • We might be looking for “magic” where none exists…

  13. Valentino Braitenberg, Vehicles Source Material: http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/Margin/Vehicles/index.html Introduces a series of (hypothetical) simple robots that seem, to the outside observer, to exhibit complex behavior. The complex behavior does not come from a complex brain, but from a simple agent interacting with a rich environment. Vehicle 1: Getting around. A single sensor is attached to a single motor, and the propulsion of the motor is proportional to the signal detected by the sensor. The vehicle always moves in a straight line, slowing down in the cold and speeding up in the warm. Braitenberg: “Imagine, now, what you would think if you saw such a vehicle swimming around in a pond. It is restless, you would say, and does not like warm water. But it is quite stupid, since it is not able to turn back to the nice cold spot it overshot in its restlessness. Anyway, you would say, it is ALIVE, since you have never seen a particle of dead matter move around quite like that.”
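Vehicle 1's mechanism (one sensor wired to one motor, speed proportional to the sensed signal) is simple enough to sketch directly. This is a minimal one-dimensional simulation, not Braitenberg's own formulation; the `warm_pool` signal field, the gain, and the time step are all hypothetical choices for illustration:

```python
def vehicle1_step(position, signal_at, gain=1.0, dt=0.1):
    """One step of a Vehicle-1-style agent: motor speed is proportional
    to the sensor reading, so the vehicle only ever moves straight ahead.
    `signal_at` is a hypothetical function mapping position -> signal."""
    speed = gain * signal_at(position)  # stronger signal -> faster motor
    return position + speed * dt       # no steering: a straight line

# Hypothetical "warm patch" centered at x = 10: warmer water, stronger signal.
def warm_pool(x):
    return 1.0 + max(0.0, 5.0 - abs(x - 10.0))

pos = 0.0
trace = []
for _ in range(60):
    pos = vehicle1_step(pos, warm_pool)
    trace.append(pos)
# The vehicle visibly accelerates as it nears the warm patch, yet there is
# no brain here at all -- just a proportional wire from sensor to motor.
```

An outside observer watching `trace` would see "restless" speeding-up behavior near the warm region, which is exactly the point of the slide: rich-looking behavior from a trivial agent in a structured environment.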

  14. More complex vehicles

  15. Moral of the Story • “Law of Uphill Analysis and Downhill Invention”: machines are easy to understand if you’re creating them, but much harder to understand “from the outside”. • Psychological consequence: if we don’t know the internal structure of a machine, we tend to overestimate its complexity.

  16. Turing Tests for Vision • Your thoughts…

  17. Have we solved vision if we solve all the boundary cases? Varum

  18. Computer Vision Database Zhaoyin Jia • Object segmentation/recognition: detailed segmentation/labeling of all the scenes in life. • Semantic meaning in image/video: human understanding of the image and the story behind the image. • Feeling/reaction after understanding. (Slide image labels: “During the Spring break”, “Before the deadline”, “In the class”, “Best project in 16721”, “Love, Kiss”, “Failed in 16721”, “Cute, Adorable”, “Safe”, “Threatened, Run, Call for help”, “More threatened, Run faster, Need more help”.)

  19. How do we know that we solved vision? Yuandong Tian • General Rule: Turing test • If CVS == HVS in training & performance & speed & failure cases, then we declare vision is solved. Beers and being laid off. • Verifiable Specific Rules: • Challenges in Training • Fully automatic object discovery & categorization from unlabeled, long video sequences. • Multi-view, robust, real-time recognition of tens of thousands of objects, given few training examples of each object. • Challenges in Performance • Pixel-wise localization and registration in cluttered and degraded scenes; • Long-term, real-time, robust tracking of generic objects in cluttered and degraded video sequences. • Human failure – human vision illusions • Able to explain human vision illusions, and reproduce them. • Conclusion: Good luck to all! 16-721: Learning-Based Methods in Vision
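The slide's general rule (declare vision solved only if the computer vision system matches the human visual system on training, performance, speed, and failure cases) could be written down as a checklist. This is only a sketch of the slide's idea; the axis names, the score dictionaries, and the tolerance are all hypothetical:

```python
def vision_turing_test(cvs_scores, hvs_scores, tol=0.05):
    """Sketch of the CVS == HVS rule: the system passes only if it matches
    human scores on EVERY axis, including the failure cases. Scores are
    hypothetical values in [0, 1]; the axis names are illustrative."""
    axes = ("training", "performance", "speed", "failure_cases")
    return all(abs(cvs_scores[a] - hvs_scores[a]) <= tol for a in axes)

# Illustrative (made-up) numbers: matching humans everywhere is the hard part.
human = {"training": 0.9, "performance": 0.95, "speed": 0.9, "failure_cases": 0.8}
machine = {"training": 0.4, "performance": 0.5, "speed": 0.9, "failure_cases": 0.1}
```

The interesting design point is the `failure_cases` axis: under this rule a system that is *better* than humans on illusions would still fail the test, which is consistent with the slide's demand that a solved vision system also explain and reproduce human visual illusions.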

  20. Turing Test for Vision • From the blog: • No overall test. Vision is task-dependent. Do one problem at a time. • Use Computer Graphics to generate tons of test data • A well-executed Grand Challenge • Genre Classification in Video • The Ultimate Dataset (25-year-old grad student) • Need to handle corner cases / illusions. “Dynamic range of difficulty”.  • It’s all about committees, independent evaluations, and releasing source code • It’s hopeless…
