Perception for Robot Detection 2011/12/08
Robot Detection • Better Localization and Tracking • No Collisions with Other Robots
Goal: Robust Robot Detection • Long Range • Short Range
Long Range. Current Method: Heuristic Color-Based. Non-line white segments are clustered, and an extracted cluster is classified as a Nao robot if all three of the following criteria are satisfied (a sketch of the check follows): • the cluster contains more than 3 segments • the width-to-height ratio of the cluster is larger than 0.2 • the highest point of the cluster lies within 10 pixels of the field border line, since the observed robot should intersect the field border in the camera view when both the observing and the observed robot are standing on the field.
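A minimal C++ sketch of the three-criteria check, for illustration only; the Segment, Cluster, and isNaoRobot names are hypothetical stand-ins, and the actual implementation lives in the B-Human code release:

#include <cstdlib>
#include <vector>

// Hypothetical types; the real ones are defined in the B-Human code release.
struct Segment { int x, yTop, yBottom; };   // a non-line white segment
struct Cluster {
  std::vector<Segment> segments;
  int left, right, top, bottom;             // bounding box in image coordinates
};

// The three heuristic criteria described above.
bool isNaoRobot(const Cluster& c, int fieldBorderY)
{
  // 1. The cluster must contain more than 3 segments.
  if (c.segments.size() <= 3)
    return false;

  // 2. The width-to-height ratio must be larger than 0.2.
  const float width  = static_cast<float>(c.right - c.left);
  const float height = static_cast<float>(c.bottom - c.top);
  if (height <= 0.f || width / height <= 0.2f)
    return false;

  // 3. The highest point must lie within 10 pixels of the field border,
  //    since a standing robot intersects the border in the camera view.
  return std::abs(c.top - fieldBorderY) <= 10;
}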
Long Range. Improvement: Feature-Based. Considerations • Scale Invariance • Affine Invariance • Complexity. Possible Solutions • SIFT (Scale-Invariant Feature Transform) • SURF (Speeded-Up Robust Features) • MSER (Maximally Stable Extremal Regions)
Long Range. Improvement: Feature-Based • Offline: feature detection to build predefined models • Online: feature detection and object recognition against the predefined models. A matching sketch follows.
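As a sketch of the online step, matching a camera frame against one predefined model image with OpenCV; cv::SIFT ships in OpenCV's features2d module (since 4.4), and the 0.7 ratio threshold is just the conventional Lowe value, not something tuned for the NAO:

#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>

// Count ratio-test matches between a robot model image and a camera frame;
// a high count suggests the modeled robot view is present in the frame.
int countRobotMatches(const cv::Mat& modelImg, const cv::Mat& frame)
{
  cv::Ptr<cv::SIFT> sift = cv::SIFT::create();

  std::vector<cv::KeyPoint> kpModel, kpFrame;
  cv::Mat descModel, descFrame;
  sift->detectAndCompute(modelImg, cv::noArray(), kpModel, descModel);
  sift->detectAndCompute(frame,    cv::noArray(), kpFrame,  descFrame);
  if (descModel.empty() || descFrame.empty())
    return 0;

  // 2-NN matching with Lowe's ratio test to drop ambiguous matches.
  cv::BFMatcher matcher(cv::NORM_L2);
  std::vector<std::vector<cv::DMatch> > knn;
  matcher.knnMatch(descModel, descFrame, knn, 2);

  int good = 0;
  for (size_t i = 0; i < knn.size(); ++i)
    if (knn[i].size() == 2 && knn[i][0].distance < 0.7f * knn[i][1].distance)
      ++good;
  return good;
}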
Short Range. Sonar and Vision • Two-Stage • Stage 1: Sonar • Stage 2: Active Vision, Feet Detection (a sufficiently large white spot). A sketch of the two stages follows.
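A rough sketch of the two stages, assuming the NAOqi 1.x ALMemory keys for the ultrasound sensors (ALSonar must be subscribed so the sensors are driven); the 0.5 m trigger distance, the white-pixel value, and the minWhite threshold are illustrative, not tuned values:

#include <iostream>
#include <alproxies/almemoryproxy.h>

// Stage 1 (sonar): is anything within the trigger distance on either side?
bool obstacleClose(AL::ALMemoryProxy& memory, float triggerDistanceM)
{
  const float left  = (float) memory.getData("Device/SubDeviceList/US/Left/Sensor/Value");
  const float right = (float) memory.getData("Device/SubDeviceList/US/Right/Sensor/Value");
  return left < triggerDistanceM || right < triggerDistanceM;
}

// Stage 2 (active vision): look for a sufficiently large white spot, i.e.
// the other robot's feet, in the bottom half of a binarized image where
// white pixels have value 255.
bool feetVisible(const unsigned char* image, int width, int height, int minWhite)
{
  int white = 0;
  for (int y = height / 2; y < height; ++y)
    for (int x = 0; x < width; ++x)
      if (image[y * width + x] == 255)
        ++white;
  return white >= minWhite;
}

int main()
{
  AL::ALMemoryProxy memory("192.168.1.10", 9559); // robot IP is a placeholder
  if (obstacleClose(memory, 0.5f))                // 0.5 m, illustrative only
    std::cout << "sonar hit: run feet detection on the next image" << std::endl;
  return 0;
}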
New NAO. Possible Improvements • Using the two cameras: • one for the ball, the other for localization • one for feet detection, the other for localization • … • No downsampling: 320 x 240 -> 640 x 480
References • B-Human 2011 code release (bhuman11_coderelease) • SIFT: http://www.cs.ubc.ca/~lowe/keypoints/ • SURF paper: Speeded-Up Robust Features (SURF) • MSER tracking paper: Efficient Maximally Stable Extremal Region (MSER) Tracking
Perception for Robot Detection 2011/12/22
Front View. SURF: 209.318 ms (73 -> 201); ASIFT: 7 s (11655 -> 14237).
Back View. SURF: 125.309 ms (74 -> 186); ASIFT: 6 s (11377 -> 13930).
Side View. SURF vs. ASIFT: 7 s (7763 -> 13060), 18 matches.
False Alarm. Misclassifications of the field lines (SURF and ASIFT examples).
< 100 cm, front. SURF / ASIFT: 92 matches.
< 100 cm, back. SURF / ASIFT.
< 100 cm, side. SURF / ASIFT: 40 matches.
300 cm, front. SURF / ASIFT: 21 matches.
300 cm, side. ASIFT.
350 cm, front. ASIFT.
Conclusion • Detection performance is not significantly better than the current heuristic method • Processing time is an issue
Perception for Robot Detection 2012/1/5
Multi-Class Training Stage: Using AdaBoost • Classes = (different viewpoints of the Nao robot) x (different scales of the Nao robot) x (different illuminations) • Input: for each class, training images (I1, l1) … (In, ln), where li = 0, 1 for negative and positive examples, respectively. • Output: a strong classifier (a set of weighted weak classifiers) for each class. A sketch of the boosting loop follows.
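A minimal sketch of the AdaBoost training loop with decision stumps, trained one-vs-rest per class; it assumes each training image Ii has already been reduced to a fixed-length feature vector, which simplifies away the SIFT feature extraction used here. The stump search is exhaustive for clarity (a real implementation would sort feature values):

#include <cmath>
#include <vector>

struct Stump { int feature; float threshold; int polarity; float alpha; };

// Binary AdaBoost: X holds n feature vectors, l holds labels in {0, 1}.
// Returns the strong classifier as a set of weighted weak classifiers.
std::vector<Stump> trainAdaBoost(const std::vector<std::vector<float> >& X,
                                 const std::vector<int>& l, int rounds)
{
  const size_t n = X.size(), d = X[0].size();
  std::vector<double> w(n, 1.0 / n);           // start with uniform weights
  std::vector<Stump> strong;

  for (int t = 0; t < rounds; ++t) {
    Stump best = { 0, 0.f, 1, 0.f };
    double bestErr = 1.0;
    // Pick the stump (feature, threshold, polarity) with minimal weighted error.
    for (size_t f = 0; f < d; ++f)
      for (size_t i = 0; i < n; ++i)
        for (int p = -1; p <= 1; p += 2) {
          double err = 0.0;
          for (size_t j = 0; j < n; ++j) {
            const int predict = (p * (X[j][f] - X[i][f]) > 0.f) ? 1 : 0;
            if (predict != l[j]) err += w[j];
          }
          if (err < bestErr) {
            bestErr = err;
            best.feature = static_cast<int>(f);
            best.threshold = X[i][f];
            best.polarity = p;
          }
        }
    best.alpha = 0.5f * static_cast<float>(std::log((1.0 - bestErr) / (bestErr + 1e-10)));

    // Re-weight: increase the weight of misclassified examples, then normalize.
    double z = 0.0;
    for (size_t j = 0; j < n; ++j) {
      const int predict =
          (best.polarity * (X[j][best.feature] - best.threshold) > 0.f) ? 1 : 0;
      w[j] *= std::exp(best.alpha * (predict == l[j] ? -1.0 : 1.0));
      z += w[j];
    }
    for (size_t j = 0; j < n; ++j) w[j] /= z;

    strong.push_back(best);
  }
  return strong;
}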
Issues in the Training Stage • Number of classes: it depends on the invariance limits of the SIFT features (the range of viewing angles, scales, and illumination changes over which they remain invariant).
Detection Stage • Input: an image from the NAO camera • Output: • the number of robots in the image • the class each detected robot belongs to => a rough distance and facing direction for that robot
Issues in the Detection Stage • Speed: use sharing and non-sharing features to speed detection up. (Flowchart: input image -> SIFT feature extraction -> extracted features -> detection using sharing features from the training stage -> if yes, detection using non-sharing features from the training stage -> class 1 / class 2 / class 3; if no, stop.) A control-flow sketch follows.
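A control-flow sketch of that flowchart; the two classifier functions are stubs standing in for the classifiers produced in the training stage, and the Feature struct is a placeholder for a SIFT descriptor:

#include <vector>

struct Feature { float d[128]; };   // placeholder for a 128-D SIFT descriptor

// Stubs for the trained classifiers (hypothetical; real ones come from training).
static bool passesSharedClassifier(const std::vector<Feature>& f)
{ return !f.empty(); }              // cheap test over features shared by all classes
static int classifyWithNonSharedFeatures(const std::vector<Feature>& f)
{ return f.size() > 10 ? 1 : -1; }  // per-class test over non-sharing features

// Flowchart: extract SIFT features once, reject cheaply with the shared
// stage, and only on "yes" run the per-class stage. Returns a class id
// (=> rough distance and facing direction) or -1 for "no robot".
int detectRobotClass(const std::vector<Feature>& features)
{
  if (!passesSharedClassifier(features))
    return -1;
  return classifyWithNonSharedFeatures(features);
}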
References • Hand Posture Recognition Using Adaboost with SIFT for Human Robot Interaction • Sharing features: efficient boosting procedures for multiclass object detection
Aldebaran SDK 2012/3/16
NAOqi Framework. NAOqi is the main process; it acts as a broker, i.e. a module look-up server through which modules find and call each other. A minimal call example follows.
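For example, a remote program only needs the robot's IP and the NAOqi port for the broker to resolve a module by name; ALTextToSpeechProxy is a standard NAOqi C++ SDK proxy, and the IP below is a placeholder:

#include <alproxies/altexttospeechproxy.h>

int main()
{
  // The broker on the robot (port 9559 by default) looks up the running
  // ALTextToSpeech module and routes the call to it.
  AL::ALTextToSpeechProxy tts("192.168.1.10", 9559); // placeholder robot IP
  tts.say("Module look-up worked");
  return 0;
}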
Aldebaran Modules • Local Modules: • Compiled as a library (xxxx.so); can only be used on the robot. • More efficient than a remote module. • Launched in the same process; they speak to each other through only ONE broker, and can share variables and call each other's methods without serialization or networking. • Remote Modules: • Compiled as an executable file (xxxx); can be run outside the robot. • Lower performance in terms of speed and memory usage. • Modules communicate with each other over the network. A minimal local-module skeleton follows.
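A minimal local-module skeleton, assuming the NAOqi 1.12 ALModule API (the module and method names are examples only); built as a library, it is loaded straight into the NAOqi process and shares its broker:

#include <iostream>
#include <string>
#include <boost/shared_ptr.hpp>

#include <alcommon/albroker.h>
#include <alcommon/albrokermanager.h>
#include <alcommon/almodule.h>

// A do-nothing local module exposing one bound method.
class MyModule : public AL::ALModule
{
public:
  MyModule(boost::shared_ptr<AL::ALBroker> broker, const std::string& name)
    : AL::ALModule(broker, name)
  {
    setModuleDescription("Example local module.");
    functionName("sayHello", getName(), "Prints a greeting.");
    BIND_METHOD(MyModule::sayHello);
  }

  void sayHello() { std::cout << "Hello from MyModule" << std::endl; }
};

// Entry point NAOqi looks for when loading the library on the robot.
extern "C" int _createModule(boost::shared_ptr<AL::ALBroker> broker)
{
  AL::ALBrokerManager::setInstance(broker->fBrokerManager.lock());
  AL::ALBrokerManager::getInstance()->addBroker(broker);
  AL::ALModule::createModule<MyModule>(broker, "MyModule");
  return 0;
}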
BHuman. libbhuman is an Aldebaran module that manages the NAO's hardware-related memory (joints, sensor data).
C++ SDK 1.12 Installation • Installation Guide: http://www.aldebaran-robotics.com/documentation/dev/cpp/install_guide.html • Related Files on the lab server: /usr/home/markcsie/AldebaranSDK • Requirements:
1. Linux (Ubuntu 10.04)
2. gcc > 4.4
3. CMake 2.8 (used by qibuild)
4. qibuild-1.12
5. naoqi-sdk-1.12-linux32.tar.gz
6. nao-geode-cross-toolchain-1.12.0.tar.gz (for NAO 3.3)
7. nao-atom-cross-toolchain-1.12.0.tar.gz (for NAO 4.0)
8. nao-flasher-1.12.1.3-linux32.tar.gz (flasher)
9. opennao-geode-system-image-1.12.gz (OS for NAO 3.3)
10. opennao-atom-system-image-1.12.opn (OS for NAO 4.0)
11. IDE: QtCreator (optional)
Installation
1. Edit ~/.bashrc:
   export LD_LIBRARY_PATH=[path to sdk]/lib
   export PATH=${PATH}:~/.local/bin:~/bin
2. $ [path to qibuild]/install-qibuild.sh
3. $ cd [programming workspace]
   $ qibuild init --interactive (choose UNIX Makefiles)
4. $ qitoolchain create [toolchain name] [path to sdk]/toolchain.xml --default
Create and Build a Project
1. $ qibuild create [project name]
2. $ qibuild configure [project name] -c [toolchain name] (--release)
3. $ qibuild make [project name] -c [toolchain name] (--release)
4. $ qibuild open [project name]
Step 3 is equivalent to running the generated Makefile.
Cross Compile (Local Module)
$ qitoolchain create opennao-geode [path to cross toolchain]/toolchain.xml --default
$ qibuild configure [project name] -c opennao-geode
$ qibuild make [project name] -c opennao-geode
Get Image Example
The stock example may fail to compile due to OpenCV; see:
http://users.aldebaran-robotics.com/index.php?option=com_kunena&Itemid=14&func=view&catid=68&id=8133
A cleaned-up version (includes and cleanup added; the image-field comments are from the original):

#include <string>

#include <alproxies/alvideodeviceproxy.h>
#include <alvision/alvisiondefinitions.h>
#include <alvalue/alvalue.h>

#include <opencv2/core/core.hpp>

using namespace AL;

int main(int argc, char* argv[])
{
  const std::string robotIp = (argc > 1) ? argv[1] : "127.0.0.1";

  /** Create a proxy to ALVideoDevice on the robot. */
  ALVideoDeviceProxy camProxy(robotIp, 9559);

  /** Subscribe a client image requiring 320*240 and BGR colorspace. */
  const std::string clientName =
      camProxy.subscribe("test", kQVGA, kBGRColorSpace, 30);

  /** Create an IplImage header to wrap into an OpenCV image. */
  IplImage* imgHeader = cvCreateImageHeader(cvSize(320, 240), 8, 3);

  /** Retrieve an image from the camera.
   * The image is returned as a container object with the fields:
   * 0 = width, 1 = height, 2 = number of layers,
   * 3 = color space index (see alvisiondefinitions.h),
   * 4 = time stamp (seconds), 5 = time stamp (micro seconds),
   * 6 = image buffer (size = width * height * number of layers). */
  ALValue img = camProxy.getImageRemote(clientName);

  /** Access the image buffer (field 6) and wrap it in the OpenCV header. */
  imgHeader->imageData = (char*) img[6].GetBinary();

  /** Release the image and clean up the subscription. */
  camProxy.releaseImage(clientName);
  camProxy.unsubscribe(clientName);
  cvReleaseImageHeader(&imgHeader);
  return 0;
}
NAO OS • NAO 3.X : http://www.aldebaran-robotics.com/documentation/software/naoflasher/rescue_nao_v3.html?highlight=flasher • NAO 4.0 : http://www.aldebaran-robotics.com/documentation/software/naoflasher/rescue_nao_v4.html
Connect to NAO • Wired Connection (Windows only): plug in the Ethernet cable, then press the chest button; NAO will speak its IP address. Connect to NAO using a web browser. • Wireless Connection: http://www.aldebaran-robotics.com/documentation/nao/nao-connecting.html
Software, Documentation and Forum http://users.aldebaran-robotics.com/ Account: nturobotpal Password: xxxxxxxxxx