Dead-reckoning navigation for autonomous robots using a list of snapshots
Rijo Santhosh - Dept. of Engineering & Physics, Tarleton State University
Mentor: Dr. Mircea Agapie

ABSTRACT

The topic of machine learning is at the forefront of Artificial Intelligence and Robotics research today. There is biological evidence that, in order to perform navigation, organisms rely on two major mechanisms:
1. Memorization of simplified representations of their environment, called snapshots, and
2. Organization of the snapshots in an internal map.
We propose a biologically-inspired two-stage algorithm for autonomous navigation of a path. In the first stage, the robot is “taught” the path under human supervision. It captures a series of snapshots and directions along the way, memorizing them in a list. In the second stage, the robot operates autonomously, using the information in the list to navigate the same path. To avoid the pitfalls of classical dead-reckoning, we implement simple self-correcting behaviors: the robot compares the memorized snapshots with the real-time image of the environment and moves in order to minimize the perceived error. The first phase of our algorithm was successfully tested using a simulated robot in a simulated environment.

INTRODUCTION

Robotic navigation is of major interest today in academia, government and private industry alike. In order to be useful, robots have to be capable of autonomous navigation, and this requires learning. Artificial Intelligence recognizes two main classes of learning algorithms: supervised and unsupervised. We propose a two-stage algorithm that combines the two: first the robot is guided by teleoperation on the given path, and then it is able to autonomously navigate the path as many times as needed.

Our work is also biologically-inspired [3]. It is known that, in order to perform navigation, organisms rely on (at least) two major mechanisms: memorization of simplified representations of their environment, called snapshots [2], and organization of the snapshots in an internal map [1].

Dead reckoning (DR) is a well-known navigation algorithm: at each step, the robot estimates its current position based upon a previously determined position and its known speed, direction and the time elapsed. If no feedback is taken from the environment, DR has a severe limitation: errors accumulate, sooner or later causing the robot to stray from the desired path. Our algorithm extends DR, using learning (supervised and unsupervised), snapshots and maps to correct the errors.
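As an illustration of the dead-reckoning update just described, the minimal sketch below advances a pose estimate from the previous pose, the commanded velocities and the elapsed time. The Pose type and the deadReckonStep function are illustrative names only and are not part of the poster's code; the AmigoBot's on-board odometry performs an equivalent computation.

#include <cmath>

// Illustrative pose estimate: position in mm, heading in radians.
struct Pose {
    double x, y, theta;
};

// Advance the pose from the commanded translational speed (mm/s),
// rotational velocity (rad/s) and elapsed time (s). Without feedback
// from the environment, any error in these inputs accumulates step by step.
Pose deadReckonStep(Pose p, double speed, double rotVel, double dt) {
    p.theta += rotVel * dt;
    p.x += speed * dt * std::cos(p.theta);
    p.y += speed * dt * std::sin(p.theta);
    return p;
}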
ROBOT AND SENSORS

We use the commercially-available AmigoBot™ and the accompanying software tools from ActivMedia [5].

Figure 1. Schematic of the AmigoBot
Figure 2. Placement of sonars

Eight sonars constitute the input sensors. The robot detects objects and estimates their distance by measuring the round-trip time of a ping signal, much like a bat. In our application, only the front sonars (2 and 3 in Fig.2) are used when moving the robot the desired distance.

The robot communicates with a PC across an 802.11 wireless network, with information packets sent back and forth every 100 ms. The navigation algorithm runs on the PC, while the lower-level tasks (e.g. running the sonars) are left to the robot's processor.

SIMULATED ENVIRONMENT

A simulated room is shown in Fig.3, with the robot starting from the upper-left corner (point A) in teleoperated mode and being led on a path with ten turns separated by variable distances. Fig.4 shows a partial list of directions (angles and distances) that was generated automatically during this test and stored in a text file on the PC. The list is truncated, with the last 90-degree turn corresponding to the lower-right corner of the room (C). For ease of processing, angle and distance information is stored in separate nodes of the list. The human operator can command consecutive turns, as well as consecutive forward motions.

Figure 3 (above). Sample path of the robot
Figure 4 (below). List of directions, stored as a file

C++ IMPLEMENTATION

The following function is responsible for turning the robot clockwise through a given angle (if the angle is negative, it turns counter-clockwise). It demonstrates how real-time feedback from the robot can be used to make sure that the desired motion has indeed completed:

void turnRight(int angle) {
    robot.setDeltaHeading(-angle);   // API function turns left
    for( ArUtil::sleep(3000);        // allow 3 sec. to complete the turn
         robot.getRotVel() != 0;     // is the robot still moving?
         ArUtil::sleep(500) );       // give it 0.5 sec. more
}

The robot is able to detect imminent collisions through the sonars, and it will stop and inform the operator when this happens, e.g. at point B in Fig.3. This is implemented through the following C++ function, which makes use of the ARIA API:

void moveDesiredDistance(int distance) {
    printf("%d", robot.getSonarRange(2));
    if ((robot.getSonarRange(2) < 500) &&
        (robot.getSonarRange(3) < 500)) {    // obstacle is closer than 500 mm
        printf("\n Cannot proceed, obstacle in front");
        printf("\n sonar 2 = %d \t", robot.getSonarRange(2));
        printf("\n sonar 3 = %d \t", robot.getSonarRange(3));
        printf("theta %.2f\n", robot.getTh());
        printf("\n Please select other options");
    } else {
        robot.move(distance);    // move if no collision
        ArUtil::sleep(5000);     // allow 5 sec. for the move to complete
    }
}

CONCLUSIONS and FUTURE WORK

• This project is in progress. Only the first phase, supervised learning, has been implemented, with the robot storing the list of directions in PC memory. We have worked exclusively in a simulator, with no tests so far on the real AmigoBot.
• The following steps will be taken to complete the project: storing sonar readings alongside angles and distances in the list; having the simulated robot navigate the path autonomously, first by dead reckoning alone, then using error-correcting behaviors (see the sketches after this list); and finally implementing the entire algorithm on the real robot.
• Future work will include:
  • Teaching the robot several paths (a repertoire), with the flexibility of choosing any of them for the autonomous navigation stage.
  • Developing an algorithm for returning home by “retracing the steps”.
  • Avoiding obstacles that were not encountered in the supervised stage (dynamic environment).
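The first of the two sketches referenced above shows how the memorized list of directions could drive an autonomous, dead-reckoning-only replay of the path using the two motion primitives from the C++ IMPLEMENTATION section. The DirectionNode type and the followPath function are illustrative names only; the poster specifies just that angles and distances are kept in separate nodes of a list.

#include <list>

// Illustrative node: each list entry holds either a turn angle (degrees)
// or a forward distance (mm), mirroring the separate nodes of Fig.4.
struct DirectionNode {
    enum Kind { TURN, MOVE } kind;
    int value;
};

// Replay the taught path by dead reckoning alone: walk the list and call
// the motion primitives shown in the C++ IMPLEMENTATION section in order.
void followPath(const std::list<DirectionNode>& path) {
    for (const DirectionNode& n : path) {
        if (n.kind == DirectionNode::TURN)
            turnRight(n.value);            // positive angle = clockwise turn
        else
            moveDesiredDistance(n.value);  // forward motion in mm
    }
}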

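The second sketch concerns the planned error-correcting behavior: comparing a memorized sonar snapshot with the live readings and quantifying the mismatch that the robot would then move to minimize. The eight-element snapshot format and the snapshotError name are assumptions, since this stage has not yet been implemented.

#include <cstdlib>

const int NUM_SONARS = 8;   // the AmigoBot carries eight sonars

// Sum of absolute differences between the stored snapshot and the current
// sonar ranges (both in mm); a correcting behavior would adjust heading or
// position so as to drive this error toward zero.
int snapshotError(const int stored[NUM_SONARS], const int current[NUM_SONARS]) {
    int error = 0;
    for (int i = 0; i < NUM_SONARS; i++)
        error += std::abs(stored[i] - current[i]);
    return error;
}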
REFERENCES

[1] M. Mataric, Navigating with a rat brain: A neurobiologically-inspired model for robot spatial representation, in J.-A. Meyer and S. Wilson, eds., From Animals to Animats: Proc. 1st Internat. Conf. on Simulation of Adaptive Behavior, 169-175, MIT Press, 1991.
[2] T.S. Collett, Insect navigation en route to the goal: Multiple strategies for the use of landmarks, The Journal of Experimental Biology, 199, 227-235, 1996.
[3] M.O. Franz and H.A. Mallot, Biomimetic robot navigation, Robotics and Autonomous Systems, 30:133-153, 2000.
[5] http://www.activrobots.com/ROBOTS/amigobot.html

CONTACT

Rijo Santhosh  st_santhosh@tarleton.edu
Dr. Mircea Agapie  agapie@tarleton.edu