
Adaptive Robotics COM2110 Autumn Semester 2008 Lecturer: Amanda Sharkey


Presentation Transcript


  1. Adaptive Robotics COM2110 Autumn Semester 2008 Lecturer: Amanda Sharkey

  2. Robots in the news • Bath University: a robot that jumps like a grasshopper and rolls like a ball • Created by PhD student Rhodri Armour • Can roll in any direction, and can jump over obstacles • Small motors build up energy in the springy spherical exoskeleton by compressing it. • Avoids problems of robots with legs and robots with wheels

  3. Behaviour-based robotics versus GOFAI • Control mechanisms: subsumption architecture; artificial neural nets (ANNs), their learning rules and limitations; genetic algorithms • Biological inspiration: forms of learning in biological organisms; organisation of biological systems; biological modelling • Examples: early robots, humanoid robots, examples of research papers, applications • Last week – AI, Magic and Deception

  4. a) Provide a brief account of the following terms, and their relevance to behaviour-based robotics: • (i) embodiment (10%) • (ii) reactivity (10%) • (iii) stigmergy (10%) • b) Consider, with reference to the notion of autopoiesis, whether or not strong embodiment is possible. (30%) • c) Identify and discuss what you see as the main strengths and weaknesses of the new approach to Artificial Intelligence. (40%)

  5. Classical AI (GOFAI) • E.g. chess players, expert systems, or traditional planning systems such as STRIPS or GPS (General Problem Solver) • Emphasis on manipulation of symbols, planning and reasoning • Centralised systems • Sequential

  6. Cognitivism • Cognition is the manipulation of abstract representations by explicit formal rules • Knowledge of the world stored as sentence-like descriptions using symbols • Symbol – stands for objects and concepts • E.g. the CYC project for creating a common-sense reasoner

  7. Problems with Classical systems • Lack of robustness • May not perform well in noisy conditions, or when some components break down • Lack of generalisation • May not perform well in novel situations • Real-time processing • Likely to be slower • Centralised

  8. Further problems with Classical AI • Little consideration of interaction between agent and the real world • Frame problem • How to model change • Symbol grounding • How to link the symbols being manipulated with the real world

  9. Frame problem • Daniel Dennett (1987) • Robot with propositional representations • E.g. INSIDE(R1, ROOM), ON(BATTERY, WAGON) • Spare battery in a room with a time bomb • R1 plans to pull the wagon, and the battery, out of the room. But the bomb is also on the wagon • R1D1, its successor, considers the implications of its actions – but is still deciding whether removing the wagon would change the colour of the walls when the bomb explodes

  10. Back to the drawing board: "We must teach it the difference between relevant implications and irrelevant implications." • So they developed a method of tagging implications as either relevant or irrelevant to the project at hand, and installed the method in R2D1. They found it sitting outside the room. • "Do something!" they yelled. "I am," it retorted. "I'm busily ignoring some thousands of implications I have determined to be irrelevant. Just as soon as I find an irrelevant implication, I put it on the list of those I must ignore, and…" • The bomb went off

  11. Symbol grounding problem and the Chinese Room • Gedanken (thought) experiment • Imagine a person in a room, who has a set of rule books. Sets of symbols are passed in to them, and they can process them, using the rule books, and send symbols out • The symbols going in are Chinese questions • The symbols going out are Chinese answers • The room seems to understand Chinese • But the person in the room does not understand Chinese • Similarly, a question answering computer program does not understand language • Computers don’t understand – they just manipulate symbols that are meaningless to them.

  12. Related papers • Harnad, S. (1990) The symbol grounding problem. Physica D, 42, 335-346. • Searle, J.R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417-424.

  13. Behaviour-based AI • AKA embodied cognitive science, new AI, new-wave AI • Brooks (1986): subsumption architecture • Emphasis on intelligence emerging from the interaction of the organism with the environment, and close coupling between sensors and motors • Brooks (1991) "Intelligence without Representation" and behaviour-based robotics
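As a rough illustration of the layering idea – a sketch only, not Brooks's actual implementation – here is a minimal subsumption-style controller in Python. The sensor names, thresholds, and behaviours are hypothetical.

```python
# Minimal subsumption-style controller sketch (hypothetical sensors/thresholds).
# Layers are ordered lowest-first; the highest active layer wins.

def wander(sensors):
    """Layer 0: drive forward when nothing else applies."""
    return ("forward", 0.5)

def avoid(sensors):
    """Layer 1: turn away from nearby obstacles."""
    if sensors["proximity"] > 0.8:
        return ("turn", -1.0)
    return None  # layer not active

LAYERS = [wander, avoid]  # avoid is higher priority and subsumes wander

def control_step(sensors):
    action = None
    for layer in LAYERS:      # later (higher) layers overwrite earlier ones
        out = layer(sensors)
        if out is not None:
            action = out
    return action

print(control_step({"proximity": 0.9}))  # ('turn', -1.0): avoid subsumes wander
print(control_step({"proximity": 0.1}))  # ('forward', 0.5)
```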

  14. Key concepts in Embodied Cognitive Science • Embodiment • Situatedness • Emphasis on interaction with the environment • Biological inspiration • Stigmergy • Emergence • Reactive behaviour • Decentralisation

  15. Reactive robotics • Grey Walter's electronic tortoises • Taxis and tropism • Phototropism, phototaxis • Phonotaxis • Coastal sea slug • Geotaxis, negative and positive phototaxis

  16. Biology • Biological modelling • Cricket phonotaxis • Cataglyphis desert ant • Task allocation • Understanding by building: synthetic modelling • Biological inspiration • Sorting (Holland and Melhuish) • Stigmergy • Emergence • Minimal representation

  17. Biological inspiration • Swarm robotics and swarm intelligence • Keep it simple: Minimal representation and reactive systems • Innate knowledge • Fixed action patterns • Learning and evolution • Classical conditioning • Operant conditioning • Neural nets • Genetic Algorithms

  18. Mechanisms • Subsumption architecture • Braitenberg vehicles • McCulloch and Pitts neurons • Main characteristics • Neural Nets and learning algorithms • Strengths and limitations • Hebbian learning • Delta rule • Generalised delta rule • Genetic Algorithms • Evolving neural nets
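To make two items on this list concrete, here is a toy McCulloch-Pitts-style threshold unit with a simple Hebbian weight update. All weights, inputs, and the threshold are made-up illustrative values.

```python
# Toy McCulloch-Pitts-style neuron with a Hebbian update (illustrative values).

def neuron(inputs, weights, threshold=0.5):
    """Binary threshold unit: fires (1) if the weighted sum exceeds threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

def hebbian_update(inputs, output, weights, rate=0.1):
    """Hebb's rule: strengthen weights where input and output are both active."""
    return [w + rate * x * output for x, w in zip(inputs, weights)]

weights = [0.3, 0.4]
x = [1, 1]
y = neuron(x, weights)            # 0.7 > 0.5, so the unit fires
weights = hebbian_update(x, y, weights)
print(y, weights)                 # 1 [0.4, 0.5]
```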

  19. Strengths and limitations of reactive systems • A reactive system is one "where sensors and motors are directly linked and which always react to the same sensory state with the same motor action" (Nolfi and Floreano, 2000) • E.g. Grey Walter's electronic tortoises • E.g. a reactive robot with a Braitenberg controller – a simple neural network, e.g. fully connected perceptrons without internal layers or any form of internal organisation.
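A minimal sketch of the kind of Braitenberg-style controller described here: fixed weights link sensors directly to motors (a perceptron with no internal layers), so the same sensory state always yields the same motor action. The crossed connections and weights are illustrative, not from any particular robot.

```python
# Braitenberg-style reactive controller: direct sensor-to-motor coupling.
# Crossed excitatory connections make the robot turn towards a light source.

def reactive_step(left_light, right_light):
    """Each motor speed is a fixed weighted sum of the sensor readings."""
    left_motor = 0.2 + 1.0 * right_light   # right sensor drives left motor
    right_motor = 0.2 + 1.0 * left_light   # left sensor drives right motor
    return left_motor, right_motor

# Light ahead-left: right motor spins faster, turning the robot towards it.
print(reactive_step(left_light=0.8, right_light=0.2))  # (0.4, 1.0)
```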

  20. Did Brooks et al. reject the idea of internal representation?

  21. Criticisms • For criticisms of the approach, see Anderson (2003) • "No complex intelligent creature can get by without representations…" Kirsh (1991)

  22. Criticisms • Ford et al. (1994): concerned that "The situationalists are attacking the very idea of knowledge representation – the notion that cognitive agents think about their environments, in large part, by manipulating internal representations of the worlds they inhabit"

  23. Criticisms • Vera and Simon (1993) argue that proponents of situated action are not saying anything different from proponents of physical symbol systems • Situated action proponents claim (according to Vera and Simon): • No internal representations • Direct access to affordances of the environment • No use of symbols

  24. Criticisms • But Vera and Simon point out that minimal representations are used • E.g. Pengi, and the notion of "the bee that is chasing me now", is still a symbol. • "If the SA approach is suggesting simply that there is more to understanding behaviour than describing internally generated, symbolic, goal-directed planning, then the symbolic approach has never disagreed." (Vera and Simon, 1993)

  25. Pengi explanation • Agre and Chapman (1987) • Pengi is a simulated agent that plays the video game Pengo • Plays without planning or using representations • E.g. escaping from “the bee that is chasing me now”, down “the corridor I’m running down”

  26. Also, limitations to reactive systems • Embodied and situated systems can sometimes solve quite complicated tasks without internal representations. • E.g. sensory-motor coordination (exploiting agent-environment interaction) can solve tasks, as agent can select favourable sensory patterns through motor actions. • But there are limits.

  27. Examples of problems that can be solved through sensory-motor coordination • Perceptual aliasing • Sensory ambiguity • Clearly, simple behaviours such as obstacle avoidance can be accomplished without internal representation

  28. Perceptual aliasing: two or more objects generate the same sensory pattern, but require different responses. • E.g. Khepera robot in an environment with 2 objects: • one with a black top, to be avoided • one with a white top, to be approached • Khepera: 8 infrared proximity sensors and a linear camera with a view angle of 30 degrees.

  29. If the robot approaches an object which is not in the view angle of the camera, it will receive an ambiguous sensory pattern. • Solution – turn towards the object, so that it falls in the view angle of the camera and the sensory pattern is disambiguated. • An example of active perception. • Similar behaviour is found in the fruit fly Drosophila, which moves to shift the perceived image to a certain location of the visual field. • But there are limits to this strategy – it will only be effective when the robot can find at least one sensory state not affected by the aliasing problem.
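A sketch of this active-perception strategy, with hypothetical readings: while the object lies outside the camera's 30-degree view angle the sensory pattern is ambiguous, so the robot turns towards it before classifying.

```python
# Active perception sketch: act so as to reach an unambiguous sensory state.
# Bearings, the 30-degree view angle, and camera readings are illustrative.

def classify(camera_top):
    return "avoid" if camera_top == "black" else "approach"

def active_perception_step(object_bearing, camera_reading):
    """Turn towards the object until it enters the camera's view angle."""
    if abs(object_bearing) > 15:          # outside the 30-degree view angle
        return ("turn", -object_bearing)  # ambiguous: proximity alone can't decide
    return (classify(camera_reading), 0)  # in view: pattern is now disambiguated

print(active_perception_step(object_bearing=40, camera_reading=None))    # ('turn', -40)
print(active_perception_step(object_bearing=5, camera_reading="black"))  # ('avoid', 0)
```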

  30. Example of active restructuring: Scheier et al. (1998). • Khepera should approach large and avoid small cylindrical objects in a walled area. • The robot receives information from 6 frontal proximity sensors. • A neural network is trained to discriminate between patterns corresponding to cylindrical objects and walls, or to different sizes of cylindrical object. • Poor performance on large/small cylindrical objects

  31. The problem is that sensory patterns belonging to different categories overlap. • "Put differently, the distance in sensor space for data originating from one and the same object can be large, while the distance between two objects from different categories can be small" (Scheier et al., 1998) • But Scheier et al. (1998) used artificial evolution to select the weights for the robots' neural controllers.

  32. Near-optimal performance after 40 generations. • The fittest individuals moved in the environment until they perceived an object (large or small). • Then they circled the object. • The circling behaviour resulted in different sensory patterns for different sizes of object. • I.e. sensory-motor coordination allowed the robots to obtain sensory patterns that could be easily discriminated.
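A minimal sketch of this style of artificial evolution over controller weights. The fitness function below is a stand-in; in Scheier et al.'s experiments fitness was scored from the robot's actual discrimination behaviour in the environment.

```python
# Minimal genetic algorithm over controller weights (toy fitness function).
import random

def fitness(weights):
    """Stand-in fitness: in the real task this would score the robot's
    discrimination behaviour over trials in the environment."""
    return -sum((w - 0.5) ** 2 for w in weights)

def mutate(weights, sigma=0.1):
    return [w + random.gauss(0, sigma) for w in weights]

population = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(20)]
for generation in range(40):                 # cf. near-optimal after 40 generations
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                 # keep the fittest individuals
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

print(round(fitness(population[0]), 3))      # best fitness approaches 0
```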

  33. So, some difficult tasks can be solved by exploiting environmental constraints through sensory-motor coordination and active perception. • But not all. • An alternative to simple reactive behaviour – robots that can exploit an internal dynamical state. • By using a neural network with recurrent connections
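A sketch of the idea, assuming illustrative weights: a single recurrent connection gives the unit an internal state, so the same input can produce different outputs depending on history – something no purely reactive system can do.

```python
# Recurrent connection gives the controller internal state (illustrative weights).

class RecurrentUnit:
    def __init__(self):
        self.state = 0.0                 # internal state carried between steps

    def step(self, x):
        # New state depends on the input AND on the previous state.
        self.state = 0.5 * x + 0.9 * self.state
        return 1 if self.state > 0.6 else 0

unit = RecurrentUnit()
# The same input (1.0) eventually yields a different output as state accumulates.
print([unit.step(1.0) for _ in range(4)])  # [0, 1, 1, 1]
```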

  34. Summary to date • Classical AI vs behaviour-based AI • What is reactive behaviour? • What can it accomplish? • Simple tasks, e.g. obstacle avoidance • Some harder tasks, by exploiting the environment and active perception • Limits – examples of tasks that require some internal state.

  35. Tasks that require reasoning: • activities that involve predicting the behaviour of other agents • activities which require responses to action in the future, e.g. avoiding future dangers • activities that require understanding from an objective perspective, e.g. following advice, or a new recipe • problem solving, e.g. how many sheets of paper are needed to wrap a package • creative activities, e.g. language use, musical performance.

  36. Mataric (2001) identifies behaviour-based systems as an alternative to reactive systems. • She identifies strengths and weaknesses of reactive systems. • Strengths: real-time responsiveness, scalability, robustness • Weaknesses: lack of state, inability to look into the past or future.

  37. Mataric (2001) characterisation of types of control • Reactive control: don’t think, react • Deliberative control: think hard, then act. • Hybrid control: think and act independently in parallel • Behaviour-based control: think the way you act.

  38. Behaviour-based control • Behaviours added incrementally – simplest first. • Behavioural modules can use internal representations when necessary • But no centralised knowledge
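A sketch of the contrast, with hypothetical behaviours: a module may keep its own local representation (here, a remembered home location), but there is no centralised world model shared between modules.

```python
# Behaviour-based control sketch: modules may keep local state,
# but there is no centralised knowledge store.

class GoHome:
    """Behaviour with its own internal representation: a remembered location."""
    def __init__(self):
        self.home = None                   # local state, private to this module

    def act(self, position, battery):
        if self.home is None:
            self.home = position           # remember where we started
        if battery < 0.2:
            return ("goto", self.home)
        return None

class Wander:
    """Purely reactive behaviour: no internal state at all."""
    def act(self, position, battery):
        return ("forward", 0.5)

behaviours = [GoHome(), Wander()]          # earlier = higher priority here

def control(position, battery):
    for b in behaviours:
        action = b.act(position, battery)
        if action is not None:
            return action

print(control((0, 0), battery=0.9))        # ('forward', 0.5)
print(control((5, 3), battery=0.1))        # ('goto', (0, 0))
```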

  39. Related idea: action-oriented representations (Clark, 1997) • Use of minimal representations – e.g. when looking for your coffee cup, you search for a yellow object • Partial models of the world which include only those aspects that are necessary to allow agents to achieve their goals (Brooks, 1991).

  40. But if the same body of information were needed in several activities, it might be more economical to deploy a more action-neutral encoding. • E.g. if knowledge about an object's location is to be used for many different purposes, it might be better to generate a single action-independent inner map that could be accessed by multiple routines.

  41. Reactive issue • Change from traditional AI: different emphasis on importance of mental representations. • Embodied and situated approach: minimal internal representations best viewed as partial models of the world which include only those aspects that are necessary to allow agents to achieve their goals (Brooks, 1991)

  42. Embodiment – robots deal with real objects in the real world, not symbols • Does that mean they can really be said to be intelligent and capable of thought? • Discuss…

  43. Embodiment • Key concept in embodied and situated AI • Idea that robots are physically embodied and can act on the world • Does the use of embodied robots make Strong AI possible?

  44. Weak AI: the computer is a valuable tool for the study of mind – i.e. we can formulate and test hypotheses rigorously • Strong AI: an appropriately programmed computer really is a mind, can be said to understand, and has cognitive states • Strong AI: "the implemented program, by itself, is constitutive of having a mind. The implemented program, by itself, guarantees mental life" Searle (1997)

  45. Problems: how can symbols have meaning? (Searle and Chinese room)

  46. Two possible solutions • Symbol grounding (not covered here) • Situated and embodied cognition

  47. Situated and Embodied cognition • Exemplified by Rodney Brooks • Approach emphasises construction of physical robots embedded in and interacting with the environment • No central controller • Subsumption architecture • No symbols to ground • Intelligence is found in interaction of robot with its environment.
