Intelligent Agent

Presentation Transcript


  1. Intelligent Agent Chapter 2

  2. Outline • Agent and Environment • Rationality • Performance Measure • Environment Type • Agent Type

  3. Agent Interacting with Environment • Agents include humans, robots, softbots, thermostats, etc. • The agent function f maps from percept histories to actions • The agent program runs on the physical architecture to produce f
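
The percept-histories-to-actions mapping can be sketched as a table-driven agent program (a minimal sketch; the class, table, and vacuum-world percepts here are illustrative, not the chapter's code):

```python
# Minimal sketch of the agent-function idea (illustrative names, not
# the chapter's code): the agent function maps the full percept
# history to an action; this agent program implements it by looking
# the accumulated history up in a table.

class TableDrivenAgent:
    def __init__(self, table):
        self.table = table      # maps percept-history tuples to actions
        self.percepts = []      # percept history seen so far

    def program(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")

# Toy vacuum world: a percept is (location, status).
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = TableDrivenAgent(table)
```

Note that the program receives only the current percept but realizes the agent function f by accumulating the history; the table must cover every possible history, which is why the later agent types replace it with more compact programs.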

  4. Fig 2.3

  5. Rational Agent • A rational agent is an agent that does the RIGHT thing • Characteristics of a rational agent: • tries to maximize the expected value of the performance measure (performance measure = degree of success) • on the basis of the evidence obtained from its percept sequence • using its built-in prior world knowledge • Rational ≠ omniscient, clairvoyant, or always successful • Right decision vs. lucky decision (example: playing the lotto)

  6. Rationality • Rationality → information gathering, learning, autonomy • Information gathering • modify future percepts • explore unknown environments • Learning • modify prior knowledge • Autonomy • learn to compensate for partial or incorrect prior knowledge • become independent of prior knowledge • success in a variety of environments → importance of learning

  7. Performance Measure, Environment, Actuators, Sensors • To design a rational agent, we must specify the task environment, which consists of PEAS (Performance measure, Environment, Actuators, Sensors) • Taxi driver • Performance measure: safe, fast, legal, comfortable trip, maximize profits • Environment: roads, other traffic, pedestrians, customers • Actuators: steering, accelerator, brake, signal, horn, display • Sensors: cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard or microphone to accept a destination
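
As a rough illustration, the taxi driver's PEAS description can be written down as plain data (the dictionary layout is an assumption for illustration only, not part of the chapter):

```python
# The taxi driver's PEAS description written down as plain data
# (the dictionary layout is an assumption for illustration only).
peas_taxi = {
    "performance": ["safe", "fast", "legal", "comfortable trip",
                    "maximize profits"],
    "environment": ["roads", "other traffic", "pedestrians", "customers"],
    "actuators":   ["steering", "accelerator", "brake", "signal",
                    "horn", "display"],
    "sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer",
                    "accelerometer", "engine sensors",
                    "keyboard or microphone"],
}
```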

  8. Internet Shopping Agent (exercise) • Performance measure: • Environment: • Actuators: • Sensors:

  9. Properties of Task Environments • Fully observable vs. Partially observable • Deterministic vs. Stochastic • Strategic: deterministic except for the actions of other agents • Episodic vs. Sequential • Static vs. Dynamic • Discrete vs. Continuous • Single agent vs. Multiagent • competitive, cooperative

  10. Task Environment Types • The real world is …
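
For concreteness, two familiar task environments can be classified along these dimensions; the assessments below follow the standard chess-with-a-clock and taxi-driving examples, and the dictionary layout is only illustrative:

```python
# Illustrative classification of two familiar task environments
# (assessments follow the standard chess-with-a-clock and taxi-driving
# examples; the dict layout is an assumption for illustration).
chess_with_clock = {
    "observable": "fully",      "deterministic": "strategic",
    "episodic": "sequential",   "static": "semi-dynamic",
    "discrete": "discrete",     "agents": "multi (competitive)",
}
taxi_driving = {
    "observable": "partially",  "deterministic": "stochastic",
    "episodic": "sequential",   "static": "dynamic",
    "discrete": "continuous",   "agents": "multi (cooperative and competitive)",
}
```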

  11. 2-3. Structure of Intelligent Agents • (diagram: perception in, action out, agent program marked "?") • Agent = architecture + program

  12. Types of Agents • Four basic types, in order of increasing generality: • Simple reflex agent • Reflex agent with state (keeps track of the world; also called a model-based reflex agent) • Goal-based agent • Utility-based agent • All of these can be turned into learning agents

  13. (1) Simple Reflex Agent • Characteristics • no plan, no goal • does not know what it wants to achieve • does not know what it is doing • Condition-action rule: if condition then action • Architecture: [Fig. 2.9]; program: [Fig. 2.10]
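
A simple reflex agent of the kind Fig. 2.10 describes can be sketched in Python (the rule encoding and vacuum-world percepts are illustrative assumptions): the chosen action depends only on the current percept, matched against an ordered list of condition-action rules.

```python
# Sketch of a simple reflex agent (rule encoding is an illustrative
# assumption): the action depends only on the current percept,
# matched against an ordered list of condition-action rules.

def simple_reflex_agent(rules, interpret_input):
    def program(percept):
        state = interpret_input(percept)
        for condition, action in rules:
            if condition(state):
                return action
        return "NoOp"               # no rule matched
    return program

# Toy vacuum world: a percept is (location, status).
rules = [
    (lambda s: s[1] == "Dirty", "Suck"),
    (lambda s: s[0] == "A", "Right"),
    (lambda s: s[0] == "B", "Left"),
]
agent = simple_reflex_agent(rules, interpret_input=lambda p: p)
```

Because the agent keeps no state, it cannot tell whether the other square is already clean; the model-based agent of the next slide addresses exactly this.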

  14. Fig 2.9 Simple reflex agent

  15. Fig 2.10 Simple Reflex Agent

  16. (2) Model-based Reflex Agent • Characteristics • a reflex agent with internal state • sensors do not provide the complete state of the world • Updating the internal state requires two kinds of knowledge, together called the model: • how the world evolves • how the agent's actions affect the world • Architecture: [Fig 2.11]; program: [Fig 2.12]
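
A sketch of the model-based idea, assuming a toy vacuum world: the internal state is updated from the last action and the new percept before any rule fires. The update function and rules below stand in for the two kinds of knowledge and are illustrative, not the chapter's code.

```python
# Sketch of a model-based reflex agent in a toy vacuum world: the
# internal state is updated from the last action and the new percept
# before a condition-action rule fires.  The update function and
# rules stand in for the two kinds of knowledge ("the model").

def model_based_agent(update_state, rules, initial_state):
    state = {"world": initial_state, "last_action": None}
    def program(percept):
        state["world"] = update_state(state["world"],
                                      state["last_action"], percept)
        for condition, action in rules:
            if condition(state["world"]):
                state["last_action"] = action
                return action
        state["last_action"] = "NoOp"
        return "NoOp"
    return program

# Toy model: remember the last observed status of each square.
def update(world, last_action, percept):
    loc, status = percept
    world = dict(world)          # keep updates functional
    world[loc] = status
    world["at"] = loc
    return world

rules = [
    (lambda w: w[w["at"]] == "Dirty", "Suck"),
    (lambda w: w.get("A") == "Clean" and w.get("B") == "Clean", "NoOp"),
    (lambda w: w["at"] == "A", "Right"),
    (lambda w: w["at"] == "B", "Left"),
]
agent = model_based_agent(update, rules, {"A": None, "B": None})
```

Unlike the simple reflex agent, the rules here can consult remembered facts, such as the status of the square the agent is not currently in, which the current percept alone cannot supply.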

  17. Fig 2.11 A model-based agent

  18. (3) Goal-based Agent • Characteristics • action depends on the GOAL (consideration of the future) • a goal is a desirable situation • chooses action sequences that achieve the goal • needs decision making, fundamentally different from condition-action rules • search and planning • appears less efficient, but is more flexible, because knowledge is represented explicitly and can be modified • Architecture: [Fig 2.13]
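
The "choose an action sequence to achieve the goal" step can be sketched as search, here with breadth-first search over a made-up one-dimensional world (the world model and goal are illustrative assumptions):

```python
# Sketch of the goal-based idea (world model and goal are made up):
# instead of firing a rule, the agent searches for an action
# sequence whose predicted outcome satisfies the goal.
from collections import deque

def plan(initial, goal_test, actions, result):
    """Breadth-first search for an action sequence that reaches the goal."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for a in actions(state):
            nxt = result(state, a)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [a]))
    return None                      # goal unreachable

# Toy world: squares 0..3 on a line; the goal is to reach square 3.
path = plan(0,
            lambda s: s == 3,
            lambda s: ["Right", "Left"],
            lambda s, a: min(3, s + 1) if a == "Right" else max(0, s - 1))
```

This also shows why the goal-based design is more flexible: changing the goal only means passing a different `goal_test`, with no rules to rewrite.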

  19. Fig 2.13 A model-based, goal-based agent

  20. (4) Utility-based Agent • Utility function • degree of happiness / quality of usefulness • maps internal states to a real number • (e.g., game playing) • Characteristics • generates high-quality behavior • makes rational decisions by looking for the higher utility value • an expected utility maximizer • can weigh several goals against each other • Structure: [Fig 2.14]
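
Expected-utility maximization can be sketched as follows; the outcome probabilities and utility values are invented purely for illustration:

```python
# Sketch of expected-utility maximization (outcome probabilities and
# utility values are invented for illustration): the agent picks the
# action whose outcome distribution has the highest expected utility.

def expected_utility(outcomes, utility):
    """outcomes: list of (probability, state) pairs."""
    return sum(p * utility(s) for p, s in outcomes)

def choose_action(actions, outcome_model, utility):
    return max(actions,
               key=lambda a: expected_utility(outcome_model(a), utility))

# Toy taxi choice: the highway is usually fast but occasionally jammed.
model = {
    "highway":  [(0.8, "fast"), (0.2, "jammed")],
    "backroad": [(1.0, "slow")],
}
u = {"fast": 10, "jammed": -5, "slow": 4}.get
best = choose_action(["highway", "backroad"], model.__getitem__, u)
```

The utility function is what lets the agent trade off conflicting goals (speed vs. risk here), which a bare goal test cannot express.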

  21. Fig 2.14 A model-based, utility-based agent

  22. Learning Agents • Improve performance based on percepts • Four components • Learning element: makes improvements • Performance element: selects external actions • Critic: tells how well the agent is doing, relative to a fixed performance standard • Problem generator: suggests exploratory actions
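
The four components can be wired together in a skeleton like this (all names and the exploration scheme are illustrative assumptions, not the chapter's design):

```python
# Skeleton of the four learning-agent components (names and the
# exploration scheme are illustrative assumptions): the critic scores
# behavior against a fixed standard, the learning element improves
# the performance element, and the problem generator proposes
# exploratory actions.
import random

class LearningAgent:
    def __init__(self, performance_element, critic, learning_element,
                 problem_generator, explore_prob=0.1):
        self.perform = performance_element   # selects external actions
        self.critic = critic                 # feedback vs. a fixed standard
        self.learn = learning_element        # improves the performance element
        self.generate = problem_generator    # suggests exploratory actions
        self.explore_prob = explore_prob

    def step(self, percept):
        feedback = self.critic(percept)
        self.perform = self.learn(self.perform, feedback)
        if random.random() < self.explore_prob:
            return self.generate(percept)    # try something informative
        return self.perform(percept)

# Degenerate components just to show the wiring (no exploration).
agent = LearningAgent(performance_element=lambda p: "act",
                      critic=lambda p: 0,
                      learning_element=lambda perf, fb: perf,
                      problem_generator=lambda p: "explore",
                      explore_prob=0.0)
```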

  23. General Model of Learning Agents
