
Agents

D Goforth - COSC 4117, fall 2006

Presentation Transcript


  1. Agents: the AI metaphor

  2. The agent model
  • agents include all aspects of AI in one object-oriented organizing model:
  [Diagram: an AGENT, driven by its purpose, perceives and acts on its environment]
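
  In code, the model is just this perceive-act loop. A minimal Java sketch, with all type and method names hypothetical (they are not from R&N or the course):

    // Minimal sketch of the agent model; all names are hypothetical.
    interface Percept {}
    interface Action {}

    interface Agent {
        // The agent's whole contract: map the current percept to an action.
        Action act(Percept percept);
    }

    interface Environment {
        Percept currentPercept();   // what the agent can sense now
        void apply(Action action);  // the agent's action changes the state
        boolean done();
    }

    class Simulation {
        // The loop that couples agent and environment.
        static void run(Agent agent, Environment env) {
            while (!env.done()) {
                Percept p = env.currentPercept();
                Action a = agent.act(p);
                env.apply(a);
            }
        }
    }

  Everything that follows (table-driven, reflex, model-based, goal-based, utility-based, learning) is a different way of implementing act.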

  3. Agents and decentralization
  • Mars Rover: direct control → agent

  4. Purpose
  • Why do agents act?
    • goals
      • internal (state of the agent’s structure, e.g., survive)
      • external (state of the environment, e.g., clean up dirt)
  • How to measure success?
    • compare actual results to goals
    • R&N: ‘performance measure’

  5. Performance measure
  • external to the agent (like a javadoc specification)
  • an ideal that cannot always be achieved completely (unlike javadoc specs)
  → Agent success (‘rationality’) is evaluated against the performance measure AND the agent’s percepts, possible actions, and experience (like an athlete)
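
  As a concrete case, R&N’s vacuum world awards one point for each clean square at each time step. A sketch of such an external measure in Java (class and method names hypothetical); note that the scorer watches the environment, not the agent’s internals:

    // Hypothetical sketch: an external performance measure for a vacuum agent,
    // in the spirit of "one point per clean square per time step" (R&N).
    class VacuumPerformanceMeasure {
        private int score = 0;

        // Called once per time step with the current state of the environment.
        void update(boolean[] squaresClean) {
            for (boolean clean : squaresClean) {
                if (clean) score++;   // reward clean squares, not busy agents
            }
        }

        int score() { return score; }
    }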

  6. Factors in rationality
  • performance measure – goals may be in conflict and can’t all be achieved
  • perceptions – the agent may not have all the facts
  • actions available
  • experience – the agent may not yet have accumulated all available relevant data

  7. Agents are not just methods
  • the actual outcomes of actions are not 100% known
  • algorithms are not complete solutions – agents should be partly autonomous
  → learn from experience:
    • gather data about the environment
    • respond better to the same perceptions

  8. The agent model
  • agents include all aspects of AI in one organizing model:
  [Diagram: an AGENT, driven by its purpose, perceives and acts on its environment]

  9. Example: cash register as agent
  • Goals: get payment for items, update inventory, accumulate payments
  • Perceive: bar code
  • Know: price lists
  • Understand: finding prices and names from bar codes
  • Understand: accumulating the bill
  • Act: send price and code to accounting; send inventory change to the db
  • Act: display item name, price, and running total
  • Perceive: signal for no-more-items
  • Act: request payment
  • Perceive: payment
  • …
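
  A compressed Java sketch of that percept-to-action loop (all names are hypothetical, and the accounting/inventory actions are reduced to comments):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: the cash register as an agent (percepts in, actions out).
    class CashRegisterAgent {
        private final Map<String, Double> priceList = new HashMap<>(); // knows price lists
        private double runningTotal = 0.0;                             // accumulates the bill

        CashRegisterAgent() { priceList.put("012345", 2.99); }

        // Percept: a scanned bar code. Actions: display the item, update the total.
        void perceiveBarCode(String barCode) {
            double price = priceList.getOrDefault(barCode, 0.0);
            runningTotal += price;
            System.out.printf("item %s  %.2f  total %.2f%n", barCode, price, runningTotal);
            // also: act to send (price, code) to accounting and an inventory change to the db
        }

        // Percept: the no-more-items signal. Action: request payment.
        void perceiveNoMoreItems() {
            System.out.printf("amount due: %.2f%n", runningTotal);
        }

        public static void main(String[] args) {
            CashRegisterAgent register = new CashRegisterAgent();
            register.perceiveBarCode("012345");
            register.perceiveNoMoreItems();
        }
    }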

  10. Environments
  • real or virtual
  • may contain other agents
  • the factors relevant to the agent are called the state of the environment
  • perceptions give the agent information about the state
  • actions of the agent change the state

  11. Categorizing Environments (R&N p40-44)
  1. fully vs. partly observable – how much of the state of the environment can be perceived
  game examples: chess, bridge, Myst

  12. Categorizing Environments (R&N p40-44)
  2. actions are predictable – deterministic vs. stochastic vs. strategic
  game examples: chess, Monopoly, solitaire (peg game, card game)

  13. Categorizing Environments (R&N p40-44)
  3. episodic vs. sequential – actions are based on how many previous perceptions and actions?
  game examples: chess, paper-scissors-rock, bridge trick, bridge hand

  14. Categorizing Environments (R&N p40-44)
  4. static vs. dynamic (event-driven vs. real-time) – agent and environment run sequentially or as co-routines
  game examples: chess, Tetris

  15. Categorizing Environments (R&N p40-44)
  5. discrete vs. continuous – applies to environment, perception, and action
  game examples: chess, Tetris, driving simulator

  16. Categorizing Environments (R&N p40-44)
  6. number of agents – 1 or more: competitive, cooperative, codependent, interfering, communicating (info separate from perceptions)
  game examples: solitaires, chess, bridge, futures, Tetris, driving simulator(s), role playing games
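
  Taken together, the six dimensions give each task a profile. A sketch using Java 16 record syntax (names hypothetical); the chess and driving-simulator rows follow the classifications above:

    // Hypothetical sketch: the six R&N dimensions as a task profile.
    record EnvironmentProfile(
            boolean fullyObservable,   // vs. partly observable
            boolean deterministic,     // vs. stochastic / strategic
            boolean sequential,        // vs. episodic
            boolean dynamic,           // vs. static
            boolean discrete,          // vs. continuous
            boolean multiAgent) {}     // vs. single-agent

    class Profiles {
        public static void main(String[] args) {
            var chess = new EnvironmentProfile(true, true, true, false, true, true);
            var drivingSim = new EnvironmentProfile(false, false, true, true, false, true);
            System.out.println("chess: " + chess);
            System.out.println("driving simulator: " + drivingSim);
        }
    }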

  17. Categorizing King’s Court:
  • fully / partly observable
  • deterministic / stochastic
  • sequential / episodic
  • static / dynamic
  • discrete / continuous
  • single- / multi-agent

  18. Agent Structure
  • the agent program is ‘episodic’ – it receives percepts and produces actions (parameters and return values)
  • BUT the internal state of the agent can evolve sequentially – the agent may be in a different state after an episode than before

  19. Agent Structure
  • Table-driven (p.45)
    • single-perception look-up (HUGE table)
    • percept-sequence look-up (even HUGER table)
  • example game: tic-tac-toe
  • a perfect solution, but intractable

  20. Table-driven agents (revised from R&N)

  KNOWLEDGE LOOK-UP TABLE
  key        value
  percept1   action1
  percept2   action2
  …
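
  In Java, the revised scheme might look like the sketch below (hypothetical names). The whole percept sequence is the look-up key, which is exactly why the table is HUGE: it needs one row per possible history.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // Sketch of a table-driven agent: the entire percept sequence is the key.
    class TableDrivenAgent {
        private final List<String> perceptSequence = new ArrayList<>();
        private final Map<List<String>, String> table; // the HUGE look-up table

        TableDrivenAgent(Map<List<String>, String> table) { this.table = table; }

        String act(String percept) {
            perceptSequence.add(percept);       // remember the whole history
            return table.get(perceptSequence);  // one entry per possible history
        }
    }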

  21. Agent Structure
  • Simple reflex (p.46)
    • based on the current perception only
    • i.e., no instance variables in the agent object; no state
    • ‘condition-action’ rules (an if-then-else algorithm)

  22. Simple reflex agents – R&N
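
  A Java sketch of slide 21’s condition-action rules, using R&N’s two-square vacuum world as the running example (names hypothetical). Note the class has no fields: no state, only rules on the current percept.

    // Sketch of a simple reflex agent: no state, only condition-action rules.
    class SimpleReflexVacuumAgent {
        // Percept: (location, dirty?). The rules alone choose the action.
        String act(String location, boolean dirty) {
            if (dirty) return "SUCK";
            if (location.equals("A")) return "RIGHT";
            return "LEFT";
        }
    }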

  23. Agent Structure
  • Model-based reflex (p.48)
    • uses percepts to build an internal model of the environment
    • internal state is a ‘memory’ of the environment
    • algorithm based on percepts and internal state

  24. Model-based reflex agents – R&N
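
  The same vacuum agent with a model, as a hedged sketch (hypothetical names): each percept updates an internal map of the environment, and the rules consult that memory as well as the current percept.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of a model-based reflex agent: internal state remembers the environment.
    class ModelBasedVacuumAgent {
        private final Map<String, Boolean> knownClean = new HashMap<>(); // the model

        String act(String location, boolean dirty) {
            knownClean.put(location, !dirty); // update the model from the percept
            if (dirty) return "SUCK";
            // Consult memory: if every square seen so far is clean, stop moving.
            if (knownClean.getOrDefault("A", false)
                    && knownClean.getOrDefault("B", false)) return "NO_OP";
            return location.equals("A") ? "RIGHT" : "LEFT";
        }
    }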

  25. Agent Structure
  • Goal-based (p.49)
    • internal state representing the environment PLUS
    • goals expressed in terms of environment and/or agent states
    • NOT REFLEX; ‘tries’ actions internally and tests the results against the goals

  26. Goal-based agents – R&N
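
  A sketch of what ‘tries actions internally’ can mean (hypothetical names): the agent predicts each action’s result with its internal model and returns one whose predicted state passes the goal test. Real goal-based agents search more than one step ahead; this one-step version only shows the shape.

    import java.util.List;
    import java.util.function.BiFunction;
    import java.util.function.Predicate;

    // Sketch of a goal-based agent: simulate actions on the internal model,
    // then pick one whose predicted result satisfies the goal.
    class GoalBasedAgent<S, A> {
        private final BiFunction<S, A, S> model; // predicted effect of an action
        private final Predicate<S> goal;         // goal as a test on states

        GoalBasedAgent(BiFunction<S, A, S> model, Predicate<S> goal) {
            this.model = model;
            this.goal = goal;
        }

        A act(S state, List<A> actions) {
            for (A a : actions) {
                if (goal.test(model.apply(state, a))) return a; // goal reached
            }
            return actions.get(0); // no goal-achieving action found; fall back
        }
    }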

  27. Agent Structure
  • Utility-based (p.50)
    • internal state representing the environment PLUS
    • goals expressed in terms of environment and/or agent states PLUS
    • performance measure → rationality
    • ‘tries’ actions internally and tests the results against the goals AND the performance measure

  28. Utility-based agents – R&N
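
  The utility-based variant replaces the all-or-nothing goal test with a score over predicted states. A sketch (hypothetical names): choose the action whose predicted state has the highest utility.

    import java.util.Comparator;
    import java.util.List;
    import java.util.function.BiFunction;
    import java.util.function.ToDoubleFunction;

    // Sketch of a utility-based agent: rank predicted states by utility
    // instead of testing them against a single goal.
    class UtilityBasedAgent<S, A> {
        private final BiFunction<S, A, S> model;   // predicted effect of an action
        private final ToDoubleFunction<S> utility; // how good a state is

        UtilityBasedAgent(BiFunction<S, A, S> model, ToDoubleFunction<S> utility) {
            this.model = model;
            this.utility = utility;
        }

        A act(S state, List<A> actions) {
            return actions.stream()
                    .max(Comparator.comparingDouble(
                            (A a) -> utility.applyAsDouble(model.apply(state, a))))
                    .orElseThrow();
        }
    }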

  29. Agent Structure
  • Learning (p.50)
    • an extra component to evaluate performance and change the program (if necessary) to act differently in the same state
    • many kinds of learning agents

  30. Learning agents – R&N
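
  One hedged sketch of that extra component (hypothetical names, and only one of the many kinds of learning agents): a critic scores the last action against the performance standard and adjusts the rules so the agent acts differently in the same state next time.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of a learning agent: a critic adjusts the rules when they do badly.
    class LearningAgent {
        private final Map<String, String> rules = new HashMap<>(); // percept -> action

        String act(String percept) {
            return rules.getOrDefault(percept, "EXPLORE"); // no rule yet: try something
        }

        // Critic: after acting, learn from the observed reward.
        void learn(String percept, String action, double reward) {
            if (reward > 0) {
                rules.put(percept, action); // keep what worked
            } else {
                rules.remove(percept);      // drop what didn't; act differently next time
            }
        }
    }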

  31. Agent Structures
  • Table-driven
  • Simple reflex
  • Model-based reflex
  • Goal-based
  • Utility-based
  • Learning

  32. Example: cash register as agent
  • Goals: get payment for items, update inventory, accumulate payments
  • Perceive: bar code
  • Know: price lists
  • Understand: finding prices and names from bar codes
  • Understand: accumulating the bill
  • Act: send price and code to accounting; send inventory change to the db
  • Act: display item name, price, and running total
  • Act: request payment
  • Perceive: payment
  • …
