
Designing an Intelligent Agent


Presentation Transcript


  1. Designing an Intelligent Agent CAP 4621 (Artificial Intelligence) Fall 2002, University of Florida Eric Spellman

  2. What this lecture covers
  • General architectures for agents
    • Reflex agent
    • Goal-based agent
    • Utility-based agent
  • Describing environments
    • Accessible? Deterministic? Episodic? Static? Discrete?

  3. Choosing your architecture
  • Your problem dictates your agent.
  • Typically, default to the simplest architecture that works.
  • Must your agent deal with surprises?
  • How complex must its behavior be?

  4. Reflex agents
  • Not autonomous: all behavior must be specified
  • Many, many condition-action rules
  • Suited for a strictly controlled environment
  • Suited for repetitive tasks (episodic)
  • Example: Airline passenger screening (see the sketch below)
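Below is a minimal sketch of a simple reflex agent in Python, using the passenger-screening example. The percept format and the rule table are illustrative assumptions, not part of the original slides; the point is only that the agent maps the current percept straight to an action through a fixed condition-action table.

```python
# A minimal simple-reflex-agent sketch for the passenger-screening example.
# The percept fields and rule table below are illustrative assumptions,
# not part of the original slides.

RULES = {
    # condition (percept)           -> action
    ("metal_detector_alarm", True):    "search_bags",
    ("metal_detector_alarm", False):   "wave_through",
}

def reflex_agent(percept):
    """Look the percept up in the condition-action table; no memory, no goals."""
    return RULES.get(percept, "refer_to_supervisor")  # default when no rule matches

print(reflex_agent(("metal_detector_alarm", True)))   # -> search_bags
print(reflex_agent(("metal_detector_alarm", False)))  # -> wave_through
```

Note how all of the agent's competence lives in the rule table: every situation the designer did not anticipate falls through to the default, which is what the slide means by "not autonomous."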

  5. Adding a memory
  • Many problems require a series of steps to succeed.
  • Agents for such problems must have “internal state” (a memory).
  • Actions depend on percepts and state.
  • What do you need to store? (Memento)
  • Examples: Target tracking; Shell game (see the sketch below).
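The sketch below adds internal state to a reflex agent, using the shell-game example from the slide. The percept format ("swap", i, j) and the starting ball position are assumptions made for illustration; the idea is that the agent folds each percept into its memory of where the ball is before it acts.

```python
# Sketch of a reflex agent with internal state, using the shell-game example.
# The percept format ("swap", i, j) and the initial ball position are assumptions.

class ShellGameAgent:
    def __init__(self, ball_position=0):
        self.ball_position = ball_position   # internal state: where we believe the ball is

    def update_state(self, percept):
        """Fold the latest percept into the internal state."""
        kind, i, j = percept
        if kind == "swap":
            if self.ball_position == i:
                self.ball_position = j
            elif self.ball_position == j:
                self.ball_position = i

    def act(self, percept):
        self.update_state(percept)
        return f"point_at_shell_{self.ball_position}"

agent = ShellGameAgent(ball_position=0)
print(agent.act(("swap", 0, 2)))  # ball followed to shell 2
print(agent.act(("swap", 1, 2)))  # ball followed to shell 1
```

Without the stored `ball_position`, the final percept alone would not tell the agent where to point, which is why a pure reflex agent cannot play this game.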

  6. Goal-based agents
  • Much easier to specify behavior
  • Tell the agent what to do, not how to do it.
  • The agent predicts how its actions affect the world.
  • Uses search techniques to find a sequence of actions leading to a goal state (see the sketch below).
  • More flexible than reflex agents.
  • Example: Simple game playing.
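A goal-based agent needs a model of how actions change the world plus a search procedure over that model. The sketch below uses breadth-first search over a tiny hand-made transition table; the states and actions are invented purely for illustration and are not from the lecture.

```python
# Goal-based-agent sketch: the agent is given a goal state, predicts how actions
# change the world, and searches for an action sequence that reaches the goal.
# The toy transition model below is an assumption for illustration.
from collections import deque

TRANSITIONS = {              # state -> {action: next_state}
    "start":  {"left": "a", "right": "b"},
    "a":      {"forward": "goal"},
    "b":      {"forward": "dead_end"},
}

def plan(start, goal):
    """Breadth-first search for the shortest action sequence from start to goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in TRANSITIONS.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # no plan found

print(plan("start", "goal"))  # -> ['left', 'forward']
```

The behavior specification here is just the goal test; changing the goal changes the behavior without rewriting any rules, which is the flexibility the slide contrasts with reflex agents.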

  7. Utility-based agent
  • Quantify desirability of states
  • Can select among conflicting goals
  • Can act even when it cannot predict the exact sequence leading to a goal.
  • More powerful than goal-based agents (see the sketch below).
  • Examples: Chess (discrete); Moving a robot arm (continuous).
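Where a goal-based agent only distinguishes goal from non-goal states, a utility-based agent scores every outcome and can weigh conflicting options against each other. The sketch below picks the action with the highest expected utility; the chess-flavored outcome probabilities and utility values are made-up numbers for illustration only.

```python
# Utility-based-agent sketch: instead of a single goal test, the agent scores
# states with a utility function and picks the action with the highest expected
# utility. The outcome model and utility values are illustrative assumptions.

OUTCOMES = {                 # action -> list of (probability, resulting_state)
    "advance_pawn": [(1.0, "slightly_better_position")],
    "sacrifice_queen": [(0.2, "winning_attack"), (0.8, "lost_material")],
}

UTILITY = {                  # how desirable each resulting state is
    "slightly_better_position": 0.55,
    "winning_attack": 0.95,
    "lost_material": 0.10,
}

def expected_utility(action):
    return sum(p * UTILITY[state] for p, state in OUTCOMES[action])

def utility_agent(actions):
    """Choose the action whose expected utility is highest."""
    return max(actions, key=expected_utility)

print(utility_agent(["advance_pawn", "sacrifice_queen"]))  # -> advance_pawn
```

Because the agent compares numeric scores rather than testing a single goal, it can still act sensibly when no action is guaranteed to reach a goal, exactly the case the slide describes.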

  8. Describing environments
  • Accessible (Ex: Chess) vs. Inaccessible (Ex: Shell game)
  • Deterministic (Ex: Poker w/ cheating) vs. Nondeterministic (Ex: Poker)
  • Episodic (Ex: Passenger profiling) vs. Nonepisodic (Ex: Profiling for groups)

  9. Describing environments (cont.)
  • Static (Ex: Chess) vs. Semidynamic (Ex: Credit card authorization) vs. Dynamic (Ex: Target tracking)
  • Discrete (Ex: Chess) vs. Continuous (Ex: Target tracking); see the sketch below
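As a rough way to make the taxonomy concrete, the sketch below records the five properties from slides 8 and 9 as fields of a small data class, filled in for two of the lecture's examples. Values the slides do not state explicitly (such as chess being nonepisodic, or target tracking being inaccessible and nondeterministic) are my own assumptions and are marked as such in the comments.

```python
# Sketch: recording the lecture's five environment properties as a data
# structure, filled in for two of its examples. Values not stated on the
# slides (marked "assumed") are my own guesses.
from dataclasses import dataclass

@dataclass
class Environment:
    name: str
    accessible: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool

# Chess: accessible, deterministic, static, discrete per the slides; episodic=False assumed.
chess = Environment("chess", accessible=True, deterministic=True,
                    episodic=False, static=True, discrete=True)

# Target tracking: dynamic and continuous per the slides; the remaining fields assumed.
target_tracking = Environment("target tracking", accessible=False, deterministic=False,
                              episodic=False, static=False, discrete=False)

print(chess)
print(target_tracking)
```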

  10. Conclusion
  • Architectures of different complexities
  • Describe and understand your problem before choosing an architecture.
  • No free lunch:
    • A simple architecture needs a complex rule set.
    • Flexibility requires a complex architecture.

  11. Go Gators! Gators – ’Canes: 34 – 17
