
Multi-Agent Systems: Overview and Research Directions


Presentation Transcript


  1. Multi-Agent Systems: Overview and Research Directions CMSC 477/677 Spring 2007 Prof. Marie desJardins

  2. Outline
  • Agent Architectures
    • Logical
    • Cognitive
    • Reactive
    • Theories of Mind
  • Multi-Agent Systems
    • Game review
    • Cooperative multi-agent systems
    • Competitive multi-agent systems

  3. Agent Architectures

  4. Agent architectures
  • Logical Architectures
  • Cognitive Architectures
  • Reactive Architectures
  • Theories of Mind

  5. Logical architectures
  Formal models of reasoning and agent interaction
  • BDI Models: Explicitly model beliefs, desires, and intentions of agents
  • Concurrent MetateM, GOLOG: Logic programming languages

  6. Cognitive architectures
  Computational models of human cognition
  • ACT-R, Soar: Production-rule architectures, very human-inspired
  • APEX: “Sketchy planning”; focus on human performance in multitasking, action selection, and resource limitations
  • PRODIGY: Planning-centric architecture, focused on learning, less human-inspired

  7. Reactive architectures
  Perceive and react (a.k.a. “Representation, schmepresentation!”)
  • Brooks: The original reactivist
  • PENGI: Reactive video game player
  • AuRA: Hybrid deliberative/reactive robot architecture

  8. Theories of mind
  Forays into philosophy and cognitive psychology
  • Society of Mind (Minsky): The brain is a collection of autonomous agents, all working in harmony
  • Emotion: Do we need emotions to behave like humans, or to interact with humans?
  • Consciousness: What is it? Where does it come from? Will our AIs ever have it?

  9. Multi-Agent Systems

  10. What you learned yesterday
  • Boundary cases are simpler (less constrained; they require less communication and coordination)
    • “Closer to 1 is easy”
    • “Easy: pick the very first ‘value’”
    • “This was easy for me because I have to be ‘A’”

  11. What else you learned...
  • Global knowledge sometimes helps
    • “Global knowledge of map was unhelpful”
    • “Global knowledge of others’ values was very helpful”
    • “Global knowledge made the problem a whole lot easier than without. Without global knowledge was very tough”
    • “Global knowledge works well...”
    • “Global knowledge obviously made this easier”
    • “Being able to ask other agents what their constraints were would have made this easier”
  • Indexing/brokering of other agents would help
    • “Easy: being able to locate agents by name after finding their #”

  12. What else you learned...
  • Backtracking and replanning are hard
    • “Hard to communicate changes to other neighbors who already got info from my agent”
    • “Keeping an updated list of other agents’ colors was hard”
    • “Mind changing is a key to failure”
    • “Difficult: when certain agents changed, they did not alert me so I could change my color”

  13. What else you learned...
  • Restrictive communication constraints and protocols make the problem harder
    • “Hard: adhering to the 1 person at a time rule”
    • “Easier if multiple agents can talk to each other in a group”
    • “Communication made the task difficult. If every agent could communicate at the same time, it would have made the task much easier”
    • “It was very interesting to physically realize the problem with agent communication and agreement. Much better than simply hearing it.”

  14. What else you learned...
  • Pre-establishing problem-solving protocols can make the problem easier
    • “...just having people select color ordered by the number of neighbor could have gotten a solution”
    • “This would have been made easier by some sort of organization instead of everybody picking a random color and then resolving conflicts”

  15. Most importantly
  • 477/677 is fun!
    • “A bit confusing as to what was going on at first, but afterwards it was fun”
    • “Exercise was fun”

  16. Multi-agent systems
  • Jennings et al.’s key properties:
    • Situated
    • Autonomous
    • Flexible:
      • Responsive to a dynamic environment
      • Pro-active / goal-directed
      • Social interactions with other agents and humans
  • Research question: How do we design agents that interact effectively to solve a wide range of problems in many different environments?

  17. Aspects of multi-agent systems
  • Cooperative vs. competitive
  • Homogeneous vs. heterogeneous
  • Macro vs. micro
  • Interaction protocols and languages
  • Organizational structure
  • Mechanism design / market economics
  • Learning

  18. Topics in multi-agent systems
  • Cooperative MAS:
    • Distributed problem solving: Less autonomy
    • Distributed planning: Models for cooperation and teamwork
  • Competitive or self-interested MAS:
    • Distributed rationality: Voting, auctions
    • Negotiation: Contract nets

  19. Typical (cooperative) MAS domains
  • Distributed sensor network establishment
  • Distributed vehicle monitoring
  • Distributed delivery

  20. Cooperative Multi-Agent Systems

  21. Distributed problem solving/planning
  Cooperative agents, working together to solve complex problems with local information
  • Partial Global Planning (PGP): A planning-centric distributed architecture
  • SharedPlans: A formal model for joint activity
  • Joint Intentions: Another formal model for joint activity
  • STEAM: Distributed teamwork; influenced by Joint Intentions and SharedPlans

  22. Distributed problem solving
  • Problem solving in the classical AI sense, distributed among multiple agents
  • That is, formulating a solution/answer to some complex question
  • Agents may be heterogeneous or homogeneous
  • DPS implies that agents must be cooperative (or, if self-interested, rewarded for working together)

  23. Competitive Multi-Agent Systems

  24. Distributed rationality
  Techniques to encourage/coax/force self-interested agents to play fairly in the sandbox
  • Voting: Everybody’s opinion counts (but how much?)
  • Auctions: Everybody gets a chance to earn value (but how to do it fairly?)
  • Contract nets: Work goes to the highest bidder
  • Issues:
    • Global utility
    • Fairness
    • Stability
    • Cheating and lying

  25. Pareto optimality
  • S is a Pareto-optimal solution iff ∀S′ ((∃x: U_x(S′) > U_x(S)) → (∃y: U_y(S′) < U_y(S)))
    • i.e., if some agent X is better off in S′, then some agent Y must be worse off
  • Social welfare, or global utility, is the sum of all agents’ utility
  • If S maximizes social welfare, it is also Pareto-optimal (but not vice versa)
  [Figure: candidate solutions plotted by X’s utility vs. Y’s utility — which solutions are Pareto-optimal? Which maximize social welfare (global utility)?]
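The definitions on this slide can be checked mechanically. A minimal sketch in Python, assuming a small invented example: two agents (X, Y) and four candidate solutions with utility pairs (U_X, U_Y) — all names and numbers here are illustrative, not from the slides.

```python
# Invented candidate solutions: name -> (U_X, U_Y).
solutions = {
    "a": (3, 3),
    "b": (4, 1),
    "c": (1, 4),
    "d": (2, 2),  # Pareto-dominated by "a": both agents do worse
}

def dominates(s1, s2):
    """s1 Pareto-dominates s2: no agent worse off, at least one better off."""
    return (all(u1 >= u2 for u1, u2 in zip(s1, s2))
            and any(u1 > u2 for u1, u2 in zip(s1, s2)))

def pareto_optimal(solutions):
    """Solutions not dominated by any other candidate."""
    return {name for name, s in solutions.items()
            if not any(dominates(t, s) for t in solutions.values())}

def max_social_welfare(solutions):
    """Solutions maximizing the sum of all agents' utilities."""
    best = max(sum(s) for s in solutions.values())
    return {name for name, s in solutions.items() if sum(s) == best}

print(sorted(pareto_optimal(solutions)))      # ['a', 'b', 'c']
print(sorted(max_social_welfare(solutions)))  # ['a']
```

As the slide states, the welfare maximizer "a" is Pareto-optimal, while "b" and "c" show the converse fails: they are Pareto-optimal without maximizing social welfare.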

  26. Stability
  • If an agent can always maximize its utility with a particular strategy (regardless of other agents’ behavior), then that strategy is dominant
  • A set of agent strategies is in Nash equilibrium if each agent’s strategy S_i is locally optimal, given the other agents’ strategies
    • No agent has an incentive to change strategies
    • Hence this set of strategies is locally stable

  27. Prisoner’s Dilemma: Let’s play!

                      B cooperates        B defects
      A cooperates    Reward              Sucker (A)
      A defects       Temptation (A)      Punishment

  28. Prisoner’s Dilemma: Analysis
  • Pareto-optimal and social-welfare-maximizing solution: Both agents cooperate
  • Dominant strategy and Nash equilibrium: Both agents defect
  • Why?
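The "why" can be checked by brute force over the four outcomes. A sketch assuming the usual textbook payoffs (Temptation 5 > Reward 3 > Punishment 1 > Sucker 0); the specific numbers are an assumption — only their ordering matters.

```python
C, D = "cooperate", "defect"

# payoff[(a_move, b_move)] = (A's utility, B's utility)
payoff = {
    (C, C): (3, 3),  # Reward for both
    (C, D): (0, 5),  # A is the Sucker, B gets the Temptation
    (D, C): (5, 0),  # A gets the Temptation, B is the Sucker
    (D, D): (1, 1),  # Punishment for both
}

def best_response_A(b_move):
    """A's utility-maximizing move, given B's move."""
    return max([C, D], key=lambda a: payoff[(a, b_move)][0])

def best_response_B(a_move):
    """B's utility-maximizing move, given A's move."""
    return max([C, D], key=lambda b: payoff[(a_move, b)][1])

# Defect is dominant: it is A's best response no matter what B does.
assert best_response_A(C) == D and best_response_A(D) == D

# (D, D) is a Nash equilibrium: each strategy is a best response to the other.
assert best_response_A(D) == D and best_response_B(D) == D

# Yet (C, C) maximizes social welfare and Pareto-dominates (D, D).
assert sum(payoff[(C, C)]) == max(sum(u) for u in payoff.values())
print("defection is individually rational; mutual cooperation maximizes welfare")
```

The design choice to enumerate best responses directly is what makes the dominance argument visible: defecting improves A's payoff in both columns of the matrix, so no reasoning about B's intentions is needed.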

  29. Voting
  • How should we rank the possible outcomes, given individual agents’ preferences (votes)?
  • Six desirable properties (which can’t all be satisfied simultaneously):
    • Every combination of votes should lead to a ranking
    • Every pair of outcomes should have a relative ranking
    • The ranking should be asymmetric and transitive
    • The ranking should be Pareto-optimal
    • Irrelevant alternatives shouldn’t influence the outcome
    • Share the wealth: No agent should always get their way

  30. Let’s vote!
  • Pepperoni
  • Onions
  • Feta cheese
  • Sausage
  • Mushrooms
  • Anchovies
  • Peppers
  • Spinach

  31. Voting protocols
  • Plurality voting: The outcome with the highest number of votes wins
    • Irrelevant alternatives can change the outcome: The Ross Perot factor
  • Borda voting: Agents’ rankings are used as weights, which are summed across all agents
    • Agents can “spend” high rankings on losing choices, making their remaining votes less influential
  • Binary voting: Agents rank sequential pairs of choices (“elimination voting”)
    • Irrelevant alternatives can still change the outcome
    • Very order-dependent
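A minimal sketch of the first two protocols, on invented ballots: seven hypothetical voters rank three pizza toppings best-first (the ballots and the resulting split decision are illustrative assumptions, chosen so the two protocols disagree).

```python
from collections import Counter

# Each ballot lists candidates best-first.
ballots = (
    3 * [["pepperoni", "mushrooms", "onions"]]
    + 2 * [["mushrooms", "onions", "pepperoni"]]
    + 2 * [["onions", "mushrooms", "pepperoni"]]
)

def plurality(ballots):
    """Each voter's top choice counts once; the highest total wins."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda(ballots):
    """Ranks become weights: with n candidates, 1st place earns n-1 points,
    2nd place n-2, ..., last place 0; weights are summed across all agents."""
    scores = Counter()
    for b in ballots:
        for rank, choice in enumerate(b):
            scores[choice] += len(b) - 1 - rank
    return scores.most_common(1)[0][0]

print(plurality(ballots))  # pepperoni -- most first-place votes (3 of 7)
print(borda(ballots))      # mushrooms -- broad second-place support wins
```

The disagreement is the point: pepperoni has the most first-place votes but is ranked last by a majority, so the protocol chosen determines the winner.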

  32. Auctions
  • Many different types and protocols
  • All of the common protocols yield Pareto-optimal outcomes
  • But... bidders can agree to artificially lower prices in order to cheat the auctioneer
    • What about when the colluders cheat each other?
    • (Now that’s really not playing nicely in the sandbox!)
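A sketch of how collusion cheats the auctioneer, using two common sealed-bid rules (first-price, and second-price/Vickrey). The bidder names and amounts are invented for illustration.

```python
def winner_and_price(bids, rule):
    """Award a sealed-bid auction: bids maps bidder name -> bid amount."""
    winner = max(bids, key=bids.get)
    if rule == "first-price":
        return winner, bids[winner]  # winner pays its own bid
    # second-price (Vickrey): winner pays the highest losing bid
    return winner, max(v for name, v in bids.items() if name != winner)

bids = {"alice": 10, "bob": 8, "carol": 6}
print(winner_and_price(bids, "first-price"))   # ('alice', 10)
print(winner_and_price(bids, "second-price"))  # ('alice', 8)

# Collusion: if the bidders agree to bid artificially low, the same bidder
# still wins, but the auctioneer's revenue collapses.
colluded = {"alice": 2, "bob": 1, "carol": 1}
print(winner_and_price(colluded, "second-price"))  # ('alice', 1)
```

The colluders' temptation to cheat each other is visible here too: bob could secretly bid 3 and take the item for 2, which is why such rings are unstable.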

  33. Contract nets
  • Simple form of negotiation
  • Announce tasks, receive bids, award contracts
  • Many variations: directed contracts, timeouts, bundling of contracts, sharing of contracts, ...
  • There are also more sophisticated dialogue-based negotiation models
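The announce-bid-award cycle can be sketched in a few lines. The agents and their bid functions below are invented; bids are quoted costs here, so the best bid is the lowest (with bids expressed as offered value, the award would go to the highest bidder instead).

```python
def contract_net(task, agents):
    """Manager announces a task, collects bids, awards to the best bidder."""
    bids = {}
    for name, bid_fn in agents.items():  # announce to every agent
        bid = bid_fn(task)               # each agent quotes a cost, or declines
        if bid is not None:
            bids[name] = bid
    if not bids:
        return None                      # no capable contractor responded
    winner = min(bids, key=bids.get)     # lowest quoted cost wins the contract
    return winner, bids[winner]

# Hypothetical contractors with hand-written bidding rules.
agents = {
    "truck1": lambda task: 5.0 if task == "deliver" else None,
    "truck2": lambda task: 3.5 if task == "deliver" else None,
    "drone1": lambda task: None,         # declines: cannot handle the task
}

print(contract_net("deliver", agents))   # ('truck2', 3.5)
print(contract_net("sense", agents))     # None
```

Variations from the slide slot in naturally: a timeout bounds how long the manager waits for bids, and directed contracts skip the announcement and hand the task to a chosen agent.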

  34. Conclusions and directions
  • “Agent” means many different things
  • Different types of “multi-agent systems”:
    • Cooperative vs. competitive
    • Heterogeneous vs. homogeneous
    • Micro vs. macro
  • Lots of interesting/open research directions:
    • Effective cooperation strategies
    • “Fair” coordination strategies and protocols
    • Learning in MAS
    • Resource-limited MAS (communication, ...)
