
Trust in multi-agent systems


Presentation Transcript


  1. Trust in multi-agent systems Sarvapali D. Ramchurn, Dong Huynh and Nicholas R. Jennings Presented by: Zoheb H Borbora 10/18/2011

  2. Prof. of Comp. Sci., Univ. of Southampton • Head, Agents, Interaction and Complexity Group • Chief Scientific Advisor to UK govt. • Chief Scientist, Aroxo and Aerogility • Founding Editor-in-Chief, IJAAMAS • Research Staff in Web and Internet Science, School of Electronics & Computer Science, Univ. of Southampton • Lecturer, Intelligent Agents and Multimedia Group, Univ. of Southampton • PhD (2001-2004) • Interests • trust and reputation • coalition formation • human-agent interaction • task allocation

  3. Motivation • Open distributed systems can be modelled as multi-agent systems • Peer-to-peer computing • Semantic Web • Web services • E-business • Interactions form the core of MAS • Coordination • Collaboration • Negotiation

  4. Motivation • Interaction problems in MAS • How to engineer protocols (or mechanisms) for multi-agent encounters? • How do agents decide who to interact with? • How do agents decide when to interact with each other? • Trust can minimize the uncertainty associated with interactions in open distributed systems

  5. Definitions of Trust • Dasgupta, 1998: Trust is a belief an agent has that the other party will • do what it says it will (being honest and reliable) • or reciprocate, given an opportunity to defect to get higher payoffs • Gambetta, 1998: Trust is the subjective probability by which an individual, A, expects that another individual, B, performs a given action on which its welfare depends • Castelfranchi & Falcone, 2001: Trust is the mental counterpart of delegation

  6. Approaches to Trust • Individual-level trust: belief about honesty and reciprocity of interaction partners; helps cope with uncertainty in beliefs and guides decision making to alleviate efficiency concerns • System-level trust: protocols and mechanisms to enforce trustworthiness of agents

  7. Approaches to Trust

  8. Individual level trust • Learning and Evolving Trust • Learning and Evolving Strategies • Trust metrics • Reputation • Gather ratings • Aggregating ratings • Promote authentic ratings • Socio-cognitive models

  9. Learning and Evolving Trust • Trust is an emergent property of direct interactions between self-interested agents • Cooperation • Defection

  10. Learning and Evolving Strategies • Axelrod's Tit-for-tat (1984) • Highest payoff in cooperative self-play • Less than maximum against a selfish agent • Trust emerges as a result of evolution of strategies over multiple interactions (Wu & Sun, 2001) • Probabilistic reciprocity (Sen, 1996) • Collaborative liars do well over a small number of interactions • Reciprocative strategies perform well otherwise
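The tit-for-tat strategy above can be sketched in a few lines. This is a minimal iterated Prisoner's Dilemma, assuming the standard Axelrod payoffs (temptation 5, reward 3, punishment 1, sucker 0); the function names and the ten-round length are illustrative, not from the talk.

```python
# Assumed Axelrod-style payoff matrix: (my payoff, their payoff)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """A selfish agent that defects unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return both cumulative scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Against itself, tit-for-tat sustains mutual cooperation throughout; against an always-defecting agent it loses only the first round, which matches the slide's point: highest payoff in cooperative self-play, less than the maximum against a selfish agent.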

  11. Learning and Evolving Strategies • Drawbacks • Assume complete information for the algorithms to work • Not tested in real-life scenarios • Outcome of interactions assumed to be binary • Cooperation • Defection

  12. Learning and Evolving Strategies • Probabilistic reciprocity (Sen, 1996): Pr(agent k carries out task t_ij for agent i while carrying out its own task t_kl) takes into account: • Balance = total savings − total cost • Extra cost incurred in performing the task • Average cost of tasks performed to date • Set of agent behaviors • Philanthropic agents • Selfish agents • Reciprocative agents • Individual agents
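As a rough illustration of how the quantities on this slide could interact, the sketch below folds the balance, the extra cost, and the average cost into a single helping probability. The logistic form and the `beta`/`tau` parameters are assumptions for illustration only; Sen's actual formula may differ in its details.

```python
import math

def help_probability(balance, extra_cost, avg_cost, beta=1.0, tau=1.0):
    """Illustrative probabilistic-reciprocity rule in the spirit of Sen (1996).

    The probability that agent k helps rises with its balance of past
    savings over costs, and falls with the extra cost of the requested
    task, normalized by the average cost of tasks performed to date.
    The logistic shape is an assumption, not Sen's published formula.
    """
    if avg_cost <= 0:
        avg_cost = 1.0  # avoid division by zero for an agent with no history
    x = (extra_cost - beta * balance) / (tau * avg_cost)
    return 1.0 / (1.0 + math.exp(x))
```

With this shape, a reciprocative agent that has benefited from a partner in the past (high balance) helps with high probability, while a cheap request is more likely to be granted than an expensive one.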

  13. Learning and Evolving Strategies • Discussion • Is probabilistic reciprocity better than tit-for-tat?

  14. Learning and Evolving Strategies • Drawbacks of tit-for-tat • Initial decision • First decision is crucial • Symmetrical interactions • In actual interactions, one agent incurs cost and the other gets benefit • Repetition of identical scenarios • Unlikely in real life • Lack of a measure of work • Amount of benefit should be quantified Reciprocity: A Foundational Principle for Promoting Cooperative Behavior Among Self-Interested Agents, Sandip Sen, 1996

  15. Trust Metrics • Witkowski, 2001 • objective trust based on past interactions • Trading scenario: leads to formation of strong, tight clusters of trading partners quickly • Trust builds trust, but unreliability breeds indifference • REGRET (Sabater and Sierra, 2002) • Considers three dimensions of reputation • Individual (direct interactions) • Social (group relation) • Ontological (combination)

  16. Reputation Models • Sabater and Sierra, 2002 Reputation can be defined as the opinion or view of someone about something • Aspects of reputation • Gather ratings • Aggregating ratings • Promote authentic ratings

  17. Reputation Models • Retrieving ratings from the “social network” • Referrals (Yu et al., 2000) • Enriched model with annotation of nodes (Schillo et al., 2000) • Degree of honesty • Degree of altruism • Takes into account trustworthiness of witnesses
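One way to make "takes into account trustworthiness of witnesses" concrete is to discount each rating by the reporting witness's honesty before averaging. The weighting scheme below is an illustrative assumption, not the actual Schillo et al. model.

```python
def aggregate_witness_ratings(ratings):
    """Aggregate witness reports, discounted by witness honesty.

    `ratings` is a list of (rating, witness_honesty) pairs, both in
    [0, 1]. An assumed honesty-weighted mean: a witness annotated as
    fully dishonest (honesty 0) contributes nothing, so the aggregate
    is dominated by credible witnesses.
    """
    total_weight = sum(honesty for _, honesty in ratings)
    if total_weight == 0:
        return None  # no credible witnesses: reputation is unknown
    return sum(r * h for r, h in ratings) / total_weight
```

Under this scheme a single honest witness outweighs any number of witnesses known to be dishonest, which is the intuition behind annotating nodes of the social network with honesty degrees.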

  18. Reputation Models • Aggregating ratings • Problems • Simplistic aggregation can be unreliable • Fewer people report bad experiences • No rating - good or bad? • Open to manipulation by sellers • Dempster-Shafer theory of evidence (Yu and Singh, 2002) • Allows combination of beliefs • Trustworthy, untrustworthy, unknown • Assumes witnesses are honest

  19. Socio-cognitive Models • Based on “subjective” perception of opponent’s characteristics • Castelfranchi & Falcone, 2001 • Evaluation of trust for task delegation, based on decomposition into beliefs • Competence belief • Willingness belief • Persistence belief (stability) • Motivation belief
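A minimal sketch of the belief decomposition, assuming the four beliefs are each scored in [0, 1] and combined by a weighted mean. The combination rule and weights are assumptions made here for illustration; Castelfranchi & Falcone's actual model is a richer cognitive analysis, not a numeric formula.

```python
def delegation_trust(beliefs, weights=None):
    """Combine the four component beliefs into one trust value.

    `beliefs` maps each of "competence", "willingness", "persistence"
    and "motivation" to a score in [0, 1]. The weighted-mean
    combination is an illustrative assumption.
    """
    names = ("competence", "willingness", "persistence", "motivation")
    if weights is None:
        weights = {n: 1.0 for n in names}  # equal weights by default
    total = sum(weights[n] for n in names)
    return sum(weights[n] * beliefs[n] for n in names) / total
```

The point of the decomposition survives even in this crude form: an agent can be judged highly competent yet untrustworthy for delegation because its willingness or persistence belief is low.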

  20. System-level trust • Design of protocols and mechanisms of interactions that foster trustworthy behavior • Truth-eliciting interaction protocols • No utility for lying agents • Reputation mechanisms • System maintained • Security mechanisms • authentication

  21. Truth-eliciting interaction protocols • Single-sided auctions • In first-price formats, a bidder's best strategy is to bid below its true valuation

  22. Truth-eliciting interaction protocols • Secure Vickrey scheme (Hsu & Soo, 2002) • Bidders submit encrypted bids • Auctioneer selected randomly from bidders • Auctioneer can only view bid values • Collusion-proof mechanism (Brandt, 2001)
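The core of the Vickrey scheme is the second-price rule: the highest bidder wins but pays the second-highest bid, so the price a winner pays does not depend on its own bid and no bidder gains by misreporting its valuation. A minimal sketch of just that rule (the encryption and random auctioneer-selection steps of Hsu & Soo are omitted):

```python
def vickrey_winner(bids):
    """Second-price sealed-bid (Vickrey) auction.

    `bids` maps bidder name -> bid value. Returns the winner and the
    price paid, which is the second-highest bid. Tie-breaking among
    equal high bids follows sort order; a real mechanism would
    specify it explicitly.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]  # winner pays the runner-up's bid
    return winner, price
```

Because the winner's payment is set by the runner-up, truthful bidding is a dominant strategy — this is the sense in which the protocol is truth-eliciting: lying yields no utility.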

  23. Reputation mechanisms • Modeling reputation at system level • Desiderata (Zacharia & Maes, 2000) • Identity change costly • New entrants not penalized with low score • Agents should be allowed to redeem themselves • High penalty for fake transactions • Reputed agents should have greater weight • Personalized evaluations • Recency of ratings
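Several of the desiderata — recency of ratings in particular — can be illustrated with an exponentially decaying weight on older ratings. The decay factor and the [0, 1] rating scale below are illustrative assumptions, not the Zacharia & Maes mechanism itself.

```python
def reputation_score(ratings, decay=0.9):
    """Recency-weighted reputation score.

    `ratings` is an oldest-first list of ratings in [0, 1]. Each rating
    is weighted by decay**age, so recent behavior dominates and an
    agent can redeem itself: old bad ratings fade rather than sticking
    forever. The decay constant is an assumed parameter.
    """
    if not ratings:
        return None  # new entrant: no score rather than a low score
    n = len(ratings)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)
```

Returning `None` for an empty history also reflects the desideratum that new entrants should not be penalized with a low score; separate measures (costly identity change, penalties for fake transactions) are needed against whitewashing and ballot stuffing.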

  24. Security Mechanisms • Authentication of agents • Security requirements (Poslad et al, 2002) • Identity • Access permissions • Content integrity • Content privacy

  25. Security Mechanisms • Mechanisms • Public key encryption • Security certificates • Public key models • PGP (decentralized) • X.509 (centralized) • These mechanisms do not enforce trustworthy behavior

  26. Approaches to Trust

  27. Semantic Web vision (Berners-Lee et al., 2001) • Scenario: Lucy and Pete have to organize a series of appointments to take their mother to the doctor for a series of physical therapy sessions • Steps: (1) authentication of Lucy's agent; (2) fetch the prescribed treatment from the doctor; (3) fetch a list of health-care providers for the treatment, within a 20-mile radius and with a rating of good or excellent; then find an appropriate match between providers' available times and Pete's and Lucy's schedules • Trust elements • Trusted rating services based on reputation mechanisms • Providers could bid based on a secure mechanism • Decision choices in provider selection • Past interaction history • Reputation

  28. Open Issues • Strategic lying • Collusion detection • Context • Expectations • Social networks

  29. Discussion • How does trust differ in a social network vs. a multi-agent system? • What does it mean for a software agent to lie? • E.g., lying about the quality of goods sold • Is trust transitive? • Is it important to model distrust?
