

  1. Recent Advances Some slides containing formal results are labeled with * in the title fields. These slides may be skipped if only an intuitive tutorial is desired.

  2. Common approaches in MAS • Logic-based: Wooldridge 94, Rao and Georgeff 95, Halpern et al. 96. • Game-theoretic: Rosenschein and Zlotkin 94. • Economic: Wellman 92. • Decision-theoretic: Gmytrasiewicz and Durfee 95. • Truth maintenance (default-reasoning)-based: Mason and Johnson 89, Huhns and Bridgeland 91. • Distributed search-based: Yokoo et al. 92.

  3. MAS Area of Focus • Task: distributed interpretation • Producing higher-level descriptions of the environment from distributed sensing without centralizing the evidence. • Examples of distributed interpretation systems: • Sensor networks. • Medical diagnosis by multiple specialists. • Troubleshooting complex artifacts. • Distributed image interpretation.

  4. Recent Advance • Foundation: a probabilistic framework for multiagent distributed interpretation based on multiply sectioned Bayesian networks (MSBNs). • Advance: • Distributed representation of uncertain knowledge that is consistent with probability theory. • Distributed inference that ensures global consistency. • Agents can be developed by independent designers. • Internal structure/knowledge of agents remains private. • Controversial aspects: • Are cooperative MASs important? • Is homogeneous knowledge representation necessary?

  5. Foundations

  6. Background • Common approaches for distributed interpretation are based on logic (e.g., blackboard) or default reasoning (e.g., DATMS and DTMS). • Logic-based approaches have no coherent mechanism to deal with uncertain knowledge. • Default reasoning treats uncertain knowledge as “believed until there is reason to believe otherwise” and not as “believed to a certain degree”. • However, decisions often involve tradeoffs, and comparing the strength of belief in states of the world or outcomes of actions is thus necessary.

  7. Background • Substantial progress has been made in uncertain inference using Bayesian networks (BNs) [Pearl 88]. • Dependencies of domain variables are represented by a DAG. • Strength of dependencies is quantified by an associated jpd. • The jpd is interpreted as the degree of belief of an agent. • Many effective inference algorithms have been developed. • A single-agent paradigm is commonly assumed: • A single processor accesses a single global BN, updates the jpd as evidence becomes available, and answers queries. • This research advances the DTMS approach with a representation of an agent's degree of belief that is consistent with probability theory, and advances the single-agent BN approach with a multiagent paradigm and a distributed inference algorithm.

  8. *Bayesian Networks (BN) • Definition: A BN is a triplet (N,D,P) where • N is a set of random variables, • D is a directed acyclic graph (DAG) whose nodes are labeled by elements of N, • P is a jpd over N specified by probability distributions of each node x in D conditioned on its parents parn(x) in D. • D expresses dependency relations among elements of N. • A variable is independent of its non-descendants given its parents. • Hence P can be expressed as P(N) = ∏_{x ∈ N} P(x | parn(x)).
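To make the factorization concrete, here is a minimal sketch (not from the slides; the three-node DAG and its CPTs are invented for illustration) that evaluates P(N) = ∏_{x ∈ N} P(x | parn(x)) for binary variables and checks that the product defines a proper jpd.

```python
# Minimal sketch: evaluating P(N) = prod_x P(x | parn(x)) for a tiny
# hand-made BN with binary variables a -> b -> c (illustrative CPTs).
from itertools import product

parents = {"a": [], "b": ["a"], "c": ["b"]}
cpt = {
    "a": {(): {1: 0.3, 0: 0.7}},
    "b": {(1,): {1: 0.9, 0: 0.1}, (0,): {1: 0.2, 0: 0.8}},
    "c": {(1,): {1: 0.5, 0: 0.5}, (0,): {1: 0.1, 0: 0.9}},
}

def joint(assignment):
    """P(N) as the product of each node's CPT entry given its parents."""
    p = 1.0
    for x, pa in parents.items():
        key = tuple(assignment[u] for u in pa)
        p *= cpt[x][key][assignment[x]]
    return p

# The factorization defines a proper jpd: the eight entries sum to 1.
total = sum(joint(dict(zip("abc", vals))) for vals in product([0, 1], repeat=3))
print(round(total, 10))  # 1.0
```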

  9. A Trivial Example BN [L & S 88]

  10. Multiply Sectioned Bayesian Networks (MSBNs) • Single-agent oriented MSBN [Xiang et al., CI93, AIM93]: • A set of Bayesian subnets that collectively define a BN. • Interface b/w subnets renders them conditionally independent. • Top level structure is a hypertree. • Compiled into a linked junction forest (LJF) for inference. • Coherent inference operations are defined for a LJF. A MSBN (left) and its LJF (right)

  11. *The d-sepset: Interface b/w Subnets in a MSBN • Definition: Let Di=(Ni,Ei) (i=1,2) be two DAGs such that their union D is a DAG. I = N1 ∩ N2 is a d-sepset b/w D1 and D2 if for every x ∈ I with its parents parn(x) in D, either parn(x) ⊆ N1 or parn(x) ⊆ N2. D is said to be sectioned into {D1,D2}. • Theorem: Let a DAG D=(N,E) be sectioned into {D1,…,Dk} and Iij = Ni ∩ Nj be the d-sepset b/w Di and Dj. Then for each i, ∪j Iij d-separates [Pearl 88] Ni \ ∪j Iij from N \ Ni. • Semantics: If D represents the dependence relations among elements of N, then the d-sepset ensures that variables in a subnet are independent of other variables given the d-sepsets of the subnet.
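A small illustrative check of the d-sepset condition, assuming a DAG is represented simply as a map from each node to its parent set; the node names are hypothetical.

```python
# Sketch: I = N1 ∩ N2 is a d-sepset iff every shared node has all of its
# parents (taken in the union DAG D = D1 ∪ D2) inside N1 or all inside N2.

def is_d_sepset(parents_union, n1, n2):
    """parents_union: {node: set of parents} in the union DAG."""
    for x in n1 & n2:
        pa = parents_union.get(x, set())
        if not (pa <= n1 or pa <= n2):
            return False
    return True

# Example: node "b" is shared; its only parent "a" lies entirely in N1.
parents_union = {"a": set(), "b": {"a"}, "c": {"b"}}
print(is_d_sepset(parents_union, n1={"a", "b"}, n2={"b", "c"}))  # True
```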

  12. *Hypertree MSDAG: Top Level Structure of a MSBN • Definition: Let D be the union of Di (i=1,…,n) where each Di is a connected DAG. D is a hypertree MSDAG if it is a DAG built by the following procedure: • Start with an empty graph. Recursively add a DAG Dk, called a hypernode, to the existing MSDAG subject to the constraints: • [d-sepset] For each Dj (j<k), Ijk = Nj ∩ Nk is a d-sepset when the two DAGs are isolated. • [Local covering] There exists Di (i<k) such that, for each Dj (j<k; j≠i), Ijk ⊆ Ni. For such Di, Iik is called the hyperlink b/w hypernodes Di and Dk. • Semantics: Each hyperlink renders the two parts of the MSBN that it connects conditionally independent.
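The local covering constraint can be checked mechanically. The sketch below assumes each subnet is represented only by its node set and returns an earlier hypernode that covers all interfaces of the newly added one; the three-subnet example is invented.

```python
# Sketch: when adding subnet k, local covering holds if some earlier subnet i
# contains every interface Ijk = Nj ∩ Nk with the other earlier subnets j.

def local_covering_host(node_sets, k):
    """Return an index i < k satisfying local covering for subnet k, or None."""
    nk = node_sets[k]
    for i in range(k):
        if all(node_sets[j] & nk <= node_sets[i]
               for j in range(k) if j != i):
            return i  # node_sets[i] & nk is the hyperlink Iik to Dk
    return None

# Illustrative example: D2 interfaces with the earlier subnets only through D1.
node_sets = [{"a", "b"}, {"b", "c"}, {"c", "d"}]
print(local_covering_host(node_sets, 2))  # 1
```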

  13. *Compilation of a MSBN into a Linked Junction Forest • Major steps in compiling a LJF • Convert each DAG (hypernode) into a chordal graph such that all dependence relations are preserved. • Express the chordal graph as a junction tree (JT) of cliques. • Convert each hyperlink (d-sepset) into linkages, each of which is a subset of the d-sepset. • Convert conditional probability distributions in each subnet to belief tables of the corresponding JT and d-sepset. • Let B(Ni) be the belief table on a hypernode and B(Ij) be the belief table on a hyperlink. The joint system belief (JSB) of a LJF is ∏i B(Ni) / ∏j B(Ij). • Theorem: The JSB of a LJF is equivalent to the jpd of its MSBN. • Since each subnet is organized as a tree, a LJF is an equivalent but more effective data structure for inference computation.
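A numeric sketch of the joint system belief for the simplest case of two subnets joined by a single d-sepset; the belief tables below are hypothetical but calibrated so both sides agree on the shared variable, and the computation confirms that ∏i B(Ni) / ∏j B(Ij) sums to one.

```python
# JSB = B(a,b) * B(b,c) / B(b) for two subnets over (a, b) and (b, c)
# linked by the single shared variable b.
import numpy as np

p_ab = np.array([[0.2, 0.1],    # rows: a=0,1; cols: b=0,1
                 [0.3, 0.4]])
p_b = p_ab.sum(axis=0)          # belief table on the hyperlink {b}
p_bc = np.array([[0.25, 0.25],  # rows: b=0,1; cols: c=0,1
                 [0.10, 0.40]])
assert np.allclose(p_bc.sum(axis=1), p_b)  # both subnets agree on b

# JSB(a,b,c) sums to 1, i.e. it is a jpd over {a, b, c}.
jsb = p_ab[:, :, None] * p_bc[None, :, :] / p_b[None, :, None]
print(round(float(jsb.sum()), 10))  # 1.0
```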

  14. *Inference in a Single-agent Oriented MSBN • Inference using the LJF of a MSBN • Queries of a single user are focused on a single JT at a time, where a query has the form “what is the probability of event A given that B has occurred?” • Evidence can be entered incrementally to the JT and queries are answered by local computation only. • As the user shifts attention to another JT, belief propagation is performed only along the hyperpath to the target JT in the hypertree. • Queries can then be entered at the target JT as above. • Theorem: After a finite number of attention shifts, answers to queries computed locally are identical to what would be obtained from an equivalent homogeneous BN.
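The attention-shift idea amounts to propagating belief only along the unique path between two hypernodes of the hypertree. The sketch below (hypothetical hypertree and naming; the actual belief messages are omitted) finds that hyperpath by breadth-first search.

```python
# Sketch: when the user moves from one JT to another, belief is propagated
# only over the hyperlinks on the unique hyperpath between them.
from collections import deque

def hyperpath(hypertree, source, target):
    """Unique path between two hypernodes in a tree, found by BFS."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if u == target:
            break
        for v in hypertree[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    path, u = [], target
    while u is not None:
        path.append(u)
        u = parent[u]
    return list(reversed(path))

# Illustrative hypertree over four subnets.
hypertree = {"D0": ["D1"], "D1": ["D0", "D2", "D3"], "D2": ["D1"], "D3": ["D1"]}
path = hyperpath(hypertree, "D0", "D3")
for u, v in zip(path, path[1:]):
    print(f"propagate belief over hyperlink {u} -> {v}")  # D0 -> D1, D1 -> D3
```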

  15. What Can Be Gained by Using a MSBN? • If a domain consists of loosely coupled subdomains, then... • Knowledge acquisition is natural and modular: subnets can be built one at a time. • Inference requires only local computation. Attention shift uses only the hyperpath. Hence computation is more efficient. • Answers to queries are the same as from a homogeneous BN. Structure of a general MSBN (left) and the corresponding hypertree

  16. Why Use MSBNs for Distributed Interpretation? • Representation of MSBNs is modular. • Inference in MSBNs is coherent. • The framework of MSBNs is general.

  17. Major Issues in Extending MSBNs to MASs • What is the semantics of a subnet? • What is the semantics of the jpd of the MSBN? • How do we build the system with multiple agent developers? • How do we ensure a correct overall structure while protecting the know-how of each developer? • How do we ensure coherent inference? • What difference does a multi-agent MSBN make relative to a MAS not organized into a MSBN?

  18. What is the semantics of a subnet? • In a single-agent MSBN: • The MSBN represents multiple perspectives of a domain held by a single agent. • Each subnet represents one such perspective. • In a multi-agent MSBN [Xiang, AIJ96]: • The MSBN represents multiple agents in a domain, each of which holds one perspective. • Each subnet represents one agent's perspective of the domain. An agent model

  19. What is the semantics of the jpd in a MSBN? • If the distribution of each subnet represents one agent's belief, whose belief does the jpd of the MSBN represent? • Example: a computer system. • It processes information coherently as a whole. • Its components are supplied by different vendors. • Observation • As long as vendors follow a protocol in designing component interfaces, the system functions as if it follows a single will.

  20. What is the semantics of the jpd in a MSBN? • Another example: a patient sees a doctor. • Patient tells what doctor needs to know for diagnosis. • After doctor reaches a diagnosis, he prescribes a therapy which patient follows. • Observation • Doctor does not experience symptoms. • Patient does not understand how diagnosis is reached. • A coherent belief is demonstrated on symptoms (used by doctor to reach diagnosis) and the diagnosis (therapy is followed by patient).

  21. What is the semantics of the jpd in a MSBN? • Due to the way the jpd of a MSBN is defined, there exists a unique jpd [Xiang, AIJ96] such that • its marginalization to each subnet is identical to the distribution of the subnet; • adjacent subnets are conditionally independent given their interface. • Implication: If agents are (1) cooperative, (2) independent conditioned on interface, and (3) initially consistent, then the jpd of the MSBN represents a unique collective belief • identical to each agent's belief within its subdomain, • and supplemental to its belief outside its subdomain.

  22. How do we ensure coherent inference? • Issue arising: • In a single-agent MSBN, evidence is entered one subnet at a time. • In a multi-agent MSBN, evidence is entered asynchronously at multiple subnets in parallel. • Solution: extended inference operations [Xiang, AIJ96]: CommunicateBelief, CollectNewBelief, DistributeBelief.

  23. How do we ensure coherent inference? • CollectNewBelief: Initiated at an agent to activate an inward propagation towards the agent.

  24. How do we ensure coherent inference? • DistributeBelief: Initiated at an agent to activate an outward propagation.
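Putting the two operations together, CommunicateBelief can be viewed as a recursive inward pass followed by an outward pass over the hypertree. The skeleton below only logs the message order; real messages would carry belief tables over the hyperlinks, and the agent names and data structures are illustrative.

```python
# Skeleton of CommunicateBelief on the hypertree: an inward pass toward the
# initiating agent (CollectNewBelief) followed by an outward pass away from
# it (DistributeBelief).

def collect_new_belief(hypertree, agent, caller=None):
    """Inward propagation: each agent first collects from its other neighbours."""
    for nb in hypertree[agent]:
        if nb != caller:
            collect_new_belief(hypertree, nb, caller=agent)
            print(f"{nb} sends new belief over hyperlink to {agent}")

def distribute_belief(hypertree, agent, caller=None):
    """Outward propagation: the agent passes updated belief to its other neighbours."""
    for nb in hypertree[agent]:
        if nb != caller:
            print(f"{agent} sends updated belief over hyperlink to {nb}")
            distribute_belief(hypertree, nb, caller=agent)

def communicate_belief(hypertree, initiator):
    collect_new_belief(hypertree, initiator)
    distribute_belief(hypertree, initiator)

hypertree = {"X": ["Y"], "Y": ["X", "W", "Z"], "W": ["Y"], "Z": ["Y"]}
communicate_belief(hypertree, "X")
```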

  25. How do we ensure coherent inference? • Theorem: After CommunicateBelief, answers to queries from any agent are identical to what is obtained from an equivalent homogeneous BN. • Implication: Distribution causes no loss of coherence. • Complexity of inference computation • Inference at one agent: O(k 2^m), where m is the maximal size of a clique and k is the number of cliques in the JT. • CommunicateBelief: O(t g k 2^m), where t is the number of agents and g is the maximal number of linkages in a hyperlink.
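As a back-of-the-envelope illustration of these bounds (the sizes below are invented, not taken from the slides):

```python
# Plugging illustrative sizes into O(k 2^m) and O(t g k 2^m).
m, k = 8, 20      # max clique size, cliques per junction tree
t, g = 5, 3       # number of agents, max linkages per hyperlink

local_inference = k * 2 ** m          # per-agent inference cost
communicate     = t * g * k * 2 ** m  # cost of CommunicateBelief
print(local_inference, communicate)   # 5120 76800
```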

  26. How do we ensure coherent inference? • Theorem: Between successive CommunicateBeliefs, answers to queries from any agent X are identical to what would be obtained in an equivalent homogeneous BN where only the evidence shown in the bottom of the figure is entered. (Figure: two timelines of evidence entered at agents A, …, W, X, Y, Z, with successive CommunicateBelief operations marked along the time axis t.)

  27. How to build a MSBN by multiple developers? • How to ensure system coherence without disclosing the structure and distribution of individual subnets? • It is possible if • the interface of each subnet renders it conditionally independent of others; and • adjacent agents agree on an initial belief over their interface. • Solution [Xiang, AIJ96]: • A single integrator with knowledge of the agents' interfaces puts the agents into a hypertree. • Agents negotiate to achieve an initial belief on their interfaces.

  28. Global structure vs each agent's know-how • Structures of subnets in a MSBN collectively define a directed acyclic graph (DAG). • Local acyclicity does not guarantee global acyclicity. • Algorithms to test acyclicity based on topological sorting are well known; however, they assume a central representation of the graph.
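For reference, the standard centralized test mentioned above can be sketched as follows, assuming the union DAG is available as a map from each node to its parent set (a Kahn-style topological sort).

```python
# Sketch: a digraph is acyclic iff a topological sort consumes every node.
from collections import deque

def is_acyclic(parents):
    """parents: {node: set of parent nodes} for the union of all subnets."""
    nodes = set(parents) | {p for ps in parents.values() for p in ps}
    indeg = {n: len(parents.get(n, set())) for n in nodes}
    children = {n: set() for n in nodes}
    for n, ps in parents.items():
        for p in ps:
            children[p].add(n)
    queue = deque(n for n in nodes if indeg[n] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for c in children[u]:
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    return seen == len(nodes)

print(is_acyclic({"b": {"a"}, "c": {"b"}}))              # True
print(is_acyclic({"a": {"c"}, "b": {"a"}, "c": {"b"}}))  # False (cycle)
```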

  29. Global structure vs each agent's know-how • If each subnet’s structure is unknown to others, how can we ensure acyclicity of the MSBN? • A distributed algorithm has been developed that has the following features [Xiang, FLAIRS96]: • Each agent provides only info on whether a shared node has a parent or a child in its DAG, plus some flag info. • The acyclicity of the MSBN can be correctly determined.

  30. What if agents are not organized into a MSBN? • Belief propagation in a MSBN proceeds along the hypertree in a regulated fashion. • What happens otherwise? • Circular evidence propagation causes no problem if agents are logical. • But it causes false beliefs if the agents' knowledge is uncertain. (Figure: agents X, Y, W, Z passing messages in a cycle.) • Not knowing that the message from Y is based on evidence that originated from itself, Z counts the same information twice.
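A tiny numeric illustration of the double-counting effect (prior and likelihood ratio invented): applying the same piece of evidence twice inflates the posterior, which is what happens when an agent's own evidence returns to it through a loop.

```python
# Bayes update with the same likelihood applied once vs. twice.
prior = 0.5                  # P(fault)
likelihood_ratio = 9.0       # P(obs | fault) / P(obs | no fault)

def update(p, lr):
    return p * lr / (p * lr + (1 - p))

once = update(prior, likelihood_ratio)
twice = update(once, likelihood_ratio)   # same evidence counted again
print(round(once, 3), round(twice, 3))   # 0.9 0.988
```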

  31. Prospects

  32. Prospects for distributed interpretation • A framework is provided for tasks that rely on uncertain knowledge and distributed inference, without sacrificing coherence in the interpretation. • The framework protects each agent developer's know-how and hence encourages cooperation of many agent developers in building MASs for large and complex domains. • Example: systems for troubleshooting complex artifacts.

  33. Prospects for distributed interpretation • The framework suggests standardization of agent interfaces in large and complex domains where knowledge sources are naturally distributed and separately owned. • The framework suggests future research directions: • Dynamic formulation of multiagent MSBNs. • Incorporation of decision making. • Incorporation of temporal inference.
