
Re-configurable and Scalable Distributed Systems with Autonomous Agents


Presentation Transcript


  1. Re-configurable and Scalable Distributed Systems with Autonomous Agents Mohsen A. Jafari, Ph.D. Thomas O. Boucher, Ph.D. Dept. of Industrial & Systems Engineering Rutgers University This work has been partially sponsored by a grant from the National Science Foundation.

  2. Team Members Ardavan Amini Peng Zhao Leila Zia

  3. Outline of the Talk • Vision statement • Example • General Overview of our system • General Application areas • Another Example • Implementation & Enabling Technologies • Background Review

  4. Vision Statement To develop a systematic and unified control framework for re-configurable and scalable multi-agent distributed systems, which can be mapped to real life applications in manufacturing, business and transportation.

  5. Example • Think of robot agents that carry letters between floors in a building and also perform other functions or tasks. • Agent A1 carries letters between floors 0–2. • Agent A2 carries letters between floors 1–5. • Requests are made by another agent, say B. • Stairs and elevators can be used. • Robots are subject to failures, e.g., failure of legs or hands (penalty). • Robots may have more than one way of doing the same thing, but with different costs, e.g., a robot with two hands. • Robots are not pre-programmed; they only have some basic skills or functions. Basic functions have an initial cost, but the cost changes with circumstances. • They should learn how to use these skills to solve a problem (synthesize a controller). • Some bad states or conditions exist.

  6. Agent’s Decision Making Process Each agent goes through these stages: • Bidding • Control synthesis and schedule optimization at the local level • Commitment & execution

  7. Bidding The service requestor broadcasts the request, and service providers respond with a plan and a cost value. • Two possibilities: • The agent has no experience. • Synthesize a solution from some default initial condition and estimate a cost, assuming no failures, no faults, … • The agent has some earlier experience. • A history of bid values, actual costs, and solutions exists. • (Challenge) A diagnosis for the differential between bid and actual cost can be established. The differential can be due to unexpected internal failures, different initial conditions, unexpected adverse environmental conditions, … • An expected cost can be established. • An expected plan can be established.
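The two bidding cases above can be sketched in code. This is an illustrative toy, not the talk's mechanism: the class and function names (`ProviderAgent`, `broadcast`) and the use of a plain historical mean as the expected cost are assumptions.

```python
import statistics

class ProviderAgent:
    """Hypothetical service provider that bids using its cost history."""
    def __init__(self, name, base_cost):
        self.name = name
        self.base_cost = base_cost   # default cost: no failures, no faults
        self.history = []            # actual costs of past executions

    def bid(self, request):
        # No experience: estimate from default initial conditions.
        if not self.history:
            return self.base_cost
        # Earlier experience: bid the expected cost from history.
        return statistics.mean(self.history)

    def record_actual(self, cost):
        # Keeping actuals lets the agent diagnose bid/actual differentials.
        self.history.append(cost)

def broadcast(request, providers):
    """Service requestor broadcasts; the cheapest bid wins."""
    bids = {p.name: p.bid(request) for p in providers}
    winner = min(bids, key=bids.get)
    return winner, bids
```

For example, a provider whose past executions cost 12 and 14 would bid 13, and lose to an inexperienced rival bidding a base cost of 11.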

  8. Goal Function Synthesis Using Search Techniques [Figure: a search tree over states S0–S6, expanded by applying basic functions f1–f4 along the arcs. Agent A1 has 4 basic functions: f1, f2, f3, f4.] • Issues: • Arcs are dependent on the agent’s: • skill options; • experience from its own internal behavior, failures, etc.; • its perception of the environment.
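The search over sequences of basic functions can be sketched as a uniform-cost search; the interface below (`functions` as a successor generator, `cost_of` as an experience-adjusted cost) is an assumed shape, not the talk's actual algorithm.

```python
import heapq

def synthesize(start, goal, functions, cost_of):
    """Uniform-cost search over sequences of basic functions.

    `functions(state)` yields (function_name, successor_state) pairs
    available in that state; `cost_of(f)` gives the current cost of
    applying f, which may change with experience and environment.
    """
    frontier = [(0, start, [])]
    best = {start: 0}
    while frontier:
        cost, state, plan = heapq.heappop(frontier)
        if state == goal:
            return plan, cost          # synthesized controller + its cost
        for f, succ in functions(state):
            new_cost = cost + cost_of(f)
            if succ not in best or new_cost < best[succ]:
                best[succ] = new_cost
                heapq.heappush(frontier, (new_cost, succ, plan + [f]))
    return None, float("inf")          # no sequence reaches the goal
```

In the letter-carrying example, climbing from floor 0 to floor 2 with a single "go up one floor" function would yield the plan [f1, f1] at cost 2.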

  9. Agents – Formal Definition [Figure: mapping between basic functions and resources.]

  10. Agents – Formal Definition [Figure: jobs J1–J3 decomposed into basic functions bf1–bf5.] At any time an agent’s condition can be defined by: • current state = (status, location, …) • available resources and basic functions • WTL = work-to-do list, flags, initiators, and responses • schedule
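The four-part condition listed on this slide maps naturally onto a record type. The field names and the `can_do` helper below are illustrative choices, not part of the talk's formal definition:

```python
from dataclasses import dataclass, field

@dataclass
class AgentCondition:
    """Snapshot of an agent's condition at one instant (illustrative)."""
    state: tuple                                  # (status, location, ...)
    resources: set = field(default_factory=set)   # currently available
    basic_functions: set = field(default_factory=set)
    wtl: list = field(default_factory=list)       # work-to-do list
    flags: dict = field(default_factory=dict)     # initiators, responses
    schedule: list = field(default_factory=list)

    def can_do(self, function, needed_resources):
        # A basic function is feasible only if the agent possesses it
        # and holds every resource it maps to.
        return (function in self.basic_functions
                and needed_resources <= self.resources)
```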

  11. Agent’s Perception of its Environment Problem: To build an automaton that describes the agent’s view of its environment. Issues and Challenges: The agent only sees a set of labels from its environment (e.g., sensory feedback). It can use these, together with its own actions, to synthesize its perception model. The cause-and-effect relationship between the agent’s actions and the sensory feedback may not be obvious. Some Possible Approaches: Markov decision processes; reinforcement learning Hidden Markov models Language theory, …

  12. Failures and Faults • Initially, an agent may or may not be aware of its failure or fault conditions, internal or external; these are learned by experience. • The failure detection and diagnosis routine computes the sequence of actions and conditions that lead to such a failure condition. • Knowledge of failures and faults alters the cost of performing basic functions.

  13. General Overview of our System • A distributed system of groups of agents where: • Each group has a manager in charge of bookkeeping and dispatching new jobs to the group. • Agents can be service providers or service requestors. • They are “somewhat” intelligent, so that each agent individually can determine its sequence of actions (tasks) according to its local specifications and to the specification of the overall system.

  14. General Overview of our System • Agents are “somewhat” autonomous, but can be colonized. • Agents possess a set of basic functions or services; agents are not pre-programmed. • Agents communicate and negotiate with each other. • Service providers compete with each other to provide services to the service requestors. • New agents can plug into the system, and existing agents can plug out of it.

  15. General Application areas • Manufacturing systems • Agents: machines, robots, AGVs, human/machines, sensors, cells, … • Business systems • Agents: human/machines, software agents, business units • Transportation systems • Agents: vehicles, traffic agents, sensors and signals, … • etc.

  16. Another Example • Process Control: System: A chemical reaction where temperature, humidity and pressure must be controlled. Real-time control: local controllers, outside our control scope, belong to the system environment. Supervisory control: • Shut down • Turning fan switches on or off to control temperature or humidity • System reset, etc. Desirable behavior: • The system should not explode (a bad state). • The system should work with “minimal” cost.

  17. Example Problem: Design multi-functional agent(s) which (among other tasks) can supervise this system according to the local and global specifications, including “minimal cost”. Issues and Challenges: At different times, different agents can be assigned to control this system (depending on some sort of bidding mechanism). These agents may also be responsible for many other functions. None of these agents is pre-programmed for this function. They only know their basic functions and possess a knowledge base of what they have already done. Agents could learn by exploring, from experience, etc. Each basic function of an agent is associated with an initial cost. The cost could change due to environmental factors.

  18. Example – A Solution Strategy • Learn a normative model (simulate). • It may take a while, and perhaps after a number of faulty loops, a normative behavior with a reasonable cost can be obtained. • Cost modeling of sequences of actions with some sort of discount factor may be needed. • To avoid local-minimum traps, “explore” can be used. • Add some disruptive conditions (due to internal and external factors) to the above model. • Build a perception of the environment. • Learn about disruptive behavior; explore; fault diagnosis. • Update the normative model. Repeat.
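Two of the ingredients above, discounted sequence costs and occasional exploration, can be sketched in a few lines. Both the discounting form and the epsilon-greedy plan choice are assumed standard constructions, not details given in the talk:

```python
import random

def discounted_cost(costs, gamma=0.9):
    """Cost of an action sequence with a discount factor gamma."""
    return sum(c * gamma**t for t, c in enumerate(costs))

def choose_plan(plans, cost_of, epsilon=0.1, rng=random):
    """Epsilon-greedy plan selection: mostly exploit the cheapest known
    plan, but explore occasionally to escape local-minimum traps."""
    if rng.random() < epsilon:
        return rng.choice(plans)
    return min(plans, key=cost_of)
```

For instance, a three-step sequence of unit-cost actions with gamma = 0.5 has discounted cost 1 + 0.5 + 0.25 = 1.75.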

  19. Fault Analysis – Issues & Challenges • Embedded fault detection and diagnosis mechanism. • Run-time fault states (e.g., deadlocks or undesirable states for one or more agents) must be detected at the control planning stage and prevented from happening at the execution stage in the future. • Fault states may or may not be known in advance, but in any case the means of detecting or preventing them cannot be established at any design stage. • Fault state detection and prevention in distributed systems require communication between agents. Local information on non-local fault states must be combined with information from other agents. • Fault state prevention may require event disabling or enforcement at some earlier stage.

  20. Fault Analysis – Issues & Challenges • Events can be observable or unobservable. Failures are unobservable events. This leads to non-deterministic behavior. • Some sensory information is available. • Sensor readings can be affected by more than one component within the system (agent). • Environmental factors could affect the sensory information. • Failure detection/diagnosis must work from available sensor readings (indirect information) and observable events.

  21. Fault Detection/Diagnosis in Distributed Systems – Issues & Challenges • Distributed information on faults. • Who should initiate the fault detection/diagnosis? • What information should be exchanged? • How should the exchange proceed? Who should be involved? • The information is exchanged in real time, but may be incomplete, delayed and erroneous. • How should the same fault be avoided in the future? • What information needs to be kept for future avoidance?

  22. Existing Models • Forward Diagnosis: The hypothesis is updated according to the current events until a conclusion is reached: • Propagation Model • Event Based Model • Probabilistic Model • Backward Diagnosis: When there is a fault, backtrack the event sequence to find the events and conditions leading to the failure: • Fault Tree Analysis • Back-Firing Timed Petri Nets
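The backward-diagnosis idea can be illustrated with a small backtracking routine. The `causes` map stands in for a fault-tree or back-firing Petri-net model; everything here (names, the cause relation) is a hypothetical illustration, not one of the cited models:

```python
def backward_diagnose(event_log, fault_event, causes):
    """Backtrack from a fault to the event chain that led to it.

    `causes` maps an event to the set of events that can trigger it,
    playing the role of a fault tree. The log is walked backwards,
    collecting each event that explains the previous link in the chain.
    """
    chain = [fault_event]
    for event in reversed(event_log):
        if event in causes.get(chain[-1], set()):
            chain.append(event)
    chain.reverse()
    return chain
```

With causes {explosion <- overheat <- fan_off}, a log ending in "overheat" traces the explosion back to "fan_off".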

  23. An Example The system has three components: a heater, a thermocouple and a window. The heater is controllable; it has two normal states: on and off. The window is uncontrollable. The thermocouple measures the temperature of the room. The objective is to check whether the heater works properly.

  24. Behavioral Model for the example

  25. State Space Model

  26. Reduced State Space Model This model together with a probabilistic reasoning model can then be used to diagnose the failure.

  27. Agent Model

  28. Layers of the system

  29. Agent Enabling Technologies (Cont’d) • Component Based Automation (CBA) by Siemens. • IEC 61499 • Standard for distributed systems • Low level communication • Not fully developed / not adopted.

  30. Agent Enabling Technologies (Cont’d) IEC - The Basic Function Block
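The basic function block named on this slide pairs event inputs with algorithms through an execution control chart (ECC), firing event outputs when an algorithm completes. The class below is a toy caricature of that structure, not the IEC 61499 API, and all names in it are invented:

```python
class BasicFunctionBlock:
    """Minimal sketch of an IEC 61499 basic function block: an event
    input triggers an algorithm via the ECC, which reads data inputs,
    writes data outputs, and fires an event output."""
    def __init__(self):
        self.data_in = {}
        self.data_out = {}
        self.event_out = []   # record of fired output events
        self.ecc = {}         # event input -> (algorithm, event output)

    def connect(self, event_in, algorithm, event_out):
        self.ecc[event_in] = (algorithm, event_out)

    def fire(self, event_in):
        # An arriving event input runs its algorithm, then emits
        # the associated event output.
        algorithm, event_out = self.ecc[event_in]
        algorithm(self.data_in, self.data_out)
        self.event_out.append(event_out)
```

A typical REQ/CNF pattern: a REQ event runs an algorithm over the data inputs and confirms with CNF.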

  31. Agent Enabling Technologies (Cont’d) IEC - A Composite Function Block

  32. Agent Enabling Technologies (Cont’d) IEC - An Example of Using Function Block for Industry Control

  33. A background review: Theory of Agents The term "agent" is used (and misused) to describe a broad range of computational entities. This tends to obscure the differences between radically different approaches*: Some agents perform tasks individually ... others need to work together. Some are mobile ... some static. Some learn and adapt ... others don't. An agent is a reusable software/hardware component that provides controlled access to (shared) services and resources. * Michael Weiss - MITEL Corp

  34. A background review: Theory of Agents In the literature we find three types of agents: reactive or reflex agents, deliberative or goal-oriented agents, and collaborative agents. Reactive agents* respond only to external stimuli and the information available from their sensing of the environment. They show emergent behavior, which is the result of the interactions of these simple agents. * Brooks, R. A., “A robust layered control system for a mobile robot,” IEEE Journal of Robotics and Automation, vol. 2, pp. 14-23, 1986.

  35. A background review: Theory of Agents Goal-directed agents have domain knowledge and the planning capabilities necessary to take a sequence of actions in the hope of reaching a specific goal. Collaborative agents work together to solve large problems. Each individual agent is autonomous. These agents solve problems by collaboration and synergy. Problems are decomposed into smaller chunks that can be solved by a modular approach. The approach is based on specialization of agent functions and domain knowledge.

  36. A background review: Theory of Agents Bratman* has introduced agents with beliefs, desires and intentions (BDI) as a form of collaborative agents. What an agent believes to be true will be the basis for all of its reasoning, planning and actions. When an agent reasons about the state of the world (beliefs) and its desires (goals), it must decide what course of action to take (intentions). * Bratman, M. E., “Intention, Plans, and Practical Reason,” Cambridge, MA: Harvard University Press, 1987.

  37. A background review: Theory of Agents Agents and AI • The idea is to design intelligent agents to achieve specific tasks automatically. • An intelligent agent works based on events: if a specific event happens, the corresponding agent reacts accordingly, taking particular actions under different conditions (states) in order to satisfy a predetermined objective. • When an event occurs, the corresponding agent must recognize it and respond to it.

  38. A background review: Theory of Agents [Figure: Nwana’s agent typology. Three overlapping properties – learn, cooperate, autonomous – whose intersections give collaborative learning agents, collaborative agents, interface agents, and, at the center, smart agents.] Source: H. Nwana, BT Laboratories, U.K., “Software Agents: An Overview,” Knowledge Engineering Review, Vol. 11, No. 3, pp. 1-40, Sept 1996.

  39. Agent Enabling Technologies (Cont’d) • FIPA / KQML (Foundation for Intelligent Physical Agents & Knowledge Query and Manipulation Language) • Included in the CORBA specification. • A more practical form of KQML is documented in the evolving FIPA standard. • Based on the linguistic concept of "speech acts" (Searle). • Speech acts come in different flavors: directives ("I command"), interrogatives ("I ask"), commissives ("I promise"). • Most common: ask/tell, query/reply, offer/accept/reject (FIPA). • Pragmatic extensions to speech acts: facilitation, registration, errors.
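A KQML/FIPA-style message is essentially a performative plus addressing and content fields. The helper below assembles one as a plain dict; it is a schematic of the message shape, not a real FIPA library, and the field names are the conventional ones rather than an exact transcription of either standard:

```python
PERFORMATIVES = {"ask", "tell", "query", "reply", "offer", "accept", "reject"}

def acl_message(performative, sender, receiver, content, language="kqml"):
    """Build a speech-act message: who says what to whom, and how."""
    if performative not in PERFORMATIVES:
        raise ValueError("unknown performative: " + performative)
    return {
        "performative": performative,   # the speech act ("I ask", ...)
        "sender": sender,
        "receiver": receiver,
        "content": content,             # expression in `language`
        "language": language,
    }
```

In the robot example, requestor B might send `acl_message("ask", "B", "A1", "(deliver letter floor 2)")` and A1 would answer with a `tell` carrying its bid.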

  40. Centralized vs. Distributed Fault Analysis

  41. Example of Distributed Diagnosis (Deadlock Detection)

  42. Higher Order Deadlock Detection [Figure: probe exchange among agents A1, A2, A3. A1 sends e1 to A2 while A3 sends e2 to A2, creating wait-for-graph (WFG) edges A1 -> A2, A2 -> A3 and A3 -> A2, A2 -> A1. Each agent checks its knowledge base of bad states and requests the state of the agent it waits on (A2 learns that e2 is pending in A3 and rejects e1; A2 learns that e1 is pending in A1 and rejects e2). When the probe chain A3: A2, A1, A2, A3 returns to its initiator, a higher-order deadlock is detected, and the bad state (A1:e1, A2:e2; A2:e1, A3:e2) is added to the knowledge base.]
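The probe exchange in the figure follows the classic edge-chasing pattern: a probe travels along wait-for edges and deadlock is declared when it revisits a node. The sketch below is a simplified, centralized stand-in for that distributed message exchange (in the real setting each hop is a message between agents):

```python
def detect_deadlock(wfg, start):
    """Edge-chasing probe on a wait-for graph.

    `wfg[x] = y` means agent x is blocked waiting on agent y.
    Follow the chain of blocked agents from `start`; if the probe
    revisits any agent, a (possibly higher-order) deadlock exists.
    Returns (deadlocked, path traversed by the probe).
    """
    seen = [start]
    node = start
    while node in wfg:            # node is blocked: forward the probe
        node = wfg[node]
        if node in seen:          # probe came back around: cycle found
            return True, seen + [node]
        seen.append(node)
    return False, seen            # probe reached an unblocked agent
```

On the slide's graph (A1 waits on A2, A2 on A3, A3 on A2), a probe started at A1 detects the A2/A3 cycle; the detected bad state would then be stored in the knowledge base for future avoidance.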

  43. Deadlock Detection: Mitchell-Merritt Algorithm [Figure: machines M1–M3 blocking on one another; each carries a public/private label pair (values 1, 2, 3 in the figure) updated by block and transmit steps, until a machine receives its own label back and deadlock is detected (*).]
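Mitchell-Merritt detection works on public/private label pairs: blocking raises the waiter's labels above its holder's, larger public labels then migrate backwards along wait edges, and a blocked process that sees its own private label come back declares deadlock. A condensed sketch of those rules (the message passing is elided into direct reads, and the class is illustrative):

```python
import itertools

_counter = itertools.count(1)

class MMProcess:
    """A process carrying Mitchell-Merritt (public, private) labels."""
    def __init__(self):
        self.public = self.private = next(_counter)

class MMProcessPair:  # not part of the algorithm; placeholder removed below
    pass

def block(waiter, holder):
    # Block step: waiter takes a fresh value above both public labels.
    u = max(waiter.public, holder.public) + 1
    waiter.public = waiter.private = u

def transmit(waiter, holder):
    # Transmit step: a larger public label migrates against the wait edge.
    if holder.public > waiter.public:
        waiter.public = holder.public
    # Detect step: a still-blocked process whose own private label has
    # travelled the whole cycle and returned declares deadlock.
    return waiter.public == waiter.private and waiter.public == holder.public
```

For a two-process cycle (a waits on b, b waits on a), the first transmit only propagates the larger label; the second one returns it to its owner and reports the deadlock.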

  44. Future Work • Complete underlying models and algorithms for synthesis and scheduling • Prototyping • Extension of the models to a multi-layered system
