
Do software agents know what they talk about?

Presentation Transcript


  1. Do software agents know what they talk about? Agents and Ontology, Dr. Patrick De Causmaecker, Nottingham, March 7-11, 2005

  2. Overview • Agents • Ontology • Communication • RDF • Semantic Web • Sample Implementations

  3. Agents • Examples • Definitions • Properties • Why Agents • Pitfalls • Models • Architectures • Standards

  4. Examples • Tim Berners-Lee's example of negotiating agents • Agents in a Route Planning Application • Tele Truck • Planning of Lab Sessions • Personal Assistant • Intelligent Room Project

  5. Tim Berners-Lee's example • The entertainment system was belting out the Beatles' "We Can Work It Out" when the phone rang. When Pete answered, his phone turned the sound down by sending a message to all the other local devices that had a volume control. His sister, Lucy, was on the line from the doctor's office: "Mom needs to see a specialist and then has to have a series of physical therapy sessions. Biweekly or something. I'm going to have my agent set up the appointments." Pete immediately agreed to share the chauffeuring.

  6. Tim Berners-Lee's example • At the doctor's office, Lucy instructed her Semantic Web agent through her handheld Web browser. The agent promptly retrieved information about Mom's prescribed treatment from the doctor's agent, looked up several lists of providers, and checked for the ones in-plan for Mom's insurance within a 20-mile radius of her home and with a rating of excellent or very good on trusted rating services. It then began trying to find a match between available appointment times (supplied by the agents of individual providers through their Web sites) and Pete's and Lucy's busy schedules. (The emphasized keywords indicate terms whose semantics, or meaning, were defined for the agent through the Semantic Web.)

  7. Tim Berners-Lee's example • In a few minutes the agent presented them with a plan. Pete didn't like it—University Hospital was all the way across town from Mom's place, and he'd be driving back in the middle of rush hour. He set his own agent to redo the search with stricter preferences about location and time. Lucy's agent, having complete trust in Pete's agent in the context of the present task, automatically assisted by supplying access certificates and shortcuts to the data it had already sorted through.

  8. Tim Berners-Lee's example • Almost instantly the new plan was presented: a much closer clinic and earlier times—but there were two warning notes. First, Pete would have to reschedule a couple of his less important appointments. He checked what they were—not a problem. The other was something about the insurance company's list failing to include this provider under physical therapists: "Service type and insurance plan status securely verified by other means," the agent reassured him. "(Details?)"

  9. Tim Berners-Lee's example • Lucy registered her assent at about the same moment Pete was muttering, "Spare me the details," and it was all set. (Of course, Pete couldn't resist the details and later that night had his agent explain how it had found that provider even though it wasn't on the proper list.)

  10. Agents in a route planning application • The problem: • Mobile nurses travel from patient to patient during the day. • They have to meet some patients within certain time windows. • They have to finish within a fixed number of hours. • They want to be near their home at lunch time, dislike certain patients, …
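To make the problem concrete, here is a minimal sketch of the data such a system has to juggle. All names and fields are illustrative assumptions, not taken from the actual application:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    """A visit that must fall inside a time window (minutes since midnight)."""
    name: str
    window_start: int   # earliest allowed arrival
    window_end: int     # latest allowed arrival
    duration: int       # length of the visit in minutes

@dataclass
class Nurse:
    """A nurse with a fixed daily time budget and personal preferences."""
    name: str
    max_minutes: int                   # must finish within this budget
    home: tuple                        # (x, y); wants to be near here at lunch
    disliked: frozenset = frozenset()  # patients this nurse prefers to avoid
```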

  11. Agents in a route planning application • The procedure: • The dispatching center calculates the routes for the day. • It tries to minimise the travel time and to equalise the workload over the routes. • The nurses are assigned a route according to some criteria. • The nurses may negotiate and exchange routes.

  12. Agents in a route planning application • Agents? • Agents representing the nurses effectively do the negotiation. • They use a measure of sympathy reflecting the agents' interrelationships. • This sympathy allows for some memory in the system.

  13. Agents in a route planning application • Negotiation: • Agents switch between three states: • Enquiring • Agents with a non-satisfactory route (personal cost > 30) • Listening • Agents with a satisfactory route (personal cost <= 30) • Occupied • Agents involved in a discussion
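A minimal sketch of these three states and the rule that selects between them, assuming the cost threshold of 30 from the slide; the names are illustrative:

```python
from enum import Enum, auto

class AgentState(Enum):
    ENQUIRING = auto()   # unsatisfied with its route, actively seeks a swap
    LISTENING = auto()   # satisfied, but willing to answer requests
    OCCUPIED = auto()    # currently engaged in a discussion

SATISFACTION_THRESHOLD = 30  # personal cost above this makes an agent enquire

def idle_state(personal_cost: float) -> AgentState:
    """State an agent falls back to when it is not in a discussion."""
    if personal_cost > SATISFACTION_THRESHOLD:
        return AgentState.ENQUIRING
    return AgentState.LISTENING
```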

  14. Sympathy • In the course of a negotiation, an agent may accept a worsening of its own loop cost. • It calculates cost = loopcost – sympathy/2. • A cost outside [0,80] is rejected. • In [0,80] the chance of rejection is cost/81. • If it accepts, the cost is sent to the requesting agent.

  15. Sympathy • The accepting agent accepts if the cost is less than 20, or else with a probability of 1-(cost-19)/61. • On acceptance, the requesting agent increases its sympathy for the offering agent by cost, and vice versa for the offering agent. • Negotiation stops when all agents are in the listening state.
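Slides 14 and 15 pin the acceptance rules down precisely, so they translate almost line for line into code. The sketch below covers only the two decision rules and the sympathy update; the function names are mine, and the surrounding message exchange is omitted:

```python
import random

def offering_agent_response(loop_cost: float, sympathy: float):
    """Slide 14: the offering agent may tolerate a worsening of its own
    loop cost, discounted by sympathy. Returns the cost on acceptance,
    None on rejection."""
    cost = loop_cost - sympathy / 2
    if not 0 <= cost <= 80:            # outside [0, 80]: always rejected
        return None
    if random.random() < cost / 81:    # inside [0, 80]: reject with chance cost/81
        return None
    return cost                        # accepted: send the cost to the requester

def requesting_agent_accepts(cost: float) -> bool:
    """Slide 15: cheap offers are accepted outright; expensive ones with a
    probability that falls to zero as the cost approaches 80."""
    if cost < 20:
        return True
    return random.random() < 1 - (cost - 19) / 61

def update_sympathy(sympathy_of: dict, a: str, b: str, cost: float) -> None:
    """On acceptance, each agent raises its sympathy for the other by cost."""
    sympathy_of[(a, b)] = sympathy_of.get((a, b), 0.0) + cost
    sympathy_of[(b, a)] = sympathy_of.get((b, a), 0.0) + cost
```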

  16. Fig. 3: Diagram of the algorithms used to solve the mobile nursing service problem. Patient data feeds trajectory generation (Clarke-Wright, then Tabu Search), yielding fixed trajectories; these are assigned to nurses in a draft assignment; agent data then drives the negotiations, in which trajectory swaps lead to sympathy level changes; the result is an assignment of trajectories to nurses.

  17. Tele Truck • Two levels of scheduling using agents in transportation: • Shipment contracting between firms • Effectively planning the transport using trucks • Tele Truck mainly concentrated on the second issue. • Trucks consist of a driver, a carrier and an engine. • These may be in different places and must be brought together for a certain job. • Tele Truck uses a bidding scheme based on the contract net protocol (CNP), sketched below.
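A bare-bones CNP round looks roughly as follows. This is a generic sketch, not Tele Truck's actual interface; the toy TruckAgent, its one-dimensional positions and its 100-unit range are all assumptions made for the example:

```python
class TruckAgent:
    """Toy contractor: bids its distance to the job, declines if too far."""
    def __init__(self, name, position):
        self.name = name
        self.position = position
        self.jobs = []

    def bid(self, task):
        distance = abs(self.position - task)     # task given as a 1-D location
        return distance if distance <= 100 else None

    def award(self, task):
        self.jobs.append(task)

def contract_net_round(task, contractors):
    """One contract-net round: the manager announces the task, collects
    bids, and awards the task to the cheapest bidder. Returns the winner,
    or None if every contractor declines."""
    bids = {}
    for contractor in contractors:
        bid = contractor.bid(task)
        if bid is not None:                      # None means "I decline"
            bids[contractor] = bid
    if not bids:
        return None
    winner = min(bids, key=bids.get)             # lowest-cost bid wins
    winner.award(task)
    return winner

trucks = [TruckAgent("t1", 10), TruckAgent("t2", 60)]
print(contract_net_round(40, trucks).name)       # -> "t2" (distance 20 beats 30)
```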

  18. Personal assistant • A personal assistant is a software agent that follows your actions and tries to help. • It can follow your surfing behaviour and decide to track certain pages for you. • It can read over your shoulder and try to find related documents on your hard disk. • It can be a helping paperclip. • It can follow links on the website you are reading and point at nearby sites of interest to you.

  19. Personal assistant • In order to perform well, such an agent must: • Act autonomously • Be able to learn • Be able to build your profile • AI techniques must be used for the learning. • Data mining can reveal patterns in your behaviour.

  20. Intelligent Room Project • Room behaves as a person • Reasons about what happens in the room • Tries to anticipate • Has many sensory inputs • Is equipped with enormous computing power • http://www.ai.mit.edu/people/mhcoen/

  21. Reference • “An Introduction to Multiagent Systems”, Michael Wooldridge, Department of Computer Science, University of Liverpool, UK. John Wiley & Sons, Ltd, 2002. ISBN 0-471-49691-X. • Links

  22. Links • http://allserv.kahosl.be/~patdc/Agents/ • http://www.csc.liv.ac.uk/~mjw • http://www.csc.liv.ac.uk/~mjw/links/

  23. Introduction to Agent Based Systems • What • Vision • Viewpoints • Criticising MAS

  24. What • Five trends have dominated the history of computing: • Ubiquity • Interconnection • Intelligence • Delegation • Human-orientation

  25. Ubiquity • The decreasing cost of computing power allows it to be introduced in unexpected environments: • Electrical devices • On-board computers • Mobile phones • … • E.g. http://ingenieur.kahosl.be/projecten/amobe/

  26. Interconnectivity • Computers are networked (Internet). • Distributed systems are no longer considered strange, rare beasts that are hard to handle and understand and beyond human control. • Nowadays we have to think of interaction as the fundamental force of computer science.

  27. Intelligence • The complexity of the tasks we entrust to computers is increasing every day. • Our ability to build trustworthy systems that can operate in critical situations increases accordingly.

  28. “The A380, which will seat 555 passengers in a typical three-class interior layout, will enter airline service in 2006.”

  29. Delegation • As a consequence, we entrust ever more complex tasks to the computer (navigating an airplane, playing the stock market, …). • Computer systems are reaching a level of control over humans and society previously heard of only in science fiction stories.

  30. Human-orientation • The first computers were programmed through switches. One had to understand all details of the machine to be able to use it. • Later, textual interfaces allowed users to interact with the computer on a line-by-line basis. • From the 1980s we have seen graphical user interfaces appear. The user can manipulate objects such as files, programs and devices through their icons.

  31. Mark 1 Colossus (Christmas 1943)

  32. Paper-thin screens: the manipulation paradigm

  33. Mission • The fundamental mission for software developers is: how do we incorporate those trends in our applications? • E.g. • Ubiquity and interconnection: “global computing”, 10^10 processors?! • Delegation: how do we build devices that can take on our tasks in our place? • …

  34. Multi-agent systems • An agent is a computer system that is able to function as a representative of its owner. • An agent can find out what it needs to realise its design goals. • A multi-agent system consists of communicating agents. • Those agents will represent owners with diverse interests and goals. They will have to collaborate, co-ordinate and negotiate.

  35. Multi-agent systems: the problem • How do we build agents that are capable of functioning independently and autonomously in order to perform their tasks? (agent design) • How do we build agents that are able to interact with other agents to successfully perform their tasks, especially when the agents do not share interests and goals? (society design)
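As a rough illustration of the agent-design half, the sketch below shows the classic sense-decide-act loop: the agent drives its own control loop instead of being called from outside, which is where the autonomy lives. Every name in it (the toy environment, perceive, apply, choose_action) is an assumption made for the sketch, not a standard API:

```python
class CounterEnvironment:
    """Toy environment: a number the agent can increment toward a target."""
    def __init__(self):
        self.value = 0

    def perceive(self, agent):
        return self.value

    def apply(self, agent, action):
        self.value += action

class Agent:
    """Skeleton of an autonomous agent with a sense-decide-act loop."""
    def __init__(self, name, target):
        self.name = name
        self.target = target   # the design goal, here just a number to reach

    def run(self, environment):
        while environment.perceive(self) < self.target:
            percept = environment.perceive(self)    # sense the world
            action = self.choose_action(percept)    # decide, without outside help
            environment.apply(self, action)         # act on the world

    def choose_action(self, percept):
        # A real agent would plan, learn or negotiate here;
        # this toy one just steps toward its target.
        return 1

env = CounterEnvironment()
Agent("demo", target=3).run(env)
print(env.value)   # -> 3
```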

  36. Questions • How can collaboration emerge in societies of self-interested agents? • Which languages can agents use in their communication? • How do self-interested agents find out when their goals are conflicting, and how can they reach agreement? • How do autonomous agents co-ordinate activities?

  37. Vision • Scenario 1 • Due to an unexpected system failure, a space probe approaching Saturn loses contact with its Earth-based ground crew and becomes disoriented. Rather than simply disappearing into the void, the probe recognizes that there has been a key system failure, diagnoses and isolates the fault, and correctly re-orients itself in order to make contact with its ground crew.

  38. Autonomous vehicles

  39. Vision • Scenario 2 • A key air-traffic control system at the main airport of Ruritania suddenly fails, leaving flights in the vicinity of the airport with no air-traffic control support. Fortunately, autonomous air-traffic control systems in nearby airports recognize the failure of their peer, and cooperate to track and deal with all affected flights. The potentially disastrous situation passes without incident.

  40. Vision • Scenario 3 • After the wettest and coldest (UK) winter on record, you are in desperate need of a last minute holiday somewhere warm and dry. After specifying your requirements to your personal digital assistant (PDA), it converses with a number of different Web sites, which sell services such as flights, hotel rooms, and hire cars. After hard negotiation on your behalf with a range of sites, your PDA presents you with a package holiday.

  41. Some views of the field • Agents as a paradigm for software engineering. • Agents as a tool for understanding human societies.

  42. Agents as a paradigm for software engineering • Interaction is the key. Programs that process a specific input and produce a specified output are a minority. • In recent years, tools have been designed and developed to build systems of interacting components.

  43. Agents as a tool for understanding human societies • “Psychohistory” allows sociological predictions (Asimov). • Sociologists can use MAS to build simulations. (E.g.: How did social complexity evolve in the Paleolithic?)

  44. Objections to MAS • Just distributed/concurrent programming? • Just artificial intelligence? • Just game theory? • Just social science?

  45. Distributed/concurrent programming? • Important work has been done in this field since the 1970s. Agents build on this work and add a dimension: • Autonomy: synchronisation mechanisms are not hard-coded. • Encounters have an economic meaning because the agents are self-interested. This differs from a situation where components are built to co-operate.

  46. Artificial intelligence? • Agents are sometimes considered a subfield of AI, or vice versa: • AI has concentrated on learning, planning, understanding, …; an agent integrates these parts to arrive at decisions. Most agents (99%) use conventional programming and do not incorporate any AI at all. • The social aspect has not been investigated in AI at all. It is an essential constituent of any solution built on agents. It distinguishes humankind from its peer creatures, the animals.

  48. Game theory? • The very same pioneers who founded computer science created game theory and artificial intelligence: von Neumann, Turing. • Game theory is widely applied within MAS, but: • The methods of game theory result in techniques and concepts. MAS use those. • The rational agent from game theory may not have any meaning at all in the real world. Purely self-interested agents cannot contribute sufficiently to social well-being even to warrant survival.

  49. Social science? • The domain offers sociology possibilities to experiment. • But agent systems are not at all comparable to real-world societies where complexity is concerned.

  50. Questions on scenario 3 • How do you specify your preferences? • How does the agent compare the different offers? • Which algorithms govern the negotiations?
