
DCP 1172 Introduction to Artificial Intelligence



Presentation Transcript


  1. DCP 1172 Introduction to Artificial Intelligence Ch.1 & Ch.2 [AIMA] Chang-Sheng Chen

  2. AI Study/Research Map • Turing Test • Search-based System • Knowledge-based System • Logical Reasoning System • Neural Network • Fuzzy Network • Machine Learning • Genetic Programming DCP 1172, Lecture 2

  3. Applied Areas of AI • Game playing • Speech and language processing • Expert reasoning • Planning and scheduling • Vision • Robotics … DCP 1172, Lecture 2

  4. The Architectural Components of AI Systems • State-space search • Knowledge representation • Logical reasoning • Reasoning under uncertainty • Learning DCP 1172, Lecture 2

  5. The History of AI • The “Dark Ages”, or the birth of artificial intelligence (1943–1956) • The rise of artificial intelligence, or the era of great expectations (1956 – late 1960s) • Unfulfilled promises, or the impact of reality (late 1960s – early 1970s) • The technology of expert systems, or the key to success (early 1970s – mid-1980s) • How to make machines learn, or the rebirth of neural networks (mid-1980s – present) • Evolutionary computation, or learning by doing (early 1970s – present) DCP 1172, Lecture 2

  6. Acting Humanly: The Turing Test • Alan Turing's 1950 article Computing Machinery and Intelligence discussed conditions for considering a machine to be intelligent: “Can machines think?” → “Can machines behave intelligently?” • The Turing test (The Imitation Game): an operational definition of intelligence. DCP 1172, Lecture 2

  7. Acting Humanly: The Turing Test • Computer needs to possess: Natural language processing, Knowledge representation, Automated reasoning, and Machine learning • Are there any problems/limitations to the Turing Test? DCP 1172, Lecture 2

  8. Some Examples • Playing chess • Driving on the highway • Translating languages • Diagnosing diseases • Recognizing patterns (e.g., speech, characters, etc.) • Mowing the lawn • Internet-based applications (e.g., spam filtering, intrusion detection/prevention, etc.) DCP 1172, Lecture 2

  9. Playing Chess • Environment? • Board • Actions? • Legal moves • Doing the right thing? • Moves that lead to wins DCP 1172, Lecture 2

  10. Recognizing Speech • Environment • Audio signal • Knowledge of user • Actions • Choosing word sequences • Doing the right thing • Recovering the user's words DCP 1172, Lecture 2

  11. Translation • Environment • Source text to be translated • Actions • Word sequences in target language • Doing the right thing? • Words that achieve the same effect • Words that are faithful to the source DCP 1172, Lecture 2

  12. Recognizing Patterns • Environment • Visual characters (e.g., OCR) • Optical Character Recognition • Knowledge of user • Actions • Reading text from paper • Translating the images into a form that the computer can manipulate (for example, into ASCII codes) • Doing the right thing • Recovering the user's handwritten and/or printed words DCP 1172, Lecture 2

  13. Diagnosing Diseases • Environment • Patient information • Results of tests • Actions • Choosing diseases • Choosing treatments • Doing the right thing • Eliminating disease DCP 1172, Lecture 2

  14. Driving • Environment • Restricted access highway • Actions • Accelerate, brake, turn, navigate, other controls • Doing the right thing • Stay safe, get where you want to go, get there quickly, don’t get a ticket DCP 1172, Lecture 2

  15. Clothes Washing • Environment • Washing machine in a washroom • Actions • Washing (e.g., agitating, circulating, etc.) • Refilling with clean water • Dumping dirty water • Doing the right thing • Making clothes clean in a timely manner DCP 1172, Lecture 2

  16. Internet-based Application • Pattern recognition • Anti-virus, Intrusion detection system (IDS) • Content filtering • Anti-SPAM Mail filtering • Network Security • Intrusion detection/prevention system DCP 1172, Lecture 2

  17. Acting Humanly: The Full Turing Test • Alan Turing's 1950 article Computing Machinery and Intelligence discussed conditions for considering a machine to be intelligent • “Can machines think?” → “Can machines behave intelligently?” • The Turing test (The Imitation Game): an operational definition of intelligence. • Computer needs to possess: Natural language processing, Knowledge representation, Automated reasoning, and Machine learning • Problem: 1) The Turing test is not reproducible, constructive, or amenable to mathematical analysis. 2) What about physical interaction with the interrogator and environment? • Total Turing Test: Requires physical interaction and needs perception and actuation. DCP 1172, Lecture 2

  18. Acting Humanly: The Full Turing Test • Problem: • 1) The Turing test is not reproducible, constructive, or amenable to mathematical analysis. • 2) What about physical interaction with the interrogator and environment? DCP 1172, Lecture 2

  19. Acting Humanly: The Full Turing Test • Problem: 1) The Turing test is not reproducible, constructive, or amenable to mathematical analysis. 2) What about physical interaction with the interrogator and environment? [Cartoon: trap door] DCP 1172, Lecture 2

  20. What would a computer need to pass the Turing test? • Natural language processing: to communicate with examiner. • Knowledge representation: to store and retrieve information provided before or during interrogation. • Automated reasoning: to use the stored information to answer questions and to draw new conclusions. • Machine learning: to adapt to new circumstances and to detect and extrapolate patterns. DCP 1172, Lecture 2

  21. What would a computer need to pass the Turing test? • Vision (for Total Turing test): to recognize the examiner’s actions and various objects presented by the examiner. • Motor control (total test): to act upon objects as requested. • Other senses (total test): such as audition, smell, touch, etc. DCP 1172, Lecture 2

  22. Thinking Humanly: Cognitive Science • 1960s “Cognitive Revolution”: information-processing psychology replaced behaviorism • Cognitive science brings together theories and experimental evidence to model internal activities of the brain • What level of abstraction? “Knowledge” or “Circuits”? • How to validate models? • Predicting and testing behavior of human subjects (top-down) • Direct identification from neurological data (bottom-up) • Building computer/machine simulated models and reproducing results (simulation) DCP 1172, Lecture 2

  23. Thinking Rationally: Laws of Thought • Aristotle (~ 350 B.C.) attempted to codify “right thinking”: What are correct arguments/thought processes? • E.g., “Socrates is a man; all men are mortal; therefore Socrates is mortal” • Several Greek schools developed various forms of logic: notation plus rules of derivation for thoughts. DCP 1172, Lecture 2
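The syllogism on the slide can be mechanized directly. Below is a minimal sketch, assuming a toy representation of facts as (predicate, subject) pairs and rules as (premise, conclusion) pairs; the names are illustrative only, not part of the course material.

```python
# Facts and one universally quantified rule: "all men are mortal",
# i.e., for all x, Man(x) -> Mortal(x).
facts = {("Man", "Socrates")}
rules = [("Man", "Mortal")]

def forward_chain(facts, rules):
    """Apply each rule to every matching fact until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subject in list(derived):
                if pred == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# contains ("Mortal", "Socrates")
```

Forward chaining like this is the simplest "rules of derivation" engine; real logical-reasoning systems add variables, unification, and more expressive rule forms.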

  24. Thinking Rationally: Laws of Thought • Problems: • Uncertainty: Not all facts are certain (e.g., the flight might be delayed). • Resource limitations: • Not enough time to compute/process • Insufficient memory/disk/etc. DCP 1172, Lecture 2

  25. Acting Rationally: The Rational Agent • Rational behavior: Doing the right thing! • The right thing: the action that is expected to maximize goal achievement, given the available information • Provides the most general view of AI because it includes: • Correct inference (“Laws of thought”) • Uncertainty handling • Resource limitation considerations (e.g., reflex vs. deliberation) • Cognitive skills (NLP, automated reasoning, knowledge representation, ML, etc.) • Advantages: • More general • Its goal of rationality is well defined DCP 1172, Lecture 2

  26. How to achieve AI? • How is AI research done? • AI research has both theoretical and experimental sides. The experimental side has both basic and applied aspects. • There are two main lines of research: • One is biological, based on the idea that since humans are intelligent, AI should study humans and imitate their psychology or physiology. • The other is phenomenal, based on studying and formalizing common sense facts about the world and the problems that the world presents to the achievement of goals. • The two approaches interact to some extent, and both should eventually succeed. It is a race, but both racers seem to be walking. [John McCarthy] DCP 1172, Lecture 2

  27. Ontology DCP 1172, Lecture 2 Khan & McLeod, 2000

  28. The task-relevance map Scalar topographic map, with higher values at more relevant locations DCP 1172, Lecture 2

  29. More formally: how do we do it? (1) • Use ontology to describe categories, objects and relationships: Either with unary predicates, e.g., Human(John), Or with reified categories, e.g., John ∈ Humans, And with rules that express relationships or properties, e.g., ∀x Human(x) ⇒ SinglePiece(x) ∧ Mobile(x) ∧ Deformable(x) DCP 1172, Lecture 2

  30. More formally: how do we do it? (2) • Use ontology to expand concepts to related concepts: E.g., parsing the question yields “LookFor(catching)” Assume a category HandActions and a taxonomy defined by catching ∈ HandActions, grasping ∈ HandActions, etc. We can expand “LookFor(catching)” to looking for other actions in the category where catching belongs through a simple expansion rule: ∀a,b,c (a ∈ c ∧ b ∈ c ∧ LookFor(a)) ⇒ LookFor(b) DCP 1172, Lecture 2
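The expansion rule above can be sketched in a few lines. This is a toy illustration, assuming the taxonomy is stored as a flat action → category mapping; the extra entry "waving" is an invented example, not from the slide.

```python
# Taxonomy from the slide: actions grouped under the HandActions category.
taxonomy = {
    "catching": "HandActions",
    "grasping": "HandActions",
    "waving": "HandActions",   # invented extra member for illustration
}

def expand_look_for(action, taxonomy):
    """If a and b share a category and we LookFor(a), also LookFor(b):
    return every action in the same category as `action`."""
    category = taxonomy.get(action)
    return {a for a, c in taxonomy.items() if c == category}

print(expand_look_for("catching", taxonomy))
# every action in HandActions: catching, grasping, waving
```

A real ontology would support nested categories and relations, but the one-step expansion shown here is exactly what the ∀a,b,c rule licenses.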

  31. Last Time: Acting Humanly: The Full Turing Test • Alan Turing's 1950 article Computing Machinery and Intelligence discussed conditions for considering a machine to be intelligent • “Can machines think?” → “Can machines behave intelligently?” • The Turing test (The Imitation Game): an operational definition of intelligence. • Computer needs to possess: Natural language processing, Knowledge representation, Automated reasoning, and Machine learning • Problem: 1) The Turing test is not reproducible, constructive, or amenable to mathematical analysis. 2) What about physical interaction with the interrogator and environment? • Total Turing Test: Requires physical interaction and needs perception and actuation. DCP 1172, Lecture 2

  32. Last time: The Turing Test http://aimovie.warnerbros.com http://www.ai.mit.edu/projects/infolab/ DCP 1172, Lecture 2


  36. Last time: The Turing Test FAILED! http://aimovie.warnerbros.com http://www.ai.mit.edu/projects/infolab/ DCP 1172, Lecture 2

  37. This time: Outline • Intelligent Agents (IA) • Environment types • IA Behavior • IA Structure • IA Types DCP 1172, Lecture 2

  38. What is an (Intelligent) Agent? • An over-used, over-loaded, and misused term. • Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through its actuators to maximize progress towards its goals. DCP 1172, Lecture 2

  39. What is an (Intelligent) Agent? • PAGE (Percepts, Actions, Goals, Environment) • Task-specific & specialized: well-defined goals and environment • The notion of an agent is meant to be a tool for analyzing systems; it does not imply different hardware or new programming languages. DCP 1172, Lecture 2
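The PAGE characterization is just a structured description of an agent's task. A minimal sketch as a plain data structure, with example values for the chess-playing agent discussed earlier (the field contents are assumptions chosen to match the earlier slides):

```python
from dataclasses import dataclass

@dataclass
class AgentDescription:
    """PAGE description: Percepts, Actions, Goals, Environment."""
    percepts: list
    actions: list
    goals: list
    environment: str

# Example: the chess-playing agent from the earlier slide.
chess_player = AgentDescription(
    percepts=["board position"],
    actions=["legal moves"],
    goals=["moves that lead to wins"],
    environment="chess board",
)
print(chess_player.environment)  # chess board
```

Writing each example agent (speech recognizer, translator, driver, ...) in this form makes the "task-specific & specialized" point concrete: the analysis tool is the description, not any particular hardware.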

  40. Intelligent Agents and Artificial Intelligence • [Diagram: agency with sensors and actuators] • Example: Human mind as a network of thousands or millions of agents working in parallel. To produce real artificial intelligence, this school holds, we should build computer systems that also contain many agents and systems for arbitrating among the agents' competing results. • Distributed decision-making and control • Challenges: • Action selection: What next action to choose • Conflict resolution DCP 1172, Lecture 2

  41. Agent Types • We can split agent research into two main strands: • Distributed Artificial Intelligence (DAI) – Multi-Agent Systems (MAS) (1980 – 1990) • Much broader notion of “agent” (1990s – present): interface, reactive, mobile, information agents DCP 1172, Lecture 2

  42. Rational Agents • How to design this? • [Diagram: Environment → percepts → Sensors → Agent (“?”) → Actuators → actions → Environment] DCP 1172, Lecture 2
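The diagram's "?" box is the agent program: a function from percepts to actions, driven in a sense-act loop. A minimal sketch, where the stub mapping is an invented placeholder rather than any particular design:

```python
def agent_program(percept):
    """The '?' box: map the latest percept to an action (trivial stub)."""
    return "noop" if percept is None else f"react-to-{percept}"

def run(percepts):
    """Sense-act loop: feed each percept through the agent program,
    collecting the actions the actuators would execute."""
    return [agent_program(p) for p in percepts]

print(run(["obstacle", None]))  # ['react-to-obstacle', 'noop']
```

Everything that follows in the lecture (reflex agents, arbitration, rationality) is a different choice of what to put inside `agent_program`.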

  43. Remember: the Beobot example DCP 1172, Lecture 2

  44. A Windshield Wiper Agent How do we design an agent that can wipe the windshields when needed? • Goals? • Percepts? • Sensors? • Effectors? • Actions? • Environment? DCP 1172, Lecture 2

  45. A Windshield Wiper Agent (Cont’d) • Goals: Keep windshields clean & maintain visibility • Percepts: Raining, Dirty • Sensors: Camera (moisture sensor) • Effectors: Wipers (left, right, back) • Actions: Off, Slow, Medium, Fast • Environment: Inner city, freeways, highways, weather … DCP 1172, Lecture 2
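The wiper agent maps its two percepts to one of its four actions. A simple reflex sketch; the particular condition-action rules below are assumptions chosen only to illustrate the mapping, not the slide's specification:

```python
def wiper_agent(raining, dirty):
    """Condition-action rules from the two boolean percepts
    (Raining, Dirty) to a wiper setting."""
    if raining and dirty:
        return "Fast"
    if raining:
        return "Medium"
    if dirty:
        return "Slow"
    return "Off"

print(wiper_agent(raining=True, dirty=False))  # Medium
```

This is the simplest agent type: no state, no model of the environment, just a lookup from the current percept to an action.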

  46. Towards Autonomous Vehicles http://iLab.usc.edu http://beobots.org DCP 1172, Lecture 2

  47. Interacting Agents Collision Avoidance Agent (CAA) • Goals: Avoid running into obstacles • Percepts ? • Sensors? • Effectors ? • Actions ? • Environment: Freeway • Lane Keeping Agent (LKA) • Goals: Stay in current lane • Percepts ? • Sensors? • Effectors ? • Actions ? • Environment: Freeway DCP 1172, Lecture 2

  48. Interacting Agents Collision Avoidance Agent (CAA) • Goals: Avoid running into obstacles • Percepts: Obstacle distance, velocity, trajectory • Sensors: Vision, proximity sensing • Actuators: Steering Wheel, Accelerator, Brakes, Horn, Headlights • Actions: Steer, speed up, brake, blow horn, signal (headlights) • Environment: Freeway • Lane Keeping Agent (LKA) • Goals: Stay in current lane • Percepts: Lane center, lane boundaries • Sensors: Vision • Actuators: Steering Wheel, Accelerator, Brakes • Actions: Steer, speed up, brake • Environment: Freeway DCP 1172, Lecture 2

  49. Conflict Resolution by Action Selection Agents • Override: CAA overrides LKA • Arbitrate: if Obstacle is Close then CAA else LKA • Compromise: Choose action that satisfies both agents • Any combination of the above • Challenges: Doing the right thing DCP 1172, Lecture 2
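The "arbitrate" strategy above is a one-line rule. A sketch of it for the CAA/LKA pair; the distance threshold and the specific action strings are assumptions for illustration:

```python
CLOSE_THRESHOLD_M = 10.0  # assumed definition of "Obstacle is Close"

def caa_action(obstacle_distance):
    """Collision Avoidance Agent: placeholder policy."""
    return "brake"

def lka_action(lane_offset):
    """Lane Keeping Agent: placeholder policy."""
    return "steer-to-center"

def arbitrate(obstacle_distance, lane_offset):
    """If Obstacle is Close then CAA else LKA."""
    if obstacle_distance < CLOSE_THRESHOLD_M:
        return caa_action(obstacle_distance)
    return lka_action(lane_offset)

print(arbitrate(obstacle_distance=5.0, lane_offset=0.3))  # brake
```

Override would ignore the distance test entirely; compromise would blend the two agents' outputs (e.g., brake while steering), which is harder to express as a simple rule.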

  50. The Right Thing = The Rational Action • Rational Action: The action that maximizes the expected value of the performance measure given the percept sequence to date • Rational = Best? • Rational = Optimal? • Rational = Omniscience? (all-knowing, …) • Rational = Clairvoyant? (foreseeing the future, communicating with the dead, ...) • Rational = Successful? DCP 1172, Lecture 2
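The definition of rational action can be sketched as expected-value maximization. The outcome distributions below are invented numbers for illustration; what matters is the `max` over expected values, not the particular actions:

```python
def expected_value(outcomes):
    """outcomes: list of (probability, performance_value) pairs."""
    return sum(p * v for p, v in outcomes)

def rational_action(action_models):
    """Pick the action with the highest expected performance measure."""
    return max(action_models, key=lambda a: expected_value(action_models[a]))

# Invented example: each action's possible outcomes and their values.
action_models = {
    "brake":  [(0.9, 10), (0.1, -100)],  # EV = 9 - 10 = -1
    "swerve": [(0.5, 10), (0.5, -5)],    # EV = 5 - 2.5 = 2.5
}
print(rational_action(action_models))  # swerve
```

Note how this separates rationality from success: "brake" might work out fine on a given day, but "swerve" is the rational choice given the information the agent has. Omniscience would mean knowing the actual outcome, not just the distribution.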
