
CMPUT 366 Intelligent Systems: Introduction to Artificial Intelligence



Presentation Transcript


  1. CMPUT 366 Intelligent Systems: Introduction to Artificial Intelligence

  2. Instruction Team • Prof: Dekang Lin • Office hours: Tue, Thur: 3:30-4:30, or by appointment • Phone: 492-9920 • TAs: Yaling Pei, Mark Schmidt, Gang Wu • E-mail: c366@cs.ualberta.ca • Home Page: http://www.cs.ualberta.ca/~lindek/366 • Announcements • Slides • Assignments

  3. Textbooks • Required  • S Russell and P Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, 1995. • Recommended • D Poole, A Mackworth and R Goebel, Computational Intelligence: A Logical Approach , Oxford, 1998. • Nilsson, Artificial Intelligence: A New Synthesis, Morgan Kaufmann, 1998.

  4. Evaluation • 4 Assignments • 16% each. Solo! (see the code of conduct) • Paper/Pencil • Submit a hard copy before class on the due date; write legibly • Implementations (C++/Java) • Submit using ‘try’. The deadline is 11:59pm on the due date. • The implementations must run on the lab machines (in CSC 219) • Final Exam • 36%

  5. Other Issues • Prerequisites • Programming skills (C++, Java) • Elementary probability theory • AI Seminar • http://www.cs.ualberta.ca/~ai/seminars • Fridays at noon, CSC 333 • Neat topics, great speakers, FREE PIZZA!

  6. Course Overview • Introduction: intelligent agent • Search and constraint satisfaction • Logical agent and planning • Probabilistic reasoning • Natural language and speech • Perception (if there is time)

  7. What is Artificial Intelligence (AI)? Discipline that systematizes and automates intellectual tasks to create machines that:

  8. Act Like Humans • AI is the art of creating machines that perform functions that require intelligence when performed by humans • Methodology: Take an intellectual task at which people are better and make a computer do it • Prove a theorem • Play chess • Plan a surgical operation • Diagnose a disease • Navigate in a building

  9. Turing Test • Alan Turing, a mathematician who not only cracked the code of the German Enigma machine during the Second World War, but also invented the concept of computers as we know them. • Turing asserted that if you can fool a human into believing that he/she is receiving answers from another human when in fact it is a computer, this proves that computers are doing essentially what human brains do.

  10. “Can machines think?” → “Can machines behave intelligently?” • Operational test of intelligence: the Imitation Game • Problem: • The Turing Test is not reproducible, constructive, or amenable to mathematical analysis.

  11. Think Like Humans • How the computer performs functions does matter • Comparison of the traces of the reasoning steps • Cognitive science → testable theories of the workings of the human mind

  12. Examples • Garden-Path Sentence: • The horse raced past the barn fell. • Center-embedding: • The cat that the dog that the mouse that the elephant admired bit chased died. • The elephant admired the mouse that bit the dog that chased the cat that died. • But do we want to duplicate human imperfections?

  13. Think Rationally: Laws of Thought • Normative (or prescriptive) rather than descriptive • Aristotle: what are correct arguments/thought processes? • Several Greek schools developed forms of logic: notation and rules of derivation for thoughts. • Problems: • Not all intelligent behavior is mediated by logical deliberation • What is the purpose of thinking? What thoughts should I have?

  14. Act Rationally • Rational behavior: doing the right thing • “The right thing”: • that which is expected to maximize goal achievement, given the available information • Limited resources, imperfect knowledge • Rationality ≠ Omniscience, Rationality ≠ Clairvoyance, Rationality ≠ Success • Doesn’t necessarily (but often does) involve thinking • Ignores the role of consciousness, emotions, fear of dying, … • Doesn’t necessarily have anything to do with how humans solve the same problem.

  15. Example: Semantic Orientation • In many tasks, it is necessary to determine the semantic orientation of words • Mining movie reviews • Routing customer e-mail • Turney 2002 • Determine the semantic orientation of words using internet search engines (a rough sketch follows below).
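As a rough illustration of the idea behind Turney (2002): a word's semantic orientation can be estimated from how strongly it co-occurs with a positive seed word ("excellent") versus a negative one ("poor") in search-engine hit counts, using a PMI-style score. The Java sketch below is hedged: the hits function, the query strings, and the example counts are illustrative stand-ins, not the paper's exact queries or any real search-engine API.

  import java.util.function.ToDoubleFunction;

  public class SemanticOrientation {
      // SO(word) ~ log2( hits(word NEAR "excellent") * hits("poor")
      //               / ( hits(word NEAR "poor") * hits("excellent") ) )
      // Positive score: the word leans positive; negative score: it leans negative.
      static double orientation(String word, ToDoubleFunction<String> hits) {
          double numerator =
                  hits.applyAsDouble(word + " NEAR excellent") * hits.applyAsDouble("poor");
          double denominator =
                  hits.applyAsDouble(word + " NEAR poor") * hits.applyAsDouble("excellent");
          return Math.log(numerator / denominator) / Math.log(2.0);
      }

      public static void main(String[] args) {
          // Hypothetical hit counts for illustration only; a real system
          // would send each query string to a search engine.
          ToDoubleFunction<String> fakeHits = query -> {
              switch (query) {
                  case "unpredictable NEAR excellent": return 100.0;
                  case "unpredictable NEAR poor":      return 300.0;
                  case "excellent":                    return 1_000_000.0;
                  case "poor":                         return 1_500_000.0;
                  default:                             return 1.0;
              }
          };
          System.out.printf("SO(unpredictable) = %.2f%n",
                  orientation("unpredictable", fakeHits));
      }
  }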

  16. AI History

  17. Trends Since the 1990s • Relying less on logic and more on probability theory and statistics. • More emphasis on objective performance evaluation. • Intelligent agents • Accomplishments in • Game playing: Deep Blue, Chinook, … • Space probes • Biological sequence analysis • OCR • Consumer electronics • …

  18. Notion of an Agent • [Figure: a robot agent coupled to its environment through sensors (laser range finder, sonars, touch sensors) and actuators] • Source: robotics.stanford.edu/~latombe/cs121/2003/home.htm

  19. Notion of an Agent • [Figure: the agent–environment loop via sensors and actuators] • Locality of sensors/actuators • Imperfect modeling • Time/resource constraints • Sequential interaction • Multi-agent worlds • Source: robotics.stanford.edu/~latombe/cs121/2003/home.htm

  20. Example: Tracking a Target • [Figure: a robot keeping a moving target in view] • The robot must keep the target in view • The target’s trajectory is not known in advance • The robot may not know all the obstacles in advance • Fast decisions are required • Source: robotics.stanford.edu/~latombe/cs121/2003/home.htm

  21. What is Artificial Intelligence? (revised) • Study of the design of rational agents • agent = a thing that acts in an environment • rational agent = an agent that acts rationally: • its actions are appropriate for its goals and circumstances • it is flexible to changing environments and goals • it learns from experience

  22. Goals of Artificial Intelligence • Scientific goal: • understand principles that make rational (intelligent) behavior possible, in natural or artificial systems. • Engineering goal: • specify methods for design of useful, intelligent artifacts. • Psychological goal: • understanding/modeling people • cognitive science (not this course)

  23. Goals of This Course • Introduce key methods & techniques from AI • searching, • reasoning and decision making (logical and probabilistic) • learning (covered in detail in CMPUT 466) • language understanding, • . . . • Understand the applicability and limitations of these methods

  24. Goals of This Course • Our approach: • Characterize environments • Identify the agent that is most effective for each environment • Study increasingly complicated agent architectures requiring • increasingly sophisticated representations, • increasingly powerful reasoning strategies

  25. Intelligent Agents • Definition: An Intelligent Agent perceives its environment via sensors and acts rationally upon that environment with its actuators. • Hence, an agent receives percepts one at a time, and maps this percept sequence to actions. • Properties • Autonomous • Interacts with other agents plus the environment • Adaptive to the environment • Pro-active (goal-directed)

  26. Applications of Agents • Autonomous delivery/cleaning robot • roams around a home/office environment, delivering coffee and parcels, vacuuming, dusting, . . . • Diagnostic assistant helps a human troubleshoot problems and suggests repairs or treatments. • E.g., electrical problems, medical diagnosis. • Infobot searches for information on a computer system or network. • Autonomous space probes • . . .

  27. Task Environments: PEAS • Performance Measure • Criterion of success • Environment • The surroundings in which the agent operates • Actuators • Mechanisms for the agent to affect the environment • Sensors • Channels for the agent to perceive the environment

  28. Example: Taxi Driving • Performance Measure • Safe, fast, legal, comfortable trip, maximize profit • Environment • Roads, other traffic, pedestrians, customers • Actuators • Steering, accelerator, brake, signal, horn, … • Sensors • Cameras, sonar, speedometer, GPS, …

  29. Types of Environments • Fully observable (accessible) or not • Deterministic vs. stochastic • Episodic vs. sequential • Static vs. dynamic • Discrete vs. continuous • Single agent vs. multiagent • competitive vs. cooperative

  30. Example: Cleaning Agent

  31. Performance Measure • ?? • Environment • ?? • Actuators • ?? • Sensors • ??

  32. SurfBot • Automated web surfing • A SurfBot operates in the environment of the web. • takes in high-level, perhaps informal, queries • finds relevant information • presents information in meaningful way

  33. Performance Measure • ?? • Environment • ?? • Actuators • ?? • Sensors • ??

  34. Agent Function and Program • Agent specified by agent function • mapping percept sequences to actions • Aim: Concisely implement “rational agent function” • Agent program • input: a single percept-vector • (keeps/updates internal state) • returns action

  35. Skeleton Agent Program
  function SkeletonAgent(percept) returns action
    static: memory [the agent's memory of the world]
    memory ← UpdateMemory(memory, percept)
    action ← ChooseBestAction(memory)
    memory ← UpdateMemory(memory, action)
    return action
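A minimal runnable sketch of the skeleton program above, in Java since the course implementations are in C++/Java. The string-valued percepts and actions, the list-based memory, and the placeholder ChooseBestAction policy are illustrative assumptions, not a required course API.

  import java.util.ArrayList;
  import java.util.List;

  public class SkeletonAgent {
      // The agent's memory of the world: here, just a history of percepts and actions.
      private final List<String> memory = new ArrayList<>();

      // Corresponds to UpdateMemory(memory, item) in the pseudocode.
      private void updateMemory(String item) {
          memory.add(item);
      }

      // Corresponds to ChooseBestAction(memory): a placeholder policy that
      // simply reacts to the most recent percept.
      private String chooseBestAction() {
          return "act-on:" + memory.get(memory.size() - 1);
      }

      // The agent program: one percept in, one action out, internal state updated.
      public String step(String percept) {
          updateMemory(percept);
          String action = chooseBestAction();
          updateMemory(action);
          return action;
      }

      public static void main(String[] args) {
          SkeletonAgent agent = new SkeletonAgent();
          System.out.println(agent.step("dirt-at-A"));
          System.out.println(agent.step("clean-at-A"));
      }
  }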

  36. Types of Agents • Simple reflex agents • Actions are determined by the current sensory input only • Model-based reflex agents • Maintain internal state • Goal-based agents • Actions may be driven by a goal • Utility-based agents • Maximize a utility function

  37. Simple Reflex Agent

  38. Example • A LEGO Mindstorms™ program:
  if (isDark(leftLightSensor)) turnLeft()
  else if (isDark(rightLightSensor)) turnRight()
  else goStraight()
  • What’s the agent function? (See the sketch below.)
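One way to answer the question: because a simple reflex agent ignores everything but the current percept, its agent function is just a table from the two sensor readings to an action. Below is a hedged Java sketch, not the actual LEGO Mindstorms API; the boolean arguments stand in for the isDark() sensor calls.

  public class ReflexLineFollower {
      enum Action { TURN_LEFT, TURN_RIGHT, GO_STRAIGHT }

      // The agent program: current percept (two brightness readings) -> action,
      // with no memory of past percepts.
      static Action choose(boolean leftDark, boolean rightDark) {
          if (leftDark) return Action.TURN_LEFT;
          else if (rightDark) return Action.TURN_RIGHT;
          else return Action.GO_STRAIGHT;
      }

      // Enumerating every possible percept prints the agent function as a table.
      public static void main(String[] args) {
          boolean[] values = { true, false };
          for (boolean left : values) {
              for (boolean right : values) {
                  System.out.printf("leftDark=%b rightDark=%b -> %s%n",
                          left, right, choose(left, right));
              }
          }
      }
  }

Since the output depends only on the current percept, the table has one row per percept rather than one per percept sequence, which is exactly what makes this a simple reflex agent.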

  39. Model-Based Agent

  40. Goal-based Agent

  41. Utility-based Agent

  42. Summary • What is AI? • Rationality • A bit of History • Intelligent Agent • PEAS • Types of Agents
