Artificial Intelligence


Presentation Transcript


  1. Artificial Intelligence

  2. Our Working Definition of AI • Artificial intelligence is the study of how to make computers do things that people are better at, or would be better at if: • they could extend what they do to a World Wide Web-sized amount of data and • not make mistakes.

  3. Why AI? "AI can have two purposes. One is to use the power of computers to augment human thinking, just as we use motors to augment human or horse power. Robotics and expert systems are major branches of that. The other is to use a computer's artificial intelligence to understand how humans think, in a humanoid way. If you test your programs not merely by what they can accomplish, but by how they accomplish it, then you're really doing cognitive science; you're using AI to understand the human mind." - Herb Simon

  4. The Dartmouth Conference and the Name Artificial Intelligence J. McCarthy, M. L. Minsky, N. Rochester, and C.E. Shannon. August 31, 1955. "We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

  5. Time Line – The Big Picture [Timeline graph, decades 1950–2010, phases labeled: academic, $, academic and routine] 1956 Dartmouth conference. 1981 Japanese Fifth Generation project launched as the Expert Systems age blossoms in the US. 1988 AI revenues peak at $1 billion; AI Winter begins.

  6. The Origins of AI Hype 1950 Turing predicted that in about fifty years "an average interrogator will not have more than a 70 percent chance of making the right identification after five minutes of questioning". 1957 Newell and Simon predicted that "Within ten years a computer will be the world's chess champion, unless the rules bar it from competition."

  7. Evolution of the Main Ideas • Wings or not? • Games, mathematics, and other knowledge-poor tasks • The silver bullet? • Knowledge-based systems • Hand-coded knowledge vs. machine learning • Low-level (sensory and motor) processing and the resurgence of subsymbolic systems • Robotics • Natural language processing

  8. Symbolic vs. Subsymbolic AI Subsymbolic AI: Model intelligence at a level similar to the neuron. Let such things as knowledge and planning emerge. Symbolic AI: Model such things as knowledge and planning in data structures that make sense to the programmers who build them. (blueberry (isa fruit) (shape round) (color purple) (size .4 inch))
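To make the symbolic side concrete, here is a minimal sketch in Python (the names and slot encoding are hypothetical, mirroring the blueberry s-expression above) of a frame store with inheritance through isa links:

```python
# A toy frame store mirroring (blueberry (isa fruit) (shape round) ...).
frames = {
    "blueberry": {"isa": "fruit", "shape": "round",
                  "color": "purple", "size": "0.4 inch"},
    "fruit": {"isa": "food", "edible": "yes"},
}

def get_slot(name, slot):
    """Look up a slot value, inheriting through 'isa' links if absent."""
    while name in frames:
        if slot in frames[name]:
            return frames[name][slot]
        name = frames[name].get("isa")  # climb the isa hierarchy
    return None

print(get_slot("blueberry", "color"))   # -> purple
print(get_slot("blueberry", "edible"))  # -> yes (inherited from fruit)
```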

  9. The Origins of Subsymbolic AI 1943 McCulloch and Pitts, A Logical Calculus of the Ideas Immanent in Nervous Activity: "Because of the 'all-or-none' character of nervous activity, neural events and the relations among them can be treated by means of propositional logic."
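To illustrate the idea (a sketch, not McCulloch and Pitts' original formalism): an all-or-none unit fires exactly when its weighted input sum reaches a threshold, which suffices to realize the propositional connectives.

```python
# A McCulloch-Pitts-style threshold unit: output is all-or-none (0 or 1).
def unit(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# Propositional connectives, each as a single unit:
AND = lambda a, b: unit([a, b], [1, 1], 2)  # fires only if both inputs fire
OR  = lambda a, b: unit([a, b], [1, 1], 1)  # fires if either input fires
NOT = lambda a:    unit([a], [-1], 0)       # an inhibitory input

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and NOT(1) == 0 and NOT(0) == 1
```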

  10. Interest in Subsymbolic AI [Timeline graph of interest level, decades 1940–2010]

  11. The Origins of Symbolic AI • Games • Theorem proving

  12. Games • 1950 Claude Shannon published a paper describing how a computer could play chess. • 1952-1962 Art Samuel built the first checkers program. • 1957 Newell and Simon predicted that a computer would beat a human at chess within 10 years. • 1967 MacHack was good enough to achieve a class-C rating in tournament chess. • 1994 Chinook became the world checkers champion. • 1997 Deep Blue beat Kasparov. • 2007 Checkers is solved.

  13. Games • AI in Role Playing Games – now we need knowledge

  14. Logic Theorist • Debuted at the 1956 summer Dartmouth conference, although it was only hand-simulated then. • Probably the first implemented AI program. • LT did what mathematicians do: it proved theorems. It proved, for example, most of the theorems in Chapter 2 of Principia Mathematica [Whitehead and Russell 1910, 1912, 1913]. • LT started from the five axioms given in Principia Mathematica and from there set out to prove Principia's theorems.

  15. Logic Theorist • LT used three rules of inference: • Substitution (which allows any expression to be substituted, consistently, for any variable): • From: A ∧ B ⊃ A, conclude: fuzzy ∧ cute ⊃ fuzzy • Replacement (which allows any logical connective to be replaced by its definition, and vice versa): • From A ⊃ B, conclude ¬A ∨ B • Detachment (which allows one, if A and A ⊃ B are theorems, to assert the new theorem B): • From man and man ⊃ mortal, conclude: mortal
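A minimal sketch of these three rules in Python, under an assumed tuple encoding of formulas (the encoding and names here are hypothetical; LT itself worked over list structures):

```python
# Formulas: variables are strings; compounds are tuples like
# ("implies", A, B), ("or", A, B), ("not", A).

def substitute(formula, bindings):
    """Substitution: consistently replace variables with expressions."""
    if isinstance(formula, str):
        return bindings.get(formula, formula)
    return (formula[0],) + tuple(substitute(f, bindings) for f in formula[1:])

def replace_implies(formula):
    """Replacement: rewrite (A implies B) by its definition (not A) or B."""
    if isinstance(formula, str):
        return formula
    args = tuple(replace_implies(f) for f in formula[1:])
    if formula[0] == "implies":
        return ("or", ("not", args[0]), args[1])
    return (formula[0],) + args

def detach(a, a_implies_b):
    """Detachment: from theorems A and (A implies B), assert B."""
    op, antecedent, consequent = a_implies_b
    assert op == "implies" and antecedent == a
    return consequent

print(detach("man", ("implies", "man", "mortal")))  # -> mortal
print(replace_implies(("implies", "A", "B")))       # -> ('or', ('not', 'A'), 'B')
print(substitute(("implies", "A", ("or", "A", "B")),
                 {"A": "p", "B": "q"}))             # -> p implies (p or q)
```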

  16. Logic Theorist In about 12 minutes LT produced, for theorem 2.45:
¬(p ∨ q) ⊃ ¬p (Theorem 2.45, to be proved.)
1. A ⊃ (A ∨ B) (Theorem 2.2.)
2. p ⊃ (p ∨ q) (Subst. p for A, q for B in 1.)
3. (A ⊃ B) ⊃ (¬B ⊃ ¬A) (Theorem 2.16.)
4. (p ⊃ (p ∨ q)) ⊃ (¬(p ∨ q) ⊃ ¬p) (Subst. p for A, (p ∨ q) for B in 3.)
5. ¬(p ∨ q) ⊃ ¬p (Detach right side of 4, using 2.)
Q. E. D.
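As a quick sanity check (not part of LT), the theorem can also be verified by brute-force truth table:

```python
# Verify theorem 2.45, not(p or q) implies not p, over all truth values.
from itertools import product

def implies(a, b):
    return (not a) or b

assert all(implies(not (p or q), not p)
           for p, q in product([False, True], repeat=2))
print("Theorem 2.45 checks out as a tautology.")
```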

  17. Logic Theorist The inference rules that LT used are not complete. The proofs it produced are trivial by modern standards. For example, given the axioms and the theorems prior to it, LT tried for 23 minutes but failed to prove theorem 2.31: [p ∨ (q ∨ r)] ⊃ [(p ∨ q) ∨ r]. LT's significance lies in the fact that it opened the door to the development of more powerful systems.

  18. Mathematics 1956 Logic Theorist (the first running AI program?). 1961 SAINT solved calculus problems at the college freshman level. 1967 Macsyma. Gradually, theorem proving has become well enough understood that it is usually no longer considered AI.

  19. Discovery • AM “discovered”: • Goldbach’s conjecture • Unique prime factorization theorem

  20. What About Things that People Do Easily? • Common sense reasoning • Vision • Moving around • Language

  21. What About Things People Do Easily? • If you have a problem, think of a past situation where you solved a similar problem. • If you take an action, anticipate what might happen next. • If you fail at something, imagine how you might have done things differently. • If you observe an event, try to infer what prior event might have caused it. • If you see an object, wonder if anyone owns it. • If someone does something, ask yourself what the person's purpose was in doing that.

  22. They Require Knowledge • Why do we need it? Find me stuff about dogs who save people’s lives. • How can we represent it and use it? • How can we acquire it?

  23. Why? • Why do we need it? Find me stuff about dogs who save people’s lives. Two beagles spot a fire. Their barking alerts neighbors, who call 911. • How can we represent it and use it? • How can we acquire it?

  24. Even Children Know a Lot A story described in Charniak (1972): Jane was invited to Jack’s birthday party. She wondered if he would like a kite. She went into her room and shook her piggy bank. It made no sound.

  25. We Divide Things into Concepts • What’s a party? • What’s a kite? • What’s a piggy bank?

  26. What is a Concept? Let’s start with an easy one: chair

  27.–40. Chair? [Fourteen image slides, each showing a different object and asking whether it counts as a chair.]

  41. Chair? The bottom line?

  42. How Can We Teach Things to Computers? A quote from John McCarthy: In order for a program to be capable of learning something, it must first be capable of being told it. Do we believe this?

  43. Some Things are Easy If dogs are mammals and mammals are animals, are dogs animals?
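A minimal sketch of this sort of easy, transitive inference over an isa chain (hypothetical names):

```python
# Transitive "isa" reasoning: dog -> mammal -> animal.
isa = {"dog": "mammal", "mammal": "animal"}

def is_a(kind, target):
    """Follow isa links until we hit the target or run out of links."""
    while kind is not None:
        if kind == target:
            return True
        kind = isa.get(kind)
    return False

print(is_a("dog", "animal"))  # -> True
```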

  44. Some Things Are Harder If most Canadians have brown eyes, and most brown-eyed people have good eyesight, then do most Canadians have good eyesight?

  45. Some Things Are Harder If most Canadians have brown eyes, and most brown-eyed people have good eyesight, then do most Canadians have good eyesight? Maybe not, for at least two reasons. First, it might be true that, while most brown-eyed people have good eyesight, that is not true of brown-eyed Canadians in particular. Second, suppose that 70% of Canadians have brown eyes and 70% of brown-eyed people have good eyesight. Then, assuming that brown-eyed Canadians have the same probability as other brown-eyed people of having good eyesight, only 49% of Canadians are brown-eyed people with good eyesight.
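The arithmetic behind the second point, as a one-line sketch:

```python
# 70% of Canadians have brown eyes; 70% of brown-eyed people see well.
p_brown, p_good_given_brown = 0.70, 0.70

# Assuming brown-eyed Canadians match brown-eyed people in general,
# the fraction of Canadians who are brown-eyed and see well:
print(round(p_brown * p_good_given_brown, 2))  # -> 0.49 -- less than "most"
```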

  46. Concept Acquisition Pat Winston’s program (1970) learned concepts in the blocks micro-world.

  47. Concept Acquisition The arch concept: [diagram of arch examples in the blocks world]
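A heavily simplified sketch of Winston-style learning from examples and near misses (the relation encoding and names here are hypothetical; the actual program used much richer structural descriptions):

```python
# Scenes as sets of relation strings describing blocks-world structures.
arch1 = {"left-of(a,b)", "supports(a,c)", "supports(b,c)"}
arch2 = {"left-of(a,b)", "supports(a,c)", "supports(b,c)", "wedge-top(c)"}
near_miss = {"left-of(a,b)", "touches(a,b)"}  # the posts touch: not an arch

# Generalize across positive examples: keep only shared relations.
concept = arch1 & arch2

# Use the near miss to sharpen the description: relations it lacks
# become required; relations it adds become forbidden.
required = concept - near_miss   # the supports links
forbidden = near_miss - concept  # touches(a,b)
print("required:", required)
print("forbidden:", forbidden)
```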

  48. Further Complications from How Language is Used • After the strike, the president sent them away. • After the strike, the umpire sent them away. The word “strike” refers to two different concepts.

  49. When Other Words in Context Aren’t Enough • I need a new bonnet. • The senator moved to table the bill.

  50. Compiling Common Sense Knowledge • CYC (http://www.cyc.com) • UT (http://www.cs.utexas.edu/users/mfkb/RKF/tree/ ) • WordNet (http://www.cogsci.princeton.edu/~wn/)
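For a taste of what one of these resources contains, here is a minimal sketch querying WordNet through NLTK (this assumes NLTK is installed with its WordNet corpus downloaded; the interface shown is NLTK's, and exact sense names may vary by WordNet version):

```python
# Query WordNet via NLTK (assumes: pip install nltk, then
# nltk.download("wordnet") once).
from nltk.corpus import wordnet as wn

# List the senses of "chair" with their glosses.
for synset in wn.synsets("chair"):
    print(synset.name(), "-", synset.definition())

# Climb one step up the hypernym (isa) hierarchy from the first sense.
chair = wn.synsets("chair")[0]
print([h.name() for h in chair.hypernyms()])
```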
