Presentation Transcript


  1. Chapter 13 Artificial Intelligence

  2. Artificial Intelligence Artificial: humanly contrived, often on a natural model. Intelligence: the ability to apply knowledge to manipulate one's environment or to think abstractly, as measured by objective criteria. Clearly, intelligence is an internal characteristic. How can it be identified?

  3. Thinking Machines • A computer can do some things better -- and certainly faster -- than a human can: • Adding a thousand four-digit numbers • Counting the distribution of letters in a book • Searching a list of 1,000,000 numbers for duplicates • Matching fingerprints

  4. Thinking Machines • BUT a computer would have difficulty pointing out the cat in this picture, which is easy for a human. • Artificial intelligence (AI): The study of computer systems that attempt to model and apply the intelligence of the human mind. Figure 13.1 A computer might have trouble identifying the cat in this picture.

  5. In the beginning… • In 1950 Alan Turing wrote a paper titled Computing Machinery and Intelligence, in which he proposed to consider the question “Can machines think?” • But the question is “loaded,” so he proposed to replace it with what has since become known as the Turing Test. “Can a machine play the Imitation Game?”

  6. The Imitation Game Skip detailed description

  7. The Imitation Game The 'imitation game' is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman.

  8. The Imitation Game The interrogator is allowed to put questions to A and B. It is A's object in the game to try and cause C to make the wrong identification. The object of the game for the third player (B) is to help the interrogator.

  9. The Imitation Game We now ask the question, 'What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?

  10. The Imitation Game

  11. The Turing Test (objections) • There are authors who question the validity of the Turing test. • The objections tend to be of 2 types. • The first is an attempt to distinguish degrees, or types of equivalence…

  12. The Turing Test (objections) • Weak equivalence: Two systems (human and computer) are equivalent in results (output), but they do not arrive at those results in the same way. • Strong equivalence: Two systems (human and computer) use the same internal processes to produce results.

  13. The Turing Test (objections) • The Turing Test, they argue, can demonstrate weak equivalence, but not strong. So even if a computer passes the test we won’t be able to say that it thinks like a human. • Of course, neither they, nor anyone else, can explain how humans think! • So strong equivalence is a nice theoretical construction, but since it’s impossible to demonstrate it between humans, it would be an unfair requirement of the Turing Test.

  14. The Turing Test (objections) • The other objection is that a computer might seem to be behaving in an intelligent manner, while it’s really just imitating behaviour. • This might be true, but notice that when a parrot talks, or a horse counts, or a pet obeys our instructions, or a child imitates its parents we take all of these things to be signs of intelligence. • If a parrot mimicking human sounds can be considered intelligent (at least to some small degree) then why wouldn’t a computer be considered intelligent (at least to some small degree) for imitating other human behaviour?

  15. Turing’s View • “I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.”

  16. Context • In 1950, computers were very primitive. • UNIVAC I was the first commercial computer made in the United States. It was delivered to the United States Census Bureau on March 31, 1951! • At a time when the first computers were just being built, to suggest that they might soon be able to think was quite radical.

  17. Objections Turing Foresaw • The Theological Objection • The 'Heads in the Sand' Objection • The Mathematical Objection • The Argument from Consciousness • Arguments from Various Disabilities • Lady Lovelace's Objection • Argument from Continuity in the Nervous System • The Argument from Informality of Behaviour • The Argument from Extra-Sensory Perception

  18. Can Machines Think? • No machine has yet passed the Turing Test. • Loebner Prize established in 1990 • $100,000 and a gold medal will be awarded to the first computer whose responses are indistinguishable from a human's

  19. Aspects of AI • Knowledge Representation • Semantic Networks • Search Trees • Expert Systems • Neural Networks • Natural Language Processing • Robotics

  20. Knowledge Representation • The knowledge needed to represent an object or event depends on the situation. • There are many ways to represent knowledge. One is natural language. • Even though natural language is very descriptive, it doesn’t lend itself to efficient processing.

  21. Semantic Networks • Semantic network: A knowledge representation technique that focuses on the relationships between objects. • A directed graph is used to represent a semantic network (net).

  22. Semantic Networks Figure 13.3 A semantic network

  23. Semantic Networks • The relationships that we represent are completely our choice, based on the information we need to answer the kinds of questions that we will face. • The types of relationships represented determine which questions are easily answered, which are more difficult to answer, and which cannot be answered.
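
A minimal sketch of how a semantic network might be held in a program, as a directed graph of (subject, relationship, object) links. The entities, relationship names, and the ask() helper below are illustrative assumptions, not taken from the chapter.

from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        # edges[subject] is a list of (relationship, object) pairs
        self.edges = defaultdict(list)

    def add(self, subject, relationship, obj):
        self.edges[subject].append((relationship, obj))

    def ask(self, subject, relationship):
        # Answer "which objects are linked to subject by this relationship?"
        return [obj for rel, obj in self.edges[subject] if rel == relationship]

net = SemanticNetwork()
net.add("Sparky", "is-a", "dog")
net.add("dog", "is-a", "mammal")
net.add("Sparky", "lives-in", "Texas")
print(net.ask("Sparky", "is-a"))      # ['dog']
print(net.ask("Sparky", "lives-in"))  # ['Texas']

Only questions phrased in terms of the stored relationships can be answered directly; asking, say, which animals live in Texas would require either a different set of links or a search over the whole graph.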

  24. Search Trees • Search tree: a structure that represents all possible moves in a game, for both you and your opponent. • The paths down a search tree represent a series of decisions made by the players.

  25. Search Trees Figure 13.4 A search tree for a simplified version of Nim

  26. Search Trees • Search tree analysis can be applied nicely to other, more complicated games such as chess. • Because chess trees are so large, only a fraction of the tree can be analyzed within a reasonable time limit, even with modern computing power.

  27. Search Trees Techniques for searching trees • Depth-first: a technique that involves the analysis of selected paths all the way down the tree. • Breadth-first: a technique that involves the analysis of all possible paths but only for a short distance down the tree. Breadth-first tends to yield the best results.

  28. Search Trees Figure 13.5 Depth-first and breadth-first searches

  29. Search Trees Even though the breadth-first approach tends to yield the best results, we can see that a depth-first search will get to the goal sooner – IF we choose the right branch. Heuristics are guidelines that suggest taking one path rather than another one.
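
The difference between the two techniques can be seen in a small sketch. The tree below and its goal label are made-up examples (not the Nim tree from the figure); depth-first recursion dives down one branch at a time, while breadth-first uses a queue to look at every node on a level before going deeper.

from collections import deque

tree = {
    "start": ["A", "B"],
    "A": ["A1", "A2"],
    "B": ["B1", "goal"],
    "A1": [], "A2": [], "B1": [], "goal": [],
}

def depth_first(node, goal):
    # Follow one path all the way down before backing up.
    if node == goal:
        return [node]
    for child in tree[node]:
        path = depth_first(child, goal)
        if path is not None:
            return [node] + path
    return None

def breadth_first(start, goal):
    # Examine every node on a level before moving one level deeper.
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for child in tree[path[-1]]:
            queue.append(path + [child])
    return None

print(depth_first("start", "goal"))    # ['start', 'B', 'goal'], found after exploring A fully
print(breadth_first("start", "goal"))  # ['start', 'B', 'goal']

Here the depth-first search wastes time exploring the A branch first; a heuristic that suggested trying B before A would let it reach the goal immediately.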

  30. Expert Systems • Knowledge-based system: a software system that embodies and uses a specific set of information from which it extracts and processes particular pieces. • Expert system: a software system based on the knowledge of experts in a specialized field. • An expert system uses a set of rules to guide its processing. • The inference engine is the part of the software that determines how the rules are followed.

  31. Expert Systems • Example: What type of treatment should I put on my lawn? • NONE—apply no treatment at this time • TURF—apply a turf-building treatment • WEED—apply a weed-killing treatment • BUG—apply a bug-killing treatment • FEED—apply a basic fertilizer treatment • WEEDFEED—apply a weed-killing and fertilizer combination treatment

  32. Expert Systems • Boolean variables • BARE—the lawn has large, bare areas • SPARSE—the lawn is generally thin • WEEDS—the lawn contains many weeds • BUGS—the lawn shows evidence of bugs

  33. Expert Systems • Some rules • if (CURRENT – LAST < 30) then NONE • if (SEASON = winter) then not BUGS • if (BARE) then TURF • if (SPARSE and not WEEDS) then FEED • if (BUGS and not SPARSE) then BUG • if (WEEDS and not SPARSE) then WEED • if (WEEDS and SPARSE) then WEEDFEED

  34. Expert Systems • An execution of our inference engine • System: Does the lawn have large, bare areas? • User: No • System: Does the lawn show evidence of bugs? • User: No • System: Is the lawn generally thin? • User: Yes • System: Does the lawn contain significant weeds? • User: Yes • System: You should apply a weed-killing and fertilizer combination treatment.
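
A minimal sketch of such an inference engine, encoding the rules above as ordinary conditionals driven by yes/no questions. The wording of the prompts and the order in which the rules are tried are assumptions, and the date and season rules are left out to keep the example short.

def ask(prompt):
    # Treat any answer beginning with "y" as yes.
    return input(prompt + " (yes/no) ").strip().lower().startswith("y")

def recommend():
    if ask("Does the lawn have large, bare areas?"):
        return "TURF"        # if (BARE) then TURF
    bugs = ask("Does the lawn show evidence of bugs?")
    sparse = ask("Is the lawn generally thin?")
    weeds = ask("Does the lawn contain significant weeds?")
    if bugs and not sparse:
        return "BUG"         # if (BUGS and not SPARSE) then BUG
    if weeds and sparse:
        return "WEEDFEED"    # if (WEEDS and SPARSE) then WEEDFEED
    if weeds:
        return "WEED"        # if (WEEDS and not SPARSE) then WEED
    if sparse:
        return "FEED"        # if (SPARSE and not WEEDS) then FEED
    return "NONE"

print("You should apply:", recommend())

Answering no, no, yes, yes, as in the dialogue above, produces WEEDFEED, the weed-killing and fertilizer combination treatment.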

  35. Artificial Neural Networks • Attempt to mimic the actions of the neural networks of the human body. • Let’s first look at how a biological neural network works: • A neuron is a single cell that conducts a chemically-based electronic signal. • At any point in time a neuron is in either an excited or inhibited state.

  36. Artificial Neural Networks • A series of connected neurons forms a pathway. • A series of excited neurons creates a strong pathway. • A biological neuron has multiple input tentacles called dendrites and one primary output tentacle called an axon. • The gap between an axon and a dendrite is called a synapse.

  37. Artificial Neural Networks Figure 13.6 A biological neuron

  38. Artificial Neural Networks • A neuron accepts multiple input signals and then controls the contribution of each signal based on the “importance” the corresponding synapse gives to it. • The pathways along the neural nets are in a constant state of flux. • As we learn new things, new strong neural pathways are formed.

  39. Artificial Neural Networks • Each processing element in an artificial neural net is analogous to a biological neuron. • An element accepts a certain number of input values and produces a single output value of either 0 or 1. • Associated with each input value is a numeric weight.

  40. Sample “Neuron” • Artificial “neurons” can be represented as elements. • Inputs are labelled v1, v2 • Weights are labelled w1, w2 • The threshold value is represented by T • O is the output

  41. Artificial Neural Networks • The effective weight of the element is defined to be the sum of the weights multiplied by their respective input values: v1*w1 + v2*w2 • If the effective weight meets the threshold, the unit produces an output value of 1. • If it does not meet the threshold, it produces an output value of 0.

  42. Artificial Neural Networks • The process of adjusting the weights and threshold values in a neural net is called training. • A neural net can be trained to produce whatever results are required.

  43. Sample “Neuron” • If the input Weights and the Threshold are set to the above values, how does the neuron act? • Try a Truth Table…

  44. Sample “Neuron” w1=.5, w2=.5, T=1 With the weights set to .5 this neuron behaves like an AND gate.

  45. Sample “Neuron” • How about now?

  46. Sample “Neuron” w1=1, w2=1, T=1 With the weights set to 1 this neuron behaves like an OR gate.
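
The element described above fits in a few lines of Python. This is just the threshold rule from the earlier slide (output 1 when v1*w1 + v2*w2 meets the threshold T, 0 otherwise), applied with the two weight settings shown to reproduce their truth tables.

def neuron(v1, v2, w1, w2, T):
    # Effective weight is the weighted sum of the inputs.
    effective_weight = v1 * w1 + v2 * w2
    return 1 if effective_weight >= T else 0

for label, w1, w2, T in [("AND", 0.5, 0.5, 1), ("OR", 1, 1, 1)]:
    print(f"{label} behaviour (w1={w1}, w2={w2}, T={T}):")
    for v1 in (0, 1):
        for v2 in (0, 1):
            print(f"  {v1} {v2} -> {neuron(v1, v2, w1, w2, T)}")

Training, in this tiny setting, would mean nothing more than nudging w1, w2, and T until the printed table matches the behaviour we want.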

  47. Natural Language Processing • There are three basic types of processing going on during human/computer voice interaction: • Voice recognition — recognizing human words • Natural language comprehension — interpreting human communication • Voice synthesis — recreating human speech • Common to all of these problems is the fact that we are using a natural language, which can be any language that humans use to communicate.

  48. Voice Synthesis • There are two basic approaches to the solution: • Dynamic voice generation • Recorded speech • Dynamic voice generation: A computer examines the letters that make up a word and produces the sequence of sounds that correspond to those letters in an attempt to vocalize the word. • Phonemes: The sound units into which human speech has been categorized.

  49. Voice Synthesis Figure 13.7 Phonemes for American English

  50. Voice Synthesis • Recorded speech: A large collection of words is recorded digitally and individual words are selected to make up a message. Telephone voice mail systems often use this approach: “Press 1 to leave a message for Nell Dale; press 2 to leave a message for John Lewis.”
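
A toy sketch of the recorded-speech approach: each word the system can say maps to a pre-recorded clip, and a message is just the list of clips played back in order. The file names and the vocabulary below are illustrative assumptions.

recorded_words = {
    "press": "clips/press.wav",
    "one": "clips/one.wav",
    "to": "clips/to.wav",
    "leave": "clips/leave.wav",
    "a": "clips/a.wav",
    "message": "clips/message.wav",
}

def build_message(text):
    # Look up the clip for each word; a KeyError means the word was never recorded.
    return [recorded_words[word] for word in text.lower().split()]

print(build_message("Press one to leave a message"))

The obvious limitation follows directly: the system can only say words someone has already recorded, which is why this approach suits fixed menus such as voice mail better than open-ended speech.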
