We want to design some truly intelligent robot

Presentation Transcript


  1. We want to design some truly intelligent robot

  2. Or like this? • Musée Mécanique • http://museemecanique.citysearch.com/

  3. Application: Knowledge Discovery in Databases • We need AI and ML for all these applications

  4. What is AI? A discipline that systematizes and automates intellectual tasks in order to create intelligent machines; it makes the study of these tasks more formal and mathematical.

  5. Some Important Achievements of AI • Logical reasoning (databases), applied in reasoning for interactive robot helpers • Search and game playing, applied in robot motion planning • Knowledge-based systems, applied in robots that use knowledge, such as internet robots • Bayesian networks (diagnosis) • Machine learning and data mining • Planning and military logistics • Autonomous robots

  6. MAIN BRANCHES OF AI APPLICABLE TO ROBOTICS: Artificial Intelligence branches into • Fuzzy Logic • Genetic Algorithms • Neural Nets

  7. ARTIFICIAL INTELLIGENCE includes Machine Learning, which in turn includes • Decision-Theoretic Techniques • Symbolic Concept Acquisition • Constructive Induction • Genetic Learning

  8. Examples of Applications

  9. Unsupervised learning • Treatment of uncertainty • Efficient constraint satisfaction

  10. Inductive Learning by Nearest-Neighbor Classification • One simple approach to inductive learning is to save each training example as a point in feature space • Classify a new example by giving it the same classification (+ or -) as its nearest neighbor in feature space • A variation computes a distance-weighted vote over a set of neighbors rather than using the single nearest one • Another variation uses the centroid of each class • The problem with this approach is that it doesn't necessarily generalize well if the examples are not well "clustered" (see the sketch below)
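A minimal sketch of plain nearest-neighbor classification in Python (the toy points and the two-class labels are illustrative, not from the slides):

    # Nearest-neighbor classification: store every training example and
    # label a query with the class of its closest stored point.
    import math

    def nearest_neighbor(train, query):
        """train: list of (feature_vector, label) pairs; query: feature vector."""
        def dist(a, b):
            return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
        _, label = min(train, key=lambda ex: dist(ex[0], query))
        return label

    examples = [((1.0, 1.0), '+'), ((1.2, 0.8), '+'),
                ((4.0, 4.2), '-'), ((4.5, 3.9), '-')]
    print(nearest_neighbor(examples, (1.1, 1.0)))  # -> '+'
    print(nearest_neighbor(examples, (4.2, 4.0)))  # -> '-'

The distance-weighted and centroid variants from the slide differ only in how the stored neighbors are aggregated before assigning a class.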

  11. Text Mining: Information Retrieval and Filtering • 20 USENET Newsgroups • comp.graphics, misc.forsale, soc.religion.christian, sci.space • comp.os.ms-windows.misc, rec.autos, talk.politics.guns, sci.crypt • comp.sys.ibm.pc.hardware, rec.motorcycles, talk.politics.mideast, sci.electronics • comp.sys.mac.hardware, rec.sport.baseball, talk.politics.misc, sci.med • comp.windows.x, rec.sport.hockey, talk.religion.misc • alt.atheism • Problem Definition [Joachims, 1996] • Given: 1000 training documents (posts) from each group • Return: a classifier for new documents that identifies the group each belongs to • Example: recent article from comp.graphics.algorithms: "Hi all, I'm writing an adaptive marching cube algorithm, which must deal with cracks. I got the vertices of the cracks in a list (one list per crack). Does there exist an algorithm to triangulate a concave polygon? Or how can I bisect the polygon so that I get a set of connected convex polygons? The cases of occurring polygons are these: ..." • Performance of Newsweeder (Naïve Bayes): 89% accuracy (a sketch of this kind of classifier follows)
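A hedged sketch of this kind of Naive Bayes text classifier, using scikit-learn's built-in 20-newsgroups loader (the four-category subset and the bag-of-words pipeline are illustrative assumptions, not Joachims' exact setup):

    # Train a multinomial Naive Bayes classifier on newsgroup posts
    # and report its accuracy on held-out posts.
    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    cats = ['comp.graphics', 'sci.space', 'rec.autos', 'talk.politics.guns']
    train = fetch_20newsgroups(subset='train', categories=cats)
    test = fetch_20newsgroups(subset='test', categories=cats)

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(train.data, train.target)
    print('accuracy:', model.score(test.data, test.target))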

  12. Rule and Decision Tree Learning • Example: rule acquisition from historical data • Data: • Customer 103 (visit = 1): Age 23, Previous-Purchase: no, Marital-Status: single, Children: none, Annual-Income: 20000, Purchase-Interests: unknown, Store-Credit-Card: no, Homeowner: unknown • Customer 103 (visit = 2): Age 23, Previous-Purchase: no, Marital-Status: married, Children: none, Annual-Income: 20000, Purchase-Interests: car, Store-Credit-Card: yes, Homeowner: no • Customer 103 (visit = n): Age 24, Previous-Purchase: yes, Marital-Status: married, Children: yes, Annual-Income: 75000, Purchase-Interests: television, Store-Credit-Card: yes, Homeowner: no, Computer-Sales-Target: YES • Learned rule: IF customer has made a previous purchase, AND customer has an annual income over $25000, AND customer is interested in buying home electronics, THEN probability of a computer sale is 0.5 • Accuracy on training set: 26/41 = 0.634; on test set: 12/20 = 0.600 • Typical application: target marketing (the rule is transcribed as code below)
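The learned rule, transcribed as a small Python sketch (the field names and the sample customer record are illustrative, not a real schema):

    # The IF-THEN rule from the slide as a predicate over a customer record.
    def computer_sale_probability(c):
        if (c['previous_purchase'] == 'yes'
                and c['annual_income'] > 25000
                and c['purchase_interests'] == 'home electronics'):
            return 0.5  # probability of a computer sale when the rule fires
        return 0.0      # the rule says nothing about other customers

    customer = {'previous_purchase': 'yes', 'annual_income': 75000,
                'purchase_interests': 'home electronics'}
    print(computer_sale_probability(customer))  # -> 0.5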

  13. INDUCTIVE LEARNING: Example of Risk Classification

      NO.  RISK      CREDIT   DEBT  COLLATERAL  INCOME
      1.   High      Bad      High  None        $0-$15K
      2.   High      Unknown  High  None        $15-$35K
      3.   Moderate  Unknown  Low   None        $15-$35K
      4.   High      Unknown  Low   None        $0-$15K
      5.   Low       Unknown  Low   None        >$35K
      6.   Low       Unknown  Low   None        >$35K

  The decision variable (output) is RISK.
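A hedged sketch of inducing a decision tree from the six rows above (assumes scikit-learn is installed; the ordinal encoding of the categorical attributes is an illustrative choice):

    # Fit a decision tree to the risk-classification table and print it.
    from sklearn.preprocessing import OrdinalEncoder
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Columns: credit history, debt, collateral, income bracket
    X = [['bad',     'high', 'none', '0-15K'],
         ['unknown', 'high', 'none', '15-35K'],
         ['unknown', 'low',  'none', '15-35K'],
         ['unknown', 'low',  'none', '0-15K'],
         ['unknown', 'low',  'none', '>35K'],
         ['unknown', 'low',  'none', '>35K']]
    y = ['high', 'high', 'moderate', 'high', 'low', 'low']

    enc = OrdinalEncoder()
    tree = DecisionTreeClassifier().fit(enc.fit_transform(X), y)
    print(export_text(tree, feature_names=['credit', 'debt', 'collateral', 'income']))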

  14. INDUCTIVE LEARNING: Example of Risk Classification (cont.) [Figure: decision tree over the attributes Income, Credit History, Collateral, and Debt]

  15. The Blocks World

  16. Hand-Coded Knowledge vs. Machine Learning • How much work would it be to enter knowledge by hand? • Do we even know what to enter? • 1952-62: Samuel's checkers player learned its own evaluation function • 1970: Winston's system learned structural descriptions from examples and near misses • 1984: Probably Approximately Correct (PAC) learning offered a theoretical foundation • mid-80's: the rise of neural networks

  17. Concept Acquisition: Example of the "Arch" concept • two bricks support a brick • two bricks support a pyramid

  18. Concept Acquisition (cont.) Bricks and pyramids are instances of Polygon. [Figure: hierarchy with Polygon above Brick and Pyramid] Generalization: ARCH => two bricks support a polygon

  19. Some Fundamental Issues for Most AI Problems • Learning: new knowledge is acquired via • inductive inference • neural networks • artificial life • evolutionary approaches

  20. What we'll be doing • Uncertain knowledge and reasoning • Probability, Bayes rule • Machine learning • Decision trees, computational learning theory, reinforcement learning

  21. A Generalized Model of Learning [Figure: block diagram in which a Training System supplies inputs and correct outputs; a Performance Element produces the actual output, a Feedback Element compares the two, and a Learning Element updates the Knowledge Base] The training system is used to create learning pairs (input, output) to train our robot.

  22. A Generalized Model of Learning (cont.) The system starts with some knowledge. The performance element carries out the actual task. The feedback element compares the actual output with the correct output, and the resulting difference becomes input to the learning element, which analyzes the differences and updates the knowledge base.

  23. Neural Networks

  24. Standard computers • Referred to as von Neumann machines • Follow explicit instructions • Sample program:

      if (time < noon)
          print "Good morning"
      else
          print "Good afternoon"

  25. Neural networks • Modeled on the human brain • Do not follow explicit instructions • Are trained instead of programmed • Key papers: • McCulloch, W. and Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 7:115-133. • Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386-408.

  26. Sources for lecture • comp.ai.neural-networks FAQ • Gurney, Kevin. An Introduction to Neural Networks, 1996.

  27. Neuron drawing

  28. Neuron behavior • Signals travel along a neuron as electrical pulses • Between neurons, communication is through chemical neurotransmitters at synapses • If the inputs to a neuron exceed its threshold, the neuron fires, sending an electrical pulse on to other neurons. This is a simplification.

  29. Perceptron (artificial neuron)

  30. Training • Inputs and outputs are 0 (no) or 1 (yes) • Initially, weights are random • Provide training input • Compare output of neural network to desired output • If same, reinforce patterns • If different, adjust weights

  31.-38. Example: present the input (1, 1); if both inputs are 1, the output should be 1. With weights 2 and 3, the weighted sum is 2·1 + 3·1 = 5, yet the perceptron outputs 0 rather than the desired 1, so the weights must be increased. Repeat for all training inputs until the weights stop changing.
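A minimal Python sketch of this training loop, assuming a fixed firing threshold of 0.5 (the slides do not state the threshold) and an AND target in the spirit of the example:

    # Perceptron training: adjust the weights whenever the output
    # disagrees with the desired output, until nothing changes.
    import random

    def step(u, threshold=0.5):
        return 1 if u > threshold else 0

    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # random initial weights
    lr = 0.1  # size of each weight adjustment

    changed = True
    while changed:
        changed = False
        for x, target in data:
            out = step(sum(wi * xi for wi, xi in zip(w, x)))
            if out != target:  # wrong: move the weights toward the target
                for i in range(len(w)):
                    w[i] += lr * (target - out) * x[i]
                changed = True

    print('learned weights:', w)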

  39. Function-Learning Formulation [Figure: sample points in the (x, f(x)) plane with a fitted curve] • Goal function f • Training set: (x(i), f(x(i))), i = 1, …, n • Inductive inference: find a function h that fits the points well • Same Keep-It-Simple bias (a small curve-fitting sketch follows)
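A small illustration of the Keep-It-Simple bias (numpy; the noisy line and the polynomial degrees are illustrative assumptions):

    # Fit a simple and a needlessly complex hypothesis to noisy samples of a line.
    import numpy as np

    x = np.linspace(0, 1, 8)
    y = 2.0 * x + 0.5 + np.random.normal(0, 0.05, size=x.shape)  # noisy f(x)

    h_simple = np.polyfit(x, y, deg=1)   # Keep-It-Simple: a line
    h_complex = np.polyfit(x, y, deg=6)  # fits the noise as well as the trend

    print('line coefficients:', h_simple)

The degree-1 hypothesis recovers something close to the true slope and intercept; the degree-6 one fits the training points more closely but generalizes worse.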

  40. Perceptron (the goal function f is a boolean one) [Figure: inputs x1, …, xn, weighted by wi, feed a summation unit Σ followed by a threshold function g; in the (x1, x2) plane, the line w1 x1 + w2 x2 = 0 separates the + examples from the - examples] • y = g(Σi=1,…,n wi xi)

  41. Perceptron (the goal function f is a boolean one) • y = g(Σi=1,…,n wi xi) [Figure: a configuration of + and - examples that no single line can separate, so no choice of weights fits the data]

  42. Unit (Neuron) [Figure: inputs x1, …, xn, weighted by wi, feed a summation unit Σ followed by an activation function g] • y = g(Σi=1,…,n wi xi) • g(u) = 1/[1 + exp(-a u)] (a sigmoid)
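A direct transcription of this unit in Python (the sample inputs, weights, and gain a are illustrative):

    # One sigmoid unit: the weighted sum of the inputs passed through g.
    import math

    def unit(x, w, a=1.0):
        u = sum(wi * xi for wi, xi in zip(w, x))
        return 1.0 / (1.0 + math.exp(-a * u))  # g(u) = 1/[1 + exp(-a u)]

    print(unit([1.0, 0.5], [0.8, -0.3]))  # a value strictly between 0 and 1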

  43. Neural Network • A network of interconnected neurons [Figure: two units whose outputs feed other units] • Acyclic (feed-forward) vs. recurrent networks

  44. Two-Layer Feed-Forward Neural Network [Figure: inputs connected to a hidden layer by weights w1j, and the hidden layer connected to the output layer by weights w2k]

  45. Backpropagation (Principle) • New example: y(k) = f(x(k)) • φ(k) = output of the NN with weights w(k-1) on inputs x(k) • Error function: E(k)(w(k-1)) = ||φ(k) - y(k)||² • Gradient-descent update: wij(k) = wij(k-1) - ε ∂E/∂wij, i.e., w(k) = w(k-1) - ε∇E • Backpropagation algorithm: update the weights of the inputs to the last layer, then the weights of the inputs to the previous layer, etc. (a numpy sketch follows)
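A minimal numpy sketch of these updates for a two-layer network; the XOR target (which a single perceptron cannot represent, cf. slide 41), the layer sizes, and the learning rate are illustrative assumptions:

    # Two-layer feed-forward net trained by backpropagation on squared error,
    # implementing w <- w - eps * dE/dw layer by layer, last layer first.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer
    g = lambda u: 1.0 / (1.0 + np.exp(-u))           # sigmoid
    eps = 0.5                                        # step size

    for _ in range(5000):
        H = g(X @ W1 + b1)                           # forward pass
        out = g(H @ W2 + b2)
        d_out = (out - Y) * out * (1 - out)          # error signal at the output
        d_hid = (d_out @ W2.T) * H * (1 - H)         # propagated back one layer
        W2 -= eps * H.T @ d_out; b2 -= eps * d_out.sum(axis=0)
        W1 -= eps * X.T @ d_hid; b1 -= eps * d_hid.sum(axis=0)

    print(np.round(out, 2))  # typically close to [[0], [1], [1], [0]]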

  46. Comments and Issues • How to choose the size and structure of networks? • If network is too large, risk of over-fitting (data caching) • If network is too small, representation may not be rich enough • Role of representation: e.g., learn the concept of an odd number • Incremental learning

  47. RECOGNIZING A PERSON [Figure: cues used for recognition: hairstyle, glasses, facial features, height]

  48. Perceptrons: early neural nets

  49. Symbolic vs. Subsymbolic AI • Subsymbolic AI: model intelligence at a level similar to the neuron; let such things as knowledge and planning emerge. • Symbolic AI: model such things as knowledge and planning in data structures that make sense to the programmers who build them, for example: (blueberry (isa fruit) (shape round) (color purple) (size .4 inch))

  50. The Origins of Subsymbolic AI • 1943: McCulloch and Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity": "Because of the 'all-or-none' character of nervous activity, neural events and the relations among them can be treated by means of propositional logic."
