
Connectionism

Presentation Transcript


  1. Connectionism “Frank Rosenblatt, Alan M. Turing, Connectionism, and AI” May 6, 2011 Version 4.0; 05/06/2011 John M. Casarella Proceedings of Student/Faculty Research Day Ivan G. Seidenberg School of CSIS, Pace University

  2. Abstract Dr. Frank Rosenblatt is commonly associated with Connectionism, an area of cognitive science which applies Artificial Neural Networks in an effort to explain aspects of human intelligence. Other notable connectionists include Warren McCulloch, Walter Pitts, and Donald Hebb, but it is Alan Mathison Turing, a man of unique insight who has been widely misunderstood, who is noticeably absent from this list. He is commonly associated with the development of the digital computer, employing his paper-tape Universal Turing Machine. Many associate him with providing the foundation for defining Artificial Intelligence, specifically the development of the Turing Test as the standard to be met in determining whether a machine exhibits intelligence. His contribution to AI goes beyond his test: he laid down the foundation of Connectionism, providing insight into and supporting later contributions to the key models of Perceptrons, Artificial Neural Networks, and the Hierarchical Temporal Memory model.

  3. Introduction "I was proceeding down the road. The trees on the right were passing me in orderly fashion at 60 miles per hour. Suddenly one of them stepped in my path." John von Neumann providing an explanation for his automobile accident.

  4. Introduction • The dawn of Connectionist Theory is commonly traced back to McCulloch and Pitts and their model of the neuron, advanced by Dr. Rosenblatt through his perceptron theories • Connectionist theory was strengthened by Donald O. Hebb and the Hebbian approach to neural learning • The contribution to Connectionism by Rumelhart, McClelland and the Parallel Distributed Processing Group also cannot be minimized, with direct roots in Dr. Rosenblatt’s research • Dr. Rosenblatt was influenced by Hebb’s concepts and was the first to associate the term “connectionist” with artificial neural networks • BUT: all were preceded by Turing, who anticipated much of modern connectionism in his 1948 paper “Intelligent Machinery”

  5. Dr. Frank Rosenblatt • Perceptron model evolved from neural nets based on McCulloch and Pitts • Major contribution derives from his investigations into the properties of perceptrons and his detailed mathematical analysis • Perceptron model based on probability theory as opposed to symbolic logic • In 1958 he defined the theoretical basis of connectionism as: “stored information takes the form of new connections, or transmission channels in the nervous system (or the creation of conditions which are functionally equivalent to new connections)” • Activation and weight training • Linear separation only, so no XOR • Minsky and Papert, what were they thinking!
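
To make the linear-separation point concrete, here is a minimal sketch of a single threshold unit trained with the perceptron weight-update rule; the function names and data layout are illustrative choices, not taken from Rosenblatt's papers. Trained on AND it converges, while on XOR no pair of weights and a bias can separate the classes, so it never reaches full accuracy.

```python
# A minimal perceptron sketch (illustrative, not Rosenblatt's original code).
# AND is linearly separable and is learned; XOR is not, so training stalls.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron rule on 2-input binary samples of the form ((x1, x2), target)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            w[0] += lr * err * x1   # adjust weights only when the unit errs
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    hits = sum((1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0) == t
               for (x1, x2), t in samples)
    return hits / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in [("AND", AND), ("XOR", XOR)]:
    w, b = train_perceptron(data)
    print(name, accuracy(data, w, b))   # AND reaches 1.0; XOR never does
```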

  6. Dr. Frank Rosenblatt • Was not primarily interested in AI devices • Research focused on the physical structures and neuro-dynamics of “natural intelligence” • Viewed the perceptron primarily as a brain model, not as a model for pattern recognition • Foundational role for AI and Connectionist Theory

  7. The Turing Test • Critics ask whether passing the test is a sufficient or a necessary condition for machine intelligence • Although widely accepted, it is limiting in determining whether a machine is capable of intelligence • Turing never claimed that passing the test is a necessary condition for intelligence • In his papers, he claims the point of the test was to determine whether a computer can “imitate a brain” • Can it be passed at all? • If “machine intelligence” is no longer an oxymoron, then one of Turing’s predictions has come true

  8. In the beginning… • Turing was harboring thoughts of machine intelligence as early as 1941 • Turing’s computing machines: model a child’s mind and then ‘educate’ it • Learning from experience • Start with an initial state of the mind / computer • Determine the education it is subjected to • Account for experiences other than education • Start with a simple machine, progress to one more elaborate • A key concept: “teach” a network of artificial neurons to perform specific tasks

  9. Turing and Connectionist Foundations • Computing machines built out of simple, neuron-like elements • Elements randomly connected together into networks • Consisted of artificial neurons and devices capable of modifying the connections between them • Training process renders certain pathways effective or ineffective • Every neuron executes the same logical operation of “not and” (NAND) • The idea that an initially unorganized neural network can be organized by means of interference training is significant • Referred to as “unorganized machines”; the neural-net form is the “B-type unorganized machine”
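
As a loose illustration of these ideas, the sketch below simulates a small random network of two-input NAND units in which every connection carries a switch that an outside trainer can flip. The wiring scheme, the clamp-to-1 switch semantics, and the interfere() helper are simplifying assumptions made for the example, not Turing's exact B-type construction.

```python
import random

# A toy network in the spirit of a B-type unorganized machine: randomly wired
# NAND units whose connections can be enabled or disabled by external training.

N_UNITS = 8
random.seed(0)

# Each unit reads two randomly chosen units; each of its inputs has a switch.
wiring = [(random.randrange(N_UNITS), random.randrange(N_UNITS))
          for _ in range(N_UNITS)]
switches = [[True, True] for _ in range(N_UNITS)]   # True = pass, False = clamp to 1

def step(state):
    """Advance one synchronous tick; every unit computes NAND of its two inputs."""
    nxt = []
    for u in range(N_UNITS):
        a_src, b_src = wiring[u]
        a = state[a_src] if switches[u][0] else 1
        b = state[b_src] if switches[u][1] else 1
        nxt.append(0 if (a and b) else 1)            # NAND
    return nxt

def interfere():
    """External 'interference': flip one connection switch, reorganizing the net."""
    u = random.randrange(N_UNITS)
    i = random.randrange(2)
    switches[u][i] = not switches[u][i]

state = [random.randrange(2) for _ in range(N_UNITS)]
for t in range(5):
    state = step(state)
    print("t =", t, state)
interfere()   # a trainer would apply such interference selectively, not at random
```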

  10. Turing’s unorganized machine [figures omitted: “Neurons”; “Unorganized Machines”]

  11. The Perception of Intelligence • How do we perceive intelligence? • What if a problem is presented to a mathematician or scientist to solve… • What if a problem is presented to a computer to solve… Intelligent humans, even highly regarded ones, are not infallible; they make errors, yet we do not consider them any less intelligent when they do. So why not apply the same standard of perception to computing machines when we attempt to determine machine intelligence?

  12. And in the end… • Jeff Hawkins (Hierarchical Temporal Memory model) • Perceptual-memory-based predictions play a fundamental role in intelligence • An intelligent agent learns from experience • Builds a model of the world by perception • Experiences are remembered • Are available virtually instantly for inference • Professor David Gelernter • Software is extremely limited in addressing information processing problems our minds routinely handle with ease • Forget about consciousness and concentrate on the “process of thought”, as Turing also stated • This will allow us to re-focus our efforts in AI research
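
The toy sketch below is emphatically not HTM (the class name, structure, and API are invented for illustration); it only shows the bare idea that remembered experience can be recalled virtually instantly to predict what comes next, and that there is nothing to recall for novel input.

```python
from collections import defaultdict, Counter

# Memory-based prediction in miniature: experiences are stored as observed
# transitions, and prediction is recalling the most frequent remembered successor.

class SequenceMemory:
    def __init__(self):
        self.transitions = defaultdict(Counter)   # symbol -> Counter of successors

    def learn(self, sequence):
        """Remember every transition in an experienced sequence."""
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, symbol):
        """Recall the most common successor seen so far, or None for novel input."""
        followers = self.transitions.get(symbol)
        return followers.most_common(1)[0][0] if followers else None

memory = SequenceMemory()
memory.learn("the cat sat on the mat".split())
print(memory.predict("the"))   # 'cat' (ties resolve to the first remembered)
print(memory.predict("dog"))   # None: nothing remembered, nothing to infer
```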

  13. References [1] A. M. Turing, "Computing Machinery and Intelligence," Mind, vol. 59, pp. 433-460, October 1950. [2] B. J. Copeland, and D. Proudfoot, "What Turing Did after He Invented the Universal Turing Machine," Journal of Logic, Language, and Information, vol. 9, pp. 491-509, 2000. [3] B. J. Copeland, and D. Proudfoot, "The Legacy of Alan Turing," Mind, vol. 108, pp. 187-195, 1999. [4] B. J. Copeland, "The Essential Turing," Oxford, Great Britain: Oxford University Press, 2004. [5] A. M. Turing, "The Turing Digital Archive," http://www.turingarchive.org/: University of Southampton and King's College Cambridge, 2002. [6] N. Block, "Psychologism and Behaviourism," Philosophical Review, vol. 90, pp. 5-43, 1981. [7] R. French, "Subcognition and the Limits of the Turing Test," Mind, vol. 99, 1990. [8] P. Hayes, and K. Ford, "Turing Test Considered Harmful," in Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 1995, pp. 972-977. [9] K. M. Ford, and P. J. Hayes, "On Computational Wings: Rethinking the Goals of Artificial Intelligence," Scientific American Presents, vol. 9, pp. 78-83, 1998. [10] B. J. Copeland, and D. Proudfoot, "On Alan Turing's Anticipation of Connectionism," Synthese, vol. 108, pp. 361-377, 1996. [11] B. J. Copeland, and D. Proudfoot, "Alan Turing's Forgotten Ideas in Computer Science," Scientific American, pp. 99-103, 1999. [12] A. M. Turing, "Intelligent Machinery," in Machine Intelligence 5, B. Meltzer, and D. Michie, Eds. Edinburgh: Edinburgh University Press, 1948, pp. 3-23. [13] B. G. Farley, and W. A. Clark, "Simulation of Self-Organizing Systems by Digital Computer," Institute of Radio Engineers Transactions on Information Theory, vol. 4, pp. 76-84, 1954. [14] D. Gelernter, "Artificial Intelligence is Lost in the Woods," in Technology Review, 2007. [15] A. M. Turing, "On computable numbers, with an application to the Entscheidungsproblem," Proceedings of the London Mathematical Society, Series 2, vol. 42, pp. 230-265, 1936. [16] M. L. Minsky, and Papert, Seymour S., Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: MIT Press, 1969. [17] D. E. Rumelhart, and McClelland, J. L., editors, "Parallel Distributed Processing: Explorations in the Microstructures of Cognition." vol. 1 - Foundations. Cambridge, MA: MIT Press, 1986. [18] J. L. McClelland, and Rumelhart, D. E., "Parallel Distributed Processing: Explorations in the Microstructures of Cognition." vol. 2 - Psychological and Biological Models. Cambridge, MA: MIT Press, 1986. [19] D. O. Hebb, The Organization of Behavior. New York: John Wiley & Sons, 1949.

  14. References [20] W. S. McCulloch, and Pitts, Walter H., "A Logical Calculus of the Ideas Immanent in Nervous Activity," Bulletin of Mathematical Biology, vol. 52, pp. 99-115, 1943. [21] F. Rosenblatt, "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain," Psychological Review, vol. 65, pp. 386-408, 1958. [22] F. Rosenblatt, Principles of Neurodynamics. Washington, DC: Spartan Books, 1962. [23] J. Hawkins, with Sandra Blakeslee, On Intelligence, First ed. New York: Times Books, Henry Holt and Company, 2004. [24] D. George, and Hawkins, J., "Belief Propagation and Wiring Length Optimization as Organizing Principles for Cortical Microcircuits," Numenta, Inc., 2005. [25] D. George, and Hawkins, J., "Invariant Pattern Recognition using Bayesian Inference on Hierarchical Sequences," Numenta, Inc., 2006. [26] J. Hawkins, and D. George, "Hierarchical Temporal Memory, Concepts, Theory, and Terminology," Numenta, Inc., 2006. [27] J. Hawkins, "Hierarchical Temporal Memory (HTM): Biological Mapping to Neocortex and Thalamus," Numenta, Inc., 2007. [28] J. Hawkins, "An Investigation of Adaptive Behavior Towards a Theory of Neocortical Function," 1986. [29] D. George, "How the Brain Might Work: A Hierarchical and Temporal Model for Learning and Recognition," Doctoral Dissertation; Stanford University, 2008.
