COMPUTATIONAL COGNITIVE SCIENCE


  1. COMPUTATIONAL COGNITIVE SCIENCE

  2. Cognitive Revolution
  • The development of the computer led to the rise of cognitive psychology and artificial intelligence
  • BINAC: the Binary Automatic Computer, developed in 1949

  3. Artificial Intelligence
  • Constructing artificial, computer-based systems that produce intelligent outcomes
  • Examples:
  – Game-playing programs (e.g., Deep Blue)
  – Intelligent robots (e.g., Mars rovers, DARPA's Urban Challenge)
  – The Netflix competition
  – Conversational agents

  4. Weak vs. Strong AI
  • Weak AI: using AI as a tool to understand human cognition
  • Strong AI: the claim that a properly programmed computer has a "mind" capable of understanding

  5. Turing Test
  • Can artificial intelligence be as good as human intelligence? How can we test this?
  • The Turing test (1950) was designed to test whether humans can distinguish between humans and computers on the basis of conversation
  • A human interrogator could ask a respondent (either a computer or a human, whose identity was hidden) any question he or she wished; based on the response, the interrogator had to decide whether the answer came from the computer or from the human
  Alan Turing (1912-1954)

  6. A classic conversational agent
  • The Turing Test inspired an early, satirical attempt to create a computerized Rogerian therapist, "Eliza"
  • Eliza's program is a set of "If… Then" rules:
  – IF the person mentions "father" THEN say "Who else in your family comes to mind when you think about this?"
  – IF the person writes "I remember X" THEN say "Does it make you feel happy to recall X?"
  • ELIZA has no intelligence itself; the intelligence comes from people interpreting its statements. A rule matcher in this style is sketched below.
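A minimal sketch of an ELIZA-style "If… Then" rule matcher, based on the two rules quoted above. The rule set, the fallback reply, and the sample inputs are illustrative assumptions, not Weizenbaum's original script.

```python
import re

# Each rule pairs a pattern with a function that builds the reply.
# These two rules mirror the slide's examples; everything else is hypothetical.
RULES = [
    (re.compile(r"\bI remember (.+)", re.IGNORECASE),
     lambda m: f"Does it make you feel happy to recall {m.group(1)}?"),
    (re.compile(r"\bfather\b", re.IGNORECASE),
     lambda m: "Who else in your family comes to mind when you think about this?"),
]

def respond(text):
    for pattern, make_reply in RULES:
        match = pattern.search(text)
        if match:
            return make_reply(match)
    return "Please go on."  # fallback when no rule matches

print(respond("I remember the summer"))  # -> Does it make you feel happy to recall the summer?
print(respond("My father was strict"))   # -> Who else in your family comes to mind ...?
```

Note that the program only pattern-matches and echoes; any apparent understanding is supplied by the human reader, exactly as the slide says.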

  7. Example of modern conversational agents
  http://testing.turinghub.com/
  [The following is a fragment where A = your instructor and B = a very good artificial agent]
  A: are you human?
  B: What do you think? I am a Supreme Ruler.
  A: seriously
  B: I am always serious.
  A: who was Turing?
  B: A brilliant and tragic figure in the history of computer science.
  A: Do you think the Turing test is a good one?
  B: Some of them can.
  A: are you having a good day?
  B: I am having fun.
  A: How do you know you are having fun?
  B: By introspection.

  8. Future of AI
  • Computer chip capacity and processing speed are increasing exponentially
  • Some theorists (e.g., Ray Kurzweil) believe this will lead to a technological singularity, along with dramatic improvements in AI

  9. Computational Modeling
  • Most modeling in cognitive science targets natural intelligence
  • The goal is to model or mimic some aspects of human cognitive functioning, e.g., to produce the same errors as humans → simulations of aspects of human behaviour

  10. Why do we need computational models?
  • They make vague verbal terms specific
  • They provide the precision needed to specify complex theories
  • They provide explanations
  • They yield quantitative predictions: just as meteorologists use computer models to predict tomorrow's weather, the goal of modeling human behavior is to predict performance in novel settings

  11. Neural Networks

  12. Neural Networks
  • An alternative to traditional information-processing models
  • Also known as:
  – PDP (parallel distributed processing) models
  – Connectionist models
  David Rumelhart, Jay McClelland

  13. Neural Networks
  • Neural networks are networks of simple processors that operate simultaneously
  • They have some biological plausibility

  14. Idealized neurons (units)
  [Figure: inputs feed into a Σ (summation) processor, which produces an output]
  • An abstract, simplified description of a neuron

  15. Neural Networks
  • Units
  – Activation = the activity of a unit
  – Weight = the strength of the connection between two units
  • Learning = changing the strength of the connections between units
  • Excitatory and inhibitory connections correspond to positive and negative weights, respectively

  16. An example calculation for a single (artificial) neuron
  [Figure: diagram showing how the inputs from a number of units are combined to determine the overall input to unit-i, yielding the final output]
  • Unit-i has a threshold of 1: if its net input exceeds 1, it responds with +1; if the net input is less than 1, it responds with -1 (see the sketch below)
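A sketch of this single-unit calculation. The threshold of 1 and the ±1 responses follow the slide; the particular input values and weights are illustrative assumptions, since the figure's numbers are not reproduced in the transcript.

```python
# Threshold unit: output +1 if the weighted sum of inputs exceeds the
# threshold, otherwise -1 (as described on the slide).
def threshold_unit(inputs, weights, threshold=1.0):
    net = sum(x * w for x, w in zip(inputs, weights))  # net input to unit-i
    return 1 if net > threshold else -1

inputs  = [1, -1, 1, 1]          # hypothetical inputs J1..J4
weights = [0.5, 0.2, 0.9, 0.3]   # hypothetical connection weights
print(threshold_unit(inputs, weights))  # net = 1.5 > 1, so output is +1

inputs[2] = -1  # flip J3 from +1 to -1: net drops to -0.3
print(threshold_unit(inputs, weights))  # output changes to -1
```

With these hypothetical weights, flipping the sign of a strongly weighted input flips the output, which is the kind of question the next two slides pose.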

  17. [Quiz, referring to the diagram above]
  • What would happen if we change the input J3 from +1 to -1?
  – output changes to -1
  – output stays at +1
  – do not know
  • What would happen if we change the input J4 from +1 to -1?
  – output changes to -1
  – output stays at +1
  – do not know

  18. [Quiz] If we want a positive correlation between the output and input J3, how should we change the weight for J3?
  – make it negative
  – make it positive
  – do not know

  19. Multi-layered Networks
  • Activation flows from a layer of input units through a set of hidden units to output units
  • Weights determine how input patterns are mapped to output patterns
  [Figure: output units above hidden units above input units]

  20. Multi-layered Networks
  • A network can learn to associate output patterns with input patterns by adjusting its weights
  • Hidden units tend to develop internal representations of the input-output associations
  • Backpropagation is a common weight-adjustment algorithm (sketched below)
  [Figure: output units above hidden units above input units]
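A minimal sketch of backpropagation in a small multi-layered network. The XOR task, the 2-4-1 layer sizes, the sigmoid activation, and the learning rate are all illustrative assumptions, not details from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # input patterns
y = np.array([[0.], [1.], [1.], [0.]])                  # target outputs (XOR)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # input -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5  # learning rate

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)             # forward pass: hidden activations
    out = sigmoid(h @ W2 + b2)           # forward pass: output activations
    d_out = (out - y) * out * (1 - out)  # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # error propagated back to hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)  # adjust weights
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```

The hidden layer here develops an internal representation of the input-output mapping: XOR is not learnable without it, which is why the hidden units matter.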

  21. A classic neural network: NETtalk
  • The network learns to pronounce English words, i.e., it learns spelling-to-sound relationships
  [Figure: 7 groups of 29 input units encode a 7-letter window of text (e.g., "_ a _ c a t _"), with the target letter in the middle; 80 hidden units feed 26 output units, which are trained against a teacher signal, e.g., the phoneme /k/] (after Hinton, 1989)
  • Listen to this audio demo.

  22. Different ways to represent information with neural networks: localist representation
  [Figure: units 1-6 with activation patterns for concepts 1-3 (activations of units; 0 = off, 1 = on)]
  • Each unit represents just one item → "grandmother" cells

  23. Distributed Representations (aka Coarse Coding)
  [Figure: units 1-6 with activation patterns for concepts 1-3 (activations of units; 0 = off, 1 = on)]
  • Each unit is involved in the representation of multiple items (see the sketch below)
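A sketch contrasting the two coding schemes, using six binary units as in the slides. The particular bit patterns are illustrative assumptions, not the ones in the figures.

```python
# Localist code: one dedicated "grandmother cell" per concept.
localist = {
    "concept 1": (1, 0, 0, 0, 0, 0),
    "concept 2": (0, 1, 0, 0, 0, 0),
    "concept 3": (0, 0, 1, 0, 0, 0),
}

# Distributed code: each unit takes part in representing several concepts.
distributed = {
    "concept 1": (1, 1, 0, 0, 1, 0),
    "concept 2": (0, 1, 1, 0, 0, 1),
    "concept 3": (1, 0, 1, 1, 0, 0),
}

for name, code in distributed.items():
    print(name, code)  # every unit is "on" for more than one concept
```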

  24. [Quiz] Suppose we lost unit 6. Can the three concepts still be discriminated?
  – NO
  – YES
  – do not know
  [Figure: units 1-6 with activation patterns for concepts 1-3 (activations of units; 0 = off, 1 = on)]

  25. [Quiz] Which representation is a good example of a distributed representation?
  – representation A
  – representation B
  – neither
  [Figure: Representation A and Representation B]

  26. Advantages of Distributed Representations
  • Efficiency
  – Solves the combinatorial explosion problem: with n binary units, 2^n different representations are possible (e.g., how many English words can be formed from combinations of the 26 letters of the alphabet?)
  • Damage resistance
  – Even if some units do not work, information is still preserved: because information is distributed across the network, performance degrades gradually as a function of damage (aka robustness, fault tolerance, graceful degradation)
  • Both advantages are illustrated in the sketch below
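A sketch of both advantages, reusing the hypothetical distributed codes from the previous example; the patterns themselves are assumptions.

```python
distributed = {
    "concept 1": (1, 1, 0, 0, 1, 0),
    "concept 2": (0, 1, 1, 0, 0, 1),
    "concept 3": (1, 0, 1, 1, 0, 0),
}

# Efficiency: n binary units allow 2**n distinct patterns (here 2**6 = 64),
# whereas a localist code with 6 units can represent only 6 items.
print(2 ** 6)

# Damage resistance: destroy unit 6 (drop the last element of every code)
# and check whether the three concepts remain distinct.
damaged = {name: code[:5] for name, code in distributed.items()}
print(len(set(damaged.values())) == len(damaged))  # True: still discriminable
```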

  27. Neural Network Models
  • Inspired by real neurons and brain organization, but highly idealized
  • Can spontaneously generalize beyond the information explicitly given to the network
  • Can retrieve information even when the network is damaged (graceful degradation)
  • Can be taught: learning is possible by changing the weighted connections between nodes

  28. Recent Neural Network Research (since 2006)
  • "Deep neural networks" by Geoff Hinton
  – Demos of learning digits
  – Demos of learned movements
  • What is new about these networks?
  – They can stack many hidden layers
  – They can capture more regularities in the data and generalize better
  – Activity can flow from input to output and vice versa
  • In case you want to see more details: YouTube video

  29. Samples generated by the network by propagating activation from the label nodes downward to the input nodes (e.g., pixels)
  • Graphic in this slide from Geoff Hinton

  30. Examples of correctly recognized handwritten digits that the neural network had never seen before
  • Graphic in this slide from Geoff Hinton

  31. Other Demos & Tools
  If you are interested, here are tools to create your own neural networks and train them on data:
  • Hopfield network: http://www.cbu.edu/~pong/ai/hopfield/hopfieldapplet.html
  • Backpropagation algorithm and competitive learning: http://www.psychology.mcmaster.ca/4i03/demos/demos.html
  • Competitive learning: http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/DemoGNG/GNG.html
  • Various networks: http://diwww.epfl.ch/mantra/tutorial/english/
  • Optical character recognition: http://sund.de/netze/applets/BPN/bpn2/ochre.html
  • Brain-wave simulator: http://www.itee.uq.edu.au/%7Ecogs2010/cmc/home.html
