
A Computational Introduction to the Brain-Mind






Presentation Transcript


  1. A Computational Introduction to the Brain-Mind Juyang (John) Weng Michigan State University East Lansing, MI 48824 USA weng@cse.msu.edu

  2. Human Physical and Mental Development Studies on the adult brain Studies on how the brain develops

  3. Machine Mental Development

  4. Totipotency • Stem cells and somatic cells • Genomic equivalence • All cells are totipotent: each cell's genome is sufficient to guide development from a single cell to the entire adult body • Consequence: the developmental program is cell-centered

  5. Genomic Equivalence • Each somatic cell carries the complete genome in its nucleus • Evidence: cloning (e.g., the sheep Dolly) • Consequences: • The genome is cell-centered, directing each individual cell to develop within that cell's environment • No genome is dedicated to more than one cell • Cell learning is “in place”: a neuron does not have an extracellular learner; learning must be fully accomplished by each cell itself while it interacts with the cell's environment

  6. How to Measure Problems in AI • Time and space complexity? • High or low “level”? • Tasks that look intelligent when a machine does them? • Rational or irrational? • Handling uncertainty? • …

  7. Task Muddiness • Independent of problem domain • Independent of technology level • Independent of the performer: machines or animals • Can be quantified • Helps us understand why AI is difficult • Helps us see the essence of intelligence • Can be used to evaluate intelligent machines • Helps us appreciate human intelligence

  8. Task Muddiness • Agent independent • Categories only • Each category can be extended • Categories adopted to model task muddiness: • Environment • Input • Output • Internal state • Goal

  9. Environmental Muddiness

  10. Task Executor • Human agent: the human is the sole executor • Machine agent: dual executors • A task is given to a human • The human programs a machine agent • The agent executes the task

  11. A Partial List of Input Muddiness

  12. A Partial List of Other Muddiness

  13. 2-D Muddiness Frame (figure: tasks plotted by size of input vs. rawness of input; examples include visual recognition, language translation, sonar-based navigation, and computer chess)

  14. Composite Muddiness m = m1 × m2 × m3 × … × mn (the product of the individual muddiness measures)
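The product form of composite muddiness can be sketched in a few lines; the factor values below are hypothetical, chosen only to illustrate the arithmetic:

```python
from math import prod

def composite_muddiness(factors):
    """Composite muddiness m = m1 * m2 * ... * mn: the product of the
    individual muddiness measures of a task."""
    return prod(factors)

# Hypothetical muddiness scores for, e.g., environment, input, and output
print(composite_muddiness([3, 5, 2]))  # -> 30
```

Because the measures multiply rather than add, one very muddy aspect dominates the composite, and reducing any single factor to near zero makes the whole task clean.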

  15. Autonomous Mental Development (AMD)

  16. Traditional Manual Development A = H(Ec, T), where A: agent, H: human, Ec: ecological condition, T: task

  17. New Autonomous Development Autonomous inside the skull: A = H(Ec), where A: agent, H: human, Ec: ecological condition

  18. Mode of Development: AA-Learning AA-learning: automated animal-like learning (figure: a closed brain connected to the world through unbiased sensors, biased sensors, and effectors)

  19. Existing Machine Learning Types • Supervised learning: class labels (or actions) are given in training • Unsupervised learning: class labels (or actions) are not given in training • Reinforcement learning: class labels (or actions) are not given in training, but a reinforcement signal (score) is given
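The three training signals can be contrasted on a toy one-dimensional task; the data, update rules, and names below are illustrative only, not from the slides:

```python
# Toy 1-D task: separate small values (class 0) from large values (class 1).
data = [0.1, 0.2, 0.9, 1.0]

# Supervised learning: a class label accompanies every training input.
labels = [0, 0, 1, 1]
threshold = (max(x for x, y in zip(data, labels) if y == 0)
             + min(x for x, y in zip(data, labels) if y == 1)) / 2

# Unsupervised learning: no labels; group inputs by proximity to two extremes.
lo, hi = min(data), max(data)
clusters = [0 if abs(x - lo) < abs(x - hi) else 1 for x in data]

# Reinforcement learning: no labels, but each guess receives a scalar score.
def reward(x, guess):                 # +1 if the guess matches a hidden rule
    return 1 if guess == (x > 0.5) else -1

rl_threshold = 0.0
for x in data:
    guess = x > rl_threshold
    if reward(x, guess) < 0:          # punished: nudge threshold toward x
        rl_threshold += 0.5 * (x - rl_threshold)

print(threshold, clusters, rl_threshold)
```

The point of the contrast: the supervised learner is told the answer, the unsupervised learner must invent a grouping, and the reinforcement learner only ever sees a score for its own guesses.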

  20. New Classification for Machine Learning • Need to consider state imposability after the task is given • 3-tuple (s, e, b): symbolic internal representation, effector, biased sensor • State: whether an internal state is imposable after the task is given • Biased sensor: whether a biased sensor is used • Effector: whether the effector is imposed

  21. 8 Types of Machine Learning Learning types 0-7 are based on the 3-tuple (s, e, b): symbolic internal representation (s = 1), effector imposed (e = 1), biased sensors used (b = 1)
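The eight types can be enumerated from the three binary flags; the slide only states that types 0-7 derive from the tuple, so the binary encoding 4s + 2e + b below is an assumption made for illustration:

```python
# Enumerate the 8 learning types from the 3-tuple (s, e, b).
# Assumption: the type number is the binary encoding s*4 + e*2 + b.
FLAGS = [("s", "symbolic internal representation"),
         ("e", "effector imposed"),
         ("b", "biased sensor used")]

for s in (0, 1):
    for e in (0, 1):
        for b in (0, 1):
            t = s * 4 + e * 2 + b
            used = [name for (flag, name), v in zip(FLAGS, (s, e, b)) if v]
            print(f"Type {t}: (s={s}, e={e}, b={b})", "; ".join(used) or "none")
```

Under this encoding, type 7 is the fully hand-imposed case (all three flags set) and type 0 imposes nothing, which is the setting AA-learning aims for.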

  22. The Developmental Approach • Enable a machine to perform autonomous mental development (AMD) • Impractical to faithfully duplicate biological AMD • Hardware: Embodiment (a robot) • Software: A developmental program • Task nonspecific • AA-learning mode, from the “birth” time through the “life” span

  23. Comparison of Approaches

  24. Developmental Program vs Traditional Learning [1] For tasks unknown at the programming time.

  25. Motives of Research for Development • Developmental mechanisms are easier to program: lower level, more systematic, task-independent, clearly understandable • Relieve humans from intractable programming tasks: vision, speech, language, complex behaviors, consciousness • User-friendly machines and robots: humans issue high-level commands to machines • Highly adaptive manufacturing systems (e.g., self-trainable, reconfigurable machining systems) • Help to understand human intelligence

  26. Task Nonspecificity • A program being task nonspecific means: • Open to muddy environments • Tasks are unknown at programming time • “The brain” is closed after birth • Learns an open number of muddy tasks after birth • Avoid trivial cases: • A thermostat • A robot that does task A when the temperature is high and task B when it is low • A robot that does simple reinforcement learning

  27. Eight Requirements for Practical AMD Eight necessary operational requirements: • Environmental openness: muddy environments • High-dimensional sensing • Completeness in internal representation for each age group • Online • Real-time speed • Incremental: for each fraction of a second (e.g., 10-30 Hz) • Perform while learning • Scale up to large memory Existing works (other than SAIL) aimed at some, but not all; SAIL deals with all 8 requirements together

  28. Definition of AA-Learning • A machine M conducts AA-learning if its operation mode is as follows: for t = t0, t1, t2, ..., the brain program f recursively updates the brain B, the sensory input-output x, and the effector input-output z
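The recursive update above can be sketched as a loop; the brain program `f` here is a hypothetical placeholder, since the definition only requires that f recursively update the brain B, the sensory port x, and the effector port z at t = t0, t1, t2, ...:

```python
def f(brain, x, z):
    """Hypothetical toy brain program: remember the input, echo it back."""
    brain = brain + [x]          # update the brain B
    z = x                        # update the effector output z
    return brain, z

def aa_learning(sensory_stream):
    brain, z = [], None          # "birth": empty brain, no action yet
    for x in sensory_stream:     # t = t0, t1, t2, ...
        brain, z = f(brain, x, z)
        yield z                  # the machine performs while it learns

print(list(aa_learning([1, 2, 3])))  # -> [1, 2, 3]
```

Note that the loop has no separate training and testing phases: the machine emits an action at every time step, which is the "perform while learning" requirement in operational form.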

  29. The forebrain The midbrainand hindbrain The spinal cord The Central Nervous System Kandel, Schwartz and Jessell 2000

  30. Brodmann Areas (1909) Kandel, Schwartz and Jessell 2000

  31. Sensory and Motor Pathways My hypothesis: the brain has complex networks that emerge largely shaped by signal statistics (Weng IJCNN 2010) Adapted from Kandel, Schwartz and Jessell 2000

  32. Multimodal Integration

  33. Brain’s Vision System The brain has only two exposed ends to interact with the environment: the sensory end and the motor end. Weng IJCNN 2010

  34. Triple Loops Weng IJCNN 2010

  35. Solving the Feature Binding Problem Weng IJCNN 2010

  36. Area as A Building Block Weng IJCNN 2010

  37. Neurons as Feature Detectors: The Lobe Component Model Weng et al. WCCI 2006 • Biologically motivated: • Hebbian learning • Lateral inhibition • Partition the input space X into c regions: X = R1 ∪ R2 ∪ ... ∪ Rc • Lobe component i: the principal component of region Ri

  38. Different Normalizations

  39. Dual Optimality of CCI LCA • Spatial optimality leads to the best target: given the number of neurons (limited resource), the target of the synaptic weight vectors minimizes the representation error based on the “observation” x • Temporal optimality leads to the best runner to the target: given limited experience up to time t, find the best direction and step size for each t based on the “observation” u = r x Weng & Luciw TAMD vol. 1, no. 1, 2009
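One incremental step of lobe-component updating can be sketched as follows. This is a simplified sketch, not the published algorithm: winner-take-all stands in for lateral inhibition, and a plain running average replaces the amnesic plasticity weighting.

```python
import numpy as np

def lca_update(neurons, ages, x):
    """One incremental lobe-component step (simplified sketch).

    neurons: (c, d) array of synaptic weight vectors v_i
    ages:    per-neuron update counts n_i
    x:       input vector (the "observation")
    """
    # Response of each neuron to x (cosine-like; input norm omitted).
    responses = neurons @ x / (np.linalg.norm(neurons, axis=1) + 1e-12)
    j = int(np.argmax(responses))          # winner after lateral inhibition
    r = responses[j]
    # Hebbian update of the winner toward u = r * x, as a running average.
    ages[j] += 1
    w1 = (ages[j] - 1) / ages[j]
    w2 = 1 / ages[j]
    neurons[j] = w1 * neurons[j] + w2 * r * x
    return j

# Toy demo: two neurons, one update step.
neurons = np.array([[1.0, 0.0], [0.0, 1.0]])
ages = [0, 0]
winner = lca_update(neurons, ages, np.array([0.9, 0.1]))
print(winner, neurons[winner])  # the winner moves toward u = r * x
```

Only the winning neuron updates, so each weight vector converges toward the principal component of its own region of the input space, matching the partition X = R1 ∪ ... ∪ Rc.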

  40. CCI LCA Algorithm (1)

  41. CCI LCA Algorithm (2)

  42. Plasticity Schedule μ(t) (figure: the schedule plotted against t, with parameter r = 10000 and transition times t1, t2)
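A plasticity schedule of this shape is commonly defined piecewise; the sketch below assumes that form, with the parameter r = 10000 from the slide and a plateau value c = 2 as an assumption. The exact functional form is not given on the slide.

```python
def mu(t, t1, t2, c=2.0, r=10000.0):
    """Assumed piecewise plasticity schedule mu(t): zero while young,
    ramping linearly to c between t1 and t2, then growing with slope 1/r."""
    if t <= t1:
        return 0.0
    if t <= t2:
        return c * (t - t1) / (t2 - t1)
    return c + (t - t2) / r

print(mu(50, 100, 1000), mu(1000, 100, 1000), mu(11000, 100, 1000))
# -> 0.0 2.0 3.0
```

The schedule keeps early learning strictly averaging (μ = 0), then gradually weights recent observations more, so old experience is slowly forgotten at a rate bounded by 1/r.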

  43. Natural Images

  44. IC from Natural Images

  45. Temporal Architectures

  46. Based on FA Ideas

  47. From FA to ED network • FA: sn = f(sl, am), where s: state, a: input symbol • ED: the internal area learns yi = fy(sl, am); the motor area learns sn = fz(yi) • s: a numeric pattern of z, a sample of the Z space • a: a numeric pattern of x, a sample of the X space • y: a numeric pattern of y, a sample of the Y space
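The FA-to-ED mapping can be sketched with a toy 2-state automaton. This is a minimal sketch under stated assumptions: winner-take-all responses and hand-set weights, whereas a real ED network learns the two mappings incrementally.

```python
import numpy as np

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# FA transition table: sn = f(sl, am) for a toy 2-state, 2-symbol automaton.
f_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Internal area Y: one neuron per (state, symbol) pair; its weight vector is
# the concatenated pattern [z ; x], so it fires for exactly that pair.
pairs = list(f_table)
Y = np.array([np.concatenate([one_hot(s, 2), one_hot(a, 2)]) for s, a in pairs])

# Motor area Z: neuron i votes for the next state f_table[pairs[i]].
Z = np.array([one_hot(f_table[p], 2) for p in pairs])

def ed_step(z, x):
    y = np.zeros(len(pairs))
    y[np.argmax(Y @ np.concatenate([z, x]))] = 1.0   # internal: yi = fy(sl, am)
    return Z.T @ y                                   # motor: sn = fz(yi)

# Run the emulated FA on input symbols 1, 1, 0 starting from state 0.
z = one_hot(0, 2)
for a in [1, 1, 0]:
    z = ed_step(z, one_hot(a, 2))
print(int(np.argmax(z)))  # -> 0  (0 -1-> 1 -1-> 0 -0-> 0)
```

The design choice the sketch illustrates: states and symbols become numeric patterns in the Z and X spaces, and the symbolic transition function splits into two learnable numeric mappings through the internal Y space.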

  48. Training and Tests Luciw & Weng IJCNN 2010

  49. Performance

  50. Three Types of Information Flow • Different directions for different intents • Mixed modes are possible • There are no “if-then-else” type switches
