
FUSION OF LANGUAGE AND THOUGHT PROCESSES FOR CTS


Presentation Transcript


  1. FUSION OF LANGUAGE AND THOUGHT PROCESSES FOR CTS Collaborative Technologies and Systems Conference 2003, Orlando, Florida Leonid Perlovsky Technical Advisor, AFRL

  2. CTS AND INTELLIGENT AGENTS • Collaborative systems include multiple interacting intelligent agents • a human, machine, device, or software code • Agents are • significantly autonomous and goal-oriented; they perform various functions and communicate with other agents • equipped with sensors to collect data, receive communications, and extract information • use existing knowledge, integrate new information to produce new knowledge, and send communications • embody the concept of life and intelligence • What are the required intelligent agent technologies?

  3. INTELLIGENT AGENT TECHNOLOGIES • Interfaces and access • man-machine and machine-machine interfaces • knowledge and data access • Understanding of • language, situations, and the environment • Fusion of • knowledge and data from diverse sources and disciplines • Decision making • in a heterogeneous environment, with inaccurate data, uncertain knowledge, and intuitions • information exchange, knowledge management • Abilities for thinking and language • a mysterious territory

  4. LANGUAGE AND THINKING: PAST • Artificial Intelligence, 1950s-1980s • logical rules • no principal difference between thinking and language • failed, yet no replacement for logic emerged for combining language and thought • Linguistics • Chomskyan linguistics, computational linguistics, cognitive linguistics: relate words to words • no relation to the surrounding world • Closely related and intertwined evolution in the human mind • science cannot yet tell us what language is without thinking, or vice versa

  5. LANGUAGE AND THINKING: FUTURE • Thinking and understanding • identify “concepts” of objects, relationships, and situations in sensory data • relate concepts to needs and emotions • relate concepts to behavior • Language is a part of the mind • involved in concepts, emotions, instincts • closest to concepts • words are combined into phrases much as retinal signals are combined into objects (?) • Where do language and thought come together? • Concepts? Not logical rules and artificial intelligence again?

  6. PHYSICS AND MATHEMATICS OF MIND: RANGE OF CONCEPTS • Logic is sufficient to explain the mind • [Newell, “Artificial Intelligence”] • No new specific mathematical concepts are needed • the mind is a collection of ad hoc principles [Minsky] • Specific mathematical constructs describe the multiplicity of mind phenomena • “first physical principles of mind” • [Grossberg, Zadeh, Perlovsky,…] • Quantum computation • [Hameroff, Penrose, Perlovsky,…] • New, as yet unknown physical phenomena • [Josephson, Penrose]

  7. GENETIC ARGUMENTS FOR THE “FIRST PRINCIPLES” • There are about 30,000 genes in the human genome • relate concepts to needs • Only about a 2% difference between humans and apes • Say, a 1% difference between human and ape minds • Only about 300 proteins • It is likely that a few general principles of concept learning are required to explain our ability to operate with concepts • If we count “a protein per concept” • If we count combinations: 300^300 ~ unlimited => all languages could have been genetically hardwired (!?!) • Languages are not genetically hardwired • because they have to be flexible and adaptive

  8. INFORMATION PROCESSING AND UNDERSTANDING • Understanding the meaning of signals (visual, acoustic, text) • Identify objects in signals • signals -> concepts; or words -> phrases • Associate relevant objects • objects -> scenes; or phrases -> more general concepts • In this task the human mind is qualitatively, by far, superior to existing mathematical methods • Effort has been devoted to incorporating “biological lessons” into smart algorithms, yet success has been limited • Why is this so, and how can the existing limitations be overcome?

  9. REASONS FOR PAST LIMITATIONS • The basis of human intelligence is in combining conceptual understanding with emotional evaluation • this has become exceedingly well appreciated among psychologists and neurobiologists during the last ten years • human understanding without emotional involvement is basically flawed [Damasio] • This new understanding has not been accepted by the mathematical and engineering communities • the mathematical laws governing emotional involvement in the thinking process have not been well known • there is a long-standing cultural belief that emotions are the opposite of thinking and intellectually inferior • Socrates, Plato, Aristotle • reiterated by the founders of Artificial Intelligence [Newell]

  10. FUNDAMENTAL MATHEMATICAL PROBLEM • Combinatorial Complexity (CC): • understanding involves evaluating a large number of combinations • words into sentences, pixels or samples into objects, objects into scenes • a general problem of all data-association methods • CC has been encountered for over 50 years • statistical pattern recognition and neural networks: CC of learning requirements • rule-based systems, expert systems, and AI, in the presence of variability: CC of rules • model-based systems utilizing adaptive models: CC of computations (NP-complete) • Chomskyan linguistics: rule-based (1957), then model-based, rules and parameters (1981) • the scale of the blow-up is illustrated below
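
For a sense of scale (an illustration of mine, not from the slides), the following Python lines compare exhaustively associating N data samples with M candidate models against the per-iteration cost of the dynamic-logic approach introduced on the later slides:

    M, N = 10, 100                  # 10 candidate models, 100 data samples
    print(f"{float(M ** N):.1e}")   # exhaustive associations M^N: 1.0e+100
    print(M * N)                    # dynamic logic, per iteration: 1000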

  11. CC AND TYPES OF LOGIC • CC is related to formal logic • the law of excluded third (excluded middle) • every logical statement is either true or false • there can be no similarity/distance measure between logical statements • CC is Gödel's “incompleteness” manifested in a finite system • Multivalued and fuzzy logics eliminated the “law of excluded third” • yet they are based on the mathematics of formal logic • e.g., fuzzy logic systems are either too fuzzy or too crisp • A similarity measure between logical statements was not introduced

  12. STRUCTURE OF MIND • Concepts • Models of objects, their relations, and situations • Evolved to satisfy instincts • Instincts • Internal sensors (e.g. sugar level in blood) • Emotions • Neural signals connecting instincts and concepts • e.g. a hungry person sees food all around • Behavior • Models of goals (desires) and muscle-movement • Hierarchy • Concept-models and behavior-models are organized in a “loose” hierarchy

  13. SIMILARITY MEASURE AND DYNAMIC LOGIC • A similarity measure • based on a similarity between models and data (words) • equivalent to emotional evaluative signals • leads to dynamics (equations of motion) improving concept-models by maximizing similarity (dynamic logic) • Dynamic Logic unifies formal and fuzzy logic • initial “fuzzy model-concepts” dynamically evolve into “formal-logic, or crisp, model-concepts” • Overcomes the CC of model-based recognition • fast algorithms • low-polynomial M·N instead of combinatorial M^N • associates pixels into objects (or words into phrase-models) without CC

  14. THINKING • Understanding and learning • Model-concepts always have to be adapted to incoming signals or data • lighting, surroundings, new objects and situations • Instinct for knowledge and understanding • Model-concepts are adapted and improved even when there are no concrete “bodily” needs • Increase similarity between models and world • Emotions related to the knowledge instinct • Satisfaction or dissatisfaction • change in similarity between models and world • Harmony or disharmony: aesthetic emotion • Behavior related to knowledge • Adaptation and learning of concepts (behavior in the mind)

  15. MODELING FIELD THEORY basic two-layer hierarchy: from signals to objects • Signals and Concept-Models • signals x(n), n = 1,…,N • model-objects Mm(Sm,n), parameters Sm, m = 1,…,M • Goal: learn object-models and understand signals • associate samples n with models m and find parameters Sm • learn the signal-contents of objects (and object properties) • Maximize the similarity L between signals and models • knowledge instinct • Likelihood or mutual information: L = l({x}) = Πn l(x(n)) • l(x(n)) = Σm r(m) l(x(n) | Mm(Sm,n)) (M may depend on n) • CC: L contains M^N items: all associations of samples and models

  16. DYNAMIC LOGIC ALGORITHM (DLA) (non-combinatorial solution) • Start with a set of signals and unknown object-models • any parameter values Sm • associate a fuzzy object-model with its contents (signal composition) • (1) f(m|n) = r(m) l(n|m) / Σm' r(m') l(n|m') • Improve parameter estimation • (2) Sm = (1 - a) Sm + a Σn f(m|n) [∂ln l(n|m)/∂Mm]·[∂Mm/∂Sm] • (a determines the speed of convergence) • learn the signal-contents of objects • Continue iterations (1)-(2). Theorem: MFT converges; similarity increases on each iteration; aesthetic emotion is positive during learning • Each concept-model is an agent • semi-independent, interacting with other agents • competing for evidence (among signals) • learning its properties and recognizing its signals • a runnable sketch of iterations (1)-(2) follows below
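
A minimal runnable sketch of iterations (1)-(2) in Python (my illustration, not the author's code), assuming one-dimensional Gaussian object-models with a single adaptive parameter Sm each (the object location); the fuzziness sigma and its annealing schedule are assumptions chosen to exhibit the fuzzy-to-crisp dynamics, and each iteration costs on the order of M·N operations rather than M^N:

    import numpy as np

    def dynamic_logic(x, M, iters=60, sigma0=5.0, sigma_min=0.3):
        # Fuzzy association (1) + parameter re-estimation (2), iterated
        # while the fuzziness shrinks: fuzzy models dynamically become crisp.
        rng = np.random.default_rng(0)
        S = rng.uniform(x.min(), x.max(), M)   # arbitrary initial Sm
        r = np.full(M, 1.0 / M)                # rates r(m)
        sigma = sigma0
        for _ in range(iters):
            # (1) f(m|n) = r(m) l(n|m) / sum_m' r(m') l(n|m')
            l = np.exp(-0.5 * ((x[None, :] - S[:, None]) / sigma) ** 2)
            f = r[:, None] * l
            f = f / (f.sum(axis=0, keepdims=True) + 1e-300)
            # (2) move each Sm toward the signals fuzzily associated with it
            S = (f * x[None, :]).sum(axis=1) / (f.sum(axis=1) + 1e-300)
            r = f.mean(axis=1)
            sigma = max(sigma_min, 0.9 * sigma)  # reduce fuzziness
        return S, f

    # three hidden "objects" at 0, 4 and 9, observed in noise (N = 150)
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(c, 0.3, 50) for c in (0.0, 4.0, 9.0)])
    S, f = dynamic_logic(x, M=3)
    print(np.sort(S))                          # roughly [0, 4, 9]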

  17. LINGUISTICS: WORDS, CONCEPTS AND GOALS • Text is a (loose) hierarchy of concepts • A word is a concept; it acquires meaning in a phrase • A phrase-concept acquires meaning in a “paragraph”,… • Model-concepts (e.g., phrases made up of words) • Simplistic “bag”-model • a set or collection of words • More complex models: word order and relationships • Real-language models • grammar (Chomsky, Pinker, Jackendoff, Rieger, Mehler…) • Goal-instinct • A search engine: find conceptual similarity between a query and a text (a toy sketch follows below) • analyze both in terms of concepts • Learn and identify model-concepts in texts (language instinct = knowledge instinct)
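
As a toy illustration of the simplistic “bag”-model similarity between a query and a text (a hedged sketch of mine; the slide names no implementation, and real MFT models add word order and grammar):

    from collections import Counter
    import math

    def bag(text):                  # "bag"-model: a multiset of words
        return Counter(text.lower().split())

    def similarity(a, b):           # cosine similarity of two bags
        dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
        norms = (math.sqrt(sum(v * v for v in a.values()))
                 * math.sqrt(sum(v * v for v in b.values())))
        return dot / norms if norms else 0.0

    query = bag("fusion of language and thought")
    text = bag("dynamic logic fuses language and thought processes")
    print(similarity(query, text))  # higher = more shared word-concepts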

  18. MODELING FIELD THEORY basic two-layer hierarchy: words and phrase-concepts • Words and Concept-Models • words w(n), n = 1,…,N • model-phrases Mm(Sm,n), parameters Sm, m = 1,…,M • Simplistic “bag”-model: Mm = Sm = {w}m • Goal: learn phrase-models • associate words n with models m and find parameters Sm • learn the word-contents of phrases (and grammatical relationships) • Maximize the similarity L between words and models • language instinct = knowledge instinct • Likelihood or mutual information: L = l({w}) = Πn l(w(n)) • l(w(n)) = Σm r(m) l(w(n) | Mm(Sm,n)) (M may depend on n) • CC: L contains M^N items: all associations of words and models

  19. DYNAMIC LOGIC ALGORITHM (DLA) (non-combinatorial solution) • Start with a large body of text and unknown phrase-models • any parameter values Sm • associate a fuzzy phrase-model with its contents (words) • (1) f(m|n) = r(m) l(n|m) / Σm' r(m') l(n|m') • Improve parameter estimation • (2) Sm = (1 - a) Sm + a Σn f(m|n) [∂ln l(n|m)/∂Mm]·[∂Mm/∂Sm] • (a determines the speed of convergence) • learn the word-contents of phrases (and grammatical relationships) • Continue iterations (1)-(2). Theorem: MFT converges; similarity increases on each iteration; aesthetic emotion is positive during learning • Each phrase-concept-model is an agent • semi-independent, interacting with other agents • competing for evidence (among words) • learning its properties and recognizing its words

  20. CATCH ME BY HAND • Given enough time, you would be able to catch me on the previous slide • the “bag”-model is non-differentiable • this is a key point: learning non-differentiable models requires sorting through combinations • leading to combinatorial complexity • differentiable models can be defined with a little trick (one plausible version is sketched below)
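
The slide leaves the trick unspecified; one plausible reading (my assumption, not the author's stated method) is to replace the crisp word set {w}m with continuous word-occurrence frequencies Sm,w, so that l(n|m) becomes smooth in the parameters and update (2) applies without sorting through combinations:

    import numpy as np

    V, M = 1000, 5                 # vocabulary size, number of phrase-models
    S = np.full((M, V), 1.0 / V)   # word probabilities per model: uniform,
                                   # i.e. a maximally fuzzy "bag"

    def l(word_id, m):
        # likelihood of a word under phrase-model m; differentiable in S,
        # unlike crisp set membership
        return S[m, word_id]

    print(l(42, 0))                # 0.001: all words equally plausible so far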

  21. INTEGRATED LANGUAGE AND THINKING • Where do language and thought come together? • concept-models have linguistic and objective aspects • a fuzzy concept m has a sensory and a linguistic model: Mm = {Mm-sensory, Mm-linguistic} • language and thoughts are fused at the fuzzy pre-conscious level • before concepts are learned • Understanding language and sensory data • baby learning: “look, this is a car” • each linguistic model has an empty “slot” for objects and situations in the surrounding world • each sensory or situational model has an empty “slot” for a word or phrase • language participates in thinking and vice versa • The two types of information help each other's learning and understanding • help associating signals, words, models, and behavior (a schematic follows below)
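
A schematic of the dual concept-model in Python (names and structure are mine, purely illustrative): each concept carries a sensory slot and a linguistic slot, both initially empty, and evidence of either kind contributes to the same similarity, so words help ground percepts and vice versa:

    from typing import Optional

    class ConceptModel:
        # Mm = {Mm-sensory, Mm-linguistic}: a fuzzy concept with an
        # (initially empty) "slot" for each type of evidence
        def __init__(self, sensory: Optional[object] = None,
                     linguistic: Optional[object] = None):
            self.sensory = sensory        # object/situation model
            self.linguistic = linguistic  # word/phrase model

        def similarity(self, signal=None, word=None) -> float:
            s = 0.0
            if signal is not None and self.sensory is not None:
                s += self.sensory.l(signal)    # hypothetical sensory l(n|m)
            if word is not None and self.linguistic is not None:
                s += self.linguistic.l(word)   # hypothetical linguistic l(n|m)
            return s

Learning would then maximize the joint similarity over both slots, so a word can be acquired from sensory context and an object-model from linguistic context.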

  22. INNER LINGUISTIC FORM: HUMBOLDT, the 1830s • In the 1830s Humboldt discussed two types of linguistic forms • a word's outer linguistic form (sounds) – a formal designation • and its inner linguistic form (???) – creative, full of potential • This remained a mystery for rule-based AI, structural linguistics, and Chomskyan linguistics • rule-based approaches using the mathematics of logic make no distinction between the formal and the creative • In MFT and DLA there is a difference • the static form of learned (converged) concept-models • the dynamic form of fuzzy concepts, with creative learning potential and emotional content

  23. WHY MIND AND EMOTIONS? • A lot about the mind can be explained: concepts, instincts, emotions, conscious and unconscious, intuition, aesthetic ability,… • but isn't it sufficient to solve the mathematical equations, or to code a computer and execute the code? • A simple yet profound question • the answer is in the history and practice of science • Newton's laws do not contain all of classical mechanics • Maxwell's equations do not exhaust radars and radio communication • a physical intuition about the system is needed • the intuition behind MFT was derived from biological, linguistic, cognitive, neuro-physiological, and psychological insights into the human mind • Practical engineering applications of DLA require these same insights in addition to mathematics

  24. MFT THEORY OF MIND • MFT dynamics: the elementary thought process • a large number of model-concepts compete for incoming signals • uncertainty in models corresponds to uncertainty in associations f(m|n) • eventually, one model (m') wins the competition for a subset {n'} of input signals w(n), when parameter values match object properties, and f(m'|n) values become close to 1 for n ∈ {n'} and 0 for n ∉ {n'} • upon convergence, the entire set of input signals {n} is divided into subsets, each associated with one model-object • fuzzy a priori concepts (unconscious) become crisp concepts (conscious) • dynamic logic • Elementary thought process, consciousness, and the unconscious • Aristotle: in thinking, an a priori form-as-potentiality (a fuzzy model) meets matter (signals) and becomes a form-as-actuality (a concept) • Jung: conscious concepts are developed by the mind based on inherited structures of the mind, archetypes, inaccessible to consciousness • Grossberg: models attaining a resonant state (winning the competition for signals and becoming crisp in the process) reach consciousness

  25. MFT THEORY OF MIND -understanding- • Incoming signals {x,w} are associated with model-concepts (m) • creating phenomena (of the MFT-mind), which are understood as objects, situations, phrases,… • in other words, signal subsets acquire meaning (e.g., a subset of retinal signals acquires the meaning of a chair) • Several aspects of understanding and meaning • concept-models are connected (by emotional signals) to instincts and to behavioral models that can make use of them for satisfaction of bodily instincts • an object is understood in the context of a more general situation in the next layer, consisting of more general concept-models (satisfaction of the knowledge instinct) • each recognized concept-model (phenomenon) sends (in neural terminology: activates) an output signal • a set of these signals comprises the input signals for the next-layer models, which ‘cognize’ more general concept-models • this process continues up the hierarchy toward the most general models: models of the universe (scientific theories), models of self (psychological concepts), models of the meaning of existence (philosophical concepts), models of an a priori transcendent intelligent subject (theological concepts) • neural brain organization: individual modules, which form approximate hierarchies, along with a number of “parallel” and “loop-like” pathways

  26. SIGNS AND SYMBOLS: mathematical semiotics • Signs: stand for something else • non-adaptive entities (mathematics, AI) • brain signals insensitive to context (Pribram) • Symbols • signs (mathematics, AI) • psychological processes connecting conscious and unconscious (Jung) • brain signals sensitive to context (Pribram) • processes of sign interpretation • Mathematics of symbol-processes • relationships to thinking • relationships to language

  27. MFT THEORY OF SYMBOLS -mathematical semiotics- • Semiotics studies the symbol-content of culture • Example: consider a written word "chair" • It can be interpreted by a mind to refer to something else: an entity in the world, a specific chair, or the concept "chair" in the mind • In this process the mind, or an intelligent system, is called an interpreter, the written word is called a sign, the real-world chair is called a designatum, and the concept in the interpreter's mind, the internal representation of the result of interpretation, is called an interpretant of the sign • The essence of a sign is that it can be interpreted by an interpreter to refer to something else, a designatum • This is a simplified description of a thinking process called semiosis • its mechanism is given by the elementary thought process • The elementary thought process, involving consciousness and the unconscious, concepts and emotions, is a dynamic symbol process • a much more complicated entity than was originally envisioned by the founders of “symbolic AI”

  28. MFT THEORY OF MIND - aesthetic emotions and beauty - • Aesthetic emotions (not related to bodily satisfaction) • Instincts for knowledge and language (learning concept-models) • Emotions (satisfaction-dissatisfaction): harmony-disharmony • Maximize similarity between models and world • between our understanding of how things ought to be and how they actually are in the surrounding world; Kant: aesthetic emotions • Beauty • Harmony is an elementary aesthetic emotion; higher aesthetic emotions • development of more complex “higher” models • The highest form of aesthetic emotion, beauty • related to the most general and most important models • models of the meaning of our existence, of our purposiveness or intentionality • a beautiful object stimulates improvement of the highest models of meaning • Beauty “reminds” us of our purposiveness • Kant called beauty “aimless purposiveness”: not related to bodily purposes • he was dissatisfied by not being able to give a positive definition, lacking the concept of the knowledge instinct • the absence of a positive definition has remained a major source of confusion in philosophical aesthetics to this very day

  29. MFT THEORY OF MIND - physical intuition - • Intuitive perception (imagination) of object-models and their relationships with objects in the world • involves fuzzy unconscious concept-models • in the process of being learned and adapted • toward crisp and conscious models, a theory • such models satisfy or dissatisfy the knowledge instinct before they are accessible to consciousness, hence the complex emotional feel of an intuition • The beauty of a physical theory, often discussed by physicists • is related to satisfying our feeling of purpose in the world • satisfying our need to improve the models of meaning in our understanding of the universe

  30. REAL WORLD APPLICATIONS • Many applications have been developed • Government • Medical • Commercial • Sensor signals processing and object recognition • Variety of sensors • Internet search engines • Based on text understanding • Financial market predictions • Market crash on 9/11 predicted a week ahead

  31. BACK UP • MFT and Buddhism • MFT vs. biology • Classical methodology flowchart • MFT flowchart • MFT vs. inverse problems • MFT predictions and testing • MFT future directions • Publications

  32. MFT AND BUDDHISM • Fundamental Buddhist notion of “Maya” • the world of phenomena, “Maya”, is a meaningless deception • penetrates into the depths of perception and cognition • phenomena are not identical to things-in-themselves • Fundamental Buddhist notion of “Emptiness” • “consciousness of bodhisattva wonders at perception of emptiness in any object” (Dalai Lama 1993) • any object is first of all a phenomenon accessible to cognition • the value of any object for satisfying the “lower” bodily instincts is much less than its value for satisfying higher needs, the knowledge instinct • Bodhisattva's consciousness is directed by the knowledge instinct • concentration on “emptiness” does not mean emotional emptiness but the opposite: fullness with the highest emotions, related to the knowledge instinct, beauty, and the spiritually sublime

  33. MFT vs. BIOLOGY OF EYE • Human eye is part of the brain • Integrated sensor-processor system in both design and operation • multi-layer hierarchical system • integrated, adaptive, optimized resource allocation • Feedback from higher layers to lower layers is essential • the eye-brain neural pathway contains more feedback connections than feedforward ones • Adaptive, joint optimization of the sensor-processor system network • based on a hierarchy with feedback among layers and modules • What is the nature of this feedback? • MFT hierarchy with feedback • every layer has 5 basic modules/elements: (1) incoming signals (structured at the lower layer, unstructured at the current layer) (2) models: phenomenology (emissivity, geometry) and simulation codes (3) similarity measure between signals and models (4) adaptation mechanism (5) outgoing signals (a structure: sign-concept)

  34. CLASSICAL METHODOLOGY [flowchart] Input: world/scene -> Sensors/Effectors -> signals -> Recognition against MODELS/templates (objects, sensors, physical models) -> Result: conceptual objects

  35. MFT basic two-layer hierarchy: signals and concepts [flowchart] Input: world/scene -> Sensors/Effectors -> signals -> Correspondence/Similarity measures between input signals and MODELS (objects, sensors, physical models; models generate their own signals for comparison) -> Attention/Action feedback to sensors -> Result: conceptual objects

  36. MFT VS. INVERSE SCATTERING • Inverse Scattering in Physics • reconstruction of target properties by “propagating back” scattered fields • usually complicated, ill-posed problems (exception: CAT scan) • Biological systems (the mind) solve this problem all the time • by utilizing prior information (in feedback neural pathways) • Classical Tikhonov inversion cannot use knowledge • the regularization parameter (α) is a constant (a sketch follows below) • Morozov's modification can utilize a prior estimate of errors for α • Inverse Scattering using MFT • can utilize any prior knowledge (α becomes an operator) • utilizes prior knowledge adaptively (α depends on parameters)
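
For concreteness, the classical scheme being contrasted (standard Tikhonov regularization, general textbook material rather than anything from the slides): minimize ||Ax - b||^2 + α||x||^2 with a constant scalar α; MFT's claim is that α should become an adaptive, knowledge-bearing operator:

    import numpy as np

    def tikhonov_inverse(A, b, alpha):
        # x = argmin ||A x - b||^2 + alpha * ||x||^2
        #   = (A^T A + alpha * I)^(-1) A^T b, with alpha a CONSTANT scalar
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

    # an ill-conditioned forward operator: unregularized inversion is unstable
    A = np.vander(np.linspace(0.0, 1.0, 20), 8, increasing=True)
    rng = np.random.default_rng(2)
    b = A @ np.ones(8) + 1e-4 * rng.standard_normal(20)
    print(tikhonov_inverse(A, b, alpha=1e-8))  # regularized estimate of x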

  37. MFT PREDICTIONS AND TESTING • General neural mechanisms of the elementary thought process • confirmed by neural and psychological experiments • includes neural mechanisms for bottom-up (sensory) signals, top-down (“imagination”) model-signals, and the resonant matching between the two • Adaptive modeling abilities • well studied: adaptive parameters are synaptic connections • Instinctual learning mechanisms • studied in psychology and linguistics • Ongoing and future research will confirm, disprove, or suggest modifications to • mechanisms of language and thinking integration • mechanisms of model parameterization and parameter adaptation • reduction of fuzziness during learning • similarity measure as a foundation of knowledge and language instincts

  38. MFT FUTURE DIRECTIONS • Developing MFT models based on known linguistic models • Differentiated forms of knowledge instinct • highly differentiated emotions are involved in human conversation and human thinking • multiple measures of similarity, differentiated knowledge instinct • differentiated emotional concepts • Quantum Computing MFT devices

  39. PUBLICATIONS OXFORD UNIVERSITY PRESS www.oup-usa.org
