Construction of Meaning


Presentation Transcript


  1. Construction of Meaning. Martin Takáč, Department of Computer Science, University of Otago, New Zealand

  2. Takáč, M.: Construction of Meanings in Living and Artificial Agents. Dissertation thesis, Comenius University, Bratislava, 2007. Supervisor: Lubica Benuskova

  3. Motivation: What is it good for? Application aspect
  • Pre-defined ontologies are not sufficient in dynamic and open environments.
  • It is better to endow the agents with learning abilities and let them discover what is relevant and useful for them.
  • ⇒ a developmental approach to intelligent-systems design

  4. Motivation: What is it good for? Philosophy of AI
  • Can machines understand?
  • Turing Test, Searle's Chinese Room, Harnad's Symbol Grounding
  • Cognitive Science: better understanding of our own cognition

  5. Can machines understand?
  • Can animals understand?
  • Can human infants understand?
  • It depends on the definition of "understanding".
  • Our approach: conceive understanding in such a way that the answer is yes, and see what we can get out of it.

  6. Understanding
  [Figure: Peirce's sign triad – Representamen (form), Interpretant (meaning), Object (referent).]
  • We say that an agent understands its environment if it picks up relevant environmental features and utilizes them for its goals/survival.
  • Situated making of meaning of one's experience
  • Semiotics: Umwelt (von Uexküll), sign (Peirce)
  • Understanding is a gradual phenomenon in the living realm, ranging from very primitive innate forms to complex learned human linguistic cognition.

  7. Key features of meaning
  • Sensorimotor coupling with the environment
  • Incremental and continuous construction of meaning in interactions with an open and dynamic environment
  • Collective coordination of individually constructed meanings
  [Takáč, M.: Construction of Meanings in Living and Artificial Agents. In: Trajkovski, G., Collins, S. G. (eds.): Agent-Based Societies: Social and Cultural Interactions, IGI Global, Hershey, PA, 2009.]

  8. Goal
  • Propose a semantic representation that:
  • could be incrementally and continuously (re)constructed from experience/interactions (sensorimotor coupling)
  • would enable the agent to understand its world:
  • causality (prediction of consequences of actions)
  • planning
  • inference of intentions/internal states of other agents
  • Implement it computationally and measure the results.

  9. Roadmap
  • Semantics of distinguishing criteria
  • Models of autonomous construction of meanings:
  • by sensorimotor exploration
  • by social instruction (labelling)
  • from episodes

  12. Semantics of distinguishing criteria
  • A distinguishing criterion is a basic semantic unit and an abstraction of the ability to distinguish, react differentially, understand (Šefránek, 2002).
  • Neuro-biological motivation: locally tuned detectors (Balkenius, 1999)
  • Geometric representation: conceptual spaces (Gärdenfors, 2000)

  13. Conceptual spaces
  • Metric d common for the whole space ⇒ symmetrical similarity
  • Similarity inversely proportional to distance
  • Concepts represented by prototypes:
  • learning – a prototype computed as the centroid of instances
  • categorization – finding the closest prototype
  • Concept – a (convex) region in the space
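  To make the two operations concrete, here is a minimal Python sketch (hypothetical names, not code from the thesis): learning keeps each prototype at the running centroid of its instances, and categorization picks the nearest prototype under the space's common Euclidean metric.

    import numpy as np

    class Prototype:
        # A concept prototype: the running centroid of all instances seen so far.
        def __init__(self, dim):
            self.center = np.zeros(dim)
            self.n = 0

        def learn(self, x):
            # Incremental centroid update: center remains the mean of the instances.
            self.n += 1
            self.center += (np.asarray(x, float) - self.center) / self.n

    def categorize(x, prototypes):
        # Nearest-prototype categorization; `prototypes` maps names to Prototype.
        x = np.asarray(x, float)
        return min(prototypes,
                   key=lambda name: np.linalg.norm(x - prototypes[name].center))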

  14. Semantics of distinguishing criteria
  A distinguishing criterion r:
  • is incrementally constructed from the incoming sequence of examples of the concept: r ← {x1, …, xN} (learnability)
  • identifies (distinguishes) instances of the concept: r(x) ∈ [0, 1] (identification)
  • auto-associatively completes the input: r(x) → p (auto-associativity)

  15. Distinguishing criteria
  [Figure: an input x evaluated under a criterion's own weighted distance d².]
  • Each criterion uses its own metric, with parameters reflecting the statistical properties of its input sample set.
  • All learning starts from scratch, and is online and incremental!
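  A sketch of how slides 14-15 might fit together in code, assuming a diagonal metric whose weights are per-attribute variances estimated online (the thesis's actual metric may differ):

    import numpy as np

    class Criterion:
        # A distinguishing criterion sketch: learned online from scratch, one
        # instance at a time, with its own diagonal metric derived from
        # per-attribute variances. (Hypothetical code, not the thesis model.)
        def __init__(self, dim):
            self.n = 0
            self.mean = np.zeros(dim)   # the prototype p
            self.var = np.zeros(dim)    # per-attribute variances (metric weights)

        def update(self, x):
            # learnability: r <- {x1, ..., xN}, incrementally (Welford's method)
            self.n += 1
            x = np.asarray(x, float)
            delta = x - self.mean
            self.mean += delta / self.n
            self.var += (delta * (x - self.mean) - self.var) / self.n

        def identify(self, x):
            # identification: r(x) in [0, 1], highest near the prototype under
            # the criterion's own variance-weighted distance
            d2 = np.sum((np.asarray(x, float) - self.mean) ** 2
                        / np.maximum(self.var, 1e-6))
            return float(np.exp(-0.5 * d2))

        def complete(self, x):
            # auto-associativity: r(x) -> p (returns the prototype; this
            # simplest version ignores the cue itself)
            return self.mean

  With this reading, identify() realizes r(x) ∈ [0, 1] and complete() returns the prototype p, the simplest form of auto-associative completion.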

  16. Spectral decomposition of the covariance matrix
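  The slide gives only the title; presumably the covariance of a criterion's sample set is eigendecomposed so that its receptive field aligns with the principal axes of the data. A small numpy illustration of that operation on synthetic data (my reading of the slide, not the thesis code):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])

    # Sample covariance of the inputs.
    C = np.cov(X, rowvar=False)

    # Spectral decomposition C = V diag(lam) V^T: eigenvectors V are the
    # principal axes of the input distribution, eigenvalues lam the variances
    # along them.
    lam, V = np.linalg.eigh(C)

    # Coordinates along the principal axes, scaled by 1/sqrt(lam), yield an
    # ellipsoidal (Mahalanobis-like) receptive-field geometry.
    Y = (X - X.mean(axis=0)) @ V / np.sqrt(lam)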

  17. Receptive fields
  [Figure: receptive fields of individual detectors plotted over attribute axes a1 and a2.]

  18. Types of distinguishing criteria
  [Figure: example criteria of different types – an object ("house"); attributes ("big", "blue", "triangle"); a relation ("left_of"); a change between t and t+1 ("grew"); episodes ("a bulldozer pushed the house from the left", "the house fell down").]

  19. Roadmap
  • Semantics of distinguishing criteria
  • Models of autonomous construction of meanings:
  • by sensorimotor exploration
  • by social instruction (labelling)
  • from episodes

  21. Mechanisms of meaning construction
  • We know how to construct a criterion from its sample set: r ← {x1, …, xN}.
  • Practical problem – delineating the sample set (which criterion should be fed with the current stimuli?):
  • unsupervised (clustering) – environmental relevance
  • by pragmatic feedback – ecological relevance
  • by naming (labeling) – social relevance

  23. Meaning creation by sensorimotor exploration
  • Environment:
  • A virtual child, surrounded by objects: fruits, toys, furniture.
  • In discrete time steps, the child performs random actions on randomly chosen objects: trying to lift them or put them down (with various parameters – force, arm angle).
  • Actions performed on objects cause changes of their attribute values. Simple physics is simulated.
  • Learning:
  • The sensations of the child take the form of perceptual frames (sets of attribute-value pairs) of objects, actions and changes: [xa, xo, xc].
  • The child creates and updates criteria of objects Co, actions Ca and changes Cc, and their associations V ⊆ Ca × Co × Cc (all sets initially empty).
  • Objects and actions are grouped into categories by the change: if an action leads to the same change on several objects, they all fall into the same category, and vice versa.
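  The exploration-and-update cycle described above might be organized like the following loop; every name in it (world.apply, child.update_criteria, ...) is a hypothetical stand-in for the model's actual interfaces:

    def explore(child, world, steps=1000):
        # Random sensorimotor exploration: act on a random object, perceive
        # the resulting change, and update the criteria Ca, Co, Cc and their
        # associations from the frame triple [x_a, x_o, x_c].
        for _ in range(steps):
            obj = world.random_object()                  # e.g. a fruit or a toy
            action, params = child.random_action()       # e.g. lift, {force, angle}
            x_o = world.perceive(obj)                    # object frame before acting
            world.apply(action, params, obj)             # simple simulated physics
            x_c = world.diff(x_o, world.perceive(obj))   # perceptual frame of change
            child.update_criteria(x_a=(action, params), x_o=x_o, x_c=x_c)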

  24. Architecture
  [Figure: agent-world loop. The agent comprises Perception, Proprioception, a Causal module (objects, actions, consequences, changes), a Scheduler, an Action repertoire, and a Motivation system (needs, goals). Example action: lift({force: 10, angle: 45}). Example percept: {vertices: 3, posX: 20, posY: 7, R: 0, G: 0, B: 255}.]

  25. Meaning creation by sensorimotor exploration – results
  • Causal relations – the child became able to predict the consequences of its own actions.
  • Affordances:
  • "objects too heavy to be lifted"
  • "objects that cannot be put down (because they are already on the ground)"
  • Gradually growing sensitivity proved helpful.

  26. Roadmap
  • Semantics of distinguishing criteria
  • Models of autonomous construction of meanings:
  • by sensorimotor exploration
  • by social instruction (labelling)
  • from episodes

  27. The agent's architecture
  [Figure: the child receives percepts from the environment (e.g. {vertices: 3, posX: 20, posY: 7, R: 0, G: 0, B: 255}) together with language labels ("big", "blue"); Perception feeds Learning/Categorization into Concepts, which connect to Pragmatics (actions, causality, goals, planning) and back to Actions in the Environment.]

  28. Cross-situational learning
  • No-true-homonymy assumption: different words have different senses, even if they share a referent (in this case, they denote different aspects of the referent).
  • No-true-synonymy assumption: all referents of a word across multiple situations are considered instances of the same concept.
  • ⇒ The more contexts of use, the better the chance that essential properties stay invariant while unimportant ones vary.
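  Under the no-true-synonymy assumption, a word's meaning can be approximated by pooling its referents over situations and keeping what stays invariant. A toy sketch (numeric attributes only, hypothetical threshold, not the thesis's criterion-based mechanism):

    from collections import defaultdict

    class CrossSituationalLearner:
        # Each word accumulates statistics over all situations of its use;
        # attributes that stay invariant across contexts dominate its meaning.
        def __init__(self):
            self.stats = defaultdict(lambda: defaultdict(list))

        def observe(self, word, referent_frame):
            # referent_frame: dict of attribute -> value for the named referent
            for attr, value in referent_frame.items():
                self.stats[word][attr].append(value)

        def meaning(self, word, max_spread=0.1):
            # Keep attributes whose values varied little across situations.
            out = {}
            for attr, vals in self.stats[word].items():
                mean = sum(vals) / len(vals)
                spread = max(vals) - min(vals)
                if spread <= max_spread * (abs(mean) + 1):
                    out[attr] = mean
            return out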

  29. Construction of meaning by labeling
  [Figure: a scene described as "big", "blue", "triangle"; individual labels ("triangle", "blue") and a relation ("left_of") each delineate the sample set of one criterion.]

  30. Iterated learning

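  In outline, the iterated-learning setup behind slides 30-34 can be expressed as a generational loop in which each learner is taught by the previous agent and then teaches the next; describe() and learn() below are hypothetical stand-ins:

    def iterated_learning(generations, scenes, make_agent):
        # Each generation's learner is taught by the previous generation's
        # agent and then becomes the teacher; meanings can shift and drift
        # across the chain of generations.
        teacher = make_agent()
        for _ in range(generations):
            learner = make_agent()
            for scene in scenes:
                labels = teacher.describe(scene)   # e.g. "big", "blue", "triangle"
                learner.learn(scene, labels)       # cross-situational updating
            teacher = learner
        return teacher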

  34. Construction of meaning by labeling – results
  • We measured:
  • similarity of description between teacher and learner
  • ability to locate the referent(s) of a name
  • Good meaning similarity between two subsequent generations
  • Meaning shifts and drift over many generations
  • Replicator dynamics: more relevant and more general meanings survive.
  • Structural meanings more stable.
  [Takáč, M.: Autonomous Construction of Ecologically and Socially Relevant Semantics. Cognitive Systems Research 9 (4), October 2008, pp. 293-311.]

  35. Roadmap
  • Semantics of distinguishing criteria
  • Models of autonomous construction of meanings:
  • by sensorimotor exploration
  • by social instruction (labelling)
  • from episodes

  37. Episodic representation – learned from observed/performed actions
  Example experiment:
  • a 5 × 5 lattice
  • 4 agents (posX, posY, dir, energy)
  • 10 objects (posX, posY, nutrition)
  • Actions: move(steps), turn(angle), eat(howMuch)

  38. Frame representation of episodes
  • Role structure: [ACT, SUBJ, OBJ, ΔSUBJ, ΔOBJ]
  • Example:
  [ACT = {eat: 1; howMuch: 6},
  SUBJ = {dir: 2; @energy: 10; posX: 4; posY: 3},
  OBJ = {nutrition: 129; posX: 3; posY: 3},
  ΔSUBJ = {dir: 0; @energy: +6; posX: 0; posY: 0},
  ΔOBJ = {nutrition: -6; posX: 0; posY: 0}]

  39. Episodic representation can be incomplete (partial)
  • missing roles
  • missing attributes:
  • because they are internal (private)
  • due to noise/stochasticity
  • due to the developmental stage
  • Incompleteness can be used for predictions.
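  As an illustrative encoding (not the thesis's data structure), such frames and their partial variants are easy to render as nested Python dicts; here "dSUBJ"/"dOBJ" stand for the roles written ΔSUBJ/ΔOBJ above:

    # A complete episode frame (values from slide 38's example).
    full = {
        "ACT":   {"eat": 1, "howMuch": 6},
        "SUBJ":  {"dir": 2, "@energy": 10, "posX": 4, "posY": 3},
        "OBJ":   {"nutrition": 129, "posX": 3, "posY": 3},
        "dSUBJ": {"dir": 0, "@energy": +6, "posX": 0, "posY": 0},
        "dOBJ":  {"nutrition": -6, "posX": 0, "posY": 0},
    }

    # A partial frame, e.g. as seen by an observer that cannot sense another
    # agent's internal @energy: the role is present, the attribute missing.
    partial = {
        "ACT":  {"eat": 1},
        "SUBJ": {"dir": 2, "posX": 4, "posY": 3},   # no @energy
        "OBJ":  {"nutrition": 129, "posX": 3, "posY": 3},
    }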

  40. Recall from partial episode
  • cue with SUBJ only: [ –, SUBJ, –, –, – ] → the subject's abilities (what can I do?)
  • cue with OBJ only: [ –, –, OBJ, –, – ] → the object's affordances (what can be done with it?)
  • cue with ACT only: [ ACT, –, –, –, – ] → verb islands (how, and upon what, to perform the action?)
  • cue with the desired change: [ –, –, –, ΔSUBJ, ΔOBJ ] → action selection/planning (how to achieve a desired change?)
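  Using the dict encoding sketched above, all four kinds of recall reduce to one operation: find the stored episode that best agrees with whatever the partial frame specifies. A minimal (assumed) scoring scheme:

    def recall(partial, episodes):
        # Complete a partial frame with the best-matching stored episode,
        # scoring each episode by agreement on the attributes the cue specifies.
        def score(ep):
            s = 0.0
            for role, attrs in partial.items():
                for attr, value in attrs.items():
                    if attr in ep.get(role, {}):
                        s += 1.0 / (1.0 + abs(ep[role][attr] - value))
            return s
        return max(episodes, key=score)

    # recall({"OBJ": {"nutrition": 129}}, memory)    -> the object's affordances
    # recall({"dSUBJ": {"@energy": +6}}, memory)     -> planning: how to gain energy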

  41. Requirements
  • Open set of possible attributes
  • Stochastic occurrence of attributes
  • Learning from observed/performed actions:
  • incremental
  • permanent
  • performance while learning & learning from performance
  • Fast learning – reasonable performance after seeing one or a few examples

  42. Architecture
  [Figure: incoming frames [ACT, SUBJ, OBJ, ΔSUBJ, ΔOBJ] pass through a primary layer into an episodic layer.]

  43. Primary layer
  • transforms the continuous real domain of an attribute into a vector of real activities in [0, 1]
  • covers the real domain with a set of nodes (1-dimensional detectors), each reacting to a neighborhood of some real value
  • neurobiological motivation – primary sensory cortices (localistic coding)
  • qualitatively important landmarks
  • approximates the distribution of attribute values with the least possible error
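  A minimal sketch of such a detector layer with Gaussian tuning curves; note the centers form a fixed grid here, whereas the slide implies they adapt to approximate the actual value distribution:

    import numpy as np

    class PrimaryLayer:
        # 1-D detectors covering one attribute's real domain (localistic
        # coding); each node reacts to a neighborhood of its preferred value.
        def __init__(self, centers, width):
            self.centers = np.asarray(centers, dtype=float)
            self.width = float(width)

        def activate(self, value):
            # Vector of activities in [0, 1], one per detector (Gaussian tuning).
            return np.exp(-((value - self.centers) ** 2) / (2.0 * self.width ** 2))

    layer = PrimaryLayer(centers=np.linspace(0.0, 100.0, 11), width=5.0)
    print(layer.activate(42.0))   # peaks at the detectors nearest 42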

  44. Episodic layer
  • consists of nodes {e1, e2, …, ek} – episodic "memories"
  • nodes can be added, refined, merged and forgotten
  • A node ei:
  • maintains N, A, and ∀i ∈ A: pi, σi², fi
  • reacts to a frame
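  One plausible reading of the per-node state, with Welford-style incremental updates for each attribute's prototype pi, variance σi² and frequency fi (the thesis's exact update rules may differ):

    class EpisodicNode:
        # Sketch of one episodic node: per-attribute running statistics.
        def __init__(self):
            self.N = 0          # number of episodes matched by this node
            self.stats = {}     # attribute -> [p, sigma2, f]

        def update(self, frame_attrs):
            self.N += 1
            for attr, x in frame_attrs.items():
                p, s2, f = self.stats.get(attr, [0.0, 0.0, 0])
                f += 1
                d = x - p
                p += d / f                      # incremental mean (prototype)
                s2 += (d * (x - p) - s2) / f    # incremental (population) variance
                self.stats[attr] = [p, s2, f]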

  45. Episode-based learning – results
  • Agents were able to acquire causal relations (we measured predictive ability).
  • Auto-associative recall – potential for simple inferences:
  • subject's abilities
  • object's affordances
  • prediction
  • planning
  • Inherently episodic organization of knowledge (implicit categories of objects, properties, relations and actions)
  • Prediction of unobservable properties ("empathy" or ToM)
  [Takáč, M.: Developing Episodic Semantics. In: Proceedings of AKRR-08 (Adaptive Knowledge Representation and Reasoning), 2008.]

  46. Mirroring effect, "empathy", inference of internal states
  • A0 sensed (A3 → O3):
  [ACT = {eat: 1; howMuch: 4},
  SUBJ = {dir: 1; posX: 2; posY: 0},
  OBJ = {nutrition: 1792; posX: 3; posY: 0},
  ΔSUBJ = {dir: 0; posX: 0; posY: 0},
  ΔOBJ = {nutrition: -4; posX: 0; posY: 0}]
  • A0 recalled:
  [ACT = {eat: 1 (100%); howMuch: 2 (50%)},
  SUBJ = {dir: 0 (50%); @energy: 40 (46%); posX: 1 (100%); posY: 0 (100%)},
  OBJ = {nutrition: 1795 (98%); posX: 3 (100%); posY: 0 (100%)},
  ΔSUBJ = {dir: 0 (100%); @energy: 2.5 (45%); posX: 0 (100%); posY: 0 (100%)},
  ΔOBJ = {nutrition: -4 (99%); posX: 0 (100%); posY: 0 (100%)}]
  • Pragmatic success = 0.83
  • Note that the sensed frame lacks A3's internal @energy, while the recalled frame fills it in – an inference of another agent's internal state.

  47. Adding communication (future work)
  • For successful inter-agent communication, meanings should be mutually coordinated and associated with signals in a collectively coherent way.
  • Speech act as a type of action
  • Collective dynamics
  • Pragmatic and contextual language representation:
  • connected to particular states of the speaker (SUBJ) and the hearer (OBJ), possibly leading to changes of their states (ΔSUBJ, ΔOBJ)
  • prediction/production of different utterances depending on the personal style and affective state of the speaker, or inference of the speaker's internal state from its utterance in some context

  48. Conclusion – what we have done
  • A non-anthropocentric conceptual apparatus for the study of meanings in different kinds of agents (virtual, embodied, alive, human...)
  • A computational representation of meanings amenable to autonomous construction, supported by implemented models
  • An interesting hybrid computational architecture that features:
  • openness in terms of possible attributes and categories and their gradual change (no catastrophic forgetting)
  • online learning – from scratch, incremental, fast and permanent
  • dynamic organization
  • amenability to analysis of internal structures

  49. Conclusion – what we haven't done
  • Cognitive modeling: fitting particular empirical/developmental data
  • Neuroscience: fitting particular brain structures
  • Real-scale models/applications: complex environments, many agents, noise tolerance
  • Full-blown semantics: abstract meanings, cultural scenarios and much more
  • … and we haven't even got to language yet…

  50. Thank you for your attention!
