Short Term Memories and Forcing the Re-use of Knowledge for Generalization Laurent Orseau INSA/IRISA Rennes, France
Neural Networks • Good learning devices for many interesting problems, with generalization as the goal • In theory, RNNs are equivalent to Turing machines in representational power, but learning is a different problem
Marcus et al. Task (1999) • Surface, explicit sequence: ABCDE • E follows ABCD, not another letter • Abstraction: AAB, BBD, EEA, … (≠ABA, BCB, …) • Repetition is important, not explicit symbols • Infants can learn it • (R)NNs cannot! • One solution (Dominey et al.): Add special Short Term Memories (STMs) to a Temporal Recurrent Network
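A tiny illustration (not from the paper) of the abstraction involved in the Marcus et al. task: what matters is the repetition pattern of a triple, not the symbols themselves.

```python
# Classify a triple of symbols by its repetition pattern, ignoring identity.
def pattern(triple):
    a, b, c = triple
    if a == b and b != c:
        return "AAB"
    if a != b and a == c:
        return "ABA"
    return "other"

print(pattern("AAB"), pattern("BBD"), pattern("EEA"))   # AAB AAB AAB
print(pattern("ABA"), pattern("BCB"))                   # ABA ABA
```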
Only “Mere” Abstraction? • STMs: from abstraction to generalization? • Refine Dominey’s STMs • Use the temporal framework, even for static tasks • Force the re-use of knowledge • Experiment on a general classification task
Short Term Memories • STM #d is activated if an input is activated a second time, d time steps later • Seq. AA (or BB, etc.) activates STM #1 • Seq. BAB activates STM #2 • Seq. ABCA activates STM #3 • etc.
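A minimal sketch of this STM rule, assuming one unit per delay d that fires when the current symbol already occurred d steps earlier (function and variable names are illustrative):

```python
def stm_activations(sequence, max_delay=4):
    """Return, for each time step, the set of STM delays that fire."""
    activations = []
    for t, symbol in enumerate(sequence):
        fired = {d for d in range(1, max_delay + 1)
                 if t - d >= 0 and sequence[t - d] == symbol}
        activations.append(fired)
    return activations

# Examples from the slide:
print(stm_activations("AA"))    # last step fires STM #1
print(stm_activations("BAB"))   # last step fires STM #2
print(stm_activations("ABCA"))  # last step fires STM #3
```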
Architecture (1) • Simple TDNN: • k input sets • Input set i-1 = copy of input set i at t-1 • STMs on inputs: • k STMs/input set, each for a different delay • automatically updated (internal)
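A hedged sketch of how such an input vector might be assembled (an assumption, not the paper's exact encoding): a window of the last k symbols, each one-hot encoded, plus one STM flag per input set and delay, here computed only within the window.

```python
import string

ALPHABET = string.ascii_lowercase          # 26 input/action symbols
K = 4                                      # window size / number of input sets

def encode(history, k=K):
    """Encode the last k symbols of `history` plus STM repetition flags."""
    window = ([None] * k + list(history))[-k:]   # oldest ... newest
    vector = []
    for symbol in window:                        # k one-hot input sets
        vector += [1.0 if symbol == c else 0.0 for c in ALPHABET]
    for pos, symbol in enumerate(window):        # k STM flags per input set
        for d in range(1, k + 1):
            earlier = pos - d
            repeated = (symbol is not None and earlier >= 0
                        and window[earlier] == symbol)
            vector.append(1.0 if repeated else 0.0)
    return vector

print(len(encode("gpaa")))   # k*26 one-hot units + k*k STM flags = 120
```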
Architecture (2) • External loop (no learning) • Agent “hears” what it says, “thinks aloud”
External Loop • Feed forward only: the agent does not hear what it says • External loop: agent says ABCDE → agent hears ABCDE • Actions and inputs are merged: teacher says ABCDE → agent hears ABCDE • STMs are used for both: copying or recognizing a repetition • Supervision with reinforcements
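A hedged sketch of this loop, with `agent_policy` as a hypothetical stand-in for the trained network; the only point is that teacher symbols and the agent's own answers land on the same input stream.

```python
def run_episode(teacher_symbols, agent_policy, max_steps=10):
    heard = []                        # merged teacher + agent stream
    for s in teacher_symbols:         # the teacher speaks first
        heard.append(s)
    for _ in range(max_steps):        # then the agent may continue alone
        action = agent_policy(heard)  # may be None (stay quiet)
        if action is None:
            break
        heard.append(action)          # external loop: the agent hears itself
    return heard
```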
Action Selection & Learning • Action selection: • Each input-action tested • One with best predicted reinforcement chosen • If agent must say input-action a: • If it does say a: teacher rewards it • If it says another letter: teacher punishes it • If it says nothing: teacher says a in the agent’s place and rewards it
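A minimal sketch of the selection rule, assuming a hypothetical `predict_reinforcement(heard, action)` that returns the network's predicted reinforcement; staying quiet is modelled here as returning None when no candidate beats a threshold.

```python
def select_action(heard, candidates, predict_reinforcement, threshold=0.0):
    # Test each candidate input/action and keep the best predicted reinforcement.
    scored = [(predict_reinforcement(heard, a), a) for a in candidates]
    best_value, best_action = max(scored)
    return best_action if best_value > threshold else None  # None = stay quiet
```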
Forcing the Re-use of Knowledge • The teacher uses the loop to make the agent re-use its own knowledge: auto-stimulation • Ex 1: • Knowing: ABC, ABCD, CDE • Teacher: AB → Agent: C → Hears: ABC (loop) → Agent: D → Hears: ABCD → Agent: E • By saying AB, the teacher forces CDE
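A toy trace of Ex 1, with a lookup table standing in for the learned network (purely illustrative), showing how the external loop chains ABC, ABCD, and CDE into ABCDE:

```python
KNOWN = {"AB": "C", "ABC": "D", "CD": "E"}    # known continuations: suffix -> next letter

def toy_policy(heard):
    text = "".join(heard)
    for suffix, nxt in KNOWN.items():
        if text.endswith(suffix):
            return nxt
    return None                               # stay quiet

heard = list("AB")                            # the teacher says AB
while (nxt := toy_policy(heard)) is not None:
    heard.append(nxt)                         # the agent hears its own answer
print("".join(heard))                         # -> "ABCDE"
```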
Classification Task • Groups of letters: A = abcde, B = fghij, C = klmno, D = pqrst, E = uvwxy, F = z • Task1 (rote learning): • What is the group of m? → C • But the agent does not know what group A contains, etc. • Task2, knowing Task1: • Is m in the group C? → yes • Seems simple for a human, but not for NNs alone • Where is the need for abstraction?
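The mapping itself, written out as a plain dict (group names in lowercase, since the agent's alphabet is restricted to the 26 letters, as on the Parameters slide):

```python
GROUPS = {"a": "abcde", "b": "fghij", "c": "klmno",
          "d": "pqrst", "e": "uvwxy", "f": "z"}
GROUP_OF = {letter: g for g, letters in GROUPS.items() for letter in letters}

print(GROUP_OF["m"])          # -> "c"   (Task1: the group of m is C)
print(GROUP_OF["m"] == "c")   # -> True  (Task2: is m in group C? yes)
```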
Learning Task1 • Rote learning, no generalization needed • Teacher: gpa → Agent: a → Hears: gpaa • Teacher: gpb → Agent: a → Hears: gpba • … • Teacher: gpz → Agent: f → Hears: gpzf • gp: • name of the problem • needs to be re-used later
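A sketch of this teaching protocol, reusing the GROUP_OF dict from the previous snippet (an assumption: each letter is simply presented once per pass):

```python
def task1_episodes():
    for letter in "abcdefghijklmnopqrstuvwxyz":
        prompt = "gp" + letter        # "gp" names the problem
        answer = GROUP_OF[letter]     # rewarded answer: the letter's group
        yield prompt, answer          # e.g. ("gpm", "c"), ("gpz", "f")

for prompt, answer in list(task1_episodes())[:3]:
    print(prompt, "->", answer)       # gpa -> a, gpb -> a, gpc -> a
```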
Learning Task2 (1) • Re-use of Task1 through the loop, checked by STM #4: • Teacher: isfb (is f in group B?) → Agent: gpf → Hears: b → STM #4 detects the repetition of b at delay 4 → Agent: y (yes) • Teacher: isfc → Agent: gpf → Hears: b → no repetition of c at delay 4 → Agent: n (no)
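A toy trace of this mechanism (assumptions: the GROUP_OF dict from above, and a hand-coded delay-4 check standing in for STM #4 and the learned network):

```python
def answer_task2(question):              # question = "is" + letter + group
    letter = question[2]
    heard = list(question)               # e.g. i s f b
    heard += ["g", "p", letter]          # auto-stimulation: re-use Task1
    heard.append(GROUP_OF[letter])       # external loop: agent hears the group
    stm4_fires = heard[-1] == heard[-5]  # repetition at delay 4?
    return "y" if stm4_fires else "n"

print(answer_task2("isfb"))   # -> "y"  (f is in group B)
print(answer_task2("isfc"))   # -> "n"  (f is not in group C)
```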
Learning Task2 (2) • Training on: • Letters: a..j, l, n, p, r, s, t • Groups: A, B, C, D • Must generalize to all letters and groups! • (R)NNs alone could do only clustering, no generalization to unseen groups
Parameters • TDNN: • Inputs/actions: 26 letters of the alphabet • k=4: 4 input sets, 4 STMs/input set • Output: predicted reinforcement • The agent must also learn to stay quiet except when an answer is expected
Results • Task1: • 8 hidden neurons • learned perfectly (no generalization) • Task2: • weights frozen • 5 neurons added • Perfect generalization: • Seen groups, unseen letters • But also unseen groups, unseen letters • Not possible without STMs
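A hedged PyTorch sketch of this Task2 regime (an assumption about the implementation, not the paper's code): the 8 Task1 hidden units are frozen, 5 new hidden units are added, and only the new units and the output layer are trained on Task2.

```python
import torch
import torch.nn as nn

n_inputs = 120                        # e.g. k*26 one-hot units + k*k STM flags
old_hidden = nn.Linear(n_inputs, 8)   # learned on Task1, then frozen
new_hidden = nn.Linear(n_inputs, 5)   # fresh units added for Task2
output = nn.Linear(8 + 5, 1)          # single predicted reinforcement

for p in old_hidden.parameters():     # freeze the Task1 knowledge
    p.requires_grad = False

def predict(x):
    hidden = torch.cat([torch.tanh(old_hidden(x)),
                        torch.tanh(new_hidden(x))], dim=-1)
    return output(hidden)

trainable = list(new_hidden.parameters()) + list(output.parameters())
optimizer = torch.optim.SGD(trainable, lr=0.1)   # Task2 updates touch only these
```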