
Neural networks for structured data




  1. Neural networks for structured data

  2. Table of contents • Recurrent models • Partially recurrent neural networks • Elman networks • Jordan networks • Recurrent neural networks • BackPropagation Through Time • Dynamics of a neuron with feedback • Recursive models • Recursive architectures • Learning environment • Linear recursive networks • Recursive networks for directed and cyclic graphs • Graph neural networks

  3. Introduction - 1 • Network architectures able to process “plain” data, collected within vectors, are said to be static; they just define a “mapping” between input and output values • In other words… once the network has been trained, it realizes a function between inputs and outputs, calculated according to the learning set • The output at time t depends only on the input at the same time • The network does not have “short-term” memory

  4. Introduction - 2 • Static networks: Feedforward neural networks • They learn a static I/O mapping, Y = f(X), X and Y static patterns (vectors) • They model static systems: classification or regression tasks • Dynamic networks: Recurrent neural networks • They learn a non-stationary I/O mapping, Y(t) = f(X(t)), X(t) and Y(t) are time-varying patterns • They model dynamic systems: control systems, optimization problems, artificial vision and speech recognition tasks, time series predictions

  5. Dynamic networks • Equipped with a temporal dynamic, these networks are able to capture the temporal structure of the input and to “produce” a time-varying output • Temporal dynamic: unit activations can change even in the presence of the same input pattern • Architecture composed of units having feedback connections, both between neurons belonging to the same layer and to different layers • Partially recurrent networks • Recurrent networks

  6. Partially recurrent networks • Feedforward networks equipped with a set of input units, called state or context units • The state units’ activation corresponds to the activation, at the previous time step, of the units that emit feedback signals, and it is sent to the units receiving feedback signals • Elman network (1990) • Jordan network (1986)

  7. Elman networks - 1 • The context unit outputs are equal to those of the corresponding hidden units at the previous (discrete) instant: xc,i(t) = xh,i(t-1) • To train the network, the Backpropagation algorithm is used, in order to learn the hidden-to-output, the input-to-hidden and the context-to-hidden weights • Feedback connections on the hidden layer, with fixed weights equal to one • Context units equal, in number, to the hidden units, and considered just as input units
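The Elman forward pass described above can be sketched in a few lines of NumPy. The layer sizes and the random, untrained weights below are illustrative choices, not values from the slides; the point is the context-copy step xc(t) = xh(t-1).

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 4, 2

# input-to-hidden, context-to-hidden and hidden-to-output weights (all trainable)
W_ih = rng.normal(scale=0.1, size=(n_hid, n_in))
W_ch = rng.normal(scale=0.1, size=(n_hid, n_hid))
W_ho = rng.normal(scale=0.1, size=(n_out, n_hid))

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def elman_forward(seq):
    """One output per input; the context is the hidden activation at t-1."""
    context = np.zeros(n_hid)              # x_c(1) = 0
    outputs = []
    for x in seq:
        # feedback weights are fixed to one: the context is just copied
        h = sigmoid(W_ih @ x + W_ch @ context)
        outputs.append(sigmoid(W_ho @ h))
        context = h.copy()                 # x_c(t+1) = x_h(t)
    return outputs

seq = [rng.normal(size=n_in) for _ in range(5)]
ys = elman_forward(seq)
```

Note how the context enters the hidden layer exactly like an extra input block, which is why standard Backpropagation suffices for training.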

  8. Elman networks - 2 • All the output functions operate on the weighted sum of the inputs • They are linear (identity) functions for both the input and the context layers (which act just as “buffers”), whereas sigmoidal functions are used in the hidden and the output layers • The context layer inserts a single-step delay in the feedback loop: the output of the context layer is presented to the hidden layer, in addition to the current pattern • The context layer adds, to the current input, a value that reproduces the output achieved at the hidden layer based on all the patterns presented up to the previous step

  9. Elman networks - 3 • Learning: all the trainable weights are attached to feedforward connections • 1) The activation of the context units is initially set to zero, i.e. xc,i(1) = 0, ∀i, at t = 1 • 2) Input pattern xt: evaluation of the activations/outputs of all the neurons, based on the feedforward transmission of the signal along the network • 3) Weight updating using Backpropagation • 4) Let t = t+1 and go to 2) • The Elman network produces a finite sequence of outputs, one for each input • The Elman network is normally used for object trajectory prediction, and for the generation/recognition of linguistic patterns

  10. Jordan networks - 1 • Feedback connections on the output layer, with fixed weights equal to one • Self-feedback connections for the state neurons, with constant weights equal to a; a < 1 is the recency constant

  11. Jordan networks - 2 • The network output is sent to the hidden layer by using a context layer • The activation, for the context units, is determined based on the activation of the same neurons and of the output neurons, both calculated at the previous time step: xc,i(t) = xo,i(t-1) + a·xc,i(t-1) • Self-connections allow the context units to develop a local or “individual” memory • To train the network, the Backpropagation algorithm is used, in order to learn the hidden-to-output, the input-to-hidden and the context-to-hidden weights
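The Jordan context update can be sketched directly from the rule above; the recency constant a = 0.5 and the toy output values are arbitrary illustrative choices.

```python
import numpy as np

a = 0.5  # recency constant, a < 1

def jordan_context_update(context, output):
    """x_c(t) = x_o(t-1) + a * x_c(t-1): output feedback plus decayed self-feedback."""
    return output + a * context

c = np.zeros(2)                                   # x_c(1) = 0
c = jordan_context_update(c, np.array([1.0, 0.0]))  # after step 1
c = jordan_context_update(c, np.array([1.0, 0.0]))  # after step 2
```

After two identical outputs the first component is 1 + a·1 = 1.5: the self-connection lets old outputs persist with geometrically decaying weight, which is the “individual memory” of the context units.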

  12. Jordan networks  3 • The context layer inserts a delay step in the feedback loop: the context layer output is presented to the hidden layer, in addition to the current pattern • The context layer adds, to the input, a value that reproduces the output achieved by the network based on all the patterns presented up to the previous step, coupled with a fraction of the value calculated, also at the previous step, by the context layer itself (via selfconnections)

  13. Recurrent networks - 1 • A neural network is said to be recurrent if it contains some neurons whose activations depend directly or indirectly on their outputs • In other words, following the signal transmission through the network, cyclic paths exist that connect one or more neurons with themselves: • without crossing other neurons → direct feedback (xi(t) explicitly appears in the evaluation of ai(t+1)) • and/or crossing other neurons → indirect feedback • A fully connected neural network is always a recurrent network

  14. Recurrent networks - 2 • A recurrent network processes a temporal sequence by using an internal state representation that appropriately encodes all the past information injected into its inputs • Memory arises from the presence of feedback loops between the outputs of some neurons and the inputs of other neurons belonging to previous layers • Assuming a synchronous update mechanism, the feedback connections have a memory element (a one-step delay) • The inputs are sequences of arrays, Up = (up(1), …, up(Tp)), where Tp represents the length of the p-th sequence (in general, sequences of finite length are considered, even if this is not a necessary requirement)

  15. Recurrent networks - 3 • Using an MLP as the basic block, multiple types of recurrent networks may be defined, depending on which neurons are involved in the feedback • The feedback may be established from the output to the input neurons • The feedback may involve the outputs of the hidden layer neurons • In the case of multiple hidden layers, feedbacks can also be present on several layers • Therefore, many different configurations are possible for a recurrent network • The most common architectures exploit the ability of MLPs to implement nonlinear functions, in order to realize networks with a nonlinear dynamics

  16. Recurrent networks - 4 • The behaviour of a recurrent network (during a time sequence) can be reproduced by unfolding it in time, obtaining the corresponding feedforward network: x(t) = f(x(t-1), u(t)), y(t) = g(x(t), u(t))
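The state-space equations above can be sketched as a single step function applied repeatedly over the sequence; f and g below are toy linear functions, chosen only so the trace is easy to check by hand.

```python
def rnn_step(x_prev, u_t, f, g):
    """One time step: x(t) = f(x(t-1), u(t)); y(t) = g(x(t), u(t))."""
    x_t = f(x_prev, u_t)
    return x_t, g(x_t, u_t)

def unfold(seq, f, g, x0):
    """Unfolding in time: the same step (same weights) is applied at every t."""
    x, ys = x0, []
    for u in seq:
        x, y = rnn_step(x, u, f, g)
        ys.append(y)
    return ys

# toy dynamics: the state accumulates the inputs, the output doubles the state
f = lambda x, u: x + u
g = lambda x, u: 2 * x
trace = unfold([1, 2, 3], f, g, 0)  # states 1, 3, 6 -> outputs 2, 6, 12
```

The loop makes the unfolding explicit: the feedforward network obtained by unrolling has one “layer” per time step, all sharing the same f and g.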

  17. Recurrent processing • Before starting to process the p-th sequence, the state of the network is initialized to an assigned value (initial state) xp(0) • Every time the network begins to process a new sequence, there occurs a preliminary “reset” to the initial state, losing the memory of the past processing phases; that is, we assume to process each sequence independently from the others • At each time step, the network calculates the current output of all the neurons, starting from the input up(t) and from the state xp(t-1)

  18. Processing modes • Let us suppose that the L-th layer represents the output layer… • The neural network can be trained to transform the input sequence into an output sequence of the same length (Input/Output transduction) • A different case is when we are interested only in the network response at the end of the sequence, so as to transform the sequence into a vector • This approach can be used to associate each sequence to a class in a set of predefined classes
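The two processing modes differ only in which outputs are kept. A sketch with a toy accumulator step (the step function is illustrative, not a trained network):

```python
def transduce(seq, step, x0):
    """I/O transduction: one output for every element of the sequence."""
    x, ys = x0, []
    for u in seq:
        x, y = step(x, u)
        ys.append(y)
    return ys

def encode(seq, step, x0):
    """Sequence-to-vector: keep only the response at the last step T_p."""
    return transduce(seq, step, x0)[-1]

# toy step: the state accumulates the inputs, the output exposes the state
step = lambda x, u: (x + u, x + u)
per_step = transduce([1, 2, 3], step, 0)   # same length as the input
code = encode([1, 2, 3], step, 0)          # a single value for the sequence
```

For classification, `code` would then be fed to (or produced by) the output layer and compared against the class target.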

  19. Learning in recurrent networks • BackPropagation Through Time (BPTT; Rumelhart, Hinton, Williams, 1986) • The temporal dynamics of the recurrent network is “converted” into that of the corresponding unfolded feedforward network • Advantage: very simple to calculate • Disadvantage: heavy memory requirements • Real-Time Recurrent Learning (RTRL; Williams, Zipser, 1989) • Recursive calculation of the gradient of the cost function associated with the network • Disadvantage: computationally expensive

  20. Learning Set • Let us consider a supervised learning scheme in which: • input patterns are represented by sequences • target values are represented by subsequences • Therefore, the supervised framework is supposed to provide a desired output only with respect to a subset of the processing time steps • In the case of sequence classification (or sequence coding into vectors) there will be a single target value, at time Tp

  21. Cost function • The learning set is composed of sequences, each associated with a target subsequence, where ϵ stands for empty positions possibly contained in the target sequence • The cost function, measuring the difference between the network output and the target sequence, for all the examples belonging to the learning set, is defined as the sum, over the sequences and over the supervised time steps ti, of the instantaneous errors epW(ti), where epW(ti) is expressed as the Euclidean distance between the output vector and the target vector (but other distances may also be used)
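A sketch of this cost, with the empty positions ϵ skipped; representing ϵ as `None` is an illustrative encoding choice, and the halved squared Euclidean distance is one common form of the instantaneous error.

```python
import numpy as np

EMPTY = None  # stands for ϵ: a time step with no supervision

def sequence_cost(outputs, targets):
    """Sum of instantaneous errors over the supervised time steps only."""
    cost = 0.0
    for y, d in zip(outputs, targets):
        if d is not EMPTY:
            diff = np.asarray(y) - np.asarray(d)
            cost += 0.5 * float(diff @ diff)   # halved squared Euclidean distance
    return cost

ys = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
ds = [EMPTY, np.array([0.0, 0.0])]   # target only at the last step (classification)
cost = sequence_cost(ys, ds)
```

With a target only at t = Tp, as in sequence classification, the cost reduces to the error on the final output.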

  22. BackPropagation Through Time - 1 • Given the outputs to be produced, the network can be trained using BPTT • Using BPTT means… • …considering the corresponding feedforward network unfolded in time → the length Tp of the sequence to be learnt must be known • …updating all the weights wi(t), t = 1,…,Tp, in the feedforward network, which are copies of the same wi in the recurrent network, by the same amount, corresponding to the sum of the various updates computed in the different layers → all the copies of wi(t) must be kept equal

  23. BackPropagation Through Time - 2 • Let N be a recurrent network that must be trained, from 0 to Tp, on a sequence of length Tp • On the other hand, let N* be the feedforward network obtained by unfolding N in time • With respect to N* and N, the following statements hold: • N* has a “layer” that contains a copy of N, corresponding to each time step • Each layer in N* collects a copy of all the neurons contained in N • For each time step t ∈ [0, Tp], the synapse from neuron i in layer l to neuron j in layer l+1 in N* is just a copy of the same synapse in N

  24. BackPropagation Through Time - 3 Recurrent network; feedforward network corresponding to a sequence of length T = 4

  25. BackPropagation Through Time - 4 • The gradient calculation may be carried out in a feedforward-network-like style • The algorithm can be derived from the observation that recurrent processing in time is equivalent to constructing the corresponding unfolded feedforward network • The unfolded network is a multilayer network, on which the gradient calculation can be realized via standard BackPropagation • The constraint that each replica of the recurrent network within the unfolded network must share the same set of weights has to be taken into account (this constraint simply imposes to accumulate the gradient related to each weight over all of its replicas during the network unfolding process)
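The weight-sharing constraint can be illustrated on the smallest possible case: a one-neuron linear recurrent network x(t) = w·x(t-1) + u(t) with a target only at the last step. The gradient of the cost with respect to w is the sum of the contributions of every copy of w in the unfolded network, and it matches a numerical derivative. The network, inputs, and loss are illustrative, not from the slides.

```python
def forward(w, u, x0=0.0):
    """Unfold x(t) = w*x(t-1) + u(t); returns [x(0), x(1), ..., x(T)]."""
    xs = [x0]
    for ut in u:
        xs.append(w * xs[-1] + ut)
    return xs

def bptt_grad(w, u, d, x0=0.0):
    """dE/dw for E = 0.5*(x(T)-d)**2, accumulated over all copies of w."""
    xs = forward(w, u, x0)
    T = len(u)
    delta = xs[T] - d              # error signal at the last layer
    grad = 0.0
    for t in range(T, 0, -1):      # walk the unfolded network backwards
        grad += delta * xs[t - 1]  # contribution of the copy w(t)
        delta *= w                 # propagate through x(t) = w*x(t-1) + u(t)
    return grad

u, w, d = [0.5, -0.2, 0.1], 0.8, 1.0
g = bptt_grad(w, u, d)

# check against a central finite difference
eps = 1e-6
E = lambda w_: 0.5 * (forward(w_, u)[-1] - d) ** 2
g_num = (E(w + eps) - E(w - eps)) / (2 * eps)
```

The single accumulated `grad` is exactly the “sum of the various updates computed in the different layers” mentioned on slide 22.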

  26. BackPropagation Through Time - 5 • The meaning of backpropagation through time is highlighted by the idea of network unfolding • The algorithm is non-local in time (the whole sequence must be processed, storing all the neuron outputs at each time step) but it is local in space, since it uses only variables local to each neuron • It can be implemented in a modular fashion, based on simple modifications to the Back-Propagation procedure normally applied to static MLP networks

  27. Dynamics of a neuron with feedback - 1 • Let us consider a network constituted by only one neuron, equipped with a self-feedback; then: y1 = f(a1), with a1 = ∑i=1..r w1,i zi + w1,r+1 y1 + w1,0 • Equivalently, the equilibrium points of the dynamic system are the intersections of y1 = f(a1) with the straight line y1 = λa1 + μ, where λ = 1/w1,r+1 and μ = -(∑i=1..r w1,i zi + w1,0)/w1,r+1 • Depending on synaptic weights and inputs, the functions f(a1) and λa1 + μ may intersect in one or more points, or may have no intersections (i.e., solutions) • If a solution exists ⇒ information latching

  28. Dynamics of a neuron with feedback - 2 • When using a step transfer function, there can be at most 2 solutions • λ = 0.5, μ = 1.5 (1 solution) • λ = 0.5, μ = 0.5 (2 solutions) • λ = 0.5, μ = -0.5 (1 solution) • λ = -0.5, μ = 0.5 (0 solutions) Step transfer function

  33. Dynamics of a neuron with feedback - 3 • In the case of continuous transfer functions, at least one solution always exists • It can be shown that the same property holds true for any recurrent network • λ = 0.5, μ = 1.5 (1 solution) • λ = 0.5, μ = 0.5 (1 solution) • λ = 0.15, μ = 0.5 (3 solutions) • λ = 0.15, μ = 0.3 (1 solution) • λ = -0.5, μ = 0.5 (1 solution) Logistic transfer function
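The solution counts can be checked numerically: assuming the equilibria are the intersections of the transfer function f(a) with the straight line λa + μ, counting the sign changes of f(a) - (λa + μ) on a fine grid approximates the number of solutions. The grid bounds below are arbitrary but wide enough for these parameters.

```python
import numpy as np

logistic = lambda a: 1.0 / (1.0 + np.exp(-a))

def count_solutions(f, lam, mu, lo=-10.001, hi=9.999, n=20000):
    """Approximate number of intersections of f(a) with the line lam*a + mu."""
    a = np.linspace(lo, hi, n)          # offset bounds avoid landing exactly on a root
    g = f(a) - (lam * a + mu)
    return int(np.sum(np.sign(g[:-1]) != np.sign(g[1:])))

n3 = count_solutions(logistic, 0.15, 0.5)   # the 3-solution case from the slide
n1 = count_solutions(logistic, 0.5, 1.5)    # a 1-solution case
```

Three solutions require the line to be shallower than the logistic's maximum slope (1/4), which is why λ = 0.15 admits them while λ = 0.5 does not.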

  39. Dynamics of a neuron with feedback - 4 • The inability to obtain closed-form solutions imposes to proceed in an iterative fashion • Given the weights and fixed the inputs, an initial value y(0) is assigned to the output vector • Starting from this initial condition, the network then evolves according to a dynamic law for which any solution represents an equilibrium point • Ultimately, the presence of cyclic paths “introduces a temporal dimension” in the network behaviour
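The iterative evolution can be sketched as repeated application of the neuron's update rule from the initial condition y(0); the logistic transfer and the weight/input values below are arbitrary illustrative choices.

```python
import numpy as np

def relax(w_self, ext, y0, steps=200):
    """Iterate y <- f(w_self*y + ext): the dynamic law of a self-feedback neuron."""
    f = lambda a: 1.0 / (1.0 + np.exp(-a))
    y = y0
    for _ in range(steps):
        y = f(w_self * y + ext)
    return y

y_star = relax(w_self=2.0, ext=-1.0, y0=0.9)
# at an equilibrium point, y* = f(2*y* - 1) holds
residual = abs(y_star - 1.0 / (1.0 + np.exp(-(2.0 * y_star - 1.0))))
```

When the map is a contraction (here the loop gain stays below 1), the iteration converges to an equilibrium point regardless of the starting value; with stronger self-feedback it may instead settle on different equilibria depending on y(0), which is the latching behaviour mentioned above.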

  40. Still recurrent networks • The simplest dynamic data type is the sequence, which is a natural way to model temporal domains • In speech recognition, the words, which are the object of the recognition problem, naturally flow to constitute a temporal sequence of acoustic features • In molecular biology, proteins are organized in amino acid strings • The simplest dynamic architectures are recurrent networks, able to model temporal/sequential phenomena

  41. Structured domains - 1 • In many real-world problems, the information is naturally collected in structured data, that have a hybrid nature, both symbolic and subsymbolic, and cannot be represented regardless of the links between some basic entities: • Classification of chemical compounds • DNA analysis • Theorem proving • Pattern recognition • World Wide Web

  42. Structured domains - 2 • Existing links between the basic entities are formalized in complex hierarchical structures, such as trees and graphs • Recursive neural networks are modeled on the structure to be learnt, producing an output that takes into account both the symbolic data, collected in the nodes, and the subsymbolic information, described by the graph topology

  43. Example 1: Inference of chemical properties • Chemical compounds are naturally represented as graphs (undirected and cyclic)

  44. Example 2: Pattern recognition • Each node of the tree contains local features, such as area, perimeter, shape, etc., of the related object, while branches denote inclusion relations

  45. Example 3: Logo recognition

  46. Example 4: A different way of representing images • Segmentation • Region Adjacency Graph (RAG) extraction • RAG transformation into a directed graph

  47. Example 5: First-order logic terms

  48. Can we process graphs as if they were vectors? • Graphs can be converted into vectors by choosing a visiting criterion for all the nodes, but… • Representing a graph as a vector (or a sequence), we will probably “lose” some potentially discriminating information • If the linearization process produces long vectors/sequences, learning can become difficult • Example: binary tree representation using brackets, based on a symmetrical, early, or postponed visit: a(b(d,e),c(,e)) (Newick format); dbeace (symmetrical visit)
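The two linearizations in the example can be reproduced in a few lines; the tuple encoding of the binary tree below is an illustrative choice.

```python
# A node is (label, left, right); None marks a missing child.
tree = ('a', ('b', ('d', None, None), ('e', None, None)),
             ('c', None, ('e', None, None)))

def brackets(t):
    """Bracket (Newick-like) representation, e.g. a(b(d,e),c(,e))."""
    if t is None:
        return ''
    label, left, right = t
    if left is None and right is None:
        return label
    return f'{label}({brackets(left)},{brackets(right)})'

def inorder(t):
    """Symmetrical (in-order) visit: left subtree, node, right subtree."""
    if t is None:
        return ''
    label, left, right = t
    return inorder(left) + label + inorder(right)

s_brackets = brackets(tree)
s_inorder = inorder(tree)
```

Note how the in-order string dbeace already loses structure: the empty left child of c leaves no trace, so distinct trees can map to the same sequence, which is exactly the loss of discriminating information the slide warns about.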

  49. Heuristics and adaptive coding • Instead of selecting a fixed set of features, a network can be trained to automatically determine a fixed-size representation of the graph

  50. Recursive neural networks - 1 • The recursive network unfolding happens in a spatial and temporal dimension, based on the underlying structure to be learnt • At each node v, the state is calculated by means of a feedforward network, as a function of the node label and of the states of the child nodes • The state at node v is the input for the output network in v
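A minimal sketch of the state computation at a node: the state is a feedforward function of the node label and of the (already computed) child states, so the unfolding follows the structure bottom-up. Label/state sizes, the random untrained weights, and the names W_label and W_child are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_label, n_state, max_children = 2, 3, 2

W_label = rng.normal(scale=0.5, size=(n_state, n_label))
# one weight matrix per child position (positional children, as in binary trees)
W_child = rng.normal(scale=0.5, size=(max_children, n_state, n_state))

def node_state(label, child_states):
    """State at v from its label and the states of its children."""
    a = W_label @ label
    for k, child in enumerate(child_states):
        a = a + W_child[k] @ child
    return np.tanh(a)

# unfolding follows the structure: leaves first, root last
leaf = node_state(np.array([1.0, 0.0]), [])
root = node_state(np.array([0.0, 1.0]), [leaf, leaf])
```

The state at the root (or at any node v) would then be fed to a separate output network to produce the prediction for the whole structure.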
