This workshop presentation examines the relationship between network topology and behavioral patterns in spiking neural networks. It covers the construction of transfer functions and the implications of unipolar versus bipolar signal levels for understanding neural codes. Using convolution and the interference network abstraction, a simple nerve-like delay structure is analyzed to interpret the underlying dynamics and delays within neural networks. The findings suggest that neural systems can detect codes and sounds without requiring bipolar signals, offering insights for future studies in computational neuroscience.
How Network Topology Defines its Behavior - Serial Code Detection with Spiking Networks
Dr. Gerd Heinz
Gesellschaft zur Förderung angewandter Informatik e.V., Berlin-Adlershof
Workshop „Autonomous Systems", Herwig Unger & Wolfgang Halang
Hotel Sabina Playa, Cala Millor, Mallorca, 13-17 Oct. 2013
[Figure: Sensor and motor homunculus. Natural History Museum, London]
Contents
• Abstract
• Convolution
• A Small Interference Network
• Construction of Transfer Functions
• Applying a Convolution
• Spike Output
• Frequency Analysis
• Unipolar or Bipolar Signal Levels?
• Interpreting Bursts
• Examples
Abstract
• Compared with technical sensors, the sound and code analysis of the nerve system is fascinating
• We distinguish between the whisper of the wind and the breaking of waves, we know the songs of birds, we hear the dangerous noises of a defective car engine, we feel when an airplane takes off
• And we speak and understand languages: do we have a chance to interpret the function of a nerve net on the level of net structure?
• We try to analyze a simplest delaying network with nerve-like structure
• Our net consists of delays T and weights W
• Based on the Interference Network (IN) abstraction, we transform the net into a transfer function H of a linear time-invariant system (LTI system)
• We use convolution between the input time function and the transfer function to find the "behaviour" of the LTI system
* The work is based on the book "Neuronale Interferenzen", Kap. 8b, S. 181, download: www.gfai.de/~heinz/publications/NI/index.htm
Convolution
• "Faltung" (terminus created by Felix Bernstein, 1922):
  y(t) = (x * h)(t) = ∫ x(τ) · h(t − τ) dτ
• Discrete form (Cauchy product):
  y[n] = Σ_k x[k] · h[n − k]
• Example: FIR filter as direct implementation of convolution, form: Y = X * S (see the sketch below)
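As a minimal Scilab sketch of the direct FIR form above (the sequences X and S are illustrative, not from the slides):

X = [1, 0, 0, 1, 0];                      // input sequence
S = [0.5, 0.3, 0.2];                      // filter coefficients (pulse response)
Y = zeros(1, length(X) + length(S) - 1);  // output length of the full convolution
for n = 1:length(Y)
  for k = 1:length(S)
    if (n-k+1 >= 1) & (n-k+1 <= length(X)) then
      Y(n) = Y(n) + S(k) * X(n-k+1);      // one term of the Cauchy product
    end
  end
end
disp(Y);                                  // identical to convol(S, X)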
A Small Interference Network
[Figure: neurons N and N' coupled by parallel delaying paths; input x(t), output y(t)]
• Our abstraction: the net is described by a delay vector T and a weight vector W, which together define the transfer function H
Construction of Transfer Function H
(Transfer function of the LTI system)
• Discrete transfer function H, seen as a discrete time function with sample distance ts = 1/fs (i.e. fs = 1/ts) and growing index i:
  i = [… 2, 3, 4, 5, 6, 7, 8, 9, …]
  H = [… w(i−1), w(i), w(i+1), w(i+2), w(i+3), w(i+4), w(i+5), w(i+6), …]
• Length of H is at least the delay difference: length(H) ≥ max(T) − min(T)
• Construction of the transfer function of the net by addition of weights: H(j) = H(j) + W(i) with j = T(i), i.e. H(T(i)) = H(T(i)) + W(i)
Get Transfer Function with Scilab

function [H] = trans(T,W,fs);
  if length(T) == length(W) then
    T = T * fs;                  // apply sample rate of H
    T = round(T);                // T becomes index: integer
    H = 1:max(T); H = H * 0;     // create an empty H
    for i = 1:length(T),         // for all T(i), W(i)
      j = T(i),                  // delay becomes the H-index j
      H(j) = H(j) + W(i),        // add the weight to H
    end                          // for
  else                           // if
    printf('\n\nerror: T and W have different size\n');
  end                            // if
endfunction;

H is the transfer function of an LTI system!
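A hedged usage sketch for trans (the delay and weight values below are illustrative; note that every T(i)*fs must round to an index of at least 1):

T  = [0.001, 0.002, 0.004];   // three path delays in seconds (illustrative)
W  = [1, -1, 1];              // the corresponding synaptic weights
fs = 10000;                   // sample rate in Hz
H  = trans(T, W, fs);         // H(10)=1, H(20)=-1, H(40)=1, zero elsewhere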
Applying a Convolution
[Figure: block diagram X → H → Y]
What is the system answer Y for different input functions X? It is simply the convolution of the input time series with H: y(t) = h(t) * x(t)
• Using vectors: Y = X * H
• Scilab form: Y = convol(H,X)
• Fourier analysis of H: F = abs(fft(H))
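The steps above chain into one short Scilab pipeline; a minimal sketch with an illustrative net and input:

H = trans([0.001, 0.002, 0.004], [1, -1, 1], 10000);  // transfer function of the net (illustrative)
X = zeros(1, 50); X(5) = 1;   // input: a single spike at sample 5
Y = convol(H, X);             // system answer of the LTI system
F = abs(fft(H));              // magnitude spectrum of H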
Barker Codes and Spikes
• The Hebbian rule in neuroscience shows that a neuron needs highly synchronous emissions to learn
• We need spikes at the output of the neuron
• Barker codes maximize the spike-like output of long sequences in RADAR technology
• Example (see the Scilab check below):
  H = [1, 1, 1, -1, 1] (Barker code no. 5)
  X = rev(H)
  Y = convol(X,H) = [1, 0, 1, 0, 5, 0, 1, 0, 1]
• But neurons don't have negative signal values! What can we do?
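The Barker example can be verified directly; since Scilab has no rev(), the time reversal below uses the $ index (a minimal check):

H = [1, 1, 1, -1, 1];         // Barker code no. 5
X = H($:-1:1);                // time-reversed code (Scilab has no rev())
Y = convol(X, H);             // yields [1, 0, 1, 0, 5, 0, 1, 0, 1]
disp(Y);                      // the autocorrelation peaks at 5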
Spectral Analysis of Transfer Function H
• The FFT of any unipolar transfer function shows its maximum at frequency f = 0 Hz (DC)
• It is not possible to learn with a unipolar H; codes are AC
[Figure: FFT magnitude of a unipolar {0…1} H with highest level at 0 Hz, compared to a bipolar {-1…1} H]
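A quick Scilab check of the DC effect, using the Barker-5 code in a unipolar and a bipolar version (an illustrative sketch):

Hu = [1, 1, 1, 0, 1];         // unipolar {0...1} version of the code
Hb = [1, 1, 1, -1, 1];        // bipolar {-1...1} version
disp(abs(fft(Hu)));           // [4, 1, 1, 1, 1]: the DC bin dominates
disp(abs(fft(Hb)));           // [3, 2, 2, 2, 2]: the spectrum is much flatter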
Unipolar or Bipolar Signal Levels?
• Unipolar signals, unipolar synapses: {0…1}
Unipolar or Bipolar Signal Levels?
• Bipolar signals, bipolar synapses: {-1…1}
Unipolar or Bipolar Signal Levels?
• Unipolar signals {0…1}, bipolar synapses (neuron) {-1…1}
Unipolar or Bipolar Signal Levels?
Unipolar signals and bipolar synapses (neuron); X, Y: uni {0…1}, H: bi {-1…1}
Big surprise:
• Using unipolar signals X, Y and a bipolar H, the system is not significantly worse than the best case (bi/bi); see the sketch below
Test it:
• Use the related Scilab sources at www.gfai.de/~heinz/publications/papers/2013_autosys.pdf and www.gfai.de/~heinz/techdocs/index.htm#conv
Conclusion
• Nerve systems do not need bipolar signals to detect code and sound, if the synapses are bipolar (inhibiting or exciting)!
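A minimal sketch of the uni/bi case (illustrative: the negative entry of the reversed code is clipped to 0 to get a unipolar input):

H = [1, 1, 1, -1, 1];         // bipolar weights (synapses)
X = [1, 0, 1, 1, 1];          // unipolar spikes: reversed code, negatives clipped to 0
Y = convol(H, X);             // Y = [1, 1, 2, 1, 4, 1, 1, 0, 1]
disp(max(Y));                 // peak 4 at the code position (bi/bi reaches 5)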
Interpreting Bursts
• Noisy groups of pulses are known at different locations in the nerve system
• Is it possible to find the net structure behind them?
The Inverse Procedure
• We interpret a burst as transfer function H (seen as pulse response) and reconstruct the delays T and weights W of the network behind it:

function [T,W] = net(H,fs);       // returns T and W
  j = 1;                          // W-index j
  for i = 1:length(H)             // H-index i
    if H(i) == 0 then ;           // do nothing
    else                          // write the value to W, the index to T
      W(j) = H(i);                // value to W
      T(j) = i;                   // index to T
      j = j+1;                    // increment j
    end;                          // endif
  end;                            // endfor
  T = T ./ fs;                    // multiply with the sample duration
  T = T - min(T);                 // scale to min: reduced T-vector
endfunction;
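A hedged round-trip check of net against trans (the burst H1 is illustrative; the added one-sample shift is an assumption, needed because trans indexes H from 1):

fs = 10000;
H1 = [0, 1, 0, -1, 0, 0, 1];  // a burst, read as pulse response (illustrative)
[T, W] = net(H1, fs);         // W = [1, -1, 1], T = [0, 2, 5] / fs
T = T + 1/fs;                 // re-add one sample: trans() needs indices >= 1
H2 = trans(T, W, fs);         // H2 = [1, 0, -1, 0, 0, 1] (H1 without leading zero)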
Example H = f(T,W)
• Delays T, weights W, transfer function H, reducing the vectors with index r
[Figure: example delay vector, weight vector, reduced T and W, and the resulting transfer function H]
Example: Key X and Keyhole H
[Figure: unipolar key X and keyhole H; max(FFT) at 0 Hz (uni/uni case)]
Conclusion
• To characterize the time and frequency domains, we transform the delays and weights of a simplest interference network into an LTI transfer function
• A procedure [H] = trans(T,W,fs) calculates the (time-discrete) transfer function H (pulse response) of the net from the delay vector T (delay mask) and the weight vector W
• The FFT shows learning problems for unipolar signals and unipolar H because of the dominant DC value
• A mixture of unipolar signals and a bipolar transfer function (weights) is a good alternative (nerve nets)
• Interpreting bursts as transfer functions (pulse responses), we design an inverse procedure [T,W] = net(H,fs) that reconstructs the net structure [T,W] from the transfer function H
• Find the Scilab sources and the paper on the web: www.gfai.de/~heinz/publications/papers/2013_autosys.pdf and www.gfai.de/~heinz/techdocs/index.htm#conv
Relevance for ANN
• The transfer function or pulse response H is responsible for all sequential properties of a network: for code and sound generation or detection
• The lecture shows that even the smallest delays and delay differences change the pulse response H of the network
• Remembering the "Neural Networks" (NN, ANN) approach with layers clocked by clock cycles, we find that the NN approach completely destroys the sequential structure of the network
• In no case are ANN or NN candidates for understanding the function of nerve-like structures
• Thinking about nerves, we need interferential approaches that do not destroy the delay structure of the net
Thank you for your attention!
And the Lord spoke: "Thus I led you onto the path of knowledge. Go now, and carry the message out into the world!"
Dr. G. Heinz, GFaI, Volmerstr. 3, 12489 Berlin, Tel. +49 (30) 814563-490, www.gfai.de/~heinz, heinz@gfai.de
Successful Google search terms: "Interferenznetze", "Mathematik des Nervensystems", "Heinz", "Akustische Kamera"