
Functional Brain Signal Processing: EEG & fMRI Lesson 8



  1. M.Tech. (CS), Semester III, Course B50. Functional Brain Signal Processing: EEG & fMRI, Lesson 8. Kaushik Majumdar, Indian Statistical Institute Bangalore Center. kmajumdar@isibang.ac.in

  2. Artificial Neural Network (ANN) • What does a single node in an ANN do? [Figure: a single node receiving inputs x1, …, x5 through weights w12, …, w52 and producing output y2]
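The node diagram did not survive the transcript, but the computation such a node performs is the standard weighted sum followed by an activation function. In the slide's notation (inputs x1, …, x5 feeding node 2 through weights w12, …, w52), a minimal write-up, with a sigmoid chosen here as an illustrative activation:

```latex
y_2 = f\!\left(\sum_{i=1}^{5} w_{i2}\, x_i\right),
\qquad
f(a) = \frac{1}{1 + e^{-a}} \quad \text{(sigmoid, one common choice of } f\text{)}
```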

  3. More Nodes [Figure: a three-layer network with input layer x1, …, x6, a hidden layer y1, …, y4, and an output node whose value is 1 if the input lies inside the closed region and 0 if outside]

  4. Number of Hidden Layers • Two hidden layers are needed to identify a region such as the annulus shown on the slide (figure not preserved in the transcript). A neural network is basically a function approximator: it can approximate continuous functions by piecewise linear functions (interpolation). Neural networks are therefore also known as universal approximators; a closed-form sketch of this piecewise-linear view follows.
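A minimal sketch of the piecewise-linear-approximation claim: a one-hidden-layer ReLU network whose weights are written down in closed form, so that it exactly reproduces the piecewise-linear interpolant of sin(x) at a few knots. The ReLU activation and the target sin(x) are illustrative assumptions, not from the slides.

```python
# One-hidden-layer ReLU network = piecewise-linear interpolant (closed form).
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

xs = np.linspace(0.0, np.pi, 9)   # interpolation knots
ys = np.sin(xs)                   # function values to match

slopes = np.diff(ys) / np.diff(xs)        # slope of each linear piece
coeffs = np.diff(slopes, prepend=0.0)     # hidden-unit weight = slope change

def network(x):
    # y = ys[0] + sum_k c_k * ReLU(x - x_k): one hidden layer, linear output
    return ys[0] + sum(c * relu(x - xk) for c, xk in zip(coeffs, xs[:-1]))

x = np.linspace(0.0, np.pi, 200)
print("max |error|:", np.max(np.abs(network(x) - np.sin(x))))
```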

  5. Separation or Classification • Separation or classification is nothing but approximating the surface that separates the (mixed) data; in other words, it approximates a continuous function whose graph is the separating surface. A classifier therefore has to approximate the function whose graph is this curve (figure not preserved in the transcript).

  6. Classification by ANN • Most classification tasks can be accomplished by separating the data with a single separating curve or surface. Therefore, for most classification tasks an ANN with a single hidden layer is sufficient. • However, the number of nodes in the hidden layer has to be determined by trial and error for optimal classification; a sketch of that search follows.
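A minimal sketch of the trial-and-error search for the hidden-layer size. scikit-learn is an assumed tool here (the slides name no library), and the data are random placeholders standing in for real features and labels.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                              # placeholder features
y = (np.linalg.norm(X[:, :2], axis=1) < 1.2).astype(int)   # toy labels

# Try a few hidden-layer sizes and compare cross-validated accuracy.
for n_hidden in (2, 4, 8, 16, 32):
    clf = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=2000,
                        random_state=0)
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{n_hidden:>3} hidden nodes: CV accuracy = {score:.3f}")
```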

  7. Universal Approximation • For any continuous mapping f: [0, 1]^n → R^m there must exist a three-layer neural network (having an input or 'fanout' layer with n processing elements, a hidden layer with 2n + 1 processing elements, and an output layer with m processing elements) that implements f exactly. Hecht-Nielsen, 1988.

  8. Duda et al., Chapter 6, pp. 283 & 289. Backpropagation Neural Network • By far the most widely used type of neural network. • Simple yet powerful, even for complex models with hundreds of thousands of parameters. • Its conceptual simplicity and high success rate make it a mainstay of adaptive pattern recognition. • Offers a means to calculate the input-to-hidden-layer weights.

  9. Regularization • Regularization is a deep issue concerning the complexity of the network. The numbers of input and output nodes are fixed, but the number of hidden nodes and the connection weights are not: these are free parameters. If there are too few of them, the training set cannot be adequately learned. If there are too many of them, generalization of the network will be poor

  10. Regularization (cont.) (apart from the added computational complexity). That is, performance on the test data set will degrade, while performance on the training data set may remain very high. [Figure: training seizure pattern vs. testing seizure pattern] Two standard remedies are sketched below.
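A hedged sketch of two standard remedies for this over-parameterization, using scikit-learn's MLPClassifier (an assumed tool, not the course's prescribed method): an L2 weight penalty and early stopping.

```python
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(
    hidden_layer_sizes=(16,),    # the free parameter the slide warns about
    alpha=1e-3,                  # L2 weight penalty (regularization strength)
    early_stopping=True,         # hold out part of the training data and
    validation_fraction=0.1,     # stop when validation score stops improving
    max_iter=2000,
)
# clf.fit(X_train, y_train) is then used as usual.
```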

  11. Hecht-Nielsen, 1988. Backpropagation Architecture [Figure: the three-layer and general backpropagation architectures, with inputs x1, …, x4 and outputs y1, y2]

  12. Hecht-Nielsen, 1988. Backpropagation Architecture (cont.) [Figure only in the original slides]

  13. Backpropagation Algorithm • The criterion function J(w) = (1/2) Σ_k (t_k − z_k)^2 = (1/2)||t − z||^2 has to be minimized, where t and z are the target and network output vectors respectively, and the sum runs over the c output nodes (k = 1, …, c). • The weights are updated by gradient descent, w(m + 1) = w(m) − η ∂J/∂w, where η is the learning rate and m stands for the m'th iteration. A NumPy sketch of these updates follows.
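A minimal NumPy sketch of the two formulas above for a one-hidden-layer network. The sigmoid activation, layer sizes, toy data, and learning rate are illustrative assumptions, not values from the course.

```python
# Minimize J(w) = (1/2)||t - z||^2 by w <- w - eta * dJ/dw (backpropagation).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

X = rng.normal(size=(100, 5))                                   # toy inputs
T = np.stack([X[:, 0] > 0, X[:, 1] > 0], axis=1).astype(float)  # targets t

W1 = rng.normal(scale=0.5, size=(5, 7))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(7, 2))   # hidden -> output weights
eta = 0.5                                 # learning rate

for m in range(2000):                     # m'th iteration
    Y = sigmoid(X @ W1)                   # forward: hidden activations
    Z = sigmoid(Y @ W2)                   # forward: network output z
    dZ = (Z - T) * Z * (1 - Z)            # backprop: output-layer delta
    dY = (dZ @ W2.T) * Y * (1 - Y)        # backprop: hidden-layer delta
    W2 -= eta * Y.T @ dZ / len(X)         # gradient-descent updates,
    W1 -= eta * X.T @ dY / len(X)         # averaged over the batch

Z = sigmoid(sigmoid(X @ W1) @ W2)
print("final J:", 0.5 * np.sum((T - Z) ** 2))
```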

  14. Subasi and Ercelebi, Comp. Meth. Progr. Biomed., 78: 87–99, 2005. Epileptic EEG Signal [Figure only in the original slides]

  15. http://en.wikipedia.org/wiki/Daubechies_wavelet DB4 Wavelet • DB wavelets have no closed-form representation: unlike, say, the Morlet wavelet, they cannot be expressed by an explicit mathematical formula.

  16. http://www.bearcave.com/misl/misl_tech/wavelets/daubechies/index.html DB4 Wavelet Generation: Cascade Algorithm • The scaling function φ(t) and the wavelet ψ(t) are obtained by iterating the two-scale relations φ(t) = √2 Σ_n h(n) φ(2t − n) and ψ(t) = √2 Σ_n g(n) φ(2t − n), where h(n) and g(n) are the low-pass and high-pass impulse response functions. DB4 contains only 4 taps (filter coefficients); see the sketch below.
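A hedged sketch using PyWavelets (an assumed library): Wavelet.wavefun() runs the cascade iteration above internally. One naming caveat worth flagging: pywt's 'db2' is the 4-tap Daubechies filter (Kaplan's D4), while pywt's 'db4' has 4 vanishing moments and 8 taps.

```python
import pywt

w = pywt.Wavelet('db2')               # 4-tap Daubechies filter (D4)
print("low-pass h(n): ", w.dec_lo)    # 4 decomposition taps
print("high-pass g(n):", w.dec_hi)    # 4 decomposition taps

# Cascade-algorithm approximations of the scaling function and wavelet.
phi, psi, x = w.wavefun(level=8)
```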

  17. EEG Data • Electrode placement was according to the 10–20 system. • 4 bipolar signals were selected: F7–C3, F8–C4, T5–O1 and T6–O2. • Sampling frequency: 200 Hz. • Band-pass filtered to the 1–70 Hz range upon acquisition. • The EEG was segmented into 1000-sample windows (5 s); a sketch of this step follows.
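A minimal sketch of the segmentation step: 200 Hz signals cut into non-overlapping 1000-sample (5 s) windows. The signal here is a random placeholder for one recorded channel.

```python
import numpy as np

def segment(signal, win=1000):
    n = len(signal) // win                # number of whole windows
    return signal[:n * win].reshape(n, win)

fs = 200                                  # sampling frequency (Hz)
eeg = np.random.randn(60 * fs)            # placeholder: 60 s of one channel
windows = segment(eeg)                    # shape (12, 1000): 12 windows of 5 s
```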

  18. Feature Extraction by DB4 Wavelets • EEG signals are decomposed by recursive FIR filtering: the high-pass branch yields the 'detail' signal and the low-pass branch yields the 'approximation'. A sketch of this step follows.
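A hedged sketch of this decomposition with PyWavelets' wavedec, which applies the low-pass/high-pass FIR filter pair recursively. The decomposition level and the subband statistics used as features are assumptions; the slide does not specify them.

```python
import numpy as np
import pywt

def wavelet_features(window, wavelet='db4', level=4):
    # coeffs = [approximation, detail_level, ..., detail_1]
    coeffs = pywt.wavedec(window, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [np.mean(np.abs(c)), np.std(c)]  # simple subband statistics
    return np.array(feats)

window = np.random.randn(1000)            # one 5 s EEG window (placeholder)
print(wavelet_features(window).shape)     # (10,) for level=4
```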

  19. Assignment • Preprocess the depth EEG signals (to be given) by wavelet transforms (the DB4 wavelet has been found more efficient than other wavelets; see Subasi & Ercelebi, 2005 and Vardhan & Majumdar, 2011). This will extract features from the signals. • Use a three-layer (that is, with only one hidden layer) perceptron neural network to

  20. Assignment (cont.) classify the features, separating the seizure portions from the non-seizure portions of the signals. A hedged end-to-end sketch follows.
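A hedged end-to-end sketch of the assignment pipeline: wavelet features fed to a single-hidden-layer perceptron. scikit-learn, the hidden-layer size, and the placeholder data below are assumptions, not the course's code.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X = np.random.randn(120, 10)               # placeholder feature matrix
y = np.random.randint(0, 2, size=120)      # placeholder: 1 = seizure window

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000)  # one hidden layer
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```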

  21. References • A. Subasi and E. Ercelebi, Classification of EEG signals using neural networks and logistic regression, Computer Methods and Programs in Biomedicine, 78: 87–99, 2005. • I. Kaplan, Daubechies D4 wavelet transform, http://www.bearcave.com/misl/misl_tech/wavelets/daubechies/index.html

  22. References (cont.) • R. Hecht-Nielsen, Theory of the backpropagation neural network, Proc. INNS, 1988, pp. I-593–I-605. Freely available at http://s112088960.onlinehome.us/annProjects/Research%20Paper%20Library/backPropTheory.pdf • I. Daubechies, Ten Lectures on Wavelets, SIAM, 1992, pp. 115, 132, 194, 242.

  23. THANK YOU • This lecture is available at http://www.isibang.ac.in/~kaushik
