
Artificial Neural Networks


Presentation Transcript


  1. Bruno Angeles, McGill University – Schulich School of Music, MUMT-621, Fall 2009: Artificial Neural Networks

  2. Outline • Biological Neurons • To the Digital World • Activation Function • Feed-forward vs. Feedback • Applications • Training Methods • Conclusion

  3. Biological Neurons • Synapses • Axon • Soma • Nucleus • Dendrites • Firing is all-or-nothing (ON/OFF): the neuron fires only when its inputs cross a threshold, at rates of up to roughly 100 Hz [1]

  4. Biological Neurons – Stimulation vs. Plasticity • A neuron excites another neuron repeatedly → ↑ strength of connection → easier for the same excitation to occur • A neuron is not stimulated for a long time by another one → ↓ connection effectiveness (plasticity)

  5. To the Digital World [Diagram: inputs x1…xn, each scaled by a weight w1…wn, feed a summer Σ and an activation function that produces the output y] • Inputs: x1..n • Weights: w1..n • Positive: excitatory • Negative: inhibitory • Output: y • Summer: Σ • Activation function: • Step function • S-shaped • etc.
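
A minimal sketch of this neuron model in Python; the names (step, sigmoid, neuron) and the example values are illustrative, not from the slides:

import math

def step(u, threshold=0.0):
    # Step activation: output 1 only when the summed input crosses the threshold.
    return 1.0 if u >= threshold else 0.0

def sigmoid(u):
    # S-shaped activation: a smooth, differentiable alternative to the step.
    return 1.0 / (1.0 + math.exp(-u))

def neuron(xs, ws, activation=step):
    # Summer: weighted sum of the inputs.
    # Positive weights are excitatory, negative weights inhibitory.
    u = sum(x * w for x, w in zip(xs, ws))
    return activation(u)

# Two inputs: one excitatory weight (0.8) and one inhibitory weight (-0.3).
y = neuron([1.0, 0.5], [0.8, -0.3])
print(y)  # 1.0: the weighted sum 0.65 is above the threshold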

  6. Activation Function [Plots: a sigmoid function (S-shaped) and a step function]
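
Written out, the two activations are (standard definitions; the slide shows only plots, so the exact formulas and the threshold symbol θ are assumptions):

\sigma(u) = \frac{1}{1 + e^{-u}},
\qquad
\operatorname{step}(u) = \begin{cases} 1 & u \ge \theta \\ 0 & u < \theta \end{cases}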

  7. Feed-forward Neural Network [Diagram: inputs x1…xn connect forward through the network to outputs y1…yn] All arrows go in the direction of the outputs. This is the most popular way of connecting an Artificial Neural Network (ANN).
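
A minimal feed-forward pass in Python, assuming sigmoid activations; the layer sizes, weights, and names are illustrative:

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def feed_forward(x, layers):
    # Every connection points toward the output: each layer's activations
    # depend only on the previous layer's activations.
    a = x
    for W, b in layers:
        a = sigmoid(W @ a + b)
    return a

# Example: 3 inputs -> 4 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
y = feed_forward(np.array([1.0, 0.5, -0.2]), layers)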

  8. Feedback Neural Network [Diagram: the same inputs and outputs, but with some connections looping backwards] Not all arrows go in the direction of the outputs.
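
For contrast, a sketch of one feedback connection: the state h loops back into the network, i.e. an arrow pointing against the output direction. Weight shapes and names are illustrative:

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def feedback_step(x, h, W_in, W_back):
    # h is the previous state fed back in: the response at each step
    # depends on past activity, not just on the current input.
    return sigmoid(W_in @ x + W_back @ h)

rng = np.random.default_rng(1)
W_in, W_back = rng.normal(size=(3, 2)), rng.normal(size=(3, 3))
h = np.zeros(3)
for x in ([1.0, 0.0], [0.0, 1.0]):
    h = feedback_step(np.array(x), h, W_in, W_back)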

  9. Hidden Layers [Diagram: the layers between inputs x1…xn and outputs y1…yn form a “black box”]

  10. ANN – Why, when? • When data is available, but no theory • When the input data is complex, with no obvious pattern • When robustness to noise is desired

  11. Applications • Object Recognition • Medical Diagnosis • Obstacle Avoidance • Environment Exploration • Sales Forecasting • Marketing • Identifying Fraud http://www.youtube.com/watch?v=FKAULFV8tXw http://www.youtube.com/watch?v=nIRGz1GEzgI

  12. Applications – Speech Technology and Music Technology • Recognition of human speakers • Text-to-speech applications • Transcription of polyphonic music • Music Information Retrieval http://www.youtube.com/watch?v=igNo-mPVYsw

  13. Training the Network • Unsupervised Learning: compression, filtering • Supervised Learning: pattern recognition, function approximation • Reinforcement Learning: focus on long-term success

  14. Unsupervised Training • A cost function c(x, y) is known • No data set of target outputs that minimize c(x, y) is known • Training tries to minimize c(x, y)
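
A toy illustration of that idea: a known cost c is driven down by gradient descent without any labeled targets. The cost function here is an arbitrary stand-in, not something from the slides:

import numpy as np

xs = np.array([0.5, 1.0, 1.5])

def c(w):
    # Hypothetical reconstruction-style cost: no "correct" output is given,
    # only a score to minimize.
    return np.mean((xs - w * xs) ** 2)

w, lr, eps = 0.0, 0.1, 1e-6
for _ in range(200):
    grad = (c(w + eps) - c(w - eps)) / (2 * eps)  # numerical gradient of c
    w -= lr * grad
print(w)  # approaches 1.0, the minimizer of c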

  15. Supervised Training – Backpropagation • Requires an ANN with hidden layer(s) • Requires a differentiable activation function • Randomly initialize all weights • Adjust each layer’s weights to minimize the error (see the sketch below)
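
A compact sketch of those steps for a 2-3-1 network, assuming sigmoid activations and a squared-error cost; all names, sizes, and constants are illustrative:

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(0)
# Randomly initialize all weights: 2 inputs -> 3 hidden -> 1 output.
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
x, z, eta = np.array([0.5, -1.0]), np.array([1.0]), 0.5

for _ in range(1000):
    # Forward pass through the differentiable activations.
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)
    # Propagate the error backwards; for the sigmoid, f'(u) = y * (1 - y).
    d_out = (y - z) * y * (1 - y)
    d_hid = (W2.T @ d_out) * h * (1 - h)
    # Adjust each layer's weights to reduce the squared error (z - y)^2.
    W2 -= eta * np.outer(d_out, h)
    W1 -= eta * np.outer(d_hid, x)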

  16. Supervised Training – Issues • Overfitting [1] – solutions: minimize the number of neurons; jitter (add noise to the input); early stopping (separate training and validation sets) • Local minima [1] – solution: momentum (include previous weight updates)
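
A sketch of the momentum fix in isolation; η, α, and the gradient sequence are stand-ins:

# Momentum: fold the previous weight update into the current one,
# so the search can coast through shallow local minima.
eta, alpha = 0.5, 0.9
w, prev_update = 0.0, 0.0
for grad in [1.0, 0.8, -0.2, 0.1]:   # stand-in gradients from backpropagation
    update = alpha * prev_update - eta * grad
    w += update
    prev_update = update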

  17. Reinforcement Learning • Objective: long-term reward • At each time step (frame), the current state is given • Choose, among the possible actions, the one that maximizes the final reward
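
A toy version of that selection step; the value estimate here is a hypothetical stand-in for whatever the trained network predicts:

def best_action(state, actions, value):
    # Pick the action whose estimated long-term reward is highest.
    return max(actions, key=lambda a: value(state, a))

# Hypothetical learned value estimate: reward falls off with distance from 3.
value = lambda s, a: -(s - a) ** 2
print(best_action(3, [0, 1, 2, 3, 4], value))  # -> 3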

  18. Backpropagation – An Example [Diagram: six numbered nodes; inputs x1 and x2 feed nodes 1–3, nodes 4 and 5 form a second layer, and node 6 produces the output y] Each node n has weights and an activation function f_n. z is the expected output given inputs x1 and x2. w_ij is the weight of node i’s input to node j. Propagate the error backwards → backpropagation.

  19. Backpropagation – An Example [Diagram: same network as above] Now use the computed propagated errors to adjust the weights of all nodes. η controls the learning speed of the network. f_n is the activation function of node n. u is f_n’s response to the node’s inputs.
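
In the slide’s notation, the adjustment applied to each weight is the standard backpropagation update (an assumption: the slide names the symbols, but the transcript does not reproduce the formula). Here δ_j is the error propagated back to node j and y_i is the output of node i:

w_{ij} \leftarrow w_{ij} + \eta \,\delta_j\, \frac{df_j}{du}(u)\, y_i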

  20. Conclusion • Pros: ability to learn → flexible; powerful; can be combined with other methods • Cons: acts as a “black box”; pitfalls (e.g. the oft-cited tank classifier that keyed on lighting differences between photo sets rather than on the tanks)

  21. Thank you!

  22. Bibliography
  [1] Buckland, M. 2002. AI Techniques for Game Programming. Premier Press.
  [2] Karaali, O., G. Corrigan, N. Massey, C. Miller, O. Schnurr, and A. Mackie. 1998. A high quality text-to-speech system composed of multiple neural networks. Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing 2:1237–40.
  [3] Marmanis, H., and D. Babenko. 2009. Algorithms of the Intelligent Web. Manning Publications.
  [4] Marolt, M. 2001. Transcription of polyphonic piano music with neural networks. Proceedings of the 10th Mediterranean Electrotechnical Conference (MELECON 2000) 2:512–5.
  [5] Murray, J. C., H. R. Erwin, and S. Wermter. 2009. Robotic sound-source localisation architecture using cross-correlation and recurrent neural networks. Neural Networks 22 (2):173–89.
  [6] Rho, S., B. Han, E. Hwang, and M. Kim. 2007. MUSEMBLE: A Music Retrieval System Based on Learning Environment. Proceedings of the 2007 IEEE International Conference on Multimedia and Expo: 1463–6.
