Understanding Computation in Neural Networks
Explore different types of neural networks, functions, and learning problems, including perceptron, backpropagation, clustering, and categorization. Learn about regression, classification, and generalization in this comprehensive guide.
Presentation Transcript
Computation in neural networks. M. Meeter
Calculating a function
Perceptron learning problem: learn to map each input pattern onto its desired output.
Input patterns: [+1, +1, -1, -1], [-1, -1, +1, +1], [-1, -1, -1, -1], [-1, -1, +1, -1], [-1, +1, +1, -1], [+1, -1, +1, -1]
Desired outputs: [+1, -1, +1], [+1, +1, -1], [-1, -1, -1], [-1, +1, +1]
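To make the learning problem concrete, here is a minimal sketch (not from the slides) of a perceptron-style output layer trained on bipolar patterns of this shape; the pairing of inputs with outputs and the learning rate are illustrative assumptions.

```python
import numpy as np

# Hypothetical pairing of slide-style patterns: bipolar 4-element inputs
# mapped to bipolar 3-element target outputs (the pairing is assumed).
X = np.array([[+1, +1, -1, -1],
              [-1, -1, -1, -1],
              [-1, +1, +1, -1],
              [+1, -1, +1, -1]])
T = np.array([[+1, -1, +1],
              [-1, -1, -1],
              [-1, +1, +1],
              [+1, +1, -1]])

W = np.zeros((3, 4))                      # one weight vector per output unit
eta = 0.1                                 # learning rate (assumed)

for epoch in range(20):
    for x, t in zip(X, T):
        y = np.where(W @ x > 0, 1, -1)    # threshold activation
        W += eta * np.outer(t - y, x)     # perceptron learning rule

print(np.where(W @ X.T > 0, 1, -1).T)     # reproduces the target patterns
```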
Types of networks & functions
• Attractor: completion, autoassociative memory
• Feedforward Hebbian
  • associative (Hebbian): association, associative memory
  • competitive: clustering
• Feedforward error-correcting
  • perceptron: categorization, generalization
  • backprop: nonlinear, same
A Classification
Generalization (figure: given the observed values 76 and 128, predict the unknown value "?")
Regression = generalization. Univariate linear regression: prediction of values.
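A small numerical sketch of "prediction of values" by univariate linear regression; the data points are invented for illustration.

```python
import numpy as np

# Made-up (x, y) observations; the goal is to predict y for a new x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Least-squares fit of y = a*x + b.
a, b = np.polyfit(x, y, deg=1)

# Generalization: predict the value for an unseen input.
print(a * 6.0 + b)
```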
Types of networks
• Attractor: completion, autoassociative memory
• Feedforward Hebbian
  • associative (Hebbian): association, associative memory
  • competitive: clustering
• Feedforward error-correcting
  • perceptron: categorization, generalization
  • backprop: nonlinear, same
Classification - discrete
Perceptron learning problem with prototypical input patterns and desired outputs.
Input patterns: [+1, +1, -1, -1], [-1, -1, +1, +1], [-1, -1, -1, -1], [-1, -1, +1, -1], [-1, +1, +1, -1], [+1, -1, +1, -1]
Desired outputs: [+1, -1, +1], [+1, +1, -1], [-1, -1, -1], [-1, +1, +1]
Classification in the perceptron (figure: inputs x1, x2, …, xi, …, xn feed a unit through weights wji; the unit compares its net input to a threshold).
A quick aside…
• In the perceptron etc.: if a node's net input > 0, the node becomes active (the threshold is fixed at 0)
• Not always desirable: that is why, in the continuous forms of the perceptron / backprop, a node has a "bias", an activation that is always added to its input
• Effect: shifting the threshold
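A tiny sketch of that point, assuming a simple threshold unit: adding a bias to the net input is the same as moving the threshold.

```python
import numpy as np

def unit(x, w, bias):
    """Threshold unit: active (1) when net input plus bias exceeds 0."""
    return 1 if np.dot(w, x) + bias > 0 else 0

w = np.array([1.0, 1.0])
x = np.array([0.3, 0.2])      # net input = 0.5

print(unit(x, w, bias=0.0))   # 1: net input 0.5 exceeds the threshold at 0
print(unit(x, w, bias=-1.0))  # 0: the bias has shifted the threshold to 1.0
```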
Classification in 2 dimensions (figure: a threshold line separates inputs classified "+" from inputs classified "-"; near the threshold the two classes are mixed).
Discriminant analysis: find the centers of the two categories, draw the line between them, and take the perpendicular through its midpoint as the discrimination line. This produces exactly the same result as the perceptron.
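One way to read that recipe in code, with made-up 2-D points: compute the two category centers and classify by the nearer center, whose decision boundary is the perpendicular bisector between the centers.

```python
import numpy as np

# Made-up 2-D samples for two categories.
cat_a = np.array([[1.0, 2.0], [1.5, 1.8], [0.8, 2.2]])
cat_b = np.array([[4.0, 4.5], [3.8, 5.0], [4.3, 4.2]])

# Centers of the two categories.
mu_a, mu_b = cat_a.mean(axis=0), cat_b.mean(axis=0)

def classify(x):
    # Closer to mu_a or mu_b?  The boundary between the two answers is the
    # perpendicular bisector of the segment connecting the centers.
    return "A" if np.linalg.norm(x - mu_a) < np.linalg.norm(x - mu_b) else "B"

print(classify(np.array([1.2, 2.0])))  # expected: A
print(classify(np.array([4.1, 4.6])))  # expected: B
```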
Generalization = regression. Univariate linear regression: prediction of values.
Perceptron with a linear activation rule (figure: inputs x1, x2, …, xi, …, xn connect to node j through weights wji, plus a bias).
Net input: v = Σi xi·wji. Linear activation function: φ(v) = a·v + b. Output: y.
Change the weights with a learning rule that minimizes Σe².
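A sketch of such a linear unit trained pattern by pattern with a delta-style rule that minimizes the squared error; the data, learning rate, and number of epochs are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))               # made-up inputs
true_w, true_b = np.array([0.5, -1.0, 2.0]), 0.3
y = X @ true_w + true_b                     # targets from a known linear rule

w, b = np.zeros(3), 0.0
eta = 0.05                                  # learning rate (assumed)

for epoch in range(200):
    for x_i, t in zip(X, y):
        v = x_i @ w + b                     # linear activation of the unit
        e = t - v                           # error for this pattern
        w += eta * e * x_i                  # delta rule: reduces the sum of e^2
        b += eta * e                        # the bias is learned the same way

print(w, b)   # should approach the true weights and bias
```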
Multivariate multiple linear regression (figure: inputs x1, x2, …, xi, …, xn connect to outputs y1 and y2).
Multiple = multiple independent variables: X = multiple inputs.
Multivariate = multiple dependent variables: Y = multiple outputs.
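The statistical counterpart of such a one-layer linear network can be computed directly; the sketch below recovers a linear map from several inputs to several outputs with numpy's least-squares solver, on invented data.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))          # 4 inputs (independent variables)
W_true = rng.normal(size=(4, 2))      # hidden "true" mapping to 2 outputs
Y = X @ W_true                        # 2 outputs (dependent variables)

# Least squares over all output columns at once.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(W_hat, W_true))     # True: the linear map is recovered
```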
Linear vs. nonlinear regression (figure: y plotted against x with a linear fit and a nonlinear fit).
• Here: quadratic
• In general: fitting arbitrary "wrinkles" in the data
Multi-layer perceptron (figure: input vector X = [x1, x2, …, xi, …, xn] feeds hidden units computing v = Σi xi·wji, which in turn feed outputs y1 and y2).
• Can fit any function: "universal approximators"
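As a rough illustration of the "universal approximator" claim, the sketch below trains a one-hidden-layer network with backpropagation to approximate a quadratic; the architecture, learning rate, and data are all assumptions, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 100).reshape(-1, 1)
y = x ** 2                                  # nonlinear target: a quadratic

# One hidden layer of 10 tanh units, one linear output unit.
W1, b1 = rng.normal(scale=0.5, size=(1, 10)), np.zeros(10)
W2, b2 = rng.normal(scale=0.5, size=(10, 1)), np.zeros(1)
eta = 0.05

for epoch in range(5000):
    h = np.tanh(x @ W1 + b1)                # hidden activations
    out = h @ W2 + b2                       # linear output
    err = out - y
    # Backpropagation of the squared error.
    dW2 = h.T @ err / len(x)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(axis=0)
    W1, b1 = W1 - eta * dW1, b1 - eta * db1
    W2, b2 = W2 - eta * dW2, b2 - eta * db2

print(np.mean(err ** 2))   # mean squared error after training (should be small)
```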
Overfitting (figure: y plotted against x for a too-simple model, labelled bad, and a too-complex model, labelled extremely bad).
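The contrast between too-simple and too-complex models can be illustrated with polynomial fits of different complexity; the data, noise level, and degrees below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 12)
y = x ** 2 + rng.normal(scale=0.2, size=x.shape)   # noisy quadratic data
x_test = np.linspace(-1, 1, 200)                   # held-out inputs
y_test = x_test ** 2                               # noise-free truth

for degree in (1, 2, 11):    # too simple, about right, too complex
    coeffs = np.polyfit(x, y, degree)
    err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, err)       # the degree-2 fit typically generalizes best
```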
Clustering. Competitive learning: • next week • ART
Conclusions
• Neural networks are similar to statistical analyses
• Perceptron -> categorization / generalization
• Backprop -> the same, but nonlinear
• Competitive learning -> clustering
• But… statistical analyses work on the whole data set at once, while networks learn one pattern at a time
Feature extraction with PCA: unsupervised learning, Hebbian learning.