
CS344: Introduction to Artificial Intelligence


Presentation Transcript


  1. CS344: Introduction to Artificial Intelligence Pushpak Bhattacharyya, CSE Dept., IIT Bombay Lecture 38: PAC Learning, VC dimension; Self Organization

  2. VC-dimension Gives a necessary and sufficient condition for PAC learnability.

  3. Def: Let C be a concept class, i.e., it has members c1, c2, c3, … as concepts in it. (Figure: class C containing concepts c1, c2, c3.)

  4. Let S be a subset of U (the universe). If every subset of S can be produced by intersecting S with some concept ci, then we say C shatters S.

  5. The highest-cardinality set S that can be shattered gives the VC-dimension of C: VC-dim(C) = |S|. VC-dim: Vapnik-Chervonenkis dimension.
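
To make the definition concrete, here is a minimal Python sketch (an illustration, not from the lecture) that tests shattering exactly as defined above: a finite concept class C, given as plain sets, shatters S iff every subset of S arises as S ∩ ci. The nested "threshold" class in the example is hypothetical.

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    items = list(s)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))}

def shatters(concepts, S):
    """C shatters S iff every subset of S equals S ∩ c for some concept c in C."""
    S = frozenset(S)
    realized = {S & frozenset(c) for c in concepts}
    return powerset(S) <= realized

def vc_dimension(concepts, universe):
    """Largest |S|, S ⊆ universe, that the concepts shatter (brute force)."""
    best = 0
    for r in range(1, len(universe) + 1):
        if any(shatters(concepts, S) for S in combinations(universe, r)):
            best = r
    return best

# Hypothetical example: nested "threshold" concepts over a 4-element universe.
U = [1, 2, 3, 4]
C = [set(range(1, k + 1)) for k in range(5)]   # Ø, {1}, {1,2}, {1,2,3}, {1,2,3,4}
print(vc_dimension(C, U))                      # -> 1: nested sets shatter only singletons
```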

  6. Example: the 2-dimensional plane with C = {half planes}. (Figure: x-y axes.)

  7. S1 = {a}; subsets {a}, Ø. |S| = 1 can be shattered. (Figure: point a in the x-y plane.)

  8. S2 = {a, b}; subsets {a, b}, {a}, {b}, Ø. |S| = 2 can be shattered. (Figure: points a, b in the x-y plane.)

  9. S3 = {a, b, c}. |S| = 3 can be shattered. (Figure: points a, b, c in the x-y plane.)

  10. (Figure only.)

  11. S4 = {a, b, c, d}. |S| = 4 cannot be shattered: for instance, no half plane contains exactly the two diagonally opposite points A and C of the quadrilateral. Hence VC-dim(half planes in 2D) = 3. (Figure: points A, B, C, D in the x-y plane.)
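
The half-plane examples above can be checked mechanically. Below is a sketch (assuming scipy is available; not part of the lecture) that tests every labeling of a point set for strict linear separability via an LP feasibility problem, confirming that 3 points in general position are shattered by half planes while the 4 corners of a square are not.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def separable(pos, neg):
    """Is there a half plane with w·x + b >= 1 on pos and <= -1 on neg?
    Checked as an LP feasibility problem (zero objective)."""
    pts = np.array(pos + neg, dtype=float)
    y = np.array([1.0] * len(pos) + [-1.0] * len(neg))
    # y_i (w·x_i + b) >= 1  <=>  -y_i x_i · w - y_i b <= -1
    A = -y[:, None] * np.hstack([pts, np.ones((len(pts), 1))])
    res = linprog(c=[0, 0, 0], A_ub=A, b_ub=-np.ones(len(pts)),
                  bounds=[(None, None)] * 3, method="highs")
    return res.status == 0  # 0 = feasible, 2 = infeasible

def shattered_by_half_planes(S):
    """True iff every labeling of S is realizable by a half plane."""
    for r in range(len(S) + 1):
        for T in combinations(range(len(S)), r):
            pos = [S[i] for i in T]
            neg = [S[i] for i in range(len(S)) if i not in T]
            if not separable(pos, neg):
                return False
    return True

S3 = [(0, 0), (1, 0), (0, 1)]          # 3 points in general position
S4 = [(0, 0), (1, 0), (0, 1), (1, 1)]  # 4 corners of a square
print(shattered_by_half_planes(S3))    # True  -> |S| = 3 can be shattered
print(shattered_by_half_planes(S4))    # False -> the diagonal pair fails
```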

  12. Fundamental Theorem of PAC learning (Ehrenfeucht et al., 1989) • A concept class C is learnable for all probability distributions and all concepts in C if and only if the VC dimension of C is finite. • If the VC dimension of C is d, then… (next page)

  13. Fundamental theorem (contd.) (a) For 0 < ε < 1 and sample size at least max[ (4/ε) log(2/δ), (8d/ε) log(13/ε) ], any consistent function A : S_C → C is a learning function for C. (b) For 0 < ε < 1/2 and sample size less than max[ ((1 − ε)/ε) ln(1/δ), d(1 − 2(ε(1 − δ) + δ)) ], no function A : S_C → H, for any hypothesis space H, is a learning function for C.
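
As a quick numeric illustration of part (a), the sketch below evaluates the sample-size bound; base-2 logarithms are assumed, following Blumer et al. (1989), and the ε, δ, d values are illustrative.

```python
from math import log2, ceil

def pac_sample_size(epsilon, delta, d):
    """Sample size from part (a): with at least this many examples, any
    consistent learner is a learning function for C (log base 2 assumed)."""
    m1 = (4 / epsilon) * log2(2 / delta)
    m2 = (8 * d / epsilon) * log2(13 / epsilon)
    return ceil(max(m1, m2))

# e.g. half planes in 2D: d = 3, error ε = 0.1, confidence δ = 0.05
print(pac_sample_size(epsilon=0.1, delta=0.05, d=3))  # -> 1686
```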

  14. Papers • 1. A theory of the learnable, L. G. Valiant (1984), Communications of the ACM 27(11):1134–1142. • 2. Learnability and the Vapnik-Chervonenkis dimension, A. Blumer, A. Ehrenfeucht, D. Haussler, M. Warmuth, Journal of the ACM, 1989. Book: Computational Learning Theory, M. H. G. Anthony, N. Biggs, Cambridge Tracts in Theoretical Computer Science, 1997.

  15. Self Organization

  16. Self Organization: Biological Motivation. (Figure: the brain.)

  17. The brain has 3 layers: cerebrum, cerebellum, higher brain. (Figure: brain with the three layers labelled.)

  18. Maslow’s hierarchy (top to bottom): search for meaning; contributing to humanity; achievement, recognition; food, rest, survival.

  19. The 3 layers again: cerebrum (crucial for survival), cerebellum, higher brain (responsible for higher needs). (Figure: brain with layers labelled.)

  20. Mapping of the brain: the back of the brain handles vision; the side areas handle auditory information processing. There is a lot of resilience: the visual and auditory areas can do each other’s job.

  21. Left-brain and right-brain dichotomy. (Figure: the two hemispheres.)

  22. Left brain: logic, reasoning, verbal ability (words). Right brain: emotion, creativity (tune, music). Maps in the brain: the limbs are mapped to brain areas.

  23. Character recognition: an output grid of neurons connected to the input neurons. (Figure: input neurons feeding the output grid.)

  24. Kohonen Net A Self-Organizing (Kohonen) network fires a group of neurons instead of a single one; the group “somehow” produces a “picture” of the cluster. Fundamentally, SOM is competitive learning, but the weight changes are applied over a neighborhood: find the winner neuron, then apply the weight change to the winner and its “neighbors”.

  25. (Figure: the winner neuron at the centre; neurons on the surrounding contour are the “neighborhood” neurons.)

  26. Weight change rule for SOM: W_{P±δ(n)}(n+1) = W_{P±δ(n)}(n) + η(n) (I(n) − W_{P±δ(n)}(n)), where P is the winner neuron and δ(n) defines the neighborhood. δ(n) is a decreasing function of n; the learning rate η(n) is also a decreasing function of n, with 0 < η(n) < η(n−1) ≤ 1.
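
The update rule above translates directly into code. Below is a minimal numpy sketch on a one-dimensional chain of output neurons; the particular decreasing schedules chosen for η(n) and δ(n) are illustrative assumptions, not specified in the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som_1d(data, n_neurons=10, n_steps=2000):
    """1-D Kohonen map: winner P = argmin ||I(n) - W||; update the weights of
    all neurons within δ(n) of P, with η(n) and δ(n) shrinking over time."""
    dim = data.shape[1]
    W = rng.random((n_neurons, dim))             # random initial weights
    for n in range(n_steps):
        I = data[rng.integers(len(data))]        # random input I(n)
        eta = 0.5 * (1 - n / n_steps)            # η(n): decreasing, 0 < η(n) <= 0.5
        delta = int(round((n_neurons / 2) * (1 - n / n_steps)))  # δ(n): shrinking
        P = np.argmin(np.linalg.norm(W - I, axis=1))             # winner neuron
        lo, hi = max(0, P - delta), min(n_neurons, P + delta + 1)
        W[lo:hi] += eta * (I - W[lo:hi])  # W(n+1) = W(n) + η(n)(I(n) - W(n))
    return W
```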

  27. Pictorially: the winner neuron with its neighborhood radius δ(n). Convergence for Kohonen nets has not been proved except in one dimension.

  28. (Figure: an output layer of P neurons with weights Wp, fed by n input neurons; the clusters A, B, C map to separate groups of output neurons.)
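
Continuing the sketch above with hypothetical data: three well-separated Gaussian clusters should, after training, map to distinct (and typically contiguous) groups of output neurons, echoing the A/B/C clusters in the figure.

```python
# Hypothetical data: three Gaussian clusters (A, B, C) in the plane.
centers = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
data = np.vstack([c + 0.3 * rng.standard_normal((50, 2)) for c in centers])

W = train_som_1d(data, n_neurons=12)
for c in centers:
    # Winner index for each cluster centre; exact indices vary with the seed,
    # but the three clusters land on distinct groups of neurons.
    print(np.argmin(np.linalg.norm(W - c, axis=1)))
```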
