

1. Introduction to Computational Natural Language Learning
Linguistics 79400 (Under: Topics in Natural Language Processing)
Computer Science 83000 (Under: Topics in Artificial Intelligence)
The Graduate School of the City University of New York, Fall 2001
William Gregory Sakas
Hunter College, Department of Computer Science
Graduate Center, PhD Programs in Computer Science and Linguistics
The City University of New York

2. Meeting 4: Notes:
• My Web page was a little messed up. Sorry about that! It should be OK now: www.hunter.cuny.edu/cs/Faculty/Sakas
• There is a link to this course, but we will probably move to the new Blackboard system soon.
• I got some email asking about the details of how ANNs work. Yes, working out the math for a simple perceptron is fair game for a midterm question. A good link to check out: pris.comp.nus.edu.sg/ArtificialNeuralNetworks/perceptrons.html
• I will be happy to arrange to meet with people to go over the math (as I will today at the beginning of class).

3. Now we have to talk about learning. Training simply means the process by which the weights of the ANN are calculated by exposure to training data.
Supervised learning: [figure: a training-data file of binary input patterns and a matching file of the supervisor's answers, presented to the learner one datum at a time]
This is a bit simplified. In the general case, it is possible to feed the learner batch data, but in the models we will look at in this course, data is fed one datum at a time.
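What "one datum at a time" looks like in code, as a minimal sketch (not from the slides): a simple threshold unit is shown each input/answer pair in turn, and its weights are updated after every datum. The update function is left abstract here; the perceptron rule of slide 7 is one way to fill it in.

```python
def step(net):
    """Threshold activation: fire (1) if the net input is positive."""
    return 1 if net > 0 else 0

def predict(weights, x):
    """Weighted sum of the inputs followed by the threshold function."""
    return step(sum(w * xi for w, xi in zip(weights, x)))

def train_online(weights, training_data, supervisor_answers, update):
    """Present each (input, answer) pair one at a time, updating as we go."""
    for x, target in zip(training_data, supervisor_answers):
        prediction = predict(weights, x)          # network's current guess
        weights = update(weights, x, target, prediction)
    return weights
```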

4. [figure: the ANN's prediction, based on the current weights (which haven't converged yet), is compared with the corresponding answer from the supervisor's file] Oops! Gotta go back and increase the weights so that the output unit fires.

5. Let's look at how we might train an OR unit.
First: set the weights to values picked out of a hat, and the bias activation to 1.
Then: feed in 1,1. What does the network predict?
[figure: a Boolean OR network with bias unit a0 (activation 1, weight w09 = -.3), input units a7 and a8 (weights w79 = .5 and w89 = .1), and output unit a9 computing f(net9)]
The prediction is fine (f(.3) = 1), so do nothing.
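A sketch of this forward pass in code, assuming a step activation for f and reading the weights off the diagram (w09 = -.3, w79 = .5, w89 = .1; the unit and weight names follow the slide's labels):

```python
def f(net):
    """Threshold activation: output 1 if the net input is positive, else 0."""
    return 1 if net > 0 else 0

w09, w79, w89 = -0.3, 0.5, 0.1    # bias weight and the two input weights
a0 = 1                            # the bias activation is always 1

a7, a8 = 1, 1                     # feed in 1,1
net9 = a0 * w09 + a7 * w79 + a8 * w89   # = 0.3
print(f(net9))                    # 1 -- correct for OR(1,1), so do nothing
```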

6. Now: feed in 0,1. What does the network predict?
[figure: the same network, now with input activations a7 = 0 and a8 = 1]
Now we've got to adjust the weights. The ANN's prediction = 0 (net9 = 1(-.3) + 0(.5) + 1(.1) = -.2, and f(-.2) = 0), but the supervisor's answer = 1 (remember we're doing Boolean OR).
But how much to adjust? The modeler picks a value: η = the learning rate. (Let's pick .1 for this example.)

7. The weights are adjusted to minimize the error rate of the ANN.
Perceptron Learning Procedure:
new wij = old wij + η (Supervisor's answer − ANN's prediction)
So, for example, if the ANN predicts 0 and the supervisor says 1:
new wij = old wij + .1 (1 − 0)
I.e., all weights are increased by .1.
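A sketch of this procedure applied to the OR unit of slides 5-6, using the rule exactly as stated above (adding η(answer − prediction) to every weight; the textbook perceptron rule also multiplies by the input activation, but with these starting weights the simplified version converges for OR too):

```python
ETA = 0.1                                  # learning rate picked by the modeler

def f(net):
    """Threshold activation: fire (1) if the net input is positive."""
    return 1 if net > 0 else 0

def predict(weights, x):
    """Weighted sum over the bias (always 1) and the two inputs."""
    w0, w1, w2 = weights
    return f(w0 * 1 + w1 * x[0] + w2 * x[1])

# Training file and supervisor's answers for Boolean OR
data    = [(0, 0), (0, 1), (1, 0), (1, 1)]
answers = [0, 1, 1, 1]

weights = [-0.3, 0.5, 0.1]                 # "picked out of a hat" (slide 5)
for _ in range(10):                        # a few passes over the training file
    for x, target in zip(data, answers):
        prediction = predict(weights, x)
        # wij = old wij + eta * (supervisor's answer - ANN's prediction)
        weights = [w + ETA * (target - prediction) for w in weights]

print([predict(weights, x) for x in data])   # [0, 1, 1, 1]
```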

8. For multilayer ANNs, the error adjustment is backpropagated through the hidden layer.
ey ≈ w3 (Supervisor's answer − ANN's prediction)
ex ≈ w4 (Supervisor's answer − ANN's prediction)
ez = w1 ey + w2 ex
[figure: hidden units y and x feed the output unit through weights w3 and w4; a lower unit z feeds y and x through weights w1 and w2]
Backpropagated adjustment for one unit. Of course the error is calculated for ALL units.
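A minimal sketch of this error assignment, under the assumption (read off the figure) that hidden units y and x connect to the output through w3 and w4, and that a lower unit z connects to y and x through w1 and w2; derivative terms are omitted, as on the slide:

```python
def hidden_errors(output_error, w1, w2, w3, w4):
    """Push the output error back through the weights, layer by layer."""
    e_y = w3 * output_error          # error credited to hidden unit y
    e_x = w4 * output_error          # error credited to hidden unit x
    e_z = w1 * e_y + w2 * e_x        # error credited to the unit feeding y and x
    return e_y, e_x, e_z

# e.g. the supervisor says 1 and the network predicted 0 (weights are made up):
print(hidden_errors(1 - 0, w1=0.2, w2=0.4, w3=0.5, w4=0.1))
```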

9. In summary:
• Multilayer ANNs are universal function approximators: they can approximate any function a modern computer can represent.
• They learn without explicitly being told any "rules"; they simply cut up the hypothesis space by inducing boundaries. Importantly, they are "non-symbolic" computational devices. That is, they simply multiply activations by weights.

10. So, what does all of this have to do with linguistics and language?
• Some assumptions of "classical" language processing (roughly from Elman (1995)):
• symbols and rules that operate over symbols (S, VP, IP, etc.)
• static structures of competence (e.g. parse trees)
• More or less, the classical viewpoint is language as algebra.
• ANNs make none of these assumptions, so if an ANN can learn language, then perhaps language as algebra is wrong.
• We're going to discuss the pros and cons of Elman's viewpoint in some depth next week, but for now, let's go over his variation of the basic feedforward ANN that we've been talking about.

11. Localist representation in a standard feedforward ANN
[figure: a feedforward network with one input node and one output node per word (book, boy, dog, run, see, eat, rock), connected through a layer of hidden nodes]
Localist = each node represents a single item. If more than one output node fires, then a group of items can be considered activated. The basic idea is to activate a single input node (representing a word) and see which group of output nodes (words) is activated.
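A minimal sketch of such a localist (one-node-per-word) encoding over the slide's seven-word vocabulary; the function names are mine, not the slide's:

```python
VOCAB = ["book", "boy", "dog", "run", "see", "eat", "rock"]

def localist(word):
    """Input vector with a single node 'on' for the given word."""
    return [1 if w == word else 0 for w in VOCAB]

def active_words(output, threshold=0.5):
    """Read an output vector: every node above threshold names an activated word."""
    return [w for w, a in zip(VOCAB, output) if a > threshold]

print(localist("dog"))                                    # [0, 0, 1, 0, 0, 0, 0]
print(active_words([0.1, 0.8, 0.2, 0.9, 0.1, 0.7, 0.0]))  # ['boy', 'run', 'eat']
```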

12. Elman's Simple Recurrent Network
[figure: input and output word nodes (book, boy, dog, run, see, eat, rock) connected through a hidden layer by "regular" trainable weight connections, plus a context layer that receives a 1-to-1 exact copy of the hidden activations]
1) Activate from input to output as usual (one input word at a time), but copy the hidden activations to the context layer.
2) Repeat 1 over and over, but activate from the input AND context layers to the output layer.
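A rough sketch (not Elman's code) of one SRN time step: the hidden activations from the previous word are copied into the context layer and fed back in alongside the current input. The helper names and the sigmoid activation are assumptions for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    """Multiply a weight matrix (one row per target unit) by an activation vector."""
    return [sum(w * a for w, a in zip(row, v)) for row in W]

def srn_step(x, context, W_in, W_context, W_out):
    """One time step: current input + copied context -> hidden -> output."""
    net_hidden = [i + c for i, c in zip(matvec(W_in, x), matvec(W_context, context))]
    hidden = [sigmoid(n) for n in net_hidden]
    output = [sigmoid(n) for n in matvec(W_out, hidden)]
    new_context = hidden[:]          # 1-to-1 exact copy of the hidden activations
    return output, new_context
```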

13. From Elman (1990). Templates were set up and lexical items were chosen at random from "reasonable" categories.
Categories of lexical items:
NOUN-HUM: man, woman
NOUN-ANIM: cat, mouse
NOUN-INANIM: book, rock
NOUN-AGRESS: dragon, monster
NOUN-FRAG: glass, plate
NOUN-FOOD: cookie, sandwich
VERB-INTRAN: think, sleep
VERB-TRAN: see, chase
VERB-AGPAT: move, break
VERB-PERCEPT: smell, see
VERB-DESTROY: break, smash
VERB-EAT: eat
Templates for sentence generator:
NOUN-HUM VERB-EAT NOUN-FOOD
NOUN-HUM VERB-PERCEPT NOUN-INANIM
NOUN-HUM VERB-DESTROY NOUN-FRAG
NOUN-HUM VERB-INTRAN
NOUN-HUM VERB-TRAN NOUN-HUM
NOUN-HUM VERB-AGPAT NOUN-INANIM
NOUN-HUM VERB-AGPAT
NOUN-ANIM VERB-EAT NOUN-FOOD
NOUN-ANIM VERB-TRAN NOUN-ANIM
NOUN-ANIM VERB-AGPAT NOUN-INANIM
NOUN-ANIM VERB-AGPAT
NOUN-INANIM VERB-AGPAT
NOUN-AGRESS VERB-DESTROY NOUN-FRAG
NOUN-AGRESS VERB-EAT NOUN-HUM
NOUN-AGRESS VERB-EAT NOUN-ANIM
NOUN-AGRESS VERB-EAT NOUN-FOOD
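A rough sketch (not Elman's code) of such a template-based sentence generator: pick a template at random, then fill each category slot with a random lexical item from that category. Only a few of the templates above are listed, to keep the example short:

```python
import random

CATEGORIES = {
    "NOUN-HUM": ["man", "woman"],        "NOUN-ANIM": ["cat", "mouse"],
    "NOUN-INANIM": ["book", "rock"],     "NOUN-AGRESS": ["dragon", "monster"],
    "NOUN-FRAG": ["glass", "plate"],     "NOUN-FOOD": ["cookie", "sandwich"],
    "VERB-INTRAN": ["think", "sleep"],   "VERB-TRAN": ["see", "chase"],
    "VERB-AGPAT": ["move", "break"],     "VERB-PERCEPT": ["smell", "see"],
    "VERB-DESTROY": ["break", "smash"],  "VERB-EAT": ["eat"],
}

TEMPLATES = [
    ["NOUN-HUM", "VERB-EAT", "NOUN-FOOD"],
    ["NOUN-HUM", "VERB-INTRAN"],
    ["NOUN-AGRESS", "VERB-DESTROY", "NOUN-FRAG"],
    # ... the remaining templates from the slide go here
]

def generate_sentence():
    """Choose a template and fill each slot with a random lexical item."""
    template = random.choice(TEMPLATES)
    return [random.choice(CATEGORIES[slot]) for slot in template]

print(generate_sentence())   # e.g. ['woman', 'eat', 'cookie']
```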

14. Resulting training and supervisor files. The files were 27,354 words long, made up of 10,000 two- and three-word "sentences."
Training data: woman smash plate cat move man break car boy move girl eat bread dog ...
Supervisor's answers: smash plate cat move man break car boy move girl eat bread dog move ...
(The supervisor's file is simply the training file shifted forward by one word: the network's task is to predict the next word.)
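A minimal sketch of how the two files line up for that next-word task: the supervisor's answer for each word is simply the following word in the stream.

```python
stream = ("woman smash plate cat move man break car "
          "boy move girl eat bread dog move").split()

training   = stream[:-1]   # each input word, one datum at a time
supervisor = stream[1:]    # the word the network should predict next

for x, target in zip(training, supervisor):
    print(f"{x:>6} -> {target}")
```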

15. Cluster (similarity) analysis. The hidden activations for each word were averaged together. For simplicity assume only 3 hidden nodes (in fact there were 150). After the SRN was trained, the file was run through the network and the activations at the hidden nodes were recorded (I made up these numbers for the example):
boy <.5 .3 .2>  smash <.4 .4 .2>  plate <.2 .3 .8>  ...  dragon <.6 .1 .3>  eat <.1 .2 .4>  boy <.9 .9 .7>  ...  boy <.7 .6 .7>  eat <.4 .3 .6>  cookie <.2 .3 .4>
Now the average was taken for every word:
boy <.70 .60 .53>  smash <.40 .40 .20>  plate <.20 .30 .80>  dragon <.60 .10 .30>  eat <.25 .25 .50>  cookie <.20 .30 .40>
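A minimal sketch of that averaging step: collect every hidden-activation vector recorded for a word, then average component-wise (the numbers are the made-up 3-node activations above).

```python
from collections import defaultdict

recorded = [
    ("boy",    [0.5, 0.3, 0.2]), ("smash", [0.4, 0.4, 0.2]), ("plate",  [0.2, 0.3, 0.8]),
    ("dragon", [0.6, 0.1, 0.3]), ("eat",   [0.1, 0.2, 0.4]), ("boy",    [0.9, 0.9, 0.7]),
    ("boy",    [0.7, 0.6, 0.7]), ("eat",   [0.4, 0.3, 0.6]), ("cookie", [0.2, 0.3, 0.4]),
]

sums   = defaultdict(lambda: [0.0, 0.0, 0.0])
counts = defaultdict(int)
for word, vec in recorded:
    counts[word] += 1
    sums[word] = [s + v for s, v in zip(sums[word], vec)]

averages = {w: [s / counts[w] for s in sums[w]] for w in sums}
print(averages["boy"])   # approximately [0.70, 0.60, 0.53]
```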

16. Each of these vectors represents a point in 3-D space. Some points are near each other and form "clusters":
boy <.70 .60 .53>  smash <.40 .40 .20>  plate <.20 .30 .80>  dragon <.60 .10 .30>  eat <.25 .25 .50>  cookie <.20 .30 .40>
Hierarchical clustering (see the sketch after this list):
1. Calculate the distance between all possible pairs of points in the space.
2. Find the closest two points.
3. Make them a single cluster, i.e. treat them as a single point.*
4. Recalculate all pairs of points (you will have one less point to deal with the first time you hit this step).
5. Go to step 2.
* Note there are many ways to treat clusters as single points. One could make a single point in the middle, one could calculate medians, etc. For Elman's study, I don't think it matters which he used; all would yield similar results, although this is just a guess on my part.
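A rough sketch (not Elman's code) of that agglomerative procedure, merging the closest pair at each step and treating the merged cluster as a single point via its centroid (one of the several options the slide mentions):

```python
import math

def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def centroid(points):
    return [sum(c) / len(points) for c in zip(*points)]

def cluster(points, labels):
    """Repeatedly merge the closest two clusters until only one remains."""
    clusters = [([p], [l]) for p, l in zip(points, labels)]
    while len(clusters) > 1:
        # steps 1/4: distances between all pairs; step 2: find the closest pair
        i, j = min(((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
                   key=lambda ab: distance(centroid(clusters[ab[0]][0]),
                                           centroid(clusters[ab[1]][0])))
        # step 3: merge them and treat the result as a single point
        merged = (clusters[i][0] + clusters[j][0], clusters[i][1] + clusters[j][1])
        print("merge:", clusters[i][1], "+", clusters[j][1])
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return clusters[0]

vectors = {"boy": [.70, .60, .53], "smash": [.40, .40, .20], "plate": [.20, .30, .80],
           "dragon": [.60, .10, .30], "eat": [.25, .25, .50], "cookie": [.20, .30, .40]}
cluster(list(vectors.values()), list(vectors.keys()))
```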

17. Each of these words represents a point in 150-dimensional space, averaged from all the activations generated by the network when processing that word. In the resulting cluster diagram, each joint (where there is a connection) represents the distance between clusters. So, for example, the distance between animals and humans is approximately .85, and the distance between ANIMATES and INANIMATES is approximately 1.5.
