
Learning linguistic structure with simple recurrent networks


Presentation Transcript


  1. Learning linguistic structure with simple recurrent networks • February 20, 2013

  2. Elman’s Simple Recurrent Network (Elman, 1990) • What is the best way to represent time? • Slots? • Or time itself? • What is the best way to represent language? • Units and rules? • Or connectionist learning? • Is grammar learnable? • If so, are there any necessary constraints?

  3. The Simple Recurrent Network • The network is trained on a stream of elements with sequential structure. • At step n, the target for the output is the next element. • The pattern on the hidden units is copied back to the context units. • After learning, the network comes to retain information about preceding elements of the string, allowing its expectations to be conditioned on an indefinite window of prior context.
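A minimal sketch of the architecture just described, in NumPy: the hidden pattern is copied back to the context units after each step, and the target at step n is the next element of the stream. The vocabulary, layer sizes, learning rate, and one-step weight update are illustrative assumptions, not Elman's (1990) actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

V, H = 5, 10                            # vocabulary size, hidden units (illustrative)
W_xh = rng.uniform(-0.1, 0.1, (H, V))   # input   -> hidden
W_ch = rng.uniform(-0.1, 0.1, (H, H))   # context -> hidden
W_hy = rng.uniform(-0.1, 0.1, (V, H))   # hidden  -> output
lr = 0.1

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def step(x, context):
    """One forward step: the hidden state depends on the current input and
    on the copy of the previous hidden state held in the context units."""
    h = np.tanh(W_xh @ x + W_ch @ context)
    y = np.exp(W_hy @ h)
    y /= y.sum()                        # softmax prediction of the next element
    return h, y

# Train on a toy repeating sequence; the target at step n is element n+1.
sequence = [0, 1, 2, 3, 4] * 200
context = np.zeros(H)
for t in range(len(sequence) - 1):
    x = one_hot(sequence[t], V)
    target = one_hot(sequence[t + 1], V)
    h, y = step(x, context)

    # One-step gradient update (no backprop through time), in the spirit of
    # the copy-back training scheme.
    dy = y - target                     # cross-entropy gradient at the output
    dh = (W_hy.T @ dy) * (1 - h ** 2)   # backprop through tanh
    W_hy -= lr * np.outer(dy, h)
    W_xh -= lr * np.outer(dh, x)
    W_ch -= lr * np.outer(dh, context)

    context = h.copy()                  # copy hidden pattern back to context units
```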

  4. Learning about words from streams of letters (200 sentences of 4-9 words) • Similarly, SRNs have also been used to model learning to segment words in speech (e.g., Christiansen, Allen, & Seidenberg, 1998)
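A small sketch of the letter-in-stream setup: sentences are concatenated into a single letter stream with no word boundaries, and the input/target pairs are successive letters. The example sentences and 26-letter alphabet below are assumptions for illustration; after training, prediction error tends to be high at word onsets and to fall within words, which is the cue for segmentation.

```python
# Illustrative letter-prediction setup (the sentences here are made up).
letters = "abcdefghijklmnopqrstuvwxyz"
idx = {c: i for i, c in enumerate(letters)}

sentences = ["manyyearsagoaboyandgirllivedbythesea", "theylovedeachother"]
stream = "".join(sentences)             # no word boundaries in the input

# Inputs are one-hot letters; the target at position n is letter n + 1.
pairs = [(idx[stream[n]], idx[stream[n + 1]]) for n in range(len(stream) - 1)]
print(pairs[:5])
```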

  5. Learning about sentence structure from streams of words

  6. Learned and imputed hidden-layer representations (average vectors over all contexts) • 'Zog' representation derived by averaging vectors obtained by inserting the novel item in place of each occurrence of 'man'.
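A sketch of how the averaged and imputed representations could be computed. The get_hidden() helper is a hypothetical stand-in for the trained network's forward pass (faked here with seeded random vectors), and the tiny corpus is made up for illustration.

```python
import numpy as np

def get_hidden(word, context):
    """Hypothetical stand-in for the trained SRN: return the hidden vector
    for `word` presented in `context` (faked here with a seeded RNG)."""
    seed = sum(map(ord, word + context))
    return np.random.default_rng(seed).normal(size=10)

corpus = [("man", "the man eats"), ("man", "a man sleeps"),
          ("woman", "the woman eats")]

# Average each word's hidden vector over all contexts in which it occurs.
by_word = {}
for word, context in corpus:
    by_word.setdefault(word, []).append(get_hidden(word, context))
avg = {w: np.mean(vs, axis=0) for w, vs in by_word.items()}

# Imputed 'zog' representation: insert the novel item in place of each
# occurrence of 'man' and average the resulting hidden vectors.
zog = np.mean([get_hidden("zog", c) for w, c in corpus if w == "man"], axis=0)
```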

  7. Within-item variation by context

  8. Analysis of SRNs using Simpler Sequential Structures (Servan-Schreiber, Cleeremans, & McClelland) • Figure panels: The Grammar; The Network
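A sketch of a small finite-state grammar generator of the kind used in these studies. The transition table below is an illustrative Reber-style grammar, not necessarily the exact one shown in the figure.

```python
import random

# state -> list of (symbol emitted, next state); an empty list marks the end state.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
    5: [],                              # accepting state
}

def generate():
    """Walk the finite-state machine, emitting 'B' ... 'E' delimiters."""
    state, symbols = 0, ["B"]
    while GRAMMAR[state]:
        sym, state = random.choice(GRAMMAR[state])
        symbols.append(sym)
    symbols.append("E")
    return "".join(symbols)

random.seed(0)
print([generate() for _ in range(5)])
```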

  9. Hidden unit representations with 3 hidden units • Figure panels: True Finite State Machine; Graded State Machine

  10. Training with a Restricted Set of Strings • 21 of the 43 valid strings of length 3-8

  11. Progressive Deepening of the Network's Sensitivity to Prior Context • Note: prior context is only maintained if it is prediction-relevant at intermediate points.

  12. Elman (1991)

  13. NV Agreement and Verb Successor Prediction • Histograms show summed activation for classes of words: • W = who • S = period • V1/V2 / N1/N2/PN indicate singular, plural, or proper • For V's: • N = No DO • O = Optional DO • R = Required DO

  14. Prediction with an embedded clause

  15. PCA components representing agreement and Verb Argument Constraints
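A sketch of the analysis behind plots like these: collect the hidden vector at each word position while the trained network processes test sentences, run PCA on the collected states, and plot trajectories along selected components. The hidden states and component indices below are placeholders; in the real analysis they come from the trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(500, 70))   # placeholder (time steps, hidden units)

# PCA via SVD of the mean-centered hidden-state matrix.
mean = hidden_states.mean(axis=0)
U, S, Vt = np.linalg.svd(hidden_states - mean, full_matrices=False)
components = Vt                              # principal directions, one per row

# Project one sentence's hidden-state trajectory onto two chosen components
# (indices 0 and 10 are illustrative) to visualize agreement / clause structure.
sentence_states = hidden_states[:8]
trajectory = (sentence_states - mean) @ components[[0, 10]].T
print(trajectory.shape)                      # (words in sentence, 2)
```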

  16. Components tracking constituents within clauses of different types.

  17. Role of Prediction Relevance of Head in Carrying Context Across an Embedding • If the network at right is trained with symmetrical embedded strings, it does not reliably carry the prior context through the embedding (and thus fails to correctly predict the final letter, especially for longer embeddings). • If, however, subtle asymmetries in the transitional probabilities are introduced (as shown), performance in predicting the correct letter after emerging from the embedding becomes perfect (although very long strings were not tested). • This happens because the initial context 'shades' the representation, as shown on the next slide.

  18. Hidden unit reps in the network trained on the asymmetrical embedded sub-grammars • Representations of the same internal sequence in different sub-grammars are more similar than those of different sequences in the same sub-grammar • The model is capturing the similarity of nodes across the two sub-grammars • Nonetheless, it is able to shade these representations in order to allow it to predict the correct final token

  19. Importance of Starting Small? • Elman (1993) found that his 1991 network did not learn a corpus of sentences with many embeddings if the training corpus was held constant from the start. • However, he found that training went much better if he either: • Started with only simple sentences and gradually increased the fraction of sentences with embedded clauses, or • Started with a limited memory (erasing the context after 3 time steps), then gradually increased the interval between erasures. • This forced the network to 'start small', and seemed to help learning.
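A sketch of the limited-memory manipulation, assuming a hypothetical SRN object that exposes a hidden_size attribute and a train_step(x, target, context) method returning the new hidden state; the erasure schedule values beyond the initial 3 are illustrative.

```python
import numpy as np

def train_with_limited_memory(network, stream, erase_every):
    """Train while wiping the context units every `erase_every` steps.
    `network` is a hypothetical SRN wrapper; the slide describes starting
    with an interval of 3 and gradually lengthening it."""
    context = np.zeros(network.hidden_size)
    for t, (x, target) in enumerate(stream):
        if t % erase_every == 0:
            context = np.zeros(network.hidden_size)   # erase prior context
        context = network.train_step(x, target, context)
    return network

# Illustrative schedule: relax the memory limit phase by phase.
# for erase_every in (3, 4, 5, 7, 1_000_000):
#     train_with_limited_memory(network, stream, erase_every)
```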

  20. A Failure of Replication • Rohde and Plaut revisited 'Starting Small'. • They considered the effects of adding semantic constraints. • They also used different training parameters; with Elman's parameters, the network appeared to settle into local minima.

  21. Grammar and Semantic Constraints • Complex regimen: 75% of sentences contain embeddings throughout. • Simple regimen: start without embeddings, then increment in steps up to 75%.
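A sketch contrasting the two regimens as corpus-sampling schedules. The sentence pools, corpus sizes, and the intermediate percentages in the simple regimen are assumptions for illustration; the slide only specifies a start without embeddings rising to 75%.

```python
import random

def make_corpus(n_sentences, p_embedded, simple_pool, embedded_pool):
    """Sample a corpus in which proportion `p_embedded` of sentences
    contain an embedded clause."""
    return [random.choice(embedded_pool if random.random() < p_embedded
                          else simple_pool)
            for _ in range(n_sentences)]

simple_pool = ["the dog runs .", "the cats sleep ."]          # made-up examples
embedded_pool = ["the dog who chases the cats runs ."]

random.seed(0)
# Complex regimen: 75% embedded sentences from the start, throughout training.
complex_phases = [make_corpus(1000, 0.75, simple_pool, embedded_pool)
                  for _ in range(4)]
# Simple regimen: start with no embeddings and step up to 75%.
simple_phases = [make_corpus(1000, p, simple_pool, embedded_pool)
                 for p in (0.0, 0.25, 0.50, 0.75)]
```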

  22. Rohde and Plaut generally found an advantage for 'starting big': • Performance was generally better using the final corpus, in which 75% of sentences contain embeddings from the start (Complex regimen), compared to starting with only simple sentences and gradually increasing the % of embeddings (Simple regimen). • An advantage for starting small only occurred when: • The final corpus contained embeddings in 100% of sentences, and • Semantic and agreement constraints between the head noun and embedded verb were both completely eliminated (Corpus A', in the 100% embeddings condition). • Note: conditions A-E are ordered by the proportion of sentences in which semantic constraints operate between the head noun and the subordinate clause (from 0 to 1.0). In A through E, the subordinate verb always agrees in number with the head noun where appropriate; not so in A'.

  23. Effect of initial weight range (Elman used +/- .001)

  24. Discussion • Specific questions about the SRN: • Can it be applied successfully to other tasks? • Is its way of representing context psychologically realistic? • Can something like it be scaled up to address languages with large vocabularies? • More general questions about language • Is language acquisition a matter of learning a grammar? • Are innate constraints required to learn it?
