Statistical approaches to language learning

Presentation Transcript


  1. Statistical approaches to language learning John Goldsmith Department of Linguistics May 11, 2000

  2. Trends in the study of language acquisition 1 Chomsky-inspired: “principles and parameters” (since 1979) 2 Transcribe and write a grammar 3 Compute statistics, and develop a minimum description length (MDL) analysis

  3. 1. Principles and parameters The variation across languages boils down to two things: • alternate settings of a small set of “parameters” (a few hundred?), each of which has only a small number of possible settings (2? 3? 4?) • learning some arbitrary facts, like the pronunciation of words

  4. What’s a “parameter”, for instance? 1. Pro-drop parameter: yes/no. Yes? Spanish, Italian. Subject is optional; subject may appear before or after the verb; verb agrees with the subject (present or absent) with overt morphology. No? English, French. Subject is obligatory. Dummy subjects (It is raining, There is a man at the door.)

  5. Or, noun-adjective order… Noun precedes adjective: French, Spanish: F. la voiture rouge “the red car” but literally “the car red” Noun follows adjective: English

  6. Criticisms: 1. This approach intentionally puts a lot of information into the innate language “faculty.” How can we be sure the linguist isn’t just cataloging a lot of differences between English and Spanish (e.g.) and proclaiming that this is a universal difference? 2. You don’t need an innate language faculty to realize that children have to learn whether nouns precede adjectives or not.

  7. 3. The theory is completely silent about the learning of morphemes and words. It implies (by the silence) that this stuff is easy to learn. But maybe it’s the hardest stuff to learn, requiring such a sophisticated learning apparatus that the grammar will be easy (by comparison) to learn.

  8. 2. Transcribe and write a grammar Long tradition; landmark is Roger Brown’s work in the 1960s. Value: extremely important empirical basis. Criticism: tells us very little about the how or the process of language acquisition.

  9. 3. Statistics and minimum description length Recent work -- probabilities in the lab: • Saffran, J., Aslin, R., & Newport, E. (1996). Statistical learning by 8-month-old infants. Science, 274, 1926-1928. They argue that even quite young children can extract information about the “chunking” of sounds into pieces on the basis of how frequently those sounds occur together.
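
A minimal sketch of the kind of statistic at issue (an illustration only, not Saffran et al.'s actual procedure; the nonsense-word stream and the function below are made up for the example): the transitional probability P(B | A) = freq(AB) / freq(A) tends to be high between syllables inside a word and lower across word boundaries, so dips in it suggest chunk boundaries.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next syllable | current syllable) for each adjacent pair in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy stream of three nonsense words (pa-bi-ku, ti-bu-do, go-la-tu) run together:
# within-word transitions come out at 1.0, word-boundary transitions at 0.5.
stream = "pa bi ku ti bu do go la tu pa bi ku go la tu ti bu do pa bi ku".split()
for pair, p in sorted(transitional_probabilities(stream).items()):
    print(pair, round(p, 2))
```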

  10. The linguist’s acquisition problem: What “must” happen in order for someone to end up knowing a particular language. We (linguists) can map out models (and run them on computers) that show how easy (or hard) it is to arrive at a grammar of English (etc.) on the basis of various assumptions.

  11. We can’t tell which kinds of information a child uses. But we can argue that learning X or Y is easier/harder/the same if you assume the child has access to certain kinds of data (e.g., semantic, grammatical).

  12. Probabilistic and statistical approaches The fundamental premise of probabilistic approaches to language is this: Degrees of (un)certainty can be quantified.

  13. Two problems of language acquisition that have been seriously tackled 2 closely related problems: 1. Segmenting an utterance into word-sized pieces (Brent, de Marcken, others) 2. Segmenting words into morphemes (Goldsmith)

  14. Minimum Description Length Jorma Rissanen (1989) [Diagram: Data → Analyzer → Analysis] Select the analyzer and the analysis such that the sum of their lengths is a minimum.
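
A minimal sketch of that selection criterion (the bit counts below are hypothetical numbers, just to show how the comparison works; real MDL code lengths are computed from the grammar and the encoded corpus):

```python
def total_length(analyzer_bits, analysis_bits):
    """MDL objective: length of the analyzer plus length of the analysis it assigns to the data."""
    return analyzer_bits + analysis_bits

def select(candidates):
    """Among rival (analyzer_bits, analysis_bits) pairs, pick the one with the smallest sum."""
    return min(candidates, key=lambda pair: total_length(*pair))

# Hypothetical numbers: a bigger analyzer that compresses the data better can still win.
print(select([(120.0, 310.0), (80.0, 390.0)]))   # -> (120.0, 310.0)
```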

  15. [Diagram: many rival Analyzer/Analysis pairs proposed for the same Data, etc...]

  16. The challenge Is to find a means of quantifying • the length of an analyzer, and • the length of an analysis

  17. “Compressed form of data?” Think of the data as a dense, rich, detailed description (evidence), and think of the compressed form as • a description in a high-level language, plus • a description of the particulars of the event in question (a.k.a. boundary conditions, etc.)...

  18. Example: Utterance: “theolddogandthenotsooldcatgotintotheyardwithoutanybodynoticing” -- 62 letters as it stands. Or set 1 = the, 2 = old, 3 = dog, 4 = not, and write 123and14so2catgotinto1yardwithoutanybody4icing. That is 46 symbols here plus 12 in the dictionary, a total of 58 -- shorter than the original 62.
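
A minimal sketch of the bookkeeping in this example (the utterance, dictionary, and encoded string are the slide's; the decoding check at the end is added here just to confirm that the compressed form really does reconstruct the data):

```python
utterance = "theolddogandthenotsooldcatgotintotheyardwithoutanybodynoticing"
dictionary = {"1": "the", "2": "old", "3": "dog", "4": "not"}
encoded = "123and14so2catgotinto1yardwithoutanybody4icing"

raw_cost = len(utterance)                                    # 62 letters as it stands
dictionary_cost = sum(len(w) for w in dictionary.values())   # 12 letters in the dictionary
encoded_cost = len(encoded)                                  # 46 symbols in the compressed string
print(raw_cost, dictionary_cost + encoded_cost)              # 62 vs. 58

# Sanity check: expanding the digits back restores the original utterance.
decoded = "".join(dictionary.get(ch, ch) for ch in encoded)
assert decoded == utterance
```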

  19. Compare with Early Generative Grammar (EGG) [Diagram: Data + Linguistic Theory → Analysis 1, Analysis 2 → Preference: A1/A2]

  20. [Diagrams: a Linguistic Theory that, given Data and a candidate Analysis, answers Yes/No; and a Linguistic Theory that, given Data plus Analysis 1 and Analysis 2, answers “1 is better” or “2 is better”]

  21. Implicit in EGG was the notion... that the best Linguistic Theory could be selected by... getting a set of n candidate LTs; submitting to each a set of corpora; searching (using unknown heuristics) for the best analyses of each corpus within each LT. The winning LT is the one for which the sum total of all of the analyses is the smallest.

  22. No cost to UG • In EGG, there was no cost associated with the size of UG -- in effect, no plausibility measure.

  23. In MDL, in contrast…. • we can argue for a grammar for a given corpus. • We can also argue at the Linguistic Theory level if we so choose...

  24. Distinction between heuristics and “theory” • In the context of MDL, the heuristics are extratheoretical, but from the point of view of the (psycho-)linguist, they are very important. • The heuristics propose; the theory disposes.

  25. The goal: To produce a morphological analysis of a corpus from an “unknown” language automatically -- that is, with no knowledge of the structure of that language built in -- and to produce both generalizations about the language and a correct analysis of each word in the corpus.

  26. [Diagram: raw data → Linguistica → analyzed data]

  27. Implemented in Linguistica, a program that runs under Windows that you can download at: humanities.uchicago.edu/faculty/goldsmith

  28. Other work in this area • Derrick Higgins on Thursday; • Michael Brent 1993; • Zellig Harris: 1955 and 1967, follow-up: Hafer and Weiss 1974

  29. Global approach • Focus on devising a method for evaluating a hypothesis, given the data. • Finding explicit methods of discovery is important, but those methods play no role in evaluating the analysis for a given corpus. (Very similar in conception to Chomsky’s notion of an evaluation metric.)

  30. Framework for evaluation: Jorma Rissanen’s Minimum Description Length (“MDL”). Quite intricate; but we can get a very good feel for the general idea with a naïve version of MDL...

  31. Naive description length Count the total number of letters in the list of stems and affixes: the fewer, the better.

  32. Intuition: A word which is morphologically complex reveals that composite character by virtue of being composed of (one or more) strings of letters which have a relatively high frequency throughout the corpus.

  33. Naive description length: 2 Lexicographers know what they are doing when they indicate the entry for the verb laugh as laugh, ~s, ~ed, ~ing -- they recognize that the tilde “~” allows them to utilize the regularities of the language in order to save space and specification, and implicitly to underscore the regularity of the pattern that the stem possesses.

  34. Morphological analysis is not merely a matter of frequency. Not every word that ends in –ing is morphologically complex: string, sing, etc.

  35. Naive Minimum Description Length: Analyze the words of a corpus into stem + suffix with the requirement that every stem and every suffix must be used in at least 2 distinct words. Tally up the total number of letters in (a) each of the proposed stems, (b) each of the proposed suffixes, and (c) each of the unanalyzed words, and call that total the “naive description length”.

  36. Naive Minimum Description Length Corpus: jump, jumps, jumping; laugh, laughed, laughing; sing, sang, singing; the, dog, dogs -- total: 61 letters. Analysis: Stems: jump, laugh, sing, sang, dog (20 letters); Suffixes: s, ing, ed (6 letters); Unanalyzed: the (3 letters); total: 29 letters. Notice that the description length goes UP if we analyze sing into s + ing.
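
A minimal sketch of this bookkeeping on the slide's corpus (the stem/suffix split is stipulated here rather than discovered; discovering it is what the heuristics described below are for):

```python
corpus = ["jump", "jumps", "jumping",
          "laugh", "laughed", "laughing",
          "sing", "sang", "singing",
          "the", "dog", "dogs"]

# A stipulated analysis of the corpus into stems, suffixes, and unanalyzed words.
stems = {"jump", "laugh", "sing", "sang", "dog"}
suffixes = {"s", "ing", "ed"}
unanalyzed = {"the"}

def naive_description_length(stems, suffixes, unanalyzed):
    """Total letters in the stem list, the suffix list, and the unanalyzed-word list."""
    return (sum(len(t) for t in stems)
            + sum(len(t) for t in suffixes)
            + sum(len(t) for t in unanalyzed))

print(sum(len(w) for w in corpus))                              # 61: the raw corpus
print(naive_description_length(stems, suffixes, unanalyzed))    # 29: the analyzed lexicon

# Analyzing "sing" as s + ing adds the one-letter stem "s" and pushes the total UP to 30.
print(naive_description_length(stems | {"s"}, suffixes, unanalyzed))
```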

  37. Frequencies matter, but only in the overarching context of a total morphological analysis of all of the words of the language.

  38. Let’s look at how the work is done, step by step...

  39. Corpus Pick a large corpus from a language -- 5,000 to 1,000,000 words.

  40. Feed the corpus into the “bootstrapping” heuristic... [Diagram: corpus → bootstrap heuristic]

  41. Out of which comes a preliminary morphology, which need not be superb. [Diagram: corpus → bootstrap heuristic → morphology]

  42. Feed it to the incremental heuristics... [Diagram: ... → morphology → incremental heuristics]

  43. Out comes a modified morphology. [Diagram: ... → incremental heuristics → modified morphology]

  44. Is the modification an improvement? Ask MDL!

  45. If it is an improvement, replace the morphology... [Diagram: the old morphology goes to the garbage; the modified morphology takes its place]

  46. Send it back to the incremental heuristics again...

  47. Continue until there are no improvements to try (the loop is sketched below). [Diagram: morphology ⇄ incremental heuristics ⇄ modified morphology]
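
A minimal sketch of that loop (the bootstrap heuristic, the incremental heuristics, and the description-length function are placeholders passed in as callables; they stand in for whatever concrete heuristics and MDL computation one plugs in):

```python
def learn_morphology(corpus, bootstrap, incremental_heuristics, description_length):
    """Greedy MDL search: accept a proposed revision only if it shortens the description."""
    morphology = bootstrap(corpus)              # preliminary morphology, need not be superb
    best = description_length(morphology, corpus)
    improved = True
    while improved:
        improved = False
        for heuristic in incremental_heuristics:
            candidate = heuristic(morphology, corpus)   # proposed modification
            cost = description_length(candidate, corpus)
            if cost < best:                             # ask MDL: is it an improvement?
                morphology, best = candidate, cost      # old morphology goes to the garbage
                improved = True
    return morphology
```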

  48. Bootstrapping... initial hypothesis = initial morphology of the corpus

  49. First: a set of candidate suffixes for the language, using some interesting statistics.

  50. 1. Observed frequency of a string (e.g., ing) 2. Predicted frequency of the same string if there were no morphemes in the language 3. The computed “stickiness” of that string 4. Weight the stickiness (3) by how often the string shows up in the corpus
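
A minimal sketch of steps 1-4, using word-final n-grams and an independence model built from single-letter frequencies; the particular stickiness formula here (log of observed over expected, weighted by the observed count) and the toy word list are illustrative assumptions, not necessarily the statistics Linguistica itself computes.

```python
import math
from collections import Counter

def candidate_suffixes(words, max_len=4, top=10):
    """Rank word-final strings by how much more often they occur than chance predicts."""
    letter_counts = Counter(ch for w in words for ch in w)
    total_letters = sum(letter_counts.values())
    letter_prob = {ch: c / total_letters for ch, c in letter_counts.items()}

    # Step 1: observed frequency of each word-final string of length 1..max_len.
    final_ngrams = Counter(w[-n:] for w in words for n in range(1, max_len + 1) if len(w) > n)

    scores = {}
    for ngram, observed in final_ngrams.items():
        # Step 2: predicted frequency if letters combined freely, with no morphemes.
        expected = len(words) * math.prod(letter_prob[ch] for ch in ngram)
        # Step 3: "stickiness" = how far the observed count exceeds the prediction.
        stickiness = math.log(observed / expected)
        # Step 4: weight the stickiness by how often the string shows up.
        scores[ngram] = observed * stickiness
    return sorted(scores, key=scores.get, reverse=True)[:top]

words = ["jump", "jumps", "jumping", "laugh", "laughed", "laughing",
         "sing", "sang", "singing", "walked", "walking", "talks", "talked"]
print(candidate_suffixes(words))
```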
