
Syntactic category acquisition



Presentation Transcript


  1. Syntactic category acquisition

  2. Early words (Clark 2003)

  3. Early words (Clark 2003) • people daddy, mommy, baby • animals dog, kitty, bird, duck • body parts eye, nose, ear • food banana, juice, apple, cheese • toys ball, balloon, book • clothes shoe, sock, hat • vehicles car, truck, boat • household items bottle, keys, bath, spoon • routines bye, hi, uh oh, night-night, thank you, no • activities up, down, back • sound imitation woof, moo, ouch, baa baa, yum yum • deictics that

  4. How do children learn syntactic categories such as nouns, verbs, and prepositions?

  5. The meaning of syntactic categories • Nouns typically denote objects, persons, animals (nouns are non-relational and atemporal; Langacker) • Verbs typically denote events and states (verbs are relational and temporal; Langacker)

  6. Cues for syntactic category acquisition • Semantic cues (Gentner 1982; Pinker 1984) • Pragmatic cues (Bruner 1975) • Phonological cues (Monaghan et al. 2005) • Distributional cues (Redington et al. 1998)

  7. Maratsos and Chalkley (1980) • Nouns: the __, X-s • Verbs: will __, X-ing, X-ed
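Frame-based categorization of this kind can be illustrated with a toy sketch (the corpus and the regex-over-orthography matching are hypothetical illustrations; the proposal concerns the child's learning mechanism, not pattern matching over text):

```python
import re
from collections import Counter

def frame_hits(corpus, frames):
    """Count which words fill each distributional frame ('__' marks the slot)."""
    hits = Counter()
    for frame in frames:
        # Turn a frame like 'the __' into a regex capturing the slot filler.
        pattern = re.compile(r"\b" + frame.replace("__", r"(\w+)") + r"\b")
        for sentence in corpus:
            hits.update(pattern.findall(sentence))
    return hits

corpus = ["the dog will bark", "the cat will sleep", "the dog will sleep"]
noun_candidates = frame_hits(corpus, ["the __"])   # dog, cat
verb_candidates = frame_hits(corpus, ["will __"])  # bark, sleep
```

Words recurring in noun frames (dog, cat) and verb frames (bark, sleep) fall into the two candidate categories.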

  8. Objections to distributional learning • ‘Noisy input data’ • Det Adj __ P N … Syntactic categories are commonly defined in terms of their distribution; thus, it cannot be a surprise that distributional information is informative about syntactic category status. The argument is trivial or even circular.

  9. Objections to distributional learning • Distributional learning mechanisms do not search blindly for all possible relationships between linguistic items, i.e. the search is focused on specific distributional cues (Redington et al. 1998). The vast number of possible relationships that might be included in a distributional analysis is likely to overwhelm any distributional learning mechanism in a combinatorial explosion. (Pinker 1984)

  10. Objections to distributional learning • This assumption crucially relies on Pinker‘s particular view of grammar. If you take a construction grammar perspective, grammar (or syntax) is much more concrete (Redington et al. 1998). The interesting properties of linguistic categories are abstract and such abstract properties cannot be detected in the input. (Pinker 1984)

  11. Objections to distributional learning Even if the child is able to determine certain correlations between distributional regularities and syntactic categories, this information is of little use because there are so many different cross-linguistic correlations that the child would not know which ones are relevant in his/her language. (Pinker 1984) • Syntactic categories vary to some extent across languages (i.e. there are no fixed categories). Children can recognize any distributional pattern regardless of the particular properties that categories in different languages may have (Redington et al. 1998)

  12. Objections to distributional learning • Children do not learn categories from isolated examples (Redington et al. 1998). Spurious correlations will occur in the input that will be misleading. For instance, a child who hears John eats meat, John eats slowly, and The meat is good may erroneously infer that The slowly is good is a possible English sentence. (Pinker 1984)

  13. Redington et al. 1998 - Data All adult speakers of the CHILDES database (2.5 million words). Bigram statistics: • Target words: 1000 most frequent words in the corpus • Context words: 150 most frequent words in the corpus • Context size: 2 words preceding + 2 words following the target word, e.g. x the __ of x | in the __ x x | will have __ the x

  14. Bigram statistics Context vectors: • Target word 1: 210-321-2-0 • Target word 2: 376-917-1-5 • Target word 3: 0-1-1078-1298 • Target word 4: 1-4-987-1398
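Building such context vectors can be sketched as follows (toy corpus; the position-specific counting approximates Redington et al.'s procedure, not their exact implementation):

```python
from collections import Counter

def context_vector(corpus, target, context_words, window=2):
    """Position-specific co-occurrence counts for `target` within a
    +/- `window` word context, concatenated into one flat vector."""
    positions = list(range(-window, 0)) + list(range(1, window + 1))
    counts = {pos: Counter() for pos in positions}
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok != target:
                continue
            for pos in positions:
                j = i + pos
                if 0 <= j < len(tokens) and tokens[j] in context_words:
                    counts[pos][tokens[j]] += 1
    # Concatenate the per-position counts into one flat vector.
    return [counts[pos][w] for pos in positions for w in context_words]

vec = context_vector(["the dog ate the bone"], "dog", ["the"])
```

Here "the" occurs once immediately before "dog" and once two words after, so the four position slots yield [0, 1, 0, 1].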

  15. Statistical analysis • Hierarchical cluster analysis over context vectors: dendrogram • Treatment of polysemous words • ‘Slicing’ of the dendrogram • Comparison of the clusters of the dendrogram to a ‘benchmark’ (Collins Cobuild lexical dictionary)
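Using the four context vectors from slide 14 as toy data, the cluster analysis and dendrogram ‘slicing’ can be sketched with SciPy (the word labels are hypothetical, and city-block distance over normalised rows stands in for the distance measure of the original study):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# The four context vectors from slide 14; the word labels are invented.
words = ["dog", "cat", "run", "jump"]
vectors = np.array([
    [210, 321, 2, 0],
    [376, 917, 1, 5],
    [0, 1, 1078, 1298],
    [1, 4, 987, 1398],
], dtype=float)

# Normalise rows so clustering reflects the context profile, not raw frequency.
vectors /= vectors.sum(axis=1, keepdims=True)

# Average-link hierarchical clustering builds the dendrogram.
Z = linkage(vectors, method="average", metric="cityblock")

# 'Slice' the dendrogram into two clusters.
labels = fcluster(Z, t=2, criterion="maxclust")
clusters = {w: int(c) for w, c in zip(words, labels)}
```

The first two vectors share a context profile and end up in one cluster, the last two in the other, mirroring a noun-like vs verb-like split.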

  16. Hierarchical cluster analysis

  17. Exp 1: Context size Result: Local contexts have the strongest effect, notably the word immediately preceding the target word is important. "Learners might be innately biased towards considering only these local contexts, whether as a result of limited processing abilities (e.g. Elman 1993) or as a result of language specific representational bias." (Redington et al. 1998)

  18. Exp 2: Number of target words [Figure: level of accuracy as a function of the number of target words] Distributional learning is most efficient for high frequency open class words.

  19. Exp 3: Category type Result: nouns < verbs < function words "Although content words are typically much less frequent, their context is relatively predictable … Because there are many more content words, the context of function words will be relatively amorphous." (Redington et al. 1998)

  20. Exp 4: Corpus size [Figure: level of accuracy as a function of corpus size (number of words)]

  21. Exp 5: Utterance boundaries Result: Including information about utterance boundaries did not improve the level of accuracy.

  22. Exp 6: Frequency vs occurrence ‘Frequency vectors’ were replaced by ‘occurrence vectors’: • 27-0-12-0-0-12-2 → 1-0-1-0-0-1-1 • 0-213-2-1-45-3-0 → 0-1-1-1-1-1-0 Result: The cluster analysis still revealed significant clusters, but performance was much better when frequency information was included.
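The transformation in Exp 6 is simply a binarisation of each vector, as a minimal sketch shows:

```python
def to_occurrence(freq_vector):
    """Collapse a frequency vector into a binary occurrence vector (Exp 6)."""
    return [1 if count > 0 else 0 for count in freq_vector]

print(to_occurrence([27, 0, 12, 0, 0, 12, 2]))  # [1, 0, 1, 0, 0, 1, 1]
```

All information about how often a context occurs is discarded; only whether it ever occurs is kept, which is what degrades clustering performance.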

  23. Exp 7: Removing function words Early child language includes very few function words. Thus, Redington et al. removed all function words from the context and repeated the cluster analysis. Result: Accuracy decreased but remained significant.

  24. Exp 8: Knowledge of word classes The cluster analyses were performed over the distribution of individual items. It is conceivable that the child recognizes at some point discrete syntactic categories (cf. semantic bootstrapping), which may facilitate the categorization task. Result: Representing particular word classes through discrete category labels (e.g. N), does not improve the categorization of other categories (e.g. V).

  25. Mintz et al. 2002. Cognitive Science (1) The man [in the yellow car] … (2) She [has not yet been] to NY. • 1. Information about phrasal boundaries improves performance. • 2. Local contexts have the strongest effect (cf. Redington et al. 1998). • 3. The results for Ns are better than the results for Vs (cf. Redington et al. 1998).

  26. Monaghan et al. 2005. Cognition (1) Nouns vs. verbs (2) Open class vs. closed class. 1. Distributional information 2. Phonological information

  27. Phonological features of syntactic categories • Length Open class words are longer than closed class words • Stress Closed class words usually do not carry stress • Stress Nouns tend to be more often trochaic than verbs (i.e. verbs are often iambic) • Consonants Closed class words have fewer consonant clusters • Reduced vowels Closed class words include a higher proportion of reduced vowels than open class words

  28. Phonological features of syntactic categories • Interdentals Closed class words are more likely to begin with an interdental fricative than open class words • Nasals Nouns are more likely than verbs to include nasals • Final voicing Nouns are more likely than verbs to end in a voiced consonant • Vowel position Nouns tend to include more back vowels than verbs • Vowel height The vowels of verbs tend to be higher than the vowels of nouns
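A few of these cues can be approximated over plain orthography (a rough sketch; Monaghan et al. worked with phonetic transcriptions, and the voiced-final letter set below is an assumed proxy, not part of their study):

```python
# Rough orthographic proxies for some of the cues listed above; a real
# analysis would use phonetic transcriptions rather than spelling.
VOICED_FINAL_LETTERS = set("bdgvzmnlrw")  # assumed proxy for final voicing

def phon_features(word):
    return {
        "length": len(word),                               # open vs closed class cue
        "final_voiced": word[-1] in VOICED_FINAL_LETTERS,  # noun vs verb cue
        "has_nasal": any(c in "mn" for c in word),         # noun vs verb cue
    }

phon_features("dog")  # short, voiced final consonant, no nasal
```

Feature dictionaries like these could feed the same kind of clustering as the distributional vectors, which is how the two cue types are compared.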

  29. Results Phonological features do not just reinforce distributional information, but seem to be especially powerful in domains in which distributional information is not so easily available. • Distributional information is especially useful for categorization of high frequency open class words. • Phonological information is more useful for categorization of low frequency open class words (Zipf 1935). • Phonological information is also useful for the distinction between open and closed class words.
