
Using Extra-Linguistic Cues to Identify Good Word Learning Instances




Presentation Transcript


  1. Using Extra-Linguistic Cues to Identify Good Word Learning Instances. Tamara Nicol Medina, John Trueswell, Lila Gleitman (University of Pennsylvania); Jesse Snedeker (Harvard University). Society for Research in Child Development, April 2, 2009, Denver, CO

  2. Just look at the world! • Observe physical and temporal contingencies between words and objects. • (At least, for physically observable objects.) • Experimental evidence supports ease of mapping • Fast mapping (e.g., Carey, 1978; Mervis & Bertrand, 1994; Behrend et al., 2001; Jaswal & Markman, 2001) • Cross-situational word learning (e.g., Yu & Smith, 2007; Smith & Yu, 2008; Vouloumanos, 2008; Xu & Tenenbaum, 2007)

  3. It’s Not that Easy! (Augustine, Locke, Quine, Gleitman, Fodor, Siskind, etc.) • Reference problem • Book? Cat? Shoes? Chair? Cheerios? Cup? Rug? Pants? Head? Hand? … • Frame problem • Dog or Puppy? Hand or Finger? Red or Ball? • Naturalistic learning conditions • Medina, Trueswell, Snedeker, & Gleitman. (2008). When the shoe fits: Cross-situational word learning in realistic learning environments. BUCLD.

  4. How do learners narrow down the possibilities? • Linguistic context (Landau & Gleitman, 1985; Gleitman, 1990; Gillette, Gleitman, Gleitman, & Lederer, 1999) • Learning biases • Whole object constraint (Markman, 1989) • Mutual exclusivity (Markman & Wachtel, 1988; Markman, Wasow, & Hansen, 2003) • Social-attentional cues (e.g., Baldwin, 1991, 1993; Tomasello & Akhtar, 1995; Bloom, 2002; Behne, Carpenter, & Tomasello, 2005)

  5. Social-Attentional Cues • Nonverbal cues can reduce the range of possible interpretations. • Direction of speaker eye-gaze (Baldwin, 1991, 1993; Trueswell & Gleitman, 2003; Nappa, Wessel, McEldoon, Gleitman, & Trueswell, 2009) • Joint attention: occurs naturally when parent and child are focused on the same thing at the same time (Baldwin, 1991; Bruner, 1978) • ~70% of mothers' utterances (Collis, 1977; Harris, Jones, & Grant, 1983; Tomasello & Todd, 1983) • Positively associated with early vocabulary acquisition (Tomasello & Todd, 1983; Harris, Jones, Brookes, & Grant, 1986; Tomasello, Mannle, & Kruger, 1986; Akhtar, Dunham, & Dunham, 1991)

  6. Quality of Learning Instances (Baldwin, 1991) • But what about the lack of perfect contingency between word and referent?

  7. Follow-In vs Discrepant Labeling “Look! A dax!”

  8. Quality of Learning Instances (Baldwin, 1991) • But what about the lack of perfect contingency between word and referent? • Follow-in labeling: eye gaze, voice direction, and body posture oriented toward the object the child is currently focused on • 16- to 19-month-olds mapped the word correctly • Discrepant labeling: eye gaze, voice direction, and body posture directed at a hidden (but previously seen) object, while the infant is focused on another object • Infants did not map the word to the focused object.

  11. Social-attentional cues in interaction (Frank, Goodman, & Tenenbaum, In Press) • Rollins corpus (CHILDES): mom and baby (6 mo) • Social-attentional cues • Infant: Hands, Mouth (infant only), “Touch”, Looking (direction of eye gaze) • Caregiver: Hands, “Touch”, Looking (direction of eye gaze) • Cross-situational word-learning model successfully discovered the mappings between words and objects. • Joint attention? Follow-in? • What would the interaction look like if the child were initiating actions?
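Frank, Goodman, & Tenenbaum's model is a probabilistic model of speaker intention and reference; purely as an illustration of the cross-situational idea itself, the sketch below simply counts word-object co-occurrences across situations, where the candidate objects in each situation could be those picked out by social-attentional cues. Function names and the toy input are assumptions for illustration, not their model or the Rollins corpus.

```python
from collections import defaultdict

def cross_situational_counts(situations):
    """Accumulate word-object co-occurrence counts across situations.

    `situations` is a list of (words, objects) pairs: the words in an
    utterance and the candidate referents present (e.g., those highlighted
    by social-attentional cues).  A counting sketch, not the Bayesian model
    described on the slide.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for words, objects in situations:
        for word in words:
            for obj in objects:
                counts[word][obj] += 1
    return counts

def best_referent(counts, word):
    """Return the object most often co-present when `word` was uttered."""
    if word not in counts or not counts[word]:
        return None
    return max(counts[word], key=counts[word].get)

# Hypothetical toy input (not corpus data): "ball" maps to the ball because
# it co-occurs with the ball more consistently than with anything else.
situations = [
    (["look", "a", "ball"], ["ball", "dog"]),
    (["the", "ball"], ["ball", "cup"]),
    (["nice", "dog"], ["dog", "cup"]),
]
counts = cross_situational_counts(situations)
print(best_referent(counts, "ball"))  # -> "ball"
```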

  12. Our Goals • Look at a representative sample of parent-child interactions. • Explore the conditions under which word meaning is transparent (or not) from extra-linguistic cues alone: • Presence (or absence) of cues • Timing and coordination of cues • Joint attention? • Follow-in?

  13. Selection of Stimuli • Large video corpus of parent-child interactions in natural settings (home, outdoors, etc.) • Snedeker, J. (2001). Interactions between infants (12-15 months) and their parents in four settings. Unpublished corpus.

  14. Selection of Stimuli • Word learning “norming” study • Gertner, Y., Fisher, C., Gleitman, L., Joshi, A., & Snedeker, J. (In progress). Machine implementation of a verb learning algorithm. • Adaptation of the Human Simulation Paradigm (Gillette, Gleitman, Gleitman, & Lederer, 1999; Snedeker & Gleitman, 1999) • Randomly selected six instances of highly frequent content words. • Each instance was edited into a 40-second “vignette”. • Sound turned off. • Visual context was the only cue to word meaning, placing viewers in the situation of the early word learner. • Utterance of target word (at 30 sec) indicated by a BEEP. • Viewers guessed the “mystery” word in each vignette.

  15. [Vignette timeline: silent video for 30 sec, a <BEEP> marking the utterance of the target word at 30 sec, followed by 10 more sec of silent video. Drawings courtesy of Emily Trueswell.]

  17. Selection of Stimuli • Two types of vignettes • “High Informative” (HI) – vignettes guessed by >50% of participants • Rare (only 7% of vignettes); all were basic-level objects. • “Low Informative” (LI) – vignettes guessed by <33% of participants • Stimuli for the current study • 8 nouns: bag, ball, book, horse, necklace, nose, phone, shoe • One HI vignette and one LI vignette per noun
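As an illustration of the binning step, here is a minimal sketch of classifying vignettes by adult guess rate, using the >50% and <33% cutoffs from the slide; the function name and the example rates are hypothetical, not the study's data.

```python
def classify_vignettes(guess_rates, hi_cutoff=0.50, li_cutoff=0.33):
    """Bin vignettes by the proportion of adult viewers who guessed the word.

    `guess_rates` maps a vignette id to the proportion of correct guesses.
    High Informative: guessed by more than 50% of participants.
    Low Informative: guessed by fewer than 33% of participants.
    Vignettes in between fall into neither bin.
    """
    high = {v for v, rate in guess_rates.items() if rate > hi_cutoff}
    low = {v for v, rate in guess_rates.items() if rate < li_cutoff}
    return high, low

# Hypothetical guess rates for illustration only.
rates = {"ball_1": 0.72, "ball_2": 0.10, "shoe_1": 0.40}
hi, li = classify_vignettes(rates)
print(hi)  # {'ball_1'}
print(li)  # {'ball_2'}
```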

  18. Pilot Study: Children • N = 12 (ages 3;1 to 5;4) • Modified for fun! • Shorter vignettes with funny noises • “What do you think the parent said?” • Celebratory animation

  19. Pilot Study: Children [Chart: children’s guessing performance on HI vs. LI vignettes; χ² = 3.84, significant (*).]

  20. Extra-Linguistic Cue Coding • Is the target object visible in the scene (on screen)? • Is the child moving or reaching towards the object? • Is the child handling the object? • Is the child looking at the object? • Is the child looking at the parent? • Is the parent moving or reaching towards the object? • Is the parent handling the object? • Is the parent looking at the object? • Is the parent looking at the child?
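The yes/no questions above amount to a per-moment coding record. Below is a minimal sketch of one way such a record could be represented; the field names are paraphrases of the slide's questions, not the authors' actual coding scheme.

```python
from dataclasses import dataclass

@dataclass
class CueCode:
    """One coded time point in a vignette.

    Each boolean answers one of the coding questions on the slide;
    field names are illustrative, not the authors' coding software.
    """
    time: float                      # seconds from vignette start
    object_visible: bool             # target object on screen?
    child_reaching: bool             # child moving/reaching toward object?
    child_handling: bool             # child handling object?
    child_looking_at_object: bool
    child_looking_at_parent: bool
    parent_reaching: bool            # parent moving/reaching toward object?
    parent_handling: bool
    parent_looking_at_object: bool
    parent_looking_at_child: bool
```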

  21. Presence of Target Object [Chart: presence of the target object in HI vs. LI vignettes; error bars reflect Standard Error of the Mean.]

  22. Cue Occurrence at Word Onset [Chart: rates of cue occurrence at word onset in HI vs. LI vignettes; χ² = 4.00, χ² = 7.27, χ² = 4.27, each significant (*).]
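The χ² values reported on the slide are the authors'. Purely as an illustration, a 2×2 test of whether a given cue is present at word onset more often in HI than in LI vignettes could be run as below; the counts are hypothetical placeholders, not the study's data.

```python
from scipy.stats import chi2_contingency

# Rows: HI vignettes, LI vignettes; columns: cue present / absent at word onset.
# Counts are hypothetical placeholders, not the counts behind the slide.
observed = [[7, 1],   # HI: cue present, cue absent
            [2, 6]]   # LI: cue present, cue absent
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```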

  23. Joint Attention at Word Onset • Defined as: Child Looking at and/or Handling Object AND Parent Looking at Object • [Chart: joint attention at word onset in HI vs. LI vignettes; χ² = 4.00, significant (*).]
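The joint-attention definition on this slide reduces to a small boolean check over the coded cues; the function below is an illustrative paraphrase, not the authors' coding software.

```python
def joint_attention(child_looking_at_object: bool,
                    child_handling_object: bool,
                    parent_looking_at_object: bool) -> bool:
    """Joint attention as defined on the slide: the child is looking at
    and/or handling the object AND the parent is looking at the object."""
    return (child_looking_at_object or child_handling_object) and parent_looking_at_object
```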

  24. What is the timing of cues? • Follow-in? • Parent refers to object under child’s focus of attention. • First onset of cues relative to word onset.
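A minimal sketch of computing a cue's first onset relative to word onset from a coded timeline; the data format, function name, and example values are assumptions for illustration.

```python
def first_onset_relative_to_word(cue_times, word_onset):
    """Return the earliest time (in seconds) at which a cue is active,
    expressed relative to the onset of the target word.

    `cue_times` is an iterable of (time, active) pairs from a coded vignette;
    a negative result means the cue began before the word was uttered.
    Returns None if the cue never occurs.
    """
    onsets = [t for t, active in cue_times if active]
    if not onsets:
        return None
    return min(onsets) - word_onset

# Hypothetical coded timeline (not the study's data): the child starts
# looking at the object 2 s before the word is said at t = 30 s.
looking = [(27.0, False), (28.0, True), (29.0, True), (31.0, True)]
print(first_onset_relative_to_word(looking, word_onset=30.0))  # -> -2.0
```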

  25. First Onset of Cues [Chart; error bars reflect Standard Error of the Mean.] • Child Looking at Object: t(1,12) = 1.56, p = 0.14 • Child Moving/Reaching Toward Object: t(1,12) = 2.05, p = 0.06 • Child Handling Object: t(1,12) = 2.96, p = 0.01 • Parent Handling Object: t(1,8) = 1.09, p = 0.31

  26. First Onset of Cues [Chart; error bars reflect Standard Error of the Mean.] • Parent Looking at Child: t(1,13) = 0.54, p = 0.59 • Parent Looking at Object: t(1,12) = 0.08, p = 0.93 • Child Looking at Parent: t(1,6) = 0.02, p = 0.98 • Parent Moving/Reaching Toward Object: t(1,7) = 0.15, p = 0.89
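The t statistics on these two slides are the authors'. As an illustration only, an independent-samples comparison of first-onset times between HI and LI vignettes could be sketched as below; the onset values are hypothetical, not the study's data.

```python
from scipy.stats import ttest_ind

# Hypothetical first-onset times (seconds relative to word onset),
# for illustration only.
hi_onsets = [-4.1, -3.5, -2.8, -5.0]
li_onsets = [-1.0, -0.5, -1.5, 0.2]
t, p = ttest_ind(hi_onsets, li_onsets)
print(f"t = {t:.2f}, p = {p:.3f}")
```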

  27. Differentiating HI and LI Vignettes • High Informative • Follow-In: Utterance of target word immediately after first onset of child’s shift in focus towards object. • Joint Attention: Co-occurring high rates of child’s attention to object and parent’s attention to child and object. • Low Informative • Delayed follow-in. • Low joint attention.

  28. Implications • Basic-level object terms provide a scaffold for further learning. • Word order, syntax, abstract lexical items, etc. • The social conditions for word learning identified by Bruner and Baldwin are vindicated: they are found in natural parent-child interactions. • Word learning is successful when cues align.
