
SPOKEN LANGUAGE COMPREHENSION


Presentation Transcript


  1. SPOKEN LANGUAGE COMPREHENSION Anne Cutler Addendum: How to study issues in spoken language comprehension

  2. The psycholinguist’s problem
  • We want to know HOW spoken language is comprehended
  • But the process of comprehension is a mental operation, invisible to direct inspection
  • So we have to devise ways of looking at the process indirectly (in the laboratory, mostly)
  • These laboratory methods often involve measuring RT – the speed with which a decision is made, or a target detected (see the sketch below)
  • It is important to know how to relate the laboratory task to the processes one wants to study!!! (a “linking hypothesis”)
  • And also to make sure that the task reflects what we want it to reflect, i.e. that there are no uncontrolled artifacts which might provide alternative interpretations….
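As a concrete illustration of the RT logic above, here is a minimal sketch in Python (not part of the lecture; the condition names and numbers are made up) of how decision times from two conditions are compared, with the linking hypothesis supplying the interpretation of any difference.

```python
# Minimal sketch, not from the lecture: the made-up numbers below stand in for
# decision RTs (ms) collected in two hypothetical conditions of some task.
from statistics import mean

rts = {
    "condition_A": [612, 587, 655, 630, 598],
    "condition_B": [541, 569, 525, 558, 547],
}

for condition, values in rts.items():
    print(f"{condition}: mean RT = {mean(values):.1f} ms")

# The linking hypothesis supplies the interpretation: e.g. faster responses in
# condition B are taken to index easier processing, provided no uncontrolled
# artifact (frequency, length, ...) offers an alternative account.
print(f"difference = {mean(rts['condition_A']) - mean(rts['condition_B']):.1f} ms")
```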

  3. The lectures so far
  • Lecture 1: speech is fast and continuous, but somehow listeners have to identify the words in it, because words are known entities, and identifying those known entities and putting them together is the only way to reach the (unknown) goal, i.e. the speaker’s message. But words themselves are not unique – they resemble one another and can be embedded within one another…. So speech can contain many spurious words, not part of the message
  • So do listeners process only the words which the speaker uttered, or do other words become activated and have to be eliminated?

  4. The lectures, contd.
  • Lecture 2: all the words which are simultaneously activated compete with one another
  • Lecture 3: although competition alone could explain how segmentation occurs, there are also processes of segmentation, and they differ across languages
  • Lecture 4: the process of activating words involves continuous evaluation of information in the speech signal; there is no necessary intermediate stage in which representations such as the syllable or the mora are extracted
  • Lecture 5: the lawful phonological processes which result in, say, lean being spoken as leam, or petit being spoken sometimes with a final vowel and sometimes with a final consonant, neither disrupt nor facilitate processing

  5. LEXICAL DECISION (hear words and decide: is that a real word?) Lexical decision is the simplest word processing task. For instance, it can tell us whether words can be recognised, or nonwords rejected, as soon as enough of the item has been heard that no other words are possible (the “uniqueness point”). Auditory lexical decision is dependent on word length – no response is possible until the end of the item, because what has sounded like a word so far could always turn out to be a nonword after all!
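To make the notion of a uniqueness point concrete, here is a minimal sketch (a toy lexicon, with ordinary spelling standing in for phonemic transcriptions; not Cutler’s materials) that computes, for each word, the first position at which no other word is still compatible with the input.

```python
# Minimal sketch: toy lexicon, with spelling as a rough stand-in for phonemes.
lexicon = ["captain", "captive", "capture", "cap", "cat", "octopus"]

def uniqueness_point(word, lexicon):
    """Return the 1-based position at which `word` diverges from every other entry."""
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        if not any(w != word and w.startswith(prefix) for w in lexicon):
            return i
    return None  # never unique here: the word is a prefix of another word (e.g. "cap")

for w in lexicon:
    print(w, "->", uniqueness_point(w, lexicon))
# A real-word response in lexical decision should not be possible before this
# point; a nonword response must in principle wait until the item has ended.
```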

  6. CROSS-MODAL PRIMING (hear prime, see target; decide: is it a real word?) Prime and target may be identical (e.g. give-GIVE), or related by association (give-TAKE). Cross-modal priming is a way of looking at what is activated when a listener hears speech – we measure the RT to decide whether the visual target is a real word, and if that RT varies when the spoken prime varies, then we have observed an effect of the spoken prime. Hearing “give” makes recognising TAKE easier.
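A minimal sketch of the cross-modal priming logic (the unrelated control prime “lamp” and all RTs are invented for illustration): the priming effect is simply the RT advantage for the target after a related spoken prime relative to an unrelated one.

```python
# Minimal sketch with invented numbers: lexical-decision RTs (ms) to the visual
# target TAKE, after a related spoken prime ("give") or an unrelated control ("lamp").
from statistics import mean

trials = [
    ("give", 520), ("give", 534), ("give", 511),   # related prime
    ("lamp", 588), ("lamp", 575), ("lamp", 596),   # unrelated control prime
]

related = mean(rt for prime, rt in trials if prime == "give")
control = mean(rt for prime, rt in trials if prime == "lamp")

# A positive difference is the priming effect: hearing "give" has activated
# material that makes TAKE easier to recognise.
print(f"priming effect = {control - related:.1f} ms")
```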

  7. CROSS-MODAL FRAGMENT PRIMING (decision: is that a real word?) Whether the prime and target are the same or related by association is one factor; whether the prime is presented as a whole or just as a fragment is a separate factor. Likewise, hearing “octo-” makes recognising OCTOPUS easier.

  8. Eye-tracking experiment Eye-tracking also looks at activation – listeners who hear “ha-” look at the ham or the hamster – both are potential candidates
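A minimal sketch of the eye-tracking logic (the fixation records below are fabricated for illustration): while the ambiguous fragment “ha-” is being heard, looks should be split between the pictures of the ham and the hamster, since both are still compatible with the input.

```python
# Minimal sketch: fabricated fixation records for the time slices during "ha-".
from collections import Counter

fixations_during_ha = ["ham", "hamster", "ham", "hamster", "hamster",
                       "ham", "distractor", "ham", "hamster", "ham"]

counts = Counter(fixations_during_ha)
total = len(fixations_during_ha)
for picture, n in counts.most_common():
    print(f"{picture}: {n / total:.0%} of fixations")
# Roughly equal looks to "ham" and "hamster" during the fragment indicate that
# both candidates are activated until disambiguating information arrives.
```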

  9. GATING (hear a word in fragments of increasing size, and at each fragment guess what the word is)
  • E.g. p- pr- pra- prak- pract- practi- practik-
  • The “gated” fragments can be of constant size (e.g. 50 ms, 100 ms, 150 ms etc.); or they can systematically add more phonetic information (e.g. each fragment adds another phoneme transition – Fragment 1: to middle of 1st phoneme; 2: to middle of 2nd phoneme; 3: to middle of 3rd phoneme; etc.) – see the sketch below
  • Gating tells us what information can be used at a particular point – but it is a problem that listeners sometimes stick with bad guesses…
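Here is a minimal sketch of the constant-size gating variant mentioned above (the audio array and sample rate are placeholders, not real stimuli): each gate is 50 ms longer than the previous one.

```python
# Minimal sketch: build constant-size gates (50 ms, 100 ms, 150 ms, ...) from a
# recorded word represented as a list of audio samples. Placeholder data only.
SAMPLE_RATE = 44100          # samples per second (assumed)
word_audio = [0.0] * 44100   # stand-in for a one-second recording of the word

def constant_gates(samples, sample_rate, step_ms=50):
    """Return successively longer onset fragments, each step_ms longer than the last."""
    step = int(sample_rate * step_ms / 1000)
    gates = []
    end = step
    while end <= len(samples):
        gates.append(samples[:end])
        end += step
    return gates

gates = constant_gates(word_audio, SAMPLE_RATE)
print(f"{len(gates)} gates; first gate = {len(gates[0])} samples (= 50 ms)")
# Each gate is played in turn; the gate at which listeners first settle on the
# correct word shows how much signal was needed to identify it.
```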

  10. PHONEME DETECTION AND FRAGMENT DETECTION (hear speech, listen for target phoneme or fragment) These are also very simple tasks. They probably don’t directly reflect prelexical processing, but they can reflect how easy it is to extract information (below the word level) at a given point. Thus they might reflect segmentation, or how easy a preceding word or phoneme was to process, etc.

  11. WORD SPOTTING (hear a nonsense item – is there a real word in it?) Word spotting is especially good for looking at word recognition in context – by minimising the context, we can look at the local effect of a context on how hard or easy it is to find a word.
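A minimal sketch of the stimulus-side logic of word spotting (toy lexicon, spelling instead of phonemes): scan a nonsense carrier for any embedded real word; the experimental question is then how easily listeners detect that word in different contexts.

```python
# Minimal sketch: find real words embedded in a nonsense carrier (toy lexicon).
lexicon = {"apple", "sack", "egg", "lip"}

def spot_words(nonsense_item, lexicon):
    """Return every (word, start_index) at which a lexicon word is embedded."""
    hits = []
    for start in range(len(nonsense_item)):
        for end in range(start + 1, len(nonsense_item) + 1):
            if nonsense_item[start:end] in lexicon:
                hits.append((nonsense_item[start:end], start))
    return hits

print(spot_words("fapple", lexicon))     # [('apple', 1)]
print(spot_words("vuffapple", lexicon))  # [('apple', 4)]
# Listeners hear such items and respond if they find a word; RTs and miss rates
# index how hard the attached context makes the word to find.
```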

  12. WORD RECONSTRUCTION (change a nonword into a real word by altering a single sound) This task was used to look at the processing of vowels and consonants – which type of phoneme constrains word identity more strongly? Then it was also used to look at whether rhythmic categories like the mora are used in recognising words.
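A minimal sketch of the word reconstruction logic (toy lexicon, ordinary spelling as a rough stand-in for phonemes): list the real words reachable from a nonword by changing exactly one segment, and note whether the changed segment is a vowel or a consonant.

```python
# Minimal sketch: which real words can a nonword be turned into by changing
# exactly one segment? Toy lexicon; letters stand in for phonemes.
import string

lexicon = {"lean", "lead", "loam", "team"}
VOWELS = set("aeiou")

def reconstructions(nonword, lexicon):
    """Real words differing from `nonword` in one letter, with the type of change."""
    results = []
    for i, original in enumerate(nonword):
        for letter in string.ascii_lowercase:
            candidate = nonword[:i] + letter + nonword[i + 1:]
            if candidate != nonword and candidate in lexicon:
                kind = "vowel change" if original in VOWELS else "consonant change"
                results.append((candidate, kind))
    return results

# The nonword "leam" can be repaired via consonant changes ("team", "lean",
# "lead") or a vowel change ("loam"); listeners' preferences between such
# repairs are what the task measures.
print(reconstructions("leam", lexicon))
```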

  13. PHONETIC CATEGORIZATION (hear artificial sounds, decide what they are) Listeners normally hear speech sounds which are more or less good exemplars of their categories. It is possible to make an artificial continuum from one sound to another, and present these sounds to listeners. What listeners report in the middle of the continuum is not a set of new in-between sounds, but a sudden switch from tokens of one category to tokens of the other – “categorical perception”. The phonetic categorization task (developed for phonetic research) has also been useful in psycholinguistics. E.g. the categorization function can shift if one decision would make a word but the other would make a nonword (Ganong)…
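A minimal sketch of how such a lexically driven shift would show up in categorization data (the continuum steps and all response proportions below are invented for illustration; the “_ift”/“_iss” frames are just a commonly used example of lexically biased continua):

```python
# Minimal sketch, invented proportions: share of /g/ responses at each step of
# a 7-step /g/-/k/ continuum, spliced into two frames. In "_ift", /g/ makes a
# word (gift); in "_iss", /k/ makes a word (kiss).
proportion_g = {
    # step: (in "_ift" frame, in "_iss" frame)
    1: (0.98, 0.97),
    2: (0.96, 0.93),
    3: (0.90, 0.75),
    4: (0.72, 0.40),   # ambiguous middle: lexical status pulls the decision
    5: (0.35, 0.12),
    6: (0.08, 0.05),
    7: (0.02, 0.02),
}

for step, (ift, iss) in proportion_g.items():
    print(f"step {step}:  _ift {ift:.2f}   _iss {iss:.2f}   shift {ift - iss:+.2f}")
# The category boundary sits later on the "_ift" continuum than on the "_iss"
# continuum: more /g/ responses where /g/ makes a word. That shift is the
# Ganong effect.
```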

  14. Other tasks
  • Blending task: construct a blend of two pseudo-names, using the 1st part of the 1st name and the 2nd part of the 2nd name. Has been used to look at what information in the signal is more vs. less important (e.g. is place of articulation unspecified?)
  • Reversal task: a sort of language game – reverse the parts of a word (e.g. syllables, phonemes….). Has been used to look at people’s internal representations of words (e.g. are syllable boundaries clear? Are intervocalic consonants ambisyllabic?)
  • Artificial language learning: hear nonsense input, try to learn the “words” (and other structures) it consists of. Has been used to look at how easily different sequences can be segmented, and whether listeners have expectations about what words will be like.
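As an illustration of the artificial-language-learning idea, here is a minimal sketch (the three “words” and the stream are chosen purely for illustration; computing transitional probabilities is one common way to characterise such input, not necessarily the analysis used in the lectures): syllable-to-syllable transitional probabilities are higher inside the artificial words than across their boundaries, which is what would let a learner segment the stream.

```python
# Minimal sketch: transitional probabilities between syllables in a continuous
# nonsense stream built from three invented "words".
from collections import Counter

def syllabify(s):
    """Split a CVCVCV... string into two-character syllables."""
    return [s[i:i + 2] for i in range(0, len(s), 2)]

# A pause-free stream: the artificial words concatenated in varying order.
stream = syllabify("tupirogolabubidakutupirobidakugolabu")

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transitional_probability(a, b):
    """P(next syllable is b | current syllable is a)."""
    return pair_counts[(a, b)] / first_counts[a] if first_counts[a] else 0.0

print("within a word,     tu -> pi:", transitional_probability("tu", "pi"))  # 1.0
print("across a boundary, ro -> go:", transitional_probability("ro", "go"))  # 0.5
# Higher probabilities inside words than across boundaries are the kind of cue
# that segmentation of such input can exploit.
```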

  15. What the tasks told us
  • For instance: there are lots of types of converging evidence for multiple concurrently active words – after capt- BOTH captain and captive are facilitated, etc.
  • (N.B. Different tasks look at the same aspect of processing)
  • For instance: segmentation relies on language-specific information
  • Different tasks look at the same aspect of processing, again – e.g. with word-spotting we discover language-specific segmentation procedures, then we can predict that listeners will use these procedures also in learning new languages, and test this with an artificial vocabulary learning experiment

  16. Summary
  • Studying spoken language comprehension can’t be done directly (we can’t look into the brain), only indirectly, with the help of laboratory methods
  • This means that we have to translate the bigger questions we are interested in into questions which can be answered using our laboratory methods
  • For instance: can Finnish listeners use vowel harmony to help them find word boundaries? We turn that into a smaller question: is a boundary easier to find if the vowels on either side of it are disharmonious rather than harmonious? And then we can use word-spotting – a real word, abutted to a nonsense context (harmonious or disharmonious)
