
Models of word reading








  1. Models of word reading

  2. Questions from your text • Are words recognized as whole-words or using subword features? • Are words identified through meaning or through phonology? • Is word recognition top-down (dependent on context) or bottom-up (stimulus-driven)? • Is word recognition a single mechanism or multiple mechanisms? • Does it involve activation or search?

  3. Answers from your prof • Are words recognized as whole-words or using subword features? • Mu: Why must there be a conflict? • Are words identified through meaning or through phonology? • Mu: Why must there be a conflict? • Is word recognition top-down (dependent on context) or bottom-up (stimulus-driven)? • Mu: Why must there be a conflict? • Is word recognition a single mechanism or multiple mechanisms? • Obviously multiple! • Does it involve activation or search? • Activation is a search mechanism!

  4. Two classes of models • A.) Context-driven, top-down models • B.) Stimulus-driven, bottom-up models

  5. A.) Context-driven (top-down) models • What kind of information is top-down? • Semantic and pragmatic context • Syntactic constraints • Implicit knowledge about redundancies and constraints at the orthographic and phonological levels: about letter and phoneme co-occurrences • This does have some effects • Readers can predict about 1/4 of the words in a text from prior information • T. Landauer: 45% of sentences are perfectly reconstructable by maximizing word-order frequencies • However, if you measure the eye movements of skilled readers, they don't seem to be making much use of this information: they typically fixate (fleetingly!) on almost every word, except for a few very common and predictable closed-class words

  6. B.) Stimulus-driven (bottom-up) models • What kind of information is bottom-up? • Bigram frequencies, letter frequencies, neighbourhood frequencies • Bottom-up models are stage models • 1.) Extract features • 2.) Combine features to pick out one word • 3.) Look up the meaning associated with that word • What happens if stages 1 and 2 are very, very fast?
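The three bottom-up stages above can be sketched in toy form. Everything here (the two-word lexicon, letters-as-features, the function names) is an illustrative assumption for the sketch, not part of any published model:

```python
# Toy sketch of the three bottom-up stages: extract features,
# combine them to pick out one word, then look up its meaning.

LEXICON = {
    "cat": "feline animal",
    "cot": "small bed",
}

def extract_features(stimulus):
    # Stage 1: extract subword features (here, simply the letters).
    return list(stimulus.lower())

def combine_features(features):
    # Stage 2: combine features to pick out a single word form.
    candidate = "".join(features)
    return candidate if candidate in LEXICON else None

def look_up_meaning(word):
    # Stage 3: look up the meaning associated with that word.
    return LEXICON.get(word)

def read_word(stimulus):
    word = combine_features(extract_features(stimulus))
    return look_up_meaning(word) if word is not None else None
```

If stages 1 and 2 complete almost instantly, the pipeline behaves, from the outside, like whole-word recognition, which is the point of the closing question above.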

  7. Take the middle path • Models that are all top-down or all bottom-up are wrong! • It is impossible to take seriously the notion that there are only word-level representations, or that there are no supra-word constraints • After all: there are stimuli, and there is context for those stimuli! • Timing and ontology: very fast processes may appear as stable objects • Let's consider two model classes we won't take seriously: • Whole-word models • Component-letter models

  8. i.) Whole-word models • Early whole-word reading models claimed people took in the whole word in one fell swoop, without the brain breaking it down into components • The main evidence was the word superiority effect: people were better able to say whether or not a letter was contained in a briefly-seen letter string if that string was a real word than if it wasn't • This seemed to suggest that the word had "cohesive perceptual properties that transcend its components" (p. 428) • However, you only get the effect if you not only have a brief exposure to the string, but also apply 'backward masking', that is, blot out the string with noise • This suggested that it was not a language finding but a short-term memory finding: words were easier to remember than nonwords

  9. ii.) Component-letter models • There used to be some suggestion that words could ONLY be read letter by letter • One reason to doubt this is simply perceptual: when we fixate on the centre of a word, one half of the word falls into each visual hemifield • But again, processing-speed considerations suggest that it depends on what it means to say 'letter by letter' • Reaction-time (RT) effects in lexical decision (LD) have been found by manipulating the frequency of bigrams (letter pairs) in high-frequency (HF) words

  10. iii.) Multi-level/Parallel Coding Models • Main idea: there is more than one unit of representation being used at the same time • Coltheart's dual-route model (with many contemporary adherents) suggests that there are two routes for reading that operate in tandem • A fast, whole-word pathway using recognition • A grapheme-phoneme correspondence pathway using rules • Deep dyslexics are massively impaired on nonwords versus words

  11. iv.) Activation or Logogen Models • A word is a distributed representation across (at least) orthographic/phonological and semantic space • Morton (1969) called the complex neural entity that corresponds to a word a logogen. • A logogen eventually results in a positive or negative ID when a certain threshold of activation is reached.
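The threshold idea above can be sketched as a noisy evidence accumulator. The parameter values and the accumulation rule are illustrative assumptions for this sketch, not Morton's actual formulation:

```python
import random

def logogen_trial(evidence_rate, threshold=10.0, noise=1.0,
                  max_steps=1000, seed=0):
    # Sketch of logogen dynamics: activation accumulates from a
    # noisy evidence stream, and the word is positively identified
    # once activation reaches the logogen's threshold.
    # All parameter values here are illustrative assumptions.
    rng = random.Random(seed)
    activation = 0.0
    for step in range(1, max_steps + 1):
        activation += evidence_rate + rng.gauss(0, noise)
        if activation >= threshold:
            return step   # number of steps to a positive ID
    return None           # threshold never reached: negative ID
```

On this sketch, a higher-frequency word can be modelled as a logogen with a lower threshold, predicting faster positive identification.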

  12. Interactive-Activation and Connectionist Models • The modern incarnation of word-reading models is PDP or connectionist models • One criticism of early models was their whole-word nodes (though these may have been intended as metaphors for logogens)

  13. McClelland & Seidenberg (1989) • This model remains highly influential • It gave up the word level altogether • Instead of having word nodes, it had only three-letter sections of words • Phonology is connected directly to those three-letter nodes • Feature detection feeds into those nodes • There is interactive activation between words, letters, phonemes, and acoustic/visual features
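The three-letter sections mentioned above can be illustrated with a small decomposition function. Padding the word with boundary markers is a simplifying assumption for this sketch, not the model's exact coding scheme:

```python
def letter_triples(word, boundary="#"):
    # Decompose a word into overlapping three-letter units,
    # padding with a boundary marker on each side. A simplified
    # sketch of triple-based orthographic coding.
    padded = boundary + word.lower() + boundary
    return [padded[i:i + 3] for i in range(len(padded) - 2)]
```

For example, `letter_triples("make")` yields `['#ma', 'mak', 'ake', 'ke#']`: the word is represented only as a set of subword units, with no single whole-word node.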

  14. McClelland & Seidenberg (1989) • Pros: • Succeeds at capturing all sorts of phenomena: the interaction between word frequency and regularity of grapheme-to-phoneme conversion (HF regular words fastest; LF irregular words slowest); regularities in reading nonwords that are related to irregular words • Cons: • Less successful at capturing some aspects of rapid nonword reading • Can't account for the dual-route data very well • A 'toy' model that only used four-letter words

  15. Lexical-Search Models • An active and ordered search • Depends on recognizing the basic orthographic syllabic structure (BOSS): the root word • Main support: real words are recognized faster than nonwords • But activation models also predict this

  16. Why I use mathematical models • Psychological models are metaphors, not engineering blueprints • Every model proposed captures some aspect of reading, sometimes different aspects • One answer to the question 'Which is the best model?' is: the model that is most satisfying to you (and that depends on what question you want to answer) • This leads to debates that are frustrating because they are endless by their very nature: the disagreements are aesthetic, not scientific

  17. Why I use mathematical models • What I desire most (my explanatory aesthetic) is the ability to successfully predict lexical-access behaviours: to be able to bet, on principled grounds, on how an experiment will come out before doing it • What I want to know, to begin with, is: a.) What variables contribute to lexical access b.) How important each variable is to that process c.) How those variables interact d.) How to decide when one variable (or set of variables) is more important than another in any particular measure

  18. Why I use mathematical models • A good measure is simple correlation: how strongly does the output of a model predict real behaviour? • This can be computed as a simple linear correlation: predicted RTs against real RTs • This measure is continuous: we always know when we have done better than previous attempts • With such measures, we don't have to argue about whose model is better: if your model can predict more of the variance than mine can, yours is better
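The variance-prediction criterion above can be made concrete with a plain Pearson correlation; the helper name and the sample data in the test are hypothetical:

```python
import statistics

def pearson_r(predicted, observed):
    # Pearson correlation between model-predicted and observed RTs.
    # r ** 2 is the proportion of variance in the observed RTs
    # that the model's predictions account for.
    mp = statistics.fmean(predicted)
    mo = statistics.fmean(observed)
    cov = sum((p - mp) * (o - mo) for p, o in zip(predicted, observed))
    var_p = sum((p - mp) ** 2 for p in predicted)
    var_o = sum((o - mo) ** 2 for o in observed)
    return cov / (var_p * var_o) ** 0.5
```

Comparing two models then reduces to comparing r ** 2 on the same RT data: whichever model accounts for more of the variance wins, with no aesthetic argument required.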

  19. Why I use mathematical models • Downside: making claims about the (neural or cognitive) mechanisms behind mathematical models is tough • cf. Imaging the effects of ON • Upside: mechanisms almost certainly are tough: massively distributed, massively parallel systems just don't have simple mechanistic models! • Moreover, it is possible that the simplest functional unit of word-reading is some complex function of the simplest components that we have identified

  20. Let a million flowers bloom • The final model of word reading will not be one model: it will be a subtle understanding which emerges from understanding the point of many disparate models that make many disparate points • R. Lewontin: the organism as ‘the nexus of weakly determinate forces’ • We cannot grasp a complex truth in a simple way, because simple descriptions of complexity are only (more or less useful) lies • If you want to understand something: increase your modes of interaction with that thing! (Think of language as being more like a person than a machine.)
