
Hearing and Language Chapter 9



  1. Hearing and Language: Chapter 9

  2. Hearing • A receptor is a cell, often a specialized neuron, that is suited by its structure and function to respond to a particular form of energy, such as sound. • A receptor’s function is to convert that energy into a neural response. • An adequate stimulus is the energy form for which the receptor is specialized. • It is the pattern of information contained in the sensory stimulus that makes the information meaningful. • Many people consider audition, along with vision, to be the most important senses.

  3. Hearing • Sensation is the acquisition of sensory information. • Perception is the interpretation of this information. • The adequate stimulus for audition is vibration in a conducting medium, such as air. • Frequency refers to the number of cycles, or waves of compression and decompression, per second. • Frequency provides the perception of pitch. • A pure tone, produced by striking a tuning fork for example, has only one frequency. • Complex sounds, such as that produced by a clarinet, are composed of multiple frequencies.
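
The difference between a pure and a complex tone is easy to see numerically. Below is a minimal sketch (assuming NumPy is available; the 440 Hz fundamental and the harmonic mix are illustrative choices, not from the chapter):

```python
import numpy as np

fs = 44100                      # sampling rate (samples/s)
t = np.arange(0, 0.01, 1 / fs)  # 10 ms of time points

# A pure tone has a single frequency, like a struck tuning fork.
pure = np.sin(2 * np.pi * 440 * t)

# A complex sound is a sum of several frequencies; the harmonics and
# their relative amplitudes here are arbitrary examples.
complex_tone = (1.0 * np.sin(2 * np.pi * 440 * t)
                + 0.5 * np.sin(2 * np.pi * 880 * t)
                + 0.3 * np.sin(2 * np.pi * 1320 * t))
```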

  4. Hearing • The intensity of a sound is perceived as loudness. • Intensity refers to the amplitude or size of the wave; it represents the physical energy of a sound. • Loudness is influenced by the frequency of the sound. • For example, we are most sensitive to sounds between 2000 and 4000 Hz, the range in which most conversation occurs. • Sounds outside this range seem less loud at the same intensity. • Also, the intensity of a sound influences the perception of pitch.
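
Sound intensity is usually reported on the decibel scale relative to the approximate threshold of hearing. A minimal sketch of the standard acoustics conversion (dB SPL = 20·log10(p/p0) with p0 = 20 µPa; this is general acoustics, not material from the chapter):

```python
import math

P0 = 20e-6  # reference pressure (20 micropascals), approximate threshold of hearing

def db_spl(pressure_pa: float) -> float:
    """Convert sound pressure in pascals to decibels SPL."""
    return 20 * math.log10(pressure_pa / P0)

# Example: ordinary conversation is roughly 0.02 Pa, i.e. about 60 dB SPL.
print(round(db_spl(0.02)))  # -> 60
```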

  5. Hearing • The outer ear, or pinna: • captures the sound and amplifies it by funneling it into the smaller auditory canal; • selects for sounds in front of and to the side of us, partially blocking those from behind.

  6. Hearing • The Middle Ear • The tympanic membrane, or eardrum, collects the vibrations and transmits them to the ossicles. • The lever action of the ossicles transfers the vibration to the cochlea. • Amplification as the vibration passes from the eardrum to the smaller base of the stirrup compensates for the loss of energy in passing from air into the liquid inside the cochlea. • We can detect a sound when the eardrum vibrates as little as the diameter of a hydrogen atom.
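
The amplification step can be approximated with two ratios. The specific values below (eardrum effective area ~55 mm², oval window ~3.2 mm², ossicular lever ratio ~1.3) are commonly cited anatomical estimates, not figures taken from this chapter:

```python
# Rough pressure gain of the middle ear: force collected over the large
# eardrum is concentrated onto the much smaller oval window, and the
# ossicles add a small lever advantage.
EARDRUM_AREA = 55.0      # mm^2, approximate effective area
OVAL_WINDOW_AREA = 3.2   # mm^2, approximate
LEVER_RATIO = 1.3        # approximate ossicular lever advantage

pressure_gain = (EARDRUM_AREA / OVAL_WINDOW_AREA) * LEVER_RATIO
print(f"~{pressure_gain:.0f}x pressure amplification")  # roughly 22x
```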

  7. The Outer, Inner, and Middle Ear (Figure 9.4)

  8. Hearing • The inner ear includes the cochlea. • The cochlea is divided into the fluid-filled vestibular canal, tympanic canal, and cochlear canal. • The stirrup sends vibrations throughout the cochlea and to the sound-analyzing structure. (Figure 9.5)

  9. Hearing • The organ of Corti rests on the basilar membrane. • It consists of four rows of hair cells and the tectorial membrane above the hair cells. • Vibration bends the hair cells, opening potassium and calcium channels. • This depolarizes the cells and sets off signals in the neurons. • The less numerous inner hair cells provide the majority of information about the auditory stimulus. • Lengthening and shortening of the outer hair cells against the tectorial membrane adjusts the organ of Corti’s rigidity. • This amplifies weak signals and provides adjustable frequency selectivity.

  10. Hearing • The Auditory Pathway • Auditory neurons form part of the auditory (8th) cranial nerves. • The neurons travel to the inferior colliculi, to the medial geniculate nucleus of the thalamus, and then to the auditory cortex. • Neurons from each ear go to both temporal lobes, but there are more connections to the opposite side than to the same side. • The auditory cortex is topographically organized: neurons from adjacent receptors project to adjacent cells in the cortex, forming a map of the unrolled cochlea.

  11. Auditory Pathway and Auditory Cortex (Figure 9.7)

  12. Hearing • Secondary auditory areas are involved in analyzing complex sounds and understanding their meaning. • The dorsal stream travels from the auditory cortex to the parietal lobes (for spatial location of sounds) and to the frontal lobes (for directing eye movements and planning movements). It is the auditory “where” system. • The ventral stream passes from the temporal to the frontal lobes. It is involved in identifying sounds and is the auditory “what” system.

  13. Hearing • Frequency Analysis • Frequency theory assumes that the auditory mechanism transmits the actual sound frequencies to the auditory cortex for analysis there. • Telephone theory: individual neurons in the auditory nerve fire at the same frequency as the rate of vibration of the sound source. • Volley theory: groups of neurons together follow the frequency of a sound at higher frequencies. • Even volleying fails to follow sounds beyond about 5200 Hz, so frequency theory alone is inadequate.
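
Volley theory can be sketched in a few lines: no single neuron can fire much faster than about 1000 times per second, but a staggered group can jointly mark every cycle. A minimal simulation (the group size and the 1000 Hz per-neuron ceiling are illustrative assumptions):

```python
import numpy as np

freq = 3000           # stimulus frequency (Hz), too fast for one neuron
max_rate = 1000       # approximate ceiling on a single neuron's firing rate
n_neurons = int(np.ceil(freq / max_rate))  # smallest group that can keep up

cycles = np.arange(30)       # 30 cycles of the stimulus
spike_times = cycles / freq  # one cycle lasts 1/freq seconds

# Each neuron fires on every n-th cycle; neuron i takes cycles i, i+n, i+2n...
for i in range(n_neurons):
    neuron_spikes = spike_times[i::n_neurons]
    rate = 1 / (neuron_spikes[1] - neuron_spikes[0])
    print(f"neuron {i}: fires at {rate:.0f} Hz")

# Pooled across the group, spikes occur at the full 3000 Hz stimulus rate.
```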

  14. Hearing • Place theory: the frequency of a sound is identified according to the location of maximal vibration on the basilar membrane and, therefore, by which neurons are firing most. • Higher frequencies cause the base end to vibrate most, and lower frequencies cause the apex end to vibrate most. • The auditory cortex is topographically organized in the form of a tonotopic map: each successive area responds to successively higher frequencies.
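
Place coding can be illustrated with Greenwood's place-frequency function, a standard empirical fit for the human cochlea (not part of the chapter; the parameter values below are the commonly cited human ones):

```python
def greenwood_freq(x: float) -> float:
    """Best frequency (Hz) at relative position x on the basilar membrane,
    where x = 0 at the apex and x = 1 at the base (human parameters)."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> ~{greenwood_freq(x):,.0f} Hz")
# Low frequencies map near the apex, high frequencies near the base,
# spanning roughly the 20 Hz - 20 kHz range of human hearing.
```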

  15. Tonotopic Map (Figure 9.12)

  16. Hearing • However, place theory alone is also inadequate. • The basilar membrane vibrates about equally along its length at the lowest frequencies. • Frequency-specific neurons have not been found below 200 Hz. • Frequency-place theory: • Neurons follow the sound’s frequency below about 200 Hz. • Higher frequencies are detected by place analysis.

  17. Hearing • Analyzing Complex Sounds • The basilar membrane does a Fourier analysis of a complex sound, separating it into its sine wave components. (Figure 9.14: Fourier analysis of a clarinet note)
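
What the basilar membrane does mechanically can be mimicked numerically with a fast Fourier transform. A sketch (the "clarinet-like" tone below, a fundamental plus odd harmonics, is a rough caricature for illustration, not real clarinet data):

```python
import numpy as np

fs = 44100                   # sampling rate (samples/s)
t = np.arange(0, 1.0, 1 / fs)

# Crude clarinet-like tone: clarinets emphasize odd harmonics.
signal = (1.0 * np.sin(2 * np.pi * 233 * t)      # fundamental
          + 0.6 * np.sin(2 * np.pi * 699 * t)    # 3rd harmonic
          + 0.3 * np.sin(2 * np.pi * 1165 * t))  # 5th harmonic

# Fourier analysis: decompose the waveform into sine-wave components.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# Report the component frequencies the analysis recovers.
peaks = freqs[spectrum > 0.1 * spectrum.max()]
print(peaks)  # ~[233, 699, 1165] Hz
```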

  18. Hearing • We also must sort out meaningful sounds embedded in a confusing background of sounds. This is known as the cocktail party effect. • Selective attention enhances some sounds and suppresses others. • These separated sounds become auditory objects. • We must then identify a sound. • This occurs in the ventral “what” area. • Voices are identified in the superior temporal area. • Environmental sounds are identified primarily in posterior temporal areas and to some extent in frontal areas.

  19. Hearing • Locating Sounds With Binaural Cues • Binaural cues permit us to locate sounds quickly and accurately. • When a source is to one side or the other, the head blocks some of the sound energy; thus, there is a difference in intensity at the two ears. • A sound directly to the left or the right of the listener takes about 0.5 ms to travel the additional distance to the second ear. Thus, there is also a difference in time of arrival at the two ears.
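
The 0.5 ms figure follows from simple geometry: sound travels about 343 m/s and must cover roughly the width of the head. A minimal sketch using the classic Woodworth spherical-head approximation (the model and the 8.75 cm head radius are standard assumptions from psychoacoustics, not from the chapter):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air
HEAD_RADIUS = 0.0875    # m, a common average value

def itd_seconds(azimuth_deg: float) -> float:
    """Interaural time difference for a distant source at the given
    azimuth (0 = straight ahead, 90 = directly to one side),
    using Woodworth's spherical-head formula."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:>2} deg -> {itd_seconds(az) * 1000:.2f} ms")
# A source directly to the side gives ~0.66 ms with this model; the
# simpler straight-line estimate (head width / speed of sound,
# 0.175 m / 343 m/s) gives the ~0.5 ms cited above.
```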

  20. Differential Intensity and Time-of-Arrival Cues (Figure 9.17)

  21. Hearing • At low frequencies, a sound arriving from one side of the body will be at a different phase of the wave at each ear. • As a result, the rising or falling pressure will be different at the two eardrums. (Figure 9.18: Phase differences at the two ears)
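
The phase cue follows directly from the time difference: a delay of Δt at frequency f corresponds to a phase offset of 2πfΔt. A sketch (the 0.5 ms delay reuses the figure from slide 19; the point that the cue becomes ambiguous once whole cycles fit inside the delay is standard psychoacoustics, not chapter material):

```python
import math

def phase_difference_deg(freq_hz: float, itd_s: float = 0.0005) -> float:
    """Interaural phase difference (degrees, wrapped to 0-360) produced
    by an interaural time difference for a tone of the given frequency."""
    return math.degrees(2 * math.pi * freq_hz * itd_s) % 360

for f in (200, 500, 1000, 3000):
    print(f"{f:>4} Hz -> {phase_difference_deg(f):.0f} deg")
# At low frequencies the offset is an unambiguous fraction of a cycle;
# at high frequencies whole cycles fit inside the delay, so the wrapped
# phase no longer indicates direction reliably.
```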

  22. Hearing • Brain Circuits for Locating Sounds • The “time of arrival” circuit (studied most extensively) contains coincidence detectors. • In this circuit, a longer pathway from one ear compensates for the delay in sound reaching the other ear. • A coincidence detector fires most when it receives input from both ears at the same time. • As a result, each detector is specialized for sounds arriving at a particular angle to the body.
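
This delay-line arrangement (often called a Jeffress-style circuit) is easy to sketch: each detector pairs a short path from one ear with a long path from the other, and the detector whose built-in delays exactly cancel the interaural difference receives both inputs simultaneously. A toy version (the delay values are illustrative, not physiological measurements):

```python
import numpy as np

# Internal axonal delays (ms): detector i receives the left input after
# left_delays[i] and the right input after right_delays[i]. Longer left
# paths pair with shorter right paths, and vice versa.
left_delays = np.array([0.0, 0.125, 0.25, 0.375, 0.5])
right_delays = left_delays[::-1]

def best_detector(itd_ms: float) -> int:
    """Index of the coincidence detector that fires most for a sound
    reaching the right ear itd_ms after the left ear."""
    left_arrival = left_delays            # left sound starts at t = 0
    right_arrival = itd_ms + right_delays
    # The winner is the detector where the two arrivals coincide best.
    return int(np.argmin(np.abs(left_arrival - right_arrival)))

for itd in (-0.5, -0.25, 0.0, 0.25, 0.5):
    print(f"ITD {itd:+.2f} ms -> detector {best_detector(itd)}")
# Each interaural delay maps onto a different detector, i.e. each
# detector is tuned to sounds from a particular direction.
```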

  23. Difference in Time of Arrival Circuit (Figure 9.19)

  24. Hearing • Next, this directional information must be integrated with: • information from the visual environment; • information about the position of the body in space. • This involves the parietal lobes, part of the dorsal “where” stream.

  25. Language • Language includes the generation and understanding of written, spoken, and gestural communication. • Broca’s area was discovered in 1861 when Paul Broca studied a stroke patient with injury in the frontal area. • Symptoms of Broca’s aphasia include: • nonfluent speech; • anomia, or trouble finding words; • articulation problems; • lack of grammatical, or function, words. • Reading and writing are impaired as much as speech. • Comprehension is impaired when the meaning depends on grammatical words.

  26. Language • Wernicke’s area is in the posterior portion of the left temporal lobe. • Wernicke’s Aphasia • The individual has trouble understanding spoken and written language. • However, the term receptive aphasia is misleading, because the patient has as much difficulty producing language as understanding it. • Speech, for example, is fluent but meaningless (and is often referred to as “word salad”).

  27. Language-Related Areas of the Cortex (Figure 9.20)

  28. Language • The Wernicke-Geschwind model is an effort to explain how Broca’s area and Wernicke’s area interact to produce language. • Answering a spoken question: AUDITORY CORTEX ➞ WERNICKE’S AREA ➞ BROCA’S AREA. Broca’s area then communicates with the facial area of the motor cortex to produce speech. • Reading aloud: the visual information is first transformed into an auditory form in the angular gyrus; then ANGULAR GYRUS ➞ WERNICKE’S AREA ➞ BROCA’S AREA. • What would happen if the response were to be written? • This model is generally accurate but oversimplified; for example, functions are not as localized as it implies.

  29. Wernicke-Geschwind Model of Language (Figure 9.21)

  30. Language • Reading, Writing, and Their Impairment • Alexia, the inability to read, and agraphia, the inability to write, are presumably due to disruption of pathways in the angular gyrus. • These pathways connect the visual projection area with the auditory and visual association areas. • Dyslexia, an impairment of reading, can be acquired through damage but is more often developmental.

  31. Language • Brain Differences in Dyslexia • The left planum temporale, where Wernicke’s area is located, is typically larger than the right; in dyslexics it is larger on the right or equal in size. • Neurons in the left planum temporale lack orderly arrangement. • The most reliably identified genes are involved in neuron guidance and migration.

  32. Anomalies in the Dyslexic Brain (Figure 9.24) • Left planum temporale in a normal brain (left) and in the brain of a person with dyslexia (right)

  33. Language • People with dyslexia have both auditory and visual-perceptual difficulties. • They have trouble detecting the frequency and amplitude changes that distinguish letter sounds. • Words are read backwards, mirror-image letters (b and d) are confused, and words appear to move around on the page. • According to the magnocellular hypothesis, dyslexia involves deficiencies in auditory and visual magnocellular cells.

  34. Language • According to the phonological hypothesis, the major problem is an impairment of phoneme processing. • A phoneme is a small sound unit that distinguishes one word from another (as in book versus cook). • fMRI indicates the problem is in an auditory word-analysis area, not in the area that recognizes words by their visual form. • Dyslexia occurs much less frequently in countries where the languages are phonologically simpler.

  35. Language • Recovery From Aphasia • The right hemisphere can take over language functions following left-hemisphere damage, as long as the injury occurs early in life. • If damage occurs later in life, language control is more likely to shift into bordering areas in the left hemisphere. • The ability of other areas to take over language functions may be due to their normal participation in language. For example:

  36. Language • The right hemisphere contributes prosody to speech; prosody is the use of intonation, emphasis, and rhythm to convey meaning. • The right hemisphere also is important in understanding information from language that is not specifically communicated by the meaning of the words: • when meaning must be inferred from an entire discourse; • when the meaning is figurative rather than literal, as in the moral of a story.

  37. Language • A Language-Generating Mechanism? • Children have a remarkable readiness to learn language; on average, they learn a new word every 90 waking minutes. • This readiness led researchers to believe there is a language acquisition device, a part of the brain that is dedicated to learning and producing language. • Other researchers agree that there are biological mechanisms that make language acquisition easy, but interpret them differently. • For example, speaking and signing children follow the same sequences in learning language. • Their interpretation: language has co-opted areas specialized for abilities that language requires.

  38. Language • Specializations in the brain suggest that it is innately well fitted for creating and learning language. • The left hemisphere is dominant for language in 90% of right-handed people and in most left-handed people. • Broca’s area is larger, and the lateral fissure and planum temporale are longer, on the left than on the right. • Even newborns show a left-hemisphere response to speech. • Sign language activates the left hemisphere, even in individuals deaf from birth.

  39. Language • Animal language research has the potential to reveal the roots of language. • Because chimps lack an adequate larynx for forming word sounds, researchers have attempted to communicate with them using various sign and symbol languages. • Washoe learned to use 132 symbols, but critics said the behavior was not complex enough to represent true language. • Washoe and three other chimps taught her son Loulis 47 signs, and they regularly carried on conversations among themselves. • The communications of Loulis, the bonobo Kanzi, and the parrot Alex suggest to some that they are evidence of evolutionary foundations of our language abilities.

  40. Language • Other animals also share some of the brain organization associated with human language. • Chimps, monkeys, dolphins, and canaries show left-hemisphere dominance for meaningful sounds or gestures. • Similar structures in these animals likely provide prelanguage communicative abilities. • We share apparent genetic antecedents, such as FOXP2. • However, the key to language probably lies in slight modifications of those genes and in the genes that are turned on or turned off.

  41. Language • We do share mirror neurons with other species, and they may be critical to the development of language. (Figure 9.32: Brown areas are active during imitation of others’ actions; they overlap Broca’s and Wernicke’s areas, shown in yellow.)
