
The Role of Temporal Fine Structure Processing


Presentation Transcript


  1. The Role of Temporal Fine Structure Processing Presented by: Hwang Se-mi

  2. Contents • ABSTRACT • Introduction • THE ROLE OF TFS IN PITCH PERCEPTION • MASKING AND THE ROLE OF TFS IN DIP LISTENING • THE ROLE OF TFS IN SPEECH PERCEPTION • THE EFFECT OF HEARING LOSS ON THE ABILITY TO USE TFS INFORMATION • POSSIBLE REASONS FOR THE EFFECT OF COCHLEAR HEARING LOSS ON SENSITIVITY TO TFS

  3. ABSTRACT • Complex broadband sounds are decomposed by the auditory filters into relatively narrowband signals, each characterized by an envelope (E) and temporal fine structure (TFS) • What is TFS? Information carried in the phase locking of neural responses to individual cycles of the waveform • The role of TFS is examined in masking, pitch perception, and speech perception

  4. Introduction • Moore (2002): tonotopic analysis can be simulated by a short-term Fourier analysis • What does the magnitude mean, and how is the phase obtained? Via the Hilbert transform (Bracewell 1986): the length of the rotating vector gives the envelope at that time, and its rate of rotation gives the instantaneous frequency
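The envelope/phase decomposition described on this slide can be computed directly from the analytic signal. Below is a minimal sketch (not from the presentation; it assumes Python with NumPy/SciPy, an arbitrary 16 kHz sample rate, and a simple AM tone as input) showing that E is the magnitude of the Hilbert analytic signal and the TFS is carried by its instantaneous phase.

```python
# Minimal sketch: E and TFS of one narrowband signal via the Hilbert transform.
# Sample rate, duration and the test signal are arbitrary illustrative choices.
import numpy as np
from scipy.signal import hilbert

fs = 16000                                     # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)                  # 100 ms time axis
x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)  # AM tone

analytic = hilbert(x)                          # x(t) + j * H{x(t)}
envelope = np.abs(analytic)                    # E: length of the rotating vector
phase = np.unwrap(np.angle(analytic))          # instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # rate of rotation -> instantaneous frequency (Hz)
tfs = np.cos(phase)                            # TFS: unit-amplitude carrier keeping the original phase
```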

  5. Hilbert transform • Why the Hilbert transform is needed • The causality condition • Causality in the time domain • Causality in the frequency domain

  6. Glasberg and Moore 1990; Moore 2003 (Hilbert transform application)

  7. Range over which TFS can be used in humans • In mammals, phase locking is precise up to about 4–5 kHz and weakens above that • Usable phase locking may extend up to about 10 kHz (Heinz et al. 2001) • In humans the upper limit has not been confirmed

  8. THE ROLE OF TFS IN PITCH PERCEPTION • Pure tones vs. complex tones in terms of TFS (Moore 2003; Plack and Oxenham 2005) • Effect of duration: steady pure tones vs. short tones (Heinz et al. 2001) • Steady complex tones with resolved vs. unresolved harmonics: TFS contributes to pitch for harmonics up to about the 14th, but for higher harmonics its contribution weakens and envelope (E) cues dominate (Moore et al. 2006a)

  9. FM detection and TFS • FM rates ≤ 5 Hz: performance is not well predicted by place cues and may depend on phase locking to the carrier • FM rates ≥ 10 Hz: detection relies more on FM-to-AM conversion (cues that can be disrupted by mixing the FM with AM) • Binaural TFS processing is "sluggish": at high frequencies (e.g., 4000 Hz) interaural phase differences or changes in interaural correlation are less effective (Blauert 1972; Grantham and Wightman 1978, 1979), although sensitivity remains for certain complex stimuli (Siveke et al. 2008) • The modulation must exceed the detection threshold
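As an illustration of the stimuli discussed here, the sketch below generates a sinusoidally frequency-modulated tone of the kind used in FM-detection experiments (illustrative only; the carrier frequency, modulation rate and excursion are my own choices, not the published parameters).

```python
# Sketch: a sinusoidally frequency-modulated carrier (illustrative parameters only).
import numpy as np

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
fc = 1000.0                  # carrier frequency (Hz)
fm = 5.0                     # modulation rate (Hz); compare slow (<= 5 Hz) vs faster (>= 10 Hz) rates
delta_f = 0.05 * fc          # peak frequency excursion (Hz), an arbitrary 5% of the carrier
beta = delta_f / fm          # modulation index

# instantaneous phase = 2*pi*fc*t + beta*sin(2*pi*fm*t)
fm_tone = np.sin(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))
```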

  10. MASKING AND THE ROLE OF TFS IN DIP LISTENING • 1. What does it mean to "listen in the dips"? Detecting the signal during the momentary minima (dips) of a fluctuating masker • 2. Moore and Glasberg's experiment: masker level 80 dB SPL; masker frequency Fm = 250, 1000, 3000, or 5275 Hz; signal frequency = Fm × 1.8; beat rate 4, 8, 16, 32, or 64 Hz • Result (masking release): about 25 dB on average at a 4 Hz beat rate, about 10 dB at 64 Hz • Effective dip listening is possible when phase locking to the masker and to the signal sinusoid is precise; at high frequencies, where phase locking is inaccurate, the release depends little on the masker beat rate
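To make the dip-listening idea concrete, here is a rough sketch (a simplification, not the published stimuli: the masker is approximated as two equal-amplitude tones whose separation equals the beat rate) of a masker whose envelope fluctuates at a chosen beat rate plus a target at 1.8 × Fm; the target is easiest to detect in the envelope minima when the beat rate is slow.

```python
# Rough sketch of a dip-listening stimulus (simplified; not the published masker).
import numpy as np

fs = 32000
t = np.arange(0, 0.5, 1 / fs)
Fm = 1000.0                   # masker centre frequency (Hz), one of the values on the slide
beat = 4.0                    # envelope beat rate (Hz); the slide compares 4 ... 64 Hz

# Two tones separated by the beat rate give an envelope that fluctuates at `beat` Hz.
masker = np.sin(2 * np.pi * (Fm - beat / 2) * t) + np.sin(2 * np.pi * (Fm + beat / 2) * t)
target = 0.1 * np.sin(2 * np.pi * 1.8 * Fm * t)   # signal at Fm * 1.8, arbitrary level
mixture = masker + target
```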

  11. THE ROLE OF TFS IN SPEECH PERCEPTION • A vocoder (Dudley 1939) is used to separate E and TFS • What is "E-speech"? E is extracted from each band and used to modulate a carrier: a noise vocoder modulates a noise band, and a tone vocoder modulates a sinusoid centred at the band frequency • E-speech gives good intelligibility for speech in quiet (Shannon et al. 1995)
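A minimal tone-vocoder (E-speech) sketch is shown below, assuming Python with NumPy/SciPy, a mono signal `x` at sample rate `fs`, and illustrative band edges and filter order (the cited studies used their own filter banks): the envelope of each band modulates a sinusoid at the band centre.

```python
# Minimal tone-vocoder sketch for E-speech (illustrative band edges and filter order).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def tone_vocode(x, fs, edges):
    """Replace the TFS of each band with a sinusoid at the band centre,
    amplitude-modulated by that band's envelope (E-speech)."""
    t = np.arange(len(x)) / fs
    out = np.zeros(len(x))
    for lo_edge, hi_edge in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo_edge, hi_edge], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)               # bandpass-filtered channel
        env = np.abs(hilbert(band))              # E of this band
        fc = np.sqrt(lo_edge * hi_edge)          # geometric-mean band centre
        out += env * np.sin(2 * np.pi * fc * t)  # sinusoidal carrier at fc
    return out

# Example: 8 log-spaced bands spanning the 0.08-8.02 kHz range mentioned on slide 13
# edges = np.geomspace(80, 8020, 9)
# e_speech = tone_vocode(x, fs, edges)
```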

  12. What’s TFS –Speech? Bandpass signal/ E = FM sinusoidal carrier 같은 Rms 진폭을 갖기 위해 long term 진폭으로 scale 된 후 결합

  13. TFS-speech • TFS-speech sounds distorted, so training is needed; it can be distinguished from unprocessed speech • Even though E is removed, it can be partly reconstructed at the outputs of the cochlear filters (Ghitza) • The reconstructed E may contribute to the intelligibility of TFS-speech, though envelope cues alone are not sufficient • Speech intelligibility from such cues is minimal when the bandwidth is ≤ 4 ERB, the range is 0.08–8.02 kHz, and the number of bands is ≥ 8 • With learning, high intelligibility can be reached with TFS-speech in conjunction with E cues

  14. Hopkins et al. (2008): TFS in speech perception • J = number of ERBN-wide bands containing intact TFS and E, varied from 0 to 32 in steps of 4 • The other (higher-frequency) bands were noise- or tone-vocoded and conveyed only E • The SRT (the level required for 50% correct) was measured; an unprocessed-signal condition was also included • Background: a competing talker; subjects: 9 with normal hearing; error bars: one standard deviation • Result: as J increased from 0 to 32 (TFS added), speech identification improved (the SRT decreased by about 15 dB) • The benefit may be attributed partly to speaker identification and to the perception of tonal languages

  15. THE EFFECT OF HEARING LOSS ON THE ABILITY TO USE TFS INFORMATION • Cochlear hearing loss impairs: 1) FM detection at low rates (Lacher-Fougère and Demany 1998; Moore and Skrodzka 2002); 2) lateralization based on interaural phase differences (Lacher-Fougère and Demany 2005); 3) discrimination of complex tones with and without the fundamental-frequency component (Moore and Moore 2003a)

  16. Hopkins&Moore (2007) experiments in moderate hearing loss discriminate a harmonic complex tone (F0=100, 200, or 400 Hz) the tones contained many components Passed though a fixed bandpass filter centered on the upper (unresolved) harmonics bandpass filter was centered on the 11th harmonic harmonic and frequency-shifted is small. (without hearing loss)

  17. For normally hearing listeners, E is the same for the two tones, and the shifted tone is heard as having a higher pitch than the harmonic tone (de Boer 1956; Moore and Moore 2003b) • The smallest detectable frequency shift (at d′ = 1) is about 0.05 F0 • Even untrained normally hearing listeners achieve 0.2 F0 or better (Moore and Sek 2008) • Listeners with moderate cochlear hearing loss performed very poorly, and the lowest centre frequency tested by Hopkins and Moore was 1,100 Hz → people with hearing loss below 1,000 Hz generally have difficulty understanding speech when the background noise fluctuates

  18. Speech perception studies • Lorenzi et al. (2006) measured identification scores for unprocessed, E-, and TFS-speech in quiet for three groups (normal hearing, young with moderate hearing loss, elderly with moderate hearing loss) • Normal-hearing listeners, after training, scored about 90% correct with both E- and TFS-speech • Listeners with moderate hearing loss performed poorly with TFS-speech • For the younger hearing-impaired group the correlation was r = 0.83

  19. Lorenzi et al. (2008) measured the identification of E- and TFS-speech in quiet • Listeners had mild-to-severe high-frequency hearing loss with normal (≤ 20 dB HL) audiometric thresholds below 2 kHz • Scores: hearing-impaired about 6.25%; normal-hearing 20–50%

  20. In the experiment of Hopkins et al. (2008), listeners with moderate cochlear hearing loss improved by only about 5 dB as J went from 0 to 32, compared with about 15 dB for normal-hearing listeners • The reasons for the individual differences are not at present clear • The benefit from TFS information was not correlated with audiometric thresholds over the range 250 to 4,000 Hz

  21. Cochlear implant systems • Nie et al. examined the contribution of TFS (FM) information to speech recognition with cochlear implant processing • Adding this FM signal improved performance (71%)

  22. POSSIBLE REASONS FOR THE EFFECT OF COCHLEAR HEARING LOSS ON SENSITIVITY TO TFS • Why does cochlear hearing loss reduce the ability to process TFS information? • 1. Reduced precision of phase locking; for complex sounds, phase locking is also affected by two-tone suppression (Miller et al. 1997) • 2. Changes in the response at different points along the basilar membrane, which affect the correlation of TFS across places

  23. • 3. The TFS at the outputs of the auditory filters may become more complex and more rapidly varying • 4. Hearing loss may produce a shift in the frequency-to-place mapping (Liberman and Dodds 1984; Sellick et al. 1982) • 5. There may be central changes, such as a loss of inhibition

  24. The end. Thank you for listening.
