Attention


  1. Attention • Outline: • Overview • bottom-up attention • top-down attention • physiology of attention and awareness • inattention and change blindness

  2. Credits: major sources of material, including figures and slides were: • Itti and Koch. Computational Modeling of Visual Attention. Nature Reviews Neuroscience, 2001. • Sprague, Ballard, and Robinson. Modeling Attention with Embodied Visual Behaviors, 2005. • Fred Hamker. A dynamic model of how feature cues guide spatial attention. Vision Research, 2004. • Frank Tong. Primary Visual Cortex and Visual Awareness. Nature Reviews Neuroscience, 2003. • and various resources on the WWW

  3. How to think about attention? • William James: “Everyone knows what attention is” • overt vs. covert attention • attention as a filter • attention as enhancing the signal produced by a stimulus • tuning the system to a specific stimulus attribute • attention as a spotlight • location-, feature-, object-, modality-, or task-based • attention as binding together features • attention as something that speeds up processing • attention as distributed competition

  4. Important Questions • what is affected by attention? • where in the brain do we see differences between attended/unattended conditions? • what controls attention? • how many things can you attend to? • is attention a useful notion at all? Or is it too blunt and unspecific?

  5. Bottom-up Attention

  6. Points to note: • the saliency of a location depends on its surround • integration into a single saliency map (where?) • inhibition of return is important • how are things updated across eye movements? • purely bottom-up models provide a very poor fit to most experimental data
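
The selection loop behind such saliency-map models can be sketched in a few lines. The following is a minimal illustration, not the Itti & Koch implementation, and all parameter names and values are assumptions: repeatedly pick the most salient location by winner-take-all, then suppress a neighborhood around it (inhibition of return) so attention moves on.

    import numpy as np

    def scan_saliency_map(sal, n_fixations=5, ior_radius=3, ior_gain=0.9):
        # pick successive fixations from a saliency map by winner-take-all,
        # suppressing each attended neighborhood (inhibition of return)
        sal = sal.astype(float)
        ys, xs = np.indices(sal.shape)
        fixations = []
        for _ in range(n_fixations):
            y, x = np.unravel_index(np.argmax(sal), sal.shape)  # winner-take-all
            fixations.append((int(y), int(x)))
            # inhibition of return: damp saliency around the attended location
            mask = (ys - y) ** 2 + (xs - x) ** 2 <= ior_radius ** 2
            sal[mask] *= 1.0 - ior_gain
        return fixations

    # toy map with two salient blobs: the second fixation moves on because
    # the first winner has been inhibited
    sal = np.zeros((20, 20))
    sal[5, 5], sal[14, 12] = 1.0, 0.8
    print(scan_saliency_map(sal, n_fixations=2))  # [(5, 5), (14, 12)]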

  7. Looking to maximize visual reward • infants may be primarily driven by visual saliency • at about a year of age they start with gaze following: “looking where somebody else is looking” • a foundational skill, important for learning language, ... • does not emerge normally in certain developmental disorders • theory: infants learn to exploit the caregiver’s direction of gaze as a cue to where interesting things are • G. Deák, R. Flom, and A. Pick (18- and 12-month-olds)

  8. Carlson & Triesch (2003): • discrete regions of space (N = 10) • interesting object/event in one location, sometimes moving randomly • caregiver (CG) looks at the object with probability p_valid • Infant: • can look at the CG or any region of space • only sees what is in the region it looks at • decides when and where to shift gaze
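
This setup can be captured in a short toy environment. The sketch below is illustrative only; the value of p_valid is an assumption (the slide only names the parameter), and the method names are invented.

    import random

    N_REGIONS = 10    # discrete regions of space, as on the slide
    CG = "caregiver"  # fixating here reveals the CG's head pose

    class GazeWorld:
        # toy version of the Carlson & Triesch (2003) setup (illustrative only)
        def __init__(self, p_valid=0.8):  # 0.8 is an assumed value
            self.p_valid = p_valid
            self.new_event()

        def new_event(self):
            # an interesting object appears in one region; the CG looks at it
            # with probability p_valid, at a random region otherwise
            self.object_loc = random.randrange(N_REGIONS)
            self.cg_gaze = (self.object_loc if random.random() < self.p_valid
                            else random.randrange(N_REGIONS))

        def observe(self, gaze_target):
            # the infant only sees what is at the currently fixated target
            if gaze_target == CG:
                return {"cg_head_pose": self.cg_gaze}
            return {"object_in_view": gaze_target == self.object_loc}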

  9. Overview of Infant Model • Infant model is a simple two-agent system (Findlay & Walker, 1999): • the “when” agent decides when to shift gaze (yes/no), based on fixation time, whether an object is in view, and the instantaneous (habituating) reward • the “where” agent decides where to look next (a new location), based on whether the CG is in view and the CG’s head pose

  10. Infant Model Details • Habituation: the reward for looking at an object decreases over time (β: habituation rate, h_fix(0): habituation level at the beginning of fixation, t: time since the start of fixation) • Agents learn with the tabular SARSA algorithm (Q: state-action value, α: learning rate, γ: discount factor; the bracketed term is the TD error): Q(s_t, a_t) ← Q(s_t, a_t) + α [r_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t)] • Softmax action selection balances exploration and exploitation (τ > 0: temperature)
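
A minimal sketch of the learning machinery named on the slide: a habituating reward, the tabular SARSA update, and softmax action selection. The exponential form of the habituation and all constants are assumptions; the slide only states that the reward decays at rate β from the starting level h_fix(0).

    import math, random
    from collections import defaultdict

    ALPHA, GAMMA, TAU, BETA = 0.1, 0.9, 0.2, 0.5  # assumed constants

    Q = defaultdict(float)  # tabular state-action values Q[(s, a)]

    def habituated_reward(h_fix0, t, beta=BETA):
        # assumed exponential decay: reward for looking at an object
        # falls off with time t since the start of fixation
        return h_fix0 * math.exp(-beta * t)

    def softmax_action(s, actions, tau=TAU):
        # softmax balances exploration and exploitation (tau: temperature)
        prefs = [math.exp(Q[(s, a)] / tau) for a in actions]
        r, acc = random.random() * sum(prefs), 0.0
        for a, p in zip(actions, prefs):
            acc += p
            if r <= acc:
                return a
        return actions[-1]

    def sarsa_update(s, a, r, s_next, a_next):
        # Q(s_t,a_t) <- Q(s_t,a_t) + alpha*[r_{t+1} + gamma*Q(s_{t+1},a_{t+1}) - Q(s_t,a_t)]
        td_error = r + GAMMA * Q[(s_next, a_next)] - Q[(s, a)]
        Q[(s, a)] += ALPHA * td_error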

  11. Simulation Results • Caregiver Index (CGI): ratio of gaze shifts going to the CG • Gaze Following Index (GFI): ratio of gaze shifts following the CG’s line of regard • (plots: CGI and GFI over learning time; error bars are standard deviations of 10 runs) • this basic setup is indeed sufficient for gaze following to emerge • the model first learns to look at the CG, then learns gaze following
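
Both indices are plain ratios over the model’s gaze shifts. A sketch of their computation from a hypothetical log of shifts (the record layout and field names are invented for the example):

    def caregiver_index(shifts):
        # CGI: fraction of gaze shifts that go to the caregiver
        return sum(1 for s in shifts if s["target"] == "CG") / len(shifts)

    def gaze_following_index(shifts):
        # GFI: fraction of gaze shifts that land on the region the CG
        # is currently looking at (following the CG's line of regard)
        return sum(1 for s in shifts if s["target"] == s["cg_gaze"]) / len(shifts)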

  12. Variation of Reward Structure • no learning if the things the CG looks at are not rewarding • no learning if the CG is aversive (autism?) • learning is poor if the CG is too rewarding (Williams syndrome?) • (plot: time until GFI > 0.3)

  13. Scheduling Visual Routines • Sprague, Ballard, and Robinson (2005): a VR platform to study visual attention in complex behaviors where several goals have to be negotiated (“Walter”) • rewards are coupled to the successful completion of behaviors

  14. Abstraction hierarchy:

  15. Behaviors modeled as RL agents:

  16. Maximum Q values and best actions (panels: obstacle avoidance, sidewalk following, litter pickup)

  17. Growing uncertainty about the state unless you look: • eye gaze is controlled by the behavior that experiences the biggest loss due to uncertain state information
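
One way to read this scheduling rule, sketched under assumptions (a paraphrase of the idea, not Sprague et al.’s implementation): each behavior holds a set of state samples expressing its uncertainty; its expected loss is the value it could get if the true state were visible minus the value of the single action that is best on average; gaze goes to the behavior that stands to lose the most.

    import numpy as np

    def expected_loss(q, state_samples, actions):
        # E_s[max_a Q(s,a)] : value if the true state were known
        # max_a E_s[Q(s,a)] : value of committing to one action now
        # the gap is what state uncertainty costs this behavior
        qvals = np.array([[q(s, a) for a in actions] for s in state_samples])
        return qvals.max(axis=1).mean() - qvals.mean(axis=0).max()

    def choose_gaze(behaviors):
        # give gaze to the behavior that loses most from not looking;
        # .q, .belief_samples(), .actions are hypothetical attributes
        return max(behaviors,
                   key=lambda b: expected_loss(b.q, b.belief_samples(), b.actions))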

  18. switching contexts with a state machine:
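
A minimal sketch of such a context state machine; only the “on sidewalk” context and the three behaviors are named on these slides, so the second context and the transition events are invented for the example.

    # transitions: (current context, event) -> next context
    TRANSITIONS = {
        ("on sidewalk", "reached crossing"): "crossing street",  # invented
        ("crossing street", "reached far side"): "on sidewalk",  # invented
    }

    # behaviors competing for gaze in each context (names from slide 16)
    ACTIVE_BEHAVIORS = {
        "on sidewalk": ["sidewalk following", "obstacle avoidance", "litter pickup"],
        "crossing street": ["obstacle avoidance"],
    }

    def step_context(context, event):
        # stay in the current context unless a transition matches
        return TRANSITIONS.get((context, event), context)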

  19. comparing Walter to human subjects in the same task: how often does each behavior control gaze in the “on sidewalk” context?

  20. comparing Walter to human subjects in the same task: which behavior controls eye gaze across the different contexts?

  21. Motter (1994): modulation of V4 activity

  22. Hamker (2004) Model

  23. Feedback from higher levels exerting input gain control:
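
The gain-control idea can be illustrated with a toy population of feature-tuned units. This is a sketch of multiplicative input gain, not Hamker’s actual model equations; the gain value and the tanh nonlinearity are assumptions.

    import numpy as np

    def modulated_response(ff_input, feature_match, gain=1.5):
        # feedback scales the feedforward drive of units whose preferred
        # feature matches the top-down cue (feature_match in [0, 1])
        g = 1.0 + (gain - 1.0) * feature_match  # multiplicative input gain
        return np.tanh(g * ff_input)            # output nonlinearity (assumed)

    units = np.array([0.8, 0.8, 0.8])  # equal bottom-up drive
    match = np.array([1.0, 0.5, 0.0])  # similarity to the cued feature
    print(modulated_response(units, match))  # the cued unit responds strongest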

  24. (a) switching from red to green; (b) spatial effects due to feedback from premotor areas

  25. Model vs. experiment (panels: experiment, model)

  26. Detection of Stimuli and V1 Activity • Supèr, Spekreijse, and Lamme (2001): • monkey’s task: detect a texture-defined region and saccade to it • record from an orientation-selective cell in V1 • how is the cell’s response correlated with the monkey’s percept?

  27. enhancement of the late (80–100 ms) response only if the target is actually detected by the monkey (traces: “seen” vs. “not seen”)
