
Perception of Space: Perception of 3D structure – static observer






Presentation Transcript


  1. Perception of Space. Perception of 3D structure – static observer: stereo, motion parallax, geometric cues, etc. Representation of the structure and location of objects – moving observer. Need to interact with objects – aiming movements. Need to get from one place to another in large-scale space – navigation.

  2. Moving observer: self-motion generates optic flow. The point of heading is indicated by the “focus of expansion” (FOE). Humans can locate the FOE within a few degrees (Warren). This is a bit problematic if the subject is fixating off the FOE. Later: Simon Rushton – use of visual direction to control heading; Warren – both are used. Note Srinivasan – bees. Flow also influences walking speed. Flow is not necessary for estimating distance travelled: blind walking is very accurate (Loomis), using vestibular signals plus proprioception/efference copy. These are usually highly correlated; a cue conflict induces recalibration, e.g. treadmill walking.
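
A minimal sketch of the geometry behind locating the FOE (this illustrates the geometry only, not Warren's experimental method; all numbers are invented): under pure translation every flow vector lies on a line through the FOE, so the FOE can be recovered as the least-squares intersection of those lines.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares intersection of the lines defined by a radial flow field.
    points: (N, 2) image positions; flows: (N, 2) flow vectors at those points."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, f in zip(points, flows):
        d = f / np.linalg.norm(f)          # unit flow direction
        P = np.eye(2) - np.outer(d, d)     # projector perpendicular to d
        A += P                             # accumulate normal equations for
        b += P @ p                         # the point nearest all flow lines
    return np.linalg.solve(A, b)

# Synthetic expanding flow centred on a known heading point:
rng = np.random.default_rng(0)
foe_true = np.array([3.0, -1.5])
pts = rng.uniform(-10, 10, size=(200, 2))
flow = (pts - foe_true) * 0.1 + rng.normal(0, 0.02, size=(200, 2))
print(estimate_foe(pts, flow))  # close to [3.0, -1.5] despite the noise
```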

  3. Need to take account of self-motion.

  4. Systematic distortions of perceived visual speed during walking enhance perceptual precision in the measurement of visual speed. Precision is more important than accuracy here (prism adaptation experiment – demo).

  5. By slowing down the apparent rate of visual flow during self-motion, the visual system can perceive differences between actual and expected flow more precisely. This is useful in the control of action, e.g. intercepting a moving object while walking (?). Cf. Barlow – subtracting the mean improves discrimination. Previously, the apparent slowing of optic flow during walking had been interpreted as a suppression of flow to promote the perception of a stable world.
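
A toy simulation of Barlow's point, under an assumed Weber-like noise model (noise proportional to the magnitude being encoded – an assumption for illustration, not a claim from the slides): subtracting the expected flow before encoding leaves only a small residual, so a deviation from the expected flow becomes much more discriminable.

```python
import numpy as np

rng = np.random.default_rng(1)
expected = 5.0      # flow speed predicted from walking speed (deg/s)
delta = 0.2         # small deviation caused by an external object
weber = 0.1         # assumed Weber fraction: noise sd scales with magnitude
n = 100_000

def encode(value):
    # signal-dependent noise: sd grows with the magnitude being encoded
    return value + rng.normal(0, weber * abs(value) + 1e-6, n)

# Encode the raw flow speed vs. the residual after subtracting the expectation
raw_null, raw_dev = encode(expected), encode(expected + delta)
res_null, res_dev = encode(0.0), encode(delta)

def dprime(a, b):
    return (b.mean() - a.mean()) / np.sqrt(0.5 * (a.var() + b.var()))

print("d' raw encoding:     ", dprime(raw_null, raw_dev))   # small (~0.4)
print("d' after subtraction:", dprime(res_null, res_dev))   # much larger
```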

  6. Optic flow “parsing” – separating the flow generated by ego-motion from that generated by object motion.
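
A minimal sketch of the subtraction that flow parsing implies, assuming a pinhole model, pure observer translation, and known depth (all simplifications for illustration; as the later slides note, the visual system does this via global processing, not per-point depth maps):

```python
import numpy as np

def ego_flow(x, y, depth, t):
    """Image flow predicted at normalized image point (x, y) for a static
    point at the given depth, when the observer translates with velocity
    t = (tx, ty, tz) and does not rotate (pinhole camera model)."""
    tx, ty, tz = t
    return np.array([(x * tz - tx) / depth,
                     (y * tz - ty) / depth])

# Measured retinal flow = ego-motion component + object-motion component.
t = (0.0, 0.0, 1.0)                  # walking straight ahead at 1 m/s
x, y, depth = 0.2, -0.1, 4.0         # image location and depth of the point
object_flow = np.array([0.05, 0.0])  # flow contributed by real object motion
measured = ego_flow(x, y, depth, t) + object_flow

# Flow parsing: subtract the component predicted from self-motion
recovered = measured - ego_flow(x, y, depth, t)
print(recovered)                     # -> [0.05, 0.0], the object's own motion
```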

  7. Stationary observer: a cloud of limited-lifetime dots. Note this is a cue-conflict situation – there is no vestibular signal.

  8. The effect of optic flow is mediated by global, not local, processing.

  9. Simulate a ground plane plus sky. Background flow is discounted despite the lack of overlap. The magnitude of the effect in the Opposite condition is around 60%–70% of that in the Full condition.

  10. Areas implicated? MSTd (note its vestibular input); also V7a, VIP, CSv. Note – Angelaki: Bayesian combination of visual and vestibular information for the evaluation of self-motion. Note – parsing occurs without vection.
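
For Gaussian cues, the Bayesian combination Angelaki describes reduces to reliability-weighted averaging: each cue is weighted by its inverse variance, and the combined estimate is never less reliable than the best single cue. A minimal sketch with illustrative numbers only:

```python
def combine(mu_vis, var_vis, mu_vest, var_vest):
    """Reliability-weighted (Bayesian/MLE) combination of two Gaussian cues.
    Weights are proportional to inverse variances; the combined variance
    is smaller than either cue's variance alone."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_vest
    var = 1 / (1 / var_vis + 1 / var_vest)
    return mu, var

# Heading estimates (deg): vision says 4 with low noise, vestibular 8 with high noise
print(combine(4.0, 1.0, 8.0, 4.0))  # ~ (4.8, 0.8): pulled toward the reliable cue
```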

  11. Figure 1. Optic flow field and its decomposition into self-motion and object-motion components. Fajen & Matthis: what is the contribution of non-visual information about self-motion to flow parsing? Proprioceptive, efferent-command, and inertial (vestibular) cues. Fajen BR, Matthis JS (2013) Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion. PLoS ONE 8(2): e55446. doi:10.1371/journal.pone.0055446 http://www.plosone.org/article/info:doi/10.1371/journal.pone.0055446

  12. Use an HMD in VR during normal walking. The subject judges whether he/she could pass through the aperture formed by two converging objects if walking as fast as possible.

  13. On catch trials, manipulate the gain between actual walking speed and the speed of the flow field. This dissociates non-visual from visual cues; the retinal motion generated by the moving objects remains the same. Result: gain influences judgments, but not as much as expected from the visual manipulation – only 20% of the predicted effect. Therefore a non-visual influence on self-motion perception contributes to flow parsing.
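
One back-of-envelope way to read the 20% figure, under an assumed linear cue-weighting model (an illustration, not the authors' analysis): if perceived self-motion speed is a weighted average of the visual flow signal and the non-visual walking signal, then a 20% effect of the gain manipulation implies a visual weight of about 0.2, i.e. non-visual cues carry roughly 80% of the weight.

```python
# Assumed linear model (invented parameters for illustration):
# perceived speed = w * visual flow speed + (1 - w) * non-visual walking speed
walking_speed = 1.0   # m/s, actual walking speed
gain = 1.5            # catch-trial gain applied to the flow field

def perceived_self_motion(w_visual):
    visual = gain * walking_speed   # what the flow field signals
    non_visual = walking_speed      # what proprioception/efference signal
    return w_visual * visual + (1 - w_visual) * non_visual

full_visual_shift = perceived_self_motion(1.0) - walking_speed  # purely visual prediction
observed_shift = 0.2 * full_visual_shift                        # "only 20% of predicted"
w = observed_shift / (gain * walking_speed - walking_speed)
print(w)  # ~ 0.2: visual weight about 20%, non-visual about 80%
```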

  14. However, is flow parsing necessary for the interception of moving objects during self-motion? One can instead use a constant bearing angle strategy (sketched below). Unresolved. Calibration and prediction is another possibility.
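
A minimal simulation of the constant bearing angle strategy (all parameters invented): the pursuer simply steers to null any change in the bearing to the target, which yields an interception course without ever estimating the object's motion or parsing the flow field.

```python
import numpy as np

# Steer to keep the bearing to the target constant (proportional navigation):
dt, speed, N = 0.05, 1.4, 4.0              # time step, pursuer speed, steering gain
pursuer, heading = np.array([0.0, 0.0]), 0.0
target, target_vel = np.array([5.0, 5.0]), np.array([-0.5, 0.0])

bearing_prev, min_dist = None, np.inf
for _ in range(400):
    offset = target - pursuer
    min_dist = min(min_dist, float(np.linalg.norm(offset)))
    bearing = np.arctan2(offset[1], offset[0])
    if bearing_prev is not None:
        heading += N * (bearing - bearing_prev)   # null the bearing change
    bearing_prev = bearing
    pursuer = pursuer + speed * dt * np.array([np.cos(heading), np.sin(heading)])
    target = target + target_vel * dt

print(f"closest approach: {min_dist:.3f}")  # small -> the paths converge
```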

  15. Multiple object tracking http://ruccs.rutgers.edu/finstlab/MOT-movies/MOT-Occ-baseline.mov

  16. Wolbers et al., NN 2008: Spatial Updating. Because egocentric object locations constantly change as we move through an environment, only continuous updating enables us to act effectively on objects or to avoid getting lost. This process has been termed spatial updating, and it is of major importance whenever objects go out of view, e.g. when people walk with little vision in the dark. Successful spatial updating requires an observer to perceive the initial spatial positions of external objects and to create a corresponding internal representation. Subdivisions of the posterior parietal cortex code for spatial location in multiple body-based reference frames. Such locational cues form the basis of an egocentric map of the surrounding space that critically depends on the precuneus and connected inferior and superior parietal areas. It is unknown how the human brain continuously integrates the wealth of incoming information during complete body displacements.
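
A minimal sketch of what egocentric spatial updating computes, assuming 2D geometry and perfect self-motion signals (an idealization for illustration): each stored object vector is shifted by the body translation and counter-rotated by the heading change, so it stays valid in the new body frame even when the object is out of view.

```python
import numpy as np

def update_egocentric(objects, translation, rotation):
    """Update egocentric object vectors after a body displacement.
    objects: (N, 2) positions in the old body frame (x forward, y left).
    translation: (2,) observer displacement, expressed in the old body frame.
    rotation: heading change in radians (positive = leftward turn).
    Returns the object positions in the new body frame."""
    c, s = np.cos(-rotation), np.sin(-rotation)
    R = np.array([[c, -s], [s, c]])          # counter-rotate by the turn
    return (objects - translation) @ R.T      # shift, then rotate each vector

objs = np.array([[2.0, 1.0], [0.0, 3.0]])    # two remembered object locations
# Walk 1 m forward, then turn 90 degrees to the left:
print(update_egocentric(objs, np.array([1.0, 0.0]), np.pi / 2))
# -> [[ 1. -1.] [ 3.  1.]]: e.g. the first object is now 1 m ahead, 1 m right
```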

  17. Only the precuneus and the left dorsal precentral gyrus showed a combination of both main effects in the delay phase (middle): not only were BOLD responses elevated during updating as compared with static trials, but they also showed a linear increase with the number of objects. This indicates that both regions are sensitive to working-memory load and to the presence of optic flow, suggesting a prominent role in spatial updating.

  18. Humans can update up to four spatial positions during simulated self-motion. Pointing errors and reaction times increased with increasing working-memory load and were elevated when self-motion cues were present. Activation in the precuneus and the dorsal precentral gyrus [30] closely followed both experimental manipulations, suggesting their importance for the updating process. Only the precuneus was involved when a pointing response was required – support for the existence of transient spatial maps in medial parietal cortex. Visual spatial updating is linked to the interplay of self-motion processing with the construction of updated representations in the precuneus and the context-dependent planning of potential motor actions in the dorsal precentral gyrus. When navigating in familiar environments or over longer durations, humans predominantly monitor changes in orientation and position using path integration and later reconstruct object locations from enduring allocentric representations (medial prefrontal cortex and hippocampus – involved in visual path integration – position-only updating). By contrast, spatial updating over short time scales in novel environments operates on transient, egocentric representations, in which the relationship between each object and the observer must be constantly updated as the observer moves.
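
For contrast with the object-vector updating above, a minimal path-integration sketch (invented numbers): here nothing is stored per object; only the observer's own position and heading are integrated from self-motion signals, and object locations would be reconstructed later from an enduring allocentric map.

```python
import numpy as np

# Position-only updating (dead reckoning) from self-motion samples,
# e.g. vestibular/proprioceptive estimates of speed and turn rate at 10 Hz:
pos, heading, dt = np.zeros(2), 0.0, 0.1
motion = ([(1.0, 0.0)] * 20          # 2 s straight at 1 m/s
          + [(1.0, np.pi / 4)] * 20  # 2 s turning left while walking
          + [(1.0, 0.0)] * 20)       # 2 s straight on the new heading

for speed, turn_rate in motion:
    heading += turn_rate * dt
    pos += speed * dt * np.array([np.cos(heading), np.sin(heading)])

print(pos, np.degrees(heading))  # where dead reckoning says we are now
```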

  19. Electrocortical stimulation in the precuneus can induce the sensation of translational self-motion [39], and BOLD responses in this structure correlate with the subjective experience of self-motion [1]. Updating of the stored egocentric object vectors is mediated by dense connections between area MST and the precuneus [5], providing the latter with crucial information about translational self-motion. The precuneus may contain a human homolog of the monkey parietal reach region. The storing and updating of egocentric representations of space, independent of potential actions, constitutes the most parsimonious interpretation of the activation in the precuneus. Dorsal premotor cortex: whenever subjects responded by pointing, the egocentric spatial map in the precuneus was transformed into corresponding vectors for pointing movements in PMd, which could be accomplished via direct connections between the two regions.

  20. Angelaki et al

  21. 1. Top-down and bottom-up signals of attention control are not totally separate, and my question is: where are they integrated? A paper by Thompson et al. (2005) shows that FEF has a salience map that topographically integrates those signals, as revealed by error signals. Do you know of any other regions that have similar or different mechanisms for integrating bottom-up and top-down signals? 2. When we discussed the difficulty of attaching labels or names to smells, I was thinking about the creation of language and how it is limited in naming or describing smells. My thought was that this could be attributed to (1) the wiring of language-related regions to regions that process visual features of objects throughout the development of language, and (2) relatively few connections between olfactory structures in the brain and language areas, compared with the connections between visual and language areas. In the textbook, the possible reasons described are (1) that the processing of odors skips the thalamus, which is relevant for language processing, and (2) competition between odor and language processing for cognitive resources. If there is competition for resources, why did the study that used odor cues to reactivate declarative memory during sleep work (the one you mentioned in class)? Declarative memory must be encoded while the odors are presented during learning. And odor has also been used as a cue for memory of word lists (e.g., Herz 1997).

  22. You mentioned that humans have about 25 receptors for bitter taste, as opposed to only 3 receptors for sweet taste. Your/the book's explanation for this is essentially evolutionary: humans needed to protect themselves from bitter things that might harm them, like poisons. Is this the kind of explanation that scientists give for many things that remain unexplained – “just-so stories”? What is the difference in the coding of taste/smell from audition, vision, proprioception, and touch?
