This research explores how retinal oscillations encode large contiguous visual features and their implications for cognitive models in system design. By examining the rate and temporal codes used by retinal neurons across various species, including primates and cats, the study highlights the potential for improved object detection and autonomous navigation in robotics. The integration of spiking neurons and biomimetic computing methods paves the way for future developments in adaptive control and visual processing systems, enhancing both commercial and academic applications.
Cognitive Systems: Human Cognitive Models in System Design
Retinal Oscillations that Encode Large Contiguous Features: Implications for How the Nervous System Processes Visual Information
Garrett Kenyon, Los Alamos National Laboratory
A Model Retina (after Kolb, Fernandez & Nelson)
Rate Code (Retina): firing rate tracks light intensity [figure, 0–1 sec axis]. Dacey & Lee, Nature, 1994 (monkey)
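The rate code on this slide — firing rate proportional to light intensity — can be sketched as a Poisson spike generator. This is a generic textbook illustration, not the talk's retina model; the `gain`, `duration`, and `dt` values are arbitrary assumptions.

```python
import numpy as np

def poisson_spikes(intensity, gain=100.0, duration=1.0, dt=0.001, rng=None):
    """Poisson spike train whose mean rate is proportional to light
    intensity (rate-code sketch; gain and bin size are illustrative)."""
    rng = np.random.default_rng(0) if rng is None else rng
    rate = gain * intensity              # spikes per second
    n_bins = int(duration / dt)
    # In each small bin, spike with probability rate * dt.
    return rng.random(n_bins) < rate * dt

# A brighter stimulus yields more spikes over the same 1 s window.
dim = poisson_spikes(0.2).sum()
bright = poisson_spikes(0.8).sum()
```

Under a pure rate code, downstream neurons would read out intensity by counting spikes in a window, which is why slower stimulus changes are easier to track than fast ones.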
Poisson Retina → Detector: pop-out from synchrony [figure axis: time]
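The "pop-out from synchrony" idea — a downstream detector responds when many retinal cells fire in the same instant, even at unchanged mean rates — can be illustrated with a simple coincidence count. This is a minimal sketch, not the talk's detector; cell count, bin count, and firing probability are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_bins, p = 20, 1000, 0.02

# Asynchronous population: each cell spikes independently.
asynchronous = rng.random((n_cells, n_bins)) < p

# Synchronous population: same mean rate, but all cells share
# common event times (perfect synchrony, for illustration).
events = rng.random(n_bins) < p
synchronous = np.tile(events, (n_cells, 1))

def max_coincidence(raster):
    """Peak number of cells firing in the same time bin —
    a crude synchrony detector."""
    return int(raster.sum(axis=0).max())

async_peak = max_coincidence(asynchronous)
sync_peak = max_coincidence(synchronous)
```

A threshold set between the two peaks fires only for the synchronous population, so synchronized features "pop out" without any change in average firing rate.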
Temporal Code (Retina): Topology — separate bars vs. single bar. Neuenschwander & Singer, Nature, 1996 (cat)
Temporal Code (Model): Topology [figure: cross-correlograms for cell pairs 1–2, 2–3, 3–4; ±50 msec axis]
Temporal Code (Retina): Size — increasing stimulus size. Neuenschwander & Singer, Vision Res., 1999 (cat)
Temporal Code (Model): Size — increasing stimulus size
Retinal Oscillations in Other Species — Frog (Ishikane et al., 1999); Primate (Frishman et al., 2000). Oscillations also reported in rabbit, salamander, and human retina.
Experimental Test: Cat vs. Model [figure: spike autocorrelations at stimulus sizes 0.7°, 6.3°, and 9.8°; ±40 msec axis]. Neuenschwander & Singer (personal communication)
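The autocorrelation comparison above rests on a standard computation: oscillatory spike trains show side peaks in their autocorrelogram at multiples of the oscillation period. A minimal sketch with a toy periodic train (not the cat or model data):

```python
import numpy as np

def autocorrelogram(spikes, max_lag):
    """Spike-count autocorrelation at integer lags -max_lag..+max_lag
    (in bins): for each lag k, count coincidences between the train
    and a copy of itself shifted by k."""
    lags = np.arange(-max_lag, max_lag + 1)
    n = len(spikes)
    ac = np.array([np.sum(spikes[:n - abs(k)] * spikes[abs(k):])
                   for k in lags])
    return lags, ac

# Toy oscillatory train: one spike every 10 bins.
spikes = np.zeros(1000)
spikes[::10] = 1.0
lags, ac = autocorrelogram(spikes, 25)

# Side peak at the period (lag 10) vs. an off-period lag (lag 7).
period_peak = ac[lags == 10][0]
off_peak = ac[lags == 7][0]
```

Flat autocorrelograms at small stimulus sizes and strong side peaks at large sizes are what would signal size-dependent oscillations in both retina and model.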
Single Trial Discrimination: Cat vs. Model [figure: percent correct (60–100%) over 0–400 msec]. Neuenschwander & Singer (personal communication)
Object Detection with Spiking Neurons [figure: input layer (x, y) feeding 1st, 2nd, and 3rd orientation modules]
Image frame → Lucas-Kanade optical flow → population code → depth map
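The Lucas-Kanade stage in this pipeline estimates motion between frames by solving a small least-squares system built from image gradients. Below is the textbook single-window form, not the talk's exact implementation; the test image and window size are assumptions.

```python
import numpy as np

def lucas_kanade(frame0, frame1):
    """Single-window Lucas-Kanade flow estimate: assuming brightness
    constancy (Ix*vx + Iy*vy + It = 0), solve the 2x2 normal
    equations A v = b over the whole window."""
    Ix = np.gradient(frame0, axis=1)        # horizontal gradient
    Iy = np.gradient(frame0, axis=0)        # vertical gradient
    It = frame1 - frame0                    # temporal derivative
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)            # (vx, vy)

# Curved test image (a paraboloid) shifted one pixel to the right;
# the recovered flow should be close to (1, 0).
y, x = np.mgrid[0:16, 0:16].astype(float)
frame0 = x**2 + y**2
frame1 = (x - 1.0)**2 + y**2
vx, vy = lucas_kanade(frame0, frame1)
```

A linear ramp would make `A` singular (the aperture problem), which is why the test image needs curvature in both directions; in practice the flow field is computed per local window and then fed into the population code.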
Biomimetic Computing: Future Work
• Implicit object/terrain classification using spiking neurons
  • Coherent motion
  • Smooth contours
  • Textures
  • Depth
• Autonomous navigation and obstacle avoidance
  • Combine stereo and motion processing
• Adaptive control: visual feedback
  • Incorporate spiking neurons
• On-board hardware
  • Prototype robot with artificial vision
• Commercial and academic partners
Biomimetic Computing at LANL
Garrett Kenyon (P-21), Bryan Travis (P-21), John George (P-21), James Theiler (NIS-2), Greg Stephens (postdoc), Mark Flynn (postdoc), Kate Denning (grad student, UCSD), Sarah Kolitz (post-baccalaureate), Nils Whitmont (post-baccalaureate), Alex Nugent (post-baccalaureate)