
Colour: an algorithmic approach






Presentation Transcript


  1. PhD Research Topic: Colour, an algorithmic approach. Thomas Bangert thomas.bangert@qmul.ac.uk http://www.eecs.qmul.ac.uk/~tb300/pub/PhD/ColourVision2.pptx

  2. understanding how natural visual systems process information Visual system: • about 30% of cortex • most studied part of brain • best understood part of brain

  3. Image sensors • Binary sensor array: monochromatic ('external retina') • Luminance sensor array: dichromatic colour • Multi-spectral sensor array: tetrachromatic colour. What do these direct links to the brain do?

  4. Let's hypothesise … When an astronomer looks at a star, how does he code the information his sensors produce? It was noticed that parts of the spectrum were missing.

  5. Looking at our own star – the sun

  6. Each atomic element absorbs at specific frequencies …

  7. We can code for these elements … We can imagine how coding spectral element lines could be used for visual perception … by a creature very different to us … a creature which hunts by 'tasting' the light we reflect … seeing the stuff we are made of. Colour in this case means atomic structure and chemistry …

  8. Where do we start with humans? Any visual system starts with the sensor. What kind of information do these sensors produce? How do we use that information to code what is relevant to us? Let’s first look at sensors we ourselves have designed!

  9. Sensors we build [diagram: sensor grid with X and Y axes]

  10. The Pixel. Sensor element may be: • Binary • Luminance • RGB. The fundamental unit of information!

  11. The Bitmap: 2-d space represented by an integer array [diagram: pixel grid with rows and columns indexed 0, 1, 2]

  12. What information is produced? 2-d array of pixels: • Black & White pixel: single luminance value, usually 8-bit • Colour pixel: 3 colour values, usually 8-bit RGB
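A minimal sketch in Python/NumPy of the two pixel formats just described: a 2-d integer array for black & white, and the same grid holding three 8-bit values per pixel for colour.

```python
import numpy as np

# Black & white bitmap: 2-d space represented by an integer array,
# one 8-bit luminance value (0..255) per pixel.
grey = np.zeros((4, 4), dtype=np.uint8)
grey[1, 2] = 255  # the pixel at row 1, column 2 is now white

# Colour bitmap: the same grid, but each pixel holds three 8-bit values (RGB).
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[1, 2] = (255, 0, 0)  # the same pixel, now pure red
```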

  13. What does RGB mean? • It is an instruction for producing light stimuli • Light stimuli for a human standard observer • Light stimuli produce perception • RGB codes the re-production of measured perceptual stimuli • It is assumed that humans are trichromatic • It tells us nothing about what colour means!

  14. The Standard Observer: CIE 1931 xy chromaticity diagram, primaries at 435.8 nm, 546.1 nm, 700 nm. The XYZ sensor response; now we extract the colour information from the sensor readings. The math: x = X/(X+Y+Z), y = Y/(X+Y+Z), z = Z/(X+Y+Z), which is 2-d as z is redundant (x + y + z = 1).
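A minimal sketch of that projection in Python:

```python
def xyz_to_xy(X, Y, Z):
    """Project CIE XYZ tristimulus values onto the 2-d chromaticity plane.
    z = Z/(X+Y+Z) is redundant because x + y + z = 1."""
    s = X + Y + Z
    return X / s, Y / s

# Equal-energy white (X = Y = Z) lands at the centre of the diagram:
print(xyz_to_xy(1.0, 1.0, 1.0))  # (0.333..., 0.333...)
```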

  15. Understanding CIE chromaticity: best understood as a failed colour circle. White in the centre; saturated/monochromatic colours on the periphery; everything in between is a mix of white and the colour.

  16. But does it blend? Does it match? The problem of 'negative primaries' when matching monochromatic colours.

  17. What the Human Visual System (HVS) does is very different!

  18. Human Visual System (HVS) Part 1 Coding Colour

  19. The Sensor. 2 systems: day-sensor & night-sensor. To simplify, we ignore the night-sensor system. Cone sensors are very similar to the RGB sensors we design for cameras.

  20. BUT: the sensor array is not ordered; the arrangement is random. Note: very few blue sensors, none in the centre.

  21. sensor pre-processing circuitry

  22. First question: what information is sent from the sensor array to the visual system? Very clear division between sensor & pre-processing (front of brain) and visual system (back of brain), connected by a very limited communication link.

  23. Receptive Fields All sensors in the retina are organized into receptive fields Two types of receptive field. Why?

  24. What does a receptive field look like? In the central fovea it is simply a pair of sensors. • Always 2 types: • plus-centre • minus-centre

  25. What do retinal receptive fields do? Produce an opponent value: simply the difference between 2 sensors. This means it is a relative measure, not an absolute measure, and no difference = no information to the brain.
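A minimal sketch of the opponent value, assuming idealized scalar sensor readings:

```python
def opponent(a, b):
    """Opponent value of a receptive field: simply the difference between
    two sensor readings, a relative measure rather than an absolute one."""
    return a - b

# Neurons only carry positive values, so the signed difference is split
# across a plus-centre and a minus-centre field:
centre, surround = 0.7, 0.4
plus_centre = max(opponent(centre, surround), 0.0)   # fires: centre brighter
minus_centre = max(opponent(surround, centre), 0.0)  # silent: clipped at 0
print(plus_centre, minus_centre)  # approx 0.3 and 0.0

# Equal readings: no difference, so no information is sent to the brain.
assert opponent(0.5, 0.5) == 0.0
```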

  26. Sensor Input: Luminance Levels. It is usual to code 256 levels of luminance. Linear: Y. Logarithmic: Y'.
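A minimal sketch of the two codings; the constant k in the logarithmic version is an illustrative assumption, not a value from the slides:

```python
import math

def encode_linear(y):
    """Linear coding Y: 256 equally spaced luminance levels (y in 0..1)."""
    return round(255 * y)

def encode_log(y, k=100.0):
    """Logarithmic coding Y': spends more of the 256 levels on dark values,
    roughly matching perceptual sensitivity."""
    return round(255 * math.log1p(k * y) / math.log1p(k))

# Mid-grey maps to level 128 linearly, but to ~217 logarithmically:
print(encode_linear(0.5), encode_log(0.5))  # 128 217
```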

  27. Receptive Field Function: output is the difference between the average of the centre and the max/min of the surround. [diagram: plus-centre (+) and minus-centre (−) fields, with min zone, max zone and 'tip of triangle' marked]
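A minimal sketch of this function; which surround extreme (max or min) pairs with the plus-centre versus the minus-centre field is an assumption here:

```python
def receptive_field(centre, surround, plus_centre=True):
    """Output is the difference between the average of the centre sensors
    and an extreme (min or max) of the surround, per the slide. Which
    extreme belongs to which field type is an assumption."""
    avg_centre = sum(centre) / len(centre)
    extreme = min(surround) if plus_centre else max(surround)
    return avg_centre - extreme

# Uniform illumination: centre average equals the surround extreme,
# so the output is zero; no difference means no information to the brain.
print(receptive_field([0.5, 0.5], [0.5, 0.5, 0.5, 0.5]))  # 0.0
```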

  28. Dual response to gradients. Why? Often described as second derivative / zero crossing.

  29. Abstracted: neurons only produce positive values; a dual +/− pair produces positive & negative values. Together they are called a channel, which means signed values, and produces directional information. Information sent to higher visual processing areas: location, angle, luminance, equiluminance and colour. This is a sparse representation, a type of data compression: only essential information is sent! From this the percept is created. Conversion from this format to bitmap?

  30. Starting with the sensor: human sensor response to non-chromatic light stimuli

  31. HVS luminance sensor, idealized: a linear response in relation to wavelength. Under ideal conditions it can be used to measure wavelength.

  32. Spatially opponent: in the HVS, luminance is always measured by taking the difference between two sensor values. This produces a contrast value, and it is done twice to get a signed contrast value.

  33. Moving from Luminance to Colour • Primitive visual systems were in b&w • Night vision remains b&w • Evolutionary path: • Monochromacy • Dichromacy (most mammals, e.g. the dog) • Trichromacy (birds, apes, some monkeys) • Vital for evolution: backwards compatibility

  34. Electro-Magnetic Spectrum Visible Spectrum Visual system must represent light stimuli within this zone.

  35. Colour Vision: Young-Helmholtz theory. Argument: sensors are RGB, therefore brain is RGB. A 3-colour model.

  36. Hering colour-opponency model. Fact: we never see reddish green or yellowish blue. Therefore colours must be arranged in opponent pairs: Red-Green, Blue-Yellow. A 4-colour model.

  37. Colour sensor response to monochromatic light [plots: Human; Bird, which has 4 sensors, equidistant on the spectrum]

  38. How to calculate spectral frequency with 2 poor-quality luminance sensors: a shift of Δ from a known reference point. [plot: sensor value (0.0–1.0) against wavelength, with G and R response curves crossing between λ−Δ and λ+Δ] Roughly speaking:
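A hedged sketch of the idea; the reference wavelength (560 nm) and Δ (50 nm) are illustrative assumptions, not values from the slides:

```python
def estimate_wavelength(g, r, ref=560.0, delta=50.0):
    """Estimate the wavelength of a monochromatic stimulus from two
    idealized, overlapping linear sensors G and R peaking either side of
    a reference point. Equal readings place the stimulus at the
    reference; the normalized difference gives the shift from it."""
    return ref + delta * (r - g) / (r + g)

print(estimate_wavelength(0.5, 0.5))  # equal response: the reference, 560 nm
print(estimate_wavelength(0.3, 0.7))  # stronger R: ~580 nm, a shift of ~+20
```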

  39. The ideal light stimulus: monochromatic light. Allows frequency to be measured in relation to a reference.

  40. Problem: natural light is not ideal. • Light stimulus might not activate the reference sensor fully. • Light stimulus might not be fully monochromatic, i.e. there might be white mixed in.

  41. Solution: a 3rd sensor is used to measure equiluminance, which is subtracted; the reference sensor can then be normalized.

  42. Equiluminance & Normalization (also called Saturation and Lightness). • Must be removed first, before opponent values are calculated. • Then opponent value = spectral frequency. • Values must be preserved, otherwise information is lost.

  43. A 4-sensor design: 2 opponent pairs • only 1 of each pair can be active • the min sensor is equiluminance

  44. What is Colour? Colour is calculated exactly the same as luminance contrast; the only difference is that the spectral range of the sensors is modified. The colour channels are R-G and B-Y. Uncorrected colour values are contrast values, but with white subtracted and normalized: Colour is Wavelength!
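Pulling slides 41-44 together, a speculative sketch; the slides do not specify the exact normalization, so dividing by the remaining peak is an assumption:

```python
def colour_channels(r, y, g, b):
    """Compute the two opponent colour values from a 4-sensor design.
    The minimum sensor reading is taken as equiluminance (the white
    component), subtracted from all four sensors, and the remainder is
    normalized before the opponent differences are formed."""
    white = min(r, y, g, b)                       # equiluminance = min sensor
    r, y, g, b = (v - white for v in (r, y, g, b))
    peak = max(r, y, g, b)
    if peak > 0:                                  # normalize (assumed scheme)
        r, y, g, b = (v / peak for v in (r, y, g, b))
    return r - g, b - y, white                    # R-G, B-Y, plus lightness

# A reddish stimulus with some white mixed in:
print(colour_channels(0.9, 0.4, 0.3, 0.3))  # strong +R-G, slight -B-Y, white 0.3
```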

  45. How many sensors? 4 primary colours require 4 sensors!

  46. The human retina only has 3 sensors! What to do? We add an emulation layer: the hardware has 3 physical sensors but emulates 4 sensors. No maths … just a diagram!
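Since the slide gives only a diagram, the following is purely a speculative placeholder: synthesizing the missing yellow response as the mean of R and G is an assumption, not the author's method.

```python
def emulate_four(r, g, b):
    """Emulation layer: 3 physical sensor readings in, 4 virtual sensor
    readings out. The yellow channel is a hypothetical stand-in
    (mean of R and G); the slides specify the layer only as a diagram."""
    y = (r + g) / 2.0
    return r, y, g, b

# The 4 emulated values feed the opponent stage sketched after slide 44:
print(emulate_four(0.9, 0.3, 0.3))  # (0.9, 0.6, 0.3, 0.3)
```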

  47. Testing the colour-opponent model: what we should see vs what we do see. Unfortunately it does not match: there is Red in our Blue.

  48. Pigment absorption data of human cone sensors: Red > Green

  49. Solution: HVS colour representation must be circular! This is not a new idea, but it is not currently in fashion. [colour circle marked at 480 nm, 540 nm and 620 nm]

  50. Dual Opponency with Circularity: an ideal model using 2 sensor pairs
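A minimal sketch of the circular model, assuming the two opponent values are read as Cartesian coordinates and colour as an angle:

```python
import math

def hue_angle(rg, by):
    """Map the two signed opponent values (R-G and B-Y) to an angle, so
    the representation closes into a circle rather than a line with ends."""
    return math.degrees(math.atan2(by, rg)) % 360.0

print(hue_angle(1.0, 0.0))   # pure +R-G response -> 0.0 degrees
print(hue_angle(0.0, 1.0))   # pure +B-Y response -> 90.0 degrees
print(hue_angle(-1.0, 0.0))  # pure green side    -> 180.0 degrees
```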
