
Colour: an algorithmic approach



Presentation Transcript


  1. MSc in Computer Science by Research. Project Viva. Colour: an algorithmic approach. Thomas Bangert, tb300@eecs.qmul.ac.uk

  2. Understanding how the visual system processes information. Visual system: • about 30% of cortex • most studied part of the brain • best understood part of the brain

  3. Image sensors • Binary sensor array • Luminance sensor array • Multi-Spectral sensor array

  4. Where do we start? We first need a model of what light information means. Any visual system starts with a sensor: What kind of information do these sensors produce? Let’s first look at sensors we have designed!

  5. Sensors we build (diagram: sensor array with X and Y axes)

  6. The Pixel. Sensor elements may be: • Binary • Luminance • RGB. The fundamental unit of information!

  7. The Bitmap: 2-d space represented by an integer array (diagram: grid indexed 0, 1, 2 on each axis)

  8. What information is produced? A 2-d array of pixels: • Black & White pixel: a single luminance value, usually 8-bit • Colour pixel: 3 colour values, usually 8-bit
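The pixel formats above can be sketched in a few lines of code; this is a minimal illustration (names and the row-major list-of-lists layout are my assumptions, not from the slides):

```python
# Minimal sketch of the pixel formats described on the slide:
# an 8-bit luminance pixel and an 8-bit-per-channel colour pixel,
# stored in a 2-d integer array (the bitmap).

def luminance_pixel(value):
    """Black & white pixel: a single 8-bit luminance value."""
    if not 0 <= value <= 255:
        raise ValueError("luminance must fit in 8 bits")
    return value

def rgb_pixel(r, g, b):
    """Colour pixel: three 8-bit colour values."""
    if not all(0 <= c <= 255 for c in (r, g, b)):
        raise ValueError("each channel must fit in 8 bits")
    return (r, g, b)

# 2-d space represented by an integer array: 3 rows x 4 columns, all black.
bitmap = [[luminance_pixel(0) for _ in range(4)] for _ in range(3)]
```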

  9. Where we need to start: the fundamentals of the sensor

  10. Human Visual System (HVS) The fundamentals!

  11. The Sensor. 2 systems: day-sensor & night-sensor. To simplify, we ignore the night-sensor system. Cone sensors are very similar to the RGB sensors we design for cameras.

  12. BUT: the sensor array is not ordered; the arrangement is random. Note: very few blue sensors, and none in the centre.

  13. sensor pre-processing circuitry

  14. First question: what information is sent from the sensor array to the visual system? There is a very clear division between sensor & pre-processing (front of brain) and visual system (back of brain), connected by a very limited communication link.

  15. Receptive Fields All sensors in the retina are organized into receptive fields Two types of receptive field. Why?

  16. What does a receptive field look like? In the central fovea it is simply a pair of sensors. Always 2 types: • plus-centre • minus-centre

  17. What do retinal receptive fields do? They produce an opponent value: simply the difference between 2 sensors. This means it is a relative measure, not an absolute measure, and no difference = no information sent to the brain.
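The opponent value described above reduces to a signed difference; a one-line sketch (function name is mine):

```python
# Sketch of the opponent value: the difference between two sensor
# responses -- a relative, not absolute, measure. Equal inputs give
# zero, i.e. no information is sent to the brain.

def opponent_value(sensor_a, sensor_b):
    """Signed relative measure between two sensor responses."""
    return sensor_a - sensor_b
```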

  18. Sensor Input Luminance Levels. It is usual to code 256 levels of luminance. Linear: Y. Logarithmic: Y′.
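The two codings can be sketched as follows; the 2.2 exponent is an illustrative gamma value of mine, standing in for the logarithmic coding the slide names:

```python
# Coding 256 luminance levels: linear (Y) versus non-linear (Y'),
# the latter approximating the logarithmic response described above.
# The 2.2 exponent is an assumed, illustrative value.

def encode_linear(y):
    """Linear Y: y in [0, 1] mapped straight to 0..255."""
    return round(y * 255)

def encode_gamma(y, gamma=2.2):
    """Non-linear Y': perceptually spaced levels."""
    return round((y ** (1.0 / gamma)) * 255)
```

Note how mid-grey lands much higher on the non-linear scale, spending more codes on dark levels where the eye is more sensitive.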

  19. - - -- - -- - - + + ++ + ++ + + - - -- - -- - - + + ++ + ++ + + - - -- - -- - - + + ++ + ++ + + - - -- - -- - - + + ++ + ++ + + + + ++ + ++ + + - - -- - -- - - - - -- - -- - - + + ++ + ++ + + - - -- - -- - - + + ++ + ++ + + - - -- - -- - - + + ++ + ++ + + - - -- - -- - - + + ++ + ++ + + Receptive Field Function Min Zone Max-Min Function Output is difference between average of center and max/min of surround Tip of Triangle Max Zone

  20. Dual response to gradients. Why? Often described as a second derivative / zero crossing.

  21. Abstracted. Neurons only produce positive values. A dual +/− pair produces positive & negative values; together they are called a channel, which produces signed co-ordinate values.
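The channel abstraction above amounts to splitting a signed value across two positive-only units; a small sketch (function names are mine):

```python
# Sketch of the +/- channel: neurons carry only positive values, so a
# signed value is split across a plus unit and a minus unit, and
# recombined by subtraction.

def to_channel(x):
    """Split a signed value into a (plus, minus) pair of positive rates."""
    return (max(x, 0.0), max(-x, 0.0))

def from_channel(plus, minus):
    """Recombine the pair into a signed value."""
    return plus - minus
```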

  22. Human sensor response to monochromatic light stimuli

  23. HVS Luminance Sensor, idealized: a linear response in relation to wavelength. Under ideal conditions it can be used to measure wavelength.

  24. Spatially Opponent. HVS: luminance is always measured by taking the difference between two sensor values, producing a contrast value. This is done twice, to get a signed contrast value.

  25. Moving from Luminance to Colour • Primitive visual systems were in b&w • Night vision remains b&w • Evolutionary path: • Monochromacy • Dichromacy (most mammals, e.g. the dog) • Trichromacy (birds, apes, some monkeys) • Vital for evolution: backwards compatibility

  26. Electro-Magnetic Spectrum Visible Spectrum Visual system must represent light stimuli within this zone.

  27. Colour Vision: Young-Helmholtz theory. Argument: the sensors are RGB, therefore the brain is RGB. A 3-colour model.

  28. Hering colour opponency model. Fact: we never see reddish green or yellowish blue. Therefore colours must be arranged in opponent pairs: Red-Green, Blue-Yellow. A 4-colour model.

  29. HVS colour sensors: response to monochromatic light

  30. How to calculate spectral frequency with 2 luminance sensors. Roughly speaking:
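Roughly, the idea can be illustrated as follows; the linear response curves and the 400-700 nm band are invented for the illustration, not taken from the slides:

```python
# Illustration: with two broadly tuned sensors whose responses vary
# linearly and oppositely across a shared band, the opponent
# difference between them recovers the stimulus wavelength.
# All curve shapes and numbers here are assumptions.

def sensor_short(wl_nm):   # response falls linearly from 400 to 700 nm
    return (700 - wl_nm) / 300

def sensor_long(wl_nm):    # response rises linearly from 400 to 700 nm
    return (wl_nm - 400) / 300

def estimate_wavelength(short_resp, long_resp):
    # opponent value runs from -1 at 400 nm to +1 at 700 nm
    return 550 + (long_resp - short_resp) * 150
```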

  31. The ideal light stimulus: monochromatic light. Allows frequency to be measured in relation to a reference.

  32. Problem: natural light is not ideal. • The light stimulus might not activate the reference sensor fully. • The light stimulus might not be fully monochromatic, i.e. there might be white mixed in.

  33. Solution: the reference sensor can be normalized once the white component is subtracted. A 3rd sensor is used to measure equiluminance.

  34. Equiluminance & Normalization, also called Saturation and Lightness. • Must be removed first, before opponent values are calculated. • Then the opponent value = spectral frequency. • The removed values must be preserved, otherwise information is lost.

  35. A 4-sensor design: 2 opponent pairs. • Only 1 of each pair can be active. • The minimum sensor value is equiluminance.
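The removal-and-preservation step of slides 33-34 can be sketched against the 4-sensor design above; the concrete formulas (min as the white term, division by the peak) are my assumptions:

```python
# Sketch of the normalisation described in slides 33-34: the minimum
# sensor value is taken as the equiluminance ("white") term and
# subtracted, then the remainder is scaled to unit peak. Both removed
# values are returned so no information is lost.

def normalise(sensors):
    white = min(sensors)                 # equiluminance / saturation term
    rest = [s - white for s in sensors]
    peak = max(rest)                     # lightness term
    scale = peak if peak > 0 else 1.0
    return white, peak, [s / scale for s in rest]
```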

  36. What does a colour opponent channel look like? Like the luminance contrast opponent channel. Each colour opponent channel codes for 2 primary colours, for a total of 4 primary colours.

  37. What is Colour? Colour is calculated exactly the same way as luminance contrast; the only difference is that the spectral range of the sensors is modified. The colour channels are RG and BY. Uncorrected colour values are contrast values, but with white subtracted and normalized: Colour is Wavelength!
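Computed "exactly the same as luminance contrast", the two colour channels are again signed differences; a sketch with illustrative sensor responses in [0, 1]:

```python
# Sketch: colour opponent channels computed like luminance contrast --
# a signed difference -- with only the sensors' spectral ranges changed.

def rg_channel(red, green):
    return red - green        # + codes Red, - codes Green

def by_channel(blue, yellow):
    return blue - yellow      # + codes Blue, - codes Yellow
```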

  38. How many sensors? 4 primary colours require 4 sensors!

  39. The human retina only has 3 sensors! What to do? Because of opponency, when R = G the RG colour channel is 0. Why not pair R and G and reuse the pair as a Yellow sensor? Yellow can be R = G.
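The reuse described above can be sketched as follows; taking yellow as the shared response min(R, G) is my assumption about how the pair is read out:

```python
# Sketch of reusing the R,G pair as the Yellow sensor: when R = G the
# RG channel is zero, and the shared response stands in for yellow.
# min(R, G) as the shared response is an assumption.

def yellow(red, green):
    return min(red, green)

def by_channel_3_sensor(blue, red, green):
    return blue - yellow(red, green)
```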

  40. How do we abstract information from the sensor array? Luma (Y′), Red-Green (CR), Blue-Yellow (CB)

  41. Luminance + 2 colour values + 2 sensor correction values: Chroma Blue, Chroma Red + Lightness + Saturation
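A concrete luma-plus-chroma decomposition of this kind can be illustrated with the standard BT.601 weights; the coefficients are an assumption of mine, since the slides name the channels but give no numbers:

```python
# Illustration using BT.601 Y'CbCr weights (assumed -- not from the
# slides): one luma value plus two colour-difference values per pixel.

def ycbcr(r, g, b):
    """r, g, b in [0, 1] -> (Y', Cb, Cr)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)      # blue-difference chroma
    cr = 0.713 * (r - y)      # red-difference chroma
    return y, cb, cr
```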

  42. Tri-phosphor lighting, optimised for perception of 'white'

  43. Primary Colours matched to spectrum

  44. Testing the colour opponent model: what we should see vs. what we do see. Unfortunately they do not match: there is Red in our Blue.

  45. The strange case of Ultra-Violet. Light with a wavelength of 400 nm is ultra-blue. The red sensor is at the opposite end of the spectrum & not stimulated. Yet we see ultra-violet, which is Blue + Red... and the further we go into UV, the more red.

  46. Colour Matching Data (CIE 1931)(indirect sensor response) a very odd fact – a virtual sensor response

  47. Pigment Absorption Data of human cone sensors Red > Green

  48. Therefore: the HVS colour representation must be circular! This is not a new idea, but it is not currently in fashion. (Diagram labels: 480 nm, 620 nm, 540 nm)

  49. Dual Opponency with Circularity an ideal model using 2 sensor pairs

  50. Colour Wheel (Goethe & Munsell). Colours are represented by a single value: Hue
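The single circular Hue value can be sketched by treating the two signed opponent channels as coordinates and taking the angle around the wheel; the atan2 mapping (RG on the x-axis) is my assumption:

```python
# Sketch: hue as the single angular value around the colour wheel,
# derived from the two signed opponent channel values.
import math

def hue_degrees(rg, by):
    """Hue in [0, 360) from two opponent channel values."""
    return math.degrees(math.atan2(by, rg)) % 360.0
```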
