
Perceptual Hysteresis Thresholding: Towards Driver Visibility Descriptors


Presentation Transcript


  1. Perceptual Hysteresis Thresholding: Towards Driver Visibility Descriptors Nicolas Hautière, Jean-Philippe Tarel, Roland Brémond Laboratoire Central des Ponts et Chaussées, Paris, France

  2. Presentation overview • Introduction • Angular resolution of a camera • Human vision system modeling • Discrete Cosine Transform • Design of a visibility criterion • Perceptual hysteresis thresholding • Towards road visibility descriptors • Conclusion

  3. Introduction • Most of the information used in driving is visual, so reduced visibility leads to accidents. • Reductions in visibility may have a variety of causes (road geometry, obstacles, adverse weather or lighting conditions). • Different proposals exist in the literature to mitigate the danger of each of these situations using in-vehicle cameras. • One objective is to warn the driver when he is driving too fast for the measured visibility conditions. • Detecting the visible edges in the image is a critical step in assessing the driver's visibility. • We propose such a technique based on the Contrast Sensitivity Function (CSF) of the Human Visual System (HVS).

  4. Angular resolution of a camera Cycle per degree (cpd): this unit measures how well the details of an object can be distinguished without blur; it is the number of line pairs that can be resolved within one degree of visual field. Let us express the angular resolution of a camera in cpd. With the notations of Fig. 1, the length d on the CCD array subtended by a visual field of 1° is d = 2 f tan(0.5°). To obtain the maximum angular resolution r* of the camera in cpd, we divide d by the size of two pixels (one black/white alternation) of the CCD array: r* = d / (2 tpix). (Fig. 1)
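A minimal sketch of this computation in Python, assuming the pinhole relation d = 2 f tan(0.5°) and using the example sensor quoted later in the presentation (tpix = 8.3 µm, f = 8.5 mm); function and variable names are illustrative only.

    import math

    def max_angular_resolution_cpd(t_pix: float, focal: float) -> float:
        """Maximum angular resolution r* of the camera in cycles per degree.

        d is the length on the CCD subtended by a 1 degree visual field; one
        cycle (a black/white alternation) spans two pixels, hence the
        division by 2 * t_pix.
        """
        d = 2.0 * focal * math.tan(math.radians(0.5))
        return d / (2.0 * t_pix)

    # Example sensor from Fig. 4: pixel pitch 8.3 um, focal length 8.5 mm.
    print(max_angular_resolution_cpd(8.3e-6, 8.5e-3))  # ~8.9 cpd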

  5. Human vision system modeling • Our ability to discern low-contrast patterns varies with the size of the pattern, i.e. with its spatial frequency f (cpd). • The CTF is a measure of the minimum contrast needed for an object (a sinusoidal grating) to become visible. • The CTF is defined as 1/CSF, where the CSF is a Contrast Sensitivity Function (see Fig. 2). In this paper, we use Mannos' CSF, plotted in Fig. 3 (its expression is sketched below). (Fig. 2: contrast threshold versus spatial frequency [cpd]; Fig. 3: Mannos CSF.)
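A small sketch of the CSF/CTF pair in Python; the Mannos-Sakrison coefficients A(f) = 2.6 (0.0192 + 0.114 f) exp(−(0.114 f)^1.1) are the commonly used ones and are assumed here, since the slide only names the "Mannos CSF".

    import numpy as np

    def csf_mannos(f_cpd):
        """Contrast Sensitivity Function (Mannos-Sakrison form), f in cpd."""
        f = np.asarray(f_cpd, dtype=float)
        return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

    def ctf(f_cpd):
        """Contrast Threshold Function: minimum visible contrast, CTF = 1 / CSF."""
        return 1.0 / csf_mannos(f_cpd)

    # Threshold contrast over the 0-45 cpd range of Fig. 2 / Fig. 3.
    print(np.round(ctf(np.linspace(0.5, 45.0, 10)), 2))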

  6. Discrete Cosine Transform • Let A = {aij} be a block of the original image and B = {bij} the corresponding block of the transformed image: bij = (2/n) ci cj Σx Σy axy cos[(2x+1)iπ/(2n)] cos[(2y+1)jπ/(2n)], where c0 = 1/√2 and ci = 1 for i = 1…n−1. • The maximum frequency of the DCT is reached at the maximum resolution of the sensor, i.e. r* cpd. • To express the frequency of each bij in cpd, we use a scale factor obtained as the ratio between the sensor resolution r* and the maximum DCT frequency.
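A sketch (Python/NumPy) of the block DCT and of one plausible mapping of coefficient (i, j) to a frequency in cpd, reusing r* from the camera-resolution sketch above; the radial mapping f_ij = r*·sqrt(i² + j²)/n is an assumption about the scale factor mentioned on this slide.

    import numpy as np
    from scipy.fftpack import dct

    def block_dct2(block):
        """Orthonormal 2D DCT-II of an n x n block (c0 = 1/sqrt(2), ci = 1)."""
        a = np.asarray(block, dtype=float)
        return dct(dct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

    def dct_frequencies_cpd(n, r_star):
        """Radial spatial frequency (cpd) assigned to each DCT coefficient (i, j).

        Index i corresponds to i / (2n) cycles per pixel; the Nyquist frequency
        (0.5 cycles per pixel) is mapped onto the sensor resolution r* cpd,
        hence f_ij = r* * sqrt(i**2 + j**2) / n.
        """
        i = np.arange(n, dtype=float)
        fi, fj = np.meshgrid(i, i, indexing='ij')
        return r_star * np.sqrt(fi ** 2 + fj ** 2) / n

    # 8 x 8 example block: a vertical grating of amplitude 40 plus a little noise.
    rng = np.random.default_rng(0)
    block = 128 + 40 * np.cos(np.pi * np.arange(8))[None, :] + rng.normal(0, 2, (8, 8))
    B = block_dct2(block)
    F = dct_frequencies_cpd(8, r_star=8.9)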

  7. Design of a visibility criterion: DCT vs CTF • We can now plot the DCT coefficients with respect to the CTF curve. Fig. 4: curves of the CSF (solid) and of the CTF (dashed) for the sensor used to grab the images (tpix = 8.3 µm, f = 8.5 mm). Fig. 5: plot of the DCT coefficients of the marked blocks with respect to the CTF.

  8. Design of a visibility criterion: Visibility Level definition • Visibility can be related to the contrast C, defined by C = ΔL / Lb, where ΔL is the luminance difference between the target and its background and Lb the background luminance. • For suprathreshold contrasts, the Visibility Level (VL) of a target can be quantified by the ratio of its actual contrast to its threshold contrast. • As Lb is the same for both contrasts, this ratio reduces to VL = ΔL / ΔLthreshold. • ΔLthreshold depends on many parameters and can be estimated using Adrian's empirical target visibility model (Adrian, 1989).
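A tiny numeric illustration of this definition; the luminance values below are made up, and in practice ΔLthreshold would come from Adrian's model.

    def visibility_level(delta_L, delta_L_threshold):
        """VL = actual luminance difference over the threshold luminance difference."""
        return delta_L / delta_L_threshold

    # Hypothetical target: delta_L = 3.5 cd/m^2, threshold = 0.5 cd/m^2 -> VL = 7.
    print(visibility_level(3.5, 0.5))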

  9. Design of a visibility criterion: Visibility Level for periodic targets • We propose a new definition of the VL, denoted VLp, valid for periodic targets, i.e. sinusoidal gratings. • We first consider the ratio rij between a DCT coefficient of the block and the corresponding coefficient of the CTF. • By definition of the CTF, rij ≥ 1 means that the block contains visible edges. • To define VLp, we take the greatest rij over the block. Fig. 6: map of the blocks with VLp ≥ 1.
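A sketch of VLp for one block, reusing block_dct2, dct_frequencies_cpd and ctf from the sketches above. How the DCT coefficients are scaled into contrasts (the scale factor of slide 6) is not reproduced here; normalising by the block's DC term is only an assumption.

    import numpy as np

    def block_vlp(block, r_star):
        """Visibility Level for periodic targets (VLp) of one n x n block.

        r_ij = (contrast carried by DCT coefficient b_ij) / CTF(f_ij);
        VLp is the largest r_ij over the block (the DC term is ignored).
        """
        n = block.shape[0]
        B = block_dct2(block)                  # from the DCT sketch above
        F = dct_frequencies_cpd(n, r_star)     # idem
        contrast = np.abs(B) / max(abs(B[0, 0]), 1e-6)  # assumed DC normalisation
        r = contrast / ctf(F)                  # ctf from the HVS modeling sketch
        r[0, 0] = 0.0
        return r.max()

    print(block_vlp(block, r_star=8.9))        # VLp of the example grating block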

  10. Perceptual hysteresis thresholding: edge detection by segmentation • The proposed approach may be used with different edge detectors (Canny-Deriche, zero-crossing approaches, Sobel). • We propose an alternative method which consists in finding the border F that maximizes the contrast C(s0) between the two parts of a block, without adding any threshold on this contrast value; the edges are the pixels on this border. • This approach is based on Köhler's binarization method and is detailed in [16]. [16] N. Hautière, D. Aubert, and M. Jourlin. Measurement of local contrast in images, application to the measurement of visibility distance through use of an onboard camera. Traitement du Signal, 23(2):145–158, September 2006.
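A sketch in the spirit of Köhler's binarization as used here; the exact contrast definition of [16] is not reproduced, so the pairwise contrast min(|p − s|, |q − s|) below is an assumption.

    import numpy as np

    def kohler_border(block):
        """Find the border that maximises the contrast between the two parts
        of a grey-level block (Kohler-style binarisation sketch).

        For each candidate threshold s, the border F(s) is the set of
        4-neighbour pixel pairs straddling s; the contrast attached to s is
        the mean over those pairs of min(|p - s|, |q - s|). The retained
        border is the one whose threshold s0 maximises this contrast.
        """
        a = np.asarray(block, dtype=float)
        best = (None, -1.0, None)              # (s0, contrast, edge mask)
        for s in np.unique(a)[:-1]:
            below = a <= s
            ev = below[:-1, :] ^ below[1:, :]  # vertical neighbour pairs
            eh = below[:, :-1] ^ below[:, 1:]  # horizontal neighbour pairs
            parts = []
            if ev.any():
                parts.append(np.minimum(np.abs(a[:-1, :][ev] - s),
                                        np.abs(a[1:, :][ev] - s)))
            if eh.any():
                parts.append(np.minimum(np.abs(a[:, :-1][eh] - s),
                                        np.abs(a[:, 1:][eh] - s)))
            if not parts:
                continue
            c = np.concatenate(parts).mean()
            if c > best[1]:
                mask = np.zeros_like(a, dtype=bool)
                mask[:-1, :] |= ev; mask[1:, :] |= ev
                mask[:, :-1] |= eh; mask[:, 1:] |= eh
                best = (s, c, mask)
        return best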

  11. Perceptual hysteresis thresholding: hysteresis thresholding on the VLp • In the usual hysteresis thresholding, a high threshold and a low threshold on the gradient magnitude are set. • We propose to replace these thresholds by thresholds on the VLp (cf. Fig. 7). • The algorithm is thus as follows: • All possible edges are extracted, • The edges are then selected according to their VLp values, using a low threshold tL and a high threshold tH. Fig. 7: principle of thresholding by hysteresis.
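A minimal sketch of this selection step; how the block-wise VLp is spread back onto candidate edge pixels (vlp_map) is an assumption, and the thresholds follow the slide's tL / tH.

    import numpy as np
    from scipy import ndimage

    def hysteresis_on_vlp(edge_mask, vlp_map, t_low=1.0, t_high=10.0):
        """Hysteresis thresholding driven by VLp instead of gradient magnitude.

        Candidate edge pixels with VLp >= t_high act as seeds; pixels with
        VLp >= t_low are kept only if they belong to an 8-connected chain
        that touches at least one seed.
        """
        weak = edge_mask & (vlp_map >= t_low)
        strong = edge_mask & (vlp_map >= t_high)
        labels, _ = ndimage.label(weak, structure=np.ones((3, 3), dtype=int))
        keep = np.unique(labels[strong])
        return np.isin(labels, keep[keep > 0])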

  12. Perceptual hysteresis thresholding: result samples (tL = 1, tH = 10; tL = 1, tH = 20) No noisy features are detected whatever the lighting conditions, even though the thresholds are fixed. The method is thus clearly adaptive.

  13. Perceptual hysteresis thresholding: contrast detection threshold of the human eye • The value of tL is easy to choose, because it can be related to the HVS. • Setting tL = 1 should be appropriate for most applications: the hysteresis thresholding then has only one free parameter! • The value of tH depends on the application. • For lighting engineering, the CIE published guidelines to set the VL according to the complexity of the visual task. • VL = 7 is an adequate value for the night-time driving task. • We can thus set tH = 7 as a starting point; however, a psychophysical validation is necessary.

  14. Towards road visibility descriptors • Once the visible edges have been extracted, they can be used with an onboard camera to derive driver visibility descriptors, e.g. visibility distance estimation. • Three steps remain to complete and validate the algorithm from a psychophysical point of view: • An extension to color images may be necessary, • The CSF is valid for a given adaptation level of the HVS; it would be interesting to automatically select the proper CSF, • Compare our results with the sets of edges manually extracted by different people.

  15. Conclusion • We present a visible-edge selector and use it for in-vehicle applications. • It offers an alternative to the traditional hysteresis filtering. • We propose to replace the thresholds on the gradient magnitude by thresholds on visibility levels. • The low threshold can generally be fixed at 1. • Some guidelines for setting the high threshold are proposed. • This algorithm may be used to develop more sophisticated driver visibility descriptors. • Thereafter, it can be fused with other visibility descriptors to develop driving assistance systems that take all the visibility conditions into account.

  16. Thank you for your attention! This work is partly funded by the French ANR project DIVAS (2007-2010), dealing with vehicle-infrastructure cooperative systems.
