Optimal Linear Operator for Step Edge Detection


  1. Optimal Linear Operator for Step Edge Detection Jun Shen Image Laboratory, Institute EGID, Bordeaux-3 University, France

  2. 1. INTRODUCTION • Edge detection is important in image processing and relies on differential operators; smoothing is necessary to reduce noise. • Methods for edge detection in noisy images: Roberts gradient, Sobel and Prewitt operators, the facet model, the Laplacian operator (small kernel sizes); Marr's edge detection theory uses large-size Gaussians, which require much computation and fast algorithms for their realization, the Gaussian function being cut off on [-w/2, +w/2]. • J. Canny: mono step edge model with a finite kernel size constraint; the optimal filter is approximated by a Gaussian filter.

  3. Our idea • Gaussian filter: a contradiction between noise suppression and edge localization precision; moreover, the loss of localization precision in turn makes edge verification by gradient more difficult. • Use an infinite filter size to reduce noise efficiently and obtain better results: a limited kernel size introduces a cut-off effect. • Use a kernel sharper at the center than Gaussians to improve the precision of edge localization.

  4. 2. OPTIMAL FILTER BASED ON MONO STEP EDGE MODEL • Optimal kernel for the mono-edge model: the edge detector is a smoothing filter followed by a differentiation block.

  5. Noisy mono step edge model • Si(x): input noisy step edge image, Si(x) = S(x) + N(x). • Noise-free step edge: S(x) = A for x > 0; S(x) = A/2 for x = 0; S(x) = 0 for x < 0. • N(x): white independent noise, E{N(x)} = 0, E{N²(x)} = σn².
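
A minimal NumPy sketch of this model, for illustration only (the amplitude A, the noise level and the grid are arbitrary choices, not values from the paper):

```python
import numpy as np

# Noisy mono step edge  Si(x) = S(x) + N(x)  on a discrete grid.
A, sigma_n = 1.0, 0.2                      # edge amplitude and noise std (arbitrary)
x = np.arange(-100, 101)                   # sample positions

S = np.where(x > 0, A, np.where(x < 0, 0.0, A / 2.0))   # ideal step, S(0) = A/2
N = np.random.normal(0.0, sigma_n, size=x.shape)         # white noise, E{N} = 0
Si = S + N                                               # observed noisy step edge
```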

  6. f(x): low-pass smoothing kernel to remove noise • Output So(x) of the low-pass smoothing filter: So(x) = Si(x)*f(x) = S(x)*f(x) + N(x)*f(x) • Energy of the noise in So(x), measured by EN: EN = σn²·∫f²(x)·dx • Preparation for edge detection after noise removal: (d/dx)So(x) = (d/dx){S(x)*f(x)} + (d/dx){N(x)*f(x)}, with (d/dx){S(x)*f(x)} = A·f(x) • Energy of the output derivative at the edge position x = 0, measured by Es: Es = A²·f²(0)

  7. Noise energy in the 1st derivative of the filter output: EN' = σn²·∫f'²(x)·dx • f(x) should minimize the criterion C: C = √(EN·EN' / Es²) • Ignoring the amplitudes of the step edge and of the noise in C, the normalized criterion CN depends only on the filter: CN = √[ {∫f²(x)·dx}·{∫f'²(x)·dx} ] / f²(0)

  8. f(x) should satisfy: • max f(x) = f(0) over x ∈ (-∞, +∞), to avoid additional maxima other than the edge, and f(x1) > f(x2) for |x1| < |x2|. • Decomposing f(x) into the sum of its odd and even parts, f1(x) + f2(x), minimizing CN requires f1(x) = 0, i.e., f(x) must be even. • For convenience, first analyze a kernel of finite window size 2W and then take W → +∞ to find the optimal solution; since f(x) is even, CN² ∝ {∫_0^W f²(x)·dx}·{∫_0^W f'²(x)·dx} / f⁴(0)

  9. In order to find the optimal function f(x), x ∈ [0, W], which minimizes CN, introduce the functional Φ(f, f') = f'² + λ·f², with λ > 0 a constant. • Euler's equation of the variational problem: f''(x) - λ·f(x) = 0, which gives f(x) = C1·exp(p·x) + C2·exp(-p·x) with p = √λ > 0, and C1 and C2 constants determined by the boundary conditions.
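
This variational step can be checked symbolically; a small SymPy sketch (the positive constant is named lam here):

```python
import sympy as sp

x = sp.symbols('x', real=True)
lam = sp.symbols('lam', positive=True)
f = sp.Function('f')

# Euler equation of the functional  Phi(f, f') = f'^2 + lam*f^2 :   f'' - lam*f = 0
sol = sp.dsolve(sp.Eq(f(x).diff(x, 2) - lam * f(x), 0), f(x))
print(sol)   # f(x) = C1*exp(-sqrt(lam)*x) + C2*exp(sqrt(lam)*x)
```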

  10. Taking W → +∞, convergence requires the boundary condition lim_{x→+∞} f(x) = 0, which gives C1 = 0 and f(x) = C2·exp(-p·x), x ∈ [0, +∞). • Because f(x) is even, the optimal f(x) is f(x) = C2·exp(-p·|x|) with p > 0, x ∈ (-∞, +∞). • Taking the normalized kernel with amplitude gain 1: f(x) = (p/2)·exp(-p·|x|), or rewritten as f(x) = a·b^|x| where a = (-ln b)/2 and 0 < b < 1. • In the discrete case, a = (1-b)/(1+b) with 0 < a, b < 1.

  11. Performance of the Optimal Filter • Noise/Signal Ratio (NSR): for the optimal filter, CN = 1; for the Gaussian filter G(x, σ), CN = (π/2)^(1/2) ≈ 1.253. • The optimal filter (ISEF) thus performs about 25% better than Gaussian filters.
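
These two values are easy to verify numerically; a quick NumPy check (p and sigma are arbitrary, since CN is scale-invariant):

```python
import numpy as np

x, dx = np.linspace(-50, 50, 2_000_001, retstep=True)   # dense grid, wide support

def cn(f, fprime):
    """CN = sqrt( int f^2 dx * int f'^2 dx ) / f(0)^2."""
    return np.sqrt(np.sum(f(x)**2) * dx * np.sum(fprime(x)**2) * dx) / f(0.0)**2

p = 1.0                                   # ISEF  f(x) = (p/2) exp(-p|x|)
isef   = lambda t: (p / 2) * np.exp(-p * np.abs(t))
isef_d = lambda t: -(p**2 / 2) * np.sign(t) * np.exp(-p * np.abs(t))

s = 1.0                                   # Gaussian G(x, sigma)
gauss   = lambda t: np.exp(-t**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
gauss_d = lambda t: -t / s**2 * gauss(t)

print(cn(isef, isef_d))    # ~1.0                   (optimal filter)
print(cn(gauss, gauss_d))  # ~1.2533 = sqrt(pi/2)   (Gaussian filter)
```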

  12. Precision of edge localization • Edges are detected by the zero-crossings of the 2nd derivative, so the precision of edge localization can be analyzed from the sign change and slope of the 2nd derivative. • Consider the 2nd derivative of the image filtered by f(x): (d²/dx²){Si(x)*f(x)} = (d²/dx²){S(x)*f(x)} + (d²/dx²){N(x)*f(x)}, with (d²/dx²){S(x)*f(x)} = (d/dx)S(x) * (d/dx)f(x). • For the optimal filter f(x) = a·b^|x|: (d/dx)f(x) = a·ln b·b^x for x > 0, and -a·ln b·b^(-x) for x < 0; hence (d²/dx²){S(x)*f(x)} = A·a·ln b·b^x for x > 0, and -A·a·ln b·b^(-x) for x < 0.

  13. ISEF and its first order derivative

  14. Removing the coefficients corresponding to the amplitude of the step edge and to the noise, we obtain the measure of the edge localization error Le: Le = lim_{ε→0} ∫ |f''(x)|·dx / ∫_{-ε/2}^{+ε/2} f''(x)·dx • For the optimal filter f(x) = a·b^|x|: Le = 0. • For the Gaussian filter G(x, σ): Le = 4·(2πe)^(1/2).

  15. 3. OPTIMAL FILTER FOR MULTI-EDGE DETECTION • In real images there are always many edge points, so multi-edge models are necessary. • Find an optimal filter that eliminates noise while preserving step edges as well as possible, so that its derivatives can be used for edge detection. • Edge positions change from one image to another, so our analysis is statistical. • What interests us is step edges rather than D.C. or low-frequency components, so we model step edge sequences by the following stationary stochastic processes.

  16. Noisy multi-edge model SMi(x) • SM(x) takes the value -A or A, A > 0, and a jump from -A to A (respectively from A to -A) corresponds to a step edge.

  17. Suppose SM(x) is a stationary stochastic process satisfying the conditions: • Stability: the probability of having a step edge in (x0, x0 + Δx) is independent of x0; • Orthogonality: the number of step edges in (x0, x1) is independent of that in (x2, x3) if (x0, x1) ∩ (x2, x3) = ø; • Finite edge density: the probability of two or more step edges in an interval (x, x + Δx) is very small when Δx → 0, i.e., P2(Δx)/Δx → 0 as Δx → 0, where P2(Δx) denotes the probability of having two edge points in the interval (x, x + Δx). • The noisy step edge sequence SMi(x) is SMi(x) = SM(x) + N(x), where SM(x) is the step edge sequence free of noise and N(x) is white noise independent of SM(x), with E{N(x)} = 0, E{N²(x)} = σn².
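
One concrete process satisfying these assumptions is a ±A telegraph signal whose edge positions form a Poisson-type point process; a minimal simulation sketch (edge density, noise level and length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A, lam, sigma_n, L = 1.0, 0.02, 0.3, 2000    # amplitude, edge density, noise std, length

# Independent edge occurrences with density lam: stationary, counts on disjoint
# intervals independent, and two edges in a shrinking interval become negligible.
edges = rng.random(L) < lam
SM  = A * np.where(np.cumsum(edges) % 2 == 0, 1.0, -1.0)   # flips between +A and -A
SMi = SM + rng.normal(0.0, sigma_n, L)                     # noisy multi-edge sequence SMi(x)
```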

  18. According to the theory of linear filtering of stationary stochastic processes, the spectral characteristic of the optimal filter f(x) should satisfy: ∫_{-∞}^{+∞} exp(jωT)·[S(ω) - M(ω)·SN(ω)]·dω = 0 for -∞ < T < +∞, with M(ω) = F{f(x)}, the Fourier transform of f(x), S(ω) the power spectrum of SM(x) and SN(ω) that of the noisy input SMi(x). • The necessary and sufficient condition is M(ω) = S(ω)/SN(ω), which gives the optimal filter kernel f(x) = [2λ·A² / (β·B²)]·exp(-β·|x|) with β = (4λ² + 4λ·A²/B²)^(1/2), B² being the noise spectral density.

  19. Based on the multi-edge model, the optimal linear smoothing filter is still an ISEF. • λ: density of edge points, i.e., the average number of edge points in an interval of unit length. • When λ increases, i.e., the average distance between neighboring edge pixels decreases, β increases and the optimal filter becomes sharper, to reduce the influence between neighboring edges. • On the other hand, when the signal/noise ratio of the noisy image decreases, the optimal filter becomes flatter to remove the noise effectively. But in all cases we always have β ≥ 2λ, otherwise the filter kernel f(x) would be too flat to avoid an important influence of neighboring edges.
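
This result can be illustrated numerically. Assuming the Poisson-telegraph instance of the multi-edge model (as in the sketch after slide 17), whose power spectrum is the standard S(ω) = 4λA²/(ω² + 4λ²), and taking SN(ω) = S(ω) + B² with B² the noise spectral density (my reading of the slide's notation), the condition M(ω) = S(ω)/SN(ω) indeed produces an exponential, ISEF-shaped kernel:

```python
import numpy as np

A, B2, lam = 1.0, 0.05, 0.02            # edge amplitude, noise PSD, edge density (arbitrary)
n, dx = 8192, 0.25
w = 2 * np.pi * np.fft.fftfreq(n, d=dx)                 # angular frequency samples

S = 4 * lam * A**2 / (w**2 + 4 * lam**2)                # telegraph-signal power spectrum
M = S / (S + B2)                                        # M(w) = S(w) / SN(w)
f = np.fft.fftshift(np.fft.ifft(M).real) / dx           # kernel samples f(x)
x = (np.arange(n) - n // 2) * dx

# log f(x) is linear in |x|  ->  f(x) ~ exp(-beta*|x|), i.e. an ISEF
mask = np.abs(x) < 10
beta_est = -np.polyfit(np.abs(x[mask]), np.log(f[mask]), 1)[0]
print(beta_est, 2 * lam)                # estimated decay rate beta and the lower bound 2*lam
```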

  20. 4. DERIVATIVE COMPUTATION FOR EDGE DETECTION • f(x) = a·b^|x| with a = (-ln b)/2, 0 < b < 1. • f(x) = fL(x)*fR(x), with fL(x) = 2a·b^x for x ≥ 0 and 0 for x < 0; fR(x) = 2a·b^(-x) for x ≤ 0 and 0 for x > 0. • f'(x) = a·[fR(x) - fL(x)] • f'(x) = (-ln b)·[f(x) - fL(x)] • f''(x) = 4·a²·[f(x) - δ(x)]

  21. Neglecting constant coefficients, for an input image I(x): • Low-pass filtered image: I(x)*f(x) = I(x)*fL(x)*fR(x) • First order derivative: (d/dx)[I(x)*f(x)] ~ I(x)*fR(x) - I(x)*fL(x), and also (d/dx)[I(x)*f(x)] ~ I(x)*f(x) - I(x)*fL(x) • Second order derivative: (d²/dx²)[I(x)*f(x)] ~ I(x)*f(x) - I(x)

  22. ISEF and derivatives [block diagram: the input I(x) passes through fL(x) and fR(x); their combinations give I(x)*f(x), d/dx[I(x)*f(x)] and d²/dx²[I(x)*f(x)]]

  23. 5. RECURSIVE REALIZATION • Recursive realization of the one-sided exponential filters. Consider the recursive filter Y1(i) = a0·X(i) + a1·Y1(i-1), i = 1, ···, N, where Y1 is the output, X the input, and a0, a1 > 0 with a0 + a1 = 1. By use of the Z-transform, Y1(i) = fL(i)*X(i), where fL(i) is the equivalent linear filter: fL(i) = a0·a1^i for i ≥ 0, and 0 for i < 0. • Similarly, fR is realized by the backward recursion Y2(i) = a0·X(i) + a1·Y2(i+1), i = N, ···, 1.
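
A direct Python transcription of these recursions; zero boundary conditions and the value of b (= a1) are arbitrary choices. Combined with the relations of slides 20-21, the same two passes also give the smoothed signal and, up to constant factors, its first two derivatives:

```python
import numpy as np

def causal(x, b):
    """f_L pass:  Y(i) = (1-b)*X(i) + b*Y(i-1)   (a0 = 1-b, a1 = b, zero initial state)."""
    y, acc = np.empty(len(x)), 0.0
    for i, v in enumerate(x):
        acc = (1.0 - b) * v + b * acc
        y[i] = acc
    return y

def anticausal(x, b):
    """f_R pass:  Y(i) = (1-b)*X(i) + b*Y(i+1), run backwards."""
    return causal(x[::-1], b)[::-1]

def isef(x, b):
    """ISEF smoothing  I*f  with  f = f_L * f_R, realized as a cascade of the two passes."""
    return anticausal(causal(x, b), b)

# Example on a noisy step edge (b in (0,1) controls the smoothing strength).
sig = np.r_[np.zeros(60), np.ones(60)] + 0.1 * np.random.randn(120)
sm  = isef(sig, 0.85)                                  # I*f
d1  = anticausal(sig, 0.85) - causal(sig, 0.85)        # ~ d/dx  [I*f]   (slide 21)
d2  = sm - sig                                         # ~ d2/dx2 [I*f]  (slide 21)
```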

  24. 6. GENERALIZATION TO MULTI-DIMENSIONAL CASES • 1D ISEF: f(x) = a·e^(-p|x|) (p > 0). • Using the Euclidean distance? A natural choice of D(x,y) is the Euclidean distance De(x,y) = (x² + y²)^(1/2), which gives fe(x,y) = a·exp[-p·(x² + y²)^(1/2)]. No fast realization has yet been found for it.

  25. Magnitude-distance symmetric exponential filter • Dm(x,y) = |x| + |y|, f(x,y) = a·e^(-p(|x| + |y|)). • A 2-D filter f(i,j) of this form is separable: f(i,j) = f(i)*f(j), where f(i) and f(j) are the 1D ISEF in dimensions i and j respectively. • Directional derivatives: 1st derivative in dimension i: fi(i,j) = f(j)*fi(i); 2nd derivative in dimension i: fii(i,j) = f(j)*fii(i); 1st derivative in dimension j: fj(i,j) = f(i)*fj(j); 2nd derivative in dimension j: fjj(i,j) = f(i)*fjj(j).
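
Because the kernel is separable, the 2D smoothing is just the 1D recursive ISEF applied along rows and then along columns; a compact sketch using SciPy's generic IIR routine in place of the hand-written loops above (b arbitrary):

```python
import numpy as np
from scipy.signal import lfilter

def isef_1d(x, b, axis):
    """1D ISEF along one axis: causal pass f_L, then anti-causal pass f_R (f = f_L * f_R)."""
    num, den = [1.0 - b], [1.0, -b]                   # y[i] = (1-b)*x[i] + b*y[i-1]
    y = lfilter(num, den, x, axis=axis)
    return np.flip(lfilter(num, den, np.flip(y, axis), axis=axis), axis)

def isef_2d(img, b):
    """Separable 2D ISEF:  f(i,j) = f(i)*f(j)  ->  filter dimension j, then dimension i."""
    return isef_1d(isef_1d(np.asarray(img, float), b, axis=1), b, axis=0)

# Example usage on a noisy two-level image.
img = np.zeros((64, 64)); img[:, 32:] = 1.0
img += 0.2 * np.random.randn(64, 64)
smooth = isef_2d(img, 0.8)
```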

  26. Laplacian calculation in 2D cases • ∇²f(i,j) = fii(i,j) + fjj(i,j) • Another possibility for calculating the Laplacian: let I2(x,y) = I(x,y)*f2(x,y) with f2(x,y) = a²·b^(|x| + |y|). We then have I2(x,y) - I(x,y) ~ ∇²[I(x,y)*f2(x,y)].
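
A small self-contained sketch of this second possibility, with f2 built as an explicitly truncated kernel (the radius r and b are arbitrary); the sign of the difference I2 - I is what slide 28 uses as the Binary Laplacian Image:

```python
import numpy as np
from scipy.signal import convolve2d

b, r = 0.8, 25
a = (1.0 - b) / (1.0 + b)                 # discrete-case normalization (slide 10)
u = np.arange(-r, r + 1)
k1 = a * b ** np.abs(u)                   # 1D ISEF samples a*b^|x|
f2 = np.outer(k1, k1)                     # f2(x,y) = a^2 * b^(|x|+|y|)

img = np.zeros((64, 64)); img[:, 32:] = 1.0
img += 0.2 * np.random.randn(64, 64)

I2  = convolve2d(img, f2, mode='same', boundary='symm')   # I2 = I * f2
lap = I2 - img                            # ~ Laplacian of the smoothed image (up to a factor)
bli = lap > 0                             # Binary Laplacian Image
```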

  27. ISEF GENERALIZED TO 2D [block diagram: the input I(x,y) is low-pass filtered by cascading fL(x), fR(x), fL(y), fR(y); differences of the partial outputs give ∂/∂x, ∂/∂y, ∂²/∂x² and ∂²/∂y²]

  28. 7. EDGE DETECTION BY ISEF AND ADAPTIVE GRADIENT • Zero-crossings of the Laplacian give the Binary Laplacian Image (BLI); edge candidates are then verified by gradient thresholding. • Adaptive gradient: a shift-invariant band-limited gradient operator smooths together two regions of different grey values separated by edge pixels. • The question is how to obtain an estimate that best approximates the real gradient magnitude with the help of the BLI; such an estimator is not shift-invariant. • The optimal gradient estimate is the difference between the average grey values of the two regions, which is evidently not shift-invariant.
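
A sketch of how the BLI and the adaptive gradient could be combined, assuming img and bli are computed as in the previous sketch; the window size w, the threshold and the function names are my choices, not the paper's:

```python
import numpy as np

def adaptive_gradient(img, bli, i, j, w=3):
    """Gradient estimate at (i, j): difference of the mean grey values of the two
    regions that the BLI separates inside a (2w+1)x(2w+1) window (not shift-invariant)."""
    win  = img[max(i - w, 0):i + w + 1, max(j - w, 0):j + w + 1]
    mask = bli[max(i - w, 0):i + w + 1, max(j - w, 0):j + w + 1]
    if mask.all() or (~mask).all():       # a single region in the window: no edge evidence
        return 0.0
    return abs(win[mask].mean() - win[~mask].mean())

def detect_edges(img, bli, threshold):
    """Edge candidates = sign changes of the BLI, verified by adaptive-gradient thresholding."""
    zc = np.zeros_like(bli, dtype=bool)
    zc[:-1, :] |= bli[:-1, :] != bli[1:, :]      # vertical neighbours disagree
    zc[:, :-1] |= bli[:, :-1] != bli[:, 1:]      # horizontal neighbours disagree
    return [(i, j) for i, j in zip(*np.nonzero(zc))
            if adaptive_gradient(img, bli, i, j) > threshold]
```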

  29. 8. MULTI-STOCHASTIC PROCESS MODEL AND ANALYSIS

  30. Optimal smoothing filter for M-process edge detection

  31. 9. UNIFICATION OF BAND-LIMITED DERIVATIVE OPERATORS • Gaussian-Laplacian operator (DOG) • Sobel operator • Prewitt operator • Fast filter transform technique • Gabor filtering • Gabor-Sine • Gabor-Cosine • Translating cascading box technique: • A. Rosenfeld's box-difference technique • Canny's filter

  32. 10. EXPERIMENTAL RESULTS & CONCLUSION • The essential difficulty of Gaussian filters for edge detection: the contradiction between insensitivity to noise and precision of edge localization. • Optimal linear operator for edge detection: a linear edge detector is considered as a low-pass smoothing filter to remove noise, followed by a differential element to detect changes. • The optimal operator, the ISEF, is deduced from the well-known mono step edge model, based on a combined signal/noise ratio criterion adapted to edge detection, i.e., maximizing the response to the step edge while minimizing the responses to the noise and to the derivative of the noise. • It shows better performance in insensitivity to noise and in precision of edge localization than the Gaussian filter.

  33. Based on spectral analysis: • The noisy multi-edge model is a stationary stochastic process with additive white noise. • The noisy multi-process model is a stationary stochastic process with additive white noise. • Recursive realization of the ISEF and of its first and second derivatives. • Generalization to 2D cases. • Edges are detected by zero-crossings of the 2nd derivative or of the Laplacian, or by maxima of the gradient, always after filtering by the ISEF. • The edge candidates thus detected are verified by gradient thresholding, with or without hysteresis. The gradient can be calculated from the ISEF, or from the adaptive gradient if non-shift-invariant operators are considered.

  34. Tested on computer-generated and real images and compared with other optimal operators such as the Gaussian filter, Canny's filter and its simplified version by Deriche. • The experimental results show significantly better performance for the ISEF filters, which confirms the theoretical optimization and performance analysis.

  35. Isotropically Symmetrical?

  36. On Multi-Edge Detection, CVGIP: Graphical Models and Image Processing, Vol. 58, No. 2, pp. 101-114, March 1996. • Multi-Edge Detection by Isotropical 2-D ISEF Cascade, Pattern Recognition, Vol. 28, No. 12, pp. 1871-1885, 1995. • Towards Unification of Band-Limited Differential Operators for Edge Detection, Signal Processing, Vol. 31, No. 2, pp. 103-119, 1993. • An Optimal Linear Operator for Step Edge Detection, CVGIP: Graphical Models and Image Processing, Vol. 54, No. 2, pp. 112-133, March 1992. • An Optimal Linear Operator for Edge Detection, Proc. IEEE CVPR'86, pp. 109-114, Miami, June 1986. • Un Nouvel Algorithme de Détection de Contours, Proc. 5ème AFCET RFIA, pp. 201-213, Grenoble, Nov. 25-27, 1985. • Image Smoothing and Edge Detection by Hermite Integration, Pattern Recognition, Vol. 28, No. 8, pp. 1159-1166, 1995.
