A highly accurate and computationally efficient approach for unconstrained iris segmentation

A highly accurate and computationally efficient approach for unconstrained iris segmentation. Yu Chen, Malek Adjouadi *, Changan Han, Jin Wang, Armando Barreto, Naphtali Rishe, Jean Andrian,College of Engineering and Computing, Florida International University, Miami, FL 33174, USA



Presentation Transcript


  1. A highly accurate and computationally efficient approach for unconstrained iris segmentation. Yu Chen, Malek Adjouadi *, Changan Han, Jin Wang, Armando Barreto, Naphtali Rishe, Jean Andrian, College of Engineering and Computing, Florida International University, Miami, FL 33174, USA. Received 15 December 2008; received in revised form 7 April 2009; accepted 27 April 2009. Presenter: 劉治元

  2. Outline • Abstract • Introduction • Approximate localization of the eye area • Iris outer boundary detection with a fast circular Hough transform • Boundary detection of the upper and lower eyelids • Circle correction and non-circular boundary detection • Pupil, eyelash detection and results reliability verification • Result and discussion • Conclusion

  3. Abstract • Biometric research • Iris recognition • Iris acquisition method • Iris recognition system • UBIRIS.v2 database (the second version of the UBIRIS noisy iris database)

  4. Introduction

  5. Approximate localization of the eye area • (1) Detecting the sclera area. • (2) Determining a target area for the eye.

  6. Detecting the sclera area • HSI color model • Hue(H) • Saturation(S) • Intensity(I)

  7. Detecting the sclera area • Through our experimental analysis, the saturation values of sclera areas fall within the range from 0 to 0.21. • The threshold is obtained by locating the biggest group derivative within a histogram of the saturation values (between 0 and 0.21) of the image.
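The saturation-thresholding step above can be sketched as follows. The 0–0.21 range and the HSI saturation channel come from the slides; the "biggest group derivative" rule is interpreted here as the steepest drop between adjacent histogram bins, which is an assumption, so treat this as an illustrative sketch rather than the authors' exact code:

```python
import numpy as np

def sclera_mask(rgb):
    """Flag candidate sclera pixels by thresholding HSI saturation.

    The per-image threshold is taken at the steepest drop in the
    histogram of saturation values inside [0, 0.21] (an interpretation
    of the slides' "biggest group derivative").
    """
    rgb = rgb.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # HSI saturation: S = 1 - 3 * min(R, G, B) / (R + G + B)
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + 1e-12)
    hist, edges = np.histogram(s[s <= 0.21], bins=21, range=(0.0, 0.21))
    # steepest drop between adjacent bins -> threshold at that bin edge
    t = edges[np.argmin(np.diff(hist)) + 1]
    return s <= t
```

Sclera regions are nearly achromatic (low saturation), which is why a small fixed ceiling of 0.21 plus a per-image refinement is enough to separate them from skin and iris.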

  8. Detecting the sclera area

  9. Determining a target area for the eye • The thresholded image, as in Fig. 3(b), is converted to a gray-scale image, and every pixel with a gray-level intensity greater than 0 is replaced by the average intensity of the 17 * 17 block centered on that pixel.
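The 17 × 17 block-averaging step can be sketched with an integral image so each window sum costs O(1); this is a minimal pure-NumPy sketch (zero padding outside the frame is an assumption, the slides do not specify border handling):

```python
import numpy as np

def block_average(gray, size=17):
    """Replace every non-zero pixel by the mean of the size x size window
    centered on it; zero pixels are left at zero. Pixels outside the
    image are treated as zero (assumed border handling)."""
    r = size // 2
    padded = np.pad(gray.astype(np.float64), r)
    # integral image: ii[i, j] = sum of padded[:i, :j]
    ii = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = gray.shape
    win = (ii[size:size + h, size:size + w] - ii[:h, size:size + w]
           - ii[size:size + h, :w] + ii[:h, :w])
    return np.where(gray > 0, win / size**2, 0.0)
```

The integral image makes the cost independent of the window size, which matters for a 17 × 17 kernel applied to every candidate pixel.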

  10. Determining a target area for the eye • The resulting binary maps, as can be seen, can be classified into two categories: (1) double sclera areas, as shown in Fig. 3(a); (2) single sclera areas, as shown in Fig. 3(c).

  11. Determining a target area for the eye

  12. Iris outer boundary detection with a fast circular Hough transform • (1)Detecting the outer boundary • (2)A fast circular Hough transform

  13. Detecting the outer boundary • To generate the edge map, instead of the traditional four-direction Sobel edge detection, we only conducted the edge detection horizontally (left to right and right to left), as can be seen in Fig. 4(c) compared with Fig. 4(b).

  14. Detecting the outer boundary • Some precautions are considered. • First, the upper and lower limits of the radius can be set with respect to the size of the rectangle. • Second, neither the center of the resulting circle nor its boundary can possibly be located on the already-defined sclera areas.

  15. A fast circular Hough transform • Although the circular Hough transform is a powerful algorithm, it also carries a heavy computational burden, owing to the three nested iterations over the accumulator. • With the circular Hough transform, each edge point (x, y) in the image space votes for (a, b, r) in the parameter space for each possible circle passing through it, where a, b are the coordinates of the circle center and r is the radius of the circle.
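The plain voting scheme (before the speed-ups the slides go on to describe) can be sketched as follows; the function name, the number of angular samples, and the candidate-radius list are all illustrative, not from the paper:

```python
import numpy as np

def circular_hough(edge_points, shape, radii):
    """Naive circular Hough voting: every edge point (x, y) votes for the
    centers (a, b) of all circles of radius r passing through it, i.e.
    all (a, b) with (x - a)^2 + (y - b)^2 = r^2."""
    h, w = shape
    acc = np.zeros((h, w, len(radii)), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
    for x, y in edge_points:
        for k, r in enumerate(radii):
            a = np.round(x - r * np.cos(thetas)).astype(int)
            b = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (a >= 0) & (a < w) & (b >= 0) & (b < h)
            # np.add.at accumulates correctly even for repeated (b, a) hits
            np.add.at(acc, (b[ok], a[ok], k), 1)
    return acc
```

The accumulator maximum over (b, a, k) then gives the best circle; the three nested loops over a, b, r are exactly the burden the modified transform attacks.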

  16. A fast circular Hough transform • The computational complexity of the circular Hough transform is denoted Oa, where O1 is the computational complexity of calculating the votes for a circle with a determined center location and radius.

  17. A fast circular Hough transform • Let Ca, Cb, Cr be the step-lengths for the parameters a, b, r, respectively; the resulting computational complexity is denoted Ob. The step-lengths are set such that Ca = Cb = Cr.
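The equations for Oa and Ob were images on the slides and did not survive extraction; from the surrounding definitions they presumably take the following form, with Na, Nb, Nr candidate values for a, b, r. This is a hedged reconstruction, not the paper's exact notation:

```latex
% Reconstructed (assumed) form of the two complexities:
% exhaustive accumulation over the full parameter grid
O_a = O_1 \cdot N_a \, N_b \, N_r ,
% sampling the parameters with step-lengths C_a, C_b, C_r
O_b = O_1 \cdot \frac{N_a}{C_a} \cdot \frac{N_b}{C_b} \cdot \frac{N_r}{C_r}
    = O_1 \cdot \frac{N_a \, N_b \, N_r}{C^3}
\qquad \text{when } C_a = C_b = C_r = C .
```

Either way, the step-lengths divide the work by a factor of roughly C per parameter, which is what makes the subsequent ring-voting correction necessary.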

  18. A fast circular Hough transform • Because of the three step-lengths, a large number of votes would not be counted. • To overcome this problem, all points located on the circular ring surrounding each candidate circle are counted as well.

  19. A fast circular Hough transform

  20. A fast circular Hough transform • Dynamic programming method: initially, the distance between every pixel on the image and the image center point is calculated; all of those distances are stored in a table, where each distance refers to a list of relative locations lying at that distance from the center of the image frame. • There is then no need to calculate distances while performing the Hough transform for each image, so the computational burden is alleviated significantly.
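The lookup table described above can be sketched as follows; the dictionary-of-offset-lists layout is an illustrative choice, the paper's exact data structure may differ:

```python
from collections import defaultdict
import numpy as np

def build_distance_table(h, w):
    """Precompute, once per image size, a map from rounded integer
    distance to the list of (dy, dx) offsets at that distance from the
    frame center. During Hough voting, the pixels on a circle of radius
    r are then fetched by a single table lookup instead of recomputing
    distances for every image."""
    cy, cx = h // 2, w // 2
    table = defaultdict(list)
    for y in range(h):
        for x in range(w):
            d = int(round(np.hypot(y - cy, x - cx)))
            table[d].append((y - cy, x - cx))
    return table
```

Because the table depends only on the frame size, it is built once and reused for every image in the database, which is where the per-image savings come from.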

  21. A fast circular Hough transform • To demonstrate the performance of the proposed method

  22. A fast circular Hough transform • The average processing time using the circular Hough transform with a step-length of 1 is 6.77 s per image, and the error rate of that approach is estimated at 0.0200215. • With the proposed modified circular Hough transform, the average execution time decreases to 0.83 s while the E1 error rate is 0.0200823.

  23. Boundary detection of the upper and lower eyelids • The linear Hough transform can be applied to the edge map of the eye image to detect the eyelids. • Because the slopes of the upper and lower eyelids are not steep in most cases, the proposed approach starts by applying edge detection in only the vertical direction.
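The vertical-only edge detection can be sketched with the single vertical Sobel kernel; the threshold value here is illustrative, not from the paper:

```python
import numpy as np

def vertical_edge_map(gray, thresh=60.0):
    """Sobel response in the vertical direction only: near-horizontal
    eyelid boundaries produce a strong vertical intensity gradient,
    while vertical structures (e.g. the iris sides) are suppressed."""
    ky = np.array([[-1, -2, -1],
                   [ 0,  0,  0],
                   [ 1,  2,  1]], dtype=np.float64)
    g = gray.astype(np.float64)
    h, w = g.shape
    out = np.zeros_like(g)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = abs((ky * g[y - 1:y + 2, x - 1:x + 2]).sum())
    return out > thresh
```

Restricting the gradient direction is what gives the resulting edge map its emphasis on the eyelid contours before the linear Hough transform is applied.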

  24. Boundary detection of the upper and lower eyelids • The generated edge map will have an emphasis on the desired eyelids edge points. Fig. 7(a) is one such example.

  25. Boundary detection of the upper and lower eyelids • To distinguish the points which are edges between the iris and the eyelids, a patch of area is selected to calculate the average gray intensity IA of the iris, as shown in Fig. 8 (the square patch used to obtain the average gray intensity).

  26. Boundary detection of the upper and lower eyelids

  27. Circle correction and non-circular boundary detection • Human iris boundaries are usually non-circular. • The circular Hough transform can therefore generate inaccurate results.

  28. Circle correction and non-circular boundary detection • Shown in Fig. 10(b) is a square grid with an adaptive size inside the outer iris boundary. • The center of the grid cell which yields the lowest average gray intensity is selected as the correct outer iris boundary center.

  29. Circle correction and non-circular boundary detection • Based on the experimental study, the desired iris boundary is usually inside the Hough circle. • Thus, the target area whose center is at (xt, yt), as shown in Fig. 11, is expected to be the region between the real iris center (xr, yr) and the arc on the opposite side of the original circle. Fig. 11. Relations between the real iris center, the original circle center and the center of the target rectangle.

  30. Circle correction and non-circular boundary detection • In reference to Fig. 11, the center of the target rectangle is (xt, yt), and the original circle center is (xc, yc); here we have: Fig. 11. Relations between the real iris center, the original circle center and the center of the target rectangle.

  31. Circle correction and non-circular boundary detection • The final result of the detected boundary consists of multiple arcs and lines.

  32. Pupil, eyelash detection and results reliability verification • Under visible wavelength, the intensity contrast between iris and pupil can be very low; thus, pupil removal is left to this step. • With only the iris and pupil remaining, the contrast-enhancement method yields better performance.

  33. Pupil, eyelash detection and results reliability verification • We used an empirical intensity threshold of 150 to detect the reflections, and expanded every reflection point by a 3 * 3 mask to ensure its total removal. • Then, histogram equalization was applied to get the high-contrast image, as shown in image (b).
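The reflection-removal step (threshold at 150, 3 × 3 expansion) can be sketched as follows; filling flagged pixels with the image mean is an assumption, since the slides do not say how the removed pixels are replaced before histogram equalization:

```python
import numpy as np

def remove_reflections(gray, thresh=150):
    """Flag specular reflections (pixels above the empirical threshold of
    150), grow the mask by a 3x3 neighborhood to ensure total removal,
    and fill the flagged pixels with the mean of the remaining pixels
    (the fill rule is an assumption, not from the slides)."""
    mask = gray > thresh
    # 3x3 dilation via shifted ORs: a pure-NumPy stand-in for a
    # morphological dilation with a 3x3 structuring element
    d = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            d |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    out = gray.astype(np.float64)
    if (~d).any():
        out[d] = out[~d].mean()
    return out, d
```

With the bright specular spots neutralized, the subsequent histogram equalization spreads contrast between iris and pupil instead of being dominated by the reflections.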

  34. Pupil, eyelash detection and results reliability verification • Sobel edge detection was used to get the edge map (c).

  35. Pupil, eyelash detection and results reliability verification • The circular Hough transform determines the pupil boundary. • Its center can be taken to be the outer iris center. • The radius of the pupil boundary is limited to the range from 3Router/20 as a lower limit to 11Router/20 as the upper limit.

  36. Pupil, eyelash detection and results reliability verification • To ensure that falsely segmented results do not pass to the next step of iris recognition, a result is rejected when the segmented iris is too big (Router > 120), too small (Router < 20), or too bright (IA > 90), or when the average intensity of the pupil is brighter than the average iris intensity.
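The sanity checks above reduce to a few comparisons; the thresholds (120, 20, 90) and the pupil-darker-than-iris rule are from the slides, while the function name and argument layout are illustrative:

```python
def segmentation_plausible(r_outer, iris_mean, pupil_mean):
    """Reject segmentations that fail the slides' sanity checks:
    iris too big (R > 120), too small (R < 20), too bright (mean
    intensity > 90), or a pupil that is not darker than the iris."""
    if r_outer > 120 or r_outer < 20:
        return False
    if iris_mean > 90:
        return False
    if pupil_mean >= iris_mean:  # the pupil should be darker than the iris
        return False
    return True
```

Results failing any check are discarded before recognition, so a faulty segmentation degrades throughput rather than matching accuracy.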

  37. Result and discussion Fig. 14. Examples yielding good results.

  38. Result and discussion Fig. 15. Examples yielding faulty or undesired results.

  39. Result and discussion Fig. 16. Examples of detecting the outer iris boundary of rotated iris images.

  40. Conclusion • The accuracy of the proposed approach was evaluated as part of the NICE.I contest, ranking the method with the sixth lowest error rate among 97 participants worldwide. • The proposed approach is near real-time, requiring only 0.83 s to perform all the required steps for a final iris segmentation.

  41. Thanks for your attention
