
Dynamic Scalable Distributed Face Recognition System Security Framework


Presentation Transcript


  1. Dynamic Scalable Distributed Face Recognition System Security Framework by Konrad Rzeszutek B.S. University of New Orleans, 1999

  2. Overview • Purpose • History of face recognition • Problems • Solution • Apollo • Components of Apollo • Face recognition technology used • Motion detection • Future work

  3. Applications of face recognition • Surveillance Systems • Biomedical Systems (eye-replacement) • Military (anti-terrorist groups) • Security (logon authorization) • Autonomous vehicle navigation • … many more

  4. History • Sir Francis Galton (1888) – proposed an automatic method for classifying French prisoners, which he called a mechanical selector. • Automated face recognition work started in the late 1960s. • In 1980 research picked up dramatically. • Two branches of face recognition: • Geometric features • Template matching

  5. Geometric - profile • Profile features. • 8-100 control points • Six control points using B-spline • U.S. INS uses this one extensively.

  6. Geometric - frontal • Frontal features • 8-16 features • Various distances: right and left eye to nose, nose to chin, eye to eye, etc. • Nose width, chin radii, eyebrow thickness, etc.

  7. Template matching • Face images are represented as vectors in an array (each image is indexed by k). • Computations are carried out on the model arrays, resulting in hash values. • The hash value of the image to be matched is compared against the template images’ hash values.

  8. Template matching, part 2 • The distance from the training images’ hash values determines the match (a minimal sketch follows). • The Euclidean distance is mostly used.
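A minimal sketch of the matching step in slides 7–8, assuming the per-image “hash values” are simply the flattened image vectors and using Python with NumPy; the function and parameter names (match_template, probe, templates, threshold) are illustrative and not part of the original system:

    import numpy as np

    def match_template(probe, templates, labels, threshold=None):
        """Return the label of the template closest to the probe image.

        probe     -- 1-D vector (a flattened face image)
        templates -- 2-D array, one flattened template image per row
        labels    -- identity label for each template row
        threshold -- optional maximum Euclidean distance for a valid match
        """
        # Euclidean distance between the probe vector and every template vector.
        distances = np.linalg.norm(templates - probe, axis=1)
        best = int(np.argmin(distances))
        if threshold is not None and distances[best] > threshold:
            return None, distances[best]   # no template is close enough
        return labels[best], distances[best]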

  9. Principal Component Analysis • Turk and Pentland – Eigenface. • Simplest approach – uses the whole face image as a template. • Variations of this use infrared images.

  10. Template matching .. more • Isodensity line maps (the brightness of the image is viewed as the height of a mountain; isodensity lines correspond to contour lines of equal altitude). • Neural network – eye and mouth regions are fed into a multi-layer perceptron that carries out the classification. • Others are mostly various combinations of these two branches of face-recognition technology.

  11. Problems • Work has been done on a very selective set of face images, mostly: • In an upright position • Lighting and background controlled • Either in frontal or profile view • Without occlusions or facial hair • Most test cases are white males

  12. Solution • A distributed system capable of handling a large load of images, analyzing them in near-real time, providing support for future enhancements, and scaling with the load. • It separates the functionality of a security system into three modules: recognition, notification, and replay.

  13. Apollo

  14. Components • Ares – the thin client providing the camera feed. • Hermes – the police officer directing traffic. • Demeter – storage for later replay of the camera feed. • Nemesis – the face recognition engine. • Mors – the notification event server.

  15. Ares • Passes the real-time camera feed through a motion-detection engine. • Transmits the feed to Nemesis for face recognition and to Demeter for storage. • Uses Jini/RMI to locate the required components.

  16. Hermes • Collects information about the load of each component. • Is queried whenever a system in the pool requires another component. • Scalable – many Hermes instances can coexist and provide the load information (an illustrative sketch follows).
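The thesis implements this lookup over Jini/RMI; the following is only an in-process Python sketch of the idea behind Hermes (components publish their load, clients ask for the least-loaded instance of a service), with hypothetical class and method names:

    from collections import defaultdict

    class LoadRegistry:
        """Hermes-style registry: components publish their load, clients
        ask for the least-loaded instance of a given service type."""

        def __init__(self):
            # service type (e.g. "nemesis", "demeter") -> {instance address: load}
            self._loads = defaultdict(dict)

        def report(self, service, address, load):
            """Called periodically by each component to publish its current load."""
            self._loads[service][address] = load

        def least_loaded(self, service):
            """Return the address of the least-loaded registered instance, or None."""
            instances = self._loads.get(service)
            if not instances:
                return None
            return min(instances, key=instances.get)

    # Example: an Ares client asking where to send its camera feed.
    registry = LoadRegistry()
    registry.report("nemesis", "nemesis-1:4000", load=0.7)
    registry.report("nemesis", "nemesis-2:4000", load=0.2)
    print(registry.least_loaded("nemesis"))   # -> nemesis-2:4000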

  17. Demeter • Stores the camera feed for later replay and for archiving onto write-once (WORM) media.

  18. Nemesis • The face recognition module. Uses the eigenfaces technique to match images in near-real time. • Can be extended to use more algorithms and check images with multiple techniques. • If a match is found, an event is sent to Mors.

  19. Mors • Receives events that signal a possible face match. • A centralized pool where humans can visually verify the results and carry out the proper procedures.

  20. Face recognition - Nemesis • Eigenfaces – the algorithm finds the principal components of the face set, i.e. the eigenvectors of the covariance matrix. • “Each eigenvalue can be thought of as an amount which, when subtracted from each diagonal element of the matrix, makes the matrix singular. … Eigenvectors are characteristic vectors of the matrix” (from “Digital Image Processing” by Castleman).

  21. Eigenvalues and Eigenvectors • We are looking for eigenvectors u_k and eigenvalues λ_k defined by: C u_k = λ_k u_k, where C is the covariance matrix of the normalized face vectors A = [Φ_1 Φ_2 … Φ_M].

  22. Weights • After computing the eigenvalues and eigenvectors, we use the M’ most significant eigenfaces u_1 … u_M’ (each eigenface is a linear combination of the normalized face images) to form a face subspace. • Projecting a normalized face Φ onto the face subspace gives the weights ω_k = u_kᵀ Φ, k = 1, 2, …, M’, collected in the weight vector Ωᵀ = [ω_1 ω_2 … ω_M’].

  23. Matching • We use the calculated weight vectors to determine whether the image is recognized (see the sketch below). • Usually the Euclidean distance is used.
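A compact NumPy sketch of the eigenface pipeline in slides 20–23: the eigenvectors of the covariance matrix are obtained through the smaller AᵀA matrix (the standard Turk–Pentland shortcut), faces are projected to weight vectors Ω, and matching uses the Euclidean distance. This is a generic illustration of the technique, not the actual Nemesis code, and all names are illustrative:

    import numpy as np

    def train_eigenfaces(faces, m_prime):
        """faces: (M, N) array with one flattened training face per row.
        Returns the mean face, the M' most significant eigenfaces and the
        weight vectors of the training faces."""
        mean = faces.mean(axis=0)
        A = (faces - mean).T                         # N x M matrix of normalized faces
        # Eigenvectors of the small M x M matrix A^T A map back to
        # eigenvectors of the covariance matrix C = A A^T.
        eigvals, v = np.linalg.eigh(A.T @ A)
        order = np.argsort(eigvals)[::-1][:m_prime]  # keep the M' largest eigenvalues
        eigenfaces = A @ v[:, order]                 # each column is one eigenface u_k
        eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
        weights = (faces - mean) @ eigenfaces        # weight vector Omega per training face
        return mean, eigenfaces, weights

    def match(probe, mean, eigenfaces, weights, labels, threshold):
        """Project a probe face into the face subspace and return the closest
        training identity by Euclidean distance (None if above threshold)."""
        omega = (probe - mean) @ eigenfaces
        distances = np.linalg.norm(weights - omega, axis=1)
        best = int(np.argmin(distances))
        if distances[best] > threshold:
            return None, distances[best]
        return labels[best], distances[best]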

  24. Motion Detection • Motion detection is used on the client side – Ares. • It saves bandwidth by sending only frames that have content. • The algorithm uses two threshold functions: • The first accommodates possible artifacts introduced by the camera. • The second determines whether there is motion, based on the count of “clusters” of pixels that changed (a sketch follows).
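A sketch of the two-threshold scheme on consecutive grayscale frames (NumPy arrays); pixel_threshold stands in for the first (camera-artifact) threshold and cluster_threshold for the second, with SciPy's connected-component labelling used as one plausible way to count the “clusters” of changed pixels (the original engine may count them differently):

    import numpy as np
    from scipy import ndimage

    def has_motion(prev_frame, curr_frame, pixel_threshold=25, cluster_threshold=3):
        """Return True if enough clusters of pixels changed between two frames."""
        # Absolute per-pixel difference between consecutive frames.
        diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
        # First threshold: ignore small differences caused by camera artifacts.
        changed = diff > pixel_threshold
        # Group the remaining changed pixels into connected clusters.
        _, num_clusters = ndimage.label(changed)
        # Second threshold: declare motion only if enough clusters changed.
        return num_clusters >= cluster_threshold

In this scheme Ares would transmit a frame to Nemesis and Demeter only when a test like this returns True.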

  25. Motion Detection, .. more • In the accompanying chart, red is the “cluster” count.

  26. Future work • Use more face recognition technologies so that they complement each other. • Expand the framework to include other recognition technologies: iris, speech, etc. • Improve the motion detection engine. • Face operations – automatically removing the background. • Generate from one face a multitude of other faces with different alterations – beard (or lack of it), long hair, etc. – to expand the possible matches.
