
Identification System Errors


Presentation Transcript


  1. Guide to Biometrics – Chapter 6; Handbook of Fingerprint Recognition – 1.4. Presented by: Chris Miles. Identification System Errors

  2. Extending to Identification • How do we extend our numerical models for verification errors to identification? • FNMR – False Non-Match Rate • FMR – False Match Rate • What issues are presented?

  3. Identification System • Maintains a database of enrolled users • Tries to match the input against the database • Positive identification vs. negative identification • Output • List of best matches – ideally just the true identity • Best match • Yes/no – is the person in the list?

  4. Example • Casino using face recognition to identify people on the Nevada Gaming Commission's blacklist • http://gaming.nv.gov/loep_main.htm • Basis for other government biometric systems • N = the number of people on the list • M = the number of people through the casino daily • Calculate FNMR_N and FMR_N

  5. Matching System • A parallel version of your favorite verification algorithm • Attempts to match the input against every enrolled template in the database
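
A minimal sketch of that idea, identification as "verification against every enrolled template". Here `verify_score`, the template format, and the threshold are hypothetical stand-ins for whatever matcher a real deployment would use:

```python
# Sketch: identification as parallel verification against every enrolled template.
# verify_score() is a toy placeholder, not a real biometric matcher.
from concurrent.futures import ThreadPoolExecutor

def verify_score(probe, template):
    """Return a similarity score in [0, 1]; placeholder implementation."""
    return sum(a == b for a, b in zip(probe, template)) / len(template)

def identify(probe, database, threshold=0.8):
    """Score the probe against every enrolled template, in parallel."""
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda t: verify_score(probe, t), database.values()))
    matches = [(uid, s) for uid, s in zip(database, scores) if s >= threshold]
    return sorted(matches, key=lambda pair: pair[1], reverse=True)

# Example: templates are feature vectors keyed by user id.
db = {"alice": [1, 0, 1, 1], "bob": [0, 0, 1, 0]}
print(identify([1, 0, 1, 0], db, threshold=0.5))
```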

  6. FNMR_N • The chance of being falsely rejected is essentially the same as in verification • It is the chance of not matching your own template, less the chance of (falsely) matching someone else's template instead • Assuming FMR ≈ 0, FNMR_N = FNMR

  7. FMR_N • FMR_N = the chance of falsely matching at least one of the N enrolled templates • FMR_N = 1 – (1 – FMR)^N • Expected number of daily false matches = M · FMR_N = M · (1 – (1 – FMR)^N)

  8. Accuracy Scales Worse than Computation • The chance of being falsely accepted grows rapidly with the number of templates (for small FMR it is roughly N · FMR) • Suppose the algorithm is 99.99% accurate, i.e. FMR = 0.0001 • 100 people in the database • Each has 8 templates, so N = 800 • 10,000 people through the casino a day • FMR_N = 1 – 0.9999^800 ≈ 0.077 • FMR_N × 10,000 ≈ 769 false accepts a day
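
A quick check of the slide's arithmetic, using the numbers as given (FMR = 0.0001, 100 enrollees with 8 templates each, 10,000 visitors per day); the figure comes out near 769 false accepts per day:

```python
# Reproduce the casino example: FMR_N = 1 - (1 - FMR)^N, scaled by daily traffic.
FMR = 1 - 0.9999          # "99.99% accurate" -> FMR = 0.0001
N = 100 * 8               # 100 enrolled people, 8 templates each
M = 10_000                # people through the casino per day

FMR_N = 1 - (1 - FMR) ** N
print(f"FMR_N = {FMR_N:.4f}")                       # ~0.0769
print(f"daily false accepts ~= {M * FMR_N:.0f}")    # ~769
```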

  9. Winnowing • Exact identification gets harder as the database grows, so generally we compromise and just return a list of probable matches • Input -> System -> List of candidate matches • A second system, biometric or a human supervisor, then tries to identify the user from this much smaller list / database of candidates • Candidates -> Second system -> Identity • "Passing the buck," so to speak

  10. Who's on the List? • Threshold • Apply a threshold to the similarity metric • similarity > threshold -> on the list • Rank • Take the K most similar templates • Hybrid • Take the K most similar templates, so long as their similarity > threshold
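
The three list-construction rules above, as a minimal sketch; the score dictionary and the example numbers are made up for illustration:

```python
# Sketch of the three candidate-list rules: threshold, rank, and hybrid.
def threshold_list(scores, threshold):
    return [u for u, s in scores.items() if s > threshold]

def rank_list(scores, k):
    return sorted(scores, key=scores.get, reverse=True)[:k]

def hybrid_list(scores, k, threshold):
    # K most similar, but only those that also clear the threshold.
    return [u for u in rank_list(scores, k) if scores[u] > threshold]

scores = {"alice": 0.91, "bob": 0.55, "carol": 0.72}
print(threshold_list(scores, 0.7))   # ['alice', 'carol']
print(rank_list(scores, 2))          # ['alice', 'carol']
print(hybrid_list(scores, 2, 0.8))   # ['alice']
```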

  11. Weaknesses • Threshold • If several users almost match the input, but none clears the threshold, a threshold-based system returns nothing • Rank • An impostor still produces a list of (bad) matches • Solution: a generic impostor model – an additional template representing the non-match case; if the input matches it best, return nothing • Hybrid • The strengths of each technique cover the weaknesses of the other

  12. Hybridization Ideas • Adjust K based upon how many are above the threshold • Adjust the threshold based upon the distribution of similarities
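
One possible reading of these two ideas, sketched below; the mean-plus-one-standard-deviation rule is an assumption made for illustration, not something the slides prescribe:

```python
# Sketch of an adaptive hybrid: cap the list at K candidates above a data-driven
# threshold derived from the score distribution (mean + 1 stdev is an assumption).
import statistics

def adaptive_candidates(scores, k_max=5):
    values = list(scores.values())
    threshold = statistics.mean(values) + statistics.stdev(values)
    above = sorted((u for u in scores if scores[u] > threshold),
                   key=scores.get, reverse=True)
    return above[:k_max]   # K effectively shrinks when few scores stand out

scores = {"alice": 0.95, "bob": 0.40, "carol": 0.42, "dave": 0.38, "erin": 0.41}
print(adaptive_candidates(scores))   # ['alice']
```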

  13. Multiple Templates • The example had multiple templates per individual • An input might match multiple templates from one person • Only one of them may need to appear in the list • Domain dependent

  14. Characterizing Identification • FNMR and FMR roughly correspond to Reliability and Selectivity • Reliability • 1 – FRR • How often we correctly identify someone who is in the database • Selectivity • K – Rel (rank-based list of size K), or (m – 1) · FAR (threshold-based, m enrollees) • The expected number of incorrect matches returned
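
The reliability and selectivity bookkeeping from this slide, written out as plain functions; the example numbers are arbitrary:

```python
# Reliability and the two selectivity estimates from the slide.
def reliability(frr):
    return 1 - frr

def selectivity_rank(k, rel):
    # Rank-based list of fixed size K: on average K - Rel entries are wrong.
    return k - rel

def selectivity_threshold(m, far):
    # Threshold-based: each of the other m-1 enrollees falsely matches w.p. FAR.
    return (m - 1) * far

rel = reliability(frr=0.02)
print(rel)                                        # 0.98
print(selectivity_rank(k=5, rel=rel))             # 4.02
print(selectivity_threshold(m=800, far=0.0001))   # ~0.08
```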

  15. RSC, ROC, RPC Curves • These curves show the compromises involved • ROC curves trade FAR off against FRR • Should the vending machine take my ripped dollar and someone else's forgery? • RPC (recall–precision) curves • If Google returned more results it would be less likely to miss relevant ones • It would include more irrelevant results, however • RSC (reliability–selectivity) curves capture the analogous trade-off for identification
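
A small sketch of how such a trade-off curve is traced in practice: sweep the decision threshold over genuine and impostor score samples and record (FAR, FRR) at each setting. The score lists below are made up for illustration:

```python
# Sketch: trace an ROC-style FAR/FRR trade-off by sweeping the decision threshold.
genuine  = [0.91, 0.85, 0.78, 0.66, 0.95, 0.88]   # made-up genuine-match scores
impostor = [0.30, 0.55, 0.42, 0.61, 0.25, 0.48]   # made-up impostor scores

for threshold in [0.4, 0.5, 0.6, 0.7, 0.8]:
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    print(f"threshold={threshold:.1f}  FAR={far:.2f}  FRR={frr:.2f}")
```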

  16. Three Systems • Threshold-based – the previous example • Rank-based identification • Rank-order statistics • Rank Probability Mass Function

  17. Threshold System Errors • Similar to the previous example; returns the list of individuals above the threshold • Errors • FAR_m = m · FAR · (1 – FAR)^(m–1) – falsely matching exactly one individual • Ambiguous answer -> the list has length > 1 • P(Ambiguous) = 1 – (1 – FAR)^m – m · FAR · (1 – FAR)^(m–1), i.e. one minus the chance of zero or exactly one match • FRR_m = 1 – (1 – FRR) · (1 – FAR)^(m–1) ≈ FRR
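
These three quantities, computed directly from the binomial reasoning above; the example values of m, FAR, and FRR are arbitrary:

```python
# Threshold-identification error quantities from this slide.
# far_m and p_ambiguous assume an impostor probe scored against m independent
# enrolled templates; frr_m assumes a genuine probe.
def far_m(far, m):
    # Exactly one (false) match on the candidate list.
    return m * far * (1 - far) ** (m - 1)

def p_ambiguous(far, m):
    # More than one candidate on the list: 1 - P(0 matches) - P(1 match).
    return 1 - (1 - far) ** m - far_m(far, m)

def frr_m(far, frr, m):
    # The list is exactly the true identity only if the own template matches
    # and none of the other m-1 templates falsely match.
    return 1 - (1 - frr) * (1 - far) ** (m - 1)

m, far, frr = 100, 1e-4, 0.02
print(far_m(far, m), p_ambiguous(far, m), frr_m(far, frr, m))
```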

  18. Rank-Based System Errors • Only works in quite restricted closed-world scenarios (no impostors) • Only one kind of error – misidentification, i.e. the correct identity is ranked below another • Analyze the probabilistic distribution of ranks – the Rank Probability Mass Function
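
A sketch of how a rank probability mass function might be estimated empirically: run many closed-world identification trials, record the rank of the true identity each time, and normalize the histogram. The Gaussian score model here is purely an assumption for illustration:

```python
# Monte Carlo sketch: estimate the rank PMF of the true identity in a
# closed-world identification system (Gaussian score model is an assumption).
import random
from collections import Counter

def rank_pmf(trials=10_000, m=50, mu_gen=0.8, mu_imp=0.5, sigma=0.1):
    ranks = Counter()
    for _ in range(trials):
        genuine = random.gauss(mu_gen, sigma)
        impostors = [random.gauss(mu_imp, sigma) for _ in range(m - 1)]
        rank = 1 + sum(s > genuine for s in impostors)   # rank 1 = correct on top
        ranks[rank] += 1
    return {r: ranks[r] / trials for r in sorted(ranks)}

pmf = rank_pmf()
print({r: round(p, 3) for r, p in list(pmf.items())[:5]})
```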
