
Adversarially Learned One-Class Classifier for Novelty Detection




Presentation Transcript


  1. Adversarially Learned One-Class Classifier for Novelty Detection. Computer Vision and Pattern Recognition (CVPR) 2018. eadeli@cs.stanford.edu

  2. Problem Statement (One-Class Classification) • Applications: • Novelty Detection • Outlier Detection • Anomaly Detection • Training vs. Testing

  3. Challenges • No samples to train on: in reality, the novelty class is absent during training, poorly sampled, or not well defined. • Due to the unavailability of data from the novelty class, training an end-to-end deep network is a cumbersome task. • Too few samples (highly imbalanced classification). • What is novelty?

  4. Method • R (encoder-decoder): learns to reconstruct the positive samples (samples from the target class), i.e., learns the concept characterized by the space of all positive samples (novelty detector). If exposed to negative samples (novelty), R decimates/distorts them. • D (CNN): outputs a class label. • X: a target-class training sample. • The two networks compete during training but collaborate during testing; using both networks at testing time improves the results.
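The two networks on this slide can be sketched in PyTorch. This is a hypothetical minimal layout for 28×28 grayscale (MNIST-sized) input; the layer counts and channel widths here are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class R(nn.Module):
    """Reconstructor: encoder-decoder trained on target-class samples only."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(  # 28x28 -> 14x14 -> 7x7
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(  # 7x7 -> 14x14 -> 28x28
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

class D(nn.Module):
    """Discriminator: CNN mapping an image to a target-class probability."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(32 * 7 * 7, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)
```

R preserves the input resolution (so reconstructions can be compared pixel-wise to the input), while D collapses to a single probability.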

  5. Method (cont’d) Better discrimination power

  6. Joint Training of R and D

  7. Joint Training (Summary) • Similar to denoising autoencoders (but for a target concept). • Given an outlier or novelty sample (a new concept), R does not know what to do and maps it to an unknown distribution. • D is trained only to detect target samples, not novelty samples. • The output of R is more separable than the original input images.
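The joint training summarized above can be sketched as an alternating GAN-style step: D learns to separate target samples from R's reconstructions, and R learns both to fool D and to reconstruct the clean target from a noise-corrupted input (the denoising-autoencoder aspect). The names `R_net`/`D_net` and the values `sigma`/`lam` are illustrative assumptions, not the paper's reported hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def joint_training_step(R_net, D_net, opt_R, opt_D, X, sigma=0.2, lam=0.4):
    """One alternating update on a batch X of target-class samples."""
    bce = nn.BCELoss()
    ones = torch.ones(X.size(0), 1)
    zeros = torch.zeros(X.size(0), 1)
    X_noisy = X + sigma * torch.randn_like(X)  # denoising-AE style input

    # --- D step: target samples -> 1, R's reconstructions -> 0 ---
    opt_D.zero_grad()
    d_loss = bce(D_net(X), ones) + bce(D_net(R_net(X_noisy).detach()), zeros)
    d_loss.backward()
    opt_D.step()

    # --- R step: fool D (adversarial term) + reconstruct the clean target ---
    opt_R.zero_grad()
    rec = R_net(X_noisy)
    r_loss = bce(D_net(rec), ones) + lam * F.mse_loss(rec, X)
    r_loss.backward()
    opt_R.step()
    return d_loss.item(), r_loss.item()
```

Because R only ever sees (noisy) target samples, a novelty input falls off the learned manifold and gets distorted, which is exactly what D is then positioned to detect.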

  8. One-Class Classifier
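At test time the two networks collaborate, as the earlier slides note: the test sample is passed through R and then scored by D. A common reading of this decision rule, sketched here as an assumption (the threshold `tau` is illustrative), is that D(R(x)) below a threshold flags novelty.

```python
import torch

def novelty_score(R_net, D_net, x):
    """Score a test batch: higher D(R(x)) means more target-like."""
    with torch.no_grad():
        return D_net(R_net(x))

def is_novel(R_net, D_net, x, tau=0.5):
    """Flag samples whose score falls below the (assumed) threshold tau."""
    return novelty_score(R_net, D_net, x) < tau
```

Scoring R's output rather than the raw input exploits the fact that R distorts novelty samples, making D's job easier than on the original images.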

  9. Experiments • Outlier Detection (MNIST) • Trained to detect each digit separately • Other digits pose as outliers

  10. Separability of classes (before and after the reconstruction step): Inliers vs. Outliers

  11. Experiments (cont’d) • Outlier Detection (Caltech-256) • Similar to previous works [52], we repeat the procedure three times and use images from n = {1, 3, 5} randomly chosen categories as inliers (i.e., target). • Outliers are randomly selected from the “clutter” category, such that each experiment has exactly 50% outliers. • 1 inlier category / 3 inlier categories / 5 inlier categories

  12. Experiments (cont’d) • Video Anomaly Detection (UCSD Ped2) • Frame-level comparisons

  13. Experiments (cont’d) • UCSD Ped2

  14. Results (Cont’d) • UMN Video Anomaly Detection Dataset

  15. Analysis of the convergence conditions (Nash equilibrium)
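The equilibrium analyzed on this slide concerns the GAN-style min-max game between the two networks. A sketch of the objective, following the standard GAN formulation with R in place of the generator and the target input corrupted by Gaussian noise (the exact notation here is an assumption):

```latex
\min_{\mathcal{R}} \max_{\mathcal{D}} \;
  \mathbb{E}_{X \sim p_t}\!\left[\log \mathcal{D}(X)\right]
  + \mathbb{E}_{\tilde{X} \sim p_t + \mathcal{N}_{\sigma}}
    \!\left[\log\!\left(1 - \mathcal{D}\!\left(\mathcal{R}(\tilde{X})\right)\right)\right]
```

At the equilibrium, D can no longer distinguish genuine target samples from R's refined reconstructions of target samples, while novelty inputs, which R was never trained on, remain distorted and detectable.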

  16. The Reconstructor Network Architecture • Experiment on the MNIST dataset, for detecting digit ‘1’ • Dimensionality of the latent variable • Length of the reject region • Restricted by forcing the latent space to follow a Gaussian distribution

  17. Conclusion • We trained the first end-to-end trainable deep network for anomaly detection in images and videos. • We trained two networks jointly that compete during training but collaborate when evaluating any input test sample. • Future directions: • Exploration of other types of noise in the reconstructor network • Gaining insights into the latent representation • The networks help each other learn better; any other applications?

  18. Thank You! Any questions?
