1 / 18

中国云 移动互联网创新大赛 -- 火眼金睛


Presentation Transcript


  1. 中国云 移动互联网创新大赛 -- 火眼金睛 Team: LCLL, Zhejiang University

  2. Team
  Team Leader:
  Yue Lin (林悦): model training, parameter tuning, structure design
  Team Members:
  Debing Zhang (张德兵): surveying new techniques in deep learning, structure design
  Cheng Li (李成): image crawler, building the model-training environment
  Xiaoting Zhao (赵晓婷): data labeling
  Publications:
  - Yue Lin, Rong Jin, Deng Cai, Xiaofei He: Random Projection with Filtering for Nearly Duplicate Search. AAAI 2012
  - Yue Lin, Rong Jin, Deng Cai, Shuicheng Yan, Xuelong Li: Compressed Hashing. CVPR 2013
  - Bin Xu, Jiajun Bu, Yue Lin, Chun Chen, Xiaofei He, Deng Cai: Harmonious Hashing. IJCAI 2013
  - Zhongming Jin, Yao Hu, Yue Lin, Debing Zhang, Shiding Lin, Deng Cai, Xuelong Li: Complementary Projection Hashing. ICCV 2013
  - Yao Hu, Debing Zhang, Jieping Ye, Xuelong Li, Xiaofei He: Fast and Accurate Matrix Completion via Truncated Nuclear Norm Regularization. TPAMI 2013
  - Yao Hu, Debing Zhang, Zhongming Jin, Deng Cai, Xiaofei He: Active Learning Based on Local Representation. IJCAI 2013
  - Debing Zhang, Genmao Yang, Yao Hu, Zhongming Jin, Deng Cai, Xiaofei He: A Unified Approximate Nearest Neighbor Search Scheme by Combining Data Structure and Hashing. IJCAI 2013
  - Debing Zhang, Yao Hu, Jieping Ye, Xuelong Li, Xiaofei He: Matrix Completion by Truncated Nuclear Norm Regularization. CVPR 2012
  - Yao Hu, Debing Zhang, Jun Liu, Jieping Ye, Xiaofei He: Accelerated Singular Value Thresholding for Matrix Completion. KDD 2012
  - Zhongming Jin, Cheng Li, Deng Cai, Yue Lin: Density Sensitive Hashing. TSMCB 2013

  3. Data — traditional approaches. Ref: Pedestrian Detection: An Evaluation of the State of the Art

  4. Competition data. We chose deep learning. Offline test result: 0.9820.

  5. Structure. We follow the network structures used for MNIST and ImageNet.

  6. More data is good. Negative data: Caltech 256 and some images selected from VOC. All the data needs to be checked: mislabeled images will hurt performance.

  7. More data is good. Positive data: Baidu Shitu. We implemented a crawler that sends some classical query images to Baidu Shitu and saves the result pages; a second crawler then extracts the image links from the saved pages and downloads the images.
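The second crawl stage (save the page, then pull out the image links) can be sketched as follows. The saved page and its markup here are invented for illustration; real Baidu Shitu result pages would need matching selectors.

```python
from html.parser import HTMLParser

class ImageLinkExtractor(HTMLParser):
    """Collect the src attribute of every <img> tag in a saved result page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.links.append(src)

# Hypothetical saved page, standing in for a stored search-result page.
saved_page = """
<html><body>
  <img src="http://example.com/result1.jpg">
  <img src="http://example.com/result2.jpg">
</body></html>
"""

parser = ImageLinkExtractor()
parser.feed(saved_page)
print(parser.links)
```

A second pass would then fetch each collected link and save the image to disk.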

  8. Training Information
  Gray vs. Color: color is better.
  Resolution: 128x128 is better than 64x64 and 32x32.
  Maps: more maps are better but cost more time; we finally chose 64 maps.
  Convolutions: more convolutions are better but cost more time; we finally chose 5 convolutions.
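As a rough sanity check on these choices (not the team's actual configuration — the 3x3 filters, padding, and 2x2 pooling are assumptions), here is how a 128x128 color input shrinks through five convolution + pooling stages of 64 maps each:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Standard output-size formula for a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

size = 128  # input resolution, as chosen above
for stage in range(5):
    size = conv_out(size, kernel=3, pad=1)       # 3x3 conv, 'same' padding
    size = conv_out(size, kernel=2, stride=2)    # 2x2 max-pooling, stride 2
    print(f"after stage {stage + 1}: 64 maps of {size}x{size}")
```

Under these assumptions the fifth stage still leaves 4x4 maps, which is one reason 128x128 inputs can support five convolutions while 32x32 inputs cannot.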

  9. Training Information. We use ReLU neurons, f(x) = max(0, x), which train much faster. We also use local response normalization: "Local response normalization aids generalization."
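The ReLU nonlinearity and local response normalization can be sketched in NumPy as below. This is a minimal illustration, not the team's code; the constants n, k, alpha, beta are the defaults from Krizhevsky et al. (2012), which the team may or may not have used.

```python
import numpy as np

def relu(x):
    """Rectified linear unit: f(x) = max(0, x)."""
    return np.maximum(0.0, x)

def local_response_norm(a, n=5, k=2.0, alpha=1e-4, beta=0.75):
    """Normalize each activation by a sum of squares over n adjacent
    channels, following the AlexNet formulation. a has shape (C, H, W)."""
    C = a.shape[0]
    b = np.empty_like(a)
    for i in range(C):
        lo, hi = max(0, i - n // 2), min(C, i + n // 2 + 1)
        denom = (k + alpha * np.sum(a[lo:hi] ** 2, axis=0)) ** beta
        b[i] = a[i] / denom
    return b

print(relu(np.array([-3.0, 0.0, 2.5])))
print(local_response_norm(np.ones((8, 2, 2))).shape)
```

Because ReLU does not saturate for positive inputs, gradients flow without the vanishing behavior of sigmoid/tanh units, which is where the training speedup comes from.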

  10. Parameters

  11. Viewing the Net Training and test error over time.

  12. Viewing the Net

  13. Viewing the Net

  14. Viewing the Net

  15. Viewing the Net

  16. Discussion — Dropout. Ref: ImageNet Classification with Deep Convolutional Neural Networks. Dropout achieves better performance on ImageNet, MNIST, TIMIT, and CIFAR-10. In our offline test, it improved performance from 0.9820 to 0.9826.
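The dropout scheme from the referenced paper (zero each unit with probability p during training; use all units at test time but scale by 1 - p) can be sketched in NumPy as below. This is an illustration of the technique, not the team's implementation.

```python
import numpy as np

def dropout_train(x, p=0.5, rng=None):
    """Training pass: independently zero each unit with probability p."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(x.shape) >= p
    return x * mask

def dropout_test(x, p=0.5):
    """Test pass: keep all units, scale by (1 - p) so the expected
    activation matches training."""
    return x * (1.0 - p)

x = np.ones(1000)
train_out = dropout_train(x, p=0.5, rng=np.random.default_rng(0))
test_out = dropout_test(x, p=0.5)
```

Dropout acts as a regularizer by preventing co-adaptation of units, which is consistent with the small but real offline-test gain reported above.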

  17. Future. 1. Why does it work? Theoretical understanding: a long way to go. 2. How do we scale it? Distributed computation and huge data.

  18. Thank you
