
Dual Transfer Learning



Presentation Transcript


  1. Dual Transfer Learning Mingsheng Long1,2, Jianmin Wang2, Guiguang Ding2, Wei Cheng, Xiang Zhang, and Wei Wang 1Department of Computer Science and Technology, 2School of Software, Tsinghua University, Beijing 100084, China

  2. Outline
  • Motivation
  • The Framework
    • Dual Transfer Learning
  • An Implementation
    • Joint Nonnegative Matrix Tri-Factorization
  • Experiments
  • Conclusion

  3. Notations
  • Domain: a feature space X together with a marginal distribution P(x)
    • Two domains are different if their feature spaces or their marginal distributions differ
  • Task: given a feature space X and a label space Y, learn or estimate the predictive function f(x) = P(y | x)
    • Two tasks are different if their label spaces or their conditional distributions differ

  4. Motivation
  [Figure: source domain comp.os and target domain comp.hardware, linked through latent factors such as task scheduling, performance, power consumption, and architecture]
  • Domain-specific latent factors cause the discrepancy between domains
  • Common latent factors represent the commonality between domains
  • Exploring the marginal distributions

  5. Motivation
  [Figure: in both the source domain comp.os and the target domain comp.hardware, the model parameters map the latent factors (task scheduling, performance, power consumption, architecture) to the class label comp]
  • Shared model parameters represent the commonality between tasks
  • Exploring the conditional distributions

  6. The Framework: Dual Transfer Learning (DTL)
  • Simultaneously learning the marginal distribution and the conditional distribution
    • Marginal mapping: learning the marginal distribution
    • Conditional mapping: learning the conditional distribution
  • Exploring the duality between the two for mutual reinforcement (see the identity below)
    • Learning one distribution can help to learn the other distribution
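As background (this identity is not shown on the slide), the two mappings are coupled because the joint distribution factorizes into exactly the two pieces the framework learns, so an estimate of either factor constrains the other:

```latex
% Standard factorization of the joint distribution:
% the marginal mapping models P(x), the conditional mapping models P(y | x).
P(x, y) = P(x)\, P(y \mid x)
```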

  7. Nonnegative Matrix Tri-Factorization (NMTF)
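The factorization formula on this slide did not survive the transcript. For reference, the standard NMTF objective (Ding et al., KDD'06), which the slide presumably shows, approximates a nonnegative data matrix by three nonnegative factors:

```latex
\min_{F \ge 0,\; S \ge 0,\; G \ge 0} \;
\left\| X - F S G^{\top} \right\|_F^2,
\qquad
X \in \mathbb{R}_+^{m \times n},\;
F \in \mathbb{R}_+^{m \times k},\;
S \in \mathbb{R}_+^{k \times c},\;
G \in \mathbb{R}_+^{n \times c}
```

In document analysis, X is the word-document matrix, F clusters words (latent factors), G clusters documents (classes), and S associates word clusters with document clusters.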

  8. An Implementation: Joint NMTF
  [Figure: source domain comp.os and target domain comp.hardware, linked through latent factors such as task scheduling, performance, power consumption, and architecture; domain-specific factors cause the discrepancy between domains, while common factors represent the commonality between domains]
  • Marginal mapping: learning the marginal distribution

  9. An Implementation: Joint NMTF
  [Figure: in both the source domain comp.os and the target domain comp.hardware, the model parameters map the latent factors (task scheduling, performance, power consumption, architecture) to the class label comp, representing the commonality between tasks]
  • Conditional mapping: learning the conditional distribution

  10. An Implementation: Joint NMTF
  [Equations: the Joint Nonnegative Matrix Tri-Factorization objective that implements Dual Transfer Learning, and the solution (update rules) to the Joint NMTF optimization problem; an illustrative sketch follows below]
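The objective and update rules shown on this slide are not recoverable from the transcript. The Python sketch below is an illustration only, not the paper's exact Joint NMTF formulation: it factorizes the source and target word-document matrices jointly, shares the association matrix S across domains as the transfer bridge, and applies standard multiplicative updates for the Frobenius objective; the variable names, the particular sharing scheme, and the omission of source-label supervision are all assumptions of this sketch.

```python
# Illustrative sketch of a joint nonnegative matrix tri-factorization
# (not the paper's exact Joint NMTF objective or update rules).
# Model assumed here:  Xs ~ Fs S Gs^T  and  Xt ~ Ft S Gt^T,
# with the association matrix S shared across the two domains.
import numpy as np

EPS = 1e-9  # guards against division by zero in the multiplicative updates


def joint_nmtf(Xs, Xt, k, c, n_iter=200, seed=0):
    """Xs, Xt: nonnegative word-document matrices over a common feature space
    (m words) with ns / nt documents; k word clusters, c document clusters."""
    rng = np.random.default_rng(seed)
    m = Xs.shape[0]
    Fs, Ft = rng.random((m, k)), rng.random((m, k))                      # word clusters
    Gs, Gt = rng.random((Xs.shape[1], c)), rng.random((Xt.shape[1], c))  # document clusters
    S = rng.random((k, c))                                               # shared association matrix

    for _ in range(n_iter):
        # Domain-specific factors: standard multiplicative rules for ||X - F S G^T||_F^2.
        Fs *= (Xs @ Gs @ S.T) / (Fs @ S @ (Gs.T @ Gs) @ S.T + EPS)
        Ft *= (Xt @ Gt @ S.T) / (Ft @ S @ (Gt.T @ Gt) @ S.T + EPS)
        Gs *= (Xs.T @ Fs @ S) / (Gs @ S.T @ (Fs.T @ Fs) @ S + EPS)
        Gt *= (Xt.T @ Ft @ S) / (Gt @ S.T @ (Ft.T @ Ft) @ S + EPS)
        # Shared factor: numerator and denominator are summed over both domains,
        # which is where knowledge is transferred between source and target.
        num = Fs.T @ Xs @ Gs + Ft.T @ Xt @ Gt
        den = (Fs.T @ Fs) @ S @ (Gs.T @ Gs) + (Ft.T @ Ft) @ S @ (Gt.T @ Gt) + EPS
        S *= num / den
    return Fs, Ft, S, Gs, Gt
```

In a real run the labeled source documents would also be injected (for example by clamping part of Gs to the source labels) so that the document clusters in Gt can be read off as class predictions, e.g. Gt.argmax(axis=1); the sketch leaves that supervision out for brevity.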

  11. Joint NMTF: Theoretical Analysis
  • Derivation
    • Formulate a Lagrange function for the optimization problem
    • Use the KKT conditions to obtain the update rules (illustrated below)
  • Convergence
    • Proved by the auxiliary function approach [Ding et al. KDD'06]
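As a generic illustration of that derivation pattern (not taken from the slide), for any nonnegative factor H with objective J(H), the Lagrangian and the KKT complementary-slackness condition lead to a multiplicative update of the following form:

```latex
% Lagrangian with multipliers \Lambda \ge 0 for the constraint H \ge 0:
%   L = J(H) - \operatorname{tr}(\Lambda H^{\top})
% Stationarity:              \nabla_H J = \Lambda
% Complementary slackness:   \Lambda_{ij} H_{ij} = 0
% Splitting the gradient into nonnegative parts, \nabla_H J = [\nabla_H J]^{+} - [\nabla_H J]^{-},
% the fixed-point condition ([\nabla_H J]^{+} - [\nabla_H J]^{-})_{ij} H_{ij} = 0 yields
H_{ij} \;\leftarrow\; H_{ij}\,
\frac{[\nabla_H J]^{-}_{ij}}{[\nabla_H J]^{+}_{ij}}
```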

  12. Experiments
  • Open data sets: 20-Newsgroups, Reuters-21578
  • Each cross-domain data set contains approximately 8,000 documents and 15,000 features
  • Evaluation criteria (see the note below)
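The evaluation formula on this slide did not survive the transcript; classification accuracy on the unlabeled target-domain documents is the standard criterion in this line of work, so it is stated here under that assumption:

```latex
\mathrm{Accuracy} \;=\;
\frac{\left| \{\, x \in \mathcal{D}_t : f(x) = y(x) \,\} \right|}{\left| \mathcal{D}_t \right|}
```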

  13. Experiments
  • Non-transfer methods: NMF, SVM, LR, TSVM
  • Transfer learning methods:
    • Co-Clustering based Classification (CoCC) [Dai et al. KDD'07]
    • Matrix Tri-Factorization based Classification (MTrick) [Zhuang et al. SDM'10]
    • Dual Knowledge Transfer (DKT) [Wang et al. SIGIR'11]

  14. Experiments
  • Parameter sensitivity and algorithm convergence

  15. Conclusion
  • We proposed a novel Dual Transfer Learning (DTL) framework
    • It explores the duality between the marginal distribution and the conditional distribution for mutual reinforcement
  • We implemented a novel Joint NMTF algorithm based on the DTL framework
  • Experimental results validated that DTL is superior to state-of-the-art transfer learning methods that learn only a single distribution

  16. Any Questions? Thank you!
