
Learning to Learn By Exploiting Prior Knowledge


Presentation Transcript


  1. Learning to Learn By Exploiting Prior Knowledge Tatiana Tommasi Idiap Research Institute École Polytechnique Fédérale de Lausanne Switzerland Oxford, October 22, 2012

  2. Example – Learning. Task, Training Experience, A performance measure. “I want to learn Italian”: “Bionji”… “Buonyo”… “Buongiorno”. An agent learns if its performance at a task improves with experience (Mitchell, 1996).

  3. Example – Learning to Learn. Tasks, Training Experience, Performance measures. “I want to learn Italian and French”: It: “Buongiorno”, Fr: “Bonjour”. An agent learns to learn if its performance at each task improves with experience and with the number of tasks (Thrun, 1996).

  4. What is this? A fruit. Does it look like some other fruit?

  5. Does it look similar to something else? Analogical reasoning: if we already know the appearance of some objects, we can use it as reference information when learning something new.

  6. Knowledge Transfer. Storing knowledge gained while solving one problem and applying it to a different but related problem. Source/Sources → Target: Guava. Learning to learn: some transfer must occur between multiple tasks, with a positive impact on performance.

  7. Domain Adaptation. Domain adaptation is needed when the data distribution of the test domain is different from that of the training domain. Source/Sources → Target.

  8. Multi-Task Learning. Learning over multiple tasks at the same time by exploiting a symmetric sharing of information. Task 1, Task 2, Task 3.

  9. Learning to Learn • Sharing Information • Knowledge Transfer • Domain Adaptation • Multi-Task Learning • Dynamic Process • Online Learning: continuous update of the current knowledge. • Active Learning: interactively query an oracle to obtain the desired outputs at new data points.

  10. Learning to Learn • Sharing Information • Knowledge Transfer • Domain Adaptation • Multi-Task Learning • Dynamic Process • Online Learning: continuous update of the current knowledge. • Active Learning: interactively query an oracle to obtain the desired outputs at new data points. Exploit Prior Knowledge.

  11. Knowledge Transfer: Advantages. Particularly useful when few target training samples are available: it boosts the learning process.

  12. Knowledge Transfer: Challenges. What to Transfer? Specify the form of the knowledge to transfer: instances, features, models. How to Transfer? Define a learning algorithm able to exploit prior knowledge. When to Transfer? Evaluate the task relatedness, keep useful knowledge and reject bad information (avoid negative transfer).

  13. My choices. What to Transfer? Learning models. How to Transfer? Discriminative learning approach. When to Transfer? Automatic evaluation. Intuition.


  15. Target Problem. I want to learn … vs … • Given a set of data $\{(x_i, y_i)\}_{i=1}^{N}$, $x_i \in \mathcal{X}$, $y_i \in \{-1, +1\}$ • Find a function $f: \mathcal{X} \to \mathbb{R}$ • Minimize the structural risk $\min_f \, \frac{\lambda}{2}\|f\|^2 + \sum_{i=1}^{N} \ell(y_i, f(x_i))$ • Linear models $f(x) = w \cdot \phi(x) + b$ • Feature mapping $\phi(\cdot)$ with kernel $K(x, x') = \phi(x) \cdot \phi(x')$ • Optimization problem $\min_{w,b} \, \frac{1}{2}\|w\|^2 + \frac{C}{2}\sum_{i=1}^{N} \ell(y_i, w \cdot \phi(x_i) + b)$

  16. Source Problem. I already know … vs … • A source set of data $\{(x'_i, y'_i)\}_{i=1}^{N'}$ with $y'_i \in \{-1, +1\}$ • Pre-learned model on the source • $\hat{w}'$: solution of the learning problem on the source

  17. What to Transfer • Consider $J$ source models • $\hat{w}'_j$: solution of the learning problem on the $j$-th source, expressed as a weighted sum of kernel functions, $\hat{f}^j(x) = \hat{w}'_j \cdot \phi(x) = \sum_i \hat{\alpha}^j_i K(x^j_i, x)$ • Use $\sum_{j=1}^{J} \beta_j \hat{w}'_j$ as reference knowledge when learning the target model $w$ • What to transfer? Discriminative models.

  18. How and When to Transfer. How: adaptive regularization, $\min_{w,b} \, \frac{1}{2}\big\|w - \sum_{j=1}^{J}\beta_j \hat{w}'_j\big\|^2 + \frac{C}{2}\sum_{i=1}^{N}\ell(y_i, w \cdot \phi(x_i) + b)$. When, how much: reweighted source knowledge through the weights $\beta_j$. • Evaluate the relevance of each source • Solve the target learning problem. • We name the obtained Knowledge Transfer approach KT. • [T. Tommasi and B. Caputo, BMVC 2009] • [T. Tommasi et al., CVPR 2010]

  19. Solve the target learning problem • Use the square loss $\ell(y_i, f(x_i)) = (y_i - f(x_i))^2$ • Solve $\min_{w,b} \, \frac{1}{2}\big\|w - \sum_{j=1}^{J}\beta_j \hat{w}'_j\big\|^2 + \frac{C}{2}\sum_{i=1}^{N}\big(y_i - w \cdot \phi(x_i) - b\big)^2$ • Adaptive Least-Squares Support Vector Machine • LS-SVM (Suykens et al., 2002) • square loss: predict each sample correctly; • not sparse: all the training samples enter the solution; • solution: a set of linear equations.

  20. Solving Procedure. In matrix form: $\begin{bmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & K + \frac{1}{C}I \end{bmatrix}\begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y - \hat{y}' \end{bmatrix}$, where $K_{ik} = K(x_i, x_k)$ and $\hat{y}'_i = \sum_{j}\beta_j \hat{f}^j(x_i)$. The model parameters can be calculated by matrix inversion. Solution: $w = \sum_j \beta_j \hat{w}'_j + \sum_i \alpha_i \phi(x_i)$. Classifier: $f(x) = \sum_i \alpha_i K(x_i, x) + \sum_j \beta_j \hat{f}^j(x) + b$.
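The solving step above fits in a few lines of numpy. This is a minimal sketch of the linear system, not the thesis code; the function name and the interface (a precomputed kernel matrix and already weighted source predictions) are illustrative assumptions:

```python
import numpy as np

def adaptive_lssvm_solve(K, y, y_src, C=1.0):
    """Solve the adaptive LS-SVM linear system.

    K     : (N, N) kernel matrix on the target samples
    y     : (N,) target labels in {-1, +1}
    y_src : (N,) weighted source predictions sum_j beta_j * f_j(x_i)
    Returns (alpha, b) so that f(x) = sum_i alpha_i K(x_i, x) + source part + b.
    """
    N = K.shape[0]
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0                 # bias row:    1^T alpha = 0
    A[1:, 0] = 1.0                 # bias column
    A[1:, 1:] = K + np.eye(N) / C  # regularized kernel block
    rhs = np.concatenate(([0.0], y - y_src))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return alpha, b
```

Given a matrix Y_src of per-source predictions (one column per source) and weights beta, one would call adaptive_lssvm_solve(K, y, Y_src @ beta).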

  21. Leave-One-Out Prediction. We can train the learning method on N samples and obtain as a byproduct the prediction for each training sample as if it were left out from the training set. The Leave-One-Out error is an almost unbiased estimator of the generalization error (Luntz and Brailovsky, 1969).
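For LS-SVM this estimator comes essentially for free: the leave-one-out residuals can be read off the inverse of the system matrix (an exact identity, see e.g. Cawley and Talbot, 2004). A sketch reusing the setup of the previous snippet; the helper is again only illustrative:

```python
import numpy as np

def loo_predictions(K, y, y_src, C=1.0):
    """Exact leave-one-out predictions for the adaptive LS-SVM.

    Uses the identity y_i - f^(-i)(x_i) = alpha_i / (A^{-1})_{ii};
    the source models do not depend on the left-out sample, so the
    identity for the correction term gives the full LOO prediction.
    No model is ever retrained.
    """
    N = K.shape[0]
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(N) / C
    A_inv = np.linalg.inv(A)
    alpha = (A_inv @ np.concatenate(([0.0], y - y_src)))[1:]
    d = np.diag(A_inv)[1:]   # diagonal entries of the alpha block
    return y - alpha / d
```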

  22. Evaluate the relevance of each source. The best values for $\beta$ are those producing positive values of $y_i \tilde{y}_i$ for each $i$, i.e. correct leave-one-out predictions $\tilde{y}_i$. To have a convex formulation we consider the hinge loss and solve $\min_{\beta} \sum_{i=1}^{N} \max\big(0, 1 - y_i \tilde{y}_i(\beta)\big)$ over an admissible set of non-negative weights.
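A sketch of this weight search follows. It exploits the fact that, with the system matrix fixed, $\alpha$ (and hence each leave-one-out prediction) is affine in $\beta$. The constraint set (non-negative weights with norm at most one) and the use of a generic SLSQP solver on the non-smooth hinge are simplifying assumptions of this illustration:

```python
import numpy as np
from scipy.optimize import minimize

def fit_source_weights(K, y, Y_src, C=1.0):
    """Pick source weights beta by minimizing the hinge loss on the
    leave-one-out predictions, which are affine in beta.

    Y_src : (N, J) predictions of the J source models on the target samples
    """
    N, J = Y_src.shape
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(N) / C
    A_inv = np.linalg.inv(A)
    d = np.diag(A_inv)[1:]

    # alpha(beta) = alpha0 - A_src @ beta: the system is linear in the rhs
    alpha0 = (A_inv @ np.concatenate(([0.0], y)))[1:]
    A_src = np.stack([(A_inv @ np.concatenate(([0.0], Y_src[:, j])))[1:]
                      for j in range(J)], axis=1)

    def loss(beta):
        loo = y - (alpha0 - A_src @ beta) / d   # LOO predictions, affine in beta
        return np.maximum(0.0, 1.0 - y * loo).sum()

    cons = ({'type': 'ineq', 'fun': lambda b: 1.0 - np.linalg.norm(b)},)
    res = minimize(loss, x0=np.zeros(J), bounds=[(0, 1)] * J, constraints=cons)
    return res.x
```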

  23. Experiments – Mixed Classes • Visual Object Classification • Caltech-256 • Binary problems: object vs non-object • Features: PHOG, SIFT, Region Covariance, LBP. 10 mixed classes, one target and nine sources.

  24. Results – Mixed Classes

  25. Experiments – 6 Unrelated Classes • Visual Object Classification • Caltech-256 • Binary problems: object vs non-object • Features: PHOG, SIFT, Region Covariance, LBP. 6 unrelated classes, one target and five sources.

  26. Results – 6 Unrelated Classes

  27. Experiments – 2 Unrelated Classes • Visual Object Classification • Caltech-256 • Binary problems: object vs non-object • Features: SIFT. 2 unrelated classes, one target and one source.

  28. Results – 2 Unrelated Classes

  29. Transfer Weights and Semantic Similarity • Use the vectors $\beta$ to define a matrix of class dissimilarities. • Apply multidimensional scaling (two dimensions).

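A possible realization of these two steps with scikit-learn is sketched below; how exactly the $\beta$ vectors are symmetrized into dissimilarities is left implicit on the slide, so the 1 − β construction here is an assumption:

```python
import numpy as np
from sklearn.manifold import MDS

def embed_classes(B, class_names):
    """B[t, j]: transfer weight beta given to source class j when class t
    is the target. Turn weights into symmetric dissimilarities and embed
    the classes in 2D with metric MDS."""
    D = 1.0 - 0.5 * (B + B.T)   # high transfer weight -> low dissimilarity
    np.fill_diagonal(D, 0.0)
    mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
    coords = mds.fit_transform(D)
    return dict(zip(class_names, coords))
```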

  31. Extension: Multiclass Domain Adaptation • $g = 1, \ldots, G$ classes, fixed for both source and target; • $f_g$ discriminates class $g$ as positive from all the others, considered as negative; • class prediction: $\hat{y} = \arg\max_{g} f_g(x)$; • Leave-One-Out predictions $\tilde{y}^g_i$ for each of the $G$ one-vs-all models.

  32. When and How Much to Transfer. We suffer a loss that grows linearly with the gap between the confidence of the correct label and the maximum confidence among the other labels: $\ell_i(\beta) = \big|1 - \big(\tilde{y}^{y_i}_i - \max_{g \neq y_i} \tilde{y}^{g}_i\big)\big|_+$. Final objective function: $\min_{\beta} \sum_{i=1}^{N} \ell_i(\beta)$.
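In code, the decision rule of slide 31 and this loss could look as follows (a sketch; tilde_Y holding the per-class leave-one-out confidences is an assumed layout):

```python
import numpy as np

def multiclass_loss(tilde_Y, labels):
    """tilde_Y : (N, G) leave-one-out confidences of the G one-vs-all models
    labels  : (N,) index of the correct class for each sample

    Hinge-style loss: penalize, linearly, whenever the correct class does
    not beat the best wrong class by a margin of 1."""
    N = len(labels)
    correct = tilde_Y[np.arange(N), labels]
    masked = tilde_Y.copy()
    masked[np.arange(N), labels] = -np.inf   # exclude the correct class
    runner_up = masked.max(axis=1)           # best wrong-class confidence
    return np.maximum(0.0, 1.0 - (correct - runner_up)).sum()

def predict(tilde_Y):
    # class prediction: argmax over the one-vs-all confidences
    return tilde_Y.argmax(axis=1)
```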

  33. Three Possible Schemes – scheme 1.

  34. Three Possible Schemes – scheme 2.

  35. Three Possible Schemes – scheme 3.

  36. Application • Personalization of a pre-existing model. • Task: hand posture classification. • Electrodes applied on the forearm collect sEMG signals. • Goals: • reduce the training time of a mechanical hand prosthesis through adaptive learning over several known subjects; • augment the control abilities over hand prostheses. [T. Tommasi et al., IEEE Transactions on Robotics 2012]

  37. Experimental setup • 10 healthy subjects • 7 sEMG electrodes • 3 grasping actions plus rest

  38. Experimental results

  39. More Subjects and Postures • 20 healthy subjects • 10 sEMG electrodes • 6 actions plus rest

  40. Leveraging on source models: Limits • Restricted to binary problems (transfer learning) or to multiclass problems with the same set of classes in the source and in the target (domain adaptation). • The source and the target models must live in the same space: same features and learning parameters. • Batch method: the relevance of each source must be re-evaluated every time a new training sample becomes available.

  41. Feature Transfer • Use the source models as experts that predict on the target samples. • Use the outputs of these predictions as additional feature elements. • Cast the problem in the multiple kernel learning framework (Multiple Kernel Transfer Learning, MKTL). • Principled multiclass formulation. [L. Jie*, T. Tommasi*, B. Caputo, ICCV 2011]
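The first two bullets admit a very small sketch: stack the source experts' outputs next to the original features. The multiple kernel learning stage of MKTL, which learns how much weight each expert's channel receives, is omitted here, and the decision_function interface is an assumption borrowed from the scikit-learn convention:

```python
import numpy as np

def augment_with_source_outputs(X, source_models):
    """Append the predictions of pre-trained source models ("experts")
    to each target sample as extra feature dimensions.

    X             : (N, d) original target features
    source_models : list of objects exposing decision_function(X) -> (N,)
    """
    expert_scores = np.column_stack(
        [m.decision_function(X) for m in source_models])
    return np.hstack([X, expert_scores])   # (N, d + n_sources)
```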

  42. Online Learning • Combine Online Learning and Knowledge Transfer so that they get a reciprocal benefit. • Avoid re-evaluating the relevance of the source knowledge at each step. • Obtain an online learning approach with robust generalization capacity. • Transfer Initialized Online Learning (TROL). [T. Tommasi et al., BMVC 2012]
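The idea can be sketched with the simplest mistake-driven online learner; TROL itself is not a plain perceptron, so treat this only as an illustration of "initialize from transferred knowledge, then update online without touching the sources again":

```python
import numpy as np

def trol_perceptron(stream, w_src, eta=1.0):
    """Transfer-initialized online learning, sketched with a perceptron:
    start from the pre-weighted source hypothesis instead of zero, then
    update only on mistakes -- no per-step re-evaluation of the sources.

    stream : iterable of (x, y) pairs with y in {-1, +1}
    w_src  : weighted combination of the source models, sum_j beta_j w_j
    """
    w = w_src.copy()
    for x, y in stream:
        if y * np.dot(w, x) <= 0:   # mistake-driven update
            w = w + eta * y * x
    return w
```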

  43. Cross-Database Generalization. Exploit Existing Visual Resources (partially overlapping label sets). • General case of many visual datasets (tasks) with some common classes. No explicit class alignment. • No model already learned, only samples available, possibly represented with different feature descriptors. • Define a representation which decomposes into two orthogonal parts: one shared and one private for each task. • Use the generic knowledge coded in the shared part when learning on a new target problem. • Multi-Task Unaligned Shared Knowledge Transfer (MUST). [T. Tommasi et al., ACCV 2012]
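The shared/private decomposition can be illustrated with a toy orthogonal projection; how MUST actually learns the shared subspace S is beyond this sketch:

```python
import numpy as np

def split_shared_private(w_task, S):
    """Decompose a task's weight vector into a part living in the shared
    subspace spanned by the orthonormal columns of S and an orthogonal,
    task-private remainder.

    w_task : (d,) weight vector of one task
    S      : (d, k) orthonormal basis of the shared subspace
    """
    shared = S @ (S.T @ w_task)   # projection onto the shared subspace
    private = w_task - shared     # orthogonal complement: private part
    return shared, private
```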

  44. Take Home Message • It is possible to define learning algorithms that automatically evaluate the relevance of prior knowledge when addressing a new target problem with few training examples. • The described approaches consistently outperform learning from scratch in both transfer learning and domain adaptation problems. • It is possible to reproduce artificially different aspects of the “human analogical reasoning process”.

  45. More details in my thesis... Questions? Tatiana Tommasi, ttommasi@idiap.ch • http://www.idiap.ch/~ttommasi/
