
Presentation Transcript


  1. Machine Learning, Lecture 1. Dr. Alper Özpınar

  2. Textbook • Main Textbook: Introduction to Machine Learning - Ethem Alpaydın (3rd Edition) • Supportive Material • Neural Networks and Learning Machines - Simon Haykin • Pattern Recognition and Machine Learning (Information Science and Statistics) - Christopher M. Bishop • Machine Learning - Tom M. Mitchell

  3. Weekly Plan

  4. Artificial Intelligence and Machine Learning • Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry. • Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits such as: • Knowledge • Reasoning • Learning • Problem solving • Perception • Planning • Ability to manipulate and move objects

  5. Automation and Intelligence

  6. Artificial Intelligence and Machine Learning

  7. Definition of Intelligence • Intelligence in living organisms is realized by a biological system: the brain, the nervous system, and nerve cells • The human brain has 19-23 billion neurons in the cerebral cortex

  8. Number of Neurons • Total / Cerebral Cortex • Sponge (0 / 0) • Rat (200 Million / 18 Million) • Capuchin monkey (3.5 Billion / 650 Million) • Human (85-100 Billion / 19-23 Billion) • African Elephant (250 Billion / 11 Billion)

  9. Number of Neurons and Artificial Intelligence

  10. A Little Bit of History • The Turing Machine • The Enigma Machine • Alan Turing (1912-1954)

  11. AI Hive

  12. AI and Machine Learning Business

  13. How Can You Get Started

  14. Big Data • Widespread use of personal computers and wireless communication leads to “big data” • We are both producers and consumers of data • Data is not random; it has structure, e.g., customer behavior • We need “big theory” to extract that structure from data for (a) understanding the process and (b) making predictions for the future

  15. Why “Learn” ? • Machine learning is programming computers to optimize a performance criterion using example data or past experience. • There is no need to “learn” to calculate payroll • Learning is used when: • Human expertise does not exist (navigating on Mars) • Humans are unable to explain their expertise (speech recognition) • Solution changes in time (routing on a computer network) • Solution needs to be adapted to particular cases (user biometrics)

  16. What We Talk About When We Talk About “Learning” • Learning general models from data of particular examples • Data is cheap and abundant (data warehouses, data marts); knowledge is expensive and scarce • Example in retail: from customer transactions to consumer behavior: People who bought “Blink” also bought “Outliers” (www.amazon.com) • Build a model that is a good and useful approximation to the data

  17. Data Mining • Retail: Market basket analysis, customer relationship management (CRM) • Finance: Credit scoring, fraud detection • Manufacturing: Control, robotics, troubleshooting • Medicine: Medical diagnosis • Telecommunications: Spam filters, intrusion detection • Bioinformatics: Motifs, alignment • Web mining: Search engines • ...

  18. What is Machine Learning? • Optimize a performance criterion using example data or past experience. • Role of Statistics: Inference from a sample • Role of Computer science: Efficient algorithms to • Solve the optimization problem • Representing and evaluating the model for inference

  19. Applications • Association • Supervised Learning • Classification • Regression • Unsupervised Learning • Reinforcement Learning

  20. Learning Associations • Basket analysis: P(Y | X), the probability that somebody who buys X also buys Y, where X and Y are products/services • Example: P(chips | beer) = 0.7
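
To make this concrete, here is a minimal sketch that estimates such a conditional probability from a toy transaction log; the baskets and product names below are invented for illustration:

```python
# Estimate P(Y | X) from a toy transaction log (data invented for illustration).
transactions = [
    {"beer", "chips"},
    {"beer", "chips", "salsa"},
    {"beer"},
    {"milk", "chips"},
    {"beer", "chips"},
]

def confidence(x, y, baskets):
    """P(y | x): fraction of baskets containing x that also contain y."""
    with_x = [b for b in baskets if x in b]
    if not with_x:
        return 0.0
    return sum(y in b for b in with_x) / len(with_x)

print(confidence("beer", "chips", transactions))  # 3 of 4 beer baskets -> 0.75
```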

  21. Example: Credit Scoring • Differentiating between low-risk and high-risk customers from their income and savings • Classification discriminant: IF income > θ1 AND savings > θ2 THEN low-risk ELSE high-risk
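
The discriminant above is just two threshold tests. A minimal sketch follows; the θ1 and θ2 values are made-up placeholders, not thresholds fitted to real data:

```python
# The rectangular discriminant from the slide; thresholds are assumed values.
THETA1 = 30_000   # income threshold (illustrative)
THETA2 = 10_000   # savings threshold (illustrative)

def credit_risk(income, savings):
    """Low-risk iff both income and savings exceed their thresholds."""
    if income > THETA1 and savings > THETA2:
        return "low-risk"
    return "high-risk"

print(credit_risk(45_000, 12_000))  # low-risk
print(credit_risk(45_000, 5_000))   # high-risk
```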

  22. Classification: Applications • A.k.a. pattern recognition • Face recognition: Pose, lighting, occlusion (glasses, beard), make-up, hair style • Character recognition: Different handwriting styles • Speech recognition: Temporal dependency • Medical diagnosis: From symptoms to illnesses • Biometrics: Recognition/authentication using physical and/or behavioral characteristics: face, iris, signature, etc. • Outlier/novelty detection

  23. Face Recognition • Training examples of a person vs. test images (ORL dataset, AT&T Laboratories, Cambridge, UK)

  24. Regression • Example: Price of a used car • x: car attributes, y: price • y = g(x | θ), where g(·) is the model and θ its parameters • Linear model: y = wx + w0
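
For the linear model y = wx + w0, the parameters have a closed-form least-squares solution. A minimal sketch with invented used-car data (age in years vs. price):

```python
# Fit y = w*x + w0 by ordinary least squares (closed form).
# The car-price numbers are invented for illustration.
xs = [2, 4, 6, 8, 10]      # car age in years
ys = [30, 24, 19, 13, 8]   # price in $1000s

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
w = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
w0 = my - w * mx
print(f"y = {w:.2f}*x + {w0:.2f}")   # y = -2.75*x + 35.30
```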

  25. Regression Applications • Navigating a car: angle of the steering wheel • Kinematics of a robot arm: joint angles α1 = g1(x, y), α2 = g2(x, y) that reach a hand position (x, y) • Response surface design

  26. Supervised Learning: Uses Prediction of future cases: Use the rule to predict the output for future inputs Knowledge extraction: The rule is easy to understand Compression: The rule is simpler than the data it explains Outlier detection: Exceptions that are not covered by the rule, e.g., fraud

  27. Unsupervised Learning • Learning “what normally happens” • No output • Clustering: Grouping similar instances • Example applications • Customer segmentation in CRM • Image compression: Color quantization • Bioinformatics: Learning motifs
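
Clustering can be illustrated with a bare-bones k-means loop; the points, k, seed, and iteration count below are illustrative choices, not a production implementation:

```python
import random

# Minimal k-means on 2-D points; data invented for illustration.
points = [(1, 1), (1.5, 2), (1, 1.5), (8, 8), (8.5, 9), (9, 8)]

def kmeans(pts, k, iters=10, seed=0):
    random.seed(seed)
    centers = random.sample(pts, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in pts:
            i = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        # Move each center to the mean of its cluster.
        for j, c in enumerate(clusters):
            if c:
                centers[j] = (sum(p[0] for p in c) / len(c),
                              sum(p[1] for p in c) / len(c))
    return centers

print(kmeans(points, k=2))   # one center near each group of points
```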

  28. Reinforcement Learning Learning a policy: A sequence of outputs No supervised output but delayed reward Credit assignment problem Game playing Robot in a maze Multiple agents, partial observability, ...
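
A tiny tabular Q-learning run shows the credit-assignment idea: the only reward sits at the end of a corridor, and repeated updates propagate it back to earlier states. Everything here (the corridor task, α, γ, ε) is an illustrative choice, not the slide's own example:

```python
import random

# Tabular Q-learning on a 5-cell corridor: reward 1 only at the rightmost cell.
N, GOAL = 5, 4
Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}
alpha, gamma, eps = 0.5, 0.9, 0.2    # illustrative hyperparameters
random.seed(0)

for _ in range(200):                 # episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)          # walls at both ends
        r = 1.0 if s2 == GOAL else 0.0          # delayed reward
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, -1)], Q[(s2, +1)])
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy should move right (+1) from every cell.
print([max((-1, +1), key=lambda act: Q[(s, act)]) for s in range(GOAL)])
```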

  29. Learning a Class from Examples • Class C of a “family car” • Prediction: Is car x a family car? • Knowledge extraction: What do people expect from a family car? • Output: Positive (+) and negative (–) examples • Input representation: x1: price, x2: engine power

  30. Training set X = {x^t, r^t}, t = 1, ..., N, where r^t = 1 if x^t is a positive example and r^t = 0 if it is negative

  31. Class C: (p1 ≤ price ≤ p2) AND (e1 ≤ engine power ≤ e2) for suitable thresholds p1, p2, e1, e2

  32. Hypothesis class H • Error of h ∈ H on the training set X: E(h | X) = Σ_{t=1}^{N} 1(h(x^t) ≠ r^t)

  33. S, G, and the Version Space • Most specific hypothesis, S • Most general hypothesis, G • Every h ∈ H between S and G is consistent with the training set; together they make up the version space (Mitchell, 1997)
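
For the family-car example, S is the tightest axis-aligned rectangle around the positive examples. A sketch with invented (price, engine power) data, also computing the empirical error of S on the training set:

```python
# Most specific hypothesis S: the tightest axis-aligned rectangle covering
# all positive examples. The (price, engine power) data is invented.
pos = [(12, 60), (16, 75), (14, 90)]   # family cars (+)
neg = [(5, 40), (30, 200), (14, 30)]   # not family cars (-)

x1 = [p[0] for p in pos]
x2 = [p[1] for p in pos]
S = (min(x1), max(x1), min(x2), max(x2))   # (p1, p2, e1, e2)

def h(x, rect):
    p1, p2, e1, e2 = rect
    return p1 <= x[0] <= p2 and e1 <= x[1] <= e2

# Empirical error E(h | X): count of misclassified training examples.
errors = sum(not h(p, S) for p in pos) + sum(h(n, S) for n in neg)
print(S, "errors:", errors)   # S covers all positives; no negative falls inside
```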

  34. Margin Choose h with largest margin

  35. VC Dimension • N points can be labeled in 2^N ways as +/– • H shatters N points if, for every one of these labelings, there exists an h ∈ H consistent with it; the largest such N is VC(H) • An axis-aligned rectangle shatters at most 4 points!
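
The shattering condition can be checked by brute force: enumerate all 2^N labelings and test whether some axis-aligned rectangle realizes each one. A sketch, using the fact that it suffices to test the tightest rectangle around the positive points; the point sets are illustrative:

```python
from itertools import product

def rect_consistent(points, labels):
    """Is there an axis-aligned rectangle containing exactly the + points?"""
    pos = [p for p, lab in zip(points, labels) if lab]
    if not pos:
        return True                      # the empty rectangle works
    x1, x2 = min(p[0] for p in pos), max(p[0] for p in pos)
    y1, y2 = min(p[1] for p in pos), max(p[1] for p in pos)
    # The tightest rectangle around the + points must exclude every - point.
    return not any(x1 <= p[0] <= x2 and y1 <= p[1] <= y2
                   for p, lab in zip(points, labels) if not lab)

def shatters(points):
    return all(rect_consistent(points, labels)
               for labels in product([0, 1], repeat=len(points)))

print(shatters([(0, 1), (1, 0), (2, 1), (1, 2)]))            # 4 points: True
print(shatters([(0, 1), (1, 0), (2, 1), (1, 2), (1, 1)]))    # interior 5th: False
```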

  36. Probably Approximately Correct (PAC) Learning • How many training examples N should we have such that, with probability at least 1 – δ, h has error at most ε? (Blumer et al., 1989) • Each strip is at most ε/4 • Pr that we miss a strip: 1 – ε/4 • Pr that N instances miss a strip: (1 – ε/4)^N • Pr that N instances miss 4 strips: 4(1 – ε/4)^N • Require 4(1 – ε/4)^N ≤ δ; since (1 – x) ≤ exp(–x), it suffices that 4 exp(–εN/4) ≤ δ, i.e., N ≥ (4/ε) ln(4/δ)
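
Plugging numbers into the bound is a one-liner; the helper below assumes the natural logarithm, which is what the exp(–x) step in the derivation gives:

```python
from math import ceil, log

# Sample size from the PAC bound N >= (4/eps) * ln(4/delta).
def pac_sample_size(eps, delta):
    return ceil((4 / eps) * log(4 / delta))

print(pac_sample_size(0.1, 0.05))   # eps=0.1, delta=0.05 -> 176 examples
```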

  37. Noise and Model Complexity Use the simpler one because Simpler to use (lower computational complexity) Easier to train (lower space complexity) Easier to explain (more interpretable) Generalizes better (lower variance - Occam’s razor)

  38. Multiple Classes, C_i, i = 1, ..., K • Train K hypotheses h_i(x), i = 1, ..., K, where h_i(x) = 1 if x ∈ C_i and h_i(x) = 0 otherwise
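
One way to realize this is one-vs-all training, sketched below in one dimension with interval hypotheses; the data, the choice of hypothesis class, and the reject behavior for uncovered inputs are all illustrative:

```python
# One-vs-all: train K binary hypotheses, one per class. Here each h_i is
# the tightest interval around class C_i's examples; data is invented.
X = [1.0, 1.2, 0.9, 5.0, 5.5, 9.0, 9.3]
r = [0, 0, 0, 1, 1, 2, 2]                  # class labels, K = 3
K = 3

intervals = []
for i in range(K):
    xs = [x for x, label in zip(X, r) if label == i]
    intervals.append((min(xs), max(xs)))   # h_i(x) = 1 iff x falls inside

def predict(x):
    for i, (lo, hi) in enumerate(intervals):
        if lo <= x <= hi:
            return i
    return None                            # reject: no h_i claims x

print(predict(5.2), predict(3.0))          # 1, None (ambiguous region -> reject)
```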

  39. Regression

  40. Model Selection & Generalization • Learning is an ill-posed problem; the data alone is not sufficient to find a unique solution • The need for inductive bias: assumptions about H • Generalization: How well a model performs on new data • Overfitting: H more complex than C or f • Underfitting: H less complex than C or f
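
Over- and underfitting are easy to demonstrate by fitting polynomials of different degree to noisy samples of a quadratic; the degrees, noise level, and seed below are arbitrary choices for illustration:

```python
import numpy as np

# Under- vs over-fitting on noisy samples of a quadratic (illustrative setup).
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 10)
y = x**2 + rng.normal(0, 0.05, size=x.shape)   # true f has degree 2
x_new = np.linspace(-1, 1, 100)                # unseen inputs

for degree in (1, 2, 7):
    coefs = np.polyfit(x, y, degree)
    train_err = np.mean((np.polyval(coefs, x) - y) ** 2)
    true_err = np.mean((np.polyval(coefs, x_new) - x_new**2) ** 2)
    print(f"degree {degree}: train {train_err:.4f}, vs true f {true_err:.4f}")
# degree 1 underfits (both errors high); degree 7 drives training error
# toward 0 but fits the noise and drifts from the true f between samples.
```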

  41. Triple Trade-Off • There is a trade-off between three factors (Dietterich, 2003): • Complexity of H, c(H) • Training set size, N • Generalization error, E, on new data • As N increases, E decreases • As c(H) increases, E first decreases and then increases

  42. Cross-Validation • To estimate generalization error, we need data unseen during training. We split the data as • Training set (50%) • Validation set (25%) • Test (publication) set (25%) • Resampling when there is little data
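
A minimal 50/25/25 splitter, assuming a shuffle with a fixed seed for reproducibility:

```python
import random

# Split data into 50% training, 25% validation, 25% test.
def split(data, seed=0):
    data = data[:]                       # copy so the caller's list is untouched
    random.Random(seed).shuffle(data)
    n = len(data)
    i, j = n // 2, n // 2 + n // 4
    return data[:i], data[i:j], data[j:]

train, val, test = split(list(range(100)))
print(len(train), len(val), len(test))   # 50 25 25
```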

  43. Dimensions of a Supervised Learner • Model: g(x | θ) • Loss function: E(θ | X) = Σ_t L(r^t, g(x^t | θ)) • Optimization procedure: θ* = arg min_θ E(θ | X)
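
These three dimensions can be made concrete in a few lines: a linear model g(x | θ), squared-error loss as L, and batch gradient descent as the optimization procedure. The data and learning rate below are invented; this is a sketch of one instance, not the book's code:

```python
# Model g(x | theta), loss E(theta | X), optimizer: gradient descent.
X = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2)]   # (x, r) pairs, invented

def g(x, theta):                 # model: g(x | w, w0) = w*x + w0
    w, w0 = theta
    return w * x + w0

def E(theta):                    # loss: sum of squared residuals
    return sum((r - g(x, theta)) ** 2 for x, r in X)

theta = [0.0, 0.0]
lr = 0.01                        # illustrative learning rate
for _ in range(2000):            # optimization: batch gradient descent
    gw = sum(-2 * (r - g(x, theta)) * x for x, r in X)
    gw0 = sum(-2 * (r - g(x, theta)) for x, r in X)
    theta = [theta[0] - lr * gw, theta[1] - lr * gw0]

print(theta, E(theta))           # w ~ 2.05, w0 ~ 0.97, small remaining loss
```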

  44. Resources: Datasets UCI Repository: http://www.ics.uci.edu/~mlearn/MLRepository.html Statlib: http://lib.stat.cmu.edu/

  45. Thank You
