
Machine Learning, Chapter 2: Concept Learning






Presentation Transcript


1. Overview
• Concept Learning
• Representation
• Inductive Learning Hypothesis
• Concept Learning as Search
• The Structure of the Hypothesis Space
• Find-S Algorithm
• Version Space
• List-Eliminate Algorithm
• Candidate-Elimination Algorithm
• Using Partially Learned Concepts
• Inductive Bias
• An Unbiased Learner
• The Futility of Bias-Free Learning
• Inductive and Equivalent Deductive Systems
• Three Learners with Different Biases

2. The Example
Windsurfing can be fun, but this depends on the weather. The following data is collected:

Sky    AirTemp  Humidity  Wind    Water  Forecast  EnjoySport
Sunny  Warm     Normal    Strong  Warm   Same      Yes
Sunny  Warm     High      Strong  Warm   Same      Yes
Rainy  Cold     High      Strong  Warm   Change    No
Sunny  Warm     High      Strong  Cool   Change    Yes

Is there a pattern here – what is the general concept?
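A compact way to hold this table in code (the tuple encoding and variable name are our own choices, not from the slides): each instance is a tuple of attribute values in the column order above, paired with a boolean label. Later snippets reuse this `examples` list.

```python
# EnjoySport training set: (instance, label) pairs.
# Attribute order: Sky, AirTemp, Humidity, Wind, Water, Forecast.
examples = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"),   True),
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"),   True),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), False),
    (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), True),
]
```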

3. Concept Learning
Concept = a boolean-valued function defined over a large set of objects or events.
Concept learning = inferring a boolean-valued function from training examples of its input and output.
How can we represent a concept?

4. Representation (1/2)
Many possible representations, but here:
• Instance x: a collection of attribute values (Sky, AirTemp, Humidity, etc.).
• Target function c: a boolean-valued function c: X → {0, 1} (here, EnjoySport).
• Hypothesis h: a conjunction of constraints on the attributes. A constraint can be:
  • a specific value (e.g. Water = Warm)
  • don't care (e.g. Water = ?)
  • no value allowed (e.g. Water = Ø, where Ø denotes the empty set)
• Training example d: an instance x paired with its target value c(x) – a positive example if c(x) = 1, a negative example if c(x) = 0.

5. Representation (2/2)
• Instances X: the set of possible instances x.
• Hypotheses H: the set of possible hypotheses h.
• Training examples D: the set of training examples d.

Notation: x = instance, h = hypothesis, d = training example.
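To make the representation concrete, a minimal sketch under our own encoding: a hypothesis is a tuple of constraints, each either a specific value, "?" (don't care), or "Ø" (no value allowed). An instance satisfies a hypothesis when every constraint is met; later snippets reuse `satisfies`.

```python
def satisfies(x, h):
    """True iff instance x meets every constraint of hypothesis h.
    A "?" matches anything; "Ø" equals no value, so it matches nothing."""
    return all(c == "?" or c == v for c, v in zip(h, x))

h = ("Sunny", "?", "?", "Strong", "?", "?")
print(satisfies(("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), h))  # True
print(satisfies(("Rainy", "Cold", "High", "Strong", "Warm", "Change"), h))  # False
```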

6. Inductive Learning Hypothesis
Concept learning = inferring a boolean-valued function from training examples of its input and output. Thus, concept learning is to find an h ∈ H such that h(x) = c(x) for all x ∈ X (∈ = "member of").

Problem: We might not have access to all x!
Solution: Find an h that fits the examples, and take the resulting function as an approximation of the true c(x). This principle is called induction.

Inductive learning hypothesis: Any hypothesis found to approximate the target function well over a sufficiently large set of training examples will also approximate the target function well over other unobserved examples.

7. Concept Learning as Search
Concept learning is to find an h ∈ H such that h(x) = c(x) for all x ∈ X. This is a search problem: search through the hypothesis space H for the best h.

8. The Structure of H
Many algorithms for concept learning organize the search by exploiting a useful structure in the hypothesis space: the general-to-specific ordering. An example:

h1 = <Sunny, ?, ?, Strong, ?, ?>
h2 = <Sunny, ?, ?, ?, ?, ?>
h3 = <Sunny, ?, ?, ?, Cool, ?>

Because h2 imposes fewer constraints, it covers more instances x in X than both h1 and h3: h2 is more general than h1 and h3. In the same manner we can define more-specific-than.

[Figure: points in the instance space mapped to hypotheses in the hypothesis space, ordered from general to specific.]
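The more-general-than-or-equal relation can be tested constraint by constraint; a sketch under the same tuple encoding (h2 ≥g h1 iff every instance satisfying h1 also satisfies h2):

```python
def more_general_or_equal(h2, h1):
    """True iff h2 is more general than or equal to h1."""
    def covers(g, s):
        # Constraint g covers constraint s if g accepts every value s accepts.
        return g == "?" or g == s or s == "Ø"
    return all(covers(g, s) for g, s in zip(h2, h1))

h1 = ("Sunny", "?", "?", "Strong", "?", "?")
h2 = ("Sunny", "?", "?", "?", "?", "?")
print(more_general_or_equal(h2, h1))  # True: h2 covers everything h1 covers
print(more_general_or_equal(h1, h2))  # False
```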

9. Find-S Algorithm
Finds the most specific hypothesis consistent with the training examples (hence the name).

1. Initialize h to the most specific hypothesis in H
2. For each positive training instance x
     For each attribute constraint ai in h
       If the constraint ai is satisfied by x
         Then do nothing
         Else replace ai in h by the next more general constraint that is satisfied by x
3. Output hypothesis h
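A direct translation of the pseudocode, assuming the tuple encoding above. For conjunctive hypotheses, "the next more general constraint" is the example's own value when the constraint is Ø, and "?" otherwise.

```python
def find_s(examples, n_attrs=6):
    """Return the most specific hypothesis consistent with the positives."""
    h = ("Ø",) * n_attrs                    # 1. most specific hypothesis in H
    for x, positive in examples:            # 2. for each training instance...
        if not positive:
            continue                        #    negatives are ignored
        h = tuple(xi if ci == "Ø"           #    Ø generalizes to the seen value
                  else ci if ci == xi       #    constraint already satisfied
                  else "?"                  #    otherwise generalize to "?"
                  for ci, xi in zip(h, x))
    return h                                # 3. output hypothesis h

print(find_s(examples))  # ('Sunny', 'Warm', '?', 'Strong', '?', '?')
```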

10. Find-S Algorithm Example
Applied to the EnjoySport data:

h0 = <Ø, Ø, Ø, Ø, Ø, Ø>
x1 = <Sunny, Warm, Normal, Strong, Warm, Same>, +  →  h1 = <Sunny, Warm, Normal, Strong, Warm, Same>
x2 = <Sunny, Warm, High, Strong, Warm, Same>, +    →  h2 = <Sunny, Warm, ?, Strong, Warm, Same>
x3 = <Rainy, Cold, High, Strong, Warm, Change>, −  →  h3 = h2 (negative examples are ignored)
x4 = <Sunny, Warm, High, Strong, Cool, Change>, +  →  h4 = <Sunny, Warm, ?, Strong, ?, ?>

11. Problems with Find-S
There are several problems with the Find-S algorithm:
• It cannot determine whether it has actually learnt the concept. There might be several other hypotheses that match equally well – has it found the only one?
• It cannot detect when the training data is inconsistent. We would like to detect, and be tolerant to, errors and noise.
• Why do we want the most specific hypothesis? Some other hypothesis might be more useful.
• Depending on H, there might be several maximally specific consistent hypotheses, all equally likely, and Find-S has no way to find them all.

12. Version Space
Definition: A hypothesis h is consistent with a set of training examples D if and only if h(x) = c(x) for each example in D.

Definition: The version space, denoted VS_H,D, with respect to hypothesis space H and training examples D, is the subset of hypotheses from H consistent with the training examples in D. In short: the subset of all hypotheses in H consistent with D.

13. Representation of the Version Space
• Option 1: List all members of the version space. Works only when the hypothesis space H is finite!
• Option 2: Store only the set of most general members G and the set of most specific members S. Given these two sets, any member of the version space can be generated as needed.

14. List-Eliminate Algorithm
1. VS_H,D ← a list containing every hypothesis in H
2. For each training example, remove from VS_H,D every hypothesis h that is inconsistent with it: h(x) ≠ c(x)
3. Output the list of hypotheses in VS_H,D

Advantage: guaranteed to output all hypotheses consistent with the training examples. But inefficient! Even in this simple example there are 1 + 4·3·3·3·3·3 = 973 semantically distinct hypotheses.
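A brute-force sketch of List-Eliminate for the EnjoySport space, reusing `satisfies` and `examples` from above. The attribute domains follow Mitchell's book (values such as "Cloudy" and "Weak" never occur in the four examples); with one three-valued and five two-valued attributes there are 1 + 4·3^5 = 973 semantically distinct conjunctions.

```python
from itertools import product

domains = [
    ["Sunny", "Cloudy", "Rainy"],   # Sky
    ["Warm", "Cold"],               # AirTemp
    ["Normal", "High"],             # Humidity
    ["Strong", "Weak"],             # Wind
    ["Warm", "Cool"],               # Water
    ["Same", "Change"],             # Forecast
]

def list_eliminate(examples):
    """Enumerate all semantically distinct hypotheses, keep the consistent ones."""
    hypotheses = [("Ø",) * 6]                              # matches nothing
    hypotheses += product(*[d + ["?"] for d in domains])   # 4 * 3**5 = 972 more
    return [h for h in hypotheses
            if all(satisfies(x, h) == label for x, label in examples)]

vs = list_eliminate(examples)
print(len(vs))  # 6 hypotheses survive the four training examples
```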

15. Candidate-Elimination Algorithm
G ← the set of maximally general hypotheses in H
S ← the set of maximally specific hypotheses in H
For each training example d: modify G and S so that G and S remain consistent with d.

16. Candidate-Elimination Algorithm (detailed)
G ← the set of maximally general hypotheses in H
S ← the set of maximally specific hypotheses in H
For each training example d:
• If d is a positive example
  • Remove from G any hypothesis that is inconsistent with d
  • For each hypothesis s in S that is not consistent with d
    • Remove s from S
    • Add to S all minimal generalizations h of s such that
      • h is consistent with d
      • some member of G is more general than h
  • Remove from S any hypothesis that is more general than another hypothesis in S
• If d is a negative example
  • Remove from S any hypothesis that is inconsistent with d
  • For each hypothesis g in G that is not consistent with d
    • Remove g from G
    • Add to G all minimal specializations h of g such that
      • h is consistent with d
      • some member of S is more specific than h
  • Remove from G any hypothesis that is less general than another hypothesis in G
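A runnable sketch of these boundary-set updates for our conjunctive encoding, reusing `satisfies`, `more_general_or_equal`, `examples`, and `domains` from the earlier snippets. For conjunctions the minimal generalization of s is exactly the Find-S step, and the minimal specializations of g replace a single "?" with a value that differs from the negative example's.

```python
def min_generalize(s, x):
    """The unique minimal generalization of s covering x (the Find-S step)."""
    return tuple(xi if ci == "Ø" else ci if ci == xi else "?"
                 for ci, xi in zip(s, x))

def candidate_elimination(examples, domains):
    """Maintain the boundary sets S and G of the version space."""
    n = len(domains)
    S = [("Ø",) * n]                        # maximally specific boundary
    G = [("?",) * n]                        # maximally general boundary
    for x, positive in examples:
        if positive:
            G = [g for g in G if satisfies(x, g)]
            # min_generalize leaves already-consistent hypotheses unchanged,
            # and for conjunctions S stays a singleton, so no pruning needed.
            S = [min_generalize(s, x) for s in S]
            S = [s for s in S if any(more_general_or_equal(g, s) for g in G)]
        else:
            S = [s for s in S if not satisfies(x, s)]
            new_G = []
            for g in G:
                if not satisfies(x, g):
                    new_G.append(g)         # already consistent: keep as is
                    continue
                # Minimal specializations: turn one "?" into a value != x[i].
                for i, c in enumerate(g):
                    if c != "?":
                        continue
                    for v in domains[i]:
                        if v != x[i]:
                            h = g[:i] + (v,) + g[i + 1:]
                            if any(more_general_or_equal(h, s) for s in S):
                                new_G.append(h)
            # Drop members less general than another member of G.
            G = [g for g in new_G if not any(
                 g2 != g and more_general_or_equal(g2, g) for g2 in new_G)]
    return S, G

S, G = candidate_elimination(examples, domains)
print(S)  # [('Sunny', 'Warm', '?', 'Strong', '?', '?')]
print(G)  # [('Sunny', '?', '?', '?', '?', '?'), ('?', 'Warm', '?', '?', '?', '?')]
```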

17. Example of Candidate-Elimination
Running the algorithm on the EnjoySport data:

S0 = {<Ø, Ø, Ø, Ø, Ø, Ø>},  G0 = {<?, ?, ?, ?, ?, ?>}
After x1 (+): S1 = {<Sunny, Warm, Normal, Strong, Warm, Same>},  G1 = G0
After x2 (+): S2 = {<Sunny, Warm, ?, Strong, Warm, Same>},  G2 = G0
After x3 (−): S3 = S2,  G3 = {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>, <?, ?, ?, ?, ?, Same>}
After x4 (+): S4 = {<Sunny, Warm, ?, Strong, ?, ?>},  G4 = {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>}

18. Properties of Candidate-Elimination
• Converges to the target concept when:
  • there is no error in the training examples
  • the target concept is in H
• Converges to an empty version space when:
  • the training data is inconsistent
  • the target concept cannot be described by the hypothesis representation

19. Using Partially Learned Concepts
Even before the version space has converged to a single hypothesis, a new instance can be classified by letting every hypothesis in the version space vote (see the sketch below). For the four new instances on the original slide the votes are:

New instance   Votes: Pos  Neg
(a)                   6    0
(b)                   0    6
(c)                   3    3
(d)                   2    4

A unanimous vote (a, b) is as reliable a classification as knowing the target concept itself; a split vote (c, d) means the instance cannot be classified with confidence.
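A sketch of the voting scheme over the version space `vs` computed by `list_eliminate` above. The two test instances are our own illustrative choices, picked to show a unanimous and a split vote.

```python
def vote(version_space, x):
    """Return (positive, negative) vote counts for instance x."""
    pos = sum(satisfies(x, h) for h in version_space)
    return pos, len(version_space) - pos

print(vote(vs, ("Sunny", "Warm", "Normal", "Strong", "Cool", "Change")))  # (6, 0)
print(vote(vs, ("Sunny", "Warm", "Normal", "Weak", "Warm", "Same")))      # (3, 3)
```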

20. Inductive Bias
New example: each instance is a single integer in [1, 4].

Training examples:
x   c(x)
1   1
2   0
3   1

There is no way to represent this simple example using our representation. Why? With a single attribute, a conjunctive hypothesis can only be a specific value (exactly one positive instance), ? (everything positive), or Ø (nothing positive) – none of these classifies 1 and 3 as positive but 2 as negative. The hypothesis space H is biased, since it only allows conjunctions of attribute values.

21. An Unbiased Learner
Solution(?): Let H be the set of all possible subsets of X – the power set of X, denoted P(X). It contains 2^|X| hypotheses, where |X| is the number of distinct instances. For our example, where each instance is a single integer in [1, 4], |X| = 4 and P(X) contains 2^4 = 16 hypotheses, from the empty set Ø up to {1, 2, 3, 4} itself.

22. The Futility of Bias-Free Learning
For our example (c(1) = 1, c(2) = 0, c(3) = 1), we get a version space after the three training examples. How do we classify a new instance x = 4? The only instances that are classified are the training examples themselves: each unobserved instance is classified positive by precisely half the hypotheses in VS_H,D and negative by the other half. In order to learn the target concept, one would have to present every single instance in X as a training example!
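The 1-1 split is easy to verify by enumeration; a sketch assuming X = {1, 2, 3, 4} and the three examples above, with each subset of X read as the hypothesis "x is positive iff x is in the subset":

```python
from itertools import chain, combinations

X = [1, 2, 3, 4]
train = {1: True, 2: False, 3: True}

# P(X): all 16 subsets of X, each one a hypothesis.
subsets = list(chain.from_iterable(combinations(X, r) for r in range(len(X) + 1)))
version_space = [h for h in subsets
                 if all((x in h) == label for x, label in train.items())]

print(version_space)                    # [(1, 3), (1, 3, 4)]
print([4 in h for h in version_space])  # [False, True]: an exact 1-1 split on x = 4
```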

23. Inductive Bias Revisited
Inductive bias is necessary in order to be able to generalize!

Definition: The inductive bias of a learner L is any minimal set of assertions B such that for any target concept c and corresponding training data D:

(∀ x ∈ X) [ (B ∧ D ∧ x) ⊢ L(x, D) ]

where L(x, D) is the classification L assigns to x after training on D, and y ⊢ z means that z deductively follows from y. (∀ = "for all", ∈ = "member of", ∧ = "logical and".)

24. Inductive and Equivalent Deductive Systems
[Figure: an inductive system (e.g. Candidate-Elimination using hypothesis space H, taking training examples and a new instance and outputting a classification) contrasted with an equivalent deductive system: a theorem prover given the same inputs plus the explicit assertion "H contains the target concept" as its inductive bias.]

25. Three Learners with Different Biases
1. Rote learner: simply stores instances. No bias – unable to generalize.
2. Candidate-Elimination. Bias: the target concept is in the hypothesis space H.
3. Find-S. Bias: the target concept is in the hypothesis space H, plus: all instances are negative unless taught otherwise.

26. Summary
• Concept learning as search through H
• Many algorithms use the general-to-specific ordering of H
• Find-S algorithm
• Version space and version space representations
• List-Eliminate algorithm
• Candidate-Elimination algorithm
• Inductive leaps are possible only if the learner is biased
• Inductive learners can be modelled by equivalent deductive systems
