Discover the essentials of efficient machine learning through the principles of Occam's Razor, the "Laziness" rule, and the "Follow the data" rule, and learn how to justify your models. This week's focus covers rule learning (Mitchell, Chapter 10) and evaluating hypotheses (Mitchell, Chapter 5). Homework #2 is due by 5:00 PM on October 23. Project grading weighs thoroughness of evaluation, understanding of ML issues, and quality of presentation, not just the ultimate performance of your system.
Lessons from homework
• Try the simplest thing first (sketched in code below)
• "Occam's Razor": prefer the simplest hypothesis that fits the data
  • Corresponds to the decision tree bias
  • Shown to be useful empirically (various, mostly unsatisfying, philosophical justifications also exist)
• "Laziness" rule
  • If it works, you're done
• "Follow the data" rule
  • If it doesn't work, the results tell you how to proceed
• "Justify yourself" rule
  • Your audience/boss/customer will resist a complex model unless you've shown that simple ones are inadequate
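A minimal sketch of the "try the simplest thing first" and "justify yourself" rules, assuming scikit-learn is available. The data from make_classification is a placeholder for whatever the homework task provides, and the 0.02 improvement threshold is an illustrative choice, not part of the slides.

```python
# Compare a shallow decision tree against an unrestricted one and keep
# the simpler hypothesis unless the complex one is clearly better on
# held-out data (Occam's Razor as a tie-break).
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Hypothetical data standing in for the homework task.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

simple = DecisionTreeClassifier(max_depth=2, random_state=0)
deep = DecisionTreeClassifier(max_depth=None, random_state=0)

simple_acc = cross_val_score(simple, X, y, cv=5).mean()
deep_acc = cross_val_score(deep, X, y, cv=5).mean()

# Prefer the simpler model unless the deeper one buys a clear gain
# on data it has not seen -- this is the evidence you show your audience.
chosen = deep if deep_acc > simple_acc + 0.02 else simple
print(f"simple={simple_acc:.3f} deep={deep_acc:.3f} -> {chosen}")
```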
This week
• Rule learning
  • Reading: Mitchell, Chapter 10
• Evaluating hypotheses
  • Reading: Mitchell, Chapter 5
• Homework #2 assigned later today
  • Due 5:00 PM October 23
  • Shorter than last time
Project Grading
• Questions
  • How did you encode your task? Why is this reasonable?
  • Which ML approaches did you use? Why?
  • How did you evaluate your system?
  • Were you successful? Why or why not? What did/would you try next?
• Grading based on:
  • Thoroughness of evaluation (see the evaluation sketch below)
  • Understanding of ML issues (e.g., overfitting, inductive bias)
  • Quality of presentation
  • Not on the ultimate performance of your system
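A minimal sketch of one evaluation step from the Chapter 5 reading: estimate error on a held-out test set and attach an approximate 95% confidence interval using the normal approximation error ± 1.96·sqrt(error(1−error)/n). The dataset and classifier here are placeholders, not part of the original slides.

```python
import math
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical data and model standing in for a course project.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Sample error on the held-out set, plus a 95% confidence interval.
n = len(y_test)
sample_error = (clf.predict(X_test) != y_test).mean()
half_width = 1.96 * math.sqrt(sample_error * (1 - sample_error) / n)
print(f"test error = {sample_error:.3f} +/- {half_width:.3f} (n={n})")
```

Reporting the interval, not just the point estimate, is one concrete way to demonstrate thoroughness of evaluation.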
How to formulate an ML task
• Example: Web pages (one possible encoding is sketched below)
  • Classify as Student, Instructor, or Course
  • What are the input features?
  • Would you use DTs or NNs?
• Example: Face recognition
  • Identify as one of 20 people
  • What are the input features?
  • DTs or NNs?
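A minimal sketch of one possible formulation of the web-page task, assuming scikit-learn: encode each page as bag-of-words counts and classify it as student, instructor, or course. The tiny corpus is invented for illustration; the slides deliberately leave the choice of features and learner open.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

# Invented example pages -- real data would be crawled university pages.
pages = [
    "homework syllabus lecture exam schedule",      # course page
    "office hours publications teaching research",  # instructor page
    "resume hobbies advisor courses taken",         # student page
]
labels = ["course", "instructor", "student"]

# Input features: word counts over the page text.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(pages)

# A decision tree is one reasonable first choice (simple, interpretable);
# a neural net over the same features is the obvious alternative.
clf = DecisionTreeClassifier(random_state=0).fit(X, labels)
new_page = vectorizer.transform(["lecture notes exam homework"])
print(clf.predict(new_page))  # likely "course" given the overlapping words
```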