
Strategyproof Classification Under Constant Hypotheses: A Tale of Two Functions






Presentation Transcript


  1. Reshef Meir, Ariel D. Procaccia, and Jeffrey S. Rosenschein Strategyproof Classification Under Constant Hypotheses: A Tale of Two Functions

  2. Outline • A very simple example of mechanism design in a decision making setting • 8 slides • An investigation of incentives in a general machine learning setting • 2 slides

  3. Motivation • The ECB makes yes/no decisions at the European level • Decisions are based on reports from national banks • National bankers gather positive/negative data from local institutions • Bankers might misreport their data in order to sway the central decision

  4. A simple setting • Set of n agents • Agent i controls points Xi = {xi1, xi2, ...} ⊆ X • For each xik ∈ Xi, agent i has a label yik ∈ {+, −} • Agent i reports labels y′i1, y′i2, ... • Mechanism receives the reported labels and outputs c+ (the constant function +) or c− (the constant function −) • Risk of i: Ri(c) = |{k: c(xik) ≠ yik}| • Global risk: R(c) = |{(i,k): c(xik) ≠ yik}| = Σi Ri(c)
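The risk definitions on slide 4 can be made concrete in a few lines. This is an illustrative sketch only; the function and variable names, the encoding of labels as +1/−1, and the example data are mine, not from the paper.

```python
# The mechanism's only two outputs: the constant concepts c+ and c-.
def c_plus(x):
    return +1

def c_minus(x):
    return -1

def individual_risk(concept, points):
    """R_i(c): number of agent i's points (x, y) that the concept mislabels."""
    return sum(1 for x, y in points if concept(x) != y)

def global_risk(concept, agents):
    """R(c): total number of mislabeled points, i.e. the sum of individual risks."""
    return sum(individual_risk(concept, points) for points in agents)

# Two hypothetical agents; each point is a pair (location, true label).
agents = [
    [(0.1, +1), (0.2, +1), (0.3, -1)],  # agent 0: mostly positive
    [(0.4, -1), (0.5, -1)],             # agent 1: all negative
]
print(global_risk(c_plus, agents))   # c+ mislabels the three -1 points -> 3
print(global_risk(c_minus, agents))  # c- mislabels the two +1 points -> 2
```

Note that for constant concepts the locations of the points are irrelevant; only the label counts matter, which is what makes the two-function setting so clean.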

  5. Individual and global risk [Figure: labeled + and − points partitioned among agents, illustrating each agent's individual risk and the global risk under the two constant concepts]

  6. Risk Minimization • If all agents report truthfully, choose concept that minimizes global risk • Risk Minimization is not strategyproof: agents can benefit by lying
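A tiny sketch of why Risk Minimization fails strategyproofness, in the spirit of slide 7's example. The concrete numbers and names below are my own illustration, not taken from the paper.

```python
def risk_minimization(reports):
    """Return the global-risk-minimizing constant concept given reported labels.
    For constant concepts this is just the majority label over all points
    (ties broken toward +1 here; the slides leave tie-breaking unspecified)."""
    labels = [y for ys in reports for y in ys]
    return +1 if labels.count(+1) >= labels.count(-1) else -1

true_labels = [[+1, +1, -1], [-1, -1]]       # agent 0 is mostly +, agent 1 is -
print(risk_minimization(true_labels))        # 2 vs 3 labels -> returns -1
# Under c-, agent 0's true risk is 2 (both of its + points are mislabeled).

# Agent 0 misreports its single - point as +:
manipulated = [[+1, +1, +1], [-1, -1]]
print(risk_minimization(manipulated))        # 3 vs 2 labels -> returns +1
# Under c+, agent 0's TRUE risk is only 1, so the lie strictly helps.
```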

  7. Risk Minimization is not SP [Figure: an arrangement of + and − labeled points where one agent flips some of its reported labels and thereby changes the risk-minimizing concept in its favor]

  8. Strategyproof approximation mechanisms • VCG works (but is not interesting) • A mechanism gives an α-approximation if it returns a concept with risk at most α times the optimal risk • Mechanism 1: • Define agent i as positive if it has a majority of + labels, negative otherwise • If at least half the points belong to positive agents, return c+; otherwise return c− • Theorem: Mechanism 1 is a 3-approximation group strategyproof mechanism • Theorem: No (deterministic) SP mechanism achieves an approximation ratio better than 3
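Mechanism 1 from slide 8 is simple enough to sketch directly. This follows the two bullet points (majority label per agent, then a point-weighted vote over agents); the tie-breaking choices are my own, since the slide leaves them unspecified.

```python
def mechanism_1(reports):
    """Mechanism 1: an agent is 'positive' if a STRICT majority of its reported
    labels are +1 (ties count as negative here); return c+ (encoded +1) iff
    positive agents hold at least half of all points, else c- (encoded -1)."""
    positive_points = sum(
        len(ys) for ys in reports
        if sum(1 for y in ys if y == +1) > len(ys) / 2
    )
    total_points = sum(len(ys) for ys in reports)
    return +1 if positive_points >= total_points / 2 else -1

# Agents 0 and 2 are positive and together hold 4 of 6 points -> c+.
print(mechanism_1([[+1, +1, -1], [-1, -1], [+1]]))   # -> 1
# The lone positive agent holds only 1 of 3 points -> c-.
print(mechanism_1([[-1, -1], [+1]]))                 # -> -1
```

Intuitively, the two-level majority caps how much any coalition can gain by lying: flipping reported labels can change an agent's own positive/negative classification, but never the weight (number of points) it contributes, which is where the group strategyproofness comes from.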

  9. Proof sketch [Figure: worst-case configurations of + and − labeled points across agents, used to establish the 3-approximation upper and lower bounds]

  10. Randomized SP mechanisms • Theorem: There is a randomized, group SP 2-approximation mechanism • Theorem: No randomized SP mechanism achieves an approximation ratio better than 2

  11. Reminder • A very simple example of mechanism design in a decision making setting • 8 slides • An investigation of incentives in a general machine learning setting • 2 slides

  12. A learning-theoretic setting • Each agent assigns a label to every point of X • Each agent holds a distribution over X • Ri(c) = probability that a point is mislabeled, under agent i's distribution • R(c) = average individual risk • Each agent's distribution is sampled, and the sample is labeled by that agent • Theorem: It is possible to achieve almost a 2-approximation in expectation, under a rationality assumption
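In this setting the mechanism only sees a finite labeled sample, so each Ri(c) is approximated empirically. The sketch below illustrates that sampling step only; the specific distribution, labeling rule, and sample size are stand-ins I chose for the example, not details from the paper.

```python
import random

def empirical_risk(concept, sample):
    """Fraction of sampled labeled points (x, y) that the concept mislabels:
    a finite-sample estimate of R_i(c)."""
    return sum(1 for x, y in sample if concept(x) != y) / len(sample)

random.seed(0)
# Hypothetical agent: labels x positive iff x < 0.7; points drawn uniformly
# from [0, 1), so about 70% of the sample is labeled +1.
sample = [(x, +1 if x < 0.7 else -1)
          for x in (random.random() for _ in range(1000))]

risk_plus = empirical_risk(lambda x: +1, sample)   # roughly 0.3
risk_minus = empirical_risk(lambda x: -1, sample)  # roughly 0.7
print(risk_plus, risk_minus)
```

With enough samples these estimates concentrate around the true Ri(c), which is what lets the mechanism's expected-risk guarantee carry over from the sample to the underlying distributions.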

  13. Towards a theory of incentives in machine learning • Classification: • Richer concept classes • Currently have strong results for linear threshold functions over the real line • Other machine learning models • Regression learning [Dekel, Fischer, and Procaccia, in SODA 2008]

  14. Thank You!
