
Automated Design of Voting Rules by Learning From Examples






Presentation Transcript


  1. Automated Design of Voting Rules by Learning From Examples Ariel D. Procaccia, Aviv Zohar, Jeffrey S. Rosenschein

  2. Outline • Scoring rules: definition, advantages • Our approach • Learning voting rules in the PAC model • Main theorem • Limitations of our approach • Conclusions

  3. Scoring rules • Election: set of voters N = {1,...,n}, set of candidates/alternatives A = {x1,...,xm}. Voters express linear preferences Ri over A. • The winner is determined according to a voting rule/social choice function. • Scoring rules: defined by a vector α = (α1,...,αm) with αi ≥ αi+1 for all i. Each candidate receives αi points from every voter who ranks it in the i'th place. • Examples: • Plurality: α = (1,0,...,0) • Veto: α = (1,...,1,0) • Borda: α = (m-1,m-2,...,0)
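The definitions above can be sketched directly in code. This is a minimal illustration, not the paper's implementation; in particular it breaks ties lexicographically, which the slides do not specify.

```python
# Minimal sketch of a positional scoring rule: a candidate ranked in the
# i'th place earns alpha[i] points from that voter. Tie-breaking is
# lexicographic here -- an assumption, not fixed by the slides.

def scoring_rule_winner(profile, alpha):
    """profile: list of rankings (best to worst); alpha: non-increasing scores."""
    scores = {}
    for ranking in profile:
        for position, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0) + alpha[position]
    # Highest total score wins; ties go to the lexicographically first name.
    return max(sorted(scores), key=lambda c: scores[c])

m = 3
plurality = [1, 0, 0]          # alpha = (1, 0, ..., 0)
veto      = [1, 1, 0]          # alpha = (1, ..., 1, 0)
borda     = [m - 1, m - 2, 0]  # alpha = (m-1, m-2, ..., 0)

profile = [list("abc"), list("acb"), list("bac"), list("cba"), list("bca")]
print(scoring_rule_winner(profile, plurality))  # a (tie with b, broken lexicographically)
print(scoring_rule_winner(profile, borda))      # b (highest Borda score)
```

The same profile already shows how the choice of α matters: plurality and Borda elect different candidates.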

  4. On the diversity of scoring rules • Different choices of parameters result in rules with different properties. • Some properties: • Majority: the candidate most preferred by a majority is elected. • Robustness: worst-case probability of the outcome not changing as a result of a fault. • Computational complexity of coalitional manipulation. • Communication complexity.

  5. Automated design of voting rules • The designer/teacher is presented with preference profiles, and designates the winner in each. • Philosophical justification. • Practical justification: the designer simply wants to find a concise representation. • Assuming there exists a "target" scoring rule, the goal is to find a scoring rule which is "close".

  6. An Illustration [Figure: the designer labels the winner (candidate a) in sample three-voter preference profiles over {a, b, c}; the computer infers a scoring rule from these labeled examples.]

  7. PAC Learning • Training set consists of pairs of examples (Rj, f(Rj)). • The Rj are drawn from a fixed distribution D. • f = target scoring rule. • Goal: given ε > 0, find a scoring rule g such that ProbD[f(R) ≠ g(R)] ≤ ε. • Q: How many examples are needed in order to guarantee that the goal is achieved with probability at least 1-δ?
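The quantity ProbD[f(R) ≠ g(R)] can be estimated by sampling. The sketch below is illustrative only: it takes D to be the uniform distribution over profiles (the model allows any fixed D), and `positional_winner`, `empirical_disagreement`, and the lexicographic tie-break are my own hypothetical choices.

```python
import random

# Sketch of the PAC setup: profiles R_j are drawn i.i.d. from a fixed
# distribution D (here: uniform random rankings, an illustrative
# assumption), labeled by a target rule f, and a hypothesis g is judged
# by how often it disagrees with f on fresh samples.

def positional_winner(profile, alpha):
    scores = {}
    for ranking in profile:
        for i, c in enumerate(ranking):
            scores[c] = scores.get(c, 0) + alpha[i]
    return max(sorted(scores), key=lambda c: scores[c])

def empirical_disagreement(f, g, n, candidates, trials=2000, seed=0):
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        # One profile = n independent uniformly random rankings.
        profile = [rng.sample(candidates, len(candidates)) for _ in range(n)]
        if f(profile) != g(profile):
            bad += 1
    return bad / trials  # Monte Carlo estimate of Prob_D[f(R) != g(R)]

plurality = lambda p: positional_winner(p, [1, 0, 0])
borda     = lambda p: positional_winner(p, [2, 1, 0])
print(empirical_disagreement(plurality, borda, n=5, candidates=list("abc")))
```

A consistent hypothesis g = f yields disagreement 0; two genuinely different scoring rules, such as plurality and Borda, disagree on a positive fraction of uniform profiles.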

  8. PAC-learnability of scoring rules • Theorem: If there are at least poly(n, m, 1/ε, 1/δ) examples in the training set, then any "consistent" scoring rule g achieves the goal. • Such a rule can be efficiently found using LP. • Scoring rules are efficiently PAC-learnable. • Example: two labeled profiles in which a wins yield the linear constraints: find α1, α2, α3 s.t. 3α1 > 3α2, 3α1 > 3α3, 2α1 + α3 > α1 + 2α2, 2α1 + α3 > α2 + 2α3, α1 ≥ α2 ≥ α3 ≥ 0
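The slide's constraints can be handed to an off-the-shelf LP solver. The sketch below assumes SciPy is available, and replaces each strict inequality by a margin of 1, which is harmless since scoring rules are invariant to positive scaling of α.

```python
from scipy.optimize import linprog

# Find alpha_1, alpha_2, alpha_3 satisfying the slide's constraints:
#   3a1 > 3a2,  3a1 > 3a3,  2a1 + a3 > a1 + 2a2,  2a1 + a3 > a2 + 2a3,
#   a1 >= a2 >= a3 >= 0.
# linprog expects A_ub @ x <= b_ub, so each strict ">" becomes "<= -1"
# (margin 1, justified by scale invariance of scoring rules).
A_ub = [
    [-3,  3,  0],   # 3a1 > 3a2
    [-3,  0,  3],   # 3a1 > 3a3
    [-1,  2, -1],   # 2a1 + a3 > a1 + 2a2
    [-2,  1,  1],   # 2a1 + a3 > a2 + 2a3
    [-1,  1,  0],   # a1 >= a2
    [ 0, -1,  1],   # a2 >= a3
]
b_ub = [-1, -1, -1, -1, 0, 0]
# Any feasible point works; minimizing the total score keeps numbers small.
res = linprog(c=[1, 1, 1], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x)  # a consistent score vector for the two labeled profiles
```

Here the solver returns a plurality-like vector; any other feasible α would be equally "consistent" with the training examples.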

  9. Limitations • There are many different scoring rules. • Can any voting rule be approximated by a scoring rule? • Definition: g is a c-approximation of f iff f and g agree on a c-fraction of the possible preference profiles. • Reformulation: given a voting rule f, how hard is it to learn a scoring rule which is a c-approximation, with c close to 1? • Theorem: Let ε > 0. For large enough n, m, there exists an f such that no scoring rule is a (1/2 + ε)-approximation of f. • Lemma: There exists a polynomial p(n, m) s.t. the number of distinct scoring rules is at most 2^p(n,m).
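The counting flavor of the lemma shows up even at toy scale. The sketch below is my own illustration (not from the paper): for m = 3 candidates and n = 2 voters it enumerates small integer score vectors and counts how few distinct rules they induce, against the 3^36 voting rules on those profiles.

```python
from itertools import permutations, product

# Toy illustration of the lemma: for m = 3 candidates and n = 2 voters
# there are 3**36 voting rules (one winner per each of the 36 profiles),
# yet monotone score vectors induce only a handful of distinct rules.
candidates = "abc"
profiles = list(product(permutations(candidates), repeat=2))  # 6 * 6 = 36

def winner(profile, alpha):
    scores = {c: 0 for c in candidates}
    for ranking in profile:
        for i, c in enumerate(ranking):
            scores[c] += alpha[i]
    return max(sorted(scores), key=lambda c: scores[c])  # lexicographic tie-break

# A rule is identified with its full winner table over all 36 profiles;
# scaled vectors like (1,0,0) and (3,0,0) collapse to the same rule.
distinct = {
    tuple(winner(p, alpha) for p in profiles)
    for alpha in product(range(4), repeat=3)
    if alpha[0] >= alpha[1] >= alpha[2]
}
print(len(distinct), "distinct scoring rules vs", 3 ** 36, "voting rules")
```

At this scale the gap is already enormous, which is exactly the leverage the theorem's counting argument uses.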

  10. Proof of Theorem [Figure: counting argument comparing the size of the space of voting rules with the bound on the number of scoring rules given by the lemma.]

  11. Conclusions • If the designer can designate winners, then it can automatically design a voting rule. • Cumbersome representation → concise representation. • Many voting rules cannot be approximated by scoring rules. • Open questions: • Is there a broad class of rules which can be approximated by scoring rules? • Is there a broad class of rules which is efficiently learnable and concisely representable?
