
False-name-proofness in social choice, and limited verification of identities

Vincent Conitzer, Duke University. Papers: Anonymity-proof voting rules, draft (2007); Limited verification of identities to induce false-name-proofness, TARK-07.

Presentation Transcript


  1. False-name-proofness in social choice, and limited verification of identities. Vincent Conitzer, Duke University. Papers: Anonymity-proof voting rules, draft (2007); Limited verification of identities to induce false-name-proofness, TARK-07.

  2. Time Magazine “Person of the Century” poll – “results” (January 19, 2000)

  To Our Readers: We understand that some of our users are upset that the votes for Jesus Christ as "Person of the Century" were removed from our online poll. […] TIME's Man of the Year, instituted in the 1930s, has always been based on the impact of living persons on events around the world. […] When we removed the votes for Jesus we also removed the votes for the Prophet Mohammed. We did not wish to see this poll turned into a mockery, because in our experience it is quite possible that supporters of the leading figures might have turned to computer robots to churn out hundreds of thousands of "phony" votes for their champions. […] Ours is a poll that attempts to judge the works of mere men, the acts in which men render unto Caesar […]

  #   Person               %      Tally
  1   Elvis Presley        13.73  625045
  2   Yitzhak Rabin        13.17  599473
  3   Adolf Hitler         11.36  516926
  4   Billy Graham         10.35  471114
  5   Albert Einstein       9.78  445218
  6   Martin Luther King    8.40  382159
  7   Pope John Paul II     8.18  372477
  8   Gordon B Hinckley     5.62  256077
  9   Mohandas Gandhi       3.61  164281
  10  Ronald Reagan         1.78   81368
  11  John Lennon           1.41   64295
  12  American GI           1.35   61836
  13  Henry Ford            1.22   55696
  14  Mother Teresa         1.11   50770
  15  Madonna               0.85   38696
  16  Winston Churchill     0.83   37930
  17  Linus Torvalds        0.53   24146
  18  Nelson Mandela        0.47   21640
  19  Princess Diana        0.36   16481
  20  Pope Paul VI          0.34   15812

  3. Time Magazine “Person of the Century” poll – partial results (November 20)

  #   Person               %      Tally
  1   Jesus Christ         48.36  610238
  2   Adolf Hitler         14.00  176732
  3   Ric Flair             8.33  105116
  4   Prophet Mohammed      4.22   53310
  5   John Flansburgh       3.80   47983
  6   Mohandas Gandhi       3.30   41762
  7   Mustafa K Ataturk     2.07   26172
  8   Billy Graham          1.75   22109
  9   Raven                 1.51   19178
  10  Pope John Paul II     1.15   14529
  11  Ronald Reagan         0.98   12448
  12  Sarah McLachlan       0.85   10774
  13  Dr William L Pierce   0.73    9337
  14  Ryan Aurori           0.60    7670
  15  Winston Churchill     0.58    7341
  16  Albert Einstein       0.56    7103
  17  Kurt Cobain           0.32    4088
  18  Bob Weaver            0.29    3783
  19  Bill Gates            0.28    3629
  20  Serdar Gokhan         0.28    3627

  4. Ric Flair

  5. John Flansburgh(They Might Be Giants)

  6. Atatürk

  7. Raven

  8. Howard Stern

  9. Cartman

  10. Optimus Prime

  11. Anonymity-proof voting rules
  • A voting rule is false-name-proof if no voter ever benefits from casting additional (potentially different) votes
  • A voting rule satisfies participation if it never hurts a voter to cast her vote
  • A voting rule is anonymity-proof if it is false-name-proof and satisfies participation
  • Can we characterize anonymity-proof rules?
  • Assume the rule is neutral and anonymous
  • Randomized rules are allowed
  • [Gibbard 77] characterizes strategy-proof randomized rules

  12. Characterization
  • Theorem: Any anonymity-proof voting rule f can be described by a single number kf in [0,1]
  • With probability kf, the rule chooses an alternative uniformly at random
  • With probability 1 − kf, the rule draws two alternatives uniformly at random:
    • If all votes rank the same alternative higher among the two, that alternative is chosen
    • Otherwise, a coin is flipped to decide between the two alternatives
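The characterized family of rules is simple enough to state in code. The sketch below (Python; the function name is mine, and I assume "draws two alternatives uniformly at random" means a uniform unordered pair, an interpretation not spelled out on the slide) computes each alternative's exact winning probability:

```python
from fractions import Fraction
from itertools import combinations

def winning_probabilities(votes, alternatives, k):
    """Winning distribution of the rule characterized above.

    votes: list of rankings (lists of alternatives, best first).
    k: the rule's parameter k_f in [0,1], as a Fraction for exactness.
    Assumption: the 'two alternatives' are a uniform unordered pair.
    """
    m = len(alternatives)
    # With probability k, a uniformly random alternative wins.
    probs = {a: k / m for a in alternatives}
    pairs = list(combinations(alternatives, 2))
    for a, b in pairs:
        p_pair = (1 - k) / len(pairs)  # probability this pair is drawn
        if all(v.index(a) < v.index(b) for v in votes):
            probs[a] += p_pair          # unanimous: a beats b
        elif all(v.index(b) < v.index(a) for v in votes):
            probs[b] += p_pair          # unanimous: b beats a
        else:
            probs[a] += p_pair / 2      # disagreement: flip a coin
            probs[b] += p_pair / 2
    return probs
```

With k = 0 and a single vote a > b > c, this gives a probability 2/3, b probability 1/3, and c probability 0, linear in the rank as in Lemma 5 below; duplicating a vote changes nothing, as in Lemma 1's corollary, since unanimity on each pair is unaffected.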

  13. Lemma 1
  • If we have
    • vote 1: a1 > … > ak > (some other alternatives…) > ak+1 > … > al
    • vote 2: a1 > … > ak > (some other alternatives…) > ak+1 > … > al
    • other votes,
  then dropping vote 2 will not change the winning probabilities of any of the ai.
  • Corollary: Duplicates of existing votes have no effect

  14. Lemma 2
  • If we modify a vote by swapping some alternatives, but nothing is swapped past a, then a’s winning probability does not change.
  • b > a > c > d → b > a > d > c
  • b > a > c > d → b > c > a > d

  15. Lemma 3
  • If we modify a vote by swapping some alternative b past a, then, if the remaining votes do not agree on a vs. b (some rank a higher, some b), a’s winning probability does not change.
  • Example 1:
    c > a > b > d → c > b > a > d
    b > d > a > c → b > d > a > c
    d > a > b > c → d > a > b > c
  • Example 2:
    c > a > b > d → c > b > a > d
    b > d > a > c → b > d > a > c
    d > b > a > c → d > b > a > c

  16. Lemma 4
  • Using the previous lemmas, we can change the set of votes to a particular set of only 2 votes, without affecting a’s winning probability.
  • Proof sketch:
    • move as many alternatives as possible ahead of a in one vote
    • then, move as many alternatives behind a as possible in the other votes
    • the latter votes will all be the same, so remove duplicates

  17. Lemma 5 • The theorem holds when there is only one vote. • I.e. an alternative’s winning probability is linear in its rank in the vote.

  18. Lemma 6 • The theorem holds for the two votes in Lemma 4 (for alternative a).

  19. What are our options?
  • Insist on false-name-proofness
    • Inherent limitations on false-name-proof mechanisms
  • Verify everyone’s real-world identity
    • Total loss of anonymity
  • Middle ground:
    • Verify identities of some of the reports (votes, bids)
    • Throw out reports that fail the verification step
  • The original mechanism + verification protocol should be false-name-proof
  • A verification protocol induces false-name-proofness on the mechanism if using a single identifier with one’s true type is an ex-post equilibrium

  20. Combinatorial auctions
  (Figure: two items simultaneously for sale; the items and the bundle in each bid were shown as images.)
  • bid 1: v(…) = $500
  • bid 2: v(…) = $700
  • bid 3: v(…) = $300
  • $800 (label from the figure, presumably the value of the winning allocation)

  21. Clarke/VCG mechanism (Generalized Vickrey Auction)
  (Figure: the allocation among bids 2 and 3; bundle images omitted.)
  • bid 2: v(…) = $700
  • bid 3: v(…) = $300
  • $700 … pays $700 − $300 = $400 (per the figure)
  • Strategy-proof (bidding true valuations is always optimal)
  • Individually rational (IR) (participation never hurts)
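Since the slide's bundle images did not survive the transcript, here is a generic brute-force sketch of the Clarke/VCG mechanism for combinatorial auctions (Python; the function names and the example bundles/values below are my own illustrative choices, not necessarily the slide's):

```python
from itertools import combinations

def best_allocation(bids):
    """Exhaustively find the welfare-maximizing feasible allocation.
    bids: list of (bundle, value), bundle a frozenset of items.
    Returns (welfare, list of winning bid indices)."""
    best = (0, [])
    for r in range(1, len(bids) + 1):
        for combo in combinations(range(len(bids)), r):
            bundles = [bids[i][0] for i in combo]
            # feasible iff the selected bundles are pairwise disjoint
            if sum(len(b) for b in bundles) == len(frozenset().union(*bundles)):
                welfare = sum(bids[i][1] for i in combo)
                if welfare > best[0]:
                    best = (welfare, list(combo))
    return best

def vcg(bids):
    """Each winner pays the externality she imposes on the others:
    (others' best welfare without her) minus (others' welfare in the
    chosen allocation)."""
    welfare, winners = best_allocation(bids)
    payments = {}
    for i in winners:
        others = [b for j, b in enumerate(bids) if j != i]
        welfare_without_i, _ = best_allocation(others)
        payments[i] = welfare_without_i - (welfare - bids[i][1])
    return winners, payments
```

For example, with bids ({A,B}, $500), ({A}, $700), ({B}, $300), the last two bids win; the $700 bid pays $500 − $300 = $200 and the $300 bid pays $0. Running the same function on a profile where one bidder splits a bundle bid across two identifiers reproduces the false-name vulnerability discussed on the next slide.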

  22. False-name bidding [Yokoo et al. AIJ 2001, GEB 2003]
  (Figure: a bidder with v(…) = $800 loses when bidding truthfully; after splitting into false-name bids of v(…) = $700, $300, and $200, the identifiers win, paying $200, $0, and $0; bundle images omitted.)
  • Related to, but different from, certain types of collusive bidding [Ausubel & Milgrom 2006, Conitzer & Sandholm AAMAS 2006, …]
  • A (strategy-proof and IR) mechanism is false-name-proof if bidders never have an incentive to use multiple identifiers
  • No mechanism that allocates items efficiently is false-name-proof [Yokoo et al. GEB 2003]

  23. How do we verify identities?
  • Ask for a real-world identifier (for some subset of the reports)
    • Social security number, (maybe) credit card number, phone number
  • Ask a subset of identifiers to simultaneously have an online chat with the auctioneer’s employees
    • number of employees may be limited
    • may make sense to check multiple subsets separately

  24. Verification protocols
  • Checking a subset of reports:
    • Ask for social security numbers
    • Ask them to simultaneously appear in a chat room
  • Assume: if you submitted multiple reports in the subset, you can respond for at most one
  • A verification protocol specifies, for each set of reports, a collection of subsets S’1, …, S’k to check
  • If a report does not respond, throw it out and start over with the remaining reports

  25. Subsets of reports that require verification
  • A subset S of the submitted reports R requires verification if there exists some type (preference) t such that: given the reports R − S, an agent of type t would be better off reporting S than {t}
  (Figure: the false-name-bidding example from slide 22, with v(…) = $800, $700, $300, $200, and with various subsets of the bids labeled “requires verification” or “does not require verification”.)

  26. Characterization
  • Theorem. Given a strategy-proof, IR mechanism, a verification protocol P induces false-name-proofness if and only if:
  • For any set of reports R, for any subset S of R that requires verification, P will check some subset S’ such that |S∩S’| ≥ 2
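The theorem's condition is directly checkable. A minimal sketch (Python; the function name is hypothetical), taking the subsets that require verification and the subsets the protocol checks:

```python
def induces_fnp(requiring_subsets, checked_subsets):
    """True iff every subset S that requires verification intersects
    some checked subset S' in at least two reports, i.e. the condition
    of the characterization theorem above."""
    return all(
        any(len(set(S) & set(Sp)) >= 2 for Sp in checked_subsets)
        for S in requiring_subsets
    )
```

Intuitively, checking two reports from S simultaneously catches a false-name manipulator, who can respond for at most one of them.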

  27. Proof sketch
  • “If” direction:
  • Suppose everyone except you uses a single identifier and reports truthfully (so their reports are never deleted)
  • After verification terminates, consider the subset S of your reports that have not been deleted
  • S cannot require verification
    • Otherwise, the protocol must check some S’ with |S∩S’| ≥ 2; you can respond for at most one of those reports, so you would lose some report in S, a contradiction
  • But then you would have been better off reporting truthfully

  28. Proof sketch…
  • “Only if” direction:
  • Suppose there is some R and some subset S of R that requires verification, such that for any S’ that P checks, |S∩S’| ≤ 1
  • Then there must be some type t for which reporting S is better than reporting {t} (given the other reports R − S)
  • Suppose t and R − S are the true types
  • If everyone besides t uses a single identifier and reports truthfully, t is better off reporting S, since at most one of her reports is ever checked at a time and she can always respond for it

  29. Minimizing verification effort
  • SINGLE-SUBSET-VERIFICATION. Given
    • a set (of reports) R, and
    • a collection of subsets S1, …, Sn of R (that require verification),
  try to
    • minimize |S’|, under the constraint that
    • for all Si we have |Si∩S’| ≥ 2.
  • BOUNDED-SUBSET-VERIFICATION. Given
    • a set (of reports) R,
    • a collection of subsets S1, …, Sn of R (that require verification), and
    • a number k (maximum size of a checked subset),
  try to
    • minimize t, under the constraint that there are S’1, …, S’t such that
    • for all S’j, |S’j| ≤ k, and
    • for every Si there is some S’j with |Si∩S’j| ≥ 2.
  • Theorem. Essentially any R, S1, …, Sn can occur in the GVA.
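For intuition, SINGLE-SUBSET-VERIFICATION can be solved exactly by brute force on slide-sized instances. A sketch (Python; the function name is hypothetical, and the next slide shows the problem is NP-hard in general, so this does not scale):

```python
from itertools import combinations

def min_verification_subset(R, requiring):
    """Smallest S' subset of R with |Si ∩ S'| >= 2 for every requiring Si.
    Exponential-time exhaustive search over subsets of R by size."""
    elems = sorted(R)
    for size in range(len(elems) + 1):
        for combo in combinations(elems, size):
            Sp = set(combo)
            if all(len(set(Si) & Sp) >= 2 for Si in requiring):
                return Sp
    return None  # unreachable when every Si has at least 2 elements
```

For example, with reports {1, 2, 3, 4} and requiring subsets {1, 2, 3} and {2, 3, 4}, checking {2, 3} suffices: it meets each requiring subset in two reports.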

  30. A closely related problem
  • HITTING-SET. Given
    • a set T, and
    • a collection of subsets U1, …, Un of T,
  try to
    • minimize |U’|, under the constraint that
    • for all Ui we have |Ui∩U’| ≥ 1.
  • Essentially the same problem as SET-COVER
  • Theorem. HITTING-SET can be reduced to SINGLE-SUBSET-VERIFICATION and to BOUNDED-SUBSET-VERIFICATION (even with k = 2), showing NP-hardness and inapproximability

  31. Using HITTING-SET in a positive way
  • Theorem. Any ρ-approximation algorithm for HITTING-SET can be converted into a 2ρ-approximation algorithm for SINGLE-SUBSET-VERIFICATION
  • Algorithm:
    • find one hitting set,
    • remove those elements and the twice-hit subsets,
    • find another hitting set
  • Can also use any HITTING-SET algorithm for BOUNDED-SUBSET-VERIFICATION with k = 2
  (Figure: five reports A–E arranged as the vertices of a complete graph, with the ten pairs AB, AC, AD, AE, BC, BD, BE, CD, CE, DE as its edges.)
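The slide's three-step algorithm can be sketched as follows (Python; function names are mine, and I use the standard greedy HITTING-SET subroutine as the ρ-approximation). It assumes every requiring subset contains at least two reports, which must hold for the constraint |Si ∩ S’| ≥ 2 to be satisfiable at all:

```python
def greedy_hitting_set(subsets):
    """Standard greedy HITTING-SET: repeatedly pick the element
    contained in the most not-yet-hit subsets."""
    chosen = set()
    unhit = [set(S) for S in subsets]
    while unhit:
        counts = {}
        for S in unhit:
            for x in S:
                counts[x] = counts.get(x, 0) + 1
        best = max(counts, key=counts.get)
        chosen.add(best)
        unhit = [S for S in unhit if best not in S]
    return chosen

def verification_approx(subsets):
    """The slide's conversion: hit every subset once, drop subsets that
    are already hit twice, then hit the survivors a second time
    (using elements not chosen in the first pass)."""
    subsets = [set(S) for S in subsets]
    first = greedy_hitting_set(subsets)
    # survivors were hit exactly once and need a second, distinct element
    survivors = [S - first for S in subsets if len(S & first) < 2]
    return first | greedy_hitting_set(survivors)
```

Each pass costs at most ρ times the optimal hitting set, and any feasible verification subset is in particular a hitting set, which gives the 2ρ guarantee.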

  32. Further/future directions
  • Voting over two alternatives
  • Finding the (minimal) subsets that require verification
    • Verification monotonicity: if a set of winning bids requires verification, then so does any superset
    • Sufficient condition for verification monotonicity in the GVA
  • Mechanisms that are not strategy-proof
    • How about Bayes-Nash incentive compatible?
  • Verification protocols that use randomization

  Thank you for your attention!
