
A Cultural Sensitive Agent for Human-Computer Negotiation






Presentation Transcript


  1. A Cultural Sensitive Agent for Human-Computer Negotiation • Galit Haim, Ya'akov Gal, Sarit Kraus and Michele J. Gelfand

  2. Motivation • Buyers and sellers interact across geographical and ethnic borders in electronic commerce, crowd-sourcing, and deal-of-the-day applications • To succeed in interactions between people from different countries, an agent needs to reason about how culture affects people's decision making

  3. Goals and Challenges • Can we build an agent that negotiates better than the people in each country? • Can we build a proficient negotiator with no expert-designed rules? • Can we make the agent culture-sensitive despite sparse, noisy data? • The approach: 1. Collect data in each country 2. Use machine learning 3. Build an influence diagram

  4. The Colored Trails (CT) Game • An infrastructure for agent design, implementation and evaluation in open environments • Designed in 2004 by Barbara Grosz and Sarit Kraus (Grosz et al., AIJ 2010) • CT is the right test-bed because it provides a task analogy to the real world

  5. The CT Configuration • 7×5 board of colored squares • One square is the goal • Each player holds a set of colored chips • Moving to an adjacent square requires surrendering a chip of the same color as that square

  6. CT Scenario • 2 players • Multiple phases: • communication: negotiation (alternating-offers protocol) • transfer: chip exchange • movement • Complete information • Agreements are not enforceable • Complex dependencies between players • The game ends when one of the players reaches the goal or does not move for three movement phases

  7. Scoring and Payment • 100-point bonus for getting to the goal • 5-point bonus for each chip left at the end of the game • 10-point penalty for each square in the shortest path from the end position to the goal • A player's performance does not depend on the outcome for the other player
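The scoring rule on this slide can be written as a small function (the function and parameter names are illustrative, not taken from the CT codebase):

```python
def ct_score(reached_goal: bool, chips_left: int, dist_to_goal: int) -> int:
    """CT score for one player: 100-point goal bonus, 5 points per
    leftover chip, and a 10-point penalty per square of the shortest
    path from the end position to the goal."""
    score = 5 * chips_left
    if reached_goal:
        score += 100
    else:
        score -= 10 * dist_to_goal
    return score

# A player who reaches the goal with 3 chips left: 100 + 15 = 115.
print(ct_score(True, 3, 0))   # → 115
# A player stuck 4 squares from the goal with 2 chips: 10 - 40 = -30.
print(ct_score(False, 2, 4))  # → -30
```

Note that the score depends only on the player's own position and chips, which matches the slide's point that performance does not depend on the other player's outcome.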

  8. Personality, Adaptive Learning (PAL) Agent • Data from a specific country → machine learning → human behavior model → decision making → take action

  9. Learning People's Reliability • Predict whether the other player will keep its promise

  10. Learning how People Accept Offers Accept or reject the proposal?

  11. Feature Set • Domain-independent features: • Current and resulting scores • Offer generosity • Reliability: from 0 (completely unreliable) to 1 (fully reliable) • Weighted reliability: aggregated over the previous rounds in the game • Domain-dependent feature: • Round number
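The reliability features can be sketched as follows. The per-round measure (fraction of promised chips actually sent) follows directly from the slide; the recency-weighted aggregation and its decay constant are illustrative assumptions, since the slide does not specify the weighting scheme:

```python
def round_reliability(chips_sent: int, chips_promised: int) -> float:
    """Reliability for one round: the fraction of promised chips actually
    sent, from 0 (completely unreliable) to 1 (fully reliable)."""
    if chips_promised == 0:
        return 1.0  # nothing was promised, so nothing was broken
    return chips_sent / chips_promised

def weighted_reliability(history: list[float], decay: float = 0.5) -> float:
    """Recency-weighted reliability over previous rounds. `history` is
    oldest-first; `decay` is an illustrative choice, not from the paper."""
    if not history:
        return 1.0
    weights = [decay ** i for i in range(len(history))]  # newest round first
    total = sum(w * r for w, r in zip(weights, reversed(history)))
    return total / sum(weights)

print(round_reliability(2, 4))                        # → 0.5
print(round(weighted_reliability([1.0, 0.5]), 3))     # → 0.667
```

With a decay of 0.5, the most recent round counts twice as much as the one before it, so a player who just broke a promise is judged less reliable than one who broke a promise early in the game.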

  12. How to Model People's Behavior • For each culture: • Use different features • Choose the learning algorithm that minimized error using 10-fold cross-validation • In the U.S. and Israel we used only domain-independent features • In Lebanon we added domain-dependent features
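The model-selection step (pick the learner that minimizes 10-fold cross-validation error) can be sketched in plain Python. Everything here is a toy stand-in: the fold-splitting helper and the majority-class baseline are illustrative, not the authors' learners or code:

```python
import random

def k_fold_error(learner_factory, data, k=10, seed=0):
    """Estimate classification error with k-fold cross-validation.
    `learner_factory(train)` returns a predict(x) function;
    `data` is a list of (features, label) pairs."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]
    errors = 0
    for i in range(k):
        test = folds[i]
        train = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        predict = learner_factory(train)
        errors += sum(1 for x, y in test if predict(x) != y)
    return errors / len(data)

def majority_factory(train):
    """Simplest possible 'learner': always predict the majority label."""
    labels = [y for _, y in train]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

# 100 toy examples, 70% labeled True: the majority baseline misses
# exactly the 30-example minority, so the CV error is 0.3.
data = [((i,), i % 10 < 7) for i in range(100)]
print(k_fold_error(majority_factory, data))  # → 0.3
```

In the same spirit, one would call `k_fold_error` once per candidate learner on each country's data and keep the learner with the lowest error for that country.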

  13. Data Collection with Sparse Data • Sources of data to train the classifiers: • 222 game instances of people playing a rule-based agent • U.S. and Israel: 112 game instances of people playing other people • Lebanon: 64 additional games collected with a "nasty agent" that was less reliable in fulfilling its agreements • The Lebanese players in the original data set almost always kept their agreements, and as a result PAL learned to never keep its own agreements

  14. People Learned Reliability

  15. Experiment Design • 3 countries, 157 people: • Israel: 63 • Lebanon: 48 • U.S.: 46 • 30-minute tutorial • Boards varied the dependencies between players • People were always the first proposer in the game • There was a single path to the goal

  16. Decision Making • There are 3 decisions PAL needs to make: • Reliability: determine PAL's transfer strategy • Accepting an offer: accept or reject a specific offer proposed by the opponent • Proposing an offer • Use backward induction over two rounds
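The two-round backward-induction idea for proposing an offer can be sketched as follows. The structure (expected value of an offer = acceptance probability times its value, plus rejection probability times the best value achievable in the next round) is the standard backward-induction recipe; the function names and the learned-model stand-ins are hypothetical, not PAL's actual implementation:

```python
def best_offer(offers, p_accept, value_if_accepted, value_next_round):
    """Choose the offer maximizing expected score over two rounds:
    if the opponent accepts, PAL gets the offer's value; if not, PAL
    falls back to the best value attainable in the following round.
    `p_accept` and `value_if_accepted` stand in for PAL's learned models."""
    def expected(offer):
        p = p_accept(offer)
        return p * value_if_accepted(offer) + (1 - p) * value_next_round
    return max(offers, key=expected)

# Toy numbers: a generous offer is likely accepted but worth less;
# a greedy offer is worth more but likely rejected.
offers = ["generous", "greedy"]
p = {"generous": 0.9, "greedy": 0.2}   # acceptance probabilities (toy)
v = {"generous": 80, "greedy": 120}    # PAL's value if accepted (toy)
print(best_offer(offers, p.get, v.get, value_next_round=50))  # → generous
```

Here the generous offer wins (0.9·80 + 0.1·50 = 77 versus 0.2·120 + 0.8·50 = 64), illustrating why an agent with a learned acceptance model may prefer offers that people in a given culture are actually likely to accept.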

  17. Success Rate: Getting to the Goal

  18. Performance Comparison: Averages

  19. Example in Lebanon • 2 chips for 2 chips; accepted → both sent • 1 chip for 1 chip; accepted → PAL did not send, the human sent • People were very reliable in the training games, so PAL learned that people in Lebanon were highly reliable • Games were relatively shorter

  20. Example in Israel • 2 chips for 2 chips; accepted → only PAL sent • 1 chip for 1 chip; accepted → only the human sent • 1 chip for 1 chip; accepted → only the human sent • 1 chip for 1 chip; accepted → only PAL sent • 1 chip for 3 chips; accepted → only the human sent • People were less reliable in the training games than in Lebanon • Games were relatively longer

  21. Conclusions • PAL is able to learn to negotiate proficiently with people across different cultures • PAL outperformed people in all dependency conditions and in all countries • This is the first work to show that a computer agent can learn to negotiate with people in different countries

  22. Colored Trails is easy to use for your own research • Open-source empirical test-bed for investigating decision making • Easy to design new games • Built-in functionality for conducting experiments with people • Over 30 publications • Freely available; extensive documentation • http://eecs.harvard.edu/ai/ct (or Google "colored trails") • THANK YOU! haimga@cs.biu.ac.il
