
Inferring micro-rules from macro-behavior in the Minority Game


Presentation Transcript


  1. Inferring micro-rules from macro-behavior in the Minority Game
  Alexis Arias, Ben Shargel, Eric Bonabeau
  Icosystem Corporation
  IMA Conference, Nov 5, 2003

  2. The Problem
  Under what conditions is it possible to identify behavioral rules at the micro level from aggregate output data?
  • In real-world applications:
    • Need to enhance predictive power
    • No direct information regarding micro behavior
    • Lack of expert consensus, but…
    • Some knowledge/assumptions about micro-strategies

  3. Inference in the Minority Game
  • Why the minority game?
    • Simple structure
    • Complex aggregate behavior results from individual interactions
    • Global interactions: individual behavior depends on aggregates
  • Questions:
    • Can the distribution of behavioral rules be inferred from observable time series data at three levels of aggregation?
      • Individual actions observable
      • Size of the minority observable
      • Action taken by the minority observable
    • What is the effect of increasing the available sample size (the number of individuals N and the length of the time series T) on the estimation error?

  4. Inference in the Minority Game
  Two models:
  • Discount Factor Model
    • Every individual holds one strategy, characterized by a discount factor
    • Finite memory
    • The strategy set has a natural ordering
  • Learning Model
    • Every individual holds a bag of strategies
    • In every period, individuals follow their most successful strategy
    • All strategies are re-evaluated every period

  5. Estimation Methodology
  • Assumptions:
    1. Parametric distribution of individual rules (Beta)
    2. Individual strategies are a function of the time series of the action taken by the minority (individuals' information sets are observable)
  • We maximize the likelihood of the observable data as a function of these parameters (MLE)
  • Under assumption 2, conditional on the history of the game, individual actions are independent random variables

  6. Discount Factor Model
  • Individual strategies are characterized by a discount factor λ ∈ [0, 1]
  • The distribution of discount factors in the population is Beta with parameters a, b
  • All individuals have the same finite memory: m periods
  • At each period t, given the history of the game h, the probability of attending the bar is:
    p(h, λ) = ( Σᵢ h(i)·λ^(m−i) ) / ( Σⱼ λ^(m−j) ),  with sums over i, j = 1, …, m
    where:
    • h is a binary vector of size m
    • h(i) is the ith element of h
  (A sketch of this computation follows.)
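
A minimal sketch of this attendance probability in Python. The function name and the convention that h[0] is the oldest observation are my assumptions; with λ < 1, the most recent entry (exponent 0) carries the largest weight, which is the discounting the slide describes.

```python
import numpy as np

def attendance_prob(h, lam):
    """p(h, lam) = sum_i h(i) * lam^(m-i) / sum_j lam^(m-j),
    with h a binary history of length m; h[0] is the oldest entry,
    so the most recent observation gets weight lam^0 = 1."""
    m = len(h)
    weights = lam ** np.arange(m - 1, -1, -1)  # lam^(m-1), ..., lam^0
    return np.dot(h, weights) / weights.sum()

# Example: m = 3, history [1, 0, 1], discount factor 0.5
# weights = [0.25, 0.5, 1.0] -> p = (0.25 + 1.0) / 1.75 ≈ 0.714
print(attendance_prob(np.array([1, 0, 1]), 0.5))
```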

  7. Results
  • Panel Data
  • We estimated individual discount factors
  • The likelihood of the time series of actions taken by an individual, {a(t)}, conditional on λ and {h(t)} is:
    L({a(t)} | λ, {h(t)}) = ∏ₜ [ Ind(a(t) = 1)·p(h(t), λ) + (1 − Ind(a(t) = 1))·(1 − p(h(t), λ)) ]
  • λ is easy to estimate even for small data sets (50 periods); a sketch follows
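
A minimal sketch of this per-individual MLE, assuming SciPy. The helper restates p(h, λ) from slide 6; the clipping guard against log(0) and the bounded scalar search are my own choices, not the authors' stated procedure.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def attendance_prob(h, lam):
    # p(h, lam) as defined on slide 6
    w = lam ** np.arange(len(h) - 1, -1, -1)
    return np.dot(h, w) / w.sum()

def neg_log_lik(lam, actions, histories):
    """Negative log-likelihood of one individual's action series {a(t)}
    given a candidate discount factor lam and observed histories {h(t)}."""
    ll = 0.0
    for a, h in zip(actions, histories):
        p = np.clip(attendance_prob(h, lam), 1e-12, 1 - 1e-12)
        ll += np.log(p) if a == 1 else np.log(1.0 - p)
    return -ll

# MLE for one individual, searched over lam in [0, 1]:
# lam_hat = minimize_scalar(neg_log_lik, bounds=(0, 1), method="bounded",
#                           args=(actions, histories)).x
```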

  8. Results
  • Size of the Minority Observable
  • The likelihood of the time series of the size of the minority, {s(t)}, given the corresponding minority-action series {AM(t)}, conditional on {h(t)}, a and b is:
    L({s(t)} | {h(t)}, {AM(t)}, a, b) = ∏ₜ b(s(t), N; δ(h(t), a, b))
    where b(s(t), N; δ(h(t), a, b)) is the probability of s(t) successful trials out of N with probability of success equal to δ(h(t), a, b), and
    δ(h(t), a, b) = ∫ p(h(t), λ)·Beta(λ; a, b) dλ        if AM(t) = 1
    δ(h(t), a, b) = 1 − ∫ p(h(t), λ)·Beta(λ; a, b) dλ    if AM(t) = 0
  • We maximize the likelihood with respect to a, b (see the sketch below)
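
A minimal sketch of δ and this likelihood, assuming SciPy. The numerical quadrature and helper names are my assumptions; the inline weights reproduce p(h, λ) from slide 6, and the integral may need care near the endpoints when a or b is below 1.

```python
import numpy as np
from scipy import stats, integrate

def delta(h, a, b, am):
    """Expected attendance probability under Beta(a, b): the integral of
    p(h, lam) * Beta(lam; a, b) over [0, 1], flipped to its complement
    when the minority action am = 0."""
    exps = np.arange(len(h) - 1, -1, -1)
    def integrand(lam):
        w = lam ** exps                      # p(h, lam) from slide 6
        return (np.dot(h, w) / w.sum()) * stats.beta.pdf(lam, a, b)
    val, _ = integrate.quad(integrand, 0.0, 1.0)
    return val if am == 1 else 1.0 - val

def log_lik_minority_size(a, b, sizes, ams, histories, N):
    """Log-likelihood of the minority-size series {s(t)}:
    each s(t) is binomial(N, delta(h(t), a, b))."""
    return sum(stats.binom.logpmf(s, N, delta(h, a, b, am))
               for s, am, h in zip(sizes, ams, histories))
```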

  9. Results
  • The system is in principle identified:
    • Expected probabilities are different for every pair of underlying distributions and every history
  • Simulations
    • For 100 different pairs (a, b) we simulated the game with N individuals for T periods
    • N and T range from 50 to 200
    • For each simulation we estimated the parameters (a, b) and calculated an estimation error
  • The estimation error we used is (sketched below):
    D(a, b; a*, b*) = (1/2) ∫ | Beta(λ; a, b) − Beta(λ; a*, b*) | dλ
    where a, b are the true parameters and a*, b* are the estimates
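
A minimal sketch of this error measure, assuming SciPy; the quadrature settings are my own. This is the total variation distance between the two Beta densities, so it lies in [0, 1]: 0 for identical parameters, approaching 1 for distributions with nearly disjoint mass.

```python
from scipy import stats, integrate

def estimation_error(a, b, a_star, b_star):
    """D(a, b; a*, b*) = (1/2) * integral over [0, 1] of
    |Beta(lam; a, b) - Beta(lam; a*, b*)| d lam."""
    f = lambda lam: 0.5 * abs(stats.beta.pdf(lam, a, b)
                              - stats.beta.pdf(lam, a_star, b_star))
    val, _ = integrate.quad(f, 0.0, 1.0, limit=200)
    return val

print(estimation_error(2.0, 5.0, 2.0, 5.0))  # identical parameters -> ~0.0
print(estimation_error(2.0, 5.0, 5.0, 2.0))  # mirrored Betas -> well above 0
```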

  10. Results

  11. Results
  • Results depend on the underlying parameters a, b
  • The estimation error is positively related to the mean and variance of the underlying distribution

  12. Results
  • Effect of increasing sample size: the estimation error is significantly reduced

  13. Results
  • Improvements are more significant for distributions with high mean and high variance

  14. Results • Increasing N or T has a similar effect on the estimation error

  15. Results
  • Action of the Minority Observable
  • The likelihood of the time series of the action of the minority, {AM(t)}, conditional on {h(t)}, a and b is:
    L({AM(t)} | {h(t)}, a, b) = ∏ₜ Σₙ b(n, N; δ(h(t), a, b))
    where the summation is carried over n < N/2
  • We maximize the likelihood with respect to a, b (a sketch follows)
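
A minimal sketch of this likelihood, assuming SciPy and reusing the delta helper from the slide-8 sketch. The binomial CDF evaluated at the largest integer strictly below N/2 gives the sum over n < N/2 in one call; this shortcut is my choice.

```python
import numpy as np
from scipy import stats

def log_lik_minority_action(a, b, ams, histories, N, delta):
    """Log-likelihood of the minority-action series {AM(t)}: at each t,
    sum the binomial pmf b(n, N; delta) over n < N/2, i.e. the
    probability that the side taking action AM(t) was the minority."""
    cutoff = (N - 1) // 2  # largest n strictly below N/2
    ll = 0.0
    for am, h in zip(ams, histories):
        d = delta(h, a, b, am)  # delta as sketched for slide 8
        ll += stats.binom.logcdf(cutoff, N, d)
    return ll
```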

  16. Results
  • Very poor results even for N = 200, T = 200

  17. Results Comparing different levels of aggregation

  18. Future Work
  • Extend the length of the time series
  • Analyze prediction loss
  • Reduce the level of correlation of individual actions
  • Consider multimodal distributions

  19. Learning Model
  • Individuals hold a bag of strategies
  • In every period they choose strategy s with probability
    ρ(s, t) = e^A(s,t) / Σₛ′ e^A(s′,t)
    where A(s, t) is the accumulated reward of strategy s at time t
  • In every period, successful strategies receive 1 point and the others 0
  • Strategies are characterized by three components:
    • A binary vector v of size m
    • A threshold value θ
    • An operator, either ≤ or >
  • An individual takes action 1 if v·h ≤ θ (or v·h > θ, depending on the operator)
  (A sketch of the strategy-choice rule follows.)
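
A minimal sketch of the softmax strategy choice and the strategy rule, assuming the encoding above. The function names, the max-shift for numerical stability, and the tuple encoding of a strategy are my assumptions.

```python
import numpy as np

def choose_strategy(rewards, rng=None):
    """Pick a strategy index with probability
    rho(s, t) = exp(A(s, t)) / sum_s' exp(A(s', t)),
    where rewards[s] = A(s, t), the accumulated reward of strategy s."""
    rng = rng or np.random.default_rng()
    z = np.exp(rewards - np.max(rewards))  # shift exponents for stability
    return rng.choice(len(rewards), p=z / z.sum())

def act(strategy, h):
    """A strategy is (v, theta, op): take action 1 iff v.h <= theta
    (op '<=') or v.h > theta (op '>')."""
    v, theta, op = strategy
    score = np.dot(v, h)
    return int(score <= theta) if op == "<=" else int(score > theta)

# Example: two strategies with accumulated rewards [3.0, 1.0]
print(choose_strategy(np.array([3.0, 1.0])))
print(act((np.array([1, 0, 1]), 1.5, "<="), np.array([1, 1, 0])))  # -> 1
```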

  20. Results
  • Panel Data
  • We estimated individuals' bags and the initial level of accumulated rewards
  • Implemented a GA to maximize the log-likelihood (a generic skeleton is sketched below)
  • Preliminary results are encouraging: successful estimation in 80% of cases for T > 150, N > 100
  • Increasing N and T has a significant effect, as the strategy space is more populated
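
The slide says only that a GA was used. As an illustration, here is a minimal steady-state GA skeleton for maximizing a log-likelihood over a discrete space such as candidate strategy bags; the tournament selection, mutation-only operators, and replace-worst scheme are my assumptions, not the authors' implementation.

```python
import numpy as np

def ga_maximize(log_lik, random_candidate, mutate,
                pop_size=50, steps=2000, rng=None):
    """Steady-state GA: keep a population of candidates, repeatedly breed
    one child by binary tournament + mutation, and replace the worst
    member whenever the child improves on it."""
    rng = rng or np.random.default_rng()
    pop = [random_candidate(rng) for _ in range(pop_size)]
    fit = np.array([log_lik(c) for c in pop])
    for _ in range(steps):
        i, j = rng.integers(pop_size, size=2)   # binary tournament
        parent = pop[i] if fit[i] >= fit[j] else pop[j]
        child = mutate(parent, rng)
        child_fit = log_lik(child)
        worst = int(np.argmin(fit))
        if child_fit > fit[worst]:
            pop[worst], fit[worst] = child, child_fit
    best = int(np.argmax(fit))
    return pop[best], fit[best]

# Usage: supply log_lik over a candidate bag, a random_candidate(rng)
# generator, and a mutate(candidate, rng) operator for the encoding used.
```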
