
Seeking Consistent Stories






  1. Seeking Consistent Stories by Reinterpreting or Discounting Evidence: An Agent-Based Model (CSCN Presentation)

  2. Agenda • The Phenomenon • Agent-Based Model (ABM) Primer • The Model • Sample Run • Experiments

  3. Research Interests • Goal: bridge psychology, law, and computational modeling • persuasion and decision making • law classes => persuasion techniques • storytelling: narrative coherence, metaphors • Story model of jury decision making (Pennington & Hastie, 1986)

  4. Real-Life Scenario: Bench Trial • Prosecution (p): “Guilty!” • Defense lawyer (d): “Innocent!”

  5. Sequential Evidence – What’s Normative [Diagram: Evidence 1 → Evidence 2 → Evidence 3 → … → Evidence N → Official Deliberation → Story/Verdict]

  6. Empirical Literature on JDM • People form “coherent” stories that support an option/verdict (Pennington & Hastie, 1988) • Confidence in an option/verdict increases with “coherence” (Glockner et al., under review) • Decision threshold: a story must be “sufficiently strong” (supported by much consistent evidence) or “sufficiently stronger” than other stories (review by Hastie, 1993) • Narrative coherence: consistency, causality, completeness • “Consistency” aspect of “good” stories: consistency between pieces of evidence within a story, and consistency of evidence with the favored story

  7. Example Case (Pennington & Hastie, 1988) Scenario: Defendant Frank Johnson stabbed and killed Alan Caldwell • Evidence := facts or arguments given in support of a story/verdict • Facts — “Johnson took a knife with him”; “Johnson pulled out his knife” • Arguments — “Johnson pulled out the knife because he wanted revenge” vs. “Johnson pulled out the knife because he was afraid” • Story := set of evidence supporting a given verdict • The same evidence can be framed to support multiple verdicts/stories!

  8. Sequential Evidence – More Descriptive [Diagram: Evidence 1 → Evidence 2 → Evidence 3 → … → Evidence N, with a premature story/verdict forming at evidence n < N; along the way the judge compares, deliberates, and interprets] (Brownstein, 2003; Russo et al., 2000)

  9. How People Deal with Incoming Evidence People don’t just take evidence at face value — they are selective! Possible reactions to new evidence in light of old: • “reinforce” each other • “reinterpret” the less plausible one (Russo et al., etc.), e.g., misremember the info • “discount” the less plausible one (Winston) • actively “seek” more evidence (not modeled here; e.g., the judge asks the lawyers follow-up questions) Example: Existing evidence A: “Johnson was not carrying a knife.” New evidence B1: “Johnson is nonviolent.” Inconsistent new evidence B2: “Johnson pulled a knife.” Reinterpret B2: “Johnson grabbed a knife from Caldwell” (i.e., explain it was Caldwell’s knife, not Johnson’s). Discount B2: “The witness must be mistaken.”

  10. Agenda • The Phenomenon • Agent-Based Model (ABM) Primer • The Model • Sample Run • Experiments

  11. What is Agent-Based Modeling? • agents + interactions • start simple; build up • Key terminology: agents, system, dynamics, agent births and deaths, interactions/competitions, parameters

  12. Symbiotic Relationship: Behavioral Experiments ↔ ABMs (experiments provide input to ABMs; ABM predictions are tested by experiments) Contributions ABMs can make: • description: inform base assumptions • understanding: study processes in detail • parsimony: demonstrate emergence of seemingly complex phenomena from a small set of simple rules • predictions: generate new observations/predictions

  13. (Other Models (Hastie, 1993))

  14. (Contrast with Bayesian & Algebraic Models) • Algebraic (additive), Bayesian (multiplicative): • “single meter” of overall plausibility • ABM allows: • revisiting and reconsidering previously-processed evidence • interaction/competition between new evidence and individual pieces of previous evidence (not just conglomerate)

  15. Contrast with Story Model & ECHO • Explanatory Coherence Model := Thagard’s Theory of Explanatory Coherence (TEC) + Story Model • ECHO implemented only discounting, not reinterpretation • This ABM enforces consistency at the lower, evidence-agent level, as opposed to the higher, story level • Unlike previous ABMs, it models agents within an individual as the system

  16. Motivations • Study the emergence of consistent stories via reinterpretation, discounting, and reinforcement mechanisms • Are these mechanisms adaptive? • Do they aid consistency? • Speed-accuracy tradeoff: do they avoid indecisiveness and increase convergence rates? • Do order effects hurt accuracy?

  17. Agenda • The Phenomenon • Agent-Based Model (ABM) Primer • The Model • Sample Run • Experiments

  18. Goal • Model consistency-seeking process in story formation • Present evidence-agents to judge-system • Judge-system compares evidence-agents => keep, reinterpret, or discount evidence (agents “interact” & “compete”) • Until sufficiently strong/stronger story emerges

  19. Agents [Figure: evidence-agents Evid 1, Evid 2, … each carrying a verdict (“G”, “N”, …), binary abstract features (x/y), and a plausibility index (0%-100%, e.g., 34%, 14%, 89%)] • Evidence-agents are composed of “features” • Operationalize consistency between agents as: • “similarity” in abstract features (cf. Axelrod’s culture ABM, 1997) • “inverse Hamming distance” := % of feature matches
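The “inverse Hamming distance” above can be sketched as follows (the class and method names are hypothetical; the slide defines the measure only as the fraction of matching abstract features):

```java
// Sketch: consistency between two evidence-agents as the "inverse Hamming
// distance" -- the fraction of positions where their binary features match.
class InverseHamming {
    // Features are binary, e.g. 'x' or 'y', as shown on the slide.
    public static double consistency(char[] a, char[] b) {
        if (a.length != b.length)
            throw new IllegalArgumentException("feature vectors differ in length");
        int matches = 0;
        for (int i = 0; i < a.length; i++) {
            if (a[i] == b[i]) matches++;
        }
        return (double) matches / a.length; // % of feature matches, 0.0 - 1.0
    }
}
```

Two fully matching vectors score 1.0; fully mismatching vectors score 0.0.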

  20. Parameters

  21. (Java GUI)

  22. System & Agent Births • Judge-system represents the judge’s mind • Evidence presentation = “agent birth” • Initialization of N0 agents: • randomly generated, with uniform distributions — even for the plausibility index, standing in for prior beliefs (Kunda, 1990) or knowledge (Klein, 1993), OR… • user-specified
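The random-initialization branch above might look like this (the `Agent` class and its field names are hypothetical stand-ins; only the uniform distributions and the G/N/I verdict labels come from the slides and run logs):

```java
// Sketch: birth of a randomly initialized evidence-agent.
class AgentInit {
    static final char[] VERDICTS = {'G', 'N', 'I'}; // labels seen in the run logs

    public static class Agent {
        public char verdict;
        public char[] features;     // binary abstract features, 'x' or 'y'
        public double plausibility; // plausibility index, 0.0 - 1.0

        Agent(char v, char[] f, double p) { verdict = v; features = f; plausibility = p; }
    }

    // Uniform initialization -- even the plausibility index is uniform,
    // standing in for prior beliefs (Kunda, 1990) or knowledge (Klein, 1993).
    public static Agent randomAgent(int numFeatures, java.util.Random rng) {
        char[] f = new char[numFeatures];
        for (int i = 0; i < numFeatures; i++) f[i] = rng.nextBoolean() ? 'x' : 'y';
        return new Agent(VERDICTS[rng.nextInt(VERDICTS.length)], f, rng.nextDouble());
    }
}
```

A user-specified initialization would simply construct `Agent` objects directly instead of drawing them from the random number generator.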

  23. (Topology) results\sysout_081118_0455.log
Live Evidence--Printing 8 agents:
01 02 03 04 05 06 07 08
N G I G N I N N
y y y y y y y y
y y y y y y y y
100% 100% 100% 100% 100% 100% 100% 80%
Dead/Rejected Evidence--Printing 9 agents:
01 02 03 04 05 06 07 08 09
I G G I G I N N N
y y x x y x x x y
y x x x y x x y y
00% 00% 00% 00% 00% 00% 00% 00% 00%
• To keep track of evidence-agents and their order of presentation: lists of agents in order of birth/presentation to the system • The latest-born agent always appears at the end of a list • No geographical “topology”

  24. Stories • Story := {evidence-agents supporting a verdict} [Figure: story promoting the “Innocent” verdict (Evid 1, Evid 2, Evid 3 at 14%, 34%, 51% => strength = 99% / 3 = 33%) vs. story promoting the “Guilty” verdict (one Evid at 89% => strength = 89%)] • Consistency = inverse Hamming distance amongst the evidence := (Σ over evidence pairs of # feature matches) / (# evidence pairs × F), where F = # features • Plausibility index (“strength”) of a story = average of its evidence-agents’ plausibility indices
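The two story metrics above can be sketched as follows (`StoryMetrics` and its method names are my own; the formulas follow the slide):

```java
// Sketch: a story's strength is the mean plausibility of its evidence, and
// its consistency is the mean pairwise inverse Hamming distance.
class StoryMetrics {
    public static double strength(double[] plausibilities) {
        double sum = 0;
        for (double p : plausibilities) sum += p;
        return sum / plausibilities.length; // average plausibility index
    }

    // consistency = (sum over evidence pairs of #feature matches) / (#pairs * F)
    public static double consistency(char[][] evidenceFeatures) {
        int F = evidenceFeatures[0].length;
        int pairs = 0, matches = 0;
        for (int i = 0; i < evidenceFeatures.length; i++) {
            for (int j = i + 1; j < evidenceFeatures.length; j++) {
                pairs++;
                for (int k = 0; k < F; k++) {
                    if (evidenceFeatures[i][k] == evidenceFeatures[j][k]) matches++;
                }
            }
        }
        return pairs == 0 ? 1.0 : (double) matches / (pairs * F);
    }
}
```

On the slide's “Innocent” story, `strength` of {14%, 34%, 51%} reproduces the 99% / 3 = 33% figure.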

  25. Births & Interactions • If time period t = k * I, where k is some constant, birth a new agent • Randomly select an agent to compare with the newest-born • Compare the agents and compute their consistency. If consistency… • ≥ C: both agents are “winners”; increase both agents’ plausibility indices by D • < C: “inconsistency conflict” => competition; the agent with the higher plausibility index is the “winner” (“draw” if the indices are identical). Then, if consistency… • ≥ N: the “loser” is salvageable by “reinterpreting” one of its inconsistent features to match the winner’s • < N: discount the loser (decrease its plausibility by D) • Agents with plausibility = 0% => death and removal from the system • Gather the stories in the system and check their strengths. A “winning story” is found if exactly one story has strength ≥ S and/or |strength₁ − strength₂| ≥ Sd against all competing stories; stop the run early.
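A minimal sketch of one interaction step, under my reading of the thresholds (consistency ≥ C reinforces, ≥ N reinterprets, < N discounts). The class and method names are hypothetical; C, N, and D are the slide's parameters:

```java
// Sketch: one interaction between two evidence-agents a and b, identified by
// index into the parallel arrays plaus (plausibility) and feats (features).
class Interaction {
    // Feature-match fraction between two binary feature vectors.
    static double consistency(char[] a, char[] b) {
        int m = 0;
        for (int i = 0; i < a.length; i++) if (a[i] == b[i]) m++;
        return (double) m / a.length;
    }

    // Mutates plaus/feats in place and reports which reaction occurred.
    // C = reinforcement threshold, N = salvageability threshold, D = delta.
    public static String interact(double[] plaus, char[][] feats, int a, int b,
                                  double C, double N, double D) {
        double cons = consistency(feats[a], feats[b]);
        if (cons >= C) {                         // consistent: reward both agents
            plaus[a] = Math.min(1.0, plaus[a] + D);
            plaus[b] = Math.min(1.0, plaus[b] + D);
            return "reinforce";
        }
        if (plaus[a] == plaus[b]) return "draw"; // identical indices: no winner
        int loser = plaus[a] < plaus[b] ? a : b;
        int winner = loser == a ? b : a;
        if (cons >= N) {                         // salvageable: reinterpret loser
            for (int i = 0; i < feats[loser].length; i++) {
                if (feats[loser][i] != feats[winner][i]) {
                    feats[loser][i] = feats[winner][i]; // flip one mismatched feature
                    break;
                }
            }
            return "reinterpret";
        }
        plaus[loser] = Math.max(0.0, plaus[loser] - D); // discount; death at 0%
        return "discount";
    }
}
```

A full run would call this every I time steps after each birth, then apply the stopping check over the stories in the system.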

  26. Operationalizing Consistency--Examples Possible reactions to new evidence: • Completely consistent (e.g., 100% of features match) => “reinforce” both; reward consistency • Inconsistent but salvageable (e.g., 50% of features match) => “reinterpret” the less plausible “loser” • Inconsistent and not salvageable (e.g., 0% of features match) => “discount” the “loser”; punish inconsistency • In each case, plaus(Evid1) > plaus(Evid2) => Evid2 is the “loser” [Figure: three example pairs of evidence-agents with their feature vectors and plausibility indices before and after interaction]

  27. Agenda • The Phenomenon • Agent-Based Model (ABM) Primer • The Model • Sample Run • Experiments

  28. Sample Output
Live Evidence--Printing 8 agents:
01 02 03 04 05 06 07 08
N G I G N I N N
y y y y y y y y
y y y y y y y y
100% 100% 100% 100% 100% 100% 100% 80%
Dead/Rejected Evidence--Printing 9 agents:
01 02 03 04 05 06 07 08 09
I G G I G I N N N
y y x x y x x x y
y x x x y x x y y
00% 00% 00% 00% 00% 00% 00% 00% 00%
3 stories found:
-Verdict N supported by 4 evidence, with 1.00 consistency => 0.48 strength
-Verdict G supported by 2 evidence, with 1.00 consistency => 0.25 strength
-Verdict I supported by 2 evidence, with 1.00 consistency => 0.25 strength
*** Found winning story! Verdict N supported by 4 evidence, with 1.00 consistency => 0.48 strength
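The “winning story” check printed at the end of a run like this can be sketched as follows (`StoppingRule` is a hypothetical name; S and Sd are the threshold parameters named on slide 25, and this sketch applies both criteria together):

```java
// Sketch: a winning story must reach strength S and beat every rival
// story's strength by at least the margin Sd.
class StoppingRule {
    // Returns the index of the winning story, or -1 if none has won yet.
    public static int winner(double[] strengths, double S, double Sd) {
        int best = -1;
        for (int i = 0; i < strengths.length; i++)
            if (best < 0 || strengths[i] > strengths[best]) best = i;
        if (strengths[best] < S) return -1;            // not sufficiently strong
        for (int i = 0; i < strengths.length; i++)
            if (i != best && strengths[best] - strengths[i] < Sd)
                return -1;                              // not sufficiently stronger
        return best;
    }
}
```

With the strengths from the log above (0.48, 0.25, 0.25) and, say, S = 0.4 and Sd = 0.1, the N story wins; in the “stuck” run on the next slide (0.29, 0.34, 0.34) no story clears either bar.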

  29. Judge-System Can Get Stuck… results\sysout_STUCK.log
Live Evidence--Printing 17 agents:
01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17
N G G I G I I I I N I G G N N N G
x x x x x x x x x x x x x x x x x
y y y x y y y y y y y y y y y y y
100% 100% 100% 90% 100% 100% 100% 100% 100% 100% 84% 100% 79% 100% 100% 98% 100%
Dead/Rejected Evidence--Printing 10 agents:
01 02 03 04 05 06 07 08 09 10
I I I I N I I G N G
y x y y y y y y y y
y x x x x x x y x x
00% 00% 00% 00% 00% 00% 00% 00% 00% 00%
3 stories found:
-Verdict N supported by 5 evidence, with 1.00 consistency => 0.29 strength
-Verdict G supported by 6 evidence, with 1.00 consistency => 0.34 strength
-Verdict I supported by 6 evidence, with 1.00 consistency => 0.34 strength
No winning story found.

  30. Output of a Run

  31. Agenda • The Phenomenon • Agent-Based Model (ABM) Primer • The Model • Sample Run • Experiments

  32. 5 Experiments • Experiment 1: Emergence of consistency • Decision speed: • Experiment 2: Speedup • Experiment 3: Accuracy tradeoff • Decision accuracy—order effects: • Experiment 4: Emergence of order effects • Experiment 5: Extending deliberation

  33. Obtaining Consistent Stories – Q1 • Q1: Is evidence-level consistency sufficient? Which of the 3 mechanisms matter? • Implementation: No rules • DV: Consistency of stories

  34. Obtaining Consistent Stories – Q1 Results Reinterpret > Discount > Reinforcement

  35. Speed-Accuracy Tradeoff – Q2 • Q2: Do reinterpretation & discounting increase speed? • Prediction: Reinterpretation & discounting allow much faster convergence • DV: Time to converge, max nEvid (max consistency) • Implementation: Both rules; all cases include reinforcement

  36. Speed-Accuracy Tradeoff – Q2 Results Results of 10 Runs—Time to First Convergence, Maximum Number of Evidence, Maximum Story Consistency

  37. Speed-Accuracy Tradeoff – Q2 Results Medians of 10 Runs—Time to First Convergence, Maximum Number of Evidence, Maximum Story Consistency Reinterpret > Discount > Reinforcement only

  38. Speed-Accuracy Tradeoff – Q3 • Q3: What happens if we allow the process to continue even after a winner has been found? Is there any point to “holding off judgment” until all the evidence has been presented? • DV: Which story wins? (Strength) • Prediction: The leader will only be strengthened; competing stories never get a foothold • Implementation: Allow the system to continue running even after a winner is found

  39. Speed-Accuracy Tradeoff – Q3 Results Figure 2. Example run where the winner switches • Over 20 runs, # runs with the same winner : # runs with a different winner = 15:1 • => Stopping deliberation early is a good heuristic, saving time & effort

  40. Order Effects – Q4 • The stopping heuristic may be OK only if evidence arrives in random order…what if the evidence presentation is biased? • Q4: Is there an order effect? • Took {20 randomly-generated evidence} and then “doctored” it; % D wins = “accuracy” • IV: Presentation order — P goes first, followed by D, vs. interwoven evidence (e.g., p d d d p d p d p d) • Prediction: The earlier, weaker side (e.g., P) beats out the later, stronger side (e.g., D); “accuracy” of D…P… > PDPDPD… > P…D… • DV: Time to converge, (projected) winner

  41. Order Effects – Q4 Results Figure 3. Sample random presentation order run.

  42. Order Effects – Q4 Results Figure 4. Sample D…P… biased presentation order run.

  43. Order Effects – Q4 Results Figure 5. Sample P…D… biased presentation order run.

  44. Order Effects – Q4 Results Table 4. Results of 10 Runs Varying Presentation Order of Evidence • Strong primacy effect • The Experiment 3 conclusion no longer holds; longer deliberation DOES help!

  45. Increasing Deliberation – Q5 • Q5: Can deliberating more often between births reduce order effects (i.e., increase “accuracy”)? • Implementation: Use the P…D… model from Experiment 4 • IV: Varied I (I = 0 => wait until the end to deliberate) • DV: % of runs that P wins

  46. Increasing Deliberation – Q5 Results Too much lag time during trials can be detrimental!

  47. Summary of Key Findings • Why reinterpret and discount evidence? • Maximizes consistency (Experiment 1) • Hastens convergence on a decision (Experiment 2); in both, Reinterpret > Discount > Reinforcement • Speed-accuracy tradeoff? Yes… • Accuracy is OK if evidence is balanced (Experiment 3) • Not OK — primacy effect — if biased (Experiment 4) • => Important to interweave evidence, as in real trials! • Can reduce the primacy effect by decreasing premature deliberation (Experiment 5) • All achieved by modeling consistency at the evidence level, not the story level

  48. (Future Expansions) • Q6: What happens when we introduce a bias toward certain verdicts? • Prediction: A verdict-driven process takes less time to converge • DV: Time to converge (consistency of stories) • Implementation: Add a favoredVerdict parameter • Q7: In general, what conditions lead to indecision?

  49. BACKUPS

  50. Agenda • The Phenomenon • Agent-Based Model (ABM) Primer • The Model • Sample Run • Experiments
