KI2 – 8. Heterogeneous-Information Integration
prof. dr. Lambert Schomaker
Kunstmatige Intelligentie / RuG

Presentation Transcript


  1. KI2 – 8 Heterogeneous-Information Integration prof. dr. Lambert Schomaker Kunstmatige Intelligentie / RuG

  2. Heterogeneous-information integration • aka • multi-sensor fusion • multi-expert combination • multi-agent collaboration • The improved use of multiple information sources that differ in unit and scale

  3. Heterogeneous-information integration • Examples: • terrorist & weapon classification • friend or foe • forensic evidence collection • finding oil sources • pattern classification by multiple experts • audio-visual speech recognition

  4. … different units … • Celsius • microgram • Volt • Ampere • Lumen • probability • pseudo-probability • integer count

  5. … different scale … • ratio scale • interval scale • ordinal scale (1st 2nd 3rd 4th 5th 6th …) • nominal scale • yes/no • green red purple • good bad ugly • true/false

  6. Architecture, example [diagram: the real world yields measurement i and measurement j, which feed Expert 1 (NN, agent k), Expert 2 (rule-based, agent l) and Expert 3 (Bayesian, agent m); their outputs enter COMBINE, which produces the DECISION]

  7. How to combine heterogeneous information? • trained parameter-estimation methods • context-free methods

  8. Trained, parametric combination methods • Use a trainable function approximator: • mean field (linear, weights) • multi-layer perceptron (NN) • polynomial • Bayes! • cumbersome: both the individual components and the combination must be trained • if a new module or expert is added, the system must be completely retrained! • independent training sets are needed for the single functions and for the combination function
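A minimal sketch of the first option above, a mean-field (linear, weighted) combiner. The experts, scores and training set are hypothetical; weights are fitted by plain gradient descent on squared error, starting from an unweighted average:

```python
# Sketch of a trained linear ("mean field") combiner (assumed setup:
# two hypothetical experts emit confidence scores in [0, 1] for a
# binary target; weights are fitted on a small illustrative set).

def combine(weights, scores):
    """Weighted linear combination of expert scores."""
    return sum(w * s for w, s in zip(weights, scores))

def train(samples, targets, n_experts, lr=0.1, epochs=500):
    """Fit combination weights by gradient descent on squared error."""
    w = [1.0 / n_experts] * n_experts  # start from a plain average
    for _ in range(epochs):
        for scores, t in zip(samples, targets):
            err = combine(w, scores) - t
            for i in range(n_experts):
                w[i] -= lr * err * scores[i]
    return w

# Illustrative data: expert 0 is informative, expert 1 is mostly noise.
samples = [(0.9, 0.4), (0.8, 0.9), (0.2, 0.6), (0.1, 0.2)]
targets = [1, 1, 0, 0]
w = train(samples, targets, n_experts=2)
# Training assigns the informative expert a larger weight than the noisy one.
```

This also illustrates the slide's caveat: the weights are tied to this fixed set of experts, so adding a fourth expert means refitting from scratch.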

  9. Context-free combination methods • majority voting • plurality voting • product rule • sum rule • rank combination schemes

  10. Voting • A candidate ci is a person, object or proposal; C is the set of all possible candidates, and Ce is the set of candidates taking part in a particular election • A voter is a function vj : Ce → R; in words, each candidate ci partaking in the election obtains a real-valued confidence vj(ci)

  11. Election • An election is a tuple (Ce, Ve) where Ce ⊆ C and Ve ⊆ V, such that ∀ vj ∈ Ve : vj : Ce → R, yielding |Ve| orderings of the candidates in R

  12. Voting system criteria • Condorcet winner: wins against every other candidate if elections were held in a pairwise fashion. A Condorcet loser can exist too • Consistency: if ci is the winner for voters Vk and for voters Vm, then ci should also be the winner if the election is based on Vk ∪ Vm

  13. More voting-system criteria • Monotonicity: if new votes become available, this should not reverse the existing valuation (humans often react non-monotonically in a sequential voting procedure). Voting procedures which eliminate candidates one by one are also non-monotonic. • Pareto optimality: the voting system chooses cx over cy if all voters choose cx over cy

  14. Example: majority vote in unreliable but independent experts
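The point of slide 14 (whose worked table is not preserved in this transcript) can be sketched numerically: if each of n experts is independently correct with probability p > 0.5, the majority vote is correct more often than any single expert. The values below are illustrative:

```python
# Majority vote over unreliable but independent experts:
# P(majority correct) for n independent experts, each correct with
# probability p (n odd, so no ties).
from math import comb

def majority_accuracy(n, p):
    """Probability that more than half of n independent experts are correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# e.g. five experts at 70% individual accuracy:
print(round(majority_accuracy(5, 0.7), 4))  # 0.8369, above any single expert
```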

  15. Special case: Borda rank combination • Each of N voters ranks M candidates • The assumption is that an optimal ranking exists • Individual voters utilize an unknown evaluation function vj : Ce → R where j = 1…N, e = 1…M • Evaluations are sorted, such that the ‘best’ evaluation gets rank 1, etc., up to M, ‘worst’
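The sorting step on slide 15 can be sketched as follows; the example scores are hypothetical stand-ins for the 0-100 evaluation scores of the next slide:

```python
# Borda preprocessing: map one voter's evaluation scores to ranks,
# with the best score getting rank 1 and the worst rank M.

def scores_to_ranks(scores):
    """Map one voter's scores (higher = better) to ranks 1..M."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

# e.g. one voter's scores for candidates A, B, C:
scores_to_ranks([72, 95, 40])  # -> [2, 1, 3]: B best, A second, C worst
```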

  16. Example: Evaluation scores 0-100

  17. Example: Ranks

  18. Example: Ranks

  19. How to combine rankings? • Several models are possible • standard Borda: take the average (best guess) • also: • median rank (disregard outlying ranks) • mode of ranks (plurality of ranks) • min of ranks (optimistic) • max of ranks (pessimistic)
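The five combination models above can be sketched in one function; the ranks assigned by the five voters are hypothetical. Note that Python's `statistics.mode` returns the first most-frequent value when several ranks tie for plurality:

```python
# Combining one candidate's ranks across voters under the models of
# slide 19: mean (standard Borda), median, mode, min, max.
from statistics import mean, median, mode

def combine_ranks(ranks, rule="mean"):
    """Combine one candidate's ranks across voters under the given rule."""
    rules = {"mean": mean,      # standard Borda: average rank
             "median": median,  # disregard outlying ranks
             "mode": mode,      # plurality of ranks
             "min": min,        # optimistic
             "max": max}        # pessimistic
    return rules[rule](ranks)

# Hypothetical ranks given to one candidate by five voters:
ranks = [1, 2, 2, 5, 1]
combine_ranks(ranks, "mean")    # 2.2
combine_ranks(ranks, "median")  # 2
combine_ranks(ranks, "min")     # 1
combine_ranks(ranks, "max")     # 5
```

The candidates are then ordered by their combined value: the lowest combined rank wins.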

  20. standard Borda: mean rank

  21. modal rank

  22. min rank

  23. min rank How to solve ties?

  24. max rank

  25. How to solve ties in the combined Borda ranking? • Random choice of candidates • If the validity of the voters’ judgment is known: take the rank of the best voter • But: then we digress towards knowledge-based and probabilistic schemes

  26. Example non-stochastic tie solving: Voter C is known to be superior to A, B
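The non-stochastic tie-solving of slide 26 (whose worked table is not preserved in this transcript) can be sketched as: when candidates tie on combined Borda rank, prefer the one ranked best by the voter known to be superior. The candidates and ranks below are hypothetical:

```python
# Non-stochastic tie solving: break a Borda tie using the ranks of a
# voter known to be superior (voter C in the slide's example).

def break_tie(tied_candidates, best_voter_ranks):
    """Pick the tied candidate that the most trusted voter ranks best."""
    return min(tied_candidates, key=lambda c: best_voter_ranks[c])

# Hypothetical: "x" and "y" tie on combined rank; trusted voter C
# ranks x third and y first, so y wins the tie-break.
break_tie(["x", "y"], {"x": 3, "y": 1})  # -> "y"
```

As the slide that follows notes, once voter validity is used this way, the scheme is drifting from context-free toward knowledge-based combination.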

  27. How to choose a combination method? • mean? mode? median? min? max? • Empirical tests are mostly needed • The type of question to be answered is important • Example: “sportsperson of the year” contest

  28. How to choose a combination method? • The type of question to be answered is important • Example: “sportsperson of the year” contest • Not the average rank over N sports for M sportspersons • but the minimum rank (best-played sport) is indicative
