
ERROR DETECTION using AGENTS

Diagnostics is a hard problem, since it is hard to observe all the data needed for reasoning. In this project, a system that detects errors cooperatively will be implemented.


Presentation Transcript


  1. ERROR DETECTION using AGENTS • Diagnostics is a hard problem, since it is hard to observe all the data needed for reasoning. • In this project, a system that detects errors cooperatively will be implemented. • A network structure called a Bayesian network will be used: a causal probabilistic network based on Bayes' rule. The probabilities of this network will be updated whenever needed.

  2. DIAGNOSTIC PROBLEM • Nodes: Cloudy, Sprinkler, Rain, Wet. • Why is it wet? Sprinkler or rain? What do we think of first? • We need more data; if we have none, we just guess according to past data.

  3. BAYES RULE • P(A|B) is the probability of A given evidence B. • P(A,B) = P(A)*P(B|A) and P(A,B) = P(B)*P(A|B), so P(B)*P(A|B) = P(A)*P(B|A). • Bayes' rule: P(A|B) = P(A)*P(B|A)/P(B). • General form, with e a set of evidences: P(A|B,e) = P(B|A,e)*P(A|e)/P(B|e).
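
  The following is a minimal Python sketch of Bayes' rule applied to the sprinkler/rain example from slide 2; the numeric values for P(Wet|Rain), P(Rain) and P(Wet) are illustrative assumptions, not numbers given in the slides.

```python
# Minimal sketch of Bayes' rule on the sprinkler/rain example.
# The numeric values below are illustrative assumptions.

def bayes(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Return P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Assumed values: P(Wet|Rain) = 0.9, P(Rain) = 0.3, marginal P(Wet) = 0.45.
p_rain_given_wet = bayes(p_b_given_a=0.9, p_a=0.3, p_b=0.45)
print(f"P(Rain|Wet) = {p_rain_given_wet:.2f}")  # 0.60
```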

  4. BAYESIAN NETWORKS USED FOR DIAGNOSTICS • Why Bayesian networks? • They represent a full probability distribution. • Inference is bidirectional (from causes to effects and back). • They work well even with missing data (the initial configuration is important in that case). • They are a good fit for diagnostics. • Diagnostics with Bayesian networks is supported by Microsoft, Intel, …

  5. CONSTRUCTING A BBN • Figure: a directed graph from causes (A, D) to results (B, C). • The probability of C occurring varies according to A. • Note that P(C|A=T) + P(C|A=F) ≠ 1 in general (see the CPT sketch below).
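
  A minimal sketch of that last point, assuming illustrative CPT values that are not given in the slides: each row of the table P(C|A) is a distribution over C's values and sums to 1, but P(C=T|A=T) + P(C=T|A=F) need not equal 1.

```python
# Conditional probability table (CPT) for node C given its parent A.
# Values are illustrative assumptions; the point is that each row sums to 1,
# while P(C=T|A=T) + P(C=T|A=F) does not have to.

cpt_C_given_A = {
    True:  {True: 0.7, False: 0.3},   # P(C | A = T)
    False: {True: 0.2, False: 0.8},   # P(C | A = F)
}

for a, row in cpt_C_given_A.items():
    assert abs(sum(row.values()) - 1.0) < 1e-9   # each row is a distribution

print(cpt_C_given_A[True][True] + cpt_C_given_A[False][True])   # ~0.9, not 1
```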

  6. INFERENCE EXAMPLE • Is the cause of B A or D? D is possible. • P(B|D) = P(A)*0.8 + (1-P(A))*0.5 = 0.1*0.8 + 0.9*0.5 = 0.53 • P(D|B) = P(B|D)*P(D)/P(B) = 0.53*0.4/0.518 = 0.409266, as shown above. • The same rule applies for P(A|B); then P(C) also changes, since P(A) changes. All nodes are affected. (A numeric check is sketched below.)
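
  A small numeric check of the two steps above, using only the values that appear on this slide (P(A)=0.1, P(D)=0.4, P(B)=0.518, and the conditional values 0.8 and 0.5).

```python
# Reproduces the arithmetic from this slide.
p_A, p_D, p_B = 0.1, 0.4, 0.518

# Marginalize A out of P(B | D): P(B|D) = P(A)*0.8 + (1 - P(A))*0.5
p_B_given_D = p_A * 0.8 + (1 - p_A) * 0.5          # 0.53

# Bayes' rule to reverse the edge: P(D|B) = P(B|D) * P(D) / P(B)
p_D_given_B = p_B_given_D * p_D / p_B              # ~0.409266

print(f"P(B|D) = {p_B_given_D:.2f}, P(D|B) = {p_D_given_B:.6f}")
```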

  7. For evidences B and C, update all nodes again, respectively. This time the possible cause of the failure is A. • P(A|B,C) = P(B|A,C)*P(A|C)/P(B|C) = P(B|A)*P(A|C)/P(B|C) • For this example: P(B|A) = 0.68, P(A|C) = P(C|A)*P(A)/P(C) = 0.307692, P(B|C) = 0.555385 (even though B is not connected to C, C affects A and A affects P(B)). • P(A|B,C) = 0.547284

  8. INFERENCE • When the network gets bigger, the calculation of probabilities becomes expensive. • Ex (chain rule): P(A,B,C,D,E) = P(A|B,C,D,E)*P(B|C,D,E)*P(C|D,E)*P(D|E)*P(E) • Exact inference is NP-hard, so special algorithms are implemented in tools such as Netica, MSBNx, BayesiaLab, … (a brute-force enumeration sketch follows below).
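
  For contrast with those tools, here is a brute-force enumeration sketch over an assumed two-node network (A → B, with made-up CPT values). Enumerating the joint grows exponentially with the number of variables, which is why the specialised algorithms exist.

```python
# Brute-force inference by enumerating the joint distribution via the chain
# rule: P(A, B) = P(A) * P(B|A). The two-node network and its CPT values are
# illustrative assumptions.

p_A = {True: 0.1, False: 0.9}          # prior on the cause A
p_B_given_A = {True: 0.8, False: 0.5}  # P(B = T | A)

def joint(a: bool, b: bool) -> float:
    pb = p_B_given_A[a]
    return p_A[a] * (pb if b else 1 - pb)

# Query P(A = T | B = T): sum the joint over all assignments consistent
# with the evidence, then normalize.
numerator = joint(True, True)
denominator = sum(joint(a, True) for a in (True, False))
print(f"P(A|B) = {numerator / denominator:.4f}")   # 0.1509
```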

  9. Each agent has its own local Bayesian network for diagnostics. • Figure: agents A1…A4 with networks BN1…BN4; coordinator B keeps an experience table (A1=0, A2=2, A3=0, A4=1); A1 sends evidence e1 and its own diagnosis c1 to B. • In this case B will forward e1 to A2 (the most experienced agent) and get its answer; if the answer is different, A1 will receive A2's result. • BN1 will be updated, and A1's experience becomes 1 after that step, since it made corrections in its network. (A protocol sketch follows below.)
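
  A hypothetical sketch of this coordination step; the class and method names (Agent, Coordinator, diagnose, consult) are assumptions made for illustration, and the local Bayesian-network inference is replaced by a placeholder lookup.

```python
# Hypothetical sketch of the coordination protocol on this slide: B keeps an
# experience count per agent, forwards new evidence to the most experienced
# agent, and the asking agent corrects its own network (and gains experience)
# if the returned answer differs from its own. All names are illustrative.

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.experience = 0
        self.beliefs = {}          # placeholder for the local BN (BN1..BN4)

    def diagnose(self, evidence: str) -> str:
        # Placeholder for inference over this agent's local Bayesian network.
        return self.beliefs.get(evidence, "unknown cause")

class Coordinator:
    def __init__(self, agents):
        self.agents = {a.name: a for a in agents}

    def consult(self, asker, evidence: str) -> str:
        expert = max(self.agents.values(), key=lambda a: a.experience)
        answer = expert.diagnose(evidence)
        if answer != asker.diagnose(evidence):
            asker.beliefs[evidence] = answer   # "update BN1"
            asker.experience += 1              # A1 = 1 after the correction
        return answer

# Usage mirroring the slide: A2 is the experienced agent, A1 asks about e1.
a1, a2 = Agent("A1"), Agent("A2")
a2.experience, a2.beliefs["e1"] = 2, "cause D"
print(Coordinator([a1, a2]).consult(a1, "e1"))   # A1 adopts A2's answer
```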

  10. P(c) is the cause set returned from B. • P(c) = p(c) + 1/100: increase the probability of the found causes if they differ from c1. • Figure: new evidence e2 at A1; B's experience table is now A1=1, A2=2, A3=0, A4=1. • Since there is no agent more experienced than A2, B will check whether A2 has made good reasoning by checking its truth table, and will answer according to its check table.
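
  A minimal sketch of this update rule, assuming the agent stores its cause probabilities in a plain dictionary; the cause names and starting numbers are illustrative. Each cause returned by B that differs from the agent's own answer c1 gets its probability increased by 1/100.

```python
# Sketch of the update rule: P(c) = p(c) + 1/100 for each cause returned by B
# that differs from the agent's own diagnosis c1. Cause names and starting
# probabilities are illustrative assumptions.

def bump_causes(cause_probs: dict, returned_causes: set, own_cause: str) -> None:
    for cause in returned_causes:
        if cause != own_cause:
            cause_probs[cause] = cause_probs.get(cause, 0.0) + 1.0 / 100

probs = {"A": 0.30, "D": 0.40}
bump_causes(probs, returned_causes={"A"}, own_cause="D")
print(probs["A"], probs["D"])   # A's probability is bumped by 0.01, D's is unchanged
```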

  11. ADVANTAGES of HAVING A COORDINATOR • If one individual updates its network after learning something, all other agents will get that information. • The network traffic will be regulated by a central agent. • The evaluation of the agents can be done centrally.

  12. PERFORMANCE • People do a similar thing: if you are not sure about what you are saying, you get advice from an experienced agent or an authority. • Figure: agents A1…A4 with networks BN1…BN4; coordinator B's experience table is A1=1, A2=2, A3=0, A4=1. • Overall system performance can be monitored by agent B. • The number of bad predictions can be a good estimate of the overall performance. (A small monitoring sketch follows below.)
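
  A small illustrative sketch of the monitoring idea; the prediction log and cause names are made up. The coordinator B counts, per agent, how many predictions turned out wrong and uses that count as a rough performance estimate.

```python
# Coordinator-side monitoring sketch: count bad predictions per agent.
# The log entries (agent, predicted cause, confirmed cause) are illustrative.

prediction_log = [
    ("A1", "A", "A"),
    ("A1", "D", "A"),
    ("A2", "A", "A"),
]

bad_predictions = {}
for agent, predicted, confirmed in prediction_log:
    bad_predictions[agent] = bad_predictions.get(agent, 0) + (predicted != confirmed)

print(bad_predictions)   # {'A1': 1, 'A2': 0}
```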

  13. REFERENCES • MSBNx, Microsoft Research: http://research.microsoft.com/adapt/MSBNx/ • Ferat Sahin, "A Bayesian Network Approach to the Self-organization and Learning in Intelligent Agents", 2005 • David Heckerman, "A Tutorial on Learning With Bayesian Networks", 1995 • Yao Wang and Julita Vassileva, "Bayesian Network Trust Model in Peer-to-Peer Networks" • http://www.dcs.qmul.ac.uk/~norman/BBNs/BBNs.htm
