
SPTR


Presentation Transcript


1. SPTR
Universitatea “Politehnica” din Bucuresti, 2007-2008
Adina Magda Florea
http://turing.cs.pub.ro/sptr_08 and curs.cs.pub.ro

2. Lecture 10
Bayesian networks
Bayesian prediction
Bayesian learning

3. Probabilities
• Unconditional (prior) probability – P(A)
• Conditional (posterior) probability – P(A|B)
The probability of an event A is measured by a function P: S → R satisfying the axioms:
• 0 ≤ P(A) ≤ 1
• P(S) = 1 (or P(true) = 1 and P(false) = 0)
• P(A ∨ B) = P(A) + P(B) − P(A ∧ B)

4. If A and B are mutually exclusive: P(A ∨ B) = P(A) + P(B)
P(e1 ∨ e2 ∨ e3 ∨ … ∨ en) = P(e1) + P(e2) + P(e3) + … + P(en)
e(a) – the set of atomic events in which a occurs, mutually exclusive and exhaustive
P(a) = Σei∈e(a) P(ei)

5. The product rule
The conditional probability that event A occurs given that event B has occurred:
P(A|B) = P(A ∧ B) / P(B)
P(A ∧ B) = P(A|B) · P(B)

6. Inference from the joint distribution
Joint probability distribution P(Carie, Dur_d) (Carie – cavity, Dur_d – toothache):

          dur_d   ~dur_d
carie     0.04    0.06
~carie    0.01    0.89

P(carie ∨ dur_d) = 0.04 + 0.01 + 0.06 = 0.11
P(carie) = 0.04 + 0.06 = 0.1
This generalizes to any sets of variables Y and Z: P(Y) = Σz P(Y, z)
A distribution over Y can be obtained by summing over all the other variables of a joint distribution that contains Y.
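A minimal Python sketch of this summing-out operation, assuming the joint table above is stored as a dictionary keyed by truth values:

```python
# Joint distribution P(Carie, Dur_d) from the slide, as a dict.
joint = {
    (True,  True):  0.04,   # carie,  dur_d
    (True,  False): 0.06,   # carie,  ~dur_d
    (False, True):  0.01,   # ~carie, dur_d
    (False, False): 0.89,   # ~carie, ~dur_d
}

# P(carie): sum all entries in which Carie is true
p_carie = sum(p for (carie, _), p in joint.items() if carie)

# P(carie v dur_d): sum all entries in which at least one holds
p_or = sum(p for (carie, durd), p in joint.items() if carie or durd)

print(p_carie, p_or)   # ≈ 0.1 0.11
```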

7. Inference from the joint distribution
Joint probability distribution P(Carie, Dur_d, Evid):
P(carie | dur_d) = P(carie ∧ dur_d) / P(dur_d)
P(~carie | dur_d) = P(~carie ∧ dur_d) / P(dur_d)
P(dur_d) = 0.108 + 0.012 + 0.016 + 0.064 = 0.2
α = 1/P(dur_d) – the normalization constant

8. Inference from the joint distribution
P(Carie | dur_d) = α P(Carie, dur_d)
= α [P(Carie, dur_d, evid) + P(Carie, dur_d, ~evid)]
= α [<0.108, 0.016> + <0.012, 0.064>] = α <0.12, 0.08> = <0.6, 0.4>
From this follows the general inference procedure:
X – the query variable
E – the observed (evidence) variables, with observed values e
Y – the unobserved (hidden) variables
P(X|e) = α P(X, e) = α Σy P(X, e, y)
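A sketch of this query procedure on the example above. Only the four dur_d entries appear on the slide; the four ~dur_d entries (0.072, 0.008, 0.144, 0.576) are filled in from the standard Russell & Norvig joint these numbers come from, so treat that completion as an assumption:

```python
# Joint P(Carie, Dur_d, Evid); keys are (carie, dur_d, evid).
joint3 = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def query_carie(durd):
    """P(Carie | Dur_d=durd): sum out the hidden variable Evid, then normalize."""
    unnorm = {x: sum(joint3[(x, durd, y)] for y in (True, False))
              for x in (True, False)}
    alpha = 1 / sum(unnorm.values())          # normalization constant
    return {x: alpha * p for x, p in unnorm.items()}

print(query_carie(True))   # ≈ {True: 0.6, False: 0.4}
```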

9. Bayes' rule
P(a ∧ b) = P(a|b) P(b)
P(a ∧ b) = P(b|a) P(a)
P(b|a) = P(a|b) P(b) / P(a)
Generalization: P(Y|X) = P(X|Y) P(Y) / P(X)
With normalization: P(Y|X) = α P(X|Y) P(Y)
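As an illustration, a small helper that applies Bayes' rule with normalization; the test/disease numbers in the usage line are made up:

```python
def bayes_posterior(likelihood, prior):
    """likelihood[y] = P(x|y), prior[y] = P(y); returns {y: P(y|x)}."""
    unnorm = {y: likelihood[y] * prior[y] for y in prior}
    alpha = 1 / sum(unnorm.values())
    return {y: alpha * p for y, p in unnorm.items()}

# hypothetical test: P(pos|disease)=0.9, P(pos|healthy)=0.1, P(disease)=0.2
print(bayes_posterior({'disease': 0.9, 'healthy': 0.1},
                      {'disease': 0.2, 'healthy': 0.8}))
# ≈ {'disease': 0.69, 'healthy': 0.31}
```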

10. Bayesian learning
Data – samples; hypotheses.
Candies:
h1: 100% cherry
h2: 75% cherry, 25% lime
h3: 50% cherry, 50% lime
h4: 25% cherry, 75% lime
h5: 100% lime
H – the type of bag, with values h1 .. h5
Samples (random variables) are collected: d1, d2, …, each with possible values cherry or lime.
Goal: predict the flavor of the next candy.
• Bayesian learning computes the probability of each hypothesis from the collected data and makes predictions on that basis.
• The prediction is based on all the hypotheses, not only on the best one.

11. Bayesian learning
Let D be the data, with observed value d.
The probability of each hypothesis, by Bayes' rule, is:
P(hi|d) = α P(d|hi) P(hi) (1)
Prediction about an unknown quantity X:
P(X|d) = Σi P(X|hi) P(hi|d) (2)
• Key elements: the hypothesis priors P(hi) and the likelihood of the data under each hypothesis:
P(d|hi) = Πj P(dj|hi) (3)
Assume the prior probabilities
h1: 0.1, h2: 0.2, h3: 0.4, h4: 0.2, h5: 0.1

12. P(hi|d) = α P(d|hi) P(hi)
Priors: h1: 0.1, h2: 0.2, h3: 0.4, h4: 0.2, h5: 0.1 (hypotheses as on slide 10).
After observing one lime candy:
P(lime) = 0.1·0 + 0.2·0.25 + 0.4·0.5 + 0.2·0.75 + 0.1·1 = 0.5
α = 1/0.5 = 2
P(h1|lime) = α P(lime|h1) P(h1) = 2 · (0 · 0.1) = 0
P(h2|lime) = α P(lime|h2) P(h2) = 2 · (0.25 · 0.2) = 0.1
P(h3|lime) = α P(lime|h3) P(h3) = 2 · (0.5 · 0.4) = 0.4
P(h4|lime) = α P(lime|h4) P(h4) = 2 · (0.75 · 0.2) = 0.3
P(h5|lime) = α P(lime|h5) P(h5) = 2 · (1 · 0.1) = 0.2

13. P(hi|d) = α P(d|hi) P(hi), with P(d|hi) = Πj P(dj|hi)
Same priors and hypotheses. After observing two lime candies:
P(lime, lime) = 0.1·0 + 0.2·0.25² + 0.4·0.5² + 0.2·0.75² + 0.1·1² = 0.325
α = 1/0.325 ≈ 3.077
P(h1|lime, lime) = α P(lime, lime|h1) P(h1) = α · (0 · 0 · 0.1) = 0
P(h2|lime, lime) = α P(lime, lime|h2) P(h2) = α · (0.25 · 0.25 · 0.2) ≈ 0.0385
P(h3|lime, lime) = α P(lime, lime|h3) P(h3) = α · (0.5 · 0.5 · 0.4) ≈ 0.3077
P(h4|lime, lime) = α P(lime, lime|h4) P(h4) = α · (0.75 · 0.75 · 0.2) ≈ 0.3462
P(h5|lime, lime) = α P(lime, lime|h5) P(h5) = α · (1 · 1 · 0.1) ≈ 0.3077
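A short sketch that reproduces these posteriors from equations (1) and (3), assuming an all-lime observation sequence:

```python
priors = [0.1, 0.2, 0.4, 0.2, 0.1]      # P(h1) .. P(h5)
p_lime = [0.0, 0.25, 0.5, 0.75, 1.0]    # P(dj = lime | hi)

def posterior(n_limes):
    """P(hi | d) after n_limes lime observations: eq. (1) with likelihood (3)."""
    unnorm = [pr * lik ** n_limes for pr, lik in zip(priors, p_lime)]
    alpha = 1 / sum(unnorm)
    return [alpha * u for u in unnorm]

print(posterior(1))   # [0, 0.1, 0.4, 0.3, 0.2]               (slide 12)
print(posterior(2))   # ≈ [0, 0.0385, 0.3077, 0.3462, 0.3077] (slide 13)
```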

14. P(hi | d1, …, d10) from equation (1)

15. P(X|d) = Σi P(X|hi) P(hi|d)
Same priors and hypotheses; the posteriors P(hi|d1) are those of slide 12.
P(d2=lime | d1=lime) = P(d2|h1)·P(h1|d1) + P(d2|h2)·P(h2|d1) + P(d2|h3)·P(h3|d1) + P(d2|h4)·P(h4|d1) + P(d2|h5)·P(h5|d1)
= 0·0 + 0.25·0.1 + 0.5·0.4 + 0.75·0.3 + 1·0.2 = 0.65
Bayesian prediction
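The same prediction, equation (2), as code; the posterior list is the slide-12 result:

```python
p_lime = [0.0, 0.25, 0.5, 0.75, 1.0]   # P(d2 = lime | hi)
post   = [0.0, 0.1, 0.4, 0.3, 0.2]     # P(hi | d1 = lime), slide 12

# weighted average of per-hypothesis predictions
p_next_lime = sum(lik * po for lik, po in zip(p_lime, post))
print(p_next_lime)   # 0.65
```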

16. Remarks
The true hypothesis will eventually dominate the prediction.
Bayesian prediction is optimal: given the hypothesis space, any other prediction will be correct less often.
It becomes problematic when the hypothesis space is large.
Approximation: make predictions from the most probable hypothesis only – MAP learning (maximum a posteriori):
P(X|d) ≈ P(X|hMAP)
In the example, hMAP = h5 after 3 samples, so the prediction is 1.0 (see the sketch below).
As more data are collected, the MAP and Bayesian predictions converge.
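A sketch of the MAP approximation on the same example; note that the argmax needs no normalization constant:

```python
priors = [0.1, 0.2, 0.4, 0.2, 0.1]
p_lime = [0.0, 0.25, 0.5, 0.75, 1.0]

# unnormalized posteriors after three lime observations
unnorm = [pr * lik ** 3 for pr, lik in zip(priors, p_lime)]
h_map = max(range(5), key=lambda i: unnorm[i])   # most probable hypothesis

print(h_map + 1, p_lime[h_map])   # 5 1.0 -> hMAP = h5, MAP prediction 1.0
```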

17. Learning in Bayesian networks
• Cherry and lime candies – a single bag
• A continuum of hypotheses
• θ – the parameter to be learned
• Model it as a Bayesian network:
P(F=cherry) = θ, where F is the flavor (a random variable)
• N candies, c cherry and l = N − c lime:
P(d|hθ) = Πj=1..N P(dj|hθ) = θ^c · (1−θ)^l
• The maximum-likelihood hypothesis is given by the value of θ that maximizes this expression:
L = log P(d|hθ) = Σj=1..N log P(dj|hθ) = c log θ + l log(1−θ)
Differentiating with respect to θ:
• dL/dθ = c/θ − l/(1−θ) = 0
• θ = c/(c+l) = c/N
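A quick numeric check, with made-up counts c = 7, l = 3, that the count ratio c/N maximizes the log-likelihood:

```python
import math

c, l = 7, 3   # hypothetical counts: 7 cherry, 3 lime

def loglik(theta):
    """L(θ) = c log θ + l log(1 − θ)"""
    return c * math.log(theta) + l * math.log(1 - theta)

# grid search over (0, 1) agrees with the closed form c/(c+l)
best = max((i / 1000 for i in range(1, 1000)), key=loglik)
print(best, c / (c + l))   # 0.7 0.7
```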

18. Two models as Bayesian networks: (a) a single node Aroma (flavor) F, with P(F=cherry) = θ; (b) Aroma F → Ambalaj (wrapper) W, with P(F=cherry) = θ and P(W=red|F) = θ1 for F=cherry, θ2 for F=lime.

19. The “Hot” Bayesian network
Hot (burglar) and Cutremur (earthquake) are parents of Alarma (alarm), which in turn is the parent of TelMihai and TelDana (phone calls).

P(H) = 0.001    P(C) = 0.002

H C | P(A)
T T | 0.95
T F | 0.94
F T | 0.29
F F | 0.001

A | P(D)    A | P(M)
T | 0.7     T | 0.9
F | 0.01    F | 0.05

The full conditional probability table for Alarma:

H C | P(A=T|H,C)  P(A=F|H,C)
T T | 0.95        0.05
T F | 0.94        0.06
F T | 0.29        0.71
F F | 0.001       0.999

20. Semantics of a Bayesian network
A) A representation of the joint probability distribution
B) A specification of conditional independence – guides the construction of the network
A) Every entry in the joint probability distribution can be computed as:
P(X1=x1 ∧ … ∧ Xn=xn) = P(x1, …, xn) = Πi=1..n P(xi | parents(xi))
where parents(xi) denotes the specific values of the variables Parents(Xi) (see the code sketch after slide 21).

21. Probabilistic inference
Using the network of slide 19:
P(M ∧ D ∧ A ∧ ~H ∧ ~C) = P(M|A) · P(D|A) · P(A|~H ∧ ~C) · P(~H) · P(~C)
= 0.9 · 0.7 · 0.001 · 0.999 · 0.998 ≈ 0.00063
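A sketch of the slide-20 product formula applied to this network; the CPT values are those of slide 19:

```python
# CPTs of the alarm network; each function returns P(value | parent values).
def p_H(h): return 0.001 if h else 0.999
def p_C(c): return 0.002 if c else 0.998

def p_A(a, h, c):
    pt = {(True, True): 0.95, (True, False): 0.94,
          (False, True): 0.29, (False, False): 0.001}[(h, c)]
    return pt if a else 1 - pt

def p_D(d, a):                     # TelDana
    pt = 0.7 if a else 0.01
    return pt if d else 1 - pt

def p_M(m, a):                     # TelMihai
    pt = 0.9 if a else 0.05
    return pt if m else 1 - pt

def joint(h, c, a, d, m):
    """P(h, c, a, d, m) as a product of CPT entries (slide 20)."""
    return p_H(h) * p_C(c) * p_A(a, h, c) * p_D(d, a) * p_M(m, a)

# P(M ∧ D ∧ A ∧ ~H ∧ ~C) from slide 21
print(joint(False, False, True, True, True))   # ≈ 0.00063
```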

22. Learning in Bayesian networks
P(F=cherry, W=green | hθ,θ1,θ2) = P(F=cherry | hθ,θ1,θ2) · P(W=green | F=cherry, hθ,θ1,θ2) = θ (1−θ1)
• With counts c, l (cherry/lime candies), rc, gc (red/green wrappers on cherry candies) and rl, gl (red/green wrappers on lime candies):
P(d | hθ,θ1,θ2) = θ^c (1−θ)^l · θ1^rc (1−θ1)^gc · θ2^rl (1−θ2)^gl
• L = [c log θ + l log(1−θ)] + [rc log θ1 + gc log(1−θ1)] + [rl log θ2 + gl log(1−θ2)]
dL/dθ = c/θ − l/(1−θ) = 0
dL/dθ1 = rc/θ1 − gc/(1−θ1) = 0
dL/dθ2 = rl/θ2 − gl/(1−θ2) = 0
θ = c/(c+l), θ1 = rc/(rc+gc), θ2 = rl/(rl+gl)
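The ML estimates are just observed ratios; a sketch with hypothetical counts:

```python
c, l = 60, 40      # made-up counts: cherry and lime candies
rc, gc = 54, 6     # red/green wrappers on cherry candies
rl, gl = 8, 32     # red/green wrappers on lime candies

theta  = c / (c + l)       # P(F = cherry)
theta1 = rc / (rc + gc)    # P(W = red | F = cherry)
theta2 = rl / (rl + gl)    # P(W = red | F = lime)
print(theta, theta1, theta2)   # 0.6 0.9 0.2
```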
