
CS 182 Sections 103 - 104




Presentation Transcript


  1. CS 182 Sections 103 - 104 slides created by Eva Mok (emok@icsi.berkeley.edu), modified by jgm, April 13, 2005

  2. Announcements • a8 out, due Monday April 19th, 11:59pm • BBS articles are assigned for the final paper: • Arbib, Michael A. (2002). The mirror system, imitation, and the evolution of language. • Hurford, James R. (2003). The neural basis of predicate-argument structure. • Grush, Rick (2004). The emulation theory of representation: motor control, imagery, and perception. • Skim through them and let us know, as part of a8, which article you plan to use.

  3. Schedule • Last Week • Inference in Bayes Nets • Metaphor understanding using KARMA • This Week • Formal Grammar and Parsing • Construction Grammar, ECG • Next Week • Psychological model of sentence processing • Grammar Learning

  4. Quiz • How are the source and target domains represented in KARMA? • How does the source domain information enter KARMA? How should it? • What does SHRUTI buy us? • How are bindings propagated in a structured connectionist framework?

  5. Quiz • How are the source and target domains represented in KARMA? • How does the source domain information enter KARMA? How should it? • What does SHRUTI buy us? • How are bindings propagated in a structured connectionist framework?

  6. KARMA • DBN to represent target domain knowledge • Metaphor maps link target and source domain • X-schema to represent source domain knowledge

  7. DBNs • Explicit causal relations + full joint table → Bayes Nets • Sequence of full joint states over time → HMMs • HMM + BN → DBNs • DBNs generalize HMMs by capturing the sparse causal relationships within the full joint (see the sketch below)
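To make the last bullet concrete, here is a minimal sketch (a hypothetical weather domain with made-up numbers, not KARMA's model) of how a 2-slice DBN stores one small CPT per variable where an HMM would store a single monolithic transition table:

```python
# Sketch only: hypothetical variables and probabilities.

# HMM view: one joint state variable, so the transition model is |S| x |S|.
hmm_states = [(r, u) for r in ["rain", "dry"] for u in ["umbrella", "none"]]
assert len(hmm_states) ** 2 == 16   # 16 transition entries

# DBN view: one small CPT per variable, conditioned only on actual parents.
p_rain_next = {                      # P(Rain_t | Rain_{t-1})
    "rain": {"rain": 0.7, "dry": 0.3},
    "dry":  {"rain": 0.2, "dry": 0.8},
}
p_umbrella = {                       # P(Umbrella_t | Rain_t)
    "rain": {"umbrella": 0.9, "none": 0.1},
    "dry":  {"umbrella": 0.1, "none": 0.9},
}
# 4 + 4 = 8 numbers instead of 16: the sparse causal structure is explicit.
```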

  8. Dynamic Bayes Nets

  9. Metaphor Maps • map entities and objects between the embodied and abstract domains • invariantly map the aspect of the embodied-domain event onto the target domain by setting evidence for the status variable based on the x-schema controller state (event structure metaphor) • project x-schema parameters onto the target domain

  10. Where does the domain knowledge come from? • Both domains are structured by frames • Frames have: • List of roles (participants, frame elements) • Relations between roles • Scenario structure
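As an illustration of this frame structure, here is a hypothetical sketch (the class and the commerce example are ours, not from the lecture) with the three ingredients listed above:

```python
from dataclasses import dataclass

# Hypothetical sketch of a frame: roles, relations between roles,
# and scenario (event) structure.
@dataclass
class Frame:
    name: str
    roles: list[str]                       # participants / frame elements
    relations: list[tuple[str, str, str]]  # (role, relation, role)
    scenario: list[str]                    # ordered stages of the event

commerce = Frame(
    name="Commercial-Transaction",
    roles=["buyer", "seller", "goods", "money"],
    relations=[("buyer", "transfers", "money"),
               ("seller", "transfers", "goods")],
    scenario=["negotiate", "exchange", "close"],
)
```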

  11. DBN for the target domain • Diagram: two time slices (T0, T1) over the variables Economic State [recession, no growth, low growth, high growth], Policy [liberalization, protectionism], Goal [free trade, protection], Outcome [success, failure], Difficulty [present, absent]

  12. Let’s try a different domain • I didn’t quite catch what he was saying • His slides are packed with information • He sent the audience a clear message • “When we can get a good flow of information from the streets of our cities across to, whether it is an investigating magistrate in France or an intelligence operative in the Middle East, and begin to assemble that kind of information and analyze it and repackage it and send it back out to users, whether it's a policeman on the beat or a judge in Italy or a Special Forces Team in Afghanistan, then we will be getting close to the kind of capability we need to deal with this kind of problem. That's going to take a couple, a few years.” (9/11 Commission Public Hearing, Monday, March 31, 2003)

  13. Diagram: the target domain belief net (T) for the communication frame (speaker, addressee, action, outcome, degree of understanding), connected by a Metaphor Map for the conduit metaphor (send is talk, receive is hear, ideas are objects, words are containers, senders are speakers, receivers are addressees) to the source domain f-structs for transfer (sender, receiver, means, force, rate) and the x-schema representation (send, receive, transfer, pack); the previous-slice belief net (T-1) feeds the current one
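The conduit-metaphor mappings in the diagram can be written out as a plain lookup table. This is just a sketch of the direction of projection (the function and representation are ours; KARMA's actual metaphor maps are richer):

```python
# The conduit metaphor map from the slide, as a source -> target table.
conduit_map = {
    "send":       "talk",
    "receive":    "hear",
    "objects":    "ideas",
    "containers": "words",
    "sender":     "speaker",
    "receiver":   "addressee",
}

def project(source_role: str) -> str:
    """Project a source-domain role/entity onto the target domain."""
    return conduit_map[source_role]

print(project("sender"))  # speaker
```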

  14. Quiz • How are the source and target domains represented in KARMA? • How does the source domain information enter KARMA? How should it? • What does SHRUTI buy us? • How are bindings propagated in a structured connectionist framework?

  15. How do the source domain f-structs get parameterized? • In the KARMA system, they are hand-coded • In general, you need analysis of sentences: • syntax • semantics • Syntax captures: • constraints on word order • constituency (units of words) • grammatical relations (e.g. subject, object) • subcategorization & dependency (e.g. transitive, intransitive, subject-verb agreement)

  16. Quiz • How are the source and target domains represented in KARMA? • How does the source domain information enter KARMA? How should it? • What does SHRUTI buy us? • How are bindings propagated in a structured connectionist framework?

  17. SHRUTI • A connectionist model of reflexive processing • Reflexive reasoning • automatic, extremely fast (~300ms), ubiquitous • computation of coherent explanations and predictions • gradual learning of causal structure • episodic memory • understanding language • Reflective reasoning • conscious deliberation, slow • overt consideration of alternatives • external props (pencil + paper) • solving logic puzzles • doing cryptarithmetic • planning a vacation

  18. SHRUTI • synchronous activity without using a global clock • An episode of reflexive processing is a transient propagation of rhythmic activity • An “entity” is a phase in this rhythmic activity • Bindings are synchronous firings of role and entity cells • Rules are interconnection patterns mediated by coincidence detector circuits that allow selective propagation of activity • Long-term memories are coincidence and coincidence-failure detector circuits • An affirmative answer / explanation corresponds to reverberatory activity around closed loops

  19. Focal cluster • provides a locus of coordination, control, and decision making • enforces sequencing and concurrency • initiates information-seeking actions • initiates evaluation of conditions • initiates conditional actions • links to other schemas and knowledge structures

  20. Quiz • How are the source and target domains represented in KARMA? • How does the source domain information enter KARMA? How should it? • What does SHRUTI buy us? • How are bindings propagated in a structured connectionist framework?

  21. Dynamic binding example • asserting get(father, cup) • father fires in phase with the agent role • cup fires in phase with the patient role • Diagram: entity clusters (my-father, cup) and the type cluster for get with agent (agt) and patient (pat) role nodes, plus the +, -, ? predicate/assertion nodes (+e, +v, ?e, ?v)
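A toy sketch of this binding mechanism (illustrative only; SHRUTI's actual circuits are far richer): each entity fires in its own phase of the rhythmic cycle, and a role is bound to whichever entity fires in the same phase.

```python
# Asserting get(father, cup): shared phase encodes the binding.
firing_phase = {
    "my-father": 0, "agent": 0,    # same phase -> bound
    "cup": 1,       "patient": 1,  # same phase -> bound
}

def bindings(phases: dict, roles: set) -> dict:
    """Read role -> entity bindings off phase coincidence."""
    return {role: entity
            for role in roles
            for entity, ph in phases.items()
            if entity not in roles and ph == phases[role]}

print(bindings(firing_phase, {"agent", "patient"}))
# -> {'agent': 'my-father', 'patient': 'cup'} (key order may vary)
```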

  22. Active Schemas in SHRUTI • active schemas require control and coordination, dynamic role binding and parameter setting • schemas are interconnected networks of focal clusters • bindings are encoded and propagated using temporal synchrony • scalar parameters are encoded using rate-encoding

  23. Review: Probability • Random Variables • Boolean/Discrete • True/false • Cloudy/rainy/sunny • Continuous • [0,1] (i.e. 0.0 <= x <= 1.0)

  24. Priors/Unconditional Probability • Probability Distribution • in absence of any other info • sums to 1 • E.g. P(Sunny=T) = .8 (thus, P(Sunny=F) = .2) • this is a simple probability distribution • Joint Probability • P(Sunny, Umbrella, Bike) • table of size 2³ • the Full Joint is a joint over all variables in the model • Probability Density Function • continuous variables • E.g. Uniform, Gaussian, Poisson…
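A quick sketch of that 2³-entry joint table for P(Sunny, Umbrella, Bike). The numbers are made up, and purely to keep the construction short, Umbrella and Bike are assumed independent given Sunny:

```python
from itertools import product

P_sunny = 0.8
P_umbrella = {True: 0.1, False: 0.7}   # P(Umbrella=T | Sunny), made up
P_bike     = {True: 0.6, False: 0.3}   # P(Bike=T | Sunny), made up

joint = {}
for s, u, b in product([True, False], repeat=3):
    p  = P_sunny if s else 1 - P_sunny
    p *= P_umbrella[s] if u else 1 - P_umbrella[s]
    p *= P_bike[s] if b else 1 - P_bike[s]
    joint[(s, u, b)] = p

assert len(joint) == 2**3                     # the 2^3-entry table
assert abs(sum(joint.values()) - 1.0) < 1e-9  # a distribution sums to 1
print(joint[(True, False, True)])             # P(Sunny, ~Umbrella, Bike)
```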

  25. Conditional Probability • P(Y | X) is the probability of Y given that all we know is the value of X • E.g. P(cavity=T | toothache=T) = .8 • thus P(cavity=F | toothache=T) = .2 • Product Rule • P(Y | X) = P(X ∧ Y) / P(X) (the denominator normalizes so values add up to 1)

  26. Inference • P(Toothache=T)? • P(Toothache=T, Cavity=T)? • P(Toothache=T | Cavity=T)?
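All three queries can be answered mechanically from a full joint over Toothache and Cavity: marginalize for the first two, then apply the product rule. The joint entries below are made up for illustration:

```python
joint = {  # P(Toothache=t, Cavity=c): hypothetical numbers
    (True, True): 0.12,  (True, False): 0.08,
    (False, True): 0.08, (False, False): 0.72,
}

p_toothache = sum(p for (t, c), p in joint.items() if t)  # sum out Cavity
p_both      = joint[(True, True)]                          # joint entry
p_cavity    = sum(p for (t, c), p in joint.items() if c)  # sum out Toothache
p_t_given_c = p_both / p_cavity                            # product rule

print(p_toothache, p_both, p_t_given_c)  # ~0.2, 0.12, ~0.6
```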

  27. Independence • Is Rainy independent of Cloudy? • Is Sunny independent of Windy?

  28. Bayes Nets • Diagram: the classic burglary network: Burglary and Earthquake are parents of Alarm; Alarm is the parent of JohnCalls and MaryCalls

  29. Independence • Is X independent of Z? Is X conditionally independent of Z given Y? • Diagram: for the chain X → Y → Z and the common cause X ← Y → Z, the answers are No and Yes • for the common effect X → Y ← Z, the answers are Yes and No (conditioning on Y, or on anything below Y, couples X and Z)

  30. Markov Blanket • X is independent of everything else given: its parents, its children, and the parents of its children
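A sketch of reading the Markov blanket off a parent-list representation of the DAG, using the burglary network from slide 28:

```python
# Parent lists for the burglary network (slide 28).
parents = {
    "Alarm": ["Burglary", "Earthquake"],
    "JohnCalls": ["Alarm"],
    "MaryCalls": ["Alarm"],
    "Burglary": [], "Earthquake": [],
}

def markov_blanket(x: str) -> set:
    """Parents, children, and the children's other parents."""
    children = [n for n, ps in parents.items() if x in ps]
    blanket = set(parents[x]) | set(children)
    for c in children:
        blanket |= set(parents[c])   # co-parents of each child
    blanket.discard(x)
    return blanket

print(markov_blanket("Alarm"))
# {'Burglary', 'Earthquake', 'JohnCalls', 'MaryCalls'}
```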

  31. Reference: Joints • Representation of the entire network • P(X1=x1 ∧ X2=x2 ∧ ... ∧ Xn=xn) = P(x1, ..., xn) = ∏i=1..n P(xi | parents(Xi)) • How? Chain Rule • P(x1, ..., xn) = P(x1 | x2, ..., xn) P(x2, ..., xn) = ... = ∏i=1..n P(xi | xi-1, ..., x1) • Now use conditional independences to simplify
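The factorization can be evaluated directly: multiply one CPT entry per variable. This sketch uses a hypothetical two-node network B → A with made-up numbers:

```python
# Tiny network B -> A; CPTs keyed by (value, tuple of parent values).
parents = {"B": (), "A": ("B",)}
cpts = {
    "B": {(True, ()): 0.01, (False, ()): 0.99},
    "A": {(True, (True,)): 0.95, (False, (True,)): 0.05,
          (True, (False,)): 0.02, (False, (False,)): 0.98},
}

def joint_prob(assignment: dict) -> float:
    """P(x1,...,xn) = product over i of P(xi | parents(Xi))."""
    p = 1.0
    for var, val in assignment.items():
        pvals = tuple(assignment[q] for q in parents[var])
        p *= cpts[var][(val, pvals)]
    return p

print(joint_prob({"B": True, "A": True}))  # P(B) * P(A|B) = 0.01 * 0.95
```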

  32. Reference: Joint, cont. • Diagram: network with edges X1→X2, X1→X3, X2→X4, X2→X6, X3→X5, X5→X6 • Chain rule, before simplification: P(x1, ..., x6) = P(x1) * P(x2|x1) * P(x3|x2, x1) * P(x4|x3, x2, x1) * P(x5|x4, x3, x2, x1) * P(x6|x5, x4, x3, x2, x1)

  33. Reference: Joint, cont. • Using the conditional independences encoded by the network, each factor keeps only the actual parents: P(x1, ..., x6) = P(x1) * P(x2|x1) * P(x3|x1) * P(x4|x2) * P(x5|x3) * P(x6|x5, x2)

  34. Reference: Inference • General case: Variable Elimination • compute P(Q | E) when you have P(R, Q, E) • P(Q | E) = ∑R P(R, Q, E) / ∑R,Q P(R, Q, E) • ∑R P(R, Q, E) = P(Q, E) • ∑Q P(Q, E) = P(E) • P(Q, E) / P(E) = P(Q | E)
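A sketch of exactly this recipe, as inference by enumeration over a made-up full joint with one hidden variable R: sum out R for the numerator, and both R and Q for the denominator.

```python
joint = {  # hypothetical full joint over E, Q, R (sums to 1)
    (("E", True), ("Q", True), ("R", True)): 0.10,
    (("E", True), ("Q", True), ("R", False)): 0.05,
    (("E", True), ("Q", False), ("R", True)): 0.05,
    (("E", True), ("Q", False), ("R", False)): 0.10,
    (("E", False), ("Q", True), ("R", True)): 0.20,
    (("E", False), ("Q", True), ("R", False)): 0.10,
    (("E", False), ("Q", False), ("R", True)): 0.15,
    (("E", False), ("Q", False), ("R", False)): 0.25,
}

def p_q_given_e(q_val: bool, e_val: bool) -> float:
    num = sum(p for a, p in joint.items()                 # sum out R
              if dict(a)["Q"] == q_val and dict(a)["E"] == e_val)
    den = sum(p for a, p in joint.items()                 # sum out R and Q
              if dict(a)["E"] == e_val)
    return num / den

print(p_q_given_e(True, True))  # 0.15 / 0.30 = 0.5
```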

  35. Inference P(Toothache=T, Cavity=T)?

  36. Inference

  37. Reference: Inference, cont. • Diagram: the network from slide 32 (X1→X2, X1→X3, X2→X4, X2→X6, X3→X5, X5→X6) • Q = {X1}, E = {X6}, R = X \ (Q ∪ E) • P(x1, ..., x6) = P(x1) P(x2|x1) P(x3|x1) P(x4|x2) P(x5|x3) P(x6|x5, x2) • P(x1, x6) = ∑x2 ∑x3 ∑x4 ∑x5 P(x1) P(x2|x1) P(x3|x1) P(x4|x2) P(x5|x3) P(x6|x5, x2) = P(x1) ∑x2 P(x2|x1) ∑x3 P(x3|x1) ∑x4 P(x4|x2) ∑x5 P(x5|x3) P(x6|x5, x2) = P(x1) ∑x2 P(x2|x1) ∑x3 P(x3|x1) ∑x4 P(x4|x2) m5(x2, x3) = P(x1) ∑x2 P(x2|x1) ∑x3 P(x3|x1) m5(x2, x3) ∑x4 P(x4|x2) = ...
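The first elimination step can be computed directly: summing out x5 leaves a "message" m5 over its remaining neighbors x2 and x3. This sketch uses made-up Boolean CPTs:

```python
from itertools import product

p_x5 = {(x5, x3): 0.7 if x5 == x3 else 0.3
        for x5, x3 in product([True, False], repeat=2)}      # P(x5 | x3)
p_x6 = {(x6, x5, x2): 0.9 if x6 == (x5 and x2) else 0.1
        for x6, x5, x2 in product([True, False], repeat=3)}  # P(x6 | x5, x2)

e6 = True   # evidence: X6 observed true

# m5(x2, x3) = sum over x5 of P(x5|x3) * P(x6=e6|x5, x2)
m5 = {(x2, x3): sum(p_x5[(x5, x3)] * p_x6[(e6, x5, x2)]
                    for x5 in [True, False])
      for x2, x3 in product([True, False], repeat=2)}

print(m5)   # x5 is eliminated; the sums over x4, x3, x2 proceed the same way
```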

  38. Approximation Methods • Simple (forward) sampling: works only when there is no evidence • Rejection sampling: just throw away the samples inconsistent with the evidence • Likelihood weighting: every sample is valid, but not necessarily useful • MCMC: best of these; samples are valid, useful, and arrive in proportion to the posterior
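A sketch of likelihood weighting (the CPT numbers are made up, loosely in the spirit of the sprinkler network on the next slide): evidence variables are fixed rather than sampled, and each sample carries the product of the evidence likelihoods as its weight.

```python
import random

P = {  # made-up CPTs: P(var = True | state so far)
    "Cloudy":    lambda s: 0.5,
    "Sprinkler": lambda s: 0.1 if s["Cloudy"] else 0.5,
    "Rain":      lambda s: 0.8 if s["Cloudy"] else 0.2,
    "WetGrass":  lambda s: 0.9 if (s["Sprinkler"] or s["Rain"]) else 0.0,
}
order = ["Cloudy", "Sprinkler", "Rain", "WetGrass"]  # topological order

def weighted_sample(evidence):
    s, w = dict(evidence), 1.0
    for var in order:
        p = P[var](s)
        if var in evidence:
            w *= p if evidence[var] else 1 - p  # weight instead of sampling
        else:
            s[var] = random.random() < p
    return s, w

# Estimate P(Cloudy | WetGrass=True): every sample is consistent with the
# evidence (valid), but low-weight samples contribute little (not useful).
num = den = 0.0
for _ in range(50_000):
    s, w = weighted_sample({"WetGrass": True})
    den += w
    num += w * s["Cloudy"]
print(num / den)
```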

  39. Stochastic Simulation • Diagram: Cloudy → Sprinkler, Rain; Sprinkler, Rain → WetGrass • P(WetGrass | Cloudy)? • P(WetGrass | Cloudy) = P(WetGrass ∧ Cloudy) / P(Cloudy) • 1. Repeat N times: • 1.1. Guess Cloudy at random • 1.2. For each guess of Cloudy, guess Sprinkler and Rain, then WetGrass • 2. Compute the ratio of the # of runs where WetGrass and Cloudy are true over the # of runs where Cloudy is true
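A direct implementation of the procedure on this slide: forward-sample the whole network N times, then take the ratio of runs with WetGrass and Cloudy both true to runs with Cloudy true. The CPT numbers are made up:

```python
import random

def sample_once():
    # Sample each variable given its already-sampled parents (step 1.1-1.2).
    cloudy    = random.random() < 0.5
    sprinkler = random.random() < (0.1 if cloudy else 0.5)
    rain      = random.random() < (0.8 if cloudy else 0.2)
    wet       = random.random() < (0.9 if (sprinkler or rain) else 0.0)
    return cloudy, wet

N = 100_000
both = cloudy_runs = 0
for _ in range(N):
    c, w = sample_once()
    cloudy_runs += c
    both += c and w

print(both / cloudy_runs)  # step 2: estimate of P(WetGrass | Cloudy)
```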
