
Probabilistic models for corpora and graphs




Presentation Transcript


  1. Probabilistic models for corpora and graphs

  2. Review: some generative models
  • Multinomial Naïve Bayes
  • For each document d = 1, …, M
    • Generate C_d ~ Mult(C | π)
    • For each position n = 1, …, N_d
      • Generate w_n ~ Mult(W | γ)
  [Plate diagram: class node C with prior π; word nodes W_1 … W_N with parameter γ, in a plate of size M]

  3. Review: some generative models
  • Multinomial Naïve Bayes
  • For each document d = 1, …, M
    • Generate C_d ~ Mult(C | π)
    • For each position n = 1, …, N_d
      • Generate w_n ~ Mult(W | γ[C_d])
  [Plate diagram: as before, but now with K word multinomials γ[1] … γ[K], one per class]

  4. Review: some generative models
  • Multinomial Naïve Bayes
  • For each class k = 1, …, K
    • Generate γ[k] ~ …
  • For each document d = 1, …, M
    • Generate C_d ~ Mult(C | π)
    • For each position n = 1, …, N_d
      • Generate w_n ~ Mult(W | γ[C_d])
  [Plate diagram: class node C, words W_1 … W_N in a plate of size M, and γ[k] in a plate of size K]

  5. Review: some generative models
  • Multinomial Naïve Bayes
  • For each class k = 1, …, K
    • Generate γ[k] ~ Dir(β)
  • For each document d = 1, …, M
    • Generate C_d ~ Mult(C | π)
    • For each position n = 1, …, N_d
      • Generate w_n ~ Mult(W | γ[C_d])
  • The Dirichlet is a prior for multinomials, defined by K parameters (β_1, …, β_K), with β_0 = β_1 + … + β_K
  • MAP estimate: P(w = i | β) = (n_i + β_i) / (n + β_0)
  • Symmetric Dirichlet: all β_i’s are equal
  [Plate diagram: as before, with β as the prior on each γ[k]]
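As a quick illustration of the smoothed estimate on this slide, here is a minimal Python sketch; the toy counts and the β values are made up for illustration:

```python
import numpy as np

# Toy word counts for one class, with a symmetric Dirichlet prior (all beta_i equal).
counts = np.array([5, 0, 3, 2])   # n_i: observed count of each word type
beta = np.full(4, 0.5)            # prior parameters (beta_1, ..., beta_K)
beta0 = beta.sum()                # beta_0 = beta_1 + ... + beta_K

# Smoothed estimate from the slide: P(w = i | beta) = (n_i + beta_i) / (n + beta_0)
estimate = (counts + beta) / (counts.sum() + beta0)
print(estimate)                   # zero-count words still get some probability mass
```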

  6. Review – unsupervised Naïve Bayes
  • Mixture model: unsupervised naïve Bayes model
  • For each class k = 1, …, K
    • Generate γ[k] ~ Dir(β)
  • For each document d = 1, …, M
    • Generate C_d ~ Mult(C | π)
    • For each position n = 1, …, N_d
      • Generate w_n ~ Mult(W | γ[C_d])
  • Same generative story – different learning problem: the class is now an unobserved Z
  [Plate diagram: latent class Z, words W in a plate of size N, documents in a plate of size M, γ[k] in a plate of size K with prior β]

  7. “Mixed membership”
  • Latent Dirichlet Allocation
  • For each document d = 1, …, M
    • Generate θ_d ~ Dir(· | α)
    • For each position n = 1, …, N_d
      • Generate z_n ~ Mult(· | θ_d)
      • Generate w_n ~ Mult(· | γ[z_n])
  [Plate diagram: α → θ_d → z_n → w_n, positions in a plate of size N inside a document plate of size M; topic multinomials γ[k] in a plate of size K with prior β]
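The generative story translates almost line by line into code. A minimal Python sketch under assumed toy sizes and symmetric priors; the function and variable names are ours, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_lda_corpus(M, K, V, doc_len, alpha, beta):
    """Sample a toy corpus from the LDA generative story above."""
    gamma = rng.dirichlet(np.full(V, beta), size=K)  # gamma[k]: topic k's word dist.
    docs = []
    for d in range(M):
        theta_d = rng.dirichlet(np.full(K, alpha))   # theta_d ~ Dir(alpha)
        words = []
        for n in range(doc_len):
            z_n = rng.choice(K, p=theta_d)           # z_n ~ Mult(theta_d)
            w_n = rng.choice(V, p=gamma[z_n])        # w_n ~ Mult(gamma[z_n])
            words.append(w_n)
        docs.append(words)
    return docs

docs = generate_lda_corpus(M=3, K=2, V=6, doc_len=4, alpha=0.1, beta=0.01)
```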

  8. LDA’s view of a document

  9. Review – LDA
  • Gibbs sampling
    • Applicable when the joint distribution is hard to evaluate but the conditional distributions are known
    • The sequence of samples forms a Markov chain
    • The stationary distribution of the chain is the joint distribution
  • Key capability: estimate the distribution of one latent variable given the other latent variables and the observed variables.

  10. D1: “Happy Thanksgiving!” D2: “Thanksgiving turkey” D3: “Istanbul, Turkey” K=2

  11. D1: “Happy Thanksgiving!” D2: “Thanksgiving turkey” D3: “Istanbul, Turkey” K=2
  [Plate diagram: the LDA model instantiated for the three documents: θ_1, θ_2, θ_3 with prior α; topic indicators z and words w in plates of sizes N_1, N_2, N_3; topic multinomials γ with prior β, K=2]

  12. D1: “Happy Thanksgiving!” D2: “Thanksgiving turkey” D3: “Istanbul, Turkey” K=2
  [Diagram: the model fully unrolled: indicators z_11, z_12 over tokens “happy”, “thank”; z_21, z_22 over “thank”, “turkey”; z_31, z_32 over “istam”, “turkey”; priors α, β; K=2]

  13. D1: “Happy Thanksgiving!” D2: “Thanksgiving turkey” D3: “Istanbul, Turkey” K=2
  • Step 1: initialize z’s randomly
  [Diagram: a random assignment, z = (1, 2, 2, 1, 1, 1), over tokens “happy”, “thank”, “thank”, “turkey”, “istam”, “turkey”; priors α, β; K=2]

  14. D1: “Happy Thanksgiving!” D2: “Thanksgiving turkey” D3: “Istanbul, Turkey” K=2
  • Step 1: initialize z’s randomly
  • Step 2: sample z_11 from its conditional given all the other z’s (the two factors are spelled out on the next slides)
  • Slow derivation: https://lingpipe.files.wordpress.com/2010/07/lda3.pdf
  [Diagram: the random assignment from the previous slide]

  15. D1: “Happy Thanksgiving!” D2: “Thanksgiving turkey” D3: “Istanbul, Turkey” K=2
  • Word-topic factor in the conditional for z_11:
    (#times “happy” appears in topic t) / (#words in topic t), smoothed by β:
    (n_{happy|t} + β) / (n_{·|t} + βV)
  [Diagram: the random assignment from the previous slide]

  16. D1: “Happy Thanksgiving!” D2: “Thanksgiving turkey” D3: “Istanbul, Turkey” K=2
  • Document-topic factor in the conditional for z_11:
    (#times topic t appears in D1) / (#words in D1), smoothed by α:
    (n_{t|D1} + α) / (N_1 + Kα)
  [Diagram: the random assignment from the previous slide]

  17. D1: “Happy Thanksgiving!” D2: “Thanksgiving turkey” D3: “Istanbul, Turkey” K=2
  • Step 1: initialize z’s randomly
  • Step 2: sample z_11 from: Pr(z_11 = k | z_12, …, z_32, α, β)
  • Step 3: sample z_12, and continue through the remaining z’s
  [Diagram: the random assignment from the previous slide]
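Putting steps 1–3 together: a minimal collapsed Gibbs sketch for the toy corpus, using the two factors from slides 15–16. The hand tokenization and the hyperparameter values are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus from the slides, tokenized by hand (our assumption).
docs = [["happy", "thank"], ["thank", "turkey"], ["istam", "turkey"]]
vocab = sorted({w for doc in docs for w in doc})
w_id = {w: i for i, w in enumerate(vocab)}
K, V, alpha, beta = 2, len(vocab), 0.1, 0.01

# Step 1: initialize z's randomly, then build the count tables.
z = [[rng.integers(K) for _ in doc] for doc in docs]
n_td = np.zeros((len(docs), K))    # n_{t|d}: topic counts per document
n_wt = np.zeros((K, V))            # n_{w|t}: word counts per topic
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        n_td[d, z[d][i]] += 1
        n_wt[z[d][i], w_id[w]] += 1

# Steps 2-3, repeated: resample each z from Pr(z=k | the other z's, alpha, beta).
for _ in range(100):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k, v = z[d][i], w_id[w]
            n_td[d, k] -= 1; n_wt[k, v] -= 1   # remove this token's counts
            # (n_{w|t}+beta)/(n_{.|t}+beta*V) * (n_{t|d}+alpha), as on the slides
            p = (n_wt[:, v] + beta) / (n_wt.sum(axis=1) + beta * V) \
                * (n_td[d] + alpha)
            k = rng.choice(K, p=p / p.sum())
            z[d][i] = k
            n_td[d, k] += 1; n_wt[k, v] += 1   # add the counts back
```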

  18. Models for corpora and graphs

  19. Stochastic block models: assume (1) nodes within a block z are exchangeable, and (2) edges between blocks z_p, z_q are exchangeable
  • For each node i, pick a latent class z_i
  • For each node pair (i, j), pick an edge weight a_ij by a_ij ~ Pr(a | b[z_i, z_j])
  [Plate diagram: latent classes z_i, z_j (prior α) over N nodes; edge weights a_uv over N² node pairs, with edge parameters b]
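A minimal sketch of this generative story, assuming binary (Bernoulli) edge weights for concreteness; all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_sbm(N, K, class_prior, block_probs):
    """Toy stochastic block model: a latent class per node, then an edge per pair.

    block_probs[z_i, z_j] plays the role of b[z_i, z_j]; edges are Bernoulli here.
    """
    z = rng.choice(K, size=N, p=class_prior)    # latent class z_i per node
    adj = np.zeros((N, N), dtype=int)
    for i in range(N):
        for j in range(N):                      # directed; self-pairs kept for simplicity
            # edge weight a_ij ~ Pr(a | b[z_i, z_j])
            adj[i, j] = rng.random() < block_probs[z[i], z[j]]
    return z, adj

z, adj = generate_sbm(N=10, K=2, class_prior=[0.5, 0.5],
                      block_probs=np.array([[0.8, 0.05], [0.05, 0.8]]))
```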

  20. Another mixed membership block model
  • Pick a multinomial θ over pairs of node topics (k1, k2)
  • For each node topic k, pick a multinomial ϕ_k over node ids
  • For each edge e:
    • pick z = (k1, k2) ~ θ
    • pick i ~ Pr(· | ϕ_{k1})
    • pick j ~ Pr(· | ϕ_{k2})
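A corresponding sketch of this edge-centric story; the shapes of θ and ϕ and all names are our reading of the slide:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_edges(E, K, N, theta, phi):
    """Draw E edges: a topic pair from theta, then one endpoint from each phi."""
    edges = []
    for _ in range(E):
        # pick z = (k1, k2) ~ theta, a multinomial over topic pairs
        k1, k2 = np.unravel_index(rng.choice(K * K, p=theta.ravel()), (K, K))
        i = rng.choice(N, p=phi[k1])   # source endpoint ~ Pr(. | phi_{k1})
        j = rng.choice(N, p=phi[k2])   # target endpoint ~ Pr(. | phi_{k2})
        edges.append((i, j))
    return edges

theta = np.full((2, 2), 0.25)          # uniform over the 4 topic pairs
phi = np.full((2, 5), 0.2)             # uniform over 5 node ids, per topic
edges = generate_edges(E=10, K=2, N=5, theta=theta, phi=phi)
```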

  21. Speeding up modeling for corpora and graphs

  22. Solutions
  • Parallelize
    • IPM (Newman et al., AD-LDA)
    • Parameter server (Qiao et al., HW7)
  • Speedups:
    • Exploit sparsity
    • Fast sampling techniques
  • State-of-the-art methods use both.

  23. [Diagram: sampling z by dropping a uniform draw of unit height onto the stacked topic probabilities z=1, z=2, z=3, …]
  • This is almost all the run-time, and it’s linear in the number of topics… and bigger corpora need more topics…

  24. KDD 09 [figure: the SparseLDA paper (Yao, Mimno & McCallum, KDD 2009)]

  25. [Diagram: the same unit-height sampling line, with each topic’s mass z=t1, z=t2, z=t3, … broken into segments]

  26. [Diagram: the total sampling mass decomposed into three buckets, z = s + r + q]

  27. • If U < s:
    • look up U on the line segment with tic-marks at α_1β/(βV + n_{·|1}), α_2β/(βV + n_{·|2}), …
  • If s < U < s + r:
    • look up U on the line segment for r
    • Only need to check t such that n_{t|d} > 0
  [Diagram: the z = s + r + q buckets]

  28. • If U < s:
    • look up U on the line segment with tic-marks at α_1β/(βV + n_{·|1}), α_2β/(βV + n_{·|2}), …
  • If s < U < s + r:
    • look up U on the line segment for r
  • If s + r < U:
    • look up U on the line segment for q
    • Only need to check t such that n_{w|t} > 0
  [Diagram: the z = s + r + q buckets]
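The three cases translate into a sampler like the following sketch. It follows the slides’ case analysis, but the sparse-count layout (dicts keyed by topic) and all names are our choices:

```python
import numpy as np

def sample_topic(alpha, beta, V, n_td, n_wt, n_t, rng):
    """Sketch of the s+r+q bucket sampler described on the slides.

    alpha: array of per-topic hyperparameters; n_t: total words per topic;
    n_td: {topic: n_{t|d}} for the current document (nonzero entries only);
    n_wt: {topic: n_{w|t}} for the current word (nonzero entries only).
    """
    K = len(n_t)
    coef = 1.0 / (beta * V + n_t)                           # 1/(beta*V + n_{.|t})
    s = float(np.sum(alpha * beta * coef))                  # smoothing bucket
    r = sum(n * beta * coef[t] for t, n in n_td.items())    # doc-topic bucket
    q = sum((alpha[t] + n_td.get(t, 0)) * n * coef[t]       # topic-word bucket
            for t, n in n_wt.items())

    U = rng.random() * (s + r + q)
    if U < s:                              # dense scan, but rarely taken
        for t in range(K):
            U -= alpha[t] * beta * coef[t]
            if U <= 0:
                return t
    elif U < s + r:                        # only t with n_{t|d} > 0
        U -= s
        for t, n in n_td.items():
            U -= n * beta * coef[t]
            if U <= 0:
                return t
    else:                                  # only t with n_{w|t} > 0
        U -= s + r
        for t, n in n_wt.items():
            U -= (alpha[t] + n_td.get(t, 0)) * n * coef[t]
            if U <= 0:
                return t
    return K - 1                           # floating-point safety net
```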

  29. • The s bucket only needs a full scan occasionally (it is hit < 10% of the time)
  • The r bucket: only need to check t such that n_{t|d} > 0
  • The q bucket: only need to check t such that n_{w|t} > 0
  [Diagram: the z = s + r + q buckets]

  30. • Trick: count up n_{t|d} for d when you start working on d, and update it incrementally
  • For s: only need to store (and maintain) the total words per topic, plus the α’s, β, and V
  • For r: only need to store n_{t|d} for the current d
  • For q: need to store n_{w|t} for each word-topic pair …???
  [Diagram: the z = s + r + q buckets]

  31. 1. Precompute, for each t, …
  2. Quickly find the t’s such that n_{w|t} is large for w
  • Most (> 90%) of the time and space is here…
  • Need to store n_{w|t} for each word-topic pair …???
  [Diagram: the z = s + r + q buckets]

  32. 1. Precompute, for each t, …
  2. Quickly find the t’s such that n_{w|t} is large for w:
  • map w to an int array
    • no larger than the frequency of w
    • no larger than #topics
  • encode (t, n) as a bit vector
    • n in the high-order bits
    • t in the low-order bits
  • keep the ints sorted in descending order
  • Most (> 90%) of the time and space is here…
  • Need to store n_{w|t} for each word-topic pair …???
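A sketch of the (t, n) packing just described; the 10-bit topic field is an assumption (any width wide enough for K topics works):

```python
TOPIC_BITS = 10                    # assumes K <= 1024 topics
TOPIC_MASK = (1 << TOPIC_BITS) - 1

def encode(t, n):
    return (n << TOPIC_BITS) | t   # n in the high-order bits, t in the low-order bits

def decode(x):
    return x & TOPIC_MASK, x >> TOPIC_BITS   # back to (t, n)

# Sorting the packed ints in descending order sorts by the count n first, so a
# scan over the array visits the topics with large n_{w|t} before the rest.
entries = sorted((encode(t, n) for t, n in [(3, 7), (0, 2), (5, 9)]), reverse=True)
print([decode(x) for x in entries])          # [(5, 9), (3, 7), (0, 2)]
```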

  33. Outline
  • LDA/Gibbs algorithm details
  • How to speed it up by parallelizing
  • How to speed it up by faster sampling
    • Why sampling is key
    • Some sampling ideas for LDA
  • The Mimno/McCallum decomposition (SparseLDA)
  • Alias tables (Walker 1977; Li, Ahmed, Ravi, Smola KDD 2014)

  34. Alias tables
  • Basic problem: how can we sample from a biased coin quickly? Naïvely this is O(K).
  • If the distribution changes slowly, maybe we can do some preprocessing and then sample multiple times.
  • Proof of concept: generate r ~ uniform and use a binary tree, e.g. locating r in (23/40, 7/10]. This takes O(log2 K) per sample.
  • http://www.keithschwarz.com/darts-dice-coins/
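The binary-tree proof of concept can be sketched with a cumulative array plus binary search, which gives the same O(log2 K) lookup; the example weights are ours:

```python
import bisect
import random

p = [0.125, 0.45, 0.125, 0.3]       # an arbitrary biased "coin" with K = 4 sides
cdf = []
acc = 0.0
for pi in p:                        # one-time O(K) preprocessing
    acc += pi
    cdf.append(acc)

def sample():
    r = random.random()             # r ~ uniform(0, 1)
    return bisect.bisect_left(cdf, r)   # binary search: O(log2 K) per draw

counts = [0] * len(p)
for _ in range(10000):
    counts[sample()] += 1           # empirical frequencies approach p
```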

  35. Alias tables
  • Another idea… simulate the dart with two drawn values:
    • rx ← int(u1 * K)
    • ry ← u2 * pmax
  • Keep throwing until you hit a stripe.
  • http://www.keithschwarz.com/darts-dice-coins/
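As a sketch, reading the slide’s second draw as an independent uniform u2:

```python
import random

p = [0.125, 0.45, 0.125, 0.3]
K, pmax = len(p), max(p)

def dart_sample():
    # throw darts at the K-by-pmax bounding box until one lands under a stripe
    while True:
        u1, u2 = random.random(), random.random()
        rx = int(u1 * K)            # which column the dart falls in
        ry = u2 * pmax              # how high the dart lands
        if ry < p[rx]:              # hit the stripe rather than the empty space
            return rx
```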

  36. Alias tables
  • An even more clever idea: minimize the brown space (where the dart “misses”) by sizing the rectangle’s height to the average probability rather than the maximum probability, and cutting and pasting a bit.
  • You can always do this using only two colors in each column of the final alias table, and the dart never misses! (Mathematically speaking…)
  • http://www.keithschwarz.com/darts-dice-coins/
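A minimal alias-table construction in the spirit of this slide (Vose-style cut and paste); the helper names and example weights are ours:

```python
import random

def build_alias(p):
    """Rescale so the average column height is 1, then pair each short column
    with exactly one donor, so every column has at most two colors."""
    K = len(p)
    scaled = [pi * K for pi in p]
    prob, alias = [0.0] * K, [0] * K
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l       # column s: colors s and l
        scaled[l] -= 1.0 - scaled[s]           # cut the donated mass from l
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                    # leftovers are (numerically) full
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias):
    i = random.randrange(len(prob))            # pick a column uniformly
    return i if random.random() < prob[i] else alias[i]   # the dart never misses

prob, alias = build_alias([0.125, 0.45, 0.125, 0.3])
```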

  37. KDD 2014
  • Key ideas:
    • use a variant of the Mimno/McCallum decomposition
    • use alias tables to sample from the dense parts
    • since the alias table gradually goes stale, use Metropolis-Hastings sampling instead of Gibbs

  38. KDD 2014
  • q is a stale, easy-to-draw-from distribution
  • p is the updated distribution
  • computing the ratios p(i)/q(i) is cheap
  • usually the ratio is close to one
  [Figure: the acceptance test; in the “else” branch the dart missed]
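A sketch of one Metropolis-Hastings step with a stale proposal, matching the description above (independence-sampler acceptance ratio; all names are ours):

```python
import random

def mh_step(current, p, q, draw_from_q):
    """One MH step: propose from the stale q, accept using p/q ratios.

    p and q map a topic to an (unnormalized, but consistently scaled)
    probability; draw_from_q samples from q, e.g. via a cached alias table.
    """
    proposed = draw_from_q()
    # acceptance uses the cheap ratios p(i)/q(i); when the table is only
    # slightly stale, p and q are close and this is usually near one
    accept = min(1.0, (p(proposed) * q(current)) / (p(current) * q(proposed)))
    return proposed if random.random() < accept else current
```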

  39. KDD 2014
