
CS 679: Text Mining


Presentation Transcript


  1. This work is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License. CS 679: Text Mining Lecture #13: Gibbs Sampling for LDA Credit: Many slides are from presentations by Tom Griffiths of Berkeley.

  2. Announcements • Required reading for Today • Griffiths & Steyvers: “Finding Scientific Topics” • Final Project Proposal • Clear, detailed: ideally, the first half of your project report! • Talk to me about ideas • Teams are an option • Due date to be specified

  3. Objectives • Gain further understanding of LDA • Understand the intractability of inference with the model • Gain further insight into Gibbs sampling • Understand how to estimate the parameters of interest in LDA using a collapsed Gibbs sampler

  4. Latent Dirichlet Allocation (slightly different symbols this time) (Blei, Ng, & Jordan, 2001; 2003) • Dirichlet priors α and β • θ^(d) ~ Dirichlet(α): distribution over topics for each document d • φ^(j) ~ Dirichlet(β), for each of the T topics: distribution over words for topic j • z_i ~ Categorical(θ^(d)): topic assignment for each word position i • w_i ~ Categorical(φ^(z_i)): word generated from its assigned topic • (Plate notation: Nd word positions per document, D documents)
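To make the generative story concrete, here is a minimal NumPy sketch of the process described on this slide; the corpus sizes and hyperparameter values are invented for illustration and are not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D, Nd, T, W = 5, 10, 2, 8        # documents, words per document, topics, vocabulary size (made up)
alpha, beta = 0.1, 0.01          # symmetric Dirichlet hyperparameters (made up)

phi = rng.dirichlet(np.full(W, beta), size=T)      # phi[j]: distribution over words for topic j
theta = rng.dirichlet(np.full(T, alpha), size=D)   # theta[d]: distribution over topics for document d

docs = []
for d in range(D):
    z = rng.choice(T, size=Nd, p=theta[d])                 # topic assignment for each word position
    w = np.array([rng.choice(W, p=phi[j]) for j in z])     # each word drawn from its assigned topic
    docs.append((w, z))
```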

  5. The Statistical Problem of Meaning • Generating data from parameters is easy • Learning parameters from data is hard • What does it mean to identify the “meaning” of a document?

  6. Estimation of the LDA Generative Model • Maximum likelihood estimation (EM) • Similar to the method presented by Hofmann for pLSI (1999) • Deterministic approximate algorithms • Variational EM (Blei, Ng & Jordan, 2001, 2003) • Expectation propagation (Minka & Lafferty, 2002) • Markov chain Monte Carlo – our focus • Full Gibbs sampler (Pritchard et al., 2000) • Collapsed Gibbs sampler (Griffiths & Steyvers, 2004) • The papers you read for today

  7. Review: Markov Chain Monte Carlo (MCMC) • Sample from a Markov chain that converges to the target distribution • Allows sampling from an unnormalized posterior distribution • Can compute approximate statistics from intractable distributions (MacKay, 2002)

  8. Review: Gibbs Sampling • Most straightforward kind of MCMC • For variables x_1, …, x_n • Requires the full (or "complete") conditional distribution for each variable: draw x_i^(t) from P(x_i | x_-i), where x_-i = x_1^(t), x_2^(t), …, x_{i-1}^(t), x_{i+1}^(t-1), …, x_n^(t-1)
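As a concrete illustration of that sweep (not from the slides), here is a minimal Gibbs sampler for a toy two-variable target: a standard bivariate Gaussian with an assumed correlation rho, for which both full conditionals are available in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.8                  # correlation of the assumed bivariate Gaussian target
x1, x2 = 0.0, 0.0          # arbitrary initial state
samples = []

for t in range(5000):
    # Each variable is drawn from its full conditional given the current values of the others.
    x1 = rng.normal(rho * x2, np.sqrt(1 - rho**2))   # p(x1 | x2)
    x2 = rng.normal(rho * x1, np.sqrt(1 - rho**2))   # p(x2 | x1), using the just-updated x1
    samples.append((x1, x2))

samples = np.array(samples[1000:])   # discard burn-in before computing statistics
```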

  9. Bayesian Inference in LDA • We would like to reason with the full joint distribution: P(w, z, Θ, Φ | α, β) • Given w, the distribution over the latent variables, P(z, Θ, Φ | w, α, β), is desirable, but the denominator (the marginal likelihood P(w | α, β)) is intractable to compute • Instead, we marginalize the model parameters out of the joint distribution so that we can focus on the words in the corpus (w) and their assigned topics (z): P(w, z | α, β) = ∫∫ P(w, z, Θ, Φ | α, β) dΘ dΦ • This leads to our use of the term "collapsed sampler"

  10. Posterior Inference in LDA • From this marginalized joint distribution, we can compute the posterior distribution over topics for a given corpus (w): P(z | w) = P(w, z) / Σ_z P(w, z) • But there are T^n possible topic assignments z, where n is the number of tokens in the corpus! • i.e., inference is still intractable! • Working with this topic posterior is only tractable up to a constant multiple: P(z | w) ∝ P(w, z)

  11. Collapsed Gibbs Sampler for LDA • We're now focusing on the topic posterior, namely: P(z | w) ∝ P(w, z) = P(w | z) P(z) • Let's find these factors by marginalizing separately: P(w | z) = ∫ P(w | z, Φ) P(Φ | β) dΦ = ∏_{j=1}^T [Γ(Wβ) / Γ(β)^W] [∏_w Γ(n_j^(w) + β)] / Γ(n_j^(·) + Wβ) and P(z) = ∫ P(z | Θ) P(Θ | α) dΘ = ∏_{d=1}^D [Γ(Tα) / Γ(α)^T] [∏_j Γ(n_j^(d) + α)] / Γ(n_·^(d) + Tα) • Where: n_j^(w) is the number of times word w is assigned to topic j, n_j^(·) is the total number of tokens assigned to topic j, n_j^(d) is the number of times topic j is used in document d, and n_·^(d) is the number of tokens in document d

  12. Collapsed Gibbs Sampler for LDA • We only sample each z_i! • Complete (or full) conditionals can now be derived for each z_i in z: P(z_i = j | z_-i, w) ∝ [(n_{-i,j}^(w_i) + β) / (n_{-i,j}^(·) + Wβ)] · [(n_{-i,j}^(d_i) + α) / (n_{-i,·}^(d_i) + Tα)] • Where: d_i is the document in which word w_i occurs, n_{-i,j}^(w) is the number of times (ignoring position i) word w is assigned to topic j, and n_{-i,j}^(d) is the number of times (ignoring position i) topic j is used in document d
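A sketch of one such update in code, assuming the sampler maintains count arrays in the spirit of the slide's notation (n_wt[w, j]: times word w is assigned to topic j; n_dt[d, j]: times topic j is used in document d; n_t[j]: total tokens assigned to topic j); the variable names are mine, not the papers'.

```python
import numpy as np

def resample_token(i, w, d, z, n_wt, n_dt, n_t, alpha, beta, rng):
    """One collapsed-Gibbs update: resample z[i] for the token with word id w in document d."""
    W = n_wt.shape[0]
    j_old = z[i]
    # Remove token i from the counts (the "-i" counts on the slide).
    n_wt[w, j_old] -= 1
    n_dt[d, j_old] -= 1
    n_t[j_old] -= 1
    # Unnormalized P(z_i = j | z_-i, w); the per-document denominator is constant in j and drops out.
    p = (n_wt[w, :] + beta) / (n_t + W * beta) * (n_dt[d, :] + alpha)
    j_new = rng.choice(len(p), p=p / p.sum())
    # Add token i back under its newly sampled topic.
    n_wt[w, j_new] += 1
    n_dt[d, j_new] += 1
    n_t[j_new] += 1
    z[i] = j_new
```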

  13. Steps for deriving the complete conditionals • Begin with the full joint distribution over the data, latent variables, and model parameters, given the fixed hyperparameters α and β of the prior distributions. • Write out the desired collapsed joint distribution and set it equal to the appropriate integral over the full joint in order to marginalize over Θ and Φ. • Perform algebra and group like terms. • Expand the generic notation by applying the closed-form definitions of the Multinomial, Categorical, and Dirichlet distributions. • Transform the representation: change the product indices from products over documents and word sequences to products over cluster labels and token counts. • Simplify by combining products, adding exponents, and pulling constant multipliers outside of integrals. • When you have integrals over terms in the form of the kernel of the Dirichlet distribution, consider how to evaluate them using the known Dirichlet normalizing constant. • Once you have the expression for the collapsed joint, derive the expression for the complete conditional P(z_i | z_-i, w).

  14. Collapsed Gibbs Sampler for LDA For t = 1 to the number of iterations: For each of the n topic-assignment variables (i.e., for i = 1 to n): Draw z_i^(t) from P(z_i | z_-i, w)
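Continuing the hypothetical sketch from slide 12, the outer loop mirrors this slide directly (names are again illustrative):

```python
def run_collapsed_gibbs(word_ids, doc_ids, z, n_wt, n_dt, n_t, alpha, beta, rng, n_iter=1000):
    """Repeatedly sweep over all tokens, reusing resample_token from the earlier sketch."""
    for t in range(n_iter):                 # For t = 1 to the number of iterations
        for i in range(len(word_ids)):      # For i = 1 to n (every token in the corpus)
            resample_token(i, word_ids[i], doc_ids[i], z, n_wt, n_dt, n_t, alpha, beta, rng)
```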

  15. Collapsed Gibbs Sampler for LDA • This is nicer than your average Gibbs sampler: • Memory: the counts (the "n" counts) can be cached in two sparse matrices • No special functions, simple arithmetic • The distributions on Θ and Φ are analytic given the topic assignments z and the words w, and can later be recomputed from the samples in a given iteration of the sampler: • Φ from z, w: φ_j^(w) = (n_j^(w) + β) / (n_j^(·) + Wβ) • Θ from z: θ_j^(d) = (n_j^(d) + α) / (n_·^(d) + Tα)
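Under the same made-up count-array names as in the earlier sketches, reading off these point estimates from a single sample of assignments looks roughly like this:

```python
def estimate_phi_theta(n_wt, n_dt, alpha, beta):
    """Point estimates of Phi and Theta from one sample of topic assignments (counts as above)."""
    W, T = n_wt.shape
    phi = (n_wt + beta) / (n_wt.sum(axis=0) + W * beta)                      # phi[w, j] ≈ P(w | topic j)
    theta = (n_dt + alpha) / (n_dt.sum(axis=1, keepdims=True) + T * alpha)   # theta[d, j] ≈ P(topic j | doc d)
    return phi, theta
```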

  16–24. Gibbs sampling in LDA (figure sequence: a toy corpus with T = 2 topics, Nd = 10 words per document, and M = 5 documents; the figures show the topic assignments z being resampled over iterations 1, 2, …, 1000)

  25. A Visual Example: Bars • Sample each pixel from a mixture of topics • pixel = word, image = document • A toy problem: just a metaphor for inference on text.
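A rough sketch of how such toy "bars" data could be generated (the grid size and token counts below are arbitrary choices for illustration, not the values used in the slides): each true topic puts uniform mass on one row or one column of a small grid, and each image is a bag of pixel indices drawn from a mixture of those bar topics.

```python
import numpy as np

rng = np.random.default_rng(2)
side = 5                    # images are side x side grids; "words" are pixel indices (arbitrary size)
W = side * side

# True topics: one horizontal bar per row and one vertical bar per column.
bars = []
for r in range(side):
    img = np.zeros((side, side)); img[r, :] = 1.0; bars.append(img.ravel())
for c in range(side):
    img = np.zeros((side, side)); img[:, c] = 1.0; bars.append(img.ravel())
phi = np.array([b / b.sum() for b in bars])      # shape (2 * side, W): P(pixel | topic)

def generate_image(n_tokens=100, alpha=1.0):
    """One 'document': a bag of pixel indices sampled from a random mixture of bar topics."""
    theta = rng.dirichlet(np.full(len(phi), alpha))
    z = rng.choice(len(phi), size=n_tokens, p=theta)
    return np.array([rng.choice(W, p=phi[j]) for j in z])
```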

  26. Documents generated from the topics.

  27. Evolution of the topics (the Φ matrix)

  28. Interpretable decomposition • SVD gives a basis for the data, but not an interpretable one • The true basis is not orthogonal, so rotation does no good

  29. Effects of Hyper-parameters • α and β control the relative sparsity of Θ and Φ • smaller α: fewer topics per document • smaller β: fewer words per topic • Good assignments z are a compromise in sparsity

  30. Bayesian model selection • How many topics do we need? • A Bayesian would consider the posterior: P(T | w) ∝ P(w | T) P(T) • Involves summing over assignments z
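One way to make this concrete (as in the Griffiths & Steyvers reading) is to approximate P(w | T) from the Gibbs samples via the harmonic mean of P(w | z) evaluated at each retained sample. The sketch below reuses the word-topic count matrix n_wt from the earlier hypothetical sketches and is only a rough illustration.

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def log_p_w_given_z(n_wt, beta):
    """log P(w | z) from the collapsed likelihood on slide 11 (n_wt[w, j]: word-topic counts)."""
    W, T = n_wt.shape
    n_t = n_wt.sum(axis=0)
    return (T * (gammaln(W * beta) - W * gammaln(beta))
            + gammaln(n_wt + beta).sum()
            - gammaln(n_t + W * beta).sum())

def log_p_w_given_T(log_likes):
    """Harmonic-mean estimate of log P(w | T) from log P(w | z) at each retained Gibbs sample."""
    ll = np.asarray(log_likes)
    return np.log(len(ll)) - logsumexp(-ll)
```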

  31. Sweeping T

  32. Analysis of PNAS abstracts • Used all D = 28,154 abstracts from 1991-2001 • Used any word occurring in at least five abstracts, not on “stop” list (W = 20,551) • Segmentation by any delimiting character, total of n = 3,026,970 word tokens in corpus • Also, PNAS class designations for 2001 (Acknowledgment: Kevin Boyack)
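A rough sketch of preprocessing in that style (split on any non-alphanumeric delimiter, drop stop words and words occurring in fewer than five abstracts); the stop list here is a placeholder, not the one used in the paper.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "of", "and", "in", "to", "a", "is", "for", "with", "that"}   # placeholder list

def build_vocab(abstracts, min_doc_freq=5):
    """Tokenize abstracts and keep words appearing in at least `min_doc_freq` abstracts."""
    tokenized = [[w for w in re.split(r"[^A-Za-z0-9]+", a.lower()) if w] for a in abstracts]
    doc_freq = Counter(w for toks in tokenized for w in set(toks))
    vocab = sorted(w for w, c in doc_freq.items() if c >= min_doc_freq and w not in STOP_WORDS)
    return tokenized, vocab
```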

  33. Running the algorithm • Memory requirements linear in T(W+D), runtime proportional to nT • T = 50, 100, 200, 300, 400, 500, 600, (1000) • Ran 8 chains for each T, burn-in of 1000 iterations, 10 samples/chain at a lag of 100 • All runs completed in under 30 hours on the Blue Horizon supercomputer at San Diego

  34. How many topics?

  35. Topics by Document Length

  36. A Selection of Topics (each column of the original slide lists the top words of one topic, ordered by P(w | z)): • STRUCTURE ANGSTROM CRYSTAL RESIDUES STRUCTURES STRUCTURAL RESOLUTION HELIX THREE HELICES DETERMINED RAY CONFORMATION HELICAL HYDROPHOBIC SIDE DIMENSIONAL INTERACTIONS MOLECULE SURFACE • NEURONS BRAIN CORTEX CORTICAL OLFACTORY NUCLEUS NEURONAL LAYER RAT NUCLEI CEREBELLUM CEREBELLAR LATERAL CEREBRAL LAYERS GRANULE LABELED HIPPOCAMPUS AREAS THALAMIC • TUMOR CANCER TUMORS HUMAN CELLS BREAST MELANOMA GROWTH CARCINOMA PROSTATE NORMAL CELL METASTATIC MALIGNANT LUNG CANCERS MICE NUDE PRIMARY OVARIAN • MUSCLE CARDIAC HEART SKELETAL MYOCYTES VENTRICULAR MUSCLES SMOOTH HYPERTROPHY DYSTROPHIN HEARTS CONTRACTION FIBERS FUNCTION TISSUE RAT MYOCARDIAL ISOLATED MYOD FAILURE • HIV VIRUS INFECTED IMMUNODEFICIENCY CD4 INFECTION HUMAN VIRAL TAT GP120 REPLICATION TYPE ENVELOPE AIDS REV BLOOD CCR5 INDIVIDUALS ENV PERIPHERAL • FORCE SURFACE MOLECULES SOLUTION SURFACES MICROSCOPY WATER FORCES PARTICLES STRENGTH POLYMER IONIC ATOMIC AQUEOUS MOLECULAR PROPERTIES LIQUID SOLUTIONS BEADS MECHANICAL

  37. Cold topics Hot topics

  38. Cold topics, Hot topics • Topic 2: SPECIES GLOBAL CLIMATE CO2 WATER ENVIRONMENTAL YEARS MARINE CARBON DIVERSITY OCEAN EXTINCTION TERRESTRIAL COMMUNITY ABUNDANCE • Topic 134: MICE DEFICIENT NORMAL GENE NULL MOUSE TYPE HOMOZYGOUS ROLE KNOCKOUT DEVELOPMENT GENERATED LACKING ANIMALS REDUCED • Topic 179: APOPTOSIS DEATH CELL INDUCED BCL CELLS APOPTOTIC CASPASE FAS SURVIVAL PROGRAMMED MEDIATED INDUCTION CERAMIDE EXPRESSION

  39. Cold topics, Hot topics • Topic 37: CDNA AMINO SEQUENCE ACID PROTEIN ISOLATED ENCODING CLONED ACIDS IDENTITY CLONE EXPRESSED ENCODES RAT HOMOLOGY • Topic 2: SPECIES GLOBAL CLIMATE CO2 WATER ENVIRONMENTAL YEARS MARINE CARBON DIVERSITY OCEAN EXTINCTION TERRESTRIAL COMMUNITY ABUNDANCE • Topic 289: KDA PROTEIN PURIFIED MOLECULAR MASS CHROMATOGRAPHY POLYPEPTIDE GEL SDS BAND APPARENT LABELED IDENTIFIED FRACTION DETECTED • Topic 75: ANTIBODY ANTIBODIES MONOCLONAL ANTIGEN IGG MAB SPECIFIC EPITOPE HUMAN MABS RECOGNIZED SERA EPITOPES DIRECTED NEUTRALIZING • Topic 134: MICE DEFICIENT NORMAL GENE NULL MOUSE TYPE HOMOZYGOUS ROLE KNOCKOUT DEVELOPMENT GENERATED LACKING ANIMALS REDUCED • Topic 179: APOPTOSIS DEATH CELL INDUCED BCL CELLS APOPTOTIC CASPASE FAS SURVIVAL PROGRAMMED MEDIATED INDUCTION CERAMIDE EXPRESSION

  40. Conclusions • Estimation/inference in LDA is more or less straightforward using Gibbs Sampling • i.e., easy! • Not so easy in all graphical models

  41. Coming Soon • Topical n-grams
