
Topic modeling experiments benchmark and simple evaluations






Presentation Transcript


  1. Topic modeling experiments: benchmark and simple evaluations (11/15, 11/16)

  2. What to measure
  • Empirical running time (1000 sampling iterations)
    * Both Dirichlet and Logistic Normal priors
    * Both a single prior and a mixture model
    * Both a small number (K=100) and a large number (K=1000) of topics
    * 4 different dataset sizes (1000, 2000, 5000, and 10000 posts)
  • Person/topic modeling quality
    * See the topic distribution of people
    * See what topics are extracted
    * Try both priors with K=50, 500, and 1000 topics on the 10000-post dataset
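
A minimal sketch of how this benchmark grid could be enumerated and timed. `run_sampler` is a hypothetical placeholder, not the actual sampler; only the configuration values come from the slides.

```python
import itertools
import time

PRIORS = ["dirichlet", "logistic_normal"]
STRUCTURES = ["single", "mixture"]          # single prior vs. mixture of 5 priors
NUM_TOPICS = [100, 1000]
DATASET_SIZES = [1000, 2000, 5000, 10000]   # number of posts

def run_sampler(prior, structure, k, n_posts, iterations=1000):
    """Hypothetical stand-in for the real Gibbs sampling routine."""
    pass

# Time every configuration in the grid described on the slide.
for prior, structure, k, n_posts in itertools.product(
        PRIORS, STRUCTURES, NUM_TOPICS, DATASET_SIZES):
    start = time.perf_counter()
    run_sampler(prior, structure, k, n_posts, iterations=1000)
    elapsed = time.perf_counter() - start
    print(f"{prior:16s} {structure:8s} K={k:<5d} posts={n_posts:<6d} "
          f"time={elapsed:.1f}s")
```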

  3. Running time comparison 1-1 (100 topics). Using the 4 dataset sizes, we assume a mixture of 5 priors for both the Dirichlet and the Logistic Normal mixture models. [Chart: running time (sec) vs. number of posts]

  4. Running time comparison 1-2: details (100 topics)

  5. Running time comparison 1-3: analysis (100 topics)
  • The mixture of Dirichlets is somewhat slower. This could be because we always have to keep track of the latest word-topic assignment history for each mixture component, and we have to copy a huge "z" matrix every sampling iteration.
  • The mixture of Dirichlets has a high initial #topic/word value, because we have to do the initial random topic assignment for each mixture component separately.
  • In contrast, there is almost no speed difference between a single Logistic Normal and the mixture of Logistic Normals, because we directly sample the person-specific topic distribution from the corresponding prior.
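
To make the two costs concrete, here is an illustrative sketch, not the actual sampler. The corpus size, token count per post, and mixture size are assumptions chosen to roughly match the slides' settings.

```python
import numpy as np

# Assumed sizes: 10000 posts of ~100 tokens, K=100 topics, 5 mixture components.
n_tokens, n_components, K = 10_000 * 100, 5, 100
rng = np.random.default_rng(0)

# Mixture of Dirichlets: a per-component copy of the word-topic assignment
# state "z" must be kept and refreshed, an O(n_tokens * n_components) cost
# paid on top of the sampling itself.
z = rng.integers(0, K, size=n_tokens)
z_history = [z.copy() for _ in range(n_components)]   # the expensive copies

# (Mixture of) Logistic Normals: the person-specific topic distribution is
# sampled directly from the corresponding prior, so no per-component copy of
# z is needed; a draw is a K-dimensional Gaussian pushed through a softmax.
eta = rng.normal(np.zeros(K), np.ones(K))
theta = np.exp(eta - eta.max())
theta /= theta.sum()
```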

  6. Running time comparison 2-1 (1000 topics). Using the 4 dataset sizes, we assume a mixture of 5 priors for both the Dirichlet and the Logistic Normal mixture models. To be added tomorrow. [Chart: running time (sec) vs. number of posts]

  7. Running time comparison 2-2: details (1000 topics). The complete set of running times will be available tomorrow.

  8. Running time comparison 2-3: analysis (1000 topics)
  • The running time for the single/mixture of Dirichlets is almost exactly proportional to the number of topics and to the size of the dataset.
  • The Logistic Normal prior (both single and mixture) shows a modest superlinear growth in the number of topics: a 10x growth in topic count results in a 14- or 15-fold growth in running time. With respect to the dataset size, the growth is linear.
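
One way to quantify that superlinearity is the slope of running time against topic count on a log-log scale. In this sketch the timings are illustrative placeholders, not the measured results; they are chosen only to reproduce the 10x-to-14x relation the slide reports.

```python
import numpy as np

# Placeholder timings, NOT the measured results.
topics = np.array([100, 1000])
times_ln = np.array([120.0, 1700.0])   # hypothetical LN running times (sec)

# Slope of log(time) vs. log(K): 1.0 means linear scaling, >1 superlinear.
exponent = np.diff(np.log(times_ln)) / np.diff(np.log(topics))
print(f"empirical growth exponent ~ {exponent[0]:.2f}")   # ~1.15 here
```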

  9. Person/topic modeling quality. We present the user analysis in two ways:
  • 1. Showing a topic-distribution heatmap of the set of people
  • 2. Projecting each person's K-dimensional vector into 2 dimensions (x-y coordinates) using multidimensional scaling (a sketch follows below)
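
A minimal sketch of the MDS projection (method 2), assuming scikit-learn is available; `person_topic` is a hypothetical (n_people, K) matrix of per-person topic weights, filled here with random draws.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# Hypothetical person-topic matrix: 104 people, K=50 topics.
person_topic = rng.dirichlet(np.ones(50), size=104)

# Project the K-dimensional vectors to 2D (x-y coordinates) for plotting.
mds = MDS(n_components=2, random_state=0)
coords = mds.fit_transform(person_topic)   # shape (104, 2)
```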

  10. Multidimensional scaling for 104 people with a Dirichlet prior. [MDS plots: 50, 500, and 1000 topics]

  11. Multidimensional scaling for 104 people with a mixture of Dirichlet priors. [MDS plots: 50, 500, and 1000 topics]

  12. Multidimensional scaling for 104 people with a Logistic Normal prior. [MDS plots: 50, 500, and 1000 topics]

  13. Multidimensional scaling for 104 people with a mixture of Logistic Normal priors. [MDS plots: 50, 500, and 1000 topics]

  14. Topic distribution heatmap of 50 topics (Dirichlet). Users are sorted according to the k-means result with K=10. [Heatmaps: single prior vs. mixture of priors]

  15. Topic distribution heatmap of 50 topics (Logistic Normal). Users are sorted according to the k-means result with K=10. [Heatmaps: single prior vs. mixture of priors]
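
A minimal sketch of this heatmap presentation (method 1), assuming scikit-learn and matplotlib; the person-topic matrix is again a hypothetical random placeholder.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
person_topic = rng.dirichlet(np.ones(50), size=104)   # placeholder data

# Sort users by their k-means cluster (K=10 clusters), as on the slides.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(person_topic)
order = np.argsort(labels)

plt.imshow(person_topic[order], aspect="auto", cmap="viridis")
plt.xlabel("topic")
plt.ylabel("user (sorted by k-means cluster)")
plt.colorbar(label="topic weight")
plt.show()
```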

  16. Analysis - Dirichlet
  • Since we are using a large number of topics for a relatively small dataset (109 people), the Dirichlet prior tends to assign nearly the same weight to most topics, and because of the normalization method we are using, just a few topics tend to dominate the weight of a person's vector (the mixture model seems to alleviate this issue a little).

  17. Analysis - Logistic Normal
  • Since we are using a large number of topics for a relatively small dataset (109 people), the Dirichlet tends to assign nearly the same weight to most topics, and because of the normalization method we are using, just a few topics tend to dominate the weight of a person's vector (the mixture model seems to alleviate this issue a little).

  18. Issues
  • Person topic distribution estimation: with the (mixture of) Dirichlet, we can easily get a set of document-topic distributions for a person; we then need to estimate the person's overall topic distribution from them. Simply taking the product of the vectors suffers from numerical underflow. Right now we are using Rong's log-exponentiate ratio method, but it can assign excessively high weights to topics that already have relatively high weight. Is there an alternative that might preserve the true topic distribution better?
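
A sketch of the underflow problem and a generic log-space workaround (a geometric-mean-style aggregation normalized with log-sum-exp). This is not Rong's log-exponentiate ratio method; it only illustrates why the product has to be taken in log space.

```python
import numpy as np

def aggregate_log_space(doc_topic):
    """Combine one person's per-document topic distributions.

    doc_topic: (n_docs, K) array of distributions. The direct product
    np.prod(doc_topic, axis=0) underflows to exactly 0 for large n_docs;
    summing logs and normalizing with log-sum-exp avoids that.
    """
    log_theta = np.log(doc_topic + 1e-12).sum(axis=0)   # log of the product
    log_theta -= np.logaddexp.reduce(log_theta)         # stable normalization
    return np.exp(log_theta)

rng = np.random.default_rng(0)
docs = rng.dirichlet(np.ones(500), size=200)   # 200 documents, K=500 topics
print(np.prod(docs, axis=0).sum())             # 0.0 -- underflow
print(aggregate_log_space(docs).sum())         # 1.0 -- a valid distribution
```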

  19. Extracted topics with Dirichlet. [Topic lists: single Dirichlet with K=1000, Dirichlet mixture with K=1000, single Dirichlet with K=500]

  20. Extracted topics with Logistic Normal. [Topic lists: single LN with K=500, LN mixture with K=1000, single LN with K=500]

  21. Topic quality analysis
  • Both priors find some interesting and clean topics with K=500 or K=1000.
  • Still subjective, but the Logistic Normal seems somewhat better than the Dirichlet (generally frequent words seem to appear in fewer topics).
  • The Logistic Normal also seems to assign relatively lower tf-idf scores to common words than the Dirichlet does (a scoring sketch follows below).
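
One concrete way to compute such a score is the Blei-Lafferty term score, a tf-idf analogue for topic-word matrices; the slides may have used a different variant, and `phi` below is a hypothetical placeholder matrix.

```python
import numpy as np

def term_score(phi, eps=1e-12):
    """Blei-Lafferty term score for a (K, V) topic-word matrix `phi`:
    a word's weight in a topic times the log of that weight relative to
    its geometric mean across topics. Common words, prominent in many
    topics, sit near the geometric mean and therefore score low."""
    log_phi = np.log(phi + eps)
    return phi * (log_phi - log_phi.mean(axis=0))

rng = np.random.default_rng(0)
phi = rng.dirichlet(np.ones(2000), size=500)   # placeholder: K=500, V=2000
scores = term_score(phi)                       # compare these across priors
```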

  22. Next steps (in addition to parallelism)?
  • Explore a way to improve the theta estimation with the Dirichlet prior?
  • Build community-based data with demographic information?
