
Mixture Models on Graphs



  1. Mixture Models on Graphs Guido Sanguinetti Department of Computer Science, University of Sheffield Joint work with Josselin Noirel and Phillip Wright, Chemical and Process Engineering, Sheffield

  2. Basic question • Given high-throughput measurements comparing two conditions, identify groups of over-, under- and normal expression. • Biological quantities are linked in complex networks of interactions. • Can we incorporate the network structure in our classifiers/clustering algorithms?

  3. Traditional approach • Use various statistical hypothesis-testing tools (t-statistics, p-values, etc.). • A more Bayesian(ish) alternative: model the data as a mixture model. • Key assumption is that the data are i.i.d. (see graphical model). • Many variations on the theme (cyberT, PPLR, ...). [Graphical model: a plate over the N observations, with class variable c generating observation y.]
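In standard mixture-model notation (not transcribed from the slide itself), the i.i.d. assumption means the likelihood factorizes over observations, all sharing the same mixing weights:

```latex
p(\mathbf{y} \mid \boldsymbol{\pi}) \;=\; \prod_{j=1}^{N} \sum_{k=1}^{K} \pi_k \, p(y_j \mid c_j = k)
```

The network-based model on the next slides breaks exactly this factorization by letting the weights of node j depend on its neighbours.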

  4. Network-based approach • In practice we expect network structure to play a role: if many of your neighbours are overexpressed, you are more likely to be overexpressed too. • The graphical model is different. • This allows us to identify subnetworks with coherent expression patterns. [Graphical model: observations y1-y6 each generated by a class variable C1-C6; the class variables are coupled through the network edges X13, X23, X24, X35, X36.]

  5. Prior model • The graphical model suggests dependencies between the latent (class) variables. • We will encode these in conditional priors on the mixture coefficients. • Specifically, $p(\pi_j \mid c_{\mathcal{N}(j)}) \propto \prod_{i \in \mathcal{N}(j)} \pi_{j c_i}$, where $\mathcal{N}(j)$ denotes the set of indices of nodes that are connected to the j-th node. • Other possibilities have appeared recently (CRFs, spectral decompositions).
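A minimal sketch of how such a conditional prior could be implemented; the Dirichlet parameterization (a smoothing pseudo-count alpha0 plus the per-class neighbour counts, matching the reconstruction above) and the function name are assumptions, not taken from the slides:

```python
import numpy as np

K = 3  # classes: 0 = underexpressed, 1 = no change, 2 = overexpressed

def sample_mixture_weights(j, classes, neighbours, rng, alpha0=1.0):
    """Sample pi_j from its conditional prior given the neighbours' classes.

    Dirichlet pseudo-counts are alpha0 plus the per-class counts among the
    neighbour set N(j); alpha0 = 1 reproduces the prior reconstructed above.
    """
    counts = np.bincount([classes[i] for i in neighbours[j]], minlength=K)
    return rng.dirichlet(alpha0 + counts)
```

A node whose neighbours are mostly overexpressed thus gets mixture weights tilted towards the overexpressed class, which is the coupling the slide describes.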

  6. Class conditional model • We restrict ourselves to modelling log-expression ratios. • Three classes: overexpressed, underexpressed and no change. • We model the no-change class with a Gaussian, $p(y \mid \text{no change}) = \mathcal{N}(y \mid 0, \sigma^2)$. • The other two classes have longer tails and are modelled with exponential distributions, $p(y \mid \text{over}) = \lambda_+ e^{-\lambda_+ y}$ for $y > 0$, and similarly for underexpressed.
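A sketch of the three class-conditional densities as just described; restricting each exponential to the matching sign of y is an assumption about the model:

```python
import numpy as np

def class_likelihoods(y, sigma2, lam_minus, lam_plus):
    """Class-conditional likelihoods of a single log-expression ratio y.

    Class 1 (no change):     Gaussian N(0, sigma2).
    Class 2 (overexpressed):  lam_plus  * exp(-lam_plus * y) for y > 0.
    Class 0 (underexpressed): lam_minus * exp(lam_minus * y) for y < 0.
    The hard sign restriction on the exponential classes is an assumption.
    """
    gauss = np.exp(-y**2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)
    over = lam_plus * np.exp(-lam_plus * y) if y > 0 else 0.0
    under = lam_minus * np.exp(lam_minus * y) if y < 0 else 0.0
    return np.array([under, gauss, over])
```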

  7. Parameters and hyper-parameters • Normal variance $\sigma^2$ is set by the user. • Exponential parameters are given an improper prior, $p(\lambda_\pm) \propto 1/\lambda_\pm$. • This is equivalent to making no assumption about the scale of the $\lambda$'s.

  8. Conditional posteriors • Conditional posteriors can be obtained analytically for both class membership and exponential parameters: $p(c_j = k \mid y_j, \pi_j) \propto \pi_{jk}\, p(y_j \mid c_j = k)$ and $p(\lambda_\pm \mid \mathbf{y}, \mathbf{c}) = \mathrm{Gamma}\big(\lambda_\pm \mid N_\pm, \textstyle\sum_{i \in I_\pm} |y_i|\big)$, where $N_\pm$ is the number of elements in class $\pm$ and $I_\pm$ is the set of indices corresponding to class $\pm$.
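The Gamma form of the rate posterior follows directly from the exponential likelihood combined with the improper $1/\lambda$ prior. A sketch (note that numpy parameterizes the Gamma by shape and scale = 1/rate):

```python
import numpy as np

def sample_lambda(y, classes, k, rng):
    """Sample an exponential rate from its Gamma conditional posterior.

    With likelihood lam**N * exp(-lam * sum|y_i|) over the members I of
    class k and the improper prior p(lam) ~ 1/lam, the posterior is
    Gamma(shape=N, rate=sum|y_i|).
    """
    members = np.abs(y[classes == k])
    if members.size == 0:           # empty class: posterior is improper;
        return rng.gamma(1.0, 1.0)  # arbitrary fallback (assumption)
    return rng.gamma(shape=members.size, scale=1.0 / members.sum())
```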

  9. Gibbs sampling • Conditional posteriors are easy to sample from. • A Gibbs sampling scheme can be devised easily. • Gibbs sampling is a particular form of the Metropolis-Hastings Markov chain Monte Carlo scheme where the proposal distribution is the conditional posterior. • As a consequence, no rejections are needed (every proposal is accepted).
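Putting the pieces together, one Gibbs sweep could look like the sketch below; it reuses the hypothetical helpers from the previous slides:

```python
import numpy as np

def gibbs_sweep(y, classes, neighbours, sigma2, lam_minus, lam_plus, rng):
    """One Gibbs sweep over all node labels, then over the exponential rates.

    Each conditional is sampled exactly, so no proposals are rejected.
    """
    for j in range(len(y)):
        pi_j = sample_mixture_weights(j, classes, neighbours, rng)
        post = pi_j * class_likelihoods(y[j], sigma2, lam_minus, lam_plus)
        classes[j] = rng.choice(3, p=post / post.sum())
    lam_minus = sample_lambda(y, classes, 0, rng)  # underexpressed rate
    lam_plus = sample_lambda(y, classes, 2, rng)   # overexpressed rate
    return classes, lam_minus, lam_plus
```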

  10. Monitoring convergence • Not an expert (I'd like to hear from one!) • Standard textbook technique: run parallel chains and control mixing (e.g. Gelman, Carlin, Stern and Rubin). • Burn-in period. • Thinning. • Results shown used a burn-in of 1000 iterations and a thinning factor of five.
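For the "parallel chains and control mixing" point, the standard diagnostic is the potential scale reduction factor (Gelman-Rubin R-hat); a sketch for a single scalar parameter traced across chains (standard formula, not from the slides):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction R-hat for an (n_chains, n_samples) array.

    Compares between-chain and within-chain variance; values close to 1
    suggest the parallel chains have mixed.
    """
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)  # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # within-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)
```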

  11. Synthetic results • Generated a random scale-free network using the Barabási-Albert algorithm. • Network has 100 nodes and an average connectivity of ~2. • Isolated nodes are removed. • Classes are generated from the conditional priors by running a Markov chain to remove the initial bias. • Data are generated from the conditional model.
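A sketch of this synthetic setup using networkx; the node count and m=1 (average degree ~2) match the slide, while the length of the label-generating Markov chain is a guess:

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# Barabasi-Albert network: 100 nodes, one edge per new node,
# giving an average degree of ~2 as on the slide.
G = nx.barabasi_albert_graph(100, 1, seed=0)
G.remove_nodes_from(list(nx.isolates(G)))  # drop isolated nodes, if any
neighbours = [list(G.neighbors(j)) for j in sorted(G.nodes)]

# Generate class labels by running a Markov chain over the conditional
# priors to wash out the initial assignment (chain length is a guess).
classes = rng.integers(3, size=len(neighbours))
for _ in range(200):
    for j in range(len(neighbours)):
        pi_j = sample_mixture_weights(j, classes, neighbours, rng)
        classes[j] = rng.choice(3, p=pi_j)
```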

  12. Synthetic results [Figure; axis tick labels removed.] Left: MMG (blue) vs ordinary mixture model. Each point is a different random network, with ten random data assignments.

  13. Real data (preliminary) • E. coli reaction to oxygen exposure (Partridge et al., 2007). • Network structure given by the transcriptional regulation network. • Network weights given by regulatory strengths inferred using a state-space model. • Large overlap among classes. • Biological significance still to be investigated.

  14. Future directions • Use on metabolic data (original motivation) • Temporal structures? • Directed graphs? • Any more questions?
