
Mixture Density Networks



Presentation Transcript


  1. Mixture Density Networks Qiang Lou

  2. Outline • Simple Example • Introduction of MDN • Analysis of MDN • Weights Optimization • Prediction

  3. Simple Example The inverse problem of x = t + 0.3*sin(2πt) Input: x, output: t
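The toy data set above can be generated in a few lines of NumPy. This is a hypothetical sketch (variable names and sample count are assumptions, not from the slides): sample t, push it through the forward map, then treat x as the input and t as the target, which makes the inverse mapping multi-valued.

```python
import numpy as np

# Assumed setup: 1000 samples of t uniform on [0, 1], noise-free forward map.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, size=1000)        # targets
x = t + 0.3 * np.sin(2.0 * np.pi * t)       # inputs: x = t + 0.3*sin(2*pi*t)
# For some x there are several valid t, so a network fitting E[t|x] fails.
```

Because 0.3 · 2π > 1, the forward map is non-monotonic in t, which is exactly why the inverse is multi-valued.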

  4. Learn from the example The example exposes a limitation of conventional neural networks: multi-valued mappings. Reason: the least-squares solution f(x, w*) = E[t|x] is the conditional average of the correct target values, which for a given x need not itself be a correct solution when several targets are valid.

  5. Solution: Mixture Density Networks MDN overcomes the limitation above by modelling the full conditional density as a linear combination of kernel functions: p(t|x) = Σ_k α_k(x) φ_k(t|x), k = 1, …, L. Three sets of parameters: mixing coefficients α_k(x), means μ_k(x), variances σ_k(x)².

  6. How to model the parameters? ---- using the outputs of the conventional NN • Coefficients: a softmax keeps them positive and summing to one, α_k = exp(z_k^α) / Σ_l exp(z_l^α) • Variances: an exponential keeps them strictly positive, σ_k = exp(z_k^σ) • Means can be directly represented by outputs of the NN: μ_k = z_k^μ
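The three constraints above are easy to state in code. A minimal sketch, assuming the raw network outputs arrive as three vectors z_alpha, z_sigma, z_mu (these names are hypothetical, and a 1-D target is assumed):

```python
import numpy as np

def mdn_params(z_alpha, z_sigma, z_mu):
    """Map raw network outputs to valid mixture parameters (1-D target)."""
    # Softmax: mixing coefficients are positive and sum to one.
    e = np.exp(z_alpha - np.max(z_alpha))   # shift for numerical stability
    alpha = e / np.sum(e)
    # Exponential: scale parameters stay strictly positive.
    sigma = np.exp(z_sigma)
    # Means are taken directly from the network outputs.
    mu = np.asarray(z_mu, dtype=float)
    return alpha, sigma, mu
```

Subtracting the max before exponentiating does not change the softmax result but avoids overflow for large raw outputs.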

  7. Basic structure of MDN

  8. Weights Optimization Similar to a conventional NN: maximum likelihood, i.e. minimize the negative logarithm of the likelihood, E(w) = −Σ_n ln{ Σ_k α_k(x_n) φ_k(t_n|x_n) }. We try to minimize E(w), which is equivalent to maximizing the likelihood.
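For one training pair the error above reduces to the negative log of the mixture density at the observed target. A sketch with Gaussian kernels and a scalar target (function name and interface are assumptions):

```python
import numpy as np

def mdn_nll(alpha, sigma, mu, t):
    """E^n(w) = -ln sum_k alpha_k * phi_k(t|x) for one pattern (1-D target)."""
    # Gaussian kernel phi_k evaluated at the observed target t.
    phi = np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
    return -np.log(np.sum(alpha * phi))
```

Summing this quantity over all training pairs gives E(w); in practice one would also guard the log against underflow (e.g. with a log-sum-exp formulation), which is omitted here for clarity.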

  9. Weights Optimization Using the chain rule and back-propagation: the derivatives of E^n with respect to the network outputs start off the algorithm, e.g. ∂E^n/∂z_k^α = α_k − π_k, where π_k = α_k φ_k / Σ_l α_l φ_l is the posterior probability (responsibility) of component k.

  10. Prediction • General Way: take the conditional average of the target data, ⟨t|x⟩ = Σ_k α_k(x) μ_k(x) • Accurate Way: take the solution given by the most probable component, μ_k*(x), where k* = arg max_k α_k(x)
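Both prediction rules are one-liners once the mixture parameters are in hand. A minimal sketch for a 1-D target (function name is hypothetical; selecting the component by largest mixing coefficient is the assumption stated on the slide):

```python
import numpy as np

def mdn_predict(alpha, mu):
    """Return both prediction rules for a 1-D target."""
    # General way: conditional average <t|x> = sum_k alpha_k * mu_k.
    cond_mean = float(np.sum(alpha * mu))
    # Accurate way: mean of the component with the largest mixing coefficient.
    most_probable = float(mu[np.argmax(alpha)])
    return cond_mean, most_probable
```

On the inverse sine example, the conditional average falls between the branches while the most-probable-component rule picks one branch, which is what makes it the better choice for multi-valued problems.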

  11. Results of example

  12. Problems • The number of the outputs of the MDN Assume: L components in the mixture model, K outputs (target dimensions) in the conventional NN. Outputs of the MDN: (K + 2) L
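The count follows from the parameterisation: L mixing coefficients, L variances (one per spherical kernel), and L·K means. A tiny helper (hypothetical name) makes the arithmetic explicit:

```python
def mdn_output_count(L, K):
    # L mixing coefficients + L variances + L*K means = (K + 2) * L outputs.
    return (K + 2) * L
```

For example, L = 5 components on a 2-D target needs 20 network outputs, versus 2 for a conventional NN, which is the scaling problem the slide points at.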

  13. Thank you !
