
Performance of Statistical Learning Methods



Presentation Transcript


1. Performance of Statistical Learning Methods
Jens Zimmermann, zimmerm@mppmu.mpg.de
Max-Planck-Institut für Physik, München / Forschungszentrum Jülich GmbH
Outline: Performance Examples from Astrophysics • Performance vs. Control • H1 Neural Network Trigger • Controlling Statistical Learning Methods (Overtraining, Efficiencies, Uncertainties) • Comparison of Learning Methods • Artificial Intelligence • Higgs Parity Measurement at the ILC
Jens Zimmermann, MPI für Physik München, ACAT 2005 Zeuthen

2. Performance of Statistical Learning Methods: MAGIC
Significance and number of excess events scale the uncertainties in the flux calculation.

3. Performance of Statistical Learning Methods: XEUS
Pileup vs. single photon: pileups are not recognised by the classical algorithm „XMM“, but they are by the NN.

4. Control of Statistical Learning Methods
There may be many different successful applications of statistical learning methods. There may be great performance improvements compared to classical methods. This does not impress people who fear that statistical learning methods are not well under control.
First talk: Understanding and Interpretation. Now: Control and correct evaluation.

5. „L2NN“: The Neural Network Trigger in the H1 Experiment
Trigger scheme of H1 at the HERA ep collider, DESY: 10 MHz → L1 (2.3 µs) → 500 Hz → L2 (20 µs) → 50 Hz → L4 (100 ms) → 10 Hz.
Each neural network on L2 verifies a specific L1 sub-trigger.

6. Triggering Deeply Virtual Compton Scattering (DVCS)
Signal: DVCS. Background: upstream beam-gas interaction.
• L1 sub-trigger 41 triggers DVCS by requiring a significant energy deposition in the SpaCal within a time window.
• The L2 neural network adds further information: liquid argon energies, SpaCal centre energies, z-vertex information.
Sub-trigger 41 fires at 4 Hz; this rate must be reduced to 0.8 Hz.

7. Dataset split: 50% training set, 25% selection set, 25% test set.
• Selection set: tune the training parameters to avoid overtraining and to optimise performance.
• Test set: determine the correct efficiency.
The network output for signal should peak at 1, for background at 0.
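A minimal Python/NumPy sketch of this 50/25/25 split; the function name, seed argument and comments are illustrative additions, not code from the talk:

```python
import numpy as np

def split_dataset(X, y, seed=0):
    """Split a dataset into the 50% / 25% / 25% training, selection
    and test sets used on this slide."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train, n_sel = len(X) // 2, len(X) // 4
    train, sel, test = np.split(idx, [n_train, n_train + n_sel])
    return (X[train], y[train]), (X[sel], y[sel]), (X[test], y[test])

# Overtraining control: monitor the error on the selection set while
# training and stop when it no longer improves; quote efficiencies on
# the untouched test set only.
```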

8. Determine the Correct Efficiency
(Table: efficiencies measured on the training set [%] vs. the test set [%].)

9. Check Statistical Uncertainties
Statistical uncertainty of the efficiency from propagation of uncertainties, σ_ε = √(ε(1−ε)/N): e.g. 80% ± 4% for 80 of 100 events.
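The quoted 80% ± 4% is exactly the binomial error evaluated for 80 of 100 events; a small sketch (the helper name is illustrative):

```python
import numpy as np

def efficiency_with_error(n_pass, n_total):
    """Efficiency and its binomial statistical uncertainty
    sigma = sqrt(eps * (1 - eps) / N)."""
    eps = n_pass / n_total
    return eps, np.sqrt(eps * (1.0 - eps) / n_total)

# Slide example: 80 of 100 events pass -> (0.80, 0.04), i.e. 80% +- 4%
print(efficiency_with_error(80, 100))
```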

10. Check Systematic Uncertainties
There is only a propagation of the systematic uncertainties of the inputs. Assume x1 with absolute error s1, x2 with relative error s2 = 5%, x3 with relative error s3 = 10%.
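One simple way to perform this propagation is to shift each input in turn by its systematic error and add the resulting output shifts in quadrature; the sketch below assumes this one-at-a-time scheme and a generic `classify` callable (both are illustrative, not necessarily the exact procedure of the talk). Relative errors such as the 5% and 10% above must first be converted to absolute shifts.

```python
import numpy as np

def systematic_output_error(classify, x, abs_errors):
    """Shift each input by its (absolute) systematic error, re-evaluate
    the trained classifier and combine the output shifts in quadrature."""
    nominal = classify(x)
    shifts = []
    for i, err in enumerate(abs_errors):
        x_shifted = x.copy()
        x_shifted[i] += err
        shifts.append(classify(x_shifted) - nominal)
    return nominal, np.sqrt(np.sum(np.square(shifts)))
```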

11. Check Systematic Uncertainties
Example: the DVCS dataset.

12. Comparison of Hypotheses
Efficiencies for a fixed rejection of 80%: NN 96.5% vs. SVM 95.7%. Statistically significant? Build a 95% confidence interval! Here σ_m is the variation over different parts of the test set.
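A hedged sketch of how such an interval could be built from efficiencies measured on several disjoint parts of the test set; it assumes a Gaussian approximation and uses the standard error of the mean difference as σ_m, which may differ in detail from the construction used in the talk:

```python
import numpy as np

def efficiency_difference_interval(eff_a, eff_b, z=1.96):
    """95% confidence interval for the difference of two classifiers'
    efficiencies, each measured on the same disjoint parts of the
    test set (eff_a, eff_b are arrays of per-part efficiencies)."""
    diff = np.asarray(eff_a) - np.asarray(eff_b)
    mean = diff.mean()
    sigma_m = diff.std(ddof=1) / np.sqrt(len(diff))
    return mean, (mean - z * sigma_m, mean + z * sigma_m)

# If the interval excludes zero, the difference (e.g. NN 96.5% vs.
# SVM 95.7%) is significant at the 95% level.
```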

13. Comparison of Learning Methods
Compare performances over different training sets! Efficiencies for a fixed rejection of 60%; here σ_m is the variation over the different trainings.
Cross-validation: divide the dataset into k parts and train k classifiers, using each part once as the test set.
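The k-fold scheme described above can be sketched directly; `train_fn` and `eval_fn` stand for whichever learning method and figure of merit are being compared (illustrative placeholders):

```python
import numpy as np

def cross_validate(X, y, k, train_fn, eval_fn, seed=0):
    """k-fold cross-validation: split the dataset into k parts and
    train k classifiers, each part serving once as the test set.
    Returns the mean score and its spread over the k trainings."""
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = train_fn(X[train], y[train])
        scores.append(eval_fn(model, X[test], y[test]))
    return np.mean(scores), np.std(scores, ddof=1)
```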

14. Artificial Intelligence, H1-L2NN: Triggering Charged Current
Two events with low NN output (event displays: CC with cosmic overlay; cosmic).

15. Artificial Intelligence, H1-L2NN: Triggering J/ψ
Background found in the J/ψ selection.

16. Higgs Parity Measurement at the ILC: H/A → τ+τ− → ρν ρν → ππν ππν
Classical approach: fit the angular distribution (0 to 2π).
• Parity induces a favourite ρ-configuration: anti-parallel for H, parallel for A.
The significance is the amplitude divided by its uncertainty, measured for 500 events and averaged over 600 pseudo-experiments: S = 5.09.
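A minimal sketch of the classical fit; the functional form N·(1 + A·cos φ) and the binning are assumptions made for illustration only, not the parameterisation of the actual analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_modulation(phi, counts, errors):
    """Fit a modulation amplitude A to a binned angular distribution
    and quote the significance S = A / sigma_A."""
    def model(phi, norm, amp):
        return norm * (1.0 + amp * np.cos(phi))
    popt, pcov = curve_fit(model, phi, counts, sigma=errors,
                           absolute_sigma=True, p0=[counts.mean(), 0.1])
    amp, sigma_amp = popt[1], np.sqrt(pcov[1, 1])
    return amp, sigma_amp, abs(amp) / sigma_amp
```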

17. Higgs Parity Measurement at the ILC
Statistical learning approach: direct discrimination, with one parity hypothesis trained towards 0 and the other towards 1.
The significance is the difference of the measured means divided by its uncertainty, measured for 500 events and averaged over 600 pseudo-experiments: S = 6.26.
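The difference-of-means significance can be sketched as below; `out_h` and `out_a` would be the classifier outputs for the two parity samples of one pseudo-experiment (names are illustrative):

```python
import numpy as np

def mean_separation_significance(out_h, out_a):
    """Difference of the measured means of the classifier outputs for
    the two parity hypotheses, divided by its uncertainty."""
    out_h, out_a = np.asarray(out_h), np.asarray(out_a)
    delta = out_h.mean() - out_a.mean()
    sigma = np.sqrt(out_h.var(ddof=1) / len(out_h) +
                    out_a.var(ddof=1) / len(out_a))
    return abs(delta) / sigma
```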

18. Conclusion
• Statistical learning methods are successful in many applications in high-energy physics and astrophysics.
• Significant performance improvements compared to classical algorithms.
• Statistical learning methods are well under control: efficiencies can be determined and uncertainties can be calculated.
• Comparison of learning methods reveals statistically significant differences.
• Statistical learning methods sometimes show more artificial intelligence than expected.
