
Exploring Artificial Neural Networks to Discover Higgs at LHC





Presentation Transcript


  1. Exploring Artificial Neural Networks to Discover Higgs at LHC Using Neural Networks for B-tagging By Rohan Adur www.hep.ucl.ac.uk/~radur

  2. Exploring Artificial Neural Networks to Discover Higgs at LHC Outline: • What are Neural Networks and how do they work? • How can Neural Networks be used in b-jet tagging to discover the Higgs boson? • What results have I obtained using Neural Networks to find b-jets?

  3. Neural Networks - Introduction • Neural Networks simulate neurons in biological systems • They are made up of neurons connected by synapses • They are able to solve non-linear problems by learning from experience, rather than being explicitly programmed for a particular problem

  4. The Simple Perceptron • The Simple Perceptron is the simplest form of a Neural Network • It consists of one layer of input units and one layer of output units, connected by weighted synapses [Diagram: input layer → weighted synapses → output layer]
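  A minimal sketch (in C++, not from the talk) of a simple perceptron's forward pass may help: each input is multiplied by its synapse weight, the products are summed, and the output unit fires if the sum crosses a threshold. The function name, bias term and step threshold are illustrative assumptions.

    #include <vector>

    // Forward pass of a simple perceptron: weighted sum of the inputs,
    // followed by a step activation at the single output unit.
    int perceptronOutput(const std::vector<double>& inputs,
                         const std::vector<double>& weights,
                         double bias)
    {
       double sum = bias;
       for (size_t i = 0; i < inputs.size(); ++i)
          sum += weights[i] * inputs[i];   // one weighted synapse per input
       return sum > 0.0 ? 1 : 0;           // fire (1) or not (0)
    }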

  5. The Simple Perceptron contd. • Requires a training set for which the required output is known • Synapse weights start at random values; a learning algorithm then adjusts the weights until they give the correct output, and the weights are frozen • The trained network can then be used on data it has never seen before [Diagram: input layer → weighted synapses → output layer]
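  As a rough illustration of the training procedure described above, the classic perceptron learning rule nudges each weight whenever the output disagrees with the known target. The variable names and learning rate below are assumptions, not details from the talk.

    #include <vector>

    // One training step of the perceptron learning rule: compute the output,
    // compare with the known target, and adjust the weights towards agreement.
    void perceptronTrainStep(std::vector<double>& weights, double& bias,
                             const std::vector<double>& inputs, int target,
                             double learningRate = 0.1)
    {
       double sum = bias;
       for (size_t i = 0; i < inputs.size(); ++i)
          sum += weights[i] * inputs[i];
       int output = sum > 0.0 ? 1 : 0;
       double error = target - output;      // zero when already correct
       for (size_t i = 0; i < inputs.size(); ++i)
          weights[i] += learningRate * error * inputs[i];
       bias += learningRate * error;
       // Loop over the training set until the outputs are correct,
       // then freeze the weights and apply the network to unseen data.
    }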

  6. The Multilayer Perceptron • The main drawback of the simple perceptron is that it is only able to solve linearly-separable problems • Introduce a hidden layer to produce the Multilayer Perceptron • The Multilayer Perceptron is able to solve non-linear problems [Diagram: input layer → synapses → hidden layer → synapses → output layer]
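  A minimal sketch (assumed sizes and names) of a multilayer perceptron's forward pass: one weight matrix connects the inputs to the hidden layer, a second connects the hidden layer to the output, and the sigmoid activation in the hidden layer is what lets the network represent non-linear decision boundaries.

    #include <cmath>
    #include <vector>

    double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

    // Forward pass: inputs -> hidden layer -> single output in (0, 1).
    double mlpForward(const std::vector<double>& inputs,
                      const std::vector<std::vector<double>>& wHidden, // [hidden][input]
                      const std::vector<double>& wOutput)              // [hidden]
    {
       std::vector<double> hidden(wHidden.size());
       for (size_t h = 0; h < wHidden.size(); ++h) {
          double sum = 0.0;
          for (size_t i = 0; i < inputs.size(); ++i)
             sum += wHidden[h][i] * inputs[i];
          hidden[h] = sigmoid(sum);          // non-linear hidden activation
       }
       double out = 0.0;
       for (size_t h = 0; h < hidden.size(); ++h)
          out += wOutput[h] * hidden[h];
       return sigmoid(out);
    }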

  7. Finding Higgs • The Higgs boson is expected to decay to b-quarks, which will produce b-jets • b-jet detection at the LHC is therefore important for detecting the Higgs • Around 40 million events occur per second • b-taggers must reject light-quark jets

  8. b-tagging • B mesons are able to travel a short distance (~1 mm) before decaying, so b-jets will originate away from the primary vertex, at a secondary vertex • Several b-taggers exist • The IP3D tagger uses the Impact Parameter (IP) of the b-jets • The SecVtx tagger reconstructs the secondary vertex and rejects jets which have a low probability of coming from this vertex [Diagram: primary vertex, displaced secondary vertex ~1 mm away, and the impact parameter (IP) of the b-jet tracks]

  9. IP3D Tagger • Good amount of separation between b-jets and light jets

  10. b-tagger performance

  11. Neural Network for b-tagging • The current best tagger is a combination of IP3D and SV1 tag weights • Using Neural Networks, can this tagger be combined with others to provide better separation?

  12. The Multilayer Perceptron and b-tagging • The TMultiLayerPerceptron class is an implementation of a Neural Network built into the ROOT framework • It provides several learning methods; the default BFGS method was found to perform best • Train with output = 1 for signal and output = 0 for background • The b-tagging weights were obtained using the ATHENA 10.0.1 release • The data were obtained from Rome ttbar AOD files • Once extracted, the weights were used to train the Neural Network
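  A minimal sketch of how such a network might be set up with ROOT's TMultiLayerPerceptron; the file name, tree name and branch names (pt, ip3d, sv1, secvtx, mass, isB) are placeholder assumptions, not the actual ntuple used for these results.

    #include "TFile.h"
    #include "TTree.h"
    #include "TMultiLayerPerceptron.h"

    void trainBTagNN()
    {
       // Placeholder file/tree: one entry per jet, with the tag weights as
       // branches and isB set to 1 for b-jets (signal) and 0 for light jets.
       TFile *f = TFile::Open("jets.root");
       TTree *t = (TTree*)f->Get("jets");

       // Layout string: 5 inputs, one hidden layer of 12 units, 1 output.
       TMultiLayerPerceptron mlp("pt,ip3d,sv1,secvtx,mass:12:isB", t,
                                 "Entry$%2==0",   // training sample
                                 "Entry$%2==1");  // test sample

       mlp.SetLearningMethod(TMultiLayerPerceptron::kBFGS); // the default method
       mlp.Train(200, "text,update=10");                    // 200 epochs
       mlp.DumpWeights("weights.txt");                      // save for later use
    }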

  13. Results • 5 inputs used: transverse momentum, IP3D tag, SV1 tag, SecVtx tag and mass • 12 hidden units and 1 output unit
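  Continuing the sketch above, applying the trained 5-input, 12-hidden-unit network to a single jet might look like the following; the weight file name and the jet values are purely illustrative.

    // Reload the trained weights into a network with the same layout and
    // evaluate it on one jet's inputs (the values below are made up).
    TMultiLayerPerceptron mlp("pt,ip3d,sv1,secvtx,mass:12:isB");
    mlp.LoadWeights("weights.txt");
    Double_t jet[5] = {55.0, 2.1, 1.3, 0.8, 4.7};   // pt, IP3D, SV1, SecVtx, mass
    Double_t nnOutput = mlp.Evaluate(0, jet);       // closer to 1 => more b-like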

  14. Results Contd.

  15. Results Contd. [Plots: rejection rates and mistagging efficiency; comparison at fixed rejection]

  16. Discussion of Results • Using a Neural Network, b-taggers can be combined to provide up to double the purity at fixed efficiency • At fixed rejection rate, the Neural Network provides 5% more signal than the IP3D+SV1 tagger alone • Neural Network performance is not always reproducible: because training starts from random weights, each training run produces a different network

  17. Conclusions • Neural Networks are a powerful tool for b-jet classification • Neural Networks can be used to significantly increase b-tagging efficiency/rejection ratios and could be useful in the search for Higgs • Training a Neural Network on real data will be the next hurdle
