
Near detector scan

Peter Litchfield


  1. Near detector scan Peter Litchfield • As volunteered at the last meeting, I have scanned some near detector data. • Getting LOON and MAD to work on rel14 development at Minnesota was a non-trivial task, at least for me, and took an awfully long time and much expert input. • I scanned the first 20 snarls of near detector run 11301001 (121 events). Jim had sent me specific events to look at, but this was a preliminary scan to see what the data looked like. • I was not impressed by the quality of the events and reconstruction, but I scanned events in all of the detector volume; things would probably be better if I restricted the scan to events in the fiducial volume. • I flagged 19/121 events as having reconstruction problems, i.e. hits had been missed from or added to a track/shower. I will send Jim a list. The coil hole seemed a particular problem.

  2. Coil hole problem?

  3. Brems as showers • I saw several cases where showers had been reconstructed that were actually brems (bremsstrahlung) on the muon track. • These had been added to the hadronic shower. • Need some code which associates showers with the downstream part of a muon track (a sketch of one possible association follows below). • If the muon stops, they should be added to the muon energy; if the momentum comes from curvature, in principle the brem could be included in the energy loss. • At least these showers should not be included in the hadronic shower energy.
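
A rough sketch of the kind of association meant above, with invented container classes and cut values rather than the real MAD/LOON reconstruction objects: a shower is flagged as a probable brem if it starts close to the muon track and downstream of the event vertex, and its energy is then kept out of the hadronic sum.

    # Rough sketch only: Track, Shower and the cut values below are invented
    # for illustration; they are not the MAD/LOON reconstruction classes.
    from dataclasses import dataclass

    @dataclass
    class Shower:
        z_start: float    # longitudinal start position (m)
        t_start: float    # transverse position at the start (m)
        energy: float     # reconstructed energy (GeV)

    @dataclass
    class Track:
        z_vertex: float   # event vertex
        z_end: float      # track end
        def transverse_at(self, z: float) -> float:
            """Transverse track position at plane z (placeholder: straight track on axis)."""
            return 0.0

    def split_brems(track, showers, dz_min=0.5, dt_max=0.1):
        """Separate probable muon brems from the genuinely hadronic showers."""
        brems, hadronic = [], []
        for s in showers:
            downstream = track.z_vertex + dz_min < s.z_start < track.z_end
            on_track = abs(s.t_start - track.transverse_at(s.z_start)) < dt_max
            (brems if downstream and on_track else hadronic).append(s)
        return brems, hadronic

    # Brem energy would then be added to the muon if it stops, and in any case
    # excluded from the hadronic shower energy.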

  4. Brems as showers

  5. Tracks in the hadron shower • I was not impressed by the attempts to find tracks amongst the hits of a hadronic shower. • Most of the time the track found was a random association of hits in the shower. • These gave poor momentum fits with large errors. • They subtracted hits from the shower and thus gave systematically low shower energies. • I suggest applying an isolation criterion to a track (one possibility is sketched below) and, if it is not separate from the rest of the shower, returning the hits to the shower and calling the event NC, at least for the CC analysis. • What is done in calculating the shower energy to account for the fact that, for either protons or neutrons, rather little of the energy is observed? Should there be a correction, since every event has one?
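
One way such an isolation criterion could look, as a sketch with invented (plane, strip) hit pairs and arbitrary window and threshold values, not a tuned MINOS cut:

    # Sketch only: hits are (plane, strip) pairs and the windows/threshold are
    # arbitrary illustration values, not tuned cuts.
    def track_is_isolated(track_hits, shower_hits, d_plane=2, d_strip=3, max_frac=0.5):
        """True if fewer than max_frac of the track hits have a shower hit nearby."""
        def has_neighbour(hit):
            plane, strip = hit
            return any(abs(plane - p) <= d_plane and abs(strip - s) <= d_strip
                       for p, s in shower_hits)
        n_close = sum(has_neighbour(h) for h in track_hits)
        return n_close / max(len(track_hits), 1) < max_frac

    # If the track fails the test, its hits go back into the shower and the
    # event is treated as NC for the CC analysis, as suggested above.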

  6. NC event with track

  7. Feldman-Cousins Analysis • I did a likelihood analysis of the Soudan 2 atmospheric neutrino data using the Feldman-Cousins prescription for the allowed-region analysis. • I would like to apply this to the MINOS analysis as an alternative to the Super-K error analysis that Brian has been investigating. • How does F-C work? • The likelihood is calculated on a grid of Δm², sin²2θ; the best likelihood is found, and the difference between the best likelihood and the likelihood at each grid point is plotted. • If all errors were Gaussian and there were no systematic errors, the 90% confidence region would be where the likelihood difference was less than 2.3. But they aren't. • To calculate the 90% confidence limit at any grid point, MC experiments are generated and analyzed for those parameter values, with the data statistics. The likelihood difference between the best fit point and the grid point is plotted and the value containing 90% of the experiments found. If the data difference is less than this value, the point is within the 90% contour (a minimal sketch of this procedure follows below).
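
As an illustration of the prescription described above, here is a minimal Python sketch. The two-flavour survival probability, baseline, binning, unoscillated expectation and grid are invented placeholders, not the MINOS or Soudan 2 inputs; only the logic (toy experiments at each grid point, 90% critical likelihood difference, comparison with the data difference) follows the slide.

    # Minimal Feldman-Cousins sketch.  All numbers (baseline, binning, expected
    # counts) are invented placeholders; only the grid/critical-value logic
    # follows the prescription described on the slide.
    import numpy as np

    L_KM = 735.0                              # toy baseline (km)
    E_BINS = np.linspace(1.0, 10.0, 10)       # toy energy bin centres (GeV)
    UNOSC = np.full(E_BINS.size, 50.0)        # unoscillated expectation per bin

    def expected(dm2, s22):
        """Expected counts per bin for two-flavour nu_mu survival."""
        p_surv = 1.0 - s22 * np.sin(1.27 * dm2 * L_KM / E_BINS) ** 2
        return np.clip(UNOSC * p_surv, 1e-9, None)

    def neg2lnL(counts, dm2, s22):
        """Poisson -2 ln L, constant terms dropped."""
        mu = expected(dm2, s22)
        return 2.0 * np.sum(mu - counts * np.log(mu))

    def delta_lnL(counts, grid):
        """Likelihood difference between each grid point and the best grid point."""
        vals = np.array([neg2lnL(counts, dm2, s22) for dm2, s22 in grid])
        return vals - vals.min()

    def fc_critical(point, grid, n_toys=1000, cl=0.90, rng=None):
        """90% critical likelihood difference at one grid point from toy experiments."""
        rng = rng or np.random.default_rng(0)
        idx = grid.index(point)
        mu = expected(*point)
        deltas = [delta_lnL(rng.poisson(mu), grid)[idx] for _ in range(n_toys)]
        return np.quantile(deltas, cl)

    # A grid point is inside the 90% contour if the data's likelihood
    # difference there is below the critical value computed for that point.
    grid = [(dm2, s22) for dm2 in np.linspace(1e-3, 5e-3, 5)
                       for s22 in np.linspace(0.5, 1.0, 5)]
    data = np.random.default_rng(1).poisson(expected(2.5e-3, 1.0))   # fake "data"
    d_data = delta_lnL(data, grid)
    contour = [pt for i, pt in enumerate(grid)
               if d_data[i] <= fc_critical(pt, grid, n_toys=200)]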

  8. Feldman-Cousins Analysis • Systematic errors can be introduced into the MC experiments by randomly varying the MC experiment parameters (beam, cross-section, reconstruction) within their errors (sketched below). • Advantages of F-C • Gives proper coverage, i.e. proper 90% limits including all errors, statistical and systematic, including correlations. • It takes into account physical constraints, e.g. sin²2θ ≤ 1. • No fitting, it is all calculation, at least for the oscillation parameters. • Nuisance parameters (e.g. M_A) can be fitted at each value of the oscillation parameters, taking account of any correlations. • Systematic errors are easily and naturally included. • No requirement to calculate and fit the variation of χ² with the parameters of the cross-sections and beam.
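
Continuing the sketch after slide 7, the systematic variation of the toy experiments might look like the following; the 10% normalisation and 5% energy-scale errors are invented illustration values, standing in for the real beam, cross-section and reconstruction uncertainties.

    # Extends the Feldman-Cousins sketch above: each toy is generated with
    # nuisance parameters drawn within their assumed errors, while the grid fit
    # still uses the nominal prediction.  The 10% and 5% errors are invented.
    import numpy as np

    def expected_sys(dm2, s22, norm, escale, e_bins, unosc, L_km=735.0):
        """Expected counts with a flux normalisation and energy-scale shift applied."""
        p_surv = 1.0 - s22 * np.sin(1.27 * dm2 * L_km / (e_bins * escale)) ** 2
        return np.clip(norm * unosc * p_surv, 1e-9, None)

    def toy_with_systematics(dm2, s22, e_bins, unosc, rng):
        """One toy experiment with the systematic parameters varied."""
        norm = rng.normal(1.0, 0.10)      # beam flux / cross-section normalisation
        escale = rng.normal(1.0, 0.05)    # hadronic energy scale
        return rng.poisson(expected_sys(dm2, s22, norm, escale, e_bins, unosc))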

  9. Feldman-Cousins Analysis • Disadvantages of F-C • Very CPU intensive: need 1000 MC experiments at each grid point. However, there are tricks, such as keeping the event parameters and reweighting them for different conditions (sketched below). Also, CPU is plentiful these days. • Hopefully by the Week in the Woods I will have a trial version of an analysis.
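
In outline, the reweighting trick mentioned above could look like this: one stored MC sample (invented here) is reweighted event-by-event by the survival probability for the parameters under test, so no new events need to be generated at each grid point.

    # Sketch of the reweighting trick: keep each MC event's true energy and
    # weight it by the oscillation probability for the parameters being tested,
    # instead of generating a fresh sample at every grid point.  The sample,
    # exposure scale and binning are invented placeholders.
    import numpy as np

    rng = np.random.default_rng(2)
    true_E = rng.uniform(1.0, 10.0, 100_000)   # true energies of an unoscillated MC sample

    def surv_prob(dm2, s22, E, L_km=735.0):
        return 1.0 - s22 * np.sin(1.27 * dm2 * L_km / E) ** 2

    def predicted_spectrum(dm2, s22, bins=np.linspace(1.0, 10.0, 11), mc_to_data=0.005):
        """Oscillated prediction from the single stored sample via per-event weights."""
        w = mc_to_data * surv_prob(dm2, s22, true_E)   # scale MC exposure to data
        hist, _ = np.histogram(true_E, bins=bins, weights=w)
        return hist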
