
Systematics in Hfitter



  1. Systematics in Hfitter

  2. Reminder: profiling nuisance parameters
  • The likelihood ratio is the most powerful discriminant between two hypotheses.
  • What if the hypotheses depend on additional (“nuisance”) parameters? e.g. the background slope ξ.
  • -> We “profile them away” (the standard formula is sketched below).
  • -> Wilks’ theorem: q_μ ~ χ²₁.
  • Note: 1) it is a χ², and 2) importantly, it is independent of the nuisance parameters: profiling removes them from the problem!
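
The profiling referred to above can be written in the standard textbook form (this is the generic expression, not something specific to Hfitter); μ is the parameter of interest and θ denotes the nuisance parameters:

  % Profile likelihood ratio: the nuisance parameters are replaced by their
  % conditional best-fit values \hat{\hat{\theta}}(\mu) in the numerator and
  % by their global best-fit values \hat{\theta} in the denominator.
  \lambda(\mu) = \frac{L\bigl(\mu,\,\hat{\hat{\theta}}(\mu)\bigr)}{L\bigl(\hat{\mu},\,\hat{\theta}\bigr)} ,
  \qquad
  q_\mu = -2\ln\lambda(\mu) \;\sim\; \chi^2_1 \quad \text{(Wilks' theorem, one parameter of interest)}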

  3. Systematics
  • Nuisance parameters come in two kinds.
  • “Good”: parameters constrained by the fit. The data tells us what their values are, e.g. the background slope.
  • “Bad”: not constrained by the fit. Need to introduce an outside constraint, which is added “by hand” to the likelihood. This is what we normally call “systematics”, e.g. the width of the signal peak σ_CB.
  • Could let σ_CB float in the fit, but there is no sensitivity (until 2013?). Can measure it in Z->ee and apply the result to γγ: this provides a constraint.
  • Technically (on the σ_CB example): can write σ_CB = σ_CB0·(1 + α), where α is an energy-resolution shift (a small numerical sketch follows below).
  • α is introduced since it is easily constrained: it should be close to 0 (if σ_CB0 is computed with all corrections applied), and how far it can be from 0 is a measure of the uncertainty (say 10%?).
  • How to implement this precisely?
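
As a minimal numerical sketch of this parametrization (plain Python, not Hfitter code; the nominal width value is made up for illustration):

  # Multiplicative parametrization of the signal peak width:
  # sigma_CB = sigma_CB0 * (1 + alpha), with alpha expected to be close to 0.
  sigma_CB0 = 1.7                      # hypothetical nominal width, in GeV

  def sigma_CB(alpha):
      """Peak width for a given value of the resolution nuisance parameter alpha."""
      return sigma_CB0 * (1.0 + alpha)

  print(f"{sigma_CB(0.0):.2f} {sigma_CB(0.10):.2f}")   # 1.70 1.87 -> a +10% shift widens the peak by 10%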

  4. Bayesian/Frequentist Hybrid treatment
  • There are two ways of dealing with the constraint in practice. First the Bayesian way, because it is more intuitive and more widespread.
  • Idea: assume α is distributed according to some PDF. Obvious choice: α ~ Gaussian(0, 10%).
  • α is free, but there is a penalty in the likelihood for α being different from 0.
  • Toys: each toy dataset must be thrown using a random value of α, drawn from the PDF. Running over many toys effectively integrates out α (a sketch of this procedure is given below).
  • Problem: α is a model parameter, and giving it a PDF is Bayesian; the PDF gives our “degree of belief” of where α should be. => Not directly linked to something measured. (Also, why a Gaussian?)
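
A self-contained sketch of this hybrid toy procedure (plain Python with NumPy/SciPy, not Hfitter code; the peak position, nominal width and event count are made-up illustration values, and the model is reduced to a single Gaussian peak):

  import numpy as np
  from scipy.optimize import minimize_scalar

  rng = np.random.default_rng(1)

  sigma_CB0   = 1.7     # hypothetical nominal peak width (GeV)
  sigma_alpha = 0.10    # 10% resolution uncertainty
  mass_peak   = 125.0   # hypothetical peak position (GeV)
  n_events    = 200

  def penalized_nll(alpha, data):
      """Gaussian-peak NLL plus the Bayesian penalty for alpha != 0."""
      sigma = sigma_CB0 * (1.0 + alpha)
      nll = np.sum(0.5 * ((data - mass_peak) / sigma) ** 2 + np.log(sigma))
      return nll + 0.5 * (alpha / sigma_alpha) ** 2    # constraint penalty term

  for _ in range(3):
      alpha_gen = rng.normal(0.0, sigma_alpha)          # alpha randomized per toy
      data = rng.normal(mass_peak, sigma_CB0 * (1.0 + alpha_gen), n_events)
      fit = minimize_scalar(penalized_nll, args=(data,),
                            bounds=(-0.9, 2.0), method="bounded")
      print(f"alpha_gen = {alpha_gen:+.3f}  ->  fitted alpha = {fit.x:+.3f}")

Each toy is generated with its own α_gen drawn from the Gaussian prior, and the fitted α is pulled towards α_gen by the data and towards 0 by the penalty.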

  5. Hfitter example (hfitter_Mgg_noCats_hybrid.dat)

  [Dependents]
  mgg = L(100 - 150)   // gamma-gamma invariant mass

  [Models]
  component Signal     = HggSigMggPdfBuilder() norm=(muSignal*nSignalSM; muSignal)
  component Background = HggBkgMggPdfBuilder()

  [Constraints]
  constraint dSig = RooGaussian("dSig", "mean_dSig", "sigma_dSig")

  [Parameters]
  nSignalSM   = 1.225 C L(0 - 50000)
  muSignal    = 1       L(-1000 - 10000)
  nBackground = 99      L(0 - 100000)

  [Signal]
  formula cbSigma = (cbSigma0*(1 + dSig))
  dSig       = 0    L(-1 - 10)
  mean_dSig  = 0    C
  sigma_dSig = 0.10 C
  …

  Callouts on the slide:
  • The constraint on dSig: a Gaussian with the specified parameters.
  • The cbSigma parameter is now given by a formula involving dSig.
  • dSig is defined here; the allowed range is the important part. The value really means the starting value in the fit.
  • The constraint PDF parameters are specified here (they could also be in [Parameters]).
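
Schematically (an interpretation of this configuration, writing α for dSig; this is not an expression quoted from the Hfitter documentation), the [Constraints] block multiplies the fit likelihood by a Gaussian penalty with the fixed mean and width given above:

  L_{\rm tot}(\mu,\alpha) \;=\; L_{\rm fit}(\mathrm{data};\,\mu,\alpha)\,\times\,
  \mathrm{Gauss}\bigl(\alpha;\; \mathtt{mean\_dSig}=0,\; \mathtt{sigma\_dSig}=0.10\bigr)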

  6. “Frequentist” way
  • Idea: like other nuisance parameters, α can be constrained in some way. The problem here is that the constraint comes from another measurement.
  • e.g. we could include a Z->ee sample in our model and fit everything simultaneously, getting α as a regular nuisance parameter. But too complex…
  • Solution: include the result from that other experiment. Use directly L(data′ | α)? Too complex…
  • “Executive summary”: PDF(α_meas | α), e.g. α_meas ~ Gaussian(α, 10%). Add this as a penalty term in the likelihood.
  • Differences with the hybrid case: α_meas is a fixed measured value (if everything is calibrated correctly, = 0). Note that α is now a PDF parameter. No PDF on α! (The Gaussian gives a likelihood for α.)
  • Similarities: α is still floating in the fit, and the constraint still comes from a penalty term. Note also that in this Gaussian case L is the same as previously… but that is not always the case!
  • Toys: there is a PDF on α_meas, so it should be randomized when generating toys. However, α_meas only appears in the penalty term: all toys are in fact the same. α is just a parameter; it is not generated in the toys.
  • Where does the smearing come in? When fitting the toy, α is constrained by the value of α_meas. (A sketch of this treatment follows below.)
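
The same sketch adapted to this treatment (again plain Python, not Hfitter code, with the same made-up values as before): α is an ordinary fit parameter, the toy dataset itself does not depend on it, and only the auxiliary measurement α_meas is randomized:

  import numpy as np
  from scipy.optimize import minimize_scalar

  rng = np.random.default_rng(2)

  sigma_CB0   = 1.7     # hypothetical nominal peak width (GeV)
  sigma_alpha = 0.10    # 10% resolution uncertainty
  mass_peak   = 125.0   # hypothetical peak position (GeV)
  n_events    = 200

  def penalized_nll(alpha, data, alpha_meas):
      """Gaussian-peak NLL plus the auxiliary-measurement penalty Gauss(alpha_meas | alpha, 0.10)."""
      sigma = sigma_CB0 * (1.0 + alpha)
      nll = np.sum(0.5 * ((data - mass_peak) / sigma) ** 2 + np.log(sigma))
      return nll + 0.5 * ((alpha_meas - alpha) / sigma_alpha) ** 2

  for _ in range(3):
      data = rng.normal(mass_peak, sigma_CB0, n_events)   # generated at the nominal width: alpha does not enter generation
      alpha_meas = rng.normal(0.0, sigma_alpha)           # only the auxiliary measurement is randomized
      fit = minimize_scalar(penalized_nll, args=(data, alpha_meas),
                            bounds=(-0.9, 2.0), method="bounded")
      print(f"alpha_meas = {alpha_meas:+.3f}  ->  fitted alpha = {fit.x:+.3f}")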

  7. Hfitter example (hfitter_Mgg_noCats_syst.dat)

  [Dependents]
  mgg = L(100 - 150)   // gamma-gamma invariant mass

  [Models]
  component Signal     = HggSigMggPdfBuilder() norm=(muSignal*nSignalSM; muSignal)
  component Background = HggBkgMggPdfBuilder()

  [Constraints]
  constraint dSig_aux = RooGaussian("dSig_aux", "dSig", "sigma_dSig")

  [Parameters]
  nSignalSM   = 1.225 C L(0 - 50000)
  muSignal    = 1       L(-1000 - 10000)
  nBackground = 99      L(0 - 100000)

  [Signal]
  formula cbSigma = (cbSigma0*(1 + dSig))
  dSig       = 0    L(-1 - 10)
  dSig_aux   = 0    C
  sigma_dSig = 0.10 C
  …

  Callouts on the slide:
  • The constraint is now on the dSig_aux auxiliary measurement; dSig is now a PDF parameter.
  • The cbSigma parameter is defined as previously.
  • dSig is defined here, same as previously.
  • dSig_aux is now constant (but can be randomized when generating toys).
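
Again schematically (an interpretation, writing α for dSig and α_aux for dSig_aux): the penalty now treats the constant dSig_aux as the observed value and the floating dSig as the PDF parameter,

  L_{\rm tot}(\mu,\alpha) \;=\; L_{\rm fit}(\mathrm{data};\,\mu,\alpha)\,\times\,
  \mathrm{Gauss}\bigl(\alpha_{\rm aux};\; \alpha,\; \mathtt{sigma\_dSig}=0.10\bigr)

For a Gaussian this is numerically identical to the hybrid expression, which is the point made on the previous slide; for asymmetric constraints such as the log-normal the two differ (see slide 10).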

  8. Some results (2010 numbers w/ smearing)
  • Bayesian constraints on dSig, dEff: Gaussian with 10% width
  • Frequentist constraints on dSig, dEff: Gaussian with 10% width

  9. Some distributions

  10. The way the constraint works
  • Bayesian case: α0 = 0, but the toys are thrown with α_gen != 0 => in the fit, α is drawn towards α_gen.
  • Frequentist case: α_meas is randomized in the toys => in the fit, α is drawn towards α_meas.
  • Everything is the same in this (Gaussian) case; this is not true for distributions where α and α0 don’t play symmetric roles (e.g. Log-normal), as sketched below.
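
A sketch of why the symmetry breaks for a log-normal (writing κ = 1 + α so that the constrained quantity is positive; this parametrization is an assumption made for illustration, not necessarily the exact form used in Hfitter):

  % Hybrid: the PDF is on kappa itself, centred on the nominal value kappa_0 = 1:
  -\ln p_{\rm hyb}(\kappa) \;=\; \frac{(\ln\kappa - \ln\kappa_0)^2}{2\sigma^2} \;+\; \ln\kappa \;+\; \text{const}

  % Frequentist: the PDF is on the auxiliary measurement kappa_meas, with kappa as its parameter:
  -\ln p_{\rm freq}(\kappa_{\rm meas}\mid\kappa) \;=\; \frac{(\ln\kappa_{\rm meas} - \ln\kappa)^2}{2\sigma^2} \;+\; \ln\kappa_{\rm meas} \;+\; \text{const}

As functions of the fit parameter κ, the two penalties differ by the ln κ term, so unlike the Gaussian case the two treatments no longer give the same likelihood.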

  11. Results with 2011 numbers, Lognormal
  • Bayesian constraints on dSig, dEff: Lognormal with 10% width
  • Frequentist constraints on dSig, dEff: Lognormal with 10% width
