
Adapting matched filtering searches for compact binary inspirals in LSC detector data.






Presentation Transcript


  1. Adapting matched filtering searches for compact binary inspirals in LSC detector data. Chad Hanna – For the LIGO Scientific Collaboration.

  2. Introduction • It is common to have single-detector triggers at SNR ~1000 and millions of triggers at SNR 7. • At the end of the pipeline we want only a few interesting candidates to follow up. • To do this we tune our pipeline with injections. • We try to separate injections from noise while ensuring that nearly all (>99%) of the injections detected at the onset survive the pipeline.

  3. Outline Where applicable I will discuss the following for three different inspiral searches: Binary Neutron Star (BNS), Primordial Black Hole Binaries (PBHB), and Binary Black Hole (BBH). • Coincidence – timing, mass, ψ, etc. • Effective distance cuts – amplitude consistency • Signal-based vetoes – χ², r² • Detection statistics (or how to separate the good stuff from the noise) • Background estimation – time slides

  4. Coincidence parameter philosophy (BNS, PBHB, BBH) • Injections are used to determine how a coincident event should be defined. • For BNS, PBHB, and BBH we compare the end time that we inject with what we detect in a single instrument. The timing error found in the single detector establishes a coincidence window we then apply to triggers between sites. (The maximum GW travel time between sites is, of course, automatically accounted for.) • For BNS and PBHB we can repeat the procedure for chirp mass and η (which are both functions of the masses). The BBH case is more difficult, as I will explain. • In all cases it is our philosophy to choose such windows generously so as not to miss a detection.
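The two-site coincidence test described on this slide can be sketched as follows. This is an illustrative sketch, not the actual LSC pipeline code: the function name and the specific light-travel-time value are my assumptions; the tuned window comes from the single-detector injection studies described above.

```python
# Sketch of a timing-coincidence test between two sites (illustrative
# names and values). The tuned window comes from single-detector
# injection studies; the maximum GW travel time between sites is added
# on top, as the slide describes.

def is_time_coincident(t1, t2, tuned_window, max_travel_time):
    """True if two single-detector end times (seconds) are consistent
    with a common gravitational-wave source."""
    return abs(t1 - t2) <= tuned_window + max_travel_time

# Example: 2 ms tuned window (S3 BNS) plus an assumed ~10 ms
# inter-site light travel time.
print(is_time_coincident(100.000, 100.011, 0.002, 0.010))  # True
print(is_time_coincident(100.000, 100.020, 0.002, 0.010))  # False
```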

  5. Coincidence parameters (BNS, PBHB) – End time We compare the end time of found injections with the injected end time in single detectors to place bounds on the coincidence windows between detectors. [Histograms of the end-time difference in seconds for PBHB and BNS injections.] S3 BNS timing coincidence window: 2 ms. S3 PBHB timing coincidence window: 4 ms.

  6. Coincidence parameters (BBH) – End time The BBH search is complicated by injecting several physical template families (e.g. EOB, TaylorT1), some of which produce tails in the timing distribution when detected with BCV templates. The worst-case parameters are chosen. S3 BBH timing window: 25 ms.

  7. Coincidence parameters (BNS, PBHB) – Chirp mass We compare the chirp mass of found injections with the injected chirp mass in single detectors to place bounds on the chirp mass coincidence windows between detectors. [Histograms of the chirp mass difference for PBHB and BNS injections.] S3 BNS chirp mass window: 0.02 M☉. S3 PBHB chirp mass window: 0.002 M☉.
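For concreteness, the chirp mass these windows are applied to is M_c = (m1·m2)^(3/5)/(m1+m2)^(1/5). A minimal sketch of the comparison, with illustrative function names (not the search code):

```python
# Chirp mass coincidence sketch (illustrative names, not pipeline code).

def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1*m2)**(3/5) / (m1+m2)**(1/5), in the same
    units as the component masses (here solar masses)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def chirp_mass_coincident(mc_1, mc_2, window):
    """Apply the tuned window: 0.02 Msun for S3 BNS, 0.002 Msun for
    S3 PBHB, per the slide."""
    return abs(mc_1 - mc_2) <= window

mc_h1 = chirp_mass(1.40, 1.40)  # ~1.22 Msun for a canonical 1.4-1.4 BNS
mc_l1 = chirp_mass(1.41, 1.40)
print(chirp_mass_coincident(mc_h1, mc_l1, 0.02))  # True
```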

  8. Coincidence parameters (BBH) – ψ0, ψ3 The BBH search doesn't tune mass but rather the BCV parameters ψ0 and ψ3. These are not injection parameters, and therefore the single-detector scheme shown in the previous slides doesn't work. Instead we must look at coincidences before we choose the ψ0, ψ3 coincidence windows. The BBH windows are ψ0 = 40000 and ψ3 = full range.

  9. Effective Distance Cut (BNS, PBHB) Injections (and real GW signals) have effective distance ratios close to unity for H1 and H2 (within calibration errors). Therefore any triggers found to have non-unity effective distance ratios are not consistent with real GW sources and may be disregarded. [Histograms of the fractional difference in effective distance for PBHB and BNS injections.] BNS fractional difference in effective distance cut: 0.45. PBHB fractional difference in effective distance cut: 0.50.
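A sketch of the cut, assuming the fractional difference is the symmetric form 2|D_H1 − D_H2|/(D_H1 + D_H2). The exact normalization is my assumption, and the names are illustrative; only the tuned thresholds (0.45 BNS, 0.50 PBHB) come from the slide.

```python
# Effective-distance consistency cut between the co-located H1 and H2
# detectors. The symmetric fractional-difference form below is an
# assumption; the thresholds are from the slide.

def passes_eff_distance_cut(d_h1, d_h2, max_frac_diff):
    """Keep trigger pairs whose effective distances agree as well as a
    real signal's would, given calibration errors."""
    frac_diff = 2.0 * abs(d_h1 - d_h2) / (d_h1 + d_h2)
    return frac_diff <= max_frac_diff

print(passes_eff_distance_cut(10.0, 10.5, 0.45))  # True (ratio near unity)
print(passes_eff_distance_cut(10.0, 20.0, 0.45))  # False (ratio far from unity)
```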

  10. Signal-based vetoes – χ² (PBHB, BNS) The χ² test is a waveform-consistency test used to separate real signals from false alarms. We actually adjust χ² to be a function of p (the number of frequency bins), ρ² (the SNR squared), and a parameter called δ. I will denote this modified χ² as χ*²: χ*² = χ² / (p + δ²ρ²). δ should be the bank mismatch, but it is tuned so as not to lose nearby injections.
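The rescaling on this slide can be written out directly; the variable names are mine:

```python
# Modified chi-squared from the slide: chi*^2 = chi^2 / (p + delta^2 * rho^2).
# The SNR-dependent term in the denominator keeps loud signals with a small
# bank mismatch (delta) from being vetoed. Names are illustrative.

def modified_chisq(chisq, p, snr, delta):
    return chisq / (p + delta**2 * snr**2)

# At low SNR the statistic is roughly chi^2/p; at high SNR the delta^2*rho^2
# term dominates and forgives mismatch-induced excess chi^2.
print(modified_chisq(160.0, 16, 8.0, 0.1))     # denominator 16 + 0.64 = 16.64
print(modified_chisq(1600.0, 16, 100.0, 0.1))  # denominator 16 + 100 = 116.0
```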

  11. Signal-based vetoes – r² (PBHB, BNS) The χ² test itself is powerful, but it can be refined further by examining the time a signal spends above an r² threshold, where r² = χ²/p. An injection spends little time above the threshold, whereas glitches (false alarms) spend a lot of time above it. See the poster by Andy Rodriguez.
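The duration-above-threshold idea might be sketched like this, assuming a discretely sampled χ² time series; the sampling step, threshold value, and names are all illustrative:

```python
# r^2 veto sketch: r^2 = chi^2/p, and we measure how long the time series
# stays above a threshold. Glitches dwell above it far longer than real
# signals, so long-dwell triggers can be vetoed. Values are illustrative.

def time_above_r2_threshold(chisq_series, p, r2_threshold, dt):
    """Total time (seconds) that r^2 = chi^2/p exceeds the threshold,
    for a chi^2 time series sampled every dt seconds."""
    return sum(dt for chisq in chisq_series if chisq / p > r2_threshold)

# A glitch-like chi^2 time series spends two samples above r^2 = 10:
print(time_above_r2_threshold([10.0, 200.0, 250.0, 30.0], 16, 10.0, 0.001))
```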

  12. Detection Statistics (BNS, PBHB) Now that we have a good waveform-consistency test (χ²), SNR alone is not the best way to separate false coincidences from injections. A better statistic, the effective SNR, combines SNR and χ². Lines of constant effective SNR follow the contours of accidental coincidences quite well. The 250 appearing in its definition is found empirically.
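The combined statistic alluded to here is, to my understanding, the effective SNR used in the S3/S4-era searches; the formula below is my reconstruction (only the empirically tuned 250 is stated on the slide), with illustrative names:

```python
# Effective SNR sketch (my reconstruction of the statistic the slide
# alludes to): rho_eff = rho / [ (chi^2/(2p-2)) * (1 + rho^2/250) ]**(1/4),
# where the 250 is the empirically found constant from the slide.

def effective_snr(snr, chisq, p):
    return snr / ((chisq / (2 * p - 2)) * (1 + snr**2 / 250.0)) ** 0.25

# A loud glitch with poor chi^2 ranks below a quieter but clean trigger:
print(effective_snr(10.0, 30.0, 16))   # clean trigger, chi^2 per DOF ~ 1
print(effective_snr(20.0, 600.0, 16))  # loud glitch, chi^2 per DOF ~ 20
```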

  13. Background Estimation / Combined Statistic (PBHB, BNS) In order to differentiate between real signals and background, we examine false coincidences by sliding the trigger set of one detector with respect to another in time. A typical search uses more than 50 slides, in which two of the three detectors are slid by ~5-10 seconds each time. Using the statistic discussed earlier in a combined way (e.g. summing the H1 and L1 statistics), lines of constant false alarm rate are well approximated by the linear statistic contours between detectors.
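The time-slide background estimate can be sketched as a brute-force count. Names and parameters are illustrative; the real pipeline operates on full coincident trigger lists rather than bare time lists.

```python
# Time-slide background estimation sketch. Sliding one detector's triggers
# by offsets much larger than any physical coincidence window destroys real
# coincidences, so surviving pairs estimate the accidental rate.

def time_slide_background(triggers_a, triggers_b, window, n_slides, slide_step):
    """Return the accidental-coincidence count for each time slide."""
    counts = []
    for k in range(1, n_slides + 1):
        shift = k * slide_step  # e.g. ~5-10 s per slide, per the slide text
        counts.append(sum(
            1
            for ta in triggers_a
            for tb in triggers_b
            if abs(ta - (tb + shift)) <= window
        ))
    return counts

# A genuine zero-lag coincidence contributes nothing to the slid background:
print(time_slide_background([100.000], [100.001], 0.01, 3, 5.0))  # [0, 0, 0]
```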

  14. Conclusion • BNS, PBHB, and BBH searches are similar in that they must all overcome messy data. • Although the procedures for each search vary, the philosophy remains the same: end with a few good candidates while leaving little question of missing a detection.
