
Trigger Data Quality Assurance Workshop


Presentation Transcript


  1. Trigger Data Quality Assurance Workshop Offline Monitoring, Diagnostics and Validation session May 6, 2008 Ricardo Gonçalo

  2. Some questions:
  • How do we make sure that the trigger behaves as expected?
  • If something is wrong, how do we find where the problem happened? (a minimal sketch of one approach follows this slide)
  • What tools do we need? What tools do we have?
  • Offline trigger monitoring, trigger validation and offline reconstruction monitoring: what overlaps? Where can we save work?
  • How do we organise ourselves to make sure this happens?
    • What is done online during the run? (previous session)
    • What is done offline?
    • What should be done at CERN and what should be done remotely?
    • What computing and human resources will be needed at CERN and remotely?
    • Where does software validation fit?
  • Manpower: how many people and what skills are needed?
    • How are they organised?
    • A single team for offline and online monitoring tasks? Shifts? On-call experts?
    • How many people can we count on?
    • What training and skills are needed?
    • Documentation: for the general ATLAS member and for the trigger expert
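A minimal sketch, in Python, of the chain-by-chain check hinted at above: compare each trigger chain's pass fraction in a new run against a reference and flag outliers, which points to the step in the chain where a problem may have entered. The chain names, reference fractions and counts are purely illustrative, not taken from the talk.

```python
# Illustrative check: flag trigger chains whose pass fraction deviates
# from a reference run. All chain names and numbers are hypothetical.

REFERENCE = {"L2_e10": 0.052, "EF_e10": 0.031, "L2_mu6": 0.044, "EF_mu6": 0.027}

def flag_anomalous_chains(counts, n_events, tolerance=0.2):
    """Return chains whose pass fraction deviates from the reference
    by more than `tolerance` (relative)."""
    flagged = []
    for chain, ref_frac in REFERENCE.items():
        frac = counts.get(chain, 0) / float(n_events)
        if ref_frac > 0 and abs(frac - ref_frac) / ref_frac > tolerance:
            flagged.append((chain, frac, ref_frac))
    return flagged

# Example: EF_e10 passing far fewer events than expected is flagged,
# suggesting a problem at the event-filter step rather than at level 1.
counts = {"L2_e10": 5100, "EF_e10": 900, "L2_mu6": 4500, "EF_mu6": 2700}
for chain, frac, ref in flag_anomalous_chains(counts, n_events=100000):
    print("%s: pass fraction %.4f vs reference %.4f" % (chain, frac, ref))
```

Because level-1 and event-filter chains are checked independently, a failure localises itself: a healthy L2_e10 fraction alongside a low EF_e10 fraction points downstream of level 2.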

  3. The session aims are:
  • To assess our readiness to:
    • Verify the quality of the online trigger decision
    • Verify the trigger data saved in the event
  • To understand how we should organise ourselves to:
    • Increase the chances of finding and fixing problems
    • Minimise the effort needed to verify new data:
      • What extra information is useful?
      • Where do we need specific information/histograms from each detector?
      • Where do others need information from us?
    • Avoid duplication of effort
    • Understand the offline data quality plans
  • Our objective for data taking should be to:
    • Find problems fast and reliably
    • Be able to certify all steps in the chain
    • Have a coherent approach between online and offline (in trigger and reconstruction) whenever possible

  4. The story so far…
  • Online monitoring:
    • Immediate access to rates and monitoring histograms
    • Needs to react rapidly and talk to run control: the main weapons are "Darth Vader" prescaling together with masking cells/turning off channels
  • Offline monitoring:
    • Just starting: the initial plan is to have remote DQM shifts around the world
  • Castor:
    • Online monitoring histograms are stored in Castor
  • Tier 0:
    • Runs offline reconstruction (TrigDecisionMaker runs during reconstruction)
    • Latency of ~24 hours until data is reconstructed (with updated calibration & alignment)
  • The CAF:
    • Can run over a few % of the data to monitor data quality
    • Can run the L1 simulation or re-run the HLT on raw data (see Christiane's talk in this session)
  • Software validation:
    • Uses the same (configurable) histograms as monitoring (a minimal sketch follows this slide)
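As a concrete illustration of that last point, here is a minimal sketch of the kind of configurable histogram check that monitoring and validation could share: compare a monitored histogram from the current run against a reference using ROOT's Kolmogorov test. The file names, histogram path and threshold are hypothetical; only the ROOT calls (TFile.Open, TH1::KolmogorovTest) are real.

```python
# Illustrative shared data-quality check: Kolmogorov compatibility test
# between a run histogram and a reference histogram, via PyROOT.
import ROOT

def check_histogram(run_file, ref_file, hist_name, min_prob=0.01):
    """Return (ok, probability) for one monitored histogram."""
    f_run = ROOT.TFile.Open(run_file)
    f_ref = ROOT.TFile.Open(ref_file)
    h_run = f_run.Get(hist_name)
    h_ref = f_ref.Get(hist_name)
    prob = h_run.KolmogorovTest(h_ref)  # compatibility probability in [0, 1]
    return prob >= min_prob, prob

# Hypothetical usage: the same check, with a configurable threshold,
# could run on the online monitoring histograms stored in Castor and
# again at Tier 0 after reconstruction, giving the coherent
# online/offline approach aimed for above.
ok, prob = check_histogram("run12345_monitoring.root",
                           "reference.root",
                           "TrigEgamma/EF_e10_et")
print("EF_e10 ET spectrum:", "OK" if ok else "FLAG", "(p = %.3f)" % prob)
```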

  5. [Diagram: data-quality flow for diagnostics & validation, linking online monitoring, Tier 0, the CAF (~10% of the data) and offline monitoring]

  6. Finally… Try to learn from experience:
  • See e.g. the "previous experience" session in the Robustness Workshop in March
