
Novosibirsk BINP contribution to ATLAS in 2009 and plans for 2010



  1. Novosibirsk BINP contribution to ATLAS in 2009 and plans for 2010

  2. LAr calorimeters commissioning and operation
  O. Beloborodova, V. Bobrovnikov, D. Maximov, A. Maslennikov, I. Orlov, K. Skovpen, A. Talyshev
  The Novosibirsk BINP contribution to LAr (evaluated as 1.8 FTE in 2009) includes:
  • Participation in shifts in the ATLAS Control Room
  • Software for Data Quality monitoring (especially for the EMEC presampler)
  • Data analysis (including calibrations)
  • Software for realistic simulation of the LAr calorimeters
  • Software for slow control, including storing data in online and offline databases (e.g. temperature control of calorimeter modules and electronics)

  3. EMEC presampler: a mini-detector for which BINP was fully responsible at all stages, from design through fabrication, installation and commissioning.
  Presampler purpose: recover the degradation of energy linearity and resolution caused by inactive ("dead") material (DM):
  E_particle = E_DM + E_CALO, with E_DM = w_PS · E_PS; in ATLAS the optimal weight is w_PS ≈ 20 · t [X0].
  Energy distributions before (weight = 0) and after (weight = 61.8) the presampler correction (2002 test beam) show the resolution improved by a factor of two. The presampler weight is varied while the calorimeter sampling weights are kept fixed at their values for runs without dead material.
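To make the correction formula concrete, here is a minimal sketch (not ATLAS code); the energy values below are made-up illustrative numbers, and only the weight 61.8 comes from the 2002 test beam example quoted on this slide.

```python
# Sketch of the dead-material correction E_particle = E_CALO + w_PS * E_PS.
# Illustrative only: the energy values are invented, not test beam data.

def corrected_energy(e_calo, e_ps, w_ps):
    """Estimate the particle energy, approximating E_DM as w_PS * E_PS."""
    e_dm = w_ps * e_ps    # energy lost in dead material in front of the calorimeter
    return e_calo + e_dm  # E_particle = E_DM + E_CALO

print(corrected_energy(e_calo=170.0, e_ps=0.3, w_ps=0.0))   # 170.0 GeV, uncorrected
print(corrected_energy(e_calo=170.0, e_ps=0.3, w_ps=61.8))  # ~188.5 GeV, corrected
```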

  4. Using single-beam events with significant energy deposition in all calorimeter cells, taken in November 2009, it was shown that all 1536 EMEC presampler channels are working. The time resolution is better than 1.5 ns even without the correction for cell position (eta). The pulse shape and noise are in good agreement with expectations and with test beam data.

  5. Realistic description of the LAr calorimeters geometry (A. Sukharev, Novosibirsk BINP)
  • Implementation of the complicated non-standard geometry of the calorimeter in Geant4
  • Accounting for various distortions of the geometry (e.g. gravitational sagging) and for HV problems
  • Description of dead material in the calorimeter
  • Adaptation to test beam conditions
  • Simulation of various charge collection effects in the calorimeter's cells
  • Keeping all the code up to date with the evolving ATLAS software framework
  • Support and optimization of the simulation

  6. The ATLAS liquid argon detector temperature measurement system
  (Photo: rack with ELMBs)
  The ATLAS LAr EM calorimeter signal shows a 2%/K temperature dependence. To keep its contribution to the constant term of the calorimeter energy resolution negligible, a temperature stability and uniformity of 100 mK is required. A temperature measurement system based on high-precision platinum probes (PT100) has been created; more than 500 probes have been installed in the three LAr cryostats. The temperature acquisition system is based on ELMBs and has been running continuously since the end of 2007.
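As a quick check of the numbers quoted above, the sketch below converts the 2%/K dependence and the 100 mK specification into an energy-scale spread. The 0.7% value used for comparison is an assumption added here for context (a typical design constant term), not a number stated on the slide.

```python
# Worked arithmetic for the temperature requirement quoted above.
signal_dependence = 0.02    # relative signal change per kelvin (2%/K)
temperature_spread = 0.100  # required stability/uniformity in kelvin (100 mK)

scale_spread = signal_dependence * temperature_spread  # relative energy-scale spread
assumed_constant_term = 0.007  # ASSUMPTION: ~0.7% design constant term, for comparison

print(f"scale spread from temperature: {scale_spread:.2%}")          # 0.20%
print(f"assumed design constant term:  {assumed_constant_term:.2%}")  # 0.70%
# 0.2% is well below 0.7%, so the temperature contribution stays negligible.
```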

  7. View of panel of LAr temperature measurement system (EMEC A)

  8. View of FSM panel for EM barrel

  9. Comparison between the average temperatures of the A-side EMEC (green points), the EM barrel (red points) and the C-side EMEC (blue points) over 9 months (from ATL-COM-LARG-2010-003).

  10. LAr temperature data for 2010 (plots for the EM Barrel, EMEC side A and EMEC side C)

  11. Work on Physics at BINP.
  1. Search for a heavy Majorana neutrino in the Left-Right Symmetric Model (LRSM). Since March 2009, in collaboration with the University of Pittsburgh, we have been studying the possibility to observe the production and subsequent decay of a heavy Majorana neutrino N in the final state with two high-pT leptons and two jets, in the framework of the Left-Right Symmetric Model (LRSM, or Mirror Symmetry); see the diagram on the next slide. As its name suggests, the model restores left-right symmetry at high energies (in particular by introducing new heavy intermediate vector bosons WR and ZR). The model naturally explains the small but non-zero masses of the known "light" neutrinos (via the so-called see-saw mechanism), as well as the present-day prevalence of matter over anti-matter (baryon number B and lepton number L may be violated separately while B - L is conserved). The cross-section of the process falls quickly with increasing WR and N masses, but it has been shown that for masses WR ~ 1 TeV and N ~ 0.5 TeV the process can be discriminated from the Standard Model background and observed with an integrated luminosity of several hundred inverse picobarns. Even with a few tens of inverse picobarns, the current lower limit on the WR mass (~0.6 TeV) should be improved.

  12. (Diagram of the production and decay of the heavy Majorana neutrino N, referred to on the previous slide)

  13. Work on Physics at BINP (continued).
  2. Study of the possibility to measure the tau lepton lifetime in the channel Z → τ+τ− with hadronic tau decays.
  3. Extraction of optimal coefficients for dead material corrections from data (e.g. by analyzing the relative longitudinal sampling fractions in the process W → eν, and then, with increasing statistics, in the process Z → e+e−).

  14. BINP Contribution to ATLAS computing: 4 major activities (see details in the next slides):
  1) Developing software for ATLAS Distributed Computing, in particular for Data Replication Monitoring and the ATLAS Grid Information System: A. Anisenkov, D. Krivashin, R. Kuskov, A. Makeev (~1.5 FTE in 2009)
  2) Valuable participation in the Trigger & DAQ SysAdmin group: A. Bogdanchikov, A. Korol, A. Zaytsev (~1 FTE in 2009)
  3) System administration of ATLAS Central Services: A. Buzykaev, V. Gulevich (~0.5 FTE in 2009)
  4) Deployment of the BINP GRID cluster and cooperation with the NUSC (Novosibirsk University SuperComputer) cluster: S. Belov, V. Kaplin, A. Sukharev, A. Zaytsev

  15. Data Replication Monitoring (FT/FDR/CR reprocessing and data replication monitoring)
  The package provides functionality to monitor dataset transfers and builds graphical representations and summary information for the controlled data. It includes:
  • Auto-updated online plots and statistics for various dataset distributions
  • Interactive monitoring pages
  • Checking tools and cron jobs collecting information from the File Catalogs
  • Daily/weekly saved history snapshots
  • PANDA-integrated interactive monitoring pages
  Data Replication Monitoring provides a wide range of plots, pages and tools to monitor the dataset distribution over the ATLAS Computing Grid. Today the Data Replication Monitoring System is widely used by ADC shifters (ADCOS or ADC@P1) and ADC experts during data taking, Functional Tests, Reprocessing and other replication tasks.
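The slide above describes the package functionally rather than showing its code; the following is a hypothetical sketch of the cron-driven collection step. The catalog interface and function names are invented for illustration and are not the real package's API.

```python
# Hypothetical sketch: a cron job collecting replica information from a file
# catalog and saving a daily history snapshot for the monitoring pages.
# The catalog object and its count_complete_replicas() method are invented.
import json
import time

def collect_snapshot(catalog, sites):
    """Return {site: number of complete dataset replicas} for the monitored sites."""
    return {site: catalog.count_complete_replicas(site) for site in sites}

def save_daily_snapshot(snapshot, directory="history"):
    """Write the snapshot to a dated JSON file, one file per day."""
    stamp = time.strftime("%Y-%m-%d")
    with open(f"{directory}/replicas-{stamp}.json", "w") as out:
        json.dump(snapshot, out, indent=2)
```

An interactive page or plotting job can then read the accumulated daily files to draw the dataset-distribution history shown on the next slide.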

  16. Data Replication Monitoring (online plots)
  Some examples of plots; go to the corresponding URLs for more:
  http://atladcops.cern.ch:8000/history/
  http://atladcops.cern.ch:8000/drmon/crmon.html
  (Plot snapshots from 2 Dec 2009: Tier-1s by sites, by clouds and by transfer time; Tier-2s by sites and by clouds; T1-T1 transfers; ATLAS data transfer time to Tier-1s in hours, log scale; dataset transfer times to SARA_MATRIX and to CERN in hours)

  17. ATLAS Grid Information System (AGIS)
  AGIS is a database-backed information system designed to store and deploy static and semi-static information needed by Distributed Computing applications: services, resources, configuration parameters and the topology of the whole ATLAS computing Grid. The AGIS design is based on a client-server architecture model.
  Base functionality list:
  • Python API functions to manage data
  • User-friendly web interface to control and manage the stored configuration data
  • Command line interface for the base functionality to retrieve and modify data
  • Web interface and Python API to query and browse the database information
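As an illustration of the client-server model and Python API mentioned above, here is a toy sketch; all class and method names are invented, since the actual AGIS API is not shown in this presentation.

```python
# Toy stand-in for an AGIS-like store of topology and site configuration.
# Invented names throughout; illustrates the usage pattern only.
class AGISClientSketch:
    def __init__(self):
        self._sites = {}  # site name -> configuration record

    def add_site(self, name, cloud, tier):
        """Register a site with its cloud and tier (part of the ATLAS topology)."""
        self._sites[name] = {"cloud": cloud, "tier": tier, "downtime": False}

    def get_site(self, name):
        """Query the stored configuration data for one site."""
        return self._sites[name]

    def set_downtime(self, name, flag):
        """Modify semi-static data, e.g. mark a site as being in downtime."""
        self._sites[name]["downtime"] = flag

client = AGISClientSketch()
client.add_site("SITE-A", cloud="CLOUD-1", tier=2)  # illustrative names
client.set_downtime("SITE-A", True)
print(client.get_site("SITE-A"))
```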

  18. Base Information Stored in AGIS
  • ATLAS topology (clouds, tiers, sites and site specifics)
  • Site resources and services information
  • Site information and configuration
  • Data replication sharing and pairing policy
  • List of activities and their properties
  • Global configuration parameters needed by ADC applications
  • User-related information (privileges, roles, account info) for ADC applications (AGIS, Panda, DaTRI, etc.)
  • Downtime information about ATLAS sites and services
  • Site blacklisting data

  19. AGIS applications
  • The ATLAS Downtime Calendar shows information about site downtimes (EGEE, NDGF and OSG), taking the information directly from the downtime databases; it is actively used by ADC shifters and experts during data distribution
  • AGIS provides site availability information to the subscription programs so that they can decide whether to include or exclude a site from data replication (a minimal sketch of this decision follows below)
  • ADC applications use the AGIS API to communicate with the AGIS server for ATLAS downtimes, site blacklistings and ATLAS topology information
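A minimal sketch of that include/exclude decision, assuming illustrative field names (the real subscription programs and the AGIS schema are not shown in this presentation):

```python
# Hypothetical sketch: decide whether a site may receive data replications,
# based on downtime and blacklisting information of the kind AGIS provides.
def site_usable(site):
    """A site is excluded from replication if it is down or blacklisted."""
    return not site["downtime"] and not site["blacklisted"]

sites = [
    {"name": "SITE-A", "downtime": False, "blacklisted": False},
    {"name": "SITE-B", "downtime": True,  "blacklisted": False},
    {"name": "SITE-C", "downtime": False, "blacklisted": True},
]
replication_targets = [s["name"] for s in sites if site_usable(s)]
print(replication_targets)  # ['SITE-A']
```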

  20. BINP Contribution to the ATLAS Trigger & DAQ SysAdmin Group
  BINP has been contributing to all the activities of the ATLAS Trigger/DAQ SysAdmin Group since 2007:
  • D. Popov (2007-2008)
  • A. Zaytsev (2008-2010)
  • A. Korol (2009-2010)
  • A. Bogdanchikov (2009-2010)
  The contribution includes:
  • Support of the existing TDAQ environment (> 1500 servers, > 200 racks of equipment in SDX1 and USA15, ATLAS Main Control Room and Satellite Control Room equipment)
  • Support of ATLAS Point 1 users (> 2800 users)
  • Development of various system administration tools for internal use within the group
  • Building and validating hardware solutions for future use in the ATLAS TDAQ environment
  • Taking part in 24-hour TDAQ SysAdmin shifts (since mid-summer 2008)

  21. Previous Achievements (2009) (CHEP2009 poster contribution, Mar 2009)
  • Migration of the ATLAS gateways to new servers with XEN-based virtualization: the initial deployment was performed in 2008Q4, with the migration to be finalized in 2009Q2
  • Implementation of bulk server firmware upgrade tools for the netbooted nodes deployed in ATLAS Point 1: successfully applied in 2008Q4 to upgrade more than 1000 nodes installed in SDX1
  • Deployment and support of the ATLAS Remote Monitoring servers: providing SysAdmin help to their users in testing commercial and free NX servers and SGD (Sun Global Desktop) based solutions
  • Implementation of monitoring and accounting data analysis tools based on the ROOT toolkit, successfully applied in 2008Q4-2009Q2 for ATLAS DCS and Nagios RRD temperature data analysis for SDX1 and for ATLAS gateway accounting system data visualization
  • Contributing to the everyday activities of the group, including ATLAS TDAQ SysAdmin shifts since Sep 2008, and taking part in multiple hardware maintenance operations in SDX1 and the ATLAS Control Room

  22. Recent Achievements (2010Q1)
  • Major upgrade of the ATLAS Remote Monitoring nodes: the nodes were reinstalled under SLC5.4 x86_64, and the current installation is fully documented
  • Supporting the ATLAS gateways and ATLAS Remote Monitoring: keeping the nodes up to date, adding more functionality and increasing the reliability of these subsystems, and getting smoothly through the highest peaks of user activity, e.g. the recent LHC media day (Mar 30, 2010)
  • Analyzing and visualizing the data collected on the gateways by the monitoring and accounting systems
  • Continuing to contribute to the everyday activities of supporting the ATLAS TDAQ computing environment over the period of LHC data taking
  • Taking part in the commissioning of the new ATLAS TDAQ HLT computing hardware to be deployed in Point 1 in 2010Q2: 10 racks of equipment (new high-density computing nodes), adding more than 5000 CPU cores to the ATLAS HLT computing farm (SDX1)

  23. ATLAS Computing: Central Services (Alexey Buzykaev, Vasily Gulevich)
  • Monitoring of the Central Catalogue (DQ2): developed and in operation since April 2009
  • Secure shared web hosting: developed in November 2009 and now actively used among ATLAS central facilities (atlas-runquery, atlas-trigconf, atlas-oracle admin, ...)
  • Created the Quattor component ncm-execscript
  • Support request processing for about 200 computing hosts dedicated to ATLAS in the CERN central IT building:
  • Applying security updates for the OS and application software
  • Helping ATLAS service managers to get their software running and to fit within CERN IT rules
  • Automating software package creation and uploading to the repository

  24. Operation of the ATLAS end-cap calorimeters at sLHC luminosities: an experimental study
  As part of the R&D work towards the upgrade of the LHC and its detectors, we perform measurements with HEC, EMEC and FCAL test modules using a high-intensity (up to 10^12 pps) beam at the IHEP (Protvino) 70 GeV accelerator, to establish the operating limits of the ATLAS end-cap calorimeters at a luminosity of 10^35 cm^-2 s^-1 (sLHC). The main topics include:
  1. Studies of the calorimeter cell response as a function of beam intensity and applied HV
  2. Analysis of the signal shape as a function of integrated particle flux
  3. Measurement of the radioactive pollution of the liquid argon, calorimeter components and materials as a function of integrated particle flux

  25. The mean normalized HEC signal (sum of four channels) for different signal amplitudes. Events with larger amplitudes correspond to regions of relatively higher beam intensity inside the spill. At low intensities the effects of ion build-up are negligible. With increasing beam intensity the current pulse changes: the falling edge becomes shorter and sags, producing a shorter and higher negative signal after shaping.

  26. The mean signal amplitude for four HEC channels as a function of the average beam intensity. Above a beam intensity of 10^10 pps the non-linearity of the response becomes visible.
