
Presentation Transcript


  1. AMS-02 Computing and Ground Data Handling
  CHEP 2004, September 29, 2004, Interlaken
  Alexei Klimentov — Alexei.Klimentov@cern.ch
  ETH Zurich and MIT

  2. Outline
  • AMS – a particle physics experiment: STS-91 precursor flight, AMS-02 ISS mission
  • Classes of AMS data
  • Data flow
  • Ground centers
  • Data transmission SW
  • AMS-02 distributed Monte-Carlo production

  3. AMS: a particle physics experiment in space
  AMS, the Alpha Magnetic Spectrometer, is scheduled for a three-year mission on the International Space Station (ISS).
  Physics goals: accurate, high-statistics measurements of charged cosmic-ray spectra in space above 0.1 GV
  • The study of dark matter (90%?): nuclei and e-/e+ spectra measurements
  • Determination of the existence or absence of antimatter in the Universe: look for negative nuclei such as anti-helium and anti-carbon
  • The study of the origin and composition of cosmic rays: measure isotopes D, He, Li, Be…

  4. AMS-01
  The operation principles of the apparatus were tested in space during the STS-91 precursor flight: 100M events recorded, trigger rates 0.1-1 kHz, DAQ livetime 90%.
  Results:
  • Anti-matter search: anti-He/He = 1.1x10^-6
  • Charged cosmic-ray spectra: protons, D, e-/e+, He, N
  • Geomagnetic effects on cosmic rays: under/over geomagnetic-cutoff components
  Detector components:
  Magnet: Nd2Fe14B
  TOF: trigger, velocity and Z
  Si Tracker: charge sign, rigidity, Z
  Aerogel Threshold Cherenkov: velocity
  Anticounters: reject multi-particle events

  5. (figure-only slide)

  6. AMS-02
  • Superconducting Magnet (B = 1 Tesla)
  • Transition Radiation Detector (TRD): proton rejection better than 10^-2, lepton identification up to 300 GeV
  • Time Of Flight counters (TOF): time-of-flight measurement to an accuracy of 100 ps
  • Silicon Tracker: 3D particle-trajectory measurement with 10 um coordinate resolution, and energy-loss measurement
  • Anti-Coincidence Veto Counters (ACC): reject particles that leave or enter via the shell of the magnet
  • Ring Imaging Cherenkov counter (RICH): measures the velocity and charge of particles and nuclei
  • Electromagnetic Calorimeter (ECAL): measures the energy of gamma rays, e-, e+; distinguishes e-/e+ from hadrons

  7. DAQ Numbers
  Raw data rate: 3.7 Mbit x 200-2000 Hz = 0.7-7 Gbit/s
  After data reduction and filtering: 2 Mbit/s
  AMS power budget: 2 kW
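  A quick back-of-the-envelope check of these figures (a sketch using only the numbers quoted on this slide; the variable names are mine):

```python
# Back-of-the-envelope check of the DAQ figures quoted above;
# event size and trigger-rate range are taken from the slide itself.
event_size_mbit = 3.7            # raw front-end data per trigger, in Mbit
trigger_rate_hz = (200, 2000)    # trigger-rate range, in Hz

raw_rate_gbit_s = [event_size_mbit * r / 1000 for r in trigger_rate_hz]
print(f"raw data rate: {raw_rate_gbit_s[0]:.1f}-{raw_rate_gbit_s[1]:.1f} Gbit/s")

downlink_mbit_s = 2.0            # average science downlink after reduction/filtering
reduction = [r * 1000 / downlink_mbit_s for r in raw_rate_gbit_s]
print(f"required on-board reduction: ~{reduction[0]:.0f}x to {reduction[1]:.0f}x")
```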

  8. AMS (figure-only slide)

  9. AMS-02 Ground Support Centers
  Payload Operations Control Center (POCC) at CERN (first 2-3 months in Houston, TX):
  • the "counting room" and usual source of commands
  • receives Health & Status (H&S), monitoring and science data in real time
  • receives NASA video
  • voice communication with NASA flight operations
  Science Operations Center (SOC) at CERN (first 2-3 months in Houston, TX):
  • receives a complete copy of ALL data
  • data processing and science analysis
  • data archiving and distribution to Universities and Laboratories
  Ground Support Computers (GSC) at Marshall Space Flight Center, Huntsville, AL:
  • receive data from NASA -> buffer -> retransmit to the Science Operations Center
  Regional Centers (Aachen, ITEP, Karlsruhe, Lyon, Madrid, Milan, MIT, Nanjing, Shanghai, Taipei, Yale, … -> 19 centers):
  • analysis facilities to support geographically close Universities

  10. Classes of AMS Data (Health & Status data)
  • Critical Health and Status data: status of the detector
    Magnet state (charging, persistent, quenched…)
    Input power (1999 W)
    Temperature (low, high)
    DAQ state (active, stuck)
  • Rate < 1 kbit/s
  • Needed in real time (RT) by the AMS Payload Operations and Control Center (POCC), the ISS crew and NASA ground

  11. Classes of AMS Data (Monitoring data)
  • Monitoring (house-keeping, slow control) data: all slow-control data from all slow-control sensors
  • Data rate ~ 10 kbit/s
  • Needed in near real time (NRT) by the AMS POCC, visible to the ISS crew
  • Complete copy "later" (close to NRT) for science analysis

  12. Classes of AMS Data (Science data)
  • Science data: events and sub-detector calibrations
  • A sample of approximately 10% goes to the POCC to monitor detector performance in RT
  • Complete copy "later" to the SOC for event reconstruction and physics analysis
  • 2 Mbit/s orbit average

  13. Classes of AMS Data (Flight Ancillary data)
  • Flight ancillary data: ISS latitude, attitude, speed, etc.
  • Needed in near real time (NRT) by the AMS POCC
  • Complete copy "later" (close to NRT) for science analysis
  • 2 kbit/s

  14. Commands
  • Command types:
    simple, fixed (a few bytes – require H&S data visibility)
    short, variable length (<1 kByte – require monitoring data)
    files, variable length (kBytes to MBytes – require science data)
  • In the beginning we may need to command intensively; over the long haul we anticipate:
    a few simple or short commands per orbit
    occasional (daily to weekly) periods of heavy commanding
    very occasional (weekly to monthly) file loading
  • Command sources:
    Ground: one source of commands – the POCC
    Crew via ACOP – contingency use only, simple or short commands

  15. (figure-only slide)

  16. AMS Crew Operations Post (ACOP)
  ACOP is a general-purpose computer; its main duties are to:
  • Serve as an internal recording device to preserve data
  • Allow burst-mode playback operations at up to 20 times the original speed to assist in data management
  • Allow access to the MRDL link (another path to ground), enabling AMS to take advantage of future ISS upgrades such as 100BaseT MRDL
  • Potentially provide additional data compression / triggering functions to minimize data downlink
  • Serve as an additional command interface:
    upload of files to AMS (adjust main triggering)
    direct commanding to AMS

  17. AMS Ground Data Handling
  How much of the AMS data (and how soon) gets to the ground centers determines how well:
  • detector performance can be monitored
  • detector performance can be optimized
  • detector performance can be tuned into physics

  18. AMS Ground Data Centers (figure-only slide)

  19. Ground Support Computers
  • At Marshall Space Flight Center (MSFC), Huntsville, AL
  • Receive data from the NASA Payload Operations and Integration Center (POIC)
  • Buffer data until retransmission to the AMS Science Operations Center (SOC) and, if necessary, to the AMS Payload Operations and Control Center (POCC)
  • Run unattended 24 hours/day, 7 days/week
  • Must buffer about 600 GB (data for 2 weeks)
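  A rough consistency check of the 600 GB figure (a sketch combining the average downlink rates quoted on the earlier "Classes of AMS Data" slides; reading the difference as a safety margin is my own interpretation):

```python
# Rough sizing of the GSC buffer from the average downlink rates quoted
# on the "Classes of AMS Data" slides (2 Mbit/s science data dominates).
avg_rate_mbit_s = 2.0 + 0.010 + 0.002        # science + monitoring + ancillary
bytes_per_day = avg_rate_mbit_s * 1e6 / 8 * 86400
two_weeks_gb = bytes_per_day * 14 / 1e9
print(f"~{two_weeks_gb:.0f} GB in two weeks")  # ~300 GB
# The quoted 600 GB therefore corresponds to roughly a factor-2 margin
# on top of the average rate (the margin itself is an assumption).
```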

  20. Payload Operations and Control Center
  • The AMS "counting room"
  • Usual source of AMS commands
  • Receives H&S, monitoring, science and NASA data in real-time mode
  • Monitors the detector state and performance
  • Processes about 10% of the data in near-real-time mode to provide fast information to the shift taker
  • Video distribution "box"
  • Voice loops with NASA

  21. Science Operations Center
  • Receives a complete copy of ALL data
  • Data reconstruction and processing: generates event summary data and does event classification
  • Science analysis
  • Archives and records ALL raw, reconstructed and H&S data
  • Data distribution to AMS Universities and Laboratories

  22. Regional Centers
  • Analysis facilities to support local AMS Universities and Laboratories
  • Monte-Carlo production
  • Mirroring of DSTs (ESDs)
  • Provide access to SOC data storage (event visualization, detector and data-production status, samples of data, video distribution)

  23. Telescience Resource Kit (TReK)
  • TReK is a suite of software applications that provides:
    local ground support system functions
    an interface with the POIC to utilize POIC remote ground support system services
  • TReK is suitable for individuals or payload teams that need to monitor and control low/medium data-rate payloads
  • The initial cost of a TReK system is less than $5,000
  (M. Schneider, MSFC/NASA)

  24. ISS Payload Telemetry and Command Flow
  (diagram slide: payload user data and payload uplinks flow between the International Space Station / Space Shuttle, TDRS, the White Sands Complex, SSCC/MCC-H, the POIC (EHS, PDSS, PPS), the Telescience Support Centers (TSCs) and US investigator sites)
  (M. Schneider, MSFC/NASA)

  25. Telemetry Services
  TReK telemetry capabilities:
  * Receive, process, record, forward, and play back telemetry packets
  * Display, record, and monitor telemetry parameters
  * View incoming telemetry packets (hex/text format)
  * Telemetry processing statistics
  (M. Schneider, MSFC/NASA)

  26. Command Services
  TReK command capabilities:
  * Command system status & configuration information
  * Remotely initiated commands (command built from the POIC DB)
  * Remotely generated commands (command built at the remote site)
  * Command updates
  * Command responses
  * Command session recording/viewing
  * Command track
  * Command statistics
  (M. Schneider, MSFC/NASA)

  27. Internet Voice Distribution System (IVoDS)
  • Windows NT/2000 PC with COTS sound card and headset
  • Web-based for easy installation and use
  • PC location very mobile – anywhere on the LAN
  • Challenge: minor variations in PC hardware and software configurations at remote sites
  (diagram slide: encrypted IP voice packets flow over NASA, research and public IP networks between IVoDS user client PCs at remote sites and the MSFC Payload Operations and Integration Center; POIC-side components include conference servers, an administrator server, a virtual private network server, VoIP telephony gateways, the EVoDS voice switch and keysets, and a PAYCOM client PC)
  (K. Nichols, MSFC/NASA)

  28. IVoDS User Client Capabilities
  • Monitor 8 conferences simultaneously
  • Talk on one of these eight conferences using the spacebar, the 'Click to Talk' button, or 'Mic Lock'
  • User selects from an authorized subset of available voice conferences
  • Volume control/mute for individual conferences
  • Assign talk and monitor privileges per user and conference
  • Show lighted talk traffic per conference
  • Talk to the crew on Space (Air) to Ground if enabled by PAYCOM
  • Save and load conference configurations
  • Set password
  (K. Nichols, MSFC/NASA)

  29. Data Transmission
  • Given the long running period (3+ years) and the way data will be transmitted from the detector to the ground centers, high-rate data transfer between MSFC (AL) and the AMS centers (POCC, SOC) will become of paramount importance

  30. Data Transmission SW
  Goals:
  • to speed up data transfer
  • to encrypt sensitive data and leave bulk data unencrypted
  • to run in batch mode with automatic retry in case of failure
  … we started looking around and came up with bbftp (we are still looking for good network monitoring tools). bbftp was developed in BaBar and is used to transmit data from SLAC to IN2P3 in Lyon. We adapted it for AMS and wrote service and control programs; a sketch of the batch/retry logic is shown below.
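  A minimal sketch of what batch-mode transfer with automatic retry around bbftp can look like (this is an illustration, not the actual AMS service and control programs; the host name, account and exact bbftp options shown are assumptions):

```python
import subprocess
import time

# Hypothetical settings; the real host, account and stream count are not from the slides.
REMOTE_HOST = "ams-soc.example.cern.ch"
REMOTE_USER = "amsdaq"
MAX_RETRIES = 5
RETRY_DELAY_S = 60

def transfer(local_path: str, remote_path: str) -> bool:
    """Try to push one file with bbftp, retrying automatically on failure."""
    # '-e' runs a bbftp command string, '-u' selects the remote account,
    # '-p' sets the number of parallel streams (option usage is illustrative).
    cmd = ["bbftp",
           "-e", f"put {local_path} {remote_path}",
           "-u", REMOTE_USER,
           "-p", "4",
           REMOTE_HOST]
    for attempt in range(1, MAX_RETRIES + 1):
        if subprocess.run(cmd).returncode == 0:
            return True              # transfer succeeded; the caller may purge the file
        time.sleep(RETRY_DELAY_S)    # back off before the next automatic retry
    return False                     # give up and leave the file for the next pass
```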

  31. Data Transmission SW (the inside details)
  Server:
  • copy data files between directories (optional)
  • scan data directories and make a list of files to be transmitted
  • purge successfully transmitted files and do book-keeping of the transmission sessions
  Client:
  • periodically connect to the server and check whether new data are available
  • bbftp the new data and update the transmission status in the catalogues
  (A sketch of this polling loop is given below.)
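  A minimal sketch of the client side of this scheme, reusing the hypothetical transfer() wrapper from the previous sketch (the spool directory, catalogue file and polling interval are assumptions; the real programs keep their book-keeping in proper catalogues):

```python
import time
from pathlib import Path

# Hypothetical locations; the real spool directory and catalogue are not from the slides.
SPOOL_DIR = Path("/data/spool")           # where files to be transmitted appear
SENT_CATALOGUE = Path("/data/sent.list")  # flat-file stand-in for the transmission catalogue

def already_sent() -> set[str]:
    return set(SENT_CATALOGUE.read_text().split()) if SENT_CATALOGUE.exists() else set()

def mark_sent(name: str) -> None:
    with SENT_CATALOGUE.open("a") as f:   # update the transmission status in the catalogue
        f.write(name + "\n")

while True:
    sent = already_sent()
    # periodically check whether new data files have appeared
    for path in sorted(SPOOL_DIR.glob("*.dat")):
        if path.name in sent:
            continue
        # transfer() is the bbftp retry wrapper sketched after the previous slide
        if transfer(str(path), f"/incoming/{path.name}"):
            mark_sent(path.name)
    time.sleep(300)   # poll again in five minutes
```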

  32. Data Transmission Tests (figure-only slide)

  33. AMS Distributed Data Production
  • Computer simulation of the detector response is a good opportunity not only to study detector performance, but also to test the HW and SW solutions that will be used for AMS-02 data processing
  • Data are generated at 19 Universities and Laboratories, transmitted to CERN and then made available for analysis

  34. Year 2004 MC Production
  • Started Jan 15, 2004
  • Central MC database
  • Distributed MC production
  • Central MC storage and archiving
  • Distributed access

  35. AMS Distributed Data Production
  • CORBA client/server for inter-process communication
  • Central relational database (ORACLE) storing regional-center descriptions, the list of authorized users, the list of known hosts, job parameters, file catalogues, versions of programs and executable files, etc.
  • Automated and standalone modes for processing jobs (an illustrative job description sketch follows this slide):
    Automated mode:
    - a job description file is generated from the remote user's request (via the Web)
    - the user submits the job file to a local batch system
    - the job requests from the central server: calibration constants, slow-control corrections, service info (e.g. the path where DSTs are stored)
    - the central server keeps a table of active clients and the number of processed events, and handles all interactions with the database and data transmission
    Standalone mode:
    - a job description file is generated from the remote user's request (via the Web)
    - the user receives a stripped database version and submits the job
    - the client does not communicate with the central server during job execution
    - DSTs and log files are bbftp'ed to CERN by the user
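  Purely as an illustration of what a generated job description might carry (the real format and field names are not given in the talk; every field below is an assumption):

```python
# Hypothetical job description for the automated mode; none of these names
# come from the talk, and the real format is defined by the AMS production database.
job_description = {
    "job_id": 20040117,           # assigned by the central production database
    "mode": "automated",          # or "standalone"
    "dataset": "protons.mc2004",  # hypothetical dataset name
    "n_events": 100000,
    "software_version": "v4.00",
    "output_dir": "/ams/mc/dst",  # where DSTs are written (returned by the server in
                                  # automated mode, fixed in the stripped DB in standalone mode)
}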

  36. MC Production Statistics
  185 days, 1196 computers
  8.4 TB produced; daily CPU usage equivalent to 250 PIII 1 GHz CPUs
  URL: pcamss0.cern.ch/mm.html

  37. Y2004 MC Production Highlights
  • Data are generated at remote sites, transmitted to AMS@CERN and made available for analysis (only 20% of the data was generated at CERN)
  • Transmission, process-communication and book-keeping programs have been debugged; the same approach will be used for AMS-02 data handling
  • 185 days of running (~97% stability)
  • 18 Universities & Labs
  • 8.4 TBytes of data produced, stored and archived
  • Peak rate 130 GB/day (12 Mbit/s), average 55 GB/day (the AMS-02 raw data transfer will be ~24 GB/day)
  • 1196 computers
  • Daily CPU usage equivalent to 250 1 GHz CPUs running 24 h/day for 184 days
  A good simulation of AMS-02 data processing and analysis
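  A quick consistency check of the quoted rates (a sketch using only the numbers on this slide):

```python
peak_gb_per_day = 130
peak_mbit_s = peak_gb_per_day * 1e9 * 8 / 86400 / 1e6
print(f"{peak_mbit_s:.0f} Mbit/s")     # ~12 Mbit/s, matching the slide

avg_gb_per_day = 8.4e3 / 185           # total volume spread over the whole running period
print(f"{avg_gb_per_day:.0f} GB/day")  # ~45 GB/day; the quoted 55 GB/day average
                                       # presumably counts only days with transfers
```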

  38. List of Acronyms
  bps – bit per second
  CA – California
  CERN – European Laboratory for Particle Physics, Geneva, CH
  CET – Central European Time
  DLT – Digital Linear Tape
  DST – Data Summary Tape
  ESD – Event Summary Data
  FTP – File Transfer Protocol
  GB – GigaByte
