
ALICE Data Challenges




Presentation Transcript


  1. ALICE Data Challenges Fons Rademakers

  2. Data Challenge • When fully operational, ALICE will take data at a rate of 1.5 GB/s • An order of magnitude higher than any of the other LHC experiments • Since we are unique, we cannot piggyback on ATLAS/CMS/IT R&D

  3. Objectives • Simulate the ALICE on-line and off-line data recording and processing chain • Track technologies: • DATE, ROOT, MSS, networking, CPUs and OSes • Repeat frequently to collect data points to see if we will be ready for 1.5 GB/s in 2005 • Be ready for T=0 with as few surprises as possible

  4. First Data Challenge • To store 10 TB of raw data in a ROOT DB on HPSS within 5 days (i.e. ~25 MB/s) • To test the following technologies: • DATE, ROOT, HPSS, Gigabit Ethernet, Linux and PCs • Very simple event model, but fairly realistic raw, tag and catalogue database environment • Executed in March 1999
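The quoted 25 MB/s follows directly from the 10 TB / 5 days target; a back-of-the-envelope check (assuming decimal units, 1 TB = 10^12 bytes, which the slide does not state) can be sketched as:

```cpp
// Back-of-the-envelope check of the ADC1 target rate: 10 TB in 5 days.
// Unit convention (decimal TB/MB) is an assumption, not from the slides.
double requiredRateMBs(double terabytes, double days) {
    double bytes   = terabytes * 1e12;      // 1 TB = 10^12 bytes (assumed)
    double seconds = days * 24.0 * 3600.0;  // days -> seconds
    return bytes / seconds / 1e6;           // sustained rate in MB/s
}
```

With these assumptions, `requiredRateMBs(10, 5)` comes out to roughly 23 MB/s, consistent with the ~25 MB/s quoted on the slide.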

  5. ADC1 Data Flow [flow diagram] • Data source: NA57, PowerPC/AIX LDCs, 5 MB/s each over a GB ethernet switch • Computer center: event builder (GDC) on an Intel/Linux PC (DATE = LDC + GDC) • Pipe to the ROOT objectifier, 5 MB/s to disk • rfcp at 5 MB/s to HPSS on an RS6000/AIX 5-node cluster

  6. DATE Data Source • DATE is developed by the ALICE DAQ team and used by NA57 and COMPASS • See talk by R. Divia (B293) this afternoon • LDC = Local Data Concentrator • GDC = Global Data Collector • In this setup the LDCs generated bit patterns simulating (NA57) events (400-500 kB/event)

  7. RAW Data DB • Raw data DB contains AliEvent objects, which contain: • an AliEventHeader object • 16 data members (72 bytes) • an AliRawData object • char array (variable length, 400-500 kB) • AliEvent objects are stored in a single TTree container with two branches (for the two contained objects)
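The event model above can be sketched in plain C++; in the real setup these are ROOT objects persisted in a single TTree with one branch per contained object, and the field names and types below are illustrative, not taken from the actual AliRoot headers:

```cpp
#include <vector>
#include <cstdint>

// Illustrative sketch of the ADC1 event model (not the real AliRoot code).
struct AliEventHeader {          // 16 fixed-size data members (~72 bytes)
    uint32_t fSize, fType, fRunNumber, fEventNumber, fBurstNumber;
    uint32_t fTrigger, fDetectorId, fTime;
    // ... remaining members elided in this sketch ...
};

struct AliRawData {              // variable-length payload, 400-500 kB
    std::vector<char> fBuffer;
};

struct AliEvent {                // one entry in the raw-data TTree
    AliEventHeader fHeader;      // stored on branch 1
    AliRawData     fRawData;     // stored on branch 2
};
```

Splitting header and payload onto separate branches lets a reader scan the small headers without deserializing the bulky raw data.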

  8. RAW Data DB (cont.) • No compression (would need more CPU power) • Size of individual raw DBs typically between 250 MB and 1.5 GB (tunable parameter) • As soon as a raw DB is closed it is moved to HPSS via rfcp (part of the CERN SHIFT software, interfaced to HPSS)
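The rollover policy above can be sketched as a simple size check; the threshold value and function names here are illustrative, standing in for the tunable parameter on the slide:

```cpp
#include <cstdint>

// Sketch of the raw-DB rollover policy: close the current file once it
// would exceed a tunable size limit (250 MB - 1.5 GB in ADC1), then hand
// it to the mass store (via rfcp to HPSS). Names/values are illustrative.
constexpr uint64_t kMaxRawDBSize = 1000ULL * 1000 * 1000;  // 1 GB, tunable

bool shouldRollOver(uint64_t currentDBSize, uint64_t nextEventSize) {
    return currentDBSize + nextEventSize > kMaxRawDBSize;
}
```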

  9. Tag DB • Tag DB contains AliEventHeader objects, which contain: • size, type, run number, event number, burst number, trigger, time, detector id, etc. • AliEventHeader objects are stored in a single TTree with 16 branches (one for each header data member) • Compressed • Stays on disk • Used for fast event selection
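The "fast event selection" the tag DB enables amounts to scanning only the small, compressed headers and keeping the events that pass a cut, never touching the raw data; a hypothetical sketch (field names illustrative, reduced to three of the 16 header members):

```cpp
#include <vector>
#include <cstdint>

// Minimal stand-in for an event tag: three of the 16 header fields.
struct EventTag { uint32_t run, event, trigger; };

// Return the event numbers in a given run that match a trigger type,
// scanning only the compact on-disk tags.
std::vector<uint32_t> selectEvents(const std::vector<EventTag>& tags,
                                   uint32_t run, uint32_t trigger) {
    std::vector<uint32_t> out;
    for (const auto& t : tags)
        if (t.run == run && t.trigger == trigger)
            out.push_back(t.event);
    return out;
}
```

In the real setup this corresponds to a selection over the 16 header branches of the tag TTree.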

  10. Run Catalog DB • Run DB contains one AliStats object per produced raw data DB • An AliStats object contains: • filename of raw DB, number of events, begin/end run number, begin/end event number, begin/end time, file size of raw DB, compression factor, quality histogram (TH1F) • Compressed • Stays on disk • Used for matching run/event with raw DB
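Matching a run/event to its raw DB is then a scan over the catalog's per-file records; a hypothetical sketch, where the struct keeps only the range fields (the real AliStats also stores times, sizes, the compression factor and a TH1F quality histogram):

```cpp
#include <string>
#include <vector>

// Illustrative stand-in for one AliStats record (one per raw data DB).
struct CatalogEntry {
    std::string fileName;        // raw DB file on HPSS
    int beginRun, endRun;        // run-number range in this file
    int beginEvent, endEvent;    // event-number range in this file
};

// Find which raw DB file holds a given (run, event); empty if none.
std::string findRawDB(const std::vector<CatalogEntry>& catalog,
                      int run, int event) {
    for (const auto& e : catalog)
        if (run >= e.beginRun && run <= e.endRun &&
            event >= e.beginEvent && event <= e.endEvent)
            return e.fileName;
    return "";
}
```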

  11. This was one of the largest OODBs in HEP!

  12. [flow diagram] • Data sources: NA57 (9 PowerPC/AIX LDCs, 5 MB/s) and the ALICE DAQ lab (Intel/Linux LDCs, 10 MB/s), DATE = LDC + GDC, 20-node cluster • Switches feed an event filter and the event builder (GDC) on an Intel/Linux PC cluster • GB ethernet to the ROOT objectifier (5 Intel/Linux PCs), pipe to the catalog and tag DBs • Computer center: HPSS/CASTOR MSS
