
ATLAS computing in Geneva



Presentation Transcript


  1. ATLAS computing in Geneva
  • the Geneva ATLAS Tier-3 cluster
  • other sites in Switzerland
  • issues with the data movement
  Szymon Gadomski, NDGF meeting, September 2009
  S. Gadomski, ”ATLAS computing in Geneva", NDGF, Sept 09

  2. ATLAS computing in Geneva
  • 268 CPU cores
  • 180 TB for data, 70 TB of it in a Storage Element
  • special features:
  • direct line to CERN at 10 Gb/s
  • latest software via CERN AFS
  • SE in Tiers of ATLAS since Summer 2009
  • FTS channels from CERN and from the NDGF Tier 1
  • the analysis facility for the Geneva group
  • trigger development, validation, commissioning
  • grid batch production for ATLAS
  S. Gadomski, ”ATLAS computing in Geneva", NDGF, Sept 09

  3. Networks and systems S. Gadomski, ”ATLAS computing in Geneva", NDGF, Sept 09

  4. How it is used
  • NorduGrid production since 2005
  • login and local batch
  • trigger development and validation
  • analysis preparations
  • 75 accounts, 55 active users, not only Uni GE
  S. Gadomski, ”ATLAS computing in Geneva", NDGF, Sept 09

  5. Added value by resource sharing
  • local jobs come in peaks
  • the grid always has jobs
  • little idle time, a lot of Monte Carlo done
  S. Gadomski, ”ATLAS computing in Geneva", NDGF, Sept 09

  6. Swiss ATLAS Grid
  • Karlsruhe Tier-1
  • CERN Tier-0 and CAF
  • Uni of Geneva Tier-3
  • Uni of Bern Tier-3
  • CSCS Tier-2 (shared)
  S. Gadomski, ”Swiss ATLAS Grid", SwiNG, June 2009

  7. CSCS
  • 960 CPU cores, 520 TB (for three LHC experiments)
  • grid site since 2006
  • LCG gLite and NorduGrid
  • dCache Storage Element
  • mostly “production” for the three experiments
  • change of personnel in the recent past
  • large hardware upgrades in 2008 and 2009
  • use of Lustre in the near future (worker node disk cache)
  S. Gadomski, ”Swiss ATLAS Grid", SwiNG, June 2009

  8. Bern
  • 30 CPU cores, 30 TB in a local cluster
  • 250 CPU cores in a shared University cluster
  • grid site since 2005
  • NorduGrid
  • gsiftp storage element
  • mostly ATLAS production
  • interactive and local batch use
  • data analysis preparation
  S. Gadomski, ”Swiss ATLAS Grid", SwiNG, June 2009

  9. Swiss contribution to ATLAS computing
  • ~1.4% of ATLAS computing in 2008
  S. Gadomski, ”ATLAS computing in Geneva", NDGF, Sept 09

  10. Issue 1 - data movement for grid jobs
  • local jobs can read the SE directly
  S. Gadomski, ”ATLAS computing in Geneva", NDGF, Sept 09

  11. Issue 1 - data movement for grid jobs
  • grid jobs cannot read the SE directly
  • no middleware on the worker nodes; this is a good thing, but it hits us a little
  • any plans about that?
  S. Gadomski, ”ATLAS computing in Geneva", NDGF, Sept 09
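As an aside, a minimal sketch of the distinction in Python, assuming the SE is NFS-mounted on the login and local batch nodes but not on the grid worker nodes; the mount point, scratch location and helper name are hypothetical:

```python
import os

# Hypothetical paths: the SE is NFS-mounted on local batch nodes only.
SE_MOUNT = "/atlas/se"                       # visible to local jobs
SCRATCH = os.environ.get("TMPDIR", "/tmp")   # job scratch on a worker node

def open_input(dataset_file):
    """Local jobs read the SE directly; grid jobs must use a pre-staged copy."""
    se_path = os.path.join(SE_MOUNT, dataset_file)
    if os.path.exists(se_path):
        # local batch job: direct POSIX read from the Storage Element
        return open(se_path, "rb")
    # grid job: the file must have been staged into scratch beforehand,
    # since there is no grid middleware on the worker node itself
    staged = os.path.join(SCRATCH, os.path.basename(dataset_file))
    return open(staged, "rb")
```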

  12. Issue 2 - data rates
  • internal to the cluster, the data rates are OK
  • transfers to Geneva need improvement
  S. Gadomski, ”ATLAS computing in Geneva", NDGF, Sept 09

  13. Test of larger TCP buffers
  • transfer from fts001.nsc.liu.se
  • network latency 36 ms (CERN at 1.3 ms)
  • TCP buffer sizes increased on Fri Sept 11th, from the Solaris default of 48 kB to 192 kB and then 1 MB
  • data rate per server reached ~25 MB/s
  • Why? Can we keep the FTS transfer at 25 MB/s per server?
  [plot: data rate per server vs. time, annotated with the 192 kB and 1 MB buffer changes]
  S. Gadomski, ”ATLAS computing in Geneva", NDGF, Sept 09
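One plausible explanation for these numbers is the TCP bandwidth-delay product: a single stream cannot send more than one window per round trip, so its rate is capped at roughly buffer size divided by RTT. A short back-of-the-envelope check in Python, assuming one TCP stream per server and the buffer sizes quoted on the slide:

```python
def max_rate_mb_per_s(buffer_bytes, rtt_seconds):
    """Single-stream TCP throughput ceiling: window size / round-trip time."""
    return buffer_bytes / rtt_seconds / 1e6

rtt_ndgf = 0.036   # 36 ms to fts001.nsc.liu.se
rtt_cern = 0.0013  # 1.3 ms to CERN

for label, buf in [("48 kB (Solaris default)", 48 * 1024),
                   ("192 kB", 192 * 1024),
                   ("1 MB", 1024 * 1024)]:
    print(f"{label:24s} NDGF: {max_rate_mb_per_s(buf, rtt_ndgf):5.1f} MB/s   "
          f"CERN: {max_rate_mb_per_s(buf, rtt_cern):6.1f} MB/s")

# With the 48 kB default the 36 ms path is limited to ~1.4 MB/s per stream,
# while the 1.3 ms CERN path is not buffer-limited.  A 1 MB window raises the
# ceiling to ~29 MB/s, consistent with the ~25 MB/s per server observed after
# the buffer increase.
```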

  14. Summary and outlook
  • A large ATLAS T3 in Geneva
  • Special site for Trigger development
  • In NorduGrid since 2005
  • Storage Element in the NDGF since July 2009
  • FTS from CERN and from the NDGF-T1
  • exercising data transfers, need to improve performance
  • Short-term to do list:
  • Add two more file servers to the SE
  • Move to SLC5
  • Write a note, including performance results
  • Keep working on data transfer rates
  • Towards a steady-state operation!
  S. Gadomski, ”ATLAS computing in Geneva", NDGF, Sept 09

  15. backup slides S. Gadomski, ”ATLAS computing in Geneva", NDGF, Sept 09

  16. SMSCG • Swiss Multi-Science Computing Grid is using ARC S. Gadomski, ”Swiss ATLAS Grid", SwiNG, June 2009

  17. Performance of dq2-get
  • rates calculated using timestamps of files
  • average data rate 6.6 MB/s
  • large spread
  • maximum close to the hardware limit of 70 MB/s (NFS write to a single server)
  • average time to transfer 100 GB is 7 hours
  S. Gadomski, ”Tests of data movement…", June 2009
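A minimal sketch of how such a rate estimate can be made from timestamps, assuming the rate is taken as total bytes downloaded divided by the spread of file modification times in the dq2-get output directory (the exact procedure is not given on the slide; the function and directory names are hypothetical):

```python
import os

def dq2_get_rate(download_dir):
    """Estimate an average transfer rate (MB/s) from file sizes and mtimes."""
    paths = [os.path.join(download_dir, f) for f in os.listdir(download_dir)]
    stats = [os.stat(p) for p in paths if os.path.isfile(p)]
    if len(stats) < 2:
        return None
    total_bytes = sum(s.st_size for s in stats)
    elapsed = max(s.st_mtime for s in stats) - min(s.st_mtime for s in stats)
    return total_bytes / elapsed / 1e6 if elapsed > 0 else None

# e.g. dq2_get_rate("downloaded_dataset_dir")  # hypothetical directory name
```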

  18. IdQuantique encryption to start next Wednesday S. Gadomski, ”Tests of data movement…", June 2009
