
Analysis in STEP09 at TOKYO


Presentation Transcript


  1. Analysis in STEP09 at TOKYO Hiroyuki Matsunaga, University of Tokyo. WLCG STEP'09 Post-Mortem Workshop

  2. Site Configuration during STEP09 • Tier-2 dedicated to ATLAS • SE: DPM 1.7.0, 13 disk servers + 1 head node • WN: 4 cores/node, 120 nodes (480 cores)

  3. Job efficiencies • Job submission rates were not stable, which affected the fairshare history • Efficiency (CPU/Wall time) ~0.5 for both WMS (direct RFIO) and Panda (staging) jobs (Plot: waiting vs. running jobs over time)
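As a rough illustration of the efficiency metric quoted above (CPU time divided by wall-clock time), the sketch below uses invented job records, not STEP09 accounting data:

```python
# Hypothetical sketch of the CPU/Wall efficiency metric.
# The job records are illustrative, not real STEP09 accounting data.
def cpu_efficiency(cpu_seconds, wall_seconds):
    """Return CPU/Wall efficiency; 1.0 means fully CPU-bound."""
    return cpu_seconds / wall_seconds if wall_seconds > 0 else 0.0

jobs = [
    {"cpu": 1800, "wall": 3600},  # I/O-bound analysis job: 0.50
    {"cpu": 2100, "wall": 4000},  # slightly better: 0.525
]
mean_eff = sum(cpu_efficiency(j["cpu"], j["wall"]) for j in jobs) / len(jobs)
print(f"mean efficiency = {mean_eff:.2f}")
```

An efficiency near 0.5, as seen here, typically indicates that jobs spend about half their wall-clock time waiting on I/O rather than computing.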

  4. “Throughputs” • No clear saturation seen at the current scale • Results more scattered for Panda jobs • Efficiency was time-dependent

  5. SE-WN network bandwidth • Overall network traffic reached 6 GB/s • Good correlation between network traffic and the number of WMS jobs

  6. Storage Performance • SE inbound network traffic (T1->T2) peaked around 500 MB/s • Data transfer was not affected by user analysis • Data delivery from the Lyon Tier-1 went well (Plots: DPM head node and disk server CPU usage)

  7. Conclusions • No admin intervention was needed during STEP09 • Throughput scales up to ~300 jobs • Storage worked: 450 MB/s reading from a disk server; no network saturation • Some concerns: WN local disk (size and speed); 8 cores (or more) per node in the near future; interconnection between core switches

  8. Backup slides

  9. CPU utilization of WNs

  10. Network traffic vs. all analysis jobs

  11. Network traffic for a Panda/WMS job
