
The Beijing Tier 2: status and plans





Presentation Transcript


  1. The Beijing Tier 2: status and plans Xiaomei Zhang CMS Tier1 visit to IN2P3 in Lyon November 30, 2007

  2. Outline
  • Introduction
  • Manpower
  • Site overview: hardware, software, network
  • Storage status
  • Data transfer status
  • Ongoing upgrade
  • Future plan
  IHEP Computing Center

  3. Introduction
  • T2_Beijing is the only CMS T2 site in mainland China
  • The site is set up and maintained by the IHEP Computing Center in Beijing
    • approved in 2006
    • no direct financial support from the government yet
    • trying to obtain funding soon…
  • T2_Beijing is shared (mainly) by CMS and ATLAS
    • common LCG infrastructure
    • no dedicated worker nodes
    • a dedicated Storage Element for each experiment
  IHEP Computing Center

  4. Manpower
  • 1 FTE for CMS T2_Beijing, 1 FTE for ATLAS T2_Beijing
    • Xiaomei Zhang (zhangxm@mail.ihep.ac.cn) is responsible for CMS
    • Erming Pei (pemxz@mail.ihep.ac.cn) is responsible for ATLAS
  • 1 FTE for technical support
    • Xiaofei Yan (yanxf@mail.ihep.ac.cn)
  IHEP Computing Center

  5. Site overview
  • Computing infrastructure
    • renovation of our computing hall has just been completed
    • the cooling and power systems are in good condition
  • Hardware
    • LCG service nodes: IBM x3650 (2 Xeon 5130 CPUs, 4 GB RAM)
    • 14 worker nodes (2 Xeon 3.2 GHz CPUs, 2 GB RAM each)
  • Software
    • middleware recently upgraded to gLite 3.0
    • worker node OS upgraded to SLC 4
    • CMSSW 1.6.4 has been installed and tested
  • External network to the T1s
    • 1 Gbps to the CNNIC center, which has 1 Gbps through CERNET to GEANT in Europe and 1 Gbps to the US Internet
    • ~80 streams → ~15 MB/s in current tests to IN2P3 and FNAL (rough numbers in the sketch below)
  IHEP Computing Center
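A minimal back-of-the-envelope sketch of what the quoted test figures imply. The 80 streams and the 15 MB/s aggregate rate come from the slide; the per-stream rate and daily volume are derived arithmetic, and the script is illustrative only, not part of the site's tooling.

```python
# Back-of-the-envelope check of the network-test figures quoted above.
# Stream count and aggregate rate are from the slide; the rest is derived.

STREAMS = 80            # parallel transfer streams used in the tests
AGGREGATE_MB_S = 15.0   # observed aggregate rate to IN2P3 / FNAL

per_stream_mb_s = AGGREGATE_MB_S / STREAMS
daily_volume_tb = AGGREGATE_MB_S * 86400 / 1e6   # MB/day -> TB/day (decimal)

print(f"per-stream rate: {per_stream_mb_s:.2f} MB/s")              # ~0.19 MB/s
print(f"daily volume at this rate: {daily_volume_tb:.2f} TB/day")  # ~1.3 TB/day
```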

  6. Storage Status
  • The resources are very limited now, but will be improved step by step soon
    • dCache system with 1.1 TB of storage
    • 1.1 TB is only our first step
    • 10 TB of disk has arrived
  • Head node and pool node run on a single server: IBM x346 (2 Xeon 3.2 GHz CPUs and 4 GB RAM)
    • one pool node, one pool with 1.1 TB
    • this causes much trouble in the current link debugging
    • jobs at our site are also affected: no space for data output and slow responses from the SE during link debugging (see the sketch below)
  IHEP Computing Center
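A hypothetical helper sketch, not part of the production setup: with a single 1.1 TB pool serving both link-test transfers and job output, free space has to be watched before injecting more test traffic. The mount point and the head-room value are assumptions for illustration.

```python
# Hypothetical free-space check on the single dCache pool (illustrative only).
import shutil

POOL_PATH = "/dcache/pool1"      # assumed mount point of the single 1.1 TB pool
RESERVE_FOR_JOBS_GB = 200        # assumed head-room kept for job output

usage = shutil.disk_usage(POOL_PATH)
free_gb = usage.free / 1e9

if free_gb < RESERVE_FOR_JOBS_GB:
    print(f"only {free_gb:.0f} GB free: hold new link-test transfer requests")
else:
    print(f"{free_gb:.0f} GB free: safe to inject more test traffic")
```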

  7. Data Transfers
  • We aim to commission both up and down links with two T1s
    • IN2P3 and FNAL (FNAL is required by our local CMS group)
  • Status: the two links are not yet good enough, but should become promising once the resources increase soon
    • IN2P3: a good rate of about 15 MB/s on some days, although both the download and upload links are still being commissioned
    • FNAL: download link commissioned, upload link commissioning is ongoing
  • Main reasons
    • only one active link at a time is possible with our limited SE
    • it is hard to keep switching links and to commission four links, or even two (a scheduling sketch follows below)
    • sometimes the T1s themselves are unstable and have problems, making things even worse
  IHEP Computing Center
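A minimal sketch of the constraint described above, as an illustration only and not the site's actual procedure: with the current SE able to exercise only one link at a time, commissioning effectively becomes a round-robin over the four links, so each link gets only a fraction of the debugging time.

```python
# Illustrative round-robin over the four links (download/upload per T1).
from itertools import cycle

links = [
    "IN2P3 -> Beijing (download)",
    "Beijing -> IN2P3 (upload)",
    "FNAL -> Beijing (download)",
    "Beijing -> FNAL (upload)",
]

slots = cycle(links)
for day in range(1, 8):                          # one week of single-link slots
    print(f"day {day}: exercise {next(slots)}")
# Each link ends up with fewer than two debugging days per week.
```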

  8. PhEDEx
  • Despite this bad situation, we have still transferred 30 TB from FNAL and 10 TB from IN2P3
  • We have also made a start this month on the upload links to FNAL and IN2P3 (rough arithmetic below)
  IHEP Computing Center
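Rough arithmetic for the volumes quoted above, assuming the ~15 MB/s aggregate rate from the earlier network tests; the actual transfers were of course spread over a longer period with a varying rate.

```python
# How long 40 TB takes at a sustained 15 MB/s (decimal units throughout).
VOLUME_TB = 30 + 10        # downloaded from FNAL and IN2P3 respectively
RATE_MB_S = 15.0           # aggregate rate seen in the network tests

seconds = VOLUME_TB * 1e6 / RATE_MB_S
days = seconds / 86400
print(f"{VOLUME_TB} TB at {RATE_MB_S} MB/s ~ {days:.0f} days of sustained transfer")
# ~31 days: roughly a month of continuous transfer at the quoted rate.
```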

  9. Ongoing Upgrade
  • 12 new WNs (2 quad-core Xeon 5345 CPUs, 16 GB RAM each) will be added next month (a quick capacity tally follows below)
  • 5 machines will be used to set up the new SE head nodes and pool nodes
    • 5 pools and 10 TB of disk will be added soon for each experiment (10 TB for CMS, 10 TB for ATLAS)
    • one pool node connected via 4 Gb/s Fibre Channel to one disk array box (RAID 5+1, XFS file system)
  IHEP Computing Center
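A quick tally of the capacity being added, as an illustration only; the core count assumes 2 quad-core Xeon 5345 CPUs per worker node, as stated on the slide.

```python
# Tally of the capacity being added (illustrative arithmetic only).
NEW_WNS = 12
CORES_PER_WN = 2 * 4            # 2 Xeon 5345 CPUs x 4 cores each
NEW_POOLS = 5
DISK_PER_EXPERIMENT_TB = 10

print(f"added cores (one job slot per core): {NEW_WNS * CORES_PER_WN}")   # 96
print(f"CMS SE after upgrade: {NEW_POOLS} pools, {DISK_PER_EXPERIMENT_TB} TB "
      f"(up from a single 1.1 TB pool)")
```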

  10. Future Plan
  • Try to maintain stable links with IN2P3 and FNAL
  • Meet the data demands of the local CMS physics group in the production instance and provide a good service for physics analysis
  • Try to support MC production after the resource situation improves
  • “Everything is getting better and better”, as our computing center manager said
  IHEP Computing Center
