
LCG Deployment in Japan



Presentation Transcript


  1. LCG Deployment in Japan Hiroshi Sakamoto ICEPP, Univ. of Tokyo

  2. Contents • Present status of LCG deployment • LCG Tier2 • Certification authority • Implementation • Recent topics • KEK-ICEPP joint R&D program • Network • Upgrade of resources • Future plan

  3. LCG in Japan • Tier2 center at ICEPP, U. Tokyo • Decision made in October 2004 • Manpower considerations • A few dedicated people • Including engineers and outsourcing • Contribution to LHC/ATLAS • Size of the community ~ 4% of ATLAS • Want to contribute more

  4. Japanese CA for HENP • KEK-CA is ready for operation • Japanese HENP community ~ KEK users • LHC ATLAS • KEK-B Belle • J-PARC (50 GeV PS at Tokai) • RHIC PHENIX (RIKEN) • CP/CPS prepared • Discussion between KEK and ICEPP • To be submitted to the EU Grid PMA • Or to the AP Grid PMA?
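
Once KEK-CA issues user certificates, every grid service decides trust by looking at the certificate's issuer and validating the chain against the installed CA certificates. The sketch below, in Python with the `cryptography` package, only inspects the issuer and subject DNs of a PEM certificate; the file path and the "O=KEK" check are illustrative assumptions, and the registered KEK-CA DN is whatever the CP/CPS and the distributed CA certificate define.

```python
# Sketch: inspect a grid user certificate's subject and issuer.
# Assumptions: the PEM path and the expected "O=KEK" string are illustrative;
# a real relying party also verifies the signature chain, validity and CRLs.
from cryptography import x509

EXPECTED_ISSUER_ORG = "KEK"  # hypothetical check on the O= component

with open("usercert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("subject:", cert.subject.rfc4514_string())
print("issuer :", cert.issuer.rfc4514_string())

if f"O={EXPECTED_ISSUER_ORG}" in cert.issuer.rfc4514_string():
    print("issuer organisation matches KEK (name check only)")
```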

  5. TOKYO-LCG2 cluster • LCG-2 cluster@u-tokyo • 52 Worker Nodes • Upgraded from LCG-2_1_1 to LCG-2_3_1 last week • With YAIM • Red Hat 7.3 (to be replaced by Scientific Linux)
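
YAIM drives the whole upgrade from a single site-wide key=value file (site-info.def) shared by all node types. A minimal sketch of generating such a file for this site, using the hostnames from the cluster-layout slide that follows; the variable names (SITE_NAME, CE_HOST, ...) follow LCG-2-era YAIM conventions from memory and should be treated as illustrative, not a verified template.

```python
# Sketch: emit a fragment of a YAIM-style site-info.def for the TOKYO-LCG2 hosts.
# Assumptions: variable names are recalled LCG-2-era YAIM conventions and the
# WN_LIST path is illustrative; check both against the YAIM documentation.
site = {
    "SITE_NAME": "TOKYO-LCG2",
    "CE_HOST": "dgce0.icepp.jp",
    "SE_HOST": "dgse0.icepp.jp",
    "RB_HOST": "dgrb0.icepp.jp",
    "BDII_HOST": "dgbdii0.icepp.jp",
    "PX_HOST": "dgpxy0.icepp.jp",   # proxy server
    "WN_LIST": "./wn-list.conf",    # file listing the worker-node hostnames
}

# Blade naming hpbwn7-1 ... hpbwn13-8 spans 7 enclosures x 8 blades = 56 slots;
# slide 5 says 52 of them are LCG worker nodes, the exact subset is not given.
wns = [f"hpbwn{enc}-{blade}" for enc in range(7, 14) for blade in range(1, 9)]

with open("site-info.def", "w") as f:
    for key, value in site.items():
        f.write(f'{key}="{value}"\n')

with open("wn-list.conf", "w") as f:
    f.write("\n".join(wns) + "\n")

# YAIM's install/configure scripts would then be run against site-info.def
# on each node type (CE, SE, RB, BDII, WN, ...).
```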

  6. PC Farm • HP ProLiant BL20p • Xeon 2.8GHz, 2 CPUs/node • 1GB memory • SCSI 36GB x2, hardware RAID1 • 3 GbE NICs • iLO remote administration tool • 8 blades/enclosure (6U) • Total 108 blades (216 CPUs) in 3 racks

  7. TOKYO-LCG2 cluster layout (diagram) • Campus network 133.11.24.0/23 and private network 172.17.0.0/24, bridged by a gateway (dggw0.icepp.jp) • Grid service nodes: CE (dgce0.icepp.jp), SE (dgse0.icepp.jp), RB (dgrb0.icepp.jp), BDII (dgbdii0.icepp.jp), Proxy (dgpxy0.icepp.jp), UI (dgui0.icepp.jp) • 52 Worker Nodes (hpbwn7-1 … hpbwn13-8): HP Blade BL20p G2, dual Xeon 2.8GHz, 1GB memory (planned upgrade to 2GB), GbE NIC • NFS server (dgnas0.icepp.jp): DELL 1750, dual Xeon 2.8GHz, 2GB memory, IDE-FC RAID with Infortrend controllers, 250GB HDD x16 x10, attached via FC switch • /home and /storage volumes of 1.75TB each; 1.75TB x 20 = 35TB
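
The 35TB figure follows from the disk inventory above. A quick check of the arithmetic, under the assumption (not stated on the slide) that each 16-disk box is carved into two 8-disk RAID-5 volumes, which is what turns 250GB disks into 1.75TB volumes:

```python
# Sketch: reproduce the storage arithmetic from the cluster-layout slide.
# Assumption: each 16-disk Infortrend box holds two 8-disk RAID-5 volumes
# (7 data disks + 1 parity); the slide does not state the RAID layout.
DISK_GB = 250
DISKS_PER_BOX = 16
BOXES = 10

raw_tb = DISK_GB * DISKS_PER_BOX * BOXES / 1000     # 40.0 TB raw
volumes = BOXES * 2                                 # 20 volumes
usable_per_volume_tb = DISK_GB * (8 - 1) / 1000     # 1.75 TB per RAID-5 set
usable_tb = volumes * usable_per_volume_tb          # 35.0 TB, as on the slide

print(f"raw: {raw_tb} TB, usable: {usable_tb} TB over {volumes} volumes")
```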

  8. KEK-ICEPP joint R&D • Testbed cluster@u-tokyo • 1 Worker Node • LCG-2_4_0 with VOMS • Simple CA for testbed user • Scientific Linux with autorpm • Testbed cluster@KEK • Computing Research Center

  9. KEK LCG2 (remaining) cluster diagram • Service nodes: UI, Proxy, LCFGng, BDII-LCG2, RB, two CEs (SiteGIIS), Classic SE • Worker nodes managed by LSF and by PBS • IBM eServer 326, Opteron 2.4GHz, 4096MB memory, 20 nodes: AMD Opteron-based Linux system under integration as WNs • IBM eServer xSeries, Pentium III 1.3GHz, 256MB RAM: test WNs

  10. R&D Menu • Stand-alone grid connecting the two clusters • 1Gbps dedicated connection between KEK and ICEPP (SuperSINET) • Exercises to understand the LCG middleware • Special interests • SRB • Grid Datafarm (Osamu Tatebe, AIST)

  11. Network • Peer-to-peer 1Gbps between CERN and ICEPP • Sustained data transfer study • 10Gbps to the US and EU • Connections among Asia/Pacific countries are still thin, but improving • JP-TW to 1Gbps very soon • JP-HK, JP-CN
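
Sustained transfer on the 1Gbps CERN-ICEPP path is dominated by the bandwidth-delay product of a long fat network. The sketch below assumes a round-trip time of roughly 280 ms for Tokyo to Geneva (an assumed figure, not from the slide) and shows the TCP window a single stream would need to fill the link, plus the transfer time for an example 1 TB dataset.

```python
# Sketch: bandwidth-delay product for the CERN <-> ICEPP 1 Gbps path.
# Assumptions: ~280 ms RTT and a 1 TB example dataset; the slide gives only
# the link capacity.
LINK_BPS = 1e9          # 1 Gbps dedicated path
RTT_S = 0.280           # assumed round-trip time Tokyo <-> Geneva
DATASET_BYTES = 1e12    # example: 1 TB of ATLAS data

bdp_bytes = LINK_BPS * RTT_S / 8
print(f"bandwidth-delay product: {bdp_bytes / 1e6:.0f} MB "
      "(TCP window needed by a single stream to fill the link)")

# Transfer time at a few sustained efficiencies
for efficiency in (1.0, 0.8, 0.5):
    seconds = DATASET_BYTES * 8 / (LINK_BPS * efficiency)
    print(f"1 TB at {efficiency:.0%} of 1 Gbps: {seconds / 3600:.1f} h")
```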

  12. PC Farm Upgrade • IBM BladeCenter HS20 • Xeon 3.6GHz, 2 CPUs/node • EM64T, 2GB memory • SCSI 36GB x2, hardware RAID1 • 2 GbE NICs • Integrated System Management Processor • 14 blades/enclosure (7U) • Total 150 blades (300 CPUs) in 2 racks + 1 rack for console & network switches

  13. FOUNDRY BigIron MG8 • Two 4x 10GbE modules • Four 60x GbE modules • Disk array: 16x 250GB SATA HDDs, 2 FibreChannel interfaces • 27 cabinets in total

  14. Future plan • LCG Memorandum of Understanding • To be signed in JFY2005 • University of Tokyo as the funding body • LCG Tier2 Resources • More resources added to our testbed in JFY2005 (approved) • LCG SC4 + ATLAS DC3 in 2006 • Production system • Budget request submitted for JFY2006 • Expected to become operational in Jan. 2007
