HEP/NP Computing Facility at Brookhaven National Laboratory (BNL) Bruce G. Gibbard

Presentation Transcript


  1. HEP/NP Computing Facility at Brookhaven National Laboratory (BNL)
     Bruce G. Gibbard, 17 May 2004

  2. Primary Facility Mission
     • US Tier 1 Center for ATLAS
       • Basic Tier 1 functions
         • US repository for ATLAS data
         • Generation of distilled data subsets
         • Delivery of data subsets and analysis capability
       • Tier 1 hub for US ATLAS Grid computing
     • Host site computing facility for the Relativistic Heavy Ion Collider (RHIC)
       • Tier 0 functions for 4 RHIC detectors (~1000 physicists)
         • Online recording of raw data; repository for all data
         • Reconstruction and distribution of resultant data to major collaborating facilities … LBNL, RIKEN, etc.
       • Also Tier 1 functions as described for ATLAS above

  3. Current Facility Scale
     • Unified operation for RHIC & ATLAS, staffed by 30 FTEs
     • HSM based on HPSS & StorageTek
       • Capacity: 4.5 PBytes at 1000 MBytes/sec
     • Processor farms based on dual-processor, rack-mounted Intel/Linux nodes
       • Capacity: 2600 CPUs for 1.4 MSI2000
     • Central disk based on Sun/Solaris and Fibre Channel SAN-connected RAID 5
       • Capacity: ~200 TBytes served via NFS (& AFS)
     • OC12 connection to the US ESnet backbone
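
As a quick sanity check, the aggregate figures above imply per-unit numbers worth noting. The sketch below is a back-of-the-envelope calculation only: the constants are copied from the slide, and the derived quantities (per-CPU SI2000 rating, time to stream the full tape store, disk-to-tape ratio) are illustrative rather than quoted facility metrics.

    # Back-of-the-envelope figures implied by the capacities quoted on the slide.
    # Constants come from the slide text; the derived values are illustrative only.

    TAPE_CAPACITY_PB = 4.5       # HSM (HPSS + StorageTek) capacity, PBytes
    TAPE_BANDWIDTH_MB_S = 1000   # aggregate HSM throughput, MBytes/sec
    FARM_CPUS = 2600             # CPUs in the dual-processor Intel/Linux farm
    FARM_CAPACITY_MSI2K = 1.4    # total farm capacity, MSI2000
    DISK_CAPACITY_TB = 200       # central SAN/RAID 5 disk served via NFS (& AFS)

    # Average SPECint2000 rating per CPU implied by the aggregate farm capacity
    si2k_per_cpu = FARM_CAPACITY_MSI2K * 1e6 / FARM_CPUS

    # Time to stream the entire tape store once at the quoted aggregate bandwidth
    stream_days = (TAPE_CAPACITY_PB * 1e9 / TAPE_BANDWIDTH_MB_S) / 86400  # PB -> MBytes

    # Fraction of the tape store that central disk can hold at any one time
    disk_to_tape = DISK_CAPACITY_TB / (TAPE_CAPACITY_PB * 1e3)

    print(f"~{si2k_per_cpu:.0f} SI2000 per CPU")
    print(f"~{stream_days:.0f} days to stream the full tape store")
    print(f"central disk holds ~{disk_to_tape:.1%} of the tape store")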

  4. Involvement in Multiple Grids
     • US ATLAS Grid Testbed
       • Tier 1 & ~11 US ATLAS Tier 2 & 3 sites
       • Evolving versions of Grid middleware over ~3 years
       • Used in production for ATLAS Data Challenge 1 (DC1)
     • Grid3+ (follow-on to Grid3)
       • ~24, mostly US, sites running ATLAS, CMS, SDSS, etc.
       • Strongly coupled to tools and services developed by US Grid projects
       • Currently in production for ATLAS DC2
     • LCG-2
       • Currently completing transition from LCG-1
     • Focus of interest and effort is …
       • Understanding and fostering Grid3+ and LCG-2 commonality … while addressing near-term issues of interoperability
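
For context, sites on these Grid3/LCG-era testbeds were typically reached through a Globus GT2 gatekeeper, either directly or via Condor-G. The sketch below shows a minimal client-side submission using globus-job-run from Python; it assumes a Globus client installation and a valid grid proxy (grid-proxy-init), and the gatekeeper contact string is a hypothetical placeholder, not an actual BNL endpoint.

    # Minimal sketch of a GT2-era grid job submission, as used on Grid3/LCG-2
    # style testbeds. The gatekeeper contact below is a hypothetical placeholder,
    # not a real BNL endpoint; a valid grid proxy is assumed to exist already.
    import subprocess

    GATEKEEPER = "gridgk.example.bnl.gov/jobmanager-condor"  # hypothetical contact

    def run_grid_job(executable, *args):
        """Run a simple job on the remote gatekeeper via globus-job-run."""
        cmd = ["globus-job-run", GATEKEEPER, executable, *args]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        # Trivial smoke test: print the hostname of the remote worker node
        print(run_grid_job("/bin/hostname"))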

  5. Guiding Principles
     • Use commodity hardware and open-source software when possible, while utilizing high-performance commercial technology where necessary
     • Avoid major development projects in favor of existing commercial or community-supported software and systems whenever possible
     • Maximize flexibility in and modularity of facility components while concealing as much of the complexity as possible from users
     • Present resources & services (especially on Grids) in as standardized and effective a way as possible, consistent with the constraints of primary user VOs

  6. Goal for IHEPCCC
     • Foster interactions and exchanges leading to ...
       • Improved HEP computing effectiveness based on identifying, adapting, and developing very good shared solutions to the common problems we encounter …
         • within our facility fabrics
         • within the Grids in which we participate
         • within the virtual organizations we support
       • Standardized interfaces to users and between components where realities dictate distinct solutions
