
Tier 2 Regional Centers
J. Shank, US ATLAS Meeting, BNL


Presentation Transcript


  1. Tier 2 Regional Centers
  • Goals
    • Short-term:
      • Code development centers
      • Simulation centers
      • Data repository
    • Medium-term:
      • Mock Data Challenge (MDC)
    • Long-term:
      • Data analysis and calibration
      • Education
      • Contact point between ATLAS, students, and post-docs

  2. Tier 2 Definition
  • What is a Tier 2 Center?
    • assert( sizeof(Tier 2) < 0.25 * sizeof(Tier 1) ); (see the sketch after this slide)
  • What is the economy of scale?
    • Too few FTEs: better off consolidating at the Tier 1.
    • Too many: the assert above fails and administrative overhead grows.
  • Detector sub-system specific?
    • e.g., a detector calibration center
  • Task specific?
    • e.g., a DB development center
    • A find-the-Higgs center
  • Purely regional?
    • Support all computing activities in the region.
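The assert above is shorthand for a sizing rule rather than real C. A minimal runnable sketch of the same check, using the 10^4 SPECint95 Tier 2 figure from slide 7 and a purely assumed Tier 1 capacity:

```c
/* Sketch of the "Tier 2 < 0.25 x Tier 1" sizing rule from the slide.
 * The Tier 1 capacity is an assumption for illustration only; the Tier 2
 * capacity is the 10^4 SPECint95 working definition from slide 7. */
#include <assert.h>
#include <stdio.h>

int main(void)
{
    double tier1_specint95 = 1.0e5;   /* assumed Tier 1 CPU capacity */
    double tier2_specint95 = 1.0e4;   /* Tier 2 working definition   */

    /* A Tier 2 should stay below a quarter of a Tier 1. */
    assert(tier2_specint95 < 0.25 * tier1_specint95);

    printf("Tier 2 is %.0f%% the size of Tier 1\n",
           100.0 * tier2_specint95 / tier1_specint95);
    return 0;
}
```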

  3. Example 1: Boston Tier 2 Center
  • Focus on the muon detector subsystem
    • Calibrate the muon system
  • How much data?
    • Special calibration runs are ~10% of real data; the muon part is ~10% of each event
    • Overall ~1% of the data, or ~10 TB/yr
  • How much CPU?
    • 100 sec/event => ~30 CPUs (see the worked estimate below)
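A back-of-envelope check of the 10 TB/yr and ~30 CPU figures. The event count and raw event size are assumptions (roughly 10^9 events/yr at ~1 MB each), and the ~1% sample is treated as an effective event fraction for the CPU estimate:

```c
/* Rough check of the muon-calibration estimates (~10 TB/yr, ~30 CPUs).
 * The 1e9 events/yr and 1 MB/event inputs are assumptions, not from the slide. */
#include <stdio.h>

int main(void)
{
    double events_per_year  = 1.0e9;        /* assumed raw events per year      */
    double event_size_mb    = 1.0;          /* assumed raw event size (MB)      */
    double fraction         = 0.10 * 0.10;  /* 10% cal. runs x 10% muon content */
    double sec_per_event    = 100.0;        /* from the slide                   */
    double seconds_per_year = 3.0e7;        /* ~1 year of running               */

    double data_tb = events_per_year * fraction * event_size_mb / 1.0e6;
    double cpus    = events_per_year * fraction * sec_per_event / seconds_per_year;

    printf("data: ~%.0f TB/yr, CPUs: ~%.0f\n", data_tb, cpus);  /* ~10 TB, ~33 CPUs */
    return 0;
}
```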

  4. Example 2: Physics Analysis Center
  • Find the Higgs
    • Get 10% of the data to refine algorithms.
  • How much data?
    • 10 TB/yr (reconstructed data from CERN).
  • CPU:
    • 10^3 sec/event/CPU => 300 CPUs.
    • We had better do better than 10^3 sec/event/CPU! (see the estimate below)
    • Distribute full production analysis to the Tier 1 and other Tier 2 centers.
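For scale, a quick estimate of how many events 300 CPUs can work through in a year at 10^3 sec/event, which is why the slide insists on doing better; the ~3x10^7 usable seconds per year is an assumption:

```c
/* Events per year that 300 CPUs can process at 10^3 s/event.
 * The 3e7 usable seconds per year is an assumption. */
#include <stdio.h>

int main(void)
{
    double cpus             = 300.0;
    double sec_per_event    = 1.0e3;   /* from the slide */
    double seconds_per_year = 3.0e7;   /* assumed        */

    printf("~%.1e events/yr\n", cpus * seconds_per_year / sec_per_event);  /* ~1e7 */
    return 0;
}
```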

  5. Example 3: Missing Et
  • From the last US ATLAS computing videoconference (see J. Huth's slides on the US ATLAS web page):
    • 40 M events (2% of triggers)
    • 40 TB of data
    • Would use ~10% of a 26,000 SPECint95 Tier 2 center (see the check below)
  • Conclusions:
    • Needs lots of data storage
    • CPU requirements are modest
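A quick consistency check on the missing-Et numbers; the ~1 MB/event result is derived here, not stated on the slide:

```c
/* Implied event size of the missing-Et sample: 40 TB over 40 M events. */
#include <stdio.h>

int main(void)
{
    double events  = 40.0e6;   /* 40 M events (2% of triggers) */
    double data_tb = 40.0;     /* 40 TB of data                */

    printf("~%.1f MB/event\n", data_tb * 1.0e6 / events);   /* ~1.0 MB/event */
    return 0;
}
```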

  6. Network Connectivity
  • [Map slide: vBNS/MREN connectivity among participating and planned institutions (FNAL, ANL, Boston U, MIT, Harvard, UMass, SUNY Buffalo, and many others) and peer networks (CA*Net II, MirNET, APAN, TANet, SREN, DREN), with aggregation points, the STARTAP/NGIX-C exchange, and DS3/OC3/OC12/OC48 links; labeled campus bandwidths include 6 Mbps, 70 Mbps (Harvard), 13.8 Mbps (UMass), and 15 Mbps (SUNY Buffalo). A transfer-time sketch follows below.]
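To relate the map to the earlier data volumes: a rough sketch of how long the ~10 TB/yr samples of slides 3 and 4 would take to move over one of the slower campus links versus an OC12. It assumes the full line rate is usable end to end, which it never is in practice:

```c
/* Transfer time for a 10 TB sample over a 6 Mbps link vs. an OC12 (622 Mbps).
 * Assumes the full nominal bandwidth is available end to end. */
#include <stdio.h>

int main(void)
{
    double data_bits = 10.0e12 * 8.0;   /* 10 TB in bits      */
    double slow_bps  = 6.0e6;           /* 6 Mbps campus link */
    double oc12_bps  = 622.0e6;         /* OC12 line rate     */
    double day_s     = 86400.0;

    printf("6 Mbps: ~%.0f days, OC12: ~%.1f days\n",
           data_bits / slow_bps / day_s,
           data_bits / oc12_bps / day_s);   /* ~154 days vs. ~1.5 days */
    return 0;
}
```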

  7. Working Definition of a Tier 2 Center
  • Hardware:
    • CPU: 50 boxes
      • Each 200 SPECint95 => 10^4 SPECint95 total (see the sketch below)
    • Storage: 15 TB
      • Low-maintenance robot system
  • People:
    • Post-docs: 2
    • Computer professionals:
      • Designers: 1
      • Facilities managers: 2
        • Need a sysadmin type plus a lower-level scripting-support type
        • Could be shared
  • Infrastructure:
    • Network connectivity must be state of the art (OC12 => OC192?)
    • Cost sharing, integration with an existing facility.
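Putting the CPU numbers together and relating them back to the missing-Et example of slide 5; the comparison is derived here, not stated on the slide:

```c
/* Aggregate CPU of the working-definition Tier 2, and the share the
 * missing-Et analysis of slide 5 would consume (derived comparison). */
#include <stdio.h>

int main(void)
{
    double boxes           = 50.0;
    double specint_per_box = 200.0;
    double total_specint   = boxes * specint_per_box;   /* 10^4 SPECint95 */

    /* Slide 5: 10% of a 26,000 SPECint95 center => 2,600 SPECint95. */
    double missing_et_need = 0.10 * 26000.0;

    printf("total: %.0f SPECint95, missing-Et share: ~%.0f%%\n",
           total_specint, 100.0 * missing_et_need / total_specint);   /* 10000, ~26% */
    return 0;
}
```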

  8. Mass Store Throughput
  • Do we need HPSS?
    • 1 GB/s throughput
    • High maintenance cost (at least for now)
  • DVD jukeboxes
    • 600 DVDs, 3 TB of storage
    • 10-40 MB/s throughput
    • $45k
  • IBM tape robot
    • 7+ TB of storage with 4 drives
    • 10-40 MB/s throughput
    • Low-maintenance IBM ADSM software
  • Can we expect cheap, low-maintenance 100 MB/s in 2005? (see the throughput arithmetic below)
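To make the throughput trade-off concrete: a sketch of how long one full pass over the 15 TB store of slide 7 would take at each quoted rate, assuming the nominal rates are actually sustained:

```c
/* Time for a full pass over a 15 TB store at 40 MB/s, 100 MB/s, and 1 GB/s. */
#include <stdio.h>

int main(void)
{
    double store_bytes = 15.0e12;
    double rates_mbs[] = { 40.0, 100.0, 1000.0 };   /* MB/s */

    for (int i = 0; i < 3; i++) {
        double hours = store_bytes / (rates_mbs[i] * 1.0e6) / 3600.0;
        printf("%5.0f MB/s -> ~%.0f hours\n", rates_mbs[i], hours);
    }
    /* ~104 h at 40 MB/s, ~42 h at 100 MB/s, ~4 h at 1 GB/s */
    return 0;
}
```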

  9. Cost
                                       Yearly    5 yr
  • People
    • 3 FTEs                           525k      2400k
    • Post-docs                        ??
  • Hardware
    • 50 boxes x 5k                              250k
    • Mass storage tape robot                    250k
    • Disk ($100/GB, scaled)                     100k
  • Software
    • Licenses                         10k       50k
  Total (5 yr): 3.0M (summed below)
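Summing the 5-year column as reconstructed above (the one-time hardware items are placed in that column, and the post-doc line is left out, as on the slide):

```c
/* Sanity check on the 3.0M five-year total (amounts in k$). */
#include <stdio.h>

int main(void)
{
    double five_year_k[] = { 2400.0,   /* 3 FTEs                  */
                              250.0,   /* 50 boxes x 5k           */
                              250.0,   /* mass-storage tape robot */
                              100.0,   /* disk                    */
                               50.0 }; /* software licenses       */
    double total_k = 0.0;
    for (int i = 0; i < 5; i++)
        total_k += five_year_k[i];

    printf("total: ~%.2f M\n", total_k / 1000.0);   /* 3.05M, i.e. ~3.0M */
    return 0;
}
```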

  10. Summary
  • Schedule
    • How many Tier 2s?
    • Where and when?
    • Spread geographically, or sub-system oriented?
  • They need to be relevant to code development => start as many as possible now.
  • Need a presence at CERN now.
