CHEPREO Tier-3 Center

Presentation Transcript

  1. CHEPREO Tier-3 Center Achievements

  2. FIU Tier-3 Center
  • Tier-3 centers in the CMS computing model
    • Primarily employed in support of the local CMS physics community
    • Some also participate in CMS production activities
    • Hardware and manpower requirements are non-trivial
      • Requires sufficient hardware to make the effort worthwhile
      • A grid-enabled gatekeeper and other grid services are also required
  • The FIU Tier-3 is deployed to provide services and resources for:
    • Local physicists: CMS analysis and computing
    • The Education, Research and Outreach Group
    • Cyberinfrastructure: serves as a "reference implementation" of an operational grid-enabled resource for the FIU CI community at large
    • The FIU Grid community
    • CHEPREO Computer Science groups

  3. FIU Tier-3 at a Glance
  • ROCKS-based meta-cluster consisting of:
    • A grid-enabled computing cluster: approx. 20 dual-Xeon boxes
    • Service nodes: user login node, frontend, web servers, Frontier/squid server, development machines
  • Local CMS interactive analysis (purchased with FIU startup funds):
    • A single 8-core server with 16 GB RAM
    • A large 3ware-based 16 TB file server
  • A Computing Element (CE) site on the OSG production grid:
    • A production CE even before the OSG (Grid3)
    • Supports all of the OSG VOs
    • Maintained with the latest version of the OSG software cache

  4. FIU Tier-3 Usage at a Glance
  Current usage, since Nov. 2007:
  • About 40K hours logged through the OSG grid gatekeeper
  • The CMS, LIGO, nanoHUB, OSGedu, and other VOs have used the site over this period
  • About 85% of that was utilized by cmsProd during CSA07
    • We generated about 60K events of the 60M-event worldwide effort
    • We were one of only 2 or 3 Tier-3s worldwide that participated in CSA07!
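The usage figures above can be cross-checked with a few lines of arithmetic (a throwaway sketch; the 40K-hour, 85%, 60K-event, and 60M-event numbers are taken directly from the slide):

```python
# Figures from the slide: ~40K hours logged since Nov. 2007,
# ~85% of which was used by cmsProd during CSA07.
total_hours = 40_000
cms_prod_share = 0.85
cms_prod_hours = total_hours * cms_prod_share
print(f"cmsProd hours during CSA07: ~{cms_prod_hours:.0f}")  # ~34000

# FIU's share of the CSA07 event-generation effort.
local_events = 60_000
worldwide_events = 60_000_000
print(f"FIU share of events: {local_events / worldwide_events:.3%}")  # 0.100%
```

So roughly 34K of the 40K gatekeeper hours are attributable to CSA07 production, and the 60K events amount to about 0.1% of the worldwide sample.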

  5. Tier-3 Team at FIU
  • CHEPREO/CIARA funded positions:
    • Micha Niskin: lead systems administrator
    • Ernesto Rubi: networking
  • CHEPREO undergraduate fellows, working on deploying CMS and storage services:
    • Ramona Valenzula
    • David Puldon
    • Ricardo Leante

  6. CHEPREO Tier-3 Center Proposed Activities

  7. Tier-3 Computing Facility
  • In support of the Open Science Grid:
    • We still plan to involve our group in OSG integration and validation activities
      • Participate in the integration testbed
      • Help carry out testing of new OSG releases
      • Help debug and troubleshoot new grid middleware deployments and applications
    • This work is currently manpower-limited: our systems administrator is busy with new hardware deployments, procurement, and training new CHEPREO fellows
  • In support of CMS computing (production activities):
    • Not yet involved in CCRC08 or CSA08; participation will require new worker nodes with 1.0+ GB of RAM per core
      • 10 new dual quad-core nodes plus a new gatekeeper
      • Specs have been defined and bids will be submitted; we expect quotes by the end of March
    • Will require additional services for CMS production:
      • Storage Element: we eventually plan to deploy our own dCache-based SE, but for now we point our cmssoft installation to our local Tier-2 at UF
      • Frontier/squid is now a must for participation: we will use existing Xeon hardware; CHEPREO fellows are working on this task
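For the Frontier/squid task above, the core of the deployment is a caching squid restricted to the local cluster. A minimal illustrative `squid.conf` fragment of the kind involved is sketched below; the subnet and cache sizes are placeholders, not our actual site values:

```
# Allow only machines on the local cluster network (placeholder subnet)
acl local_nodes src 10.0.0.0/24
http_access allow local_nodes
http_access deny all

# Listen on the conventional squid port
http_port 3128

# In-memory and on-disk cache sizes (illustrative values only)
cache_mem 256 MB
cache_dir ufs /var/spool/squid 10000 16 256
```

CMSSW jobs on the worker nodes are then pointed at this proxy, so repeated conditions-database requests are served from the local cache rather than from the remote Frontier servers.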

  8. Tier-3 Computing Facility
  • In support of local CMS interactive analysis activities:
    • Issue: how to access CMS data from FIU. Several solutions are being investigated, all involving close coordination with our CHEPREO Tier-2 partners at UF and Caltech.
    • Download data to a Tier-2, then transfer it to FIU via FDT:
      • Deployment, utilization, and benchmarking of the FDT tool within CMS' Distributed Data Management system
    • With UF, download data to a WAN-enabled Lustre filesystem:
      • Tests have already demonstrated line-speed access with Lustre over the LAN at UF, between UF's Tier-2 and its HPC center
      • Testing is now underway on an isolated testbed with UF's HPC group: kernel patches have been applied, and the network route to the FIU test server is being reconfigured
      • Requires special access to the HPC facility, so it is unlikely to be a general CMS-wide solution
    • L-Store:
      • Access CMS data stored in L-Store depots in the Tennessee region and beyond
      • Vanderbilt is working on L-Store/CMSSW integration, and we would like to help with testing when it becomes available
      • We would also like to deploy our own L-Store depot on-site, using Vanderbilt-supplied hardware currently at FIU