
What is Open Science Grid?



Presentation Transcript


1. What is Open Science Grid?
• High Throughput Distributed Facility
  • Shared opportunistic access to existing clusters, storage and networks.
  • Owner-controlled resources and usage policies.
• Supports Science
  • Funded by NSF and DOE projects.
  • Common technologies & cyber-infrastructure.
• Open and Heterogeneous
  • Research groups transitioning from & extending (legacy) systems to Grids; experiments developing new systems.
  • Application computer scientists: real-life use of technology, integration, operation.
R. Pordes, I Brazilian LHC Computing Workshop

2. Who is OSG: a Consortium
• US DOE HENP laboratory facilities + universities
• (US) LHC collaborations + offshore sites
• LIGO
• Condor Project
• Running HENP experiments: CDF, D0, STAR…
• Globus/CDIGS
• LBNL SDM
A collaboration of users, developers, grid technologists and facility administrators, offering training & help for administrators and users.

3. OSG on one day last week
• 50 clusters: used locally as well as through the grid.
• 5 large disk or tape stores.
• 23 VOs.
• >2000 jobs running through the grid (roughly 2000 running and 500 waiting jobs).
[Monitoring snapshot: jobs routed from the local UWisconsin campus grid to OSG; workloads include LHC, Run II and bioinformatics.]

4. Broad Engagement

5. The OSG World: Partnerships
• Campus grids: GRid Of IoWa, Grid Laboratory Of Wisconsin, Crimson Grid, Texas Advanced Computing Center, Center for Computational Research / Buffalo, TIGRE, FermiGrid
• Grid projects: DISUN, CDIGS
• National grids: TeraGrid, HEP-Brazil
• International grids: EGEE

6. What is an OSG Job?
"Work done" accomplished by, and delivered as, "benefit received"; accountable to multiple organizations.
[Diagram: MyApplication is submitted via Condor-G, brokered through the EGEE RB, VDS or OSG ReSS, onto EGEE and OSG resources.]
The job does work benefiting WLCG, and is counted on the campus grid, OSG and EGEE.
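For concreteness, here is a minimal sketch of what such a submission can look like from the user's side, driving HTCondor's grid universe (Condor-G) from Python. The gatekeeper host, executable and file names are hypothetical placeholders, not taken from the talk.

    # Minimal Condor-G submission sketch (hypothetical host and file names).
    # The submit description uses HTCondor's "grid" universe, which forwards
    # the job to a remote pre-WS GRAM (GT2) gatekeeper, i.e. the path the
    # slide labels "Job Submission: Condor-G".
    import subprocess
    import textwrap

    submit_description = textwrap.dedent("""\
        universe       = grid
        grid_resource  = gt2 ce.example.edu/jobmanager-condor
        executable     = my_application
        arguments      = --input work_unit_042
        output         = job.out
        error          = job.err
        log            = job.log
        queue
        """)

    with open("osg_job.sub", "w") as handle:
        handle.write(submit_description)

    # Hand the description to the local Condor-G scheduler.
    subprocess.run(["condor_submit", "osg_job.sub"], check=True)

Roughly speaking, pointing grid_resource at an EGEE or an OSG gatekeeper is what allows the same piece of work to be counted on the campus grid, OSG and EGEE at once.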

7. Common Middleware provided through the Virtual Data Toolkit
Domain science requirements, together with Globus, Condor, EGEE etc., feed joint projects between OSG stakeholders and middleware developers. The resulting software is tested on a "VO-specific grid", integrated into a VDT release, deployed on the OSG integration grid, and finally included in the OSG release & deployed to OSG production.

8. Reliable: Central Operations Activities
• Automated validation of basic services and site configuration.
• Configuration of the head node and storage to reduce errors:
  • remove dependence on a shared file system;
  • Condor-managed GRAM fork queue.
• Scaling tests of WS-GRAM and GridFTP.
• Daily Grid Exerciser.
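To make "automated validation" concrete, the following is a sketch of one round of such a daily check, driven from Python with the standard Globus client tools globus-job-run and globus-url-copy. The site hostnames and paths are hypothetical, and this is not the actual Grid Exerciser implementation.

    # Sketch of a daily site-validation pass: run a trivial job through each
    # gatekeeper and copy a small file through each GridFTP door.
    # Hostnames and paths are hypothetical.
    import subprocess

    SITES = [
        ("cms-ce.example.edu/jobmanager-fork", "gsiftp://cms-se.example.edu/tmp/probe"),
        ("atlas-ce.example.edu/jobmanager-fork", "gsiftp://atlas-se.example.edu/tmp/probe"),
    ]

    def check(cmd):
        """Run a command; return True on success, False on any failure."""
        try:
            subprocess.run(cmd, check=True, timeout=300,
                           stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            return True
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
            return False

    for gatekeeper, gridftp_url in SITES:
        gram_ok = check(["globus-job-run", gatekeeper, "/bin/hostname"])
        ftp_ok = check(["globus-url-copy", "file:///etc/hostname", gridftp_url])
        print(f"{gatekeeper}: GRAM {'OK' if gram_ok else 'FAIL'}, "
              f"GridFTP {'OK' if ftp_ok else 'FAIL'}")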

9. OSG Drivers
• Research groups transitioning from & extending (legacy) systems to Grids: LIGO (gravitational-wave physics); STAR (nuclear physics); CDF, D0 (high-energy physics); SDSS (astrophysics); GADU (bioinformatics); nanoHUB.
• US LHC collaborations: contribute to & depend on the milestones, functionality and capacity of OSG; commitment to general solutions, sharing resources & technologies.
• Application computer scientists: real-life use of technology, integration, operation (NMI, Condor, Globus, SRM).
• Federations with campus grids: bridge & interface local & wide-area grids (GLOW, FermiGrid, GROW, Crimson, TIGRE).
• Interoperation & partnerships with national/international infrastructures: ensure transparent and ubiquitous access; work towards standards (EGEE, TeraGrid, INFNGrid).

10. LHC physics drives the schedule and performance envelope
• Beam starts in 2008.
• The distributed system must serve 20 PB of data, held on 30 PB of disk distributed across 100 sites worldwide, to be analyzed by 100 MSpecInt2000 of CPU.
• Service Challenges give the steps to the full system (transfer rates of order 1 GigaByte/sec).
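A back-of-the-envelope calculation, using only the figures quoted above plus the illustrative assumption that each site's share moves at roughly the quoted 1 GigaByte/sec, shows why the data has to be spread over ~100 sites:

    # Rough arithmetic behind the performance envelope (illustrative only).
    gb_per_pb = 1e6              # decimal units: 1 PB = 10^6 GB
    data_gb = 20 * gb_per_pb     # 20 PB of data to serve

    rate_gb_per_s = 1.0          # the 1 GigaByte/sec quoted on the slide
    sites = 100

    # Serving 20 PB through a single 1 GB/s stream:
    days_single = data_gb / rate_gb_per_s / 86400    # ~231 days
    # Serving it in parallel, with each of ~100 sites handling its share:
    days_parallel = days_single / sites              # ~2.3 days

    print(f"single 1 GB/s stream: ~{days_single:.0f} days")
    print(f"spread over {sites} sites: ~{days_parallel:.1f} days")

A single stream would take the better part of a year; distributing the load brings the same volume within reach of the Service Challenge schedule.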

11. Bridging Campus Grid Jobs - GLOW
• Dispatch jobs from the local security, job and storage infrastructure, "uploading" them to the wide-area infrastructure.
• Fast ramp-up in the last week.
• Currently running the football pool problem, which has applications in data compression, coding theory, and statistical designs.

12. Genome Analysis and Database Update system (GADU)
• Request: 1000 CPUs for 1-2 weeks, once a month.
• 3 different applications: BLAST, Blocks, Chisel.
• Currently ramping up on OSG, receiving 600 CPUs and running 17,000 jobs a week.
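As a quick sanity check on what that throughput implies per job, assuming the 600 CPUs are kept roughly busy (purely illustrative arithmetic, not data from the talk):

    # What "600 CPUs and 17,000 jobs a week" implies about average job length,
    # assuming the CPUs are kept roughly busy (illustrative only).
    cpus = 600
    jobs_per_week = 17_000
    hours_per_week = 7 * 24

    cpu_hours_available = cpus * hours_per_week          # 100,800 CPU-hours
    avg_job_hours = cpu_hours_available / jobs_per_week  # ~5.9 hours per job

    print(f"~{avg_job_hours:.1f} CPU-hours per job on average")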

13. Common Middleware provided through the Virtual Data Toolkit (revisited, highlighting the Condor project)
Same flow as slide 7: domain science requirements, together with Globus, Condor, EGEE etc., feed joint OSG stakeholder / middleware developer projects; test on a "VO-specific grid", integrate into a VDT release, deploy on the OSG integration grid, include in the OSG release & deploy to OSG production.

14. Of course a special grid… it's the people… (some of them at the consortium meeting in Jan '06)

15. TeraGrid
Through high-performance network connections, TeraGrid integrates high-performance computers, data resources and tools, and high-end experimental facilities around the (US) country.
• CDF Monte Carlo jobs running on the Purdue TeraGrid resource are able to access OSG data areas and be accounted to both grids.
http://www.nsf.gov/news/news_images.jsp?cntn_id=104248&org=OLPA

16. OSG: More than a US Grid
• Korea
• Brazil (D0, STAR, LHC)
• Taiwan (CDF, LHC)

17. OSG: Where to find information
• OSG web site: www.opensciencegrid.org
• Work in progress: http://osg.ivdgl.org/twiki/bin/view/Integration/OverviewGuide
• Virtual Data Toolkit: http://vdt.cs.wisc.edu//index.html
• News about grids in science in "Science Grid This Week": www.interactions.org/sgtw
• OSG Consortium meeting, Seattle, Aug 21st.
Thank you!

18. OSG - EGEE Interoperation for WLCG
[Diagram: jobs flow from VO UIs through VO Resource Brokers (RBs), which discover resources via BDII (LDAP URLs) and submit to GRAM gatekeepers at OSG and EGEE sites; data stores are exposed through SRM interfaces at the Tier-2 (T2) sites. Picture thanks to I. Fisk.]

19. Open Science Grid in 1 minute
• OSG Resources: use and policy under owner control. Clusters and storage are shared across local, campus intra-grid, regional grid and large federated inter-grids.
• OSG Software Stack: based on the Virtual Data Toolkit. Interfaces: Condor-G job submission; GridFTP data movement; SRM storage management; Glue Schema v1.2; easy-to-configure GIPs; CEMon coming in 3 months.
• OSG Use: register the VO with the Operations Center; provide the URL for the VOMS service (this must be propagated to sites); contact for a Support Center; join operations groups.
• OSG Job Brokering, Site Selection: no central or unique service. LIGO uses Pegasus; SDSS uses VDS; STAR uses the STAR scheduler; CMS uses the EGEE RB; ATLAS uses Panda; CDF uses the CDF GlideCAF; D0 uses SAM-JIM; GLOW uses "condor-schedd on the side"; nanoHUB uses an application portal.
• OSG Storage & Space Management: shared file systems; persistent VO application areas; SRM interfaces.
• OSG Operations: distributed, including each VO and campus grid. Operations is also a WLCG ROC.
• OSG Accounting & Monitoring: MonALISA; can support R-GMA; OSG meters/probes for Condor being released soon. US Tier-1s report monthly to WLCG APEL.
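As a concrete illustration of the "GridFTP data movement" interface listed above, here is a minimal copy to a storage element driven from Python via the standard globus-url-copy client. The storage-element hostname and paths are hypothetical; an SRM-managed endpoint would normally be addressed through an SRM client instead.

    # Minimal sketch of moving a file to a storage element over GridFTP,
    # driven from Python via the standard globus-url-copy client.
    # The storage-element hostname and paths are hypothetical.
    import subprocess

    source      = "file:///data/results/run_001.root"
    destination = "gsiftp://se.example.edu/mnt/dcache/vo_area/run_001.root"

    # -p 4 opens four parallel TCP streams; -vb prints transfer performance.
    subprocess.run(["globus-url-copy", "-vb", "-p", "4", source, destination],
                   check=True)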

20. Services to the US Tier-1 Sites (LHCOPN)
April 4th, 2006. Joe Metzger, metzger@es.net, ESnet Engineering Group, Lawrence Berkeley National Laboratory.

21. ESnet Target Architecture: High-reliability IP Core
[Map: IP core hubs, SDN hubs, primary DOE labs and possible hubs; the IP core ring connects Seattle, Sunnyvale, LA, San Diego, Albuquerque, Denver, Chicago, Cleveland, New York, Washington DC and Atlanta.]

22. ESnet Target Architecture: Science Data Network
[Map: the Science Data Network core connects the same hubs (Seattle, Sunnyvale, LA, San Diego, Albuquerque, Denver, Chicago, Cleveland, New York, Washington DC, Atlanta).]

23. ESnet Target Architecture: IP Core + Science Data Network Core + Metro Area Rings
[Map: production IP core, Science Data Network core and Metropolitan Area Rings (loops off the backbone), built from 10-50 Gbps circuits, with international connections at several hubs (Seattle, Sunnyvale, LA, San Diego, Albuquerque, Denver, Chicago, Cleveland, New York, Washington DC, Atlanta).]
