
The Open Science Grid


Presentation Transcript


  1. The Open Science Grid A unique alliance of universities, national laboratories, scientific collaborations and software developers bringing petascale computing and storage resources into a uniform shared cyberinfrastructure.

  2. The OSG Vision: Transform compute- and data-intensive science through a cross-domain, self-managed, national distributed cyberinfrastructure that brings together campus and community infrastructure and meets the needs of Virtual Organizations at all scales.

  3. Evolution of the Projects [timeline, 1999-2009]: PPDG (DOE), GriPhyN (NSF), and iVDGL (NSF), known collectively as Trillium, led to Grid3 and then to the OSG (DOE+NSF).

  4. OSG Consortium Partners Rochester Institute of Technology Sloan Digital Sky Survey (SDSS) Southern Methodist University Stanford Linear Accelerator Center (SLAC) State University of New York at Albany State University of New York at Binghamton State University of New York at Buffalo Syracuse University T2 HEPGrid Brazil Texas Advanced Computing Center Texas Tech University Thomas Jefferson National Accelerator Facility University of Arkansas Universidade de São Paulo Universidade do Estado do Rio de Janeiro University of Birmingham University of California, San Diego University of Chicago University of Florida University of Illinois at Chicago University of Iowa University of Michigan University of Nebraska - Lincoln University of New Mexico University of North Carolina/Renaissance Computing Institute University of Northern Iowa University of Oklahoma University of South Florida University of Texas at Arlington University of Virginia University of Wisconsin-Madison University of Wisconsin-Milwaukee Center for Gravitation and Cosmology Vanderbilt University Wayne State University Academia Sinica Argonne National Laboratory (ANL) Boston University Brookhaven National Laboratory (BNL) California Institute of Technology Center for Advanced Computing Research Center for Computation & Technology at Louisiana State University Center for Computational Research, The State University of New York at Buffalo Center for High Performance Computing at the University of New Mexico Columbia University Computation Institute at the University of Chicago Cornell University DZero Collaboration Dartmouth College Fermi National Accelerator Laboratory (FNAL) Florida International University Georgetown University Hampton University Indiana University Indiana University-Purdue University, Indianapolis International Virtual Data Grid Laboratory (iVDGL) Kyungpook National University Laser Interferometer Gravitational Wave Observatory (LIGO) Lawrence Berkeley National Laboratory (LBL) Lehigh University Massachusetts Institute of Technology National Energy Research Scientific Computing Center (NERSC) National Taiwan University New York University Northwest Indiana Computational Grid Notre Dame University Pennsylvania State University Purdue University Rice University

  5. OSG Principles
     Characteristics:
     • Provide guaranteed and opportunistic access to shared resources.
     • Operate a heterogeneous environment, both in the services available at any site and for any VO, and in the multiple implementations behind common interfaces.
     • Interface to Campus and Regional Grids.
     • Federate with other national/international Grids.
     • Support multiple software releases at any one time.
     Drivers:
     • Delivery to the schedule, capacity and capability of LHC and LIGO: contributions to/from and collaboration with the US ATLAS, US CMS and LIGO software and computing programs.
     • Support for/collaboration with other physics and non-physics communities.
     • Partnerships with other Grids, especially EGEE and TeraGrid.
     • Evolution by deployment of externally developed new services and technologies.

  6. OSG Challenges
     • Develop the organizational and management structure of a consortium that drives, and a project that builds, operates and evolves, such a cyberinfrastructure.
     • Maintain and evolve a software stack capable of offering powerful and dependable capabilities that meet the science objectives of the NSF and DOE scientific communities:
       • focus on open and standard interfaces;
       • a working, robust, and dependable reference implementation.
     • Operate and evolve a dependable and well-managed distributed facility.
     • Interoperate with all major national/international Grid efforts.

  7. Campus Grids
     • A fundamental building block of the OSG: the multi-institutional, multi-disciplinary nature of the OSG is a macrocosm of many campus IT cyberinfrastructure coordination issues.
     • Three campus grids are currently operational: Fermilab, Purdue, Wisconsin.
     • We are working on adding: Harvard, Clemson, Lehigh.
     • Transparent elevation of jobs from campus CI to the OSG (sketched below).
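One way this kind of campus-to-OSG elevation was commonly realized in this era is Condor-G: the submit point rewrites an ordinary vanilla-universe job as a grid-universe job aimed at an OSG gatekeeper, so the user's workflow does not change. The sketch below is illustrative only, not taken from the slides; the HTCondor Python bindings, the helper function, and the gatekeeper hostname are all assumptions.

```python
# A rough sketch of "elevating" a campus job to the OSG via Condor-G:
# the submit point rewrites a vanilla-universe job as a grid-universe job
# aimed at an OSG gatekeeper. The HTCondor Python bindings, the helper
# function, and the gatekeeper hostname are assumptions for illustration.
import htcondor


def elevate_to_osg(job: dict, gatekeeper: str) -> dict:
    """Return a copy of a local submit description retargeted at an OSG gatekeeper."""
    grid_job = dict(job)
    grid_job["universe"] = "grid"
    # "gt2" = pre-web-services Globus GRAM, the common gatekeeper interface of this era.
    grid_job["grid_resource"] = f"gt2 {gatekeeper}/jobmanager-condor"
    return grid_job


# An ordinary campus job: nothing in it refers to the grid.
local_job = {
    "executable": "simulate.sh",          # hypothetical user payload
    "arguments": "event_set_01",
    "output": "sim.out",
    "error": "sim.err",
    "log": "sim.log",
    "should_transfer_files": "YES",
    "when_to_transfer_output": "ON_EXIT",
}

# The same job, now routed to a (hypothetical) OSG compute element.
osg_job = elevate_to_osg(local_job, "ce.osg-site.example.edu")
htcondor.Schedd().submit(htcondor.Submit(osg_job))
```

Only the routing attributes change; the job's payload is untouched, which is what makes the elevation transparent to the user.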

  8. Grid Laboratory of Wisconsin (GLOW): a 2003 initiative with six initial sites:
     • Computational Genomics, Chemistry
     • AMANDA / IceCube, Physics and Space Science
     • High Energy Physics / CMS, Physics
     • Materials by Design, Chemical Engineering
     • Radiation Therapy, Medical Physics
     • Computer Science
     Diverse users with different deadlines and usage patterns.

  9. UW Madison Campus Grid
     • Users submit jobs to their own private or department scheduler as members of a group (e.g. “CMS” or “MedPhysics”).
     • Jobs are dynamically matched to available machines.
     • Jobs run preferentially at the “home” site, but may run anywhere when machines are available.
     • Computers at each site give highest priority to jobs from their own group (via the machine RANK expression).
     • The grid crosses multiple administrative domains: there is no common uid space across campus and no cross-campus NFS for file access.
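For concreteness, here is a minimal sketch of what a group-tagged submission to a department scheduler in such a pool might look like, written with the HTCondor Python bindings (a later addition to Condor, assumed here purely for illustration); the group, user and file names are hypothetical.

```python
# A minimal sketch of a group-tagged submission to a department schedd in a
# GLOW-style pool, using the HTCondor Python bindings (assumed here purely for
# illustration). Group, user and file names are hypothetical.
import htcondor

submit = htcondor.Submit({
    "executable": "analyze.sh",
    "arguments": "dataset_042",
    "output": "analyze.out",
    "error": "analyze.err",
    "log": "analyze.log",
    # The group tag is what machine-side RANK expressions can match against,
    # so "home" machines prefer their own group's jobs but still accept others.
    "accounting_group": "MedPhysics",
    "accounting_group_user": "alice",
    # No campus-wide uid space or NFS, so let Condor transfer the files.
    "should_transfer_files": "YES",
    "when_to_transfer_output": "ON_EXIT",
    "transfer_input_files": "patient_plan.tar.gz",
})

schedd = htcondor.Schedd()               # the user's own department scheduler
result = schedd.submit(submit, count=1)
print("submitted cluster", result.cluster())
```

On the execute side, a RANK expression in each machine's configuration (for example, one that ranks jobs whose accounting group matches the host's owning group above all others) is what gives the “home” group first claim on its own hardware while leaving idle cycles open to every other group.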

  10. Housing the Machines
     • Condominium style: a centralized computing center provides space, power, cooling and management, with standardized packages.
     • Neighborhood-association style: each group hosts its own machines and contributes to the administrative effort, with base standards (e.g. Linux and Condor) to make sharing resources easy.
     • GLOW has elements of both, but leans towards the neighborhood style.

  11. The Value of Campus Scale
     • Simplicity: a common software stack with standard interfaces.
     • Fluidity: a high common denominator makes sharing easier and provides a richer feature set.
     • Collective buying power: we speak to vendors with one voice.
     • Standardized administration: e.g. GLOW uses one centralized cfengine.
     • Synergy: face-to-face technical meetings; a mailing list scales well at the campus level.
