Virtual Communities and Science in the Large Dr. Carl Kesselman ISI Fellow Director, Center for Grid Technologies Information Sciences Institute Research Professor Computer Science Viterbi School of Engineering University of Southern California
Acknowledgements • Ian Foster, with whom I developed many of these slides • Bill Allcock, Charlie Catlett, Kate Keahey, Jennifer Schopf, Frank Siebenlist, Mike Wilde @ ANL/UC • Ann Chervenak, Ewa Deelman, Laura Pearlman, Mike D’Arcy, Gaurang Mehta, SCEC @ USC/ISI • Karl Czajkowski, Steve Tuecke @ Univa • Numerous other fine colleagues • NSF, DOE, IBM for research support
Context: System-Level Science • Problems too large &/or complex to tackle alone …
Seismic Hazard Analysis (T. Jordan & SCEC)
[Diagram: inputs feeding the Seismic Hazard Model — seismicity, paleoseismology, geologic structure, local site effects, faults, stress transfer, rupture dynamics, crustal motion, seismic velocity structure, crustal deformation]
SCEC Community Model
[Diagram: five pathways — (1) standardized seismic hazard analysis, (2) ground-motion simulation, (3) physics-based earthquake forecasting, (4) ground-motion inverse problem, (5) structural simulation — linking other data (geology, geodesy) through a Unified Structural Representation (faults, motions, stresses, anelastic model) to the Earthquake Forecast Model, attenuation relationships, and intensity measures]
• FSM = Fault System Model • RDM = Rupture Dynamics Model • AWP = Anelastic Wave Propagation • SRM = Site Response Model
Science Takes a Village … • Teams organized around common goals • People, resources, software, data, instruments… • With diverse membership & capabilities • Expertise in multiple areas required • And geographic and political distribution • No location/organization possesses all required skills and resources • Must adapt as a function of the situation • Adjust membership, reallocate responsibilities, renegotiate resources
Virtual Organizations • From organizational behavior/management: • "a group of people who interact through interdependent tasks guided by common purpose [that] works across space, time, and organizational boundaries with links strengthened by webs of communication technologies" (Lipnack & Stamps, 1997) • The impact of cyberinfrastructure • People → computational agents & services • Communication technologies → IT infrastructure, i.e. the Grid • "The Anatomy of the Grid", Foster, Kesselman, Tuecke, 2001
Forming & Operating (Scientific) Communities • Define VO membership and roles, & enforce laws and community standards • I.e., policy • Build, buy, operate, & share community infrastructure • Data, programs, services, computing, storage, instruments • Define and perform collaborative work • Use shared infrastructure, roles, & policy • Manage community workflow
Forming & Operating (Scientific) Communities • Define VO membership and roles, & enforce laws and community standards • I.e., policy • Build, buy, operate, & share community infrastructure • Data, programs, services, computing, storage, instruments • Service-oriented architecture • Define and perform collaborative work • Use shared infrastructure, roles, & policy • Manage community workflow
Defining Community: Membership and Laws • Identify VO participants and roles • For people and services • Specify and control actions of members • Empower members → delegation • Enforce restrictions → federate policy
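The membership-and-roles idea above can be sketched in a few lines: a VO keeps a roll of members with roles, and policy maps roles to permitted operations. This is a minimal illustration, not the VOMS/CAS interface; all names (`VO`, `can_perform`, the "scheduler" role) are invented for the example.

```python
# Minimal sketch of VO membership, roles, and policy enforcement.
# Illustrative only; real VOs use services such as VOMS or CAS.

class VO:
    def __init__(self, name):
        self.name = name
        self.roles = {}    # member id -> set of role names
        self.policy = {}   # role name -> set of allowed operations

    def add_member(self, member_id, *roles):
        self.roles.setdefault(member_id, set()).update(roles)

    def allow(self, role, operation):
        self.policy.setdefault(role, set()).add(operation)

    def can_perform(self, member_id, operation):
        # A member may act iff some role it holds permits the operation.
        return any(operation in self.policy.get(r, set())
                   for r in self.roles.get(member_id, set()))

scec = VO("SCEC")
scec.allow("scheduler", "submit-job")
scec.add_member("alice", "scheduler")
assert scec.can_perform("alice", "submit-job")
assert not scec.can_perform("bob", "submit-job")   # not on the roll
```

Adjusting membership or reallocating responsibilities, as the slide requires, is then just an update to the role and policy tables.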
Policy Challenges in VOs • Restrict VO operations based on characteristics of the requestor • VO dynamics create challenges • Intra-VO • VO-specific roles • Mechanisms to specify/enforce policy at VO level • Inter-VO • Entities/roles in one VO not necessarily defined in another VO
[Diagram: the effective access policy of a site to the community combines the access granted by the community to the user with the site's admission-control policies]
Core Security Mechanisms • Authentication and digital signature • "Identity" of communicating party • Attribute assertions • C asserts that S has attribute A with value V • Delegation • C asserts that S can perform O on behalf of C • Namespaces and attribute mapping • {A1, A2, …, An}vo1 → {A'1, A'2, …, A'n}vo2 • Policy • Entity with attributes A asserted by C may perform operation O on resource R
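A toy model may make the assertion mechanism concrete: "C asserts that S has attribute A with value V" is just a signed, verifiable claim. This sketch uses an HMAC as a stand-in for the real machinery (production systems use X.509 attribute certificates or SAML, not shared-key HMACs), and every name here is hypothetical.

```python
import hashlib
import hmac
import json

# Toy signed attribute assertions. Hedged: real VO security uses
# public-key signatures (X.509/SAML); HMAC is only for illustration.

def sign(key, claim):
    payload = json.dumps(claim, sort_keys=True).encode()
    return payload, hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(key, payload, sig):
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

vo_key = b"vo-signing-secret"

# "C asserts that S has attribute A with value V"
payload, sig = sign(vo_key, {"subject": "svc-1", "attr": "role", "value": "Doer"})
assert verify(vo_key, payload, sig)
assert not verify(vo_key, payload + b"tampered", sig)

# Inter-VO attribute mapping: {A1..An}vo1 -> {A'1..A'n}vo2
vo1_to_vo2 = {"Doer": "HE/Doer", "Admin": "HE/Admin"}
assert vo1_to_vo2["Doer"] == "HE/Doer"
```

Delegation fits the same pattern: the claim becomes "S may perform O on behalf of C", signed by C.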
Security Services for VO Policy • Attribute Authority (ATA) • Issues signed attribute assertions (incl. identity, delegation & mapping) • Authorization Authority (AZA) • Decisions based on assertions & policy • Use with message/transport-level security
[Diagram: the VO ATA issues VO-member attributes to VO users and services; a delegation assertion states "User B can use Service A"; resource admins consult the VO AZA, with a mapping ATA translating VO-A attributes to VO-B attributes across VO A and VO B services]
Security Services in Practice
[Diagram: users obtain rights from MyProxy, KCA, or CAS/VOMS (issuing SAML or X.509 attribute certificates); services running on the user's behalf access a compute center over SSL/WS-Security with proxy certificates; authorization callout via SAML/XACML; local policy applied on VO identity or attribute authority]
Forming & Operating Scientific Communities • Define VO membership and roles, & enforce laws and community standards • I.e., policy • Build, buy, operate, & share community infrastructure • Data, programs, services, computing, storage, instruments • Define and perform collaborative work • Use shared infrastructure, roles, & policy • Manage community workflow
Beyond Science Silos: Service-Oriented Architecture • Decompose across the network • Clients integrate dynamically • Select & compose services • Select "best of breed" providers • Publish result as a new service • Decouple resource & service providers
[Figure: users, discovery tools, analysis tools, and data archives connected as services; function vs. resource — S. G. Djorgovski]
Decomposition Enables Separation of Concerns & Roles
[Diagram: the user asks a service provider to "provide access to data D at S1, S2, S3 with performance P" (realized with a replica catalog, user-level multicast, …); the service provider in turn asks a resource provider to "provide storage with performance P1, network with P2, …"]
Providing VO Services: (1) Integration from Other Sources • Negotiate service-level agreements • Delegate and deploy capabilities/services • Provision to deliver defined capability • Configure environment • Host layered functions
[Diagram: capabilities drawn from Community A … Community Z]
Deploying New Services • Allocate/provision • Configure • Initiate activity • Monitor activity • Control activity
[Diagram: a client works through an interface with an activity hosted in a resource provider's environment, subject to policy]
Current mechanisms include: GRAM, Workspaces (Keahey et al.), HAND (Qi et al.)
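The deployment steps named on the slide (allocate/provision → configure → initiate, with monitoring and control throughout) form a small lifecycle. The sketch below models it as a state machine; it is an illustration of the lifecycle only, not the GRAM or Workspaces interface, and the state and action names are invented.

```python
# Hedged sketch of the service-deployment lifecycle as a state machine.
# States/actions are illustrative, not a real GRAM/Workspaces API.

class Activity:
    TRANSITIONS = {
        "new":        {"allocate": "allocated"},
        "allocated":  {"configure": "configured"},
        "configured": {"initiate": "running"},
        "running":    {"control": "running", "terminate": "done"},
    }

    def __init__(self):
        self.state = "new"

    def step(self, action):
        nxt = self.TRANSITIONS.get(self.state, {}).get(action)
        if nxt is None:
            raise ValueError(f"{action!r} not valid in state {self.state!r}")
        self.state = nxt
        return self.state

    def monitor(self):
        # Monitoring is valid in every state.
        return self.state

a = Activity()
for action in ("allocate", "configure", "initiate"):
    a.step(action)
assert a.monitor() == "running"
```

Out-of-order requests (e.g., initiating before configuring) are rejected, which is the policy-enforcement point the slide's "Policy" box suggests.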
Virtualizing Existing Services into a VO • Establish a service agreement with the service • E.g., WS-Agreement, GRAM • Delegate use to a VO user
[Diagram: a VO admin establishes agreements with existing services and delegates their use to VO users A and B]
Open Science Grid • 50 sites (15,000 CPUs) & growing • 400 to >1000 concurrent jobs • Many applications + CS experiments; includes long-running production operations • Up since October 2003; few FTEs for central operations [Chart: jobs, 2004] • www.opensciencegrid.org
Embedded Resource Management
[Diagram: a client-side VO admin and VO users delegate to a VO scheduler, which forwards jobs via GRAM to cluster and head-node resource managers, with monitoring and control alongside other services]
• VO admin delegates credentials to be used by downstream VO services • VO admin starts the required services • VO jobs come in directly from the upstream VO users • Each VO job is forwarded to the appropriate resource using the VO credentials • Computational job is started for the VO
The Condor Brick
[Diagram: a VO admin deploys the brick, allocates resources, and initiates job starters (i.e., glidein) and management services via GRAM, spanning private and public networks; VO users execute jobs in the local Condor environment via Condor-C]
Policy for Dynamic VO Service
[Diagram: a user invokes DoIt on a service; the service PDP permits DoIt if VO_PDP(Attrs)=yes & Role=HE/Doer; the VO PDP permits DoIt if Role=VO/Doer, AddUser via the VO ATA, and AddPolicy if Role=VO/Admin; the hosting-environment container PDP permits CreateService if Role=HE/ServiceCreator]
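The chained decision on this slide — the service grants DoIt only if the hosting environment's role check passes *and* the VO's policy decision point says yes — can be sketched directly. This is an illustration of the call chain, not XACML or the GT4 authorization framework; the role strings follow the slide's labels but the functions are invented.

```python
# Sketch of chained policy decision points (PDPs), per the slide:
# service PDP -> "DoIt if VO_PDP(attrs)=yes & Role=HE/Doer".
# Function names are illustrative, not a real authorization API.

def vo_pdp(attrs):
    # VO-level policy: DoIt requires the VO/Doer role.
    return "VO/Doer" in attrs.get("roles", [])

def he_pdp(attrs, operation):
    # Hosting-environment policy, e.g. who may create services.
    if operation == "CreateService":
        return "HE/ServiceCreator" in attrs.get("roles", [])
    return "HE/Doer" in attrs.get("roles", [])

def service_pdp(attrs, operation):
    # The service defers to both the hosting environment and the VO.
    return he_pdp(attrs, operation) and vo_pdp(attrs)

user = {"roles": ["VO/Doer", "HE/Doer"]}
assert service_pdp(user, "DoIt")
# HE role alone is not enough: the VO must also say yes.
assert not service_pdp({"roles": ["HE/Doer"]}, "DoIt")
```

The point of the layering is that the VO can change its policy (via AddPolicy, if Role=VO/Admin) without touching the hosting environment's configuration.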
Providing VO Services: (2) Coordination & Composition • Take a set of provisioned services … & compose them to synthesize new behaviors • This is traditional service composition • But must also be concerned with emergent behaviors, autonomous interactions • See the work of the agent & PlanetLab communities • "Brain vs. Brawn: Why Grids and Agents Need Each Other," Foster, Kesselman, Jennings, 2004
The Globus-Based LIGO Data Grid • LIGO Gravitational Wave Observatory • Replicating >1 Terabyte/day to 8 sites (incl. Cardiff, AEI/Golm, Birmingham) • >120 million replicas so far • MTBF = 1 month • www.globus.org/solutions
Data Replication Service • Pull "missing" files to a storage system
[Diagram: the Data Replication Service takes a list of required files; data location via Local Replica Catalogs and Replica Location Indexes; data movement via the Reliable File Transfer Service and GridFTP]
"Design and Implementation of a Data Replication Service Based on the Lightweight Data Replicator System," Chervenak et al., 2005
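The "pull missing files" step amounts to diffing the required-file list against the local replica catalog and handing the gaps to a transfer service. A minimal sketch, with `locate` standing in for a replica-location-index query and `transfer` for RFT/GridFTP (both hypothetical callables, not real APIs):

```python
# Sketch of the Data Replication Service's core loop: find files that
# are required but not yet in the local catalog, locate a source
# replica, transfer it, and register the new copy.
# locate()/transfer() are placeholders for RLS and RFT/GridFTP calls.

def missing_files(required, local_catalog):
    return [lfn for lfn in required if lfn not in local_catalog]

def replicate(required, local_catalog, locate, transfer):
    for lfn in missing_files(required, local_catalog):
        source = locate(lfn)       # query the replica location index
        transfer(source, lfn)      # reliable file transfer
        local_catalog.add(lfn)     # register the new local replica

catalog = {"a.dat"}                # "a.dat" is already local
moved = []
replicate(["a.dat", "b.dat"], catalog,
          locate=lambda lfn: f"gsiftp://remote.example/{lfn}",
          transfer=lambda src, lfn: moved.append(lfn))
assert moved == ["b.dat"]          # only the missing file was pulled
assert "b.dat" in catalog
```

Registering the replica last means a crashed transfer simply leaves the file "missing" for the next pass, which is what makes the pull model easy to retry.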
Composing Resources … Composing Services
[Diagram: procure hardware (physical machine) → deploy hypervisor/OS → deploy virtual machines → deploy container (JVM) → deploy services (GridFTP, RLS, DRS) as VO services; provisioning, management, and monitoring at all levels]
Community Commons • What capabilities are available to the VO? • Membership changes, state changes • Require mechanisms to aggregate and update VO information
[Diagram: services (S) register information with VO-specific indexes (A); registered information ranges from fresh to stale as it ages]
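Aggregation with freshness can be sketched as an index whose entries expire unless re-registered, which is the essence of how membership and state changes propagate. This is an illustrative model only; the real mechanism (next slide) is MDS-Index with lifetime-managed WS-ServiceGroup entries, and the class and method names here are invented.

```python
import time

# Toy freshness-aware VO index: services register periodically, and
# entries older than a TTL are treated as stale. Illustrative only;
# GT4's MDS-Index uses WS-ServiceGroup with managed entry lifetimes.

class VOIndex:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}   # service name -> last registration time

    def register(self, service, now=None):
        self.entries[service] = time.time() if now is None else now

    def fresh_services(self, now=None):
        now = time.time() if now is None else now
        return sorted(s for s, t in self.entries.items()
                      if now - t <= self.ttl)

idx = VOIndex(ttl_seconds=60)
idx.register("GridFTP", now=0)
idx.register("RFT", now=100)
# At t=120 the GridFTP registration has aged out; RFT is still fresh.
assert idx.fresh_services(now=120) == ["RFT"]
```

Expiry means a service that silently disappears also disappears from the commons, without any explicit deregistration step.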
Monitoring and Discovery Services
[Diagram: GT4 containers host MDS-Index services (WS-ServiceGroup) aggregating registrations from GRAM, GridFTP, and RFT; automated registration in the container; adapters provide custom protocols for non-WSRF entities; clients (e.g., WebMDS) access via WSRF/WS-Notification]
Forming & Operating Scientific Communities • Define VO membership and roles, & enforce laws and community standards • I.e., policy • Build, buy, operate, & share community infrastructure • Data, programs, services, computing, storage, instruments • Service-oriented architecture • Define and perform collaborative work • Use shared infrastructure, roles, & policy • Manage community workflow
Collaborative Work
[Diagram: a timeline from executed ("What I Did") through executing ("What I Am Doing") to not-yet-executable ("What I Want to Do"); query, edit, and schedule operations connect the executable specification to the execution environment]
Managing Collaborative Work • Process as "workflow," at different scales, e.g.: • Run a 3-stage pipeline • Process data flowing from an experiment over a year • Engage in interactive analysis • Need to keep track of: • What I want to do (will evolve with new knowledge) • What I am doing now (evolves with system config.) • What I did (persistent; a source of information)
Problem Refinement • Given: desired result and constraints • desired result (high-level, metadata description) • application components • resources in the Grid (dynamic, distributed) • constraints & preferences on solution quality • Find: an executable job workflow • A configuration that generates the desired result • A specification of resources to be used • Sequence of operations: create agreement, move data, request operation • May create workflow incrementally as information becomes available "Mapping Abstract Complex Workflows onto Grid Environments," Deelman, Blythe, Gil, Kesselman, Mehta, Vahi, Arbree, Cavanaugh, Blackburn, Lazzarini, Koranda, 2003.
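The refinement step above — from an abstract description of the desired result to an executable job workflow — is, at its core, ordering a task DAG and binding each task to a concrete resource. A minimal sketch of that mapping, where `choose_site` stands in for the planner's resource-selection policy (the site names and the 3-stage pipeline are invented for illustration; the real planner is described in the Deelman et al. paper cited above):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Sketch of mapping an abstract workflow onto Grid resources:
# topologically order the task DAG, then bind each task to a site.
# choose_site is a placeholder for the planner's selection policy.

def plan(dag, choose_site):
    order = TopologicalSorter(dag).static_order()
    return [(task, choose_site(task)) for task in order]

# Abstract 3-stage pipeline: extract -> simulate -> visualize
# (dict maps each task to the set of tasks it depends on)
dag = {"simulate": {"extract"}, "visualize": {"simulate"}}

schedule = plan(dag, choose_site=lambda t:
                "teragrid" if t == "simulate" else "local")
assert schedule == [("extract", "local"),
                    ("simulate", "teragrid"),
                    ("visualize", "local")]
```

Because binding happens per task at plan time, the workflow can also be mapped incrementally, deferring site choices until resource information becomes available, as the slide notes.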
Trident: The GriPhyN Virtual Data System
[Diagram: a VDL program / workflow spec drives the virtual-data workflow generator; the abstract workflow becomes an execution plan (statically partitioned or dynamically planned DAG) via the job planner and local planner, consulting the virtual data catalog; execution on the Grid via DAGman & Condor-G, with job cleanup]
Seismic Hazard Curve
[Plot: annual frequency of exceedance vs. ground motion (peak ground acceleration, 0.1–0.6); annotated levels: exceeded every year; exceeded 1 time in 10 years; exceeded 1 time in 100 years (ground motion a person can expect to be exceeded during their lifetime); typical design for buildings (10% probability of exceedance in 50 years); exceeded 1 time in 1,000 years (typical design for hospitals; Carl's house during Northridge); exceeded 1 time in 10,000 years (typical design for nuclear power plants); minor and moderate damage thresholds marked]
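The building-design level on the curve ("10% probability of exceedance in 50 years") converts to an annual frequency of exceedance under the standard assumption of independent years. A short worked example (the independence assumption is the usual hazard-curve convention, stated here rather than taken from the slide):

```python
# Convert "probability p of exceedance in n years" to a return period,
# assuming years are independent:
#   P(exceed in n years) = 1 - (1 - p_annual) ** n

def return_period(p_in_n_years, n_years):
    p_annual = 1 - (1 - p_in_n_years) ** (1 / n_years)
    return 1 / p_annual

# Typical building design: 10% in 50 years -> ~475-year return period,
# i.e. an annual exceedance frequency of about 1/475 = 0.0021.
assert round(return_period(0.10, 50)) == 475
```

This is why the building-design point on the plot sits between the 1-in-100 and 1-in-1,000 levels.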
SCEC CyberShake • Calculate hazard curves by generating synthetic seismograms from an estimated rupture forecast
[Diagram: rupture forecast → strain Green tensor → synthetic seismogram → spectral acceleration → hazard curve → hazard map]
CyberShake on the SCEC VO
[Diagram: VO services — data catalog, provenance catalog, service catalog, workflow scheduler/engine, and VO scheduler — coordinating SCEC storage with TeraGrid storage and compute]
Summary (1): Community Services • Community roll, city hall, permits, licensing & police force → assertions, policy, attribute & authorization services • Directories, maps → information services • City services (power, water, sewer) → deployed services • Shops, businesses → composed services • Day-to-day activities → workflows, visualization • Tax board, fees, economic considerations → barter, planned economy, eventually markets
Summary (2) • Community-based science will be the norm • Requires collaborations across sciences, including computer science • Many different types of communities • Differ in coupling, membership, lifetime, size • Must think beyond science stovepipes • Increasingly, the community infrastructure will become the scientific observatory • Scaling requires a separation of concerns • Providers of resources, services, content • Small set of fundamental mechanisms required to build communities
For More Information • Globus Alliance • www.globus.org • NMI and GRIDS Center • www.nsf-middleware.org • www.grids-center.org • Infrastructure • www.opensciencegrid.org • www.teragrid.org • Background • www.isi.edu/~carl • The Grid, 2nd Edition • www.mkp.com/grid2