
NAREGI Middleware Beta 1 and Beyond


Presentation Transcript


  1. NAREGI Middleware Beta 1 and Beyond. Satoshi Matsuoka, Professor, Global Scientific Information and Computing Center, and Deputy Director, NAREGI Project, Tokyo Institute of Technology / NII. http://www.naregi.org

  2. The Titech TSUBAME Production Supercomputing Cluster, Spring 2006 • Sun Galaxy 4 (Opteron dual-core, 8-way): 10,480 cores / 655 nodes, 50.4 TeraFlops; OS: Linux (SuSE 9, 10); NAREGI Grid MW • Voltaire ISR9288 InfiniBand, 10 Gbps x2 (xDDR), ~700 ports; unified IB network, 10 Gbps + external network • 7th on the June 2006 Top500, 38.18 TFlops • NEC SX-8 small vector nodes (under plan) • ClearSpeed CSX600 SIMD accelerator: 360 boards, 35 TeraFlops (current) • Storage: 1 Petabyte (Sun “Thumper” units, 48 x 500 GB disks each) + 0.1 Petabyte (NEC iStore); Lustre FS, NFS (v4?)

  3. Titech TSUBAME: ~80+ racks, 350 m² floor area, 1.2 MW (peak)

  4. Titech Supercomputing Grid 2006 • ~13,000 CPUs, 90 TeraFlops, ~26 TeraBytes memory, ~1.1 Petabytes disk • CPU cores: x86: TSUBAME (~10,600), Campus Grid Cluster (~1,000), COE-LKR (knowledge) cluster (~260), WinCCS (~300), plus ClearSpeed CSX600 (720 chips) • Campuses: Ookayama (1.2 km span) and Suzukakedai (Campus Grid Cluster), linked by a 35 km, 10 Gbps connection • Planned: Computational Engineering Center, Mathematical and Computational Sciences Center

  5. University Computer Centers (excl. National Labs), circa Spring 2006: ~60 SC centers in Japan, interconnected by 10 Gbps SuperSINET (SuperSINET/NAREGI testbed: 17 TeraFlops) • Hokkaido University, Information Initiative Center: HITACHI SR11000, 5.6 TeraFlops • Tohoku University, Information Synergy Center: NEC SX-7, NEC TX7/AzusA • University of Tsukuba: FUJITSU VPP5000, CP-PACS 2048 (SR8000 proto) • University of Tokyo, Information Technology Center: HITACHI SR8000, HITACHI SR11000, 6 TeraFlops; others (in institutes) • National Institute of Informatics: 10 Petaflop center by 2011 • Tokyo Institute of Technology, Global Scientific Information and Computing Center: NEC/Sun TSUBAME (2006), 85 TeraFlops • Nagoya University, Information Technology Center: FUJITSU PrimePower2500, 11 TeraFlops • Kyoto University, Academic Center for Computing and Media Studies: FUJITSU PrimePower2500, 10 TeraFlops • Osaka University, CyberMedia Center: NEC SX-5/128M8, HP Exemplar V2500/N, 1.2 TeraFlops • Kyushu University, Computing and Communications Center: FUJITSU VPP5000/64, IBM Power5 p595, 5 TeraFlops

  6. Scaling Towards Petaflops… • Titech “PetaGrid”: interim TSUBAME upgrade to >200 TeraFlops in 2008 (2H), next-generation “PetaGrid” at 1 PF in 2010: the norm for a typical Japanese center? HPC software is the key! • Japan “Keisoku” >10 PF (2011); US 10P (2011~12?); US HPCS (2010); US Petascale (2007~8) • Reference points, 2002-2012: Titech Campus Grid 1.3 TF (2002), Earth Simulator 40 TF (2002), BlueGene/L 360 TF (2005), KEK 59 TF BG/L+SR11000, Titech Supercomputing Campus Grid (incl. TSUBAME) ~90 TF (2006), Korean machine >100 TF (2006~7), Chinese national machine >100 TF (2007~8)

  7. Nano-Science: coupled simulations on the Grid as the sole future for true scalability, bridging Continuum and Quanta (the 10⁻⁶ m to 10⁻⁹ m scales) • Material physics (infinite systems): fluid dynamics, statistical physics, condensed matter theory, … (the limit of computing capability) • Molecular science: quantum chemistry, molecular orbital methods, molecular dynamics, … (the limit of idealization) • Multi-physics: coordinates decoupled resources; meta-computing, high-throughput computing, multi-physics simulation with components and data from different groups within a VO, composed in real time • Old HPC environment: decoupled resources, limited users, special software, … • This is the only way to achieve true scalability!

  8. Lifecycle of Grid Apps and Infrastructure • VO application developers and managers publish HL workflows (NAREGI WFML) through the Application Contents Service; many VO users run workflows and coupled apps • The SuperScheduler dispatches user apps to GridVM-managed resources, guided by the distributed grid information service • Meta-computing: user apps run on distributed servers via GridRPC / GridMPI • Data: place and register data on the Grid, assign metadata to data (Data 1 … Data n), all under the grid-wide data management service (GridFS, metadata, staging, etc.)

  9. NAREGI Software Stack (beta 1, 2006): a WS(RF)-based (OGSA) SW stack • Grid-Enabled Nano-Applications (WP6) • WP3: Grid PSE, Grid Visualization, Grid Workflow (WFML (Unicore+ WF)) • Grid Programming (WP2): GridRPC, GridMPI • WP1: Super Scheduler, GridVM, Packaging (WSRF (GT4 + Fujitsu WP1) + GT4 and other services) • Distributed Information Service (CIM) • Data (WP4) • Grid Security and High-Performance Grid Networking (WP5), over SuperSINET • Computing resources and virtual organizations: NII, IMS, research organizations, major university computing centers

  10. GGF Standards and Pseudo-standard Activities set/employed by NAREGI • GGF: “OGSA CIM profile”, AuthZ, DAIS, GFS (Grid Filesystems), Grid CP (GGF CAOPs), GridFTP, GridRPC API (as Ninf-G2/G4), JSDL, OGSA-BES, OGSA-Byte-IO, OGSA-DAI, OGSA-EMS, OGSA-RSS, RUS, SRM (planned for beta 2), UR, WS-I RUS, ACS, CDDLM • Other industry standards employed by NAREGI: ANSI/ISO SQL, DMTF CIM, IETF OCSP/XKMS, MPI 2.0, OASIS SAML 2.0, WS-Agreement, WS-BPEL, WSRF 2.0, XACML • De facto standards / commonly used software platforms employed by NAREGI: Ganglia, GFarm 1.1, Globus 4 GRAM, Globus 4 GSI, Globus 4 WSRF (also Fujitsu WSRF for C binding), IMPI (as GridMPI), Linux (RH8/9 etc.), Solaris (8/9/10), AIX, …, MyProxy, OpenMPI, Tomcat (and associated WS/XML standards), Unicore WF (as NAREGI WFML), VOMS • Policy: implement “specs” early, even if nascent, if seemingly viable; necessary for longevity and vendor buy-in; a metric of WP evaluation

  11. Highlights of NAREGI Beta (May 2006, GGF17/GridWorld) • Professionally developed and tested • “Full” OGSA-EMS incarnation • Full C-based WSRF engine (Java -> Globus 4) • OGSA-EMS/RSS WSRF components • GGF JSDL 1.0-extension job submission, authorization, etc. • Support for more OSes (AIX, Solaris, etc.) and batch queues • Sophisticated VO support for identity/security/monitoring/accounting (extensions of VOMS/MyProxy, WS-* adoption) • WS-based application deployment support via GGF-ACS • Comprehensive data management with a grid-wide file system • Complex workflows (NAREGI-WFML) for various coupled simulations • Overall stability/speed/functional improvements • To be interoperable with EGEE, TeraGrid, etc. (beta 2) • Release next week at GGF17, press conferences, etc.

  12. Ninf-G: A Reference Implementation of the GGF GridRPC API. Large-scale computing across supercomputers on the Grid: the user ① calls remote procedures and libraries over the Internet and ② is notified of the results • What is GridRPC? A programming model using RPCs on a Grid, providing an easy and simple programming interface; the GridRPC API is published as a proposed recommendation (GFD-R.P 52) • What is Ninf-G? A reference implementation of the standard GridRPC API, built on the Globus Toolkit, now in NMI Release 8 (the first non-US software in NMI) • Easy three steps to make your program Grid-aware: write an IDL file that specifies the interface of your library, compile it with the IDL compiler ng_gen, and modify your client program to use the GridRPC API (see the sketch below)
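To make the three steps concrete, here is a minimal client-side sketch against the standard GridRPC C API (grpc_initialize, grpc_function_handle_init, grpc_call), as implemented by Ninf-G. The server name, the configuration file name, and the remote routine mylib/mmul with its argument list are illustrative assumptions, not taken from the slides.

    /* Hedged GridRPC client sketch; the remote function, its arguments, and
     * the host/configuration names are examples only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include "grpc.h"

    int main(int argc, char *argv[])
    {
        grpc_function_handle_t handle;
        double a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, c[4];
        int n = 2;

        /* Step 3a: initialize the GridRPC system from a client config file. */
        if (grpc_initialize("client.conf") != GRPC_NO_ERROR) {
            fprintf(stderr, "grpc_initialize failed\n");
            return EXIT_FAILURE;
        }

        /* Step 3b: bind a handle to a remote function whose interface was
         * described in IDL and compiled on the server side with ng_gen. */
        if (grpc_function_handle_init(&handle, "server.example.org",
                                      "mylib/mmul") != GRPC_NO_ERROR) {
            fprintf(stderr, "function handle init failed\n");
            grpc_finalize();
            return EXIT_FAILURE;
        }

        /* Step 3c: call it like a local procedure; arguments follow the IDL. */
        if (grpc_call(&handle, n, a, b, c) == GRPC_NO_ERROR)
            printf("c[0] = %f\n", c[0]);

        grpc_function_handle_destruct(&handle);
        grpc_finalize();
        return EXIT_SUCCESS;
    }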

  13. GridMPI • MPI applications run on the Grid environment: a single (monolithic) MPI application spans computing resource sites A and B over a wide-area network • Metropolitan-area, high-bandwidth environment (roughly 10 Gbps and 500 miles, i.e. smaller than 10 ms one-way latency): parallel computation • Larger than the metropolitan area: MPI-IO • A standard MPI program runs unchanged under GridMPI; see the sketch below
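Since GridMPI implements the standard MPI interface (IMPI-based), an ordinary MPI program such as the following can span two sites once launched under the GridMPI runtime. The program itself is plain MPI and is not taken from the NAREGI material; it simply measures the round-trip time that the ~10 ms one-way latency bound above refers to.

    /* Ordinary MPI ping-pong between ranks 0 and 1; under GridMPI the two
     * ranks may sit at different sites connected by the wide-area network. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, i;
        double buf = 0.0, t0, t1;
        const int iters = 100;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < iters && size >= 2; i++) {
            if (rank == 0) {
                MPI_Send(&buf, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&buf, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&buf, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&buf, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("average round trip: %g ms (%d ranks)\n",
                   (t1 - t0) / iters * 1e3, size);

        MPI_Finalize();
        return 0;
    }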

  14. Grid Application Environment (WP3) • NAREGI Portal with per-VO portal GUIs (Bio VO, Nano VO): Deployment UI, Register UI, Workflow GUI, Visualization GUI • Gateway services: Grid PSE (Application Contents Service, Application Repository, Deployment Service with GGF-ACS compile / deploy / un-deploy), Grid Workflow (Workflow Service, NAREGI-WFML, File/Execution Manager, JM I/F module, BPEL+JSDL), Grid Visualization (CFD, Molecular, and Parallel Visualization Services with the CFD Visualizer, Molecular Viewer, and Parallel Visualizer) • Core grid services (WSRF): Workflow Engine & Super Scheduler, Distributed Information Service, File Transfer (RFT), VOMS / MyProxy, Grid File System, plus the underlying grid services

  15. WP-3: User-Level Grid Tools & PSE • Grid PSE - Deployment of applications on the Grid - Support for execution of deployed applications • Grid Workflow - Workflow language independent of specific Grid middleware - GUI in task-flow representation • Grid Visualization - Remote visualization of massive data distributed over the Grid - General Grid services for visualization

  16. The NAREGI SSS Architecture (2007/3) • PETABUS (Peta Application Services Bus): application-specific services • WESBUS (Workflow Execution Services Bus): NAREGI-SSS JM, EPS, CSG, BPEL Interpreter Service • CESBUS (Co-allocation Execution Services Bus, a.k.a. BES+ bus): FTS-SC, GRAM-SC, UniGridS-SC, AGG-SC with RS (aggregate SCs) • BESBUS (Basic Execution Services Bus): Globus WS-GRAM I/F (with reservation), UniGridS Atomic Services (with reservation), GridVM • Grid resources underneath

  17. NAREGI beta 1 SSS Architecture: an extended OGSA-EMS incarnation • Abbreviations: SS: Super Scheduler; JSDL: Job Submission Description Language document; JM: Job Manager; EPS: Execution Planning Service; CSG: Candidate Set Generator; RS: Reservation Service; IS: Information Service; SC: Service Container; AGG-SC: Aggregate SC; GVM-SC: GridVM SC; FTS-SC: File Transfer Service SC; BES: Basic Execution Service I/F; CES: Co-allocation Execution Service I/F (BES+); CIM: Common Information Model; GNIS: Grid Network Information Service • Flow: the NAREGI-WP3 Workflow Tool / PSE / GVS issues NAREGI-WFML requests (Submit, Cancel, Status, Delete) to the JM client, which converts WFML to BPEL (with embedded JSDL) via WFML2BPEL (and back via BPEL2WFST); the NAREGI JM (BPEL engine with a Java I/F module) invokes the EPS over the CES I/F (CreateActivity from BPEL, GetActivityStatus, RequestActivityStateChanges) • The EPS selects resources from the JSDL (SelectResourceFromJSDL) with the help of the CSG (GenerateCandidateSet), which generates SQL queries from JSDL against the CIM-based IS (is-query via OGSA-DAI over a PostgreSQL DB); the GNIS supplies grid network information • The AGG-SC/RS makes and cancels reservations (MakeReservation, CancelReservation, GetGroupsOfNodes) against per-site RSs, then drives the SCs: GVM-SCs fork/exec globusrun-ws to GRAM4-specific WS-GRAM and GridVM on each site (local schedulers such as PBS and LoadLeveler) for co-allocated execution, and the FTS-SC forks/execs uber-ftp / globus-url-copy against the GFarm server for file transfer

  18. Co-allocation and Reservation (steps 3, 4) • A meta-computing scheduler is required to allocate and execute jobs on multiple sites simultaneously: the Super Scheduler negotiates with the local Reservation Services (RSs) on job execution time and reserves resources that can execute the jobs simultaneously • Components: Super Scheduler (Execution Planning Services, Candidate Set Generator, Reservation Service), Local RS 1 with meta-scheduling plus Service Container and GridVM on cluster (site) 1, Local RS 2 with Service Container and GridVM on cluster (site) 2, and the Distributed Information Service • Example flow: an abstract JSDL for 10 nodes is matched against candidates Local RS 1 (8 nodes) and Local RS 2 (6 nodes) and split into concrete JSDLs of 8 and 2 nodes; the SS requests a 3-hour window starting at 14:00, the local RSs answer with 15:00-18:00, and agreement instances (EPRs) are created at each site so both sub-jobs run in the common 15:00-18:00 slot (see the sketch after this slide)
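At its core, the negotiation above amounts to intersecting the time windows offered by each local reservation service and checking that the requested duration fits. The following is a minimal illustrative sketch of that idea only, not NAREGI source code; the data mirrors the example (3 hours requested from 14:00, both sites offering 15:00-18:00).

    /* Illustrative co-allocation helper: find a common reservation slot of
     * the requested length across the windows offered by each local RS.
     * Times are minutes since midnight. */
    #include <stdio.h>

    typedef struct { int start, end; } window_t;

    /* Returns 1 and fills *out if all sites share a window >= duration. */
    static int common_slot(const window_t *offers, int nsites,
                           int duration, window_t *out)
    {
        int start = offers[0].start, end = offers[0].end;
        for (int i = 1; i < nsites; i++) {
            if (offers[i].start > start) start = offers[i].start;
            if (offers[i].end   < end)   end   = offers[i].end;
        }
        if (end - start < duration)
            return 0;               /* no common slot long enough: renegotiate */
        out->start = start;
        out->end   = start + duration;
        return 1;
    }

    int main(void)
    {
        window_t offers[2] = { {15 * 60, 18 * 60},     /* Local RS 1 */
                               {15 * 60, 18 * 60} };   /* Local RS 2 */
        window_t slot;

        if (common_slot(offers, 2, 3 * 60, &slot))
            printf("reserve %02d:%02d-%02d:%02d on all sites\n",
                   slot.start / 60, slot.start % 60,
                   slot.end / 60, slot.end % 60);
        else
            printf("renegotiate: no common window\n");
        return 0;
    }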

  19. NAREGI Information Service (beta) Architecture • A lightweight CIMOM service on each node (A, B, C, …) classifies information (OS, processor, file system, job queue, GridVM, performance via Ganglia, etc.) according to a CIM-based schema, guarded by an ACL • The information is aggregated and accumulated in RDBs hierarchically (hierarchical filtered aggregation across Cell Domain Information Services, with an aggregator service, data service, and parallel queries at the Information Service node) • The client library (Java API) utilizes the OGSA-DAI client toolkit; clients include the resource broker and the user/admin viewer • Accounting information from chargeable services (GridVM etc.) is published via RUS::insertURs and accessed through the Resource Usage Service (RUS)

  20. NAREGI IS: Standards Employed in the Architecture • Information Service nodes: GT4.0.1, Tomcat 5.0.28, aggregator service, OGSA-DAI WSRF 2.1, Java API client library, RDB • Resource description: CIM spec and CIM Schema 2.10 with extensions, CIM/XML, lightweight CIMOM service with CIM providers (OS, processor, file system, job queue, GridVM, applications, performance via Ganglia), ACL per node • Accounting: WS-I RUS and GGF UR (RUS::insertURs from chargeable services such as GridVM) • Hierarchical filtered aggregation and distributed queries across Cell Domain Information Services; clients include OGSA-RSS and OGSA-BES components and the user/admin viewer

  21. GridVM Features • Platform independence as an OGSA-EMS SC • WSRF OGSA-EMS Service Container interface for heterogeneous platforms and local schedulers • “Extends” Globus 4 WS-GRAM • Job submission using JSDL • Job accounting using UR/RUS • CIM provider for resource information • Meta-computing and coupled applications • Advance reservation for co-allocation • Site autonomy • WS-Agreement based job execution (beta 2) • XACML-based access control of resource usage • Virtual Organization (VO) management • Access control and job accounting based on VOs (VOMS & GGF-UR)

  22. NAREGI GridVM (beta) Architecture • A virtual execution environment on each site: virtualization of heterogeneous resources, resource and job management services with a unified I/F • The Super Scheduler drives advance reservation, monitoring, and control through a GRAM4 WSRF I/F per site; resource information flows to the Information Service • On each site the GridVM scheduler sits on top of the local scheduler (e.g. AIX/LoadLeveler, Linux/PBS Pro) and the GridVM engine handles job execution, accounting, and sandboxing under the site policy; GridMPI couples jobs across sites

  23. NAREGI GridVM: Standards Employed in the Architecture • GT4 GRAM integration and WSRF-based extension services (GRAM4 WSRF I/F to the Super Scheduler) • Job submission based on JSDL with NAREGI extensions • CIM-based resource information provider feeding the Information Service • UR/RUS-based job accounting • XACML-like access control policy enforcing the site policy • GridVM scheduler over the local scheduler, GridVM engine, and GridMPI

  24. GT4 GRAM-GridVM Integration • Integrated as an extension module to GT4 GRAM, aiming to make both sets of functionality available • The SS submits via globusrun (RSL+JSDL'); the GRAM services handle basic job management plus authentication and authorization (SUDO) and delegate to the GridVMJobFactory / GridVMJob extension services through a GRAM adapter • The GridVM scheduler drives the local scheduler (PBS Pro, LoadLeveler, …) via the Scheduler Event Generator; transfer requests go to RFT file transfer with delegation; the GridVM engine executes the job

  25. Next Steps for WP1 - Beta 2 • Stability, robustness, ease of install • Standard-setting core OGSA-EMS: OGSA-RSS, OGSA-BES/ESI, etc. • More supported platforms (VM): SX series, Solaris 8-10, etc. • More batch queues: NQS, N1GE, Condor, Torque • “Orthogonalization” of the SS, VM, IS, and WSRF components: better, more orthogonal WSRF APIs, minimized sharing of state (e.g., reservation APIs, event-based notification); mix-and-match of multiple SS/VM/IS/external components brings many benefits; robustness • Better and realistic center VO support • Better interoperability with external grid MW stacks, e.g. Condor-C

  26. VO and Resources in Beta 2: decoupling of WP1 components for pragmatic VO deployment • Application VOs (VO-APL1, VO-APL2) and resource-organization VOs (VO-RO1, VO-RO2) each run their own SS and IS; clients attach through the resource organizations RO1, RO2, RO3 • GridVM instances on the resource nodes (a.RO2 … n.RO2, A.RO1 … N.RO1) enforce per-node policies listing which VOs are admitted (e.g., VO-RO1 only; VO-RO1 + VO-APL1; VO-RO1 + VO-APL1 + VO-APL2; VO-RO2 + VO-APL2)

  27. NAREGI Data Grid beta 1 Architecture (WP4) • The Grid Workflow (Job 1 … Job n) imports data into the workflow through Data Access Management • Metadata Management assigns metadata to data as part of the grid-wide data sharing service • Data Resource Management places and registers data on the Grid and stores it (Data 1 … Data n) into distributed file nodes • The grid-wide file system is currently GFarm v1.x

  28. NAREGI WP4: Standards Employed in the Architecture • Workflow (NAREGI WFML => BPEL+JSDL) feeding the Super Scheduler (OGSA-RSS) • Data Access Management and data staging via the OGSA-RSS FTS SC, with GridFTP to the computational nodes; GGF SRM planned for beta 2 • Metadata construction and Data Resource Management on OGSA-DAI WSRF 2.0 over PostgreSQL 8.0 (data-specific metadata DB, data resource information DB) • Globus Toolkit 4.0.1, Tomcat 5.0.28 • File system nodes run Gfarm 1.2 PL4 (grid FS)

  29. NAREGI beta 1 Security Architecture (WP5) • The client environment (SS client, user certificate, private key) obtains a VOMS proxy certificate through the User Management Server (UMS: VOMS + MyProxy/MyProxy+), backed by the NAREGI CA • The VOMS proxy certificate flows from the Portal (WFT, PSE, GVS) through the Super Scheduler to the GridVMs and the Data Grid • The grid file system (AIST Gfarm) disk nodes sit behind the GridVMs

  30. NAREGI beta 1 Security Architecture (WP5): the standards • NAREGI CA: CP/CPS following the GRID CP (GGF CAOPs), audit criteria based on a subset of the WebTrust Programs for CA; a CA service and certificate management server/service handle request/get certificate • VO and certificate management: a proxy certificate with VO attributes is obtained (Get VOMS Attribute) and put to MyProxy via ssh + voms-myproxy-init; the client environment holds the user certificate and private key, and the Portal (WFT, PSE, GVM) retrieves the proxy certificate with VO • The Super Scheduler receives signed job descriptions and submits via globusrun-ws to the GridVM services (incl. GSI); GridVM holds local info including VO and maps the log-in accordingly • The Information Service answers VOMS queries (requirements + VO info) with the resources in the VO and stores resource info including VO info and execution info

  31. VO and User Management Service • Adoption of VOMS for VO management: proxy certificates with VO attributes are used for interoperability with EGEE; GridVM is used instead of LCAS/LCMAPS • Integration of the MyProxy and VOMS servers into NAREGI: the UMS (User Management Server) realizes a one-stop service at the NAREGI Grid Portal, using the gLite implementation at the UMS to connect to the VOMS server • MyProxy+ for the Super Scheduler: a special-purpose certificate repository to realize safe delegation between the NAREGI Grid Portal and the Super Scheduler • The Super Scheduler receives jobs with the user's signature, just like UNICORE, and submits them through the GSI interface

  32. Computational Resource Allocation Based on VO • Different resource mapping for different VOs: the same workflow is mapped onto different resource configurations for VO1 and VO2 • Example resources: pbg1042, png2041, pbg2039, png2040, with differing CPU allocations per VO (1, 2, 4, or 8 CPUs per node)

  33. Local-File Access Control (GridVM) • Provides VO-based access control functionality that does not use gridmap files • Controls file access based on a policy specified by a tuple of Subject, Resource, and Action, where Subject is a grid user ID or VO name (an illustrative evaluation sketch follows the policy examples below) • Example policy: Permit: Subject=X, Resource=R, Action=read,write; Deny: Subject=Y, Resource=R, Action=read: GridVM maps the grid user's DN to a local account and allows grid user X to access resource R while denying grid user Y

  34. Structure of the Local-File Access Control Policy • <GridVMPolicyConfig> contains one <AccessProtection> element (attributes: Default, RuleCombiningAlgorithm), which contains one or more <AccessRule> elements (attribute Effect: permit / deny) • Each <AccessRule> carries an <AppliedTo> part with <Subjects>/<Subject> (who: user or VO, selected by TargetUnit), plus <Resources>/<Resource> (what resource: file / directory) and <Actions>/<Action> (access type: read / write / execute)

  35. Policy Example (1): default effect and how rules are applied

    <gvmcf:AccessProtection gvmac:Default="Permit"
                            gvmac:RuleCombiningAlgorithm="Permit-overrides">
      <!-- Access Rule 1: for all users -->
      <gvmcf:AccessRule gvmac:Effect="Deny">
        <gvmcf:AppliedTo>
          <gvmac:Subjects> …
        <gvmac:Resources>
          <gvmac:Resource>/etc/passwd</gvmac:Resource>
        </gvmac:Resources>
        <gvmac:Actions> …
      <!-- Access Rule 2: for a specific user -->
      <gvmcf:AccessRule gvmac:Effect="Permit">
        <gvmcf:AppliedTo gvmcf:TargetUnit="user">
          <gvmcf:Subjects>
            <gvmcf:Subject>User1</gvmcf:Subject>
          </gvmcf:Subjects>
        </gvmcf:AppliedTo>
        <gvmac:Resources>
          <gvmac:Resource>/etc/passwd</gvmac:Resource>
        </gvmac:Resources>
        <gvmac:Actions>
          <gvmac:Action>read</gvmac:Action>
        </gvmac:Actions>

  36. Policy Example (2): a VO-scoped rule

    <gvmcf:AccessRule gvmac:Effect="Permit">
      <gvmcf:AppliedTo gvmcf:TargetUnit="vo">
        <gvmcf:Subjects>
          <gvmcf:Subject>bio</gvmcf:Subject>        <!-- VO name -->
        </gvmcf:Subjects>
      </gvmcf:AppliedTo>
      <gvmac:Resources>
        <gvmac:Resource>/opt/bio/bin</gvmac:Resource>   <!-- resource names -->
        <gvmac:Resource>./apps</gvmac:Resource>
      </gvmac:Resources>
      <gvmac:Actions>
        <gvmac:Action>read</gvmac:Action>
        <gvmac:Action>execute</gvmac:Action>
      </gvmac:Actions>
    </gvmcf:AccessRule>
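To illustrate how such (subject, resource, action) rules combine under a Permit-overrides algorithm with a default effect, here is a minimal C sketch; it is an illustration of the evaluation idea only, not GridVM source code, and the rule data simply mirrors Policy Example (1).

    /* Illustrative evaluation of GridVM-style access rules with a
     * "Permit-overrides" combining algorithm and a default decision. */
    #include <stdio.h>
    #include <string.h>

    typedef enum { DENY = 0, PERMIT = 1 } effect_t;

    typedef struct {
        const char *subject;   /* grid user ID or VO name; NULL matches anyone */
        const char *resource;  /* file or directory */
        const char *action;    /* read / write / execute */
        effect_t    effect;
    } rule_t;

    /* Permit-overrides: any matching Permit wins, then any matching Deny,
     * otherwise fall back to the default effect. */
    static effect_t decide(const rule_t *rules, int n, effect_t deflt,
                           const char *subject, const char *resource,
                           const char *action)
    {
        int saw_deny = 0;
        for (int i = 0; i < n; i++) {
            if (rules[i].subject && strcmp(rules[i].subject, subject) != 0)
                continue;
            if (strcmp(rules[i].resource, resource) != 0 ||
                strcmp(rules[i].action, action) != 0)
                continue;
            if (rules[i].effect == PERMIT)
                return PERMIT;
            saw_deny = 1;
        }
        return saw_deny ? DENY : deflt;
    }

    int main(void)
    {
        /* Rule 1: deny /etc/passwd reads for all users; Rule 2: permit User1. */
        rule_t rules[] = {
            { NULL,    "/etc/passwd", "read", DENY   },
            { "User1", "/etc/passwd", "read", PERMIT },
        };
        const char *who[] = { "User1", "User2" };
        for (int i = 0; i < 2; i++)
            printf("%s read /etc/passwd -> %s\n", who[i],
                   decide(rules, 2, PERMIT, who[i], "/etc/passwd", "read")
                       ? "permit" : "deny");
        return 0;
    }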

  37. VO-based Resource Mapping in the Global File System (beta 2) • The next release of Gfarm (version 2.0) will have access control functionality • We will extend the Gfarm metadata server for data-resource mapping based on VO: clients in VO1 and VO2 are directed to their respective file servers via the Gfarm metadata server

  38. Current Issues and the Future Plan • Current Issues on VO management • VOMS platform • gLite is running on GT2 and NAREGI middleware on GT4 • Authorization control on resource side • Need to implement new functions for resource control on GridVM, such as Web services, reservation, etc. • Proxy certificate renewal • Need to invent a new mechanism • Future plans • Cooperation with GGF security area members to realize interoperability with other grid projects • Proposal of a new VO management methodology and trial of reference implementation.

  39. NAREGI Application Mediator (WP6) for Coupled Applications • Supports data exchange between coupled simulations (Simulation A, Simulation B) described in a workflow (NAREGI WFT) and run as co-allocated jobs (Job 1 … Job n) under the Super Scheduler, GridVMs, and the Information Service (SQL) • Mediator components handle data transfer management: synchronized file transfer over multiple protocols (GridFTP / MPI), using SBC-XML descriptions of the global job ID, allocated nodes, transfer protocol, etc. (SBC: storage-based communication) • Data transformation management: semantic transformation libraries between different simulations (Sim. A data 1/2/3 <-> Sim. B) and a coupled accelerator, exposed via an API / JNI over OGSA-DAI WSRF 2.0, Globus Toolkit 4.0.1, GridFTP, and MPI

  40. NAREGI beta on “VM Grid” • Create a “Grid-on-Demand” environment using Xen and Globus Workspace • A vanilla personal virtual grid/cluster built on our Titech lab's research results • Dynamic deployment of the NAREGI beta image

  41. “VM Grid”: Prototype “S” • http://omiij-portal.soum.co.jp:33980/gridforeveryone.php • Request a number of virtual grid nodes: fill in the necessary info in the form, a confirmation page appears, follow the instructions • SSH login to the NMI stack (GT2 + Condor) plus selected NAREGI beta MW (Ninf-G, etc.) • An entire beta installation is in the works • Other “Instant Grid” research is in the works in the lab

  42. From Interoperation to Interoperability: GGF16 “Grid Interoperations Now” • Charlie Catlett, Director, NSF TeraGrid (on vacation) • Satoshi Matsuoka, Sub-Project Director, NAREGI Project, Tokyo Institute of Technology / NII

  43. Interoperation Activities • The GGF GIN (Grid Interoperations Now) effort: real interoperation between major Grid projects • Four interoperation areas identified: security, data management, information service, and job submission (not scheduling) • EGEE/gLite - NAREGI interoperation, based on the four GIN areas: several discussions, including a 3-day meeting at CERN in mid-March, plus email exchanges • Updates at GGF17 Tokyo next week; some details in my talk tomorrow

  44. The Ideal World: ubiquitous VO and user management for international e-Science • Regional grid infrastructural efforts with different software stacks, but interoperable: Europe (EGEE, UK e-Science, …), US (TeraGrid, OSG), Japan (NII CyberScience with NAREGI, …), other Asian efforts (GFK, China Grid, etc.), plus collaborative talks on PMA, etc. • Application VOs span them: HEP Grid VO, Astro IVO, NEES-ED Grid VO • Standardization and commonality in software platforms will realize this

  45. The Reality: Convergence/Divergence of Project Forces (original slide by Stephen Pickles, edited by Satoshi Matsuoka) • Projects and their stacks: EGEE (EU), LCG (EU), GridPP (UK), and EU-China Grid (China) around gLite / GT2; NAREGI (JP) and CSI (JP) on WSRF & OGSA (beta: GT4/Fujitsu WSRF); DEISA (EU) on IBM Unicore; Globus (US), OSG (US), Condor (US), NMI (US), TeraGrid (US), and APAC Grid (Australia) around GT4 WSRF (OGSA?); NGS (UK), UniGrids (EU), and OMII (UK) with their own WSRF & OGSA and WS-I+ & OGSA?; GGF in the middle • The links between them: interoperable infrastructure talks (e.g. with AIST-GTRC), common staff and procedures, and common users

  46. GGF Grid Interoperation Now • Started Nov. 17, 2005 @SC05 by Catlett and Matsuoka; now participation by all major grid projects • “Agreeing to agree on what needs to be agreed first” • Identified 4 essential key common services • Authentication, authorization, identity management: individuals, communities (VOs) • Jobs: submission, auditing, tracking; job submission interface, job description language, etc. • Data management: data movement, remote access, filesystems, metadata management • Resource discovery and information service: resource description schema, information services

  47. “Interoperation” versus “Interoperability” • Interoperability: “The ability of software and hardware on multiple machines from multiple vendors to communicate”, based on commonly agreed, documented specifications and procedures • Interoperation: “Just make it work together”, whatever it takes; could be ad hoc, undocumented, fragile • Low-hanging fruit now, a step towards future interoperability

  48. Interoperation Status • GIN meetings at GGF16 and GGF17; a 3-day meeting at CERN end of March • Security: common VOMS/GSI infrastructure; NAREGI's more complicated use of GSI/MyProxy and proxy delegation, but should be OK • Data: SRM commonality and data catalog integration; GFarm and dCache consolidation • Information Service: CIM vs. GLUE schema differences; fairly different monitoring systems; schema translation (see next slides) • Job Submission: JDL vs. JSDL, Condor-C/CE vs. OGSA SS/SC-VM architectural differences, etc.; simple job submission only (see next slides)

  49. Information Service Characteristics • Basic syntax: • Resource description schemas (e.g., GLUE, CIM) • Data representations (e.g., XML, LDIF) • Query languages (e.g., SQL, XPath) • Client query interfaces (e.g., WS Resource Properties queries, LDAP, OGSA-DAI) • Semantics: • What pieces of data are needed by each Grid (various previous works & actual deployment experiences already) • Implementation: • Information service software systems (e.g., MDS, BDII) • The ultimate sources of this information (e.g., PBS, Condor, Ganglia, WS-GRAM, GridVM, various grid monitoring systems, etc.).
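As a concrete illustration of why both syntax and semantics matter, here is a minimal, hedged sketch of translating a CIM-style per-host record into a GLUE-style cluster record. The struct and field names are simplified stand-ins, not the actual CIM or GLUE attribute names, and a real translator must also reconcile units, aggregation levels, and data freshness.

    /* Illustrative schema translation between simplified CIM-like and
     * GLUE-like resource descriptions (field names are assumptions). */
    #include <stdio.h>

    typedef struct {               /* CIM-style: per-host view */
        const char *system_name;
        int  number_of_processors;
        long total_visible_memory_kb;
    } cim_host_t;

    typedef struct {               /* GLUE-style: per-cluster/CE view */
        const char *unique_id;
        int  total_cpus;
        long main_memory_ram_mb;
    } glue_cluster_t;

    static glue_cluster_t cim_to_glue(const cim_host_t *h)
    {
        glue_cluster_t g;
        g.unique_id          = h->system_name;
        g.total_cpus         = h->number_of_processors;
        g.main_memory_ram_mb = h->total_visible_memory_kb / 1024;  /* kB -> MB */
        return g;
    }

    int main(void)
    {
        cim_host_t host = { "node01.example.org", 8, 16 * 1024 * 1024 };
        glue_cluster_t ce = cim_to_glue(&host);
        printf("%s: %d CPUs, %ld MB RAM\n",
               ce.unique_id, ce.total_cpus, ce.main_memory_ram_mb);
        return 0;
    }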
