GEON Systems Report

Presentation Transcript


  1. GEON Systems Report. Karan Bhatia, San Diego Supercomputer Center. Friday, Aug 13, 2004. www.geongrid.org

  2. Year 2 Goals & Accomplishments
Goals:
• Procure and deploy physical resources for partners
• Provide infrastructure for management of systems, including mechanisms for collaboration and communication
• Provide basic production services for data
• Provide basic grid services for applications
Accomplishments:
• Physical Layer: purchased and deployed hardware
• Systems Layer: developed management software and collaborations with partner sites; developed the GEON software stack
• Grid Layer: beginning to build out services; portal & security done (end of Aug); naming & discovery, data management & replication, and mediation still in basic research
• Applications Layer: some apps ready, used as templates for how to build apps in GEON

  3. GEONgrid Development (layered architecture)
• Applications: end-user apps & services
• Grid Layer: grid system services
• Systems Layer: OS & software layer
• Physical Deployment: hardware, clusters, networks

  4. Physical Deployment
Vendors:
• Dell (27 production systems + 9 development systems): PowerEdge 2650-based, dual 2.8 GHz Pentium processors, 2 GB RAM
• ProMicro (3 systems): dual Pentium, 4 TB + RAID
• HP cluster donation (9 systems): rx2600-based, dual 1.4 GHz
Per-site deployment:
• 15 partner sites, 1 PoP node each
• Optional small cluster (4 systems)
• Optional data node
• Misc. equipment as needed: switches, racks, etc.

  5. Deployment Architecture
• Similar to the BIRN architecture
• Each site runs a PoP, with optional cluster and data nodes
• Users access resources through the PoP: it provides the point of entry and access to global services
• Developers add services & data hosted on GEON resources, exposed as web services/grid services
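
The PoP pattern above can be illustrated in code. This is a minimal sketch only: PointOfPresence, registerGlobalService, and lookup are hypothetical names invented here to model the flow described on this slide, not part of the GEON software stack.

    import java.net.URL;
    import java.util.HashMap;
    import java.util.Map;

    /** Hypothetical model of a site PoP: the single entry point to global services. */
    class PointOfPresence {
        private final Map<String, URL> globalServices = new HashMap<>();

        /** The PoP is configured with the global services it fronts. */
        void registerGlobalService(String name, URL endpoint) {
            globalServices.put(name, endpoint);
        }

        /** Users resolve services through the PoP instead of contacting them directly. */
        URL lookup(String serviceName) {
            URL endpoint = globalServices.get(serviceName);
            if (endpoint == null) {
                throw new IllegalArgumentException("No such service: " + serviceName);
            }
            return endpoint;
        }
    }

Hosting developer-contributed services behind the same entry point lets each partner site expose a uniform interface while the cluster and data nodes behind it vary.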

  6. GEONgrid Current Status
Physical resources:
• All PoPs deployed, 3 data nodes deployed, clusters all up
• HP cluster delivered
Software stack:
• Mix of GeonRocks 0.1 (Red Hat 9-based) and Red Hat 9

  7. Systems Layer
• Unified software stack definition: custom GEON Roll, web/grid services software stack, common GEON applications and services
• Focus on scalable systems management: modified Rocks for wide-area cluster management (see [Sacerdoti04])
• Collaborations with partner sites: identified appropriate contacts

  8. GEON Software Roll
• Development: OGSI 1.0 (from GT3.0.2) --> GT3.2 (packaged by NMI); web services (Jakarta, Axis, Ant, etc.)
• Portal: GridSphere 2.02 portal framework
• Database: IBM DB2 (as packaged for the Protein Data Bank), Postgres --> PostGIS
• Data: SRB client software, OPeNDAP roll (UNAVCO)
• Security: DB2 with GSI plugin (developed by TeraGrid), Tripwire
• System/grid monitoring: INCA testing and monitoring framework (TeraGrid) with GRASP benchmarks, Network Weather Service (NWS)
GEON Software Stack Version 1.0 to be deployed starting Sept 1, 2004!

  9. Wide-Area Cluster Management
Federico Sacerdoti, Sandeep Chandra, and Karan Bhatia, “Grid Systems Deployment and Management using Rocks”, Cluster 2004, Sept. 20-23, 2004, San Diego, California

  10. Additional Infrastructure
• Production/development servers: 8 development servers used for various activities
• Main production portal: blogs, forums, RSS; production application services
• CVS services: cvs.geongrid.org
• GEON certificate authority: ca.geongrid.org

  11. Grid Layer
Goals:
• Evaluate core software infrastructure: CAS, Handle.net, RLS (Replica Location Service), VOMS (Virtual Organization Management), Firefish, MCS (Metadata Catalog Service), SRB, CSF (Community Scheduler Framework)
• Integrate or build as necessary:
1. Portal infrastructure
2. Security infrastructure
3. Naming and discovery infrastructure
4. Data management and replication
5. Generic mediation

  12. Portal Infrastructure (1 of 5): GridSphere
• GridSphere Portal Framework, developed by GridLab (Jason Novotny and others), Albert Einstein Institute, Berlin, Germany
• Java/JSP portlet container: JSR 168 support; WSRP and JSF coming
• Supports collaboration (standard portlet API), personalization (e.g. my.yahoo.com), grid services (GSI support), and web services
• Other frameworks: Open Grid Computing Environments (OGCE), Apache Jetspeed-based --> Sakai
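
Because GridSphere is a JSR 168 container, a GEON portlet is an ordinary portlet class. A minimal sketch against the standard javax.portlet API; the class name and output are illustrative, not taken from the actual GEON portal.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.portlet.GenericPortlet;
    import javax.portlet.PortletException;
    import javax.portlet.RenderRequest;
    import javax.portlet.RenderResponse;

    /** Minimal JSR 168 portlet; any compliant container (e.g. GridSphere) can host it. */
    public class HelloGeonPortlet extends GenericPortlet {
        protected void doView(RenderRequest request, RenderResponse response)
                throws PortletException, IOException {
            // Content type must be set before fetching the writer.
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            // The container aggregates this VIEW-mode fragment into the portal page.
            out.println("<p>Hello from the GEON portal.</p>");
        }
    }

Personalization and collaboration come from the container: the same portlet renders per-user state and is composed with other portlets on a shared page.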

  13. Security Infrastructure (2 of 5)
• GSI-based; collaboration with Telescience & BIRN
• GEON certificate authority: ca.geongrid.org (SDSC CACL system)
• Role-based access control using the Globus Community Authorization Service (CAS): geonAdmin, geonPI, geonUser, public
• Portal integration: account requests, certificate management
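
The four roles can be illustrated with a toy check. This is hypothetical: Role and AccessChecker are invented here to show the intent of the role scheme; in the deployed system the decision is delegated to CAS and carried in GSI credentials rather than coded locally.

    import java.util.EnumSet;
    import java.util.Set;

    /** Hypothetical model of the four GEON roles named on this slide. */
    enum Role { GEON_ADMIN, GEON_PI, GEON_USER, PUBLIC }

    class AccessChecker {
        /** Roles granted to the caller, e.g. taken from a CAS decision. */
        private final Set<Role> granted;

        AccessChecker(Set<Role> granted) {
            this.granted = granted;
        }

        /** Throw unless the caller holds the required role. */
        void requireRole(Role required) {
            if (!granted.contains(required)) {
                throw new SecurityException("Requires role " + required);
            }
        }

        public static void main(String[] args) {
            AccessChecker pi = new AccessChecker(EnumSet.of(Role.GEON_PI));
            pi.requireRole(Role.GEON_PI);    // passes
            pi.requireRole(Role.GEON_ADMIN); // throws SecurityException
        }
    }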

  14. Naming and Discovery (3 of 5)
Naming:
• Covers all service instances, datasets, and applications
• Two-level naming scheme to support replication and versioning
• GeoID, similar to LSID (Life Sciences ID): globally unique and resolvable
Resolution:
• GeoID --> usable reference (e.g. WSDL)
• Handle System (CNRI)
Discovery:
• Discover resources in heterogeneous metadata repositories: MCAT, MCS, Geography Network (ESRI), OPeNDAP
• Firefish (LBL)
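
The two-level scheme can be sketched as two resolution steps: an abstract GeoID resolves to a concrete handle for one version or replica, and that handle resolves to a usable reference such as a WSDL location. The interfaces below are hypothetical; the real resolution goes through CNRI's Handle System.

    import java.net.URL;

    /** Hypothetical two-level GeoID resolution, mirroring the scheme on this slide. */
    interface NameResolver {
        /** Level 1: abstract GeoID -> concrete handle for a chosen version/replica. */
        String resolveToHandle(String geoId);

        /** Level 2: concrete handle -> usable reference, e.g. a WSDL location. */
        URL resolveToReference(String handle);
    }

    class GeoIdClient {
        private final NameResolver resolver;

        GeoIdClient(NameResolver resolver) {
            this.resolver = resolver;
        }

        /** Full resolution: callers never hard-code a replica or version. */
        URL locate(String geoId) {
            String handle = resolver.resolveToHandle(geoId); // picks version/replica
            return resolver.resolveToReference(handle);      // e.g. WSDL endpoint
        }
    }

Because replication and versioning are absorbed at level 1, a dataset can move or gain replicas without invalidating any published GeoID.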

  15. Data Management & Replication (4 of 5)
Installed services:
• GridFTP
• SRB server
• GMR (Grid Movement and Replication) testing, with IBM Research
• OGSA-DAI performance testing, with GRASP (Baru, Casanova, Snavely)
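
As an illustration of the GridFTP piece, the Java CoG kit exposes a client class. A minimal sketch, assuming a valid GSI proxy is in place; the host and paths are placeholders, not actual GEON endpoints, and the null-credential shorthand for the default proxy is an assumption about the CoG kit's behavior.

    import java.io.File;
    import org.globus.ftp.GridFTPClient;
    import org.globus.ftp.Session;

    public class GridFtpFetch {
        public static void main(String[] args) throws Exception {
            // Connect to a GridFTP server (placeholder host, standard port 2811).
            GridFTPClient client = new GridFTPClient("pop.example.org", 2811);
            // null is commonly used to select the caller's default proxy credential.
            client.authenticate(null);
            client.setType(Session.TYPE_IMAGE); // binary transfer
            // Retrieve a remote dataset into a local file (placeholder paths).
            client.get("/data/sample.dat", new File("sample.dat"));
            client.close();
        }
    }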

  16. Mediation Services (5 of 5)
• GIS map integration: see next talk (Ludaescher)

  17. Year 2 Summary
• Physical Layer: purchased and deployed hardware
• Systems Layer: developed management software and collaborations with partner sites; developed the GEON software stack
• Grid Layer: beginning to build out services; portal & security done (end of Aug); naming & discovery, data management & replication, and mediation still in basic research
• Applications Layer: some apps ready, used as templates for how to build apps in GEON

  18. Looking Ahead, Year 3
Goals:
• Provide core software infrastructure
• Integration with outside resources
• Encourage software development and integration with partners
• More data, more apps, more tools

  19. Questions?

  20. Additional Material

  21. Grid Movement and Replication (with IBM)
• Data is stored in the Postgres database on the GEON node at UTEP.
• A GMR capture service running at UTEP reads the data and replicates it to the Postgres database running at SDSC; GMR apply and monitor services run at SDSC to store the data sent by the capture service.
• An OGSA-DAI data access service provides access to the databases on both the UTEP and SDSC nodes.
• The user application grid service accepts two parameters: the name of the node to access, and an SQL query selecting the data of interest to send to the grav application.
• From the SQL query an XML query document is generated; from the node name an appropriate service handle is selected.
• The application grid service invokes the OGSA-DAI grid service handle to access the database, then parses the returned data to extract the values submitted to the grav application.
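
The last two steps can be sketched in code. Everything below is hypothetical: DataAccessStub and the query-document shape are invented to model the flow described above, while the real service uses the OGSA-DAI client libraries and resolved grid service handles.

    import java.util.Map;

    /** Hypothetical stand-in for an OGSA-DAI grid data service client. */
    interface DataAccessStub {
        /** Send an XML query document to a service handle; return the result XML. */
        String invoke(String serviceHandle, String queryDocument);
    }

    class GravDataService {
        /** Node name -> OGSA-DAI service handle, e.g. for "UTEP" and "SDSC". */
        private final Map<String, String> handles;
        private final DataAccessStub stub;

        GravDataService(Map<String, String> handles, DataAccessStub stub) {
            this.handles = handles;
            this.stub = stub;
        }

        /** Two parameters, as on the slide: the target node and the SQL query. */
        String fetch(String node, String sqlQuery) {
            String handle = handles.get(node);    // handle selected by node name
            String query =                        // SQL wrapped in an XML document
                "<perform><sqlQueryStatement>" + sqlQuery +
                "</sqlQueryStatement></perform>"; // real code would XML-escape the SQL
            return stub.invoke(handle, query);    // caller parses the result for grav
        }
    }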
