J-WAN: PSC Lustre-wan Efforts 2009-2010 Josephine Palencia, JRay Scott


Presentation Transcript


  1. J-WAN: PSC Lustre-wan Efforts 2009-2010. Josephine Palencia, JRay Scott

  2. Continuing and new development efforts in progress, in close alignment with the Sun Lustre roadmap.

  3. Lustre 2.0 -- release mid-2009. GSS/Kerberos functionality may be disabled or unsupported so that other security features (newer capabilities, remote client handling) can be included in a later release. Public release of GSS/Kerberos may come with a 2.x version, or even 3.0; this is still being decided on the Lustre roadmap.

  4. Work with Kerberos-Lustre proceeds strictly on the developers' schedule. We are their testbed for Lustre Kerberos (and all other advanced features).

  5. Work Outline 1
  1. Centralized MGS server: mgs.teragrid.org, managed and located at PSC on the TERAGRID.ORG realm
  o MGS on TERAGRID.ORG
  o MDS and OSSs on PSC.EDU
  o RP-site Lustre clients, MDSs, and OSSs on remote sites
  2. Distributed OSS (OST pools)
  o Addition of OSTs contributed by remote sites to an 'OST pool'.
  o Remote RP sites contribute one or more OSTs, and PSC adds them to the pool, increasing Lustre-wan storage (see the sketch below).
  o The OST pools capability will be part of the 2.0 release.
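  A minimal sketch of how a contributed OST might be added and grouped into a pool; the filesystem name jwan, the pool name rpsites, the device paths, and the OST indices are hypothetical, while the lctl/lfs syntax follows the Lustre manual of that era:

```sh
# At a remote RP site: format an OST, pointing it at the central MGS
# (hypothetical device and filesystem name).
mkfs.lustre --fsname=jwan --ost --mgsnode=mgs.teragrid.org@tcp /dev/sdb
mount -t lustre /dev/sdb /mnt/jwan-ost0

# On the MGS: group the contributed OSTs into a named pool.
lctl pool_new jwan.rpsites
lctl pool_add jwan.rpsites jwan-OST[0000-0003]
lctl pool_list jwan.rpsites

# On a client: direct new files in a directory to that pool.
lfs setstripe --pool rpsites /mnt/jwan/shared
```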

  6. Work Outline 2
  3. Clustered metadata MDS with local multiple failovers
  a) Failovers
  - For reliability, one or more MDS failover nodes will be added by PSC to its current MDS setup (see the sketch below).
  - This may be replicated at other RP sites.
  - Each MDS resides in its local RP site's respective Kerberos realm.
  - MDS failover will only be local and will not be done remotely.
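  A minimal sketch of declaring a local failover partner for an MDT at format time, assuming the two MDS nodes share the backing storage; hostnames, NIDs, and the device are hypothetical:

```sh
# Format the MDT and declare a second local node as its failover
# partner (hypothetical failover NID).
mkfs.lustre --fsname=jwan --mdt --mgsnode=mgs.teragrid.org@tcp \
            --failnode=mds2.psc.edu@tcp /dev/sdc

# The primary node serves the MDT; after a failure, the same shared
# device is mounted on the failover node instead.
mount -t lustre /dev/sdc /mnt/jwan-mdt   # on mds1, or on mds2 after failover
```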

  7. Work Outline 3
  b) Clustered Metadata Servers (CMS)
  For scalability and better performance, we will implement a CMS setup at both local and remote sites to distribute MDS operations evenly (or appropriately) among several MDS servers. The CMD feature will not be ready in the 2.0 timeframe, but there will be "technology preview" releases containing a basically working CMD feature that we can try out (see the sketch below). We will use third-party replication tools, as Lustre's native replication feature will not be ready in the 2.0 timeframe.
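  For illustration only: clustered metadata was only a technology preview in this timeframe, but a sketch along the lines of the distributed-namespace support that later shipped in Lustre would look roughly like this (filesystem name, hosts, and devices are hypothetical):

```sh
# Add a second MDT to the filesystem (index 1) alongside MDT0.
mkfs.lustre --fsname=jwan --mdt --index=1 \
            --mgsnode=mgs.teragrid.org@tcp /dev/sdd
mount -t lustre /dev/sdd /mnt/jwan-mdt1

# Place a directory subtree on the new MDT so its metadata
# operations are served by the second metadata server.
lfs mkdir -i 1 /mnt/jwan/projects
```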

  8. Work Outline 4
  4. Make Lustre UID mapping operational for TeraGrid users
  o Re-writing and implementing the UID mapping feature
  o Ready for testing (not public release) within 2009
  5. Have J-WAN appear on PSC's Speedpage
  6. Test integration of J-WAN with the TeraGrid portal

  9. Work Outline 5
  All of this falls under the umbrella of a Lustre-Kerberos implementation on all Lustre components, with bi-directional Kerberos authentication operational between RP sites and transitive Kerberos authentication with TERAGRID.ORG (see the sketch below).
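  A minimal sketch of enabling Kerberos on a Lustre filesystem of that era, assuming the service principals have already been created in each site's KDC; the filesystem name and hostnames are hypothetical, and the srpc flavor syntax follows the Lustre 2.x manual:

```sh
# Service principals expected by Lustre GSS/Kerberos, one per server
# (hypothetical hostnames, created in each site's realm):
#   lustre_mds/mds1.psc.edu@PSC.EDU
#   lustre_oss/oss1.psc.edu@PSC.EDU

# Run the GSS daemons: lsvcgssd on servers, lgssd on clients.
lsvcgssd

# On the MGS: require the krb5p (privacy) flavor for all RPC
# traffic on the filesystem by default.
lctl conf_param jwan.srpc.flavor.default=krb5p
```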

  10. Reference: PSC's Lustre-wan Efforts 2009-2010, http://www.teragridforum.org/mediawiki/index.php?title=PSC%27s_Lustre-wan_efforts_2009-2010
