
Welcome to the PVFS BOF!




Presentation Transcript


  1. Welcome to the PVFS BOF!
  Rob Ross, Rob Latham, Neill Miller (Argonne National Laboratory)
  Walt Ligon, Phil Carns (Clemson University)

  2. An interesting year for PFSs
  • At least three Linux parallel file systems are out there and in use now
    • PVFS
    • GPFS
    • Lustre
  • Many “cluster file systems” also available
  • The verdict is still out on which of these are useful for which applications
  • Now we’re going to complicate things by adding another option :)

  3. Goals for PVFS and PVFS2
  [Slide figure: the I/O software stack, from Application down through High-level I/O Library, MPI-IO Library, and Parallel File System to I/O Hardware]
  • Focusing on parallel, scientific applications
  • Expect use of MPI-IO and high-level libraries (a minimal MPI-IO sketch follows below)
  • Providing our solution to a wide audience
  • Ease of install, configuration, administration
  • Handling and surviving faults is a new area of effort for us
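
  To make the “expect use of MPI-IO” point concrete, here is a minimal MPI-IO sketch. It is generic MPI-IO code rather than part of PVFS itself; the file path is a hypothetical PVFS mount point, and an MPI-IO implementation such as ROMIO handles the mapping to the underlying parallel file system.

      /* Each rank writes its own contiguous block of a shared file via MPI-IO. */
      #include <mpi.h>

      #define COUNT 1024

      int main(int argc, char **argv)
      {
          int rank, buf[COUNT];
          MPI_File fh;
          MPI_Offset offset;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          for (int i = 0; i < COUNT; i++)
              buf[i] = rank;                        /* recognizable per-rank data */

          /* Collectively open (and create, if needed) the shared file.
           * The path below is a hypothetical PVFS mount point. */
          MPI_File_open(MPI_COMM_WORLD, "/mnt/pvfs/example.dat",
                        MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

          /* Non-overlapping offsets: rank 0 writes block 0, rank 1 block 1, ... */
          offset = (MPI_Offset)rank * COUNT * sizeof(int);
          MPI_File_write_at(fh, offset, buf, COUNT, MPI_INT, MPI_STATUS_IGNORE);

          MPI_File_close(&fh);
          MPI_Finalize();
          return 0;
      }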

  4. Outline
  • Shorter discussion of PVFS (PVFS1)
    • Status
    • Future
  • Longer discussion of PVFS2
    • Goals
    • Architecture
    • Status
    • Future
  • Leave a lot of time for questions!

  5. PVFS1 Status
  • Version 1.6.1 released in the last week
  • Experimental support for symlinks (finally!)
  • Bug fixes (of course)
  • Performance optimizations in the stat path
    • Faster ls
    • Thanks to the Acxiom guys for this and other patches

  6. PVFS1 Future
  • Community support has been amazing
  • We will continue to support PVFS1
    • Bug fixes
    • Integrating patches
    • Dealing with RedHat’s changes
  • We won’t be making any more radical changes to PVFS1
    • It is already stable
    • Maintain it as a viable option for production

  7. Fault Tolerance!
  • Yes, we do actually care about this
  • No, it’s not easy to just do RAID between servers
    • If you want to talk about this, please ask me offline
  • Instead we’ve been working with Dell to implement failover solutions
    • Requires more expensive hardware
    • Maintains high performance

  8. Dell Approach
  • Failover protection of the PVFS meta server, and/or
  • Failover protection of I/O nodes
  • Using the RH AS 2.1 Cluster Manager framework
    • Floating IP address
    • Mechanism for failure detection (heartbeats; see the sketch below)
    • Shared storage and quorum
    • Mechanism for I/O fencing (power switches, watchdog timers)
    • Hooks for custom service restart scripts
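
  A rough sketch of the heartbeat-based failure detection listed above (a simplified illustration, not the Red Hat Cluster Manager implementation; the port, timeout, and failover action are hypothetical placeholders): the standby node listens for UDP heartbeats from the active server and flags a failover once the peer has been silent too long.

      /* Standby-side heartbeat monitor: declare the peer failed after silence. */
      #include <arpa/inet.h>
      #include <netinet/in.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/socket.h>
      #include <sys/time.h>
      #include <time.h>

      #define HEARTBEAT_PORT 9001   /* hypothetical heartbeat channel port */
      #define TIMEOUT_SEC    5      /* silence threshold before failover */

      int main(void)
      {
          int sock = socket(AF_INET, SOCK_DGRAM, 0);
          if (sock < 0) { perror("socket"); return 1; }

          struct sockaddr_in addr;
          memset(&addr, 0, sizeof(addr));
          addr.sin_family = AF_INET;
          addr.sin_addr.s_addr = htonl(INADDR_ANY);
          addr.sin_port = htons(HEARTBEAT_PORT);
          if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
              perror("bind");
              return 1;
          }

          /* Wake up once per second even if no heartbeat arrives. */
          struct timeval tv = { 1, 0 };
          setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

          time_t last_beat = time(NULL);
          char buf[64];

          for (;;) {
              if (recv(sock, buf, sizeof(buf), 0) > 0)
                  last_beat = time(NULL);           /* peer is alive */

              if (time(NULL) - last_beat > TIMEOUT_SEC) {
                  /* A real HA stack would fence the peer and run its
                   * service restart scripts here; we only report it. */
                  fprintf(stderr, "peer silent > %d s: initiating failover\n",
                          TIMEOUT_SEC);
                  last_beat = time(NULL);           /* avoid re-triggering each loop */
              }
          }
      }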

  9. Metadata Server Protection
  [Slide diagram: clients reach the meta server over the communication network via a Meta Server IP. An active meta server and a standby meta server (Dell PowerEdge 2650s) form the HA service, linked by a heartbeat channel and sharing a Dell PowerVault 220 storage array; the I/O nodes are Dell PE 2650s.]

  10. Metadata and I/O Failover
  [Slide diagram: clients connect over the communication network to the active meta server and active I/O nodes (Dell PE 2650s), with a standby meta server on a heartbeat channel; the HA services share a Dell|EMC CX600 Fibre Channel storage array over FC.]

  11. The Last Failover Slide
  • These solutions are more expensive than local disks
  • Many users have backup solutions that allow the PFS to be a scratch space
  • We’ll leave it to users (or procurers?) to decide what is necessary at a site
  • We’ll continue working with Dell on this to document the process and configuration

  12. PVFS in the Field
  • Many sites have PVFS in deployment
    • Largest known deployment is the Caltech TeraGrid installation
    • 70+ terabyte file system
  • Also used in industry (e.g. Acxiom)
  • Lots of papers lately too!
    • CCGrid, Cluster 2003
    • Researchers modifying and augmenting PVFS, and comparing to PVFS
  • You can buy PVFS systems from vendors
    • Dell, HP, Linux Networx, others

  13. End of PVFS1 Discussion
  Questions on PVFS1?
