
Transparently Adapting Scientific Applications for the Grid



Presentation Transcript


  1. Transparently Adapting Scientific Applications for the Grid Prof. Douglas Thain University of Notre Dame http://www.cse.nd.edu/~dthain

  2. The Cooperative Computing Lab • Our model of computer science research: • Understand how users with complex, large-scale applications need to interact with computing systems. • Design novel computing systems that can be applied by many different users == basic CS research. • Deploy code in real systems with real users, suffer real bugs, and learn real lessons == applied CS. • Application Areas: • Astronomy, Bioinformatics, Biometrics, Molecular Dynamics, Physics, Game Theory, ... ??? • External Support: NSF, IBM, Sun http://www.cse.nd.edu/~ccl

  3. Abstract • Users of distributed systems encounter many practical barriers between their jobs and the data they wish to access. • Problem: Users have access to many resources (disks), but are stuck with the abstractions (cluster NFS) provided by administrators. • Solution: Tactical Storage Systems allow any user to create, reconfigure, and tear down abstractions without bugging the administrator.

  4. The Standard Model [Figure: a cluster's transparent distributed filesystem backed by a single shared disk.]

  5. The Standard Model [Figure: two clusters, each with its own transparent distributed filesystem over a shared disk; workstations hold private disks; data moves between sites only via FTP, SCP, RSYNC, HTTP, ...]

  6. Problems with the Standard Model • Users encounter partitions in the WAN. • Easy to access data inside a cluster, hard outside. • Must use different mechanisms on different links. • Difficult to combine resources. • Different access modes for different purposes. • File transfer: preparing the system for its intended use. • File system: access to data for running jobs. • Resources go unused. • Disks on each node of a cluster. • Unorganized resources in a department/lab. • A global file system can’t satisfy everyone!

  7. What if... • Users could easily access any storage? • I could borrow an unused disk for NFS? • An entire cluster could be used as storage? • Multiple clusters could be combined? • I could reconfigure structures without root? • (Or without bugging the administrator daily.) • Solution: the Tactical Storage System (TSS)

  8. Outline • Problems with the Standard Model • Tactical Storage Systems • File Servers, Catalogs, Abstractions, Adapters • Applications: • Remote Database Access for BaBar Code • Remote Dynamic Linking for CDF Code • Logical Data Access for Bioinformatics Code • Expandable Database for MD Simulation • Improving the OS for Grid Computing

  9. Tactical Storage Systems (TSS) • A TSS allows any node to serve as a file server or as a file system client. • All components can be deployed without special privileges – but with security. • Users can build up complex structures. • Filesystems, databases, caches, ... • Two Independent Concepts: • Resources – The raw storage to be used. • Abstractions – The organization of storage.

  10. [Figure: applications reach storage through adapters. One adapter mounts a central filesystem; another uses a distributed filesystem abstraction with third-party (3PT) file transfer; another uses a distributed database abstraction. All are built from file servers running on ordinary Unix machines. The cluster administrator controls policy on all storage in the cluster; workstation owners control policy on each machine.]

  11. Components of a TSS: 1 – File Servers 2 – Catalogs 3 – Abstractions 4 – Adapters

  12. 1 – File Servers • Unix-Like Interface • open/close/read/write • getfile/putfile to stream whole files • opendir/stat/rename/unlink • Complete Independence • choose friends • limit bandwidth/space • evict users? • Trivial to Deploy • run server + setacl • no privilege required • can be thrown into a grid system • Flexible Access Control [Figure: two owners each run a Chirp-protocol file server (A and B) over their local file systems.]
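
Deployment really is just "run server + setacl." A minimal sketch, assuming a stock cctools install; the port and directory here are hypothetical and exact flags vary across releases (the setacl half is sketched on the access-control slide below):

    # export a local directory as a Chirp file server (no root privilege needed)
    chirp_server -r /data/myroot -p 9094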

  13. Related Work • Lots of file services for the Grid: • GridFTP, NeST, SRB, RFIO, SRM, IBP, ... • (Adapter interfaces with many of these!) • Why have another file server? • Reason 1: Must have precise Unix semantics! • Apps distinguish ENOENT vs EACCES vs EISDIR. • FTP always returns error 550, regardless of error. • Reason 2: TSS focused on easy deployment. • No privilege required, no config files, no rebuilding, flexible access control, ...

  14. Access Control in File Servers • Unix Security is not Sufficient • No global user database possible/desirable. • Mapping external credentials to Unix gets messy. • Instead, Make External Names First-Class • Perform access control on remote, not local, names. • Types: Globus, Kerberos, Unix, Hostname, Address • Each directory has an ACL:
    globus:/O=NotreDame/CN=DThain  RWLA
    kerberos:dthain@nd.edu         RWL
    hostname:*.cs.nd.edu           RL
    address:192.168.1.*            RWLA
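
ACLs can be inspected and modified remotely with the chirp client tool. A sketch mirroring the entries above (the server name is hypothetical; verify subcommand syntax against your cctools release):

    # list the ACL of the server's root directory
    chirp server-a.nd.edu getacl /
    # grant read/list access to any host in the department
    chirp server-a.nd.edu setacl / "hostname:*.cs.nd.edu" rl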

  15. Problem: Shared Namespace [Figure: one file server grants globus:/O=NotreDame/* RWLAX on a single directory, so everyone's files (test.c, test.dat, a.out, cms.exe) collide in one shared namespace.]

  16. Solution: The Reservation (V) Right [Figure: the top-level ACL grants globus:/O=NotreDame/CN=* V(RWLA), i.e., mkdir only. When /O=NotreDame/CN=Monk and /O=NotreDame/CN=Ted each mkdir, the new directory's ACL grants that one user RWLA, so each gets a private test.c and a.out.]
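
In Chirp's ACL notation the reservation right is written v(...) around the rights that newly created directories will grant. A hedged sketch (server and subject hypothetical; check the exact rights syntax against your release):

    # matching users may only mkdir at the top level; each new directory's
    # ACL then grants its creator rwla
    chirp server-a.nd.edu setacl / "globus:/O=NotreDame/CN=*" "v(rwla)"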

  17. 2 – Catalogs [Figure: file servers send periodic UDP updates to catalog servers; clients query the catalogs over HTTP and receive listings in XML, TXT, or ClassAds formats.]
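
Because the catalog speaks plain HTTP, its contents can be read with any web client. A sketch using the conventional cctools catalog host and port (treat both as assumptions):

    # fetch the current list of known servers in plain text
    curl http://catalog.cse.nd.edu:9097/query.text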

  18. 3 - Abstractions • An abstraction is an organizational layer built on top of one or more file servers. • End Users choose what abstractions to employ. • Working Examples: • CFS: Central File System • DSFS: Distributed Shared File System • DSDB: Distributed Shared Database • Others Possible? • Distributed Backup System • Striped File System (RAID/Zebra)

  19. CFS: Central File System [Figure: three applications, each on its own adapter running the CFS module, all access files stored on one file server.]

  20. DSFS: Distributed Shared File System [Figure: applications on adapters running DSFS first look up a file's location via a file pointer, then access the data directly; files and pointers are spread across several file servers.]

  21. DSDB: Distributed Shared Database [Figure: applications on adapters running DSDB send create/insert/query operations to a database server that maintains an index of files, while accessing file data directly on the file servers.]

  22. 4 – Adapter • Like an OS Kernel • Tracks processes, files, etc. • Adds new capabilities. • Enforces the owner's policies. • Delegated Syscalls • Trapped via the ptrace interface. • Action taken by Parrot. • Resources charged to Parrot. • User Chooses Abstraction • Appears as a filesystem. • Option: timeout tolerance. • Option: consistency semantics. • Option: servers to use. • Option: auth mechanisms. [Figure: unmodified programs (tcsh, cat, vi) make system calls that are trapped via ptrace by the adapter, Parrot, which keeps its own process table and file table and speaks to the abstractions (CFS, DSFS, DSDB) as well as HTTP, FTP, RFIO, NeST, SRB, gLite, ...]
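
A sketch of attaching an unmodified program to the adapter; the binary is called parrot in releases contemporary with this talk and parrot_run in later cctools, and the server name is hypothetical:

    # run an ordinary shell under Parrot; remote servers appear as paths
    parrot_run tcsh
    % ls /chirp/server-a.nd.edu/
    % cat /http/www.cse.nd.edu/index.html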

  23. [Figure: the architecture diagram from slide 10, repeated: adapters connect applications to a central filesystem, a distributed filesystem abstraction, and a distributed database abstraction, all built on Unix file servers. The cluster administrator controls policy on all storage in the cluster; workstation owners control policy on each machine.]

  24. Performance Summary • Nothing comes for free! • System calls: an order of magnitude slower. • Memory bandwidth overhead: extra copies. • However: • TSS can take full advantage of available bandwidth (unlike NFS). • TSS can drive a network/switch to its limits. • Typical slowdown on real apps: 5-10 percent. • Allows one to harness resources that would otherwise go unused. • Observation: most users are constrained by functionality, not performance.

  25. Outline • Problems with the Standard Model • Tactical Storage Systems • File Servers, Catalogs, Abstractions, Adapters • Applications: • Remote Database Access for BaBar Code • Remote Dynamic Linking for CDF Code • Logical Data Access for Bioinformatics Code • Expandable Database for MD Simulation • Improving the OS for Grid Computing

  26. Remote Database Access (Credit: Sander Klous @ NIKHEF) • HEP Simulation Needs Direct DB Access • App linked against Objectivity DB. • Objectivity accesses the filesystem directly. • How to distribute the application securely? • Solution: Remote Root Mount via TSS: parrot -M /=/chirp/fileserver/rootdir • DB code can read/write/lock files directly. [Figure: sim.exe linked with libdb.so runs over Parrot's CFS module; GSI authentication carries the user's credentials over the WAN to a TSS file server holding the DB data.]
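
A sketch of the remote root mount; the -M flag comes from the slide, while the server path and binary name are assumptions to be substituted with your own:

    # make the Chirp server's rootdir appear as / for the simulation
    parrot_run -M /=/chirp/fileserver.nd.edu/rootdir ./sim.exe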

  27. Remote Application Loading (Credit: Igor Sfiligoi @ Fermi National Lab) • Modular Simulation Needs Many Libraries • Developed on workstations, then ported to the grid. • Selection of a library depends on the analysis technique. • Constraint: must use HTTP for file access. • Solution: Dynamic Link with TSS+HTTP: /home/cdfsoft -> /http/dcaf.fnal.gov/cdfsoft [Figure: ld.so, running under Parrot, selects several MB of libraries (liba.so, libb.so, libc.so) from 60 GB on an HTTP server across the WAN.]
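
With a mountlist, Parrot can apply such redirections without touching the application. A sketch using the mapping from the slide; the -m flag and its file format are from cctools, but verify against your release:

    # redirect the software tree to the HTTP server, then run the app
    echo "/home/cdfsoft /http/dcaf.fnal.gov/cdfsoft" > mountlist
    parrot_run -m mountlist ./appl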

  28. Technical Problem • HTTP is not a filesystem! (No directories.) • Advantages: firewalls, caches, admins. [Figure: the application's opendir(/home) reaches Parrot's HTTP module, which can only send GET /home HTTP/1.0; the server answers with an HTML page (<HTML> <HEAD> <H1> ...) rather than the directory entries alice, babar, cms.]

  29. Technical Problem • Solution: turn the directories into files. • Can be cached in ordinary proxies! [Figure: a make httpfs pass writes a listing file named .dir into each directory; Parrot translates opendir(/home) into GET /home/.dir HTTP/1.0, which returns the entries alice, babar, cms.]
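
The listings can be precomputed on the server side. A hedged sketch of what such a pass might do; this stands in for the actual make httpfs tool, and the exported path is hypothetical:

    # write a .dir listing into every directory of the exported tree
    find /export/cdfsoft -type d | while read d; do
        ls "$d" > "$d/.dir"
    done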

  30. Logical Access to Bio Data • Many databases of biological data in different formats around the world: • Archives: Swiss-Prot, TreMBL, NCBI, etc... • Replicas: Public, Shared, Private, ??? • Users and applications want to refer to data objects by logical name, not location! • Access the nearest copy of the non-redundant protein database, don’t care where it is. • Solution: EGEE data management system maps logical names (LFNs) to physical names (SFNs). Credit: Christophe Blanchet, Bioinformatics Center of Lyon, CNRS IBCP, France http://gbio.ibcp.fr/cblanchet, Christophe.Blanchet@ibcp.fr

  31. Logical Access to Bio Data [Figure: BLAST is run on LFN://ncbi.gov/nr.data. Parrot asks the EGEE file location service "Where is LFN://ncbi.gov/nr.data?", learns "Find it at: SFN://ibcp.fr/nr.data", opens that SFN through the gLite server, and retrieves nr.data (RETR nr.data) from the matching Chirp, FTP, RFIO, or HTTP server.]

  32. Appl: Distributed MD Database • State of Molecular Dynamics Research: • Easy to run lots of simulations! • Difficult to understand the “big picture.” • Hard to systematically share results and ask questions. • Desired Questions and Activities: • “What parameters have I explored?” • “How can I share results with friends?” • “Replicate these items five times for safety.” • “Recompute everything that relied on this machine.” • GEMS: Grid-Enabled Molecular Simulations • A distributed database for MD simulation at Notre Dame. • XML database for indexing, TSS for storage/policy.

  33. GEMS Distributed Database (Credit: Jesus Izaguirre and Aaron Striegel, Notre Dame CSE Dept.) [Figure: a query (Temp>300K, Mol==CH4) goes to the XML database server, which resolves matches to replica locations such as host6:fileX, host2:fileY, host5:fileZ and host1:fileA, host7:fileB, host3:fileC; the adapter then reads the data directly from the DSFS file servers holding replicas A, B, C, X, Y, Z. Servers are discovered via catalog servers.]

  34. Active Recovery in GEMS

  35. GEMS and Tactical Storage • Dynamic System Configuration • Add/remove servers, discovered via catalog • Policy Control in File Servers • Groups can Collaborate within Constraints • Security Implemented within File Servers • Direct Access via Adapters • Unmodified Simulations can use Database • Alternate Web/Viz Interfaces for Users.

  36. Outline • Problems with the Standard Model • Tactical Storage Systems • File Servers, Catalogs, Abstractions, Adapters • Applications: • Remote Database Access for BaBar Code • Remote Dynamic Linking for CDF Code • Logical Data Access for Bioinformatics Code • Expandable Database for MD Simulation • Improving the OS for Grid Computing

  37. OS Support for Grid Computing • Grid computing in general suffers because of limitations in the operating system. • Security and permissions: • No ACLs -> hard to share data • Only root can setuid -> hard to secure services. • Resource allocation: • Cannot reserve space -> jobs crash • Hard to clean up procs -> unreliable systems

  38. [Figure: a hierarchy of user identities. root starts a kerberos login server (the kerberos identity is given to the login server) and an httpd. The krb5 login creates alice and bob; the web server creates distinct anonymous accounts (anon1, anon2) at run time, so there is no need for a global nobody; a student account is likewise created at run time. alice and bob each spawn a visitor, and these two users are completely different: root:kerberos:alice:visitor vs. root:kerberos:bob:visitor.]

  39. Tactical Storage Systems • Separate Abstractions from Resources • Components: • Servers, catalogs, abstractions, adapters. • Completely user level. • Performance acceptable for real applications. • Independent but Cooperating Components • Owners of file servers set policy. • Users must work within policies. • Within policies, users are free to build.

  40. Parting Thought • Many users of the grid are constrained by functionality, not performance. • TSS allows end users to build the structures that they need for the moment without involving an admin. • Analogy: building blocks for distributed storage.

  41. Acknowledgments • Science Collaborators: • Christophe Blanchet • Sander Klous • Peter Kunszt • Erwin Laure • John Poirier • Igor Sfiligoi • CS Collaborators: • Jesus Izaguirre • Aaron Striegel • CS Students: • Paul Brenner • James Fitzgerald • Jeff Hemmes • Paul Madrid • Chris Moretti • Phil Snowberger • Justin Wozniak

  42. For more information... Cooperative Computing Lab http://www.cse.nd.edu/~ccl Cooperative Computing Tools http://www.cctools.org Douglas Thain • dthain@cse.nd.edu • http://www.cse.nd.edu/~dthain

  43. Performance – System Calls

  44. Performance – Applications [Chart legend: parrot only]

  45. Performance – I/O Calls

  46. Performance – Bandwidth
