
Overview of GT4 Data Services


Presentation Transcript


  1. Overview of GT4 Data Services

  2. Globus Data Services Talk Outline
     Summarize capabilities and plans for data services in the Globus Toolkit Version 4.0.2:
     • Extensible IO (XIO) system: a flexible framework for I/O
     • GridFTP: fast, secure data transport
     • The Reliable File Transfer Service (RFT): data movement services for GT4
     • The Replica Location Service (RLS): a distributed registry that records the locations of data copies
     • The Data Replication Service (DRS): integrates RFT and RLS to replicate and register files

  3. The eXtensible Input/Output (XIO) System, GridFTP, and the Reliable File Transfer Service (RFT)
     Bill Allcock, ANL

  4. Technology Drivers
     • Internet revolution: 100M+ hosts; collaboration & sharing the norm
     • Universal Moore's law: x10^3 per 10 years
     • Sensors as well as computers
     • Petascale data tsunami; the gating step is analysis
     • ...and our old infrastructure?
     [Chart: growth of sequenced genomes; 114 genomes, 735 in progress, "you are here"]
     Slide courtesy of Ian Foster

  5. Extensible IO (XIO) System
     • Provides a framework that implements a read/write/open/close abstraction
     • Drivers are written to implement the functionality (file, TCP, UDP, GSI, etc.)
     • Different functionality is achieved by building protocol stacks
     • GridFTP drivers will allow third-party applications to easily access files stored under a GridFTP server
     • Other drivers could be written to allow access to other data stores
     • Changing drivers requires minimal change to the application code (see the sketch below)
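
To make the driver-stack idea concrete, here is a minimal XIO client in the style of the examples in the Globus documentation: load a transport driver by name, push it onto a stack, then perform ordinary open/read/close calls through the stack. Treat this as a sketch; error checking is omitted and exact signatures may differ across GT releases. The point is the last bullet above: swapping "tcp" for "file" (and the contact string for a path) is the only change an application would need.

    #include "globus_xio.h"

    int main(int argc, char **argv)
    {
        globus_xio_driver_t driver;
        globus_xio_stack_t  stack;
        globus_xio_handle_t handle;
        globus_byte_t       buf[256];
        globus_size_t       nbytes;

        globus_module_activate(GLOBUS_XIO_MODULE);

        /* load a transport driver by name; "file", "udp", etc. also work */
        globus_xio_driver_load("tcp", &driver);

        /* build a protocol stack and push the driver onto it */
        globus_xio_stack_init(&stack, NULL);
        globus_xio_stack_push_driver(stack, driver);

        /* the same open/read/close calls work for any stack; the contact
           string (argv[1], e.g. "host:port") is interpreted by the
           transport driver at the bottom of the stack */
        globus_xio_handle_create(&handle, stack);
        globus_xio_open(handle, argv[1], NULL);
        globus_xio_read(handle, buf, sizeof(buf), 1, &nbytes, NULL);
        globus_xio_close(handle, NULL);

        globus_module_deactivate(GLOBUS_XIO_MODULE);
        return 0;
    }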

  6. Globus XIO Framework
     • Moves the data from the user to the driver stack
     • Manages the interactions between drivers
     • Assists in the creation of drivers
     • Internal API for passing operations down the stack
     [Diagram: user API on top of a driver stack; transform drivers sit above a transport driver, with the framework managing the stack]

  7. GridFTP
     • A secure, robust, fast, efficient, standards-based, widely accepted data transfer protocol
     • GridFTP protocol:
       • Defined through the Global/Open Grid Forum
       • Multiple independent implementations can interoperate
     • The Globus Toolkit supplies a reference implementation:
       • Server
       • Client tools (globus-url-copy)
       • Development libraries

  8. GridFTP Protocol
     • The FTP protocol is defined by several IETF RFCs
     • Start with the most commonly used subset
       • Standard FTP: get/put operations, third-party transfer
     • Implement standard but often unused features
       • GSS binding, extended directory listing, simple restart
     • Extend in various ways, while preserving interoperability with existing servers (see the client sketch below)
       • Parallel data channels
       • Striped transfers
       • Partial file transfers
       • Automatic & manual TCP buffer setting
       • Progress monitoring
       • Extended restart
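
For instance, a third-party transfer with parallel data channels can be requested through the globus_ftp_client C library. The sketch below follows the documented API, but error handling is omitted, the host names are placeholders, and signatures may vary by GT release:

    #include "globus_ftp_client.h"

    static globus_mutex_t lock;
    static globus_cond_t  cond;
    static globus_bool_t  done = GLOBUS_FALSE;

    /* completion callback: wake the main thread when the transfer finishes */
    static void
    done_cb(void *arg, globus_ftp_client_handle_t *handle, globus_object_t *err)
    {
        globus_mutex_lock(&lock);
        done = GLOBUS_TRUE;
        globus_cond_signal(&cond);
        globus_mutex_unlock(&lock);
    }

    int main(void)
    {
        globus_ftp_client_handle_t        handle;
        globus_ftp_client_operationattr_t attr;
        globus_ftp_control_parallelism_t  par;

        globus_module_activate(GLOBUS_FTP_CLIENT_MODULE);
        globus_mutex_init(&lock, NULL);
        globus_cond_init(&cond, NULL);

        globus_ftp_client_handle_init(&handle, NULL);
        globus_ftp_client_operationattr_init(&attr);

        /* parallel data channels require extended block mode (Mode E) */
        globus_ftp_client_operationattr_set_mode(
            &attr, GLOBUS_FTP_CONTROL_MODE_EXTENDED_BLOCK);
        par.mode = GLOBUS_FTP_CONTROL_PARALLELISM_FIXED;
        par.fixed.size = 4;               /* 4 parallel TCP streams */
        globus_ftp_client_operationattr_set_parallelism(&attr, &par);

        /* third-party transfer: data flows server-to-server, not via client */
        globus_ftp_client_third_party_transfer(&handle,
            "gsiftp://source.example.org/data/big.file", &attr,
            "gsiftp://dest.example.org/data/big.file",   &attr,
            NULL /* no restart marker */, done_cb, NULL);

        globus_mutex_lock(&lock);
        while (!done)
            globus_cond_wait(&cond, &lock);
        globus_mutex_unlock(&lock);

        globus_ftp_client_handle_destroy(&handle);
        globus_module_deactivate(GLOBUS_FTP_CLIENT_MODULE);
        return 0;
    }

From the command line, globus-url-copy requests the same behavior with its -p (parallelism) option.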

  9. GridFTP Protocol (cont.)
     • Existing standards:
       • RFC 959: File Transfer Protocol
       • RFC 2228: FTP Security Extensions
       • RFC 2389: Feature Negotiation for the File Transfer Protocol
       • IETF draft: FTP Extensions
     • GridFTP: Protocol Extensions to FTP for the Grid
       • Grid Forum Recommendation GFD.20
       • http://www.ggf.org/documents/GWD-R/GFD-R.020.pdf

  10. GT4 GridFTP Implementation
     • 100% Globus code; no licensing issues
     • Striping support is provided in 4.0
     • IPv6 support included
     • Based on XIO
     • Extremely modular, to allow integration with a variety of data sources (files, mass stores, etc.):
       • Storage Resource Broker (SRB) DSI: allows use of GridFTP to access data stored in SRB systems
       • High Performance Storage System (HPSS) DSI: provides GridFTP access to hierarchical mass storage systems (HPSS 6.2) that include tape storage

  11. [Architecture diagram: GridFTP clients, server, I/O, network, and file systems]
     • Client interfaces: globus-url-copy, C library, RFT (third party)
     • XIO drivers: TCP, UDT (UDP), parallel streams, GSI, SSH
     • Data Storage Interfaces (DSI): POSIX, SRB, HPSS, NEST
     • GridFTP server: separate control and data channels; striping; fault tolerance; metrics collection; access control
     • www.gridftp.org

  12. Control and Data Channels
     • GridFTP (and FTP) uses (at least) two separate socket connections:
       • A control channel for carrying the commands and responses
       • A data channel for actually moving the data
     • The control channel and data channel can (optionally) be run as completely separate processes
     [Diagram: a typical installation with combined processes vs. separate control and data processes vs. a striped server]

  13. Striped Server Mode
     • Multiple nodes work together on a single file and act as a single GridFTP server
     • An underlying parallel file system allows all nodes to see the same file system and must deliver good performance (usually the limiting factor in transfer speed); i.e., NFS does not cut it
     • Each node then moves (reads or writes) only the pieces of the file that it is responsible for
     • This allows multiple levels of parallelism: CPU, bus, NIC, disk, etc.
     • Critical if you want to achieve better than 1 Gbit/s without breaking the bank: spreading a transfer across many commodity nodes means each node only needs to sustain a fraction of the aggregate rate

  14. TeraGrid Striping Results
     • Ran a varying number of stripes
     • Ran both memory-to-memory and disk-to-disk tests
     • Memory-to-memory gave extremely good (nearly 1:1) linear scalability
     • Achieved 27 Gbit/s on a 30 Gbit/s link (90% utilization) with 32 nodes
     • Disk-to-disk was limited by the storage system, but still achieved 17.5 Gbit/s

  15. [Chart: memory-to-memory transfer over a 30 Gbit/s network (San Diego to Urbana); throughput in Gbit/s vs. degree of striping]

  16. [Chart: disk-to-disk transfer over a 30 Gbit/s network (San Diego to Urbana); throughput in Gbit/s vs. degree of striping]

  17. Scalability Results

  18. Lots Of Small Files (LOSF)
     • Pipelining
       • Many transfer requests outstanding at once
       • The client sends the second request before the first completes
       • Request latency is hidden in the data transfer time
     • Cached data channel connections (see the snippet below)
       • Reuse established data channels (Mode E)
       • No additional TCP or GSI connect overhead
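
On the client side, connection caching is exposed as a handle attribute in the globus_ftp_client C library. A minimal sketch, assuming the documented attribute API (error checking omitted):

    #include "globus_ftp_client.h"

    int main(void)
    {
        globus_ftp_client_handleattr_t hattr;
        globus_ftp_client_handle_t     handle;

        globus_module_activate(GLOBUS_FTP_CLIENT_MODULE);

        globus_ftp_client_handleattr_init(&hattr);
        /* cache connections to every URL touched through this handle */
        globus_ftp_client_handleattr_set_cache_all(&hattr, GLOBUS_TRUE);

        globus_ftp_client_handle_init(&handle, &hattr);

        /* ... issue many get/put operations with this handle; with caching
           enabled, data channels stay open between operations (Mode E
           reuse) instead of being torn down after each file ... */

        globus_ftp_client_handle_destroy(&handle);
        globus_ftp_client_handleattr_destroy(&hattr);
        globus_module_deactivate(GLOBUS_FTP_CLIENT_MODULE);
        return 0;
    }

With caching enabled, the expensive per-connection GSI handshake is paid once per server rather than once per file, which is exactly the overhead that dominates when files are small.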

  19. ["Lots of Small Files" (LOSF) Optimization. Chart: throughput in Mbit/s vs. file size (KB) and number of files, sending 1 GB partitioned into equal-sized files over a 60 ms RTT, 1 Gbit/s WAN with a 16 MB TCP buffer. John Bresnahan et al., Argonne]

  20. Reliable File Transfer Service (RFT)
     • A service that accepts requests for third-party file transfers
     • Maintains state about ongoing transfers in a database; storing state in a database increases reliability and lets the service recover from RFT failures
     • Service interface:
       • The client can submit the transfer request, then disconnect and go away
       • Similar to a job scheduler, but for transfer jobs
     • Two ways to check status:
       • Subscribe for notifications
       • Poll for status (can check for missed notifications)

  21. Reliable File Transfer (cont.)
     • RFT accepts a SOAP description of the desired transfer
     • It writes this description to a database
     • It then uses the Java GridFTP client library to initiate third-party transfers on behalf of the requestor
     • Restart markers are stored in the database to allow restart in the event of an RFT failure
     • Supports concurrency, i.e., multiple files in transit at the same time; this gives good performance on many small files

  22. Reliable File Transfer: Third-Party Transfer
     • Fire-and-forget transfer
     • Web services interface
     • Many files & directories
     • Integrated failure recovery
     • Has transferred 900K files
     [Diagram: an RFT client exchanges SOAP messages (and optional notifications) with the RFT service, which drives a third-party transfer between two GridFTP servers; each server comprises a protocol interpreter, master and slave DSIs, data channels, and IPC receivers joined by IPC links]

  23. Data Transfer Comparison
     [Diagram: with globus-url-copy, the client itself holds the control channels to both GridFTP servers; with RFT, the client only exchanges SOAP messages (and optional notifications) with the RFT service, which manages the control channels and the data flow]

  24. Globus Replica Location Service

  25. Replica Management in Grids
     • Data-intensive applications produce terabytes or petabytes of data, stored as millions of data objects
     • Data is replicated at multiple locations for reasons of:
       • Fault tolerance: avoid single points of failure
       • Performance: avoid wide-area data transfer latencies; achieve load balancing
     • Need tools for:
       • Registering the existence of data items and discovering them
       • Replicating data items to new locations

  26. A Replica Location Service
     • A Replica Location Service (RLS) is a distributed registry that records the locations of data copies and allows replica discovery
     • Must perform and scale well: support hundreds of millions of objects and hundreds of clients
     • E.g., the LIGO (Laser Interferometer Gravitational Wave Observatory) project:
       • RLS servers at 8 sites
       • Maintain associations between 3 million logical file names and 30 million physical file locations
     • RLS is one component of a replica management system; other components include consistency services, replica selection services, reliable data transfer, etc.

  27. A Replica Location Service (cont.)
     • RLS maintains mappings between logical identifiers and target names (e.g., one logical file name mapping to the gsiftp:// URLs of its physical replicas)
     • The RLS framework was designed in a collaboration between the Globus project and the DataGrid project (SC2002 paper)

  28. RLS Framework
     [Diagram: Replica Location Index (RLI) nodes aggregating state from five Local Replica Catalogs (LRCs)]
     • Local Replica Catalogs (LRCs) contain consistent information about logical-to-target mappings
     • Replica Location Index (RLI) nodes aggregate information about one or more LRCs
     • LRCs use soft-state update mechanisms to inform RLIs about their state: relaxed consistency of the index
     • Optional compression of state updates reduces communication, CPU, and storage overheads
     • A membership service registers participating LRCs and RLIs and deals with changes in membership

  29. Replica Location Service in Context
     • The Replica Location Service is one component in a layered data management architecture
     • Provides a simple, distributed registry of mappings
     • Consistency management is provided by higher-level services

  30. Components of the RLS Implementation
     • Common server implementation for LRC and RLI
     • Front-end server:
       • Multi-threaded, written in C
       • Supports GSI authentication using X.509 certificates
     • Back-end server:
       • MySQL, PostgreSQL, or Oracle relational database
       • Embedded SQLite DB
     • Client APIs: C, Java, Python
     • Client command-line tool

  31. RLS Implementation Features
     • Two types of soft-state updates from LRCs to RLIs:
       • Complete list of logical names registered in the LRC
       • Compressed updates: Bloom filter summaries of the LRC
     • Immediate mode: incremental updates
     • User-defined attributes may be associated with logical or target names
     • Partitioning (without Bloom filters): divide LRC soft-state updates among RLI index nodes using pattern matching on logical names
     • Currently, static membership configuration only (no membership service)

  32. Alternatives for Soft-State Update Configuration
     • LFN list:
       • Send the list of logical names stored on the LRC
       • Supports exact and wildcard searches on the RLI
       • Soft-state updates get increasingly expensive as the number of LRC entries increases (space, network transfer time, CPU time on the RLI)
       • E.g., with 1 million entries, an update takes 20 minutes against MySQL on a dual-processor 2 GHz machine (CPU-limited)
     • Bloom filters (see the sketch below):
       • Construct a summary of LRC state by hashing logical names into a bitmap
       • Updates are much smaller and faster; supports a higher query rate
       • Small probability of false positives (lossy compression)
       • Lose the ability to do wildcard queries
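
To make the Bloom filter alternative concrete, the sketch below shows the general technique; it illustrates the idea rather than the RLS implementation, and the filter size, hash count, and sample names are arbitrary. Each logical name sets k bits in a bitmap; a query checks those k bits, so a miss is definitive while a hit may (rarely) be a false positive.

    #include <stdint.h>
    #include <stdio.h>

    #define FILTER_BITS (1 << 20)        /* 1 Mbit bitmap */
    #define NUM_HASHES  3                /* k hash functions */

    static uint8_t filter[FILTER_BITS / 8];

    /* FNV-1a hash, salted per hash index to get k "independent" hashes */
    static uint32_t hash(const char *name, uint32_t salt)
    {
        uint32_t h = 2166136261u ^ salt;
        for (; *name; name++) {
            h ^= (uint8_t)*name;
            h *= 16777619u;
        }
        return h % FILTER_BITS;
    }

    /* register a logical name: set its k bits */
    static void bloom_add(const char *lfn)
    {
        for (uint32_t i = 0; i < NUM_HASHES; i++) {
            uint32_t bit = hash(lfn, i);
            filter[bit / 8] |= (uint8_t)(1u << (bit % 8));
        }
    }

    /* query: 0 = definitely absent, 1 = probably present */
    static int bloom_query(const char *lfn)
    {
        for (uint32_t i = 0; i < NUM_HASHES; i++) {
            uint32_t bit = hash(lfn, i);
            if (!(filter[bit / 8] & (1u << (bit % 8))))
                return 0;
        }
        return 1;
    }

    int main(void)
    {
        bloom_add("lfn://example/run42.dat");                     /* hypothetical name */
        printf("%d\n", bloom_query("lfn://example/run42.dat"));   /* 1: present */
        printf("%d\n", bloom_query("lfn://example/missing.dat")); /* 0: absent  */
        return 0;
    }

Because only the fixed-size bitmap travels to the RLI, the update cost no longer grows with the number or length of the logical names; the price is that wildcard matching becomes impossible, since the names themselves never leave the LRC.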

  33. Immediate Mode for Soft-State Updates
     • Send updates after 30 seconds (configurable) or after a fixed number of updates (100 by default)
     • Full updates are sent at a reduced rate
     • The tradeoff depends on the volatility of the data and the frequency of updates
     • Immediate mode updates the RLI quickly, reducing the period of inconsistency between LRC and RLI content
     • Immediate mode usually sends less data, because full updates become less frequent
     • Usually advantageous; an exception is the initial loading of a large database

  34. Performance Testing
     • Extensive performance testing reported in the HPDC 2004 paper:
       • Performance of individual LRC (catalog) or RLI (index) servers: a client program submits operation requests to the server
       • Performance of soft-state updates: client LRC catalogs send updates to index servers
     • Software versions:
       • Replica Location Service 2.0.9
       • Globus Packaging Toolkit 2.2.5
       • libiODBC library 3.0.5
       • MySQL database 4.0.14
       • MyODBC library (with MySQL) 3.51.06

  35. Testing Environment
     • Local area network tests (100 Megabit Ethernet):
       • Clients (either the client program or LRCs) on a cluster of dual Pentium III 547 MHz workstations with 1.5 GB of memory running Red Hat Linux 9
       • Server: dual Intel Xeon 2.2 GHz with 1 GB of memory running Red Hat Linux 7.3
     • Wide area network tests (soft-state updates):
       • LRC clients (Los Angeles): cluster nodes
       • RLI server (Chicago): dual Intel Xeon 2.2 GHz with 2 GB of memory running Red Hat Linux 7.3

  36. LRC Operation Rates (MySQL Back End)
     • Up to 100 total requesting threads; clients and server on a LAN
     • Query: request the target of a logical name
     • Add: register a new <logical name, target> mapping
     • Delete: remove a mapping

  37. Comparison of LRC to Native MySQL Performance
     • LRC overheads are highest for queries: the LRC achieves 70-80% of native rates
     • Adds and deletes reach ~90% of native performance for 1 client (10 threads)
     • Similar or better add and delete performance with 10 clients (100 threads)

  38. Bulk Operation Performance
     • For user convenience, the server supports bulk operations, e.g., 1000 operations per request
     • Adds and deletes are combined to maintain an approximately constant DB size
     • For a small number of clients, bulk operations increase rates; e.g., 1 client (10 threads) performs 27% more queries and 7% more adds/deletes

  39. Uncompressed Soft-State Updates
     • Perform poorly when multiple LRCs update an RLI
     • E.g., with 6 LRCs of 1 million entries each updating an RLI, the average update takes ~5102 seconds in the local area
     • Limiting factor: the rate of updates to the RLI database
     • Advisable to use incremental updates

  40. Bloom Filter Compression
     • Construct a summary of each LRC's state by hashing logical names into a bitmap
     • The RLI stores one bitmap per LRC, in memory
     • Advantages:
       • Updates are much smaller and faster
       • Supports a higher query rate, satisfied from memory rather than the database
     • Disadvantages:
       • Lose the ability to do wildcard queries, since logical names are not sent to the RLI
       • Small probability of false positives (configurable; see the estimate below)
       • Relaxed consistency model
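
The "configurable" false-positive probability follows from the standard Bloom filter analysis: for a bitmap of m bits summarizing n logical names with k hash functions,

    p \approx \left( 1 - e^{-kn/m} \right)^{k}

For example, allocating 10 bits per mapping (m/n = 10) with k = 3 hash functions gives p ≈ (1 − e^{−0.3})^3 ≈ 0.017, i.e., under 2% false positives. These parameter values are illustrative rather than the RLS defaults; adding bits per mapping drives the rate down further.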

  41. [Chart: Bloom filter performance for a single wide-area soft-state update (Los Angeles to Chicago)]

  42. Scalability of Bloom Filter Updates
     • 14 LRCs, each with 5 million mappings, send Bloom filter updates continuously in the wide area (unlikely in practice; represents a worst case)
     • Update times increase when 8 or more clients send updates
     • 2 to 3 orders of magnitude better performance than uncompressed updates (e.g., 5102 seconds with 6 LRCs)

  43. Bloom Filter Compression Supports Higher RLI Query Rates
     • Uncompressed updates: about 3000 queries per second
     • Higher rates with Bloom filter compression
     • Scalability limit: significant overhead to check 100 bitmaps
     • Practical deployments: fewer than 10 LRCs updating an RLI

  44. RLS Performance Summary
     • Individual RLS servers perform well and scale up to:
       • Millions of entries
       • One hundred requesting threads
     • Soft-state updates of the distributed index scale well when using Bloom filter compression
     • Uncompressed updates slow down as the size of the catalog grows; immediate mode is advisable

  45. Current Work
     • Ongoing maintenance and improvements to RLS
     • RLS is a stable component: good performance and scalability, no major changes to existing interfaces
     • Recently added features:
       • WS-RLS: a WSRF-compatible web services interface to the existing RLS service
       • Embedded SQLite database for easier RLS deployment
       • Pure Java client implementation completed

  46. Wide Area Data Replication for Scientific Collaborations
     Ann Chervenak, Robert Schuler, Carl Kesselman (USC Information Sciences Institute); Scott Koranda (Univa Corporation); Brian Moe (University of Wisconsin-Milwaukee)

  47. Motivation
     • Scientific application domains spend considerable effort managing large amounts of experimental and simulation data
     • They have developed customized, higher-level Grid data management services; examples:
       • Laser Interferometer Gravitational Wave Observatory (LIGO) Lightweight Data Replicator system
       • High-energy physics projects: the EGEE system, gLite, LHC Computing Grid (LCG) middleware
       • Portal-based coordination of services (e.g., Earth System Grid)

  48. Motivation (cont.)
     • Data management functionality varies by application, but these systems share several requirements:
       • Publish and replicate large datasets (millions of files)
       • Register data replicas in catalogs and discover them
       • Perform metadata-based discovery of datasets
       • May require the ability to validate the correctness of replicas
       • In general, data updates and replica consistency services are not required (i.e., accesses are read-only)
     • These systems provide production data management services to individual scientific domains
     • Each project spends considerable resources to design, implement & maintain its data management system, which typically cannot be re-used by other applications

  49. Motivation (cont.)
     • Long-term goals:
       • Generalize the functionality provided by these data management systems
       • Provide a suite of application-independent services
     • The paper describes one higher-level data management service: the Data Replication Service (DRS)
     • DRS functionality is based on the publication capability of the LIGO Lightweight Data Replicator (LDR) system:
       • Ensures that a set of files exists on a storage site
       • Replicates files as needed and registers them in catalogs
     • DRS builds on lower-level Grid services, including:
       • The Globus Reliable File Transfer (RFT) service
       • The Replica Location Service (RLS)

  50. Outline
     • Description of the LDR data publication capability
     • Generalization of this functionality: define the characteristics of an application-independent Data Replication Service (DRS)
     • DRS design
     • DRS implementation in the GT4 environment
     • Evaluation of DRS performance in a wide-area Grid
     • Related work
     • Future work
