
WP5 Mass Storage






Presentation Transcript


  1. WP5 Mass Storage • UK HEPGrid, UCL, 11th May • Tim Folkes, RAL • g.t.folkes@rl.ac.uk

  2. Tasks • Review and evaluate current technologies • A common API to heterogeneous MSS • Tape exchange, including metadata • Metadata publishing

  3. Common API • Defines an API which can be used by Grid middleware to interface to MSS • Side effect of making user programs portable as well • The original scheme has changed due to ATF activity

  4. Pre-ATF • Mass storage was treated as local • Just tape storage • No need to handle Grid proxies etc. • The WP2 data mover, replication manager etc. would handle this • We would define an API like RFIO and the testbeds would implement it locally (see the sketch below)
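
A minimal sketch of what that pre-ATF scheme implies, taking the CERN SHIFT/CASTOR rfio library as the model: client code calls POSIX-shaped functions, and each site's implementation decides what sits behind them. The header name, path and error handling here are illustrative, not a specific release's interface.

```c
/* Illustrative only: an RFIO-style call sequence.  The rfio_* calls
 * mirror POSIX open/read/close; header names and error handling
 * differ between releases, so treat the details as assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <rfio_api.h>

int main(void)
{
    char buf[4096];
    int  n;

    /* A host:/path name routes the call to a remote mover; a plain
     * local path falls through to the ordinary POSIX calls. */
    int fd = rfio_open("hsm-host:/castor/user/run01.dat", O_RDONLY, 0);
    if (fd < 0) { perror("rfio_open"); return 1; }

    while ((n = rfio_read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, n, stdout);

    rfio_close(fd);
    return 0;
}
```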

  5. ATF • Concept of Grid storage • Defined a StorageElement (SE) that includes direct access from the Grid • Access to disk and tape (i.e. all storage, and the management of the disk space)

  6. ATF • 3 interfaces defined (sketched below) • put/get • open/read • management • Move from files to objects • Required a rethink: we need software, not just an API
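
The three-way split is easiest to see as prototypes. Everything below is hypothetical: the slides name the interface groups, not their signatures.

```c
/* Hypothetical prototypes only, to illustrate the three-way split.
 * The slides name the groups (put/get, open/read, management) but
 * not the calls, so all names here are invented. */

/* 1. put/get: whole-file transfer between the Grid and the SE. */
int se_put(const char *local_path, const char *se_name, const char *se_path);
int se_get(const char *se_name, const char *se_path, const char *local_path);

/* 2. open/read: byte-level access to a file that stays inside the SE. */
typedef struct se_file se_file_t;           /* opaque handle */
se_file_t *se_open(const char *se_name, const char *se_path, int flags);
int        se_read(se_file_t *f, void *buf, unsigned long len);
int        se_close(se_file_t *f);

/* 3. management: space and lifetime control, separate from the data
 * flow; see the fuller sketch on the final slide. */
int se_free_space(const char *se_name, unsigned long *bytes_out);
```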

  7. ATF - What to do? • Evaluate Castor • Stripped-down version for disk management • At RAL for use on the datastore • GridFTP for data transfer • GridFTP server as SE

  8. GridFTP • Globus have reworked their I/O plans • GridFTP is the basis of future data movement • tuneable for network performance • parallel streams • third-party transfers, partial file transfer • file and stream interfaces • RAL and CERN have tested alpha code, but just for transfer (see the sketch below)
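
A sketch of what "third-party transfers" with "parallel streams" look like through the Globus FTP client library. The call and attribute names are taken from later Globus Toolkit releases and may not match the alpha code tested at RAL and CERN, so treat the signatures as assumptions; hostnames are placeholders.

```c
/* Illustrative third-party transfer with parallel streams, via the
 * Globus FTP client library.  Exact names and signatures varied
 * across Toolkit releases: a sketch, not the WP5 code. */
#include <stdio.h>
#include "globus_ftp_client.h"

static globus_mutex_t lock;
static globus_cond_t  cond;
static globus_bool_t  done = GLOBUS_FALSE;

/* Completion callback: wake the main thread when the transfer ends. */
static void done_cb(void *arg, globus_ftp_client_handle_t *h,
                    globus_object_t *err)
{
    if (err) fprintf(stderr, "transfer failed\n");
    globus_mutex_lock(&lock);
    done = GLOBUS_TRUE;
    globus_cond_signal(&cond);
    globus_mutex_unlock(&lock);
}

int main(void)
{
    globus_ftp_client_handle_t        handle;
    globus_ftp_client_operationattr_t attr;
    globus_ftp_control_parallelism_t  par;

    globus_module_activate(GLOBUS_FTP_CLIENT_MODULE);
    globus_mutex_init(&lock, GLOBUS_NULL);
    globus_cond_init(&cond, GLOBUS_NULL);
    globus_ftp_client_handle_init(&handle, GLOBUS_NULL);

    /* Ask for 4 parallel data streams (needs extended block mode). */
    globus_ftp_client_operationattr_init(&attr);
    par.mode = GLOBUS_FTP_CONTROL_PARALLELISM_FIXED;
    par.fixed.size = 4;
    globus_ftp_client_operationattr_set_mode(
        &attr, GLOBUS_FTP_CONTROL_MODE_EXTENDED_BLOCK);
    globus_ftp_client_operationattr_set_parallelism(&attr, &par);

    /* Third party: data flows server-to-server, not via the client. */
    globus_ftp_client_third_party_transfer(&handle,
        "gsiftp://se1.example.org/data/run01.dat", &attr,
        "gsiftp://se2.example.org/data/run01.dat", &attr,
        GLOBUS_NULL, done_cb, GLOBUS_NULL);

    globus_mutex_lock(&lock);
    while (!done) globus_cond_wait(&cond, &lock);
    globus_mutex_unlock(&lock);

    globus_module_deactivate(GLOBUS_FTP_CLIENT_MODULE);
    return 0;
}
```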

  9. GridFTP server as SE • Given GASS experience, it looks feasible to implement an SE using a GridFTP server • Uses Globus infrastructure • Handles GSI proxies • Gives trivial access to local disk and any HSM with a Unix filestore interface (see the sketch below) • Plan to produce a prototype for M9 for the Unix filesystem and Castor
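
Why access is "trivial" for any HSM with a Unix filestore interface: the server's data path is plain POSIX I/O, and the HSM recalls migrated files transparently underneath it. A minimal sketch, with a made-up mount point:

```c
/* The GridFTP server only ever issues POSIX calls; an HSM that
 * presents a Unix filestore handles tape recalls underneath them.
 * The /hsm path here is a made-up example. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char    buf[8192];
    ssize_t n;

    /* If the file has been migrated to tape, open()/read() simply
     * block while the HSM stages it back to disk. */
    int fd = open("/hsm/atlas/run01.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, n, stdout);

    close(fd);
    return 0;
}
```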

  10. Task 2 • If the networks don't deliver, we may have to move data by tape between CERN and the regional centres (see the rough comparison below) • May be easier to take a tape out of the robot and ship it, rather than copy the data • An ANSI standard covers this • Will investigate this and implement it if suitable
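
A rough feel for the trade-off. Every number below is invented for illustration (cartridge capacity, tapes per shipment, transit time, link speed); the point is only the shape of the comparison.

```c
/* Back-of-envelope check of "move data by tape": all figures are
 * hypothetical, chosen only to show how the comparison works. */
#include <stdio.h>

int main(void)
{
    double tapes       = 50.0;    /* tapes per shipment (assumed)       */
    double gb_per_tape = 100.0;   /* capacity per cartridge (assumed)   */
    double transit_h   = 24.0;    /* courier time, CERN -> RAL (assumed) */
    double link_mbit_s = 155.0;   /* e.g. an STM-1 network link         */

    double tape_mb_s = tapes * gb_per_tape * 1024.0 / (transit_h * 3600.0);
    double net_mb_s  = link_mbit_s / 8.0;

    printf("shipment: %.0f MB/s effective\n", tape_mb_s);
    printf("network:  %.1f MB/s (at 100%% utilisation)\n", net_mb_s);
    return 0;
}
```

With these assumed figures the shipment sustains roughly 59 MB/s against about 19 MB/s for the link, which is why the tape option is worth investigating at all.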

  11. Task 3 Metadata • Provide metadata about the SE and its contents, not about the data itself • Require somewhere to publish static information (e.g. total capacity, supported protocols) • Require support for acting as an active publisher of dynamic metadata (e.g. current free space) • Still require input from other WPs on what information they require

  12. Deliverables • WP5 will deliver software for: • data access • metadata production • Provide an SE interface based on a GridFTP server • Support for Castor and the Unix filesystem, and access by the ReplicaManager, at M9

  13. Future Developments • User API to allow direct user access to remote data • SE on other HSMs • Disk housekeeping as part of the SE • Management interface to the SE (create, stage, reserve, pinning, …); a hypothetical sketch follows below
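
The management interface is only a list of verbs so far. A hypothetical rendering of those verbs, with all names and signatures invented for illustration:

```c
/* Hypothetical management calls matching the list above (create,
 * stage, reserve, pinning); invented for illustration, not WP5's
 * eventual interface. */
typedef long se_handle_t;   /* opaque reference to a file in the SE */

/* Create an entry (and space for it) before any data arrives. */
int se_create(const char *se_path, unsigned long expected_bytes,
              se_handle_t *out);

/* Ask the HSM to recall a file from tape to disk ahead of use. */
int se_stage(se_handle_t file);

/* Reserve disk space for a batch of forthcoming files. */
int se_reserve(unsigned long bytes, long lifetime_seconds);

/* Pin a staged file on disk so housekeeping cannot purge it. */
int se_pin(se_handle_t file, long lifetime_seconds);
int se_unpin(se_handle_t file);
```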
