WP5 Mass Storage
UK HEPGrid, UCL, 11th May
Tim Folkes, RAL (g.t.folkes@rl.ac.uk)
WP5 UKHEPGRID
Tasks
• Review and evaluate current technologies
• A common API to heterogeneous MSS
• Tape exchange, including metadata
• Metadata publishing
Common API
• Defines an API that Grid middleware can use to interface to MSS
• Side effect of making user programs portable as well
• Original scheme has changed due to ATF activity
Pre-ATF
• Mass storage was treated as local
• Just tape storage
• No need to handle Grid proxies etc.
• WP2 datamover, replication manager etc. would handle this
• We would define an API like RFIO, and testbeds would implement it locally
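The pre-ATF plan above amounted to a POSIX-like file API, in the style of RFIO, that each testbed site would implement against its local mass storage. A minimal sketch of that idea follows; all names here are hypothetical illustrations, not WP5 or RFIO code.

```python
# Hypothetical sketch of an RFIO-style local file-access API. A real site
# implementation would map these calls onto its own MSS (e.g. trigger a
# tape recall in __init__ if the file is not staged to disk).

class LocalMSSFile:
    """POSIX-style handle onto a file held in a local mass storage system."""

    def __init__(self, path, mode="rb"):
        # Plain filesystem access stands in for the site-specific MSS here.
        self._fh = open(path, mode)

    def read(self, nbytes=-1):
        return self._fh.read(nbytes)

    def write(self, data):
        return self._fh.write(data)

    def close(self):
        self._fh.close()


def mss_open(path, mode="rb"):
    """RFIO-like entry point: open a file by its MSS path."""
    return LocalMSSFile(path, mode)
```

Because the API mirrors POSIX, user programs written against it stay portable across sites, which is the side effect the previous slide mentions.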
ATF
• Concept of Grid storage
• Defined a Storage Element (SE) that includes direct access from the Grid
• Access to disk and tape (i.e. all storage, and the management of the disk space)
ATF
• 3 interfaces defined
  • put/get
  • open/read
  • management
• Move from files to objects
• Required a rethink: need software, not just an API
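The three interface groups above can be pictured as one abstract Storage Element contract. The sketch below is illustrative only (not EDG code), with a toy in-memory implementation to show how the groups fit together: put/get for whole-file transfer, open/read for byte-level access, and management for the space behind the SE.

```python
# Illustrative sketch of the three ATF interface groups for an SE.
from abc import ABC, abstractmethod

class StorageElement(ABC):
    # --- put/get interface: whole-file transfer in and out of the SE ---
    @abstractmethod
    def put(self, name, data): ...
    @abstractmethod
    def get(self, name): ...

    # --- open/read interface: partial, byte-range access ---
    @abstractmethod
    def read(self, name, offset, length): ...

    # --- management interface: query and control the storage space ---
    @abstractmethod
    def free_space(self): ...


class InMemorySE(StorageElement):
    """Toy dict-backed SE, used only to exercise the interface."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.files = {}

    def put(self, name, data):
        self.files[name] = bytes(data)

    def get(self, name):
        return self.files[name]

    def read(self, name, offset, length):
        return self.files[name][offset:offset + length]

    def free_space(self):
        return self.capacity - sum(len(v) for v in self.files.values())
```

Separating the groups is what makes "software, not just an API" necessary: put/get and management need server-side state, not just client bindings.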
ATF - What to do?
• Evaluate Castor
• Stripped-down version for disk management
• At RAL for use on the datastore
• GridFTP for data transfer
• GridFTP server as SE
GridFTP
• Globus have reworked their I/O plans
• GridFTP is the basis of future data movement
  • tuneable for network performance
  • parallel streams
  • third-party transfers, partial file transfer
  • file and stream interfaces
• RAL and CERN have tested alpha code, but just for transfer
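Parallel streams and partial file transfer both rest on the same bookkeeping: splitting a file into byte ranges that can be moved independently. A small sketch of that range calculation, purely illustrative of the idea rather than of the GridFTP wire protocol:

```python
# Split a file of file_size bytes into n_streams contiguous (offset, length)
# ranges, one per parallel stream. A partial file transfer would simply
# request one such range on its own.

def stream_ranges(file_size, n_streams):
    base, extra = divmod(file_size, n_streams)
    ranges, offset = [], 0
    for i in range(n_streams):
        # The first `extra` streams carry one byte more, so the ranges
        # tile [0, file_size) exactly with no gaps or overlap.
        length = base + (1 if i < extra else 0)
        if length:
            ranges.append((offset, length))
        offset += length
    return ranges
```

For example, a 10-byte file over 4 streams yields ranges of 3, 3, 2 and 2 bytes; tuning the stream count against network conditions is the "tuneable for network performance" point above.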
GridFTP server as SE
• Looks feasible, given GASS experience, to implement an SE using the GridFTP server
• Uses Globus infrastructure
• Handles GSI proxies
• Gives trivial access to local disk and to any HSM with a Unix filestore interface
• Plan to produce a prototype for M9 for the Unix filesystem and Castor
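The reason access is "trivial" for any HSM with a Unix filestore interface is that the server only has to map the path in a gsiftp URL onto a local mount. A sketch of that mapping, with made-up export and mount names:

```python
# Map gsiftp:// URL paths onto local mounts. The exported areas below are
# hypothetical: one plain disk area, and one HSM (e.g. Castor) that already
# presents a Unix filesystem view.

from urllib.parse import urlparse
import posixpath

EXPORTS = {
    "/disk": "/storage/disk",
    "/castor": "/hsm/castor",
}

def resolve(gsiftp_url):
    """Return the local path the server would open for this URL."""
    path = urlparse(gsiftp_url).path
    for prefix, mount in EXPORTS.items():
        if path == prefix or path.startswith(prefix + "/"):
            rel = path[len(prefix):].lstrip("/")
            return posixpath.join(mount, rel) if rel else mount
    raise KeyError("path not exported: " + path)
```

Everything else (authentication via GSI proxies, the transfer protocol itself) comes from the existing Globus infrastructure, which is what makes the M9 prototype plan realistic.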
Task 2
• If networks don't deliver, we may have to move data by tape between CERN and the Regional Centres
• May be easier to take the tape out of the robot and move it, rather than copy the data
• There is an ANSI standard that covers this
• Will investigate this and implement it if suitable
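The ANSI standard in question is presumably the magnetic tape labelling standard (ANSI X3.27 / ISO 1001), whose fixed 80-character labels let the receiving site identify an exchanged volume. A sketch of the VOL1 volume label, filling in only the label identifier and the 6-character volume serial and leaving the remaining fields as spaces for brevity:

```python
# Build and parse a minimal 80-character VOL1 volume label, in the style of
# the ANSI tape-labelling standard. Only the "VOL1" identifier and the
# volume serial field are populated here; real labels carry more fields.

def make_vol1(volume_serial):
    if len(volume_serial) > 6:
        raise ValueError("volume serial is at most 6 characters")
    label = "VOL1" + volume_serial.ljust(6)
    return label.ljust(80)

def parse_vol1(label):
    """Return the volume serial from a VOL1 label, or raise if not one."""
    if len(label) != 80 or not label.startswith("VOL1"):
        raise ValueError("not a VOL1 label")
    return label[4:10].strip()
```

Standard labels matter for tape exchange precisely because the two ends may run different mass storage systems: the label, not the MSS catalogue, travels with the cartridge.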
Task 3 - Metadata
• Provide metadata about the SE and its contents, not about the data itself
• Require somewhere to publish static information
• Require support for acting as an active publisher of dynamic metadata
• Still require input from other WPs on what information they require
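The static/dynamic split above can be sketched as follows. The attribute names are invented for illustration; the real schema was exactly what WP5 was still awaiting input on from the other work packages.

```python
# Sketch of SE metadata publishing: static attributes are fixed at
# configuration time, dynamic ones are re-measured on every publication.
# All attribute names and values here are hypothetical.

import shutil

STATIC_INFO = {
    "SEId": "se.example.org",   # hypothetical SE identifier
    "SEProtocol": "gsiftp",
    "SETotalSpaceGB": 500,
}

def dynamic_info(path="/"):
    """Attributes that change over time, e.g. free space on the SE disk."""
    usage = shutil.disk_usage(path)
    return {"SEFreeSpaceGB": usage.free // 10**9}

def publish():
    """Render one combined attribute record for an information service."""
    record = {**STATIC_INFO, **dynamic_info()}
    return "\n".join(f"{k}: {v}" for k, v in record.items())
```

Note the record describes the SE (capacity, protocol, free space), not the files' physics content, matching the first bullet above.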
Deliverables
• WP5 will deliver software for:
  • data access
  • metadata production
• Provide an SE interface based on the GridFTP server
• Support for Castor and the Unix filesystem, and access by the ReplicaManager, at M9
Future Developments
• User API to allow direct user access to remote data
• SE on other HSMs
• Disk housekeeping as part of the SE
• Management interface to the SE (create, stage, reserve, pinning….)