
NIIF Storage Services


Presentation Transcript


1. NIIF Storage Services
Collaboration on Storage Services
Peter Stefan, NIIF, stefan@niif.hu
29 June 2007, Amsterdam, The Netherlands

2. Agenda
• Objective of our storage development
• Brief NIIF introduction
• Driving force of storage development: users need it!
• Storage building blocks:
  • FC,
  • AoE,
  • „design and develop your own” box.
• The two storage node solutions at NIIF
• How can users access storage volumes?
• Storage node integration
• Conclusions

3. Who are we?
• NIIF is the Hungarian NREN, connecting about 450 academic institutions and 600,000 users.
• Our service portfolio is as follows:
  • backbone data network (HBONE),
  • videoconferencing services,
  • web, mail, AAI, FTP, DNS services,
  • supercomputers, grids, and
  • data storage facilities.
• Financed mostly by the Hungarian Ministry of Economics and Transportation.

4. Who are we?

5. Why large storage volumes?
• There is a continuous need for large-volume, moderate-speed, high-availability storage solutions.
• Key usage areas that we know about are:
  • library archives,
  • on-line network monitoring (NetFlow),
  • videoconferencing,
  • grids,
  • off-site backup of large-volume servers.

6. Building blocks - FC
• Fibre Channel (FC) SAN/NAS solutions dominate the market.
• Experience with StorageTek (15,000 USD/TB), T3, CX500 (6,000 USD/TB), SFS (45,000 USD/TB).
• Generic experience:
  • FC works well (+)
  • high price per capacity and per performance (-)
  • rather large operational costs: disk replacement, support (-)
  • not an optimal choice for disk-based backups (-)

7. Building blocks - AoE
• Interesting idea: use only Ethernet as the storage interconnect!
• ATA-over-Ethernet (AoE) protocol by Coraid Inc.
• Key idea: ATA commands are wrapped into Ethernet frames; the AoE driver strips the Ethernet framing, extracts the ATA commands, and exposes the remote disks as local block devices.
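To make the framing concrete, here is a minimal sketch (in Python, not from the slides) that packs the 10-byte AoE header defined in the published AoE specification; the shelf/slot addresses and the query-config command below are illustrative assumptions.

```python
import struct

AOE_ETHERTYPE = 0x88A2   # EtherType registered for ATA-over-Ethernet
AOE_VERSION = 1

def aoe_header(major, minor, command, tag, flags=0, error=0):
    """Pack the 10-byte AoE header that follows the Ethernet header.

    Fields per the AoE specification: ver/flags (1 byte), error (1),
    major/shelf address (2), minor/slot address (1), command (1), tag (4).
    """
    ver_flags = (AOE_VERSION << 4) | (flags & 0x0F)
    return struct.pack("!BBHBBI", ver_flags, error, major, minor, command, tag)

# Illustrative example: a "query config" request (command 1) to shelf 0, slot 1.
hdr = aoe_header(major=0, minor=1, command=1, tag=0x1234)
print(hdr.hex())  # -> 10000000010100001234
```

In a real frame this header is followed by the ATA command payload (command 0), which is what lets a plain Ethernet switch stand in for an FC fabric.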

8. Building blocks - AoE
• Two conceptually different AoE implementations:
  • Parallel ATA (older, 5 MB/s),
  • Serial ATA (newer, 60 MB/s).
• Generic experience with the SATA boxes:
  • moderate performance figures in certain cases (-)
  • very good price/performance value (1,200 USD/TB), and prices keep dropping (+)
  • low operational costs: standard disks, very simple architecture (+)
  • a further virtualization layer is required to build a PByte-scale system (-)

9. „Design and develop your own box”
• Use a multi-disk PC chassis with Linux!
  • SATA or SAS disks,
  • standard RAID cards,
  • PCI Express bus system,
  • iSCSI and AoE protocols,
  • universal storage box: GE, InfiniBand, 10 GE.
• Generic experience with ours:
  • modular (+)
  • can be driven up to the hardware limits (+)
  • our record: 700 MB/s using RAID0 and 450 MB/s using RAID10 (see the measurement sketch below) (+)
  • no vendor support (-)
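The slides do not say how the 700 MB/s record was measured; the sketch below shows one crude way to estimate sequential read throughput of such a box from user space. The device path is hypothetical, and a serious measurement would bypass the page cache (e.g. via O_DIRECT) and repeat the runs.

```python
import os
import time

DEVICE = "/dev/md0"          # hypothetical RAID device; reading it needs root
BLOCK = 1024 * 1024          # 1 MiB per read call
TOTAL = 1024 * BLOCK         # read 1 GiB in total

fd = os.open(DEVICE, os.O_RDONLY)
start = time.monotonic()
done = 0
while done < TOTAL:
    chunk = os.read(fd, BLOCK)
    if not chunk:            # reached end of device
        break
    done += len(chunk)
elapsed = time.monotonic() - start
os.close(fd)

print(f"read {done / 1e6:.0f} MB in {elapsed:.1f} s "
      f"-> {done / elapsed / 1e6:.0f} MB/s")
```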

10. Node solution 1 - PATA storage
• The purpose is to provide a single, large disk volume for grid users.
• Built from 7 PATA boxes, 2 Cisco Catalyst switches, and 2 64-bit servers.
• Involves 7×10×400 GB of disks; 28 TB raw in total.
• The two storage processors are in a master-slave layout.
• Volumes are organized into ten vertical RAID6 arrays concatenated by LVM. This provides box-level redundancy.
• Available via grid middleware and SCP only.
• Weak points: low I/O (50-55 MB/s) (-), lack of switch-level redundancy (-), not scalable (-).

11. Node solution 1 - PATA storage

12. Node solution 2 - SATA storage
• The purpose is to provide a single, large disk volume that can be split into smaller, yet scalable units for generic usage.
• Made up of 4 SATA boxes, 2 Cisco Catalyst GE switches, and 2 servers.
• GE is used as the storage interconnect.
• Involves 4×15×500 GB of disks; 30 TB raw in total (a usable-capacity estimate follows slide 13).
• Two Linux storage processors with HA.
• Volumes are organized into 15 vertical RAID5 arrays concatenated by LVM.
• RAID + volume management is performed on the storage processors.

13. Node solution 2 - SATA storage
• Full box-, switch-, and storage processor-level redundancy (+)
• Available via iSCSI for local users (+)
• Has virtualization capability, i.e. any boxes and any storage protocols can be used: FC, AoE, iSCSI, InfiniBand, tape (+)
• Can be used for building up hierarchical storage systems (+)
• Not yet FC-equivalent in terms of e.g. cache coherency (-)
• I/O is about 100-120 MB/s.
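As a sanity check on the raw capacity figures of the two nodes, the short sketch below also estimates usable capacity, under the assumption (not stated on the slides) that each vertical array takes exactly one disk from every box.

```python
def usable_tb(boxes, arrays, disk_gb, parity_disks):
    """Usable capacity of `arrays` vertical RAID arrays, each spanning one
    disk per box, losing `parity_disks` disks of redundancy per array."""
    data_disks = boxes - parity_disks
    return arrays * data_disks * disk_gb / 1000.0

# Node 1: 7 boxes x 10 disks x 400 GB, ten vertical RAID6 arrays.
print(usable_tb(boxes=7, arrays=10, disk_gb=400, parity_disks=2))   # 20.0 TB of 28 TB raw

# Node 2: 4 boxes x 15 disks x 500 GB, 15 vertical RAID5 arrays.
print(usable_tb(boxes=4, arrays=15, disk_gb=500, parity_disks=1))   # 22.5 TB of 30 TB raw
```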

14. Solution #2 - SATA storage

15. Solution #2 - SATA storage

16. How to give users access?
• Storage nodes can provide block-level, file-level, and application-level access to the large disk volumes.
• Block-level:
  • AoE,
  • iSCSI (particularly kind to NRENs).
• File-level:
  • NFS, SMBFS,
  • cluster file systems (GFS, Lustre).
• Application-level:
  • SSH, SCP (see the sketch below),
  • storage-capable grid middleware.
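As one concrete illustration of application-level access, here is a minimal sketch using the paramiko SSH library; the host name, user, and paths are hypothetical, not taken from the slides.

```python
import paramiko

# Hypothetical storage front-end and account; replace with real values.
HOST, USER = "storage.example.niif.hu", "griduser"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER)      # key-based authentication assumed

# Upload a data set over SFTP, a file-transfer protocol over SSH
# comparable to SCP-style access.
sftp = client.open_sftp()
sftp.put("results.tar.gz", "/volumes/grid/results.tar.gz")
sftp.close()
client.close()
```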

17. Storage node integration
• Current status: 2 nodes, lots of plans for the future.
• Create multiple redundant storage nodes at different parts of our network, i.e. at regional centers or at large customers.
• Integrate them at the application level, using storage management software such as SRM or the Grid Underground (GUG) storage management modules.
• The target system is a geographically distributed storage system providing distributed, replicated, yet safely protected data sets, and standard interfaces.
• First inter-node service: a distributed and encrypted backup service (a sketch of the encrypt-then-replicate idea follows below).
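The slides give no implementation details for the encrypted backup service; the sketch below only illustrates the encrypt-before-replication idea, using the Fernet recipe from the Python cryptography library. The file names and the replication step are assumptions.

```python
from pathlib import Path
from cryptography.fernet import Fernet

# One-time setup: generate the secret key and store it safely,
# separately from the replicated data.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the backup locally, before it ever leaves the site.
plain = Path("backup.tar").read_bytes()
Path("backup.tar.enc").write_bytes(fernet.encrypt(plain))

# The opaque backup.tar.enc can now be replicated to remote storage
# nodes (e.g. over iSCSI or SCP); only key holders can decrypt it.
restored = fernet.decrypt(Path("backup.tar.enc").read_bytes())
assert restored == plain
```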

18. Conclusions
• Our storage development activities showed that building cost-efficient, reasonable-speed, reliable storage nodes is possible from cheap building blocks and free software.
• To handle complexity, we believe that application-integrated hierarchical storage systems are needed.
• NRENs can play an integrating role due to their position.
• QoS, monitoring and, in general, strict operational discipline are necessary, compared to, say, grid systems.
• There is still a lot to develop: management, advanced monitoring, cache coherency.

19. Thanks & Questions?
stefan@niif.hu
http://www.niif.hu
