
Connect Ceph Infrastructure



Presentation Transcript


  1. Connect Ceph Infrastructure

  2. Production Ceph cluster
  • 2 Dell R510s
  • Dual Intel X5660s
  • 96 GB RAM
  • 2 PERC H800s, each with 2 MD1200 shelves
  • Total of 56 disks per host
  • 420TB raw, using 2x replication
  • Ceph version 0.72 (Emperor)
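
A quick back-of-the-envelope check of those figures (a sketch, not part of the slides): with replicated pools, usable capacity is roughly raw capacity divided by the replication factor.

```python
# Rough capacity arithmetic for the production cluster described above.
# The inputs come from the slide; the usable = raw / replication rule of
# thumb is the standard one for replicated Ceph pools.
hosts = 2
disks_per_host = 56
raw_tb = 420
replication = 2

total_disks = hosts * disks_per_host   # 112 OSD disks across both R510s
usable_tb = raw_tb / replication       # ~210 TB of usable space at 2x

print(f"{total_disks} disks, {raw_tb} TB raw, ~{usable_tb:.0f} TB usable at {replication}x replication")
```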

  3. Ceph features currently used by Connect: CephFS
  • Distributed network filesystem
  • Fully POSIX compatible.
  • CI Connect “Stash” service.
  • Accessible via POSIX, HTTP, and Globus on all Connect instances
  • In our experience, generally pretty good but:
    • Requires custom kernels (3.12+)
    • Problems with large numbers (1000s) of files in directories
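
As a minimal sketch of the access pattern described above: once CephFS is mounted with the kernel client (the reason for the 3.12+ kernel requirement), applications just use ordinary POSIX I/O. The monitor address, mount point, and keyring path below are hypothetical placeholders, not values from the slides.

```python
# Sketch: mount CephFS with the kernel client, then use it like any local
# filesystem. Host, mount point, and secretfile path are made-up examples.
import os
import subprocess

MON = "ceph-mon.example.org:6789"
MOUNT_POINT = "/mnt/stash"

# Kernel CephFS client mount (this is what needs the 3.12+ kernel on clients).
subprocess.check_call([
    "mount", "-t", "ceph", f"{MON}:/", MOUNT_POINT,
    "-o", "name=admin,secretfile=/etc/ceph/admin.secret",
])

# After mounting, plain POSIX calls work as on any other filesystem.
path = os.path.join(MOUNT_POINT, "user", "results.txt")
os.makedirs(os.path.dirname(path), exist_ok=True)
with open(path, "w") as f:
    f.write("plain POSIX I/O against CephFS\n")
```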

  4. Ceph features currently used by Connect: RADOS Block Device (RBD)
  • Allows us to carve off a portion of the Ceph pool and expose it to a machine as a block device (e.g., “/dev/rbd1”)
  • The block device can be formatted with any filesystem of our choosing (XFS, EXT4, Btrfs, ZFS, ...)
  • Currently using RBD as the backend storage for FAXbox
    • Simply an XRootD service running on top of a normal RBD device
    • Files in XRootD are also available via Globus, POSIX, and HTTP.
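
A short sketch of how such a block device is carved out, assuming the python-rbd bindings are available; the pool name, image name, and size are illustrative, not the FAXbox configuration.

```python
# Sketch: create an RBD image with the python-rados/python-rbd bindings.
# "rbd" pool, "faxbox-data" image, and the 1 TiB size are assumptions.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")                # pool to carve the image from
    try:
        rbd.RBD().create(ioctx, "faxbox-data", 1 * 1024**4)   # 1 TiB image
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

# On the client, the image is then mapped and formatted with a filesystem of
# choice, e.g. (shell, not Python):
#   rbd map rbd/faxbox-data      -> appears as /dev/rbd1
#   mkfs.xfs /dev/rbd1
#   mount /dev/rbd1 /srv/faxbox  # XRootD then serves files from this mount
```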

  5. Testbed Ceph cluster
  • Scrappy, triply redundant deployment built out of retired machines for testing new Ceph releases and features
  • 14 disk nodes with 6 disks each
    • Fairly ancient dual-CPU, dual-core Opteron boxes
    • (6) 500 – 750GB disks each
  • 3 redundant head nodes and 3x replication
  • Continues to grow as we retire old hardware

  6. Services running on Testbed
  • Using latest stable Ceph version (v0.80 Firefly)
  • Currently testing RADOSGW:
    • Implements the Amazon S3 API and the OpenStack Swift API on top of the Ceph object store
    • Knowledge Lab group at UChicago successfully using RADOSGW to store job inputs/outputs via the S3 API
    • All files also get stored on Amazon S3 as a backup.
    • Very cost effective – since transferring into S3 is free, they only have to pay Amazon to keep the data on disk.
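
Because RADOSGW speaks the S3 API, the same client library can write a primary copy to the local object store and a backup copy to Amazon S3, which is the workflow described above. The sketch below uses boto3; the endpoint URL, bucket names, and credentials are placeholders, not the Knowledge Lab setup.

```python
# Sketch: one S3 client pointed at RADOSGW, another at Amazon S3.
# Endpoint, buckets, keys, and file names are assumptions for illustration.
import boto3

radosgw = boto3.client(
    "s3",
    endpoint_url="http://radosgw.example.org:7480",   # local RADOSGW endpoint
    aws_access_key_id="RGW_ACCESS_KEY",
    aws_secret_access_key="RGW_SECRET_KEY",
)
amazon = boto3.client("s3")   # Amazon credentials from the usual AWS config

with open("job-output.tar.gz", "rb") as f:
    data = f.read()

# Primary copy on the Ceph object store via the S3 API...
radosgw.put_object(Bucket="job-data", Key="run-001/output.tar.gz", Body=data)

# ...and a backup copy on Amazon S3. Inbound transfer is free, so only the
# at-rest storage is billed, which is the cost argument made on the slide.
amazon.put_object(Bucket="job-data-backup", Key="run-001/output.tar.gz", Body=data)
```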

  7. Upcoming experiments – Tiered storage
  • In the tiered storage scenario, two pools are created:
    • “Hot” cache pool with recently accessed data living on SSDs, no replication
    • “Cold” pool with traditional HDDs, using an erasure coding scheme to maximize available disk space at the cost of performance
  • Ideal for scenarios where the majority of data is written, popular for a while, and then seldom accessed afterwards
  • Compare to, say, HDFS deployments where 2/3 of storage is immediately lost to replication
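
A rough sketch of how such a cache tier is wired together with the Ceph CLI commands that arrived with Firefly; the pool names, placement-group counts, and writeback cache mode below are illustrative choices, not settings from the slides.

```python
# Sketch: build an erasure-coded "cold" pool with a cache tier in front of it,
# by shelling out to the ceph CLI. Names and PG counts are assumptions.
import subprocess

def ceph(*args):
    subprocess.check_call(["ceph"] + list(args))

# "Cold" base pool: erasure coded, intended for the HDD OSDs.
ceph("osd", "pool", "create", "cold-data", "128", "128", "erasure")

# "Hot" cache pool: intended for the SSD OSDs.
ceph("osd", "pool", "create", "hot-cache", "128")

# Attach the cache pool in front of the base pool and direct client I/O to it.
ceph("osd", "tier", "add", "cold-data", "hot-cache")
ceph("osd", "tier", "cache-mode", "hot-cache", "writeback")
ceph("osd", "tier", "set-overlay", "cold-data", "hot-cache")
```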
