
Open Storage: Intel’s Investments in Object Storage




Presentation Transcript


  1. Open Storage: Intel’s Investments in Object Storage Paul Luse and Tushar Gohad, Storage Division, Intel

  2. Transforming the Datacenter through Open Standards
  • Transforming the Business – Speed up deployment of new applications and services on software-defined infrastructure created from widely available IA servers.
  • Transforming the Ecosystem – Strengthen open solutions with Intel code contributions and silicon innovations to speed up development, while building a foundation of trust.
  • Transforming the Infrastructure – Assure that OpenStack-based cloud implementations offer the highest levels of agility, automation and efficiency using IA platform innovations.

  3. Legal Disclaimers
  • All products, computer systems, dates and figures specified are preliminary based on current expectations, and are subject to change without notice.
  • Intel® Advanced Vector Extensions (Intel® AVX)* are designed to achieve higher throughput for certain integer and floating point operations. Due to varying processor power characteristics, utilizing AVX instructions may cause a) some parts to operate at less than the rated frequency and b) some parts with Intel® Turbo Boost Technology 2.0 to not achieve any or maximum turbo frequencies. Performance varies depending on hardware, software, and system configuration; consult your system manufacturer for more information. *Intel® Advanced Vector Extensions refers to Intel® AVX, Intel® AVX2 or Intel® AVX-512. For more information on Intel® Turbo Boost Technology 2.0, visit http://www.intel.com/go/turbo
  • No computer system can provide absolute security. Requires an enabled Intel® processor, enabled chipset, firmware and/or software optimized to use the technologies. Consult your system manufacturer and/or software vendor for more information.
  • No computer system can provide absolute security. Requires an Intel® Identity Protection Technology-enabled system, including an enabled Intel® processor, enabled chipset, firmware, software, Intel integrated graphics (in some cases) and a participating website/service. Intel assumes no liability for lost or stolen data and/or systems or any resulting damages. For more information, visit http://ipt.intel.com/. Consult your system manufacturer and/or software vendor for more information.
  • No computer system can provide absolute security. Requires an enabled Intel® processor, enabled chipset, firmware, software and may require a subscription with a capable service provider (may not be available in all countries). Intel assumes no liability for lost or stolen data and/or systems or any other damages resulting thereof. Consult your system or service provider for availability and functionality.
  • No computer system can provide absolute reliability, availability or serviceability. Requires an Intel® Xeon® processor E7-8800/4800/2800 v2 product family or Intel® Itanium® 9500 series-based system (or follow-on generations of either). Built-in reliability features available on select Intel® processors may require additional software, hardware, services and/or an internet connection. Results may vary depending upon configuration. Consult your system manufacturer for more details.
  • For systems also featuring Resilient System Technologies: No computer system can provide absolute reliability, availability or serviceability. Requires an Intel® Run Sure Technology-enabled system, including an enabled Intel® processor and enabled technology(ies). Built-in reliability features available on select Intel® processors may require additional software, hardware, services and/or an Internet connection. Results may vary depending upon configuration. Consult your system manufacturer for more details.
  • For systems also featuring Resilient Memory Technologies: No computer system can provide absolute reliability, availability or serviceability. Requires an Intel® Run Sure Technology-enabled system, including an enabled Intel® processor and enabled technology(ies). Built-in reliability features available on select Intel® processors may require additional software, hardware, services and/or an Internet connection. Results may vary depending upon configuration. Consult your system manufacturer for more details.
  • The original equipment manufacturer must provide TPM functionality, which requires a TPM-supported BIOS. TPM functionality must be initialized and may not be available in all countries.
  • Requires a system with Intel® Turbo Boost Technology. Intel Turbo Boost Technology and Intel Turbo Boost Technology 2.0 are only available on select Intel® processors. Consult your system manufacturer. Performance varies depending on hardware, software, and system configuration. For more information, visit http://www.intel.com/go/turbo
  • Intel® Virtualization Technology requires a computer system with an enabled Intel® processor, BIOS, and virtual machine monitor (VMM). Functionality, performance or other benefits will vary depending on hardware and software configurations. Software applications may not be compatible with all operating systems. Consult your PC manufacturer. For more information, visit http://www.intel.com/go/virtualization

  4. Agenda
  • Storage Policies in Swift
  – Swift Primer / Storage Policies Overview
  – Swift Storage Policy Implementation
  – Usage Models
  • Erasure Coding Policy in Swift
  – Erasure Coding (EC) Policy
  – Swift EC Design Considerations and Proposed Architecture
  – Python EC Library (PyECLib)
  – Intel® Intelligent Storage Acceleration Library (ISA-L)
  • COSBench
  – Cloud Object Storage Benchmark
  – Status, User Adoption, Roadmap
  • Public Swift Test Cluster

  5. Storage Policies for OpenStack Swift

  6. Swift Primer
  • OpenStack Object Store – distributed, scale-out object storage
  • CAP properties: eventually consistent, highly available (no single point of failure), partition tolerant
  • Well suited for unstructured data
  • Uses a container model for grouping objects with like characteristics
  • Objects are identified by their paths and have user-defined metadata associated with them
  • Accessed via a RESTful interface – GET, PUT, DELETE (see the sketch below)
  • Built on standard hardware – cost effective and efficient
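  As a concrete illustration of that RESTful access model (not taken from the slides), here is a minimal Python sketch using the requests library; the endpoint and token are placeholders:

    import requests

    STORAGE_URL = "http://swift.example.com/v1/AUTH_test"   # placeholder account endpoint
    HEADERS = {"X-Auth-Token": "<token>"}                   # placeholder token from the auth service

    # PUT: upload an object into the "photos" container, attaching user-defined metadata
    with open("cat.jpg", "rb") as f:
        requests.put(STORAGE_URL + "/photos/cat.jpg",
                     headers={**HEADERS, "X-Object-Meta-Owner": "demo"},
                     data=f)

    # GET: download the object back
    resp = requests.get(STORAGE_URL + "/photos/cat.jpg", headers=HEADERS)

    # DELETE: remove the object
    requests.delete(STORAGE_URL + "/photos/cat.jpg", headers=HEADERS)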

  7. The Big Picture (diagram: clients upload/download Obj A through a RESTful API; a load balancer and auth service front a row of proxies; three copies of each object land on storage nodes spread across Zones 1–5)
  • Access Tier – handles incoming requests, failures and ganged responses; scalable shared-nothing architecture; consistent hashing ring distribution
  • Capacity Tier – actual object storage; variable replication count; data integrity services; scale-out capacity
  • Scalable for concurrency and/or capacity independently

  8. Why Storage Policies? New Opportunities for Swift
  • Are all nodes equal?
  • Would you like 2x or 3x?
  • Can I add something like Erasure Codes?

  9. Why Storage Policies? New Opportunities for Swift
  • Support grouping of storage – expose or make use of differentiated hardware within a single cluster
  • Performance tiers – a tier with high-speed SSDs can be defined for better performance characteristics
  • Multiple durability schemes – erasure coded; mixed-mode replicated (Gold 3x, Silver 2x, etc.)
  • Other usage models – geo tagging: ensure the geographical location of data within a container
  Community effort with primary contributions from Intel and SwiftStack*

  10. Adding Storage Policies to Swift – 3 Different Policies: 3 Different Rings
  • Introduction of multiple object rings
  • Introduction of a container tag: X-Storage-Policy
  • Triple Replication – 3 locations, same object
  • Reduced Replication – 2 locations, same object
  • Erasure Codes – n locations, object fragments
  A configuration sketch follows.
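  For concreteness (this detail is not on the slide, so treat it as a hedged sketch of the mechanism as it landed upstream): policies are declared as numbered sections in /etc/swift/swift.conf, with one object ring per policy. The policy names here are illustrative:

    [storage-policy:0]
    name = gold            # default policy, served by the original ring (object.ring.gz)
    default = yes

    [storage-policy:1]
    name = silver          # e.g. a 2x "reduced redundancy" ring (object-1.ring.gz)

  Each policy index beyond 0 gets its own ring file (object-1.ring.gz, object-2.ring.gz, …), built with whatever replica count and device placement the operator wants for that tier.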

  11. Storage Policy Touch Points (diagram of affected components)
  • Proxy nodes – swift proxy wsgi application: wsgi server, middleware (partially, modules like list_endpoints), object/container/account controllers, helper functions
  • Storage nodes – swift object wsgi application: wsgi server, middleware, replicator, updater, auditor, expirer
  • Storage nodes – swift container wsgi application: replicator, updater, auditor, sync, DB schema updates
  • Storage nodes – swift account wsgi application: replicator, auditor, reaper, helper functions

  12. Usage Model – Reduced Redundancy (diagram: a container with a 3x policy alongside a container with a 2x policy in the same cluster); a client-side sketch follows
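  A hedged Python sketch of how a client opts a container into a policy via the X-Storage-Policy tag; the endpoint, token and the policy name "silver" (from the configuration sketch above) are placeholders:

    import requests

    STORAGE_URL = "http://swift.example.com/v1/AUTH_test"   # placeholder
    headers = {"X-Auth-Token": "<token>",                   # placeholder
               "X-Storage-Policy": "silver"}                # illustrative 2x policy name

    # The policy is chosen once, at container creation; every object written
    # into "backups" afterwards is placed by that policy's ring.
    requests.put(STORAGE_URL + "/backups", headers=headers)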

  13. Performance Tier (diagram: a container with an HDD policy alongside a container with an SSD policy)
  • SSDs were previously limited to being used for the account/container DBs
  • Note: entire systems can comprise a policy as well…

  14. Geo Tagging (diagram: containers pinned to Geo #1 vs. Geo #2)

  15. Erasure Codes (diagram: a container with a 3x policy alongside a container with an EC policy storing EC fragments)
  • Note: EC could also be on dedicated HW…

  16. Erasure Coding Policy in OpenStack Swift

  17. Erasure Codes
  • An object is split into k data and m parity chunks (D1, D2, D3 … Dk, P1 … Pm) and distributed across the cluster
  • Space-optimal redundancy and high availability
  • k = 10, m = 4 translates to roughly 50% of the space required by 3x replication: the raw expansion is (k + m)/k = 1.4x, versus 3x for triple replication, and 1.4/3 ≈ 47% (see the sketch below)
  • Higher compute and network requirements
  • Suitable for archival workloads (high write %)
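  The space claim is easy to check; a small Python sketch of the arithmetic (the function name is ours, the parameters are the slide's example):

    def ec_space_vs_3x(k, m):
        """Raw bytes stored per user byte under EC, as a fraction of 3x replication."""
        ec_overhead = (k + m) / k      # e.g. (10 + 4) / 10 = 1.4x raw expansion
        return ec_overhead / 3.0       # compared against 3 full copies

    print(ec_space_vs_3x(10, 4))       # 0.4666..., i.e. ~47% of the space of 3x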

  18. Swift with Erasure Coding Policy (diagram: the replication picture from slide 7, with an EC encoder/decoder in the proxies and fragments 1 … k + m spread across storage nodes in Zones 1–5)
  • RESTful API, similar to S3
  • Access Tier (concurrency) – applications control the policy; inline EC at the proxy (EC encoder/decoder)
  • Capacity Tier (storage) – supports multiple policies; EC flexibility via plug-in
  • Redundancy: n = k data fragments + m parity fragments

  19. EC Policy – Design Considerations
  • First significant (non-replication) storage policy in OpenStack Swift
  • Inline, proxy-centric datapath design – erasure code encode/decode during PUT/GET is done at the proxy server, aligned with the Swift architecture of focusing demanding services in the access tier
  • Erasure coding policy applied at the container level – new container metadata identifies whether the objects within it are erasure coded; follows from the generic Swift storage policy design (keep it simple and leverage the current architecture)
  • Multiple new storage node services are required to assure erasure code chunk integrity as well as erasure code stripe integrity; modeled after the replica services
  • Storage nodes participate in erasure code encode/decode for reconstruction, analogous to replication services synchronizing objects
  Community effort with primary contributions from Intel, Box* and SwiftStack*

  20. Erasure Coding Policy Touchpoints (diagram of affected components)
  • Proxy nodes – swift proxy wsgi application: wsgi server, middleware, controller modifications, EC library interface with pluggable backends (plug-in 1, plug-in 2), existing modules
  • Storage nodes – swift object wsgi application: wsgi server, middleware, EC auditor, EC reconstructor, EC library interface with pluggable backends, metadata changes, existing modules
  • Storage nodes – swift container and swift account wsgi applications: metadata changes, existing modules

  21. Python Erasure Code Library (PyECLib)
  • Python interface wrapper library with pluggable C erasure code backends
  • Backend support planned in v1.0: Jerasure, Flat-XOR, Intel® ISA-L EC
  • BSD-licensed, hosted on bitbucket: https://bitbucket.org/kmgreen2/pyeclib
  • Used by Swift at the proxy server and storage node level – most of the erasure coding details are opaque to Swift (usage sketch below)
  • Jointly developed by Box*, Intel and the Swift community
  (diagram: PyECLib (Python) sits between the EC modifications to the object controller in the swift proxy server and the Jerasure (C) / ISA-L (C, asm) backends; on the swift object server it backs the EC auditor and EC reconstructor)
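  A minimal usage sketch, assuming the ECDriver front end and backend naming as planned for PyECLib v1.0 (verify both against the installed version):

    from pyeclib.ec_iface import ECDriver

    # k = 10 data + m = 4 parity fragments, Reed-Solomon via the Jerasure backend
    driver = ECDriver(k=10, m=4, ec_type="jerasure_rs_vand")

    data = b"x" * 1048576                 # a 1 MiB object
    fragments = driver.encode(data)       # list of k + m fragment payloads

    # Any k fragments suffice to recover the object; here we drop the m parity ones.
    assert driver.decode(fragments[:10]) == data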

  22. Intel® ISA-L EC Library
  • Part of the Intel® Intelligent Storage Acceleration Library, which provides primitives for accelerating storage functions: encryption, compression, de-duplication, integrity checks
  • The current open source version provides erasure code support – fast Reed-Solomon (RS) block erasure codes
  • Includes optimizations for Intel® architecture – uses Intel® SIMD instructions for parallelization; an order of magnitude faster than commonly used lookup table methods, making other non-RS methods designed for speed largely unnecessary
  • Hosted at https://01.org/storage-acceleration-library
  • BSD licensed

  23. Project Status
  • Target: Summer ’14 – PyECLib upstream on bitbucket and PyPI; storage policies planned for OpenStack Juno; EC expected to coincide with OpenStack Juno
  • Ongoing development activities – the community uses a Trello discussion board (https://trello.com/b/LlvIFIQs/swift-erasure-codes) and Launchpad blueprints (https://blueprints.launchpad.net/swift)
  • Additional information – attend the Swift track in the design summit (B302, Thu 5:00pm); talk to us on #openstack-swift or on the Trello discussion board; to give PyECLib a test run, install it from https://pypi.python.org/pypi/PyECLib; for information on ISA-L, check out http://www.intel.com/storage

  24. COSBench: Cloud Object Storage Benchmark

  25. What is COSBench?
  • Open source cloud object storage benchmarking tool – announced at the Portland design summit, 2013 (COSBench is to object storage what Iometer is to block storage)
  • Open source (Apache License), cross platform (Java + Apache OSGi)
  • Distributed load testing framework
  • Pluggable adaptors for multiple object storage backends
  • Flexible workload definition
  • Web-based real-time performance monitoring
  • Rich performance metric reporting (performance timeline, response time histogram)
  Storage backends and auth mechanisms:
  • OpenStack* Swift – tempauth, swauth, keystone, direct
  • Amplidata* Amplistor – basic/digest
  • Ceph – librados (rados), rados GW (swift), rados GW (s3)
  • Amazon* S3 – integrated
  • Scality sproxyd – basic/digest
  • CDMI – CDMI-base, CDMI-swift (basic/digest, swauth/keystone)
  • Mock – none/mock

  26. Workload Configuration – flexible configuration for complex workloads: flexible load control, object size distributions, read/write operation mixes, and a workflow concept for complex multi-stage tests; an example follows
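  For flavor, a minimal workload sketch in COSBench's XML format, modeled on the examples shipped with the tool; attribute names and selector syntax should be checked against the release in use, and the auth/storage configs are elided:

    <workload name="swift-demo" description="80/20 read/write mix">
      <auth type="tempauth" config="..." />       <!-- elided credentials -->
      <storage type="swift" config="..." />
      <workflow>
        <workstage name="main">
          <work name="mix" workers="16" runtime="300">
            <!-- u(...) = uniform selector, c(...) = constant; 64 KB objects -->
            <operation type="read"  ratio="80" config="containers=u(1,32);objects=u(1,50)" />
            <operation type="write" ratio="20" config="containers=u(1,32);objects=u(51,100);sizes=c(64)KB" />
          </work>
        </workstage>
      </workflow>
    </workload>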

  27. Web Console (screenshot: test generators, active workloads, history)

  28. Performance Reporting – rich performance data helps characterization (screenshot of metric reports)

  29. Progress since Havana
  • New object store backends – Amazon S3 adapter; Ceph adapter (librados based and radosgw based); CDMI adapter (Swift through CDMI middleware, Scality)
  • Authentication support – HTTP basic and digest
  • Core functionality – new selectors / new operators, object integrity checking, response time breakdown, job management
  • User interface improvements – batch workload configuration UI: adds batch test configuration to COSBench and makes COSBench workload configuration more like Iometer
  • Bug fixes – 85 issues resolved

  30. User Adoption (chart: GitHub activity over 2 weeks)

  31. Contributing to COSBench – active code repository and community
  • Repository: https://github.com/intel-cloud/cosbench
  • License: Apache v2.0
  • Mailing list: http://cosbench.1094679.n5.nabble.com

  32. Public Swift Test Cluster

  33. Public Swift Test Cluster
  • Joint effort by SwiftStack*, Intel and HGST*
  • 6 Swift PACO nodes – 8-core Intel® Atom™ CPU C2750 @ 2.40GHz, 16GB main memory, 2x Intel X540 10GbE, 4x 1GbE
  • Storage – 12x HGST* 6TB Ultrastar® He6 helium-filled HDDs
  • Operating environment – Ubuntu/Red Hat Linux, OpenStack Swift 1.13
  • Load balancing / management / control / monitoring via the SwiftStack* Controller
