
MICA: A Holistic Approach to Fast In-Memory Key-Value Storage



Presentation Transcript


  1. MICA: A Holistic Approach to Fast In-Memory Key-Value Storage. Hyeontaek Lim (Carnegie Mellon University), Dongsu Han (KAIST), David G. Andersen (Carnegie Mellon University), Michael Kaminsky (Intel Labs)

  2. Goal: Fast In-Memory Key-Value Store • Improve per-node performance (op/sec/node): less expensive, easier hotspot mitigation, lower latency for multi-key queries • Target: small key-value items (fit in a single packet) • Non-goals: cluster architecture, durability

  3. Q: How Good (or Bad) are Current Systems? • Workload: YCSB [SoCC 2010] • Single-key operations • In-memory storage • Logging turned off in our experiments • End-to-end performance over the network • Single server node

  4. End-to-End Performance Comparison [Chart: throughput (M operations/sec) across systems. Annotations: published results, logging on for RAMCloud/Masstree; using Intel DPDK (kernel-bypass I/O), no logging; write-intensive workload.]

  5. End-to-End Performance Comparison [Same chart as slide 4.] Performance collapses under heavy writes.

  6. End-to-End Performance Comparison [Same chart, with the maximum packets/sec attainable using UDP marked and with 13.5x and 4x annotations.]

  7. MICA Approach [Diagram: client NIC and server node (CPUs, memory) with three components: 1. Parallel data access, 2. Request direction, 3. Key-value data structures (cache & store).] • MICA: redesigning in-memory key-value storage • Applies a new SW architecture and data structures to general-purpose HW in a holistic way

  8. Parallel Data Access [Diagram: same overview, highlighting parallel data access.] • Modern CPUs have many cores (8, 15, …) • How to exploit CPU parallelism efficiently?

  9. Parallel Data Access Schemes [Diagram: under concurrent access, all CPU cores share all partitions in memory; under exclusive access, each partition is accessed by a single core.] • Exclusive Read / Exclusive Write: + good CPU scalability; - potentially low performance under skewed workloads (a sketch of this scheme follows below) • Concurrent Read / Concurrent Write: + good load distribution; - limited CPU scalability (e.g., synchronization); - cross-NUMA latency
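To make the exclusive scheme concrete, the following minimal C sketch shows hash-based partitioning in the exclusive-read/exclusive-write spirit: each key hash maps to exactly one partition, and each partition is served by exactly one core, so the data path needs no locks. The names (partition, partition_id, NUM_PARTITIONS) and the exact bit split are illustrative assumptions, not MICA's actual code.

    #include <stdint.h>

    #define NUM_PARTITIONS 16   /* e.g., one partition per CPU core */

    struct partition {
        /* the per-partition circular log and hash index would live here */
        uint64_t num_items;
    };

    static struct partition partitions[NUM_PARTITIONS];

    /* Map a 64-bit key hash to its partition.  Using the high-order hash
     * bits here leaves the low-order bits free for bucket indexing. */
    static inline uint16_t partition_id(uint64_t key_hash)
    {
        return (uint16_t)((key_hash >> 48) % NUM_PARTITIONS);
    }

    /* Under exclusive access, only the owning core touches its partitions,
     * so reads and writes are serialized per partition with no locks. */
    static inline struct partition *owning_partition(uint64_t key_hash)
    {
        return &partitions[partition_id(key_hash)];
    }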

  10. In MICA, Exclusive Outperforms Concurrent [Chart: throughput (Mops); end-to-end performance with kernel-bypass I/O.]

  11. Request Direction [Diagram: same overview, highlighting request direction.] • Sending requests to the appropriate CPU cores for better data access locality • Exclusive access benefits from correct delivery • Each request must be sent to the corresponding partition’s core

  12. Request Direction Schemes [Diagram: clients sending requests for Key 1 and Key 2 through the NIC to server CPU cores under each scheme.] • Flow-based affinity: classification using the 5-tuple; + good locality for flows (e.g., HTTP over TCP); - suboptimal for small key-value processing • Object-based affinity: classification depends on request content; + good locality for key access; - client assist or special HW support needed for efficiency (a client-side sketch follows below)
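The sketch below illustrates the client-assist idea under one plausible setup: the server installs NIC steering rules (e.g., Flow Director) so that UDP port BASE_UDP_PORT + p is delivered to the RX queue of the core owning partition p, and the client hashes the key and addresses the request to that port. The port layout, the FNV-1a stand-in hash, and the helper names are assumptions for illustration, not MICA's actual wire protocol.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #define NUM_PARTITIONS 16
    #define BASE_UDP_PORT  8000  /* assumed: port BASE+p steered to partition p's core */

    /* Stand-in key hash (FNV-1a); in practice the client and server must
     * agree on the same hash function. */
    static uint64_t hash_key(const void *key, size_t len)
    {
        const unsigned char *p = key;
        uint64_t h = 1469598103934665603ULL;
        for (size_t i = 0; i < len; i++) { h ^= p[i]; h *= 1099511628211ULL; }
        return h;
    }

    /* Client side: pick the destination UDP port from the key hash so the
     * NIC delivers the request directly to the owning core's RX queue. */
    static ssize_t send_request(int sock, const char *server_ip,
                                const void *key, size_t key_len,
                                const void *req, size_t req_len)
    {
        uint16_t part = (uint16_t)((hash_key(key, key_len) >> 48) % NUM_PARTITIONS);

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons((uint16_t)(BASE_UDP_PORT + part));
        inet_pton(AF_INET, server_ip, &dst.sin_addr);

        return sendto(sock, req, req_len, 0,
                      (struct sockaddr *)&dst, sizeof(dst));
    }

The point of this arrangement is that the expensive classification work (hashing the key) is done by the client, so the server-side NIC only needs a cheap port-to-queue mapping.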

  13. Crucial to Use NIC HW for Request Direction [Chart: throughput (Mops), using exclusive access for parallel data access.]

  14. Key-Value Data Structures [Diagram: same overview, highlighting key-value data structures.] • Significant impact on key-value processing speed • New design required for very high op/sec for both read and write • “Cache” and “store” modes

  15. MICA’s “Cache” Data Structures • Each partition has: • Circular log (for memory allocation) • Lossy concurrent hash index (for fast item access) • Exploit Memcached-like cache semantics • Lost data is easily recoverable (not free, though) • Favor fast processing • Provide good memory efficiency & item eviction

  16. Circular Log [Diagram: a fixed-size log; a new item is appended at the tail; if there is insufficient space for the new item, the oldest item at the head is evicted (FIFO); LRU can be supported by reinserting recently accessed items.] • Allocates space for key-value items of any length • Combines conventional logs and circular queues • Simple garbage collection / free-space defragmentation (a minimal append/evict sketch follows below)
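A minimal sketch of the append/evict path described above. It assumes the log region is mapped twice back-to-back in virtual memory (a common ring-buffer trick) so an item that wraps past the end stays contiguous, and that the log size is a power of two; the header layout and field names are illustrative, not MICA's actual code.

    #include <stdint.h>
    #include <string.h>

    struct log_item {
        uint32_t item_size;        /* total size including this header */
        /* key and value bytes follow */
    };

    struct circular_log {
        uint8_t  *base;            /* log_size bytes; assumed mapped twice back-to-back */
        uint64_t  log_size;        /* power of two, so masking gives the physical offset */
        uint64_t  head;            /* monotonically growing offset of the oldest item */
        uint64_t  tail;            /* monotonically growing offset for the next append */
    };

    /* Append an item at the tail, evicting the oldest items at the head (FIFO)
     * until there is enough free space.  Returns the item's log offset, or
     * UINT64_MAX if the item can never fit. */
    static uint64_t log_append(struct circular_log *log,
                               const void *data, uint32_t data_len)
    {
        uint32_t item_size = (uint32_t)sizeof(struct log_item) + data_len;
        if (item_size > log->log_size)
            return UINT64_MAX;

        /* Cache semantics make eviction safe: dropped items can be refetched. */
        while (log->tail + item_size - log->head > log->log_size) {
            struct log_item *victim = (struct log_item *)
                (log->base + (log->head & (log->log_size - 1)));
            log->head += victim->item_size;
        }

        uint64_t offset = log->tail;
        struct log_item *item = (struct log_item *)
            (log->base + (offset & (log->log_size - 1)));
        item->item_size = item_size;
        memcpy(item + 1, data, data_len);   /* may spill into the second mapping */
        log->tail += item_size;
        return offset;
    }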

  17. Lossy Concurrent Hash Index [Diagram: hash(Key) selects one of buckets 0 … N-1 in the hash index; entries point to (Key, Val) items in the circular log.] • Indexes key-value items stored in the circular log • Set-associative table • Full bucket? Evict the oldest entry from it • Fast indexing of new key-value items (a minimal sketch follows below)
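A minimal sketch of a lossy, set-associative index in the spirit of this slide: each bucket holds a small fixed number of (tag, log offset) entries, and inserting into a full bucket simply overwrites the oldest one, which is what makes the index lossy and is acceptable under cache semantics. The bucket geometry, 16-bit tags, and round-robin eviction cursor are assumptions for illustration.

    #include <stdint.h>

    #define BUCKET_ENTRIES 15           /* set associativity */
    #define NUM_BUCKETS    (1u << 16)   /* must be a power of two */

    struct index_bucket {
        uint32_t next_slot;                /* round-robin cursor approximating FIFO */
        uint16_t tag[BUCKET_ENTRIES];      /* partial key hash used as a filter */
        uint64_t offset[BUCKET_ENTRIES];   /* item location in the circular log */
    };

    static struct index_bucket buckets[NUM_BUCKETS];

    /* Insert always succeeds: if the bucket is full, the entry at the cursor
     * (roughly the oldest) is simply overwritten. */
    static void index_insert(uint64_t key_hash, uint64_t log_offset)
    {
        struct index_bucket *b = &buckets[key_hash & (NUM_BUCKETS - 1)];
        uint32_t slot = b->next_slot % BUCKET_ENTRIES;
        b->tag[slot]    = (uint16_t)(key_hash >> 32);
        b->offset[slot] = log_offset;
        b->next_slot++;
    }

    /* Lookup returns a candidate offset only; the caller must compare the full
     * key stored in the log, since 16-bit tags can collide.  (A real index
     * would also track which slots are valid.) */
    static int index_lookup(uint64_t key_hash, uint64_t *log_offset_out)
    {
        struct index_bucket *b = &buckets[key_hash & (NUM_BUCKETS - 1)];
        uint16_t tag = (uint16_t)(key_hash >> 32);
        for (uint32_t i = 0; i < BUCKET_ENTRIES; i++) {
            if (b->tag[i] == tag) {
                *log_offset_out = b->offset[i];
                return 1;
            }
        }
        return 0;
    }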

  18. MICA’s “Store” Data Structures • Required to preserve stored items • Achieve similar performance by trading memory • Circular log -> Segregated fits • Lossy index -> Lossless index (with bulk chaining; a sketch follows below) • See our paper for details
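Since this slide only names the techniques, here is a hedged sketch of the bulk-chaining idea for the lossless (store-mode) index: when a bucket fills, it links in a whole spare bucket drawn from a preallocated pool instead of evicting an entry, so the index never loses items and chaining cost is paid per bucket rather than per entry. The struct layout and the alloc_spare_bucket helper are hypothetical, not MICA's actual code.

    #include <stdint.h>
    #include <stddef.h>

    #define BUCKET_ENTRIES 15

    struct lossless_bucket {
        uint32_t count;                      /* entries used in this bucket */
        uint64_t entry[BUCKET_ENTRIES];      /* e.g., packed (tag, log offset) pairs */
        struct lossless_bucket *next;        /* spare bucket chained in when full */
    };

    /* Hypothetical helper: hands out buckets from a preallocated spare pool. */
    struct lossless_bucket *alloc_spare_bucket(void);

    /* Insert never drops an entry: walk to the end of the chain and append,
     * extending the chain by a whole spare bucket only when every bucket is full. */
    static int lossless_insert(struct lossless_bucket *b, uint64_t packed_entry)
    {
        while (b->count == BUCKET_ENTRIES) {
            if (b->next == NULL) {
                b->next = alloc_spare_bucket();
                if (b->next == NULL)
                    return 0;                /* spare pool exhausted */
            }
            b = b->next;
        }
        b->entry[b->count++] = packed_entry;
        return 1;
    }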

  19. Evaluation • Going back to end-to-end evaluation… • Throughput & latency characteristics

  20. Throughput Comparison [Chart: throughput (Mops); end-to-end performance with kernel-bypass I/O. Annotations: similar performance regardless of skew/write (MICA); large performance gap; bad at high write ratios (the other systems).]

  21. Throughput-Latency on Ethernet [Chart: average latency (μs) vs. throughput (Mops), with a 200x+ throughput annotation for MICA. Original Memcached using standard socket I/O; both use UDP.]

  22. MICA [Diagram: same overview: 1. Parallel data access, 2. Request direction, 3. Key-value data structures (cache & store).] • Redesigning in-memory key-value storage • 65.6+ Mops/node even for heavy skew/write • Source code: github.com/efficient/mica

  23. Reference
  [DPDK] http://www.intel.com/content/www/us/en/intelligent-systems/intel-technology/packet-processing-is-enhanced-with-software-from-intel-dpdk.html
  [FacebookMeasurement] Berk Atikoglu, Yuehai Xu, Eitan Frachtenberg, Song Jiang, and Mike Paleczny. Workload Analysis of a Large-Scale Key-Value Store. In Proc. SIGMETRICS 2012.
  [Masstree] Yandong Mao, Eddie Kohler, and Robert Tappan Morris. Cache Craftiness for Fast Multicore Key-Value Storage. In Proc. EuroSys 2012.
  [MemC3] Bin Fan, David G. Andersen, and Michael Kaminsky. MemC3: Compact and Concurrent MemCache with Dumber Caching and Smarter Hashing. In Proc. NSDI 2013.
  [Memcached] http://memcached.org/
  [RAMCloud] Diego Ongaro, Stephen M. Rumble, Ryan Stutsman, John Ousterhout, and Mendel Rosenblum. Fast Crash Recovery in RAMCloud. In Proc. SOSP 2011.
  [YCSB] Brian F. Cooper, Adam Silberstein, Erwin Tam, Raghu Ramakrishnan, and Russell Sears. Benchmarking Cloud Serving Systems with YCSB. In Proc. SoCC 2010.
