Chimera: Data Sharing Flexibility, Shared Nothing Simplicity

Presentation Transcript


  1. Chimera: Data Sharing Flexibility, Shared Nothing Simplicity. Umar Farooq Minhas (University of Waterloo); David Lomet, Chandu Thekkath (Microsoft Research)

  2. Distributed database architectures
  • In a shared nothing system, a single node can only access local data
    • less complex, easier to implement
    • provides good performance if data is partitionable
    • e.g., Microsoft SQL Server, IBM DB2/UDB
  • Data sharing allows multiple nodes to share access to common data
    • complex, difficult to implement
    • provides increased responsiveness to load imbalances
    • e.g., Oracle RAC, IBM mainframe DB2
  Goal: design and implement a hybrid database system

  3. Shared nothing vs data sharing
  [Diagram: two clusters of three nodes, each node with its own CPUs, memory, and disk; in the data sharing cluster a data sharing software layer gives every node access to all disks]
  • Hardware configuration can be identical for both systems
  • Software managing the system is different

  4. Our approach
  • Start with a shared nothing cluster of low-cost desktop machines
    • each node hosts a standalone shared nothing DBMS with locally attached storage
  • Extend the shared nothing system with data sharing capability
    • a remote node can access a database hosted at a local node
  • Additional code required for
    • distributed locking
    • cache consistency
  Techniques presented here are applicable to any shared nothing DBMS

  5. Outline
  • Introduction
  • Chimera: Overview
  • Chimera: Implementation Details
  • Experimental Evaluation
  • Conclusion

  6. Chimera: Best of both worlds
  • Chimera is an “extension” to a shared nothing DBMS
    • built using off-the-shelf components
  • Provides the simplicity of shared nothing with the flexibility of data sharing
  • Provides effective scalability and load balancing with less than 2% overhead

  7. Chimera: Main components
  • Shared file system
    • to store data accessible to all nodes of a cluster
    • e.g., Common Internet File System (CIFS) or Network File System (NFS)
  • Generic distributed lock manager
    • provides ownership control
    • e.g., ZooKeeper, Chubby, Boxwood
  • Extra code in the shared nothing DBMS
    • for data access and sharing among nodes

  8. Advantages of Chimera
  • Load balancing at table granularity
    • offloads execution cost of database functionality
  • Scale-out for read-mostly workloads
    • read-mostly workloads are very common and important, e.g., a service hosted at Microsoft, Yahoo, or Google
    • non-partitionable data is stored in a centralized database
    • Chimera provides effective scale-out for such workloads
  • Close to shared nothing simplicity
    • key point: allow only a single node to update a database at a time
    • greatly simplifies data sharing, transaction logging, and recovery

  9. Outline
  • Introduction
  • Chimera: Overview
  • Chimera: Implementation Details
  • Experimental Evaluation
  • Conclusion

  10. Chimera: Overall system architecture
  [Diagram: N DBMS instances (DBMS 1 local; DBMS 2 … DBMS N remote), each receiving queries and each containing a stored procedure, a lock client, and an enhanced buffer manager; all instances coordinate through a global lock manager and reach the database file over CIFS]
  • SP – Stored Procedure
  • LC – Lock Client
  • EBM – Enhanced Buffer Manager
  • GLM – Global Lock Manager
  • CIFS – Common Internet File System

  11. Stored Procedure
  • Most of the required changes are implemented in a user-defined stored procedure
    • invoked like a standard stored procedure
  • An instance of this stored procedure is installed at each node
    • accepts user queries
    • does appropriate locking and buffer management
    • executes the query against a local or remote table
    • returns the results to the caller (a hypothetical invocation is sketched below)
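For concreteness, here is a hedged sketch of how a client might invoke such a per-node stored procedure from Python via pyodbc. The procedure name ChimeraExec, its parameter list, and the DSN are hypothetical stand-ins, not the paper's actual interface.

```python
# Hypothetical invocation of the per-node stored procedure via pyodbc.
# "ChimeraExec" and its parameters (@server, @db, @table, @sql) are
# illustrative names only; the DSN is assumed to point at the local node.
import pyodbc

conn = pyodbc.connect("DSN=node1;Trusted_Connection=yes")
cursor = conn.cursor()

# The procedure handles global locking and buffer management, runs the query
# against the named (local or remote) table, and returns the rows to the caller.
cursor.execute(
    "EXEC ChimeraExec @server=?, @db=?, @table=?, @sql=?",
    "Server2", "TPCH", "LINEITEM", "SELECT COUNT(*) FROM LINEITEM",
)
print(cursor.fetchone())
```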

  12. Enhanced Buffer Manager
  • Implements a cross-node cache invalidation scheme
    • maintains cache consistency across nodes
  • Dirty pages need to be evicted from all readers after an update
    • we do not know in advance which pages will get updated
  • Selective cache invalidation (sketched below)
    • the updating node captures a list of dirty pages
    • it sends a message to all the readers to evict those pages
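A minimal Python sketch of this selective cache invalidation, assuming each node keeps a page cache keyed by (database, page id); all names are illustrative.

```python
# Toy model of selective cache invalidation. After an update, the updating
# node tells every reader to evict exactly the pages it dirtied, rather than
# flushing whole buffer pools.

class NodeCache:
    def __init__(self, name):
        self.name = name
        self.pages = {}                          # (db, page_id) -> cached page

    def evict(self, db, page_ids):
        for pid in page_ids:
            self.pages.pop((db, pid), None)      # drop stale copy; re-read on demand

def selective_invalidate(dirty_pages, db, reader_nodes):
    # The updating node sends one eviction message per reader node.
    for node in reader_nodes:
        node.evict(db, dirty_pages)

# Example: an update dirties pages 17 and 42 of "TPCH"; only those two pages
# are evicted on each reader.
readers = [NodeCache("node2"), NodeCache("node3")]
selective_invalidate({17, 42}, "TPCH", readers)
```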

  13. Global Lock Manager
  • We need a richer lock manager that can handle locks on shared resources across machines
    • implemented using an external global lock manager with corresponding local lock clients
  • A lock client is integrated with each DBMS instance
  • Lock types: shared or exclusive
  • Lock resources: an abstract name (string); a minimal sketch follows
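Below is a minimal, in-process Python sketch of the lock interface described here: shared/exclusive locks keyed by abstract string names. The real system delegates this to an external global lock manager (e.g., ZooKeeper); the class and method names are illustrative only.

```python
import threading
from collections import defaultdict

class GlobalLockManager:
    """Toy shared/exclusive lock table keyed by abstract resource names."""

    def __init__(self):
        self._cv = threading.Condition()
        self._shared = defaultdict(int)   # resource -> number of shared holders
        self._exclusive = set()           # resources currently held exclusively

    def acquire(self, resource, mode):
        with self._cv:
            if mode == "S":
                # Shared: wait until no node holds the resource exclusively.
                while resource in self._exclusive:
                    self._cv.wait()
                self._shared[resource] += 1
            else:
                # Exclusive: wait until there are no holders at all.
                while resource in self._exclusive or self._shared[resource] > 0:
                    self._cv.wait()
                self._exclusive.add(resource)

    def release(self, resource, mode):
        with self._cv:
            if mode == "S":
                self._shared[resource] -= 1
            else:
                self._exclusive.discard(resource)
            self._cv.notify_all()

# Resource names follow the scheme on slide 14, e.g.:
# glm = GlobalLockManager()
# glm.acquire("Server1.TPCH.LINEITEM", "S")
# ... run the query ...
# glm.release("Server1.TPCH.LINEITEM", "S")
```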

  14. Read sequence
  • Acquire a shared lock on the abstract resource (table)
    • ServerName.DBName.TableName
  • On lock acquire, proceed with the SELECT
  • Release the shared lock (see the sketch below)
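A sketch of this read sequence, reusing the illustrative GlobalLockManager above; run_select is a hypothetical stand-in for executing the SELECT in the local engine against the local or CIFS-mounted remote database file.

```python
def read_table(glm, server, db, table, run_select):
    resource = f"{server}.{db}.{table}"    # 1. abstract resource name
    glm.acquire(resource, "S")             #    acquire a shared lock on the table
    try:
        return run_select()                # 2. proceed with the SELECT
    finally:
        glm.release(resource, "S")         # 3. release the shared lock
```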

  15. Write sequence
  • Acquire an exclusive lock on
    • ServerName.DBName
    • ServerName.DBName.TableName
  • On lock acquire, proceed with the UPDATE
  • Do selective cache invalidation on all reader nodes
  • Release the exclusive locks (see the sketch below)
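The matching write-sequence sketch, again with hypothetical names; run_update is assumed to return the set of dirty page ids, and invalidate stands in for the eviction message of slide 12.

```python
def update_table(glm, server, db, table, run_update, reader_nodes, invalidate):
    db_resource = f"{server}.{db}"
    table_resource = f"{server}.{db}.{table}"
    glm.acquire(db_resource, "X")              # 1. exclusive lock on the database
    glm.acquire(table_resource, "X")           #    and on the table
    try:
        dirty_pages = run_update()             # 2. proceed with the update
        for node in reader_nodes:              # 3. selective cache invalidation
            invalidate(node, dirty_pages)      #    on all reader nodes
    finally:
        glm.release(table_resource, "X")       # 4. release the exclusive locks
        glm.release(db_resource, "X")
```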

  16. Outline
  • Introduction
  • Chimera: Overview
  • Chimera: Implementation Details
  • Experimental Evaluation
  • Conclusion

  17. Experimental setup
  • We use a 16-node cluster
    • 2x AMD Opteron CPU @ 2.0 GHz
    • 8 GB RAM
    • Windows Server 2008 Enterprise with SP2
    • patched Microsoft SQL Server 2008
    • buffer pool size = 1.5 GB
  • Benchmark
    • TPC-H: a decision support benchmark
    • scale factor 1
    • total size on disk ~3 GB

  18. Overhead of our prototype
  • Run the 22 TPC-H queries on a single node with and without the prototype code
  • Average slowdown: 1.006x

  19. Remote execution overhead (cold cache)
  • Run the 22 TPC-H queries on the local node and on a remote node
    • measure the query run time and calculate the slowdown factor
    • flush the DB cache between subsequent runs

  20. Remote execution overhead (warm cache)
  • Repeat the previous experiment with a warm cache
  • Average slowdown (cold cache, before): 1.46x
  • Average slowdown (warm cache, now): 1.03x

  21. Cost of updates
  • Baseline: a simple update on a node with no readers
  • Test scenarios: perform the update while 1, 2, 4, or 8 other nodes read the database in an infinite loop

  22. Cost of reads with updates
  • Perform simple updates at the local node with varying frequency: 60 s, 30 s, 15 s, and 5 s
  • Run one of the TPC-H read queries at a remote node for a fixed duration of 300 s and calculate
    • Response time: average runtime
    • Throughput: queries completed per second (worked example below)
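As a small worked example of these two metrics (with made-up runtimes, not measured data):

```python
# Response time is the mean per-query runtime; throughput is the number of
# queries completed divided by the fixed 300 s measurement window.

runtimes = [2.1, 2.3, 1.9, 2.0]                 # seconds per completed query (illustrative)
duration_s = 300                                # fixed measurement window

response_time = sum(runtimes) / len(runtimes)   # average runtime
throughput = len(runtimes) / duration_s         # queries completed per second

print(f"response time = {response_time:.2f} s, throughput = {throughput:.4f} q/s")
```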

  23. Cost of reads with updates (1): non-conflicting read

  24. Scalability
  • Run concurrent TPC-H streams
    • start with a single local node
    • incrementally add remote nodes up to a total of 16 nodes

  25. Conclusion
  • Data sharing systems are desirable for load balancing
  • We enable data sharing as an extension to a shared nothing DBMS
  • We presented the design and implementation of Chimera
    • enables data sharing at table granularity
    • uses global locks for synchronization
    • implements cross-node cache invalidation
    • does not require extensive changes to a shared nothing DBMS
  • Chimera provides effective scalability and load balancing with low overhead
